19. What do you understand by thread-safety? Why is it required? And finally, how do you achieve thread-safety in Java applications?
The Java Memory Model (JMM) defines the legal interactions of threads with memory in a real computer system. In a way, it describes what behaviors are legal in multi-threaded code. It determines when a thread can reliably see writes to variables made by other threads, and it defines the semantics of volatile, final, and synchronized that guarantee visibility of memory operations across threads.
There are two types of memory barrier instructions in the JMM - read barriers and write barriers.
A read barrier invalidates the local memory (cache, registers, etc.) and then reads the contents from main memory, so that changes made by other threads become visible to the current thread.
A write barrier flushes the contents of the processor's local memory out to main memory, so that changes made by the current thread become visible to other threads.
JMM semantics for synchronized
When a thread acquires the monitor of an object by entering a synchronized block of code, it performs a read barrier (invalidates the local memory and reads from the heap instead). Similarly, when it exits the synchronized block as part of releasing the associated monitor, it performs a write barrier (flushes its changes to main memory).
Thus modifications to shared state made inside a synchronized block by one thread are guaranteed to be visible to subsequent synchronized reads by other threads. This guarantee is provided by the JMM in the presence of synchronized code blocks.
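As a sketch of this guarantee (the class and field names here are ours, not from the article): two threads increment a shared counter only inside synchronized blocks, so each lock release flushes the increment to main memory and each lock acquisition re-reads it.

```java
// Hypothetical illustration of the synchronized read/write barriers described
// above. Entering the block acts as a read barrier, exiting as a write barrier,
// so neither thread ever works from a stale copy of the counter.
public class SafeCounter {
    private int count;                        // shared mutable state
    private final Object lock = new Object();

    public void increment() {
        synchronized (lock) {                 // acquire: read barrier
            count++;
        }                                     // release: write barrier
    }

    public int get() {
        synchronized (lock) { return count; }
    }

    public static void main(String[] args) throws InterruptedException {
        final SafeCounter c = new SafeCounter();
        Runnable task = new Runnable() {
            public void run() {
                for (int i = 0; i < 10000; i++) c.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get());          // always 20000
    }
}
```

Without the synchronized blocks, count++ is a non-atomic read-modify-write, and the final total may come out less than 20000.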
JMM semantics for Volatile fields
Reads and writes of volatile variables have the same memory semantics as acquiring and releasing a monitor via a synchronized code block, so the visibility of volatile fields is guaranteed by the JMM.
Moreover, since Java 1.5, volatile reads and writes are not reorderable with any other memory operations (volatile or non-volatile). Thus when thread A writes to a volatile variable V, and thread B subsequently reads from V, any variable values that were visible to A at the time V was written are guaranteed to be visible to B.
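This last guarantee can be sketched as follows (VolatilePublish and its field names are our own, chosen for illustration): one thread writes an ordinary field and then a volatile flag; once the main thread sees the flag set, the JMM guarantees it also sees the ordinary write that preceded it.

```java
// Sketch of volatile's visibility guarantee: the write to 'payload' happens
// before the volatile write to 'ready', so a reader that observes
// ready == true is guaranteed to also observe payload == 42.
public class VolatilePublish {
    static int payload;                 // ordinary, non-volatile field
    static volatile boolean ready;      // volatile write/read acts as the barrier

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(new Runnable() {
            public void run() {
                payload = 42;           // ordinary write, published by the line below
                ready = true;           // volatile write: crosses the memory barrier
            }
        });
        writer.start();
        while (!ready) { Thread.yield(); }  // volatile read
        System.out.println(payload);        // guaranteed to print 42
    }
}
```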
http://media.pragprog.com/titles/vspcon/code/introduction/RaceCondition.java
Programming Concurrency on the JVM
public class RaceCondition {
  private static boolean done;

  public static void main(final String[] args) throws InterruptedException {
    new Thread(new Runnable() {
      public void run() {
        int i = 0;
        while(!done) { i++; }
        System.out.println("Done!");
      }
    }).start();

    System.out.println("OS: " + System.getProperty("os.name"));
    Thread.sleep(2000);
    done = true;
    System.out.println("flag done set to true");
  }
}
java -server RaceCondition (run in server mode)
java -d32 RaceCondition (asking it to be run in client mode on the Mac)
What’s This Memory Barrier?
First, the JIT compiler may optimize the while loop; after all, it does not see the variable done changing within the context of the thread. Furthermore, the second thread may end up reading the value of the flag from its registers or cache instead of going to memory. As a result, it may never see the change made by the first thread to this flag.
Crossing the memory barrier is the copying from local or working memory to main memory.
A change made by one thread is guaranteed to be visible to another thread only if the writing thread crosses the memory barrier and then the reading thread crosses the memory barrier. The synchronized and volatile keywords force the changes to be globally visible on a timely basis.
The changes are first made locally in the registers and caches and then cross the memory barrier as they are copied to main memory. The sequence or ordering of these crossings is called happens-before.
The write has to happen before the read, meaning the writing thread has to cross the memory barrier before the reading thread does, for the change to be visible.
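Applied to the RaceCondition program above, the standard fix is to declare done volatile: the volatile write then happens before every subsequent volatile read of the flag, so the spinning thread sees the change. (The class is renamed here so the original listing stays intact.)

```java
// RaceCondition with the flag declared volatile: the JIT can no longer hoist
// the flag out of the loop, and every read of 'done' crosses the memory barrier.
public class RaceConditionFixed {
    static volatile boolean done;

    public static void main(final String[] args) throws InterruptedException {
        new Thread(new Runnable() {
            public void run() {
                int i = 0;
                while (!done) { i++; }
                System.out.println("Done!");
            }
        }).start();

        Thread.sleep(100);
        done = true;   // volatile write: visible to the spinning thread
        System.out.println("flag done set to true");
    }
}
```

Unlike the original run under java -server, this version reliably prints "Done!" and terminates.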
Quite a few operations in the concurrency API implicitly cross the memory barrier: volatile, synchronized, methods on Thread such as start and interrupt, methods on ExecutorService, and some synchronization facilitators like CountDownLatch.
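For instance, CountDownLatch provides the same visibility guarantee (the example and names below are ours): a plain write made before countDown() is guaranteed to be visible after await() returns.

```java
import java.util.concurrent.CountDownLatch;

// Illustrative sketch: countDown() and await() implicitly cross the memory
// barrier, so the result written before countDown() is visible after await().
public class LatchVisibility {
    static int result;
    static final CountDownLatch latch = new CountDownLatch(1);

    public static void main(String[] args) throws InterruptedException {
        new Thread(new Runnable() {
            public void run() {
                result = 7;          // plain write...
                latch.countDown();   // ...published by the latch's barrier
            }
        }).start();
        latch.await();               // crosses the barrier on the reading side
        System.out.println(result);  // guaranteed to print 7
    }
}
```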
The volatile keyword tells the JIT compiler not to perform any optimization that may affect the ordering of access to that variable. It warns that the variable may change behind the back of a thread and that each access, read or write, to this variable should bypass cache and go all the way to the memory. I call this a quick fix because arbitrarily making all variables volatile may avoid the problem but will result in very poor performance because every access has to cross the memory barrier. Also, volatile does not help with atomicity when multiple fields are accessed, because the access to each of the volatile fields is separately handled and not coordinated into one access—this would leave a wide opportunity for threads to see partial changes to some fields and not the others.
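One common remedy for that multi-field problem, sketched below with names of our own choosing: group the fields into a single immutable object behind one reference, so a reader sees either the old pair or the new pair, never a mix.

```java
import java.util.concurrent.atomic.AtomicReference;

// Instead of two separately-handled volatile fields, both values are replaced
// together through a single atomic reference to an immutable snapshot.
public class AtomicPair {
    static final class Point {          // immutable snapshot of both fields
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    private final AtomicReference<Point> current =
        new AtomicReference<Point>(new Point(0, 0));

    public void move(int x, int y) {
        current.set(new Point(x, y));   // both fields change in one write
    }

    public Point get() { return current.get(); }  // never a mixed x/y
}
```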
AVOID SHARED MUTABILITY
When we have a nonfinal (mutable) field, each time a thread changes the value, we have to consider whether we have to put the change back to the memory or leave it in the registers/cache. Each time we read the field, we need to be concerned if we read the latest valid value or a stale value left behind in the cache. We need to ensure the changes to variables are atomic; that is, threads don’t see partial changes. Furthermore, we need to worry about protecting multiple threads from changing the data at the same time.
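Following that advice, a small immutable value type (an illustrative example of ours, not from the article) removes every one of these concerns: final fields are safely published under the JMM, and a "change" creates a new object instead of mutating shared state.

```java
// Nothing here can be stale, partially updated, or concurrently modified:
// the only field is final, and updates produce a fresh object.
public final class Temperature {
    private final double celsius;       // final: safe to share across threads

    public Temperature(double celsius) { this.celsius = celsius; }

    public double celsius() { return celsius; }

    // "changing" the value returns a new object instead of mutating state
    public Temperature plus(double delta) {
        return new Temperature(celsius + delta);
    }
}
```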
Read full article from Top 25 Most Frequently Asked Core Java Interview Questions And Answers