http://blog.takipi.com/java-8-stampedlocks-vs-readwritelocks-and-synchronized/
http://slamke.blogspot.com/2016/03/javastart.html
http://sighingnow.github.io/%E7%BC%96%E7%A8%8B%E8%AF%AD%E8%A8%80/java_thread.html
Must Read:
Printing Even and Odd using two Threads
http://calvin1978.blogcn.com/articles/java-threadpool.html
Each object we allocate in our code has locking capabilities built right into its header at the native GC level. The same goes for the JIT compiler that compiles and re-compiles bytecode depending on the specific state and contention levels for a specific lock.
ReadWriteLocks:
You can specify which threads block everyone else (writers), and which ones play well with others for consuming content (readers).
Unlike synchronized blocks, RW locks are not built into the JVM and have the same capabilities as mere mortal code. Still, to implement a locking idiom you need to instruct the CPU to perform specific operations atomically, or in a specific order, to avoid race conditions. This is traditionally done through the magical portal-hole into the JVM – the Unsafe class. RW locks use Compare-And-Swap (CAS) operations to set values directly into memory as part of their thread-queuing algorithm.
Even so, RWLocks are just not fast enough, and at times prove to be really darn slow, to the point of not being worth bothering with.
StampedLock: This RW lock employs a new set of algorithms and memory fencing features added to the Java 8 JDK to help make this lock faster and more robust.
They employ a concept of stamps: long values that serve as tickets used by any lock/unlock operation. This means that to unlock an R/W operation you need to pass it its correlating lock stamp. Pass the wrong stamp and you're risking an exception, or worse – unexpected behavior.
Unlike RWLocks, StampedLocks are not reentrant. So while they may be faster, they have the downside that threads can now deadlock against themselves. In practice, this means that more than ever, you should make sure that locks and stamps do not escape their enclosing code blocks.
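A minimal sketch of this non-reentrancy (the class and method names here are illustrative): a thread that already holds the write lock cannot acquire it again. A second blocking writeLock() would deadlock the thread against itself, which tryWriteLock() makes observable by returning 0.

```java
import java.util.concurrent.locks.StampedLock;

// Demonstrates that StampedLock, unlike ReentrantReadWriteLock, is not
// reentrant: re-acquiring the held write lock from the same thread fails.
public class NonReentrantDemo {
    public static boolean reacquireFails() {
        StampedLock lock = new StampedLock();
        long stamp = lock.writeLock();        // first acquisition succeeds
        try {
            long again = lock.tryWriteLock(); // same thread, same lock
            return again == 0L;               // 0 means acquisition failed
        } finally {
            lock.unlockWrite(stamp);
        }
    }

    public static void main(String[] args) {
        System.out.println("reacquire failed: " + reacquireFails()); // true
    }
}
```

Had the code called the blocking writeLock() instead of tryWriteLock(), the thread would hang forever, which is exactly the self-deadlock the text warns about.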
Optimistic locking. The most important piece in terms of new capabilities for this lock is the new Optimistic locking mode.
long stamp = lock.tryOptimisticRead(); // non-blocking optimistic read
read();
if (!lock.validate(stamp)) {
    // a write occurred during the read; retry under a real read lock
    stamp = lock.readLock();
    try {
        read();
    } finally {
        lock.unlock(stamp);
    }
}
Can a thread in Java be started more than once?
No.
A Thread instance can produce only one thread via start(). Once start() is called on a Thread instance, its started flag is set to true; regardless of whether the thread ever ran to completion, once start() has been called it can never be started again.
A thread object can call start() only once. The path from NEW to runnable is a one-way street, so calling start() again on a thread that has already been started throws IllegalThreadStateException. The method that can be called repeatedly is run().
The difference between run() and start() in the Thread class:
run(): invokes the Runnable object's run() method in the current thread, and can be called repeatedly;
start(): starts a new thread that invokes the Runnable object's run() method; the same thread cannot be started more than once.
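The above can be observed directly; in this sketch (class name is illustrative), the second start() on an already-started, now-terminated thread throws IllegalThreadStateException, while run() could still be invoked as an ordinary method:

```java
// Calling start() twice on the same Thread object throws
// IllegalThreadStateException; only run() may be invoked repeatedly.
public class DoubleStart {
    public static boolean secondStartFails() throws InterruptedException {
        Thread t = new Thread(() -> {});
        t.start();
        t.join();               // first start is fine; wait for it to finish
        try {
            t.start();          // second start: state is TERMINATED, not NEW
            return false;
        } catch (IllegalThreadStateException expected) {
            return true;        // the JDK rejects re-starting the thread
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("second start throws: " + secondStartFails()); // true
    }
}
```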
private volatile int threadStatus = 0;

public synchronized void start() {
    /**
     * This method is not invoked for the main method thread or "system"
     * group threads created/set up by the VM. Any new functionality added
     * to this method in the future may have to also be added to the VM.
     *
     * A zero status value corresponds to state "NEW".
     */
    if (threadStatus != 0)
        throw new IllegalThreadStateException();
    // (remainder of the JDK method body omitted)
}
public final synchronized void setName(String name) {
    checkAccess();
    this.name = name.toCharArray();
    if (threadStatus != 0) {
        setNativeName(name);
    }
}
http://segmentfault.com/a/1190000000635964
Two identical threads execute the following program concurrently; a is a global variable, initially 0. Assuming printf, ++ and -- are all atomic, which output is impossible? (Answer: A)

void foo() {
    if (a <= 0) {    // 1
        a++;         // 2
    } else {         // 3
        a--;         // 4
    }
    printf("%d", a); // 5
}

A. 01   B. 10   C. 12   D. 22
Let A1 denote that thread A has executed past instruction 1.
01: impossible. A1 A2 B3 B4 B5 A5 is the only way to print 0 first, but then the second digit cannot be 1 – that schedule prints 00.
10: possible, e.g. A1 A2 A5 B3 B4 B5
12: possible, e.g. A1 B1 A2 A5 B2 B5
22: possible, e.g. A1 B1 A2 B2 A5 B5
http://blog.csdn.net/youzai24/article/details/8237726
Condition can implement the same inter-thread signaling as wait, notify and notifyAll, but with an important improvement. Consider how wait works: we acquire an object's monitor, enter the synchronized block, find some condition is not met, and wait on that monitor. If we have 100 threads, they may be waiting on different conditions, yet they all wait on the same monitor. When another thread clears one of those conditions – say one that would let 5 of the 100 threads proceed – it has no way to notify exactly those 5 threads: it can only call notifyAll, waking all 100, after which the other 95 reacquire the monitor only to discover they must wait again! This is where wait/notify is inefficient, and exactly what Condition improves: threads synchronizing on the same Lock can wait on, and be woken through, different Condition objects. This is the idea of "multiple Conditions" per lock!
Sometimes a thread that has acquired a lock can only do its work when some condition holds, as in the classic Producer/Consumer problem. Before Java 5.0 this was handled by Object's wait(), notify() and notifyAll() methods; since 5.0 the same functionality is gathered in the Condition interface.
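The Producer/Consumer problem mentioned above can be sketched with two Conditions on one Lock, so producers and consumers wait on separate queues (the class name and capacity are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// A classic bounded buffer: one Lock, two Conditions, so signal() wakes
// only a thread that is actually waiting for the condition that changed.
public class BoundedBuffer<T> {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();
    private final Deque<T> items = new ArrayDeque<>();
    private final int capacity;

    public BoundedBuffer(int capacity) { this.capacity = capacity; }

    public void put(T item) throws InterruptedException {
        lock.lock();
        try {
            while (items.size() == capacity)
                notFull.await();          // only producers sleep here
            items.addLast(item);
            notEmpty.signal();            // wake exactly one consumer
        } finally {
            lock.unlock();
        }
    }

    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty())
                notEmpty.await();         // only consumers sleep here
            T item = items.removeFirst();
            notFull.signal();             // wake exactly one producer
            return item;
        } finally {
            lock.unlock();
        }
    }
}
```

With plain wait/notify the buffer would need notifyAll() and every producer and consumer would wake on every change; here each signal() targets the right kind of waiter.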
http://sighingnow.github.io/%E7%BC%96%E7%A8%8B%E8%AF%AD%E8%A8%80/java_thread.html
Condition's functionality is analogous to Object.wait() (Condition.await()) and Object.notifyAll() (Condition.signal()) from traditional multithreading, and is used for inter-thread communication; one Lock can support multiple Conditions.

ArrayBlockingQueue
A fixed-size BlockingQueue; its constructor must take an int specifying the capacity. Its elements are ordered FIFO (first in, first out).
LinkedBlockingQueue
A BlockingQueue of variable size: if its constructor is given a capacity, the resulting queue is bounded by it; without one, the capacity is Integer.MAX_VALUE. Its elements are ordered FIFO. Compared with ArrayBlockingQueue, LinkedBlockingQueue is backed by a different data structure, which gives it higher throughput, but when the number of threads is very large its performance is less predictable than ArrayBlockingQueue's.
PriorityBlockingQueue
Similar to LinkedBlockingQueue, except its elements are ordered not FIFO but by their natural ordering, or by the Comparator passed to the constructor.
SynchronousQueue
A special BlockingQueue in which every put must alternate with a take.
BlockingQueue is especially well suited to buffers shared between threads, and these four implementations cover most buffering needs.
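A minimal sketch of such a shared buffer (class name, capacity and workload are illustrative): with an ArrayBlockingQueue, put() blocks while the buffer is full and take() blocks while it is empty, so no explicit locking or signaling is needed.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// A producer thread feeds numbers through a small bounded buffer
// to the consuming (main) thread; the queue does all the blocking.
public class BufferDemo {
    public static int sumThroughBuffer(int n) throws InterruptedException {
        BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(4);
        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= n; i++)
                    buffer.put(i);      // blocks while the buffer is full
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        int sum = 0;
        for (int i = 0; i < n; i++)
            sum += buffer.take();       // blocks while the buffer is empty
        producer.join();
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(sumThroughBuffer(10)); // prints 55
    }
}
```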
Printing Even and Odd using two Threads
http://calvin1978.blogcn.com/articles/java-threadpool.html
A thread pool faced with a sudden spike of requests it cannot keep up with has only two possible responses:
- queue the unfinished requests and let them wait
- temporarily add worker threads, and retire them once the peak subsides

The JDK's Executors.newFixedThreadPool() and newCachedThreadPool() use these two strategies respectively.
Convenient as they are, these two factory methods also hide ThreadPoolExecutor's otherwise rich configuration, so developers who never look deeper miss options that might suit their own project better.
1. How ThreadPoolExecutor works
Chapter 8 of the classic Java Concurrency in Practice, condensed:
1. On each task submission, if the thread count has not yet reached corePoolSize, a new thread is created and bound to that task.
So after the corePoolSize-th submission the thread count is guaranteed to reach corePoolSize; existing idle threads are not reused before then.
2. Once the thread count reaches corePoolSize, newly added tasks go into the work queue, while the pool's threads diligently pull work from it with a blocking take().
3. If the queue is bounded and the pool's threads cannot drain it fast enough, the work queue may fill up and inserting a task fails; the pool then hurriedly creates new temporary threads to help out.
4. A temporary thread pulls work with poll(keepAliveTime, timeUnit); if the timeout expires and it is still empty-handed, it is deemed too idle and is let go.
5. If core threads + temporary threads would exceed maxSize, no more temporary threads can be created, and the RejectedExecutionHandler runs instead. The default AbortPolicy throws RejectedExecutionException; the alternatives are silently dropping the new task (DiscardPolicy), dropping the oldest queued task (DiscardOldestPolicy), running the task directly in the submitting thread (CallerRunsPolicy), or whatever your imagination writes yourself.
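The five steps above can be wired together explicitly (the class name and the specific sizes here are illustrative choices, not recommendations): 2 core threads, up to 4 in total, a bounded queue of 8, a 60-second keep-alive for temporary threads, and CallerRunsPolicy when everything is full.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// An explicitly configured ThreadPoolExecutor exercising every parameter
// the Executors factory methods hide.
public class PoolDemo {
    public static int runTasks(int n) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4,                          // corePoolSize, maximumPoolSize
                60, TimeUnit.SECONDS,          // keepAliveTime for temp threads
                new ArrayBlockingQueue<>(8),   // bounded work queue
                new ThreadPoolExecutor.CallerRunsPolicy()); // overflow runs in caller
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < n; i++)
            pool.execute(done::incrementAndGet);
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTasks(20)); // prints 20
    }
}
```

Because CallerRunsPolicy never discards work, all 20 tasks complete even when the queue and the temporary threads are saturated.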
2. FixedPool vs CachedPool
FixedPool uses an unbounded LinkedBlockingQueue as its work queue by default, so it never gets past step 2 above: tasks the corePoolSize threads cannot finish just keep piling up on the endlessly growing queue.
Hence corePoolSize is its only effective parameter; the maxSize, keepAliveTime and RejectedExecutionHandler settings never actually come into play.
CachedPool sets corePoolSize to 0 and picks a special queue, SynchronousQueue: whenever no thread is currently idle, an insert fails immediately, making the pool create a new temporary thread. The default keepAliveTime is one minute, and maxSize is Integer.MAX_VALUE, so as long as there is unfinished work the thread count grows without bound, falling back only after the peak has passed.
3. Tuning FixedPool further
3.1 Setting a queue size
If you do not want an infinitely long queue, where endlessly waiting tasks make the service look dead and eat too much memory, you may swap in a bounded ArrayBlockingQueue; then you must also pay attention to what happens once that queue is full, and choose the right RejectedExecutionHandler.
At that point it is usually best to set maxSize equal to corePoolSize and keep temporary threads and their keepAlive time out of it: a bounded queue plus temporary threads sounds nice in theory, but is very hard to configure well.
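This advice can be sketched as follows (the class name and queue length are illustrative): maxSize equals coreSize, so no temporary threads are ever created, and when the bounded queue fills, CallerRunsPolicy applies back-pressure by running the task in the submitting thread.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// A "fixed" pool with a bounded queue instead of FixedPool's unbounded one.
public class BoundedFixedPool {
    public static ThreadPoolExecutor create(int threads) {
        return new ThreadPoolExecutor(
                threads, threads,              // core == max: no temp threads
                0, TimeUnit.SECONDS,           // keepAlive is irrelevant here
                new ArrayBlockingQueue<>(100), // bounded: no silent pile-up
                new ThreadPoolExecutor.CallerRunsPolicy());
    }
}
```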
3.2 Bounded queue: LinkedBlockingQueue or ArrayBlockingQueue?
The Executors JavaDoc suggests ArrayBlockingQueue. At the very least, ArrayBlockingQueue stores each submitted Runnable directly in its internal array, while LinkedBlockingQueue must allocate a new Node(runnable) per insert, which clearly produces more objects. Anyone interested in the performance difference can benchmark it themselves.
4. Tuning CachedPool further
4.1 Setting coreSize
coreSize defaults to 0, but often you want a fixed baseline, as in FixedPool, that handles most of the load without the churn, waiting and overhead of constantly adding and removing threads.
4.2 Setting maxSize and the rejectHandler
Likewise, maxSize defaults to Integer.MAX_VALUE, but too many threads can easily bring the system down, so it is advisable to set both maxSize and a rejectHandler.
4.3 Setting keepAliveTime
The default is one minute; adjust it for your project.
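Points 4.1 to 4.3 together suggest something like the following sketch (class name and all numbers are illustrative): a non-zero core, a hard cap well below Integer.MAX_VALUE, a shorter keep-alive, and an explicit rejection policy, while keeping the SynchronousQueue hand-off that makes it a cached pool.

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// A tamed CachedPool: baseline of 4 threads, cap of 32, 10s keep-alive,
// CallerRunsPolicy instead of the default AbortPolicy.
public class TamedCachedPool {
    public static ThreadPoolExecutor create() {
        return new ThreadPoolExecutor(
                4, 32,                        // baseline of 4, hard cap of 32
                10, TimeUnit.SECONDS,         // retire temp threads after 10s
                new SynchronousQueue<>(),     // hand-off, as in newCachedThreadPool
                new ThreadPoolExecutor.CallerRunsPolicy());
    }
}
```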
4.4 SynchronousQueue performance?
Under high concurrency, SynchronousQueue performs far worse than LinkedBlockingQueue/ArrayBlockingQueue. The JDK 6 implementation claims big improvements over JDK 5's, but it is still slow; one write-up says it is only faster below about 20 concurrent threads.
So in some extremely high-concurrency scenarios you can still consider LinkedBlockingQueue/ArrayBlockingQueue with a moderate queue length: too short and the pool, unable to drain tasks in time, keeps spawning temporary threads; too long and it only notices it should add threads a while after submissions start blocking. This setup therefore only fits workloads where the core threads can by themselves satisfy the high-concurrency demand, with temporary threads reserved for the abnormal case where core threads get blocked.