One. Concurrent collections
The java.util package provides many collection classes, such as ArrayList, TreeSet, and HashMap, but these collections are not thread-safe, and their iterators use a fail-fast mechanism: if the collection being traversed is modified by another thread during iteration, a java.util.ConcurrentModificationException is thrown. This obviously makes them inconvenient for multithreaded use. The Collections utility class has long offered methods that return thread-safe wrappers, but those wrappers perform poorly in highly concurrent environments.
The java.util.concurrent package therefore provides a number of collection classes, known as concurrent collections, which offer good concurrency and high scalability, and whose iterators are weakly consistent, that is, they no longer use the fail-fast mechanism. This is only a general overview; see the JDK documentation for the details. Below, the producer-consumer problem is solved using concurrent collections; first, a short sketch of the iterator behavior just described.
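The following minimal sketch (class and variable names are my own, for illustration only) contrasts the fail-fast iterator of an ArrayList with the weakly consistent iterator of a ConcurrentHashMap:

import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;

public class IteratorDemo {
    public static void main(String[] args) {
        // Fail-fast: structurally modifying the ArrayList while iterating it
        // makes the iterator throw ConcurrentModificationException.
        List<Integer> list = new ArrayList<>();
        list.add(1);
        list.add(2);
        try {
            for (Integer i : list) {
                list.add(3); // modification during iteration
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("ArrayList iterator is fail-fast: " + e);
        }

        // Weakly consistent: the iterator of a concurrent collection tolerates
        // concurrent modification and does not throw.
        ConcurrentHashMap<Integer, String> map = new ConcurrentHashMap<>();
        map.put(1, "a");
        map.put(2, "b");
        for (Integer key : map.keySet()) {
            map.put(3, "c"); // no exception is thrown
        }
        System.out.println("ConcurrentHashMap iterator is weakly consistent");
    }
}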
Implementing producer-consumer with BlockingQueue and ArrayBlockingQueue:
ArrayBlockingQueue is a bounded blocking queue backed by an array. Its put() method blocks when the queue is full and its take() method blocks when the queue is empty, so producers and consumers are easy to implement on top of this queue; the full example is at the link below, followed by a small sketch.
See http://blog.51cto.com/12222886/1963884 for the full example.
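As a minimal sketch of the idea (my own illustrative code, not the example from the link above), a producer thread put()s values into an ArrayBlockingQueue while a consumer thread take()s them:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerDemo {
    public static void main(String[] args) {
        // Bounded queue with capacity 5: put() blocks when full, take() blocks when empty.
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(5);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) {
                    queue.put(i);                 // blocks if the queue is full
                    System.out.println("produced " + i);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) {
                    Integer value = queue.take(); // blocks if the queue is empty
                    System.out.println("consumed " + value);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}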
An in-depth look at ConcurrentHashMap:
The key point to understand is the putIfAbsent method discussed below.
ConcurrentHashMap is a thread-safe collection, but compound check-then-act operations on it are not thread-safe under multithreaded access, for example the following code:
if (!map.containsKey("something")) map.put("something", "something");
Consider what happens if, after our containsKey check but before our put, another thread puts a value for the key "something" into the map. Our put then goes ahead and overwrites that value, which creates a safety problem. To guarantee safety we could do this:
synchronized (map) { if (!map.containsKey("something")) map.put("something", "something"); }
The above solves the safety problem, but performance suffers. Instead, we should use the API provided by ConcurrentHashMap, putIfAbsent(K key, V value), which puts the value only if the key is not already present. It is thread-safe, offers better performance, and is equivalent to the following code:
synchronized (map) {
    if (!map.containsKey(key))
        return map.put(key, value);
    else
        return map.get(key);
}
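A minimal sketch of using putIfAbsent (the class and key names here are illustrative, not from the original post):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PutIfAbsentDemo {
    public static void main(String[] args) {
        ConcurrentMap<String, String> map = new ConcurrentHashMap<>();

        // Atomically inserts only if the key is absent; returns null on success,
        // or the existing value if another thread got there first.
        String previous = map.putIfAbsent("something", "something");
        System.out.println("first call returned: " + previous);      // null

        previous = map.putIfAbsent("something", "another value");
        System.out.println("second call returned: " + previous);     // "something"
        System.out.println("value in map: " + map.get("something")); // "something"
    }
}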
Two. Atomic variables
The intrinsic locks associated with an object's monitor have always performed poorly, and the many non-blocking algorithms developed later can greatly improve performance and scalability.
The java.util.concurrent.atomic package provides efficient non-blocking algorithms. It supports lock-free, thread-safe operations on single variables through atomic classes such as AtomicInteger, AtomicLong, AtomicBoolean, and AtomicReference.
Atomic variables are used to implement counters, sequence generators, and similar constructs, which require mutual exclusion in contended environments without compromising performance. Consider the following code:
package xiancheng;

public class ID {
    private static volatile long nextID = 1;

    static synchronized long getNextID() {
        return nextID++;
    }
}
In the code above, volatile guarantees visibility and synchronized guarantees mutual exclusion, so in a multithreaded environment the code is correct, but its performance is poor under high contention. We can replace it with an atomic variable:
import java.util.concurrent.atomic.AtomicLong;

class ID2 {
    private static AtomicLong nextID = new AtomicLong();

    public static long getNextID() {
        return nextID.getAndIncrement();
    }
}
The code above fully guarantees visibility, mutual exclusion, and atomicity of the operation, and because it uses a non-blocking algorithm under high contention, its performance is comparatively high.
So the question is: why do atomic variables improve performance?
Compare-and-swap (CAS mechanism)
Compare-and-swap (CAS) is the general name for a family of uninterruptible microprocessor instructions. Such an instruction reads a memory location, compares the value read with an expected value, and, if they match, stores a new value into that memory location; otherwise nothing happens.
CAS supports an atomic read-modify-write sequence, which is typically used as follows (a code sketch follows the list):
(1) Read the value x from address A.
(2) Perform a multi-step computation on x to derive a new value y.
(3) Use CAS to change the value at A from x to y. The CAS succeeds only if the value at A has not changed in the meantime.
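As a minimal sketch of this pattern (my own illustrative code, not from the original post), here is an increment implemented with AtomicLong.compareAndSet in a retry loop, which is essentially what getAndIncrement does internally:

import java.util.concurrent.atomic.AtomicLong;

public class CasIncrementDemo {
    private static final AtomicLong counter = new AtomicLong();

    // Read-modify-write with CAS: retry until no other thread
    // changed the value between our read and our write.
    static long incrementAndGet() {
        while (true) {
            long current = counter.get();                // (1) read the current value x
            long next = current + 1;                     // (2) compute the new value y
            if (counter.compareAndSet(current, next)) {  // (3) CAS succeeds only if unchanged
                return next;
            }
            // CAS failed: another thread won the race, so loop and try again.
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            System.out.println(incrementAndGet());
        }
    }
}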
So what exactly makes CAS better?
package xiancheng;

public class ID {
    private static volatile long nextID = 1;

    static synchronized long getNextID() {
        return nextID++;
    }
}
The code above uses synchronized, and under high contention a monitor lock can cause excessive context switching, stalling threads and preventing the application from scaling well. The CAS mechanism does not use a monitor to make the operation atomic; instead, before modifying the value of the ID it checks whether the value has changed: if it has not changed, the new value is written, and if it has changed, nothing is done (methods such as getAndIncrement simply retry in a loop until the CAS succeeds). That check of whether the value has changed is carried out by the CAS instruction itself.
java.util.concurrent.locks.ReentrantLock uses the CAS mechanism to improve performance, and the atomic classes likewise take advantage of CAS.
Java Threads and Concurrency Programming Practice: additional concurrency utility classes