Java concurrency: context switching, volatile, synchronized

1. Context switching

  Context switching is the process by which the CPU switches between different tasks; at the thread level it corresponds to a thread state change. Context switching takes time, which is something we must account for in concurrent programming: if too much time is spent switching contexts, adding more threads can actually slow a program down.

 Here is the test code:

 

/**
 * Created by coffice on 2017/10/20.
 */
public class SimpleTest {
    private static final long count = 100000000;

    private static void concurrency () throws InterruptedException {
        long start = System.currentTimeMillis();
        Thread thread1 = new Thread(new Runnable() {
            @Override
            public void run() {
                int a = 0;
                for (long i = 0; i < count; i++) {
                    a += 5;
                }
            }
        });

        Thread thread2 = new Thread(new Runnable() {
            @Override
            public void run() {
                int a = 0;
                for (long i = 0; i < count; i++) {
                    a += 5;
                }
            }
        });
        thread1.start();
        thread2.start();
        thread1.join();
        thread2.join();
        long end = System.currentTimeMillis();
        System.out.println("concurrency: " + (end-start) );
    }
    private static void serial () {
        long start = System.currentTimeMillis();
        int a = 0;
        for (long i = 0; i < count; i++) {
            a += 5;
        }

        int b = 0;
        for (long i = 0; i < count; i++) {
            b += 5;
        }
        long end = System.currentTimeMillis();
        System.out.println("serial: " + (end-start) );
    }

    public static void main(String[] args) throws InterruptedException {
        concurrency();
        serial();
    }

}
   Test results:

Iterations  | Serial time (ms) | Concurrent time (ms) | Concurrent vs. serial
100,000,000 | 74               | 40                   | faster
10,000,000  | 11               | 7                    | faster
1,000,000   | 6                | 5                    | faster
100,000     | 3                | 3                    | same
10,000      | 0                | 1                    | slower

  When the iteration count is small, the relative cost of context switching is much larger, so the obvious fix is to reduce the number of context switches. A thread incurs a context switch every time it goes from WAITING back to RUNNABLE, so the less time threads spend waiting, the fewer context switches occur. Common approaches:

  • Lock-free concurrent programming
  • CAS (compare-and-swap) operations instead of locks:
    AtomicInteger integer = new AtomicInteger();
    final boolean b = integer.compareAndSet(1, 2);

  • Avoid creating too many threads; let a thread pool manage them
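As a sketch of the lock-free, CAS-based approach above, here is a minimal counter that retries compareAndSet instead of taking a lock. The class and method names are my own; this is illustrative, not a tuned implementation:

```java
import java.util.concurrent.atomic.AtomicLong;

public class CasCounter {
    private final AtomicLong value = new AtomicLong();

    // Lock-free increment: retry the CAS until no other thread has
    // changed the value between our read and our write.
    public long increment() {
        long current;
        do {
            current = value.get();
        } while (!value.compareAndSet(current, current + 1));
        return current + 1;
    }

    public long get() {
        return value.get();
    }

    public static void main(String[] args) throws InterruptedException {
        CasCounter counter = new CasCounter();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 100_000; j++) {
                    counter.increment();
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println(counter.get()); // 400000: no increment is lost
    }
}
```

A thread that loses the CAS race simply retries; it never blocks, so it never pays for a WAITING-to-RUNNABLE context switch.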

3. synchronized

  3.1 Biased lock

   A biased lock is biased toward the first thread that acquires it. If no other thread contends for the lock afterwards, the thread holding the biased lock never needs to perform synchronization again.

   If another thread does contend for the lock while the program runs, the thread holding the biased lock is suspended, the JVM revokes the bias, and the lock is restored to a standard lightweight lock. (A biased lock only pays off while a single thread uses the lock.) The escalation path is: biased lock -> lightweight lock -> heavyweight lock.
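As an illustrative sketch of the single-threaded pattern biased locking optimizes (all names are my own): only one thread ever enters the synchronized block, so the JVM can bias the monitor to that thread and skip the atomic operations on every later entry. Note that biased locking existed in JDK 6 through 14 and was removed in JDK 15 (JEP 374); on older JDKs, -XX:BiasedLockingStartupDelay=0 applies it from startup.

```java
public class BiasedLockDemo {
    private final Object monitor = new Object();
    long counter = 0; // package-private so the result is easy to inspect

    void work() {
        // Only the main thread ever enters this block, so after the first
        // acquisition a biased-locking JVM (JDK 8-14) can bias the monitor
        // to this thread and avoid CAS operations on re-entry.
        synchronized (monitor) {
            counter++;
        }
    }

    public static void main(String[] args) {
        BiasedLockDemo demo = new BiasedLockDemo();
        long start = System.currentTimeMillis();
        for (int i = 0; i < 10_000_000; i++) {
            demo.work();
        }
        long end = System.currentTimeMillis();
        // Uncontended synchronized is cheap; exact timing varies by JVM.
        System.out.println("counter=" + demo.counter
                + " took " + (end - start) + " ms");
    }
}
```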

  3.2 Lightweight lock

   3.2.1 Spin

    When a lightweight lock meets contention, the thread does not block right away: it spins (busy-waits) first, hoping to acquire the contended lock during the spin window, and blocks only if that fails. This is a big performance win for synchronized blocks with very short execution times, because the spin can usually grab the lock within that short window. Spinning minimizes the chance of blocking, which in turn reduces context switches and improves efficiency; for it to pay off, the spin time must be shorter than the cost of one context switch.
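The spinning idea can be sketched as a tiny user-level spin lock built on AtomicBoolean. This is not how the JVM implements lightweight locks internally, just the same principle, and the names are my own:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLockDemo {
    static final AtomicBoolean held = new AtomicBoolean(false);
    static int shared = 0;

    static void lock() {
        // Busy-wait (spin) instead of blocking: cheap when the critical
        // section is short, wasteful when it is long.
        while (!held.compareAndSet(false, true)) {
            Thread.onSpinWait(); // CPU pause hint, available since Java 9
        }
    }

    static void unlock() {
        held.set(false);
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                lock();
                try {
                    shared++; // the spin lock makes this increment safe
                } finally {
                    unlock();
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(shared); // 200000: no update is lost
    }
}
```

Because a failed acquire keeps burning CPU, this only wins when the lock is held for less time than a context switch costs, which is exactly the condition described above.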

 

  Java 6 introduced biased locks and lightweight locks alongside the existing heavyweight lock, for a total of three kinds of lock. A lock can be in one of four states, from lowest to highest: unlocked, biased, lightweight, and heavyweight. These states escalate as contention grows, and a lock can upgrade but never downgrade.

Lock             | Advantage                                                                                      | Shortcoming                                              | When to use
Biased lock      | Locking/unlocking adds no extra cost; only a nanosecond-scale gap vs. an unsynchronized method | Revoking the bias costs extra if another thread contends | A single thread accesses the synchronized block
Lightweight lock | Contending threads spin instead of blocking, improving response time                           | Spinning wastes CPU if the lock is never acquired        | Response time matters; synchronized blocks execute quickly
Heavyweight lock | Contending threads block instead of spinning, wasting no CPU                                   | Blocked threads are slow to respond                      | Throughput matters; synchronized blocks execute for a long time
