001 --- Getting to Know Concurrent Programming

What is concurrent programming?

Simply put, concurrent programming means having the same processor handle multiple tasks "simultaneously".

Three scenarios of concurrency

1. Division of labor

      Split the work into separate tasks in a reasonable way and assign them to different threads, so that multiple tasks are executed more efficiently.

2. Synchronization

      One thread's execution depends on the results of other threads (see the sketch after this list).

3. Mutual exclusion

      Multiple threads compete for a shared resource.
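
      A minimal sketch of the synchronization scenario (the class name SyncDemo and the computed value are made up for illustration): the main thread needs a worker thread's result, so it waits for the worker with join().

public class SyncDemo {
    private static int result = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(new Runnable() {
            @Override
            public void run() {
                result = 40 + 2;        // produce the result the main thread needs
            }
        });

        worker.start();
        worker.join();                  // main thread waits for the worker to finish
        System.out.println(result);     // prints 42; join() guarantees visibility of the result
    }
}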

The sources of concurrency problems

Although multiple threads can improve an application's efficiency, they inevitably introduce a number of problems as well. These problems come from the following sources:

1. Visibility problems caused by CPU caches

     Because the CPU reads and writes far faster than main memory does, the CPU uses caches to bridge the speed gap between the CPU and memory.

     On a multi-core processor, each core has its own cache. After a core finishes a calculation, the result sits in that core's cache and is written back to main memory at some uncertain later time, so other cores may not see the latest value. This is the cache visibility problem.

             Example: in the following program the expected result is 20,000, but the actual result usually falls somewhere between 10,000 and 20,000.

public class Add {
    // Shared counter operated on by both threads; count++ is not an atomic operation
    private static long count = 0;

    public static long testAdd() throws InterruptedException {
        // Two threads each increment the shared counter 10,000 times
        Thread thread1 = new Thread(new Runnable() {
            @Override
            public void run() {
                for (int i = 0; i < 10000; i++) {
                    count++;
                }
            }
        });

        Thread thread2 = new Thread(new Runnable() {
            @Override
            public void run() {
                for (int i = 0; i < 10000; i++) {
                    count++;
                }
            }
        });

        thread1.start();
        thread2.start();

        // Wait for both threads to finish before reading the result
        thread1.join();
        thread2.join();

        return count;
    }
}
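
             A small driver class (hypothetical, not part of the original post) to run the example:

public class AddDemo {
    public static void main(String[] args) throws InterruptedException {
        // Expected 20,000, but the printed value is usually between 10,000 and 20,000
        System.out.println(Add.testAdd());
    }
}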

2. Atomicity problems caused by thread switching

      The operating system supports far more threads than there are CPU cores. To bridge the speed gap between the CPU and I/O, it schedules threads with a time-slicing (time-division multiplexing) mechanism.

      The cause of atomicity problems: when multiple threads operate on a shared variable and one thread has not yet finished its operation, the time-slicing strategy may hand execution over to another thread, which can then read or write a wrong value.

      Common interview question: why can a long variable cause thread-safety problems in a highly concurrent application on a 32-bit system?

      Reason: a long is 64 bits, but on a 32-bit system assigning to a long is done in two steps, writing one 32-bit half and then the other. If a thread switch happens in between, another thread may read a wrong (torn) value.
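
      The effect can be demonstrated with a sketch like the one below (hypothetical class, not from the original post). Note that it typically only reproduces on a 32-bit JVM; most 64-bit JVMs write a long atomically, in which case the reader loops forever.

// Minimal sketch of a torn long read. The writer alternates between 0L (all bits 0)
// and -1L (all bits 1); if a non-volatile long write is split into two 32-bit writes,
// the reader can observe a value whose halves come from different writes.
public class LongTearing {
    private static long value = 0L;     // deliberately not volatile

    public static void main(String[] args) {
        Thread writer = new Thread(new Runnable() {
            @Override
            public void run() {
                while (true) {
                    value = 0L;
                    value = -1L;
                }
            }
        });
        writer.setDaemon(true);
        writer.start();

        while (true) {
            long v = value;
            if (v != 0L && v != -1L) {  // neither written value: the read was torn
                System.out.println("Torn read: 0x" + Long.toHexString(v));
                break;
            }
        }
    }
}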

3. Ordering problems caused by compiler optimization

      The ordering problem arises because the compiler reorders instructions as an optimization. Normally this does not change the final result of the program, but occasionally it leads to unexpected problems.

      Example: double-checked locking singleton

public class Singleton {
    // Note: instance is deliberately NOT volatile, which is what makes the
    // reordering problem described below possible
    private static Singleton instance = null;

    private Singleton() {}

    public static Singleton getInstance() {
        if (null == instance) {                     // first check: skip locking once initialized
            synchronized (Singleton.class) {
                if (null == instance) {             // second check: only one thread creates the instance
                    instance = new Singleton();
                }
            }
        }

        return instance;
    }
}

       The first null check avoids the performance cost of acquiring the lock on every call; the second null check avoids creating multiple instances. This looks fine, but because the compiler may reorder instructions, problems can still arise.

            The normal instruction sequence for creating an instance is: allocate memory ---> initialize the memory ---> point the variable at the memory address

            The compiler may optimize it into: allocate memory ---> point the variable at the memory address ---> initialize the memory

            If a thread is preempted right after the second (reordered) step, another thread's null check sees a non-null instance and returns it directly; because that instance has not been initialized yet, this may lead to a null pointer exception.
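
            A common fix (not shown in the original post) is to declare the field volatile: since Java 5, the Java memory model prevents this reordering from being observed through a volatile reference, so other threads can never see a half-initialized instance. A minimal sketch (class renamed SafeSingleton to distinguish it from the example above):

public class SafeSingleton {
    // volatile prevents other threads from seeing the reference before
    // the object's memory has been initialized
    private static volatile SafeSingleton instance = null;

    private SafeSingleton() {}

    public static SafeSingleton getInstance() {
        if (null == instance) {                     // avoid locking once initialized
            synchronized (SafeSingleton.class) {
                if (null == instance) {             // only one thread creates the instance
                    instance = new SafeSingleton();
                }
            }
        }
        return instance;
    }
}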

     Three kinds of problems caused by concurrency

       1. Safety problems

             These are essentially data-correctness problems. To ensure thread safety, different threads must be prevented from operating on the same shared data at the same time.

       2. Liveness problems

            Starvation: a thread with a low priority may never get the chance to run.

            Deadlock: threads compete for shared resources while each holds a lock the others need, so all of them wait forever (see the sketch at the end of this section).

            Livelock: in contrast to deadlock, livelock is caused by threads being "too polite". A thread tries to use a shared resource, notices that another thread also needs it, backs off and retries later; the other thread does exactly the same, so neither one ever makes progress.

       3. Performance problems

             Overusing locks makes too large a portion of the program run serially, which defeats the purpose of concurrent programming.

             In practice, avoid unnecessary locks and keep serial sections as small as possible.
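
       As a concrete illustration of the deadlock described above, here is a minimal sketch (hypothetical class, not from the original post): two threads take the same two locks in opposite order and end up waiting on each other forever.

// Minimal sketch of a deadlock: thread 1 holds LOCK_A and waits for LOCK_B,
// while thread 2 holds LOCK_B and waits for LOCK_A, so neither can proceed.
public class DeadlockDemo {
    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();

    public static void main(String[] args) {
        new Thread(new Runnable() {
            @Override
            public void run() {
                synchronized (LOCK_A) {
                    sleepQuietly(100);          // give thread 2 time to grab LOCK_B
                    synchronized (LOCK_B) {
                        System.out.println("thread 1 finished");   // never reached
                    }
                }
            }
        }).start();

        new Thread(new Runnable() {
            @Override
            public void run() {
                synchronized (LOCK_B) {
                    sleepQuietly(100);          // give thread 1 time to grab LOCK_A
                    synchronized (LOCK_A) {
                        System.out.println("thread 2 finished");   // never reached
                    }
                }
            }
        }).start();
    }

    private static void sleepQuietly(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException ignored) {
        }
    }
}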
