A thorough explanation of the advantages and disadvantages of concurrent programming



To newcomers, concurrent programming has always seemed enigmatic, so I decided to write this article to organize and deepen my own understanding of it. Why do we need concurrency at all? Every choice involves trade-offs, so what are the disadvantages of concurrent programming? And which concepts should we understand and master while doing concurrent programming? This article addresses these three questions.

1. Why concurrency?

Hardware has always developed extremely rapidly, and there is the famous "Moore's Law". You may wonder why a discussion of concurrent programming digresses into hardware; the connection is that the development of the multi-core CPU provided the hardware foundation for concurrent programming. Moore's Law is not a law of nature or of physics; it is merely a prediction about the future based on observed data. At the predicted pace, our computing power would keep growing exponentially, and we would soon enjoy ever greater performance. But in 2004 Intel announced that its 4GHz chip would be postponed to 2005, and then in the fall of 2004 Intel cancelled the 4GHz plan entirely, which meant that the long run of Moore's Law came to an abrupt end. Clever hardware engineers did not stop there, however: instead of pursuing a single, ever-faster computing unit, they integrated multiple computing units into one chip, forming the multi-core CPU. In little more than a decade, consumer CPUs such as the Intel Core i7 reached 4 and even 8 cores, while professional servers commonly carry several independent CPUs, each with up to 8 cores. Thus Moore's Law seems to continue its life in the growth of CPU core counts. Against this multi-core background, the trend toward concurrent programming was born: concurrent programming can exploit the computing power of a multi-core CPU to the fullest and thereby improve performance.

The eminent computer scientist Donald Ervin Knuth remarked on this situation: in my opinion, this phenomenon (concurrency) is more or less the result of hardware designers having run out of ideas and shifting the responsibility for Moore's Law onto software developers.

In addition, some business scenarios are inherently suited to concurrent programming. For example, in image processing, a 1024×768 image contains 1024 × 768 = 786,432 pixels; traversing all of them serially takes a long time, so such computation-heavy work needs to take full advantage of multiple cores. Another example: when we shop online, to improve response time, operations such as splitting the order, decrementing inventory, and generating the order record can be divided up and completed with multi-threading. Faced with a complex business model, a parallel program responds to business needs better than a serial one, and concurrent programming fits this kind of business decomposition well. Because of these advantages, multi-threading has received much attention, and it is something every CS learner should master:

  • Make full use of the computing power of multi-core CPUs;

  • Facilitate business decomposition and improve application performance.

2. What are the disadvantages of concurrent programming

Multi-threading has so many benefits; does it have no drawbacks at all, and is it applicable in every scenario? Obviously not.

2.1 Frequent context switching

A time slice is the amount of time the CPU allocates to each thread. Because a slice is very short, typically tens of milliseconds, the CPU makes us feel that multiple threads are executing simultaneously by constantly switching between them. Each switch requires saving the current state so that the previous state can later be restored, and this switching is costly; if it happens too often, the advantages of multi-threaded programming cannot be realized. Context switching can usually be reduced through lock-free concurrent programming, CAS algorithms, using as few threads as possible, and coroutines.

  • Lock-free concurrent programming: think of ConcurrentHashMap's lock segmentation — different threads process data in different segments, which reduces context-switch time under multi-threaded contention.

  • CAS algorithms: the Atomic classes use CAS to update data; this optimistic-locking approach effectively eliminates part of the unnecessary context switching caused by lock contention.

  • Use as few threads as possible: avoid creating unneeded threads — for example, creating a large number of threads for small tasks leaves many threads sitting in a waiting state.

  • Coroutines: schedule multiple tasks within a single thread and keep the switching between those tasks inside that single thread.
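As a small illustration of the CAS point above, the sketch below (my own example, not from the original text) uses java.util.concurrent.atomic.AtomicLong, whose incrementAndGet is implemented with a CAS loop, so two threads can update a shared counter without ever taking a lock:

```java
import java.util.concurrent.atomic.AtomicLong;

public class CasCounterDemo {

    static long runDemo() throws InterruptedException {
        AtomicLong counter = new AtomicLong();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                // incrementAndGet spins on a CAS instead of blocking on a lock
                counter.incrementAndGet();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return counter.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo()); // prints 20000: no updates lost, no lock taken
    }
}
```

Because a failed CAS simply retries rather than suspending the thread, contention here never forces a context switch the way a contended `synchronized` block can.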

Since a context switch is itself a relatively time-consuming operation, the book "The Art of Java Concurrency Programming" describes an experiment in which concurrent accumulation was not necessarily faster than serial accumulation. You can use Lmbench3 to measure the duration of a context switch, and vmstat to measure the number of context switches.

2.2 Thread safety

The hardest thing to master in multi-threaded programming is the thread safety of critical sections. A moment's carelessness can produce a deadlock, and once a deadlock occurs, the affected system functionality becomes unavailable.

public class DeadLockDemo {

    private static String resource_a = "A";
    private static String resource_b = "B";

    public static void main(String[] args) {
        deadLock();
    }

    public static void deadLock() {
        Thread threadA = new Thread(new Runnable() {
            @Override
            public void run() {
                synchronized (resource_a) {
                    System.out.println("get resource a");
                    try {
                        Thread.sleep(3000);
                        synchronized (resource_b) {
                            System.out.println("get resource b");
                        }
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
            }
        });
        Thread threadB = new Thread(new Runnable() {
            @Override
            public void run() {
                synchronized (resource_b) {
                    System.out.println("get resource b");
                    synchronized (resource_a) {
                        System.out.println("get resource a");
                    }
                }
            }
        });
        threadA.start();
        threadB.start();
    }
}

In the demo above, two threads are started, threadA and threadB. threadA holds resource_a and waits for resource_b, which threadB holds; threadB holds resource_b and waits for resource_a, which threadA holds. threadA and threadB therefore run into a thread-safety problem and form a deadlock. This reasoning can be confirmed with jps and jstack:

"Thread-1":
  waiting to lock monitor 0x000000000b695360 (object 0x00000007d5ff53a8, a java.lang.String),
  which is held by "Thread-0"
"Thread-0":
  waiting to lock monitor 0x000000000b697c10 (object 0x00000007d5ff53d8, a java.lang.String),
  which is held by "Thread-1"

Java stack information for the threads listed above:
===================================================
"Thread-1":
        at learn.DeadLockDemo$2.run(DeadLockDemo.java:34)
        - waiting to lock <0x00000007d5ff53a8> (a java.lang.String)
        - locked <0x00000007d5ff53d8> (a java.lang.String)
        at java.lang.Thread.run(Thread.java:722)
"Thread-0":
        at learn.DeadLockDemo$1.run(DeadLockDemo.java:20)
        - waiting to lock <0x00000007d5ff53d8> (a java.lang.String)
        - locked <0x00000007d5ff53a8> (a java.lang.String)
        at java.lang.Thread.run(Thread.java:722)

Found 1 deadlock.

As shown above, the current deadlock situation is plainly visible.

Deadlock can usually be avoided in the following ways:

  1. Avoid having one thread acquire multiple locks at the same time;

  2. Avoid having one thread occupy multiple resources inside a lock; try to ensure each lock guards only one resource;

  3. Try timed locks: with lock.tryLock(timeout), the current thread will not block indefinitely while waiting;

  4. For database locks, locking and unlocking must happen on the same database connection, otherwise unlocking will fail.
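Point 3 can be sketched with ReentrantLock.tryLock(long, TimeUnit). The example below is my own minimal illustration (the class and method names are invented): each lock acquisition gives up after a timeout instead of blocking forever, so a thread that cannot get both locks backs off rather than deadlocking:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {

    private static final ReentrantLock lockA = new ReentrantLock();
    private static final ReentrantLock lockB = new ReentrantLock();

    // Returns true only if both locks were acquired within the timeout.
    static boolean transfer() throws InterruptedException {
        if (lockA.tryLock(1, TimeUnit.SECONDS)) {
            try {
                if (lockB.tryLock(1, TimeUnit.SECONDS)) {
                    try {
                        return true; // both locks held: do the protected work here
                    } finally {
                        lockB.unlock();
                    }
                }
            } finally {
                lockA.unlock();
            }
        }
        return false; // timed out: release everything, back off, and retry later
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(transfer()); // prints "true" when uncontended
    }
}
```

Unlike the synchronized-based DeadLockDemo above, a timeout here turns a potential permanent deadlock into a recoverable failure.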

So there is a great deal to learn about using multi-threading correctly: how to guarantee thread safety, and how to properly understand the problems that the Java Memory Model (JMM) raises around atomicity, ordering, and visibility, such as dirty reads and DCL (covered in later articles). The process of learning multi-threaded programming will also teach you a great deal.

3. Concepts you should understand

3.1 Synchronous vs. asynchronous

Synchronous and asynchronous usually describe a single method invocation. With a synchronous call, once the call begins, the caller must wait for the called method to finish before the code after the call can run. An asynchronous call means the caller continues executing the subsequent code regardless of whether the called method has completed, and is notified when the called method finishes. For example, when shopping in a supermarket, if an item is out of stock you have to wait while the warehouse staff fetch it for you, and only after they bring you the goods can you proceed to the checkout — that is like a synchronous call. An asynchronous call is like online shopping: after paying and placing the order online, you no longer have to worry about anything and can go about your business; when the goods arrive, you get a notification and simply pick them up.
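The "place the order, then get notified" flow described above can be sketched with CompletableFuture. This is my own minimal illustration (the messages are invented): the caller keeps running while the asynchronous work completes, and the callback delivers the notification:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncDemo {

    public static void main(String[] args) {
        // Asynchronous call: supplyAsync runs on another thread; the caller
        // is notified via the thenApply callback when the result is ready.
        CompletableFuture<String> future = CompletableFuture
                .supplyAsync(() -> "goods shipped")
                .thenApply(msg -> "notification: " + msg);

        System.out.println("caller keeps working..."); // runs without waiting

        // join() is used here only so the demo observes the result before exiting
        System.out.println(future.join());
    }
}
```

A fully synchronous version would simply call the method and wait; the asynchronous version decouples "placing the order" from "receiving the goods".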

3.2 Concurrency vs. parallelism

Concurrency and parallelism are very easy to confuse. Concurrency means multiple tasks make progress by taking turns, whereas parallelism means tasks truly run "at the same time". In practice, if the system has only one CPU and uses multiple threads, the tasks cannot actually run in parallel; they can only alternate via time slicing, that is, execute concurrently. True parallelism can only occur on systems with multiple CPUs.

3.3 Blocking vs. non-blocking

Blocking and non-blocking usually describe how threads affect one another. For example, if one thread holds a critical-section resource, any other thread that needs that resource must wait for it to be released, which suspends the waiting thread; this situation is blocking. Non-blocking is exactly the opposite: it emphasizes that no thread can block any other, and all threads keep attempting to make forward progress.

3.4 Critical sections

A critical section represents a public resource, or shared data, that can be used by multiple threads. But once the critical-section resource is held by one thread, every other thread must wait.
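A minimal sketch of guarding a critical section with synchronized (my own example, not from the original text): only one thread at a time may execute the increment, so the two threads' updates are never lost:

```java
public class CriticalSectionDemo {

    private static int counter = 0; // shared data: the critical-section resource
    private static final Object lock = new Object();

    static int runDemo() throws InterruptedException {
        counter = 0;
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                synchronized (lock) { // the critical section: one thread at a time
                    counter++;
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo()); // prints 20000: no lost updates
    }
}
```

Without the synchronized block, `counter++` (a read-modify-write) could interleave between threads and the final count would usually fall short of 20000.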



Origin blog.51cto.com/14440216/2439902