Java Concurrent Programming Learning 1: Introduction to Concurrency

Introduction

Early computers had no operating system; they executed a single program from start to finish, and that program could access every resource in the machine. In such a bare-metal environment it was not only difficult to write and run programs, but running only one program at a time also wasted expensive and scarce computing resources.

The emergence of operating systems allowed a computer to run multiple programs at once, each in its own process: the operating system allocates resources to each independently executing process, including memory, file handles, and security credentials. When necessary, processes can exchange data through coarse-grained communication mechanisms such as sockets, signal handlers, shared memory, semaphores, and files.

Since the 1960s, the process had been the basic unit of independent execution in an operating system. As computer technology developed, however, the drawbacks of processes gradually became apparent. First, because a process owns its resources, creating, destroying, and switching between processes carries a large cost in time and space. Second, with the emergence of symmetric multiprocessors (SMP), running enough processes in parallel to keep multiple execution units busy became too expensive. To get the most out of the system, in the 1980s a unit of independent execution smaller than the process was proposed: the thread.

If the purpose of introducing processes into the operating system was to let multiple programs execute concurrently, improving resource utilization and system throughput, then the purpose of introducing threads was to reduce the time and space overhead paid for concurrent execution, giving the operating system better concurrency.

Threads are an indispensable feature of the Java language. They can make complex asynchronous code simpler, which greatly eases the development of complex systems. Threads share process-wide resources such as memory and file handles, but each thread has its own program counter, stack, and local variables. Threads also provide a natural way to decompose a program so that it can exploit the hardware parallelism of a multiprocessor system: multiple threads of the same program can be scheduled onto multiple CPUs at the same time.
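
As a minimal sketch of this point (class and thread names are illustrative, not taken from any particular source), the program below starts two threads that share one heap-resident field while each keeps its own copy of a stack-local loop variable:

```java
// Minimal sketch: both threads share the process heap (the `greeting` field),
// but each keeps its own copy of the local variable `i` on its own stack.
public class ThreadBasics {
    static final String greeting = "hello from";    // shared, read-only state

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 3; i++) {           // `i` is per-thread (stack-local)
                System.out.println(greeting + " "
                        + Thread.currentThread().getName() + ", i=" + i);
            }
        };

        Thread t1 = new Thread(task, "worker-1");
        Thread t2 = new Thread(task, "worker-2");
        t1.start();                                 // the two threads may run on different CPUs
        t2.start();
        t1.join();                                  // wait for both to finish
        t2.join();
    }
}
```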

Advantages of threads

Used properly, threads can reduce the cost of developing and maintaining a program while improving the performance of complex applications. Threads can turn largely asynchronous workflows into mostly sequential ones, which better matches the way humans work and interact. They can also reduce the complexity of code, making it easier to write, read, and maintain.

  • Exploiting multiple processors
    A multithreaded program can execute on several processors at the same time. Designed correctly, it can raise system throughput by raising the utilization of processor resources. In a multithreaded program, if one thread is blocked waiting for an I/O operation to complete, another thread can keep running, so the program makes progress during the I/O wait (see the first sketch after this list).

  • Simplicity of modeling
    By using threads, a complex, asynchronous workflow can be decomposed into a set of simpler, synchronous workflows, each running in its own thread and interacting with the others only at specific synchronization points.

  • Simplified handling of asynchronous events
    When a server application accepts socket connections from multiple remote clients, giving each connection its own thread and using synchronous I/O makes such programs easier to develop. If each request is handled by its own thread, blocking while processing one request does not affect the processing of the others (a thread-per-connection sketch follows this list).

  • More responsive user interfaces
    Traditional GUI applications are usually single-threaded: either the code polls for input events at various points, or all of the application's code runs indirectly through a "main event loop". If code invoked from the main event loop takes a long time to complete, the user interface appears to "freeze", and further interface events can be processed only after control returns to the loop. If the long-running task runs in a separate thread instead, the event thread can handle interface events promptly, keeping the user interface responsive. Modern GUI frameworks such as AWT and Swing replace the main event loop with an event dispatch thread (EDT); a small Swing sketch also follows this list.

The risks of threads

  • Safety issues
    Thread safety can be surprisingly subtle. Without sufficient synchronization, the order in which operations execute across threads is unpredictable and can produce strange results. Because threads share the same memory address space and run concurrently, they can access or modify variables that other threads are using. When multiple threads access and modify the same variable at the same time, non-sequential behavior is introduced into the sequential programming model, and that behavior is hard to reason about. To make a multithreaded program's behavior predictable, access to shared variables must be coordinated so that threads do not interfere with one another (a classic unsynchronized-counter example follows this list).

  • Liveness issues
    Safety means "nothing bad ever happens", while liveness is concerned with "something good eventually happens". A liveness problem occurs when an activity gets into a state from which it can make no further progress. Later notes will introduce the various forms of liveness failure and how to avoid them, including deadlock, starvation, and livelock.

  • Performance issues
    Closely related to liveness are performance issues, which cover a broad range of problems: long service times, poor responsiveness, low throughput, high resource consumption, and poor scalability.
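
To make the safety bullet concrete, here is the classic unsynchronized-counter sketch (class name illustrative): because `value++` is a read-modify-write sequence rather than a single atomic step, two threads can interleave and lose updates:

```java
// Sketch: without synchronization, two threads calling next() can read the
// same value and write back the same result, so increments get lost.
public class UnsafeCounter {
    private int value = 0;

    public int next() {
        return value++;          // three steps: read, add one, write back
    }

    // Declaring the method `synchronized` (or using
    // java.util.concurrent.atomic.AtomicInteger) makes the update atomic:
    // public synchronized int next() { return value++; }

    public static void main(String[] args) throws InterruptedException {
        UnsafeCounter counter = new UnsafeCounter();
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter.next();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Expected 200000 with proper synchronization; often less without it.
        System.out.println("value = " + counter.value);
    }
}
```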

Learning Java concurrent programming is bound to feel dry at times. To combine the study with practice, the author recommends Java Concurrency in Practice, the book currently being studied for this series. The original intention behind organizing these notes is to consolidate the author's own knowledge of concurrent programming by writing it up as blog posts; if, in the process, they also help fellow learners of concurrency, so much the better. A learning process that includes communication and summary will not be so dull. In the next article, we start with the basics of thread safety.

Origin: blog.csdn.net/u012855229/article/details/113100659