Java concurrent programming and high concurrency solutions

Chapter 1 Course Preparation

This chapter first introduces the course as a whole: its focus, its characteristics, who it suits, and what you will gain from it. It then walks through a real counting scenario to show thread unsafety under multi-threaded concurrency, giving you a first taste of concurrent programming. After that it explains the concepts of concurrency and high concurrency, contrasting the two so everyone understands what each means. Finally, it lists the knowledge and skills the course involves, in preparation for subsequent learning...
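The counting scenario mentioned above can be sketched as follows. This is a minimal illustration, not the course's actual demo code; the class and variable names are my own. A plain `int` increment compiles to a read-modify-write sequence, so concurrent increments can be lost, while `AtomicInteger` performs the same increment atomically:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class UnsafeCountDemo {
    // Plain int increment is not atomic: the read-modify-write can interleave.
    static int unsafeCount = 0;
    // AtomicInteger performs the increment atomically (via CAS).
    static AtomicInteger safeCount = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        int threads = 50, incrementsPerThread = 1000;
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        CountDownLatch done = new CountDownLatch(threads);
        for (int i = 0; i < threads; i++) {
            pool.execute(() -> {
                for (int j = 0; j < incrementsPerThread; j++) {
                    unsafeCount++;               // lost updates possible
                    safeCount.incrementAndGet(); // always correct
                }
                done.countDown();
            });
        }
        done.await();
        pool.shutdown();
        // unsafeCount is usually LESS than 50000; safeCount is exactly 50000.
        System.out.println("unsafe=" + unsafeCount + " safe=" + safeCount.get());
    }
}
```

Running this a few times typically shows `unsafe` drifting below 50000 while `safe` is always exactly 50000.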

Chapter 2 Concurrency Basics

This chapter explains the basic concepts that must be understood before studying concurrency, chiefly the CPU's multi-level cache and the Java Memory Model (JMM). The multi-level cache part covers cache coherence and out-of-order execution optimization in depth; the JMM part details what the JMM specifies, its abstract structure, and the eight synchronization operations and their rules. These fundamentals underpin all subsequent concurrent programming and are also common interview topics, so they need to be carefully understood and mastered. Final summary...

Chapter 3 Project Preparation

This chapter makes the necessary preparations for the course's code demonstrations. First we quickly build a demo Java project on Spring Boot, then briefly introduce Gitee (the "code cloud" hosting service) and code management. Once the project is set up, a simple example demonstrates concurrency simulation and verification, mainly covering the tools Postman, JMeter, and Apache Bench (ab), plus concurrent test code used to verify the correctness of concurrent processing. ...

Chapter 4 Thread Safety

This chapter explains thread safety from three aspects: atomicity, visibility, and ordering. The atomicity part details the use of (and caveats for) the classes under the atomic package, the CAS principle, the Unsafe class, the synchronized keyword, and more. The visibility part mainly introduces the rules and use of the volatile keyword and the visibility guarantees of synchronized. The ordering part focuses on the happens-before principle. This involves...
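The CAS mechanism underlying the atomic classes can be shown with a short sketch (my own illustration, not the course's code). `compareAndSet` only succeeds when the current value matches the expected one, and a retry loop built on it is essentially how `AtomicInteger.getAndIncrement` works internally:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(10);
        // compareAndSet succeeds only if the current value equals 'expect'.
        boolean first = value.compareAndSet(10, 11);  // true: 10 -> 11
        boolean second = value.compareAndSet(10, 12); // false: value is now 11
        System.out.println(first + " " + second + " " + value.get()); // true false 11

        // A typical CAS retry loop, the pattern atomic classes use internally:
        int prev;
        do {
            prev = value.get();                  // read current value
        } while (!value.compareAndSet(prev, prev + 5)); // retry if someone raced us
        System.out.println(value.get());         // 16
    }
}
```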

Chapter 5 Safely Publishing Objects

This chapter explains the core methods of safely publishing objects, mainly through the various ways of implementing a singleton class, so that everyone understands what each method means concretely in practice. The chapter also consolidates thread safety: it puts the keywords and classes involved in thread safety into practical scenarios once more, to deepen your impression and understanding of them. ...
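One of the classic singleton implementations the chapter refers to is double-checked locking. A minimal sketch (names are illustrative): without `volatile`, instruction reordering could publish a reference to a partially constructed object, which is exactly the unsafe-publication problem this chapter addresses:

```java
public class Singleton {
    // volatile forbids the reordering that could publish a reference
    // to a partially constructed object.
    private static volatile Singleton instance;

    private Singleton() {}

    public static Singleton getInstance() {
        if (instance == null) {                 // first check, no lock (fast path)
            synchronized (Singleton.class) {
                if (instance == null) {         // second check, under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }

    public static void main(String[] args) {
        // Every caller observes the same fully constructed instance.
        System.out.println(getInstance() == getInstance()); // true
    }
}
```

Simpler alternatives covered by the same topic include eager static initialization and the static-inner-holder idiom, which push the safe publication onto the class loader.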

Chapter 6 Thread Safety Policies

This chapter explains thread-safety strategies, including defining immutable objects, thread confinement, synchronized containers, and concurrent containers, and leads into JUC, the key body of knowledge in concurrency. It additionally introduces some classes and coding patterns common in development that are not thread-safe, and gives their corresponding alternatives. The content of this chapter comes up a great deal in day-to-day development and in interviews. ...
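A standard example of a thread-unsafe class with safe alternatives is `SimpleDateFormat`. The sketch below (my own illustration of the pattern, not the course's code) shows the two usual fixes the chapter's topic implies: thread confinement via `ThreadLocal`, and replacement with the immutable `DateTimeFormatter`:

```java
import java.text.SimpleDateFormat;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class SafeFormattingDemo {
    // SimpleDateFormat is NOT thread-safe: sharing one instance across
    // threads can corrupt its internal state. Two common alternatives:

    // 1. Thread confinement: each thread gets its own private instance.
    private static final ThreadLocal<SimpleDateFormat> LOCAL_FORMAT =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

    // 2. An immutable, inherently thread-safe replacement (Java 8+).
    private static final DateTimeFormatter SAFE_FORMAT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd");

    public static void main(String[] args) {
        System.out.println(LOCAL_FORMAT.get().format(new java.util.Date()));
        System.out.println(LocalDate.of(2024, 1, 15).format(SAFE_FORMAT)); // 2024-01-15
    }
}
```

The same substitution pattern applies to the containers the chapter covers, e.g. `HashMap` → `ConcurrentHashMap`, `ArrayList` → `CopyOnWriteArrayList`.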

Chapter 7 AQS of JUC

AQS is a core component of JUC and an important interview topic. This chapter focuses on the design principles of the AQS model and the use of the synchronization components built on it, all highly practical: CountDownLatch, Semaphore, CyclicBarrier, ReentrantLock and Lock, Condition, and so on. You should be proficient with these components: understand their uses and differences, and know not just how to call them but exactly what effect each different method call has. ...
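Two of the listed components can be shown working together in a short sketch (illustrative names, assuming a simple worker scenario): `Semaphore` limits how many threads enter a section at once, while `CountDownLatch` lets the main thread wait until all workers finish:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Semaphore;

public class AqsComponentsDemo {
    public static void main(String[] args) throws InterruptedException {
        int workers = 5;
        // At most 2 threads may hold a permit at the same time.
        Semaphore permits = new Semaphore(2);
        // Main thread blocks until all 5 workers have counted down.
        CountDownLatch done = new CountDownLatch(workers);
        for (int i = 0; i < workers; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    permits.acquire();   // blocks if 2 workers are already inside
                    System.out.println("worker " + id + " running");
                    Thread.sleep(100);   // simulate work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    permits.release();
                    done.countDown();
                }
            }).start();
        }
        done.await();                    // wait for all five workers
        System.out.println("all workers done");
    }
}
```

Note the difference the chapter stresses: a `CountDownLatch` is one-shot, while a `CyclicBarrier` can be reused across generations.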

Chapter 8 JUC Component Extensions

This chapter continues with JUC-related components: FutureTask, the Fork/Join framework, and BlockingQueue. FutureTask is explained in comparison with Callable, Runnable, and Future. These components are used less often than the AQS-based ones, but they are still an important part of JUC and need to be mastered.
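The comparison above can be made concrete with a minimal sketch (names are my own): `Callable` returns a result where `Runnable` does not, and `FutureTask` bridges the two worlds because it implements both `Runnable` (so a plain `Thread` can run it) and `Future` (so the caller can fetch the result later):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;

public class FutureTaskDemo {
    public static void main(String[] args) throws Exception {
        // Callable returns a value (and may throw); Runnable returns nothing.
        Callable<Integer> task = () -> {
            Thread.sleep(100);   // simulate a slow computation
            return 42;
        };
        // FutureTask is both a Runnable and a Future.
        FutureTask<Integer> future = new FutureTask<>(task);
        new Thread(future).start();          // runnable side: hand it to a thread
        System.out.println("doing other work...");
        System.out.println("result = " + future.get()); // future side: block for result
    }
}
```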

Chapter 9 Thread Scheduling - Thread Pools

This chapter explains the last part of JUC: the thread pool. Interviews are very likely to ask about thread-pool knowledge. It mainly covers the drawbacks of raw `new Thread`, the benefits of thread pools, a detailed introduction to ThreadPoolExecutor (parameters, states, methods), the thread-pool class diagram, and the Executor framework interfaces. You need to understand the many details and configuration options of thread pools and be able to use them correctly in real projects. ...
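The ThreadPoolExecutor parameters mentioned above are easiest to remember when the pool is constructed explicitly rather than through an `Executors` factory method. A minimal sketch (the sizes and queue capacity are illustrative, not recommendations):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) throws InterruptedException {
        // Constructing the pool directly makes every parameter visible,
        // unlike Executors.newFixedThreadPool and friends.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                  // corePoolSize
                4,                                  // maximumPoolSize
                60, TimeUnit.SECONDS,               // keep-alive for non-core threads
                new ArrayBlockingQueue<>(100),      // bounded work queue
                new ThreadPoolExecutor.CallerRunsPolicy()); // rejection policy

        for (int i = 0; i < 10; i++) {
            final int id = i;
            pool.execute(() -> System.out.println("task " + id
                    + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();                            // stop accepting, drain the queue
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

The bounded queue plus an explicit rejection policy is the detail interviews probe most: the unbounded queues used by some `Executors` factories can hide overload until memory is exhausted.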

Chapter 10 Multithreaded Concurrency Extensions

This chapter supplements the concurrent programming material with topics close to current interviews: the conditions for deadlock and how to prevent it, best practices for multi-threaded concurrent programming, Spring and thread safety, and HashMap versus ConcurrentHashMap, whose source-level details are especially popular in interviews. Naturally, the questions interviewers like to ask are also particularly important for real project development. ...
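Of the four deadlock conditions (mutual exclusion, hold-and-wait, no preemption, circular wait), the one easiest to break in code is circular wait, by always acquiring locks in the same global order. A minimal sketch of that prevention technique (illustrative names, assuming two shared locks):

```java
public class LockOrderingDemo {
    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();

    // Both methods acquire LOCK_A before LOCK_B. A consistent global
    // order breaks the "circular wait" condition, so deadlock is impossible.
    static void transfer1() {
        synchronized (LOCK_A) {
            synchronized (LOCK_B) {
                System.out.println("transfer1");
            }
        }
    }

    static void transfer2() {
        synchronized (LOCK_A) {      // same order: NOT B-then-A
            synchronized (LOCK_B) {
                System.out.println("transfer2");
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(LockOrderingDemo::transfer1);
        Thread t2 = new Thread(LockOrderingDemo::transfer2);
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println("no deadlock");
    }
}
```

If `transfer2` instead locked B then A, two threads could each hold one lock while waiting for the other, which is the classic deadlock the chapter dissects.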

Chapter 11 Scaling Ideas for High Concurrency

The focus of this chapter is the ideas and means for solving high-concurrency problems, along with the use of the key classes involved. When explaining scaling, we first use an example to distinguish vertical scaling from horizontal scaling, then introduce database read scaling and write scaling in detail. These are the most basic means of scaling and should pose no problems for anyone; the key is analyzing which kind of scaling a real scenario calls for. ...

Chapter 12 Cache Ideas for High Concurrency

This chapter explains caching schemes for high concurrency, including cache characteristics (hit rate, maximum element count, eviction strategy), factors affecting the hit rate, cache classification and application scenarios (local cache, distributed cache), and common caching problems in high-concurrency scenarios (cache consistency, cache concurrency, cache penetration, cache avalanche). In addition, it analyzes the principles of the commonly used cache components Guava Cache, Memcache, and Redis, and demonstrates...
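The "maximum element count + eviction strategy" idea can be sketched without any third-party library: `LinkedHashMap` in access order plus `removeEldestEntry` yields a minimal local LRU cache. This is only an illustration of the eviction concept; production local caches would use something like the Guava Cache the chapter covers:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruCacheDemo {
    // accessOrder=true moves each read entry to the tail;
    // removeEldestEntry evicts the head once capacity is exceeded.
    static <K, V> Map<K, V> lruCache(int maxEntries) {
        return new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;   // evict least-recently-used entry
            }
        };
    }

    public static void main(String[] args) {
        Map<String, String> cache = lruCache(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");          // touch "a", so "b" is now least recently used
        cache.put("c", "3");     // capacity exceeded: evicts "b"
        System.out.println(cache.keySet()); // [a, c]
    }
}
```

Note this sketch is not thread-safe; a shared local cache would need synchronization or a concurrent cache implementation.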

Chapter 13 Message Queue Ideas for High Concurrency

This chapter focuses on the characteristics of message queues (business irrelevance, FIFO, disaster tolerance, performance), why message queues are needed, and their benefits (business decoupling, eventual consistency, broadcast, peak shifting and flow control). It closes with an architectural analysis and feature introduction of the currently popular message-queue components Kafka and RabbitMQ, so that everyone gains a clear understanding of message queues. ...

Chapter 14 Application Splitting Ideas for High Concurrency

This chapter starts directly from the splitting steps of a real project, so you can actually feel the benefits of application splitting and the problems it solves. It then introduces the principles of application splitting (business first, step by step, weighing the technology, reliable testing) and the issues to think through (communication between applications, database design across applications, avoiding cross-application transactions), and finishes with an introduction to the service-oriented framework Dubbo and the microservice framework Spring Cloud. ...

Chapter 15 Application Rate-Limiting Ideas for High Concurrency

This chapter starts from a real project's rate-limiting scenario, saving millions of records, so you can feel the difference between using rate limiting and not using it in high-concurrency scenarios, and appreciate the important role rate limiting plays. It then introduces in detail four commonly used rate-limiting algorithms: the counter method, sliding window, leaky bucket, and token bucket, and briefly compares them. ...
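Of the four algorithms, the token bucket can be sketched in a few lines (my own minimal illustration, not the course's implementation): tokens refill at a fixed rate up to the bucket's capacity, which allows short bursts while capping the sustained rate:

```java
public class TokenBucket {
    private final long capacity;        // max tokens (allowed burst size)
    private final double refillPerNano; // refill rate in tokens per nanosecond
    private double tokens;              // current token count
    private long lastRefill;            // last refill timestamp

    TokenBucket(long capacity, double tokensPerSecond) {
        this.capacity = capacity;
        this.refillPerNano = tokensPerSecond / 1_000_000_000.0;
        this.tokens = capacity;         // start full: an initial burst is allowed
        this.lastRefill = System.nanoTime();
    }

    // Returns true if a request may proceed, false if it should be rejected.
    synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        // Lazily add the tokens accumulated since the last call, capped at capacity.
        tokens = Math.min(capacity, tokens + (now - lastRefill) * refillPerNano);
        lastRefill = now;
        if (tokens >= 1) {
            tokens -= 1;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        TokenBucket limiter = new TokenBucket(3, 1.0); // burst of 3, then 1 req/s
        for (int i = 0; i < 5; i++) {
            System.out.println("request " + i + ": " + limiter.tryAcquire());
        }
        // First 3 requests pass; the next 2 are rejected until tokens refill.
    }
}
```

This also shows the comparison the chapter makes: a leaky bucket smooths output to a constant rate, while a token bucket permits bounded bursts.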

Chapter 16 Service Degradation and Circuit Breaking Ideas for High Concurrency

This chapter first uses examples to explain what service degradation and circuit breaking are, then introduces the classification of service degradation: automatic degradation (timeout, failure count, fault, rate limiting) and manual degradation (switches). It summarizes what degradation and circuit breaking have in common (purpose, observable behavior, granularity, autonomy) and how they differ (triggering cause, management level, implementation approach), along with the issues degradation must consider. Finally comes the introduction of Hystrix for service degradation and circuit breaking...
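The failure-count trigger above can be sketched as a toy circuit breaker (my own illustration; it deliberately omits the half-open recovery state that real implementations such as Hystrix provide). Once consecutive failures reach the threshold, the breaker opens and every call fails fast to the fallback:

```java
import java.util.function.Supplier;

public class SimpleCircuitBreaker {
    private final int failureThreshold;
    private int failures = 0;
    private boolean open = false;   // open = remote calls are skipped (degraded)

    SimpleCircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    // Runs the remote call, returning the fallback when the breaker is open
    // or when the call throws.
    synchronized String call(Supplier<String> remote, String fallback) {
        if (open) return fallback;            // fail fast: degraded response
        try {
            String result = remote.get();
            failures = 0;                     // a success resets the counter
            return result;
        } catch (RuntimeException e) {
            if (++failures >= failureThreshold) open = true; // trip the breaker
            return fallback;
        }
    }

    public static void main(String[] args) {
        SimpleCircuitBreaker breaker = new SimpleCircuitBreaker(2);
        Supplier<String> failing = () -> { throw new RuntimeException("down"); };
        System.out.println(breaker.call(failing, "fallback"));    // failure 1
        System.out.println(breaker.call(failing, "fallback"));    // failure 2: trips
        System.out.println(breaker.call(() -> "ok", "fallback")); // open: still fallback
    }
}
```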

Chapter 17 Database Splitting and Sharding Ideas for High Concurrency

This chapter starts from database bottlenecks and leads into database splitting and table sharding. The database-splitting part focuses on read/write-separation design and compares supporting multiple data sources with splitting into separate databases; finally it introduces when to consider sharding a table, horizontal versus vertical table sharding, and implements table sharding with the MyBatis sharding plug-in shardbatis 2.0. ...

Chapter 18 High Availability Ideas for High Concurrency

This chapter mainly introduces three common means of achieving high availability: distributed task scheduling systems, master/standby switchover design, and monitoring and alarm mechanisms. The task-scheduling part introduces the advantages, ideas, and characteristics of elastic-job; the master/standby switchover part introduces the typical application of ZooKeeper's distributed lock. ...

Chapter 19 Course Summary

This chapter first reviews and summarizes the knowledge covered in the course, then revisits common interview questions on concurrency and high concurrency. I hope everyone gains something from the course, and I look forward to discussing concurrency and high concurrency topics with you.

