5 common concurrency models

Preface

Concurrency is now an everyday problem. As the amount of information we handle keeps growing, much of it has to be processed concurrently, and the original serial approach can no longer meet real needs.
Today we will look at five common concurrency models.

1. The Future model

The Future model combines asynchronous requests with the proxy pattern. For example: suppose we run an e-commerce platform and users place orders on the website. The client sends the order data to the server, and the server fetches the complete order data from the back-end data interfaces and responds to the user.
Let's simulate the flow of a user placing an order:

A. After picking products, the user places an order, and the client sends a request to the server.

B. The server assembles the complete order from the back end based on the client's information. To explain: the client may only send the ids and quantities of a few products, so the server has to read merchant, product, order, inventory, and other data from the back-end database and finally piece together a complete order to return.

C. Step B is time-consuming, so the server immediately returns placeholder (proxy) data to the client, such as an order id.

D. After receiving the order id, the client starts checking the order information, such as whether the product quantities are correct.
Note:
If the user needs to pay at this point, they have to wait for the final order data, that is, the real data, to be returned. If it has not yet returned, they must wait until it does.

Once the complete order information has been assembled, the full order data is returned, the user pays, and the order is complete.
The client sends a long-running request. The server does not wait for the data processing to finish; it immediately returns proxy data (a receipt for the product, so to speak, rather than the product itself), so the user does not have to wait either. The user performs some other operations first, and then retrieves the real data once the server has assembled it. The model makes full use of the time that would otherwise be spent waiting.
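The flow above can be sketched with Java's built-in `Future`. This is a minimal illustration, not the platform's actual code: the class name, the 200 ms sleep, and the order id are invented for the example.

```java
import java.util.concurrent.*;

public class FutureDemo {
    // Kick off assembling the full order in the background and
    // immediately return a Future -- the "proxy" the client holds.
    static Future<String> placeOrder(ExecutorService pool, String orderId) {
        return pool.submit(() -> {
            Thread.sleep(200);                 // simulate slow back-end reads
            return orderId + ":complete";      // the real, assembled order
        });
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> order = placeOrder(pool, "order#1001");

        // The client is not blocked: it can do other work here,
        // e.g. render the order page or validate quantities.
        System.out.println("doing other work while the order is assembled...");

        // Only when the real data is needed (e.g. at payment) do we block.
        System.out.println(order.get());       // waits until the result is ready
        pool.shutdown();
    }
}
```

`placeOrder` returns at once with the proxy object; the caller only blocks at `get()`, which corresponds to the payment step.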

2. Fork/Join model

Divide a task into sufficiently small subtasks, let different threads work on those subtasks, then join after they complete, assembling the subtask results into the result of the big task.
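A small worked example using Java's Fork/Join framework. It is only a sketch: the summing task and the threshold of 1,000 elements are chosen for illustration.

```java
import java.util.concurrent.*;

public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;
    private final long[] nums;
    private final int lo, hi;

    SumTask(long[] nums, int lo, int hi) {
        this.nums = nums; this.lo = lo; this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {               // small enough: compute directly
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += nums[i];
            return sum;
        }
        int mid = (lo + hi) / 2;                  // otherwise split into two halves
        SumTask left = new SumTask(nums, lo, mid);
        SumTask right = new SumTask(nums, mid, hi);
        left.fork();                              // run the left half asynchronously
        return right.compute() + left.join();     // join: combine the sub-results
    }

    public static long parallelSum(long[] nums) {
        return ForkJoinPool.commonPool().invoke(new SumTask(nums, 0, nums.length));
    }

    public static void main(String[] args) {
        long[] nums = new long[10_000];
        for (int i = 0; i < nums.length; i++) nums[i] = i + 1;
        System.out.println(parallelSum(nums));    // sum of 1..10000 = 50005000
    }
}
```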

3. Actor model

Each thread is an Actor. These Actors share no memory; all data is exchanged through message passing.
An Actor is the most basic unit of computation: it receives a message and performs computation based on it.

Actors have mailboxes.
Although many Actors run at the same time, each individual Actor processes its messages sequentially. In other words, if other Actors send three messages to one Actor, it can only handle them one at a time; to process three messages in parallel, you have to send them to three Actors.
Messages are delivered to an Actor asynchronously, so while the Actor is busy with one message, newly arrived messages must be stored somewhere. The mailbox is where they are stored.
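Java has no built-in actors (libraries such as Akka provide them), but the mailbox idea can be sketched with a queue drained by a single thread, which guarantees one-message-at-a-time processing. The class, the counting behavior, and the -1 "stop" message are all invented for this example.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

// A minimal actor: a mailbox (queue) plus one thread that drains it,
// so messages are always processed sequentially.
public class CounterActor {
    private final BlockingQueue<Integer> mailbox = new LinkedBlockingQueue<>();
    private final AtomicLong total = new AtomicLong();  // the actor's private state
    private final Thread loop;

    public CounterActor() {
        loop = new Thread(() -> {
            try {
                while (true) {
                    int msg = mailbox.take();   // one message at a time
                    if (msg < 0) return;        // "poison pill": stop the actor
                    total.addAndGet(msg);
                }
            } catch (InterruptedException ignored) { }
        });
        loop.start();
    }

    // Asynchronous send: the sender never blocks (unbounded mailbox).
    public void send(int msg) { mailbox.add(msg); }

    public long stopAndGet() throws InterruptedException {
        mailbox.add(-1);                        // ask the actor to finish
        loop.join();
        return total.get();
    }

    public static void main(String[] args) throws InterruptedException {
        CounterActor actor = new CounterActor();
        for (int i = 1; i <= 100; i++) actor.send(i);
        System.out.println(actor.stopAndGet()); // 1+2+...+100 = 5050
    }
}
```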

4. Producer consumer model

The core idea is to use a buffer (cache) to hold tasks: start one or more threads to produce tasks, and one or more threads to take tasks out of the buffer and process them.
The advantage is that task generation and task processing are decoupled. The producer does not process tasks; it is only responsible for generating them and saving them to the buffer. The consumer only takes tasks out of the buffer and processes them. In practice, you can start different numbers of threads on each side according to how fast tasks are generated and processed.
For example, if tasks are generated faster than they are consumed, you can flexibly start a few more consumer threads, avoiding slow responses in task processing.
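A minimal Java sketch of this model using `BlockingQueue` as the buffer. The -1 "poison pill" and the `consume` helper (which returns a count so the result is easy to check) are assumptions of the example.

```java
import java.util.concurrent.*;

public class ProducerConsumer {
    // Consumer side: drain tasks from the buffer until the poison pill (-1),
    // returning how many tasks were handled.
    static int consume(BlockingQueue<Integer> queue) throws InterruptedException {
        int handled = 0;
        while (true) {
            int task = queue.take();        // blocks while the buffer is empty
            if (task == -1) return handled; // producer says: no more tasks
            handled++;                      // "process" the task
        }
    }

    public static void main(String[] args) throws Exception {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(10); // the buffer

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) queue.put(i); // blocks if buffer is full
                queue.put(-1);                             // signal "no more tasks"
            } catch (InterruptedException ignored) { }
        });
        producer.start();

        System.out.println("consumed " + consume(queue) + " tasks");
        producer.join();
    }
}
```

Because the queue mediates between the two sides, neither thread ever touches the other directly; scaling up is a matter of starting more producer or consumer threads on the same queue.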

5. Master-Worker model

Core idea: there are two kinds of processes in the system:
the Master process is responsible for receiving and distributing tasks;
the Worker processes are responsible for processing subtasks.
When a Worker process finishes its subtask, it returns the result to the Master process, and the Master process aggregates the results into the final answer.
Worker: actually processes a task;
Master: assigns tasks and combines the final result;
Main: starts the program and schedules the Master.
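A compact Java sketch of the roles above: the Master splits the job, distributes subtasks to a pool of Workers, and sums the partial results. Squaring each subtask stands in for real work; the class and method names are invented for the example.

```java
import java.util.*;
import java.util.concurrent.*;

public class MasterWorker {
    // Master: distribute subtasks to worker threads, then
    // collect and combine the partial results.
    static int run(int[] subtasks) throws Exception {
        ExecutorService workers = Executors.newFixedThreadPool(4);
        List<Future<Integer>> partials = new ArrayList<>();
        for (int t : subtasks) {
            partials.add(workers.submit(() -> t * t)); // Worker: process one subtask
        }
        int total = 0;
        for (Future<Integer> p : partials) total += p.get(); // summarize results
        workers.shutdown();
        return total;
    }

    // Main: start the program and schedule the Master.
    public static void main(String[] args) throws Exception {
        System.out.println(run(new int[]{1, 2, 3, 4})); // 1+4+9+16 = 30
    }
}
```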


Origin blog.csdn.net/rjszz1314/article/details/104269719