Processes, threads, and coroutines: a detailed interview explanation

The differences between processes, threads, and coroutines

Processes, threads, and coroutines are abstractions of program execution at different levels. They differ in how they handle concurrency, resource usage, and context switching.

  1. Process: A process is the operating system's basic unit of resource allocation and scheduling, and the unit of isolation between programs. Each process has its own independent memory space, and a process can contain multiple threads. Context switching between processes is relatively expensive.

  2. Thread: A thread is the smallest unit of program execution, a smaller unit of work carved out within a process. A process can have multiple threads, and those threads share the process's resources, such as memory and file handles. Context switches between threads are cheaper than between processes.

  3. Coroutine: A coroutine is a lightweight, user-mode thread, also known as a micro-thread or fiber. Coroutine scheduling is entirely under user control. A coroutine has its own register context and stack; when switching coroutines, the current coroutine's state is saved and another's restored directly, avoiding transitions between kernel mode and user mode, so the overhead is smaller than for threads.

Key differences:

  • Processes have their own independent address spaces, and every process has at least one thread; a process can create one or more threads during its execution.

  • A thread is part of a process. Threads in the same process share the process's memory and resources, so they can communicate directly, and switching between threads costs less than switching between processes.

  • A coroutine is an even lighter-weight unit of execution that is not managed by the operating system kernel but controlled entirely by the program (that is, it runs in user mode). Switching between coroutines needs almost no kernel-mode transition and no extra system overhead, which allows extremely high concurrency. The drawback is that the programmer must schedule coroutines manually, which can complicate program design.

These concepts all exist to address concurrency; different approaches suit different situations and should be chosen according to the specific requirements.
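To make the thread model concrete, here is a minimal Java sketch (class and variable names are illustrative) showing that threads of the same process share its memory: both threads update a single counter, which two separate processes could not do without an explicit IPC mechanism.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SharedCounterDemo {
    public static void main(String[] args) throws InterruptedException {
        // Threads of one process share the process's heap, so both
        // threads below increment the very same counter object.
        AtomicInteger counter = new AtomicInteger(0);
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                counter.incrementAndGet(); // atomic, so no updates are lost
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join(); // wait for both threads to finish
        t2.join();
        System.out.println("counter = " + counter.get()); // prints "counter = 2000"
    }
}
```

AtomicInteger is used instead of a plain int so the two threads' increments do not race; with an unsynchronized counter the final value could be anything up to 2000.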

Detailed explanation of communication methods between processes

It is recommended to read this article: Communication between processes

Is the shared memory area here in user space?

Yes, the shared memory region is in user space. Shared memory lets two or more processes read and write a common region of memory. This form of communication is very fast because data does not need to be copied between processes: every process attached to the region can access and modify its contents directly.

Shared memory is a region created by the kernel that a process can attach to its own address space, as if a single piece of memory were shared by multiple processes. There is only one copy in physical memory, but it is mapped into the virtual address spaces of multiple processes; that is what "sharing" means. In this sense, shared memory lives in user space.

Note, however, that managing shared memory (creating, attaching, detaching, deleting, and so on) is done by the kernel, and that work happens in kernel space. But once a process has attached the shared memory region to its address space, it accesses the region like ordinary memory, and that access happens in user space.
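As a sketch of this split between kernel and user space, the Java program below uses a memory-mapped file, one of the shared-memory mechanisms available from Java: FileChannel.map asks the kernel to set up the mapping, but the subsequent reads and writes are plain user-space memory accesses. In a real system two separate processes would map the same file; here one process maps it twice for brevity, and the temp-file name is arbitrary.

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedSharedMemoryDemo {
    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("shm-demo", ".bin");
        f.deleteOnExit();

        // "Writer" side: the kernel creates the mapping (kernel space),
        // then putInt is an ordinary user-space memory write.
        try (RandomAccessFile writer = new RandomAccessFile(f, "rw")) {
            MappedByteBuffer buf =
                    writer.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 64);
            buf.putInt(0, 42);
            buf.force(); // flush the mapped region back to the file
        }

        // "Reader" side: a second mapping of the same file sees the data
        // without any copy between the two sides.
        try (RandomAccessFile reader = new RandomAccessFile(f, "r")) {
            MappedByteBuffer buf =
                    reader.getChannel().map(FileChannel.MapMode.READ_ONLY, 0, 64);
            System.out.println("shared value = " + buf.getInt(0)); // prints "shared value = 42"
        }
    }
}
```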

Thread Pool

It is recommended to check out this article:
Thread Pool

Does "single-machine thread pool" mean a single-threaded pool?

What should we do if the machine running the thread pool loses power? We can apply transaction management to the tasks being processed and persist the tasks in the blocking queue. When a power failure or system crash stops processing, we can use the logs to roll back any partially completed operations of the in-flight tasks, and then re-execute the entire blocking queue. In short: persist the blocking queue; apply transaction control to in-flight tasks; after a power failure, roll back in-flight tasks and recover through the logs; and reload the blocking queue's data after the server restarts.

A: In computing, "single-machine" (standalone) usually refers to a single computer or server, as opposed to a distributed or clustered system. A thread pool is a form of multithreading that creates and manages multiple threads which wait for tasks assigned by the main thread (usually submitted to a blocking queue) and process them in parallel. When a thread finishes a task, it returns to the pool as idle and can be reused.

So a "single-machine thread pool" is a thread pool running on a single computer or server. It does not imply a single thread: a thread pool usually contains multiple threads precisely so that it can process multiple tasks in parallel.

The strategies described above (persisting the blocking queue, transaction control over in-flight tasks, rolling back in-flight tasks after a power failure, and reloading the queue's data after a server restart) handle failures the thread pool may encounter, especially system power outages or crashes, and ensure that tasks are correctly processed and resumed.
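A minimal sketch of the recovery idea, using only standard ThreadPoolExecutor facilities: shutdownNow() returns the tasks still sitting in the blocking queue, which is exactly the set a real system would persist (to disk or a database, which is not shown here) and re-submit after a restart. The class name and task bodies are illustrative.

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RecoverablePoolSketch {
    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
        CountDownLatch started = new CountDownLatch(1);

        // One long-running task occupies the single worker thread...
        pool.execute(() -> {
            started.countDown();
            try {
                new CountDownLatch(1).await(); // block until interrupted
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        // ...so these four tasks pile up in the blocking queue.
        for (int i = 0; i < 4; i++) {
            pool.execute(() -> { });
        }
        started.await(); // make sure the first task is really running

        // Simulated controlled shutdown: shutdownNow() interrupts the running
        // task and hands back the queued, not-yet-started tasks. A real system
        // would serialize these for re-execution after the restart.
        List<Runnable> unprocessed = pool.shutdownNow();
        System.out.println("tasks to persist: " + unprocessed.size()); // prints "tasks to persist: 4"
        pool.awaitTermination(1, TimeUnit.SECONDS);
    }
}
```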

Talk about the usage of CountDownLatch/CyclicBarrier/Semaphore

The difference between CountDownLatch and CyclicBarrier

CountDownLatch and CyclicBarrier are both synchronization helper classes in Java concurrent programming; both can make threads wait for other threads to complete their tasks. However, they have some important differences in usage and functionality:

  1. Usage:

    • CountDownLatch is one-shot: the counter value can only be set in the constructor and cannot be reset. When the counter reaches 0, all threads waiting on the CountDownLatch are woken up, and the object cannot be reused afterwards.

    • CyclicBarrier's counter can be reset, so it can be used multiple times. When the count reaches 0, all waiting threads are woken up, and the CyclicBarrier automatically resets to its initial count.

  2. Functionality:

    • CountDownLatch mainly allows one or more threads to wait for other threads to complete their operations. For example, the main thread waits for all child threads to finish their tasks.

    • CyclicBarrier mainly allows a group of threads to reach a synchronization point and then continue together. For example, a group of threads wait for each other until all are ready, then proceed at the same time.

  3. Operations:

    • CountDownLatch's counter can only be decremented, via the countDown method; it cannot be incremented.

    • CyclicBarrier's counter is decremented by the await method and automatically resets to its initial value once it reaches 0.

Which of the two to use depends on the specific scenario; choose the helper class that best fits the problem at hand.

Give a few examples of use

Sure, here is an example of each class in use.

  1. CountDownLatch example:

Suppose we are building a complex product with many parts. The final production step assembles all the parts, but it can only run after every part is ready. We can use CountDownLatch to model this.

import java.util.concurrent.CountDownLatch;

int PART_COUNT = 10;
CountDownLatch latch = new CountDownLatch(PART_COUNT);

for (int i = 0; i < PART_COUNT; i++) {
    new Thread(() -> {
        System.out.println("Part " + Thread.currentThread().getId() + " is ready");
        latch.countDown();  // each finished part decrements the count by 1
    }).start();
}

try {
    latch.await();  // the main thread waits until all parts are ready
} catch (InterruptedException e) {
    e.printStackTrace();
}

System.out.println("All parts are ready; start assembling the product!");
  2. CyclicBarrier example:

Suppose a team of 4 must carry out a mission in stages. At each stage they need to regroup, and only when all members are ready can they move on to the next stage. We can use CyclicBarrier to model this process.

import java.util.concurrent.CyclicBarrier;

int TEAM_SIZE = 4;
CyclicBarrier barrier = new CyclicBarrier(TEAM_SIZE);

for (int i = 0; i < TEAM_SIZE; i++) {
    new Thread(() -> {
        try {
            System.out.println("Member " + Thread.currentThread().getId() + " is ready");
            barrier.await();  // wait for all members to be ready
            System.out.println("All members are ready; member " + Thread.currentThread().getId() + " starts the mission...");
            barrier.await();  // wait for all members to finish the current stage
            System.out.println("Member " + Thread.currentThread().getId() + " has completed the mission...");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }).start();
}
  3. Semaphore example:

Suppose 10 people are queuing at a bank to use 3 ATMs; a Semaphore can limit how many customers use the ATMs at the same time.

import java.util.concurrent.Semaphore;

int ATM_MACHINE = 3;
Semaphore semaphore = new Semaphore(ATM_MACHINE);

for (int i = 0; i < 10; i++) {
    new Thread(() -> {
        try {
            semaphore.acquire();  // acquire an ATM (blocks if all 3 are busy)
            try {
                System.out.println("Customer " + Thread.currentThread().getId() + " starts using an ATM...");
                Thread.sleep(2000);  // time spent using the ATM
                System.out.println("Customer " + Thread.currentThread().getId() + " finishes using the ATM...");
            } finally {
                semaphore.release();  // always release the ATM, even if interrupted
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }).start();
}

In each example, a synchronization helper class controls concurrency and coordination.
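The reusability difference from the comparison above can also be shown directly: the same CyclicBarrier instance can gate several successive rounds, something a one-shot CountDownLatch cannot do. A minimal sketch (names are illustrative):

```java
import java.util.concurrent.CyclicBarrier;

public class BarrierReuseDemo {
    public static void main(String[] args) throws Exception {
        // The barrier action runs once each time all parties arrive;
        // the barrier then resets itself for the next round.
        CyclicBarrier barrier =
                new CyclicBarrier(2, () -> System.out.println("round complete"));
        Runnable worker = () -> {
            try {
                for (int round = 1; round <= 3; round++) {
                    barrier.await(); // the same barrier instance gates every round
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        };
        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // "round complete" is printed three times: the barrier was reused.
    }
}
```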


Origin blog.csdn.net/yxg520s/article/details/131761557