[Operating system] thread synchronization, thread mutual exclusion, atomic operation

Thread mutex

Introduction

Let's look at a piece of multi-threaded code: the classic train-ticket-selling example. Xi'an Railway Station has 10 tickets left to Beijing West, and 3 ticket windows are selling them:

// Selling train tickets
#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

int ticket = 10;

// The selling operation run by every window; assume each window sells only 1 ticket at a time
void* SellTicket(void*);

int main()
{
    // 3 ticket windows selling at the same time
    pthread_t tid1, tid2, tid3;
    pthread_create(&tid1, NULL, SellTicket, "Window 1");
    pthread_create(&tid2, NULL, SellTicket, "Window 2");
    pthread_create(&tid3, NULL, SellTicket, "Window 3");
    
    pthread_join(tid1, NULL);
    pthread_join(tid2, NULL);
    pthread_join(tid3, NULL);

    return 0;
}

void*
SellTicket(void* arg)
{
    char* id = (char*) arg;
    while (1)
    {
        if (ticket > 0) 
        {
            sleep(1);  // the ticket clerk takes a moment to process the sale
            --ticket;
            printf("%s sold 1 ticket, %d left\n", id, ticket);
        }
        else
        {
            printf("Tickets are sold out!\n");
            break;  // close this ticket window
        }
    }
    return NULL;
}

There is nothing wrong with the code itself, so compile and run it in two steps:
Compile: gcc 3-销售火车票.c -lpthread
Run: ./a.out
The result is a shock: there are no tickets left, yet the other windows are still selling. Why?
The culprit is preemptive execution of threads.

The reason

How many steps does it take to put the elephant in the refrigerator?
Open the refrigerator door, tuck the elephant in, and close the refrigerator door.

Then how many steps does --ticket take?
Under the von Neumann architecture, it takes three:
read ticket from memory into a register, have the CPU decrement the register by 1, and write the register back to the memory location of ticket.
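
For illustration, the single statement --ticket behaves roughly like the three separate C statements below (a sketch, not literal compiler output); a thread can be preempted between any two of them:

int reg = ticket;   // 1. load ticket from memory into a register
reg = reg - 1;      // 2. decrement the value in the register
ticket = reg;       // 3. store the register back to ticket's memory location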

Since threads are scheduled preemptively, the following interleaving can happen.
Suppose there are now 10 tickets:
Thread 1: ticket -> register [ticket: 10, register: 10]
[Thread 1's time slice expires; its context is saved and thread 2 runs]
Thread 2: ticket -> register [ticket: 10, register: 10]
Thread 2: register - 1 [ticket: 10, register: 9]
Thread 2: register -> ticket [ticket: 9, register: 9]
[Thread 2 finishes; switch back to thread 1 and restore its context]
Thread 1: register - 1 [ticket: 10, register: 9]
Thread 1: register -> ticket [ticket: 9, register: 9]
In the end, the two windows sold 2 tickets in total, yet ticket is 9!

The solution

Let's first define a few concepts:
Critical resource: a resource shared by multiple execution flows. For example, ticket above is a critical resource.
Critical section: the code inside each thread that accesses critical resources, such as --ticket above.
Atomicity: an operation that cannot be interrupted by any scheduling mechanism; it has only two states, either completed or not done at all.
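
As a quick illustration of atomicity, here is a minimal sketch using C11 <stdatomic.h> that makes the decrement itself atomic (the names ticket_atomic and sell_one_atomically are made up for this example). Note that making only the decrement atomic does not by itself fix the ticket > 0 check; the check and the decrement together still need a lock, which is what comes next.

#include <stdatomic.h>

atomic_int ticket_atomic = 10;

// atomic_fetch_sub performs load, decrement and store as one indivisible
// operation, so no thread switch can slip in between the three steps.
void sell_one_atomically(void) {
    int before = atomic_fetch_sub(&ticket_atomic, 1);
    (void)before;  // value of the counter before this decrement
}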

Thread mutex

Mutual exclusion: at any moment, mutual exclusion guarantees that only one execution flow enters the critical section and accesses the critical resource. It is usually used to protect critical resources.

While thread 1 is performing the three steps of "putting the elephant into the refrigerator", other threads must not be allowed to steal its execution right. Put more formally: once code has entered the critical section, no other thread is allowed to enter the critical section.
To implement mutual exclusion we need something that marks whether a thread is currently using the critical resource. That something is called a mutex (mutual exclusion lock).

Let's take a look at the modified code:

// Selling train tickets
#include <stdio.h>
#include <unistd.h>
#include <pthread.h>
#include <sched.h>   // for sched_yield()

int ticket = 10;
pthread_mutex_t mutex;

void* SellTicket(void*);

int main()
{
    pthread_mutex_init(&mutex, NULL);

    pthread_t tid1, tid2, tid3;
    pthread_create(&tid1, NULL, SellTicket, "Window 1");
    pthread_create(&tid2, NULL, SellTicket, "Window 2");
    pthread_create(&tid3, NULL, SellTicket, "Window 3");
    
    pthread_join(tid1, NULL);
    pthread_join(tid2, NULL);
    pthread_join(tid3, NULL);

    pthread_mutex_destroy(&mutex);
    return 0;
}

void* SellTicket(void* arg)
{
    char* id = (char*) arg;
    while (1)
    {
        pthread_mutex_lock(&mutex);
        if (ticket > 0) 
        {
            sleep(1);
            --ticket;
            printf("%s 售出 1 张, 剩余 %d 张\n", id, ticket);
            pthread_mutex_unlock(&mutex);
            sched_yield();  // 测试:放弃 CPU 执行权
        }
        else
        {
            pthread_mutex_unlock(&mutex);
            printf("票售罄了!\n");
            break;
        }
    }
}

A thread locks the mutex when it enters the critical section and unlocks it after leaving the critical section.
If another thread reaches this point while the lock is held, it has to wait.


A mutex is a blocking (pending) lock: once one thread has locked it, other threads that fail to acquire the lock are suspended and placed in the operating system's wait queue.
When the lock is released and the operating system schedules a waiting thread again, that thread can continue executing.
Note that locking and unlocking a mutex must themselves be atomic operations (how that is achieved is explained below).
A mutex guarantees thread safety, but it costs some efficiency, and beyond that it can lead to a more serious problem: deadlock.
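
A minimal, hypothetical sketch of such a deadlock: two threads take two mutexes in opposite orders and end up waiting for each other forever.

#include <pthread.h>
#include <unistd.h>

pthread_mutex_t m1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t m2 = PTHREAD_MUTEX_INITIALIZER;

void* thread_a(void* arg) {
    (void)arg;
    pthread_mutex_lock(&m1);
    sleep(1);                 // give thread_b time to grab m2
    pthread_mutex_lock(&m2);  // blocks forever while thread_b holds m2
    pthread_mutex_unlock(&m2);
    pthread_mutex_unlock(&m1);
    return NULL;
}

void* thread_b(void* arg) {
    (void)arg;
    pthread_mutex_lock(&m2);
    sleep(1);                 // give thread_a time to grab m1
    pthread_mutex_lock(&m1);  // blocks forever while thread_a holds m1
    pthread_mutex_unlock(&m1);
    pthread_mutex_unlock(&m2);
    return NULL;
}

A common rule to avoid this is to always acquire multiple locks in the same global order.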

To implement the locking operation of a mutex, most architectures provide a swap or exchange instruction, which exchanges the contents of a register and a memory cell in a single instruction. Because it is a single instruction, atomicity is guaranteed even on multiprocessor platforms: memory accesses take bus cycles, and while the exchange instruction is executing on one processor, an exchange instruction on another processor has to wait for the bus.
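
As a rough illustration of that idea (not how pthread_mutex_lock is actually implemented), here is a minimal test-and-set spin lock built on C11's atomic_flag; the names my_lock and my_unlock are made up for this sketch:

#include <stdatomic.h>
#include <sched.h>

static atomic_flag lock_flag = ATOMIC_FLAG_INIT;

// atomic_flag_test_and_set atomically writes "set" into the flag and returns
// its previous value: this is exactly the exchange/test-and-set idea above.
void my_lock(void) {
    while (atomic_flag_test_and_set(&lock_flag)) {
        sched_yield();  // flag was already set: someone else holds the lock
    }
}

void my_unlock(void) {
    atomic_flag_clear(&lock_flag);  // release the lock
}

A real mutex additionally puts a blocked thread to sleep in the kernel's wait queue instead of letting it spin.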

Other kinds of locks:
Pessimistic lock: every time you fetch the data you assume it might be modified by other threads, so you take a lock (read lock, write lock, row lock, etc.) before fetching it; other threads that want to access the data block until the lock is released.
Optimistic lock: every time you fetch the data you assume it will not be modified by other threads, so you do not lock. Before updating, you check whether the data was modified by someone else since you read it. The two main techniques are version numbers and CAS operations.
CAS (compare-and-swap): when updating, check whether the current value in memory equals the value read earlier. If it does, write the new value; if not, the update fails and is retried. This is usually a spin, i.e. continuous retrying (see the sketch after this list).
There are also spin locks, fair locks, and unfair locks.
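
A minimal sketch of the CAS retry loop just described, written with C11 atomics (the counter ticket_cas and the function sell_one_cas are hypothetical names for this example):

#include <stdatomic.h>
#include <stdbool.h>

atomic_int ticket_cas = 10;

// Optimistically read the old value, compute the new one, and only write it
// back if nobody has changed ticket_cas in the meantime; otherwise retry.
bool sell_one_cas(void) {
    int old = atomic_load(&ticket_cas);
    while (old > 0) {
        // On failure, atomic_compare_exchange_weak reloads 'old' with the
        // current value, so the next iteration retries with fresh data.
        if (atomic_compare_exchange_weak(&ticket_cas, &old, old - 1))
            return true;   // sold one ticket
    }
    return false;          // sold out
}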

Thread synchronization

Synchronization: controlling the order in which threads execute instead of letting them preempt each other arbitrarily.
Letting threads access critical resources in a certain order, on the premise that the data remains safe, effectively avoids starvation; this is called synchronization.


Let's take an example from everyday life: in a basketball game there are two actions, passing and dunking. If the pass and the dunk are performed by two different people, they must happen in order: pass the ball first, then dunk.
Suppose the pass takes 789789 µs (about 0.79 s) and the dunk takes 123123 µs (about 0.12 s), matching the usleep() calls below.

#include <stdio.h>
#include <unistd.h>
#include <pthread.h>
#include <time.h>

// The passing action
void* ThreadEntry1(void* args) {
    (void) args;
    while (1) {
        printf("Pass\n");
        usleep(789789);
    }
    return NULL;
}

// The dunking action
void* ThreadEntry2(void* args) {
    (void)args;
    while (1) {
        printf("-Dunk\n");
        usleep(123123);
    }
    return NULL;
}

int main() {
    pthread_t tid1, tid2;
    pthread_create(&tid1, NULL, ThreadEntry1, NULL);
    pthread_create(&tid2, NULL, ThreadEntry2, NULL);
    pthread_join(tid1, NULL);
    pthread_join(tid2, NULL);

    return 0;
}

Run it with the same two steps as before:
You can see dunks happening without the ball ever being passed. We need to control the order: you must receive the ball before you can dunk.
We add control to the above code:

#include <stdio.h>
#include <unistd.h>
#include <pthread.h>
#include <time.h>

// Mutex
pthread_mutex_t mutex;
// Condition variable
pthread_cond_t cond;

// The passing action
void* ThreadEntry1(void* args) {
    (void) args;
    while (1) {
        printf("Pass\n");
        // The ball has been passed over, notify the waiting thread
        pthread_cond_signal(&cond);
        usleep(789789);
    }
    return NULL;
}

// The dunking action
void* ThreadEntry2(void* args) {
    (void)args;
    while (1) {
        // First wait for the ball to be passed over.
        // pthread_cond_wait must be called with the mutex locked; it releases
        // the mutex while waiting and re-acquires it before returning.
        pthread_mutex_lock(&mutex);
        pthread_cond_wait(&cond, &mutex);
        pthread_mutex_unlock(&mutex);
        printf("-Dunk\n");
        usleep(123123);
    }
    return NULL;
}

int main() {
    // Initialize mutex and cond
    pthread_mutex_init(&mutex, NULL);
    pthread_cond_init(&cond, NULL);

    pthread_t tid1, tid2;
    pthread_create(&tid1, NULL, ThreadEntry1, NULL);
    pthread_create(&tid2, NULL, ThreadEntry2, NULL);
    pthread_join(tid1, NULL);
    pthread_join(tid2, NULL);

    pthread_mutex_destroy(&mutex);
    pthread_cond_destroy(&cond);
    return 0;
}

Run it again:
Note that when ThreadEntry2 reaches pthread_cond_wait(), the call performs three actions:
1. first, release the lock;
2. wait for the condition cond to become ready;
3. re-acquire the lock and continue with the code that follows.
Steps 1 and 2 must happen atomically, otherwise a notification from the other thread could be missed and the waiter would sit here forever.
In most cases, condition variables have to be used together with a mutex.

Why does pthread_cond_wait need a mutex?
Condition waiting is a means of synchronization between threads. A thread waits because some condition is not yet satisfied, and that condition will not become satisfied out of nowhere: another thread has to change the shared data so that the condition becomes true, and then notify the thread waiting on the condition variable. Because this necessarily involves changes to shared data, the shared data must be protected by a mutex; without the mutex, it cannot be read and modified safely.

  1. Both threads need to access the shared resource, so the shared resource has to be protected by a lock.
  2. If the waiting thread acquires the lock first, how can the signalling thread ever get it? The answer is that the waiting thread releases the lock inside the wait function, and the signalling thread signals after it has finished accessing the critical resource.
  3. If instead the lock were released before calling the wait function, the signalling thread might send the signal in that gap. The signal would be missed before the waiter even entered the wait function, and the waiter could then wait forever.
  4. Therefore releasing the lock and starting to wait must be one atomic action, which is why the function takes the mutex as a parameter; internally it uses atomic instructions to combine the two operations. The canonical usage pattern is sketched below.
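
Putting points 1 to 4 together, the canonical usage pattern looks roughly like the sketch below; the shared flag ball_passed is a hypothetical stand-in for whatever condition the waiter cares about:

#include <pthread.h>
#include <stdbool.h>

pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  cv  = PTHREAD_COND_INITIALIZER;
bool ball_passed = false;   // shared condition, protected by mtx

// Waiting side: hold the lock, re-check the condition in a loop (this also
// guards against spurious wakeups), and let pthread_cond_wait release and
// re-acquire the lock atomically around the sleep.
void wait_for_ball(void) {
    pthread_mutex_lock(&mtx);
    while (!ball_passed)
        pthread_cond_wait(&cv, &mtx);
    ball_passed = false;    // consume the pass
    pthread_mutex_unlock(&mtx);
}

// Signalling side: change the shared condition under the lock, then notify.
void pass_ball(void) {
    pthread_mutex_lock(&mtx);
    ball_passed = true;
    pthread_cond_signal(&cv);
    pthread_mutex_unlock(&mtx);
}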

EOF

Origin: blog.csdn.net/Hanoi_ahoj/article/details/105272260