Thread locks: mutex locks, spin locks, read-write locks, condition variables, semaphores

Mutex (lock):

1. Define a mutex

pthread_mutex_t mutex;

2. Initialize the mutex

Static allocation:
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

Dynamic allocation:
int pthread_mutex_init(pthread_mutex_t *restrict mutex,
                       const pthread_mutexattr_t *restrict attr);

First, look at a piece of code:

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>         // Create two threads that each increment two global variables; if the variables ever differ, print them

int a = 0;
int b = 0;
// uninitialized and zero-initialized globals are placed in .bss
pthread_mutex_t mutex;

void* route(void* arg)
{
    while(1)            // by design, this should never print
    {
        a++;
        b++;
        if(a != b)
        {
            printf("a =%d, b = %d\n", a, b);
            a = 0;
            b = 0;
        }
    }
}

int main()
{
    pthread_t tid1, tid2;
    pthread_create(&tid1, NULL, route, NULL);
    pthread_create(&tid2, NULL, route, NULL);
    
    pthread_join(tid1, NULL);
    pthread_join(tid2, NULL);
    return 0;
}

The result of running this code is not what we expect:

We expected nothing to be printed, yet output appears, and some of the printed pairs are even equal. Why does this happen?

Explanation: the two threads preempt each other on the CPU. After one thread has incremented a but before it has incremented b (or before it has finished the comparison and print), the other thread is scheduled and runs the comparison and print itself. To avoid this, we need the mutex lock described below.

 

Mutexes (locks): used to protect critical sections of code and guarantee exclusive access to them.

1. Define the mutex: pthread_mutex_t mutex;
2. Initialize the mutex: pthread_mutex_init(&mutex, NULL); // the second parameter is the attribute object; pass NULL for defaults. Initialization makes the lock available (conceptually 1)
3. Lock: pthread_mutex_lock(&mutex); // 1 -> 0; if already 0, wait. A thread that cannot acquire the mutex is suspended and gives up the CPU
4. Unlock: pthread_mutex_unlock(&mutex); // set back to 1 and return

5. Destroy: pthread_mutex_destroy(&mutex);

Return value: 0 on success; an error number on failure.

Note: when multiple threads access a shared resource, lock the mutex before the access and unlock it afterwards. While the mutex is held, any other thread that tries to lock it blocks until the current thread finishes and releases the lock. If several threads are blocked when the mutex is released, all of them become runnable, and the first one to run locks the mutex; the rest see it locked and block again. This guarantees that only one thread accesses the shared resource at a time.

With a mutex, we can now fix the problem above:

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

int a = 0;
int b = 0;
// uninitialized and zero-initialized globals are placed in .bss
pthread_mutex_t mutex;

void* route(void* arg)
{
    while(1)            // by design, this should never print
    {
        pthread_mutex_lock(&mutex);   
        a++;
        b++;
        if(a != b)
        {
            printf("a =%d, b = %d\n", a, b);
            a = 0;
            b = 0;
        }
        pthread_mutex_unlock(&mutex);
    }
}

int main()
{
    pthread_t tid1, tid2;
    pthread_mutex_init(&mutex, NULL);
    pthread_create(&tid1, NULL, route, NULL);
    pthread_create(&tid2, NULL, route, NULL);
    
    pthread_join(tid1, NULL);
    pthread_join(tid2, NULL);
    pthread_mutex_destroy(&mutex);

    return 0;
}

Consider the following scenario: thread 1 executes function A and thread 2 executes function B, and a single lock is used to lock and unlock around the body of each function.

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <unistd.h>           // If a thread is cancelled between locking and unlocking, the mutex is never released, and the other thread blocks forever

pthread_mutex_t mutex;

void* odd(void* arg)
{
  int i = 1;
  for(; ; i+=2)
  {
    pthread_mutex_lock(&mutex);
    printf("%d\n", i);
    pthread_mutex_unlock(&mutex);
  }
}

void* even(void* arg)
{
  int i = 0;
  for(; ; i+=2)
  {
    pthread_mutex_lock(&mutex);
    printf("%d\n", i);
    pthread_mutex_unlock(&mutex);
  }
}


int main()
{
    pthread_t t1, t2;
    pthread_mutex_init(&mutex, NULL);
    pthread_create(&t1, NULL, even, NULL);
    pthread_create(&t2, NULL, odd, NULL);
    //pthread_create(&t3, NULL, even, NULL);
    
    sleep(3);
    pthread_cancel(t2);             // cancel thread 2; this may happen after it locks the mutex and before it unlocks

    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_mutex_destroy(&mutex);

    return 0;
}

The problematic case: if the cancellation of thread 2 lands after it locks the mutex but before it unlocks, the mutex is never released, and thread 1 can no longer run.

To solve such problems, we can use the following macro functions:

Macros: register a thread cleanup (callback) function, which can be used to ensure a cancelled thread still unlocks its mutex.

void pthread_cleanup_push(void (*routine)(void *),  // callback function
                          void *arg);               // argument passed to the callback

The callback runs when:
           1. pthread_exit is called
           2. the thread is cancelled with pthread_cancel

           3. pthread_cleanup_pop is executed with a non-zero argument

void pthread_cleanup_pop(int execute);

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <unistd.h>           // If a thread is cancelled between locking and unlocking, the mutex is never released, and the other thread blocks forever

pthread_mutex_t mutex;

void callback(void* arg)      // unlock in the cleanup handler, so cancellation releases the mutex
{
  printf("callback\n");
  sleep(1);
  pthread_mutex_unlock(&mutex); 
}

void* odd(void* arg)
{
  int i = 1;
  for(; ; i+=2)
  {
    pthread_cleanup_push(callback, NULL); // if pthread_cancel fires inside the critical section, this handler runs and unlocks
    pthread_mutex_lock(&mutex);
    printf("%d\n", i);
    pthread_mutex_unlock(&mutex);
    pthread_cleanup_pop(0);
  }
}

void* even(void* arg)
{
  int i = 0;
  for(; ; i+=2)
  {
    pthread_mutex_lock(&mutex);
    printf("%d\n", i);
    pthread_mutex_unlock(&mutex);
  }
}


int main()
{
    pthread_t t1, t2;
    pthread_mutex_init(&mutex, NULL);
    pthread_create(&t1, NULL, even, NULL);
    pthread_create(&t2, NULL, odd, NULL);
    //pthread_create(&t3, NULL, even, NULL);
    
    sleep(3);
    pthread_cancel(t2);             // cancel thread 2; this may happen after it locks the mutex and before it unlocks
    //pthread_mutex_unlock(&mutex); // problematic: if two threads ran even() and one (say t3) were blocked on the lock when the cancel happens, unlocking here could let t3 and t1 execute even() at the same time

    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_mutex_destroy(&mutex);

    return 0;
}

Notes:

1. Do not destroy a locked mutex. Before destroying a mutex, make sure no thread will use it again.

2. The lock and unlock functions must be used in pairs.

3. Choose an appropriate lock granularity. If the granularity is too coarse, many threads block waiting for the same lock and concurrency barely improves. If it is too fine, the lock overhead hurts system performance and the code becomes quite complex.

4. Hold the lock over the smallest possible range to reduce system load.

When using mutexes you must take care to avoid deadlock: section 14.5.3 of "Linux High-Performance Server Programming" describes a deadlock caused by the order in which two mutexes are requested.

          If a thread tries to lock the same (non-recursive) mutex twice, it deadlocks with itself. There are also less obvious ways to produce a deadlock with mutexes. For example, when a program uses multiple mutexes, suppose thread 1 holds mutex A and blocks while trying to lock mutex B, while thread 2 holds mutex B and is trying to lock mutex A. Each thread is requesting a resource owned by the other, so neither can make progress: a deadlock.

          Deadlocks can be avoided by carefully controlling the order in which mutexes are locked. For example, suppose mutexes A and B must both be held. If every thread always locks mutex A before mutex B, these two mutexes cannot cause a deadlock (deadlocks on other resources are of course still possible); likewise, if every thread always locks B before A, no deadlock occurs. A deadlock is possible only when one thread locks the mutexes in the opposite order from another.

         To guard against deadlock, besides synchronizing correctly, the following three principles (the Chinese mnemonic "short, flat, fast") help avoid writing deadlock-prone code:

1> Short: keep the code in critical sections as concise as possible

2> Flat: avoid complex function calls inside critical sections

3> Fast: make the code in critical sections execute as fast as possible

 

Spin lock: used where latency requirements are high (drawback: wastes CPU while waiting)

pthread_spinlock_t lock;

pthread_spin_init(pthread_spinlock_t *lock, int pshared);

pthread_spin_lock(pthread_spinlock_t *lock); // if unavailable, busy-waits in a loop, keeping the CPU; a mutex, by contrast, suspends the thread and gives up the CPU

pthread_spin_unlock(pthread_spinlock_t *lock);

pthread_spin_destroy(pthread_spinlock_t *lock);

 

Read-write lock (shared-exclusive lock): suited to workloads with many read operations and few writes.

Note: read-read shares; read-write and write-write are mutually exclusive; when readers and writers contend at the same moment, writers typically have higher priority.

1. pthread_rwlock_t rwlock;                               // define

2. int pthread_rwlock_init()                              // initialize

3. pthread_rwlock_rdlock() / pthread_rwlock_wrlock()      // read lock / write lock

4. pthread_rwlock_unlock()                                // unlock

5. int pthread_rwlock_destroy(pthread_rwlock_t *rwlock);  // destroy

Return value: 0 on success; an error number on failure.

Note: a typical use is a job queue. Whenever a job is added to or removed from the queue, take the write lock. Whenever the queue is searched, acquire the lock in read mode, so all worker threads can search the queue concurrently. When the queue is searched far more often than it is modified, a read-write lock can improve performance.

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <unistd.h>                // create 8 threads: 3 writers and 5 readers

pthread_rwlock_t rwlock;
int counter = 0;

void* readfunc(void* arg)
{
  int id = *(int*)arg;
  free(arg);
  while(1)
  {
    pthread_rwlock_rdlock(&rwlock);
    printf("read thread %d : %d\n", id, counter);
    pthread_rwlock_unlock(&rwlock);
    usleep(100000);
  }
}

void* writefunc(void* arg)
{
  int id = *(int*)arg;
  free(arg);
  while(1)
  {
    pthread_rwlock_wrlock(&rwlock);
    int t = counter;   // read under the write lock to avoid a data race
    printf("write thread %d : t= %d,  %d\n", id, t, ++counter);
    pthread_rwlock_unlock(&rwlock);
    usleep(100000);
  }
}
int main()
{
    pthread_t tid[8];
    pthread_rwlock_init(&rwlock, NULL);
    int i = 0;
    for(i = 0; i < 3; i++)
    {
      int* p =(int*) malloc(sizeof(int));
      *p = i;
      pthread_create(&tid[i], NULL, writefunc, (void*)p);
    }
    for(i = 0; i < 5; i++)
    {
      int* p = (int*)malloc(sizeof(int));
      *p = i;
      pthread_create(&tid[3+i], NULL, readfunc, (void*)p);
    }

    for(i = 0; i < 8; i++)
    {
      pthread_join(tid[i], NULL);
    }

    pthread_rwlock_destroy(&rwlock);

    return 0;
}

 

Condition variables: 

If a mutex synchronizes threads' access to shared data, a condition variable synchronizes threads on the value of that shared data. Condition variables provide a communication mechanism between threads: when the shared data reaches a certain state, a thread waiting for that state is woken up.

1. Define the condition variable: pthread_cond_t cond;
2. Initialize: pthread_cond_init(&cond, NULL);
3. Wait for the condition: pthread_cond_wait(&cond, &mutex);
                                 mutex: must be locked by the caller; pthread_cond_wait atomically releases it while waiting
                                 and re-acquires it before returning
4. Signal the condition: pthread_cond_signal(&cond);
5. Destroy the condition variable: pthread_cond_destroy(&cond);

Canonical usage:

pthread_mutex_lock(&mutex);
while (condition not met)
    pthread_cond_wait(&cond, &mutex);
// Why a while loop instead of an if?
// Because pthread_cond_wait can return even when the condition is not met, e.g. when
// interrupted by a signal (a spurious wakeup). After such a wakeup, re-check the
// condition and, if it is still false, go back to waiting.
pthread_mutex_unlock(&mutex);

pthread_mutex_lock(&mutex);
pthread_cond_signal(&cond);  // signal: if no thread is waiting, the signal is discarded (it is not saved)
pthread_mutex_unlock(&mutex);

#include <stdio.h>
#include <pthread.h>
#include <unistd.h>
#include <stdlib.h>             // create two threads: one waits and prints, the other signals once per second

pthread_cond_t cond;
pthread_mutex_t mutex;

void* f1(void* arg)
{
  while(1)
  {
    pthread_mutex_lock(&mutex);        // pthread_cond_wait requires the caller to hold the mutex
    pthread_cond_wait(&cond, &mutex);  // releases the mutex while waiting, re-locks before returning
    printf("running!\n");
    pthread_mutex_unlock(&mutex);
  }
}
void* f2(void* arg)
{
  while(1)
  {
    sleep(1);
    pthread_cond_signal(&cond);
  }
}

int main()
{
  pthread_t tid1, tid2;
  pthread_cond_init(&cond, NULL);
  pthread_mutex_init(&mutex, NULL);

  pthread_create(&tid1, NULL, f1, NULL);
  pthread_create(&tid2, NULL, f2, NULL);

  pthread_join(tid1, NULL);
  pthread_join(tid2, NULL);

  pthread_cond_destroy(&cond);
  pthread_mutex_destroy(&mutex);
  return 0;
}

Semaphores come in two flavors: System V semaphores (kernel persistence) and POSIX semaphores (covered here; named POSIX semaphores have file-based persistence).

1. Define the semaphore: sem_t sem;
2. Initialize the semaphore: sem_init(sem_t *sem,
                                      int pshared,        // 0: shared among the threads of one process
                                      unsigned int val);  // initial value of the semaphore
3. P/V operations: int sem_wait(sem_t *sem);  // sem--; blocks while the value is 0 (P operation)
                   int sem_post(sem_t *sem);  // sem++ (V operation)
4. Destroy: sem_destroy(sem_t *sem);

 

Extended learning:

Optimistic lock and pessimistic lock?

Optimistic locking:

     In relational database management systems, optimistic concurrency control (also known as "optimistic locking", abbreviated OCC) is a method of concurrency control. It assumes that concurrent transactions from multiple users usually do not interfere with each other, so each transaction can process the data it touches without taking locks. Before committing an update, each transaction checks whether another transaction has modified the data since it was read; if so, the committing transaction is rolled back.

An optimistic concurrency control transaction goes through the following phases:
1. Read: the transaction reads the data into a cache, and the system assigns the transaction a timestamp.
2. Validate: when the transaction is ready to commit, it is validated against the other transactions: if the data it read has been modified by another transaction since the read, there is a conflict, and the transaction is interrupted and rolled back.

3. Write: if validation passes, the updated data is written to the database.

Advantages and disadvantages:

       Optimistic concurrency control assumes that data races between transactions are relatively rare, so work should proceed directly where possible, and no lock is taken until commit time; thus no locks and no lock-induced deadlocks occur. But doing this naively can still produce unexpected results: for example, if two transactions both read the same row, modify it, and write it back, one update is silently lost. That is exactly the conflict the validation phase exists to catch.

Pessimistic lock:

    In relational database management systems, pessimistic concurrency control (also known as "pessimistic locking", abbreviated PCC) is a method of concurrency control. It prevents one transaction from modifying data in a way that affects other users: if a transaction reads a row of data and takes a lock on it, other transactions can perform operations that conflict with that lock only after the transaction releases it.

Advantages and disadvantages: pessimistic concurrency control is a conservative "lock before access" strategy that guarantees safe data processing. In terms of efficiency, however, the locking machinery adds overhead to the database and increases the chance of deadlock. Moreover, read-only transactions cannot conflict with each other, so locking them only adds system load. Parallelism is also reduced: if one transaction locks a row, other transactions must wait for it to finish before processing that row.

 

How many threads can the system create at most? (In practice, measure it; the result varies with the stack size assigned to each thread.)

One way is to read the kernel limit directly: cat /proc/sys/kernel/threads-max. My machine shows 7572.

Another is to estimate from the address space: a 3 GB user space divided by the default 8 MB thread stack gives 3072 MB / 8 MB = 384 threads.

The third is to measure with a program: mine ran up to 32754 threads (theoretical value 32768).

Origin blog.csdn.net/huabiaochen/article/details/105019191