Getting Started with Linux Multithreading | Thread Mutual Exclusion | Locks | Thread Encapsulation | Lock Encapsulation | Deadlock

Article directory

1. Thread mutual exclusion

1. Concept

2. Thread mutual exclusion interface

1. Mutex interface

Initialize mutex

Mutex locking and unlocking

Destroy mutex

2. Principle of mutex

3. Thread encapsulation

4. Lock encapsulation

5. Deadlock

1. The concept of deadlock

2. Necessary conditions for deadlock:

3. Avoiding deadlock: the core idea is to break any one of the four necessary conditions

Summary



In a multi-threaded program, a global variable is shared by all execution flows; in fact, most resources in threads are shared either directly or indirectly. Wherever there is sharing, concurrent access becomes a problem and may lead to data inconsistency.

1. Thread mutual exclusion

1. Concept

  • Critical resource: a resource shared by multiple execution flows is called a critical resource (shared resources need a degree of protection).
  • Critical section: the code in a thread that accesses critical resources is called the critical section.
  • Non-critical section: the code in a thread that does not access critical resources.
  • Mutual exclusion: at any moment, mutual exclusion guarantees that only one execution flow enters the critical section and accesses the critical resource; it is the usual way to protect critical resources.
  • Atomicity: an operation that cannot be interrupted by any scheduling mechanism; it has only two states, either fully completed or not yet started (see the sketch after this list).
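
A minimal sketch (not from the original article) of why atomicity matters: two threads increment a shared counter with no protection, so the load-add-store steps of counter++ can interleave and updates can be lost. The names counter and add are illustrative.

#include <iostream>
#include <pthread.h>

int counter = 0; // shared (critical) resource

void *add(void *)
{
    for (int i = 0; i < 100000; ++i)
        counter++; // unprotected critical section
    return nullptr;
}

int main()
{
    pthread_t t1, t2;
    pthread_create(&t1, nullptr, add, nullptr);
    pthread_create(&t2, nullptr, add, nullptr);
    pthread_join(t1, nullptr);
    pthread_join(t2, nullptr);
    std::cout << counter << std::endl; // frequently prints less than 200000
    return 0;
}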

2. Thread mutual exclusion interface

1. Mutex interface

Initialize mutex

  1. Static allocation (statically allocated locks do not need to be destroyed)
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

  2. Dynamic allocation

int pthread_mutex_init(pthread_mutex_t *restrict mutex, const pthread_mutexattr_t *restrict attr);

Parameters: mutex - the mutex to initialize
            attr  - mutex attributes; pass NULL to use the default attributes
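
A minimal sketch of the dynamic form (the names mtx, setup, and teardown are illustrative): initialize with the default attributes by passing nullptr for attr, and destroy the mutex once it is no longer needed.

#include <pthread.h>

pthread_mutex_t mtx; // e.g., a member of a dynamically allocated object

void setup()
{
    pthread_mutex_init(&mtx, nullptr); // nullptr: use the default attributes
}

void teardown()
{
    pthread_mutex_destroy(&mtx); // only after it is unlocked and no longer used
}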

Mutex locking and unlocking

int pthread_mutex_lock(pthread_mutex_t *mutex);
int pthread_mutex_unlock(pthread_mutex_t *mutex);
// Return 0 on success, or an error code on failure

  • When pthread_mutex_lock is called and the mutex is unlocked, the function locks the mutex and returns success.

  • If another thread has already locked the mutex when the call is made, or other threads are competing for the mutex at the same time and this thread does not win the competition, then pthread_mutex_lock blocks (the execution flow is suspended) until the mutex is unlocked. A sketch of a protected critical section follows.
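
For example (a sketch, reusing the illustrative counter from the earlier race example), surrounding the increment with lock and unlock makes the critical section safe: a thread that finds the mutex already locked simply blocks inside pthread_mutex_lock until the holder unlocks.

#include <pthread.h>

int counter = 0;
pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;

void *add(void *)
{
    for (int i = 0; i < 100000; ++i)
    {
        pthread_mutex_lock(&mtx);   // blocks if another thread holds the mutex
        counter++;                  // critical section
        pthread_mutex_unlock(&mtx); // lets one waiting thread proceed
    }
    return nullptr;
}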

Destroy mutex

Do not destroy a mutex that is still locked.

After a mutex has been destroyed, make sure that no thread will try to lock it again.

int pthread_mutex_destroy(pthread_mutex_t * mutex);

2. Principle of mutex

To implement the mutex lock operation, most architectures provide a swap or exchange instruction that exchanges data between a register and a memory unit. Because it is a single instruction, atomicity is guaranteed. Even on a multi-processor platform, the bus cycles used to access memory are serialized: while the exchange instruction on one processor is executing, the exchange instruction on another processor can only wait for the bus.

The shared resource and the mutex both live in memory, with mutex initially set to 1. Suppose there are two threads, A and B, and the scheduler runs A first. A sets the value in its register to 0 and then exchanges the register with the mutex in memory; the register now holds 1 and memory holds 0, so A has acquired the lock. When A's time slice expires and B starts running, B also sets its register to 0 and performs the exchange, but mutex in memory is already 0, so B reads back 0 and the thread is suspended until the lock is released.
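
A minimal sketch of this swap-based idea, using C++ std::atomic rather than the actual pthread implementation (which suspends the waiting thread instead of spinning):

#include <atomic>

class SpinLock
{
public:
    void lock()
    {
        // Atomically write 0 into the lock variable and read the old value.
        // Only the thread that swaps out a 1 owns the lock; the others retry.
        while (_val.exchange(0) != 1)
            ;
    }

    void unlock()
    {
        _val.store(1); // put the 1 back so another thread can swap it out
    }

private:
    std::atomic<int> _val{1}; // 1 = unlocked, 0 = locked
};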

3. Thread encapsulation

#include <string>
#include <cstdio>
#include <cstdlib>
#include <pthread.h>

class Thread
{
   public:
    typedef enum
    {
        NEW = 0,
        RUNNING,
        EXITED
    }ThreadStatus;

    typedef void(*func_t)(void *);
    
    // Constructor
    Thread(int num, func_t func, void *args)
        : _tid(0), _status(NEW), _func(func), _args(args)
    {
        char name[128];
        snprintf(name, sizeof(name), "thread-%d", num);
        _name = name;
    }

    int status() { return _status;}
    std::string threadname() {return _name;}
    pthread_t thread_id()
    {
        if(_status == RUNNING) return _tid;
        else return 0;
    }

    static void *runHelper(void *args)
    {
        Thread *ts = (Thread *)args; // recover the current Thread object
        (*ts)();
        return nullptr;
    }


    void operator()()
    {
        if(_func!= nullptr) _func(_args);
    }

    // Create the thread
    void run()
    {
        int n = pthread_create(&_tid, nullptr, runHelper, this);
        if (n != 0) exit(1);
        _status = RUNNING;
    }

    void join()
    {
        int n = pthread_join(_tid, nullptr);
        if (n != 0) return;
        _status = EXITED;
    }

    ~Thread()
    {}



    private:
        pthread_t _tid;
        std::string _name;
        ThreadStatus _status; // current state of the thread
        func_t _func;         // callback the thread will run
        void *_args;          // argument passed to the callback
};
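
A minimal usage sketch of this wrapper (threadFunc is an illustrative callback, not part of the original text):

#include <iostream>

void threadFunc(void *args)
{
    const char *who = static_cast<const char *>(args);
    std::cout << who << " is running" << std::endl;
}

int main()
{
    Thread t(1, threadFunc, (void *)"thread-1");
    t.run();  // actually creates the underlying pthread
    t.join(); // wait for it to finish
    return 0;
}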

4. Lock encapsulation

#pragma once

#include <iostream>
#include <string>
#include <unistd.h>
#include <pthread.h>
// The Thread wrapper from section 3 is assumed to be included here as well.

int tickets = 1000; // ticket count: the shared (critical) resource

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;


class Mutex // does not own the lock itself; the mutex is passed in from outside
{
public:
    Mutex(pthread_mutex_t * mutex)
    :_pmutex(mutex)
    {}
    
    void lock()
    {
        pthread_mutex_lock(_pmutex);
    }

    void unlock()
    {
        pthread_mutex_unlock(_pmutex);
    }

    ~Mutex()
    {}

   private:
    pthread_mutex_t * _pmutex;
  
};


class LockGuard
{
public:
    LockGuard(pthread_mutex_t *mutex)
        : _mutex(mutex)
    {
        _mutex.lock();   // lock on construction
    }

    ~LockGuard()
    {
        _mutex.unlock(); // unlock on destruction (RAII)
    }

private:
    Mutex _mutex;
};


void threadRoutine(void *args)
{
    std::string message = static_cast<const char *>(args);
    while (true)
    {
        LockGuard lockguard(&mutex); // released automatically at the end of each iteration
        if (tickets > 0)
        {
            usleep(200);
            std::cout << message << " get a ticket: " << tickets-- << std::endl; // critical section
        }
        else
        {
            break;
        }
    }

    // Follow-up actions after grabbing tickets (e.g., recording them in a user
    // database); take the lock again if that follow-up work touches shared data.
    usleep(1000);
    LockGuard lockguard(&mutex);
}

int main()
{
    Thread t1(1, threadRoutine, (void *)"1");
    Thread t2(2, threadRoutine, (void *)"2");
    Thread t3(3, threadRoutine, (void *)"3");
    Thread t4(4, threadRoutine, (void *)"4");

    t1.run();
    t2.run();
    t3.run();
    t4.run();

    t1.join();
    t2.join();
    t3.join();
    t4.join();

    return 0;
}

5. Deadlock

Multi-threaded code has to deal with concurrent access to critical resources, which is why a locking strategy is introduced; however, locking itself can lead to deadlock.

1. The concept of deadlock

Deadlock refers to a situation in which each execution flow in a group holds resources that it will not release while waiting for resources held by the others, which will never be released either, so all of them remain blocked permanently.

2. Necessary conditions for deadlock:

1. Mutual exclusion: a resource can be used by only one execution flow at a time.

2. Hold and wait: an execution flow that is blocked while requesting resources keeps holding the resources it has already acquired.

3. Circular wait: several execution flows form a head-to-tail circular chain, each waiting for a resource held by the next.

4. No preemption: resources an execution flow has acquired cannot be forcibly taken away before it has finished using them (see the sketch after this list).
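
A minimal sketch showing all four conditions at once (lockA, lockB, routine1, and routine2 are illustrative names): each thread holds one mutex and waits for the other, neither mutex can be taken away, and the two threads form a circular wait, so both block forever.

#include <pthread.h>
#include <unistd.h>

pthread_mutex_t lockA = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lockB = PTHREAD_MUTEX_INITIALIZER;

void *routine1(void *)
{
    pthread_mutex_lock(&lockA);
    usleep(1000);               // give the other thread time to take lockB
    pthread_mutex_lock(&lockB); // blocks forever: lockB is held by routine2
    pthread_mutex_unlock(&lockB);
    pthread_mutex_unlock(&lockA);
    return nullptr;
}

void *routine2(void *)
{
    pthread_mutex_lock(&lockB);
    usleep(1000);
    pthread_mutex_lock(&lockA); // blocks forever: lockA is held by routine1
    pthread_mutex_unlock(&lockA);
    pthread_mutex_unlock(&lockB);
    return nullptr;
}

int main()
{
    pthread_t t1, t2;
    pthread_create(&t1, nullptr, routine1, nullptr);
    pthread_create(&t2, nullptr, routine2, nullptr);
    pthread_join(t1, nullptr); // never returns: the two threads are deadlocked
    pthread_join(t2, nullptr);
    return 0;
}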

3. Avoiding deadlock: the core idea is to break any one of the four necessary conditions.

Ways to deal with deadlock: 1. avoid using locks where possible; 2. actively release locks already held when further progress is blocked; 3. acquire locks in a fixed order; 4. have threads release their locks in a unified, controlled manner. Two of these fixes are sketched below.
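
Sketches of two of the fixes, built on the lockA/lockB example above (the function names are illustrative): acquiring locks in one fixed global order removes the circular wait, and pthread_mutex_trylock with back-off actively releases what is already held instead of waiting.

#include <pthread.h>

extern pthread_mutex_t lockA, lockB; // the two mutexes from the sketch above

void fixed_order()
{
    pthread_mutex_lock(&lockA); // every thread locks A before B
    pthread_mutex_lock(&lockB);
    /* ... critical section ... */
    pthread_mutex_unlock(&lockB);
    pthread_mutex_unlock(&lockA);
}

void back_off()
{
    for (;;)
    {
        pthread_mutex_lock(&lockA);
        if (pthread_mutex_trylock(&lockB) == 0) // got both locks
            break;
        pthread_mutex_unlock(&lockA); // release what we hold and retry
    }
    /* ... critical section ... */
    pthread_mutex_unlock(&lockB);
    pthread_mutex_unlock(&lockA);
}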


Summary

Most resources in a multi-threaded program are shared, so access to critical resources must be made mutually exclusive with a mutex. Wrapping the pthread interfaces in a Thread class and an RAII-style LockGuard keeps the locking code simple and safe, while locking itself brings the risk of deadlock, which is avoided by breaking any one of its four necessary conditions.


Origin blog.csdn.net/jolly0514/article/details/132671215