Summary of pthread_cond_wait: understanding and using mutexes and condition variables

After two weeks of study, I have gained some basic understanding of threads and processes. The hardest thing to understand has been thread synchronization. There are three methods of thread synchronization: one is the use of mutex locks, another is the use of condition variables, and the last is the asynchronous signal.
- Basic knowledge of mutex locks
- Basic knowledge of condition variables
- The use of condition variables and mutex locks
  * Understanding pthread_cond_wait
- Asynchronous signals
- Semaphores


Mutex

Mutex lock functions:
pthread_mutex_init     initialize the mutex
pthread_mutex_destroy  destroy the mutex once it is no longer needed, releasing the resources it occupies
pthread_mutex_lock     lock
pthread_mutex_unlock   unlock
pthread_mutex_trylock  try to lock without blocking

Through the lock mechanism, a mutex allows only one thread at a time to execute a critical section of code, ensuring that the section runs to completion without another thread interleaving with it.
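
To make the function list above concrete, here is a minimal sketch (my own example, not taken from any particular program): two threads increment a shared counter, and the mutex guarantees that only one of them is inside the critical section at a time.

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock;     /* protects counter */
static long counter = 0;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* enter critical section */
        counter++;                    /* only one thread runs this at a time */
        pthread_mutex_unlock(&lock);  /* leave critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_mutex_init(&lock, NULL);  /* initialize */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_mutex_destroy(&lock);     /* release resources when done */

    printf("counter = %ld\n", counter);
    return 0;
}
```

Built with `-pthread`, the final value is always 200000; without the lock the two increments could interleave and lose updates.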

Condition variables

Condition variable functions:
pthread_cond_init       initialize the condition variable
pthread_cond_wait       block on the condition variable, waiting unconditionally
pthread_cond_timedwait  block on the condition variable, but wait only until a specified time
pthread_cond_signal     send an activation signal to wake one waiting thread
pthread_cond_broadcast  unblock all waiting threads
pthread_cond_destroy    destroy the condition variable

A condition variable involves two operations: first, a thread that wants to use the resource waits for the condition variable to become true; second, another thread sets the condition to true after it has finished using the shared resource.

The relationship between condition variables and mutex locks

To ensure that the condition is modified correctly, the condition variable itself also needs protection, and a mutex is used to protect it. Consider what happens if the two are not combined and a thread relies only on a mutex: every other thread that reaches the critical section checks whether the lock is held, and if it is, waits for it to be released. Many threads may pile up here, all blocked on the same lock, and when the holder releases it they all scramble for it again, wasting time and resources.

When a condition variable is added, the blocked threads become ordered, queued up and waiting. A thread no longer has to keep re-checking whether the lock has been released; a single notification (the activation signal) tells it that the lock it needs is now available and it can proceed. The whole process becomes orderly instead of wasting resources on contention. The combined operation can be pictured as turning a disordered scramble for the resource into an ordered, resource-saving queue, which is why using a condition variable together with a mutex guarantees that the condition is modified correctly.
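
A minimal sketch of this combination (my own example, using a hypothetical flag named `ready`): the waiting thread sleeps on the condition variable instead of repeatedly contending for the lock, and the other thread sets the condition and sends the activation signal.

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int ready = 0;            /* the "condition", protected by the mutex */

static void *waiter(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    while (!ready)                        /* wait until the condition is true */
        pthread_cond_wait(&cond, &lock);  /* releases the lock while blocked */
    printf("condition is true, resource can be used\n");
    pthread_mutex_unlock(&lock);
    return NULL;
}

static void *setter(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    ready = 1;                            /* finished using the shared resource */
    pthread_mutex_unlock(&lock);
    pthread_cond_signal(&cond);           /* wake one blocked waiter */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, waiter, NULL);
    pthread_create(&t2, NULL, setter, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```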

Understanding pthread_cond_wait

When using condition variables, the lock plays a very important role, and one of the key functions is the wait function.

A condition variable is a mechanism for synchronization through a global variable shared between threads. It mainly involves two actions: one thread waits for the "condition to become true" and suspends; another thread makes the condition true and signals that it has become true. To prevent races, condition variables are always used together with a mutex.
- pthread_cond_wait, Baidu Encyclopedia

Function prototype:
int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex);
This function performs two actions: it first releases the lock pointed to by the second parameter, and after unlocking it blocks the calling thread on the condition variable pointed to by the first parameter cond, until the condition is signaled.
When a thread is using a condition variable together with a mutex and the condition is not yet satisfied, this function unlocks the mutex and blocks the thread. Once the condition is satisfied and the thread is activated, it proceeds to re-acquire the lock and the function finally returns; at that point the wait statement has finished executing and the code that follows can run.

Note two points:
  1) Before calling pthread_cond_wait(), you must lock the associated mutex, because if the target condition is not met, pthread_cond_wait() will actually unlock the mutex, then block, re-lock the mutex once the target condition is met, and then return. – This is very important.
  2) Why while (sum < 100) rather than if (sum < 100)? Because there is a time window between pthread_cond_signal() being called and pthread_cond_wait() returning. If, during that window, another thread t4 reduces sum back below 100, then t3 must check sum again after pthread_cond_wait() returns. That is the purpose of the while loop; see the sketch below.
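
A sketch of the waiting side, reusing the sum / t3 / t4 names from the note above purely as hypothetical illustrations:

```c
#include <pthread.h>

/* Hypothetical shared state; the names sum, t3 and t4 come from the note
 * above, not from any real program. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int sum = 0;

/* Waiting thread "t3": the condition is re-tested in a while loop, because
 * another thread ("t4") may have changed sum between the signal and the
 * return of pthread_cond_wait(). */
void *t3_routine(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);            /* lock before waiting */
    while (sum < 100)                     /* while, not if */
        pthread_cond_wait(&cond, &lock);  /* unlock, block, re-lock on wakeup */
    /* here sum >= 100 is guaranteed and the lock is still held */
    pthread_mutex_unlock(&lock);
    return NULL;
}
```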
pthread_cond_wait() is used to block the current thread while it waits for another thread to wake it up with pthread_cond_signal() or pthread_cond_broadcast().
pthread_cond_wait() must be used in conjunction with pthread_mutex_lock(). As soon as it enters the wait state, pthread_cond_wait() automatically releases the mutex. When another thread wakes the waiting thread with pthread_cond_signal() or pthread_cond_broadcast() and pthread_cond_wait() returns, the thread automatically re-acquires the mutex.
pthread_cond_signal() is generally paired with pthread_mutex_unlock(): it is best to unlock the mutex before sending the signal or broadcast, because a thread blocked on the condition variable must re-lock the mutex when it is awakened. If the notifying thread still holds the mutex at that moment, the awakened thread immediately blocks again on the mutex.
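
And a sketch of the notifying side, continuing the same hypothetical sum example above: the shared state is changed under the lock, the mutex is released, and only then is the signal sent, following the ordering described in the text.

```c
/* Hypothetical notifying thread ("t4"-style): update sum under the lock,
 * unlock, then signal, so the awakened waiter does not immediately block
 * on a mutex that the notifier still holds. (Signaling while holding the
 * lock is also permitted; this ordering simply follows the text above.) */
void *notifier_routine(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    sum += 100;                      /* make the condition true */
    pthread_mutex_unlock(&lock);     /* release the mutex first ... */
    pthread_cond_signal(&cond);      /* ... then wake one waiting thread */
    return NULL;
}
```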

Asynchronous signals

Synchronization is coordinated pacing, running in a predetermined sequence. For example: you finish speaking, then I speak.
The character "tong" (同) in the Chinese word for synchronization is easy to read literally as "acting at the same time", but that is not what it means here. "Tong" should be understood as collaboration, assistance, and mutual cooperation.
Process and thread synchronization can be understood the same way: process or thread A and B cooperate. When A has run to a certain point, it depends on some result from B, so it stops and signals B to run; B runs as requested and hands its result back to A; A then continues.
So-called synchronization means that when a function call is issued, the call does not return until the result has been obtained, and other threads cannot invoke the same operation in the meantime. By this definition most functions are in fact synchronous calls (sin, isdigit, and so on). But when we talk about synchronous and asynchronous, we usually mean tasks that need the cooperation of other components or that take some time to complete. For example, the Windows API function SendMessage sends a message to a window and does not return until the other side has processed it; when processing finishes, the function hands the LRESULT value returned by the message-handling function back to the caller.
In multi-threaded programming, some sensitive data must not be accessed by multiple threads at the same time. Synchronized access is then used to ensure that the data can be accessed by at most one thread at any moment, guaranteeing its integrity.

Seeing this, I was quite confused: how do asynchronous signals achieve synchronization? Going back to the starting point, what is thread synchronization? In fact, thread synchronization does not mean executing a piece of code at the same time; it means executing through coordination and negotiation. It is an orderly process that keeps threads from competing and fighting over resources, because fighting over them usually ends badly and gets very messy... so everything has to proceed in order, negotiated in order, just like queuing up.

First of all, a signal is asynchronous with respect to any thread, and it arrives at a different moment for each thread. When multiple threads can receive an asynchronous signal, only one thread actually gets it first. If several concurrent signals are sent to a process with many threads, those signals will be picked up by different threads; although they are concurrent, within that window there is still a sequence. If all threads block these signals, then the process of a thread receiving a signal, being suspended, and then being activated to handle it is still a queued, one-after-another process; it only appears to happen simultaneously.
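
One common pattern for taming asynchronous signals in a multi-threaded program (my own illustration, not from the original text) is to block the signal in every thread and let a single dedicated thread receive it synchronously with sigwait(), so the handling becomes an orderly, queued process:

```c
/* A minimal sketch: block SIGUSR1 in all threads, then let one dedicated
 * thread receive it synchronously with sigwait(), so the asynchronous
 * signal is handled in a predictable order. */
#include <pthread.h>
#include <signal.h>
#include <stdio.h>

static sigset_t set;

static void *signal_thread(void *arg)
{
    (void)arg;
    int sig;
    for (;;) {
        sigwait(&set, &sig);         /* blocks until SIGUSR1 is delivered */
        printf("got signal %d\n", sig);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;

    sigemptyset(&set);
    sigaddset(&set, SIGUSR1);
    /* block SIGUSR1 in this thread; threads created afterwards inherit the mask */
    pthread_sigmask(SIG_BLOCK, &set, NULL);

    pthread_create(&tid, NULL, signal_thread, NULL);
    pthread_join(tid, NULL);         /* in this sketch the thread never exits */
    return 0;
}
```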

Semaphore

Quoting a fragment seen on Weibo:
"The use of this shared memory is subject to race conditions. From the example of file locks, we know that communication between processes is not just a matter of passing data; critical-section code like this also has to be handled. Here we could also use file locks, but applying file locks to shared memory is too incongruous: besides being inconvenient and inefficient, file locks cannot be used for higher-level process control. So what we need here is a more advanced process-synchronization primitive to implement the related functionality, and that is what semaphores do."
