Introduction to multi-threaded programming

1. Introduction to processes and threads

1.1. A brief history

Early operating systems had no concept of processes or threads. Once a task was started, it took control of the entire machine, and no other task could run until it completed.
Suppose task A needs to read a large amount of input data (an I/O operation) during execution. The CPU can only wait idly for task A to finish reading before continuing, which wastes CPU resources.

Operating systems that support multi-tasking: the concept of multi-tasking was added to later operating systems. A task is an operation performed by one or more processes to achieve some purpose. Each process corresponds to its own memory address space and can only use that space, so processes do not interfere with each other. This is what makes process switching possible.
When a process is suspended, the operating system saves its current state (process identifier, resources in use, and so on); when the process is switched back in later, it is restored from the saved state and continues execution.
CPU time allocation is managed by the operating system, which assigns time according to the current CPU status and the priority of each process, ensuring that every process gets an appropriate share of CPU time.
Application developers therefore no longer need to spend effort allocating CPU time. They can write as if their program always owns the CPU and focus only on the business logic of the application. In reality, a running process does not occupy the CPU continuously: the operating system preempts it and allocates CPU time to other processes according to their attributes.

Operating systems that support multi-threading:
After processes appeared, operating system performance improved greatly, but people were still not satisfied: they gradually began to demand real-time interaction. Because a process can only do one thing at a time, a process with multiple subtasks can only execute them one after another.
For example, a monitoring system must display image data on the screen, communicate with a server to obtain that image data, and also handle user interaction. If at some moment the system is busy fetching image data from the server and the user clicks a button, the system cannot process the click until the fetch completes. If fetching the image data takes 10 s, the user simply has to keep waiting. Obviously such a system is unacceptable.
So the concept of threads was introduced. A process contains multiple threads; the process is the smallest unit of resource allocation, while the thread is the smallest unit of CPU scheduling. Threads within a process share resources such as the address space and file descriptors. CPU time is allocated by the operating system according to each thread's attributes, ensuring every thread gets an appropriate share.
In a multi-threaded system, the monitoring example above can therefore be implemented with multiple threads: one thread fetches data from the server while another responds to user interaction.
Of course, the example could also be solved with multiple processes: one process communicating with the server and one responding to user operations.

Real-time systems vs. non-real-time systems:
In a real-time system, the correctness of a computation depends not only on the logical correctness of the program but also on the time at which the result is produced. If the system's timing constraints are not met, it counts as a system error.
Real-time scheduling under general-purpose Linux:
General-purpose Linux can approximate real-time scheduling by setting thread priorities (see section 4, Thread priority, below). However, it has the following problems, which is why a real-time operating system is sometimes required:
1. The scheduling granularity in the Linux system is 10 ms, so it cannot provide precise timing.
2. When a process enters kernel mode through a system call, it cannot be preempted.
3. The Linux kernel implementation masks interrupts in many places, which can cause interrupts to be lost.

Scheduling under a real-time operating system:
RTAI (Real-Time Application Interface) is a real-time operating system. Its basic idea is to provide hard real-time support on Linux by implementing a small microkernel real-time OS (also called the RT-Linux real-time subsystem) and running the ordinary Linux system as a low-priority task inside it.
While the usual timing accuracy in Linux is 10 ms (i.e. a 10 ms clock tick), RT-Linux can provide scheduling granularity on the order of tens of microseconds by putting the system's real-time clock into one-shot mode.

1.2. Choices during development

Let’s compare multi-threading and multi-processing along several dimensions.
Comparison by dimension (multi-process vs. multi-thread):

- Data sharing and synchronization: between processes, data sharing is complex and requires IPC, but data is separated so synchronization is simple. Between threads, process data is shared, so sharing is simple, but for the same reason synchronization is complex. Each side has its own advantage.
- Memory and CPU: processes occupy more memory and switching is expensive, so CPU utilization is lower; threads occupy less memory and switching is cheap, so CPU utilization is higher. Threads win.
- Creation, destruction, switching: complex and slow for processes; simple and fast for threads. Threads win.
- Programming and debugging: simple for processes; complex for threads. Processes win.
- Reliability: processes do not affect each other; if one thread crashes, the whole process dies. Processes win.

1. Prefer threads when a large amount of resource sharing is required.
2. Prefer threads when tasks need to be created and destroyed frequently.

2. A simple multi-threading example

#include <pthread.h>
#include <unistd.h>
#include <stdio.h>

void* Func(void* pParam);

int main()
{

int iData = 3;

pthread_t ThreadId;
pthread_create(&ThreadId, NULL, Func, &iData);

for(int i=0; i<3; i++)
{
	printf("this is main thread\n");
	sleep(1);
}

pthread_join(ThreadId, NULL);

return 1;

}

void* Func(void* pParam)
{
    int* pData = (int *)pParam;

    for(int i=0; i<*pData; i++)
    {
        printf("this is Func thread\n");
        sleep(2);
    }

    return NULL;
}

Compile and run as follows:
[root@localhost thread_linuxprj]# g++ -g -o thread_test thread_test.cpp -lpthread

[root@localhost thread_linuxprj]# ./thread_test
this is main thread
this is Func thread
this is main thread
this is Func thread
this is main thread
this is Func thread

This program has two threads: the main thread and the thread that executes the function Func. From the output you can see that the two threads produce output concurrently.

2.1. Starting a thread

int pthread_create( pthread_t *thread,
const pthread_attr_t *attr,
void *(*start_routine) (void *),
void *arg);

Parameter description:
pthread_t *thread: receives the ID of the created thread, its unique identifier.
const pthread_attr_t *attr: thread attributes; here you can set the thread's scheduling policy, priority, and DETACHED/JOINABLE mode. Usually NULL, meaning the default attributes are used.
void *(*start_routine)(void *): pointer to the function the thread executes. The function must take a single void* parameter and return void*.
void *arg: the argument passed to the thread function; may be NULL.

Return value:
0 indicates success. On failure a nonzero error number is returned; unlike most system calls, pthread functions return the error code directly rather than setting errno.
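As an illustrative sketch (the helper name is made up, not from the original text), checking this return value looks like the following; note that the error number comes back as the return value, not through errno:

```cpp
#include <pthread.h>
#include <stdio.h>
#include <string.h>

static void* Worker(void* pParam)
{
    int* pValue = (int*)pParam;
    *pValue += 1;               // trivial work so the effect can be observed
    return NULL;
}

// Create a thread with error checking, then join it. Returns 0 on success.
int RunChecked(int* pValue)
{
    pthread_t ThreadId;
    int iErr = pthread_create(&ThreadId, NULL, Worker, pValue);
    if (iErr != 0)
    {
        // pthread_create returns the error number directly
        fprintf(stderr, "pthread_create failed: %s\n", strerror(iErr));
        return iErr;
    }
    return pthread_join(ThreadId, NULL);
}
```

RunChecked is only a hypothetical helper; the part being illustrated is the `iErr != 0` check and passing the code to strerror.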

2.2. Stopping the thread

2.2.1. Thread automatically stops

After the thread function finishes running, the thread stops automatically. In the example above, once the Func function returns, the corresponding thread stops, and its return value can be obtained by other threads.

You can also explicitly call the pthread_exit function to stop the thread.

Under normal circumstances, just return directly.
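For illustration, here is a sketch of how a thread's return value, whether produced by return or by pthread_exit, can be collected by the thread that joins it (the helper names are made up):

```cpp
#include <pthread.h>
#include <stdlib.h>

static void* Square(void* pParam)
{
    int* pResult = (int*)malloc(sizeof(int));
    *pResult = (*(int*)pParam) * (*(int*)pParam);
    pthread_exit(pResult);           // equivalent to `return pResult;`
}

// Run Square(n) in a thread and fetch its result through pthread_join.
int SquareInThread(int n)
{
    pthread_t ThreadId;
    void* pRet = NULL;
    pthread_create(&ThreadId, NULL, Square, &n);
    pthread_join(ThreadId, &pRet);   // pthread_join stores the thread's return value here
    int iResult = *(int*)pRet;
    free(pRet);
    return iResult;
}
```

The returned pointer must refer to memory that outlives the thread (heap memory here), never to the thread's own stack.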

2.2.2. Notifying a thread to stop from outside

You can use the pthread_cancel function to request that a thread stop. After receiving the cancellation request, the thread exits according to its own cancelability attributes, or performs other operations.

It is generally not recommended: by default the target thread simply exits, so resources it has acquired may never be released, causing resource leaks. Prefer having the thread exit on its own.

However, in real-life scenarios, it is often necessary for one thread to notify another thread to exit, such as the following example:

void* Func(void* pParam)
{
    while(true)
    {
        // perform related logic processing
    }
}

int main()
{
    pthread_t Thread1;
    pthread_create(&Thread1, NULL, Func, NULL);

    // wait for an exit signal
    // request the sub-thread to exit
}

The main thread creates the sub-thread Thread1 and then waits. When an external exit signal arrives, it notifies the sub-thread to exit.

After the sub-thread starts, it keeps performing its own logic until it receives the exit notification from the main thread, and then exits.

Since pthread_cancel is not recommended, the following pattern can be used instead:

bool g_ThreadExitFlag = false;

void* Func(void* pParam)
{
    while(!g_ThreadExitFlag)
    {
        // perform related logic processing
    }

    // release the resources acquired by this thread
    return NULL;
}

int main()
{
    pthread_t Thread1;
    pthread_create(&Thread1, NULL, Func, NULL);

    // wait for an exit signal

    g_ThreadExitFlag = true;

    pthread_join(Thread1, NULL);
}

A shared variable g_ThreadExitFlag is used between the main thread and Thread1. In Thread1's thread function, each iteration of the loop checks g_ThreadExitFlag; once it becomes true, the thread releases the resources it acquired and returns, so Thread1 exits on its own. (Strictly speaking, a flag shared between threads like this should be declared volatile or atomic; it is kept plain here for simplicity.)

2.2.3. About pthread_join

After the main thread (the one executing the main function) exits, all other threads in the process are terminated immediately, wherever they happen to be executing. If a sub-thread's logic has not finished, the result will differ from what was expected.
So in the example above the main thread blocks in pthread_join. If that call is removed, the output after compiling and running looks like this:
this is main thread
this is Func thread
this is main thread
this is Func thread
this is main thread

According to the code, Func should print three times, but in this run the main function returned before Func finished, so all threads were terminated and Func only printed twice.

The prototype of pthread_join is as follows:

int pthread_join(pthread_t thread_id,
void **retval);

pthread_t thread_id: the ID of the thread to wait for and whose resources are to be reclaimed.
void **retval: receives the return value of the corresponding thread function; set it to NULL if you don't care.

pthread_join waits for the thread_id thread to exit and reclaims that thread's resources. If the thread has not exited, pthread_join blocks until it does.

When creating a thread, you can specify whether the current thread is in join mode or detach mode through the pthread_attr_t parameter. If the thread is in join mode, you must use the pthread_join function to recycle thread resources. If you do not need to pay attention to the exit of the thread, you can set the thread to detach mode. At this time, you do not need to call the pthread_join function to recycle thread resources. Naturally, pthread_join cannot wait for threads in detach mode.
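A sketch of creating a detach-mode thread through the attribute object (the helper names are illustrative); pthread_join must not be called on such a thread:

```cpp
#include <pthread.h>
#include <unistd.h>   // for sleep() in demos; a real program would use proper synchronization

static void* BackgroundFunc(void* pParam)
{
    *(int*)pParam = 1;   // some background work
    return NULL;         // a detached thread's resources are reclaimed automatically
}

// Start a detached thread; no pthread_join is needed (or allowed) afterwards.
int StartDetached(int* pDone)
{
    pthread_attr_t attr;
    pthread_t ThreadId;
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    int iErr = pthread_create(&ThreadId, &attr, BackgroundFunc, pDone);
    pthread_attr_destroy(&attr);
    return iErr;
}
```

Since no join is possible, the caller needs some other mechanism (a flag, condition variable, or semaphore) to learn that the background work has finished.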

3. Competition and synchronization between threads

In multi-threaded programs, multiple threads often read and write the same memory area at the same time. This causes conflicts between threads and results that differ from expectations. For example:

#include <pthread.h>
#include <unistd.h>
#include <stdio.h>

int g_iData = 0;
void* Func1(void* pParam);
void* Func2(void* pParam);

int main()
{
pthread_t Thread1, Thread2;
pthread_create(&Thread1, NULL, Func1, NULL);
pthread_create(&Thread2, NULL, Func2, NULL);

pthread_join(Thread1, NULL);
pthread_join(Thread2, NULL);

return 1;

}

void* Func1(void* pParam)
{
for (int i=0; i<3; i++)
{
g_iData = 1;

    sleep(1);

    printf("Func1 print g_iData:%d\n", g_iData);
}

return NULL;	

}

void* Func2(void* pParam)
{

for (int i=0; i<3; i++)
{
    g_iData = 2;

    sleep(1);

    printf("Func2 print g_iData:%d\n", g_iData);
}

return NULL;		

}

In addition to the main thread, there are two worker threads running Func1 and Func2. The expectation is that Func1 prints "Func1 print g_iData:1" three times and Func2 prints "Func2 print g_iData:2" three times.
The actual running results are as follows:
[root@localhost thread_linuxprj]# ./thread_test
Func2 print g_iData:1
Func1 print g_iData:1
Func1 print g_iData:1
Func2 print g_iData:1
Func1 print g_iData:2
Func2 print g_iData:2

[root@localhost thread_linuxprj]# ./thread_test
Func2 print g_iData:1
Func1 print g_iData:1
Func1 print g_iData:1
Func2 print g_iData:1
Func1 print g_iData:2
Func2 print g_iData:2

[root@localhost thread_linuxprj]# ./thread_test
Func1 print g_iData:1
Func2 print g_iData:1
Func2 print g_iData:2
Func1 print g_iData:2
Func1 print g_iData:1
Func2 print g_iData:1

As the results show, much of the output is unexpected, and it differs on every run.
This reflects a characteristic of multi-threaded programs: because threads share the process's data, synchronization is more complicated, and because thread scheduling is decided by the operating system, the scheduling order (and hence the program's observable behavior) differs from run to run. Problems appear easily and are hard to localize.

Therefore, multi-thread synchronization must be designed carefully during both design and coding.

3.1. Avoid accessing the same resources unless necessary

If the functions executed by the threads do not access shared resources (memory, file descriptors), then there is no multi-thread conflict at all. So when writing multi-threaded programs, do not access the same resources unless you have to.

In the conflict example above, the two thread functions access the same memory: the global variable g_iData. The problem disappears if each thread instead accesses a variable on its own stack:
void* Func1(void* pParam)
{
    for (int i=0; i<3; i++)
    {
        int iData = 1;

        sleep(1);

        printf("Func1 print iData:%d\n", iData);
    }

    return NULL;
}

void* Func2(void* pParam)
{

for (int i=0; i<3; i++)
{
    int iData = 2;

    sleep(1);

    printf("Func2 print iData:%d\n", iData);
}

return NULL;		

}

However, the reason multi-threading is adopted is usually precisely that multiple tasks need to access the same resources, so threads accessing shared resources is unavoidable. That is what the following techniques address.

3.2. Mutex lock

As the following code shows, g_mutex ensures that only one thread accesses the shared resource at a time. (Sleeping while holding a lock is not recommended; the sleep inside the lock below exists only to make the multi-thread conflict easier to demonstrate.)

#include <pthread.h>
#include <unistd.h>
#include <stdio.h>

int g_iData = 0;
void* Func1(void* pParam);
void* Func2(void* pParam);

pthread_mutex_t g_mutex;

int main()
{
pthread_mutex_init(&g_mutex, NULL);

pthread_t Thread1, Thread2;
pthread_create(&Thread1, NULL, Func1, NULL);
pthread_create(&Thread2, NULL, Func2, NULL);

pthread_join(Thread1, NULL);
pthread_join(Thread2, NULL);

pthread_mutex_destroy(&g_mutex);

return 1;

}

void* Func1(void* pParam)
{
for (int i=0; i<3; i++)
{
pthread_mutex_lock(&g_mutex);

    g_iData = 1;

    sleep(1);

    printf("Func1 print g_iData:%d\n", g_iData);

    pthread_mutex_unlock(&g_mutex);
}

return NULL;	

}

void* Func2(void* pParam)
{

for (int i=0; i<3; i++)
{
    pthread_mutex_lock(&g_mutex);

    g_iData = 2;

    sleep(1);

    printf("Func2 print g_iData:%d\n", g_iData);

    pthread_mutex_unlock(&g_mutex);
}

return NULL;		

}

When a thread executes pthread_mutex_lock(&g_mutex) and acquires g_mutex, any other thread that reaches pthread_mutex_lock(&g_mutex) is blocked. When the owning thread calls pthread_mutex_unlock(&g_mutex) to release it, the operating system picks one of the waiting threads; that thread acquires the lock and performs its subsequent processing, while the remaining threads keep waiting.

A mutex does not lock a region of memory; it locks a region of code, that is, an operation.

The lock must be initialized using the pthread_mutex_init function before use. When the lock is no longer used, pthread_mutex_destroy must be called for destruction.
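As a side note, a mutex with static storage duration can also be initialized statically with PTHREAD_MUTEX_INITIALIZER, in which case no explicit init/destroy calls are needed. A small sketch (the counter is only an example):

```cpp
#include <pthread.h>

// Statically initialized mutex: pthread_mutex_init/destroy are not required.
pthread_mutex_t g_static_mutex = PTHREAD_MUTEX_INITIALIZER;

int g_iCounter = 0;

// Increment the counter under the lock and return the new value.
int IncrementProtected()
{
    pthread_mutex_lock(&g_static_mutex);
    int iNew = ++g_iCounter;
    pthread_mutex_unlock(&g_static_mutex);
    return iNew;
}
```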

3.2.1. Precautions

Always unlock after locking.
The idea is easy to understand, but problems occur constantly in practice. Whenever you use a lock, check every path that leaves the locked region, for example:
void Func()
{
    pthread_mutex_lock(&g_mutex);

    if (…)
    {
        pthread_mutex_unlock(&g_mutex);
        return;
    }
    else if (…)
    {
        if (…)
        {
            pthread_mutex_unlock(&g_mutex);
            // call XXX
            return;
        }
        else
        {
            if (…)
            {
                if (…)
                {
                    // no unlock on this exit path: the next lock attempt deadlocks!
                    return;
                }
                … …
                pthread_mutex_unlock(&g_mutex);
                return;
            }
        }

        pthread_mutex_unlock(&g_mutex);
        return;
    }

    pthread_mutex_unlock(&g_mutex);
}

Try not to perform long operations while holding a lock.
If a time-consuming computation runs inside a lock's scope, then whenever one thread performs it, every other thread that needs the same lock is stuck waiting until it finishes, which greatly reduces the program's efficiency.

Common time-consuming operations: algorithms that keep the CPU busy for a long time, I/O reads and writes, and sleep.

If some scenario involves multiple threads reading and writing the same I/O object (for example, several threads operating on the same file), it is better not to lock around the I/O; instead, change the design so that only one thread touches the I/O.

For example, if thread A and thread B both want to write to the same file, the design can be changed so that both write into a shared memory buffer, and a new thread C flushes that buffer to the file. Then only one thread accesses the file, no multi-threaded I/O access is involved, and only the shared memory needs locking.

Try not to acquire another lock within a locking range.
If lock B is acquired inside the scope of lock A, then releasing A depends on the current thread successfully acquiring B; in other words, lock A depends on lock B.
Such code easily produces circular dependencies: in one piece of code A depends on B, and in another piece B depends on A. When two threads execute these two pieces of code, a deadlock occurs. For example:

void Func1(void* pParam)
{
    pthread_mutex_lock(&g_mutexA);

    … …

    pthread_mutex_lock(&g_mutexB);

    … …

    pthread_mutex_unlock(&g_mutexB);

    pthread_mutex_unlock(&g_mutexA);
}

void Func2(void* pParam)
{
    pthread_mutex_lock(&g_mutexB);

    … …

    pthread_mutex_lock(&g_mutexA);

    … …

    pthread_mutex_unlock(&g_mutexA);

    pthread_mutex_unlock(&g_mutexB);
}

Suppose thread 1 has executed pthread_mutex_lock(&g_mutexA) in Func1 while thread 2 has executed pthread_mutex_lock(&g_mutexB) in Func2. Now thread 1 blocks waiting for g_mutexB, which thread 2 holds, while thread 2 blocks waiting for g_mutexA, which thread 1 holds.
The two threads wait on each other forever: a deadlock.

Keep the locking range as small as possible.
The larger the locked region, the more likely the problems above become, and the harder the code is to update and maintain later; so keep the locked region as small as possible.

Use automatic locks to avoid deadlocks.
You can wrap the operating system API in C++ to implement an automatic (scoped) lock: the lock is acquired when the automatic lock object is created and released automatically when its lifetime ends. This eliminates most deadlock problems. The earlier example looks like this with an automatic lock:

CLock_CS lock_cs;

void Func()
{
    AUTO_CRITICAL_SECTION(lock_cs);

    if (…)
    {
        // the automatic lock's lifetime ends here; it unlocks automatically.
        return;
    }
    else if (…)
    {
        if (…)
        {
            // the automatic lock's lifetime ends here; it unlocks automatically.
            return;
        }
        else
        {
            if (…)
            {
                if (…)
                {
                    // the automatic lock's lifetime ends here; it unlocks automatically.
                    return;
                }
                … …
                // the automatic lock's lifetime ends here; it unlocks automatically.
                return;
            }
        }
    }

    // the automatic lock's lifetime ends here; it unlocks automatically.
}
For an automatic lock implementation, refer to the following svn code:
https://192.168.20.6:8443/svn/hnc8/trunk/apidev/net/comm/src/criticalsection.h
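The idea can be sketched with a small RAII wrapper over pthread_mutex_t. The class names below are hypothetical stand-ins for the CLock_CS / AUTO_CRITICAL_SECTION pair referenced above, not the actual library code:

```cpp
#include <pthread.h>

// Hypothetical minimal equivalent of CLock_CS: a mutex with init/destroy tied to its lifetime.
class CLockCS
{
public:
    CLockCS()  { pthread_mutex_init(&m_mutex, NULL); }
    ~CLockCS() { pthread_mutex_destroy(&m_mutex); }
    void Lock()   { pthread_mutex_lock(&m_mutex); }
    void Unlock() { pthread_mutex_unlock(&m_mutex); }
private:
    pthread_mutex_t m_mutex;
};

// Hypothetical equivalent of AUTO_CRITICAL_SECTION: locks on construction, unlocks on destruction.
class CAutoLock
{
public:
    explicit CAutoLock(CLockCS& lock) : m_lock(lock) { m_lock.Lock(); }
    ~CAutoLock() { m_lock.Unlock(); }   // runs on every return path, so no unlock is ever missed
private:
    CLockCS& m_lock;
};

CLockCS g_lock;
int g_iCounter = 0;

int AddAndGet(int iDelta)
{
    CAutoLock guard(g_lock);    // locked here
    g_iCounter += iDelta;
    return g_iCounter;          // guard unlocks automatically, even on early return
}
```

This is the same pattern std::lock_guard provides in modern C++.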

Use read-write locks to improve efficiency.
Application scenario: multiple threads read and write a shared resource. While writing, only one writer may proceed and all other readers and writers wait; while reading, since nothing modifies the shared resource, many threads can read simultaneously, improving efficiency.

With a mutex, however, once one thread holds the lock, no other thread can acquire it, so the requirement of simultaneous reading cannot be met.

This can be achieved using read-write locks:

int pthread_rwlock_init(pthread_rwlock_t *restrict rwlock,
const pthread_rwlockattr_t *restrict attr);
int pthread_rwlock_destroy(pthread_rwlock_t *rwlock);

int pthread_rwlock_rdlock(pthread_rwlock_t *rwlock);

int pthread_rwlock_wrlock(pthread_rwlock_t *rwlock);

int pthread_rwlock_unlock(pthread_rwlock_t *rwlock);
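A minimal usage sketch of these functions (the variable and function names are illustrative): any number of threads may hold the read lock at once, while the write lock is exclusive:

```cpp
#include <pthread.h>

pthread_rwlock_t g_rwlock = PTHREAD_RWLOCK_INITIALIZER;
int g_iShared = 0;

// Many readers may hold the lock at the same time.
int ReadShared()
{
    pthread_rwlock_rdlock(&g_rwlock);
    int iValue = g_iShared;
    pthread_rwlock_unlock(&g_rwlock);
    return iValue;
}

// A writer holds the lock exclusively; readers and other writers wait.
void WriteShared(int iValue)
{
    pthread_rwlock_wrlock(&g_rwlock);
    g_iShared = iValue;
    pthread_rwlock_unlock(&g_rwlock);
}
```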

3.3. Condition variables

A mutex only guarantees that one thread at a time executes the code in the locked region; it cannot guarantee the order in which threads execute. If your design requires that certain threads run only after some other thread has finished, condition variables are recommended.
In the following example, threads 1, 2, and 3 are started at the same time. When thread 1 finishes, one of threads 2 and 3 is woken up to perform the subsequent operations.

#include <pthread.h>
#include <unistd.h>
#include <stdio.h>

int g_iData = 0;
bool g_bNewThreadRun = false;
void* Func1(void* pParam);
void* Func2(void* pParam);
void* Func3(void* pParam);

pthread_mutex_t g_mutex;
pthread_cond_t g_cond;

int main()
{
pthread_mutex_init(&g_mutex, NULL);
pthread_cond_init(&g_cond, NULL);

pthread_t Thread1, Thread2, Thread3;
pthread_create(&Thread1, NULL, Func1, NULL);
pthread_create(&Thread2, NULL, Func2, NULL);
pthread_create(&Thread3, NULL, Func3, NULL);

pthread_join(Thread1, NULL);
pthread_join(Thread2, NULL);
 pthread_join(Thread3, NULL);

pthread_cond_destroy(&g_cond);
pthread_mutex_destroy(&g_mutex);

return 1;

}

void* Func1(void* pParam)
{

for (int i=0; i<3; i++)
{

    g_iData = 1;

    sleep(1);

    printf("Func1 print g_iData:%d\n", g_iData);   
}

pthread_mutex_lock(&g_mutex);
g_bNewThreadRun = true;
pthread_cond_signal(&g_cond);
pthread_mutex_unlock(&g_mutex);


return NULL;	

}

void* Func2(void* pParam)
{
    pthread_mutex_lock(&g_mutex);
    while(!g_bNewThreadRun)
    {
        pthread_cond_wait(&g_cond, &g_mutex);
    }

    g_bNewThreadRun = false;

    for (int i=0; i<3; i++)
    {
        g_iData = 2;

        sleep(1);

        printf("Func2 print g_iData:%d\n", g_iData);
    }

    // wake the remaining waiting thread as well, so that it can also run and the program can exit
    g_bNewThreadRun = true;
    pthread_cond_signal(&g_cond);

    pthread_mutex_unlock(&g_mutex);

    return NULL;
}

void* Func3(void* pParam)
{
    pthread_mutex_lock(&g_mutex);
    while(!g_bNewThreadRun)
    {
        pthread_cond_wait(&g_cond, &g_mutex);
    }

    g_bNewThreadRun = false;

    for (int i=0; i<3; i++)
    {
        g_iData = 3;

        sleep(1);

        printf("Func3 print g_iData:%d\n", g_iData);
    }

    // wake the remaining waiting thread as well, so that it can also run and the program can exit
    g_bNewThreadRun = true;
    pthread_cond_signal(&g_cond);

    pthread_mutex_unlock(&g_mutex);

    return NULL;
}

Threads 1, 2, and 3 are started. Thread 1 runs its loop immediately, then sets g_bNewThreadRun = true and triggers the condition variable with pthread_cond_signal(&g_cond).
Thread 2 first performs the pthread_mutex_lock(&g_mutex) locking operation and then calls pthread_cond_wait(&g_cond, &g_mutex); on entry this function releases the lock and then waits for g_cond to be signalled.
Thread 3 likewise executes pthread_mutex_lock(&g_mutex) and then calls pthread_cond_wait(&g_cond, &g_mutex), waiting for g_cond.
When thread 1 signals g_cond, one of threads 2 and 3 is woken. Its pthread_cond_wait(&g_cond, &g_mutex) call re-acquires g_mutex before returning, which guarantees that only one thread performs the subsequent operations; when it finishes, it calls pthread_mutex_unlock(&g_mutex) to release the lock.

3.3.1. Precautions

Set the condition before calling pthread_cond_signal.
Thread execution order is decided by the operating system's scheduler. In the example above, if thread 1 called pthread_cond_signal first and only then set g_bNewThreadRun to true, thread 2 or 3 could be woken from pthread_cond_wait in between those two steps. It would find g_bNewThreadRun still false, call pthread_cond_wait again, and go back to waiting; the condition signal would be lost.

Lock the mutex before calling pthread_cond_wait.
As the example shows, pthread_cond_wait first releases the mutex passed in, and after the condition variable is signalled it re-acquires the mutex before returning; this is what guarantees that only one thread subsequently operates on the shared resource.
Therefore the mutex must be locked before pthread_cond_wait is called. Without the lock, the results are unpredictable.

Protect pthread_cond_wait with a while loop.
Because the scheduler decides the order in which threads run, in the example above thread 1 may already have signalled the condition variable before thread 2 reaches pthread_cond_wait. The signal is then lost and thread 2 stays in pthread_cond_wait forever. So before calling pthread_cond_wait you generally check whether the current state already allows you to proceed:

if(!g_bNewThreadRun)
{
    pthread_cond_wait(&g_cond, &g_mutex);
}
… …

In addition, pthread_cond_wait may return because of a system interruption even though the condition has not actually been signalled (a spurious wakeup). So the check must be a while loop that re-tests the state:

while(!g_bNewThreadRun)
{
    pthread_cond_wait(&g_cond, &g_mutex);
}
… …

A condition variable triggered repeatedly may be observed by pthread_cond_wait only once.
As explained above, if a thread is not blocked in pthread_cond_wait at the moment another thread signals the condition variable, that signal is lost. So if thread 1 calls pthread_cond_signal several times in quick succession, thread 2 may wake up from pthread_cond_wait only once: when the remaining signals fire, thread 2 may be busy with its subsequent work and not be waiting.
If each trigger of the signal must result in exactly one round of processing by the business thread, the semaphore pattern below is recommended.

3.4. Semaphores

Semaphores can be used to implement the typical producer-consumer pattern, as shown in the following code example:

#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>
#include <time.h>
#include <stdio.h>
#include <errno.h>
#include <list>

using std::list;

pthread_mutex_t List_Mutex;
list<int> g_PdtList;

sem_t g_HavePdtSem;

bool g_bExit = false;

void PutPdt(int iPdt)
{
pthread_mutex_lock(&List_Mutex);

g_PdtList.push_back(iPdt);

pthread_mutex_unlock(&List_Mutex);

}

void GetPdt(int& iPdt)
{
pthread_mutex_lock(&List_Mutex);

iPdt = g_PdtList.front();
g_PdtList.pop_front();

pthread_mutex_unlock(&List_Mutex);

}

void* ProducterFunc(void* pParam);
void* ConsumerFunc(void* pParam);

int main()
{
pthread_mutex_init(&List_Mutex, NULL);
sem_init(&g_HavePdtSem, 0, 0);

const int MAX_CONSUMER_NUM = 3;

pthread_t ProductThread;
pthread_t ConsumerThread[MAX_CONSUMER_NUM];

pthread_create(&ProductThread, NULL, ProducterFunc, NULL);

for (int i=0; i<MAX_CONSUMER_NUM; i++)
{
    pthread_create(&ConsumerThread[i], NULL, ConsumerFunc, NULL);
}

sleep(3);
g_bExit = true;


pthread_join(ProductThread, NULL);

for (int i=0; i<MAX_CONSUMER_NUM; i++)
{
    pthread_join(ConsumerThread[i], NULL);
}


sem_destroy(&g_HavePdtSem);
pthread_mutex_destroy(&List_Mutex);

return 1;

}

void* ProducterFunc(void* pParam)
{
    for (int i=0; i<5; i++)
    {
        printf("Producter make pdt:%d\n", i);
        PutPdt(i);

        sem_post(&g_HavePdtSem);
    }

    return NULL;
}

void* ConsumerFunc(void* pParam)
{
static int i = 0;
int iIndex = i++;

while(!g_bExit)
{
    timespec abstime;
    clock_gettime(CLOCK_REALTIME, &abstime);

    abstime.tv_sec += 3;
    if (sem_timedwait(&g_HavePdtSem, &abstime) == -1)
    {
        // timed out (ETIMEDOUT) or interrupted: loop around and re-check g_bExit
        continue;
    }

    int iPdt = -1;
    GetPdt(iPdt);

    printf("Consumer[%d] get pdt:%d\n", iIndex, iPdt);
}

return NULL;		

}

Compile and run; the output is as follows:
[root@localhost sem_linux_test]# ./thread_test
Producter make pdt:0
Producter make pdt:1
Producter make pdt:2
Consumer[2] get pdt:0
Consumer[0] get pdt:1
Consumer[1] get pdt:2
Producter make pdt:3
Producter make pdt:4
Consumer[0] get pdt:3
Consumer[2] get pdt:4

When main starts, it calls sem_init(&g_HavePdtSem, 0, 0); to initialize a semaphore with initial value 0.
main then creates one Producter thread and 3 Consumer threads. The Producter makes five products in total, and after creating each one it calls sem_post(&g_HavePdtSem), which increments the semaphore's value by 1.
Each Consumer thread calls sem_timedwait(&g_HavePdtSem, &abstime). Because the semaphore's initial value is 0 (the third parameter of sem_init), a consumer blocks until the value becomes positive. When it acquires the semaphore, sem_timedwait returns and decrements the value by 1; once the value drops back to 0, further waits block again.
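The counting behavior can be sketched in isolation (an illustrative helper, not part of the example above): each sem_post adds 1, each successful wait subtracts 1, and a wait blocks while the value is 0:

```cpp
#include <semaphore.h>

// Post twice to a semaphore that starts at 0, observe its value, then wait once.
int SemCountDemo()
{
    sem_t sem;
    int iValue = -1;
    sem_init(&sem, 0, 0);   // initial value 0: a wait here would block
    sem_post(&sem);         // value 0 -> 1
    sem_post(&sem);         // value 1 -> 2
    sem_getvalue(&sem, &iValue);
    sem_wait(&sem);         // value 2 -> 1, returns immediately because the value is positive
    sem_destroy(&sem);
    return iValue;          // the value observed after the two posts
}
```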

4. Thread priority

Thread attributes contain the thread's scheduling policy and priority, so a thread's priority can be set through its attributes at creation time. In practice, however, multi-threaded designs rarely need to modify thread priorities.
The scheduling policy in the thread attributes has three types:
	SCHED_FIFO: real-time, first-in-first-out. Once started, the thread occupies the CPU until a thread with higher priority becomes ready.
	SCHED_RR: real-time, round-robin time slicing. When a thread's time slice is used up, the system assigns a new slice and puts the thread at the tail of the ready queue, which keeps scheduling fair among RR tasks of equal priority.
	SCHED_OTHER: time-sharing, the default policy.

The thread priority setting takes effect only when the scheduling policy is SCHED_FIFO or SCHED_RR.

The following functions modify a thread's scheduling policy and priority:
int pthread_attr_setschedparam(pthread_attr_t *attr,
const struct sched_param *param);

int pthread_attr_setschedpolicy(pthread_attr_t *attr,
int policy);
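Putting the two together might look like the following sketch (the helper is hypothetical; on most systems this requires root privileges, so the creation may fail with EPERM, which the sketch simply reports back):

```cpp
#include <pthread.h>
#include <sched.h>
#include <string.h>

static void* RtFunc(void* pParam)
{
    *(int*)pParam = 1;   // placeholder for real-time work
    return NULL;
}

// Try to start a SCHED_RR thread with the given priority (valid range 1-99 on Linux).
// Returns 0 on success, or the pthread error code (e.g. EPERM without privileges).
int StartRealtime(int iPriority, int* pFlag)
{
    pthread_attr_t attr;
    sched_param param;
    memset(&param, 0, sizeof(param));
    param.sched_priority = iPriority;

    pthread_attr_init(&attr);
    // Without PTHREAD_EXPLICIT_SCHED the new thread inherits the creator's
    // policy and silently ignores the attributes set below.
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_RR);
    pthread_attr_setschedparam(&attr, &param);

    pthread_t ThreadId;
    int iErr = pthread_create(&ThreadId, &attr, RtFunc, pFlag);
    pthread_attr_destroy(&attr);
    if (iErr == 0)
        pthread_join(ThreadId, NULL);
    return iErr;
}
```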

5. The difference between Linux and Windows

The above are all thread operations under Linux. The threading APIs under Windows are different.

Thread startup and waiting for exit:
uintptr_t _beginthreadex( void *security,
unsigned stack_size,
unsigned ( *start_address )( void * ),
void *arglist,
unsigned initflag,
unsigned *thrdaddr );

DWORD WINAPI WaitForSingleObject(HANDLE hHandle,
DWORD dwMilliseconds)

Mutex:
HANDLE WINAPI CreateMutex(
__in LPSECURITY_ATTRIBUTES lpMutexAttributes,
__in BOOL bInitialOwner,
__in LPCTSTR lpName
);

BOOL WINAPI ReleaseMutex(
__in HANDLE hMutex
);

On Windows, critical sections are generally recommended over mutexes: entering a critical section is much cheaper, since a critical section is a user-mode object while a mutex is a kernel object.

Critical section:
void WINAPI InitializeCriticalSection(
__out LPCRITICAL_SECTION lpCriticalSection
);
void WINAPI DeleteCriticalSection(
__in_out LPCRITICAL_SECTION lpCriticalSection
);

void WINAPI EnterCriticalSection(
__in_out LPCRITICAL_SECTION lpCriticalSection
);

void WINAPI LeaveCriticalSection(
__in_out LPCRITICAL_SECTION lpCriticalSection
);

Synchronization event
BOOL WINAPI SetEvent(
__in HANDLE hEvent
);

DWORD WINAPI WaitForSingleObject(
__in HANDLE hHandle,
__in DWORD dwMilliseconds
);

Semaphore:
BOOL WINAPI ReleaseSemaphore(
__in HANDLE hSemaphore,
__in LONG lReleaseCount,
__out LPLONG lpPreviousCount
);
DWORD WINAPI WaitForSingleObject(
__in HANDLE hHandle,
__in DWORD dwMilliseconds
);

6. Available code bases

Because the multi-threading APIs differ between Windows and Linux, code that must support both platforms tends to be sprinkled with conditional-compilation macros everywhere, which hurts maintainability.
Instead, you can use an encapsulating wrapper library or dynamic library (open source, no copyright issues) so that the upper layers need no compilation macros at all.


Origin blog.csdn.net/p309654858/article/details/132145206