FreeRTOS study notes (2: synchronization and mutual exclusion, communication, and the use of queues and queue sets)

Foreword

This is the second installment. Because of CSDN's length limit, these FreeRTOS study notes are split into several parts. This part
mainly covers synchronization and mutual exclusion, communication, and the use of queues and queue sets.

Links to the other study notes

Part 1: FreeRTOS study notes (1: first look at FreeRTOS, task creation, task state theory, scheduling algorithms, etc.)
Part 2: FreeRTOS study notes (2: synchronization and mutual exclusion, communication, queues, queue sets)
Part 3: FreeRTOS study notes (3: the use of semaphores and mutexes)
Part 4: FreeRTOS study notes (4: event groups, task notifications)
Part 5: FreeRTOS study notes (5: timers, interrupt management, debugging and optimization)

Study projects

All of the study projects
oufen/FreeRTOS Learning
are in my Gitee repository; you are welcome to use them as a reference.

Synchronization, Mutual Exclusion and Communication

An analogy:
treat a multitasking system as a team, where each task is a member of the team.
Team members must coordinate their work progress --------> synchronization
They contend for the meeting room --------> mutual exclusion
They exchange information --------> communication

For these purposes FreeRTOS provides task notifications, queues, event groups,
semaphores, mutexes, and so on.

  1. Waiting is synchronization

taskA is doing something, and taskB must wait for taskA to finish before it can continue. That is synchronization: keeping in step.

  2. Mutual exclusion

Some resources can only be used by one task at a time; while task1 is using such a resource it needs exclusive access,
so task2 cannot use it at the same time.

When task1 is done with the resource, it reminds task2 that the resource can now be used. This is using synchronization to implement mutual exclusion.

image.png

image.png
Understanding mutual exclusion

There are two tasks, taskA and taskB.
taskA occupies the toilet first and uses it.
While taskA is running, taskB runs too, finds the toilet occupied, and enters the Blocked state.
When taskA is done with the toilet, it sends a reminder to taskB.
taskB then moves from the Blocked state to the Running state, uses the toilet, and leaves.

Using synchronization to implement mutual exclusion

Methods to implement synchronization or mutual exclusion

  • task notification
  • queue
  • event group
  • Semaphore (semaphore)
  • Mutex (mutex)

image.png

Examples of synchronous operations

image.png

task2 waits for task1 to finish its calculation.
While task1 is calculating, task2 competes with it for the CPU.

The two tasks compete with each other and consume about 4 s of CPU time in total.

The question is: since task2 is only waiting, why should it take any CPU time at all?
image.png

With task2 commented out, the calculation takes only 2 s, because task2 no longer competes with task1 for the CPU.
image.png

Polling a variable in a loop to achieve synchronization has a big flaw: it wastes a lot of CPU time.

In an RTOS, synchronization means the task not only waits for something to happen, but also enters the Blocked or Suspended state while it waits.

asynchronous
image.png

Examples of mutual exclusion

Mutually exclusive use of the serial port

Using global variables to implement mutual exclusion in a multitasking system has hidden dangers.

Use one common task function to create task3 and task4.

Simply use a global variable as the flag.

image.png

Communication

Implementing communication in FreeRTOS is not complicated.

After task1 computes a variable, task2 can access that variable: that is communication
through a global variable.

The complexity lies in how to implement the synchronization and mutual exclusion around it.

FreeRTOS's solutions

must guarantee

  • correctness
  • efficiency
    • While taskA is using the CPU, taskB should block or sleep during the wait
    • the waiting task goes to sleep or blocks instead of spinning

Solutions

  • queue (first in, first out)
    • Think of a queue as a pipe or a conveyor belt
  • event groups (combinations of events)
    • Each bit represents one event
    • When a producer finishes something, it sets a bit to 1
    • A consumer can wait for one event, several events, or any one of several events: a many-to-many relationship
    • multiple producers, multiple consumers
  • semaphore
    • A queue passes data
    • A semaphore passes a count value
    • When producer taskA finishes, it increments the count
    • When consumer taskC consumes, it decrements the count
  • mutex
    • Like a semaphore whose count can only be 1 or 0
    • A mutex is used to protect a critical resource
    • Only one task at a time can use the protected resource
    • Priority inversion is possible (addressed by priority inheritance)
  • task notification
    • a many-to-one relationship
    • The tasks on the left notify taskC
    • Can carry values, events, and so on

image.png

Queues

Basics of queues

  • How to create, clear, and delete a queue
  • How messages are stored in a queue
  • How to send (write) data to a queue, how to read data from it, and how to overwrite data in it

A queue is first in, first out (FIFO).

You can think of a queue as a pipe or a conveyor belt.

Data is written at the tail and read from the head.

image.png
On the left is the producer, on the right the consumer.
After the producer makes a product, it puts the product on the conveyor belt.
When there is data in the queue, the consumer can read data from it.
If the queue holds several items, the consumer gets the item that was put into the queue first.

When storing data in the queue, you can write to either the head or the tail.

The usual practice is to write produced data to the tail,
while the consumer reads from the head.

If you write new data to the head, the new data does not overwrite the data already at the head;

logically, the existing head item moves back and the new item is inserted in front of it.

Because the data is managed in a ring buffer, "moving the head back" is just a pointer adjustment rather than copying data, so it is simple and efficient.
image.png
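The head/tail behavior above can be modeled with a small ring buffer in plain C. This is a hedged sketch (the names `RingQueue`, `ring_send_to_back`, etc. are invented here, not FreeRTOS code): writing to the back advances a tail position, while writing to the front just moves the head index backwards, so no existing item is copied or overwritten.

```c
#define QLEN 4

typedef struct {
    int items[QLEN];
    int head;   /* index of the next item to read   */
    int count;  /* number of items currently stored */
} RingQueue;

/* Write to the back: store at (head + count) % QLEN. */
static int ring_send_to_back(RingQueue *q, int value) {
    if (q->count == QLEN) return 0;              /* queue full */
    q->items[(q->head + q->count) % QLEN] = value;
    q->count++;
    return 1;
}

/* Write to the front: move the head index back one slot and
 * store there -- no existing item is moved or overwritten.   */
static int ring_send_to_front(RingQueue *q, int value) {
    if (q->count == QLEN) return 0;
    q->head = (q->head + QLEN - 1) % QLEN;
    q->items[q->head] = value;
    q->count++;
    return 1;
}

/* Read from the head (FIFO). */
static int ring_receive(RingQueue *q, int *value) {
    if (q->count == 0) return 0;                 /* queue empty */
    *value = q->items[q->head];
    q->head = (q->head + 1) % QLEN;
    q->count--;
    return 1;
}
```

With this model, sending 1 and 2 to the back and then 0 to the front yields reads of 0, 1, 2.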

Describing a queue

The Queue structure

image.png
image.png

Each queue has its own capacity, and a pointer to the buffer that actually stores the data.

Initially the queue holds no data, so a consumer reading it should enter the Blocked state while it waits.

How does it enter the Blocked state? The task can first change its own state;

but when data arrives in the queue, the kernel must be able to find that consumer and wake it up.

Therefore the Queue structure contains a List that stores the tasks waiting for data.

If the queue is already full and a producer wants to write more without overwriting data, the producer has to wait.

So the Queue structure contains a second List as well, storing the tasks waiting for space to write.

image.png
image.png

Transferring data using queues

image.png

Blocking access to a queue

Any task that knows the queue's handle can read and write the queue.
Both tasks and ISRs can read and write queues.

  1. When a task writes to a queue and the write cannot complete, it enters the Blocked state; a timeout can be specified.

If the write later becomes possible, the task returns to the Ready state; otherwise it stays blocked until the timeout expires.

  2. When a task reads a queue and the queue is empty, the task enters the Blocked state; a blocking time can also be specified.

If data arrives in the queue, the task immediately becomes Ready;
if no data arrives, the task still becomes Ready once the timeout expires.

  3. There is no limit on how many tasks may access a queue; when several tasks read or write a queue that is empty (or full), they all enter the Blocked state.

When several tasks are waiting for data on the same queue, which one becomes Ready when data arrives?

  • the highest-priority task
  • if the priorities are equal, the task that has waited the longest

The process of using a queue

  • create the queue
  • write to the queue
  • read from the queue
  • delete the queue

Creating a queue

There are two ways to create a queue:

  • dynamically allocated memory
  • statically allocated memory

image.png
image.png

The structure of the queue xQUEUE
image.png
xQUEUE
image.png

The essence of a queue is a ring buffer.
To create a queue, a Queue structure must be created first.
image.png

How to create a queue
image.png

1. Dynamically creating a queue

image.png

2. Statically creating a queue

image.png

Resetting a queue

image.png

Deleting a queue
image.png

Writing to a queue

A queue is a ring buffer.

image.png

image.png

The queue has N slots, indexed 0 to N-1.
The pcWriteTo pointer points to the next free position in the buffer.

The data to write is taken from the pvItemToQueue pointer, ItemSize bytes per item.

After the copy completes, the item is in the queue, and pcWriteTo advances to the next slot: pcWriteTo += ItemSize.

If the queue is full, the write must not proceed, or old data would be overwritten.
In that case a wait time xTicksToWait can be specified. If it is 0, the call does not wait: when the queue cannot be written, it returns immediately.
If it is non-zero, the task calling the write function is put on the xTasksWaitingToSend list and enters the Blocked state.
When space becomes available in the queue, the task is woken up.

When the write pointer has written the last slot of the queue, it wraps around from the tail back to the head.
Reading from a queue

image.png

image.png

When a task cannot read data out, it is placed on the queue's xTasksWaitingToReceive list and enters the Blocked state; it is woken up when another task writes to the queue.

The pointer pcHead points to the first address of the buffer and never changes. What changes
is pcReadFrom, which records the last position read: each read advances it with
pcReadFrom += ItemSize. If this moves past the end of the buffer,
pcReadFrom wraps back to the head, so that the 0th item is read next.

image.png

Whether writing or reading: if the queue is full on a write, or empty on a read, the task enters the Blocked state and waits to be
woken up.

The highest-priority waiting task is woken first.
If the priorities are equal, the task that has waited the longest is woken.
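The pointer arithmetic described above — copy ItemSize bytes, advance the pointer, wrap from the tail back to the head — can be sketched in plain C. This is a simplified model, not FreeRTOS source: `write_to` and `read_from` here only mirror the roles of pcWriteTo and pcReadFrom (and for simplicity `read_from` points at the next item to read, whereas FreeRTOS's pcReadFrom records the last position read).

```c
#include <string.h>

#define ITEM_SIZE sizeof(int)
#define NUM_ITEMS 3

typedef struct {
    unsigned char  storage[NUM_ITEMS * ITEM_SIZE];
    unsigned char *write_to;   /* role of pcWriteTo              */
    unsigned char *read_from;  /* role of pcReadFrom (next read) */
    unsigned       waiting;    /* items currently in the queue   */
} MiniQueue;

static void mq_init(MiniQueue *q) {
    q->write_to  = q->storage;
    q->read_from = q->storage;
    q->waiting   = 0;
}

/* Copy ITEM_SIZE bytes in, then advance and wrap the write pointer. */
static int mq_send(MiniQueue *q, const void *item) {
    if (q->waiting == NUM_ITEMS) return 0;        /* full: caller would block */
    memcpy(q->write_to, item, ITEM_SIZE);
    q->write_to += ITEM_SIZE;
    if (q->write_to >= q->storage + sizeof q->storage)
        q->write_to = q->storage;                 /* wrap tail -> head */
    q->waiting++;
    return 1;
}

/* Copy ITEM_SIZE bytes out, then advance and wrap the read pointer. */
static int mq_receive(MiniQueue *q, void *item) {
    if (q->waiting == 0) return 0;                /* empty: caller would block */
    memcpy(item, q->read_from, ITEM_SIZE);
    q->read_from += ITEM_SIZE;
    if (q->read_from >= q->storage + sizeof q->storage)
        q->read_from = q->storage;                /* wrap, read the 0th item next */
    q->waiting--;
    return 1;
}
```

What the real queue adds on top of this model is exactly the blocking behavior: the two waiting lists and the timeout handling.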

Querying a queue

Query how many items are in the queue and how much space remains.
image.png

Basic use of queues

1. Using a queue to implement synchronization

image.png
The role of the volatile keyword

Normally the compiler optimizes code so that the MCU does not re-read a variable from memory every time, but uses a copy cached in a register instead. Adding volatile tells the compiler not to apply such optimizations to this variable.

A volatile variable is therefore read from memory on every access, rather than from a possibly stale register copy, so the value read is always current.

Reading a volatile variable always returns the most recently written value.

volatile does exactly one thing: it tells the compiler not to optimize accesses to this variable and to compile them exactly as written. It does not, by itself, solve multi-threading problems; correct synchronization is still up to your code.

In which cases must a variable be declared volatile?

  • memory-mapped hardware registers
  • global variables shared with an interrupt service routine
  • global variables shared between threads/tasks
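A minimal illustration of the last two cases (the names g_sum_ready, g_sum, and the fake ISR are invented for this sketch): because the flag is volatile, the wait loop re-reads it from memory on every iteration instead of caching it in a register — which is also exactly the wasteful polling that the queue-based approach avoids.

```c
#include <stdint.h>

/* Shared between main-line code and an interrupt handler,
 * so both must be volatile: every access really goes to memory. */
static volatile uint8_t  g_sum_ready = 0;
static volatile uint32_t g_sum      = 0;

/* Stand-in for an interrupt service routine. */
static void fake_isr(void) {
    g_sum       = 123;
    g_sum_ready = 1;   /* publish the result */
}

/* Busy-wait until the flag is set, then fetch the value.
 * Without volatile, the compiler could hoist the read of
 * g_sum_ready out of the loop and spin forever.           */
static uint32_t wait_for_sum(void) {
    while (!g_sum_ready) {
        /* busy-wait: this polling is what wastes CPU time */
    }
    return g_sum;
}
```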

Have task1 write the accumulated value sum into the queue when its calculation completes; task2 reads the queue, prints the value when there is data, and enters the Blocked state when the queue is empty.

This way task2 does not take part in CPU scheduling while it is waiting.

As soon as task1 writes the accumulated value into the queue, task2 moves from the Blocked state to the Ready state, runs, and reads the data from the queue.

Steps:

  1. create the queue
    1. specify the length of the queue (how many items it can hold)
    2. specify the size of each item in the queue
  2. task1 writes data to the queue
  3. task2 reads data from the queue

image.png
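The same idea can be tried out on a PC without FreeRTOS. This hedged sketch models the queue with a POSIX mutex and condition variable (a single slot), so the "reader task" truly blocks instead of polling; the names one_slot_queue, q_send, q_receive, and run_sync_demo are invented for the sketch.

```c
#include <pthread.h>

/* A one-slot 'queue': the reader blocks until the writer puts a value in. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  not_empty;
    int             has_data;
    long            value;
} one_slot_queue;

static one_slot_queue q = {
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0
};

static void q_send(long v) {
    pthread_mutex_lock(&q.lock);
    q.value    = v;
    q.has_data = 1;
    pthread_cond_signal(&q.not_empty);   /* wake the blocked reader */
    pthread_mutex_unlock(&q.lock);
}

static long q_receive(void) {
    pthread_mutex_lock(&q.lock);
    while (!q.has_data)                  /* no data: block, don't spin */
        pthread_cond_wait(&q.not_empty, &q.lock);
    long v = q.value;
    q.has_data = 0;
    pthread_mutex_unlock(&q.lock);
    return v;
}

/* 'task1': compute a sum, then write it to the queue. */
static void *task1(void *arg) {
    (void)arg;
    long sum = 0;
    for (long i = 1; i <= 100; i++) sum += i;
    q_send(sum);
    return NULL;
}

/* 'task2': blocks in q_receive() until task1 has finished. */
static long run_sync_demo(void) {
    pthread_t t;
    pthread_create(&t, NULL, task1, NULL);
    long sum = q_receive();              /* wakes only when data arrives */
    pthread_join(t, NULL);
    return sum;
}
```

In FreeRTOS the same blocking behavior comes for free from xQueueSend/xQueueReceive with a non-zero xTicksToWait.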

Synchronization is achieved using queues
image.png

image.png

2. Using a queue to implement mutual exclusion

We want task3 and task4 to have exclusive use of the serial port, using a queue to implement the mutual exclusion.

Locking the serial port achieves the mutual exclusion.
image.png

image.png

image.png

image.png
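The "lock the serial port" idea — a queue holding a single token, where taking the token grants access and putting it back releases it — can be modeled on a PC with pthreads. All names here are invented for the sketch; on FreeRTOS you would use a real queue (or, better, a mutex, covered in Part 3).

```c
#include <pthread.h>
#include <string.h>

/* One-token 'queue': holding the token means you own the serial port. */
static pthread_mutex_t tok_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  tok_cond = PTHREAD_COND_INITIALIZER;
static int             tokens   = 1;          /* queue starts with one token */

static void take_token(void) {                /* like reading the lock queue  */
    pthread_mutex_lock(&tok_lock);
    while (tokens == 0)
        pthread_cond_wait(&tok_cond, &tok_lock);  /* block until released */
    tokens = 0;
    pthread_mutex_unlock(&tok_lock);
}

static void give_token(void) {                /* like writing the token back */
    pthread_mutex_lock(&tok_lock);
    tokens = 1;
    pthread_cond_signal(&tok_cond);
    pthread_mutex_unlock(&tok_lock);
}

/* Shared 'serial port': without the token, output could interleave. */
static char serial_log[64];

static void *print_task(void *arg) {
    const char *msg = (const char *)arg;
    take_token();                             /* exclusive access starts */
    strcat(serial_log, msg);                  /* whole message, uninterrupted */
    give_token();                             /* exclusive access ends */
    return NULL;
}

static const char *run_mutex_demo(void) {
    pthread_t t3, t4;
    serial_log[0] = '\0';
    pthread_create(&t3, NULL, print_task, (void *)"task3;");
    pthread_create(&t4, NULL, print_task, (void *)"task4;");
    pthread_join(t3, NULL);
    pthread_join(t4, NULL);
    return serial_log;
}
```

Whichever task runs second blocks in take_token() until the first one gives the token back, so the two messages never mix.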

Typical uses of queues

  • identifying the data source
  • transferring large blocks of data
  • mailbox

Identifying the data source

image.png

Transferring large blocks of data

Just pass the address into the queue.

What we pass into a queue is always a value, and that value is copied into the queue.
The value can be the data itself, or an address.
image.png

When using an address to access data, the data itself sits in RAM; pay attention to these points:

  • The RAM acts as shared memory, so make sure it is not modified concurrently: only the sender modifies the RAM before writing to the queue, and only the receiver accesses the RAM after reading from the queue.
  • To keep the RAM valid, it should be a global variable or dynamically allocated memory.
  • Dynamically allocated memory must not be freed too early; it must be freed only after the receiver has finished using it.
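Passing a large block by address can be modeled without FreeRTOS. In this sketch (all names invented) the "queue" slot holds only a pointer — that pointer is the value that gets copied — while the data itself stays in shared RAM; the hand-off order in the bullets above is what keeps it safe.

```c
#include <string.h>

#define BLOCK_SIZE 100

/* The large data block lives in ordinary RAM (a global here). */
static char big_block[BLOCK_SIZE];

/* The 'queue' copies only this small value: a pointer. */
static char *queue_slot = 0;

/* Sender: fill the block, THEN hand its address over.
 * After this point the sender must not touch big_block again. */
static void sender(void) {
    strcpy(big_block, "page of UART data ...");
    queue_slot = big_block;          /* only the address is 'queued' */
}

/* Receiver: take the address out and use the data in place --
 * the 100-byte block never went through the queue itself.      */
static const char *receiver(void) {
    char *p = queue_slot;
    queue_slot = 0;
    return p;
}
```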

Mailbox

In FreeRTOS a mailbox is typically a queue of length 1: the writer overwrites the old value in place, and readers peek at the value without removing it from the queue.

image.png

Queue Sets

A queue set is used to obtain data from multiple queues at once.

For example, a mouse, a button, and a touch screen can each generate data, and each puts it into its own queue.

An application that supports all three input devices needs to read from, and wait on, all three queues at the same time.

Data arriving in any of the queues should wake up the app and let it continue working.

A queue set is itself a queue:
an ordinary queue holds data, while a queue set holds queue handles.

Suppose the mouse, button, and touch screen have each created a queue, and the program wants to wait on all three queues at once.

Then a queue set (Queue Set) should be created.

1. Queue set length

  1. The length of the queue set should be: length of queue A + length of queue B + length of queue C

Otherwise, when A, B, and C are all full, the queue set would not have room to store all the handles.
image.png

2. Associating queues with the queue set

  1. Each queue is associated with the queue set

The queue's handle will then point to the queue set.

3. Produce data: write it into a queue, and the queue's handle is put into the queue set

  1. When the touch screen is pressed, it generates data. The data is stored in the touch queue (xQueueSend), and inside that function the queue's handle is also put into the Queue Set.

image.png

image.png

At this point there is data in the Queue Set.

4. Reading the queue set

  1. Read the Queue Set

Reading the Queue Set returns the handle of a queue that has data.

5. Reading the queue

  1. Read the Queue

Each read of the Queue Set returns one queue handle, and for each such read you may read that queue exactly once.

image.png
Specific steps
image.png

Using queue sets

Create two tasks:
task1 writes data to Queue1,
task2 writes data to Queue2.

task3 uses the Queue Set to monitor these two queues.

1. Create two queues Queue

2. Create a Queue Set

/* The length of the queue set should be: length of queue A + length of queue B */
xQueueSetHandle = xQueueCreateSet(4);

3. Add two Queues to the Queue Set (establish a connection)

Note that this establishes an association; it does not put the queues into the Queue Set.

/* 3. Associate the two Queues with the Queue Set */
	xQueueAddToSet(xQueueHandle1, xQueueSetHandle);
	xQueueAddToSet(xQueueHandle2, xQueueSetHandle);

4. Create three tasks

task1 and task2 each write data into their own queue;
task3 monitors the Queue Set to see which queue has data, and reads the data out of that queue.

image.png

task1 writes data to Queue1, and at the same time the handle of Queue1 is put into the Queue Set.

task3 is waiting on the Queue Set; when Queue1 has data, the set returns Queue1's handle, and task3 reads through that handle and prints the value.

task2 works in the same way.

Using queue sets requires configuration in FreeRTOSConfig.h:

image.png

#define configUSE_QUEUE_SETS 1 /* enables the Queue Set functions */

task3 monitors Queue1 and Queue2 through the Queue Set; when a queue has data, it gets that queue's handle, reads the Queue, and prints the data.
image.png
image.png
image.png

A queue set can monitor multiple queues: it picks out the queue that has data, and you then read that queue to get the data.
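The whole mechanism — each write to a data queue also pushes that queue's handle into the set, and the monitor first reads the set and then reads exactly once from the queue it names — can be modeled in plain C. This is a sketch with invented names (`Q`, `QSet`, `qset_select`), not the FreeRTOS implementation.

```c
#define QCAP 4

typedef struct {
    int items[QCAP];
    int head, count;
} Q;

/* The queue set is itself a queue -- but it stores queue pointers.
 * Its capacity is the sum of the member queues' lengths.           */
typedef struct {
    Q  *members[2 * QCAP];
    int head, count;
} QSet;

static int q_send(Q *q, QSet *set, int v) {
    if (q->count == QCAP) return 0;
    q->items[(q->head + q->count++) % QCAP] = v;
    /* key step: writing data also queues this queue's 'handle' in the set */
    set->members[(set->head + set->count++) % (2 * QCAP)] = q;
    return 1;
}

static int q_receive(Q *q) {          /* assumes the queue is non-empty */
    int v = q->items[q->head];
    q->head = (q->head + 1) % QCAP;
    q->count--;
    return v;
}

/* Like xQueueSelectFromSet: returns a member queue that has data. */
static Q *qset_select(QSet *set) {
    if (set->count == 0) return 0;    /* nothing ready: caller would block */
    Q *q = set->members[set->head];
    set->head = (set->head + 1) % (2 * QCAP);
    set->count--;
    return q;
}
```

A monitor loop in this model reads the set first, then exactly one item from the queue it returned: `Q *ready = qset_select(&set); if (ready) use(q_receive(ready));`.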

Origin: blog.csdn.net/cyaya6/article/details/132507836