FreeRTOS Queue | FreeRTOS Part 9

Table of contents

Notes

1. Queue Introduction

1.1. What is a queue?

1.2. Advantages of queues

1.3. Queue implementation function

1.4. Understanding queue usage

1.5. Queue characteristics

1.6. Queue blocking processing

1.7. Queue enqueue and dequeue process

2. Queue structure

2.1. Understanding the structure

2.2. Understanding the union

2.3. Queue structure storage area

3. Queue API function

3.1. Create queue function

3.2. Enqueue function

3.3. Dequeue function

4. Queue API function implementation steps

4.1. Queue creation API function

4.2. Queue writing data API function

4.3. Queue data reading API function


Notes:

About the content:

1) The following content is mostly conceptual understanding and step-by-step analysis.

2) There is no personal sample code yet; the official FreeRTOS sample code is used.

3) If you want code to port and test, please look elsewhere; the content below contains no personal test code.

About others:

1) Operating system: Windows 10

2) Platform: Keil 5 MDK

3) Language: C

4) Board: an STM32 board with FreeRTOS ported
 

1. Queue Introduction

1.1. What is a queue?

        A queue is a mechanism (a message-passing mechanism) for exchanging data from task to task, task to interrupt, and interrupt to task.

 

1.2. Advantages of queues

        1) Compared with the global variables commonly used in bare-metal programming, queues in FreeRTOS keep the data safe.

        2) When multiple tasks operate on the same variable at the same time, reads and writes of that variable are unsafe, as shown in Figures 1 and 2 below.

               

                             Figure 1 Figure 2

1.3. Queue implementation function

        1) FreeRTOS builds a variety of features on top of queues, including queue sets, mutex semaphores, counting semaphores, binary semaphores, recursive mutex semaphores, etc.

        2) Queue reads and writes are protected against interference between tasks; to use them you only need to call the relevant API functions, as shown in Figures 3 and 4 below.

                                               

                    Figure 3 Figure 4

1.4. Understanding queue usage

        1) A queue can store a limited number of fixed-size data items. Each piece of data in the queue is called a "queue item", and the maximum number of queue items a queue can hold is called the queue length.

        2) When creating a queue, you must specify the queue length and the queue item size (the values are not fixed), as shown in Figure 5 below

 Figure 5

1.5. Queue characteristics

        1) Dequeue order: queues normally use a "first in, first out" (FIFO) buffering scheme, in which the data enqueued first is read first; a "last in, first out" (LIFO) scheme can also be configured.

        2) Data transfer method: by value, copying the data directly into the queue; or by pointer, which is generally used when transferring larger pieces of data.

        3) Multi-task access: a queue does not belong to any particular task; any task or interrupt can send messages to (enqueue) or read messages from (dequeue) the queue.

        4) Enqueue and dequeue blocking: when a task sends (enqueues) a message to a queue, it can specify a blocking time. If the queue is full and the message cannot be enqueued, there are three cases:

        1. The blocking time is 0: return immediately without waiting.

        2. The blocking time is between 0 and portMAX_DELAY: wait for the specified blocking time; if the message still cannot be enqueued within that time, return.

        3. The blocking time is portMAX_DELAY: wait indefinitely until the message can be enqueued.

        Note: dequeue blocking works the same way as enqueue blocking, so it is not repeated here.
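The FIFO and LIFO dequeue orders described above can be sketched with a plain-C model. This is not FreeRTOS source code; the `model_*` names are invented for illustration, with "send to back" standing in for normal FIFO writes and "send to front" for the LIFO-style option.

```c
/* Plain-C model of a fixed-size queue, illustrating FIFO ("send to
 * back") versus LIFO-style ("send to front") behaviour. Not FreeRTOS
 * code; all names here are invented for this sketch. */
#include <string.h>

#define MODEL_LEN 4

typedef struct {
    int items[MODEL_LEN];
    int count;
} ModelQueue;

/* Append at the tail: items keep FIFO order. Returns 0 when full. */
static int model_send_to_back(ModelQueue *q, int v)
{
    if (q->count == MODEL_LEN) return 0;    /* queue full */
    q->items[q->count++] = v;
    return 1;
}

/* Insert at the head: this item will be dequeued first (LIFO-style). */
static int model_send_to_front(ModelQueue *q, int v)
{
    if (q->count == MODEL_LEN) return 0;    /* queue full */
    memmove(&q->items[1], &q->items[0], (size_t)q->count * sizeof(int));
    q->items[0] = v;
    q->count++;
    return 1;
}

/* Remove one item from the head. Returns 0 when empty. */
static int model_receive(ModelQueue *q, int *out)
{
    if (q->count == 0) return 0;            /* queue empty */
    *out = q->items[0];
    memmove(&q->items[0], &q->items[1], (size_t)(q->count - 1) * sizeof(int));
    q->count--;
    return 1;
}
```

With send-to-back only, items come out in the order they went in; a send-to-front item jumps the line.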

1.6. Queue blocking processing

        1) Enqueue blocking: when the queue is full and task X still wants to enqueue, it cannot do so. Task X's status list item is first mounted on pxDelayedTaskList, and then its event list item is mounted on xTasksWaitingToSend.

        2) Dequeue blocking: when the queue is empty and task Y wants to dequeue, it cannot (there is no data). Task Y's status list item is first mounted on pxDelayedTaskList, and then its event list item is mounted on xTasksWaitingToReceive.

When multiple tasks try to enqueue to an already-full queue at the same time, they all enter the blocked state; that is, several tasks are waiting for space in the same queue. When space becomes available, which task becomes ready first?

        1) The task with the highest priority among them.

        2) If several tasks have the same priority, the task that has waited longest becomes ready.
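The selection rule above can be sketched in plain C. The `WaitingTask` structure and `pick_task_to_wake()` are invented for illustration; FreeRTOS implements this ordering through its sorted event lists rather than a search loop.

```c
/* Sketch of the wake-up rule for tasks blocked on the same queue:
 * highest priority first; on a priority tie, the longest-waiting task
 * wins. Names here are invented, not FreeRTOS internals. */
#include <stddef.h>

typedef struct {
    int priority;      /* higher number = higher priority */
    int wait_ticks;    /* how long the task has been blocked */
} WaitingTask;

/* Return the index of the task that should be made ready first. */
static size_t pick_task_to_wake(const WaitingTask *tasks, size_t n)
{
    size_t best = 0;
    for (size_t i = 1; i < n; i++) {
        if (tasks[i].priority > tasks[best].priority ||
            (tasks[i].priority == tasks[best].priority &&
             tasks[i].wait_ticks > tasks[best].wait_ticks)) {
            best = i;
        }
    }
    return best;
}
```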

1.7. Queue enqueue and dequeue process

        1) Create a queue, as shown in Figure 6 below

Figure 6

        2) Enqueue, as shown in Figures 7 and 8 below

 Figure 7

 Figure 8

        3) Dequeue, as shown in Figures 9 and 10 below

 Figure 9

 Figure 10

2. Queue structure

2.1. Understanding the structure

typedef struct QueueDefinition
{
    int8_t * pcHead;                        /* Start address of the storage area */
    int8_t * pcWriteTo;                     /* Next write position */

    union                                   /* Union */
    {
        QueuePointers_t xQueue;
        SemaphoreData_t xSemaphore;
    } u;

    List_t xTasksWaitingToSend;             /* List of tasks waiting to send */
    List_t xTasksWaitingToReceive;          /* List of tasks waiting to receive */

    volatile UBaseType_t uxMessagesWaiting; /* Number of non-idle queue items */
    UBaseType_t uxLength;                   /* Queue length */
    UBaseType_t uxItemSize;                 /* Size of each queue item */

    volatile int8_t cRxLock;                /* Read lock counter */
    volatile int8_t cTxLock;                /* Write lock counter */

    /* Some other conditionally compiled members */
} xQUEUE;

2.2. Understanding the union

When used as a queue:

typedef struct QueuePointers
{
    int8_t * pcTail;      /* End address of the storage area */
    int8_t * pcReadFrom;  /* Address of the last item read from the queue */
} QueuePointers_t;


When used as a mutex semaphore or recursive mutex semaphore:

typedef struct SemaphoreData
{
    TaskHandle_t xMutexHolder;        /* Holder of the mutex semaphore */
    UBaseType_t uxRecursiveCallCount; /* Recursive mutex take counter */
} SemaphoreData_t;


2.3. Queue structure storage area

As shown in Figure 11 below:

 Figure 11

3. Queue API function

The main flow of using a queue: create the queue --> write to the queue --> read from the queue

3.1. Create queue function

1) Function name: xQueueCreate(); purpose: dynamically create a queue

2) Function name: xQueueCreateStatic(); purpose: statically create a queue

3) The difference between the two: a dynamically created queue has its memory allocated by the FreeRTOS memory manager, while static creation requires the user to allocate the memory himself

Code part:

#if ( configSUPPORT_DYNAMIC_ALLOCATION == 1 )
    #define xQueueCreate( uxQueueLength, uxItemSize )    xQueueGenericCreate( ( uxQueueLength ), ( uxItemSize ), ( queueQUEUE_TYPE_BASE ) )
#endif

Parameter explanation:

uxQueueLength, meaning: queue length

uxItemSize, meaning: queue item size

queueQUEUE_TYPE_BASE, meaning: the type of object this queue implements (a plain queue here)

Optional parameters are as shown in Figure 12:

 Figure 12

Return value explanation:

Return: NULL, meaning: Queue creation failed

Return: other values, meaning: queue created successfully

3.2. Enqueue function

As shown in Figure 13 below:

Figure 13

Code part:

As shown in Figure 14 below:

 Figure 14

The joining position is as shown in Figure 15 below:

 Figure 15

Enqueue entry function:

BaseType_t xQueueGenericSend( QueueHandle_t xQueue,
                              const void * const pvItemToQueue,
                              TickType_t xTicksToWait,
                              const BaseType_t xCopyPosition );

Parameter explanation:

xQueue, meaning: queue to be written

pvItemToQueue, meaning: message to be written

xTicksToWait, meaning: blocking timeout

xCopyPosition, meaning: message writing position

Return value explanation:

Return: pdTRUE, meaning: Queue writing is successful

Return: errQUEUE_FULL, meaning: Queue writing failed

3.3. Dequeue function

As shown in Figure 16 below:

 Figure 16

Code part:

BaseType_t xQueueReceive( QueueHandle_t xQueue,
                          void * const pvBuffer,
                          TickType_t xTicksToWait ) PRIVILEGED_FUNCTION;

Parameter explanation:

xQueue, meaning: queue to be read out

pvBuffer, meaning: message reading buffer area

xTicksToWait, meaning: blocking timeout

Return value explanation:

Return: pdTRUE, meaning: queue read successful

Return: pdFALSE, meaning: queue read failed

Note:

After successfully reading a message, this function removes the message from the queue, whereas the xQueuePeek function does not remove the message it reads.
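The receive-versus-peek difference can be shown with a small plain-C model. This is not FreeRTOS code; `pq_receive`/`pq_peek` are invented names that mirror the behaviour of xQueueReceive and xQueuePeek respectively.

```c
/* Plain-C sketch: "receive" copies an item out AND removes it, while
 * "peek" copies it out and leaves it queued. Not FreeRTOS code. */

typedef struct {
    int items[8];
    int head;
    int count;
} PeekQueue;

static int pq_receive(PeekQueue *q, int *out)    /* like xQueueReceive */
{
    if (q->count == 0) return 0;
    *out = q->items[q->head];
    q->head = (q->head + 1) % 8;
    q->count--;                                  /* item is consumed */
    return 1;
}

static int pq_peek(const PeekQueue *q, int *out) /* like xQueuePeek */
{
    if (q->count == 0) return 0;
    *out = q->items[q->head];                    /* item stays queued */
    return 1;
}
```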

4. Queue API function implementation steps

4.1. Queue creation API function

Name: xQueueCreate

Implementation process:

1) What is actually executed is xQueueGenericCreate()
2) xQueueGenericCreate( ( uxQueueLength ), ( uxItemSize ), ( queueQUEUE_TYPE_BASE ) )
3) Calculate how much memory the queue requires: xQueueSizeInBytes = ( size_t ) ( uxQueueLength * uxItemSize )
4) Apply for memory for the queue; the requested size is sizeof( Queue_t ) + xQueueSizeInBytes. The front part stores the structure members, and the latter part stores the queue items.
5) Check whether the memory allocation succeeded; if it did, calculate the first address of the queue item storage area.
6) Call prvInitialiseNewQueue() to initialize the new queue pxNewQueue:

1. Initialize the queue structure member variables
2. Call xQueueGenericReset() to reset the queue
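The single-allocation layout in step 4 can be sketched as follows. `ModelQueue_t` and `model_queue_create` are invented stand-ins for FreeRTOS's private Queue_t and xQueueGenericCreate; only the memory-layout idea is being shown.

```c
/* Sketch of step 4: one malloc of sizeof(structure) + length*itemSize;
 * the structure sits at the front and the item storage area starts
 * immediately after it. Names invented, not FreeRTOS internals. */
#include <stdint.h>
#include <stdlib.h>

typedef struct {
    int8_t *pcHead;      /* start of the item storage area */
    size_t  uxLength;
    size_t  uxItemSize;
} ModelQueue_t;

static ModelQueue_t *model_queue_create(size_t uxQueueLength, size_t uxItemSize)
{
    size_t xQueueSizeInBytes = uxQueueLength * uxItemSize;

    /* One allocation: structure first, then the queue item storage. */
    ModelQueue_t *q = malloc(sizeof(ModelQueue_t) + xQueueSizeInBytes);
    if (q == NULL) return NULL;              /* allocation failed */

    q->pcHead = (int8_t *)q + sizeof(ModelQueue_t); /* first item address */
    q->uxLength = uxQueueLength;
    q->uxItemSize = uxItemSize;
    return q;
}
```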

4.2. Queue writing data API function

Name: xQueueSend

Implementation process:

1) What is actually executed is: xQueueGenericSend( QueueHandle_t xQueue,
                              const void * const pvItemToQueue,
                              TickType_t xTicksToWait,
                              const BaseType_t xCopyPosition );

2) Enter the critical section (turn off interrupts)

3) Determine whether the queue is full

4) If the queue has free space, then

1. Messages can only be written when the queue has free space or overwrite mode is used.

2. When there is free space (or overwrite mode), copy the message into the queue according to the specified write position.

3. Check whether any tasks are blocked because they could not read a message; if so, unblock them through the xTaskRemoveFromEventList() function --> check whether the scheduler is suspended

        1. Not suspended: the task's event list item and status list item are removed, and the task is added to the ready list.

        2. Suspended: the event list item is removed and added to the pending-ready list xPendingReadyList; when the scheduler is resumed with xTaskResumeAll(), the tasks in xPendingReadyList are processed.

4. Exit the critical section

5) The queue is full, then

1. No message can be written at this point, so the task must be blocked.

2. If the blocking time is 0, there is no blocking, and a queue-full error is returned directly.

3. If the blocking time is not 0, the task must be blocked. The current value of the system tick counter and the number of overflows are recorded, and are used later to compensate the blocking time.
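The full/not-full branch of the write path above can be sketched in plain C. This is a model, not FreeRTOS code; real xQueueGenericSend would block the task (via xTasksWaitingToSend) when the block time is non-zero, which is elided here.

```c
/* Sketch of the xQueueSend-style write path: if there is space, copy
 * the message in by value; if the queue is full and the block time is
 * 0, fail immediately with a "queue full" error. Blocking is elided. */

#define MQ_LEN 2
#define MQ_ERR_FULL (-1)
#define MQ_OK 0

typedef struct {
    int items[MQ_LEN];
    int count;
} SendQueue;

static int sq_send(SendQueue *q, const int *item, int ticks_to_wait)
{
    if (q->count < MQ_LEN) {
        q->items[q->count++] = *item;  /* copy the message in by value */
        return MQ_OK;
    }
    if (ticks_to_wait == 0)
        return MQ_ERR_FULL;            /* full, no blocking: fail now */

    /* A real kernel would block the task on xTasksWaitingToSend here
     * and retry when space appears; this model just reports failure. */
    return MQ_ERR_FULL;
}
```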

4.3. Queue data reading API function

Name: xQueueReceive

Implementation process:

1) Enter the critical area

2) Determine whether the queue is empty

3) If there is data, then

1. Use the function prvCopyDataFromQueue() to copy the data out

2. Decrease the number of queue items by one.

3. Because one queue item has just been removed, there is now a free slot in the queue. If there are tasks in the xTasksWaitingToSend waiting list, unblock them via the xTaskRemoveFromEventList() function, which checks whether the scheduler is suspended:

        1. Not suspended: the task's event list item and status list item are removed, and the task is added to the ready list.

        2. Suspended: the event list item is removed and added to the pending-ready list xPendingReadyList; when the scheduler is resumed with xTaskResumeAll(), the tasks in xPendingReadyList are processed.

        

4. Exit the critical section (enable interrupts)

4) No data

1. No message can be read at this point, so the task must be blocked.

2. If the blocking time is 0, there is no blocking, and a queue-empty error is returned directly.

3. If the blocking time is not 0, the task must be blocked. The current value of the system tick counter and the number of overflows are recorded, and are used later to compensate the blocking time.

4. After the blocking time is compensated, determine whether the task still needs to block:

        1. Yes: add the task's event list item to the waiting-to-receive list, add its status list item to the blocked list, unlock the queue, and resume the scheduler.

        2. No: unlock the queue, resume the scheduler, and return a queue-empty error.
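The empty/not-empty branch of the read path above can be sketched the same way. Again this is a model, not FreeRTOS code; the copy step stands in for prvCopyDataFromQueue(), and blocking on xTasksWaitingToReceive is elided.

```c
/* Sketch of the xQueueReceive-style read path: if the queue holds
 * data, copy an item out and decrement the item count; if it is empty
 * and the block time is 0, fail at once with a "queue empty" error. */

#define RQ_LEN 4
#define RQ_ERR_EMPTY (-1)
#define RQ_OK 0

typedef struct {
    int items[RQ_LEN];
    int head;
    int count;           /* plays the role of uxMessagesWaiting */
} RecvQueue;

static int rq_receive(RecvQueue *q, int *buffer, int ticks_to_wait)
{
    if (q->count > 0) {
        *buffer = q->items[q->head];   /* the prvCopyDataFromQueue step */
        q->head = (q->head + 1) % RQ_LEN;
        q->count--;                    /* one fewer waiting message */
        return RQ_OK;
    }
    if (ticks_to_wait == 0)
        return RQ_ERR_EMPTY;           /* empty, no blocking: fail now */

    /* A real kernel would block the task on xTasksWaitingToReceive
     * here; this model just reports failure. */
    return RQ_ERR_EMPTY;
}
```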


Origin blog.csdn.net/qq_57663276/article/details/128918116