FreeRTOS Learning Record 04--Queue

0 Preface

@ Author         :Dargon
@ Record Date    :2021/07/13
@ Reference Book : `FreeRTOS源码详解与应用开发` (FreeRTOS Source Code Explained and Application Development), `ARM Cortex-M3与Cortex-M4权威指南` (The Definitive Guide to ARM Cortex-M3 and Cortex-M4 Processors), and the ALIENTEK (正点原子) FreeRTOS tutorial videos on Bilibili
@ Purpose        : To study ALIENTEK's miniFly. This flight controller is built on FreeRTOS, so these notes record the basic operations of the RTOS: a rough understanding of how the system works, and the basic flow of creating, running and switching tasks. This document is the record of that learning.

1 The basics of queues

1.1 The queue structure Queue_t

  • Structure Queue_t
    typedef struct QueueDefinition
    {
        int8_t *pcHead;					/*< Points to the beginning of the queue storage area. */
        int8_t *pcTail;					/*< Points to the byte at the end of the queue storage area.  Once more byte is allocated than necessary to store the queue items, this is used as a marker. */
        int8_t *pcWriteTo;				/*< Points to the free next place in the storage area. */
    
        union							/* Use of a union is an exception to the coding standard to ensure two mutually exclusive structure members don't appear simultaneously (wasting RAM). */
        {
            int8_t *pcReadFrom;			/*< Points to the last place that a queued item was read from when the structure is used as a queue. */
            UBaseType_t uxRecursiveCallCount;/*< Maintains a count of the number of times a recursive mutex has been recursively 'taken' when the structure is used as a mutex. */
        } u;
    
        List_t xTasksWaitingToSend;		/*< List of tasks that are blocked waiting to post onto this queue.  Stored in priority order. */
        List_t xTasksWaitingToReceive;	/*< List of tasks that are blocked waiting to read from this queue.  Stored in priority order. */
    
        volatile UBaseType_t uxMessagesWaiting;/*< The number of items currently in the queue. */
        UBaseType_t uxLength;			/*< The length of the queue defined as the number of items it will hold, not the number of bytes. */
        UBaseType_t uxItemSize;			/*< The size of each items that the queue will hold. */
    
        volatile int8_t cRxLock;		/*< Stores the number of items received from the queue (removed from the queue) while the queue was locked.  Set to queueUNLOCKED when the queue is not locked. */
        volatile int8_t cTxLock;		/*< Stores the number of items transmitted to the queue (added to the queue) while the queue was locked.  Set to queueUNLOCKED when the queue is not locked. */
    
        #if( ( configSUPPORT_STATIC_ALLOCATION == 1 ) && ( configSUPPORT_DYNAMIC_ALLOCATION == 1 ) )
            uint8_t ucStaticallyAllocated;	/*< Set to pdTRUE if the memory used by the queue was statically allocated to ensure no attempt is made to free the memory. */
        #endif
    
        #if ( configUSE_QUEUE_SETS == 1 )
            struct QueueDefinition *pxQueueSetContainer;
        #endif
    
        #if ( configUSE_TRACE_FACILITY == 1 )
            UBaseType_t uxQueueNumber;
            uint8_t ucQueueType;
        #endif
    
    } xQUEUE;
    
    /* The old xQUEUE name is maintained above then typedefed to the new Queue_t
    name below to enable the use of older kernel aware debuggers. */
    typedef xQUEUE Queue_t;
    
    1. pcHead: points to the first address of the queue storage area.
    2. pcTail: points to the end of the storage area (one byte past the last item, used as a marker).
    3. pcWriteTo: points to the next free location in the storage area.
    4. A union:
      1. When the structure is used as a queue, pcReadFrom points to the first address of the last item that was read (dequeued).
      2. When it is used as a recursive mutex, uxRecursiveCallCount records the number of recursive "takes".
    5. xTasksWaitingToSend: a list of tasks that became blocked because the queue was full when they tried to send a message; the task's event list item xEventListItem is hung on this list.
    6. xTasksWaitingToReceive: a list of tasks that became blocked because the queue was empty when they tried to read a message; the task's event list item xEventListItem is hung on this list.
    7. uxMessagesWaiting: the number of messages currently in the queue (see the query sketch after this list).
    8. uxLength: the maximum number of messages the queue can hold.
    9. uxItemSize: the size, in bytes, of one message.
    10. cRxLock and cTxLock: the two queue locks, not discussed for now.
    11. Some conditional compilation options, ignored for now.
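
  • A small sketch relating the members above to the queue query APIs, assuming a queue handle xQueue created elsewhere; uxQueueMessagesWaiting() reads uxMessagesWaiting, and uxQueueSpacesAvailable() is computed as uxLength - uxMessagesWaiting.
    #include "FreeRTOS.h"
    #include "queue.h"

    void vReportQueueState( QueueHandle_t xQueue )
    {
        /* Number of items currently stored in the queue (uxMessagesWaiting). */
        UBaseType_t uxUsed = uxQueueMessagesWaiting( xQueue );

        /* Free slots remaining, i.e. uxLength - uxMessagesWaiting. */
        UBaseType_t uxFree = uxQueueSpacesAvailable( xQueue );

        ( void ) uxUsed;   /* placeholders: a real application would log or act on these */
        ( void ) uxFree;
    }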

1.2 Queue initialization (dynamic allocation)

  • About queue creation
    • The file queue.c implements a number of general-purpose functions. The specific functions exposed for external use are defined in queue.h as macros, which makes them convenient to call and improves the robustness of the program.
    • xQueueCreate() initialization function definition
      #if( configSUPPORT_DYNAMIC_ALLOCATION == 1 )
          #define xQueueCreate( uxQueueLength, uxItemSize ) xQueueGenericCreate( ( uxQueueLength ), ( uxItemSize ), ( queueQUEUE_TYPE_BASE ) )
      #endif
      
      • xQueueGenericCreate(), the function that actually does the work (a usage sketch of queue creation follows after this list)

        1. Calculate xQueueSizeInBytes and allocate the memory: the size of the queue structure plus the size of the actual message storage area.
        2. pucQueueStorage points to the start of the queue's message storage area.
        3. Call prvInitialiseNewQueue( uxQueueLength, uxItemSize, pucQueueStorage, ucQueueType, pxNewQueue ) to initialize the remaining members of Queue_t.
        4. Up to this point only the memory has been requested and a few pointers set to the right locations; just as when initializing a list, the memory allocation and pointer assignment are done first, and the formal initialization of the structure members comes afterwards. The specific initialization of the Queue_t members follows below.
        • prvInitialiseNewQueue() source code analysis

          1. Update the pxNewQueue->pcHead pointer:
            1. If the queue's uxItemSize = 0, it points directly to the queue structure itself, pxNewQueue (the first address of the memory allocated for the queue).
            2. Otherwise, it points to the message storage area, pucQueueStorage.
          2. Initialize the queue length (maximum number of messages), pxNewQueue->uxLength.
          3. Initialize the message size (memory occupied by each message), pxNewQueue->uxItemSize.
          4. Call xQueueGenericReset( pxNewQueue, pdTRUE ) to reset the queue.
          • xQueueGenericReset() reset-queue source code analysis
            1. Update pxQueue->pcTail to point to the end of the queue storage area (pcHead + uxLength * uxItemSize).
            2. Update pxQueue->uxMessagesWaiting, the number of messages waiting to be read by tasks (i.e. currently stored in the queue), to 0.
            3. pxQueue->pcWriteTo: the address where the next message can be stored.
            4. pxQueue->u.pcReadFrom: it is advanced before each read, so it is initialized to point to the first address of the last message item.
            5. pxQueue->cRxLock: the queue message lock; it is not used here yet.
            6. Initialize the two lists pxQueue->xTasksWaitingToSend and pxQueue->xTasksWaitingToReceive:
              1. If this is a new queue, no tasks can yet be blocked on the two lists waiting to send or receive, so the list initialization function vListInitialise is simply called for each of them.
              2. If it is not a new queue, the two lists need attention. Tasks blocked waiting to read from the queue can be left alone, because the queue will still be empty when the function returns. Tasks blocked on pxQueue->xTasksWaitingToSend, waiting to send to the queue, need to be checked.
              3. If a task is hanging on pxQueue->xTasksWaitingToSend, remove it from the list and then perform a task switch.
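
  • A minimal usage sketch of the dynamic creation path described above, assuming configSUPPORT_DYNAMIC_ALLOCATION == 1; the message type Message_t and the function name are illustrative, not from the original article.
    #include <stdint.h>
    #include "FreeRTOS.h"
    #include "queue.h"

    typedef struct
    {
        uint8_t  ucId;
        uint32_t ulValue;
    } Message_t;

    QueueHandle_t xCreateMessageQueue( void )
    {
        /* xQueueCreate() expands to xQueueGenericCreate( 8, sizeof( Message_t ),
           queueQUEUE_TYPE_BASE ), which allocates one block holding the Queue_t
           structure plus 8 * sizeof( Message_t ) bytes of message storage, then
           initializes it via prvInitialiseNewQueue() / xQueueGenericReset(). */
        QueueHandle_t xQueue = xQueueCreate( 8, sizeof( Message_t ) );

        if( xQueue == NULL )
        {
            /* The heap could not supply enough memory for the queue. */
        }

        return xQueue;
    }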

2 Implementation of the API (Application Programming Interface) functions

2.1 Enqueue xQueueGenericSend()

  • The functions for sending to a queue fall into two groups
    • Task level
      1. xQueueSendToFront() sends a message to the head of the queue
      2. xQueueSendToBack() sends a message to the tail of the queue
      3. xQueueSend() is the normal send, equivalent to sending to the tail of the queue
      4. xQueueOverwrite() sends a message and, if the queue is already full, automatically overwrites the old one
    • Interrupt level
      1. The interrupt-level versions append FromISR() to the names above and correspond one-to-one with the task-level functions
    • xQueueGenericSend() is the function that does the real work; the send functions above all end up calling it (a usage sketch follows after this list)
      1. If the queue is not full at this point
        1. prvCopyDataToQueue( pxQueue, pvItemToQueue, xCopyPosition ) copies the message into the queue at the position given by xCopyPosition
        2. Check whether a task was previously blocked and hung on xTasksWaitingToReceive because the queue was empty and it could not read a message; if so, remove it from the event list
        3. Perform a task switch
      2. If the queue is full and the message cannot be sent
        1. If the parameter xTicksToWait = 0, exit directly and return errQUEUE_FULL
        2. Otherwise block for the specified time
        3. Or wait indefinitely
      3. Exit the critical section, check again, and hang the task on the queue's blocking list (if a slot becomes free, the "queue not full" branch above is taken on the next pass of the for loop, and the function returns directly once it completes)
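
  • A minimal sketch of sending to a queue, reusing the illustrative xCreateMessageQueue() and Message_t from the sketch in section 1.2; the producer task itself is hypothetical.
    #include "FreeRTOS.h"
    #include "task.h"
    #include "queue.h"

    void vProducerTask( void *pvParameters )
    {
        QueueHandle_t xQueue = ( QueueHandle_t ) pvParameters;  /* handle passed in at task creation */
        Message_t xMsg = { 1, 0 };

        for( ;; )
        {
            xMsg.ulValue++;

            /* Copies xMsg into the queue (the "queue not full" branch above).  If the
               queue is full, block for up to 100 ms (the "queue full" branch); if no
               slot frees up in time, errQUEUE_FULL is returned instead of pdPASS. */
            if( xQueueSend( xQueue, &xMsg, pdMS_TO_TICKS( 100 ) ) != pdPASS )
            {
                /* The queue was still full after the timeout. */
            }

            vTaskDelay( pdMS_TO_TICKS( 10 ) );
        }
    }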

2.2 Dequeue xQueueGenericReceive()

  • Reading messages from the queue
    • Task level
      1. xQueuePeek() reads a message without deleting it from the queue
      2. xQueueReceive() reads a message and deletes it from the queue
    • Interrupt level
      1. The FromISR() versions (xQueuePeekFromISR() and xQueueReceiveFromISR()) have the same effect as above
    • xQueueGenericReceive() is the function that does the real work; the receive functions above all end up calling it (a usage sketch follows after this list)
      1. If there is a message in the queue at this point
        1. prvCopyDataFromQueue( pxQueue, pvBuffer ) copies the message into the buffer pvBuffer
        2. Check whether a task was previously blocked and hung on xTasksWaitingToSend because the queue was full and it could not send a message; if so, remove it from the event list
        3. Perform a task switch
      2. If the queue has no messages
        1. If the parameter xTicksToWait = 0, exit directly and return errQUEUE_EMPTY
        2. Otherwise block for the specified time
        3. Or wait indefinitely
      3. Exit the critical section, check again, and hang the task on the queue's blocking list (if a message arrives, the "queue not empty" branch above is taken on the next pass of the for loop, and the function returns directly once it completes)
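
  • A minimal sketch of the receiving side, again reusing the illustrative queue handle and Message_t type from the earlier sketches.
    #include "FreeRTOS.h"
    #include "task.h"
    #include "queue.h"

    void vConsumerTask( void *pvParameters )
    {
        QueueHandle_t xQueue = ( QueueHandle_t ) pvParameters;
        Message_t xMsg;

        for( ;; )
        {
            /* Copies the oldest message into xMsg and removes it from the queue (the
               "message available" branch above).  If the queue is empty, block for up
               to 500 ms (the "no messages" branch); a failure return means nothing
               arrived in time. */
            if( xQueueReceive( xQueue, &xMsg, pdMS_TO_TICKS( 500 ) ) == pdPASS )
            {
                /* ... process xMsg ... */
            }

            /* xQueuePeek( xQueue, &xMsg, 0 ) would read the same message without
               removing it from the queue. */
        }
    }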

3 Semaphores

  • A brief introduction to the semaphores that are used for now; the notes will be supplemented after semaphores have been studied in detail

3.1 Binary semaphore

  • A binary semaphore makes use of the queue mechanism described above: when the semaphore cannot be taken (the corresponding "message" cannot be read), the task is hung on the waiting list until the semaphore is given (a usage sketch follows below)
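
  • A minimal sketch of the usual "deferred interrupt handling" use of a binary semaphore, assuming a standard FreeRTOS project; the ISR and task names are illustrative, and portYIELD_FROM_ISR() is the Cortex-M style macro (other ports may name it differently).
    #include "FreeRTOS.h"
    #include "task.h"
    #include "semphr.h"

    static SemaphoreHandle_t xBinarySem;

    void vInitBinarySem( void )
    {
        xBinarySem = xSemaphoreCreateBinary();  /* created empty: the first take will block */
    }

    /* Called from an interrupt handler to signal the task. */
    void vSomeISRHandler( void )
    {
        BaseType_t xHigherPriorityTaskWoken = pdFALSE;

        xSemaphoreGiveFromISR( xBinarySem, &xHigherPriorityTaskWoken );
        portYIELD_FROM_ISR( xHigherPriorityTaskWoken );  /* switch to the handler task if it was woken */
    }

    void vHandlerTask( void *pvParameters )
    {
        for( ;; )
        {
            /* Just like reading from an empty queue: the task is hung on the waiting
               list here until the ISR gives the semaphore. */
            if( xSemaphoreTake( xBinarySem, portMAX_DELAY ) == pdTRUE )
            {
                /* ... handle the event ... */
            }
        }
    }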

3.2 Priority inversion

  • The priority inversion phenomenon that can arise when using a binary semaphore
  • Explained in my own words (a sketch of the scenario follows after this list)
    1. Suppose there are three tasks: a high-priority task H_task, a medium-priority task M_task and a low-priority task L_task
    2. Tasks H and M are blocked waiting for their events to occur, and task L is running
    3. While running, task L needs to access a shared resource, and before accessing it, it must obtain the semaphore that protects the resource
    4. Task L takes the semaphore and keeps running
    5. The event task H was waiting for occurs, so H takes the CPU away from task L
    6. Task H starts running
    7. While running, task H also needs the resource that task L is using. Since the semaphore for this resource is still held by task L, H has to enter the blocked state and wait for task L to release it
    8. Task L continues to run
    9. At this point the event task M was waiting for occurs, and task M takes the CPU away from task L and starts running
    10. When task M finishes, it returns the CPU to task L
    11. Task L runs to completion and releases the semaphore for the shared resource
    12. Only then does task H get the semaphore and continue running
    • In effect, the priority of H has dropped to the level of L, and M, despite having a lower priority than H, runs ahead of it: this is priority inversion
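
  • A sketch of the three-task scenario above, assuming a standard FreeRTOS project; the priorities and task bodies are illustrative only.
    #include "FreeRTOS.h"
    #include "task.h"
    #include "semphr.h"

    static SemaphoreHandle_t xResourceSem;  /* binary semaphore guarding the shared resource */

    static void vTaskL( void *pvParameters )  /* low priority */
    {
        for( ;; )
        {
            xSemaphoreTake( xResourceSem, portMAX_DELAY );  /* steps 3-4: L takes the semaphore */
            /* ... long access to the shared resource; H and M may preempt here ... */
            xSemaphoreGive( xResourceSem );                 /* step 11: L finally releases it */
        }
    }

    static void vTaskH( void *pvParameters )  /* high priority */
    {
        for( ;; )
        {
            /* Step 7: blocks here while L still holds the semaphore, so the
               medium-priority task M can run ahead of H: priority inversion. */
            xSemaphoreTake( xResourceSem, portMAX_DELAY );
            /* ... use the shared resource ... */
            xSemaphoreGive( xResourceSem );
        }
    }

    void vCreateInversionDemo( void )
    {
        xResourceSem = xSemaphoreCreateBinary();
        xSemaphoreGive( xResourceSem );  /* binary semaphores start empty, so give it once */

        xTaskCreate( vTaskL, "L", configMINIMAL_STACK_SIZE, NULL, 1, NULL );
        xTaskCreate( vTaskH, "H", configMINIMAL_STACK_SIZE, NULL, 3, NULL );
        /* A medium-priority task "M" at priority 2 would keep preempting L while H waits. */
    }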

3.3 Mutex semaphore

  • The mutual-exclusion semaphore (mutex) is designed to solve the priority inversion problem caused by the binary semaphore above, by means of priority inheritance (a usage sketch follows below)
    1. When the situation above occurs, the priority of task L is temporarily raised to the same priority as task H, so task M can no longer preempt task L
    2. Once task L releases the mutex, its original priority is restored and task H takes the mutex and continues to run
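
  • A minimal sketch of replacing the binary semaphore with a mutex so that the kernel can apply priority inheritance, assuming configUSE_MUTEXES == 1; the function names are illustrative.
    #include "FreeRTOS.h"
    #include "semphr.h"

    static SemaphoreHandle_t xResourceMutex;

    void vInitMutex( void )
    {
        xResourceMutex = xSemaphoreCreateMutex();  /* created in the "given" state, ready to take */
    }

    /* Called from both the low- and the high-priority task. */
    void vAccessSharedResource( void )
    {
        if( xSemaphoreTake( xResourceMutex, portMAX_DELAY ) == pdTRUE )
        {
            /* While a low-priority task holds the mutex and a high-priority task waits
               for it, the holder temporarily inherits the waiter's priority, so a
               medium-priority task can no longer prolong the inversion. */
            /* ... access the shared resource ... */
            xSemaphoreGive( xResourceMutex );  /* the original priority is restored on release */
        }
    }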

Origin blog.csdn.net/Dallas01/article/details/118720796