Learning and Understanding VxWorks (Part 2)

Thanks to the original author for sharing; the source article is here: http://www.prtos.org/vxworks-wind-kernel/

This article discusses the design ideas behind the Wind kernel. As mentioned before, the Wind kernel of VxWorks is a tailorable microkernel that supports concurrent execution of multiple tasks, preemptive priority scheduling, optional time-slice scheduling, inter-task communication and synchronization mechanisms, fast context switching, low interrupt latency, fast interrupt response, interrupt nesting, 256 priority levels, priority inheritance, and a task deletion protection mechanism. The Wind kernel runs in privileged mode and uses neither trap instructions nor a jump table; all system calls are implemented as ordinary function calls.

The Wind kernel is a strongly real-time operating system kernel, but like other mature operating systems it is not a "hard real-time" kernel. "Hard real-time" means that when a certain event occurs, the system must respond within a predetermined time, before a deadline, or catastrophic consequences follow. An operating system with this property must, for every submitted job and its timing requirements, either make a commitment or reject the job immediately (so that the submitter can consider other measures), and it must guarantee that every commitment it makes is honored. Jobs submitted to a hard real-time system may also carry no timing requirements; the commitment to such jobs naturally excludes time, and they are treated as background jobs.

VxWorks does not use a deadline-based scheduling algorithm; that is, it does not accept timing requirements attached to submitted jobs. To improve real-time performance, the prevailing approach is priority-based preemptive scheduling, so that high-priority tasks execute first. Given sufficient computing resources, real-time requirements can then be met through proper task partitioning and priority assignment.

Note: Some people online call the VxWorks kernel a hard real-time kernel, which is not accurate. To be precise, VxWorks is a real-time system, but its real-time behavior is achieved through proper task partitioning and priority assignment on top of adequate computing resources. Other systems such as uC/OS and FreeRTOS work the same way.

2.1 Wind kernel structure

To improve real-time performance, most operating systems provide various mechanisms, such as a preemptible kernel and bottom-half interrupt processing. For the same reason, VxWorks is built as a hierarchical structure around a microkernel.

In the early days of computing, operating systems were mostly monolithic. In such a system, the modules providing different functions, such as processor management, memory management, and file management, were designed independently, with little attention to the relationships between them. Such an operating system has a clear and simple structure. However, because of the overall complexity of the operating system, it is difficult in such a coarse-grained system to draw the boundary between the preemptible and non-preemptible parts, and hard to keep redundant operations from executing in the non-preemptible part. The result is poor real-time performance, unsuitable for real-time application environments.

An operating system with a microkernel-based hierarchical structure solves this problem better. In such a system, the kernel is the starting point of the hierarchy, and each layer encapsulates the layers below it. The kernel need only contain the most essential operations, providing an abstraction layer between high-level software and low-level hardware and forming the minimum set of operations required by the rest of the operating system. This makes it relatively easy to draw the boundary between the preemptible and non-preemptible parts precisely, which reduces the work that must be done in the non-preemptible part of the kernel, enables faster kernel preemption, and improves the real-time performance of the system.

The best internal structural model for an operating system is a hierarchy with the kernel at the bottom. These levels can be seen as an inverted pyramid, each level built on the functions of the levels below it. The kernel contains only the most essential low-level functions an operating system performs. Just as in a monolithic operating system, the kernel provides an abstraction layer between high-level software and low-level hardware; however, it provides only the minimum set of operations required to construct the rest of the operating system.

The Wind kernel of VxWorks is exactly such a microkernel, well suited to building a hierarchical structure. It consists of the kernelLib, taskLib, semLib, tickLib, wdLib, schedLib, workQLib, windLib, windALib, semALib, and workQALib libraries. Among these, kernelLib, taskLib, semLib, tickLib, and wdLib make up the basic functionality of the VxWorks kernel and are the most fundamental, core parts of the Wind kernel. In such a kernel it is easy to achieve strong real-time performance, and the high-level encapsulation of the kernel lets users call only the top-level functions when developing VxWorks applications, without caring about the underlying implementation, which makes programming very convenient. The VxWorks kernel structure can be logically divided into 3 layers, as shown in Figure 2.1.


Figure 2.1 VxWorks kernel hierarchy

The routines guarded by the global variable kernelState constitute the kernel state of the Wind kernel. When kernelState is set to TRUE, code is running in the kernel state. The essence of VxWorks kernel mode is protecting the kernel data structures, preventing multiple pieces of code from accessing them at the same time, so it differs from the kernel-mode concept of general-purpose operating systems.

To enter kernel mode, simply set the global variable kernelState to TRUE. From that point on, all kernel data is protected from contention. When the kernel operation ends, VxWorks resets kernelState to FALSE via the windExit() routine. Consider who could compete for the kernel data structures while the system is in kernel mode. Clearly, an interrupt service routine is the only possible source of additional work requests while the Wind kernel is in kernel mode; once the system has entered kernel mode, the only path by which new kernel service requests can arrive is through interrupt service routines.

Before kernelState is set, it is always checked first; this check-then-set is the essence of the mutual exclusion mechanism. The VxWorks kernel implements mutual exclusion using a work-deferral technique: when kernelState is TRUE, the work to be done is not executed immediately but is placed in the work queue as a deferred job. The kernel work queue is drained only by windExit() before the context of the next task to run is restored (or by intExit() when an interrupt ISR was the first to enter kernel mode). At the moment kernel mode is about to end, interrupts are disabled while windExit() checks that the work queue is empty and then enters the selected task context.
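The deferral rule just described can be sketched in a few lines of C. This is a single-threaded toy model, not the VxWorks implementation: kernelState and the work-queue idea come from the article, while kernelRequest() and doJob() are hypothetical names standing in for a kernel service and the work it performs.

```c
#include <stdbool.h>

#define WORK_Q_MAX 64

static bool kernelState = false;   /* TRUE while code runs in kernel state */
static int  workQueue[WORK_Q_MAX]; /* deferred job arguments */
static int  workQCount = 0;
static int  processed = 0;         /* sum of job arguments actually executed */

/* Stand-in for a kernel operation on protected data structures. */
static void doJob(int arg) { processed += arg; }

/* Request a kernel service: run it now if the kernel is free,
 * otherwise defer it to the work queue (as an ISR would). */
void kernelRequest(int arg)
{
    if (kernelState) {                 /* kernel busy: defer the job */
        workQueue[workQCount++] = arg;
        return;
    }
    kernelState = true;                /* enter kernel state */
    doJob(arg);
    int head = 0;                      /* drain deferred jobs in FIFO order, */
    while (head < workQCount)          /* as windExit() does before restoring */
        doJob(workQueue[head++]);      /* the next task context               */
    workQCount = 0;
    kernelState = false;               /* leave kernel state */
}
```

Setting kernelState by hand simulates an ISR arriving while another request is inside the kernel: the job sits in the queue untouched until the next exit from kernel state drains it.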

As mentioned earlier, the Wind kernel uses the kernelState global variable to simulate a privileged state in software. In this privileged state, task preemption is prohibited. The privileged routines live in the windLib library; before executing a routine in windLib, the higher-level caller enters the kernel state (kernelState = 1) to obtain mutually exclusive use of all kernel queues. Routines in windLib are free to manipulate the kernel's task data structures. The kernel state (kernelState = 1) is a powerful mutual exclusion tool, but preemption is prohibited for as long as it lasts, and high preemption latency ruins the fast response a real-time system needs, so this mechanism must be used very conservatively (at present, the only open-source RTOS using this design is RTEMS). Indeed, the design concept of a microkernel is to keep the core small while still supporting higher-level applications. Remember what I said in the previous chapter? "A beautiful kernel is not about what functions can be added, but about what functions can be removed."

The Wind kernel leaves interrupts enabled while in the privileged state, which means the kernel can still respond to external interrupts. The innovation of the Wind kernel lies in the concept of the work queue. Since the kernel state can only be accessed mutually exclusively by higher-level code, when an interrupt occurs and no code is currently in the kernel state, the interrupt ISR first enters the kernel state (sets kernelState to TRUE) and then performs its work; if the kernel state is already occupied (kernelState = TRUE), the ISR's request is placed in the kernel work queue and the ISR returns immediately. When the code occupying kernel mode exits it (calls the windExit() routine), the jobs in the kernel work queue are executed and the deferred ISR work is processed (I will analyze this with the code in a subsequent blog post O(∩_∩)O).

The kernel-state routines guarded by kernelState in the Wind kernel reside in the windLib library. The routines beginning with wind* are shown in Figure 2.2.


Figure 2.2 Schematic diagram of VxWorks kernel mode routines

In the figure:

  • * indicates that the routine cannot be called from an interrupt ISR and can only be called at task level;
  • @* indicates that the routine can be called from an interrupt ISR;
  • # indicates that the routine is used internally by the Wind kernel;
  • @ indicates that the routine can run in kernel mode.

The components of the VxWorks system are shown in Figure 2.3.


Figure 2.3 VxWorks system composition diagram

2.2 Wind kernel classes and objects

The Wind kernel of VxWorks uses the ideas of classes and objects to organize its five components: the task management, memory management, message queue management, semaphore management, and watchdog management modules.

In the Wind kernel, every object belongs to a class; the class defines the methods for operating on the object and maintains statistics about all of its objects. The Wind kernel adopts C++ semantics but is implemented in C. The entire Wind kernel is written with explicit code, and its compilation does not depend on a specific compiler: the Wind kernel can be compiled not only with the Diab compiler shipped with VxWorks but also with the open-source GNU GCC compiler. VxWorks defines a metaclass for the Wind kernel, and all object classes (obj-classes) are based on this metaclass. Each object class is responsible only for maintaining the operations of its own objects (such as creating, initializing, and deleting an object) and for keeping statistics (such as the number of objects created and the number destroyed). This class-management scheme is not a kernel feature per se but an integral part of the operating system; all kernel objects, however, depend on it. Figure 2.4 shows the relationship between object classes, objects, and the metaclass across the components of the Wind kernel.
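The "C++ semantics in C" idea can be sketched as follows. This is an illustrative model, not the actual Wind kernel data layout: the names OBJ_CLASS, OBJ_CORE, classInit(), objCoreInit(), and objVerify() are assumptions echoing the article's description of classes, objects, and a single metaclass.

```c
#include <stddef.h>

struct obj_class;                       /* forward declaration */

typedef struct obj_core {               /* embedded in every object      */
    struct obj_class *pObjClass;        /* the class this object belongs to */
} OBJ_CORE;

typedef struct obj_class {
    OBJ_CORE objCore;                   /* a class is itself an object   */
    void (*initRtn)(OBJ_CORE *);        /* per-class init method         */
    void (*terminateRtn)(OBJ_CORE *);   /* per-class terminate method    */
    int createCnt;                      /* statistics: objects created   */
    int terminateCnt;                   /* statistics: objects destroyed */
} OBJ_CLASS;

OBJ_CLASS metaClass;                    /* the single metaclass */

/* Initialize a class: every class is an instance of the metaclass. */
void classInit(OBJ_CLASS *pClass, void (*initRtn)(OBJ_CORE *),
               void (*termRtn)(OBJ_CORE *))
{
    pClass->objCore.pObjClass = &metaClass;
    pClass->initRtn = initRtn;
    pClass->terminateRtn = termRtn;
    pClass->createCnt = pClass->terminateCnt = 0;
}

/* Bind an object to its class and record the creation. */
void objCoreInit(OBJ_CORE *pCore, OBJ_CLASS *pClass)
{
    pCore->pObjClass = pClass;
    pClass->createCnt++;
    if (pClass->initRtn != NULL)
        pClass->initRtn(pCore);
}

/* Type check: does this object belong to the expected class? */
int objVerify(OBJ_CORE *pCore, OBJ_CLASS *pClass)
{
    return pCore->pObjClass == pClass;
}
```

The back-pointer in OBJ_CORE is what makes the type verification mentioned in the remarks below cheap: checking an object's class is a single pointer comparison.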


Figure 2.4 Schematic diagram of the relationship between object classes, objects, and the metaclass

Remarks: By adopting the design ideas of objects and classes, the components of the VxWorks Wind kernel are organized organically. When creating instances of the same component, it is easy to verify the correctness of the instance type; and since all component object classes derive from the same metaclass, the operation statistics of all objects are maintained uniformly.

2.3 Wind kernel features

Multitasking: The basic function of the kernel is to provide a multitasking environment. Multitasking makes many programs appear to execute concurrently, while in fact the kernel interleaves their execution according to its scheduling algorithm. Each apparently independent program is a task; each task has its own context, which contains the CPU environment and system resources it sees each time the kernel schedules it to run.

Task states: The kernel maintains the current state of each task in the system. State transitions occur when the application calls kernel services. The Wind kernel states are defined as follows:

Ready state - the task is waiting only for the CPU, not for any other resource

Blocked state - the task is blocked because some resource is unavailable

Delayed state - the task is asleep for a period of time

Suspended state - an auxiliary state used mainly for debugging; suspension inhibits task execution

After a task is created it enters the suspended state; a specific operation is required to move it to the ready state. This operation executes very quickly, so an application can create tasks in advance and activate them rapidly.
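The four states and the create-then-activate path can be modeled minimally. The function names follow the VxWorks taskInit()/taskActivate() convention, but this enum and struct are illustrative assumptions, not the real WIND_TCB.

```c
/* The four Wind task states described above. */
typedef enum {
    STATE_READY,     /* waiting only for the CPU           */
    STATE_PEND,      /* blocked on an unavailable resource */
    STATE_DELAY,     /* asleep for some number of ticks    */
    STATE_SUSPEND    /* debugging aid / not yet activated  */
} TASK_STATE;

typedef struct { TASK_STATE state; } TCB;

/* A newly created task starts out suspended... */
void taskInit(TCB *t)     { t->state = STATE_SUSPEND; }

/* ...and a separate, very fast operation makes it ready. */
void taskActivate(TCB *t) { t->state = STATE_READY; }
```

Splitting creation from activation is what lets an application pre-create tasks and later start them with minimal latency.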

Scheduling control: Multitasking requires a scheduling algorithm to allocate the CPU among ready tasks. The default algorithm in VxWorks is priority-based preemptive scheduling, but applications can also enable time-slice round-robin scheduling.

  • Priority-based preemptive scheduling: each task is assigned a priority, and the kernel gives the CPU to the highest-priority ready task. Scheduling is preemptive: when a task with a higher priority than the current task becomes ready, the kernel immediately saves the current task's context and switches to the context of the high-priority task. VxWorks has 256 priority levels, 0 through 255. A task is assigned a priority at creation, and the priority can be modified dynamically while the task runs in order to track real-world event priorities. External interrupts take precedence over any task, so a task can be preempted at any time.
  • Time-slice round-robin: priority-based preemptive scheduling can be extended with time-slice round-robin scheduling, which lets ready tasks of the same priority share the CPU fairly. Without it, when multiple tasks share the processor at one priority, a single task can monopolize the CPU and, until it blocks or is preempted by a higher-priority task, never give other tasks of the same priority a chance to run. With time slicing enabled, the running task's tick counter is incremented on every clock tick; when the allotted slice is exhausted, the counter is cleared and the task is placed at the tail of the queue of tasks at its priority. New tasks joining a priority group are placed at the tail of the group with their run counter initialized to zero.
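The tick-counter bookkeeping in the second bullet can be sketched as follows. tickAnnounce() loosely echoes the VxWorks routine of that name, but the group array, SLICE_TICKS, and rotation logic here are illustrative assumptions for a single priority group.

```c
#define SLICE_TICKS 10                 /* assumed slice length, in ticks */
#define GROUP_MAX   8

typedef struct {
    int id;
    int runTicks;                      /* ticks consumed in the current slice */
} TASK;

static TASK *group[GROUP_MAX];         /* ready tasks at one priority, FIFO;  */
static int   groupLen = 0;             /* group[0] is the running task        */

/* Move the head (running) task to the tail of its priority group. */
static void rotateGroup(void)
{
    TASK *head = group[0];
    for (int i = 1; i < groupLen; i++)
        group[i - 1] = group[i];
    group[groupLen - 1] = head;
}

/* Called once per clock tick while time slicing is enabled. */
void tickAnnounce(void)
{
    TASK *running = group[0];
    if (++running->runTicks >= SLICE_TICKS) {
        running->runTicks = 0;         /* slice exhausted: clear the counter */
        rotateGroup();                 /* yield to the next same-priority task */
    }
}
```

Note that rotation only happens among tasks of equal priority; a higher-priority task becoming ready preempts immediately, regardless of the slice.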

Basic task functions: The basic task-control functions include creating, deleting, suspending, and resuming a task. A task can also put itself to sleep for a specified interval. Many other task routines provide status information from the task context, including access to a task's current processor registers.

Task deletion problem: The Wind kernel provides a mechanism to protect tasks from accidental deletion. Typically, a task executing in a critical section or holding a critical resource needs special protection. Imagine the following: a task has gained exclusive access to some data structure and is deleted by another task while executing in the critical section. Because the task cannot complete its operation on the critical section, the data structure may be left corrupted or inconsistent. Moreover, since the task never gets the chance to release the resource, no other task can obtain it: the resource is frozen.

Any task attempting to delete or terminate a task that has deletion protection is blocked. When the protected task completes its critical-section operation, it removes the deletion protection, making itself deletable and thereby unblocking the deleting task.

As suggested above, task deletion protection usually accompanies mutual exclusion. For convenience and efficiency, the mutex semaphore therefore includes a deletion-protection option (I will introduce it in detail in a subsequent blog post).
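A minimal sketch of the protection count behind this mechanism, in the spirit of the VxWorks taskSafe()/taskUnsafe() pair: here a refused deletion simply returns an error, whereas in the real kernel the deleting task blocks until the count drops to zero. The TCB struct and taskDelete() logic are illustrative assumptions.

```c
typedef struct {
    int safeCnt;    /* nested deletion-protection count */
    int deleted;
} TCB;

/* Enter a protected region: deletion requests must now wait. */
void taskSafe(TCB *t)   { t->safeCnt++; }

/* Leave the protected region (calls nest, so only decrement). */
void taskUnsafe(TCB *t) { if (t->safeCnt > 0) t->safeCnt--; }

/* Returns 0 on success, -1 if the task is currently protected
 * (the real kernel would block the deleter instead). */
int taskDelete(TCB *t)
{
    if (t->safeCnt > 0)
        return -1;
    t->deleted = 1;
    return 0;
}
```

Because the count nests, a task that takes several protected resources stays undeletable until the last taskUnsafe().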

Inter-task communication: To provide a complete multitasking system, the Wind kernel offers a rich set of inter-task communication and synchronization mechanisms. These facilities let the independent tasks of an application coordinate their activities.

Shared address space: The foundation of the Wind kernel's inter-task communication is the shared address space in which all tasks reside. Thanks to it, tasks can communicate freely using pointers to shared data structures; there is no need, as with pipes between processes, to map a memory region into the address spaces of two communicating tasks.

Note: Unfortunately, while the shared address space has the above advantages, it also brings the danger of unprotected concurrent access to memory. UNIX and Linux provide such protection by isolating processes, but for a real-time operating system that isolation carries a heavy performance cost.

Mutual exclusion: While a shared address space simplifies data exchange, it makes mutually exclusive access necessary to avoid resource contention. The many mechanisms for obtaining exclusive access to a resource differ only in their scope. Methods include disabling interrupts, disabling task preemption, and locking resources with semaphores.

  • Disabling interrupts: The strongest mutual exclusion method is to mask interrupts. Such a lock guarantees exclusive access to the CPU. It certainly solves the mutual exclusion problem, but it is inappropriate for real-time systems because it prevents the system from responding to external events for the duration of the lock. Long interrupt latency is unacceptable for applications that require a deterministic response time.
  • Disabling preemption: Disabling preemption provides a somewhat weaker form of mutual exclusion. While the current task runs, no other task may preempt it, although interrupt service routines can still execute. This, too, can hurt real-time response: as with disabled interrupts, the preemption latency is long, and a ready high-priority task may be forced to wait an unacceptably long time before executing. To avoid this, use semaphores for mutual exclusion wherever possible.
  • Mutex semaphores: The semaphore is the basic means of locking access to shared resources. Unlike disabling interrupts or preemption, a semaphore restricts mutual exclusion to the resource concerned. A semaphore is created to protect a resource; VxWorks semaphores follow Dijkstra's P() and V() operations.

When a task requests a semaphore with the P() operation, what happens depends on whether the semaphore is set or cleared at the time of the call. If the semaphore is set, it is cleared and the task continues executing immediately. If the semaphore is cleared, the task blocks, waiting for the semaphore.

When a task releases a semaphore with the V() operation, several outcomes are possible. If the semaphore is already set, releasing it has no effect. If it is cleared and no task is waiting, it is simply set. If it is cleared and one or more tasks are waiting, the highest-priority waiting task is unblocked and the semaphore remains cleared.

Mutual exclusion is achieved by associating a resource with a semaphore. When a task wants to manipulate the resource, it must first obtain the semaphore; as long as it holds the semaphore, all other tasks requesting it are blocked. When the task finishes with the resource, it releases the semaphore, allowing another waiting task to access the resource.
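The P()/V() rules of the last three paragraphs can be modeled in a single-threaded sketch, where "blocking" is recorded in a pend list instead of actually suspending a task. BSEM, semTake(), and semGive() are illustrative names (semTake/semGive echo the VxWorks API, and lower numbers mean higher priority as in VxWorks), not the real implementation.

```c
#include <stdbool.h>

#define PEND_MAX 8

typedef struct {
    bool set;               /* true = available, false = taken ("cleared") */
    int  pendQ[PEND_MAX];   /* priorities of tasks "blocked" on the sem    */
    int  pendCnt;
} BSEM;

/* P(): take the semaphore, or record the caller as blocked. */
void semTake(BSEM *s, int taskPrio)
{
    if (s->set)
        s->set = false;                    /* got it: continue immediately */
    else
        s->pendQ[s->pendCnt++] = taskPrio; /* cleared: block */
}

/* V(): release. Returns the priority of the task woken, or -1 if none. */
int semGive(BSEM *s)
{
    if (s->set)
        return -1;                         /* already set: no effect */
    if (s->pendCnt == 0) {
        s->set = true;                     /* nobody waiting: just set it */
        return -1;
    }
    int best = 0;                          /* wake the highest-priority   */
    for (int i = 1; i < s->pendCnt; i++)   /* waiter (lowest number)      */
        if (s->pendQ[i] < s->pendQ[best])
            best = i;
    int woken = s->pendQ[best];
    s->pendQ[best] = s->pendQ[--s->pendCnt];
    return woken;                          /* semaphore stays cleared */
}
```

The last branch of semGive() captures the subtle rule above: when a waiter is woken, the semaphore is handed directly to it and therefore remains cleared.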

The Wind kernel provides binary semaphores to address the problems that accompany mutual exclusion, including deletion protection for resource owners and priority inversion caused by resource contention:

  • Deletion protection: One problem involving mutual exclusion is task deletion. In a critical section protected by a semaphore, the executing task must be protected from accidental deletion. Deleting a task executing in a critical section is disastrous: the resource may be left corrupted, and the semaphore protecting it becomes unavailable, making the resource inaccessible. Deletion protection is therefore usually provided together with mutual exclusion; mutex semaphores typically offer an option that implicitly provides the task deletion protection described earlier.
  • Priority inversion / priority inheritance: Priority inversion occurs when a high-priority task is forced to wait an indeterminate time for a lower-priority task to finish executing. Consider the following scenario (already introduced in a previous blog post O(∩_∩)O):

T1, T2, and T3 are tasks of high, medium, and low priority respectively. T3 has acquired a resource by taking its semaphore. When T1 preempts T3 and requests the same semaphore, it blocks. If T1 were blocked only until T3 finished with the resource, the situation would not be so bad; after all, the resource cannot be preempted. But the low-priority T3 can itself be preempted by the medium-priority T2, which prevents T3 from finishing with the resource, and T1 may remain blocked for an indeterminate time. This situation is called priority inversion: although the system schedules by priority, a high-priority task ends up waiting for a low-priority task to finish. Mutex semaphores offer an option enabling a priority-inheritance algorithm. Priority inheritance solves the problem by raising T3's priority to T1's while T1 is blocked, which protects T3, and indirectly T1, from preemption by T2. In plain terms, the priority-inheritance protocol makes the task that owns a resource execute at the priority of the highest-priority task waiting for that resource; when it finishes, the task releases the resource and returns to its normal priority. A task that inherits priority therefore avoids preemption by any intermediate-priority task.
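The T1/T2/T3 scenario reduces to one numeric rule, sketched below with lower numbers meaning higher priority, as in VxWorks. The TASK and MUTEX structs and the mutexBlock()/mutexRelease() names are illustrative assumptions, not the semLib implementation.

```c
#include <stddef.h>

typedef struct {
    int basePrio;    /* normal (assigned) priority            */
    int curPrio;     /* effective priority, possibly inherited */
} TASK;

typedef struct { TASK *owner; } MUTEX;

/* A waiter blocking on an owned mutex raises the owner to the
 * waiter's priority if the waiter's is higher (numerically lower). */
void mutexBlock(MUTEX *m, TASK *waiter)
{
    if (m->owner != NULL && waiter->curPrio < m->owner->curPrio)
        m->owner->curPrio = waiter->curPrio;   /* inherit */
}

/* Releasing the mutex restores the owner's normal priority. */
void mutexRelease(MUTEX *m)
{
    m->owner->curPrio = m->owner->basePrio;
    m->owner = NULL;
}
```

With T3's effective priority raised to T1's, the medium-priority T2 can no longer preempt T3, so T1's wait is bounded by T3's critical section alone.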

Synchronization: Another common use of semaphores is as a synchronization mechanism between tasks. In this case the semaphore represents a condition or event that a task is waiting for. Initially the semaphore is cleared. A task or interrupt signals the occurrence of the event by setting the semaphore; a task waiting for the semaphore blocks until the event occurs and the semaphore is set. Once unblocked, the task executes the appropriate event handler. Using semaphores for task synchronization is very useful for moving lengthy event processing out of interrupt service routines, shortening interrupt response time.

Message queues: Message queues provide a low-level mechanism for exchanging variable-length messages between tasks, or between interrupt service routines and tasks. The mechanism is functionally similar to pipes but has less overhead.

Pipes, sockets, and remote procedure calls: VxWorks provides many higher-level abstractions for inter-task communication, including pipes, TCP/IP sockets, remote procedure calls, and more. To preserve the design goal of reducing the kernel to the minimum function set sufficient to support higher-level features, these facilities are built on the kernel synchronization methods described above.

2.4 Advantages of the Wind kernel design

An important design feature of the Wind kernel is minimal preemption latency. Other major design advantages include unprecedented configurability, extensibility for unforeseen application requirements, and portability for development across a variety of microprocessors.

Minimal preemption latency: As discussed earlier, disabling preemption is a common way to obtain mutually exclusive access to critical code and resources. Its undesirable side effect is high preemption latency, which can be mitigated by using semaphores for mutual exclusion whenever possible and keeping critical sections as compact as possible. But even widespread use of semaphores cannot remove every source of preemption latency: the kernel itself is one. To understand why, we need a closer look at the mutual exclusion the kernel requires.

Kernel level and task level: In any multitasking system, most application work takes place in the context of one or more tasks. However, some CPU time falls outside any task context: the intervals in which the kernel manipulates its internal queues or makes scheduling decisions. During these intervals the CPU executes at kernel level rather than task level.

For the kernel to operate safely on its internal data structures, mutual exclusion is required. At kernel level there is no task context, so the kernel cannot use semaphores to protect its internal lists. Instead, the kernel uses work deferral as its mutual exclusion method: while the kernel is busy, a function called by an interrupt service routine is not invoked directly but is placed in the kernel work queue. The kernel finishes executing these requests and drains the work queue.

While the kernel is executing a service request, the system does not respond to newly arriving kernel calls; the kernel state can be thought of as similar to disabled preemption. As discussed earlier, preemption latency is undesirable in a real-time system because it increases the response time to events that should cause application tasks to be rescheduled. Although an operating system cannot entirely avoid time spent at kernel level (during which preemption is disabled), it is important to minimize that time. This is the main reason for reducing the number of functions the kernel performs, and the reason for not adopting a monolithic system design.

VxWorks demonstrates that a minimal kernel combined with task-level operating system services can meet the demand. VxWorks is a real-time operating system, available today, with a rather small kernel and a fully functional hierarchical structure, independent of any particular processor.

The VxWorks system provides a great deal of functionality on top of the Wind kernel: memory management, a complete BSD 4.3 network stack with TCP/IP, the Network File System (NFS), remote procedure calls (RPC), UNIX-compatible link-loadable modules, a C-language interpreter interface, various timers, performance monitoring components, debugging tools, additional communication facilities such as pipes, signals, and sockets, I/O and file systems, and many utility routines. None of these run at kernel level, so none of them disable interrupts or task preemption.

Configurability: Real-time applications place varied demands on a kernel, and no single set of design compromises suits every need. However, a kernel can expose configuration points so that specific performance characteristics can be tuned and the real-time system tailored to best fit an application. Where requirements cannot be anticipated, kernel configurability is offered to the application, for example in the form of user-selectable kernel queuing algorithms.

Queuing strategies: The queuing libraries in VxWorks are implemented independently of the kernel functions that use them, which provides the flexibility to add new queuing methods in the future.

There are various kernel queues in VxWorks. The ready queue holds all tasks waiting to be scheduled, indexed by priority. The tick queue serves the timing functions. A semaphore queue is a linked list of blocked tasks waiting on a semaphore. The active queue is a first-in, first-out (FIFO) list of all tasks in the system. Each of these queues requires a different queuing algorithm. These algorithms are not embedded in the kernel but are factored out into an autonomous, interchangeable queuing library. This flexible organization is the basis for meeting special configuration requirements.
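The interchangeable-queue idea amounts to putting the policy behind a small table of function pointers, so a FIFO active queue and a priority-sorted pend queue share one interface. Q_NODE, Q_CLASS, and the put/get routines below are illustrative assumptions, not the actual VxWorks qLib layout.

```c
#include <stddef.h>

typedef struct q_node {
    int prio;                   /* lower number = higher priority */
    struct q_node *next;
} Q_NODE;

typedef struct {                /* one "queue class" = one policy */
    Q_NODE *(*get)(Q_NODE **head);            /* remove next node */
    void    (*put)(Q_NODE **head, Q_NODE *n); /* insert a node    */
} Q_CLASS;

/* FIFO policy: append at the tail (like the active queue). */
static void fifoPut(Q_NODE **head, Q_NODE *n)
{
    n->next = NULL;
    while (*head != NULL)
        head = &(*head)->next;
    *head = n;
}

/* Priority policy: keep the list sorted, highest priority first
 * (like the ready and semaphore pend queues). */
static void prioPut(Q_NODE **head, Q_NODE *n)
{
    while (*head != NULL && (*head)->prio <= n->prio)
        head = &(*head)->next;
    n->next = *head;
    *head = n;
}

/* Both policies remove from the head. */
static Q_NODE *headGet(Q_NODE **head)
{
    Q_NODE *n = *head;
    if (n != NULL)
        *head = n->next;
    return n;
}

Q_CLASS fifoQClass = { headGet, fifoPut };
Q_CLASS prioQClass = { headGet, prioPut };
```

Kernel code that manipulates a queue calls only through the Q_CLASS table, so a new queuing discipline can be added without touching the kernel itself.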

Extensibility: The ability to support unforeseen kernel extensions is as important as configurability of existing functions. Simple kernel interfaces and mutual exclusion methods make kernel-level extensions fairly easy; in some cases, applications can implement specific extensions using nothing but the kernel hook functions.

Kernel hook functions: To let additional task-related facilities be added to the system without modifying the kernel, VxWorks provides hooks for task creation, task switching, and task deletion. These allow additional routines to be executed whenever a task is created, a context switch occurs, or a task is deleted. The hook routines can use the spare area in the task context to build additional task features on the Wind kernel.
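The hook mechanism is essentially a table of callbacks that the kernel walks at the right moment. taskCreateHookAdd() echoes the VxWorks API name; the table size, runCreateHooks(), the bare WIND_TCB stand-in, and the counting example hook are illustrative assumptions.

```c
#define HOOK_MAX 4

typedef struct { int id; } WIND_TCB;        /* stand-in task control block */
typedef void (*HOOK_FUNC)(WIND_TCB *);

static HOOK_FUNC createHooks[HOOK_MAX];     /* registered creation hooks */
static int createHookCnt = 0;

/* Register a routine to run whenever a task is created. */
int taskCreateHookAdd(HOOK_FUNC hook)
{
    if (createHookCnt >= HOOK_MAX)
        return -1;                          /* table full */
    createHooks[createHookCnt++] = hook;
    return 0;
}

/* Called by the kernel at task-creation time: walk the table. */
void runCreateHooks(WIND_TCB *tcb)
{
    for (int i = 0; i < createHookCnt; i++)
        createHooks[i](tcb);
}

/* Example hook: count how many tasks have been created. */
static int tasksCreated = 0;
static void countHook(WIND_TCB *tcb) { (void)tcb; tasksCreated++; }
```

Because extensions register themselves rather than patching kernel code, several independent facilities (debuggers, profilers, per-task bookkeeping) can coexist on the same hook point.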

Future considerations: Several system capabilities are becoming increasingly important and will affect preemption latency in kernel design. A full discussion is beyond the scope of this post, but they deserve a brief mention.

Designing an operating system independent of the CPU has always been a challenge, and the rise of RISC (reduced instruction set) processors has added to the difficulty. To execute efficiently in a RISC environment, the kernel and operating system need the flexibility to adopt different strategies.

For example, consider the routines the kernel executes during task switching. On a CISC (complex instruction set, e.g. 680x0 or 80x86) CPU, the kernel stores a complete register set for each task and swaps these registers in and out as tasks run. On a RISC machine this is unreasonable because too many registers are involved, so the kernel needs a more sophisticated strategy, such as caching registers for tasks or allowing applications to dedicate some registers to particular tasks.

Portability: For the Wind kernel to run on new architectures as they appear, a portable version of the kernel is required. This makes porting feasible, though not optimal.

Multiprocessing: Supporting tightly coupled multiprocessing requires that the internal functions of the real-time kernel, ideally, allow kernel calls to be requested remotely, from one processor to another. This involves semaphore calls (for synchronization between processors) and task calls (to control tasks on another CPU). Such complexity inevitably increases the overhead of kernel-level function calls, but many services, such as object identification, can be performed at task level. The advantage of keeping a minimal kernel in a multiprocessing system is that inter-processor interlocking can be done at a finer time granularity; a large kernel consumes extra time at kernel level and can achieve only a coarse interlock granularity.

Important metrics of real-time kernels: Many performance characteristics are used to compare existing real-time kernels. These include:

  • Fast task context switching - given the multitasking nature of real-time systems, it is important that the system can switch quickly from one task to another. In a time-sharing system such as UNIX, a context switch takes on the order of milliseconds; the raw context switch of the Wind kernel is measured in microseconds.
  • Minimal synchronization overhead - because synchronization is the basic method of achieving mutually exclusive access to resources, it is important to minimize the overhead of these operations. In VxWorks, taking and giving a binary semaphore are likewise measured in microseconds.
  • Minimal interrupt latency - because events from the outside world usually arrive as interrupts, it is important that the operating system handle them quickly. The kernel must disable interrupts while operating on certain critical data structures, and to reduce interrupt latency these intervals must be minimized. The interrupt latency of the Wind kernel is also at the microsecond level.

Note: Concrete performance figures can only be obtained by direct measurement on the specific target board.

The impact of preemption latency on performance metrics: As more real-time solutions are presented to application engineers, performance metrics become increasingly important for evaluating vendor products. Unlike context switch time and interrupt latency, preemption latency is difficult to measure, so it is rarely mentioned in product literature. Yet considering that a kernel may disable context switching for intervals that can reach hundreds of microseconds, it is meaningless to claim a fixed 50 µs context switch time (regardless of the number of tasks). Besides being hard to measure, preemption latency can undermine the validity of many performance metrics. The Wind kernel minimizes preemption latency by keeping the kernel small; a kernel with many functions inevitably incurs a long preemption latency.

So far I have given a coarse-grained introduction to the Wind kernel. In the next article, I will elaborate on the specifics of the Wind kernel in combination with the code. O(∩_∩)O


Origin blog.csdn.net/qq543716996/article/details/105246387