Explaining the World's Operating Systems

This article was written by the founder of drawon (https://www.drawon.cn) and Yunjing (https://www.yunjingxz.com), based on years of experience. It describes operating systems at a macro level: their origin, classification by application, classification by design, and how they use resources. After reading it, you should have a more systematic, macroscopic, and in-depth understanding of operating systems. Each of us uses several operating systems every day, so it is worth having at least a basic, big-picture understanding of them.

Foreground/background systems

The foreground/background system is the ancient ancestor of the operating system. Before operating systems were born, a basic computer system followed the foreground/background architecture.

The concept of the foreground/background system

Before operating systems appeared, systems were foreground/background systems. For example, the MCUs used in most embedded systems can run simple C-language programs; when such a single-chip microcomputer runs bare-metal, with no operating system, we consider it a foreground/background system. The following figure shows the model of a foreground/background system:

From the figure, we can see the following characteristics:

The application is an infinite loop. Generally, an infinite loop is written in the main function; the loop usually has some delay, runs once every so often, and calls the corresponding functions inside the loop to complete the corresponding operations. This part can be regarded as the background behavior (background). Scanning in an infinite loop like this is polling.

The background runs forever (the background is always running): this main function never returns.

When there is an urgent task to be handled, the MCU provides an interrupt mechanism: each interrupt vector is bound to an interrupt handler that does the urgent processing. This interrupt-handler behavior is what we call the foreground behavior (foreground). When an urgent event occurs, it is caught immediately by the interrupt mechanism. Urgent events are generally triggered by external I/O; trigger modes include high-to-low level, low-to-high level, and rising- or falling-edge triggers. There are also internal triggers such as timers.

The following is a typical code model of a foreground/background system:

void OnUartDataReceived(const unsigned char *buffer, int count);

/* Background: the main function never returns; it polls in an infinite loop. */
void main(void) {
    InitSystem();                        /* one-time hardware init (clocks, GPIO, UART, ...) */
    InitUartIsr(OnUartDataReceived);     /* register the UART interrupt handler (foreground) */
    for (;;) {                           /* the background super-loop */
        LED0 = ON;                       /* background work: blink an LED */
        DelayMillisecs(500);
        LED0 = OFF;
        DelayMillisecs(500);
    }
}

/* Foreground: interrupt handler, called when UART data arrives. */
void OnUartDataReceived(const unsigned char *buffer, int count) {
    /* handle the urgent event here */
}

Advantages of foreground/background systems

The mechanism of a foreground/background system is relatively simple, and the demands it places on the programmer are relatively low: with some C language and basic hardware knowledge, you can already do useful things, such as lighting up an LED. Foreground/background systems can be learned on a single-chip microcomputer.

The following are some books on single-chip microcomputers that can be used to learn foreground/background systems.

The advantages of foreground/background systems are as follows:

  • The cost is low. A single-chip microcomputer can generally be bought for anywhere from a few cents to a few dollars.
  • The demand is very large: any system with some simple control logic mostly uses a simple MCU running a foreground/background system inside. Below we give a distribution pyramid diagram of operating systems.

  • Design, development, and coding are relatively simple. The development and design of foreground/background systems is generally done by embedded engineers.

Disadvantages of foreground/background systems

  • The application scenarios are relatively simple: controlling simple peripherals and doing simple calculations. Large, complex logic or interaction is difficult for a foreground/background system to handle. (That said, some skilled experts can use a foreground/background system to do what an operating system does, and even things an operating system finds difficult; small talents have their uses.)
  • Resources are hard to schedule systematically. The CPU of a foreground/background system is always busy, and its resources are hard to schedule in a systematic way.
  • The code is highly coupled: modules have "you in me and me in you". In a foreground/background system, every software module added may affect functions that were already working. Therefore, to use a foreground/background system well, you must keep the whole picture in mind and understand other people's program logic.

Operating systems

Basic concepts

An operating system is the system software (or collection of programs) that controls and manages the various hardware and software resources in a computer system and effectively organizes the running of multiple programs; it is the interface between the user and the computer.

The mainstream operating systems are shown below:

Windows operating system

macOS operating system

Chrome OS operating system

On mobile: iOS, Android, and Windows Phone (now history)

The operating system determines how you interact with the system and what ecosystem you live in; everyone has their own favorite operating system.

Operating system classification

By family

The family history of Unix is as follows:

As you can see from the picture above, Unix (originally Unics) is the ancestor of most operating systems; many operating systems evolved from Unix.

Evolution diagram of the Unix family

As can be seen from the figure above:

  • Apple's very well-known desktop operating system macOS and its mobile operating system iOS are both derived from the Unix system.
  • Google's Android and Chrome OS are modified from the Linux system.
  • Ubuntu is a community open-source distribution of the Linux system.
  • In the automotive field, QNX, the microkernel system that claims to be very powerful, also comes from the Unix lineage.

Classified by Kernel Type

The concepts of microkernel and monolithic kernel (macrokernel)

The core functions of an operating system are task scheduling and the abstraction and management of memory and devices. On top of that, for our convenience, things like system services, drivers, and file systems are integrated as well.

For the programs we normally run, each program runs for a few tens of milliseconds at a time and they all take turns, so to us these programs appear to be running "simultaneously". The reason applications can be scheduled by the operating system through time slices is that, to the CPU, ordinary applications and the operating system kernel run at different privilege levels, called rings: applications run in Ring 3, while the kernel runs in Ring 0.
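
To make the time-slice idea concrete, here is a toy sketch in plain C (not how a real scheduler is written: there is no preemption and no Ring 0/Ring 3 switch; the three tasks and the 10 ms slice are made up for the illustration) in which a loop hands the CPU to each task in turn:

#include <stdio.h>
#include <unistd.h>

typedef void (*task_fn)(void);                /* a "task" is just a function here */

static void task_a(void) { printf("task A runs for its slice\n"); }
static void task_b(void) { printf("task B runs for its slice\n"); }
static void task_c(void) { printf("task C runs for its slice\n"); }

int main(void) {
    task_fn tasks[] = { task_a, task_b, task_c };
    for (int round = 0; round < 3; round++) {            /* a few scheduling rounds */
        for (int i = 0; i < 3; i++) {
            tasks[i]();                                  /* hand the CPU to task i */
            usleep(10 * 1000);                           /* a made-up ~10 ms time slice */
        }
    }
    return 0;
}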

With the development of technology, operating systems became more and more complex, and more and more things were packed into the kernel. People began to consider whether the original architecture should be changed to improve the performance and stability of the operating system: mainly by simplifying the kernel to reduce development complexity, and by isolating programs from each other as much as possible so that one crashing program cannot drag down the others.

The microkernel, hotly debated in the 1980s, is exactly such an architecture.

Several kernel architectures

Both theoretical design and practical engineering are exercises in compromise. Hence the emergence of the hybrid kernel, which combines the different advantages of the monolithic kernel and the microkernel and sits between the two designs. OS X and Windows fall into this category.

Advantages of microkernels

The microkernel keeps only the most basic functions of the operating system inside the kernel, namely task scheduling and the abstraction and management of memory and devices. All other functions are moved out of the kernel, implemented in user mode, and provide services to other applications in client/server (C/S) mode.

The benefits of the microkernel are mainly stability and real-time performance. With fewer modules in the kernel, the structure is leaner and better optimized, fewer programs and drivers can affect the kernel, and stability improves accordingly. The other benefit is real-time performance: after the kernel is simplified, response latency drops. Streamlining does not, however, improve throughput: the microkernel keeps only the most critical parts in the kernel, and the other modules and system functions all run as independent modules in user space; once functionality is spread out like this, the cost of communication rises. Even so, these characteristics make microkernel operating systems particularly suitable for industrial control, and the design is simple, so they are widely used in small systems. In addition, many real-time operating systems are designed with a microkernel architecture.
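
As a rough sketch of what serving requests in C/S mode looks like, and of the extra communication cost it brings, here is a minimal example using a POSIX message queue (assumptions: a POSIX system; the queue name /os_demo_queue and the "read sector 42" request are made up; on Linux, build with gcc file.c -lrt; a real microkernel uses its own, much faster IPC primitives):

#include <fcntl.h>      /* O_CREAT, O_RDWR */
#include <mqueue.h>     /* mq_open, mq_send, mq_receive */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

#define QUEUE_NAME "/os_demo_queue"           /* made-up name for this sketch */

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };
    mq_unlink(QUEUE_NAME);                    /* start from a clean state */
    mqd_t q = mq_open(QUEUE_NAME, O_CREAT | O_RDWR, 0600, &attr);
    if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

    if (fork() == 0) {                        /* child: the user-space "service" */
        char req[64];
        if (mq_receive(q, req, sizeof(req), NULL) >= 0)
            printf("service: handling request \"%s\"\n", req);
        return 0;
    }

    /* Parent: the "client". Instead of calling a kernel routine directly, it
     * sends a request message and the service answers; this extra round trip
     * is the communication cost mentioned above. */
    mq_send(q, "read sector 42", strlen("read sector 42") + 1, 0);
    wait(NULL);
    mq_close(q);
    mq_unlink(QUEUE_NAME);
    return 0;
}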

To sum up in a few words:

  • Worse is better.

    • In the computer field, elegantly designed products often end up failing.
    • This is how Unix won over Multics.
    • Lisp (a general-purpose high-level programming language) is not as popular as C.
    • The vision of OSI was finally realized by the TCP/IP protocol suite.
    • I believe that the so-called real cloud operating system in China will eventually be completed by Yunjing, a new generation of cloud operating system (just kidding).
    • Microsoft's WPF makes full use of the MVVM design pattern, and the design is so pure that it never became popular.

    ......

  • The monolithic (macro) kernel is a magnificent palace.

  • The microkernel is an exquisite little villa.

Why doesn't Linux use a microkernel or hybrid kernel?

Problems that are perfect in theory run into compromise after compromise in practice, because what you build has to be deployed in a real production environment, not be a piece that only looks perfect in the lab. The high degree of modularization in a microkernel naturally has a cost: extra code for interaction between modules and a loss of efficiency, and that is precisely a big problem.

Linus can write all these messy pieces by himself, get them right in one go, and have them run stably without bugs; the rest of us cannot, and can only rely on protected mode to contain the piles of rubbish written by hundreds of engineers that blue-screen at every turn. Being weak yet questioning the genius's approach, and knowing you are weak yet imitating the genius's approach, are both failures to recognize reality.

Linux itself started out as nothing more than a side project of Linus. From an implementation point of view, a monolithic kernel is more convenient than a microkernel because it does not have to deal with message queues and the like.

Everyone knows that Linus is not interested in microkernels; as long as he is around, the kernel will not consider becoming a microkernel. He is a pragmatist, and he has a famous saying: "Talk is cheap. Show me the code."

Linus once wrote, "Gaah. Guys, this whole ARM thing is a f*cking pain in the ass," and went on to push Device Tree. (Off topic: how domineering.) This is the flamboyant, free-and-easy side of Linus; savor it.

Classified by real-time behavior

By real-time behavior, operating systems are divided into hard real-time and soft real-time. How do we compare hard real-time and soft real-time? In practice, real-time performance is measured by the interrupt response time.

Criteria for measuring real-time performance:

  • Interrupt response time.

    Interrupt response time = maximum interrupt-disable time + time to save the CPU's internal registers + time to enter the interrupt service function + time until the first instruction of the interrupt service routine (ISR) begins to execute

  • Task switching time: the time from when the current task is suspended to when the task being switched to starts running.

  • A hard real-time operating system must deterministically meet timing requirements in the face of varying loads (from minimal to worst-case). This has nothing to do with how powerful the CPU is; the timing must be deterministic.

  • Representative real-time operating systems:

    • Linux is a representative soft real-time operating system.
    • VxWorks (from Wind River) is a representative hard real-time operating system.

    The following table compares the real-time performance of several real-time operating systems:

    |                    | VxWorks | uCOS-II    | RT-Linux2  | QNX6       | macOS | Windows | Linux (general-purpose) |
    | ------------------ | ------- | ---------- | ---------- | ---------- | ----- | ------- | ----------------------- |
    | Hardware platform  | MC68000 | 33 MHz 486 | 60 MHz 486 | 33 MHz 486 | n/a   | n/a     | n/a                     |
    | Task switching     | 3.8 µs  | < 9 µs     | unknown    | 12.57 µs   | n/a   | n/a     | n/a                     |
    | Interrupt response | < 3 µs  | < 7.5 µs   | 25 µs      | 7.54 µs    | n/a   | n/a     | n/a                     |

Basic knowledge related to programming in the operating system

Basic concepts of processes and threads

  • Concurrency and isolation.
  • The context of program execution (Context of Execution)
  • The basic unit of execution and scheduling: thread
  • Resource Ownership: process
  • A process is a container of resources, containing one or more threads. The basic unit of kernel scheduling is the thread (roughly speaking), not the process.
  • Threads within the same process share resources (address space, open files, signal handlers, etc.), but registers, stacks, the PC (program counter), and so on are not shared.

What is the difference between a process and a thread? Part of this has been said above: the thread is the basic unit of scheduling and execution, and code ultimately runs in a thread, while a process is a container of resources that contains one or more threads, and all threads within the same process share those resources.
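
A minimal sketch with POSIX threads (assuming a system with pthreads; build with gcc file.c -pthread) that makes the sharing concrete: the global counter belongs to the process and is visible to both threads, while each thread's local variable lives on its own stack. The two threads are run one after the other here to keep the sketch free of data races; the thread-synchronization section below shows how to share safely when threads overlap.

#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;                       /* belongs to the process: every thread sees it */

static void *worker(void *arg) {
    int local = *(int *)arg;                  /* lives on this thread's private stack */
    shared_counter += local;
    printf("worker %d sees shared_counter = %d\n", local, shared_counter);
    return NULL;
}

int main(void) {
    pthread_t t;
    int a = 1, b = 2;

    pthread_create(&t, NULL, worker, &a);     /* first thread of this process */
    pthread_join(t, NULL);
    pthread_create(&t, NULL, worker, &b);     /* second thread, same address space */
    pthread_join(t, NULL);

    printf("final shared_counter = %d\n", shared_counter);   /* prints 3 */
    return 0;
}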

The following picture shows the difference:

Linux's concept of threads and processes

Linux threads are user-level; that is, threads do not exist in the kernel. (This describes the classic LinuxThreads model; modern Linux creates threads with clone() and the kernel schedules them as tasks.)

  • All thread management is performed at the application layer.
  • The kernel doesn't care, and actually isn't aware of threads.

Windows' concept of threads and processes

As can also be seen from the figure above, Windows and Linux clearly adopt different approaches.

Windows threads are kernel-level.

  • Windows is an example of these concepts.
  • The kernel maintains contexts for threads and processes.
  • Scheduling is actually done on a per-thread basis.

Inter-process communication

Because threads and processes are isolated from one another, communication mechanisms between them are needed so that tasks can be completed cooperatively and data can be shared and accessed.

Communication between Windows processes

  • File mapping
  • Shared memory
  • Anonymous pipes (one-way: write at one end, read at the other)
  • Named pipes
  • Dynamic-link libraries
  • Remote procedure calls (within one machine or across machines)
  • UDS (Unix Domain Socket)
  • The Windows message mechanism...

Communication between Linux processes

  • Pipes and named pipes (FIFOs) (see the sketch below)
  • Signals
  • Message queues
  • Shared memory (the most efficient)
  • Semaphores (mainly used as a synchronization mechanism between processes and between different threads of the same process)
  • Sockets, including UDS (Unix Domain Socket)

Although Windows and Linux adopt different mechanisms and concepts for inter-process communication, their basic principles are in fact similar.
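
As a minimal sketch of the simplest of these mechanisms, the following uses a POSIX anonymous pipe between a parent and a child process (write at one end, read at the other); the Windows anonymous pipe works on the same principle. The message text is made up for the illustration.

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                               /* fd[0] = read end, fd[1] = write end */
    char buf[64];

    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                       /* child: reads from the pipe */
        close(fd[1]);                        /* not writing, so close the write end */
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
        close(fd[0]);
        return 0;
    }

    close(fd[0]);                            /* parent: not reading, so close the read end */
    const char *msg = "hello from the parent";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}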

Communication between threads

  • Shared data structures / shared memory
  • Event passing
  • Message queues
  • Mailboxes (uC/OS-II)

thread synchronization

Thread synchronization means that while one thread is operating on a piece of memory or a peripheral, other threads may not operate on that memory address or peripheral; only when the first thread has completed its operation may the others proceed, and in the meantime they wait. There are many ways to achieve thread synchronization, as follows.

  • A semaphore (Semaphore) is generally used.
  • High-level languages such as Java are designed with this in mind, for example the synchronized keyword and the wait and notify methods.
  • A mutex can be used (see the sketch after this list).
  • Critical section objects.
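
A minimal sketch of the mutex approach with POSIX threads (assuming a system with pthreads; build with gcc file.c -pthread): while one thread holds the lock, the other waits before touching the shared counter, so no increment is lost.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);            /* enter: any other thread now waits here */
        counter++;                            /* the protected operation */
        pthread_mutex_unlock(&lock);          /* leave: one waiting thread may enter */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (always 200000 thanks to the mutex)\n", counter);
    return 0;
}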

Semaphores and Mutexes

Semaphore

Take the operation of a parking lot as an example. For simplicity, suppose the parking lot has only three spaces, all empty at first. If five cars arrive at the same time, the gatekeeper lets three of them in and lowers the barrier; the remaining cars must wait at the entrance, as must every car that arrives later. When one car leaves the parking lot, the gatekeeper learns of it, raises the barrier, and lets one of the waiting cars in; if two cars leave, two more can enter, and so on. In the computer world, resources such as disk space or data to be read are often limited in the same way, and this mechanism can be used to coordinate and control access to them effectively.
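
The parking-lot story maps directly onto a counting semaphore. Below is a minimal sketch using a POSIX semaphore initialized to three "spaces" (assuming a system with pthreads and POSIX semaphores; build with gcc file.c -pthread; the five cars and the one-second parking time are made up):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

static sem_t spaces;                          /* counts the free parking spaces */

static void *car(void *arg) {
    long id = (long)arg;
    sem_wait(&spaces);                        /* wait at the gate until a space is free */
    printf("car %ld parked\n", id);
    sleep(1);                                 /* stay parked for a while (made up) */
    printf("car %ld left\n", id);
    sem_post(&spaces);                        /* leaving frees a space for the next car */
    return NULL;
}

int main(void) {
    pthread_t cars[5];
    sem_init(&spaces, 0, 3);                  /* three spaces, all free at the start */
    for (long i = 0; i < 5; i++)
        pthread_create(&cars[i], NULL, car, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(cars[i], NULL);
    sem_destroy(&spaces);
    return 0;
}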

Mutex (mutual exclusion lock)

A special semaphore that only one thread can hold at a time; its performance is better than a general semaphore's. It suits special scenarios in which only one thread may access a resource at a time, and other threads can continue only after that thread exits: for example, an operating system's I/O peripherals, a printer, or a public toilet in real life.

Source: blog.csdn.net/besidemyself/article/details/130752589