DP reading: "openEuler Operating System" (3) Classification of operating systems


The operating system sits between the application layer and the hardware layer: it faces applications above and hardware below.

On the application side, in scenarios such as autonomous driving and industrial control, the reliability of the operating system is given higher priority, so microkernels are becoming more popular; library operating systems are likewise popular on cloud computing platforms.

At the hardware level, semiconductor manufacturing processes are approaching their physical limits, single-core performance is close to its ceiling, and general-purpose processors are moving from multi-core to many-core.

The development trends of operating systems involve the following directions:

  1. Edge computing: As IoT devices connect to the network, data processing is migrating to the edge. This drives the development of server systems, including new scale-out storage architectures that run block, file, and object storage on standard x86-based servers. Edge devices need to be compact, rugged, and suited to harsh environments, and underlying infrastructure services, including networking and storage, need to move to the edge to ensure data security and privacy and to enable low-latency data analytics. In this setting, x86 server systems combined with offload accelerators become the basic platform for hosting edge workloads.
  2. 5G and mobile devices: Telcos are moving toward 5G, and their 5G infrastructure is evolving toward software-defined designs. This will provide better connectivity for mobile users (mobile devices and connected/autonomous vehicles).
  3. Cloud-first to data-first shift: As software-defined architectures become more prevalent, customers are shifting from a “cloud-first” strategy to a “data-first” strategy.

microkernel

Unlike the monolithic kernel (Monolithic Kernel), such as Linux.
Comparison of monolithic kernel and microkernel structures

A microkernel is a streamlined kernel that retains only the essential modules of the operating system, such as IPC, memory management, and CPU scheduling.

IPC: Inter-Process Communication
Inter-process communication (IPC) refers to techniques for transferring information and data between different processes. In a computer system, each process runs in its own address space with independent memory and resources, so special mechanisms and methods are needed for processes to communicate with each other.

Common inter-process communication methods include:

1. "管道(Pipe)":管道是一种半双工的通信方式,数据只能单向流动,通常用于父子进程之间的通信。
2. "命名管道(Named Pipe)":命名管道是管道的一种扩展,允许无亲缘关系进程间的通信。它使用了一个文件作为通信的媒介。
3. "信号(Signal)":信号是一种异步的通信方式,用于通知接收进程有某个事件发生。它不适合传输大量数据,主要用于进程控制。
4. "消息队列(Message Queue)":消息队列是一种队列式的通信方式,允许不同进程发送和接收消息。进程可以以异步的方式读写消息队列。
5. "共享内存(Shared Memory)":共享内存允许多个进程访问同一块内存区域,从而实现快速的数据交换。但需要注意同步和互斥问题。
6. "信号量(Semaphore)":信号量是一种同步机制,用于控制多个进程对共享资源的访问。它可以保证资源在同一时间只被一个进程使用。
7. "套接字(Socket)":套接字是一种端到端的通信方式,适用于不同主机上的进程间通信。它既可以用于本地进程间通信,也可以用于网络通信。


Eg: the pipe (Pipe), originally shown here as a Mermaid diagram:

Pipe: communication between a parent process and its child; half-duplex; data flows in one direction only.
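
The parent-child pipe above can be sketched in a few lines of C (again assuming a POSIX system; the message text and buffer size are arbitrary):

```c
/* Minimal sketch of the pipe shown above: half-duplex, one-way data flow
   between a parent and its child (assumes a POSIX system). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                       /* fd[0]: read end, fd[1]: write end */
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {               /* child: writer */
        close(fd[0]);                /* close the unused read end */
        const char *msg = "hello from child";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    }

    close(fd[1]);                    /* parent: reader; close the unused write end */
    char buf[64];
    ssize_t n = read(fd[0], buf, sizeof(buf));
    if (n > 0) printf("parent read: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}
```

Note how the child closes the unused read end and the parent closes the unused write end, matching the half-duplex, one-way data flow shown in the diagram.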

Microkernel-based operating systems include MINIX3, seL4, Fuchsia, etc.

MINIX3 embodies the modular idea by decomposing the operating system into user-mode processes. (minix3 community)

seL4 optimizes the inter-process communication (IPC) mechanism. The seL4 community has the slogan "Security is no excuse for bad performance", and its IPC is the fastest in the L4 family. (It is also the first formally verified high-assurance kernel, and it disallows concurrency inside the kernel.)

Fuchsia is designed on top of the Zircon kernel, which provides Fuchsia's core drivers and C library implementation. (fuchsia Chinese community)

These inter-process communication methods can be chosen according to different needs and scenarios. When designing and implementing a multi-process system, selecting and using the right IPC mechanism is one of the keys to system performance and reliability.

The microkernel is designed to improve portability within a small memory footprint, and its modular design allows different interfaces (operating-system personalities) to be installed on top of it, such as DOS, Workplace OS, and Workplace UNIX.

library operating system

For a specific program running in a virtual machine, a general-purpose operating system often carries unused drivers, dependency packages, services, and so on. Eg: the USB driver is useless in a virtualized cloud environment but is still included in the Linux kernel. This also brings problems such as slow virtual-machine startup and a large attack surface.

As a result, the Library Operating System (LibOS) came into being. The basic idea is to customize the operating-system kernel for the application and remove the unused parts: a LibOS provides functionality that traditionally belongs to the kernel to applications in the form of libraries.

Developers select the required stack modules and a minimal set of dependency libraries, and build an application image that can run directly on a hypervisor or on hardware.

Comparison of the size of general-purpose operating systems and LibOS
The main working principle of a library operating system is to treat the hypervisor as a stable hardware platform that is responsible for managing and scheduling all applications and device drivers. Compared with a traditional operating system, a library operating system is more efficient and more scalable, because it contains only the necessary operating-system components, and resources such as the application and its device drivers are compiled into a single kernel image.

In addition, the library operating system uses object-oriented techniques, making it easier for developers to add, modify, or delete system components while reducing operating-system maintenance costs.

A library operating system is highly customizable, scalable, and efficient. It has advantages that traditional operating systems cannot match and will play an increasingly important role in cloud computing.

In recent years, with serverless computing emerging as the representative model of next-generation cloud software deployment, LibOS is expected to become the main solution for deploying software on next-generation cloud platforms.

Advantages: small size, fast startup, a single application per image.

(Diagram of LibOS characteristics: fast startup, single application, small size, stability, ease of use)

exokernel

In a traditional operating system, only the kernel can manage hardware resources, and applications interact with the hardware indirectly through the kernel's hardware-abstraction interfaces.

The exokernel (outer kernel), also known as a vertically structured operating system, is a relatively extreme design approach. Its design philosophy lets the designers of user programs decide how the hardware interface is used. The exokernel itself is very small and is usually responsible only for services related to system protection and the multiplexing of system resources (it focuses on the isolation, protection, and reuse of physical resources).

Traditional kernel designs (both monolithic and microkernel) abstract the hardware and hide hardware resources and device drivers behind a hardware abstraction layer. For example, in these systems, when a piece of physical storage is allocated, the application does not know its actual location. The goal of the exokernel is to let an application directly request a specific region of physical memory, a specific disk block, and so on. The system itself only guarantees that the requested resource is currently free, and the application is allowed to access it directly.

Since the exokernel only provides relatively low-level hardware operations, without the high-level hardware abstractions other systems provide, additional runtime support is needed. These runtime libraries run on top of the kernel and provide complete functionality to user programs.

Here is an operating-system example from MIT, Aegis:
Example of the exokernel operating system Aegis

The libraries that work on top of the exokernel interface provide higher-level operating-system abstractions. In this way, the restrictions imposed on applications are reduced.
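
To make the contrast with traditional kernels concrete, here is a purely hypothetical sketch in C. The exo_alloc_disk_block call is invented for illustration (it is not the real Aegis API), and the "kernel" side is simulated with a small ownership table; the point is that the library on top of the exokernel asks for specific disk blocks and implements its own allocation policy, while the kernel only checks whether each block is free.

```c
/* Conceptual sketch only: the exo_* function below is invented and is not the
   real Aegis API. The "kernel" is simulated with an ownership table; the
   application (via its library OS) requests *specific* physical resources,
   and the exokernel only checks that each one is currently free. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define NUM_BLOCKS 1024
static int block_owner[NUM_BLOCKS];   /* 0 = free, 1 = allocated (simulated) */

/* Hypothetical exokernel call: claim one specific disk block. */
static int exo_alloc_disk_block(uint64_t block) {
    if (block >= NUM_BLOCKS || block_owner[block]) return -1;  /* not free */
    block_owner[block] = 1;
    return 0;
}

/* A library OS on top of this interface implements its own policy,
   e.g. a file system that wants physically contiguous blocks. */
static int libfs_alloc_contiguous(uint64_t first, size_t count) {
    for (size_t i = 0; i < count; i++)
        if (exo_alloc_disk_block(first + i) != 0)
            return -1;   /* the library, not the kernel, decides how to react */
    return 0;
}

int main(void) {
    printf("contiguous alloc [100,108): %s\n",
           libfs_alloc_contiguous(100, 8) == 0 ? "ok" : "failed");
    printf("overlapping alloc [104,112): %s\n",
           libfs_alloc_contiguous(104, 8) == 0 ? "ok" : "failed");
    return 0;
}
```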

multikernel

Today's mainstream computers use multi-core processor systems. Constrained by Moore's Law, power consumption, and design complexity, modern computer architecture has evolved from multi-core (Multi-Core) to many-core (Many-Core), and a growing number of processor cores has become a clear trend.

The multikernel is a new operating-system architecture proposed in response to the challenges above. In a multikernel operating system, the machine is regarded as a network of multiple independent CPU cores, and the operating system is built as a distributed system: each CPU core runs its own operating-system kernel instance. These kernel instances run in parallel but do not share memory; they exchange information through asynchronous message passing.

The multikernel operating-system model is shown in the figure:
Multikernel operating system model
Multi-core refers to integrating two or more complete computing engines (cores) into one processor. Such designs are easier to scale and pack more processing performance into a smaller form factor that uses less power and generates less heat.
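
The "no shared memory, asynchronous messages" structure can be sketched with ordinary threads standing in for per-core kernel instances. This is a conceptual simulation only (assuming POSIX threads, compiled with -pthread), not code from a real multikernel:

```c
/* Conceptual simulation only, not code from a real multikernel: each "core
   kernel" is a thread with private local state, and the threads interact only
   by sending messages to a per-core inbox. Compile with -pthread. */
#include <stdio.h>
#include <pthread.h>

#define NCORES 2

struct inbox {
    pthread_mutex_t lock;
    pthread_cond_t  nonempty;
    int             has_msg;
    int             payload;
};

static struct inbox inboxes[NCORES];

static void send_msg(int dst, int payload) {          /* asynchronous send */
    pthread_mutex_lock(&inboxes[dst].lock);
    inboxes[dst].payload = payload;
    inboxes[dst].has_msg = 1;
    pthread_cond_signal(&inboxes[dst].nonempty);
    pthread_mutex_unlock(&inboxes[dst].lock);
}

static int recv_msg(int self) {                       /* blocking receive */
    pthread_mutex_lock(&inboxes[self].lock);
    while (!inboxes[self].has_msg)
        pthread_cond_wait(&inboxes[self].nonempty, &inboxes[self].lock);
    int payload = inboxes[self].payload;
    inboxes[self].has_msg = 0;
    pthread_mutex_unlock(&inboxes[self].lock);
    return payload;
}

static void *core_kernel(void *arg) {
    int id = (int)(long)arg;
    int local_state = id * 100;                       /* private, never shared */
    if (id == 0)
        send_msg(1, local_state + 42);                /* kernel 0 -> kernel 1 */
    else
        printf("core %d (local state %d) got message: %d\n",
               id, local_state, recv_msg(id));
    return NULL;
}

int main(void) {
    pthread_t t[NCORES];
    for (int i = 0; i < NCORES; i++) {                /* set up all inboxes first */
        pthread_mutex_init(&inboxes[i].lock, NULL);
        pthread_cond_init(&inboxes[i].nonempty, NULL);
    }
    for (long i = 0; i < NCORES; i++)
        pthread_create(&t[i], NULL, core_kernel, (void *)i);
    for (int i = 0; i < NCORES; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```

Each simulated kernel keeps its state in local variables and interacts with the others only through send_msg/recv_msg, mirroring the message-passing design described above.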

splitkernel

Existing data centers are organized with the server as the basic unit, and this architecture has the following problems:

  1. Low resource utilization.
    Traces of Google servers over 29 days and Alibaba Cloud servers over 12 hours found that the server clusters used only about half of their CPU and memory.
  2. Weak hardware elasticity.
    Deployment planning takes a long time and cannot adapt to changing computing requirements.
  3. Coarse-grained fault domains.
    Failures of the motherboard, memory, CPU, and power supply account for 50%-82% of total server failures.
  4. Poor support for heterogeneity. The use of hardware devices such as GPUs, TPUs, DPUs, FPGAs, and NVM keeps increasing, and maintenance costs are rising.
| Name | Description |
| ---- | ----------- |
| GPU  | Graphics Processing Unit; used for graphics rendering and processing tasks. |
| TPU  | Tensor Processing Unit; a processor designed for tensor computation, mainly used to accelerate artificial-intelligence and machine-learning workloads. |
| DPU  | Deep-Learning Processing Unit; a processor designed for deep-learning computation that speeds up training and inference. |
| FPGA | Field Programmable Gate Array; a programmable logic device that lets users implement custom algorithms and data-processing pipelines at the hardware level. |
| NVM  | Non-Volatile Memory; memory that retains data without power, commonly used for caching, persistent storage, and shared access. |

Today the trend is toward a disaggregated data-center architecture:
Disaggregated data-center architecture

The server's hardware is broken apart, faults are isolated, and the parts are connected by the network to form components; the data center is then built with the component as the basic unit.

A disaggregated data center has the following advantages:

  1. Good hardware elasticity
  2. Independent fault domains
  3. Support for heterogeneous hardware

Disaggregated kernel designs are usually used to improve system reliability and maintainability. Distributing different functional modules across different entities (disaggregated kernels) reduces the complexity and coupling of the system and improves the isolation between modules. This approach also makes the system easier to upgrade and extend, because each disaggregated kernel can be upgraded and replaced independently without affecting the functionality of the other parts.

However, existing operating systems take the server, rather than the component, as the basic unit of the data center, so a new operating-system abstraction needs to be built for the new data-center architecture. LegoOS is such a disaggregated operating-system kernel (Splitkernel); its basic idea is that since the hardware has been split apart, the operating system should be split apart as well.
Splitkernel model
The Splitkernel model has four main characteristics:

  1. Operating-system functionality is split apart
  2. Monitors run on top of the hardware components
  3. Components communicate with each other over the network (see the sketch after this list)
  4. The Splitkernel handles resource failures globally
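
As a rough illustration of point 3, here is a conceptual sketch in C (not LegoOS code): a "compute component" obtains data from a separate "memory component" purely by exchanging messages, with a socketpair standing in for the data-center network. The request/response structs are invented for the example.

```c
/* Conceptual sketch, not LegoOS code: a "compute component" reads data held by
   a separate "memory component" purely by exchanging messages. A socketpair
   stands in for the data-center network; the structs are invented here. */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/wait.h>

struct req  { unsigned long addr; };          /* "read remote memory at addr" */
struct resp { char data[32]; };

int main(void) {
    int sv[2];                                /* sv[0]: compute, sv[1]: memory */
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1) {
        perror("socketpair");
        return 1;
    }

    if (fork() == 0) {                        /* memory component */
        struct req r;
        read(sv[1], &r, sizeof(r));
        struct resp rp;
        snprintf(rp.data, sizeof(rp.data), "contents of 0x%lx", r.addr);
        write(sv[1], &rp, sizeof(rp));        /* reply over the "network" */
        _exit(0);
    }

    struct req r = { .addr = 0x1000 };        /* compute component sends a request */
    write(sv[0], &r, sizeof(r));
    struct resp rp;
    read(sv[0], &rp, sizeof(rp));
    printf("compute component received: %s\n", rp.data);
    wait(NULL);
    return 0;
}
```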

With the rise of microservice architectures represented by serverless computing, fine-grained computing is becoming the trend of future computing. Disaggregated data-center hardware and operating systems make efficient fine-grained computing possible.

Introduction to the openEuler operating system

Screenshot of openEuler forum official website
At this point, we have a fairly clear picture of how modern operating systems have developed and where they are heading.
The operating system sits between the application layer and the hardware layer, facing applications above and hardware below.

Without further ado, here is the overall architecture diagram of openEuler:
The overall architecture of openEuler

The openEuler open-source operating system (openEuler, or "Euler" for short) has been officially upgraded from a server operating system to an operating system for digital infrastructure. It supports servers, cloud computing, edge computing, embedded systems, and other application scenarios, supports diversified computing, and is committed to providing a secure, stable, and easy-to-use operating system. By providing deterministic guarantees for applications, it supports OT-domain applications and the convergence of OT and ICT.

Through an open community, openEuler works with developers around the world to build an open, diverse, and architecture-inclusive software ecosystem. It incubates support for multiple processor architectures, covers all digital-infrastructure scenarios, and promotes a thriving ecosystem of enterprise digital-infrastructure hardware, software, and applications.

On December 31, 2019, openEuler, an open-source community for a full-scenario open-source operating system for digital infrastructure, was officially established.
On March 30, 2020, openEuler 20.03-LTS (Long Term Support) was officially released, bringing the Linux world a new Linux distribution with the ability to evolve its technology independently.
On September 30, 2020, the first innovation release, openEuler 20.09, was published. It was the joint work of many enterprises, teams, and independent developers in the openEuler community, a milestone in the community's development and a landmark event in the history of open source in China.
On March 31, 2021, the openEuler 21.03 kernel innovation release was published. It upgraded the kernel to 5.10 and implemented several kernel innovations such as live kernel upgrade and tiered memory expansion, accelerating multi-core performance and building toward thousand-core computing capability.
On September 30, 2021, the new openEuler 21.09 innovation release arrived on schedule. It was the first community release after openEuler's repositioning and achieved full-scenario support, enhancing server and cloud-computing features, releasing key technologies such as a CPU scheduling algorithm for co-located cloud-native workloads and the containerized operating system KubeOS, and shipping edge and embedded editions.
On March 30, 2022, based on the unified 5.10 kernel, the full-scenario openEuler 22.03-LTS release for servers, cloud computing, edge computing, and embedded systems was published, focusing on unleashing computing power, continuously improving resource utilization, and building a digital-infrastructure operating system with full-scenario collaboration.
On March 30, 2023, the openEuler 23.03 kernel innovation release was published. It adopts Linux kernel 6.1, exploring 6.x kernel technology in advance for future openEuler long-term-support releases and making it easier for developers to do hardware adaptation, foundational technology innovation, and upper-layer application innovation.

openEuler’s white paper: https://www.openeuler.org/whitepaper/openEuler-whitepaper-2303.pdf
openEuler's official website
Official website link: https://www.openeuler.org/zh/

With that, our road to openEuler begins.

embedded orientation


Set a small goal: run openEuler Embedded on my childhood toy, the EV3! (Sig-embedded address, Sig-ROS address)

Hello, openEuler~


転載: blog.csdn.net/m0_74037814/article/details/133089457