Operating System (1)

Computer Architecture

A computer system is made up of an arithmetic unit, a controller, memory (i.e. RAM), input devices and output devices. The arithmetic unit operates on data held in memory: the controller fetches the data from memory into the arithmetic unit, and the result of the operation is stored back into memory by the controller. A program is composed of instructions and data; when the program runs, both are loaded into memory, and each instruction carries the address of its data in memory, so the arithmetic unit knows where the data is and can obtain it through the controller. IO devices are generally attached through the Northbridge and Southbridge chips. The Northbridge acts as the high-speed bus controller: memory and the CPU are connected to each other through it. The Southbridge acts as the low-speed bus controller: peripheral devices attach to the Southbridge, which in turn connects to the Northbridge.

For a simple PC, only one program can run at a time, yet the machine's computing power means a single program rarely keeps the CPU busy for long. To make the most of the computer's resources, a PC needs to be able to run multiple programs, and running multiple programs requires a coordinator called the kernel. The kernel runs directly on the hardware, manages the hardware resources, and presents them as virtualized resources to the applications above it. The reason is that if applications ran directly on the hardware, each could manipulate the hardware at will; programs would interfere with one another when launched, and a single malicious program could bring down all the others, so a unified resource manager is needed, and every program must go through the kernel to use the hardware.

The kernel does not let programs touch the hardware directly; instead it exposes the computing capability of the hardware through system calls. To keep their number small, system calls are kept very low level, so programming directly against them is cumbersome. Moreover, many programs need the same functionality: Word and Excel both need to print, for example, and if such common code were not factored out into shared interfaces, each of them would have to implement its own print module, most functionality on the computer would be duplicated, and the duplicates would take up extra space and waste resources. Therefore, on top of the system calls the kernel provides, there is a higher-level calling interface called a library (also known as an API, an application programming interface). A library is itself a program, but it does not run on its own; it executes only when invoked by another program. So, to use the hardware, a programmer can either call the kernel directly through system calls or use library calls ("lib calls"), which are themselves implemented on top of system calls. The kernel divides the computing capability of the hardware among multiple applications in units called processes.
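As a minimal sketch of the difference between a library call and a direct system call (this example is not from the original text; the message strings are illustrative), the C program below on Linux prints a line twice: once through the C library's printf(), and once through write(), the thin wrapper the C library provides around the kernel's write system call.

```c
#include <stdio.h>      /* library interface: printf() */
#include <string.h>
#include <unistd.h>     /* write() and STDOUT_FILENO */

int main(void)
{
    const char *msg = "hello via the write() system call\n";

    /* Library call: printf() lives in the C library (an API layered on
     * the kernel); it buffers the output and eventually issues a write()
     * system call on our behalf. */
    printf("hello via the printf() library call\n");

    /* System call: write() asks the kernel directly to copy the bytes
     * to file descriptor 1 (standard output), with no library buffering. */
    write(STDOUT_FILENO, msg, strlen(msg));

    return 0;
}
```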

The role of the CPU

The CPU fetches instructions from memory, decodes them (that is, converts each instruction into commands the CPU can run), and then executes the decoded instruction; the CPU therefore has three components: a fetch unit, a decode unit and an execution unit. A program consists of many instructions; the CPU fetches instructions from memory into registers and uses an instruction counter to point at the instruction to be executed next. When the CPU performs a context switch, it has to save the state of its registers into memory, and the data the CPU has modified is also stored back to memory.
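To make the fetch, decode, execute cycle concrete, here is a toy sketch in C (purely hypothetical, not from the original text): a miniature virtual CPU with an instruction counter, a few registers, and three made-up opcodes.

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical opcodes for a toy machine. */
enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2 };

int main(void)
{
    /* "Memory": each instruction is {opcode, register, operand}. */
    uint8_t mem[] = {
        OP_LOAD, 0, 5,   /* r0 = 5       */
        OP_LOAD, 1, 7,   /* r1 = 7       */
        OP_ADD,  0, 1,   /* r0 = r0 + r1 */
        OP_HALT, 0, 0
    };
    uint8_t reg[4] = {0};   /* register file            */
    size_t  pc = 0;         /* instruction counter (PC) */

    for (;;) {
        /* fetch */
        uint8_t op = mem[pc], a = mem[pc + 1], b = mem[pc + 2];
        pc += 3;
        /* decode + execute */
        if (op == OP_LOAD)      reg[a] = b;
        else if (op == OP_ADD)  reg[a] = reg[a] + reg[b];
        else                    break;   /* OP_HALT */
    }
    printf("r0 = %d\n", reg[0]);   /* prints 12 */
    return 0;
}
```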

Early on, CPU efficiency was improved by raising the CPU frequency. When the frequency could no longer be raised, multi-core CPUs appeared; but a process can only run on one core at a time, so when a server has many cores and only one busy process, the advantage of multiple cores cannot be exploited. Hence threads: a process is divided into multiple threads so that its threads can run on several cores at once, improving efficiency.
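A minimal pthreads sketch (hypothetical, not from the original text; the thread count and workload are illustrative) of one process split into several threads, which the kernel is then free to schedule onto different cores at the same time. Build with gcc -pthread.

```c
#include <stdio.h>
#include <pthread.h>

#define NTHREADS 4   /* illustrative: e.g. one thread per core */

/* Each thread computes its own partial result independently. */
static void *worker(void *arg)
{
    long id = (long)arg;
    long sum = 0;
    for (long i = 0; i < 1000000; i++)
        sum += i;
    printf("thread %ld done, partial sum %ld\n", id, sum);
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];

    /* One process, several threads: the kernel may run them on
     * different cores at the same time. */
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}
```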

 

Storage system

Mechanical hard disk - solid-state drive - main memory - L3 cache - L2 cache - L1 cache - registers (slow - fast)

Registers: they can keep up with the CPU's clock cycle. The CPU operates on data held in registers; if the data to be operated on is not in a register, it is looked up in the L1 cache and, once found, loaded into the register. If it is not in the L1 cache, it is looked up in the L2 cache and, once found, loaded into the L1 cache and then into the register, and so on down the hierarchy.
L1/L2 cache: usually private to a physical CPU core; each physical core has its own dedicated L1 and L2 caches.
L3 cache: shared by the physical cores of the CPU.
Memory: shared by all physical CPU cores.
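The hierarchy above is why access patterns matter in practice. In the hypothetical C sketch below (not from the original text; the matrix size is arbitrary), walking a matrix row by row touches consecutive addresses and tends to stay in the caches, while walking it column by column jumps across large strides, misses the caches, and is usually noticeably slower.

```c
#include <stdio.h>
#include <time.h>

#define N 4096
static int m[N][N];   /* stored row-major in C */

static double walk(int by_row)
{
    clock_t start = clock();
    volatile long sum = 0;    /* volatile keeps the loop from being optimized away */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += by_row ? m[i][j]   /* consecutive addresses: cache friendly */
                          : m[j][i];  /* large strides: frequent cache misses  */
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

int main(void)
{
    printf("row-wise:    %.3f s\n", walk(1));
    printf("column-wise: %.3f s\n", walk(0));
    return 0;
}
```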

The composition of IO devices

An IO device generally consists of the device controller and the device itself:

Controller: a chip, or a group of chips, integrated on the motherboard, responsible for controlling the corresponding device. The controller receives commands from the operating system and carries them out: for example, when the OS asks to read data from a device, the controller, driven by the device driver, converts the request into the physical operations the device actually performs;

Device: the device itself generally exposes only a simple interface and has to be attached to the system through its controller;

Driver: normally developed by the device manufacturer. Some controllers are integrated directly on the motherboard and are very common; so that the operating system can complete basic operations while booting, the drivers for such controllers are generally built into the kernel;

Registers: each controller has a small number of registers used to communicate with it; a hard-disk controller, for example, may have registers for specifying the target address, the number of sectors, and the direction of the transfer (reading data from the disk or writing data to it). So, to activate a controller, the device driver takes the operation request it receives from the operating system, converts the instruction into the corresponding device operations, and completes the operation by writing to these registers. Each register appears as an IO port, and the set of all of a controller's registers is called the device's IO port space.
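As an illustration of "a register reached through an IO port", the hedged sketch below (not from the original text) reads the seconds register of the CMOS real-time clock through legacy ports 0x70/0x71 on x86 Linux; ioperm(), outb() and inb() are Linux/x86-specific and the program must run as root.

```c
/* x86 Linux only; must run as root.  Build: gcc -O2 cmos.c */
#include <stdio.h>
#include <stdlib.h>
#include <sys/io.h>     /* ioperm(), outb(), inb() */

int main(void)
{
    /* Ask the kernel for access to IO ports 0x70 and 0x71
     * (the index and data ports of the CMOS/RTC controller). */
    if (ioperm(0x70, 2, 1) != 0) {
        perror("ioperm (are you root?)");
        return EXIT_FAILURE;
    }

    outb(0x00, 0x70);               /* select RTC register 0: seconds */
    unsigned char sec = inb(0x71);  /* read the register's value      */

    /* The RTC usually stores values in BCD. */
    printf("RTC seconds register: 0x%02x\n", sec);
    return 0;
}
```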

How the CPU identifies the IO ports of each controller:

The IO ports of each controller have to be addressed, that is, each IO port has its own address. When the host starts, each IO device must register the ports it uses with the bus's IO port space, and the host then assigns an address to each port; through these port addresses the CPU can reach the corresponding IO device. For the CPU to read data from a device, it sends an instruction to the driver, and the driver converts the instruction into signals the device can understand and places them into the device's registers; the registers, i.e. the IO ports, are therefore what the CPU deals with when it talks to an IO device over the bus. There are also cases in which data arrives at an IO device and the CPU has to load the data and hand it to a process for handling. For example, when a network card receives a user's request for a web service, the request reaches the NIC, but the NIC cannot process the request by itself; there has to be a mechanism that lets the CPU learn of the signal as soon as possible, so that the kernel is activated on the CPU to read the data from the corresponding IO device, and the kernel then schedules the corresponding process onto the CPU to read and handle the user request.

Implementation of input and output:

   1. Busy waiting: the user program issues a system call, the kernel translates it into a call to the corresponding device driver and starts the IO device, and then checks the device in a continuous loop to see whether it has finished its work; (a user-space sketch contrasting this with interrupt-driven IO appears after this list)
   2. Interrupts: the IO device can actively send a notification, so there is no need to busy-wait. Such a notification is called an interrupt: a request that interrupts the routine the CPU core is currently executing so that the CPU can receive the notification. The host therefore usually has a programmable interrupt controller, which can talk to the CPU directly and notify it when a signal arrives. (When the host starts, each IO device registers a unique interrupt number with the interrupt controller. When data arrives at an IO device, the device does not put the data on the data bus right away; it first sends an interrupt request to the interrupt controller, the controller uses the interrupt number to determine which device the request came from and then notifies the CPU over the bus, so the CPU knows which IO device raised the request. The CPU core is then activated and the currently executing process is suspended (an interrupt switch, not a full context switch): the process that was running steps off the CPU, the kernel runs on the CPU, and the kernel fetches the data from the IO device.)
   For example, the workflow of a network card:
   A user's web service request arrives. The NIC has its own buffer; when a packet lands in that buffer, the card triggers an interrupt. The kernel handles the interrupt, determines that data is being received, and copies the data from the NIC buffer into a kernel buffer (the stretch of kernel memory that serves as the read buffer). Once read out, the data turns out to be a network request packet; the kernel checks whether the destination address is its own, and if so it opens the packet and uses the port number to decide which process should receive and handle it.
 

  The kernel handles an interrupt in two parts:
    Top half: the CPU core is activated and the currently executing process is suspended (an interrupt switch, not a full context switch); the kernel then runs on the CPU, fetches the data from the IO device itself, and places it in the kernel buffer, while the process that was originally running steps off the CPU. (A NIC that triggers an interrupt for every packet it receives hurts performance, which is one reason the third mechanism below exists.)
    Bottom half: if the process that was originally on the CPU has not used up its execution time, the kernel may withdraw from the CPU and let that process continue to run; at this point the data has only been received, not processed, and processing it requires the scheduler to dispatch another process to handle it.
   3. DMA (direct memory access): during the top half of interrupt handling, the CPU notifies the DMA device and tells it that the IO buses (control bus, address bus, data bus) are temporarily its to use, and sets aside part of memory for the DMA device, so that the device can move the data from its buffer into memory by itself. When the transfer is finished, the DMA device sends an interrupt request to the CPU to say the data has been received, and at that point the kernel only needs to be told that the read has completed. At any one time the buses can be used by only one DMA device. (Nowadays most IO devices that frequently need to transfer data, such as network cards and hard disks, have a DMA mechanism.)
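The user-space sketch promised above (hypothetical, not from the original text): the first loop busy-waits on a pipe with non-blocking reads, burning CPU until data shows up, while the second blocks in poll() and lets the kernel wake the process when data arrives, which is the user-visible analogue of interrupt-driven IO.

```c
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <poll.h>
#include <errno.h>

int main(void)
{
    int fds[2];
    pipe(fds);                       /* fds[0]: read end, fds[1]: write end */

    if (fork() == 0) {               /* child: the "device" producing data  */
        sleep(1);
        write(fds[1], "x", 1);
        _exit(0);
    }

    char c;

    /* 1. Busy waiting: keep asking "is the data there yet?" and burn CPU. */
    fcntl(fds[0], F_SETFL, O_NONBLOCK);
    while (read(fds[0], &c, 1) == -1 && errno == EAGAIN)
        ;                            /* spin until data shows up */
    printf("busy waiting got '%c'\n", c);

    /* 2. Interrupt-style: sleep in poll() until the kernel wakes us up. */
    if (fork() == 0) { sleep(1); write(fds[1], "y", 1); _exit(0); }
    struct pollfd p = { .fd = fds[0], .events = POLLIN };
    poll(&p, 1, -1);                 /* block; no CPU used while waiting */
    read(fds[0], &c, 1);
    printf("poll() woke us with '%c'\n", c);
    return 0;
}
```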

 

Memory (i.e. main memory, RAM)

Virtual address space (linear address space): on a 32-bit OS, every process sees 4G of memory available to itself; on a Linux system, a fixed 1G at the top of that space is reserved for the kernel and the remaining 3G is used by the process;
The 3G of user memory space is split into fixed-size pages (4K each), and each page frame is handed out as a single unit of allocation. On the physical-memory side the allocatable units are called page frames; from the process's point of view they are called pages. The physical memory a process obtains therefore need not be contiguous, but it can be virtualized so that it appears contiguous.
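A small hypothetical C sketch (not from the original text): it asks the system for the page size and splits a virtual address into a page number and an offset within the page, which is exactly the decomposition the page tables and page frames above are built around.

```c
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>

int main(void)
{
    long page_size = sysconf(_SC_PAGESIZE);   /* typically 4096 bytes */

    int x = 42;
    uintptr_t va = (uintptr_t)&x;             /* a virtual (linear) address */

    /* With 4K pages the low 12 bits are the offset inside the page and
     * the remaining bits select the page; the kernel's page tables map
     * that page number onto some physical page frame. */
    uintptr_t page_number = va / page_size;
    uintptr_t offset      = va % page_size;

    printf("page size   : %ld bytes\n", page_size);
    printf("virtual addr: %#lx\n", (unsigned long)va);
    printf("page number : %#lx, offset: %#lx\n",
           (unsigned long)page_number, (unsigned long)offset);
    return 0;
}
```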

Each process's linear address space is mapped onto physical memory. When a process asks to use memory, it issues a system call to the kernel, and the kernel allocates free page frames in physical memory; there is thus a mapping between the linear address space and physical memory. When a process runs on the CPU and needs some data, it tells the CPU to fetch the data at address xxx in its address space. That xxx is a linear address, and it must be mapped to a physical memory address before the data can be read (this mapping is kept in the process's Task struct). So after a process issues an address request, the CPU has to find that process's Task struct and load its mapping table to translate the linear address into a physical address (this is done by the MMU component inside the CPU); the translation is then cached in the TLB to speed up the next lookup.

MMU: the memory management unit, a component on the CPU. When a process wants to access some data in its linear address space, it hands the linear address to the CPU; the CPU knows that this address cannot reach the data by itself, so it uses the MMU to translate it into an access to a physical address, through which the CPU can reach the data, and the resulting mapping is saved in the TLB;

Task struct: every process has a task address structure, which is simply a data structure the kernel maintains for each process (i.e. a piece of memory: to keep track of a process, the kernel records, in a linked list in its own memory address space, the process ID, the parent process ID, the memory address space in use, the CPU the process is on, the files it has opened, its internal threads, and so on);

TLB (Translation Lookaside Buffer): a cache in the memory management unit used to speed up the translation of virtual addresses into physical addresses. The TLB is a small, virtually addressed cache in which each line holds a block consisting of a single PTE (Page Table Entry). Without a TLB, every data access would require two memory accesses: one to look up the page table and obtain the physical address, and one to fetch the data.
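To make the TLB idea concrete, here is a tiny hypothetical simulation in C (not from the original text, and much simpler than real hardware): a direct-mapped table of a few entries caching virtual-page-number to page-frame-number translations, consulted before the slow page-table walk.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 16
#define PAGE_SHIFT  12          /* 4K pages */

/* One TLB line: a cached virtual-page -> page-frame translation. */
struct tlb_entry { bool valid; uint64_t vpn, pfn; };
static struct tlb_entry tlb[TLB_ENTRIES];

/* Stand-in for walking the page table: the slow path (here just a formula). */
static uint64_t page_table_walk(uint64_t vpn) { return vpn + 0x1000; }

static uint64_t translate(uint64_t vaddr, bool *hit)
{
    uint64_t vpn = vaddr >> PAGE_SHIFT;
    struct tlb_entry *e = &tlb[vpn % TLB_ENTRIES];   /* direct-mapped slot */

    if (e->valid && e->vpn == vpn) {
        *hit = true;                                  /* TLB hit: fast      */
    } else {
        *hit = false;                                 /* miss: walk page table */
        e->valid = true;
        e->vpn = vpn;
        e->pfn = page_table_walk(vpn);
    }
    return (e->pfn << PAGE_SHIFT) | (vaddr & ((1u << PAGE_SHIFT) - 1));
}

int main(void)
{
    uint64_t addrs[] = { 0x400123, 0x400456, 0x800010 };
    for (int i = 0; i < 3; i++) {
        bool hit;
        uint64_t pa = translate(addrs[i], &hit);
        printf("va %#lx -> pa %#lx (%s)\n",
               (unsigned long)addrs[i], (unsigned long)pa,
               hit ? "TLB hit" : "TLB miss");
    }
    return 0;
}
```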

 shell

When the operating system starts, applications do not necessarily start running right away. Booting the operating system only means these programs now have an environment in which they can run; it does not mean they are running. Programs are usually started in several ways: some, such as services, start automatically after the operating system boots, and programs can also be started manually. For manual startup, how do we direct the computer to launch a particular application? How do we get the operating system to accept a user's command and start that specific application? For this the operating system must be given a special application, the shell. The shell is the outer layer of the whole operating system: an interface that receives the user's instructions, understands the user's commands, passes them to the kernel, and has the kernel start and drive the application. Shells come in two kinds: GUI (graphical user interface) and CLI (command line interface). The role of the shell: 1. provide an interface through which the user can interact; 2. translate the user's commands into commands the computer can understand.
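A minimal hypothetical CLI shell sketch in C (not from the original text): it reads a command, forks a child, replaces the child with the requested program via exec, and waits for it, which is the basic loop behind the command-line shells described above (no argument parsing beyond the essentials).

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    char line[256];

    for (;;) {
        printf("mysh> ");                       /* prompt */
        if (!fgets(line, sizeof line, stdin))
            break;                              /* EOF: exit the shell */
        line[strcspn(line, "\n")] = '\0';
        if (line[0] == '\0')
            continue;
        if (strcmp(line, "exit") == 0)
            break;

        pid_t pid = fork();                     /* create a child process */
        if (pid == 0) {
            /* Child: ask the kernel to run the program named on the line
             * (a real shell would split the line into arguments). */
            execlp(line, line, (char *)NULL);
            perror("exec");                     /* only reached on failure */
            _exit(127);
        }
        waitpid(pid, NULL, 0);                  /* parent: wait for the child */
    }
    return 0;
}
```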

 


Origin www.cnblogs.com/lriwu/p/8953259.html