Operating system-related knowledge

Foreword

To prepare for interviews, I dug out the big operating systems book again.

Inter-process communication (IPC)

  1. Shared memory
  2. Message queues
  3. Semaphores
  4. Signals
  5. Sockets
  6. Anonymous (ordinary) pipes (a sketch follows this list)
  7. Named pipes (FIFOs)
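
As a quick illustration of the pipe items, here is a minimal sketch, assuming a Unix-like system (POSIX C), of a parent and child talking over an anonymous pipe:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fds[2];                      /* fds[0]: read end, fds[1]: write end */
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                  /* child: read from the pipe */
        char buf[64];
        close(fds[1]);
        ssize_t n = read(fds[0], buf, sizeof(buf) - 1);
        if (n > 0) { buf[n] = '\0'; printf("child got: %s\n", buf); }
        close(fds[0]);
        return 0;
    }
    /* parent: write into the pipe */
    close(fds[0]);
    const char *msg = "hello from parent";
    write(fds[1], msg, strlen(msg));
    close(fds[1]);
    wait(NULL);                      /* reap the child */
    return 0;
}
```

An anonymous pipe only works between related processes (here, parent and child); a named pipe created with mkfifo has a filesystem name, so unrelated processes can open it.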

Process Scheduling

  1. First-come, first-served (FCFS)
  2. Shortest-job-first (SJF)
  3. Priority scheduling
  4. Highest response ratio next (HRRN; a sketch follows this list)
  5. Round-robin
  6. Multilevel feedback queue scheduling
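
HRRN is easy to mix up in interviews, so here is a minimal sketch of how it picks the next job: response ratio = (waiting time + service time) / service time, so long waits raise a job's ratio and prevent starvation. Plain C; the job data is made up for illustration.

```c
#include <stdio.h>

/* Highest Response Ratio Next: ratio = (waiting + service) / service.
 * Short jobs are favored, but waiting raises the ratio, so long
 * jobs cannot starve. */
typedef struct { const char *name; double waiting, service; } Job;

int pick_hrrn(const Job *jobs, int n) {
    int best = 0;
    double best_ratio = 0.0;
    for (int i = 0; i < n; i++) {
        double ratio = (jobs[i].waiting + jobs[i].service) / jobs[i].service;
        if (ratio > best_ratio) { best_ratio = ratio; best = i; }
    }
    return best;
}

int main(void) {
    Job jobs[] = { {"A", 10, 5}, {"B", 1, 1}, {"C", 30, 20} };
    int k = pick_hrrn(jobs, 3);
    printf("run %s next\n", jobs[k].name);   /* A: (10+5)/5 = 3.0 */
    return 0;
}
```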

Communication between threads

Lock mechanism:

  1. Mutex
  2. Condition variable
  3. Read-Write Lock

Semaphore mechanism:

  1. Unnamed (anonymous) semaphores and named semaphores for threads

Signal mechanism:
similar to the signal-handling mechanism used between processes

The main purpose of inter-thread communication is synchronization; threads already share the process's data, so unlike processes they do not need a communication mechanism for exchanging data.
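
A minimal sketch of the first two lock mechanisms cooperating (POSIX threads in C, compiled with -pthread; the flag and names are illustrative): the mutex protects a shared flag, and the condition variable lets a thread sleep until the flag changes.

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
static int data_ready = 0;

static void *consumer(void *arg) {
    pthread_mutex_lock(&lock);
    while (!data_ready)                 /* loop guards against spurious wakeups */
        pthread_cond_wait(&ready, &lock);
    printf("consumer: data is ready\n");
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, consumer, NULL);
    pthread_mutex_lock(&lock);
    data_ready = 1;                     /* produce under the lock */
    pthread_cond_signal(&ready);        /* wake the waiting thread */
    pthread_mutex_unlock(&lock);
    pthread_join(t, NULL);
    return 0;
}
```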

### Operating system components

  1. Process management: essentially management of the processor's "time", that is, how to allocate the CPU rationally among tasks.
  2. Storage management: essentially management of storage "space", mainly referring to main memory.
  3. Device management: management of the hardware devices, including allocating input/output devices, starting them, and reclaiming them when operations complete.
  4. File management: also known as information management.
  5. Programming interface
  6. User interface

When does a switch between user mode and kernel mode happen? We usually use 64-bit systems; compared with 32-bit systems, what are the differences and advantages?

A switch from user mode to kernel mode happens in the following three situations:

  1. System calls (a sketch follows this list)
  2. Exceptions, such as page faults
  3. Peripheral interrupts: when a peripheral device finishes an operation the user requested, it sends an interrupt back to the CPU
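
For the first situation, even the humblest output ends in a trap into the kernel; a minimal sketch (POSIX C):

```c
#include <unistd.h>

int main(void) {
    /* write(2) is a system call: the CPU traps from user mode into
     * kernel mode, the kernel copies the buffer to the terminal
     * driver, then execution returns to user mode. */
    const char msg[] = "hello via a system call\n";
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);
    return 0;
}
```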

As for 64-bit versus 32-bit systems: 1) their addressing capacities differ; 2) they compute at different speeds, since a 64-bit CPU processes more bits per operation.

Pick a disk-arm scheduling algorithm you are familiar with and describe it briefly.

  1. First come, first served (FCFS): the idea is easy to grasp. Starting from the current head position, the requests in the queue are serviced strictly in arrival order. The advantage is simplicity; the disadvantage is that both the average and the total head movement can be very large.
  2. Shortest seek time first (SSTF): essentially a greedy algorithm (a short simulation follows this list). From the current head position, service the pending request on the nearest track next, then repeat from that track until every request has been serviced. Its performance is better than FCFS, but requests on tracks far from the current position may go unserviced for a long time, the "starvation" phenomenon, because access requests are generated dynamically and applications keep issuing requests for other, nearer tracks.
  3. SCAN: the aptly named elevator algorithm. The head first moves in one direction (say, from the outside inward), servicing the queued requests in order along the way; when the last request in that direction has been serviced, it reverses. Note that it reverses at the innermost request, not the innermost track: if the innermost track is number 0 but the innermost request is track 5, the head turns around after servicing track 5 rather than sweeping down to 0. Think of an elevator going down: once it knows nobody is waiting on the floors below, it goes no further.
  4. C-SCAN: circular SCAN, which fixes a problem with SCAN. Looking closely, when SCAN reverses at the innermost request, the tracks it passes for a long stretch afterwards have mostly just been serviced, since everything between the starting track and the innermost request was visited on the way in, so those tracks' new requests wait longest. C-SCAN therefore moves the head straight back to the outermost pending track after servicing the innermost request, always scanning in one direction; it is also called the one-way scan algorithm. The return step from the innermost request to the next track to service costs the absolute difference of the two track numbers.
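
A minimal simulation of SSTF (plain C; the request queue and starting track are made up for illustration):

```c
#include <stdio.h>
#include <stdlib.h>

/* Shortest Seek Time First: repeatedly service the pending request
 * closest to the current head position (a greedy choice). */
int main(void) {
    int req[] = {98, 183, 37, 122, 14, 124, 65, 67};
    int n = sizeof(req) / sizeof(req[0]);
    int head = 53, total = 0;

    for (int served = 0; served < n; served++) {
        int best = -1, best_dist = 1 << 30;
        for (int i = 0; i < n; i++) {
            if (req[i] < 0) continue;             /* already serviced */
            int d = abs(req[i] - head);
            if (d < best_dist) { best_dist = d; best = i; }
        }
        total += best_dist;
        head = req[best];
        printf("seek to %d\n", head);
        req[best] = -1;                           /* mark serviced */
    }
    printf("total head movement: %d\n", total);   /* 236 for this queue */
    return 0;
}
```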

The difference between processes and threads

  1. Scheduling: the thread is the basic unit of scheduling and dispatching; the process is the basic unit of resource ownership.
  2. Concurrency: not only can processes execute concurrently, multiple threads within one process can also execute concurrently.
  3. Resource ownership: a process is an independent unit that owns resources, while a thread owns essentially no system resources of its own, only the few essentials for running (a program counter, a set of registers, and a stack); however, it shares all the resources of the process it belongs to with that process's other threads. Processes do not share address spaces; threads share the address space of the process they belong to (a short demonstration follows this list).
  4. Overhead: when a process is created or destroyed, the system must allocate and reclaim its resources, so the overhead is significantly larger than for creating or destroying a thread.
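
A quick way to see point 3 in practice (POSIX C, a sketch under the assumption of a Unix-like system): after fork the child gets its own copy of the address space, so its writes are invisible to the parent, whereas a thread's writes would be visible.

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int counter = 0;   /* global variable in the process's address space */

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {          /* child: writes to its own private copy */
        counter = 100;
        return 0;
    }
    wait(NULL);
    /* Processes do not share address spaces: the child changed its
     * copy, so the parent still sees 0. Two threads of one process
     * would both see 100. */
    printf("parent sees counter = %d\n", counter);
    return 0;
}
```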

OS page replacement algorithms

OPT: optimal replacement. Replace the page whose next access lies farthest from the present. OPT requires the operating system to know future events, which is obviously impossible; it serves only as a yardstick for measuring other algorithms.

LRU: least recently used. Replace the page whose last use lies farthest in the past. By the principle of locality, the page accessed least recently is the least likely to be accessed next. Its performance is closest to OPT, but it is hard to implement: one can keep a stack of page labels, or stamp each page with its last access time, but the overhead is large.

FIFO: first in, first out. The resident pages are treated as a circular buffer and replaced in round-robin fashion. This is the simplest algorithm to implement; the implicit logic is to replace the page that has been resident in memory the longest. However, since some code or data is used at high frequency throughout a program's lifetime, such pages end up being swapped out and back in repeatedly.

Clock: the clock replacement algorithm associates a use bit with each page frame. When a page is first loaded into memory, or accessed again, its use bit is set to 1. Whenever a replacement is needed, scan for the first frame whose use bit is 0 and replace it; every frame encountered during the scan with use bit 1 has the bit cleared to 0 and the scan continues. If every frame's use bit is 1, the scan clears them all, wraps around, and replaces the first frame.
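
A minimal sketch of the clock algorithm's victim search (plain C; the resident pages and use bits are made up for illustration):

```c
#include <stdio.h>

#define NFRAMES 4

int page[NFRAMES]    = {7, 0, 1, 2};   /* page resident in each frame */
int use_bit[NFRAMES] = {1, 0, 1, 1};   /* set on load and on access   */
int hand = 0;                          /* the clock hand              */

/* Advance the hand until a frame with use bit 0 is found, clearing
 * use bits along the way; that frame is the victim. If all bits are
 * 1, the hand clears them all and wraps back to the first frame. */
int pick_victim(void) {
    for (;;) {
        if (use_bit[hand] == 0) {
            int victim = hand;
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
        use_bit[hand] = 0;             /* give the page a second chance */
        hand = (hand + 1) % NFRAMES;
    }
}

int main(void) {
    int v = pick_victim();
    printf("replace frame %d (page %d)\n", v, page[v]);  /* frame 1 */
    return 0;
}
```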

Operating system memory management

(Figure: hierarchy diagram of computer storage)

Memory Management

Memory management covers virtual addressing, address translation, memory allocation and reclamation, memory extension, memory sharing, and memory protection.

Contiguous allocation storage management

Contiguous allocation means giving the user program one contiguous region of memory. There are two schemes: single contiguous storage management and partitioned storage management.

Single contiguous storage management

In this scheme, memory is divided into two areas: a system area and a user area. The application is loaded into the user area and may use all of the space in that region. Its chief feature is simplicity, and it suits single-user, single-task operating systems; CP/M and DOS 2.0 and below used it. The biggest advantage of this approach is ease of management, but there are problems and deficiencies: a program that needs little memory wastes the rest of the region, and because the entire program must be loaded, even its rarely used parts occupy their share of memory.

Partition storage management

To support multiprogramming and time-sharing systems, in which several programs execute concurrently, partition storage management was introduced. It divides memory into a number of partitions of equal or unequal size; the operating system occupies one of them, and the remaining partitions are used by applications, each application taking one or several partitions. Partition storage management can support concurrency, but sharing memory across partitions is difficult.

Partition storage management introduces two new issues: internal fragmentation and external fragmentation.
Internal fragments are the unused space inside an allocated partition; external fragments are the free partitions left between occupied partitions (usually small ones), which are hard to put to use.

To implement partition storage management, the operating system must maintain a data structure: a partition table or a partition linked list. Each entry generally records a partition's start address, size, and status (whether it is allocated).

A technique often used with partition storage management is memory compaction.

Fixed partitioning

The characteristic of fixed partitioning is that memory is divided in advance into a number of contiguous partitions of fixed size. The partitions may all be equal in size, which suits only the concurrent execution of multiple instances of the same program (processing multiple objects of the same type). They may also differ in size: many small partitions, an appropriate number of medium ones, and a small number of large ones. A currently free partition of appropriate size is then allocated according to the program's size.

Advantages: easy to implement, low overhead.
Disadvantages: mainly two: internal fragmentation wastes space, and the fixed total number of partitions limits how many programs can execute concurrently.

Dynamic Partitioning

The characteristic of dynamic partitioning is that partitions are created dynamically: the loader allocates them according to the program's initial requirements, or they are allocated or resized during execution through system calls. Its advantage over fixed partitioning is that there is no internal fragmentation; but it introduces the other kind of fragment, external fragments. Dynamic partition allocation works by finding a free partition whose size is at least the request; if it is larger than required, it is split into two partitions, one of the requested size marked "busy" and the other holding the remainder marked "free". Partitions are usually allocated from the low end of memory toward the high end. When a dynamic partition is released, one issue needs attention: adjacent free partitions must be merged into one large free partition.

Here are several common partition allocation algorithm:

First fit (first-fit): search the partitions from the beginning, in their order in memory, and allocate the first partition that satisfies the request. The algorithm's allocation and release times are good, and larger free partitions tend to be preserved at the high end of memory. But as the low end keeps being split, more and more small partitions accumulate there, and the search cost grows with each allocation.

Next fit (circular first fit): search the partitions in memory order starting from the one after the last allocation (wrapping around to the beginning at the end), and allocate the first partition that satisfies the request. The algorithm's allocation and release times are good and the free partitions end up more evenly distributed, but larger free partitions are not easily preserved.

Best fit (best-fit): search all the partitions from the beginning in memory order, and allocate the free partition whose size differs least from the request. Taken individually, each external fragment left behind is small; taken as a whole, however, many external fragments accumulate. The advantage is that larger free partitions are preserved.

Worst fit (worst-fit): search the partitions from the beginning in memory order, and allocate the largest free partition. It leaves almost no small free partitions behind, so external fragments rarely form; but since large free partitions are not preserved, a process that later needs a large amount of memory may find its request hard to satisfy.
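
To make the mechanics concrete, here is a minimal first-fit sketch over a free-partition table (plain C; the table contents are made up, and a real allocator would also insert a busy entry for the allocated block):

```c
#include <stdio.h>

/* Partition table: each entry records start address, size, status.
 * First fit scans in memory order, takes the first large-enough free
 * partition, and splits off the remainder as a smaller free partition. */
typedef struct { int start, size, free; } Partition;

Partition table[8] = {
    {0,   100, 0},   /* occupied by the OS */
    {100, 300, 1},   /* free               */
    {400, 200, 0},   /* busy               */
    {600, 400, 1},   /* free               */
};
int nparts = 4;

int first_fit(int request) {
    for (int i = 0; i < nparts; i++) {
        if (!table[i].free || table[i].size < request) continue;
        int addr = table[i].start;
        table[i].start += request;    /* split: keep remainder free */
        table[i].size  -= request;
        if (table[i].size == 0) table[i].free = 0;
        return addr;
    }
    return -1;                        /* no partition fits */
}

int main(void) {
    printf("allocated at %d\n", first_fit(250));  /* 100, from the 300-word hole */
    return 0;
}
```

Swapping the scan order or the selection rule in `first_fit` yields next fit, best fit, or worst fit.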

The buddy system

Both fixed and dynamic partitioning have shortcomings. Fixed partitioning limits the number of active processes, and when process sizes do not match partition sizes, memory utilization is very low. Dynamic partitioning needs more complex algorithms, and reclaiming free partitions requires merging, so the system overhead is large. The buddy system is a compromise between these two schemes.
The buddy system requires that every partition, whether allocated or free, have a size of 2^k words, where k is an integer and 1 ≤ k ≤ m, where:

2^1 is the size of the smallest allocatable partition,

2^m is the size of the largest allocatable partition,

and usually 2^m is the size of the entire allocatable memory.
Suppose the system's available memory is 2^m words. When the system starts, the whole memory region is a single free partition of size 2^m. As the system runs, continual splitting may create a number of non-contiguous free partitions. These free partitions are classified by size, and for each class of equal-sized free partitions a separate doubly linked free list is kept. Free partitions of different sizes thus form k (0 ≤ k ≤ m) free-partition lists.

Allocation steps:
When a storage space of length n must be allocated to a process:

First compute a value i such that 2^(i-1) < n ≤ 2^i,
then search the free list of partitions of size 2^i.
If a partition is found there, allocate that free partition to the process.
Otherwise, the free partitions of size 2^i are exhausted, so search the free list of partitions of size 2^(i+1).

If a free partition of size 2^(i+1) exists, split it into two equal partitions; these two partitions are called a pair of buddies. One of them is used for the allocation, and the other is inserted into the free list of partitions of size 2^i.

If no free partition of size 2^(i+1) exists either, look for one of size 2^(i+2); if found, split it twice:

the first split produces two partitions of size 2^(i+1), one kept for the allocation and one inserted into the free list of size 2^(i+1);

the second split divides the partition kept for allocation into two partitions of size 2^i, one used for the allocation and one inserted into the free list of size 2^i.

If still nothing is found, continue with partitions of size 2^(i+3), and so on.
In the worst case, a free partition of size 2^k may have to be split k times to obtain the required partition.

Just as one allocation may require several splits, one release may require several merges. For example, when a free partition of size 2^i is reclaimed, if its buddy of size 2^i is already free, the two are merged into a free partition of size 2^(i+1); if that partition's buddy of size 2^(i+1) is also free, merging continues into a partition of size 2^(i+2), and so on.
In the buddy system, the time performance of allocation and release depends on the time spent locating free partitions and splitting and merging them. Compared with the methods described above, because this algorithm must merge free partitions on release, its time performance is worse than the classified-search algorithms but better than sequential search, while its space performance is far better than classified search and slightly worse than sequential search. It should be noted that today's operating systems generally use the paging- and segmentation-based virtual memory mechanisms described below, which are more rational and efficient than the buddy algorithm; nevertheless, in multiprocessor systems the buddy system remains an effective method of allocating and releasing memory and is widely used.
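
A minimal sketch of two core buddy-system calculations (plain C): the order i such that 2^(i-1) < n ≤ 2^i, and the buddy's address, obtained by flipping bit i of the block offset, which is what makes the merge check cheap. Block offsets are assumed to be relative to the start of the managed region and aligned to the block size.

```c
#include <stdio.h>

/* Smallest i such that 2^(i-1) < n <= 2^i: the free list that a
 * request of length n is served from. */
int order_of(unsigned n) {
    int i = 0;
    unsigned size = 1;
    while (size < n) { size <<= 1; i++; }
    return i;
}

/* The buddy of the block at offset addr with size 2^i is found by
 * flipping bit i of the offset (valid because blocks are aligned
 * to their own size). */
unsigned buddy_of(unsigned addr, int i) {
    return addr ^ (1u << i);
}

int main(void) {
    int i = order_of(100);
    printf("n=100 -> order %d (block of %d words)\n", i, 1 << i); /* 7, 128 */
    printf("buddy of block at 256, order 7: %u\n", buddy_of(256, 7)); /* 384 */
    return 0;
}
```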

Paged and segmented storage management

In the storage management methods above, the space allocated to a process is contiguous, and the addresses used are physical addresses. If a process is allowed to be scattered across many non-contiguous regions, memory compaction can be avoided and fragmentation reduced. Based on this idea, introducing logical addresses for processes separates the process address space from the actual storage space and makes storage management more flexible. The two basic concepts, address space and storage space, are defined as follows:

Address space: the target program obtained by compiling the source program exists within a bounded range of addresses; this range is called the address space. The address space is the set of logical addresses.

Storage space: the set of physical units in main memory that store information; the numbers of these units are the physical addresses. The storage space is the set of physical addresses.

Depending on the basic unit used in allocation, discrete allocation is managed in one of three ways: paged storage management, segmented storage management, and segmented-paged storage management, the last being a combination of the first two.

Paged storage management

The program's logical address space is divided into pages of fixed size, and physical memory is divided into page frames of the same size. When the program is loaded, any page can be placed into any frame in memory, and the frames need not be contiguous, which achieves discrete allocation. The method needs CPU hardware support to map logical addresses to physical addresses. Under paged storage management, an address has two parts: the first is the page number, the second the within-page address w (the offset), as shown in Figure 4:
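
For a concrete feel for this two-part address structure: with a page size of 2^k, the split is just a shift and a mask. A minimal sketch (plain C, assuming 4 KB pages; the address value is made up):

```c
#include <stdio.h>

#define PAGE_SHIFT  12                      /* 4 KB pages: 2^12 */
#define PAGE_SIZE   (1u << PAGE_SHIFT)
#define OFFSET_MASK (PAGE_SIZE - 1)

int main(void) {
    unsigned logical = 0x3A7F;                 /* an example logical address */
    unsigned page    = logical >> PAGE_SHIFT;  /* high bits: page number     */
    unsigned offset  = logical & OFFSET_MASK;  /* low bits: offset w         */
    /* The hardware would now look up `page` in the page table to get a
     * frame number, then concatenate the frame number with the offset. */
    printf("page %u, offset 0x%X\n", page, offset);  /* page 3, offset 0xA7F */
    return 0;
}
```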

The advantages of paged management are:

1) There is no external fragmentation, and each internal fragment is smaller than one page.

2) The greatest advance over the management schemes discussed above: a program need not be stored contiguously.

3) It is easy to change the amount of space a program occupies (mainly meaning that as the program runs and dynamically generated data grows, the required address space can grow accordingly).

The disadvantage: the program must be loaded into memory in full; without enough memory, the program cannot execute.

Segmented storage management

In segmented storage management, the program's address space is divided into a number of segments, so each process has a two-dimensional address space. In the dynamic partitioning scheme introduced earlier, the system allocates one contiguous memory area for the whole process; in a segmented system, each segment is allocated one contiguous partition, but the segments of a process may be stored non-contiguously in different partitions of memory. When the program is loaded, the operating system allocates the memory each segment needs; the segments need not be contiguous, and physical memory is managed with the dynamic partitioning method.

When allocating physical memory for a segment, first fit, next fit, best fit, and similar methods can be used.

When reclaiming the space occupied by a segment, take care to merge the reclaimed space with adjacent free space.

Segmented storage management also needs hardware support to map logical addresses to physical addresses.
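
A sketch of the mapping that hardware performs for a two-part (segment, offset) address (plain C; the segment table contents are made up for illustration):

```c
#include <stdio.h>

/* Each segment-table entry holds the segment's base address in
 * physical memory and its length; the offset is checked against
 * the length before the base is added. */
typedef struct { unsigned base, limit; } SegEntry;

SegEntry seg_table[] = {
    {0x4000, 0x1000},   /* segment 0: code */
    {0x9000, 0x0400},   /* segment 1: data */
};

long translate(unsigned seg, unsigned offset) {
    if (offset >= seg_table[seg].limit)
        return -1;                   /* out of bounds: address error */
    return (long)seg_table[seg].base + offset;
}

int main(void) {
    printf("(1, 0x100) -> %#lx\n", translate(1, 0x100));  /* 0x9100 */
    printf("(1, 0x800) -> %ld\n",  translate(1, 0x800));  /* -1: beyond limit */
    return 0;
}
```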

Through segmentation a program is divided into several modules, such as a code segment, a data segment, and shared segments:

– the modules can be written and compiled separately
– different types of segments can be given different kinds of protection
– sharing can be done at segment granularity, including code sharing through dynamic linking

The advantage of this arrangement is exactly this modularity: separate authoring and compilation of source files, per-segment protection, and per-segment sharing.

Overall, the advantages of segmented storage management are: no internal fragmentation, and external fragmentation can be eliminated by memory compaction; it also makes memory sharing easy to implement. Its disadvantage is the same as that of paged storage management: the process must be loaded into memory in full.

Differences between paged and segmented management

Paged and segmented systems have much in common: both use discrete allocation, and both translate addresses through an address-mapping mechanism. Conceptually, however, they differ in several ways:
1) Need: a page is a physical unit of information; paging exists to implement discrete allocation, reduce memory fragmentation, and raise memory utilization. In other words, paging answers a need of system management, not a need of the user. A segment is a logical unit of information, containing a group of information whose meaning is relatively complete; the purpose of segmentation is to serve the user's needs better.
An instruction or an operand may straddle the boundary between two pages, but never the boundary between two segments.
2) Size: the page size is fixed and decided by the system, and the division of the logical address into page number and within-page address is carried out by hardware. Segment lengths are not fixed; they depend on the program the user writes and are usually determined by the compiler, according to the nature of the information, while compiling the source program.
3) Form of the logical address: a paged address space is one-dimensional, a single linear address space; the programmer needs only one identifier to denote an address. A segmented job's address space is two-dimensional: to identify an address the programmer must give both a segment name and a within-segment address.
4) Segments are larger than pages, so segment tables are shorter than page tables, which shortens lookup time and speeds up memory access.

Thread synchronization mechanisms

  1. Critical section: serializes access by multiple threads to a shared resource or a section of code. It is fast and suited to controlling access to data. At any moment only one thread is allowed to access the shared resource; if several threads attempt access, then once one has entered, the others are suspended until the thread inside the critical section leaves; only after the critical section is released may the other threads contend to enter.
  2. Mutex: uses the mutual-exclusion mechanism. Only the thread that owns the mutex may access the shared resource; because there is only one mutex object, the shared resource is guaranteed not to be accessed by several threads at once. A mutex can make exclusive sharing of a resource safe not only within one application but also across different applications.
  3. Semaphore: allows multiple threads to access the same resource at the same time, but limits the maximum number of threads that may access the resource simultaneously (a POSIX sketch follows this list).
  4. Event: keeps threads synchronized by way of notification operations, and also makes it convenient to implement operations that compare the priorities of multiple threads.
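
The four mechanisms above are described in Win32 terms; as a POSIX counterpart of item 3, a minimal sketch using an unnamed semaphore to cap concurrent access at 3 threads (C, compiled with -pthread; thread count and cap are illustrative):

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

/* The semaphore's initial value is the maximum number of threads
 * allowed to use the resource at the same time. */
static sem_t slots;

static void *worker(void *arg) {
    sem_wait(&slots);                 /* acquire a slot (blocks at 0) */
    printf("thread %ld using the resource\n", (long)arg);
    sem_post(&slots);                 /* release the slot */
    return NULL;
}

int main(void) {
    pthread_t t[5];
    sem_init(&slots, 0, 3);           /* at most 3 concurrent users */
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&slots);
    return 0;
}
```

Initializing the semaphore to 1 degenerates it into a mutex, which is why the mutex is sometimes described as a binary semaphore.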
