Operating system study notes

Basic concepts: deadlock, interrupts, memory management (partition placement strategies, paged storage management, page replacement algorithms, segmented storage management), process synchronization and mutual exclusion, the distinction between processes and threads, the producer-consumer problem, I/O multiplexing, and so on.

Resource management functions of an operating system
1.  Processor allocation

  1. Define the process scheduling policy
  2. Provide the process scheduling algorithm
  3. Allocate processors to processes

2. Storage management

  1. Storage allocation and storage independence
  2. Storage protection
  3. Storage expansion

3. Device management

  1. Device independence
  2. Device allocation: dedicated, shared, and virtual allocation techniques
  3. Device transfer control

4. Software resource management

In the early 1960s, hardware made progress in two aspects: one was the introduction of channels; the other was the emergence of interrupt technology.

A channel is a special-purpose processing component that can control one or more peripherals and is responsible for transferring information between external devices and main memory. Once started, it runs independently of the CPU, so the CPU and channels can operate in parallel, and the CPU and the various external devices can also operate in parallel. An interrupt means that when the host receives a certain signal (such as an I/O completion signal), it immediately suspends its current work and turns to handle that event; when the event has been handled, the host returns to the point where it left off and continues working.

The main features of an operating system are concurrency and sharing.

Concurrent execution of programs means that several programs are running in the system at the same time, with their executions overlapping in time: one program segment has not yet finished when the execution of another begins. Even if the overlap is only a small part, these program segments are said to execute concurrently.

Process control primitives
1. Cancel primitive
2. Create primitive
3. Block primitive
4. Wakeup primitive

  • Ready state: the process has obtained all resources except the CPU.
  • Running state: the process has been selected by the scheduling module and holds control of the central processing unit.
  • Waiting state: the process is waiting for some event to occur (such as the completion of an input/output operation).

In the broad sense, process synchronization refers to any restriction imposed on the temporal order of process operations.

A special case of these synchronization rules is that certain operations may not be executed at the same time; this rule is called mutual exclusion.

A resource that only one process at a time is allowed to use is called a critical resource.

In an operating system, while one process is accessing a certain storage area, other processes are not allowed to read or modify its contents; otherwise unpredictable errors will occur. This mutually restrictive relationship between processes is usually called mutual exclusion.

The so-called synchronization is that concurrent processes may need to wait for each other or exchange messages at some key points. This mutually restricted waiting and exchange of messages is called process synchronization .

Pointers and references
Similarity: both are, in essence, addresses.
Differences:
1. A pointer is its own object pointing at a piece of memory; its content is the address of the memory pointed to
2. A reference is an alias for an existing piece of memory
3. In terms of memory allocation, the program allocates a memory area for a pointer variable, while a reference needs no storage of its own
4. A reference can be initialized only once, at its definition, and cannot be rebound afterwards; a pointer can be changed to point elsewhere
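The rules above can be illustrated with a short C++ sketch (variable names are illustrative):

```cpp
#include <cassert>

// Demonstrates the difference: a pointer can be reseated, a reference cannot.
int demo() {
    int a = 1, b = 2;
    int* p = &a;       // p holds the address of a
    int& r = a;        // r is an alias of a, bound once at initialization
    p = &b;            // legal: the pointer is reseated to point at b
    *p = 3;            // writes through the pointer: b becomes 3
    r = b;             // NOT a rebinding: assigns b's value (3) into a
    return a * 10 + b; // a == 3 (changed via the reference), b == 3 (via the pointer)
}
```

Note that `r = b` changes `a`, not where `r` "points": once bound, every use of the reference is a use of the original object.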

Thread safety means that multiple threads accessing the same code will not produce undefined results. Writing thread-safe code relies on thread synchronization .

Common ways of thread synchronization:
1. Critical section
2. Event
3. Mutex (very similar to a critical section, except that a mutex may be used across processes, while a critical section is restricted to threads of the same process)
4. Semaphore
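A minimal sketch of mutex-based synchronization, using C++'s standard `std::mutex` (the counter and iteration count are illustrative):

```cpp
#include <mutex>
#include <thread>

// Two threads increment a shared counter; the mutex makes each
// increment a critical section, so no update is lost.
int count_to(int iterations_per_thread) {
    int counter = 0;
    std::mutex m;
    auto work = [&] {
        for (int i = 0; i < iterations_per_thread; ++i) {
            std::lock_guard<std::mutex> lock(m);  // enter critical section
            ++counter;                            // protected update
        }                                         // lock released at scope exit
    };
    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    return counter;
}
```

Without the lock, the two read-modify-write sequences could interleave and the final count would be unpredictable.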

Producer-consumer problem
The producer-consumer problem is an abstract description of a synchronization problem in which each process in a computer system can consume (use) or produce (release) a certain type of resource.

Two synchronization semaphores need to be set: one, empty, counts the empty buffers and is initialized to the bounded buffer's size n; the other, full, counts the full buffers and is initialized to 0. Since the bounded buffer is a critical resource and must be used exclusively, a mutual-exclusion semaphore mutex also needs to be set.


Producer:
while (production not finished)
{
    ......
    p(empty);
    p(mutex);
    put a product into the bounded buffer;
    v(mutex);
    v(full);
}

Consumer:
while (consumption not finished)
{
    ......
    p(full);
    p(mutex);
    take a product from the bounded buffer;
    v(mutex);
    v(empty);
}
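The p/v sketch above can be made runnable in C++. This is a hedged sketch that substitutes a condition variable for the counting semaphores (a common C++ substitute); the class and function names are illustrative:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

// Bounded buffer of capacity n; produce/consume mirror the p/v structure:
// wait for space (empty) or data (full), then update under the mutex.
class BoundedBuffer {
    std::queue<int> buf;
    std::size_t cap;
    std::mutex m;
    std::condition_variable not_full, not_empty;
public:
    explicit BoundedBuffer(std::size_t n) : cap(n) {}
    void produce(int item) {
        std::unique_lock<std::mutex> lk(m);
        not_full.wait(lk, [&] { return buf.size() < cap; }); // like p(empty)
        buf.push(item);                                      // critical section
        not_empty.notify_one();                              // like v(full)
    }
    int consume() {
        std::unique_lock<std::mutex> lk(m);
        not_empty.wait(lk, [&] { return !buf.empty(); });    // like p(full)
        int item = buf.front();
        buf.pop();
        not_full.notify_one();                               // like v(empty)
        return item;
    }
};

// One producer and one consumer move `total` items; returns the sum consumed.
long long run_demo(int total) {
    BoundedBuffer bb(4);
    long long sum = 0;
    std::thread prod([&] { for (int i = 1; i <= total; ++i) bb.produce(i); });
    std::thread cons([&] { for (int i = 0; i < total; ++i) sum += bb.consume(); });
    prod.join();
    cons.join();
    return sum;
}
```

Holding the mutex only while touching the queue, and waiting on the two conditions separately, plays the roles of mutex, empty, and full in the semaphore version.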

IPC: interprocess communication

Eight ways of interprocess communication:
1. Anonymous pipe (pipe): a pipe is a half-duplex communication mechanism; data flows in only one direction, and it can be used only between related processes, typically a parent and its child.
2. High-level pipe: another process is started from within the current program as a new process, making it a child of the current process; this is called the high-level pipe method.
3. Named pipe: also a half-duplex communication mechanism, but it allows communication between unrelated processes.
4. Message queue: a linked list of messages, stored in the kernel and identified by a message queue identifier. Message queues overcome the drawbacks that signals carry little information and that pipes carry only unformatted byte streams with limited buffer size.
5. Semaphore: a counter that can be used to control access by multiple processes to a shared resource. It is often used as a locking mechanism to prevent other processes from accessing a shared resource while one process is using it, so it serves mainly as a means of synchronization between processes and between threads of the same process.
6. Signal: a relatively complex communication method used to notify the receiving process that some event has occurred.
7. Shared memory: a mapped region of memory that can be accessed by other processes; it is created by one process but accessible to many. Shared memory is the fastest IPC method, designed specifically to address the inefficiency of the other mechanisms. It is often used together with other mechanisms, such as semaphores, to achieve synchronization and communication between processes.
8. Socket: also an interprocess communication mechanism; unlike the others, it can be used for communication between processes on different machines.
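Method 1 above, an anonymous pipe between parent and child, can be sketched with the POSIX API (assumes a Unix-like system; message text is illustrative):

```cpp
#include <cstring>
#include <string>
#include <sys/wait.h>
#include <unistd.h>

// Anonymous pipe between related processes: the child writes, the parent
// reads. Half-duplex, so each side closes the end it does not use.
std::string pipe_demo() {
    int fds[2];
    if (pipe(fds) != 0) return "";
    pid_t pid = fork();
    if (pid == 0) {                    // child: writer
        close(fds[0]);                 // close unused read end
        const char* msg = "hello from child";
        write(fds[1], msg, strlen(msg));
        close(fds[1]);
        _exit(0);
    }
    close(fds[1]);                     // parent: close unused write end
    char buf[64] = {0};
    ssize_t n = read(fds[0], buf, sizeof(buf) - 1);
    close(fds[0]);
    waitpid(pid, nullptr, 0);          // reap the child
    return std::string(buf, n > 0 ? static_cast<std::size_t>(n) : 0);
}
```

The pipe must be created before `fork()` so both processes inherit the same pair of file descriptors.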

Resource allocation policies:
1. First-come, first-served
2. Priority scheduling

If, among two or more concurrent processes, each process holds some resource while waiting for another process to release the resources it currently holds, and none can make progress until that state changes, the group of processes is said to be deadlocked.

Necessary conditions for deadlock:
1. Mutual exclusion: the resources involved are non-shareable, i.e. only one process can use a resource at a time.
2. No preemption: a resource acquired by a process cannot be forcibly taken away before the process has finished using it.
3. Hold and wait (partial allocation): a process requests only part of the resources it needs at a time, and while waiting for new resources it continues to hold those already allocated to it.
4. Circular wait: there exists a circular chain of processes in which the resources already held by each process are requested by the next process in the chain.

Deadlock avoidance:
1. Ordered resource allocation
Every resource in the system is given a unique number, and all allocation requests must be made in ascending order of those numbers, which breaks the circular-wait condition.
2. Banker's algorithm
When a new process enters the system, it must declare its maximum demand for instances of each resource type; this may not exceed the system's total of each resource. When a process requests a group of resources, the algorithm checks the requester's maximum remaining demand for each resource type: if the quantities of each resource currently available in the system can satisfy that maximum demand, the request is granted; otherwise the process must wait until other processes release enough resources. In other words, resources are allocated to a requester only when the requester is guaranteed to be able to return all the resources it has requested within a finite time.
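The safety check at the heart of the banker's algorithm can be sketched as follows (matrix layout is the usual Available/Allocation/Need formulation; names are illustrative):

```cpp
#include <vector>

// Safety check: with the current Available vector, can every process
// finish in some order, each first acquiring its remaining Need and
// then releasing everything it holds?
bool is_safe(std::vector<int> available,
             const std::vector<std::vector<int>>& allocation,
             const std::vector<std::vector<int>>& need) {
    std::size_t n = allocation.size(), m = available.size();
    std::vector<bool> finished(n, false);
    for (std::size_t done = 0; done < n; ) {
        bool progressed = false;
        for (std::size_t i = 0; i < n; ++i) {
            if (finished[i]) continue;
            bool can_run = true;
            for (std::size_t j = 0; j < m; ++j)
                if (need[i][j] > available[j]) { can_run = false; break; }
            if (can_run) {   // process i can finish: it releases its resources
                for (std::size_t j = 0; j < m; ++j)
                    available[j] += allocation[i][j];
                finished[i] = true;
                ++done;
                progressed = true;
            }
        }
        if (!progressed) return false;  // no process can finish: unsafe state
    }
    return true;  // a safe sequence exists
}
```

A request is granted only if tentatively applying it still leaves the system in a safe state by this check.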

Job scheduling algorithms:
1. First-come, first-served scheduling
2. Shortest-job-first scheduling
3. Highest-response-ratio-next scheduling
    Response ratio = response time / execution time
    The response time is the job's waiting time after entering the system plus its estimated execution time, i.e. the turnaround time; therefore the response ratio can be written as:
    Response ratio = 1 + waiting time / execution time
4. Priority scheduling
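The response-ratio formula and the "pick the highest" step can be sketched directly (function names are illustrative):

```cpp
#include <cstddef>
#include <vector>

// Response ratio = 1 + waiting time / execution time (HRRN scheduling).
double response_ratio(double waiting, double execution) {
    return 1.0 + waiting / execution;
}

// Index of the next job under highest-response-ratio-next.
std::size_t hrrn_pick(const std::vector<double>& waiting,
                      const std::vector<double>& execution) {
    std::size_t best = 0;
    for (std::size_t i = 1; i < waiting.size(); ++i)
        if (response_ratio(waiting[i], execution[i]) >
            response_ratio(waiting[best], execution[best]))
            best = i;
    return best;
}
```

Note how the ratio favors short jobs (small execution time) while still aging long waiters (growing waiting time), which is the point of HRRN.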

Process scheduling:
1. Process priority-number scheduling
The process ready queue must be ordered by process priority; the process with the highest priority is at the head of the queue and is the first to be dispatched.
2. Round-robin scheduling

A program may consist of a code segment, data segment, stack segment, special segments, and so on. Determining the address of an instruction or operand in the linear address space requires two pieces of information: the segment in which it resides, and its offset within that segment.

The phenomenon of frequent page replacement between main memory and secondary storage that causes a sharp drop in system efficiency is called thrashing.

Basic partition placement strategies:
1. First fit
2. Best fit
3. Worst fit
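First fit, the first strategy above, can be sketched over a list of free holes kept in address order (the Hole struct and sizes are illustrative):

```cpp
#include <cstddef>
#include <vector>

struct Hole { std::size_t start, size; };

// First-fit placement: scan the hole list in address order and allocate
// from the first hole large enough; returns the start address, or -1.
long long first_fit(std::vector<Hole>& holes, std::size_t request) {
    for (auto& h : holes) {
        if (h.size >= request) {
            std::size_t addr = h.start;
            h.start += request;   // shrink the hole from its front
            h.size  -= request;
            return static_cast<long long>(addr);
        }
    }
    return -1;                    // no hole fits
}

// Exercise the allocator: a too-large request fails, then two fits succeed.
bool first_fit_demo() {
    std::vector<Hole> holes = {{0, 100}, {200, 50}};
    return first_fit(holes, 120) == -1     // nothing big enough
        && first_fit(holes, 80)  == 0      // taken from the first hole
        && first_fit(holes, 30)  == 200;   // first hole now too small
}
```

Best fit and worst fit differ only in scanning all holes and choosing the smallest or largest adequate one instead of the first.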

Page replacement policies:
When the demand-paging routine needs to bring in a page but all of the main-memory blocks allocated to the job are already in use, one of the job's pages already in main memory must be evicted.
1. Optimal algorithm (OPT)
2. First-in, first-out algorithm (FIFO)
3. Least-recently-used algorithm (LRU)
When a page must be replaced, the page that has gone unused for the longest time is evicted
4. Least-frequently-used algorithm (LFU)
The least-frequently-used algorithm evicts the page with the fewest recent references
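LRU, algorithm 3 above, can be simulated with a recency-ordered list; the reference string in the usage note is a common textbook example:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Simulate LRU page replacement with `frames` main-memory blocks and
// return the number of page faults. The vector keeps resident pages
// ordered from least recently used (front) to most recently used (back).
int lru_faults(const std::vector<int>& refs, std::size_t frames) {
    std::vector<int> memory;   // front = least recently used
    int faults = 0;
    for (int page : refs) {
        auto it = std::find(memory.begin(), memory.end(), page);
        if (it != memory.end()) {
            memory.erase(it);                  // hit: refresh its recency
        } else {
            ++faults;                          // fault: page must be loaded
            if (memory.size() == frames)
                memory.erase(memory.begin());  // evict least recently used
        }
        memory.push_back(page);                // now the most recently used
    }
    return faults;
}
```

For the reference string 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1 with 3 frames this yields 12 faults.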

Partitioned storage management is prone to fragmentation; in a paged system, the contents stored in one page, or in several pages with consecutive page numbers, generally do not form a logically complete unit of information.

In a segmented system, a job consists of several logical segments, for example a code segment, a data segment, and a stack segment; a segment is a naturally delimited set of logically complete information in the program.

Segment address translation is performed through a segment table, which consists of a number of entries, each describing one segment; logically an entry includes the segment number, segment length, and segment base address.

The address translation processes of segmented and paged systems are very similar, but a paged system has a one-dimensional address structure while a segmented system has a two-dimensional one. Pages in a paged system and segments in a segmented system differ fundamentally, mainly in the following respects:
1. Paging implements a physical division of storage space, while segmentation implements a logical division of the program's address space;
2. Pages are of fixed, equal size (the page size is determined by the number of bits in the w field); segments in a segmented system are of variable, unequal length, decided by the user at programming time;
3. Pages are invisible to the user, while segments are visible to the user;
4. Splitting a program address into page number p and page offset w is a hardware function, and an overflow of the w field automatically carries into the page number; splitting a program address into segment number s and segment offset w is a logical function, and an overflow of the w field causes an out-of-bounds memory fault (rather than being added to the segment number).
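The hardware split of a program address into p and w described in point 4 is a pure bit operation; a sketch, with the page size (2^offset_bits) chosen for illustration:

```cpp
#include <cstdint>
#include <utility>

// Split a program address into page number p and page offset w, for a
// page size of 2^offset_bits bytes, exactly as the hardware does.
std::pair<std::uint32_t, std::uint32_t>
split_page_address(std::uint32_t addr, unsigned offset_bits) {
    std::uint32_t w = addr & ((1u << offset_bits) - 1);  // low bits: offset
    std::uint32_t p = addr >> offset_bits;               // high bits: page number
    return {p, w};
}
```

With 4 KiB pages (12 offset bits), address 0x3456 splits into page 3 and offset 0x456; adding to an address near a page's end simply carries into p, which is why the paged address space behaves as one-dimensional.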

In segment-page storage management the user address space is divided into segments, and each segment is further divided into a number of pages of equal size, so the address structure consists of three parts: segment number, page number within the segment, and offset within the page. The user still uses a segment number and a relative address within the segment; the address translation mechanism automatically interprets the high-order bits of the intra-segment relative address as the page number within the segment and the remaining low-order bits as the offset within the page. The smallest unit of the user address space is thus not the segment but the page, and main memory is divided by page size and loaded page by page.

Device independence means that the devices a user names when writing a program are independent of the devices actually used; that is, the user program uses only logical device names.
1. A program should be independent of the specific devices of each type allocated to it;
2. A program should, as far as possible, be independent of the type of I/O device it uses

Buffering is a common means of smoothing the transfer of information between two devices of different speeds.

A buffer register implements buffering in hardware and has a small capacity; a software buffer is an area of memory used to hold I/O data temporarily during I/O operations. Buffering was introduced to solve the mismatch between the speed of the central processing unit and the speed of I/O devices.

Reasons for using buffering:
1. To cope with the speed difference between the producer and consumer of a data stream (double buffering)
2. To reconcile differences in the sizes of data transfers
3. To support copy semantics for application I/O

Three general buffering techniques provide buffering services:
1. Double buffering
2. Ring (circular) buffering
3. Buffer pools
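Technique 2, the ring buffer, reuses fixed storage by wrapping its indices; a minimal sketch (capacity and element type are illustrative):

```cpp
#include <cstddef>
#include <vector>

// Fixed-capacity ring buffer: head/tail indices wrap around modulo the
// capacity, so the same storage is reused without shifting elements.
class RingBuffer {
    std::vector<int> data;
    std::size_t head = 0, tail = 0, count = 0;
public:
    explicit RingBuffer(std::size_t cap) : data(cap) {}
    bool put(int v) {
        if (count == data.size()) return false;   // full: producer must wait
        data[tail] = v;
        tail = (tail + 1) % data.size();
        ++count;
        return true;
    }
    bool get(int* out) {
        if (count == 0) return false;             // empty: consumer must wait
        *out = data[head];
        head = (head + 1) % data.size();
        --count;
        return true;
    }
};

// Exercise: fill to capacity, observe rejection, then drain in FIFO order.
bool ring_demo() {
    RingBuffer rb(2);
    int x;
    return rb.put(1) && rb.put(2) && !rb.put(3)
        && rb.get(&x) && x == 1
        && rb.put(3)
        && rb.get(&x) && x == 2
        && rb.get(&x) && x == 3
        && !rb.get(&x);
}
```

Double buffering is the two-slot special case; a buffer pool generalizes this to a shared set of buffers handed out on demand.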

I/O device allocation algorithms:
1. First-come, first-served
2. Highest priority first

In a multi-job system, so that the processes of all jobs can share the system's peripheral devices, the devices must be allocated sensibly; the common device allocation techniques are dedicated allocation, shared allocation, and virtual allocation.

A dedicated device is one used exclusively by a single job for its entire run.
Shared devices use shared allocation, i.e. dynamic allocation: when a process requests the resource, the device management module allocates it, and the process returns it immediately after use.

To overcome the drawbacks of dedicated devices, the operating system provides simultaneous peripheral operations on-line, also known as SPOOLing. Before a job executes, its information is read in advance through the dedicated device into a specific storage area on secondary storage; this is called pre-input. Thereafter, when the executing job needs data it does not start the dedicated device again; it simply reads the data from disk. Likewise, during execution the job does not drive the dedicated device directly to output data; it writes its output to disk, and after the job finishes, the operating system organizes the actual output.

I/O device control methods fall into four categories:
1. Polled (programmed) I/O
2. Interrupt-driven I/O
3. Channel I/O
4. DMA
