Operating System Written Exam and Interview Highlights

Outline

1. What is an operating system
2. Processes and threads
3. Memory management
4. Scheduling
5. Critical sections and resolving critical-section conflicts
6. Seven ways of doing IPC
7. Causes of deadlock and how to handle it
8. The difference between inter-process synchronization and mutual exclusion; thread synchronization methods
9. Common process scheduling algorithms
10. Basic characteristics of real-time systems

Reference posts:
https://blog.csdn.net/xiongluo0628/article/details/81461053
https://blog.csdn.net/FanceFu/article/details/79357048
https://blog.csdn.net/qq_38998213/article/details/87899231
https://blog.csdn.net/youngchang06hpu/article/details/8009947

1. What is an operating system?

An operating system is the bridge between computer hardware and the user: it is the program that manages the computer's hardware and software resources.

2. Processes and Threads

Process:
A process is one execution of a program with a specific function, operating on a set of data. A process is the operating system's independent unit of resource allocation and scheduling. It is a product of multiprogramming, introduced to improve the operating system's efficiency and achieve parallelism in the macro sense.

Thread:
A thread is an entity within a process and is **the basic unit of CPU scheduling and dispatch**; it is a basic unit, smaller than a process, that can run independently. A thread itself owns essentially no system resources, only the few resources essential for running (such as a program counter, a set of registers, and a stack), but it shares all the resources owned by its process with the other threads of that process.

Relationship and differences between processes and threads:

Relationship:
A thread belongs to a specific process; a process can create and destroy multiple threads.
Multiple threads executing concurrently can share the process's data, but each has its own independent stack space, execution context, and program counter.

The differences:
A process is the unit of resource allocation, while a thread is the unit of CPU scheduling; processes have independent address spaces, while the threads of one process share its address space and resources. (The comparison table appeared here as an image in the original post.)

Summary:
1. Threads have low overhead; processes have high overhead.
2. A program has at least one process, and a process has at least one thread.
3. Threads can be created and destroyed as the program requires, but if any one thread crashes, its whole process crashes with it.

3. Memory Management

1. Virtual memory (best suited to managing large objects or arrays of structures):
Each program has its own address space, which is divided into blocks; each block is called a page. Pages are mapped to physical memory, but they need not be mapped to contiguous physical memory, and not all pages need to be in physical memory at once.
When a program references a part of its address space that is in physical memory, the hardware performs the necessary mapping immediately. When a program references a part of its address space that is not in physical memory, the operating system loads the missing portion into physical memory and re-executes the failed instruction.
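The translate-or-fault behavior described above can be sketched as a toy model. The page size, the dictionary page table, and the fault handler are illustrative assumptions, not a real MMU:

```python
# Toy model of virtual-to-physical address translation with demand paging.
PAGE_SIZE = 4096

# virtual page number -> physical frame number (only some pages are resident)
page_table = {0: 7, 2: 3}

def load_from_disk(vpn):
    # Stand-in for the OS paging in the missing page; pretend a free frame
    # was chosen deterministically for illustration.
    return 10 + vpn

def translate(vaddr):
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in page_table:
        # Page fault: the OS loads the page, updates the table, then retries.
        page_table[vpn] = load_from_disk(vpn)
    return page_table[vpn] * PAGE_SIZE + offset

print(translate(100))                # vpn 0 is resident in frame 7
print(translate(3 * PAGE_SIZE + 5))  # vpn 3 faults, then resolves
```

The second call models the "re-execute the failed instruction" step: after the fault handler fills in the mapping, the same lookup succeeds.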

2. Memory-mapped files (best suited to managing a large data stream, typically from a file, and to sharing data between multiple processes running on a single machine):
A memory-mapped file maps a file into memory. Win32 provides a function (CreateFileMapping) that allows an application to map a file into a process. Memory-mapped files are somewhat similar to virtual memory: a region of the address space is reserved through the memory-mapped file and physical storage is committed to that region, except that the physical storage comes from a file that already exists on disk, and the file must be mapped before it can be operated on. When working through a memory-mapped file, the file stored on disk can be processed without performing explicit I/O operations on it, which makes memory-mapped files very useful when processing files containing large amounts of data.
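A minimal sketch of the same idea using Python's `mmap` module (analogous to Win32 CreateFileMapping/MapViewOfFile): the file's bytes are edited through memory, with no explicit write call on the mapped region. The file name and contents are made up for illustration:

```python
import mmap
import os
import tempfile

# Create a small file to map.
path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:
    f.write(b"hello, world")

# Map the whole file and modify it through memory.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:  # length 0 = map the entire file
        mm[0:5] = b"HELLO"                # write through memory, not f.write()

with open(path, "rb") as f:
    print(f.read())  # the change is visible in the file on disk
```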

3. Heaps (best suited to managing a large number of small objects)

4. Scheduling

1. Priority scheduling:

  • Manually assigned priority
  • Response ratio as the priority (the highest-response-ratio-next scheduling algorithm)

2. Round-robin scheduling:
All ready processes are arranged in a queue on a FIFO basis. On each scheduling decision, the CPU is allocated to the process at the head of the queue, which may execute for one time slice. When the time slice expires, the clock timer raises an interrupt; the scheduler stops the running process, sends it to the tail of the ready queue, and allocates the CPU to the new head of the queue.
The efficiency of round-robin scheduling depends heavily on the size of the time slice. Because each switch must save the outgoing process's state and load the incoming process's state, a time slice that is too short causes processes to switch too frequently, and too much time is spent on switching.
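The queue discipline above can be sketched in a few lines; the process names and burst times are made up for illustration:

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {name: total_cpu_time}; returns the (name, time_run) slices
    in the order the scheduler dispatched them."""
    queue = deque(bursts.items())
    timeline = []
    while queue:
        name, remaining = queue.popleft()   # head of the FIFO ready queue
        run = min(quantum, remaining)
        timeline.append((name, run))
        if remaining > run:                 # time slice expired, unfinished:
            queue.append((name, remaining - run))  # back to the tail
    return timeline

print(round_robin({"A": 3, "B": 5, "C": 2}, quantum=2))
```

Running it shows "A" finishing on its second turn while "B" needs three turns, which is exactly the save-and-requeue behavior the paragraph describes.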

3. Multilevel feedback queue:
① Maintain multiple ready queues and give each queue a different priority. The first queue has the highest priority, the second queue the next highest, and each remaining queue's priority decreases by one. The algorithm also gives the processes in different queues time slices of different sizes: the higher a queue's priority, the smaller the time slice assigned to each of its processes.
② When a new process enters memory, it is first placed at the tail of the first queue and waits to be scheduled according to FCFS. When its turn to run comes, if it can complete within the time slice, it leaves the system; if it has not completed when the time slice expires, the scheduler moves it to the tail of the next queue.
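Steps ① and ② can be sketched compactly; the quanta `(1, 2, 4)` and the jobs are illustrative assumptions (real MLFQ variants also add periodic priority boosting, omitted here):

```python
from collections import deque

def mlfq(jobs, quanta=(1, 2, 4)):
    """Queue 0 has the highest priority and the shortest quantum; a process
    that exhausts its slice is demoted one level. Returns (name, level, run)."""
    queues = [deque() for _ in quanta]
    for name, burst in jobs:
        queues[0].append((name, burst))       # new arrivals enter queue 0
    timeline = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        name, remaining = queues[level].popleft()
        run = min(quanta[level], remaining)
        timeline.append((name, level, run))
        if remaining > run:                   # slice expired: demote
            dest = min(level + 1, len(quanta) - 1)
            queues[dest].append((name, remaining - run))
    return timeline

print(mlfq([("A", 3), ("B", 1)]))
```

In this run, the short job "B" finishes entirely in queue 0 while "A" is demoted, showing how the scheme favors short and interactive processes.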

4. Critical sections and resolving critical-section conflicts

A resource that only one process may use at a time is called a critical resource. Many physical devices are critical resources, such as input devices, printers, and tape drives.
The code that accesses a critical resource is called a critical section.
Solution:
To achieve exclusive access to critical resources, each process must check before entering its critical section, ensuring that only one process is inside the critical section at a time. If a process has already entered its critical section, any other process trying to enter must wait. A process that enters its critical section should exit within a bounded time, so that other processes can enter their critical sections promptly. A process that cannot enter its critical section should yield the CPU, to avoid busy-waiting.
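A minimal sketch of the check-before-entering rule, using a mutex to guard the critical section (here, a shared counter; the thread count and loop size are arbitrary). Without the lock, the read-modify-write on `counter` could interleave and lose updates:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:          # only one thread inside the critical section
            counter += 1    # the critical section: access to the shared resource

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: no increments were lost
```

Note that `lock.acquire()` blocks (sleeps) rather than busy-waits, which is the "yield the CPU" behavior the paragraph asks for.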

5. Inter-process communication

1. Pipes: allow communication between parent and child processes.
2. Named pipes: allow communication between unrelated processes.
3. Signals: notify the receiving process that some event has occurred, similar to a broadcast.
4. Semaphores: generally used as flags, for example as locks.
5. Message queues: a message queue is a linked list of messages stored in the kernel, identified by a message queue identifier.
6. Shared memory: a memory segment created by one process that other processes can access.
7. Sockets: a socket is an inter-process communication mechanism that differs from the others in that it can be used for communication between processes on different machines.

Comparison of advantages and disadvantages:
Pipes: slow, with limited capacity.
Message queues: capacity is limited by the system; when reading, beware of data left unread from the previous receive.
Semaphores: cannot carry complex information; used only for synchronization.
Shared memory: capacity is easy to control and access is fast, but synchronization must be maintained by hand: while one process is writing, other processes must take care with their reads and writes, much like thread safety between threads.

6. Inter-thread communication

Inter-thread communication falls into two models: shared memory and message passing.

1. Using the volatile keyword (shared memory): multiple threads monitor a variable at the same time; when the variable changes, the threads perceive the change and carry out the corresponding logic.
2. Using the wait() and notify() methods: Java's Object class provides methods for communication between threads: wait(), notify(), and notifyAll(). These are the foundation of thread communication; wait and notify must be used together with synchronized in multithreaded code. The wait method releases the lock; the notify method does not release the lock.
3. Pipes.
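The wait()/notify() pattern above is from Java, but Python's `threading.Condition` is a close analogue (wait() releases the lock while blocking; notify() wakes a waiter), so a runnable sketch looks like this; the data values are illustrative:

```python
import threading

cond = threading.Condition()
data = []

def consumer():
    with cond:
        while not data:     # re-check the condition: guards against spurious wakeups
            cond.wait()     # releases the lock while waiting, reacquires on wakeup
        data.append("consumed " + data.pop())

t = threading.Thread(target=consumer)
t.start()
with cond:
    data.append("item")
    cond.notify()           # wake the waiting consumer (lock released at `with` exit)
t.join()
print(data)
```

The `while not data` loop, rather than a plain `if`, is the standard idiom in both Java and Python: a woken thread must re-verify the condition before proceeding.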

7. Causes of deadlock and how to handle it

Deadlock is a blocking situation in which two or more processes, during execution, each wait on the other because of competition for resources or because of communication between them; without outside intervention, none of them can make progress. The system is then said to be in a deadlocked state, and processes that are forever waiting on one another are called deadlocked processes.

Four necessary conditions for deadlock:
1. Mutual exclusion: a resource can be used by only one process at a time.
2. No preemption: a resource a process has acquired cannot be forcibly taken away by other processes before the process is done with it; it can only be released voluntarily.
3. Hold and wait: a process holds at least one resource while requesting a new one; the requested resource is held by another process, so the requesting process blocks, but it keeps holding the resources it already has.
4. Circular wait: there is a set of processes {p0, p1, p2, ..., pn} such that p0 waits for a resource held by p1, p1 waits for a resource held by p2, ..., and pn waits for a resource held by p0.
As long as any one of these conditions is not satisfied, deadlock cannot occur.

Solution: the banker's algorithm; see the reference posts above for a worked explanation.
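The core of the banker's algorithm is the safety check: grant a request only if the system can still finish every process in some order afterwards. A sketch, using the classic 5-process, 3-resource textbook matrices as illustrative input:

```python
def is_safe(available, allocation, need):
    """Return (safe, order): whether a completion order exists, and one such order."""
    work = available[:]
    finished = [False] * len(allocation)
    order = []
    while len(order) < len(allocation):
        progressed = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Pretend process i runs to completion and releases its resources.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                order.append(i)
                progressed = True
        if not progressed:
            return False, []   # no process can finish with current work: unsafe
    return True, order

safe, order = is_safe(
    available=[3, 3, 2],
    allocation=[[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]],
    need=[[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]])
print(safe, order)
```

Here the state is safe because P1 can finish with the available resources, its released resources then let P3 finish, and so on; an unsafe state is one where this chain stalls.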

8. The difference between inter-process synchronization and mutual exclusion, and thread synchronization methods

Mutual exclusion (exclusive): a resource allows only one process to access it at any one time, but does not restrict the order of access; that is, two processes simply cannot use the resource at the same moment.
Synchronization (cooperative): in most cases, building on mutual exclusion, some mechanism makes the processes' accesses orderly; in a few cases, multiple processes are allowed to access simultaneously.
Principles a synchronization mechanism should follow:
 1. Idle, let in: if the critical section is free, a requesting process may enter at once;
 2. Busy, wait: if a process is already in the critical section, other requesting processes must wait;
 3. Bounded waiting: a waiting process must be able to enter within a bounded time;
 4. Yield while waiting: a process that cannot enter should give up the CPU.

9. Thread synchronization methods:

Critical section, mutex, semaphore, event

  Critical section: serializes multi-threaded access to a public resource or a piece of code; fast, suitable for controlling data access.
  Mutex: a mutual-exclusion mechanism; only the thread holding the mutex may access the public resource. Because there is only one mutex object, this guarantees the public resource is not accessed by multiple threads at once.
  Semaphore: unlike the mechanisms above, a semaphore allows multiple threads to use a shared resource at the same time, while limiting the maximum number of threads that may access the resource simultaneously.
  Event (signal): keeps multiple threads in step by way of notifications, and can also conveniently implement operations that compare thread priorities.
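The semaphore's "bounded concurrency" property can be demonstrated directly; the limit of 3 and the thread count of 10 are arbitrary, and the `guard` lock only protects the bookkeeping counters:

```python
import threading

LIMIT = 3
sem = threading.Semaphore(LIMIT)
active = 0   # threads currently "using the resource"
peak = 0     # highest concurrency observed
guard = threading.Lock()

def use_resource():
    global active, peak
    with sem:                    # at most LIMIT threads get past this point
        with guard:
            active += 1
            peak = max(peak, active)
        # ... use the shared resource here ...
        with guard:
            active -= 1

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds LIMIT
```

A mutex is the special case LIMIT = 1, which matches the description above: one holder at a time.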
  Comparison: a critical section is the lightest-weight mechanism but works only within a single process, while mutexes, semaphores, and events are kernel objects that can also synchronize across processes. (The comparison table appeared here as an image in the original post.)

10. Process synchronization methods:

**Spin lock:** at any time there is at most one holder; that is, at most one execution unit can hold the lock at any moment. The scheduling mechanism differs slightly from a mutex: with a mutex, if the resource is already occupied, the requester can only go to sleep; a spin lock does not put the caller to sleep. If the lock is already held by another execution unit, the caller loops on the spot, checking whether the holder has released the lock, which is why it is called a "spin" lock.
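The spin-versus-sleep distinction can be modeled by busy-looping on a non-blocking acquire; real spin locks use atomic CPU instructions, so this is only an illustrative model, and the 0.05 s delay is arbitrary:

```python
import threading

lock = threading.Lock()

def spin_acquire(lk):
    """Busy-wait for the lock instead of sleeping; return how many spins it took."""
    attempts = 0
    while not lk.acquire(blocking=False):  # non-blocking try: never sleeps
        attempts += 1                      # "spin" until the holder releases
    return attempts

lock.acquire()                               # simulate another CPU holding the lock
threading.Timer(0.05, lock.release).start()  # the holder releases shortly
spins = spin_acquire(lock)                   # burns CPU until then, never sleeps
lock.release()
print(spins > 0)
```

A blocking `lock.acquire()` in the same situation would put the caller to sleep, which is exactly the mutex behavior described above; spinning is only worthwhile when the expected hold time is shorter than a sleep/wake cycle.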

**Monitor:** a process can only use a monitor exclusively; that is, while one process is using the monitor, other processes must wait. After a process finishes with the monitor, it must release it and wake up one of the processes waiting for the monitor.
The queue at the monitor's entrance is called the entry queue. Because processes perform wake-up operations, there may be several queues of processes waiting to use the monitor, for example an urgent queue, whose priority is higher than the entry queue's.

11. Common process scheduling algorithms

Polling vs. interrupts:
Polling is inefficient, with long waiting times and low CPU utilization; interrupts may miss some events but give high CPU utilization.

1. First come, first served (FCFS): jobs (or processes) are selected in the order in which they arrive in the backup job queue (or enter the ready queue).

2. Shortest job first (SJF): this algorithm is mainly used for job scheduling; it selects, from the backup job queue, the job with the shortest required running time and loads it into main memory to run.
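FCFS and SJF differ only in the order jobs run, which changes the average waiting time; a small sketch with made-up burst times (assuming, for simplicity, that all jobs arrive at time 0):

```python
def avg_wait(bursts):
    """Average waiting time when jobs run in the given order."""
    elapsed, total = 0, 0
    for b in bursts:
        total += elapsed    # this job waited for all jobs scheduled before it
        elapsed += b
    return total / len(bursts)

arrivals = [6, 8, 3, 4]              # burst times in arrival order
fcfs = avg_wait(arrivals)            # FCFS: run in arrival order
sjf = avg_wait(sorted(arrivals))     # SJF: shortest job first
print(fcfs, sjf)                     # SJF's average wait is never worse
```

With these numbers FCFS averages 9.25 time units of waiting against SJF's 5.75, illustrating why SJF is provably optimal for average waiting time (at the cost of possibly starving long jobs).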

3. Round-robin scheduling: the algorithm described under Scheduling above. All ready processes form a FIFO queue; each in turn runs for one time slice, and a process that does not finish within its slice returns to the tail of the queue. Its efficiency depends heavily on the time-slice size, since each switch must save and restore process state, and too short a slice means too many costly switches.

4. Highest response ratio next (HRRN): the priority is the response ratio, (waiting time + required run time) / required run time. Each time a job is to be selected, first compute the response ratio of every job in the backup queue, then select the job with the highest ratio to run.
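The response-ratio formula above can be applied directly; the job names, arrival times, and run times here are made up for illustration:

```python
def next_job(jobs, now):
    """jobs: list of (name, arrival_time, run_time).
    Pick the job with the highest response ratio = (wait + run) / run."""
    def ratio(job):
        _, arrival, run = job
        return ((now - arrival) + run) / run
    return max(jobs, key=ratio)

jobs = [("A", 0, 10), ("B", 2, 4), ("C", 5, 1)]
picked = next_job(jobs, now=8)
print(picked[0])
```

At time 8 the ratios are A: 1.8, B: 2.5, C: 4.0, so the short job "C" wins; but since the ratio grows with waiting time, long jobs like "A" eventually win too, which is how HRRN balances SJF's throughput against starvation.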

5. Priority scheduling: processes are scheduled according to priority; the policy that always dispatches the highest-priority process first is called priority scheduling. Note: the larger the priority number, the lower the priority.

6. Multilevel feedback queue scheduling: the algorithm described under Scheduling above. Multiple ready queues are maintained with decreasing priorities and correspondingly growing time slices; a new process enters the tail of the highest-priority queue under FCFS, and a process that does not finish within its time slice is moved to the tail of the next queue.

12. Basic characteristics of real-time systems

Completing specific tasks within a definite time: timeliness and reliability.
A so-called "real-time operating system" is one in which, while the system is running, resources can be dynamically allocated at any time as needed. Because resources can be allocated dynamically, it can handle events with greater capacity and speed.


Origin blog.csdn.net/qq_41525021/article/details/100109553