Operating Systems --- Harbin Institute of Technology, Sun Zhigang: summary of the 70-lecture course (lectures 13-25)

13. Windows creates a new process with CreateProcess(); one of its parameters is the path of the program to be executed
 
Search MSDN to find documentation for the Windows APIs
 
Under Linux, the man (manual) pages document API usage
 
Under Linux, kill <process id> terminates a process
 
Communication between processes:
Message passing: process ---> kernel ---> process
One side sends and the other receives; a link must be established first
Message passing can be blocking or non-blocking
Buffering of messages in the operating system:
unbuffered
bounded buffer
unbounded buffer
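The message-passing path above (process -> kernel -> process, with the kernel buffering the data) can be sketched with a POSIX pipe; the function name pipe_demo is illustrative, not from the lecture:

```c
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* A pipe is a bounded buffer inside the kernel: the child sends a
   message, the parent receives it. The link (the pipe) must be
   established before the fork so both sides hold an end of it. */
int pipe_demo(char *out, size_t outsz) {
    int fd[2];
    if (pipe(fd) < 0)
        return -1;
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {                  /* child: the sender */
        close(fd[0]);
        const char *msg = "hello";
        write(fd[1], msg, strlen(msg));
        _exit(0);
    }
    close(fd[1]);                    /* parent: the receiver */
    ssize_t n = read(fd[0], out, outsz - 1);   /* blocks until data */
    out[n > 0 ? n : 0] = '\0';
    close(fd[0]);
    waitpid(pid, NULL, 0);
    return 0;
}
```

read() here is the blocking form of receive: the parent sleeps until the child's message lands in the kernel buffer.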
Shared memory: the processes negotiate (via system calls) to set aside a region of shared memory (this raises synchronization issues, and some systems do not support it)
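A minimal sketch of the shared-memory alternative, assuming a POSIX system with anonymous mmap (shm_demo is an illustrative name): the kernel is involved only once, to set up the mapping; afterwards the processes communicate by plain loads and stores, which is exactly why synchronization becomes the program's problem.

```c
#define _DEFAULT_SOURCE
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* An anonymous MAP_SHARED mapping survives fork() as the same
   physical pages, so the child's write is visible to the parent. */
int shm_demo(void) {
    int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED)
        return -1;
    *shared = 0;
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        *shared = 99;                /* child writes into the shared page */
        _exit(0);
    }
    waitpid(pid, NULL, 0);           /* crude synchronization: just wait */
    int v = *shared;
    munmap(shared, sizeof(int));
    return v;                        /* 99: the child's write was shared */
}
```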
#include <unistd.h>

int main(void) {
    int i;
    for (i = 0; i < 3; i++) {
        fork();
        sleep(30);
    }
}
 
For this program, the main process spawns three child processes: the child forked at i=0 itself spawns two more, the child forked at i=1 spawns one more, and the child forked at i=2 spawns none.
It is not the case that each child spawns three children: each child inherits the value of i at the moment of the fork, so children created in later iterations execute fewer remaining loop iterations. In total 2^3 = 8 processes exist.

 

Run the program:
(screenshot of the run output omitted)
Running result: the global variables of the child process and the parent process are independent of each other
The child's output keeps incrementing
The parent keeps outputting 0
The printed address of the variable is the same in parent and child: both see the same virtual address, but it maps to different physical memory, because each process has its own address space
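The observation above can be checked directly; a small sketch (fork_counter_demo is an illustrative name) in which the child rewrites a global variable and reports its own copy via the exit status, while the parent's copy stays untouched:

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int counter = 0;   /* after fork(), each process has its own copy */

/* Forks; the child sets counter to 42 and exits with that value.
   Returns the parent's counter, which remains 0: the child modified
   its own copy in a separate address space. */
int fork_counter_demo(void) {
    pid_t pid = fork();
    if (pid < 0)
        return -1;                   /* fork failed */
    if (pid == 0) {
        counter = 42;                /* changes the child's copy only */
        _exit(counter);              /* report it via the exit status */
    }
    int status;
    waitpid(pid, &status, 0);
    printf("child's copy: %d, parent's copy: %d, address: %p\n",
           WEXITSTATUS(status), counter, (void *)&counter);
    return counter;
}
```

Both processes print the same address for counter, yet hold different values: same virtual address, different physical pages.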
 
Each thread independently owns the following (threads belonging to the same process share the data, file, and code segments):
thread ID
program counter (PC)
registers
stack
 
16.
Thread Features:
Responsiveness:
When a single-threaded process starts an IO operation, it enters the waiting state; if the user then clicks on its window, the operating system delivers the message but the process cannot respond. In a multithreaded process, the IO operation is handed to a newly created thread, so the process stays responsive.
Resource sharing; economy: creating a process is expensive, creating a thread is cheap.
 
 
User-level threads: user-space techniques simulate multiple sets of registers, multiple PCs, and multiple stacks (saving and restoring the execution context).
High efficiency: no kernel involvement
Good customizability
Shortcomings:
1. If any thread makes a blocking system call, all threads in the process block
2. They cannot run on multiple processors in parallel: in the eyes of the operating system, the whole process is just one thread
Kernel-level threads: threads supported by the kernel
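For contrast, a minimal kernel-level-thread sketch using POSIX threads (on Linux each pthread is backed by a kernel thread, so a blocking call in one does not stop the others, and they can run on different cores; run_two_threads is an illustrative name):

```c
#include <pthread.h>

/* Each worker doubles the integer it is handed. */
static void *worker(void *arg) {
    int *n = arg;
    *n *= 2;
    return NULL;
}

/* Starts two kernel-level threads and waits for both. Because the
   kernel schedules each thread independently, a blocking system call
   in one thread would not block the other. */
int run_two_threads(int *a, int *b) {
    pthread_t t1, t2;
    if (pthread_create(&t1, NULL, worker, a) != 0)
        return -1;
    if (pthread_create(&t2, NULL, worker, b) != 0)
        return -1;
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```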
 
17
Linux thread implementation:
The clone() system call can implement both fork() and thread creation
clone() flags:
CLONE_FS: the child task shares file-system information with the parent
CLONE_FILES: open files are shared
CLONE_VM: the address space (memory) is shared
 
The difference between creating threads and processes in Linux:
fork() amounts to calling clone() without CLONE_VM;
creating a thread passes CLONE_VM (among other flags).

As a result, the kernel sees little difference between processes and threads, and threads receive no special optimization (which is why Linux threads are called lightweight processes)
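The CLONE_VM distinction can be sketched with the Linux-specific clone() call itself; clone_vm_demo and the 64 KiB stack size are illustrative choices:

```c
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static int shared = 0;

static int child_fn(void *arg) {
    shared = 1;      /* visible to the parent only because of CLONE_VM */
    return 0;
}

/* Creates a child that shares our address space, as a thread would.
   Without CLONE_VM the child would get a copy-on-write copy of our
   memory, as fork() does, and its write would stay invisible to us. */
int clone_vm_demo(void) {
    const size_t STACK_SIZE = 64 * 1024;
    char *stack = malloc(STACK_SIZE);   /* the child needs its own stack */
    if (!stack)
        return -1;
    pid_t pid = clone(child_fn, stack + STACK_SIZE,  /* stack grows down */
                      CLONE_VM | SIGCHLD, NULL);
    if (pid < 0) {
        free(stack);
        return -1;
    }
    waitpid(pid, NULL, 0);
    free(stack);
    return shared;   /* 1: the child's write landed in our memory */
}
```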
 
18. A weakness of the threading model: because all threads share one address space, a problem in a single thread can crash the whole process.
 
19 Processes can be divided into:
CPU bound process
IO bound process
 
CPU scheduling: deciding which process runs next
Criteria for evaluating a scheduling algorithm:
CPU utilization
Throughput: how many processes complete per unit time
Turnaround time: how long a process takes from creation to completion
Waiting time: how much time it spends in the ready queue
Response time: how long after an event occurs the process responds
 
Shortest-job-first scheduling:
Preemptive (shortest-remaining-time-first)
Non-preemptive
Time-slice rotation (round-robin): allocate fixed-length time slices; when a slice runs out, select the next process from the ready queue (the one with higher priority) and allocate it a time slice.
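The non-preemptive shortest-job-first rule can be sketched as follows (sjf_avg_wait is an illustrative helper; all jobs are assumed to arrive at time 0):

```c
/* Non-preemptive SJF over jobs that all arrive at time 0:
   running jobs in order of burst length minimizes average waiting
   time, because every job waits only for the jobs shorter than it. */
double sjf_avg_wait(int *burst, int n) {
    /* insertion sort by burst time (ascending) */
    for (int i = 1; i < n; i++) {
        int key = burst[i], j = i - 1;
        while (j >= 0 && burst[j] > key) {
            burst[j + 1] = burst[j];
            j--;
        }
        burst[j + 1] = key;
    }
    int wait = 0, elapsed = 0;
    for (int i = 0; i < n; i++) {
        wait += elapsed;        /* this job waited for all earlier ones */
        elapsed += burst[i];
    }
    return (double)wait / n;
}
```

For bursts {6, 8, 7, 3}, SJF runs them as 3, 6, 7, 8 with waits 0, 3, 9, 16, giving an average wait of 7.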
 
How to realize time-slice rotation combined with priority selection: multilevel queues.
Each process is placed in a queue of a given priority, and the scheduler always takes the next process to run from the highest-priority non-empty queue.
Priorities:
System processes such as interrupt handling must be handled promptly, so they get the highest priority.
Interactive processes (tied to user input and output: mostly IO time, very short CPU bursts, needing a quick response to the user) get the next priority.
CPU-intensive processes get the lowest priority.
 
Now the common strategy in operating systems: the multilevel feedback queue (MLFQ)
Determines CPU scheduling based on the previous behavior of the process
Implementation:
(diagram of the three queues omitted)
The first two queues use the time-slice rotation method; the third queue uses FIFO.
The time slice allocated in the first queue is 8 units.
All processes enter the highest-priority queue first (it is not yet known whether they are CPU bound or IO bound).
If a process uses up its 8-unit time slice straight away (without entering IO, i.e., CPU-bound behavior), it is moved down to the next-priority queue.
If it then uses up a 16-unit time slice as well, it is moved to the lowest priority (the first-in, first-out queue).
In this way CPU-bound processes end up at low priority, and the CPU can respond quickly to user input and output (IO-bound processes).
 
Problem: a process may start out CPU bound and later become IO bound, or vice versa.
Solution:
Do not judge once but repeatedly at runtime: whenever a process exhausts its CPU time slice, it is moved to a lower-priority queue.
Whenever a process in a low-priority queue performs IO (moving from the waiting state back to ready), it is moved to a higher-priority queue. The benefit: as soon as the user's IO completes, the CPU handles it as quickly as possible.
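The demotion-and-promotion rules above can be sketched as a pair of small helpers (the 8/16 quanta and three levels follow the lecture's example; the function names and the one-level promotion step are illustrative):

```c
enum { Q0 = 0, Q1 = 1, Q2_FCFS = 2 };

/* A process that exhausts its full quantum is demoted one level;
   the bottom level (FCFS) absorbs further demotions. A process that
   blocked for IO before its quantum ran out keeps its level here. */
int level_after_quantum(int level, int used_full_quantum) {
    if (!used_full_quantum)
        return level;
    return level < Q2_FCFS ? level + 1 : Q2_FCFS;
}

/* A process that performed IO (waiting -> ready) is promoted one
   level, so it regains quick access to the CPU after the IO ends. */
int level_after_io(int level) {
    return level > Q0 ? level - 1 : Q0;
}

/* Quantum per level: 8 for the first queue, 16 for the second;
   the FCFS queue has no quantum (returns -1). */
int quantum_for(int level) {
    switch (level) {
    case Q0: return 8;
    case Q1: return 16;
    default: return -1;
    }
}
```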
 
Scheduling on multi-core CPUs: more complex, because scheduling must also balance load (keep every CPU busy)
Multi-core CPUs may be homogeneous or heterogeneous (differing in instruction set or execution speed)
Same as single core: the next process to run is selected by priority
Differences:
Symmetric multiprocessing (the common case): every core may run user processes or the operating system
Either each CPU has its own ready queue,
or all processors share one ready queue; in the shared case, a process that switches from one CPU to another loses its cached data (per-CPU caches are not shared)
Asymmetric multiprocessing: only one CPU runs the operating system, which makes the operating system much simpler to design
 
Affinity: the programmer can set which CPU a process runs on
Load balancing:
Busy CPU pushes to idle CPU or idle CPU pulls from busy CPU
 
Conflict between affinity and load balancing? --> soft affinity (the usual setting) versus hard affinity
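Hard affinity on Linux can be set with the sched_setaffinity(2) system call; a minimal sketch (pin_to_cpu is an illustrative name):

```c
#define _GNU_SOURCE
#include <sched.h>

/* Pins the calling process to a single CPU. With hard affinity set,
   the load balancer will not migrate the process elsewhere.
   Returns 0 on success, -1 on failure. */
int pin_to_cpu(int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    /* pid 0 means "the calling process" */
    return sched_setaffinity(0, sizeof(set), &set);
}
```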
 
Solaris example (this dispatch table can be modified without recompiling the kernel):
(dispatch table omitted)
The higher the number, the higher the priority, and the smaller the time slice given.
Reading one row: the queue with priority 20 is allocated a time slice of 120; when the slice is exhausted, the priority is adjusted down to 10 (exhaustion indicates a CPU-intensive thread, so it is placed in a lower-level queue).
return from sleep: the priority a thread receives when it moves from the waiting state back to ready, i.e., when it wakes up from IO; IO-bound threads are thereby boosted.
 
Windows XP scheduling:
(priority table omitted)
Two values are required when setting a priority:
the priority class (real-time, high, above normal, normal, below normal, idle),
and the relative priority within the class (time critical, highest, above normal, normal, and so on), which expresses how eager the thread is for time slices.
For example:
an ordinary process has priority 8 (normal within the normal class);
time critical within the below-normal class maps to priority 15, which is very high;
the thread that owns the currently active window has its priority boosted.
 
Linux priorities (the smaller the number, the higher the priority):
Real-time: 0-99
Nice: 100-140

(priority array diagram omitted)

There is one queue per priority, and each priority is assigned a corresponding time slice.
When a thread runs out of its time slice, it is moved into the matching priority queue of the other (expired) array; threads whose slices are not yet used up remain in the active array. By swapping the active and expired arrays once the active one empties, every thread is guaranteed a chance to execute.
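A toy sketch of the active/expired-array idea (per-priority counts stand in for real task lists; all names are illustrative):

```c
#include <string.h>

#define NPRIO 140               /* 0-99 real-time, 100-139 nice */

typedef struct {
    int count[NPRIO];           /* runnable tasks per priority */
    int total;
} prio_array;

typedef struct {
    prio_array a, b;
    prio_array *active, *expired;
} runqueue;

void rq_init(runqueue *rq) {
    memset(rq, 0, sizeof(*rq));
    rq->active = &rq->a;
    rq->expired = &rq->b;
}

/* A newly runnable task enters the active array. */
void rq_enqueue(runqueue *rq, int prio) {
    rq->active->count[prio]++;
    rq->active->total++;
}

/* Called when a task uses up its time slice: it moves to the expired
   array, and once the active array is empty the two arrays swap, so
   every task gets a turn before anyone runs twice. */
void rq_expire(runqueue *rq, int prio) {
    rq->active->count[prio]--;
    rq->active->total--;
    rq->expired->count[prio]++;
    rq->expired->total++;
    if (rq->active->total == 0) {
        prio_array *tmp = rq->active;
        rq->active = rq->expired;
        rq->expired = tmp;
    }
}
```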
 
24. Process synchronization
rewind(FILE *f): moves the file pointer back to the beginning of the file
fopen: opens a file and returns the file pointer
 
25.
Critical Section: A section of code that accesses a shared resource (critical resource)
 
The scheduling principles for a process entering a critical section are:

1. If several processes request to enter a free critical section, only one process is allowed to enter at a time.

2. At any time there can be no more than one process inside the critical section; if a process has already entered its critical section, all other processes attempting to enter their critical sections must wait.

3. A process entering the critical section should leave it within a limited time, so that other processes can enter their own critical sections in time.

4. A process that cannot enter its critical section should give up the CPU, to avoid busy waiting.
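Principles 1, 2, and 4 are exactly what a mutex provides: at most one thread enters at a time, and a thread that cannot enter blocks instead of busy-waiting. A sketch using POSIX threads (run_counters and the constants are illustrative):

```c
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long shared_counter = 0;      /* the shared (critical) resource */

static void *increment(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);   /* enter the critical section;
                                        blocks (no busy waiting) if taken */
        shared_counter++;            /* access the shared resource */
        pthread_mutex_unlock(&lock); /* leave promptly (principle 3) */
    }
    return NULL;
}

/* Runs nthreads incrementing threads and returns the final counter.
   With the mutex, no increments are lost. */
long run_counters(int nthreads) {
    pthread_t t[16];
    if (nthreads > 16)
        return -1;
    shared_counter = 0;
    for (int i = 0; i < nthreads; i++)
        pthread_create(&t[i], NULL, increment, NULL);
    for (int i = 0; i < nthreads; i++)
        pthread_join(t[i], NULL);
    return shared_counter;
}
```

Without the lock/unlock pair, concurrent ++ operations would interleave and the final count would usually fall short of nthreads * 100000.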
