[Operating System] Operating System Learning Summary

1. Concept

1) Operating system features

Concurrency
Sharing
Virtuality (turning a physical entity into several logical counterparts)
Asynchrony

2) OS Architecture

Monolithic structure
Hierarchical structure
Virtual machine structure
Client/server (C/S) structure
Microkernel structure

3) Base conversion

4) Global variables and local variables

Global variables are allocated in the global (static) data segment and exist for the whole run of the program, while local variables are allocated on the stack and exist only for the duration of the call.
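The same distinction can be seen in Java (a minimal sketch for illustration; note that in Java a "global" is a static field living in class storage rather than a C-style data segment):

public class VariableLifetime {
    static int globalCounter = 0;      // class (static) storage, exists for the whole run

    static void tick() {
        int localStep = 1;             // allocated in this call's stack frame
        globalCounter += localStep;
    }                                  // localStep disappears when the method returns

    public static void main(String[] args) {
        tick();
        tick();
        System.out.println(globalCounter);  // prints 2: the static field kept its value between calls
    }
}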

2. Processes and threads

1) Concurrent execution characteristics of the program

① Intermittence

A concurrently executing program proceeds in an "execute-pause-execute" pattern because of resource constraints.

② Loss of closure

Because resources are shared and concurrently executing programs restrict one another, the closed (self-contained) behaviour a program has when it runs alone is lost.

③ Irreproducibility

Because the relative speeds of concurrent execution are uncertain and random, results lose reproducibility.
Time-dependent errors may occur.

2) Process

①Definition

A process is one execution of a concurrently executable program on a data set; it is the basic unit the OS uses for resource allocation and scheduling.

A program that is running resides in memory in the form of a process.

②The difference between a process and a program

i> A process is a dynamic concept, and a program is a static concept.
A process exists during the execution of a program.
ii> Processes have concurrency characteristics, programs do not.
iii> Processes constrain one another (through resource sharing and competition), while programs do not.
iv> There is a many-to-many relationship between processes and programs.
Running a program several times creates different processes, and one process may invoke several programs during its execution.

③ Process characteristics

Dynamism
Concurrency
Independence (a process is the basic unit of resource allocation, protection and scheduling in the system)
Asynchrony
Structure (a process has a definite structure, consisting of a program, a data set and a process control block)

④Process Control Block (PCB)

The PCB is created when the process is created and destroyed when the process is destroyed; the PCB is the only sign of the process's existence. The PCB is a data structure the OS uses to record and describe the state of a process and related information. It resides in memory and holds the process's execution context: its state, its breakpoint (where to resume) and the other information needed after the process gives up the CPU.
(Figure: PCB structure)
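A minimal sketch of the kind of information a PCB records (the field names here are illustrative assumptions, not a layout defined by any particular OS):

public class ProcessControlBlock {
    enum State { NEW, READY, RUNNING, BLOCKED, TERMINATED }

    int pid;                // process identifier
    State state;            // current scheduling state
    long programCounter;    // breakpoint: where to resume after giving up the CPU
    long[] registers;       // saved register context
    int priority;           // scheduling information
    long pageTableBase;     // memory-management information
    int[] openFiles;        // resource / accounting information
}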

⑤ Process status

3 states and their transitions
5 states and their transitions
7 states and their transitions

3) Thread

①Features

i> A thread is a relatively independent unit of execution within a process.
ii> A thread is the basic scheduling unit in the operating system, and a thread holds the basic information required for scheduling.
iii> In an operating system with a threading mechanism, the process is no longer the unit of scheduling; a process contains at least one thread, and threads are the units that get scheduled.
iv> A thread does not own resources of its own; it shares the resources owned by the process with the other threads of the same process. Because threads share resources, a synchronization mechanism is needed to implement communication between the threads of a process.
v> Like processes, threads can create other threads, and threads also have life cycles and state changes.

4) Processor scheduling

①Batch job scheduling algorithm

First-come, first-served
Shortest job first
Highest response ratio next (favours short jobs while also accounting for how long a job has waited, so long jobs are not left unserviced indefinitely; see the sketch after this list)
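The response ratio is (waiting time + required service time) / required service time, so a job's priority rises the longer it waits. A minimal sketch of the selection rule (the Job fields are illustrative assumptions):

import java.util.Comparator;
import java.util.List;

public class Hrrn {
    static class Job {
        final String name;
        final double waitingTime;   // time the job has waited so far
        final double serviceTime;   // estimated run time
        Job(String name, double waitingTime, double serviceTime) {
            this.name = name; this.waitingTime = waitingTime; this.serviceTime = serviceTime;
        }
        double responseRatio() { return (waitingTime + serviceTime) / serviceTime; }
    }

    // Pick the ready job with the highest response ratio to run next.
    static Job next(List<Job> ready) {
        return ready.stream().max(Comparator.comparingDouble(Job::responseRatio)).orElseThrow();
    }
}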

②Interactive system process scheduling

Time Slice Round Robin Scheduling Algorithm

Time-slice round robin, used in time-sharing systems, is a preemptive scheduling algorithm.

Each process runs in turn for at most one time slice. If a process is still running when its time slice expires, the CPU is taken from it and given to another process; if the process blocks or finishes before the slice ends, the CPU switches immediately.

Disadvantages: the overhead of process/thread switching is significant, and its cost depends heavily on the length of the time slice. If the time slice is so long that every process finishes within its slice, the algorithm degenerates into first-come, first-served.
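A minimal round-robin sketch (the process fields and the fixed time slice are illustrative assumptions): each process runs for at most one time slice and, if unfinished, rejoins the back of the ready queue.

import java.util.ArrayDeque;
import java.util.Deque;

public class RoundRobin {
    static class Proc {
        final String name;
        int remaining;                                   // CPU time still needed
        Proc(String name, int remaining) { this.name = name; this.remaining = remaining; }
    }

    public static void main(String[] args) {
        final int timeSlice = 2;
        Deque<Proc> ready = new ArrayDeque<>();
        ready.add(new Proc("P1", 5));
        ready.add(new Proc("P2", 3));

        while (!ready.isEmpty()) {
            Proc p = ready.poll();
            int ran = Math.min(timeSlice, p.remaining);  // run until the slice ends or the process finishes
            p.remaining -= ran;
            System.out.println(p.name + " ran for " + ran);
            if (p.remaining > 0) ready.add(p);           // preempted: back of the ready queue
        }
    }
}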

Priority Scheduling Algorithm

It is divided into non-preemptive priority scheduling and preemptive priority scheduling.

Multilevel Feedback Queue Scheduling Algorithm (Feedback Loop Queue)

Priorities are assigned dynamically, and the scheduling strategy is preemptive.

5) Interrupt source

① Forced interruption

Caused by random events rather than pre-arranged by the programmer.
Examples: input/output interrupts (a device completing an I/O operation, execution of a print statement),
hardware failure interrupts (e.g. power failure), clock interrupts, console interrupts, and program (error) interrupts.

②Voluntary interruption

Such as: the running program executing a system call (trap) instruction to request an operating system service. (A time slice expiring is a clock interrupt, i.e. a forced interrupt.)

3. Memory management

1) Paging memory management

With paging, a process's logical address space is contiguous, but its pages may be placed in non-contiguous physical blocks. The size of a page is determined by the hardware's address structure.

①Basic principle

The address space of the user program is divided into pages of equal size, numbered starting from 0. Memory is likewise divided into physical blocks (frames) of the same size as a page, also numbered starting from 0.

②Address mapping

page table

Usually stored in memory.
Each entry maps a page number to a physical block number.
The page table thus implements the mapping from logical addresses to physical addresses.

address structure

A logical address consists of a page number and an offset within the page.
Physical address = block number * block size + offset within the page.

Calculation example

The logical address is 2052, the page size is 1 KB, and the page table is as shown in the figure; find the physical address.
2052 / 1024 = 2 and 2052 % 1024 = 4, so the page number is p = 2 and the offset is w = 4.
According to the page table, page 2 maps to physical block 7.
So the physical address is 7 * 1024 + 4 = 7172.
(Figure: paging memory address mapping process)
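The same calculation as a minimal Java sketch (the full page-table contents are an assumption; only page 2 -> block 7 comes from the example above):

public class PagingTranslate {
    public static void main(String[] args) {
        int pageSize = 1024;                 // 1 KB pages
        int[] pageTable = {5, 9, 7, 3};      // page number -> physical block (illustrative except index 2)
        int logicalAddress = 2052;

        int page = logicalAddress / pageSize;               // 2052 / 1024 = 2
        int offset = logicalAddress % pageSize;              // 2052 % 1024 = 4
        int physical = pageTable[page] * pageSize + offset;  // 7 * 1024 + 4

        System.out.println(physical);                        // 7172
    }
}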

③Page replacement strategy

④Advantages and disadvantages

Paging memory management does not generate external fragmentation, but it does generate internal fragmentation.
If the memory required by the process is not an integer multiple of the page size, then the last physical block cannot be used up, resulting in in-page fragmentation.
Another advantage is that common code can be shared.

⑤ Fast table (TLB / associative registers)

To speed up the translation from logical addresses to physical addresses, a special cache with parallel lookup capability, the fast table (TLB), is added to the address translation mechanism.
The fast table holds the page table entries that are currently being accessed.

Translation steps:
i> After the CPU produces an effective (logical) address, the address translation hardware automatically sends the page number p to the fast table and compares it in parallel with all the page numbers stored there.
ii> If a matching page number is found, the page table entry to be accessed is in the fast table.
iii> The physical block number corresponding to the page number is read directly from the fast table and sent to the physical address register.
iv> If the page number is not found in the fast table, the page table in memory must be accessed; after the page table entry for this page number is found, the physical block number in that entry is loaded into the address register.
v> At the same time, this page table entry is written into the fast table, i.e. the fast table is updated.
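A minimal sketch of this lookup order (the fast table is modelled as a plain map; real TLBs are small parallel-lookup hardware with an eviction policy, which this sketch omits):

import java.util.HashMap;
import java.util.Map;

public class TlbLookup {
    private final Map<Integer, Integer> fastTable = new HashMap<>(); // page number -> physical block
    private final int[] pageTable;                                   // full page table kept in "memory"

    TlbLookup(int[] pageTable) { this.pageTable = pageTable; }

    int translate(int page, int offset, int pageSize) {
        Integer block = fastTable.get(page);   // steps i/ii: compare the page number against the fast table
        if (block == null) {                   // step iv: miss, fall back to the page table in memory
            block = pageTable[page];
            fastTable.put(page, block);        // step v: refill the fast table with this entry
        }
        return block * pageSize + offset;      // step iii: form the physical address from the block number
    }
}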

⑥Page update algorithm

2) Segmented memory management

3) Virtual memory

Ways to speed up virtual-to-real address translation: I. increase the capacity of the fast table (TLB); II. keep the page table resident in memory.

4) Interleaved memory

Interleaved memory is essentially a modular memory that can perform multiple independent read and write operations in parallel.
Each block of the interleaved memory has its own MAR and MDR.
Within each module of the interleaved memory the addresses are discontinuous; units with adjacent addresses are located in adjacent modules (see the sketch below).
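A minimal sketch of low-order interleaving (the module count is an illustrative assumption): consecutive addresses map to consecutive modules, so independent modules can serve them in parallel.

public class InterleavedMemory {
    public static void main(String[] args) {
        int modules = 4;                        // number of memory modules (illustrative)
        for (int addr = 0; addr < 8; addr++) {
            int module = addr % modules;        // which module holds this address
            int offset = addr / modules;        // word index inside that module
            System.out.println("address " + addr + " -> module " + module + ", offset " + offset);
        }
    }
}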

4. I/O management

5. File management

6. Deadlock

1) Cause of deadlock

Processes compete for resources, and the order in which processes advance is improper.

2) Deadlock Necessary Conditions

A deadlock can occur only if all four of the following conditions hold at the same time.

① Mutual exclusion conditions

Refers to the exclusive use of the allocated resources by the process, that is, a resource can only be used by one process at a time.

② Hold and wait condition

A process that blocks while requesting a new resource keeps holding the resources it has already acquired.
That is, the process does not obtain all the resources it needs at once, but requests new ones while already holding some.

③No preemption condition

Allocated resources cannot be forcibly taken away from the process holding them.
That is, a resource is released only voluntarily, after the process holding it has finished its task.

④ Circular wait condition

There is a circular chain of processes and resources in which each process waits for a resource held by the next.

The system cannot deadlock if at least one process is guaranteed to obtain all the resources it needs.
For example: N processes share 11 printers, and each process needs 3 of them. For which values of N is the system free of deadlock?
In the worst case every process holds 2 printers (one short of its need) and waits for a third; deadlock is avoided as long as at least one printer is left over, so that some process can obtain its third printer, finish, and release what it holds. From 2N + 1 <= 11 we get N <= 5 (a quick check follows).
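A quick check of the same reasoning (the general rule: with R identical resources and each process needing k, deadlock is impossible while N*(k-1) + 1 <= R):

public class PrinterCheck {
    public static void main(String[] args) {
        int printers = 11, need = 3;
        // Worst case: every process holds need-1 printers; at least one spare must remain.
        int maxProcesses = (printers - 1) / (need - 1);
        System.out.println(maxProcesses);   // prints 5
    }
}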

3) Deadlock handling strategies

①Prevention of deadlock

Break one of the four necessary conditions for deadlock.

②Avoidance of deadlock

During dynamic resource allocation, an algorithm is used to keep the system from entering an unsafe state, thereby avoiding deadlock (a sketch of the Banker's algorithm safety check follows).
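A minimal sketch of the safety check at the heart of the Banker's algorithm (listed under 4) below); the matrices are illustrative: a state is safe if, in some order, every process can be granted its remaining need and then release everything it holds.

public class BankersSafety {
    // available[j]: free units of resource j; allocation[i][j]: units held by process i;
    // need[i][j]: units process i may still request.
    static boolean isSafe(int[] available, int[][] allocation, int[][] need) {
        int n = allocation.length, m = available.length;
        int[] work = available.clone();
        boolean[] finished = new boolean[n];
        boolean progress = true;
        while (progress) {
            progress = false;
            for (int i = 0; i < n; i++) {
                if (finished[i]) continue;
                boolean canFinish = true;
                for (int j = 0; j < m; j++) {
                    if (need[i][j] > work[j]) { canFinish = false; break; }
                }
                if (canFinish) {                          // pretend process i runs to completion
                    for (int j = 0; j < m; j++) work[j] += allocation[i][j];
                    finished[i] = true;
                    progress = true;
                }
            }
        }
        for (boolean f : finished) if (!f) return false;  // some process can never finish: unsafe
        return true;
    }
}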

③ Deadlock detection

Detect whether a deadlock has occurred and, if so, take appropriate measures to remove it from the system.

④ Deadlock recovery (release)

Cancel or suspend some processes in order to reclaim their resources, then allocate those resources to processes that are blocked, turning them ready so they can continue to run.

4) Deadlock Algorithm

A. Avoid deadlocks

Banker's Algorithm

B. Unlock deadlock

Terminate (abort) processes to reclaim their resources

C. Prevent deadlock

Static resource allocation method

D. Detect deadlock

Resource allocation graph simplification.
Simplification means that if all of a process's resource requests can be satisfied, we can imagine that the process obtains every resource it needs, completes its work, and releases all the resources it holds; in that case we say the resource allocation graph can be simplified by that process. If the graph can be simplified by all of its processes, the graph is said to be completely reducible; otherwise it is irreducible.
The simplification procedure is as follows:
(1) In the resource allocation graph, find a process node Pi that is neither waiting nor isolated. Since Pi can obtain all the resources it needs and will release everything it holds when it finishes, remove all of Pi's request edges and allocation edges, turning it into an isolated node.
(2) Allocate the resources released by Pi to the processes that request them, i.e. turn the corresponding request edges in the resource allocation graph into allocation edges.
(3) Repeat steps (1) and (2) until no qualifying process node can be found.
After simplification, if all edges can be eliminated so that every process becomes an isolated node, the graph is completely reducible (no deadlock); otherwise it is irreducible (the remaining processes are deadlocked).

7. Calculation

1) Original code (sign-magnitude), inverse code (ones' complement), complement code (two's complement)

①Original code

The first bit is the sign bit, the others are the value. 0 is positive, 1 is negative.

②Inverse code

The inverse code of a positive number is the number itself; the inverse code of a negative number keeps the sign bit unchanged and inverts the other bits.

~10 means every bit of 10 is inverted (bitwise NOT).

③Complement code

The complement of a positive number is the number itself.
The complement of a negative number: keep the sign bit of the original code unchanged, invert the other bits, and add 1 to the result.
Equivalently: scanning from the least significant bit, keep everything up to and including the first 1, and invert the remaining higher bits (the sign bit stays unchanged).

Note: computers store signed numbers in complement (two's-complement) form.

④Example

Example 1:

int i = 5;
int j = 10;
System.out.println(i + ~j); //5+(-11)=-6

j = 10; in 32-bit binary this is 0000 0000 0000 0000 0000 0000 0000 1010.
~10 = 1111 1111 1111 1111 1111 1111 1111 0101 (every bit inverted).
Read as a complement (two's-complement) value, its original code is 1000 0000 0000 0000 0000 0000 0000 1011, i.e. -11.
So i + ~j = 5 + (-11) = -6.

Example 2:
Variables a and b are 64-bit signed integers: a = 0x7FFF FFFF FFFF FFFF, b = 0x8000 0000 0000 0000. What is a + b?

Solution:
a + b = 0xFFFF FFFF FFFF FFFF, which is the complement (two's-complement) representation of -1.
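A quick Java check (long is a 64-bit two's-complement type, so the addition wraps around):

public class Example2 {
    public static void main(String[] args) {
        long a = 0x7FFFFFFFFFFFFFFFL;   // largest positive long
        long b = 0x8000000000000000L;   // smallest (most negative) long
        System.out.println(a + b);      // prints -1 (bit pattern 0xFFFFFFFFFFFFFFFF)
    }
}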
