Operating system review

Some notes on basic OS knowledge for interviews, continuously updated...

Basics

1. Computer hardware: input/output devices, memory, the arithmetic unit, and the control unit.

2. Three basic types of OS:

Batch OS: single-channel batch (improves CPU utilization), multi-channel batch (lacks interactivity);

Time-sharing OS: CPU time slices allocated in round-robin fashion;

Real-time OS: timely response, high reliability.

3. The most basic characteristics: concurrency and sharing.

4. Parallel vs. concurrent: parallel means truly executing at the same time; concurrent means executing in turns, each task getting a time slice in round-robin fashion, so that over a period of time execution looks parallel at the macro level.


Processes & threads

1. Concept: a process is a running instance of a program over a data set; it is the OS's basic unit of resource allocation.

2. Characteristics: dynamic, concurrent, independent, asynchronous; one program may correspond to multiple processes.

3. Composition: program, data, and PCB (process identifier, state such as blocked/ready/running, priority, saved CPU context, parent-child relationships, and the list of resources held).

4. States: ready, blocked, and running, with three corresponding queues. When a ready process is scheduled it enters the running state; if a running process requests a resource (such as an I/O operation) that cannot be satisfied, it blocks and is moved to the blocked queue; once the resource becomes available it returns to the ready state.

 

5. Kernel mode and user mode: kernel mode is privileged and can execute all instructions and access all registers and memory areas; user mode has lower privilege and can only access part of the memory and execute a subset of the instructions.

6. Creation: a child process is usually created with fork(). Watch out for zombie processes; how are they handled? As described in UNP: install a handler for SIGCHLD with signal() and reap the child in the handler using waitpid(). A minimal sketch follows.
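A minimal sketch, assuming a Linux/POSIX environment; error handling is mostly omitted for brevity:

    /* Reap children in a SIGCHLD handler so they do not become zombies. */
    #include <signal.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void sigchld_handler(int signo)
    {
        (void)signo;
        /* Reap every terminated child; WNOHANG keeps the handler non-blocking. */
        while (waitpid(-1, NULL, WNOHANG) > 0)
            ;
    }

    int main(void)
    {
        signal(SIGCHLD, sigchld_handler);   /* install handler before forking */

        pid_t pid = fork();
        if (pid == 0) {                     /* child */
            printf("child %d exiting\n", getpid());
            _exit(0);
        } else if (pid > 0) {               /* parent */
            sleep(1);                       /* child is reaped by the handler */
            printf("parent %d done\n", getpid());
        } else {
            perror("fork");
        }
        return 0;
    }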

7. Thread: the OS's basic unit of scheduling. All threads within a process share the process's memory space (resources); each thread only has its own program counter, stack, and a small number of registers.

8. Creation: pthread_create(&tid, attr, start_routine, arg).

9. Reclamation: another thread can reclaim it with pthread_join(), or the thread can call pthread_detach() on itself so that its resources are released automatically when its start routine returns. A minimal sketch follows.
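A minimal sketch of the three calls above (POSIX threads; link with -lpthread):

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *worker(void *arg)
    {
        printf("worker %ld running\n", (long)arg);
        return NULL;
    }

    static void *detached_worker(void *arg)
    {
        (void)arg;
        pthread_detach(pthread_self());   /* resources released automatically on return */
        printf("detached worker running\n");
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;

        pthread_create(&t1, NULL, worker, (void *)1L);
        pthread_join(t1, NULL);           /* reclaim t1 explicitly */

        pthread_create(&t2, NULL, detached_worker, NULL);
        sleep(1);                         /* give the detached thread time to finish */
        return 0;
    }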

Differences between processes and threads:

  9.1 The thread is the basic unit of scheduling; the process is the basic unit of resource allocation.

  9.2 Resource ownership: the process owns resources; a thread has only a small number of registers, a program counter, and a stack.

  9.3 Overhead: creating a process involves allocating memory space, so it is slower than creating a thread, and destruction is similar. Context-switch overhead is also higher for processes: a process switch must save its context, switch the memory space, and then switch the stack and related registers, whereas a thread switch only needs the latter steps and does not switch memory space, so it is faster. (UNP notes that creating a thread can be 10 to 100 times faster than creating a process.)

  9.4 Communication: process memory spaces are isolated, so processes must communicate across memory spaces (using pipes, message queues, shared memory, semaphores, or sockets), while threads can simply communicate through global variables.

  9.5 Application scenarios: multiple processes suit multi-machine (distributed) setups; multiple threads suit multi-core machines.

10. Inter-process communication (IPC)

Pipes: anonymous pipes and named pipes. Anonymous pipes can only be used between related processes (e.g., parent and child), while named pipes can be used between arbitrary processes. On Linux, pipe() is declared in <unistd.h>.
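A minimal sketch of an anonymous pipe between a parent and its child (POSIX; error handling omitted):

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        pipe(fd);                              /* fd[0] = read end, fd[1] = write end */

        if (fork() == 0) {                     /* child: reads from the pipe */
            char buf[64];
            close(fd[1]);
            ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
            buf[n > 0 ? n : 0] = '\0';
            printf("child got: %s\n", buf);
            _exit(0);
        }

        close(fd[0]);                          /* parent: writes to the pipe */
        const char *msg = "hello from parent";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);
        return 0;
    }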

Message queue: create a message queue; one process writes to it and another reads from it. Header file: <mqueue.h>, with mq_open() and mq_close().
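A minimal sketch of a POSIX message queue (the queue name "/demo_mq" is just an illustrative placeholder; on older glibc you may need to link with -lrt):

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
        mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0644, &attr);

        const char *msg = "hello";
        mq_send(mq, msg, strlen(msg) + 1, 0);          /* writer side */

        char buf[64];                                  /* must be >= mq_msgsize */
        mq_receive(mq, buf, sizeof(buf), NULL);        /* reader side */
        printf("received: %s\n", buf);

        mq_close(mq);
        mq_unlink("/demo_mq");
        return 0;
    }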

Shared memory: one of the fastest IPC mechanisms. Why is it fast? Because the shared region is mapped directly into the address space of every process that uses it, avoiding the copies through kernel buffers that other mechanisms incur. (Other mechanisms generally copy data from the sending process's buffer into a kernel buffer, and then the receiving process copies it from the kernel buffer into its own buffer.) Header: <sys/mman.h>.
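A minimal sketch using POSIX shared memory (the object name "/demo_shm" is just an illustrative placeholder; error handling omitted):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0644);
        ftruncate(fd, 4096);
        char *mem = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        if (fork() == 0) {                 /* child writes directly into the mapping */
            strcpy(mem, "written by child");
            _exit(0);
        }
        wait(NULL);                        /* parent reads it without any extra copy */
        printf("parent sees: %s\n", mem);

        munmap(mem, 4096);
        shm_unlink("/demo_shm");
        return 0;
    }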

Socket: an IPC mechanism that also works between processes on different machines (remote communication). Linux header file <sys/socket.h>.

(TCP and UDP each have their own programming paradigm.)
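As an illustration of the TCP paradigm (socket/bind/listen/accept), here is a minimal server skeleton; port 8080 is just an example and error handling is omitted:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int listenfd = socket(AF_INET, SOCK_STREAM, 0);   /* SOCK_DGRAM for UDP */

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);                      /* example port */

        bind(listenfd, (struct sockaddr *)&addr, sizeof(addr));
        listen(listenfd, 16);

        int connfd = accept(listenfd, NULL, NULL);        /* blocks until a client connects */
        const char *reply = "hello\n";
        write(connfd, reply, strlen(reply));

        close(connfd);
        close(listenfd);
        return 0;
    }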

11. Process Synchronization:

Mutex: pthread_mutex_t, with pthread_mutex_lock/trylock/unlock.

Read-write lock: read locks are shared, write locks are exclusive; similar to read/write locks in databases.

Semaphore: <semaphore.h>, sem_open()/sem_wait()/sem_trywait(); the producer-consumer problem from the OS course is a classic example.

Condition variable: pthread_cond_t with pthread_cond_signal() and pthread_cond_wait(); normally used together with a mutex (see the producer/consumer sketch after this list).

Record lock: a lock on a record; file locking is the typical example.
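A minimal sketch of a one-slot producer/consumer using a mutex plus a condition variable (link with -lpthread):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static int item = 0, ready = 0;

    static void *producer(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock);
        item = 42;                       /* produce */
        ready = 1;
        pthread_cond_signal(&cond);      /* wake up the consumer */
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    static void *consumer(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock);
        while (!ready)                   /* re-check the predicate after every wakeup */
            pthread_cond_wait(&cond, &lock);
        printf("consumed %d\n", item);
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;
        pthread_create(&c, NULL, consumer, NULL);
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }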

12. Scheduling algorithms

  12.1 First-come, first-served (FCFS): the job that arrived first (at the head of the ready queue) is scheduled first, and everything else waits behind it. (This reminds me of a roommate who occupied the dorm toilet for half an hour every morning... I was extremely anxious but could not preempt the resource... I had no choice but to find another host and become one of its temporary processes.)

  12.2 Shortest job first (SJF): the shortest job is executed first. Very unfriendly to long jobs, which can "starve."

  12.3 Priority scheduling (PSA): each job is assigned a priority, and the highest-priority job is selected from the backup queue and loaded into memory.

  12.4 Highest response ratio next (HRRN): the job with the highest response ratio is selected and loaded into memory, where response ratio = (waiting time + required service time) / required service time. This algorithm takes waiting time into account.
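As a worked illustration of the response-ratio formula, here is a minimal sketch with hypothetical job data that picks the next job under HRRN:

    #include <stdio.h>

    struct job { const char *name; double wait; double service; };

    int main(void)
    {
        /* hypothetical waiting and service times, in the same time unit */
        struct job jobs[] = { {"A", 9.0, 3.0}, {"B", 2.0, 1.0}, {"C", 6.0, 6.0} };
        int n = 3, best = 0;

        for (int i = 0; i < n; i++) {
            double ratio = (jobs[i].wait + jobs[i].service) / jobs[i].service;
            double best_ratio = (jobs[best].wait + jobs[best].service) / jobs[best].service;
            printf("%s: response ratio = %.2f\n", jobs[i].name, ratio);
            if (ratio > best_ratio)
                best = i;                /* keep the job with the highest ratio */
        }
        printf("schedule %s next\n", jobs[best].name);   /* A: (9+3)/3 = 4.0 */
        return 0;
    }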

13. Deadlock

A set of processes in which each process waits for a resource held by another process in the set, producing a circular wait.

Four necessary conditions: mutual exclusion, hold and wait, no preemption, circular wait.

How to prevent: attack conditions 2-4: require a process to request all of its resources at once (no hold and wait); allow resources to be preempted (e.g., by higher-priority processes); number each resource type and require resources to be requested in increasing order.

How to avoid: the banker's algorithm: knowing each process's declared needs in advance, grant a request only if the resulting state is safe, i.e., a safe sequence can still be found (see the safety-check sketch after this list).

How to detect: build a resource-allocation graph and check whether it can be completely reduced.

How to resolve: 1. terminate processes (terminate all of them, or terminate them one at a time in some order until the deadlock is broken); 2. preempt resources at minimum cost.
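A minimal sketch of the banker's algorithm safety check, using hypothetical Allocation/Need matrices and an Available vector, that searches for a safe sequence:

    #include <stdio.h>

    #define P 3   /* processes (hypothetical example data) */
    #define R 3   /* resource types */

    int main(void)
    {
        int alloc[P][R] = { {0, 1, 0}, {2, 0, 0}, {3, 0, 2} };
        int need[P][R]  = { {7, 3, 3}, {1, 2, 2}, {5, 0, 2} };
        int avail[R]    = { 3, 3, 2 };

        int finished[P] = { 0 }, order[P], count = 0;

        /* Repeatedly look for a process whose remaining need fits in Available. */
        int progress = 1;
        while (progress) {
            progress = 0;
            for (int i = 0; i < P; i++) {
                if (finished[i]) continue;
                int ok = 1;
                for (int j = 0; j < R; j++)
                    if (need[i][j] > avail[j]) { ok = 0; break; }
                if (ok) {
                    for (int j = 0; j < R; j++)
                        avail[j] += alloc[i][j];   /* it can finish and release its resources */
                    finished[i] = 1;
                    order[count++] = i;
                    progress = 1;
                }
            }
        }

        if (count == P) {                          /* here: P1, P2, P0 */
            printf("safe sequence:");
            for (int i = 0; i < count; i++) printf(" P%d", order[i]);
            printf("\n");
        } else {
            printf("unsafe state\n");
        }
        return 0;
    }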

14. Virtual memory

 In my opinion, virtual memory does not actually enlarge physical memory; rather, through demand paging (swapping pages in on demand) it "appears to" enlarge physical memory. The principle relies on temporal locality and spatial locality.

At startup, only the necessary pages of the program are loaded into memory. If a page fault occurs at runtime, the missing page is brought in and, when memory is full, another page is replaced. The best-known replacement algorithm is LRU (least recently used); its rationale is that a program or data item that has just been used is likely to be accessed again within a short period of time.

This is what allows very large programs to be loaded and run; demand paging is precisely what makes it possible.

15. Physical memory and virtual memory

 Physical addresses correspond to physical memory and logical addresses correspond to virtual memory; there is a mapping between the two.

When accessing a logical address:

1. The CPU splits the logical address into a page number and an offset within the page, then looks up the address translation table by page number to determine whether that page is in memory (see the address-split sketch after these steps);

2. If the page is already in memory (main memory), the corresponding physical frame is read from the translation table; the frame number together with the offset forms the physical address, and the required information is accessed;

3. If the page is not in memory, a page fault occurs and the information must be read from disk (secondary storage) into memory. If there is a free frame, the page is read directly into it; otherwise a page already in memory must be selected for replacement, the needed page is brought in, and the address translation table is updated.
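A minimal sketch of the address split, assuming 4 KB pages (12-bit offset) and a hypothetical page-to-frame mapping:

    #include <stdio.h>

    #define PAGE_SHIFT 12
    #define PAGE_SIZE  (1u << PAGE_SHIFT)

    int main(void)
    {
        unsigned int vaddr = 0x12345;                    /* hypothetical virtual address */
        unsigned int page  = vaddr >> PAGE_SHIFT;        /* index into the page table */
        unsigned int off   = vaddr & (PAGE_SIZE - 1);    /* offset within the page */

        /* If the page table maps page 0x12 to, say, frame 0x7A, the physical
         * address is (frame << PAGE_SHIFT) | off; a missing entry means a page fault. */
        unsigned int frame = 0x7A;                       /* hypothetical mapping */
        unsigned int paddr = (frame << PAGE_SHIFT) | off;

        printf("vaddr 0x%x -> page 0x%x, offset 0x%x, paddr 0x%x\n",
               vaddr, page, off, paddr);
        return 0;
    }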

Page replacement algorithms

1. Random: generate a random number to decide which page to swap out;

2. FIFO: swap out the page that entered memory first;

3. LRU (least recently used): replace the page that has not been used for the longest time (see the simulation sketch after this list);

4. Optimal (OPT): replace the page that will not be used again for the longest time in the future (an idealized algorithm).
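A minimal sketch simulating LRU replacement for a small, hypothetical reference string with 3 frames, using a "last used" timestamp per frame:

    #include <stdio.h>

    #define FRAMES 3

    int main(void)
    {
        int refs[] = { 7, 0, 1, 2, 0, 3, 0, 4 };       /* hypothetical reference string */
        int n = sizeof(refs) / sizeof(refs[0]);
        int frame[FRAMES], last[FRAMES];
        int faults = 0;

        for (int i = 0; i < FRAMES; i++) { frame[i] = -1; last[i] = -1; }

        for (int t = 0; t < n; t++) {
            int hit = -1, victim = 0;
            for (int i = 0; i < FRAMES; i++)
                if (frame[i] == refs[t]) hit = i;
            if (hit >= 0) {
                last[hit] = t;                          /* hit: refresh its timestamp */
                continue;
            }
            for (int i = 1; i < FRAMES; i++)            /* miss: evict least recently used */
                if (last[i] < last[victim]) victim = i;
            frame[victim] = refs[t];
            last[victim] = t;
            faults++;
            printf("fault on %d, placed in frame %d\n", refs[t], victim);
        }
        printf("total page faults: %d\n", faults);
        return 0;
    }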

16. Program compilation process: preprocessing, compilation, assembly, and linking.

17. Static linking vs. dynamic linking

1. With static linking, the library must be copied into the executable; with dynamic linking it does not need to be.

2. With static linking, the library functions are already inside the executable, so execution is faster; with dynamic linking, the library has to be looked up at run time, so execution is somewhat slower.

3. Statically linked executables are bloated, while executables that use dynamic libraries stay small and save storage space.

4. In addition, static libraries get duplicated into every executable that uses them (and header double-inclusion is solved with #ifndef guards and the like); dynamic libraries do not have this duplication problem.

5. Updating or extending a static library is troublesome (it is already linked into the executables) and requires the software to be recompiled; with a dynamic library you simply replace the library file.

