Operating System Theory and Multichannel Technology

What is an operating system

An operating system is a control program that coordinates, controls, and manages computer hardware and software.

More specifically, the operating system's job can be divided into two parts:

#1: Hide the ugly hardware interfaces and provide application programmers with a better, simpler, clearer model (the system-call interface) for using hardware resources.
With these interfaces, application programmers no longer need to worry about the details of driving the hardware and can concentrate on developing their own applications.
For example: the operating system provides the abstract concept of a file, and operating on a file is really operating on the disk.
With files, we no longer need to handle the low-level read/write control of the disk (controlling the platter rotation, moving the head to read and write data, and so on).
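As a small illustration of this abstraction, the sketch below (plain Python; the file name `demo.txt` in the system temp directory is invented for the demo) writes and reads a file without ever mentioning heads, tracks, or sectors; the operating system handles all of that behind `open`, `write`, and `read`:

```python
import os
import tempfile

# The file abstraction: write and read without touching disk mechanics.
path = os.path.join(tempfile.gettempdir(), "demo.txt")  # hypothetical file name

with open(path, "w") as f:   # the OS allocates disk blocks for us
    f.write("hello disk")    # no head movement or sector layout to manage

with open(path) as f:
    data = f.read()          # the OS locates the blocks and reads them back

print(data)  # -> hello disk
os.remove(path)              # the OS frees the blocks
```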

#2: Make applications' competing requests for hardware resources orderly.
For example, many applications actually share one set of computer hardware. Suppose three applications all need the printer to output content at the same time.
Program a competes for the printer and prints, then b competes for it, or perhaps c does, and the result is disorder:
the printer may print a section of a's output, then a section of c's, and so on. One of the functions of the operating system is to turn this disorder into order.
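A minimal sketch of that arbitration, with a Python `threading.Lock` standing in for the operating system's control of the printer (the job names and line counts are invented for the demo): each job must acquire the "printer" before emitting its lines, so output from different jobs never interleaves:

```python
import threading

printer_lock = threading.Lock()  # stands in for the OS's arbitration
output = []                      # stands in for the printed pages

def print_job(name):
    # A job holds the "printer" for its whole duration, so its lines
    # come out as one unbroken unit.
    with printer_lock:
        for i in range(3):
            output.append(f"{name}-{i}")

threads = [threading.Thread(target=print_job, args=(n,)) for n in "abc"]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Whatever order the jobs won the printer in, no job's output is
# interleaved with another's.
print(output)
```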

Operating system history

The first generation of computers (1940-1955): vacuum tubes and punch cards

The background of the first generation of computers:

Before the first generation, humans tried to replace manual labor with machinery. The first generation of computers marked the transition from the mechanical age to the electronic age. From Babbage's failure until the Second World War, there was little progress in building digital computers; World War II spurred an explosion of computer research.

Professor John Atanasoff of Iowa State University and his student Clifford Berry built what is believed to be the first working digital computer; the machine used 300 vacuum tubes. Around the same time, Konrad Zuse built the Z3 out of relays in Berlin, a group at Bletchley Park in England built the Colossus in 1944, Howard Aiken built the Mark I at Harvard, and John Mauchly and his student J. Presper Eckert built the ENIAC at the University of Pennsylvania. Some of these machines were binary, some used vacuum tubes, and some were programmable, but all were very primitive, taking seconds to perform even the simplest calculation.

During this period, a single group of engineers designed, built, programmed, operated, and maintained each machine. All programming was done in pure machine language, or worse, by wiring up thousands of cables to plugboards to control the machine's basic functions. Programming languages (even assembly) did not exist, and operating systems were unheard of. The process of using a machine was even more primitive; see the "Working process" below for details.

Features:
There is no concept of an operating system; all programming directly controls the hardware.

Working process:
A programmer signs up for a block of time on the sign-up sheet on the wall, then takes his plugboard to the machine room and plugs it into the computer. For those few hours he has exclusive use of the entire machine's resources, while the next group has to wait (and the machine's more than 20,000 vacuum tubes frequently burned out).

Later, punched cards appeared: programs could be written on cards and read into the machine, with no need for a plugboard.

Advantages:

The programmer has exclusive use of the entire machine for the reserved period and can debug his program in real time (bugs can be dealt with immediately).

Disadvantages:

Computer resources are wasted: only one person can use the machine at a time.
Note: there is only one program in memory at any moment, which the CPU loads and executes; running 10 programs, for example, is purely serial.

Second generation computers (1955~1965): transistors and batch systems

The background of the second generation computer:

Since computers were very expensive at the time, people naturally looked for ways to reduce wasted machine time. The usual approach was the batch system.

Features:
Designers, production personnel, operators, programmers, and maintenance personnel had a clear division of labor. The computer was locked in a special air-conditioned room and run by professional operators; this was the 'mainframe'.

The concept of an operating system appeared.

Programming languages appeared: FORTRAN and assembly. The programmer writes the program on paper, punches it onto cards, takes the card deck to the input room, hands it to an operator, and goes off to drink coffee while waiting for the output.

Working Process: Illustration

How the second generation solved the problems/disadvantages of the first generation:
1. Collect a batch of users' input into one large batch of input.
2. Then compute the jobs sequentially (this is itself a problem, and the second generation does not solve it).
3. Collect the batch's output into one large batch of output.

The predecessor of the modern operating system: (see picture)

Advantages: batch processing saves time.

Disadvantages:
1. The whole process requires human participation and control, with tapes carried back and forth (the two little people in the middle of the illustration).

2. The computation process is still sequential: "serial".

3. The machine that a programmer once enjoyed exclusively for a period must now be scheduled as one job in a batch, and the cycle of waiting for results and re-submitting after a fix proceeds together with the other jobs in the batch. This greatly hurts program-development efficiency: the programmer can no longer debug in time.

The third generation of computers (1965~1980): integrated circuit chips and multiprogramming

The background of the third generation computer:

In the early 1960s, most computer manufacturers had two completely incompatible product lines.

One was word-oriented: large scientific computers, such as the IBM 7094, mainly used for scientific and engineering computation.

The other was character-oriented: commercial computers, such as the IBM 1401, mainly used by banks and insurance companies for tape sorting and printing services.

It is expensive to develop and maintain completely different products, and different users use computers for different purposes.

IBM tried to satisfy both scientific and commercial computing by introducing the System/360 series. The low-end 360 models were comparable to the 1401, and the high-end models were far more powerful than the 7094; different performance levels sold at different prices.

The 360 was the first mainstream model to use (small-scale) integrated circuits, giving a greatly improved price/performance ratio compared with the transistor-based second generation. Descendants of these computers, the predecessors of today's servers, are still used in large computer centers and handle no fewer than a thousand requests per second.

How the third generation solved the second generation's problem 1:
Once a card deck is brought to the machine room, jobs can be read from the cards onto disk very quickly, so whenever a running job finishes, the operating system can load a new job from the disk into the freed memory region and run it. The same technique is applied to output. This technique is called SPOOLING (Simultaneous Peripheral Operation On-Line). Once it was adopted, the IBM 1401 machines were no longer needed and tapes no longer had to be carried around (the two little people in the middle are no longer needed).
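A toy sketch of the spooling idea, with a Python queue standing in for the disk buffer (job names and timings are invented for the demo): a fast "card reader" enqueues jobs as soon as they arrive, while a slow "printer" drains the queue independently, so neither has to wait for the other job-by-job:

```python
import queue
import threading
import time

spool = queue.Queue()  # stands in for the disk holding spooled jobs
printed = []

def card_reader():
    # Fast input device: jobs land on the spool as soon as they arrive.
    for job in ["job1", "job2", "job3"]:
        spool.put(job)
    spool.put(None)  # sentinel: no more jobs

def printer():
    # Slow output device: drains the spool at its own pace.
    while True:
        job = spool.get()
        if job is None:
            break
        time.sleep(0.01)  # simulate slow printing
        printed.append(job)

reader_t = threading.Thread(target=card_reader)
printer_t = threading.Thread(target=printer)
reader_t.start(); printer_t.start()
reader_t.join(); printer_t.join()

print(printed)  # -> ['job1', 'job2', 'job3']
```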

How the third generation solved the second generation's problem 2:

Third-generation operating systems widely used a key technique that second-generation operating systems lacked: multi-channel technology (multiprogramming).

While executing a task, if the CPU needs to operate the hard disk, it issues an instruction to the disk. Once the instruction is issued, the disk's mechanical arm moves to read the data into memory, and during this time the CPU must wait. The wait may be very short in human terms, but to the CPU it is very long, long enough for the CPU to do many other tasks. If we let the CPU switch to other tasks during this time, the CPU is fully utilized. This is the technical background of multi-channel technology.

Multi-channel technology:

"Multi-channel" in multi-channel technology refers to multiple programs. Multi-channel technology exists to solve the problem of scheduling, in an orderly way, multiple programs that compete for or share the same resource (such as the CPU). The solution is multiplexing, which comes in two forms: multiplexing in time and multiplexing in space.

Multiplexing in space: divide memory into several parts and put one program into each part, so that multiple programs are in memory at the same time.

 

Multiplexing in time: while one program is waiting for I/O, another program can use the CPU. If enough jobs are kept in memory at the same time, CPU utilization can approach 100%, similar to the "overall planning" method we learned in elementary-school mathematics. (Once the operating system adopts multi-channel technology, it controls the switching between processes, that is, the competition among processes for the right to execute on the CPU. Switching happens not only when a process runs into I/O: if a process occupies the CPU for too long, the operating system will also switch it out and take away its right to execute.)
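The payoff of overlapping I/O waits can be seen in a small Python sketch (the sleep is an invented stand-in for disk I/O): five tasks that each "wait on I/O" for 0.2s finish in roughly 0.2s of wall time when overlapped, not the 1.0s a serial run would take:

```python
import threading
import time

def task():
    time.sleep(0.2)  # simulated disk I/O: the CPU is free during this wait

start = time.time()
threads = [threading.Thread(target=task) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start

# Run serially, these five waits would take about 1.0s; overlapped,
# the wall time stays close to a single 0.2s wait.
print(f"elapsed: {elapsed:.2f}s")
```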

Modern computers and networks are multi-user: multiple users share not only the hardware but also information such as files and databases. Sharing means potential conflict and disorder.

To manage this, the operating system mainly:

1. Records which program uses which resource

2. Allocates resources to requests

3. Mediates conflicting resource requests from different programs and users

We can summarize the above functions of the operating system as: handling requests from multiple programs to share (that is, to reuse) resources, called multiplexing for short.

There are two ways to implement multiplexing:

1. Multiplexing in time

When a resource is multiplexed in time, different programs or users take turns using it: the first program acquires the resource and uses it, then it is the second one's turn, then the third, and so on.

For example: there is only one CPU, and multiple programs need to run on it. The operating system first allocates the CPU to the first program. When that program has run long enough (how long is determined by the operating system's algorithm) or blocks on I/O, the operating system allocates the CPU to the next program, and so on, until the first program is assigned the CPU again and resumes. Because the CPU switches very quickly, the user feels that these programs are running at the same time; this is concurrency, also called pseudo-parallelism. How a resource is time-multiplexed, which program runs next, and how long each task runs are all the operating system's business.
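The turn-taking described above can be sketched as a toy round-robin scheduler (the job names and work units are invented; a real scheduler's quantum and queue discipline are far more involved):

```python
from collections import deque

def round_robin(jobs, quantum=1):
    """jobs: dict of name -> units of work; returns the order in which
    the CPU is handed out, one quantum at a time."""
    ready = deque(jobs.items())
    order = []
    while ready:
        name, remaining = ready.popleft()
        order.append(name)            # this program gets the CPU...
        remaining -= quantum          # ...for one time slice
        if remaining > 0:
            ready.append((name, remaining))  # back of the ready queue
    return order

print(round_robin({"a": 2, "b": 1, "c": 3}))
# -> ['a', 'b', 'c', 'a', 'c', 'c']
```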

2. Multiplexing in space

Each client gets a small portion of a larger resource, reducing the time spent queuing for the whole resource.

For example, when multiple running programs are loaded into memory at the same time, the hardware provides a protection mechanism, controlled by the operating system, that keeps their memory regions separate. This is far more efficient than one program monopolizing all of memory while the others queue up to enter it one by one.

Another spatially multiplexed resource is the disk: in many systems, one disk holds files for many users at the same time. Allocating disk space and keeping track of who is using which disk blocks are typical operating-system resource-management tasks.

The combination of these two methods is multi-channel technology.

The biggest problem with spatial multiplexing is that the memory of different programs must be isolated from one another. This isolation must be implemented at the hardware level and controlled by the operating system. If memory regions were not isolated, one program could access another program's memory, causing two kinds of loss:

The first loss is security. For example, if your QQ program could access the operating system's memory, QQ could obtain all of the operating system's privileges.

The second loss is stability. When one program crashes, it could also corrupt the memory of other programs; if it corrupted the operating system's memory, the operating system itself would crash.
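Process isolation is visible even from user space. In the sketch below (plain Python; the variable name `secret` is invented for the demo), a child process gets its own address space: assigning to `secret` there cannot touch the parent's copy:

```python
import subprocess
import sys

secret = "parent data"

# A separate process cannot reach into our memory; it only sees what
# it is explicitly given. Here it sets its own, unrelated `secret`.
child = subprocess.run(
    [sys.executable, "-c", "secret = 'child data'; print(secret)"],
    capture_output=True, text=True,
)

print(child.stdout.strip())  # -> child data   (the child's private copy)
print(secret)                # -> parent data  (our copy is untouched)
```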

The operating systems of third-generation computers were still batch systems.

Many programmers missed the first generation's exclusive access to the machine, which let them debug their programs on the fly. To give programmers a fast response, the time-sharing operating system appeared.

How the third generation solved the second generation's problem 3:

Time-sharing operating system:
multiple online terminals + multi-channel technology

Suppose 20 clients are loaded into memory at the same time: 17 are thinking, 3 are running, and the CPU uses multi-channel technology to handle those 3 programs in memory. Because clients mostly submit short commands that consume little CPU time, the computer can provide fast interactive service to many users at once, each of whom feels he has exclusive access to the machine.

CTSS: MIT developed CTSS (the Compatible Time-Sharing System) on a modified 7094. Time-sharing only became popular after third-generation computers widely adopted the necessary protection hardware (isolating programs' memory from one another).

After CTSS succeeded, MIT, Bell Labs, and General Electric decided to develop MULTICS, a system that could support hundreds of terminals at the same time (its designers aimed to build a machine meeting the computing needs of everyone in the Boston area). It was obviously reaching for heaven, and in the end it fell to its death.

Later, Ken Thompson, a Bell Labs computer scientist who had worked on MULTICS, developed a simple single-user version of MULTICS, which became the UNIX system. Many other Unix versions were derived from it, and to let programs run on any version of Unix, IEEE proposed a Unix standard, POSIX (Portable Operating System Interface).

Later, in 1987, a small UNIX clone called MINIX appeared for educational use; the Finnish student Linus Torvalds wrote Linux based on it.

Fourth generation computers (1980~present): personal computers

From teacher egon's blog
