Operating System Principles Chapter 9 Virtual Memory

Learning record for the undergraduate course Operating System Principles (part of a series of course notes)

Code must be brought into memory to run, but not all of it needs to be loaded at once.

The principle of locality: a program can run with only part of it resident in memory.

9.1 Virtual memory technology

When a process runs, part of it is loaded into memory while the rest stays on disk. When an instruction to be executed or data to be accessed is not in memory, the operating system automatically brings it in from disk so execution can continue.

Virtual address space: the virtual memory allocated to the process

Virtual Address: The location of an instruction or data in virtual memory

Virtual memory: combines memory and disk into one large-capacity "memory". Its size is also bounded by the machine's address width: a 32-bit address space is limited to 4 GB, while a 64-bit address space can be far larger.

Copy-on-Write

Copy-on-write allows parent and child processes to initially share the same pages after fork.

Only when one process modifies a shared page is a private copy of that page made.
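The bookkeeping behind copy-on-write can be sketched with a toy model (a sketch only: the `Frame`/`fork`/`write` names and reference-counted frames are illustrative, not a real kernel API):

```python
# Toy model of copy-on-write: parent and child page tables point at the
# same frame objects until one of them writes.

class Frame:
    def __init__(self, data):
        self.data = data
        self.refs = 1          # how many page tables map this frame

def fork(page_table):
    """Child shares every frame with the parent; nothing is copied yet."""
    for frame in page_table:
        frame.refs += 1
    return list(page_table)    # child's page table: the same frame objects

def write(page_table, page_no, data):
    """On a write to a shared frame, copy that one frame first."""
    frame = page_table[page_no]
    if frame.refs > 1:         # shared: make a private copy (copy-on-write)
        frame.refs -= 1
        frame = Frame(frame.data)
        page_table[page_no] = frame
    frame.data = data

parent = [Frame("a"), Frame("b")]
child = fork(parent)
write(child, 0, "modified")            # only page 0 gets copied
print(parent[0].data, child[0].data)   # a modified
print(parent[1] is child[1])           # True: page 1 is still shared
```

Note that only the written page is duplicated; the untouched page stays shared, which is exactly why fork is cheap under copy-on-write.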

9.2 Implementation of virtual memory

Two implementations:

  • Virtual paging (virtual storage technology + paged storage management)

  • Virtual segmentation (virtual storage technology + segmented storage management)

There are two variants of virtual paging:

  • Demand paging (all pages start out in external memory)
  • Prepaging (some pages are loaded in advance)

Demand paging

  • Before the process starts running, only one page (or even none) is loaded, rather than all pages

  • Once running, other pages are loaded dynamically as the process needs them

  • When memory is full and a new page must be loaded, a replacement algorithm chooses a resident page to evict so the new page can be brought in

Page fault

The first access to a page that is not in memory traps to the operating system: a page-fault trap.

  1. Reference an instruction or a data item
  2. Check an internal table (kept with the process control block) to decide:
    2.1 invalid reference => terminate the process
    2.2 valid reference, but the page is simply not in memory => continue
  3. Find the page's location on the backing store
  4. Get an empty page frame and swap the page into it
  5. Update the page table; set the valid bit to v
  6. Restart the interrupted instruction

Page-fault interrupt handling (final exam topic): an internal interrupt (a trap / software interrupt)

[Figure: page-fault interrupt handling flow (original image unavailable)]

Performance of demand paging (final exam)

Demand paging trades time for space.

Page-fault rate p: the number of page faults divided by the total number of memory accesses, i.e. the probability that a given access faults.

Effective Access Time (EAT)

EAT = (1 - p) * memory access time + p * page-fault service time
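As a worked example of the formula (the 200 ns access time and 8 ms fault-service time below are assumed numbers, not from the notes):

```python
def eat(p, mem_ns=200, fault_ns=8_000_000):
    """Effective access time: EAT = (1 - p) * memory access + p * fault time."""
    return (1 - p) * mem_ns + p * fault_ns

print(eat(0.001))   # -> about 8199.8 ns: one fault per 1000 accesses
                    #    slows the average access down by roughly 40x
```

Even a tiny fault rate dominates EAT because the fault-service time is millions of times the memory access time.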

9.3 Page Replacement

Basic page replacement procedure:

  1. Find the location of the desired page on disk
  2. Find a free frame: if one exists, use it; otherwise run a replacement algorithm to select a victim frame (many strategies are possible here)
  3. Read the desired page into the free frame; update the page and frame tables
  4. Restart the user process

Page Replacement Algorithm

FIFO first in first out algorithm

FIFO is implemented with a queue. On each reference, if the page is already in memory, nothing is done; if a free frame remains, the page is loaded and appended to the tail of the queue; otherwise the page at the head of the queue (the oldest) is evicted, and the new page is inserted at the tail.

Belady's anomaly: with FIFO, more page frames can cause more page faults (e.g. four frames can fault more often than three).
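Both the FIFO policy and Belady's anomaly can be checked with a short simulation (a sketch, using the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5):

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults for FIFO replacement over a reference string."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue                      # hit: FIFO ignores recency
        faults += 1
        if len(frames) == nframes:        # no free frame: evict the oldest
            frames.discard(queue.popleft())
        frames.add(page)
        queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]   # classic Belady string
print(fifo_faults(refs, 3))   # 9
print(fifo_faults(refs, 4))   # 10 -- more frames, yet more faults
```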

OPT Optimal Replacement Algorithm

The page replaced is one that will never be needed again or, failing that, will not be used for the longest time (scan forward from the current reference: evict whichever resident page is no longer needed, or is needed farthest in the future).

Role: as a standard to measure the performance of other algorithms
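OPT can be simulated by looking ahead in the reference string (a sketch; the reference string is the common textbook example with three frames):

```python
def opt_faults(refs, nframes):
    """Count faults for the optimal (farthest-future-use) policy."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            # evict the page whose next use is farthest away (or never)
            def next_use(p):
                future = refs[i + 1:]
                return future.index(p) if p in future else len(refs)
            frames.discard(max(frames, key=next_use))
        frames.add(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(opt_faults(refs, 3))   # 9 faults -- the lower bound for this string
```

Because OPT needs the future of the reference string, it cannot be implemented in a real OS; it only serves as the yardstick described above.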

[External link picture transfer failed, the source site may have an anti-leeching mechanism, it is recommended to save the picture and upload it directly (img-mogotoAH-1641365786874) (E:\Documents and PPT\Junior Course Learning\Operating System\Pictures\Ninth Chapter \OPT permutation algorithm)]

LRU least recently used algorithm

Looking backward from the current reference, find the page that has not been used for the longest time; when tracing by hand, it helps to keep the most recently used page at the top of each column and the least recently used at the bottom.

Replace pages that have not been used for the longest time
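LRU can be simulated with an ordered map that moves a page to the "most recent" end on every hit (a sketch, run on the same reference string as the OPT example for comparison):

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """Count faults for LRU: evict the page unused for the longest time."""
    frames, faults = OrderedDict(), 0   # ordered from least to most recent
    for page in refs:
        if page in frames:
            frames.move_to_end(page)    # hit: mark as most recently used
            continue
        faults += 1
        if len(frames) == nframes:
            frames.popitem(last=False)  # evict the least recently used page
        frames[page] = True
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(lru_faults(refs, 3))   # 12 faults (vs 9 for OPT on the same string)
```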

Second Chance Algorithm

Requires a reference (access) bit per page.

If the reference bit of the candidate page is 0, replace it directly.

If the reference bit of the candidate page is 1, then:

  • set the reference bit to 0

  • keep the page in memory

  • move on to the next page under the same rules

Implementation: the clock algorithm (frames form a circular list swept clockwise by a pointer).
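The clock sweep can be sketched as follows (illustrative code: each slot holds a page and its reference bit, and the hand clears bits until it finds a victim):

```python
def clock_faults(refs, nframes):
    """Second-chance (clock) replacement: a reference bit buys one reprieve."""
    slots = [None] * nframes            # each slot: [page, ref_bit] or None
    hand, faults = 0, 0
    for page in refs:
        hit = next((s for s in slots if s and s[0] == page), None)
        if hit:
            hit[1] = 1                  # hit: set the reference bit
            continue
        faults += 1
        while True:                     # sweep the hand until a victim is found
            if slots[hand] is None or slots[hand][1] == 0:
                slots[hand] = [page, 1] # replace (or fill a free slot)
                hand = (hand + 1) % nframes
                break
            slots[hand][1] = 0          # bit was 1: clear it, give second chance
            hand = (hand + 1) % nframes
    return faults

print(clock_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))   # 9
```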

9.4 System thrashing

9.4.1 Frame allocation

Must satisfy: each process receives at least the minimum number of frames it requires.

Two main allocation strategies

  • fixed allocation
  • priority allocation

Fixed allocation:

  • Equal allocation – every process gets the same share. Example: with 100 frames and 5 processes, each process gets 20 frames.
  • Proportional allocation – frames are allocated in proportion to each process's size.
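Proportional allocation computes a_i = s_i / S * m for each process. A small sketch, using the common textbook figures of two processes of 10 and 127 pages sharing 62 frames (leftover frames go to the largest remainders, so the result is deterministic):

```python
def proportional(sizes, m):
    """Allocate m frames in proportion to process sizes: a_i = s_i / S * m."""
    S = sum(sizes)
    alloc = [s * m // S for s in sizes]           # integer base shares
    remainders = [(s * m % S, i) for i, s in enumerate(sizes)]
    # hand the leftover frames to the processes with the largest remainders
    for _, i in sorted(remainders, reverse=True)[: m - sum(alloc)]:
        alloc[i] += 1
    return alloc

print(proportional([10, 127], 62))   # [5, 57]
```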

Priority allocation:

Use a proportional allocation scheme based on process priority rather than process size.

If process Pi incurs a page fault:

  • select for replacement one of Pi's own frames, or
  • select a frame from a process with lower priority

9.4.2 Thrashing

If a process does not have enough frames, its page-fault rate will be high, which causes:

  • low CPU utilization;
  • the operating system concluding that it should increase the degree of multiprogramming;
  • another process being added to the system, which makes things worse.

Thrashing: the pages of a process are swapped in and out so frequently that it spends more time paging than executing.

Cause of thrashing: the number of pages the process accesses frequently exceeds the number of physical frames allocated to it (too few frames for its locality).

Working set model:

The working set concept: a collection of pages that a process actually accesses in a short time interval.

Possible remedies for thrashing:

  a. Installing a faster CPU does not help

  b. Installing a bigger paging disk does not help

  c. Increasing the degree of multiprogramming (makes thrashing worse)

  d. Decreasing the degree of multiprogramming (helps)

  e. Installing more memory (helps)

  f. Installing faster disks, or multiple controllers with multiple disks, may help

  g. Adding prepaging to the page-fetch algorithm (may help)
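The working set WS(t, Δ) is simply the set of distinct pages referenced in the last Δ accesses. A minimal sketch (the reference string and window size are made-up illustration values):

```python
def working_set(refs, t, delta):
    """WS(t, delta): distinct pages referenced in the last `delta` accesses,
    up to and including time t (0-indexed)."""
    return set(refs[max(0, t - delta + 1): t + 1])

refs = [1, 2, 1, 3, 4, 4, 3, 3, 2, 2]
print(working_set(refs, 4, 3))   # {1, 3, 4}: pages touched at times 2..4
```

A process thrashes when its working set is larger than the number of frames it has been allocated; the working-set model allocates frames to cover WS and suspends processes when the sum of working sets exceeds physical memory.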

9.5 Kernel memory allocation

Kernel memory is treated differently from user memory; it is usually allocated from a free-memory pool:

  • The kernel needs to allocate memory for data structures of varying sizes
  • Some kernel memory must be physically contiguous (e.g. for device I/O)

The kernel's use of memory blocks has the following characteristics:
(1) the blocks are relatively small;
(2) they are held for a relatively short time;
(3) allocation and reclamation must be fast;
(4) they do not participate in swapping;
(5) blocks of the same size are frequently used to store data of the same structure;
(6) allocation and reclamation are dynamic.

Buddy System

Main idea: manage memory in power-of-two sized blocks. A request is rounded up to the next power of two, and a larger free block is split into two equal "buddies" until a block of the right size is obtained; on release, a block is coalesced with its buddy whenever the buddy is also free.
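A minimal sketch of a power-of-two buddy allocator (the class and method names are illustrative, not a real kernel API; sizes and offsets are in KB, and the worked request of 21 KB from a 256 KB segment is the common textbook example):

```python
class BuddyAllocator:
    """Toy power-of-two buddy allocator; offsets and sizes in KB."""
    def __init__(self, total):
        self.total = total                 # must be a power of two
        self.free = {total: [0]}           # block size -> free block offsets
    def alloc(self, request):
        size = 1
        while size < request:              # round request up to a power of two
            size *= 2
        avail = [s for s, lst in self.free.items() if s >= size and lst]
        if not avail:
            return None
        s = min(avail)                     # smallest free block that fits
        start = self.free[s].pop()
        while s > size:                    # split: keep left half, free the buddy
            s //= 2
            self.free.setdefault(s, []).append(start + s)
        return start, size
    def release(self, start, size):
        while size < self.total:           # coalesce with the buddy if it is free
            buddy = start ^ size           # buddy address differs in one bit
            if buddy in self.free.get(size, []):
                self.free[size].remove(buddy)
                start, size = min(start, buddy), size * 2
            else:
                break
        self.free.setdefault(size, []).append(start)

b = BuddyAllocator(256)          # a 256 KB segment
print(b.alloc(21))               # (0, 32): 21 KB rounds up to a 32 KB block
b.release(0, 32)
print(b.free[256])               # [0]: all buddies coalesced back to 256 KB
```

The XOR trick works because a block of size 2^k always starts at a multiple of 2^k, so its buddy's address differs only in bit k.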

9.6 Other factors

9.6.1 Prepaging

Reduces the burst of page faults at process start-up.

Load all or some of the pages a process will need before they are referenced.

If the preloaded pages are never used, the memory and I/O spent on them are wasted.

9.6.2 Page Size Selection

Factors in choosing a page size:

  • Fragmentation – argues for small pages
  • Page-table size – argues for large pages
  • I/O overhead – argues for large pages
  • Program locality – argues for small pages
  • Page-fault rate – argues for large pages
  • other factors

There is no single best answer; in general, the trend has been toward larger pages.

9.6.3 TLB reach

TLB reach – the amount of memory accessible through the TLB

TLB reach = (number of TLB entries) X (page size)

Ideally, the working set of each process fits within the TLB reach

  • otherwise there will be a large number of TLB misses, each costing extra memory accesses
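As a worked example of the formula (the 64-entry TLB and 4 KB page size below are assumed, typical numbers):

```python
tlb_entries = 64
page_size = 4 * 1024                      # 4 KB, in bytes
tlb_reach = tlb_entries * page_size       # TLB reach = TLB size x page size
print(tlb_reach // 1024, "KB")            # 256 KB covered without a TLB miss
```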

Increasing the page size

  • increases internal fragmentation for applications that do not need huge pages

Providing multiple page sizes

  • lets applications that need huge pages use them, without increasing fragmentation for everyone else


Origin blog.csdn.net/weixin_45788387/article/details/122323806