
Chapter 6: Page Replacement Algorithms



  • Function and Goal
  • Experimental setup and evaluation method
  • Local Page Replacement Algorithms
    • Optimal page replacement algorithm (OPT, Optimal)
    • First-in, first-out algorithm (FIFO, First-In First-Out)
    • Least Recently Used algorithm (LRU, Least Recently Used)
    • Clock Page Replacement Algorithm (Clock)
    • Second Chance Algorithm
    • Least Frequently Used Algorithm (LFU, Least Frequently Used)
    • Belady's anomaly
    • Comparison of LRU, FIFO and Clock
  • Global Page Replacement Algorithms
    • Working set model
    • Working Set Replacement Algorithm
    • Page Fault Frequency (PFF) Replacement Algorithm

(1) Function and Goal

  • Function:
    • When a page fault occurs, a new page must be loaded but memory is already full: choose which physical page currently in memory should be swapped out to make room
  • Target:
    • Reduce the number of pages swapped in and out as much as possible (i.e., the number of page fault interrupts). Concretely, swap out pages that will never be used again or that will be used least in the near future. Since future accesses are unknown, the choice can usually only be guided by the principle of locality and by predictions based on past access statistics
  • Page locking (frame locking):
    • Used for critical parts of the operating system, or for time-critical application processes, that must stay resident in memory. It is implemented by adding a lock bit to the page table entry

6.1 Optimal page replacement algorithm (OPT, Optimal)

  • The basic idea:
    • When a page fault occurs, for each logical page currently in memory, determine how long it will be until its next access, and choose the page with the longest time until next access as the victim
    • This is an idealized policy that cannot be implemented in a real system, because the operating system has no way of knowing how long each page will wait before it is accessed again
    • It can serve as a baseline for evaluating other algorithms (run the program once on an emulator and record every page access; the optimal replacement algorithm can then be applied on a second run over the recorded trace), as in the sketch below
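A minimal simulation sketch of OPT (the names are illustrative; it assumes the whole reference string refs is known in advance, which is exactly why OPT is only usable offline):

```python
def opt_faults(refs, frames):
    """Count page faults under OPT for a full, known reference string."""
    memory, faults = [], 0
    for i, page in enumerate(refs):
        if page in memory:
            continue                      # hit: nothing to do
        faults += 1
        if len(memory) < frames:
            memory.append(page)           # a free frame is still available
            continue
        # Evict the resident page whose next use lies farthest in the
        # future (or that is never used again).
        def next_use(p):
            future = refs[i + 1:]
            return future.index(p) if p in future else float("inf")
        victim = max(memory, key=next_use)
        memory[memory.index(victim)] = page
    return faults
```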

6.2 First-In First-Out Algorithm (FIFO, First-In First-Out)

  • The basic idea:
    • Evict the page that has been resident in memory the longest. Concretely, the system maintains a linked list of all logical pages currently in memory, ordered by arrival time: the head of the list is the page that has been resident longest, the tail the one resident for the shortest time. When a page fault occurs, the page at the head is evicted and the new page is appended to the tail.
    • Performance is poor: the evicted page may well be a frequently accessed one, and FIFO exhibits Belady's anomaly, so the FIFO algorithm is rarely used on its own. A counting sketch follows.
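A minimal counting sketch of FIFO (function and variable names are illustrative assumptions):

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO for a given reference string."""
    memory, queue, faults = set(), deque(), 0
    for page in refs:
        if page in memory:
            continue                          # hit
        faults += 1
        if len(memory) >= frames:
            memory.remove(queue.popleft())    # evict the oldest resident page
        memory.add(page)
        queue.append(page)                    # the new page goes to the tail
    return faults
```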

6.3 Least Recently Used Algorithm (LRU, Least Recently Used)

  • The basic idea:

    • When a page fault occurs, evict the page that has gone unused for the longest time.
    • It approximates the optimal algorithm and is based on the principle of locality: if some pages are accessed frequently over a short period (the last few instructions), they are likely to be accessed frequently again in the near future; conversely, pages that have not been accessed for a long time are unlikely to be accessed soon.
    • The LRU algorithm must keep track of the order in which pages were last used, which is expensive
  • Two possible implementations are:

    • The system maintains a linked list of pages, with the most recently used page at the head and the least recently used page at the tail. On every memory access, the referenced page is located in the list, removed, and reinserted at the head; if it is not in the list, it is simply inserted at the head. On a page fault, the page at the tail of the list is evicted (see the sketch below).
    • Alternatively, keep a stack of active page numbers: when a page is accessed, push its number onto the top of the stack and remove any earlier occurrence of the same number. When a page must be evicted, always choose the one at the bottom of the stack, which is the page unused for the longest time.
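A sketch of the first (linked-list) implementation; Python's OrderedDict plays the role of the list, and the names are illustrative:

```python
from collections import OrderedDict

def lru_faults(refs, frames):
    """Count page faults under LRU; the OrderedDict keeps recency order."""
    memory = OrderedDict()                 # least recently used -> most recently used
    faults = 0
    for page in refs:
        if page in memory:
            memory.move_to_end(page)       # hit: refresh its recency
            continue
        faults += 1
        if len(memory) >= frames:
            memory.popitem(last=False)     # evict the least recently used page
        memory[page] = True
    return faults
```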

6.4 Clock Page Replacement Algorithm (Clock)

  • An approximation of LRU and an improvement on FIFO
  • The basic idea:
    • Uses the access (reference) bit in the page table entry. When a page is loaded into memory, the bit is initialized to 0; whenever the page is subsequently accessed (read or write), the bit is set to 1;
    • Organize the resident pages into a circular linked list (like a clock face), with a pointer (the clock hand) pointing at the oldest page (the one that arrived first);
    • When a page fault occurs, examine the page the hand points to. If its access bit is 0, evict it immediately; if the bit is 1, clear it to 0 and advance the hand one position. Repeat until a victim is found, then advance the hand to the next position (see the sketch below).
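A simulation sketch of the Clock algorithm under these rules (names are illustrative; the access that triggers the fault is assumed to set the access bit of the newly loaded page):

```python
def clock_faults(refs, frames):
    """Count page faults under the Clock policy."""
    pages = [None] * frames     # circular buffer of resident pages
    used = [0] * frames         # access (use) bit per frame
    hand, faults = 0, 0
    for page in refs:
        if page in pages:
            used[pages.index(page)] = 1     # hit: set the access bit
            continue
        faults += 1
        while used[hand]:                   # recently used: clear the bit, move on
            used[hand] = 0
            hand = (hand + 1) % frames
        pages[hand], used[hand] = page, 1   # evict and load the new page here
        hand = (hand + 1) % frames          # hand moves to the next cell
    return faults
```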

6.5 Second Chance Algorithm

  • Replacing a "dirty page" (a modified page) is expensive, because it must be written back to disk before its frame can be reused
  • Modify the Clock algorithm so that dirty pages tend to survive a sweep of the clock hand and clean pages are replaced first
  • Use both the used (access) bit and the dirty bit to guide replacement; the table below gives the bits after the hand examines a page (used', dirty'):
used  dirty  ->  used'  dirty'
 0     0         replace the page
 0     1         0      0   (schedule a write-back, give a second chance)
 1     0         0      0   (clear the used bit, give a second chance)
 1     1         0      1   (clear the used bit, give a second chance)
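A sketch of one victim-selection sweep following the table above; the frame representation (a list of dicts with used/dirty bits) and the stand-alone function are illustrative assumptions, and a real implementation would keep the hand position between faults:

```python
def second_chance_victim(frames, hand):
    """Scan the circular frame list; return (victim_index, new_hand)."""
    while True:
        f = frames[hand]          # f = {"page": ..., "used": 0 or 1, "dirty": 0 or 1}
        if f["used"] == 0 and f["dirty"] == 0:
            return hand, (hand + 1) % len(frames)  # clean and not recently used: replace
        if f["used"] == 0 and f["dirty"] == 1:
            f["dirty"] = 0        # schedule a write-back, spare it this pass
        else:
            f["used"] = 0         # clear the used bit, keep the dirty bit
        hand = (hand + 1) % len(frames)
```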

6.6 Least Frequently Used Algorithm (LFU, Least Frequently Used)

  • The basic idea:
    • When a page fault occurs, evict the page that has been accessed the fewest times.
  • Implementation:
    • Keep an access counter for each page; every time the page is accessed, increment its counter by 1. When a page fault occurs, evict the page with the smallest count.
  • The difference between LRU and LFU:

    • LRU considers how recently a page was last accessed (the more recent, the better), while LFU considers how many times, or how frequently, it has been accessed (the more accesses, the better).
  • Problem:

    • A page may be used heavily when a process starts but never again later, yet its large count keeps it resident; maintaining the counters is also time-consuming.
  • Solution:

    • Periodically shift each count register right by one bit (halving it), so that old accesses decay, as in the sketch below
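A counting sketch of LFU including the periodic right-shift decay; decay_period and the other names are illustrative assumptions:

```python
from collections import defaultdict

def lfu_faults(refs, frames, decay_period=0):
    """Count page faults under LFU, optionally aging the counters."""
    counts, memory, faults = defaultdict(int), set(), 0
    for i, page in enumerate(refs):
        if decay_period and i and i % decay_period == 0:
            for p in counts:
                counts[p] >>= 1            # right shift: old accesses decay
        counts[page] += 1
        if page in memory:
            continue
        faults += 1
        if len(memory) >= frames:
            victim = min(memory, key=lambda p: counts[p])   # least frequently used
            memory.remove(victim)
        memory.add(page)
    return faults
```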

6.7 Belady's Anomaly

  • Belady's anomaly:

    • When using the FIFO algorithm, an abnormal situation can sometimes occur in which increasing the number of allocated physical pages actually increases the page fault rate.
  • Reason:

    • The replacement rule of FIFO conflicts with the dynamic locality of the process's memory accesses, and is inconsistent with the goal of replacement (evicting rarely used pages), because the pages FIFO evicts are not necessarily pages the process will never access again. The classic reference string below exhibits the anomaly.
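Using the fifo_faults sketch from section 6.2, the classic reference string shows the anomaly: adding a frame increases the number of faults.

```python
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]   # classic Belady reference string
print(fifo_faults(refs, frames=3))            # 9 page faults
print(fifo_faults(refs, frames=4))            # 10 page faults: more frames, more faults
```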

6.8 Comparison of LRU, FIFO and Clock

  • LRU and FIFO are both essentially queue-based (first-in, first-out) ideas, but LRU orders pages by their most recent access time, so the order must be adjusted dynamically on every access (some page's most recent access time changes), whereas FIFO orders pages by the time they entered memory, which is fixed, so the order never changes. If a page has not been accessed since entering memory, its most recent access time is its arrival time; in other words, if none of the resident pages has been accessed since being loaded, the LRU algorithm degenerates into FIFO.
  • LRU performs better but its system overhead is high; FIFO has low overhead but may exhibit Belady's anomaly. The Clock algorithm is the compromise: instead of reordering the list on every access it merely sets a mark (the access bit), and only moves pages when a page fault occurs. For pages that have been accessed while resident, it cannot remember their exact recency order the way LRU does.

6.9 Working Set Model

(1) Working set

  • Definition:
    • The set of logical pages a process is currently using, which can be represented by the two-argument function W(t, Δ)
      • t is the current execution time
      • Δ is called the working-set window, a fixed-length window of page accesses
      • W(t, Δ) = the set of all pages referenced in the window of length Δ ending at the current time t (as t changes, the set changes)
      • |W(t, Δ)| is the size of the working set, i.e., the number of pages in it (a small sketch follows)
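A small sketch of W(t, Δ) over a recorded reference string, where refs[i] is the page accessed at time i (the names are illustrative):

```python
def working_set(refs, t, delta):
    """W(t, delta): the pages referenced in the window of length delta ending at t."""
    start = max(0, t - delta + 1)
    return set(refs[start:t + 1])

refs = [1, 2, 1, 3, 2, 4, 2, 1]
print(working_set(refs, t=5, delta=4))        # pages touched at times 2..5 -> {1, 2, 3, 4}
print(len(working_set(refs, t=5, delta=4)))   # |W(t, delta)| = 4
```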

(2) Resident set

  • Definition:
    • The set of pages that actually reside in memory for the process at the current moment
    • The working set is an intrinsic property of the process as it runs, while the resident set depends on the number of physical pages the system allocates to the process and on the page replacement algorithm used.
    • If a process's entire working set is in memory, i.e., the resident set contains (⊇) the working set, the process runs smoothly without causing many page faults (until the working set changes drastically and the process transitions to another phase)
    • Once the size of the resident set reaches a certain level, allocating more physical pages to the process no longer significantly reduces its page fault rate.

6.10 Two Global Page Replacement Algorithms

(1) Page fault frequency (PFF) replacement algorithm

  • Variable allocation strategy:
    • The resident set is variable in size: when a process starts running, it is first allocated a certain number of physical pages according to the program's size, and the size of the resident set is then adjusted dynamically as the process runs.
    • Global page replacement can be used: when a page fault occurs, the replaced page may belong to another process, so the concurrent processes compete for the physical pages.
  • Advantages and disadvantages:

    • Better performance, but increased system overhead
  • Implementation:

    • The page fault frequency (PFF) algorithm can be used to adjust the size of the resident set dynamically
  • Page fault rate:

    • The page fault rate is "page faults / memory accesses" (a ratio), or equivalently the reciprocal of the mean time between page faults.
  • Factors that affect the page fault rate:

    • page replacement algorithm
    • Number of physical pages allocated to the process
    • the size of the page itself
    • How the program is written

Page fault frequency algorithm

  • If a running program's page fault rate is too high, enlarge its working set and allocate it more physical pages
  • If a running program's page fault rate is too low, shrink its working set and reduce its number of physical pages
  • Try to keep each running program's page fault rate within a reasonable range
  • This amounts to an alternative way of estimating the working set, adjusted explicitly to control page faults:
    • When the page fault rate is high – increase the working set
    • When the page fault rate is low – reduce the working set

algorithm:

  • Keep track of the time between page faults
    • When a fault occurs, record the current time t_current; t_last is the time of the previous page fault
  • If the time between page faults is "large", shrink the working set
    • If t_current - t_last > T, remove from memory all pages not referenced in the interval [t_last, t_current]
  • If the time between page faults is "small", grow the working set
    • If t_current - t_last <= T, add the faulting page to the working set (a sketch of this per-fault step follows)
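A sketch of the per-fault adjustment step described above; the calling convention, set representation, and names (referenced = pages used since the previous fault) are illustrative assumptions:

```python
def pff_on_fault(resident, faulting_page, t_current, t_last, referenced, T):
    """Adjust the resident set when a page fault occurs at time t_current."""
    if t_current - t_last > T:
        # Inter-fault time is large (fault rate is low): shrink the set,
        # keeping only pages referenced in [t_last, t_current].
        resident = resident & referenced
    # In both cases the faulting page is brought into the working set.
    return resident | {faulting_page}
```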

6.11 Thrashing

  • Definition:
    • If a process is allocated too few physical pages to hold its entire working set, i.e., the resident set is strictly contained in (⊂) the working set, the process will incur many page fault interrupts, and pages must be replaced frequently between memory and external storage, so the process runs very slowly. This state is called "thrashing".
  • The reason for the jitter:

    • As the number of processes resident in memory grows, the number of physical pages allocated to each process keeps shrinking and the page fault rate keeps rising. The operating system therefore has to choose an appropriate number of resident processes, and an appropriate number of frames per process, to strike a balance between the level of concurrency and the page fault rate.
  • Thrashing can be alleviated by local page replacement algorithms

  • A better criterion for load control: adjust the MPL (multiprogramming level) so that
    • MTBF, the mean time between page faults, is roughly equal to
    • PFST, the page fault service time
