Computer Systems: A Programmer's Perspective (second pass), Chapter 9: Virtual Memory, Part 1

 

This session took 4 hours and 40 minutes; read pages 559 ~ 575, 17 pages in total.

Notes from the first pass: https://www.cnblogs.com/stone94/p/10264044.html

 

Note: the exercises in this chapter must be done, and done promptly, checking the answers right away. This helps a great deal in gauging how well you understood what you just read; otherwise, as you keep reading, the confusion only keeps accumulating.
 
 
Key terms and their English abbreviations:
Memorizing these helps a lot while reading this chapter; alternatively, treat this list as a dictionary and glance back here whenever you forget what an abbreviation stands for.
VM: virtual memory
PA: physical address
VA: virtual address
MMU: memory management unit (in this chapter, essentially the address translation hardware)
VP: virtual page
VPO: virtual page offset (in bytes)
VPN: virtual page number
TLBI: TLB index
TLBT: TLB tag
PTE: page table entry
PP: physical page
PPO: physical page offset (in bytes)
CO: byte offset within the cache block
CI: cache index
CT: cache tag

 

 

What is virtual memory? What is its role?
To manage memory more efficiently and with fewer errors, modern systems provide an abstraction of main memory called virtual memory (VM).
Virtual memory is an elegant interaction of hardware exceptions, hardware address translation, main memory, disk files, and kernel software, and it provides each process with a large, uniform, and private address space. With one clean mechanism, virtual memory provides three important capabilities: (1) it uses main memory efficiently by treating it as a cache for an address space stored on disk, keeping only the active areas in main memory and transferring data back and forth between disk and main memory as needed; (2) it provides each process with a uniform address space, which simplifies memory management; (3) it protects the address space of each process from being corrupted by other processes.
 
 
Physical addressing
Virtual addressing
In Figure 9-2, the work of translating a virtual address (VA) into a physical address (PA) is handled by the memory management unit (MMU).
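To make the split concrete, here is a minimal sketch in C, assuming 32-bit addresses and 4 KB pages (the helper names are made up for illustration): the page offset (VPO) passes through translation unchanged, and only the virtual page number (VPN) is replaced by a physical page number (PPN).

```c
#include <stdint.h>

#define PAGE_SHIFT 12u   /* assumption: 4 KB pages, so 12 offset bits */

/* Illustrative helpers (hypothetical names), assuming 32-bit addresses. */
static inline uint32_t vpn_of(uint32_t va) { return va >> PAGE_SHIFT; }
static inline uint32_t vpo_of(uint32_t va) { return va & ((1u << PAGE_SHIFT) - 1); }

/* The PA keeps the page offset unchanged and swaps the VPN for a PPN
 * (how the VPN -> PPN mapping is found is the subject of the next section). */
static inline uint32_t make_pa(uint32_t ppn, uint32_t va)
{
    return (ppn << PAGE_SHIFT) | vpo_of(va);
}
```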
 
 
Page table
The virtual memory system must have some way to determine whether a virtual page is cached somewhere in DRAM. If so, the system must also determine which physical page it is stored in. If there is a miss, the system must determine where on disk the virtual page is stored, select a victim page in physical memory, copy the virtual page from disk into DRAM, and replace the victim page.
These capabilities are provided by a combination of hardware and software: the operating system, the address translation hardware in the MMU (memory management unit), and a data structure stored in physical memory called the page table, which maps virtual pages to physical pages. The address translation hardware reads the page table every time it converts a virtual address into a physical address. The operating system maintains the contents of the page table and transfers pages back and forth between disk and DRAM.
A page table is an array of page table entries (Page Table Entry, PTE).
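To make this concrete, here is a minimal sketch in C of what a PTE and a page table might look like. The field names and widths are illustrative assumptions (a real PTE also carries permission bits and other information), using the chapter's running example of a 32-bit address space with 4 KB pages.

```c
#include <stdint.h>

#define PAGE_SIZE  4096          /* 4 KB pages, as in the running example     */
#define NUM_VPAGES (1u << 20)    /* 2^20 virtual pages in a 32-bit space      */

/* A hypothetical 4-byte page table entry: one valid bit plus the
 * physical page number (real PTEs also carry protection bits, etc.). */
typedef struct {
    uint32_t valid : 1;   /* 1 = the virtual page is cached in DRAM           */
    uint32_t ppn   : 31;  /* physical page number (or disk location if !valid) */
} pte_t;

/* The page table itself is just an array of PTEs, indexed by the VPN. */
static pte_t page_table[NUM_VPAGES];
```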

 

 

 
Page Hit
When data is requested and it is already cached in physical memory (a cache hit), the corresponding data is returned directly via the virtual address.
Page fault
When data is requested and it is not cached in physical memory (a cache miss), the address translation hardware triggers a page fault exception. The page fault exception invokes the page fault handler in the kernel, which selects a victim page (if the victim page has been modified, it is first written back to disk so that the changes persist), copies the requested page from disk into the physical location previously occupied by the victim page, and then returns from the exception. Returning restarts the instruction that caused the page fault; that instruction resends the faulting virtual address to the address translation hardware, which this time necessarily results in a page hit, and the corresponding data is returned to the CPU.
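The control flow above can be sketched roughly in C. Every helper below (select_victim_page, is_dirty, write_page_to_disk, read_page_from_disk) is a hypothetical placeholder for work the kernel and hardware actually do, and pte_t is the illustrative structure from the earlier sketch; the point is only the order of the steps.

```c
/* Hypothetical helpers: placeholders for kernel/hardware work. */
int  select_victim_page(void);
int  is_dirty(int pp);
void write_page_to_disk(int pp);
void read_page_from_disk(pte_t *pte, int pp);

/* Sketch of a page fault handler for the faulting page's PTE. */
void page_fault_handler(pte_t *pte)
{
    int victim_pp = select_victim_page();     /* choose a physical page      */

    if (is_dirty(victim_pp))                  /* victim was modified?        */
        write_page_to_disk(victim_pp);        /* write it back to disk first */

    read_page_from_disk(pte, victim_pp);      /* copy the missing page in    */

    pte->valid = 1;                           /* now cached in DRAM          */
    pte->ppn   = victim_pp;

    /* Returning from the exception restarts the faulting instruction,
     * which now results in a page hit. */
}
```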
 
 
Caching PTEs in the cache
Although the page table is a tool for managing the use of virtual memory, the page table itself is also just data, and it is natural to ask how to cache that data in order to speed up access to it.

 

Figure 9-14:

The bumpiest path (the PTE misses in the cache, then the data misses in the cache):
VA → MMU → PTEA → PTEA miss → PTEA → memory → PTE → MMU → PA → PA miss → PA → memory → data → processor
The smoothest path (both hit in the cache):
VA → MMU → PTEA → PTEA hit → PTE → MMU → PA → PA hit → data → processor
 
 
Caching PTEs in the TLB
Speeding up address translation with a TLB
As Figure 9-14 shows, every time the CPU generates a virtual address, the MMU must consult a PTE in order to translate the virtual address into a physical address. In the worst case, this requires an extra fetch from memory, at a cost of tens to hundreds of cycles. If the PTE happens to be cached in L1, the overhead drops to one or two cycles. However, many systems try to eliminate even this cost: they include in the MMU a small cache of PTEs called a translation lookaside buffer (TLB).
The TLB is a small, virtually addressed cache in which each line holds a block consisting of a single PTE.
When there is a TLB hit, all of the address translation steps are performed inside the on-chip MMU, and are therefore very fast.
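As an illustration of how the VPN is used to address the TLB: with 2^t TLB sets, the low t bits of the VPN form the TLB index (TLBI) and the remaining high bits form the TLB tag (TLBT). The sketch below assumes a direct-mapped TLB with 16 sets purely for illustration; it is not any particular CPU's TLB.

```c
#include <stdint.h>

#define PAGE_SHIFT   12u                  /* 4 KB pages                     */
#define TLB_SET_BITS 4u                   /* assume 2^4 = 16 TLB sets       */
#define TLB_SETS     (1u << TLB_SET_BITS)

typedef struct {
    uint32_t valid;
    uint32_t tlbt;                        /* tag: high bits of the VPN      */
    uint32_t ppn;                         /* cached translation (from PTE)  */
} tlb_entry_t;

static tlb_entry_t tlb[TLB_SETS];         /* direct-mapped for simplicity   */

/* Returns 1 and fills *ppn on a TLB hit, 0 on a TLB miss. */
static int tlb_lookup(uint32_t va, uint32_t *ppn)
{
    uint32_t vpn  = va >> PAGE_SHIFT;
    uint32_t tlbi = vpn & (TLB_SETS - 1); /* TLBI: low t bits of the VPN    */
    uint32_t tlbt = vpn >> TLB_SET_BITS;  /* TLBT: remaining high bits      */

    if (tlb[tlbi].valid && tlb[tlbi].tlbt == tlbt) {
        *ppn = tlb[tlbi].ppn;             /* hit: translation stays on chip */
        return 1;
    }
    return 0;                             /* miss: must fetch the PTE       */
}
```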

 

 
Multi-level page tables
In my view, the core idea of multi-level page tables is to trade time for space. That is, a multi-level page table actually runs slower than a single-level page table, but it relieves the pressure on memory.
Regarding space, see the beginning of this subsection (9.6.3 Multi-level Page Tables):
"So far we have assumed that the system uses a single page table for address translation. But if we have a 32-bit address space, 4 KB pages, and 4-byte PTEs, then we always need a 4 MB page table resident in memory, even if the application references only a small portion of the virtual address space. The problem gets even worse for systems with 64-bit address spaces."
Let's work through the numbers (a small C sanity check of the same arithmetic follows this list):
A 32-bit address space means there are 2^32 virtual addresses in total.
Each page is 4 KB, i.e. 2^12 bytes.
So the virtual address space contains 2^32 / 2^12 = 2^20 pages.
Each page needs a corresponding page table entry (PTE), so there are also 2^20 PTEs.
Each PTE is 4 bytes, i.e. 2^2 bytes, so 2^20 PTEs take 2^22 bytes in total, i.e. 4 MB.
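As mentioned above, here is the same calculation as a tiny C program:

```c
#include <stdio.h>

int main(void)
{
    unsigned long long addr_space = 1ULL << 32;  /* 2^32 virtual addresses */
    unsigned long long page_size  = 1ULL << 12;  /* 4 KB pages             */
    unsigned long long pte_size   = 4;           /* 4-byte PTEs            */

    unsigned long long num_pages  = addr_space / page_size;  /* 2^20        */
    unsigned long long table_size = num_pages * pte_size;    /* 2^22 bytes  */

    /* Prints: pages: 1048576, page table size: 4 MB */
    printf("pages: %llu, page table size: %llu MB\n",
           num_pages, table_size >> 20);
    return 0;
}
```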
Regarding time, see these two paragraphs from the same subsection:
"Each PTE in a second-level page table is responsible for mapping a 4 KB page of virtual memory, just as before when we looked at the single-level page table. Note that with 4-byte PTEs, each first-level and second-level page table is exactly 4 KB, which happens to be the same size as a page.
This scheme reduces the memory requirements in two ways. First, if a PTE in the first-level page table is null, then the corresponding second-level page table does not even have to exist. This represents a significant potential saving, since for a typical program most of the 4 GB virtual address space will be unallocated. Second, only the first-level page table needs to be resident in main memory at all times; the virtual memory system can create, page in, or page out second-level page tables as they are needed, which reduces the pressure on main memory; only the most heavily used second-level page tables need to be cached in main memory."
 

 

Origin: www.cnblogs.com/stone94/p/11872714.html