"Out of Memory" does not mean physical memory

I started programming on x86 machines during a period of tremendous and rapid change in the memory management strategies of Intel processors. The pain of having to know the difference between "expanded memory" and "extended memory" has faded with time, and fortunately, so has my memory of the exact difference.
As a result of that early experience, I am occasionally surprised by this fact: many professional programmers seem to hold ideas about memory management that have not been true since before the 80286's protected mode.
For example, I occasionally get the question: "I'm getting an 'out of memory' error, but I checked, and the machine has plenty of memory. What's going on?"

Imagine thinking that the amount of memory in the machine has something to do with running out of memory! How charming!
The problem, I think, with most approaches to describing modern virtual memory management is that they assume the world began with DOS: that "memory" equals RAM, the "physical memory", and that "virtual memory" is just a clever trick to make the physical memory look bigger. Although historically that is how virtual memory evolved on Windows, and it is a reasonable way to approach it, it is not how I personally conceptualize virtual memory management.
So here is my conceptualization of virtual memory, quickly sketched. But first, a caveat. The modern Windows memory management system is far more complex and interesting than this sketch, whose purpose is to give a general feel for virtual memory management systems and to serve as a mental tool for thinking clearly about the relationship between storage and addressing. It is by no means an accurate picture of the real memory manager.
First, I will assume two concepts that you do not need explained: the operating system manages processes, and the operating system manages files on disk.
Each process can have as much data storage as it wants. It asks the operating system to create a certain amount of data storage for it, and the operating system does so.
Now, I'm sure the myths and preconceptions are starting to flood in. Surely a process cannot ask for "as much as it wants". Surely a 32-bit process can claim at most 2 GB. Or surely a 32-bit process can ask for only as much data storage as there is RAM. Neither of those assumptions is true. The amount of data storage reserved for a process is limited only by the amount of space the operating system can find on disk.
This is the key point: in my opinion, the data storage we call "process memory" is best visualized as a big file on disk.
So suppose a 32-bit process needs a lot of storage, and it asks for storage multiple times. Perhaps it needs a total of 5 GB. The operating system finds enough space for 5 GB in files on disk and tells the process that yes, the storage is available. How does the process then write to that storage? The process only has 32-bit pointers, but uniquely identifying each byte in 5 GB of storage requires at least 33 bits.
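To make that concrete, here is a minimal Win32 sketch in C of a 32-bit process asking for more storage than its pointers can span. Using a pagefile-backed CreateFileMapping is my illustration, not something from the article, and the call can fail if the system's commit limit (pagefile size) is smaller than the request:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* 5 GB as a 64-bit byte count, split into the high/low DWORDs
     * that CreateFileMapping expects. */
    unsigned long long size = 5ULL * 1024 * 1024 * 1024;

    /* INVALID_HANDLE_VALUE means "back this with the system paging
     * file" -- i.e., plain disk storage, not RAM. */
    HANDLE mapping = CreateFileMappingW(
        INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
        (DWORD)(size >> 32), (DWORD)(size & 0xFFFFFFFFu), NULL);

    if (mapping == NULL) {
        printf("CreateFileMapping failed: error %lu\n", GetLastError());
        return 1;
    }

    /* The 5 GB object now exists even though no 32-bit pointer can
     * name all of it: 2^32 bytes = 4 GB < 5 GB, so addressing every
     * byte would take at least 33 bits. */
    printf("Created a 5 GB storage object in a 32-bit process.\n");
    CloseHandle(mapping);
    return 0;
}
```

Creating the object succeeds (given enough disk) precisely because the storage lives on disk; the 32-bit pointer limit only matters once the process tries to address it.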
Solving that problem is where things start to get a little tricky.

The 5 GB of storage is divided into chunks, typically 4 KB each, called "pages". The operating system gives the process a 4 GB "virtual address space" of over a million pages, which can be addressed by a 32-bit pointer. The process then tells the operating system which pages of the 5 GB of disk storage should be "mapped" into the 32-bit address space.
Once the mapping is done, the operating system knows that when process #98 tries to use the pointer 0x12340000 in its address space, this corresponds to, say, the byte at the beginning of page 2477, and the operating system knows where that page is stored on disk. When that pointer is read from or written to, the operating system can figure out which byte of disk storage is referenced and perform the appropriate read or write operation.
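The bookkeeping is easy to sketch. Here is a small example, assuming the article's 4 KB pages, splitting a 32-bit address into a virtual page number and an offset within the page (the sample address is the one above; the resulting page numbers are illustrative):

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u   /* 4 KB pages: the low 12 bits are the offset */

int main(void)
{
    uint32_t address = 0x12340000u;
    uint32_t page    = address / PAGE_SIZE;  /* which page of address space */
    uint32_t offset  = address % PAGE_SIZE;  /* which byte within the page  */

    /* The OS keeps a table from virtual page number to a location in
     * the on-disk storage file; the offset carries over unchanged. */
    printf("address 0x%08X -> virtual page %u, offset %u\n",
           address, page, offset);
    return 0;
}
```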
"Out of memory" error almost never happens, because there is not enough available storage space; as we have seen, the storage space is disk space, and now a lot of disk. In contrast, the "Out of memory" error occurs, because the process can not find a large enough contiguous unused portion to perform the mapping request page in its virtual address space.
Half of the 4 GB address space (or, in some cases, a quarter) is reserved for the operating system to store process-specific data. Of the remaining "user" half of the address space, a large portion is taken up by the EXE and DLL files that make up the application's code. Even if there is enough space in total, there may be no unmapped "hole" in the address space large enough to meet the process's needs.
The process can deal with this situation by trying to identify portions of its virtual address space that no longer need to be mapped, "unmapping" them, and then mapping them to other pages of the storage file. If a 32-bit process is designed to handle massive multi-gigabyte data storage, that is obviously what it has to do. Typically such a program is doing video processing or something similar, and it can safely and easily remap big chunks of its address space to other parts of the "memory file".
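Here is a sketch of that "unmap, then remap" pattern, again using a pagefile-backed Win32 mapping as an illustrative stand-in for the process's storage file; the 64 MB window size is an arbitrary choice of mine:

```c
#include <windows.h>

int main(void)
{
    unsigned long long total = 5ULL * 1024 * 1024 * 1024; /* 5 GB store   */
    SIZE_T window = 64 * 1024 * 1024;                     /* 64 MB window */

    HANDLE mapping = CreateFileMappingW(
        INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
        (DWORD)(total >> 32), (DWORD)(total & 0xFFFFFFFFu), NULL);
    if (mapping == NULL) return 1;

    /* 5 GB is exactly 80 windows of 64 MB, so each step is full-size. */
    for (unsigned long long pos = 0; pos < total; pos += window) {
        /* Map only the current 64 MB of the 5 GB store into the
         * address space... */
        unsigned char *view = MapViewOfFile(
            mapping, FILE_MAP_WRITE,
            (DWORD)(pos >> 32), (DWORD)(pos & 0xFFFFFFFFu), window);
        if (view == NULL) break;

        view[0] = 42;          /* touch the storage through the window */

        /* ...then give that address space back before moving on. */
        UnmapViewOfFile(view);
    }
    CloseHandle(mapping);
    return 0;
}
```

Only 64 MB of address space is ever consumed, no matter how large the underlying store grows; that is the whole trick.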
But what if it isn't? What if the process is a more normal, well-behaved process that wants just a few hundred million bytes of storage? If such a process is ticking along normally and then tries to allocate some enormous string, the operating system will almost certainly be able to provide the disk space. But how does the process map that huge string's pages into its address space?
If it happens that there is not enough contiguous address space, the process cannot obtain a pointer to the data, and the data is effectively useless. In that case the process reports an "out of memory" error, which these days is a misnomer. It really should be a "cannot find enough contiguous address space" error; there is plenty of memory, because memory equals disk space.
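You can observe this fragmentation directly. The following sketch (my illustration, not from the article) walks the process's address space with VirtualQuery and reports the largest free contiguous hole, which is the number that actually decides whether a big mapping will succeed:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORY_BASIC_INFORMATION mbi;
    unsigned char *addr = NULL;
    SIZE_T largest = 0;

    /* VirtualQuery describes one region per call; step through the
     * whole user address space region by region. */
    while (VirtualQuery(addr, &mbi, sizeof(mbi)) == sizeof(mbi)) {
        if (mbi.State == MEM_FREE && mbi.RegionSize > largest)
            largest = mbi.RegionSize;   /* biggest unmapped "hole" so far */
        addr = (unsigned char *)mbi.BaseAddress + mbi.RegionSize;
    }
    printf("Largest free contiguous region: %llu bytes\n",
           (unsigned long long)largest);
    return 0;
}
```

In a fragmented 32-bit process this number can be far smaller than the total free address space, which is exactly the situation the error message is mislabeling.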

I have not yet mentioned RAM. RAM can be seen as merely a performance optimization. Accessing data in RAM, where the information is stored in electric fields that propagate at nearly the speed of light, is much faster than accessing data on disk, where the information is stored in huge, heavy ferrous metal molecules that move at close to the speed of a Miata.
The operating system keeps track of which pages of storage the process accesses most frequently, and copies them into RAM to speed things up. When the process accesses a pointer corresponding to a page not currently cached in RAM, the operating system takes a "page fault", goes out to the disk, and copies the page from disk into RAM, on the reasonable assumption that it is about to be accessed again soon.
The operating system is also very clever about sharing read-only resources. If two processes both load the same page of code from the same DLL, the operating system can share the RAM cache between the two processes. Since the code is presumably not going to be altered by either process, saving a duplicate page of RAM by sharing it is very wise.
But even with clever sharing, this caching system will eventually run out of RAM. When that happens, the operating system guesses which pages are least likely to be referenced again soon, writes them out to disk if they have changed, and frees up that RAM to read in content that is more likely to be accessed again soon.
When the operating system guesses wrong, or, more likely, when there is simply not enough RAM to hold all the frequently accessed pages of all the running processes, the machine starts to "thrash". The operating system spends all its time writing to and reading from the expensive disk storage, the disk runs constantly, and you get no work done.
This also means that running out of RAM seldom results in an "out of memory" error. Instead of an error, it results in bad performance, because the full cost of the fact that the storage is actually on disk suddenly becomes relevant.
Another way to look at this is that the total amount of virtual memory your program consumes really has very little to do with its performance. What matters is not the total virtual memory consumed, but rather (1) how much of that memory is not shared with other processes, (2) how big the "working set" of commonly used pages is, and (3) whether the working sets of all active processes exceed the available RAM.
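Those quantities can be inspected per process. Here is a minimal sketch using GetProcessMemoryInfo from psapi (link with psapi.lib or -lpsapi); reading the article's terms into these two particular counters is my gloss:

```c
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    PROCESS_MEMORY_COUNTERS pmc;

    if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc))) {
        /* PagefileUsage: committed storage, backed by the disk.  */
        /* WorkingSetSize: the pages currently cached in RAM.     */
        printf("Committed storage: %llu bytes\n",
               (unsigned long long)pmc.PagefileUsage);
        printf("Working set (RAM): %llu bytes\n",
               (unsigned long long)pmc.WorkingSetSize);
    }
    return 0;
}
```

A large committed-storage number with a modest working set is the healthy case the article describes; it is the working set, not the total, that competes for RAM.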
It should now be clear why an "out of memory" error usually has nothing to do with how much physical memory you have, or even with how much storage is available. It is almost always about the address space, which on 32-bit Windows is relatively small and easily fragmented.
Of course, many of these problems effectively go away on 64-bit Windows, where the address space is billions of times larger and therefore much harder to fragment. (Of course, if physical memory is smaller than the total working set, thrashing will happen no matter how large the address space is.)
This way of conceptualizing virtual memory is completely backwards from the usual conception. Usually storage is thought of as living in physical memory, with the contents of physical memory swapped out to disk when it gets too full. But I prefer to think of the storage as living on disk, with physical memory acting as a smart caching mechanism that makes the disk look faster. Maybe I'm crazy, but it helps me understand it better.

OK, I lied. 32-bit Windows limits a process's total disk-backed storage to 16 TB, and 64-bit Windows limits it to 256 TB. But if you have enough disk space, there is no reason a single process could not allocate multiple gigabytes of it.

A number of electrical engineers have pointed out to me that, of course, individual electrons do not move quickly at all; it is the electric field that moves so fast. I have updated the text; I hope you are satisfied now.

In some virtual memory systems, a page can be marked as "this page is so performance-critical that it must always remain in RAM". If there are more such pages than pages of available RAM, you could get an "out of memory" error from genuinely not having enough RAM. But that case is much rarer than running out of address space.
