Static memory
D-type flip-flop
Static memory cell
- Write
  - Drive the bit lines to the value being written (bit = 1, /bit = 0, or the reverse)
  - Assert the word line to gate the cell open; the bit lines force the flip-flop into the new state
- Read
  - Precharge both bit lines (bit and /bit) to the high level Vdd
  - Assert the word line to gate the cell open
  - The state of the flip-flop pulls one of the bit lines low
  - The sense amplifier detects the change on the bit lines and outputs the value read from memory
Static memory is typically organized as a random-access memory.
Static memory
- Fast
- Low storage density per unit area, hence smaller memory capacity
- Data in/out share common pins
- High power consumption
- High price
Compared with dynamic memory
| | SRAM | DRAM |
|---|---|---|
| Information stored in | flip-flop | capacitor |
| Destructive readout | no | yes |
| Needs refresh | no | yes |
| Row/column address | sent simultaneously | sent in two steps |
| Access speed | fast | slow |
| Integration density | low | high |
| Heat dissipation | large | small |
| Storage cost | high | low |
Program locality principle
for (i = 0; i < 1000; i++) {
    for (j = 0; j < 1000; j++) {
        a[i] = b[i] + c[i];
    }
}
if (err) { ... }
else {
    for (i = 0; i < 1000; i++) {
        for (j = 0; j < 1000; j++) {
            e[i] = d[i] + a[i];
        }
    }
}
- Locality of data accesses
- Locality of instruction accesses
- Different programs may access different segments of the memory space.
- In any given period of time, a program usually accesses only a small portion of its address space.
Two kinds of locality: temporal locality and spatial locality.
Hierarchical memory system
- Use a high-speed cache memory to improve the average speed of CPU accesses to memory.
- Temporal locality: recently accessed information is likely to be accessed again. So load recently accessed items into the cache.
- Spatial locality: information near recently accessed information is also likely to be accessed. So load the information adjacent to recently accessed items into the cache along with them.
Cache memory
- Definition
  - A memory placed between the CPU and main memory, implemented with high-speed static memory, which caches the information the CPU accesses most frequently.
- Features
  - High speed: roughly matches the speed of the CPU
  - Transparent: managed entirely by hardware, transparent to the programmer
Problems to solve
- Address mapping: how to locate data in the cache given a memory address
- Data consistency: whether the cache contents for an address agree with the corresponding main-memory contents
- Granularity of data exchange: at what granularity the cache and main memory exchange contents
- Loading and replacement policy: how to improve the cache hit rate