Computers use several types of memory. Some memory stores data permanently, such as a hard disk or SSD. Other memory holds data only temporarily, for a certain time or session; this is RAM (Random Access Memory). Then there is cache memory: some of it is external, and some sits inside the CPU itself. Cache memory is the fastest memory of all.
Cache memory is a very high-speed memory, used so that memory can keep up with a fast CPU. It costs much more per byte than main memory or disk storage, but less than CPU registers. A cache is a very fast memory that acts as a buffer between RAM and the CPU. Recently used data is kept in the cache so that it is immediately available to the CPU if it is needed again.
Cache memory is used to reduce the time it takes to access data in main memory. A cache is a small, fast memory that holds copies of data from frequently used main-memory locations. A CPU typically has several independent caches, which store instructions and data separately.
Types of cache memory
Cache memory is very fast and very expensive. It is classified into levels that describe its proximity and accessibility to the microprocessor. The cache is divided into three common levels.
1. Level 1 (L1) cache
It is the fastest of all cache memories and the smallest in size. It is built into the CPU itself and holds the data and instructions the processor needs immediately.
2. Level 2 (L2) cache
This cache is slower than L1 but faster than L3. It is larger than L1 in size and has more capacity, so it can store more data than L1.
3. Level 3 (L3) cache
This cache is slower than both L1 and L2, making it the slowest cache level, though it is still roughly twice as fast as DRAM. L3 is the largest level, with more capacity than both the L1 and L2 caches. In multicore processors, each core usually has its own dedicated L1 and L2 cache, while all cores share a single L3 cache. When an instruction found in the L3 cache is referenced, it is typically promoted to a higher level of cache.
In the past, L1, L2, and L3 caches were implemented using a combination of processor and motherboard components. Recently, the trend has been toward consolidating all three levels of cache on the CPU itself. That is why the primary means of increasing cache size has shifted from buying a specific motherboard with a particular chipset and bus architecture to buying a CPU with the right amount of integrated L1, L2, and L3 cache.
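On Linux, you can inspect the cache levels described above from userspace. A quick sketch (the sysfs paths below are standard on Linux, but may be absent in some virtualized or containerized environments, hence the guard):

```shell
# List level, type, and size of each cache on CPU 0 via Linux sysfs.
for d in /sys/devices/system/cpu/cpu0/cache/index*; do
  [ -f "$d/size" ] || continue
  printf '%s %s %s\n' "$(cat "$d/level")" "$(cat "$d/type")" "$(cat "$d/size")"
done
```

Running `lscpu` also prints a one-line summary of the L1d/L1i/L2/L3 sizes.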
Cache memory mapping
Three different types of mapping are used for cache memory, as follows:
- Direct Mapping
- Associative Mapping
- Set-associative mapping
1. Direct mapping
The simplest technique, called direct mapping, maps each block of main memory to only one possible cache line.
In direct mapping, every memory block is assigned to a particular line in the cache. If that line is already occupied by a memory block when a new block must be loaded, the old block is evicted. The address is split into two parts: an index field and a tag field. The cache stores the tag field, while the data itself resides in main memory. Direct mapping's performance is directly proportional to the hit ratio.
For cache access, each main-memory address can be viewed as consisting of three fields. The least significant w bits identify a unique word or byte within a block of main memory; in most modern machines, addresses are at the byte level. The remaining s bits specify one of the 2^s blocks of main memory. The cache logic interprets these s bits as a tag of s − r bits (the most significant portion) and a line field of r bits. This latter field identifies one of the m = 2^r lines of the cache.
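The field split above can be sketched in a few lines of code. The sizes here are illustrative assumptions (16-byte blocks, so w = 4, and a 256-line cache, so r = 8), not values from the text:

```python
W_BITS = 4   # word offset within a block (block size = 2**4 = 16 bytes)
R_BITS = 8   # line field (cache has 2**8 = 256 lines)

def split_address(addr: int):
    """Split a main-memory address into (tag, line, word) fields."""
    word = addr & ((1 << W_BITS) - 1)
    line = (addr >> W_BITS) & ((1 << R_BITS) - 1)
    tag = addr >> (W_BITS + R_BITS)
    return tag, line, word

# Two addresses whose block numbers differ by a multiple of 2**R_BITS
# map to the same cache line, so they would evict each other:
print(split_address(0x12345))                             # tag 0x12, line 0x34, word 0x5
print(split_address(0x12345 + (1 << (W_BITS + R_BITS))))  # tag 0x13, same line and word
```

This is why direct mapping can thrash: addresses that share a line field compete for a single cache line even when the rest of the cache is empty.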
2. Associative Mapping
In this type of mapping, an associative memory is used to store both the content and the address of each memory word. Any block can go into any line of the cache. This means the word-ID bits are used to identify which word in the block is needed, while the tag becomes all of the remaining bits. This permits any word to be placed anywhere in the cache memory, making it the fastest and most flexible mapping type.
3. Set-associative mapping
This type of mapping is an enhanced form of direct mapping that removes its drawbacks. Set-associative mapping addresses the problem of potential thrashing in the direct-mapping method. It does this by grouping a few lines together to form a set, so that instead of having exactly one line a block can map to, the block can go into any line of its set.
A block in memory can then map to any one of the lines of a particular set. Set-associative mapping permits each index address in the cache to correspond to two or more words in main memory. It combines the best of the direct and associative cache-mapping techniques.
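The three mapping schemes can be unified in one small simulator. This is a minimal sketch with LRU replacement and made-up parameters (4 sets, 2 ways, 16-byte blocks); with `ways=1` it degenerates to direct mapping, and with `num_sets=1` to fully associative mapping:

```python
from collections import OrderedDict

class SetAssociativeCache:
    def __init__(self, num_sets=4, ways=2, block_size=16):
        self.num_sets, self.ways, self.block_size = num_sets, ways, block_size
        self.sets = [OrderedDict() for _ in range(num_sets)]  # tag -> True, LRU order
        self.hits = self.misses = 0

    def access(self, addr):
        block = addr // self.block_size
        idx, tag = block % self.num_sets, block // self.num_sets
        lines = self.sets[idx]
        if tag in lines:
            lines.move_to_end(tag)      # refresh LRU order
            self.hits += 1
            return "hit"
        if len(lines) >= self.ways:
            lines.popitem(last=False)   # evict least recently used line
        lines[tag] = True
        self.misses += 1
        return "miss"

cache = SetAssociativeCache()
for a in [0, 64, 0, 64, 128, 0]:      # addresses 0, 64, 128 all map to set 0
    print(hex(a), cache.access(a))
```

With `ways=1` the same access pattern would miss every time (thrashing); the two-way set absorbs it until a third conflicting block arrives.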
Application of Cache Memory
- Typically, the cache memory can store a reasonable number of blocks at any given time, but this number is significantly less than the total number of blocks present in the main memory.
- The correspondence between the main memory blocks and those in the cache is specified by a mapping function.
Cache memory is vital because it improves the efficiency of data retrieval. It stores program instructions and data that are used repeatedly in the operation of programs, or that the CPU is likely to need next. The processor can access this information more quickly from the cache than from main memory, and fast access to these instructions increases the overall speed of a program.
Aside from its main function of improving performance, cache memory is a valuable resource for evaluating a computer's overall performance. Users can do this by looking at the cache's hit-to-miss ratio. A cache hit is an instance in which the system successfully retrieves data from the cache; a cache miss is when the system looks for the data in the cache, cannot find it, and must look elsewhere instead. In some cases, users can improve the hit-to-miss ratio by adjusting the cache block size, i.e. the size of the data units stored.
Improved performance and the ability to monitor performance are not just about general convenience for the user. As technology advances and is increasingly relied upon in mission-critical scenarios, speed and reliability become crucial. Even a few milliseconds of latency could lead to enormous expenses, depending on the situation.
Cache vs main memory
DRAM acts as the computer's main memory, holding data received from storage for the CPU to work on. DRAM and cache are both volatile memory, meaning data is retained only while power is supplied; as soon as the power is cut, all the data in them is lost. DRAM is installed on the motherboard, and the CPU accesses it via a bus connection.
DRAM is typically about half the speed of L1, L2, and L3 cache memory, and is also less expensive. It accesses data faster than flash storage, hard disk drives (HDDs), and tape storage. Over the past few decades, DRAM has been used to store frequently accessed disk data in order to improve I/O performance.
DRAM needs to be refreshed every few milliseconds, while cache memory (a type of static RAM) does not need to be refreshed. It is built directly into the CPU to give the processor the fastest possible access to memory locations, with nanosecond access times to frequently referenced instructions and data.
SRAM is faster than DRAM, but because it is a more complex chip, it is also more expensive to build.
Cache vs virtual memory
A computer has a limited amount of DRAM and even less cache memory. When a large program or several programs are running simultaneously, it is possible to fully use memory. To compensate for the lack of physical memory, the operating system (OS) of a computer may create virtual memory.
To do this, the operating system temporarily transfers inactive data from DRAM to disk storage. This approach increases the virtual address space by using active memory in DRAM and passive memory in HDDs to create contiguous addresses that hold both an application and its data. Virtual memory allows a computer to run large programs or more than one program simultaneously, and each program functions as if it has unlimited memory.
To map virtual memory to physical memory, the OS divides memory into pages, each consisting of a fixed number of addresses, and keeps inactive pages in a page file or swap file on disk. When a page is needed, the OS copies it from disk to main memory and translates its virtual address into a physical one. These translations are handled by a memory management unit (MMU).
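The translation step the MMU performs can be sketched with a toy page table. The 4 KiB page size and the table contents are illustrative assumptions:

```python
PAGE_SIZE = 4096  # assumed 4 KiB pages

# Toy page table: virtual page number -> physical frame number.
page_table = {0: 5, 1: 2, 2: 9}

def translate(vaddr: int) -> int:
    """Translate a virtual address to a physical address, as an MMU would."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in page_table:
        # Page fault: the OS would now copy the page from disk into DRAM
        # and update the page table before retrying the access.
        raise LookupError(f"page fault on virtual page {vpn}")
    return page_table[vpn] * PAGE_SIZE + offset

print(hex(translate(0x1234)))   # virtual page 1 -> physical frame 2 → 0x2234
```

Note that only the page number is translated; the offset within the page passes through unchanged.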
Thanks for reading this post. If any of the information in this post is incorrect or missing, please comment in the comment box.
If you have any doubts or questions about “What is Cache Memory?”, please comment in the comment box.