Temporal locality of reference – here the least recently used (LRU) replacement algorithm is used. When a page fault occurs on a word, not just that word but the complete page is loaded into main memory, because the spatial locality of reference rule says that if a word is referenced, the words near it are likely to be referenced next; that is why the complete block is loaded.
Principle of locality of reference justifies the use of cache.
Step-2: CPU access time = (Hit ratio * cache access time) + (Miss ratio * (cache access time + main memory access time))
= (0.8*30) + (0.2*(30+150))
= 24 + 36 = 60 ns
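As a quick check, the single-level formula above can be sketched in Python (values taken from the example):

```python
# Average (CPU) access time for a single-level cache, hierarchical access:
# on a miss, the cache is probed first and then main memory is accessed.
def avg_access_time(hit_ratio, cache_time, memory_time):
    miss_ratio = 1 - hit_ratio
    return hit_ratio * cache_time + miss_ratio * (cache_time + memory_time)

# Values from the example: h = 0.8, cache = 30 ns, main memory = 150 ns
print(round(avg_access_time(0.8, 30, 150), 2))  # 60.0 ns
```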
exploit the temporal locality of reference in a program
exploit the spatial locality of reference in a program
reduce the miss penalty
none of these
To exploit spatial locality, more than one word is put into a cache block.
H1 = 0.8, (1 - H1) = 0.2
H2 = 0.9, (1 - H2) = 0.1
T1 = Access time for level 1 cache = 1ns
T2 = Access time for level 2 cache = 10ns
Hm = Hit rate of main memory = 1
Tm = Access time for main memory = 500ns
Average access time = [(0.8 * 1) + (0.2 * 0.9 * 10) + (0.2)(0.1) * 1 * 500]
= 0.8 + 1.8 + 10 = 12.6 ns
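The two-level computation above can be sketched in Python (values taken from the example):

```python
# Average access time for a two-level cache hierarchy (hierarchical access):
# each level is consulted only after all faster levels have missed.
def avg_access_time_2level(h1, t1, h2, t2, tm):
    return h1 * t1 + (1 - h1) * h2 * t2 + (1 - h1) * (1 - h2) * tm

# Values from the example: H1=0.8, T1=1 ns, H2=0.9, T2=10 ns, Tm=500 ns
print(round(avg_access_time_2level(0.8, 1, 0.9, 10, 500), 2))  # 12.6 ns
```

The same function covers the later worked example (H1=0.7, T1=0.5 ns, H2=0.8, T2=5 ns, Tm=100 ns), giving 7.55 ns.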
Let M be the memory access time and C be the cache access time. With a hit ratio of 0.8,
Speed Gain = M / (0.8C + 0.2M).
Note: the speed gain can be computed if we know how much faster the cache is compared with main memory.
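A minimal sketch of this speed-gain formula, with illustrative (assumed) numbers — a hit ratio of 0.8 and a cache ten times faster than memory:

```python
# Speed gain of a cached system over plain memory:
# average time with cache = h*C + (1 - h)*M, so gain = M / (h*C + (1 - h)*M).
def speed_gain(h, c, m):
    return m / (h * c + (1 - h) * m)

# Assumed values for illustration: h = 0.8, C = 10, M = 100
print(round(speed_gain(0.8, 10, 100), 2))  # 3.57
```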
Consider a system with 2 level cache. Access times of Level 1, Level 2 cache and main memory are 0.5 ns, 5 ns and 100 ns respectively. The hit rates of Level 1 and Level 2 caches are 0.7 and 0.8 respectively. What is the average access time of the system ignoring the search time within cache?
Average access time = 0.7(0.5) + 0.3(0.8)(5) + 0.3(0.2)(100)
Average access time = 7.55 ns
None of the above
● There are two basic types of reference locality – temporal and spatial locality.
● Temporal locality refers to the reuse of specific data, and/or resources, within a relatively small time duration.
● Spatial locality refers to the use of data elements within relatively close storage locations.
● Systems that exhibit strong locality of reference are great candidates for performance optimization through techniques such as caching, memory prefetching, and advanced branch predictors at the pipelining stage of a processor core.
→ Register memory is the smallest and fastest memory in a computer.
→ Register memory size is very small compared to cache memory.
A digital computer has a memory unit of 64K x 16 and a cache memory of 1K (2^10) words. The cache uses direct mapping with a block size of four words. How many bits are there in the tag, index and block fields of the address format?
1, 6, 16
6, 8, 2
Therefore Physical address = PA = 16 bits
No. of blocks in cache = cache size / block size = 2^10 / 2^2 = 2^8 = 256
∴ 8 bits for block
As block size = 4 words = 2^2 words
∴ 2 bits for offset
Now, tag = 16 - 8 - 2 = 6 bits
a) Tag = 6 bits, Index = block = 8 bits, offset = word = 2 bits
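The field-width calculation above can be sketched in Python (word-addressable memory, all sizes in words, as in the question):

```python
# Direct-mapped cache: split an address into tag, index (block) and offset
# (word) fields. All sizes must be powers of two.
def address_fields(addr_bits, cache_words, block_words):
    offset_bits = (block_words - 1).bit_length()                # word within a block
    index_bits = (cache_words // block_words - 1).bit_length()  # block number
    tag_bits = addr_bits - index_bits - offset_bits
    return tag_bits, index_bits, offset_bits

# 64K x 16 memory -> 16-bit address; 1K-word cache; 4-word blocks
print(address_fields(16, 1024, 4))  # (6, 8, 2)
```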
→ Locality is merely one type of predictable behavior that occurs in computer systems. Systems that exhibit strong locality of reference are great candidates for performance optimization through techniques such as caching, memory prefetching, and advanced branch predictors at the pipelining stage of a processor core.
(i) Cached memory is best suited for small loops.
(ii) Interleaved memory is best suited for small loops
(iii) Interleaved memory is best suited for large sequential code.
(iv) Cached memory is best suited for large sequential code.
(i) and (ii) are true.
(i) and (iii) are true.
(iv) and (ii) are true.
(iv) and (iii) are true.
That way, contiguous memory reads and writes use each memory bank in turn, resulting in higher memory throughput because there is less waiting for memory banks to become ready for the desired operations.
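The bank-rotation idea can be sketched with low-order interleaving, where consecutive addresses map to consecutive banks:

```python
# Low-order memory interleaving: consecutive addresses fall in different
# banks, so a sequential access stream keeps all banks busy in turn.
def bank_of(address, num_banks):
    # address % num_banks selects the bank;
    # address // num_banks is the row within that bank.
    return address % num_banks

# With 4 banks, addresses 0..7 cycle through banks 0,1,2,3,0,1,2,3
print([bank_of(a, 4) for a in range(8)])  # [0, 1, 2, 3, 0, 1, 2, 3]
```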
→ A CPU cache is a hardware cache used by the central processing unit (CPU) of a computer to reduce the average cost (time or energy) to access data from the main memory.
→ A cache is a smaller, faster memory, closer to a processor core, which stores copies of the data from frequently used main memory locations.
→ Loops consist of frequently used variables.
improve disk performance
increase the capacity of main memory
speed up main memory Read operations
h + 10 (1 - h)
(1 - h) + 10 h
Mean time = (h*1) + (1-h)*10 = h + 10(1-h)
Option (C) is the more appropriate answer.
In write-through, the new cache contents are written to main memory immediately after the write to the cache memory.
In write-back, the new cache contents are not written to main memory immediately after the change, but only when the given block of data is replaced by a new block fetched from main memory or an upper-level cache. After a data write to the cache, only the state bits in the modified block are changed, indicating that the block has been modified (a dirty block).
Write-back updating is more time-efficient, since block cells that were modified many times while in the cache are updated in main memory only once.
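The contrast between the two policies can be sketched with a toy model (the class, its fields, and the single-block simplification are illustrative assumptions, not a real cache design):

```python
# Toy sketch contrasting write policies; tracks how many writes reach memory.
class Cache:
    def __init__(self, memory, write_back=True):
        self.memory = memory      # backing store: dict addr -> value
        self.block = {}           # cached copies
        self.dirty = set()        # dirty addresses (write-back only)
        self.write_back = write_back
        self.memory_writes = 0    # count of writes reaching main memory

    def write(self, addr, value):
        self.block[addr] = value
        if self.write_back:
            self.dirty.add(addr)          # defer memory update, mark dirty
        else:
            self.memory[addr] = value     # write-through: update memory now
            self.memory_writes += 1

    def evict(self, addr):
        if self.write_back and addr in self.dirty:
            self.memory[addr] = self.block[addr]  # one write at replacement
            self.memory_writes += 1
            self.dirty.discard(addr)
        self.block.pop(addr, None)

mem = {}
wb = Cache(mem, write_back=True)
for v in range(5):
    wb.write(0, v)   # five writes to the same address...
wb.evict(0)
print(wb.memory_writes)  # 1: write-back reaches memory once, on eviction
```

With `write_back=False`, the same five writes would reach main memory five times, which is the efficiency difference described above.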