Cache

Question 1
The principle of locality of reference justifies the use of
A
virtual memory
B
interrupts
C
main memory
D
cache memory
       Computer-Organization       Cache       ISRO-2007
Question 1 Explanation: 
Spatial locality of reference – if a memory location is referenced, locations close to it are likely to be referenced soon. This is why a whole block (several neighbouring words) is brought into the cache on a miss, not just the requested word.
Temporal locality of reference – a recently referenced location is likely to be referenced again in the near future, which is why recently used blocks are kept in the cache (and why replacement policies such as least recently used work well).
Because programs exhibit both kinds of locality, a small fast memory holding recently used blocks can serve most references. The principle of locality of reference therefore justifies the use of cache memory.
Question 2
A cache memory has an access time of 30 ns and main memory an access time of 150 ns. What is the average memory access time of the CPU (assume a hit ratio of 80%)?
A
60
B
30
C
150
D
70
       Computer-Organization       Cache       ISRO-2017 May
Question 2 Explanation: 
Step-1: Hit ratio = 80% = 0.8, so miss ratio = 20% = 0.2
Cache access time = 30 ns, main memory access time = 150 ns
Step-2: Average access time = (hit ratio × cache access time) + (miss ratio × (cache access time + main memory access time))
= (0.8 × 30) + (0.2 × (30 + 150))
= 24 + 36
= 60 ns
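As a quick check, the same arithmetic can be written in a few lines of Python (the function name and the assumption that a miss pays the cache lookup plus the main-memory access are mine):

# Average memory access time with a single cache level,
# assuming a miss pays the cache lookup plus the main-memory access.
def avg_access_time(hit_ratio, cache_ns, memory_ns):
    miss_ratio = 1 - hit_ratio
    return hit_ratio * cache_ns + miss_ratio * (cache_ns + memory_ns)

print(avg_access_time(0.8, 30, 150))  # 60.0 ns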
Question 3
More than one word are put in one cache block to
A
exploit the temporal locality of reference in a program
B
exploit the spatial locality of reference in a program
C
reduce the miss penalty
D
none of these
       Computer-Organization       Cache       ISRO CS 2008
Question 3 Explanation: 
Temporal locality refers to the reuse of specific data and/or resources within relatively small time durations. Spatial locality refers to the use of data elements within relatively close storage locations.
To exploit spatial locality, more than one word is placed in each cache block, so that fetching one word also brings its neighbouring words into the cache.
Question 4
Consider a system with 2 level cache. Access times of Level 1 cache, Level 2 cache and main memory are 1 ns, 10 ns, and 500 ns, respectively. The hit rates of Level 1 and Level 2 caches are 0.8 and 0.9, respectively. What is the average access time of the system ignoring the search time within the cache?
A
13.0
B
12.8
C
12.6
D
12.4
       Computer-Organization       Cache       ISRO-2016
Question 4 Explanation: 
Average access time = H1 × T1 + (1 - H1) × H2 × T2 + (1 - H1) × (1 - H2) × Tm
H1 = 0.8, so (1 - H1) = 0.2
H2 = 0.9, so (1 - H2) = 0.1
T1 = access time of the level-1 cache = 1 ns
T2 = access time of the level-2 cache = 10 ns
Tm = access time of main memory = 500 ns
Average access time = (0.8 × 1) + (0.2 × 0.9 × 10) + (0.2 × 0.1 × 500)
= 0.8 + 1.8 + 10
= 12.6 ns
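The same numbers can be checked with a short Python sketch (the function name is mine; it follows the formula above, in which the search time within the caches is ignored):

# Two-level cache, search time within the caches ignored:
# an access is served by L1, else by L2, else by main memory.
def avg_access_time_2level(h1, t1, h2, t2, tm):
    return h1 * t1 + (1 - h1) * h2 * t2 + (1 - h1) * (1 - h2) * tm

print(avg_access_time_2level(0.8, 1, 0.9, 10, 500))  # 12.6 ns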
Question 5
How much speed do we gain by using the cache, when cache is used 80% of the time? Assume cache is faster than main memory
A
5.27
B
2.00
C
4.16
D
6.09
       Computer-Organization       Cache       ISRO CS 2013
Question 5 Explanation: 
Speed gain = (memory access time without cache) / (memory access time with cache).
Let M be the main-memory access time and C the cache access time. With the cache serving 80% of accesses,
Speed gain = M / (0.8 × C + 0.2 × M).
The question as printed does not say how much faster the cache is. If the cache is taken to be 20 times faster than main memory (C = M/20), the speed gain is 1 / (0.8/20 + 0.2) = 1/0.24 ≈ 4.16, which matches option C.
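A small sketch makes the dependence on the cache/memory speed ratio explicit (the 20× figure is an assumption used only to reproduce option C, since the question as printed omits it):

# Speed gain from a cache used for 80% of accesses.
# ratio = how many times faster the cache is than main memory (assumed).
def speed_gain(cache_usage, ratio):
    m = 1.0        # main-memory access time (arbitrary unit)
    c = m / ratio  # cache access time
    return m / (cache_usage * c + (1 - cache_usage) * m)

print(round(speed_gain(0.8, 20), 2))  # 4.17, i.e. option C (4.16)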
Question 6

Consider a system with 2 level cache. Access times of Level 1, Level 2 cache and main memory are 0.5 ns, 5 ns and 100 ns respectively. The hit rates of Level 1 and Level 2 caches are 0.7 and 0.8 respectively. What is the average access time of the system ignoring the search time within cache?

A
20.75 ns
B
7.55 ns
C
24.35 ns
D
35.20 ns
       Computer-Organization       Cache       UGC-NET DEC Paper-2
Question 6 Explanation: 
Average access time = (L1 hit rate × L1 access time) + (L1 miss rate × L2 hit rate × L2 access time) + (L1 miss rate × L2 miss rate × main memory access time)
= (0.7 × 0.5) + (0.3 × 0.8 × 5) + (0.3 × 0.2 × 100)
= 0.35 + 1.2 + 6
= 7.55 ns
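The same pattern extends to any number of cache levels; the helper below is my own generalisation, not part of the question:

# Average access time for an arbitrary cache hierarchy.
# levels: list of (hit_rate, access_time) pairs, fastest level first.
def avg_access_time(levels, memory_time):
    total, p_miss = 0.0, 1.0
    for hit_rate, access_time in levels:
        total += p_miss * hit_rate * access_time
        p_miss *= (1 - hit_rate)
    return total + p_miss * memory_time

print(avg_access_time([(0.7, 0.5), (0.8, 5)], 100))  # 7.55 ns
print(avg_access_time([(0.8, 1), (0.9, 10)], 500))   # 12.6 ns (Question 4)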
Question 7
The principle of locality of reference justifies the use of:
A
Non reusable
B
Cache memory
C
Virtual memory
D
None of the above
       Computer-Organization       Cache       Nielit Scientist-B CS 4-12-2016
Question 7 Explanation: 
● Locality of reference, also known as the principle of locality is the tendency of a processor to access the same set of memory locations repetitively over a short period of time.
● There are two basic types of reference locality – temporal and spatial locality.
● Temporal locality refers to the reuse of specific data, and/or resources, within a relatively small time duration.
● Spatial locality refers to the use of data elements within relatively close storage locations.
● Systems that exhibit strong locality of reference are good candidates for performance optimization through techniques such as caching, memory prefetching, and advanced branch predictors in the pipeline of a processor core.
Question 8
_____ memory is intended to give memory speed approaching that of the fastest memories available, and at the same time provide a large memory size at the price of less expensive types of semiconductor memories
A
Register
B
Counter
C
Flip flop
D
cache
       Computer-Organization       Cache       KVS DEC-2013
Question 8 Explanation: 
→ Cache memory, also called CPU memory, is high-speed static random access memory (SRAM) that a computer microprocessor can access more quickly than it can access regular random access memory (RAM).
→ Register memory is the smallest and fastest memory in a computer.
→ However, register memory is very small in size compared to cache memory, so it cannot provide a large memory at low cost; cache is the level that combines near-register speed with a larger, cheaper capacity.
Question 9
Which of the following memory improves the speed of execution of a program?
A
Virtual memory
B
Primary memory
C
Secondary memory
D
Cache memory
       Computer-Organization       Cache       KVS DEC-2017
Question 9 Explanation: 
Cache memory improves the speed of execution of a program, but it is costly. Registers are faster than cache memory, but they are far too few to hold a program's working set.
Question 10

A digital computer has a memory unit of 64K × 16 and a cache memory of 1K words. The cache uses direct mapping with a block size of four words. How many bits are there in the tag, index and block fields of the address format?

A
1, 6, 16
B
28
C
6, 8, 2
D
24
       Computer-Organization       Cache       JT(IT) 2016 PART-B Computer Science
Question 10 Explanation: 
Main memory size = 64K × 16 = 2^16 × 16, i.e., main memory has 2^16 words.
Therefore the physical address is 16 bits.

Number of blocks in the cache = cache size / block size = 2^10 / 2^2 = 2^8 = 256
∴ 8 bits for the block (index) field
Block size = 4 words = 2^2 words
∴ 2 bits for the word (offset) field
Tag = 16 - 8 - 2 = 6 bits
So Tag = 6 bits, index (block) = 8 bits, word (offset) = 2 bits, i.e., option C.
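The bit-field split can also be computed mechanically; the sketch below assumes a word-addressable memory and power-of-two sizes (the function name is mine):

from math import log2

# Direct-mapped cache: split a physical address into tag / block (index) / word bits.
def field_sizes(memory_words, cache_words, block_words):
    address_bits = int(log2(memory_words))                # 2^16 words -> 16 bits
    word_bits    = int(log2(block_words))                 # 4-word blocks -> 2 bits
    block_bits   = int(log2(cache_words // block_words))  # 256 blocks -> 8 bits
    tag_bits     = address_bits - block_bits - word_bits  # 16 - 8 - 2 = 6 bits
    return tag_bits, block_bits, word_bits

print(field_sizes(64 * 1024, 1024, 4))  # (6, 8, 2)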
Question 11
The principle of Locality of reference justifies the use of :
A
Virtual memory
B
Interrupts
C
Cache memory
D
Secondary memory
       Computer-Organization       Cache       UGC NET CS 2005 june-paper-2
Question 11 Explanation: 
→ The principle of Locality of reference justifies the use of cache memory. The principle of locality, is the tendency of a processor to access the same set of memory locations repetitively over a short period of time.
→ Locality is merely one type of predictable behavior that occurs in computer systems. Systems that exhibit strong locality of reference are great candidates for performance optimization through the use of techniques such as the caching, prefetching for memory and advanced branch predictors at the pipelining stage of a processor core.
Question 12
Cached and interleaved memories are ways of speeding up memory access between CPU’s and slower RAM. Which memory models are best suited (i.e. improves the performance most) for which programs ?
(i) Cached memory is best suited for small loops.
(ii) Interleaved memory is best suited for small loops
(iii) Interleaved memory is best suited for large sequential code.
(iv) Cached memory is best suited for large sequential code.
A
(i) and (ii) are true.
B
(i) and (iii) are true.
C
(iv) and (ii) are true.
D
(iv) and (iii) are true.
       Computer-Organization       Cache       UGC NET CS 2012 June-Paper2
Question 12 Explanation: 
→ Interleaved memory is a design made to compensate for the relatively slow speed of dynamic random-access memory (DRAM) or core memory, by spreading memory addresses evenly across memory banks.
That way, contiguous memory reads and writes are using each memory bank in turn, resulting in higher memory throughputs due to reduced waiting for memory banks to become ready for desired operations.
→ A CPU cache is a hardware cache used by the central processing unit (CPU) of a computer to reduce the average cost (time or energy) to access data from the main memory.
→ A cache is a smaller, faster memory, closer to a processor core, which stores copies of the data from frequently used main memory locations.
→ Small loops repeatedly reference the same few instructions and data, so they benefit most from a cache; large sequential code streams through consecutive addresses, which keeps interleaved memory banks busy in turn. Hence statements (i) and (iii) are true.
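A tiny sketch shows why sequential code favours interleaving: consecutive addresses land in different banks, so their accesses can overlap (the 4-bank low-order interleaving here is purely an illustrative assumption):

# Low-order interleaving: consecutive addresses cycle through the banks,
# so a sequential access stream keeps every bank busy.
NUM_BANKS = 4
for address in range(8):
    print(f"address {address} -> bank {address % NUM_BANKS}")
# address 0 -> bank 0, address 1 -> bank 1, address 2 -> bank 2, ...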
Question 13
Block or Buffer caches are used to
A
improve disk performance
B
handle interrupts
C
increase the capacity of main memory
D
speed up main memory Read operations
       Computer-Organization       Cache       UGC NET CS 2011 June-Paper-2
Question 13 Explanation: 
Block or Buffer caches are used to speed up main memory Read operations. By reading the information from disk only once and then keeping it in memory until no longer needed, one can speed up all but the first read. This is called disk buffering, and the memory used for the purpose is called the buffer cache.
Question 14
The performance of a file system depends upon the cache hit rate. If it takes 1 msec to satisfy a request from the cache but 10 msec to satisfy a request if a disk read is needed, then the mean time (ms) required for a hit rate ‘h’ is given by :
A
1
B
h + 10 (1 - h)
C
(1 - h) + 10 h
D
10
       Computer-Organization       Cache       UGC NET CS 2007-Dec-Paper-2
Question 14 Explanation: 
Mean time = (hit rate × cache access time) + (miss rate × disk access time)
= (h × 1) + ((1 - h) × 10)
= h + 10(1 - h) ms, which is option B.
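Evaluating both candidate expressions at a sample hit rate shows why option B is the right one (the 0.9 hit rate is just an example value):

# Mean request time: hits cost 1 ms, misses cost 10 ms.
def mean_time(h):
    return h * 1 + (1 - h) * 10   # option B: h + 10(1 - h)

def wrong_formula(h):
    return (1 - h) * 1 + h * 10   # option C: (1 - h) + 10h

h = 0.9
print(mean_time(h))      # 1.9 ms: a high hit rate gives a time close to 1 ms
print(wrong_formula(h))  # 9.1 ms: would mean a better cache makes requests slower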
Question 15
Cache memory is :
A
High-Speed Register
B
Low-Speed RAM
C
Non-Volatile RAM
D
High-speed RAM
       Computer-Organization       Cache       UGC NET CS 2007 June-Paper-2
Question 15 Explanation: 
Option A is ruled out because cache memory is not a register; registers are faster than cache memory. Options B and C are ruled out because cache is neither low-speed nor non-volatile RAM.
Option D is the most appropriate answer: cache memory is high-speed RAM, built from fast static RAM (SRAM).
Question 16
In _____ method, the word is written to the block in both the cache and main memory, in parallel.
A
Write through
B
Write back
C
Write protected
D
Direct mapping
       Computer-Organization       Cache       UGC NET CS 2016 July- paper-3
Question 16 Explanation: 
→ A cache memory contains copies of data stored in main memory. When data in the cache is changed (e.g., modified by a processor write), the contents of the main-memory cell and the cache cell with the same address differ. To eliminate this lack of coherency, two methods are applied:
Write through: the new cache contents are written to main memory immediately after the write to the cache, i.e., the word is written to the block in both the cache and main memory in parallel.
Write back: the new cache contents are not written to main memory immediately after the change, but only when the given block is replaced by a new block fetched from main memory or an upper-level cache. After a write to the cache, only state bits of the modified block are changed, marking it as modified (a dirty block).
Write-back updating is more time efficient, since a block that was modified many times while in the cache is written to main memory only once.
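A minimal sketch of the two policies, assuming a single-line cache and a dictionary standing in for main memory (all names here are my own and purely illustrative):

# Illustrative single-line cache showing write-through vs. write-back.
class CacheLine:
    def __init__(self, memory, write_back=False):
        self.memory = memory      # dict: address -> value (stands in for main memory)
        self.write_back = write_back
        self.addr = None
        self.value = None
        self.dirty = False

    def write(self, addr, value):
        self.evict_if_needed(addr)
        self.addr, self.value = addr, value
        if self.write_back:
            self.dirty = True          # defer the memory update
        else:
            self.memory[addr] = value  # write through: update memory in parallel

    def evict_if_needed(self, addr):
        if self.write_back and self.dirty and self.addr not in (None, addr):
            self.memory[self.addr] = self.value  # write the dirty block back once
            self.dirty = False

memory = {}
wb = CacheLine(memory, write_back=True)
wb.write(0, 1); wb.write(0, 2)  # two writes; main memory still not updated
print(memory)                   # {}
wb.write(4, 9)                  # replacement forces the single write-back
print(memory)                   # {0: 2}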