## Cache

 Question 1

A certain processor uses a fully associative cache of size 16 kB. The cache block size is 16 bytes. Assume that the main memory is byte addressable and uses a 32-bit address. How many bits are required for the Tag and the Index fields respectively in the addresses generated by the processor?

 A 28 bits and 0 bits B 24 bits and 4 bits C 24 bits and 0 bits D 28 bits and 4 bits
Computer-Organization       Cache       GATE 2019
Question 1 Explanation:
Since the cache is fully associative, there are no index bits.
The block size is 16 bytes, so the offset is 4 bits. For the tag bits,
Total bits - Offset bits = 32 - 4 = 28
So, tag bits = 28, Index bits = 0
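As a quick sanity check, the field widths can be computed directly (a minimal Python sketch of the arithmetic above):

```python
import math

cache_size = 16 * 1024      # 16 kB, fully associative
block_size = 16             # bytes
address_bits = 32

offset_bits = int(math.log2(block_size))   # 4 bits to pick a byte in a block
index_bits = 0                             # fully associative: no index field
tag_bits = address_bits - index_bits - offset_bits
print(tag_bits, index_bits)                # → 28 0
```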
 Question 2

A certain processor deploys a single-level cache. The cache block size is 8 words and the word size is 4 bytes. The memory system uses a 60-MHz clock. To service a cache miss, the memory controller first takes 1 cycle to accept the starting address of the block, it then takes 3 cycles to fetch all the eight words of the block, and finally transmits the words of the requested block at the rate of 1 word per cycle. The maximum bandwidth for the memory system when the program running on the processor issues a series of read operations is ______ × 10^6 bytes/sec.

 A 160 B 145 C 172 D 124
Computer-Organization       Cache       GATE 2019       Video-Explanation
Question 2 Explanation: Cache block = 8 words
Word size = 4 bytes
Cache block size = 32 bytes
Clock = 60 MHz
⇒ T = 1/clock = 1/(60×10^6) seconds
Cache miss
= 1 cycle (address) + 3 cycles (fetch 8 words) + 8 cycles (transfer at 1 word/cycle)
= 12 cycles
= 12/(60×10^6) seconds
Total bandwidth = total data/total time = 32 bytes/(12/(60×10^6)) = 160 × 10^6 bytes/second
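The same arithmetic as a small Python check:

```python
clock_hz = 60e6                      # 60 MHz memory clock
cycles_per_miss = 1 + 3 + 8          # address + block fetch + 8 word transfers
block_bytes = 8 * 4                  # 8 words × 4 bytes per word

time_per_block = cycles_per_miss / clock_hz
bandwidth = block_bytes / time_per_block
print(bandwidth / 1e6)               # → 160.0
```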
 Question 3

The size of the physical address space of a processor is 2^P bytes. The word length is 2^W bytes. The capacity of cache memory is 2^N bytes. The size of each cache block is 2^M words. For a K-way set-associative cache memory, the length (in number of bits) of the tag field is

 A P - N - log2K B P - N + log2K C P - N - M - W - log2K D P - N - M - W + log2K
Computer-Organization       Cache       GATE 2018       Video-Explanation
Question 3 Explanation:
Physical memory is of size 2^P bytes.
Each word is of size 2^W bytes.
Number of words in physical memory = 2^(P-W)
So the physical address is P-W bits (word-addressed).
Cache size is 2^N bytes.
Number of words in the cache = 2^(N-W)
Block size is 2^M words.
No. of blocks in the cache = 2^(N-W-M)
Since it is a K-way set-associative cache, each set in the cache has K blocks.
No. of sets = 2^(N-W-M) / K
SET bits = N - W - M - log2 K
Block offset = M
TAG bits = (P-W) - (N-W-M-log2 K) - M = P - W - N + W + M + log2 K - M = P - N + log2 K
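The closed form can be checked against the full field breakdown for some sample values (the numbers P = 32, W = 2, N = 16, M = 3, K = 4 are hypothetical, not from the question):

```python
import math

# Hypothetical sample parameters, not part of the question.
P, W, N, M, K = 32, 2, 16, 3, 4

word_address_bits = P - W                       # addresses count words
set_bits = (N - W - M) - int(math.log2(K))      # sets = 2^(N-W-M) / K
tag = word_address_bits - set_bits - M          # tag = address - set - offset
print(tag, P - N + int(math.log2(K)))           # → 18 18, both forms agree
```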
 Question 4

Consider a two-level cache hierarchy with L1 and L2 caches. An application incurs 1.4 memory accesses per instruction on average. For this application, the miss rate of L1 cache is 0.1; the L2 cache experiences, on average, 7 misses per 1000 instructions. The miss rate of L2 expressed correct to two decimal places is __________.

 A 0.05 B 0.06 C 0.07 D 0.08
Computer-Organization       Cache       GATE 2017 [Set-1]       Video-Explanation
Question 4 Explanation:
It is given that average references per instruction = 1.4
For 1000 instructions total number of memory references = 1000 * 1.4 = 1400
These 1400 memory references are first accessed in the L1.
Since the miss rate of L1 is 0.1, for 1400 L1 references the number of misses = 0.1 * 1400 = 140
We know when there is a miss in L1 we next access the L2 cache.
So number of memory references to L2 = 140
It is given that there are 7 misses in L2 cache. Out of 140 memory references to L2 cache there are 7 misses.
Hence the miss rate in L2 cache = 7/140 = 0.05
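The per-1000-instruction bookkeeping above in a few lines of Python:

```python
accesses_per_instr = 1.4
l1_miss_rate = 0.1
l2_misses_per_1000_instr = 7

# References per 1000 instructions that miss in L1 and therefore reach L2:
l2_refs = 1000 * accesses_per_instr * l1_miss_rate   # 140
l2_miss_rate = l2_misses_per_1000_instr / l2_refs
print(round(l2_miss_rate, 2))   # → 0.05
```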
 Question 5

Consider a 2-way set associative cache with 256 blocks and uses LRU replacement. Initially the cache is empty. Conflict misses are those misses which occur due to contention of multiple blocks for the same cache set. Compulsory misses occur due to first time access to the block. The following sequence of accesses to memory blocks

(0, 128, 256, 128, 0, 128, 256, 128, 1, 129, 257, 129, 1, 129, 257, 129)

is repeated 10 times. The number of conflict misses experienced by the cache is __________.

 A 76 B 79 C 80 D 81
Computer-Organization       Cache       GATE 2017 [Set-1]       Video-Explanation
Question 5 Explanation:
When a block is accessed for the first time it cannot already be in the cache, so that access is a compulsory miss.
Suppose a block is accessed, and then k unique blocks are accessed before it is accessed again. If the cache holds fewer than k blocks, the second access misses simply because the cache cannot hold all k unique blocks; that is a capacity miss.
If instead the cache holds at least k blocks (so it could have kept all of them) and the second access still misses, the block must have been evicted by contention for its set; that is a conflict miss.
Blocks map to sets by the function x mod 128. The cache has 256 blocks and is
2-way set associative,
so, no. of cache sets = 256/2 = 128.
Blocks 0, 128 and 256 all map to set 0, and blocks 1, 129 and 257 all map to set 1.
At the end of the first round we have 6 compulsory misses (one per unique block) and 4 conflict misses.
Similarly, if we continue from Round 2 to the last round, every round gives 8 conflict misses (4 per set).
Total conflict misses = 4 + 9(8) = 4 + 72 = 76 (conflict misses)
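The round-by-round count can be verified with a short simulation (a minimal sketch, not part of the original solution). Each set is an LRU-ordered dict; since the six unique blocks fit easily in the 256-block cache, any miss on a previously seen block here is a conflict miss:

```python
from collections import OrderedDict

def conflict_misses(trace, num_sets=128, ways=2):
    # One LRU OrderedDict per set. A miss on a previously seen block is
    # counted as a conflict miss (no capacity misses are possible: only
    # 6 unique blocks versus a 256-block cache).
    sets = [OrderedDict() for _ in range(num_sets)]
    seen, conflicts = set(), 0
    for b in trace:
        s = sets[b % num_sets]
        if b in s:
            s.move_to_end(b)           # hit: mark most recently used
        else:
            if b in seen:
                conflicts += 1         # miss on a re-accessed block
            seen.add(b)
            if len(s) == ways:
                s.popitem(last=False)  # evict least recently used
            s[b] = True
    return conflicts

round_ = [0, 128, 256, 128, 0, 128, 256, 128,
          1, 129, 257, 129, 1, 129, 257, 129]
print(conflict_misses(round_ * 10))    # → 76
```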
 Question 6

A cache memory unit with capacity of N words and block size of B words is to be designed. If it is designed as a direct mapped cache, the length of the TAG field is 10 bits. If the cache unit is now designed as a 16-way set-associative cache, the length of the TAG field is ___________ bits.

 A 14 B 15 C 16 D 17
Computer-Organization       Cache       GATE 2017 [Set-1]       Video-Explanation
Question 6 Explanation:
When it is a direct mapped cache, the physical address can be divided as
(Tag bits + bits for block number + bits for block offset)
With block size being B words no. of bits for block offset = log (B)
Because the cache capacity is N words and each block is B words, number of blocks in cache = N / B
No. of bits for block number = log (N/B)
So, the physical address in direct mapping case
= 10 + log (N/B) + log (B)
= 10 + log (N) – log B + log B
= 10 + log (N)
If the same cache unit is designed as 16-way set associative, then the physical address becomes
(Tag bits + bits for set no. + Bits for block offset)
There are N/B blocks in the cache and in 16-way set associative cache each set contains 16 blocks.
So no. of sets = (N/B) / 16 = N / (16*B)
Then bits for set no = log (N/16B)
Bits for block offset remain the same in this case also. That is log (B).
So physical address in the set associative case
= tag bits + log (N/16*B) + log B
= tag bits + log (N) – log (16*B) + log B
= tag bits + log (N) – log 16 – log B + log B
= tag bits + log N – 4
The physical address is the same in both the cases.
So, 10 + log N = tag bits + log N – 4
Tag bits = 14
So, no. of tag bits in the case 16-way set associative mapping for the same cache = 14.
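The relation can also be seen numerically: doubling the associativity k times adds log2 of the associativity to the tag. The concrete sizes below (N = 2^15 words, B = 8 words, a 25-bit word address) are hypothetical, chosen only to make the widths computable:

```python
import math

# Hypothetical concrete sizes, not given in the question.
N, B, addr_bits = 2**15, 8, 25

def tag_bits(ways):
    sets = N // B // ways                     # blocks per cache / ways
    return addr_bits - int(math.log2(sets)) - int(math.log2(B))

# Moving from direct mapped (1-way) to 16-way adds log2(16) = 4 tag bits.
print(tag_bits(1), tag_bits(16))              # → 10 14
```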
 Question 7

In a two-level cache system, the access times of L1 and L2 caches are 1 and 8 clock cycles, respectively. The miss penalty from the L2 cache to main memory is 18 clock cycles. The miss rate of L1 cache is twice that of L2. The average memory access time (AMAT) of this cache system is 2 cycles. The miss rates of L1 and L2 respectively are:

 A 0.111 and 0.056 B 0.056 and 0.111 C 0.0892 and 0.1784 D 0.1784 and 0.0892
Computer-Organization       Cache       GATE 2017 [Set-2]       Video-Explanation
Question 7 Explanation:
The Average memory access time is,
AMAT = (L1 hit rate)*(L1 access time) + (L1 miss rate)*(L1 access time + L1 miss penalty)
= L1 hit rate * L1 access time + L1 miss rate * L1 access time + L1 miss rate * L1 miss penalty
We can write,
L1 miss rate = 1 - L1 hit rate
AMAT = L1 hit rate * L1 access time + (1 - L1 hit rate) * L1 access time + L1 miss rate * L1 miss penalty
By taking L1 access time common,
= (L1 hit rate + 1 - L1 hit rate)* L1 access time + L1 miss rate * L1 miss penalty
AMAT = L1 access time + L1 miss rate * L1 miss penalty
We access L2 only when there is a miss in L1.
So, L1 miss penalty is nothing but the effective time taken to access L2.
L1_miss_penalty = Hit_rate_of_L2* Access time of L2 + MissRate of L2 *(Access time of L2+ miss penalty L2)
= Hit_rate_of_L2* Access time of L2 + MissRate of L2 *Access time of L2 + MissRate of L2 * miss penalty L2
By taking Access time of L2 common we get,
= Access time of L2 * (Hit_rate_of_L2 + MissRate of L2 ) + MissRate of L2 * miss penalty L2
We know, MissRate of L2 = 1 - Hit_rate_of_L2 → Hit_rate_of_L2 + MissRate of L2 = 1
So, the above formula becomes,
L1_miss_penalty = Access time of L2 + (MissRate of L2 * miss penalty L2)
It is given,
access time of L1 = 1,
access time of L2 = 8,
miss penalty of L2 = 18,
AMAT = 2.
Let, miss rate of L2 = x.
Since it is given that the L1 miss rate is twice the L2 miss rate, L1 miss rate = 2*x.
Substituting the above values,
L1_miss_penalty = Access time of L2 + (MissRate of L2 * miss penalty L2)
L1_miss_penalty = 8 + (x*18)
AMAT = L1 access time + L1 miss rate * L1 miss penalty
2 = 1 + (2x)(8 + 18x)
36x^2 + 16x - 1 = 0
By solving the above quadratic equation we get,
x = Miss rate of L2 = 0.056
Miss rate of L1 = 2*x = 0.111
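The quadratic can be solved mechanically to confirm the rounded answers:

```python
import math

# 36x^2 + 16x - 1 = 0, obtained from AMAT: 2 = 1 + 2x(8 + 18x)
a, b, c = 36, 16, -1
x = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)   # positive root only
print(round(x, 3), round(2 * x, 3))                 # → 0.056 0.111
```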
 Question 8

The read access times and the hit ratios for different caches in a memory hierarchy are as given below. The read access time of main memory is 90 nanoseconds. Assume that the caches use the referred-word-first read policy and the write back policy. Assume that all the caches are direct mapped caches. Assume that the dirty bit is always 0 for all the blocks in the caches. In execution of a program, 60% of memory reads are for instruction fetch and 40% are for memory operand fetch. The average read access time in nanoseconds (up to 2 decimal places) is ___________.

 A 4.72 B 4.73 C 4.74 D 4.75
Computer-Organization       Cache       GATE 2017 [Set-2]       Video-Explanation
Question 8 Explanation:
Since nothing is given whether it is hierarchical memory or simultaneous memory, by default we consider it as hierarchical memory.
Hierarchical memory (Default case):
For 2-level memory:
The formula for average memory access time = h1 t1 + (1-h1)(t1 + t2)
This can be simplified as
t1 + (1-h1)t2
For 3-level memory:
h1 t1 + (1-h1)(t1 + h2 t2 + (1-h2)(t2 + t3))
This can be simplified as
t1 + (1-h1)t2 + (1-h1)(1-h2)t3
Instruction fetch happens from I-cache whereas operand fetch happens from D-cache.
Using that we need to calculate the instruction fetch time (Using I-cache and L2-cache) and operand fetch time (Using D-cache and L2-cache) separately.
Then calculate 0.6 (instruction fetch time) + 0.4(operand fetch time) to find the average read access time.
From the given table, t1 = 2 ns, t2 = 8 ns, t3 = 90 ns, h2 = 0.9, and h1 = 0.8 for the I-cache and 0.9 for the D-cache.
Instruction fetch time = t1 + (1-h1)t2 + (1-h1)(1-h2)t3 = 2 + 0.2*8 + 0.2*0.1*90 = 5.4 ns
Operand fetch time = t1 + (1-h1)t2 + (1-h1)(1-h2)t3 = 2 + 0.1*8 + 0.1*0.1*90 = 3.7 ns
The average read access time = 0.6*5.4 + 0.4*3.7 = 4.72ns
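A quick numeric check of the weighted average (the table values are the ones used in the computation above):

```python
# Three-level hierarchical read time: t1 + (1-h1)t2 + (1-h1)(1-h2)t3
def read_time(t1, h1, t2=8, h2=0.9, t3=90):
    return t1 + (1 - h1) * t2 + (1 - h1) * (1 - h2) * t3

# 60% instruction fetches (I-cache, h1=0.8), 40% operand fetches (D-cache, h1=0.9)
avg = 0.6 * read_time(2, 0.8) + 0.4 * read_time(2, 0.9)
print(round(avg, 2))   # → 4.72
```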
 Question 9

Consider a machine with a byte addressable main memory of 2^32 bytes divided into blocks of size 32 bytes. Assume that a direct mapped cache having 512 cache lines is used with this machine. The size of the tag field in bits is _____________.

 A 18 B 19 C 20 D 21
Computer-Organization       Cache       GATE 2017 [Set-2]       Video-Explanation
Question 9 Explanation:
Given that it is a byte addressable memory and the main memory is of size 2^32 Bytes.
So the physical address is 32 bits long.
Each block is of size 32 (= 2^5) Bytes, so the block offset is 5 bits.
Also given that there are 512 (= 2^9) cache lines; since it is a direct mapped cache we need 9 bits for the LINE number.
When it is a direct mapped cache, the physical address can be divided as
(Tag bits + bits for block/LINE number + bits for block offset)
So, tag bits + 9 + 5 = 32
Tag bits = 32 - 14 = 18
 Question 10

The width of the physical address on a machine is 40 bits. The width of the tag field in a 512 KB 8-way set associative cache is _________ bits.

 A 24 B 25 C 26 D 27
Computer-Organization       Cache       GATE 2016 [Set-2]       Video-Explanation
Question 10 Explanation:
Given that it is a set associative cache, so the physical address is divided as following:
(Tag bits + bits for set no. + Bits for block offset)
In the question the block size has not been given, so assume a block size of 2^x Bytes.
The cache is of size 512 KB, so number of blocks in the cache = 2^19/2^x = 2^(19-x)
It is an 8-way set associative cache, so there will be 8 blocks in each set.
So number of sets = 2^(19-x)/8 = 2^(16-x)
So number of bits for the set index = 16 - x
Let number of bits for Tag = T
Since we assumed the block size is 2^x Bytes, the number of bits for block offset is x.
So, T + (16 - x) + x = 40
T + 16 = 40
T = 24
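The x cancels, so the tag width is independent of the assumed block size; a short sketch confirms every choice of block size gives the same tag width:

```python
import math

addr_bits, cache_bytes, ways = 40, 512 * 1024, 8

def tag_bits(block_bytes):
    sets = cache_bytes // block_bytes // ways
    return addr_bits - int(math.log2(sets)) - int(math.log2(block_bytes))

# Try block sizes 1 B .. 128 B: the tag width is always 24 bits.
print({tag_bits(2**x) for x in range(0, 8)})   # → {24}
```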
 Question 11

A file system uses an in-memory cache to cache disk blocks. The miss rate of the cache is shown in the figure. The latency to read a block from the cache is 1 ms and to read a block from the disk is 10 ms. Assume that the cost of checking whether a block exists in the cache is negligible. Available cache sizes are in multiples of 10 MB. The smallest cache size required to ensure an average read latency of less than 6 ms is _______ MB.

 A 30 B 31 C 32 D 33
Computer-Organization       Cache       GATE 2016 [Set-2]       Video-Explanation
Question 11 Explanation:
Here it is not given whether the memory is hierarchical or simultaneous.
So we consider it as hierarchical memory.
But it is given that “assume that the cost of checking whether a block exists in the cache is negligible”, which means don't consider the checking time in the cache when there is a miss.
So formula for average access time becomes h1t1 + (1-h1)(t2) which is same as for simultaneous access.
Though the memory is hierarchical because of the statement given in the question we ignored the cache access time when there is a miss and effectively the calculation became like simultaneous access.
The average access time or read latency = h1t1 + (1-h1)t2.
It is given that the average read latency has to be less than 6ms.
So, h1t1 + (1-h1)t2 < 6
From the given information t1 = 1 ms, t2 = 10 ms
h1*1 + (1-h1)*10 < 6
10 - 9h1 < 6
9h1 > 4
h1 > 4/9 ≈ 0.444
The given graph shows the miss rate, which is 1 - h1, so we rewrite the bound in terms of the miss rate:
1 - h1 < 1 - 0.444
1 - h1 < 0.556
So for the average read latency to be less than 6 ms the miss rate has to be less than 55.6%.
From the given graph the closest miss rate below 55.6% is 40%.
For 40% miss rate the corresponding cache size is 30MB.
Hence the answer is 30MB.
 Question 12

Assume that for a certain processor, a read request takes 50 nanoseconds on a cache miss and 5 nanoseconds on a cache hit. Suppose while running a program, it was observed that 80% of the processor's read requests result in a cache hit. The average read access time in nanoseconds is _______.

 A 14 B 15 C 16 D 17
Computer-Organization       Cache       GATE 2015 [Set-2]
Question 12 Explanation:
Average read access time = [(0.8)(5) + (0.2)(50)]ns = 4 + 10 = 14ns
 Question 13

Consider a machine with a byte addressable main memory of 2^20 bytes, block size of 16 bytes and a direct mapped cache having 2^12 cache lines. Let the addresses of two consecutive bytes in main memory be (E201F)16 and (E2020)16. What are the tag and cache line address (in hex) for main memory address (E201F)16?

 A E, 201 B F, 201 C E, E20 D 2, 01F
Engineering-Mathematics       Cache       GATE 2015 [Set-3]
Question 13 Explanation:
Block size is 16 bytes, so we need 4 offset bits; the lowest 4 bits are the offset.
The cache has 2^12 lines, which needs 12 bits, so the next 12 bits are the line-index bits.
The remaining top 4 bits (out of 20) are the tag bits. So the answer is (A).
 Question 14

An access sequence of cache block addresses is of length N and contains n unique block addresses. The number of unique block addresses between two consecutive accesses to the same block address is bounded above by k. What is the miss ratio if the access sequence is passed through a cache of associativity A≥k exercising least-recently-used replacement policy?

 A n/N B 1/N C 1/A D k/n
Data-Structures       Cache       GATE 2014 [Set-1]
Question 14 Explanation:
Consider a k-way set associative cache. Suppose block b0 maps to set 0 and is followed by k-1 accesses to other unique blocks in the same set; set 0 is now full. One more new block arriving at set 0 evicts b0 under LRU, so a re-access to b0 would miss. But the question gives associativity A ≥ k, i.e. each set holds at least k lines. With, say, (k+1)-way associativity, after b0 and k-1 other unique accesses to set 0 there is still a free line, so a re-access to b0 hits. Since at most k unique blocks separate consecutive accesses to the same block, no block is ever evicted before its re-access, and the only misses are the compulsory ones for the n unique blocks. Out of N accesses only the n unique blocks miss, so the miss ratio is n/N.
Because no block ever needs to be replaced before it is re-accessed, the choice of LRU is not significant here.
 Question 15

A 4-way set-associative cache memory unit with a capacity of 16 KB is built using a block size of 8 words. The word length is 32 bits. The size of the physical address space is 4 GB. The number of bits for the TAG field is __________.

 A 20 B 21 C 22 D 23
Computer-Organization       Cache       GATE 2014 [Set-2]
Question 15 Explanation:
Physical address size = 32 bits
Cache size = 16 KB = 2^14 Bytes
Block size = 8 words = 8 × 4 Bytes = 32 Bytes = 2^5 Bytes
(where each word = 4 Bytes)
Block offset = 5 bits
No. of blocks = 2^14/2^5 = 2^9
Because it is a 4-way set associative cache, no. of sets = 2^9/4 = 2^7
Set bits = 7
TAG = 32 - (7 + 5) = 20 bits
 Question 16

Suppose a stack implementation supports an instruction REVERSE, which reverses the order of elements on the stack, in addition to the PUSH and POP instructions. Which one of the following statements is TRUE with respect to this modified stack?

 A A queue cannot be implemented using this stack. B A queue can be implemented where ENQUEUE takes a single instruction and DEQUEUE takes a sequence of two instructions. C A queue can be implemented where ENQUEUE takes a sequence of three instructions and DEQUEUE takes a single instruction. D A queue can be implemented where both ENQUEUE and DEQUEUE take a single instruction each.
Computer-Organization       Cache       GATE 2014 [Set-2]
Question 16 Explanation:
A queue can be implemented where ENQUEUE takes a sequence of 3 instructions and DEQUEUE takes a single instruction.
DEQUEUE: keep the front of the queue at the top of the stack, so a single POP removes the front element.
ENQUEUE: REVERSE the stack (the front moves to the bottom), PUSH the new element, then REVERSE again so the front is back on top.
 Question 17

In designing a computer’s cache system, the cache block (or cache line) size is an important parameter. Which one of the following statements is correct in this context?

 A A smaller block size implies better spatial locality B A smaller block size implies a smaller cache tag and hence lower cache tag overhead C A smaller block size implies a larger cache tag and hence lower cache hit time D A smaller block size incurs a lower cache miss penalty
Computer-Organization       Cache        GATE 2014 [Set-2]
Question 17 Explanation:
When the cache block size is smaller, less data has to be fetched from the lower level on a miss, so the time to service a miss is shorter, i.e. the cache miss penalty is lower. Hence option D.
 Question 18

If the associativity of a processor cache is doubled while keeping the capacity and block size unchanged, which one of the following is guaranteed to be NOT affected?

 A Width of tag comparator B Width of set index decoder C Width of way selection multiplexor D Width of processor to main memory data bus
Computer-Organization       Cache       GATE 2014 [Set-2]
Question 18 Explanation:
When the associativity is doubled (capacity and block size unchanged), the number of sets halves, so the set index loses one bit and the tag gains one bit; the width of the tag comparator is therefore affected.
The width of the set index decoder is also affected when the number of set-index bits changes.
A k-way set associative cache needs a k-to-1 way-selection multiplexer. If the associativity is doubled, the width of the way-selection multiplexer will also be doubled.
The width of the processor-to-main-memory data bus is guaranteed to be NOT affected, as it does not depend on the cache associativity.
 Question 19

In a k-way set associative cache, the cache is divided into v sets, each of which consists of k lines. The lines of a set are placed in sequence one after another. The lines in set s are sequenced before the lines in set (s+1). The main memory blocks are numbered 0 onwards. The main memory block numbered j must be mapped to any one of the cache lines from

 A (j mod v) * k to (j mod v) * k + (k-1) B (j mod v) to (j mod v) + (k-1) C (j mod k) to (j mod k) + (v-1) D (j mod k) * v to (j mod k) * v + (v-1)
Computer-Organization       Cache       GATE 2013
Question 19 Explanation:
Number of sets in cache = v.
A block numbered j will be mapped to set number (j mod v). Since it is k-way set associative cache, there are k blocks in each set. It is given in the question that the blocks in consecutive sets are sequenced. It means for set-0 the cache lines are numbered 0, 1, .., k-1 and for set-1, the cache lines are numbered k, k+1,... k+k-1 and so on. As the main memory block j will be mapped to set (j mod v), it will be any one of the cache lines from (j mod v) * k to (j mod v) * k + (k-1).
 Question 20

A RAM chip has a capacity of 1024 words of 8 bits each (1K×8). The number of 2×4 decoders with enable line needed to construct a 16K×16 RAM from 1K×8 RAM is

 A 4 B 5 C 6 D 7
Computer-Organization       Cache       GATE 2013
Question 20 Explanation:
The required RAM capacity is 16K × 16 and the available chips are 1K × 8.
Along the address dimension we need 16K/1K = 16 rows of chips, so 4 address bits must be decoded to 16 chip-select lines, i.e. a 4 × 16 decoder. But only 2 × 4 decoders are given.
A single 2 × 4 decoder has only 4 output lines, so 4 decoders are required at the inner level to provide the 16 output lines.
To select among these 4 decoders, another 2 × 4 decoder is required at the outer level, driving their enable inputs.
Hence no. of 2 × 4 decoders to realize the above implementation of RAM = 1 + 4 = 5
 Question 21

A computer has a 256 KByte, 4-way set associative, write back data cache with block size of 32 Bytes. The processor sends 32 bit addresses to the cache controller. Each cache tag directory entry contains, in addition to address tag, 2 valid bits, 1 modified bit and 1 replacement bit.

The number of bits in the tag field of an address is

 A 11 B 14 C 16 D 27
Computer-Organization       Cache       GATE 2012
Question 21 Explanation:
It is given that cache size = 256 KB
Cache block size = 32 Bytes
So, number of blocks in the cache = 256K / 32 = 8 K
It is a 4-way set associative cache. Each set has 4 blocks.
So, number of sets in cache = 8 K / 4 = 2 K = 2^11.
So, 11 bits are needed for accessing a set. Inside a set we need to identify the cache block.
Since cache block size is 32 bytes, block offset needs 5 bits.
Out of 32 bit address, no. of TAG bits = 32 - 11 - 5 = 32 - 16 = 16
So, we need 16 tag bits.
 Question 22

A computer has a 256 KByte, 4-way set associative, write back data cache with block size of 32 Bytes. The processor sends 32 bit addresses to the cache controller. Each cache tag directory entry contains, in addition to address tag, 2 valid bits, 1 modified bit and 1 replacement bit.

The size of the cache tag directory is

 A 160 Kbits B 136 Kbits C 40 Kbits D 32 Kbits
Computer-Organization       Cache       GATE 2012
Question 22 Explanation:
It is given that cache size = 256 KB
Cache block size = 32 Bytes
So, number of blocks in the cache = 256K / 32 = 8 K
It is a 4-way set associative cache. Each set has 4 blocks.
So, number of sets in cache = 8 K / 4 = 2 K = 2^11.
So, 11 bits are needed for accessing a set. Inside a set we need to identify the cache block.
Since cache block size is 32 bytes, block offset needs 5 bits.
Out of 32 bit address, no. of TAG bits = 32 - 11 - 5 = 32 - 16 = 16
So, we need 16 tag bits.
It is given that in addition to address tag there are 2 valid bits, 1 modified bit and 1 replacement bit.
So size of each tag entry = 16 tag bits + 2 valid bits + 1 modified bit + 1 replacement bit = 20 bits
Size of cache tag directory = Size of tag entry × Number of tag entries
= 20 × 8 K
= 160 Kbits
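The full chain for Questions 21 and 22 as a small Python check:

```python
import math

cache_bytes, block_bytes, ways, addr_bits = 256 * 1024, 32, 4, 32

blocks = cache_bytes // block_bytes                     # 8K blocks
sets = blocks // ways                                   # 2K sets
tag = addr_bits - int(math.log2(sets)) - int(math.log2(block_bytes))
entry = tag + 2 + 1 + 1    # tag + 2 valid bits + 1 modified + 1 replacement
directory_kbits = entry * blocks // 1024
print(tag, directory_kbits)   # → 16 160
```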
 Question 23

An 8KB direct-mapped write-back cache is organized as multiple blocks, each of size 32-bytes. The processor generates 32-bit addresses. The cache controller maintains the tag information for each cache block comprising of the following.

1 Valid bit
1 Modified bit

As many bits as the minimum needed to identify the memory block mapped in the cache. What is the total size of memory needed at the cache controller to store meta-data (tags) for the cache?

 A 4864 bits B 6144 bits C 6656 bits D 5376 bits
Computer-Organization       Cache       GATE 2011
Question 23 Explanation:
No. of cache blocks = cache size/size of a block = 8KB/32B = 256
So we need 8 bits for indexing the 256 blocks in the cache. And since a block is 32 bytes, we need 5 offset bits to address each byte within a block.
So the remaining 32 - 8 - 5 = 19 bits are tag bits.
So tag entry size = 19 + 1 (valid bit) + 1 (modified bit) = 21 bits
∴ Total size of metadata = 21 × Number blocks = 21 × 256 = 5376 bits
 Question 24

Consider a 4-way set associative cache (initially empty) with total 16 cache blocks. The main memory consists of 256 blocks and the request for memory blocks is in the following order:

`        0, 255, 1, 4, 3, 8, 133, 159, 216, 129, 63, 8, 48, 32, 73, 92, 155.`

Which one of the following memory block will NOT be in cache if LRU replacement policy is used?

 A 3 B 8 C 129 D 216
Computer-Organization       Cache       GATE 2009
Question 24 Explanation:
It is given that cache contains 16 blocks, it is 4-way set associative cache.
So number of sets = 16 / 4 = 4
Given main memory blocks will be mapped to one of the 4 sets.
The given blocks are 0, 255, 1, 4, 3, 8, 133, 159, 216, 129, 63, 8, 48, 32, 73, 92, 155, and they will be mapped to following sets(Since there are 4 sets we get the set number by doing mod 4):
0, 3, 1, 0, 3, 0, 1, 3, 0, 1, 3, 0, 0, 0, 1, 0, 3
Tracking set 0, which receives blocks 0, 4, 8, 216, 8, 48, 32, 92 in access order, with 4-way LRU: after 0, 4, 8, 216 the set is full; 8 is then a hit; 48 evicts 0; 32 evicts 4; 92 evicts 216. So at the end of the access pattern, block number 216 is not in the cache.
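The LRU bookkeeping can be checked with a short simulation sketch (not part of the original solution):

```python
from collections import OrderedDict

def final_contents(trace, num_sets=4, ways=4):
    # One LRU OrderedDict per set; blocks map to sets by block mod num_sets.
    sets = [OrderedDict() for _ in range(num_sets)]
    for b in trace:
        s = sets[b % num_sets]
        if b in s:
            s.move_to_end(b)           # hit: mark most recently used
        else:
            if len(s) == ways:
                s.popitem(last=False)  # evict least recently used
            s[b] = True
    return {b for s in sets for b in s}

trace = [0, 255, 1, 4, 3, 8, 133, 159, 216, 129, 63, 8, 48, 32, 73, 92, 155]
cache = final_contents(trace)
print([b for b in (3, 8, 129, 216) if b not in cache])   # → [216]
```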
 Question 25

For inclusion to hold between two cache levels L1 and L2 in a multi-level cache hierarchy, which of the following are necessary?

I. L1 must be a write-through cache
II. L2 must be a write-through cache
III. The associativity of L2 must be greater than that of L1
IV. The L2 cache must be at least as large as the L1 cache
 A IV only B I and IV only C I, III and IV only D I, II, III and IV
Computer-Organization       Cache       GATE 2008
Question 25 Explanation:
Inclusion property: the presence of an address in a given level of the memory hierarchy guarantees that the address is present in all lower (larger) levels of the hierarchy.
Statement I is not necessary: the data in the two levels need not be identical at every instant, so a write-back policy can be used instead of write-through.
Statement II is not needed when talking only about L1 and L2: whether L2 is write-through affects the levels below L2, not L1.
For statement III, the associativities can also be equal, so it need not be true.
For statement IV, the L2 cache has to be at least as large as the L1 cache. In most cases L2 is larger than L1, but an L2 smaller than L1 could never contain everything in L1. So only statement IV is necessarily true. Hence option A is the answer.
 Question 26

Consider a machine with a 2-way set associative data cache of size 64Kbytes and block size 16bytes. The cache is managed using 32 bit virtual addresses and the page size is 4Kbytes. A program to be run on this machine begins as follows:

```
double ARR[1024][1024];
int i, j;
/* Initialize array ARR to 0.0 */
for (i = 0; i < 1024; i++)
    for (j = 0; j < 1024; j++)
        ARR[i][j] = 0.0;
```

The size of double is 8Bytes. Array ARR is located in memory starting at the beginning of virtual page 0xFF000 and stored in row major order. The cache is initially empty and no pre-fetching is done. The only data memory references made by the program are those to array ARR.

The total size of the tags in the cache directory is

 A 32Kbits B 34Kbits C 64Kbits D 68Kbits
Computer-Organization       Cache       GATE 2008
Question 26 Explanation:
It is given that cache size = 64KB
Block size = 16 Bytes = 24 Bytes, so block offset = 4 bits
Number of blocks in the cache = 64KB / 16B = 4K
It is 2-way set associative cache, so each set contains 2 blocks.
So, number of sets = 4K / 2 = 2K = 2^11
So, we need 11-bits for set indexing.
Since the address is 32 bits long, number of tag bits = 32−11−4 = 17
Total size of the tag directory = No. of tag bits ×Number of cache blocks = 17×4K
= 68 Kbits
 Question 27

Consider a machine with a 2-way set associative data cache of size 64Kbytes and block size 16bytes. The cache is managed using 32 bit virtual addresses and the page size is 4Kbytes. A program to be run on this machine begins as follows:

```
double ARR[1024][1024];
int i, j;
/* Initialize array ARR to 0.0 */
for (i = 0; i < 1024; i++)
    for (j = 0; j < 1024; j++)
        ARR[i][j] = 0.0;
```

The size of double is 8Bytes. Array ARR is located in memory starting at the beginning of virtual page 0xFF000 and stored in row major order. The cache is initially empty and no pre-fetching is done. The only data memory references made by the program are those to array ARR.

Which of the following array elements has the same cache index as ARR?

 A ARR   B ARR   C ARR   D ARR  
Computer-Organization       Cache       GATE 2008
Question 27 Explanation:
It is given that cache size = 64KB
Block size = 16 Bytes
Number of blocks in the cache = 64KB / 16B = 4K
It is 2-way set associative cache, so each set contains 2 blocks.
So, number of sets = 4K / 2 = 2K = 2^11
Each element size = 8B and size of each block = 16 B
No. of elements in one block = 16/8 = 2.
No. of elements in one row of the array = 1024, so no. of blocks for one row of the array = 1024/2 = 512.
We know there are 2^11 sets, and each set has two blocks.
For any element to share the same cache index as ARR[0][0], it has to map to the same set.
ARR[0][0] belongs to set 0. The next block that maps to set 0 is block number 2048, because 2048 mod 2^11 = 0.
Block number 2048 lies in row number 2048/512 = 4, because each row occupies 512 blocks.
So, from the given options, ARR[4][0] has the same cache index as ARR[0][0].
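A small sketch of the set-index arithmetic; the element indices (4, 0) follow the derivation above (block 2048 lies at the start of row 4):

```python
# ARR starts at virtual page 0xFF000 with 4 KB pages, so its base byte
# address is 0xFF000 * 4096. Elements are 8-byte doubles in row-major
# order; blocks are 16 bytes and the 2-way 64 KB cache has 2048 sets.
def set_index(i, j):
    addr = 0xFF000 * 4096 + (i * 1024 + j) * 8
    return (addr // 16) % 2048

print(set_index(0, 0), set_index(4, 0))   # → 0 0, same set
```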
 Question 28

Consider a machine with a 2-way set associative data cache of size 64Kbytes and block size 16bytes. The cache is managed using 32 bit virtual addresses and the page size is 4Kbytes. A program to be run on this machine begins as follows:

```
double ARR[1024][1024];
int i, j;
/* Initialize array ARR to 0.0 */
for (i = 0; i < 1024; i++)
    for (j = 0; j < 1024; j++)
        ARR[i][j] = 0.0;
```

The size of double is 8Bytes. Array ARR is located in memory starting at the beginning of virtual page 0xFF000 and stored in row major order. The cache is initially empty and no pre-fetching is done. The only data memory references made by the program are those to array ARR.

The cache hit ratio for this initialization loop is

 A 0% B 25% C 50% D 75%
Computer-Organization       Cache       GATE 2008
Question 28 Explanation:
Block size = 16B and it is given that double size is 8B, so one element = 8B.
So in one block 2 elements will be stored.
To store 1024×1024 elements, no. of blocks needed = 1024×1024/2 = 2^20/2 = 2^19.
In each block the first element will be a miss and the second element will be a hit, because on every miss the entire block is brought into the cache. So there will be 2^19 hits and 2^19 misses.
Total no. of references = no. of elements in the array = 2^20
⇒ hit ratio = No. of hits / Total no. of references
= 2^19/2^20 = 1/2 = 0.5
= 0.5×100 = 50%
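The 50% figure can be confirmed with a short simulation (a sketch, not from the original solution). Because the loop touches the two elements of each block consecutively, it is enough to model the cache as the set of blocks fetched so far:

```python
hits = misses = 0
fetched = set()                      # blocks brought into the cache so far
for i in range(1024):
    for j in range(1024):
        blk = (i * 1024 + j) // 2    # 2 doubles per 16-byte block
        if blk in fetched:
            hits += 1
        else:
            misses += 1
            fetched.add(blk)
print(hits, misses, hits / (hits + misses))  # 524288 524288 0.5
```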
 Question 29

Consider a 4-way set associative cache consisting of 128 lines with a line size of 64 words. The CPU generates a 20-bit address of a word in main memory. The number of bits in the TAG, LINE and WORD fields are respectively:

 A 9, 6, 5 B 7, 7, 6 C 7, 5, 8 D 9, 5, 6
Computer-Organization       Cache       GATE 2007
Question 29 Explanation:
Cache has 128 lines.
Each line size 64 words, so no. of bits for WORD = 6 bits
Because it is 4-way set associative cache, no. of sets in the cache = 128/4 = 32 = 2^5
No. of bits for the set number = 5
Because the address is 20-bits long, no. of TAG bits = 20-5-6 = 9
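The same field-width arithmetic, written out as a sketch (the variable names are illustrative):

```python
import math

lines, ways, words_per_line, addr_bits = 128, 4, 64, 20
word_bits = int(math.log2(words_per_line))    # 64 words per line -> 6 bits
set_bits = int(math.log2(lines // ways))      # 128/4 = 32 sets -> 5 bits
tag_bits = addr_bits - set_bits - word_bits   # remaining bits form the tag
print(tag_bits, set_bits, word_bits)  # 9 5 6
```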
 Question 30

Consider a machine with a byte addressable main memory of 2^16 bytes. Assume that a direct mapped data cache consisting of 32 lines of 64 bytes each is used in the system. A 50 × 50 two-dimensional array of bytes is stored in the main memory starting from memory location 1100H. Assume that the data cache is initially empty. The complete array is accessed twice. Assume that the contents of the data cache do not change in between the two accesses.

How many data cache misses will occur in total?

 A 40 B 50 C 56 D 59
Computer-Organization       Cache       GATE 2007
Question 30 Explanation:
No. of bits to represent an address = log2(2^16) = 16 bits
Cache line size is 64 bytes, so the offset bits required = log2(2^6) = 6 bits
No. of lines in the cache is 32, so the line bits required = log2(2^5) = 5 bits
So, tag bits = 16 - 6 - 5 = 5 bits
No. of elements in array is
50 × 50 = 2500
and each element is of 1 byte.
So, total size of array is 2500 bytes.
So, total no. of lines it will require to get into the cache,
⌈2500/64⌉ = 40
The array starts at address 1100H, so its first block is main-memory block 0x1100/64 = 68, which maps to cache line 68 mod 32 = 4. The 40 array blocks occupy lines 4 to 31 and then wrap around to lines 0 to 11, so the last 8 array blocks (blocks 33 to 40) evict the first 8 blocks (1 to 8) from lines 4 to 11.
First access: all 40 blocks miss. Second access: blocks 1 to 8 were evicted and miss again; reloading them evicts blocks 33 to 40, which then also miss, adding 8 + 8 = 16 misses.
So, if the complete array is accessed twice, total no. of cache misses = 40 + 16 = 56.
 Question 31

Consider a machine with a byte addressable main memory of 2^16 bytes. Assume that a direct mapped data cache consisting of 32 lines of 64 bytes each is used in the system. A 50 × 50 two-dimensional array of bytes is stored in the main memory starting from memory location 1100H. Assume that the data cache is initially empty. The complete array is accessed twice. Assume that the contents of the data cache do not change in between the two accesses.

Which of the following lines of the data cache will be replaced by new blocks in accessing the array for the second time?

 A line 4 to line 11 B line 4 to line 12 C line 0 to line 7 D line 0 to line 8
Computer-Organization       Cache       GATE 2007
Question 31 Explanation:
From the previous question's analysis, the array occupies lines 4 to 31 and then lines 0 to 11, and array blocks A33 to A40 evict A1 to A8 from lines 4 to 11 during the first pass. When the array is accessed the second time, A1 to A8 are reloaded into lines 4 to 11 (replacing A33 to A40), A9 to A32 are still present and hit, and finally A33 to A40 replace A1 to A8 in the same lines again. So the lines replaced are line 4 to line 11.
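Both answers (56 misses in total, and lines 4 to 11 being the ones replaced on the second pass) can be verified with a small direct-mapped cache simulation (a sketch, not part of the original solution):

```python
LINES, BLOCK, BASE, SIZE = 32, 64, 0x1100, 2500
cache = {}                 # line number -> main-memory block currently resident
misses = 0
replaced = set()           # lines whose contents get replaced in the 2nd pass
for p in range(2):         # the array is accessed twice
    for off in range(SIZE):
        blk = (BASE + off) // BLOCK
        line = blk % LINES
        if cache.get(line) != blk:
            if p == 1:
                replaced.add(line)
            cache[line] = blk
            misses += 1
print(misses, sorted(replaced))  # 56 [4, 5, 6, 7, 8, 9, 10, 11]
```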
 Question 32

A CPU has a cache with block size 64 bytes. The main memory has k banks, each bank being c bytes wide. Consecutive c-byte chunks are mapped on consecutive banks with wrap-around. All the k banks can be accessed in parallel, but two accesses to the same bank must be serialized. A cache block access may involve multiple iterations of parallel bank accesses depending on the amount of data obtained by accessing all the k banks in parallel. Each iteration requires decoding the bank numbers to be accessed in parallel and this takes k/2 ns. The latency of one bank access is 80 ns. If c = 2 and k = 24, the latency of retrieving a cache block starting at address zero from main memory is:

 A 92 ns B 104 ns C 172 ns D 184 ns
Computer-Organization       Cache       GATE 2006
Question 32 Explanation:
Cache with block size = 64 bytes
Latency = 80 ns
k = 24
Decoding the bank numbers to be accessed in parallel takes k/2 ns = 24/2 = 12 ns.
Each bank is c = 2 bytes wide and there are k = 24 banks, so one iteration of parallel access fetches 2×24 = 48 bytes.
Fetching a 64-byte block therefore needs ⌈64/48⌉ = 2 iterations.
Time per iteration = decoding time + bank latency = 12 + 80 = 92 ns
For 2 iterations = 92×2 = 184 ns
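The same arithmetic as a sketch:

```python
k, c, block_size = 24, 2, 64
decode = k / 2                            # 12 ns to decode the bank numbers
bank_latency = 80                         # ns for one parallel bank access
bytes_per_iter = k * c                    # 48 bytes fetched per iteration
iters = -(-block_size // bytes_per_iter)  # ceil(64/48) = 2 iterations
latency = iters * (decode + bank_latency)
print(latency)  # 184.0
```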
 Question 33

Consider two cache organizations: The first one is 32 KB 2-way set associative with 32-byte block size. The second one is of the same size but direct mapped. The size of an address is 32 bits in both cases. A 2-to-1 multiplexer has a latency of 0.6 ns while a kbit comparator has a latency of k/10 ns. The hit latency of the set associative organization is h1 while that of the direct mapped one is h2.

The value of h1 is:

 A 2.4 ns B 2.3 ns C 1.8 ns D 1.7 ns
Computer-Organization       Cache       GATE 2006
Question 33 Explanation:
Cache size = 32 KB = 32 × 210 B
Block size = 32 bytes
No. of blocks per set = 2 (2-way set associative)
No. of sets = Cache size / (Block size × Blocks per set) = (32×2^10 B) / (32×2) = 2^9
Set index bits = 9
No. of offset bits = 5 (∵ cache block size is 32 bytes = 2^5 bytes)
No. of Tag bits = 32 - 9 - 5 = 18
Hit latency = multiplexer latency + comparator latency
= 0.6 + (18/10)
= 0.6 + 1.8
= 2.4 ns
 Question 34

Consider two cache organizations: The first one is 32 KB 2-way set associative with 32-byte block size. The second one is of the same size but direct mapped. The size of an address is 32 bits in both cases. A 2-to-1 multiplexer has a latency of 0.6 ns while a kbit comparator has a latency of k/10 ns. The hit latency of the set associative organization is h1 while that of the direct mapped one is h2.

The value of h2 is:

 A 2.4 ns B 2.3 ns C 1.8 ns D 1.7 ns
Computer-Organization       Cache       GATE 2006
Question 34 Explanation:
Hit latency = k/10 ns, where k is the number of tag bits.
For the direct mapped cache: offset bits = 5, line bits = log2(32KB/32B) = 10, so k = 32 - 10 - 5 = 17.
∴ Hit latency = 17/10 = 1.7 ns (no multiplexer is needed, since a direct mapped cache has only one tag to compare).
 Question 35

A CPU has a 32 KB direct mapped cache with 128-byte block size. Suppose A is a two-dimensional array of size 512×512 with elements that occupy 8-bytes each. Consider the following two C code segments, P1 and P2.

```P1: for (i = 0; i < 512; i++) {
        for (j = 0; j < 512; j++) {
            x += A[i][j];
        }
    }

P2: for (i = 0; i < 512; i++) {
        for (j = 0; j < 512; j++) {
            x += A[j][i];
        }
    } ```

P1 and P2 are executed independently with the same initial state, namely, the array A is not in the cache and i, j, x are in registers. Let the number of cache misses experienced by P1 be M1 and that for P2 be M2.

The value of M1 is:

 A 0 B 2048 C 16384 D 262144
Computer-Organization       Cache       GATE 2006
Question 35 Explanation:
[P1] Access A in row major order.
[P2] Access A in column major order.
No. of cache blocks = Cache size/Block size = 32KB/128B = 2^15/2^7 = 2^8 = 256
No. of array elements in each block = Block size/Element size = 128B/8B = 16
Total misses for (P1) = Total no. of elements / No. of elements per block = (512×512)/16 = 16384 (in row-major order the 16 elements of a block are accessed consecutively: 1 miss followed by 15 hits per block)
 Question 36

A CPU has a 32 KB direct mapped cache with 128-byte block size. Suppose A is a two-dimensional array of size 512×512 with elements that occupy 8-bytes each. Consider the following two C code segments, P1 and P2.

```P1: for (i = 0; i < 512; i++) {
        for (j = 0; j < 512; j++) {
            x += A[i][j];
        }
    }

P2: for (i = 0; i < 512; i++) {
        for (j = 0; j < 512; j++) {
            x += A[j][i];
        }
    } ```

P1 and P2 are executed independently with the same initial state, namely, the array A is not in the cache and i, j, x are in registers. Let the number of cache misses experienced by P1 be M1 and that for P2 be M2.

The value of the ratio M1/M2 is:

 A 0 B 1/16 C 1/8 D 16
Computer-Organization       Cache       GATE 2006
Question 36 Explanation:
Total misses for [P2] = 512 * 512 = 262144
(P2 accesses A column by column; consecutive accesses are a full row (4 KB) apart, and the 512 blocks touched in one column map onto only 256 cache lines, so every block is evicted before it can be reused: every access is a miss)
M1/M2 = 16384/262144 = 1/16
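Both miss counts can be reproduced by simulating the direct-mapped cache (a sketch, not from the original solution; 256 lines, 16 elements per 128-byte block):

```python
BLOCKS, PER_BLOCK, N = 256, 16, 512

def count_misses(accesses):
    cache, miss = {}, 0                    # line number -> resident block
    for r, col in accesses:
        blk = (r * N + col) // PER_BLOCK   # block number, row-major layout
        line = blk % BLOCKS                # direct mapped: line = blk mod lines
        if cache.get(line) != blk:
            cache[line] = blk
            miss += 1
    return miss

m1 = count_misses((i, j) for i in range(N) for j in range(N))  # P1: A[i][j]
m2 = count_misses((j, i) for i in range(N) for j in range(N))  # P2: A[j][i]
print(m1, m2)  # 16384 262144
```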
 Question 37

Consider a direct mapped cache of size 32 KB with block size 32 bytes. The CPU generates 32 bit addresses. The number of bits needed for cache indexing and the number of tag bits are respectively.

 A 10, 17 B 10, 22 C 15, 17 D 5, 17
Computer-Organization       Cache       GATE 2005
Question 37 Explanation:
No. of blocks = cache size/block size = 32KB/32B = 2^10
So indexing requires 10 bits.
No. of offset bit required to access 32 byte block = 5
So, No. of TAG bits = 32 - 10 - 5 = 17
 Question 38

Consider a small two-way set-associative cache memory, consisting of four blocks. For choosing the block to be replaced, use the least recently used (LRU) scheme. The number of cache misses for the following sequence of block addresses is 8, 12, 0, 12, 8

 A 2 B 3 C 4 D 5
Computer-Organization       Cache       GATE 2004
Question 38 Explanation:
Cache consists of 4 blocks and is 2-way set associative, so the cache has 2 sets of 2 blocks each; block b maps to set (b mod 2).
The given sequence is 8, 12, 0, 12, 8, and all of these blocks map to set 0: 8 (miss), 12 (miss), 0 (miss, evicts LRU block 8), 12 (hit), 8 (miss, evicts LRU block 0). So in total there are 4 misses.
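A sketch of the trace with an explicit LRU order per set:

```python
from collections import OrderedDict

sets = [OrderedDict(), OrderedDict()]   # 2 sets; insertion order tracks LRU
misses = 0
for blk in [8, 12, 0, 12, 8]:
    s = sets[blk % 2]                   # block b maps to set (b mod 2)
    if blk in s:
        s.move_to_end(blk)              # hit: mark as most recently used
    else:
        misses += 1
        if len(s) == 2:                 # set full: evict the LRU block
            s.popitem(last=False)
        s[blk] = True
print(misses)  # 4
```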
 Question 39

In a C program, an array is declared as float A. Each array element is 4 Bytes in size, and the starting address of the array is 0x00000000. This program is run on a computer that has a direct mapped data cache of size 8 Kbytes, with block (line) size of 16 Bytes.

(a) Which elements of the array conflict with element A in the data cache? Justify your answer briefly.

(b) If the program accesses the elements of this array one by one in reverse order i.e., starting with the last element and ending with the first element, how many data cache misses would occur? Justify your answer briefly. Assume that the data cache is initially empty and that no other data or instruction accesses are to be considered.

 A Theory Explanation is given below.
Computer-Organization       Cache       GATE 2002
 Question 40

More than one word is put in one cache block to

 A exploit the temporal locality of reference in a program B exploit the spatial locality of reference in a program C reduce the miss penalty D none of the above
Computer-Organization       Cache       GATE 2001
Question 40 Explanation:
Spatial locality refers to the use of data elements within relatively close storage locations. To exploit spatial locality, more than one word is put into a cache block.
Temporal locality refers to the reuse of the same data elements within a relatively small time duration.
 Question 41

A CPU has 32-bit memory address and a 256 KB cache memory. The cache is organized as a 4-way set associative cache with cache block size of 16 bytes.

(a) What is the number of sets in the cache?
(b) What is the size (in bits) of the tag field per cache block?
(c) What is the number and size of comparators required for tag matching?
(d) How many address bits are required to find the byte offset within a cache block?
(e) What is the total amount of extra memory (in bytes) required for the tag bits?
 A Theory Explanation is given below.
Computer-Organization       Cache       GATE 2001
 Question 42

The main memory of a computer has 2cm blocks while the cache has 2c blocks. If the cache uses the set associative mapping scheme with 2 blocks per set, then block k of the main memory maps to the set

 A (k mod m) of the cache B (k mod c) of the cache C (k mod 2c) of the cache D (k mod 2cm) of the cache
Computer-Organization       Cache       GATE 1999
Question 42 Explanation:
No. of sets in cache = No. of blocks in cache/No. of blocks per set
= 2c/2
= c
∴ Cache set no. to which block k of main memory maps to
= (Main memory block no.) mod (Total set in cache)
= k mod c
 Question 43

A computer system has a 4K word cache organized in block-set-associative manner with 4 blocks per set, 64 words per block. The number of bits in the SET and WORD fields of the main memory address format is:

 A 15, 40 B 6, 4 C 7, 2 D 4, 6
Computer-Organization       Cache       GATE 1995
Question 43 Explanation:
No. of sets = 4K/(64×4) = 2^12/2^8 = 2^4
So we need 4 set bits.
And,
64 words per block means WORD bit is 6-bits.
So, answer is option (D).
 Question 44

A direct mapped cache memory of 1 MB has a block size of 256 bytes. The cache has an access time of 3 ns and a hit rate of 94%. During a cache miss, it takes 20 ns to bring the first word of a block from the main memory, while each subsequent word takes 5 ns. The word size is 64 bits. The average memory access time in ns (round off to 1 decimal place) is _____.

 A 13.5
Computer-Organization       Cache       GATE 2020
Question 44 Explanation:
Cache access time = 3 ns
Hit ratio of cache = 0.94
Word size is 64 bits = 8 bytes.
Cache line size = 256 bytes = 32 words
Main memory access time = 20ns(time for first word) + 155ns(time for remaining 31 words, 31*5 = 155ns) = 175 ns
Average access time = h1*t1 + (1-h1)(t1+t2) = t1 +(1-h1)t2
⇒ 3 + (0.06)(175) = 13.5 ns
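The computation as a sketch:

```python
t_cache, hit_rate = 3, 0.94
words_per_block = 256 // 8                     # 64-bit words, 256-byte block
miss_penalty = 20 + (words_per_block - 1) * 5  # first word 20 ns, rest 5 ns each
amat = t_cache + (1 - hit_rate) * miss_penalty
print(round(amat, 1))  # 13.5
```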
 Question 45

A computer system with a word length of 32 bits has a 16 MB byte-addressable main memory and a 64 KB, 4-way set associative cache memory with a block size of 256 bytes. Consider the following four physical addresses represented in hexadecimal notation.

` A1 = 0x42C8A4,  A2 = 0x546888,  A3 = 0x6A289C,  A4 = 0x5E4880 `

Which one of the following is TRUE?

 A A1 and A4 are mapped to different cache sets. B A1 and A3 are mapped to the same cache set. C A3 and A4 are mapped to the same cache set. D A2 and A3 are mapped to the same cache set.
Computer-Organization       Cache       GATE 2020
Question 45 Explanation:
Main memory is 16MB = 2^24 bytes and is byte addressable, so the physical address is 24 bits long (each of the given addresses has 6 hexadecimal digits = 24 bits).
Block size is 256 bytes, so the block offset = 8 bits.
Cache size = 64KB
Number of blocks in the cache = 64KB/256B = 256
It is a 4-way set associative cache, so no. of sets in the cache = 256/4 = 64 = 2^6
So 6 bits of the physical address give the SET number.
TAG bits = 24 - 6 - 8 = 10
So the 24-bit physical address is divided as (10 TAG bits + 6 SET number bits + 8 OFFSET bits).
Since in all the options we are asked about SET numbers of the given addresses, we need to find the SET number of each of the addresses.
A1 = 0x42C8A4, here SET number is (00 1000) which includes the last 2 bits of C(1100) and binary representation of 8 (1000).
A2 = 0x546888, here SET number is (10 1000) which includes the last 2 bits of 6(0110) and binary representation of 8 (1000).
A3 = 0x6A289C here SET number is (10 1000) which includes the last 2 bits of 2(0010) and binary representation of 8 (1000).
A4 = 0x5E4880 here SET number is (00 1000) which includes the last 2 bits of 4 (0100) and binary representation of 8 (1000).
From the given options option-4 is TRUE as A2, A3 are mapped to the same cache SET.
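The set numbers can be read off programmatically (a sketch): the set index occupies bits 8 to 13 of each address, just above the 8 offset bits.

```python
addrs = {"A1": 0x42C8A4, "A2": 0x546888, "A3": 0x6A289C, "A4": 0x5E4880}
set_of = {name: (a >> 8) & 0x3F for name, a in addrs.items()}  # 6 set bits
print(set_of)  # A1 and A4 map to set 8; A2 and A3 map to set 40
```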
 Question 46

In the three-level memory hierarchy shown in the following table, pi denotes the probability that an access request will refer to Mi. If a miss occurs at level Mi, a page transfer occurs from Mi+1 to Mi and the average time required for such a page swap is Ti. Calculate the average time tA required for a processor to read one word from this memory system.

 A Theory Explanation.
Computer-Organization       Cache       GATE 1993
 Question 47

State whether the following statements are TRUE or FALSE with reason:

Transferring data in blocks from the main memory to the cache memory enables an interleaved main memory unit to operate at its maximum speed.
 A True B False
Computer-Organization       Cache       GATE 1990
Question 47 Explanation:
Main memory transfers data to the cache in blocks, while the cache supplies data to the CPU in words. Transferring a whole block lets an interleaved main memory operate at its maximum speed, since the consecutive words of a block reside in consecutive banks and can be accessed in parallel.
 Question 48

Consider a computer with a 4-ways set-associative mapped cache of the following characteristics: a total of 1 MB of main memory, a word size of 1 byte, a block size of 128 words and a cache size of 8 KB. The number of bits in the TAG, SET and WORD fields, respectively are:

 A 7, 6, 7 B 8, 5, 7 C 8, 6, 6 D 9, 4, 7
Computer-Organization       Cache       GATE 2008-IT
Question 48 Explanation:
No. of cache blocks = 8KB/128B = 2^13/2^7 = 2^6
No. of sets in cache = 2^6/2^2 = 2^4 (4-way set associative)
So no. of set bits = 4
Size of block = 128 words = 128 B = 2^7 B (word size is 1 byte)
So no. of word bits = 7
Main memory is 1 MB = 2^20 B, so the address is 20 bits long.
So tag bits = 20 - 4 - 7 = 9
 Question 49

Consider a computer with a 4-ways set-associative mapped cache of the following characteristics: a total of 1 MB of main memory, a word size of 1 byte, a block size of 128 words and a cache size of 8 KB. While accessing the memory location 0C795H by the CPU, the contents of the TAG field of the corresponding cache line is

 A 000011000 B 110001111 C 00011000 D 110010101
Computer-Organization       Cache       GATE 2008-IT
Question 49 Explanation:
As found in the previous question, the first 9 bits of the address are the tag bits.
0C795H in binary is 0000 1100 0111 1001 0101. The first 9 bits are 000011000, so the TAG field contains 000011000 (option A).
 Question 50

Consider a Direct Mapped Cache with 8 cache blocks (numbered 0-7). If the memory block requests are in the following order

`3, 5, 2, 8, 0, 63, 9,16, 20, 17, 25, 18, 30, 24, 2, 63, 5, 82,17, 24.  `
Which of the following memory blocks will not be in the cache at the end of the sequence?

 A 3 B 18 C 20 D 30
Computer-Organization       Cache       GATE 2007-IT
Question 50 Explanation:
In a direct mapped cache with 8 blocks, memory block b maps to cache block (b mod 8). Replaying the sequence, the final contents are: line 0 → 24, line 1 → 17, line 2 → 82, line 3 → 3, line 4 → 20, line 5 → 5, line 6 → 30, line 7 → 63. So memory block 18 is not in the cache: it was mapped to line 2 and later evicted by blocks 2 and 82.
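Replaying the trace (a sketch) confirms that block 18 is the one missing at the end:

```python
trace = [3, 5, 2, 8, 0, 63, 9, 16, 20, 17, 25, 18, 30, 24, 2, 63, 5, 82, 17, 24]
cache = {}
for blk in trace:
    cache[blk % 8] = blk          # direct mapped: block b lives in line b mod 8
present = set(cache.values())
print(sorted(present))            # [3, 5, 17, 20, 24, 30, 63, 82]
print(18 in present)              # False
```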
 Question 51

A cache line is 64 bytes. The main memory has latency 32 ns and bandwidth 1 GBytes/s. The time required to fetch the entire cache line from the main memory is

 A 32 ns B 64 ns C 96 ns D 128 ns
Computer-Organization       Cache       GATE 2006-IT
Question 51 Explanation:
For 1 GBytes/s bandwidth → it takes 1 s to transfer 10^9 bytes.
→ So, for 64 bytes it will take 64/10^9 s = 64 ns.
Main memory latency = 32 ns
Total time required to fetch the cache line
= 64 + 32 = 96 ns
 Question 52

A computer system has a level-1 instruction cache (I-cache), a level-1 data cache (D-cache) and a level-2 cache (L2-cache) with the following specifications: The length of the physical address of a word in the main memory is 30 bits. The capacity of the tag memory in the I-cache, D-cache and L2-cache is, respectively,

 A 1K × 18-bit, 1K × 19-bit, 4K × 16-bit B 1K × 16-bit, 1K × 19-bit, 4K × 18-bit C 1K × 16-bit, 512 × 18-bit, 1K × 16-bit D 1K × 18-bit, 512 × 18-bit, 1K × 18-bit
Computer-Organization       Cache       GATE 2006-IT
Question 52 Explanation:
No. of blocks in cache = Capacity/Block size = 2^m
Bits to represent a block = m
No. of words in a block = 2^n
Bits to represent a word = n
Tag bits = (length of physical address of a word) - (bits to represent blocks) - (bits to represent a word)
⇒ Each block has its own tag.
So, total tag memory = no. of blocks × tag bits.
 Question 53

A dynamic RAM has a memory cycle time of 64 nsec. It has to be refreshed 100 times per msec and each refresh takes 100 nsec. What percentage of the memory cycle time is used for refreshing?

 A 10 B 6.4 C 1 D 0.64
Computer-Organization       Cache       GATE 2005-IT
Question 53 Explanation:
We know that 1 ms = 10^6 ns.
In every 10^6 ns the memory is refreshed 100 times.
Each refresh takes 100 ns.
Memory cycle time = 64 ns
Refresh time per 1 ms, i.e., per 10^6 ns = 100 × 100 = 10^4 ns
Fraction of time spent refreshing = 10^4/10^6 = 0.01
Refresh time per memory cycle = 0.01 × 64 = 0.64 ns
Percentage of the memory cycle time used for refreshing = (0.64/64) × 100 = 1%
 Question 54

Consider a system with 2 level caches. Access times of Level 1 cache, Level 2 cache and main memory are 1 ns, 10ns, and 500 ns, respectively. The hit rates of Level 1 and Level 2 caches are 0.8 and 0.9, respectively. What is the average access time of the system ignoring the search time within the cache?

 A 13.0 ns B 12.8 ns C 12.6 ns D 12.4 ns
Computer-Organization       Cache       GATE 2004-IT
Question 54 Explanation:
Average access time = [H1 * T1] + [(1 - H1) * H2 * T2] + [(1 - H1) * (1 - H2) * Hm * Tm]
H1 = 0.8, (1 - H1) = 0.2
H2 = 0.9, (1 - H2) = 0.1
T1 = Access time for level 1 cache = 1ns
T2 = Access time for level 2 cache = 10ns
Hm = Hit rate of main memory = 1
Tm = Access time for main memory = 500ns
Average access time = [(0.8 * 1) + (0.2 * 0.9 * 10) + (0.2)(0.1) * 1 * 500]
= 0.8 + 1.8 + 10
= 12.6ns
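The same computation as a sketch:

```python
h1, h2 = 0.8, 0.9                 # local hit rates of L1 and L2
t1, t2, tm = 1, 10, 500           # access times in ns
amat = h1 * t1 + (1 - h1) * h2 * t2 + (1 - h1) * (1 - h2) * tm
print(round(amat, 1))  # 12.6
```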
 Question 55
Consider a computer system with a byte-addressable primary memory of size 2^32 bytes. Assume the computer system has a direct-mapped cache of size 32 KB (1 KB = 2^10 bytes), and each cache block is of size 64 bytes. The size of the tag field is ______ bits.
 A 17
Computer-Organization       Cache       GATE 2021 CS-Set-1
Question 55 Explanation:

Given that the main memory is 2^32 bytes. So the physical address is 32 bits long.

Since it is a direct mapped cache, the physical address is divided into three fields as (TAG, LINE, OFFSET).

Cache size is 32KB = 2^15 Bytes.

Block size is 64 bytes = 2^6 bytes, so OFFSET needs 6 bits.

Number of blocks in the cache = 2^15/2^6 = 2^9, so the LINE number needs 9 bits.

TAG bits = 32 - 9 - 6 = 17 bits.

 Question 56
Consider a set-associative cache of size 2KB (1KB = 2^10 bytes) with cache block size of 64 bytes. Assume that the cache is byte-addressable and a 32-bit address is used for accessing the cache. If the width of the tag field is 22 bits, the associativity of the cache is ______
 A 2
Computer-Organization       Cache       GATE 2021 CS-Set-2
Question 56 Explanation:

Cache size = 2KB

Block size = 64 Bytes = 2^6 bytes. So the OFFSET is 6 bits long.

The TAG field is 22 bits. And the physical address is 32 bits.

Since it is given as a set associative cache, the physical address is divided as (TAG, SET, OFFSET).

From the given information we can find the SET number = 32 - 22 - 6 = 4 bits.

Using the 4 SET bits, the number of sets possible = 2^4 = 16.

Number of blocks in the cache = Cache size / Block size = 2KB /  64 = 2^11/2^6 = 2^5.

Number of sets in the cache = Number of blocks in the cache / Number of blocks in each set

Number of blocks in each set is also called the associativity.

16 = 2^5/associativity

Associativity = 2^5/16 = 2
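The derivation as a sketch:

```python
import math

addr_bits, tag_bits = 32, 22
cache_size, block_size = 2 * 1024, 64
offset_bits = int(math.log2(block_size))        # 6 offset bits
set_bits = addr_bits - tag_bits - offset_bits   # 4 -> 16 sets
blocks = cache_size // block_size               # 32 blocks in the cache
associativity = blocks // (2 ** set_bits)       # blocks per set
print(associativity)  # 2
```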

 Question 57
Assume a two-level inclusive cache hierarchy, L1 and L2, where L2 is the larger of the two. Consider the following statements.
S1: Read misses in a write-through L1 cache do not result in writebacks of dirty lines to the L2.
S2: Write allocate policy must be used in conjunction with write-through caches and no-write allocate policy is used with write-back caches.
 A S1 is true and S2 is false B S1 is false and S2 is false C S1 is true and S2 is true D S1 is false and S2 is true
Computer-Organization       Cache       GATE 2021 CS-Set-2
Question 57 Explanation:

S1: In a write-through L1 cache, every write to L1 is propagated to L2 at the same time, so L1 never holds dirty lines at all. Given the inclusive hierarchy (L1 is a subset of L2), on an L1 read miss the content is simply fetched from L2; there is never a dirty line that needs to be written back to the L2 cache. Hence S1 is true.

S2:  Generally no-write allocate is used with write through cache and write allocate is used with write back cache. Hence S2 is false.

 Question 58
A cache memory that has a hit rate of 0.8 has an access latency 10 ns and miss penalty 100 ns. An optimization is done on the cache to reduce the miss rate. However, the optimization results in an increase of cache access latency to 15 ns, whereas the miss penalty is not affected. The minimum hit rate (rounded off to two decimal places) needed after the optimization such that it should not increase the average memory access time is _____________.
 A 0.85
Computer-Organization       Cache       GATE 2022       Video-Explanation
Question 58 Explanation:
The initial values are:
h1 = 0.8
t1 = 10 ns
t2 = 100ns
After the optimization the average memory access time remains the same,
Let the new hit rate be h2. The new cache access time is 15ns.
Applying the average access time formula h1t1+(1-h1)(t1+t2)
0.8*10+0.2*(10+100) = h2*15+(1-h2)(15+100)
30 = 15 + (1-h2)*100
1-h2 = 0.15
h2 = 0.85
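Solving the same equation numerically (a sketch):

```python
t1_old, t1_new, penalty, h_old = 10, 15, 100, 0.8
old_amat = t1_old + (1 - h_old) * penalty     # 30 ns before the optimization
# new AMAT: t1_new + (1 - h_new) * penalty must not exceed old_amat
h_new = 1 - (old_amat - t1_new) / penalty
print(round(h_new, 2))  # 0.85
```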
 Question 59
Which one of the following statements is FALSE?
 A The TLB performs an associative search in parallel on all its valid entries using page number of incoming virtual address. B If the virtual address of a word given by CPU has a TLB hit, but the subsequent search for the word results in a cache miss, then the word will always be present in the main memory. C The memory access time using a given inverted page table is always same for all incoming virtual addresses. D In a system that uses hashed page tables, if two distinct virtual addresses V1 and V2 map to the same value while hashing, then the memory access time of these addresses will not be the same.
Computer-Organization       Cache       GATE 2022
Question 59 Explanation:
(A) True: The TLB searches all its valid entries in parallel (associatively) using the page number of the incoming virtual address.
(B) True: A TLB hit implies the page is present in main memory, so even after a cache miss the word will be found in main memory.
(C) False: An inverted page table is searched (typically via hashing), and the number of probes can differ between addresses, so the memory access time is not the same for all incoming virtual addresses.
(D) True: Collision resolution techniques are applied, hence the two addresses may take different access times.
 Question 60
Consider a system with 2 KB direct mapped data cache with a block size of 64 bytes. The system has a physical address space of 64 KB and a word length of 16 bits. During the execution of a program, four data words P, Q, R, and S are accessed in that order 10 times (i.e., PQRSPQRS…). Hence, there are 40 accesses to data cache altogether. Assume that the data cache is initially empty and no other data words are accessed by the program. The addresses of the first bytes of P, Q, R, and S are 0xA248, 0xC28A, 0xCA8A, and 0xA262, respectively. For the execution of the above program, which of the following statements is/are TRUE with respect to the data cache?
 A Every access to S is a hit. B Once P is brought to the cache it is never evicted. C At the end of the execution only R and S reside in the cache. D Every access to R evicts Q from the cache.
Computer-Organization       Cache       GATE 2022
Question 60 Explanation:
Cache size = 2KB = 2^11 bytes
Block size = 64 bytes = 2^6 bytes, we need 6 bits OFFSET to identify each byte in the block.
No. of blocks in the cache = 2^11 / 2^6 = 2^5, since it is a direct mapped cache we need 5 bits in the physical address to identify the LINE number.
Each word is 16 bits long = 2 bytes.
Number of words in each block = 64/2 = 32 words.
The physical memory is 64KB = 2^16 bytes, so the physical address is 16 bits.
In direct mapped cache the physical address is divided into (TAG, LINE, OFFSET)
In the 16 bits physical address OFFSET is 6 bits and LINE number is 5 bits, so there are 5 TAG bits.
The given physical addresses of P, Q, R, S can be written in binary as below (each hexadecimal digit represents 4 binary digits):
P: 0xA248 (10100 01001 001000 )
Q: 0xC28A (11000 01010 001010)
R: 0xCA8A (11001 01010 001010)
S: 0xA262 (10100 01001 100010)
Here P is mapped to line number 9 in the cache. S is also mapped to line number 9, and moreover P and S are in the same memory block, as even their TAG bits match. So even the first access of S is a hit, because S is brought into the cache as part of the block fetched for P. Once P is fetched it is never evicted, since nothing else in the access stream maps to line 9 with a different tag.
Both Q and R are mapped to line number 10 in the cache. But they are from different blocks as the TAG bits are different. Since it is a direct mapped cache, every access of R evicts Q from the cache.
Hence options A, B, D are true. Option C is false as at the end of execution P, R, S are in the cache not just R, S.
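The (TAG, LINE, OFFSET) split of the four addresses can be checked mechanically (a sketch, using the 5/5/6-bit split derived above):

```python
addrs = {"P": 0xA248, "Q": 0xC28A, "R": 0xCA8A, "S": 0xA262}
fields = {k: (a >> 11, (a >> 6) & 0x1F, a & 0x3F) for k, a in addrs.items()}
for name, (tag, line, off) in fields.items():
    print(name, tag, line, off)
# P and S share both TAG and LINE (same block); Q and R share only LINE 10.
```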
 Question 61
The principal of the locality of reference justifies the use of
 A virtual memory B interrupts C main memory D cache memory
Computer-Organization       Cache       ISRO-2007
Question 61 Explanation:
Spatial locality of reference: if a memory location is referenced, nearby locations are likely to be referenced soon. This is why, on a miss, the entire block containing the referenced word is loaded into the cache rather than just the word itself.
Temporal locality of reference: if a location is referenced, the same location is likely to be referenced again in the near future, so recently used blocks are kept in the cache (with a replacement policy such as least recently used).
The principle of locality of reference justifies the use of a cache.
 Question 62
More than one word are put in one cache block to
 A exploit the temporal locality of reference in a program B exploit the spatial locality of reference in a program C reduce the miss penalty D none of these
Computer-Organization       Cache       ISRO CS 2008
Question 62 Explanation:
Temporal locality refers to the reuse of specific data and/or resources within relatively small time durations. Spatial locality refers to the use of data elements within relatively close storage locations.
To exploit the spatial locality, more than one word is put into a cache block.
 Question 63
A cache memory needs an access time of 30 ns and main memory 150 ns, what is the average access time of CPU (assume hit ratio = 80%)?
 A 60 B 30 C 150 D 70
Computer-Organization       Cache       ISRO-2017 May
Question 63 Explanation:
Step-1: Hit ratio 80%=0.8 and Miss ratio=20%=0.2
Access time=30ns
Main memory=150ns
Step-2: CPU average access time = (Hit ratio × cache access time) + (Miss ratio × (cache access time + main memory access time))
= (0.8*30) + (0.2*(30+150))
= 60 ns
 Question 64
Consider a system with 2 level cache. Access times of Level 1 cache, Level 2 cache and main memory are 1 ns, 10 ns, and 500 ns, respectively. The hit rates of Level 1 and Level 2 caches are 0.8 and 0.9, respectively. What is the average access time of the system ignoring the search time within the cache?
 A 13 B 12.8 C 12.6 D 12.4
Computer-Organization       Cache       ISRO-2016
Question 64 Explanation:
Average access time = [H1 * T1] + [(1 - H1) * H2 * T2] + [(1 - H1) * (1 - H2) * Hm * Tm]
H1 = 0.8, (1 - H1) = 0.2
H2 = 0.9, (1 - H2) = 0.1
T1 = Access time for level 1 cache = 1ns
T2 = Access time for level 2 cache = 10ns
Hm = Hit rate of main memory = 1
Tm = Access time for main memory = 500ns
Average access time = [(0.8 * 1) + (0.2 * 0.9 * 10) + (0.2)(0.1) * 1 * 500]
= 0.8 + 1.8 + 10
= 12.6ns
 Question 65
How much speed do we gain by using the cache, when cache is used 80% of the time? Assume cache is faster than main memory
 A 5.27 B 2 C 4.16 D 6.09
Computer-Organization       Cache       ISRO CS 2013
Question 65 Explanation:
Speed gain = (memory access time without cache)/(memory access time with cache).
Let M be the memory access time and C the cache access time. With the cache used 80% of the time,
Speed gain = M/(0.8C + 0.2M).
Note: the numeric speed gain can only be computed if we also know how much faster the cache is than main memory.
 Question 66

Consider a system with 2 level cache. Access times of Level 1, Level 2 cache and main memory are 0.5 ns, 5 ns and 100 ns respectively. The hit rates of Level 1 and Level 2 caches are 0.7 and 0.8 respectively. What is the average access time of the system ignoring the search time within cache?

 A 20.75 ns B 7.55 ns C 24.35 ns D 35.20 ns
Computer-Organization       Cache       UGC-NET CS 2018 DEC Paper-2
Question 66 Explanation:
Average access time = (level 1 hit rate)(level 1 access time) + (level 1 miss rate)(level 2 hit rate)(level 2 access time) + (level 1 miss rate)(level 2 miss rate)(main memory access time)
Average access time = 0.7(0.5) + 0.3(0.8)(5) + 0.3(0.2)(100)
Average access time = 7.55 ns
 Question 67
A CPU has a 32 KB direct mapped cache with 128 byte block size. Suppose A is a 2 dimensional array of size 512×512 with elements that occupy 8 bytes each. Consider the code segment
for (i =0; i < 512; i++)
{
for (j =0; j < 512; j++)
{
x += A[i][j];
}
}
Assuming that the array is stored in row-major order A[0][0], A[0][1], A[0][2], ……, the number of cache misses is
 A 16384 B 512 C 2048 D 1024
Computer-Organization       Cache       ISRO DEC 2017 22- Soon
Question 67 Explanation:
→ A is accessed in row-major order, the same order in which it is stored.
→ No. of cache blocks = Cache size / Block size = 32 KB / 128 B = 2^15 / 2^7 = 2^8 = 256
→ No. of array elements in each block = Block size / Element size = 128 B / 8 B = 16
→ Each block is loaded once and its 16 elements are then served from the cache, so total misses = No. of array elements / elements per block = (512 × 512) / 16 = 16384
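The miss count can also be verified by simulating the direct-mapped cache; this is a sketch with illustrative names, assuming the parameters above (32 KB cache, 128 B blocks, 8 B elements, 512×512 array):

```python
CACHE_SIZE, BLOCK_SIZE, ELEM_SIZE, N = 32 * 1024, 128, 8, 512
NUM_LINES = CACHE_SIZE // BLOCK_SIZE        # 256 cache lines

lines = [None] * NUM_LINES                  # tag currently held by each line
misses = 0
for i in range(N):
    for j in range(N):
        addr = (i * N + j) * ELEM_SIZE      # row-major byte address of A[i][j]
        block = addr // BLOCK_SIZE
        line, tag = block % NUM_LINES, block // NUM_LINES
        if lines[line] != tag:              # miss: fetch the block
            lines[line] = tag
            misses += 1
print(misses)  # 16384
```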
 Question 68
In designing a computer’s cache system, the cache block (or cache line) size is an important parameter. Which one of the following statements is correct in this context?
 A Smaller block size incurs lower cache miss penalty B Smaller block size implies better spatial locality C Smaller block size implies smaller cache tag D Smaller block size implies lower cache hit time
Computer-Organization       Cache       ISRO DEC 2017 22- Soon
Question 68 Explanation:
With a smaller block size, fewer bytes have to be transferred from main memory when a miss occurs, so the miss penalty is lower. A smaller block does not improve spatial locality (B), does not shrink the cache tag, which actually grows (C), and does not by itself reduce hit time (D).
 Question 69
The principle of locality of reference justifies the use of:
 A Non reusable B Cache memory C Virtual memory D None of the above
Computer-Organization       Cache       Nielit Scientist-B CS 4-12-2016
Question 69 Explanation:
● Locality of reference, also known as the principle of locality is the tendency of a processor to access the same set of memory locations repetitively over a short period of time.
● There are two basic types of reference locality – temporal and spatial locality.
● Temporal locality refers to the reuse of specific data, and/or resources, within a relatively small time duration.
● Spatial locality refers to the use of data elements within relatively close storage locations.
● Systems that exhibit strong locality of reference are great candidates for performance optimization through techniques such as caching, prefetching for memory, and advanced branch predictors at the pipelining stage of a processor core.
 Question 70
_____ memory is intended to give memory speed approaching that of the fastest memories available, and at the same time provide a large memory size at the price of less expensive types of semiconductor memories
 A Register B Counter C Flip flop D cache
Computer-Organization       Cache       KVS DEC-2013
Question 70 Explanation:
→ Cache memory, also called CPU memory, is high-speed static random access memory (SRAM) that a computer microprocessor can access more quickly than it can access regular random access memory (RAM).
→ Register memory is the smallest and fastest memory in a computer.
→ Register memory is very small in size compared to cache memory.
 Question 71
Which of the following memory improves the speed of execution of a program?
 A Virtual memory B Primary memory C Secondary memory D Cache memory
Computer-Organization       Cache       KVS DEC-2017
Question 71 Explanation:
Cache memory improves the speed of execution of a program, but it is costly. Compared to cache memory, registers are even faster.
 Question 72

A digital computer has a memory unit of 64K x 16 and a cache memory of 2^10 words. The cache uses direct mapping with a block size of four words. How many bits are there in the tag, index and block fields of the address format?

 A 1, 6, 16 B 28 C 6, 8, 2 D 24
Computer-Organization       Cache       JT(IT) 2016 PART-B Computer Science
Question 72 Explanation:
MM size = 64K × 16 = 2^16 × 16, i.e., MM has 2^16 words
Therefore physical address = PA = 16 bits
No. of blocks in cache = cache size / block size = 2^10 / 2^2 = 2^8 = 256
∴ 8 bits for the block (index) field
As block size = 4 words = 2^2 words
∴ 2 bits for the word offset
Now, tag = 16 - 8 - 2 = 6 bits
(C) Tag = 6 bits, index = block = 8 bits, offset = word = 2 bits
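For direct-mapped address splits like this one, a small Python helper (our own, hypothetical) makes the arithmetic mechanical:

```python
import math

def dm_fields(addr_bits, cache_units, block_units):
    """Return (tag, index, offset) bit counts for a direct-mapped cache.

    Sizes are in addressable units: words for the word-organized machine
    in this question, bytes for a byte-addressable one."""
    offset = int(math.log2(block_units))                  # offset within a block
    index = int(math.log2(cache_units // block_units))    # which cache line
    tag = addr_bits - index - offset
    return tag, index, offset

print(dm_fields(16, 2**10, 4))  # (6, 8, 2)
```

The same helper applied in bytes, dm_fields(32, 2**16, 16), gives the (16, 12, 4) split of Question 92.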
 Question 73
The principle of Locality of reference justifies the use of :
 A Virtual memory B Interrupts C Cache memory D Secondary memory
Computer-Organization       Cache       UGC NET CS 2005 june-paper-2
Question 73 Explanation:
→ The principle of locality of reference justifies the use of cache memory. The principle of locality is the tendency of a processor to access the same set of memory locations repetitively over a short period of time.
→ Locality is merely one type of predictable behavior that occurs in computer systems. Systems that exhibit strong locality of reference are great candidates for performance optimization through the use of techniques such as the caching, prefetching for memory and advanced branch predictors at the pipelining stage of a processor core.
 Question 74
Cached and interleaved memories are ways of speeding up memory access between CPU’s and slower RAM. Which memory models are best suited (i.e. improves the performance most) for which programs ?
(i) Cached memory is best suited for small loops.
(ii) Interleaved memory is best suited for small loops
(iii) Interleaved memory is best suited for large sequential code.
(iv) Cached memory is best suited for large sequential code.
 A (i) and (ii) are true. B (i) and (iii) are true. C (iv) and (ii) are true. D (iv) and (iii) are true.
Computer-Organization       Cache       UGC NET CS 2012 June-Paper2
Question 74 Explanation:
→ Interleaved memory is a design made to compensate for the relatively slow speed of dynamic random-access memory (DRAM) or core memory, by spreading memory addresses evenly across memory banks.
That way, contiguous memory reads and writes are using each memory bank in turn, resulting in higher memory throughputs due to reduced waiting for memory banks to become ready for desired operations.
→ A CPU cache is a hardware cache used by the central processing unit (CPU) of a computer to reduce the average cost (time or energy) to access data from the main memory.
→ A cache is a smaller, faster memory, closer to a processor core, which stores copies of the data from frequently used main memory locations.
→ Small loops repeatedly access the same instructions and variables, so cached memory serves them best, while interleaved memory's bank parallelism best serves large sequential code.
 Question 75
Block or Buffer caches are used to
 A improve disk performance B handle interrupts C increase the capacity of main memory D speed up main memory Read operations
Computer-Organization       Cache       UGC NET CS 2011 June-Paper-2
Question 75 Explanation:
Block or Buffer caches are used to speed up main memory Read operations. By reading the information from disk only once and then keeping it in memory until no longer needed, one can speed up all but the first read. This is called disk buffering, and the memory used for the purpose is called the buffer cache.
 Question 76
The performance of a file system depends upon the cache hit rate. If it takes 1 msec to satisfy a request from the cache but 10 msec to satisfy a request if a disk read is needed, then the mean time (ms) required for a hit rate ‘h’ is given by :
 A 1 B h + 10 (1 - h) C (1 - h) + 10 h D 10
Computer-Organization       Cache       UGC NET CS 2007-Dec-Paper-2
Question 76 Explanation:
Mean time = (cache hit rate)(cache access time) + (cache miss rate)(disk access time)
Mean time = (h * 1) + (1 - h)(10)
Mean time = h + 10(1 - h)
 Question 77
Cache memory is :
 A High-Speed Register B Low-Speed RAM C Non-Volatile RAM D High-speed RAM
Computer-Organization       Cache       UGC NET CS 2007 June-Paper-2
Question 77 Explanation:
Option-A is ruled out because cache is not a register; registers are faster than cache memory.
Options B and C are ruled out because cache is neither low-speed nor non-volatile.
Option-D is the most appropriate answer: cache memory is high-speed RAM (SRAM).
 Question 78
In _____ method, the word is written to the block in both the cache and main memory, in parallel.
 A Write through B Write back C Write protected D Direct mapping
Computer-Organization       Cache       UGC NET CS 2016 July- paper-3
Question 78 Explanation:
→ A cache memory contains copies of data stored in the main memory. When data in the cache changes (e.g., a modification due to a processor write), the contents of the main memory and cache memory cells with the same address differ. To eliminate this lack of data coherency, two methods are applied:
write through: the new cache contents are written to the main memory immediately after the write to the cache memory, in parallel;
write back: the new cache contents are not written to the main memory immediately after the change, but only when the given block of data is replaced by a new block fetched from the main memory or an upper-level cache. After a data write to the cache, only state bits are changed in the modified block, indicating that the block has been modified (a dirty block).
The write back updating is more time efficient, since block cells that were modified many times while in the cache are updated in the main memory only once.
 Question 79
Which of the following statements related to Cache memory organization is FALSE?
 A In "write through" approach. main memory content is always invalid. B In "write back" approach, updates are made only in the cache and it minimizes memory writes. C Least Recently Used (LRU) replacement algorithm can be used in associative and set associative mappings. D For direct mapping, no replacement algorithm is needed.
Computer-Organization       Cache       CIL Part - B
Question 79 Explanation:
→ Write through is a storage method in which data is written into the cache and the corresponding main memory location at the same time.
→ Hence statement (A) is false: with write through, main memory content is always valid (up to date), not invalid. The cached data allows fast retrieval on demand, while the same data in main memory ensures that nothing is lost if a crash, power failure, or other system disruption occurs.
 Question 80
Consider a cache memory organization with m lines in which the cache is divided into v sets, each of which consists of k lines. The set associative mapping technique reduces to direct mapping when:
 A v=m and k=m B v=1 and k=m C v=m and k=1 D v=1 and k=1
Computer-Organization       Cache       CIL Part - B
Question 80 Explanation:
Direct mapping is the special case of set-associative mapping with one line per set, i.e., k = 1 and v = m.
 Question 81
Which of the following is an efficient method of cache updating?
 A Snoopy writes B Write through C Write within D Buffered write
Computer-Organization       Cache       ISRO CS 2020       Video-Explanation
 Question 82
How many total bits are required for a direct-mapped cache with 128 KB of data and 1 word block size, assuming a 32-bit address and 1 word size of 4 bytes?
 A 2 Mbits B 1.7 Mbits C 2.5 Mbits D 1.5 Mbits
Computer-Organization       Cache       ISRO CS 2020       Video-Explanation
Question 82 Explanation:
Size of a cache line = block size = 4 bytes
Number of lines in the cache = cache size / line size = 128KB / 4 bytes = 32K = 2^15
In a direct mapped cache the address is split into 3 fields (TAG, LINE, OFFSET)
Here OFFSET is the number bits needed to identify a word within the cache line.
As line size = word size, we don’t need any bits to identify the word within a block so OFFSET = 0.
As there are 2^15 lines in the cache, LINE needs 15 bits.
From the 32 bits of physical address, TAG bits = 32 - 15 = 17 bits.
TAG directory size = number of cache lines * TAG bits = 2^15 * 17 = 544 Kbits
Cache data size = 128 KB = 128 * 8 Kbits = 1024 Kbits
Total cache size = TAG directory size + cache data size = 544 K + 1024 K = 1568 Kbits ≈ 1.53 Mbits
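The same totals fall out of a few lines of Python (a sketch under the question's convention that OFFSET = 0; variable names are ours):

```python
cache_bytes, block_bytes, addr_bits = 128 * 1024, 4, 32
lines = cache_bytes // block_bytes           # 2**15 lines, one word each
line_bits = lines.bit_length() - 1           # 15 bits to select a line
tag_bits = addr_bits - line_bits             # 17 (OFFSET = 0 as argued above)

tag_dir_kbits = lines * tag_bits / 1024      # 544.0 Kbits of tag storage
data_kbits = cache_bytes * 8 / 1024          # 1024.0 Kbits of data
print((tag_dir_kbits + data_kbits) / 1024)   # 1.53125, i.e. ~1.5 Mbits
```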
 Question 83
When a cache memory is 30 times faster than main memory/RAM and the cache hit ratio is 90%, the speed up gained using the cache is approximately -----------
 A 10 times B 7.7 times C 2.7 times D 27 times
Computer-Organization       Cache       APPSC-2016-DL-CS
Question 83 Explanation:
Let the cache access time be t, so the main memory access time is 30t.
Average access time with cache = 0.9(t) + 0.1(30t) = 3.9t
Speed up = 30t / 3.9t ≈ 7.7 times
 Question 84

The principle of locality of reference justifies the use of

 A virtual memory B interrupts C cache memory D secondary memory
Computer-Organization       Cache       APPSC-2016-DL-CA
Question 84 Explanation:
The principal of locality of reference justifies the use of cache memory.
Locality of reference, also known as the principle of locality, is the tendency of a processor to access the same set of memory locations repetitively over a short period of time. There are two basic types of reference locality – temporal and spatial locality.
 Question 85

The main memory of a computer has 2cm blocks while the cache has 2c blocks. If cache uses set associative mapping scheme with two blocks per set, then the block k of the main memory maps to set:

 A (k mod m) of the cache B (k mod c) of the cache C (k mod 2c) of the cache D (k mod 2cm) of the cache
Computer-Organization       Cache       CIL 2020
Question 85 Explanation:
No. of sets are = No. of blocks in cache/no. of blocks per set = 2c/2 = c
Therefore the kth block of main memory maps to k mod c of the cache.
 Question 86

Principle of locality is used in ___

 A Registers B DMA C Cache Memory D Interrupt
Computer-Organization       Cache       CIL 2020
Question 86 Explanation:
In computer science, locality of reference, also known as the principle of locality, is the tendency of a processor to access the same set of memory locations repetitively over a short period of time. There are two basic types of reference locality – temporal and spatial locality. Temporal locality refers to the reuse of specific data, and/or resources, within a relatively small time duration. Spatial locality (also termed data locality) refers to the use of data elements within relatively close storage locations. Sequential locality, a special case of spatial locality, occurs when data elements are arranged and accessed linearly, such as, traversing the elements in a one-dimensional array.
Principle of locality is used in cache.
 Question 87
A computer system has 4k word cache organized in a block-set-associative manner, with 4 blocks per set, 64 words per block. The number of bits in the SET and WORD fields of the main memory address format is
 A 15, 4 B 6, 4 C 7, 2 D 4, 6
Computer-Organization       Cache       TNPSC-2012-Polytechnic-CS
Question 87 Explanation:
Block size = 64 words = 2^6 words
So, word (offset) bits = 6
No. of blocks in the cache = 2^12 / 2^6 = 2^6
No. of sets in the cache = 2^6 / 2^2 = 2^4
So, set bits = 4
 Question 88
A page table is maintained partially in cache memory with a hit ratio of 80%. Given the following, what is the effective access time? Cache lookup takes 5 nanosec and memory access time is 100 nanosec.
 A 45 nanosec B 125 nanosec C 25 nanosec D 105 nanosec
Computer-Organization       Cache       APPSC-2012-DL-CS
Question 88 Explanation:
EAT = hit ratio * cache access time + miss ratio * (cache access time + memory access time)
= 0.8 * 5 + 0.2 * (5 + 100)
= 4 + 21
= 25 ns
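A two-line Python check of this arithmetic (variable names are illustrative):

```python
h, cache_ns, mem_ns = 0.8, 5, 100
# hit: cache lookup only; miss: cache lookup plus memory access
eat = h * cache_ns + (1 - h) * (cache_ns + mem_ns)
print(round(eat, 2))  # 25.0
```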
 Question 89
The disadvantage of write back strategy in cache is that :
 A It generates repeated memory traffic B It creates a write mechanism whenever there is a write operation to cache C Portions of main memory may be invalid D It requires local cache memory attached to every CPU in a multi processor environment
Computer-Organization       Cache       TNPSC-2017-Polytechnic-CS
Question 89 Explanation:
With write back, updates are made only in the cache until the block is replaced, so portions of main memory may hold stale (invalid) data. There is also a data-availability risk: if the cache fails before the data is written back to main memory, those updates are lost.
 Question 90

A computer has a cache, main memory, and a disk used for virtual memory. If a referenced word is in the cache, 20 ns are required to access it. If it is in main memory but not in the cache, 60 ns are needed to load it into the cache, and then the reference is started again. If the word is not in main memory, 12 ms (12 × 10^6 ns) are required to fetch the word from disk, followed by 60 ns to copy it to the cache, and then the reference is started again. The cache hit ratio is 0.9 and the main memory hit ratio is 0.6.

What is the average time in nanoseconds(ns) required to access a referenced word on this system?
 A 20 B 80 C 1200089 D 480026
Computer-Organization       Cache       HCU PHD CS 2018 December
Question 90 Explanation:
Average access time = 0.9(20) + (0.1)(0.6)(60 + 20) + (0.1)(0.4)(12 × 10^6 + 60 + 20)
= 18 + 4.8 + 480,003.2
≈ 480,026 ns
 Question 91
A computer has a 256 KByte, 4-way set associative, write back data cache with block size of 32 Bytes. The processor sends 32 bit addresses to the cache controller. Each cache tag directory entry contains, in addition to address tag, 2 valid bits, 1 modified bit and 1 replacement bit. The number of bits in the tag field of an address is
 A 11 B 14 C 16 D 27
Computer-Organization       Cache       HCU PHD CS MAY 2017
Question 91 Explanation:
The 32-bit address is divided as follows:
Offset bits = log2(32 B) = 5
No. of cache lines = 256 KB / 32 B = 2^13; no. of sets = 2^13 / 4 = 2^11, so set index = 11 bits
Tag bits = 32 - 11 - 5 = 16
 Question 92
A 64kB direct mapped cache has 16 byte blocks. If the address is of 32 bit, how many bits are used for tag, index and offset in this cache?
 A 16,13,4 B 12,16,4 C 16,12,4 D 16,14,4
Computer-Organization       Cache       HCU PHD CS 2018 June
Question 92 Explanation:
For offset bits: block size = 16 bytes = 2^4 bytes
So, offset bits = 4
For index bits:
Cache size / block size = no. of blocks = 2^16 / 2^4 = 2^12
So, index bits = 12
Now tag bits,
32 - (offset bits + index bits)
= 32 - (4 + 12)
= 16
 Question 93
A computer system has a RAM of 4 GB and a cache of 512 MB. Assuming that the least significant bit is b0, the cache location using direct mapping is given by
 A the bits b31. . .b4 B the bits b2. . .b0 C the bits b31.. .b29 D the bits b28. . .b0
Computer-Organization       Cache       HCU PHD CS MAY 2015
Question 93 Explanation:
Since the size of the cache is 512 MB = 2^29 B,
the cache location is determined by the 29 least significant bits of the 32-bit address (the size of RAM is 4 GB = 2^32 B). So the cache location using direct mapping is given by the bits b28 . . . b0.
 Question 94
The hit ratio of cache memory is the percentage of accesses (reads and writes) for which data are found in the cache. Write-through and write-back are two main policies for memory update. Write-allocation is a policy whereby a cache line is allocated and loaded on a write miss. If it is assumed that write-allocation is always used, which of the following is true?
 A Write-back usually results in a better hit ratio than write-through B Write through usually results in a better hit ratio than write-back C The percentage of write operations resulting in a main memory operation will never be larger for Write-back than for Write-through D Write-through can only be employed in set-associative memory
Computer-Organization       Cache       HCU PHD CS MAY 2016
Question 94 Explanation:
The percentage of write operations resulting in a main memory operation will never be larger for write-back than for write-through, because in write-back the data is written to memory only when the cache line is replaced, whereas in write-through the data is simultaneously updated in both cache and memory.
 Question 95
If flexibility is defined as having maximum choice in cache placement, which of the following is correctly ordered in ascending order of flexibility?
 A Direct Mapping, Set-Associative Mapping, Associative Mapping B Set-Associative Mapping, Direct Mapping, Associative Mapping C Associative Mapping, Set-Associative Mapping, Direct Mapping D None of the above.
Computer-Organization       Cache       HCU PHD CS MAY 2014
Question 95 Explanation:
In direct mapping there is only one possible line for each block.
In set-associative mapping with set size k, there are k choices.
In associative mapping a block can be placed in any line of the cache.
Hence the ascending order of flexibility is:
Direct Mapping, Set-Associative Mapping, Associative Mapping.
 Question 96
If Cache Size is 64KB, Block size is 32B and the cache is Two-Way Set Associative. For a 32-bit physical address, the division between Block Offset, Index and Tag are:
 A 10 bits, 10 bits, 10 bits B 17 bits, 32 bits, 16 bits C 32 bits, 64 bits, 22 bits D 5 bits, 10 bits , 17 bits
Computer-Organization       Cache       HCU PHD CS MAY 2014
Question 96 Explanation:
Offset bits = log2(32 B) = 5
No. of lines = 64 KB / 32 B = 2^11; no. of sets = 2^11 / 2 = 2^10, so index bits = 10
∴ Tag field bits = 32 - (10 + 5) = 17
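Generalizing, the field split for any k-way set-associative cache can be sketched in Python (the helper name is ours):

```python
import math

def sa_fields(addr_bits, cache_bytes, block_bytes, ways):
    """Return (tag, index, offset) bit counts for a k-way set-associative cache."""
    offset = int(math.log2(block_bytes))
    sets = cache_bytes // (block_bytes * ways)   # lines grouped into sets of `ways`
    index = int(math.log2(sets))
    return addr_bits - index - offset, index, offset

print(sa_fields(32, 64 * 1024, 32, 2))  # (17, 10, 5)
```

With ways = 1 this reduces to the direct-mapped split, and with ways equal to the total number of lines it becomes fully associative (index = 0), matching Question 1.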
 Question 97
If flexibility is defined as having maximum choice in cache placement, which of the following is correctly ordered in ascending order of flexibility?
 A Direct Mapping, set-Associative Mapping, Associative Mapping B set-Associative Mapping, Direct Mapping, Associative Mapping C Associative Mapping, set-Associative Mapping, Direct Mapping D None of the Above
Computer-Organization       Cache       HCU PHD CS MAY 2012
Question 97 Explanation:
In direct mapping each memory block maps to one fixed cache line.
In set-associative mapping each block maps to a fixed set containing some number of lines, so the block can be placed in any of the lines inside that specific set.
In associative mapping a block can be placed in any available line.
So the correct ascending order of flexibility is:
Direct Mapping, Set-Associative Mapping, Associative Mapping
 Question 98
Consider a machine with a byte-addressable main memory of 2^16 bytes and a block size of 8 bytes. Assume that a direct mapped cache consisting of 32 lines is used with this machine. How many bits will there be in the tag, line and word fields of the main memory address format?
 A 8,5,3 B 8,6,2 C 7,5,4 D 7,6,3
Computer-Organization       Cache       UGC NET JRF November 2020 Paper-2
Question 98 Explanation:
Main memory = 2^16 bytes, so the address is 16 bits.
Block size = 8 bytes = 2^3 bytes, so the word field = 3 bits.
32 lines = 2^5 lines, so the line field = 5 bits.
Tag = 16 - 5 - 3 = 8 bits.
 Question 99
A direct mapped cache is of size 32 KB and has block size 32 Bytes. CPU also generates 32 bit address. Number of bits needed for indexing the cache:
 A 14 B 15 C 10 D 17
Computer-Organization       Cache       NIC-NIELIT Scientist-B 2020
Question 99 Explanation:
No. of cache lines = cache size / block size = 32 KB / 32 B = 2^10, so 10 bits are needed for indexing the cache.
There are 99 questions to complete.