Memory-Management

Question 1

Assume that in a certain computer, the virtual addresses are 64 bits long and the physical addresses are 48 bits long. The memory is word addressable. The page size is 8 kB and the word size is 4 bytes. The Translation Look-aside Buffer (TLB) in the address translation path has 128 valid entries. At most how many distinct virtual addresses can be translated without any TLB miss?

A
8×2^20
B
4×2^20
C
16×2^10
D
256×2^10
       Operating-Systems       Memory-Management       GATE 2019       Video-Explanation
Question 1 Explanation: 
The TLB has 128 = 2^7 valid entries, so it can map 2^7 pages.
Each page is 8 kB and a word is 4 bytes, so each page holds 8 kB / 4 B = 2^11 words (the memory is word addressable).
So, the total number of distinct virtual addresses that can be translated without any TLB miss
= 2^7 × 2^11 = 2^18 = 256×2^10.
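As a quick cross-check of this arithmetic, here is a minimal Python sketch (not part of the original solution; the variable names are illustrative):

```python
# Quick check: how many distinct word addresses can 128 TLB entries cover?
tlb_entries = 128                 # 2^7 valid TLB entries
page_size_bytes = 8 * 1024        # 8 kB pages
word_size_bytes = 4               # word-addressable memory, 4-byte words

words_per_page = page_size_bytes // word_size_bytes   # 2^11 = 2048 words
total_addresses = tlb_entries * words_per_page        # 2^18 addresses

print(total_addresses)                 # 262144
print(total_addresses == 256 * 2**10)  # True -> option D
```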
Question 2

Consider six memory partitions of sizes 200 KB, 400 KB, 600 KB, 500 KB, 300 KB and 250 KB, where KB refers to kilobyte. These partitions need to be allotted to four processes of sizes 357 KB, 210KB, 468 KB and 491 KB in that order. If the best fit algorithm is used, which partitions are NOT allotted to any process?

A
200 KB and 300 KB
B
200 KB and 250 KB
C
250 KB and 300 KB
D
300 KB and 400 KB
       Engineering-Mathematics       Memory-Management       GATE 2015 [Set-2]
Question 2 Explanation: 

Since the best fit algorithm is used, each process is placed in the smallest partition that can hold it:
357 KB is placed in the 400 KB partition,
210 KB is placed in the 250 KB partition,
468 KB is placed in the 500 KB partition,
491 KB is placed in the 600 KB partition.
So, the partitions of 200 KB and 300 KB are NOT allotted to any process.
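The allocation above can be reproduced with a minimal best-fit sketch in Python (not from the original solution; each fixed partition is assumed to hold at most one process, as in the question):

```python
# Minimal best-fit simulation for the given partitions and processes.
partitions = [200, 400, 600, 500, 300, 250]          # sizes in KB
processes = [357, 210, 468, 491]                      # arrival order, sizes in KB

allocated = {}                                        # process size -> partition size used
used = [False] * len(partitions)                      # each partition holds at most one process

for p in processes:
    # best fit: choose the smallest unused partition that is large enough
    candidates = [i for i, size in enumerate(partitions) if not used[i] and size >= p]
    best = min(candidates, key=lambda i: partitions[i])
    used[best] = True
    allocated[p] = partitions[best]

print(allocated)   # {357: 400, 210: 250, 468: 500, 491: 600}
print([partitions[i] for i in range(len(partitions)) if not used[i]])   # [200, 300]
```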
Question 3

Consider a paging hardware with a TLB. Assume that the entire page table and all the pages are in the physical memory. It takes 10 milliseconds to search the TLB and 80 milliseconds to access the physical memory. If the TLB hit ratio is 0.6, the effective memory access time (in milliseconds) is  _________.

A
122
B
123
C
124
D
125
       Operating-Systems       Memory-Management       GATE 2014 [Set-3]
Question 3 Explanation: 
Tavg = TLB access time + miss ratio of TLB × memory access time + memory access time
= 10 + 0.4 × 80 + 80
= 10 + 32 + 80
= 122 ms
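A tiny Python check of the same cost model (not part of the original solution; every access pays the TLB search plus one memory access, and a TLB miss pays one extra memory access for the page table):

```python
tlb = 10        # TLB search time (ms, as given in the question)
mem = 80        # physical memory access time (ms)
hit = 0.6       # TLB hit ratio

emat = tlb + mem + (1 - hit) * mem
print(emat)     # 122.0 ms
```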
Question 4

The memory access time is 1 nanosecond for a read operation with a hit in cache, 5 nanoseconds for a read operation with a miss in cache, 2 nanoseconds for a write operation with a hit in cache and 10 nanoseconds for a write operation with a miss in cache. Execution of a sequence of instructions involves 100 instruction fetch operations, 60 memory operand read operations and 40 memory operand write operations. The cache hit-ratio is 0.9.  The average memory access time (in nanoseconds) in executing the sequence of instructions is   __________.

A
1.68
B
1.69
C
1.70
D
1.71
       Operating-Systems       Memory-Management       GATE 2014 [Set-3]
Question 4 Explanation: 
Total operations = 100 instruction fetches + 60 memory operand reads + 40 memory operand writes
= 200 memory operations
Time for the 100 instruction fetches (treated as reads) = 90×1 ns + 10×5 ns = 140 ns
Time for the 60 memory operand reads = 0.9×60×1 ns + 0.1×60×5 ns = 54 ns + 30 ns = 84 ns
Time for the 40 memory operand writes = 0.9×40×2 ns + 0.1×40×10 ns = 72 ns + 40 ns = 112 ns
Total time for the 200 operations = 140 + 84 + 112 = 336 ns
∴ Average memory access time = 336 ns / 200 = 1.68 ns
Question 5

A computer uses 46-bit virtual address, 32-bit physical address, and a three-level paged page table organization. The page table base register stores the base address of the first-level table (T1), which occupies exactly one page. Each entry of T1 stores the base address of a page of the second-level table (T2). Each entry of T2 stores the base address of a page of the third-level table (T3). Each entry of T3 stores a page table entry (PTE). The PTE is 32 bits in size. The processor used in the computer has a 1 MB 16-way set associative virtually indexed physically tagged cache. The cache block size is 64 bytes.

What is the size of a page in KB in this computer?

A
2
B
4
C
8
D
16
       Operating-Systems       Memory-Management       GATE 2013
Question 5 Explanation: 
Let the page size be 2^p bytes.
Each page table entry is 4 bytes (given), so one page of a page table holds 2^p / 4 entries.
We know that the virtual address space covered equals
No. of entries × Page size,
so for the 3-level page table,
(2^p/4) × (2^p/4) × (2^p/4) × 2^p = 2^46
2^(4p) / 2^6 = 2^46
2^(4p) = 2^52
4p = 52
p = 13
∴ Page size = 2^13 B = 8 KB
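The same relation can be brute-forced with a short Python sketch (not part of the original solution; the constant names are illustrative):

```python
# Find the page size p (in bits) satisfying (2^p / 4)^3 * 2^p = 2^46,
# where 4 bytes is the given PTE size.
VAS_BITS = 46
PTE_SIZE = 4

for p in range(2, 32):
    entries_per_level = 2**p // PTE_SIZE
    if entries_per_level**3 * 2**p == 2**VAS_BITS:
        print(p, 2**p // 1024, "KB")   # prints: 13 8 KB
```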
Question 6

A computer uses 46-bit virtual address, 32-bit physical address, and a three-level paged page table organization. The page table base register stores the base address of the first-level table (T1), which occupies exactly one page. Each entry of T1 stores the base address of a page of the second-level table (T2). Each entry of T2 stores the base address of a page of the third-level table (T3). Each entry of T3 stores a page table entry (PTE). The PTE is 32 bits in size. The processor used in the computer has a 1 MB 16-way set associative virtually indexed physically tagged cache. The cache block size is 64 bytes.

What is the minimum number of page colours needed to guarantee that no two synonyms map to different sets in the processor cache of this computer?

A
2
B
4
C
8
D
16
       Operating-Systems       Memory-Management       GATE 2013
Question 6 Explanation: 
Architecture of physically indexed cache:

Architecture of virtual indexed physically tagged (VIPT):

VIPT cache and aliasing effect and synonym.
Alias: Same physical address can be mapped to multiple virtual addresses.
Synonym: Different virtual addresses mapped to same physical address (for data sharing).
So these synonyms should be in same set to avoid write-update problems.
In our problem, VA = 46 bits.

Cache: 1 MB, 16-way set associative, 64-byte blocks ⇒ number of sets = 2^20 / (16 × 64) = 2^10, so 10 set-index bits + 6 block-offset bits = 16 bits are used for indexing into the cache.
To have two synonyms in the same set, the lower 16 index bits must be the same in the PA and the VA.
Assume that physical pages are coloured and each set only holds pages of one colour, so that any synonyms fall in the same set.
Since the page size = 8 KB ⇒ 13 offset bits.
These 13 bits are not translated during VA→PA, so 13 of the 16 index bits are automatically the same; the remaining 16 − 13 = 3 bits must now be made the same.
3 bits give 2^3 = 8 combinations, which can map to different sets, so we need 8 different colours for the pages.
In a physically indexed cache, indexing is done with physical address bits, whereas in a virtually indexed cache the index comes from the (offset + set) bits of the virtual address. In a physically indexed cache the mapping is one-to-one (one index maps to one block), but in a VIPT cache there are extra, translated index bits, so the mapping is not one-to-one. These extra bits must be handled so that two virtual addresses referring to the same physical page get the same colour and are placed in the same set, avoiding write-update problems.
Question 7

Let the page fault service time be 10 ms in a computer with average memory access time being 20 ns. If one page fault is generated for every 10^6 memory accesses, what is the effective access time for the memory?

A
21 ns
B
30 ns
C
23 ns
D
35 ns
       Operating-Systems       Memory-Management       GATE 2011
Question 7 Explanation: 
p = page fault rate
EAT = p × page fault service time + (1 − p) × memory access time
= (1/10^6) × 10×10^6 ns + (1 − 1/10^6) × 20 ns ≈ 10 + 20 ≈ 30 ns
Question 8

In a paged segmented scheme of memory management, the segment table itself must have a page table because

A
the segment table is often too large to fit in one page
B
each segment is spread over a number of pages
C
segment tables point to page table and not to the physical locations of the segment
D
the processor’s description base register points to a page table
E
Both A and B
       Operating-Systems       Memory-Management       GATE 1995
Question 8 Explanation: 
The segment table is often too large to fit in one page, so the segment table itself is divided into pages and a page table is maintained for it.
Segment paging is different from paged segmentation.
Question 9

The capacity of a memory unit is defined by the number of words multiplied by the number of bits/word. How many separate address and data lines are needed for a memory of 4K × 16?

A
10 address, 16 data lines
B
11 address, 8 data lines
C
12 address, 16 data lines
D
12 address, 12 data lines
       Computer-Organization       Memory-Management       GATE 1995
Question 9 Explanation: 
Memory size = 2^m × n, where
m = no. of address lines
n = no. of data lines
Given, 4K × 16 = 2^12 × 16
Address lines = 12
Data lines = 16
Question 10

A computer installation has 1000k of main memory. The jobs arrive and finish in the following sequences.

 Job 1 requiring 200k arrives
 Job 2 requiring 350k arrives
 Job 3 requiring 300k arrives
 Job 1 finishes
 Job 4 requiring 120k arrives 
 Job 5 requiring 150k arrives
 Job 6 requiring 80k arrives 

(a) Draw the memory allocation table using Best Fit and First fit algorithms.
(b) Which algorithm performs better for this sequence?

A
Theory Explanation.
       Operating-Systems       Memory-Management       GATE 1995
Question 11

Consider allocation of memory to a new process. Assume that none of the existing holes in the memory will exactly fit the process’s memory requirement. Hence, a new hole of smaller size will be created if allocation is made in any of the existing holes. Which one of the following statements is TRUE?

A
The hole created by worst fit is always larger than the hole created by first fit.
B
The hole created by best fit is never larger than the hole created by first fit.
C
The hole created by first fit is always larger than the hole created by next fit.
D
The hole created by next fit is never larger than the hole created by best fit.
       Operating-Systems       Memory-Management       GATE 2020
Question 11 Explanation: 
The hole created by best fit is never larger than the hole created by first fit, because best fit chooses the smallest available hole that can accommodate the process, whereas first fit and next fit do not consider the sizes of the available holes.
Question 12

Consider a paging system that uses a 1-level page table residing in main memory and a TLB for address translation. Each main memory access takes 100 ns and TLB lookup takes 20 ns. Each page transfer to/from the disk takes 5000 ns. Assume that the TLB hit ratio is 95%, page fault rate is 10%. Assume that for 20% of the total page faults, a dirty page has to be written back to disk before the required page is read in from disk. TLB update time is negligible. The average memory access time in ns (round off to 1 decimal places) is ______.

A
154.5 ns
       Operating-Systems       Memory-Management       GATE 2020
Question 12 Explanation: 
M=100ns
T=20ns
D=5000ns
h=0.95
p=0.1, 1-p=0.9
d=0.2, 1-d=0.8
EMAT = h×(T+M) + (1−h)×[T + (1−p)×2M + p×[(1−d)×(D+M) + d×(2D+M)]]
= 0.95×(20+100) + 0.05×[20 + 0.9×200 + 0.1×[0.8×(5000+100) + 0.2×(10000+100)]]
= 114 + 40.5
= 154.5 ns
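The 154.5 ns figure can be reproduced with a short Python sketch of the same cost model (not from the original solution; variable names are illustrative):

```python
# Reproduce the 154.5 ns figure with the cost model used above.
M, T, D = 100, 20, 5000       # memory access, TLB lookup, disk page transfer (ns)
h, p, d = 0.95, 0.10, 0.20    # TLB hit ratio, page fault rate, dirty-page fraction

hit_cost = T + M                                   # TLB hit: lookup + memory access
fault_cost = (1 - d) * (D + M) + d * (2 * D + M)   # page fault: (write back +) read in + access
miss_cost = T + (1 - p) * 2 * M + p * fault_cost   # TLB miss: lookup + page-table access ...

emat = h * hit_cost + (1 - h) * miss_cost
print(emat)   # 154.5
```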
Question 13

A certain moving arm disk storage, with one head, has the following specifications.

 Number of tracks/recording surface = 200
 Disk rotation speed = 2400 rpm
 Track storage capacity = 62,500 bits 

The average latency of this device is P msec and the data transfer rate is Q bits/sec.
Write the value of P and Q.

A
P = 12.5, Q = 2.5×10^6
       Operating-Systems       Memory-Management       GATE 1993
Question 13 Explanation: 
RPM = 2400
So, the disk makes 2400 rotations per minute.
Average latency is the time for half a rotation
= 0.5 × 60/2400 s
= 12.5 ms
In one full rotation, the entire data in a track can be transferred. Track storage capacity = 62,500 bits.
So, the data transfer rate
= 62500 × 2400/60
= 2.5 × 10^6 bps
So,
P = 12.5, Q = 2.5×10^6
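A minimal Python check of both values (not part of the original solution):

```python
# Average rotational latency and data transfer rate for the given drive.
rpm = 2400
track_capacity_bits = 62_500

rotation_time = 60 / rpm                       # 0.025 s per rotation
avg_latency_ms = 0.5 * rotation_time * 1000
transfer_rate_bps = track_capacity_bits / rotation_time

print(avg_latency_ms)       # 12.5       -> P
print(transfer_rate_bps)    # 2500000.0  -> Q = 2.5 x 10^6 bits/sec
```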
Question 14

(a) The access times of the main memory and the Cache memory, in a computer system, are 500 n sec and 50 n sec, respectively. It is estimated that 80% of the main memory request are for read the rest for write. The hit ratio for the read access only is 0.9 and a write-through policy (where both main and cache memories are updated simultaneously) is used. Determine the average time of the main memory.
(b) Three devices A, B and C are corrected to the bus of a computer, input/output transfers for all three devices use interrupt control. Three interrupt request lines INTR1, INTR2 and INTR3 are available with priority of INTR1 > priority of INTR2 > priority of INTR3.
Draw a schematic of the priority logic, using an interrupt mask register, in which Priority of A > Priority of B > Priority of C.

A
Theory Explanation.
       Operating-Systems       Memory-Management       GATE 1992
Question 15

Consider a 2-way set associative cache memory with 4 sets and total 8 cache blocks (0-7) and a main memory with 128 blocks (0-127). What memory blocks will be present in the cache after the following sequence of memory block references if LRU policy is used for cache block replacement. Assuming that initially the cache did not have any memory block from the current job?

 0 5 3 9 7 0 16 55 
A
0 3 5 7 16 55
B
0 3 5 7 9 16 55
C
0 5 7 9 16 55
D
3 5 7 9 16 55
       Operating-Systems       Memory-Management       GATE 2005-IT
Question 15 Explanation: 
The cache is 2-way set associative, so each set can hold 2 blocks at a time; with 4 sets, memory block b maps to set b mod 4.
So,

Since each set has only 2 places, block 3 is thrown out of set 3 when block 55 arrives, as it is the least recently used block there. So the final content of the cache will be
0 5 7 9 16 55
Hence, answer is (C).
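The reference trace can be simulated with a short Python sketch of a 2-way set associative LRU cache (not from the original solution; the data structure choice is illustrative):

```python
from collections import OrderedDict

# 2-way set associative cache with 4 sets and LRU replacement;
# a memory block b maps to set b % 4.
NUM_SETS, WAYS = 4, 2
sets = [OrderedDict() for _ in range(NUM_SETS)]   # ordered: oldest entry first

for block in [0, 5, 3, 9, 7, 0, 16, 55]:
    s = sets[block % NUM_SETS]
    if block in s:
        s.move_to_end(block)          # hit: mark as most recently used
    else:
        if len(s) == WAYS:
            s.popitem(last=False)     # evict the least recently used block
        s[block] = True

print(sorted(b for s in sets for b in s))   # [0, 5, 7, 9, 16, 55]
```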
Question 16

A disk has 8 equidistant tracks. The diameters of the innermost and outermost tracks are 1 cm and 8 cm respectively. The innermost track has a storage capacity of 10 MB.
What is the total amount of data that can be stored on the disk if it is used with a drive that rotates it with (i) Constant Linear Velocity (ii) Constant Angular Velocity?

A
(i) 80 MB (ii) 2040 MB
B
(i) 2040 MB (ii) 80 MB
C
(i) 80 MB (ii) 360 MB
D
(i) 80 MB (ii) 360 MB
       Operating-Systems       Memory-Management       GATE 2005-IT
Question 16 Explanation: 
Constant linear velocity:
Diameter of inner track = d = 1 cm
Circumference of inner track
= 2 * 3.14 * d/2
= 3.14 cm
Storage capacity = 10 MB (given)
Sum of the circumferences of all 8 equidistant tracks
= 2 * 3.14 * (0.5+1+1.5+2+2.5+3+3.5+4)
= 113.04 cm
Here, 3.14 cm holds 10 MB,
therefore 1 cm holds 10/3.14 ≈ 3.18 MB.
So, 113.04 cm holds
113.04 * 3.18 ≈ 360 MB
So, the total amount of data that can be stored on the disk with CLV ≈ 360 MB.
For constant angular velocity:
In case of CAV, the disk rotates at a constant angular speed. Same rotation time is taken by all the tracks.
Total amount of data that can be stored on the disk
= 8 * 10 = 80 MB
Question 17

A disk has 8 equidistant tracks. The diameters of the innermost and outermost tracks are 1 cm and 8 cm respectively. The innermost track has a storage capacity of 10 MB.
If the disk has 20 sectors per track and is currently at the end of the 5th sector of the inner-most track and the head can move at a speed of 10 meters/sec and it is rotating at constant angular velocity of 6000 RPM, how much time will it take to read 1 MB contiguous data starting from the sector 4 of the outer-most track?

A
13.5 ms
B
10 ms
C
9.5 ms
D
20 ms
       Operating-Systems       Memory-Management       GATE 2005-IT
Question 17 Explanation: 
The radius of the innermost track (where the head currently is) is 0.5 cm and the radius of the outermost track is 4 cm.
So, the head has to seek (4 − 0.5) = 3.5 cm.
The head moves 10 m in 1 s,
so 1 m takes 1/10 s,
100 cm take 1/(10×100) s,
and 3.5 cm take 3.5/1000 s = 3.5 ms.
So, the seek takes 3.5 ms.
Now, the angular velocity is constant and the head is at the end of the 5th sector. To reach the start of sector 4 it must pass over 18 sectors.
6000 rotations take 60000 ms,
so 1 rotation takes 10 ms (the time to traverse 20 sectors).
So, traversing 18 sectors takes 9 ms.
In 10 ms (one rotation), 10 MB of the track can be read,
so 1 MB can be read in 1 ms.
∴ Total time = 3.5 + 9 + 1 = 13.5 ms
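A minimal Python sketch of the same seek + rotational delay + transfer breakdown (not from the original solution; variable names are illustrative):

```python
# Seek + rotational delay + transfer time for the scenario above.
seek_distance_cm = 4.0 - 0.5                 # head moves from radius 0.5 cm to 4 cm
head_speed_cm_per_ms = 10 * 100 / 1000       # 10 m/s = 1 cm/ms
seek_ms = seek_distance_cm / head_speed_cm_per_ms          # 3.5 ms

rotation_ms = 60_000 / 6000                  # 10 ms per rotation at 6000 RPM
sectors_to_skip = 18                         # from end of sector 5 to start of sector 4
rotational_delay_ms = rotation_ms * sectors_to_skip / 20   # 9 ms

read_ms = 1 / (10 / rotation_ms)             # 10 MB per rotation -> 1 MB in 1 ms

print(seek_ms + rotational_delay_ms + read_ms)   # 13.5
```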
Question 18

Which one of the following is NOT shared by the threads of the same process?

A
Stack
B
Address Space
C
File Descriptor Table
D
Message Queue
       Operating-Systems       Memory-Management       GATE 2004-IT
Question 18 Explanation: 
Threads do not share the stack: each thread has its own stack so that it can maintain its own function-call sequence.
Question 19

Consider a pipeline processor with 4 stages S1 to S4. We want to execute the following loop:
for (i=1; i<=1000; i++)
{ I1, I2, I3, I4 }
where the time taken (in ns) by instructions I1 to I4 for stages S1 to S4 are given below:

       S1   S2   S3   S4
I1:    1    2    1    2
I2:    2    1    2    1
I3:    1    1    2    1
I4:    2    1    2    1 
The output of I1 for i=2 will be available after

A
11 ns
B
12 ns
C
13 ns
D
28 ns
       Operating-Systems       Memory-Management       GATE 2004-IT
Question 19 Explanation: 

So, total time would be 13 ns.
Question 20

The storage area of a disk has innermost diameter of 10 cm and outermost diameter of 20 cm. The maximum storage density of the disk is 1400 bits/cm. The disk rotates at a speed of 4200 RPM. The main memory of a computer has 64-bit word length and 1µs cycle time. If cycle stealing is used for data transfer from the disk, the percentage of memory cycles stolen for transferring one word is

A
0.5%
B
1%
C
5%
D
10%
       Operating-Systems       Memory-Management       GATE 2004-IT
Question 20 Explanation: 
Let y μs be the memory cycle time and x μs the time to transfer one word from the disk.
The percentage of memory cycles stolen is then
(y/x) × 100.
The recording density is maximum on the innermost track, so its capacity
= 2 × 3.14 × 5 × 1400 bits
= 3.14 × 14000 bits
Rotation time = 60/4200 s = 1/70 s
Therefore, the time required to read one 64-bit word is
(10^6 × 64)/(70 × 3.14 × 14000) μs ≈ 20.8 μs
As the memory cycle time given is 1 μs,
% of memory cycles stolen = (1/20.8) × 100 ≈ 5%
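The ≈5% figure can be verified with a small Python sketch (not from the original solution; variable names are illustrative):

```python
import math

# Percentage of memory cycles stolen while transferring one 64-bit word by cycle stealing.
density_bits_per_cm = 1400
inner_radius_cm = 5                       # innermost diameter 10 cm
rpm = 4200
word_bits = 64
cycle_time_us = 1.0

track_bits = 2 * math.pi * inner_radius_cm * density_bits_per_cm   # ~43982 bits per track
rotation_s = 60 / rpm                                              # 1/70 s
word_time_us = word_bits / (track_bits / rotation_s) * 1e6         # ~20.8 us per 64-bit word

print(cycle_time_us / word_time_us * 100)   # ~4.8 %, i.e. about 5%
```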
Question 21

A disk has 200 tracks (numbered 0 through 199). At a given time, it was servicing the request of reading data from track 120, and at the previous request, service was for track 90. The pending requests (in order of their arrival) are for track numbers

 30 70 115 130 110 80 20 25 
How many times will the head change its direction for the disk scheduling policies SSTF(Shortest Seek Time First) and FCFS (First Come Fist Serve)

A
2 and 3
B
3 and 3
C
3 and 4
D
4 and 4
       Operating-Systems       Memory-Management       GATE 2004-IT
Question 21 Explanation: 
SSTF service order: (90) 120 115 110 130 80 70 30 25 20
Direction changes at 120, 110 and 130 → 3 changes.
FCFS service order: (90) 120 30 70 115 130 110 80 20 25
Direction changes at 120, 30, 130 and 20 → 4 changes.
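A small Python sketch that counts the head reversals for both policies (not from the original solution; the helper name and tie-breaking rule are illustrative assumptions):

```python
# Count direction reversals of the head for SSTF and FCFS.
def direction_changes(order, previous):
    changes = 0
    direction = order[0] - previous          # sign gives the initial direction of motion
    for a, b in zip(order, order[1:]):
        step = b - a
        if (step > 0) != (direction > 0):
            changes += 1
        direction = step
    return changes

pending = [30, 70, 115, 130, 110, 80, 20, 25]
start, previous = 120, 90

# SSTF: repeatedly pick the pending request closest to the current head position.
sstf, remaining, pos = [], list(pending), start
while remaining:
    nxt = min(remaining, key=lambda t: abs(t - pos))
    sstf.append(nxt)
    remaining.remove(nxt)
    pos = nxt

print(direction_changes([start] + sstf, previous))      # 3
print(direction_changes([start] + pending, previous))   # 4  (FCFS serves in arrival order)
```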
Question 22
In the context of operating systems, which of the following statements is/are correct with respect to paging?
A
Page size has no impact on internal fragmentation.
B
Multilevel paging is necessary to support pages of different sizes.
C
Paging incurs memory overheads.
D
Paging helps solve the issue of external fragmentation.
       Operating-Systems       Memory-Management       GATE 2021 CS-Set-1
Question 22 Explanation: 
  1. False. Page size does impact internal fragmentation: a larger page size may lead to higher internal fragmentation.
  2. False. To support pages of different sizes, the instruction set architecture/MMU must support it; multilevel paging is not necessary for this.
  3. True. The page table has to be stored in main memory, which is a memory overhead.
  4. True. Paging avoids external fragmentation.
Question 23
The Operating System of a computer may periodically collect all the free memory space to form contiguous block of free space. This is called:
A
Concatenation
B
Garbage collection
C
Collision
D
Dynamic Memory Allocation
       Operating-Systems       Memory-Management       ISRO-2018       Video-Explanation
Question 23 Explanation: 
→ The Operating System of a computer may periodically collect all the free memory space to form a contiguous block of free space. This is called garbage collection
→ We can also use compaction to minimize the probability of external fragmentation.
→ In compaction, all the free partitions are made contiguous and all the loaded partitions are brought together.
Question 24
A computer has 1000 K of main memory. The jobs arrive and finish in the sequence
Job 1 requiring 200 K arrives
Job 2 requiring 350 K arrives
Job 3 requiring 300 K arrives
Job 1 finishes
Job 4 requiring 120 K arrives
Job 5 requiring 150 K arrives
Job 6 requiring 80 K arrives
Among the best fit and first fit, which performs better for this sequence?
A
First fit
B
Best fit
C
Both perform the same
D
None of the above
       Operating-Systems       Memory-Management       ISRO-2018       Video-Explanation
Question 24 Explanation: 
Main memory = 1000K
Job 1 requiring 200 K arrives
Job 2 requiring 350 K arrives
Job 3 requiring 300 K arrives and assuming continuous allocation:
Free memory = 1000 − (200 + 350 + 300) = 1000 − 850 = 150 K (up to this point first fit and best fit behave the same)
When Job 1 finishes, the free holes are 200 K and 150 K.
Case 1: First fit
Job 4 requiring 120 K arrives
Since 200 K will be the first slot, so Job 4 will acquire this slot only. Remaining memory = 200 – 120 = 80 K
Job 5 requiring 150 K arrives
It will acquire 150 K slot
Job 6 requiring 80 K arrives
It will occupy 80 K slot, so, all jobs will be allocated successfully.
Case 2: Best fit
Job 4 requiring 120 K arrives
It will occupy best fit slot which is 150 K. So, remaining memory = 150 − 120 = 30 K
Job 5 requiring 150 K arrives
It will occupy 200 K slot. So, free space = 200 − 150 = 50 K
Job 6 requiring 80 K arrives
There is no continuous 80 K memory free. So, it will not be able to allocate.
So, first fit is better.
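The two cases can be replayed with a minimal Python sketch (not from the original solution; it assumes each allocation simply shrinks the chosen hole and ignores coalescing, which matches the scenario above):

```python
# Compare first fit and best fit on the hole list that exists after Job 1 finishes.
def allocate(holes, jobs, policy):
    holes = list(holes)                       # remaining sizes of the free holes (KB)
    for job in jobs:
        fits = [i for i, h in enumerate(holes) if h >= job]
        if not fits:
            return False                      # this job cannot be placed
        i = fits[0] if policy == "first" else min(fits, key=lambda k: holes[k])
        holes[i] -= job                       # allocation leaves a smaller hole
    return True

holes_after_job1 = [200, 150]                 # 200 K hole left by Job 1, 150 K at the end
remaining_jobs = [120, 150, 80]               # Jobs 4, 5, 6

print(allocate(holes_after_job1, remaining_jobs, "first"))   # True  -> all jobs fit
print(allocate(holes_after_job1, remaining_jobs, "best"))    # False -> Job 6 cannot fit
```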
Question 25
Consider a logical address space of 8 pages of 1024 words mapped into memory of 32 frames. How many bits are there in the logical address?
A
13 bits
B
15 bits
C
14 bits
D
12 bits
       Operating-Systems       Memory-management       ISRO CS 2008
Question 25 Explanation: 
Logical address space = 8 pages of 1024 words each
Number of bits in the logical address = p (page bits) + d (offset bits)
= log_2 8 + log_2 1024 = 3 + 10 = 13 bits
Question 26
Let the page fault service time be 10 ms in a computer with average memory access time being 20 ns. If one page fault is generated for every 10^6 memory accesses, what is the effective access time for the memory?
A
21.4 ns
B
29.9 ns
C
23.5 ns
D
35.1 ns
       Operating-Systems       Memory-Management       ISRO-2016
Question 26 Explanation: 
EAT = p × page fault service time + (1 − p) × memory access time
= (1/10^6) × 10×10^6 ns + (1 − 1/10^6) × 20 ns ≈ 29.9 ns (same calculation as Question 7).
Question 27
A CPU generates 32-bit virtual addresses. The page size is 4 KB. The processor has a translation lookaside buffer (TLB) which can hold a total of 128 page table entries and is 4-way set associative. The minimum size of the TLB tag is:
A
11 bits
B
13 bits
C
15 bits
D
20 bits
       Operating-Systems       Memory-Management       ISRO-2016
Question 27 Explanation: 
Page size = 4 KB = 4 × 2^10 Bytes = 2^12 Bytes
Virtual Address = 32 bits
No. of bits needed for the virtual page number = 32 − 12 = 20
The TLB holds 128 page table entries and is 4-way set associative
⇒ number of sets = 128/4 = 32 = 2^5
→ 5 bits are needed to address a set.
→ The size of the TLB tag = 20 − 5 = 15 bits
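The same bit bookkeeping in a short Python sketch (not from the original solution; variable names are illustrative):

```python
import math

# Bits in the TLB tag for a 32-bit virtual address, 4 KB pages,
# and a 128-entry 4-way set associative TLB.
va_bits = 32
page_offset_bits = int(math.log2(4 * 1024))        # 12
vpn_bits = va_bits - page_offset_bits               # 20

tlb_sets = 128 // 4                                  # 32 sets
set_index_bits = int(math.log2(tlb_sets))            # 5

print(vpn_bits - set_index_bits)                     # 15 -> minimum TLB tag size
```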
Question 28
If the page size in a 32-bit machine is 4K bytes then the size of the page table is
A
1 M bytes
B
2 M bytes
C
4 M bytes
D
4 K bytes
       Operating-Systems       Memory-Management       ISRO CS 2011
Question 28 Explanation: 
→ Page size is the total space taken up by a page; page table entry size is the memory taken to index a page in the page table.
→ Size of logical address = 32 bits
→ Page size = 4K = 2^2 × 2^10 = 2^12 Bytes
→ Number of pages = logical address space / size of each page = 2^32 / 2^12 = 2^20
→ Page table size = number of pages × size of a page table entry (4 bytes assumed)
= 2^20 × 2^2
= 2^22 Bytes = 4 M Bytes
Question 29
In a system using a single processor, a new process arrives at the rate of six processes per minute and each such process requires seven seconds of service time. What is the CPU utilization?
A
70%
B
30%
C
60%
D
64%
       Operating-Systems       Memory-Management       ISRO CS 2011
Question 29 Explanation: 
From the given question,
the number of new processes arriving per minute = 6,
and each process requires 7 seconds of service time.
CPU busy time within a minute = 6×7 = 42 secs
Percentage of CPU utilization = (time spent serving processes / total time) × 100
= (42/60) × 100
= 70%
Question 30
Consider a 32-bit machine where four-level paging scheme is used. If the hit ratio to TLB is 98%, and it takes 20 nanosecond to search the TLB and 100 nanoseconds to access the main memory what is effective memory access time in nanoseconds?
A
126
B
128
C
122
D
120
       Operating-Systems       Memory-Management       ISRO CS 2011
Question 30 Explanation: 
TLB hit ratio (H) = 98%
TLB search time (T) = 20 ns
Memory access time (M) = 100 ns, and a four-level paging scheme is used.
Effective memory access time, EMAT = H×T + (1 − H)×(T + 4×M) + M
EMAT = 0.98×20 + 0.02×(20 + 400) + 100
= 19.6 + 8.4 + 100 = 128 ns
Question 31
Consider a logical address space of 8 pages of 1024 words each, mapped onto a physical memory of 32 frames. How many bits are there in the physical address and logical address respectively?
A
5, 3
B
10, 10
C
15, 13
D
15, 15
       Operating-Systems       Memory-Management       ISRO CS 2013
Question 31 Explanation: 
→ Number of pages = 8 = 2^3 (3 bits)
→ Each page consists of 1024 words = 2^10 (10 bits)
→ The logical address space consists of 8 pages of 1024 words each,
→ so the number of bits required for the logical address is 3 + 10 = 13 bits.
→ Total number of frames = 32 = 2^5 (5 bits).
→ The logical memory is mapped to physical memory, i.e., pages are mapped to frames.
→ Physical address = 5 (frame-number bits) + 10 (offset bits within a frame) = 15 bits
Question 32
In a 64-bit machine, with 2 GB RAM, and 8 KB page size, how many entries will be there in the page table if it is inverted?
A
2^18
B
2^20
C
2^33
D
2^51
       Operating-Systems       Memory-Management       ISRO CS 2013
Question 32 Explanation: 
Given data:
Physical memory size = 2 GB = 2^31 Bytes
Page size = 8 KB = 2^13 Bytes
Number of entries in the inverted page table = physical address space / page size = 2^31 / 2^13 = 2^18
Question 33
Consider the following segment table in the segmentation scheme:

What happens if the logical address requested is Segment ID 2 and offset 1000?
A
Fetches the entry at the physical address 2527 for segment Id2
B
A trap is generated
C
Deadlock
D
Fetches the entry at offset 27 in Segment Id 3
       Operating-Systems       Memory-Management       ISRO CS 2014
Question 33 Explanation: 
From the question we need to translate the logical address with segment id 2 and offset 1000.
From the given table,
segment 2 has base address = 1527
and limit = 498.
The process can access memory locations 1527 to 2025 (1527 + 498) for this segment.
If the process tries to access the segment with offset 1000, which exceeds the limit of 498, a segmentation-fault trap is generated.
In computing and operating systems, a trap, also known as an exception or a fault, is typically a type of synchronous interrupt caused by an exceptional condition (e.g., breakpoint, division by zero, invalid memory access)
Question 34
Dirty bit is used to indicate which of the following?
A
A page fault has occurred
B
A page has corrupted data
C
A page has been modified after being loaded into cache
D
An illegal access of page
       Operating-Systems       Memory-Management       ISRO CS 2014
Question 34 Explanation: 
→ The dirty bit allows for a performance optimization, i.e., the dirty bit for a page in a page table helps to avoid unnecessary writes back to the paging device.
→ When a page is modified inside the cache and the changes will need to be stored back in main memory, the dirty bit is set to 1 so as to keep a record of modified pages.
Question 35
What is the size of the physical address space in a paging system which has a page table containing 64 entries of 11 bit each (including valid and invalid bit) and a page size of 512
A
2^11
B
2^15
C
2^19
D
2^20
       Operating-Systems       Memory-Management       ISRO CS 2014
Question 35 Explanation: 
Size of physical address = frame-number bits + offset bits
Frame-number bits = 11 − 1 = 10 (since 1 valid/invalid bit is included in each page table entry)
Offset bits = log_2(page size) = log_2(512) = 9
Size of physical address = 10 + 9 = 19 bits, so the physical address space is 2^19
Question 36
Using the page table shown below, translate the physical address 25 to virtual address. The address length is 16 bits and page size is 2048 words while the size of the physical memory is four frames.
A
25
B
6169
C
2073
D
4121
       Operating-Systems       Memory-Management       ISRO CS 2014
Question 36 Explanation: 
Given data,
virtual address length = 16 bits,
page size = 2048 words = 2^11
Step-1: Total number of pages = 2^16 / 2^11 = 2^5
Step-2: The physical address space is [number of frames × size of each frame] = 4 × 2^11 = 2^13, so the physical address is 13 bits long.
Step-3: The given physical address (25)_10 = (0000000011001)_2 in 13 bits.
This 13-bit physical address is split into a 2-bit frame number and an 11-bit offset; looking up that frame in the given page table yields the corresponding virtual page number, and hence the virtual address.
Question 37

In a paged memory, the page hit ratio is 0.40. The time required to access a page in secondary memory is equal to 120 ns. The time required to access a page in primary memory is 15 ns. The average time required to access a page is

A
105
B
68
C
75
D
78
       Operating-Systems       Memory-Management       UGC-NET CS 2018 JUNE Paper-2
Question 37 Explanation: 
Average time to access a page = page hit ratio × (time required to access a page in primary memory) + page miss ratio × (time required to access a page in secondary memory)
Average time to access a page = 0.40×(15) + 0.60×(120)
Average time to access a page = 6 + 72
Average time to access a page = 78 ns
Question 38

Which of the following statements are true ?

    (a) External Fragmentation exists when there is enough total memory space to satisfy a request but the available space is contiguous.
    (b) Memory Fragmentation can be internal as well as external.
    (c) One solution to external Fragmentation is compaction.
 
A
(a) and (b) only
B
(a) and (c) only
C
(b) and (c) only
D
(a), (b) and (c)
       Operating-Systems       Memory-Management       UGC-NET CS 2018 JUNE Paper-2
Question 38 Explanation: 
External Fragmentation exists when there is enough total memory space to satisfy a request but the available space is not contiguous.
Yes, it is true that memory Fragmentation can be internal as well as external.
Yes, compaction is a solution to external Fragmentation.
Question 39

Page information in memory is also called as Page Table. The essential contents in each entry of a page table is/are 

A
Page Access information
B
Virtual Page number
C
Page Frame number
D
Both virtual page number and Page Frame Number
       Operating-Systems       Memory-Management       UGC-NET CS 2018 JUNE Paper-2
Question 39 Explanation: 
→ Every page table entry contains the page frame number.
→ The virtual page number is used as an index into the page table to obtain the page frame number.
Question 40

Consider a virtual page reference string 1, 2, 3, 2, 4, 2, 5, 2, 3, 4. Suppose LRU page replacement algorithm is implemented with 3 page frames in main memory. Then the number of page faults are

A
5
B
7
C
9
D
10
       Operating-Systems       Memory-Management       UGC-NET CS 2018 JUNE Paper-2
Question 40 Explanation: 

So, total number of page faults are 7.
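The fault count can be verified with a minimal LRU simulation in Python (not from the original solution; the helper name is illustrative):

```python
# Count page faults for LRU with 3 frames on the given reference string.
def lru_faults(refs, frames):
    memory, faults = [], 0          # 'memory' ordered from least to most recently used
    for page in refs:
        if page in memory:
            memory.remove(page)     # hit: refresh its recency
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)       # evict the least recently used page
        memory.append(page)
    return faults

print(lru_faults([1, 2, 3, 2, 4, 2, 5, 2, 3, 4], 3))   # 7
```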
Question 41
If there are 32 segments, each of size 1 K byte, then the logical address should have
A
13 bits
B
14 bits
C
15 bits
D
16 bits
       Operating-Systems       Memory-Management       Nielit Scientist-C 2016 march
Question 41 Explanation: 
There are 32 segments, i.e., 2^5, so 5 bits identify a segment.
Each segment is of size 1 K byte = 2^10, so 10 bits are needed for the offset within a segment.
Then the total number of bits in the logical address is 5 + 10 = 15 bits.
Question 42
How many wires are threaded through the cores in a coincided-current core memory?
A
2
B
3
C
4
D
6
       Operating-Systems       Memory-Management       Nielit Scientist-B CS 22-07-2017
Question 42 Explanation: 
The most common form of core memory, X/Y line coincident-current, used for the main memory of a computer, consists of a large number of small toroidal ferrimagnetic ceramic ferrites (cores) held together in a grid structure (organized as a "stack" of layers called planes), with wires woven through the holes in the cores' centers.
In early systems there were four wires: X, Y, Sense, and Inhibit, but later cores combined the latter two wires into one Sense/Inhibit line. Each toroid stored one bit (0 or 1).
Question 43
Which access method is used for obtaining a record from cassette tape?
A
Direct
B
Sequential
C
Random
D
Parallel
       Operating-Systems       Memory-Management       Nielit Scientist-B CS 22-07-2017
Question 43 Explanation: 
A cassette (magnetic) tape is a sequential-access medium, so a record can only be obtained by reading through the tape sequentially.
Question 44
A CPU generates 32 bit virtual addresses. The page size is 4KB. The processor has a Translation Lookaside Buffer(TLB) which can hold a total of 128 page table entries and is 4-way set associative. The minimum size of the TLB tag is
A
11 bits
B
13 bits
C
15 bits
D
20 bits
       Operating-Systems       Memory-Management       Nielit Scientist-B CS 22-07-2017
Question 44 Explanation: 
Page size = 4 KB = 4 × 2^10 Bytes = 2^12 Bytes
Virtual Address = 32 bits
No. of bits needed for the virtual page number = 32 − 12 = 20
The TLB holds 128 page table entries and is 4-way set associative
⇒ number of sets = 128/4 = 32 = 2^5
→ 5 bits are needed to address a set.
→ The size of the TLB tag = 20 − 5 = 15 bits
Question 45
Computer uses 46-bit virtual address, 32 bit physical address, and a three level paged page table organization. The page table base register stores the base address of the first level table(T1), which occupies exactly one page. Each entry of T1 stores the base address of a page of the second level table (T2). Each entry of T2 stores the base address of a page of the third level table(T3). Each entry of T3 stores a page table entry(PTE). The PTE is 32 bits in size, The processor used in the computer has a 1MB 16 way set associative virtually indexed physically tagged cache. the cache block size is 64 bytes. What is the size of a page in KB in this computer?
A
2
B
4
C
8
D
16
       Operating-Systems       Memory-Management       Nielit Scientist-B CS 22-07-2017
Question 45 Explanation: 
Let the page size be 2^p bytes.
Each page table entry is 4 bytes (given), so one page of a page table holds 2^p/4 entries.
The virtual address space covered by the three-level structure is
(2^p/4) × (2^p/4) × (2^p/4) × 2^p = 2^46
⇒ 2^(4p) / 2^6 = 2^46
⇒ 4p = 52
⇒ p = 13
∴ Page size = 2^13 B = 8 KB
Question 46

Consider a disk pack with 32 surfaces, 64 tracks and 512 sectors per track. 256 bytes of data are stored in a bit serial manner in a sector. The number of bits required to specify a particular sector in the disk is

A
19
B
20
C
18
D
22
       Operating-Systems       Memory-Management       UGC-NET CS 2018 DEC Paper-2
Question 46 Explanation: 
There are 32 (2^5) surfaces, each surface has 64 (2^6) tracks, and each track has 512 (2^9) sectors.
So, to identify each sector uniquely,
5 + 6 + 9 = 20 bits are needed.
Question 47
If there are 32 segments, each size 1K bytes, then the logical address should have
A
13 bits
B
14 bits
C
15 bits
D
16 bits
       Operating-Systems       Memory-Management       ISRO CS 2015
Question 47 Explanation: 
Given data: there are 32 segments in total,
and each segment is of size 1 K bytes.
Find the number of bits in the logical address.
32 segments = 2^5 ⇒ 5 bits for the segment number
Segment size = 2^10 ⇒ 10 bits for the offset within a segment
⇒ 2^5 × 2^10
⇒ 2^15
So, 15 bits are required.
Question 48
Increasing the RAM of a computer typically improves performance because:
A
Virtual Memory increases
B
Larger RAMs are faster
C
Fewer page faults occur
D
Fewer segmentation faults occur
       Operating-Systems       Memory-Management       ISRO CS 2015
Question 48 Explanation: 
→ When the number of page frames increases, the number of page faults decreases.
→ So if the RAM size increases, more page frames are available, and hence fewer page faults occur.
Question 49
Dirty bit for a page in a page table
A
helps avoid unnecessary writes on a paging device
B
helps maintain LRU information
C
allows only read on a page
D
None of the above
       Operating-Systems       Memory-Management       ISRO CS 2015
Question 49 Explanation: 
→ The dirty bit allows for a performance optimization i.e., Dirty bit for a page in a page table helps to avoid unnecessary writes on a paging device.
Question 50
In a computer system, memory mapped access takes 100 nanoseconds when a page is found in the TLB. In case the page is not in the TLB, it takes 400 nanoseconds to access. Assuming a hit ratio of 80%, the effective access time is:
A
120ns
B
160ns
C
200ns
D
500ns
       Operating-Systems       Memory-Management       KVS 22-12-2018 Part-B
Question 50 Explanation: 
EAT = hit_ratio × TLB_hit_time + (1 − hit_ratio) × TLB_miss_time
Access time on a TLB hit = 100 ns,
access time on a TLB miss = 400 ns,
hit ratio = 80%.
EAT = 0.8×100 + 0.2×400
= 80 + 80 = 160 ns
Question 51
The following diagram depicts a______cell.
A
Storage
B
Mobile
C
Memory
D
Register
       Operating-Systems       Memory-Management       KVS DEC-2013
Question 51 Explanation: 
● Computer memory is the storage space in the computer where the data to be processed and the instructions required for processing are stored.
● On a memory cell we perform read and write operations.
Question 52
What is coalescing?
A
It is a second strategy for allocating kernel memory
B
The buddy system allocates memory from a fixed size segment consistency of physically contiguous pages
C
Kernel memory is often allocated from a free memory pool different from the list used to satisfy ordinary user mode processes
D
An advantage of the buddy system is how quickly adjacent buddies can be combined to form larger segments using this technique
       Operating-Systems       Memory-Management       KVS DEC-2013
Question 52 Explanation: 
● coalescing is the act of merging two adjacent free blocks of memory.
● When an application frees memory, gaps can fall in the memory segment that the application uses.
● Among other techniques, coalescing is used to reduce external fragmentation, but is not totally effective.
● Coalescing can be done as soon as blocks are freed, or it can be deferred until some time later (known as deferred coalescing), or it might not be done at all.
Question 53
Each process is contained in a single section of memory that is contiguous to the section containing the next process is called
A
Contiguous memory​ ​ protection
B
Contiguous path name
C
Definite path name
D
Indefinite path name
       Operating-Systems       Memory-Management       KVS DEC-2013
Question 53 Explanation: 
Contiguous memory allocation is a classical memory allocation model that assigns a process consecutive memory blocks (that is, memory blocks having consecutive addresses).
Question 54
Both the first fit and best fit strategies for memory allocation suffer from
A
External fragmentation
B
Internal fragmentation
C
50-percent rule
D
segmentation
       Operating-Systems       Memory-Management       KVS DEC-2013
Question 54 Explanation: 
→ Internal fragmentation is the wasted space within each allocated block because of rounding up from the actual requested allocation to the allocation granularity.
→ External fragmentation is the various free spaced holes that are generated in either your memory or disk space.
→ Both the first-fit and best-fit strategies for memory allocation suffer from external fragmentation. As the processes are loaded and removed from memory, the free memory space is broken into little pieces.
→ External fragmentation exists when there is enough total memory space to satisfy a request, but the available spaces are not contiguous.
Question 55
The simplest, but most expensive, approach to introducing redundancy is to duplicate every disk. This technique is called
A
Swap space
B
Mirroring
C
Page slots
D
None of these
       Operating-Systems       Memory-Management       KVS DEC-2013
Question 55 Explanation: 
● Mirroring copies identical data onto more than one drive.
● Striping partitions each drive's storage space into units ranging from a sector (512 bytes) up to several megabytes.
●The stripes of all the disks are interleaved and addressed in order.
Question 56
Copying a process from memory to disk to allow space for other processes is called___
A
Demand paging
B
Deadlock
C
page fault
D
Swapping
       Operating-Systems       Memory-Management       KVS DEC-2017
Question 56 Explanation: 
→ Swapping is a mechanism in which a process can be swapped temporarily out of main memory (or move) to secondary storage (disk) and make that memory available to other processes. At some later time, the system swaps back the process from the secondary storage to main memory.
→ The performance is usually affected by swapping process but it helps in running multiple and big processes in parallel and that's the reason Swapping is also known as a technique for memory compaction.
Question 57

The segmentation memory management scheme suffers from:

A
External fragmentation
B
Internal fragmentation
C
Starvation
D
Ageing
       Operating-Systems       Memory-Management       JT(IT) 2016 PART-B Computer Science
Question 57 Explanation: 
• Segmentation avoids internal fragmentation but still suffers from external fragmentation.
• Paging avoids external fragmentation but still suffers from internal fragmentation.
• Internal fragmentation is the wasted space within each allocated block, caused by rounding up from the actual requested allocation to the allocation granularity.
• External fragmentation is the set of free holes generated in memory or disk space. Externally fragmented blocks are available for allocation, but may be too small to be of any use.
• Resource starvation is a problem encountered in concurrent computing where a process is perpetually denied the resources necessary to do its work. Starvation may be caused by errors in a scheduling or mutual-exclusion algorithm.
• Ageing is a scheduling technique used to avoid starvation.
Question 58
Which of the following technique allows execution of programs larger than the size of physical memory?
A
Thrashing
B
DMA
C
Buffering
D
Demand Paging
       Operating-Systems       Memory-Management       KVS DEC-2017
Question 58 Explanation: 
Virtual memory technique allows execution of programs larger than the size of physical memory. Demand paging we are using for virtual memory technique.
Question 59
An address in the memory is called
A
physical address
B
logical address
C
memory address
D
word address
       Operating-Systems       Memory-Management       KVS 30-12-2018 Part B
Question 59 Explanation: 
A physical address (also real address, or binary address), is a memory address that is represented in the form of a binary number on the address bus circuitry in order to enable the data bus to access a particular storage cell of main memory, or a register of memory mapped I/O device.
Question 60
The mechanism that brings a page into memory only when it is needed is ___
A
page replacement
B
segmentation
C
fragmentation
D
demand paging
       Operating-Systems       Memory-Management       KVS DEC-2017
Question 60 Explanation: 
→ Demand paging follows that pages should only be brought into memory if the executing process demands them. This is often referred to as lazy evaluation as only those pages demanded by the process are swapped from secondary storage to main memory. Contrast this to pure swapping, where all memory for a process is swapped from secondary storage to main memory during the process startup.
→ Demand paging is a method of virtual memory management. In a system that uses demand paging, the operating system copies a disk page into physical memory only if an attempt is made to access it and that page is not already in memory (i.e., if a page fault occurs).
→ It follows that a process begins execution with none of its pages in physical memory, and many page faults will occur until most of a process working set of pages are located in physical memory. This is an example of a lazy loading technique.
Question 61
Consider the following statements
S1: a small page size causes large page tables
S2: Internal fragmentation is increased with small pages
S3: I/O transfers are more efficient with large pages
Which of the following is true?
A
S1 is true and S3 is false
B
S1 and S2 are true
C
S2 and S3 are true
D
S1 is true and S2 is false
       Operating-Systems       Memory-Management       KVS DEC-2017
Question 61 Explanation: 
S1: True. A small page size causes large page tables.
S2: False. Internal fragmentation actually decreases with small pages.
S3: True. I/O transfers are more efficient with large pages.
Hence S1 is true, S2 is false and S3 is true.
Question 62
First fit and best fit strategies for memory allocation suffer from ____ and _____ fragmentation, respectively.
A
Internal, internal
B
Internal, external
C
External, external
D
External, internal
       Operating-Systems       Memory-Management       KVS 30-12-2018 Part B
Question 62 Explanation: 
→Internal fragmentation is the wasted space within each allocated block because of rounding up from the actual requested allocation to the allocation granularity.
→External fragmentation is the various free spaced holes that are generated in either your memory or disk space.
→Both the first-fit and best-fit strategies for memory allocation suffer from external fragmentation. As the processes are loaded and removed from memory, the free memory space is broken into little pieces.
→ External fragmentation exists when there is enough total memory space to satisfy a request, but the available spaces are not contiguous.
Question 63
In a paging system, it takes 30 ns to search translation Lookaside Buffer (TLB) and 90 ns to access the main memory. If the TLB hit ratio is 70%, the effective memory access time is :
A
48ns
B
147ns
C
120ns
D
84ns
       Operating-Systems       Memory-Management       UGC NET CS 2017 Jan -paper-2
Question 63 Explanation: 
Effective memory access time (EMAT) = hit ratio × (TLB access time + main memory access time) + (1 − hit ratio) × (TLB access time + 2 × main memory access time)
EMAT = 0.7×(30+90) + 0.3×(30+2×90)
= 0.7×120 + 0.3×(30+180)
= 0.7×120 + 0.3×210
= 84 + 63
= 147 ns
Question 64
A unix file system has 1-KB blocks and 4-byte disk addresses. What is the maximum file size if i-nodes contain 10 direct entries and one single, double and triple indirect entry each?
A
32 GB
B
64 GB
C
16 GB
D
1 GB
       Operating-Systems       Memory-Management       UGC NET CS 2015 Dec- paper-2
Question 64 Explanation: 
Block size = 1 KB = 2^10 Bytes
Size of one disk address = 4 Bytes = 2^2 Bytes
No. of addresses a block can contain/point to = 2^10 / 2^2 = 2^8
Max. file size = (10 + 2^8 + (2^8 × 2^8) + (2^8 × 2^8 × 2^8)) × 2^10 Bytes
≈ 16 GB
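A quick Python check of the i-node arithmetic (not from the original solution; variable names are illustrative):

```python
# Maximum file size with 10 direct entries and one single, double and triple
# indirect entry, 1 KB blocks and 4-byte disk addresses.
block = 1024
addrs_per_block = block // 4                     # 256 addresses per indirect block

blocks = 10 + addrs_per_block + addrs_per_block**2 + addrs_per_block**3
size_bytes = blocks * block

print(size_bytes / 2**30)    # ~16.06 -> about 16 GB
```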
Question 65
In which of the following storage replacement strategies, is a program placed in the largest available hole in the memory ?
A
Best fit
B
First fit
C
Worst fit
D
Buddy
       Operating-Systems       Memory-Management       UGC NET CS 2004 Dec-Paper-2
Question 65 Explanation: 
First fit:​ Allocate the first hole that is big enough. Searching can start either at the beginning of the set of holes or at the location where the previous first-fit search ended. We can stop searching as soon as we find a free hole that is large enough.
Best fit: ​ Allocate the smallest hole that is big enough. We must search the entire list, unless the list is ordered by size. This strategy produces the smallest leftover hole.
Worst fit: ​ Allocate the largest hole. Again, we must search the entire list, unless it is sorted by size. This strategy produces the largest leftover hole, which may be more useful than the smaller leftover hole from a best-fit approach.
Question 66
Moving Process from main memory to disk is called :
A
Caching
B
Termination
C
Swapping
D
Interruption
       Operating-Systems       Memory-Management       UGC NET CS 2005 june-paper-2
Question 66 Explanation: 
Swapping is a mechanism in which a process can be swapped/moved temporarily out of main memory to a backing store , and then brought back into memory for continued execution.
Question 67
Loading operating system from secondary memory to primary memory is called ____________ .
A
Compiling
B
Booting
C
Refreshing
D
Reassembling
       Operating-Systems       Memory-Management       UGC NET CS 2006 Dec-paper-2
Question 67 Explanation: 
Loading operating system from secondary memory to primary memory is called booting.
Question 68
A page fault __________ .
A
is an error in specific page
B
is an access to the page not currently in main memory
C
occurs when a page program accesses a page of memory
D
is reference to the page which belongs to another program
       Operating-Systems       Memory-Management       UGC NET CS 2006 June-Paper-2
Question 68 Explanation: 
A page fault is an access to the page not currently in main memory. A page fault is a type of exception raised by computer hardware when a running program accesses a memory page that is not currently mapped by the memory management unit (MMU) into the virtual address space of a process.
Question 69
The memory allocation scheme subjected to ​ external​ fragmentation is :
A
Segmentation
B
Swapping
C
Demand paging
D
Multiple contiguous fixed partitions
       Operating-Systems       Memory-Management       UGC NET CS 2006 June-Paper-2
Question 69 Explanation: 
To avoid external fragmentation we have two methods
1. Paging
2. Segmentation
But both are still suffer in internal fragmentation.
Question 70
A specific editor has 200 K of program text, 15 K of initial stack, 50 K of initialized data, and 70 K of bootstrap code. If five editors are started simultaneously, how much physical memory is needed if shared text is used ?
A
1135 K
B
335 K
C
1065 K
D
320 K
       Operating-Systems       Memory-Management       UGC NET CS 2014 Dec-Paper-2
Question 70 Explanation: 
Given data,
-- Program text=200 K
-- Initial stack=15 K
-- Initialized data=50 K
-- Bootstrap code=70 K
-- Physical memory needed=?
Step-1: Here, given constraint that, all five editors are started simultaneously.
So, all editors to perform all above operations. It need physical memory is
= Program text + Initial stack + Initialized data + Bootstrap code
= 200 K + 15 K + 50 K + 70 K
= 335 K
Question 71
For the implementation of a paging scheme, suppose the average process size be ‘x’ bytes, the page size be ‘y’ bytes, and each page entry requires ‘z’ bytes. The optimum page size that minimizes the total overhead due to the page table and the internal fragmentation loss is given by
A
x/2
B
xz/2
C
√(2xz)
D
√(xz/2)
       Operating-Systems       Memory-management       UGC NET CS 2014 Dec-Paper-2
Question 71 Explanation: 
Since the average number of pages required per process will be x/y, the amount of space required by its page table will be (x/y)×z. The amount of space lost due to internal fragmentation is y/2 on average. So the total space wastage is
Loss L(y) = (x/y)×z + y/2
To find the value of y that yields the minimum, take the first derivative with respect to y and set the resulting equation to zero: dL/dy = 0
y = √(2xz)
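A quick symbolic check of this minimization using sympy (a sketch, not part of the original solution; the symbol names mirror the question):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)

# total overhead: page-table space (x/y pages, z bytes per entry) + average internal fragmentation y/2
L = (x / y) * z + y / 2
optimum = sp.solve(sp.Eq(sp.diff(L, y), 0), y)

print(optimum)    # [sqrt(2)*sqrt(x*z)]  i.e. y = sqrt(2xz)
```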
Question 72
​To overcome difficulties in Readers-Writers problem, which of the following statement/s is/are true?
1) Writers are given exclusive access to shared objects
2) Readers are given exclusive access to shared objects
3) Both readers and writers are given exclusive access to shared objects.
Choose the correct answer from the code given below:
A
1 only
B
Both 2 and 3
C
2 only
D
3 only
       Operating-Systems       Memory-Management       UGC NET CS 2018-DEC Paper-2
Question 72 Explanation: 
In Readers-Writers problem, more than one Reader is allowed to read simultaneously but if a Writer is writing then no other writer or any reader can have simultaneous access to that shared object. So Writers are given exclusive access to shared objects.
Question 73
A computer uses a memory unit with 256K words of 32 bits each. A binary instruction code is stored in one word of memory. The instruction has four parts: an indirect bit, an operation code, a register code part to specify one of 64 registers, and an address part. How many bits are there in the operation code, the register code part and the address part?
A
7,7,18
B
18,7,7
C
7,6,18
D
6,7,18
       Operating-Systems       Memory-Management       UGC NET CS 2018-DEC Paper-2
Question 73 Explanation: 
The instruction size is given as 32 bits.
The instruction is divided into four parts:
Indirect bit: 1 bit.
Register code part: since the number of registers is 64 (2^6), 6 bits are needed to identify each register uniquely.
Address part: a 256K (2^18) word memory is mentioned, so 18 bits are needed to identify each word uniquely.
Operation code:
Size of operation code = instruction size − (indirect bit + register code + address part) = 32 − (1 + 6 + 18)
Size of operation code = 7 bits
Question 74
Consider a system with 2 level cache. Access times of Level 1, Level 2 cache and main memory are 0.5 ns, 5 ns and 100 ns respectively. The hit rates of Level1 and Level2 caches are 0.7 and 0.8 respectively. What is the average access time of the system ignoring the search time within cache?
A
20.75 ns
B
7.55 ns
C
24.35 ns
D
35.20 ns
       Operating-Systems       Memory-Management       UGC NET CS 2018-DEC Paper-2
Question 74 Explanation: 
Average access time = (level 1 hit rate)×(level 1 access time) + (level 1 miss rate)×(level 2 hit rate)×(level 2 access time) + (level 1 miss rate)×(level 2 miss rate)×(main memory access time)
Average access time = 0.7×(0.5) + 0.3×(0.8)×(5) + 0.3×(0.2)×(100)
Average access time = 7.55 ns
Question 75
In a paged memory management algorithm, the hit ratio is 70%. If it takes 30 nanoseconds to search Translation Lookaside Buffer (TLB) and 100 nanoseconds (ns) to access memory, the effective memory access time is
A
91 ns
B
69 ns
C
200 ns
D
160 ns
       Operating-Systems       Memory-Management       UGC NET CS 2014 June-paper-2
Question 75 Explanation: 
Given data,
-- Hit ratio=70%
=70/100
=0.7
-- Miss ratio=(1-Hit ratio)
=(1-0.7)
=0.3
-- TLB search time = 30 ns
-- Memory access time = 100 ns
-- Effective memory access time = ?
Step-1: EMAT = Hit ratio × (TLB search time + memory access time) +
Miss ratio × (TLB search time + 2 × memory access time)
= 0.7×(30+100) + 0.3×(30+2×100)
= 0.7×(130) + 0.3×(30+200)
= 0.7×130 + 0.3×230
= 91 + 69
= 160 ns
Question 76
The hit ratio of a Translation Lookaside Buffer (TLAB) is 80%. It takes 20 nanoseconds (ns) to search TLAB and 100 ns to access main memory. The effective memory access time is ______.
A
36 ns
B
140 ns
C
122 ns
D
40 ns
       Operating-Systems       Memory-Management       UGC NET CS 2013 Sep-paper-2
Question 76 Explanation: 
Given data,
-- hit ratio=80% it is equivalent to 0.8
-- search time=20 ns
-- access memory=100 ns
-- miss ratio = 1 − hit ratio
= 1 − 0.8
= 0.2
-- Effective memory access time = ?
Step-1: EMAT = Hit ratio × (search time + memory access time) + Miss ratio × (search time + 2 × memory access time)
= 0.8×(120) + 0.2×(220) ns
= 96 + 44 = 140 ns
Question 77
In a paged memory, the page hit ratio is 0.40. The time required to access a page in secondary memory is equal to 120 ns. The time required to access a page in primary memory is 15 ns. The average time required to access a page is .
A
105
B
68
C
75
D
78
       Operating-Systems       Memory-Management       UGC NET CS 2018 JUNE Paper-2
Question 77 Explanation: 
Average time to access a page = page hit ratio × (time required to access a page in primary memory) + page miss ratio × (time required to access a page in secondary memory)
Average time to access a page = 0.40×(15) + 0.60×(120)
Average time to access a page = 6 + 72
Average time to access a page = 78 ns
Question 78
Which of the following statements are true ?
(a) External Fragmentation exists when there is enough total memory space to satisfy a request but the available space is contiguous.
(b) Memory Fragmentation can be internal as well as external.
(c) One solution to external Fragmentation is compaction.
A
(a) and (b) only
B
(a) and (c) only
C
(b) and (c) only
D
(a), (b) and (c)
       Operating-Systems       Memory-Management       UGC NET CS 2018 JUNE Paper-2
Question 78 Explanation: 
External Fragmentation exists when there is enough total memory space to satisfy a request but the available space is not contiguous.
Yes, it is true that memory Fragmentation can be internal as well as external.
Yes, compaction is a solution to external Fragmentation.
Question 79
Page information in memory is also called as Page Table. The essential contents in each entry of a page table is/are .
A
Page Access information
B
Virtual Page number
C
Page Frame number
D
Both virtual page number and Page Frame Number
       Operating-Systems       Memory-Management       UGC NET CS 2018 JUNE Paper-2
Question 79 Explanation: 
→ Each page table entry contains a page frame number.
→ The virtual page number is the index into the page table used to look up the page frame number.
Question 80
Given memory partitions of 100 K, 500 K, 200 K, 300 K and 600 K (in order) and processes of 212 K, 417 K,112 K, and 426 K (in order), using the first-fit algorithm, in which partition would the process requiring 426 K be placed ?
A
500 K
B
200 K
C
300 K
D
600 K
E
None of the above
       Operating-Systems       Memory-Management       UGC NET CS 2012 Dec-Paper-2
Question 80 Explanation: 
First fit: Allocate the first hole that is big enough. Searching can start either at the beginning of the set of holes or at the location where the previous first-fit search ended. We can stop searching as soon as we find a free hole that is large enough.
Trace: 212K → 500K partition (leaving a 288K hole), 417K → 600K partition (leaving 183K), 112K → the 288K hole left from the 500K partition, and 426K does not fit in any remaining hole, so it must wait (see the sketch below).
Note: Given options are wrong. Excluded for evaluation.
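A small first-fit sketch over dynamically shrinking holes, the model used in the trace above (the function name first_fit is my own):

def first_fit(holes, requests):
    # holes: free hole sizes in KB, scanned in order; each allocation shrinks
    # the first hole that is large enough, or records None if nothing fits.
    placements = []
    for req in requests:
        for i, hole in enumerate(holes):
            if hole >= req:
                placements.append((req, hole))
                holes[i] = hole - req
                break
        else:
            placements.append((req, None))  # must wait
    return placements

print(first_fit([100, 500, 200, 300, 600], [212, 417, 112, 426]))
# [(212, 500), (417, 600), (112, 288), (426, None)]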
Question 81
Which of the following memory allocation scheme suffers from external fragmentation ?
A
Segmentation
B
Pure demand paging
C
Swapping
D
Paging
       Operating-Systems       Memory-Management       UGC NET CS 2012 Dec-Paper-2
Question 82
The virtual address generated by a CPU is 32 bits. The Translation Lookaside Buffer (TLB) can hold total 64 page table entries and a 4-way set associative (i.e. with 4-cache lines in the set). The page size is 4 KB. The minimum size of TLB tag is
A
12 bits
B
15 bits
C
16 bits
D
20 bits
       Operating-Systems       Memory-Management       UGC NET CS 2013 Dec-paper-2
Question 82 Explanation: 
Page size = 4 KB = 4 × 2^10 Bytes = 2^12 Bytes
Virtual address = 32 bits
No. of bits in the virtual page number = 32 - 12 = 20
The TLB holds 64 page table entries and is 4-way set associative, so the number of sets
= 64/4
= 16
= 2^4
→ 4 bits are needed to address a set.
→ The size of the TLB tag = 20 - 4
= 16 bits
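The same arithmetic as a short sketch, assuming the standard set-associative TLB split (the variable names are my own):

import math

virtual_bits, page_size, tlb_entries, ways = 32, 4 * 1024, 64, 4
vpn_bits = virtual_bits - int(math.log2(page_size))   # 32 - 12 = 20
set_bits = int(math.log2(tlb_entries // ways))        # log2(16) = 4
print(vpn_bits - set_bits)                            # 16-bit TLB tag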
Question 83
Consider a logical address space of 8 pages of 1024 words mapped with memory of 32 frames. How many bits are there in the physical address ?
A
9 bits
B
11 bits
C
13 bits
D
15 bits
       Operating-Systems       Memory-Management       UGC NET CS 2011 Dec-Paper-2
Question 83 Explanation: 
Since page and frame sizes are equal and the page size is given as 1024 (2^10) words, 10 bits are needed to identify each word inside a frame.
→ The number of frames is given as 32 (2^5), so each frame can be identified using 5 bits.
→ Hence the total number of bits needed to identify a word inside physical memory is 5 + 10 = 15 bits.

Question 84
Let the page fault service time be 10 millisecond(ms) in a computer with average memory access time being 20 nanosecond(ns). If one page fault is generated for every 10^6 memory accesses, what is the effective access time for memory ?
A
21 ns
B
23 ns
C
30 ns
D
35 ns
       Operating-Systems       Memory-Management       UGC NET CS 2013 June-paper-2
Question 84 Explanation: 
p = page fault rate
EA = p × page fault service time + (1 - p) × memory access time
= (1/10^6) × (10 × 10^6 ns) + (1 - 1/10^6) × 20 ns
= 10 + 19.99998 ≅ 30 ns
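The same arithmetic as a tiny sketch (the helper name eat_with_page_faults is my own); note that all times are first converted to nanoseconds:

def eat_with_page_faults(fault_rate, service_time_ns, mem_time_ns):
    # weighted average of the page-fault path and the normal memory access
    return fault_rate * service_time_ns + (1 - fault_rate) * mem_time_ns

# 10 ms = 10 * 10**6 ns, one fault per 10**6 accesses
print(eat_with_page_faults(1e-6, 10 * 10**6, 20))  # ≈ 29.99998, i.e. about 30 ns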
Question 85
Assume N segments in memory and a page size of P bytes. The wastage on account of internal fragmentation is :
A
NP/2 bytes
B
P/2 Bytes
C
N/2 Bytes
D
NP Bytes
       Operating-Systems       Memory-Management       UGC NET CS 2009-June-Paper-2
Question 85 Explanation: 
→ On average, the last page of each segment is only half full, so each segment wastes about P/2 bytes to internal fragmentation.
→ With N segments in memory, the total wastage is therefore N × P/2 = NP/2 bytes.
Question 86
Assertion (A) :  Bit maps are not often used in memory management.
Reason (R) :Searching a bitmap for a run of given length is a slow operation.
A
Both (A) and (R) are true and (R) is correct explanation for (A)
B
Both (A) and (R) are true but (R) is not correct explanation for (A)
C
(A) is true (R) is false
D
(A) is false (R) is true
       Operating-Systems       Memory-Management       UGC NET CS 2009-June-Paper-2
Question 86 Explanation: 
→ Bit maps are not often used in memory management because searching a bitmap for a run of given length is a slow operation.
Question 87
Suppose it takes 100 ns to access a page table and 20 ns to access associative memory with a 90% hit rate, the average access time equals :
A
20 ns
B
28 ns
C
90 ns
D
100 ns
       Operating-Systems       Memory-Management       UGC NET CS 2009-June-Paper-2
Question 87 Explanation: 
Given data,
-- Access page table time=100 ns
-- Associate memory=20 ns
-- hit ratio=90% = 0.9
-- Miss ratio = 1 - hit ratio = 10% = 0.1
-- Average access time = ?
Step-1: AAT = hit ratio × associative memory time + miss ratio × page table access time
= 0.9 × 20 + 0.1 × 100
= 18 + 10
= 28 ns
Question 88
Variable partition memory management technique with compaction results in :
A
Reduction of fragmentation
B
Minimal wastage
C
Segment sharing
D
None of the above
       Operating-Systems       Memory-Management       UGC NET CS 2009-June-Paper-2
Question 88 Explanation: 
Variable partition memory management technique with compaction results in reduction of fragmentation
Question 89
A page fault
A
is an error specific page.
B
is an access to the page not currently in memory.
C
occur when a page program occur in a page memory.
D
page used in the previous page reference.
       Operating-Systems       Memory-Management       UGC NET CS 2009 Dec-Paper-2
Question 89 Explanation: 
A page fault occurs when the referenced page is not in main memory.
Question 90
A program is located in the smallest available hole in the memory is _________
A
best – fit
B
first – fit
C
worst – fit
D
buddy
       Operating-Systems       Memory-Management       UGC NET CS 2009 Dec-Paper-2
Question 90 Explanation: 
First fit: Allocate the first hole that is big enough. Searching can start either at the beginning of the set of holes or at the location where the previous first-fit search ended. We can stop searching as soon as we find a free hole that is large enough.
Best fit: Allocate the smallest hole that is big enough. We must search the entire list, unless the list is ordered by size. This strategy produces the smallest leftover hole.
Worst fit: Allocate the largest hole. Again, we must search the entire list, unless it is sorted by size. This strategy produces the largest leftover hole, which may be more useful than the smaller leftover hole from a best-fit approach.
Question 91
Suppose it takes 100 ns to access page table and 20 ns to access associative memory. If the average access time is 28 ns, the corresponding hit rate is :
A
100 percent
B
90 percent
C
80 percent
D
70 percent
       Operating-Systems       Memory-Management       UGC NET CS 2008 Dec-Paper-2
Question 91 Explanation: 
Given data,
-- Access page table time=100 ns
-- Associate memory=20 ns
-- hit ratio = X
-- Miss ratio = 1 - X
-- Average access time = 28 ns
Step-1: AAT = hit ratio × associative memory time + miss ratio × page table access time
28 = X × 20 + (1 - X) × 100
28 = 20X + 100 - 100X
80X = 72
X = 0.9, i.e. a 90 percent hit rate
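A quick numeric check of this rearrangement (the variable names are my own):

tlb_time, page_table_time, aat = 20, 100, 28
# AAT = X*tlb_time + (1 - X)*page_table_time  =>  X = (page_table_time - aat) / (page_table_time - tlb_time)
x = (page_table_time - aat) / (page_table_time - tlb_time)
print(x)  # 0.9 -> 90 percent hit rate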
Question 92
If holes are half as large as processes, the fraction of memory wasted in holes is :
A
B
½
C
D
       Operating-Systems       Memory-Management       UGC NET CS 2008 Dec-Paper-2
Question 93
An example of a memory management system call in UNIX is :
A
fork.
B
mmap.
C
sigaction.
D
execve.
       Operating-Systems       Memory-Management       UGC NET CS 2008-june-Paper-2
Question 93 Explanation: 
fork() → creates a child process of the calling process.
mmap() → a memory-management system call; it maps files or anonymous memory into the process's address space (such mappings are demand paged).
sigaction() → examines and changes a signal action.
execve() → executes the program pointed to by the given pathname.
Question 94
With 64 bit virtual addresses, a 4KB page and 256 MB of RAM, an inverted page table requires :
A
8192 entries.
B
16384 entries.
C
32768 entries.
D
65536 entries.
       Operating-Systems       Memory-Management       UGC NET CS 2008-june-Paper-2
Question 94 Explanation: 
Given data,
-- Virtual addresses size= 64 bit
-- Page size = 4 KB
-- RAM size = 256 MB
-- Inverted page table entries = ?
An inverted page table keeps one entry per physical frame, so the number of entries = RAM size / page size = 2^28 / 2^12 = 2^16 = 65536 entries.
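A minimal check of that division (the variable names are my own):

ram_bytes = 256 * 2**20   # 256 MB
page_bytes = 4 * 2**10    # 4 KB
print(ram_bytes // page_bytes)  # 65536 entries, one per physical frame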
Question 95
A program has five virtual pages, numbered from 0 to 4. If the pages are referenced in the order 012301401234, with three page frames, the total number of page faults with FIFO will be equal to :
A
0
B
4
C
6
D
9
       Operating-Systems       Memory-Management       UGC NET CS 2007-Dec-Paper-2
Question 95 Explanation: 
Reference string: 0 1 2 3 0 1 4 0 1 2 3 4, with 3 frames (FIFO):
0, 1, 2 → faults (the frames fill up); 3 → fault (evicts 0); 0 → fault (evicts 1); 1 → fault (evicts 2); 4 → fault (evicts 3); 0, 1 → hits; 2 → fault (evicts 0); 3 → fault (evicts 1); 4 → hit.
Total page faults = 9.
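A short FIFO page-replacement sketch that reproduces the count (the function name fifo_faults is my own):

from collections import deque

def fifo_faults(reference_string, num_frames):
    frames, order, faults = set(), deque(), 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.remove(order.popleft())  # evict the oldest resident page
            frames.add(page)
            order.append(page)
    return faults

print(fifo_faults([0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4], 3))  # 9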
Question 96
Average process size = s bytes. Each page entry requires e bytes. The optimum page size is given by :
A
√(se)
B
√(2se)
C
s
D
e
       Operating-Systems       Memory-Management       UGC NET CS 2007-Dec-Paper-2
Question 96 Explanation: 
With page size p, an average process of s bytes needs s/p pages, so its page table costs (s/p) × e bytes, while internal fragmentation wastes about p/2 bytes (half of the last page). Total overhead = se/p + p/2; setting the derivative -se/p^2 + 1/2 to zero gives the optimum page size p = √(2se).
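A small numeric sanity check of that minimum, under assumed values s = 1 MB and e = 8 bytes (my own choices, not from the question):

import math

s, e = 1 * 2**20, 8                               # assumed process size and page-entry size
overhead = lambda p: s * e / p + p / 2
best = min(range(256, 65537, 256), key=overhead)  # try page sizes in 256-byte steps
print(best, round(math.sqrt(2 * s * e)))          # both come out to 4096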
Question 97
Moving process from main memory to disk is called :
A
Caching
B
Termination
C
Swapping
D
Interruption
       Operating-Systems       Memory-Management       UGC NET CS 2007 June-Paper-2
Question 97 Explanation: 
→ Moving a process from main memory to disk is called swapping.
→ Swapping is a mechanism in which a process can be temporarily moved out of main memory to a backing store, and then brought back into memory for continued execution.
Question 98
Part of a program where the shared memory is accessed and which should be executed indivisibly, is called :
A
Semaphores
B
Directory
C
Critical section
D
Mutual exclusion
       Operating-Systems       Memory-Management       UGC NET CS 2007 June-Paper-2
Question 98 Explanation: 
→ Consider a system consisting of n processes {p0,p1,...,pn-1}. Each process has a segment of code, called a critical section, in which the process may be changing common variables, updating a table, writing a file, and so on.
→ Part of a program where the shared memory is accessed and which should be executed indivisibly, is called critical section.
Question 99
A memory management system has 64 pages with 512 bytes page size. Physical memory consists of 32 page frames. Number of bits required in logical and physical address are respectively:
A
14 and 15
B
14 and 29
C
15 and 14
D
16 and 32
       Operating-Systems       Memory-Management       UGC NET CS 2017 Jan- paper-3
Question 99 Explanation: 
Given data,
-- Total number of pages=64
-- Page size=512
-- Page frames=32
-- Logical address=?
-- physical address=?
Step-1: Logical address space = total number of pages × page size
= 2^6 × 2^9
= 2^15, so the logical address is 15 bits
Step-2: Physical address space = number of page frames × page size
= 2^5 × 2^9
= 2^14, so the physical address is 14 bits
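The same bit counts computed directly (the variable names are my own):

import math

pages, page_bytes, frames = 64, 512, 32
logical_bits = int(math.log2(pages)) + int(math.log2(page_bytes))    # 6 + 9 = 15
physical_bits = int(math.log2(frames)) + int(math.log2(page_bytes))  # 5 + 9 = 14
print(logical_bits, physical_bits)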
Question 100
Match the following with respect to various memory management algorithms:
A
(a)-(iii), (b)-(iv), (c)-(ii), (d)-(i)
B
(a)-(ii), (b)-(iii), (c)-(i), (d)-(iv)
C
(a)-(iv), (b)-(iii), (c)-(ii), (d)-(i)
D
(a)-(ii), (b)-(iii), (c)-(iv), (d)-(i)
       Operating-Systems       Memory-Management       UGC NET CS 2015 Dec - paper-3
Question 100 Explanation: 
The working set model provides frames according to the dynamically changing requirements of a process: pages are brought into main memory on demand, i.e. only when a page is required for the execution of the process is it allocated a frame.
Segmentation supports the user view of memory, because it divides a process into segments in such a way that the meaning of the code does not change after dividing it.
Dynamic partitioning eliminates internal fragmentation by creating partitions on demand, but external fragmentation cannot be avoided this way; compaction is the method used to overcome the external fragmentation problem in dynamic partitioning.
With fixed partitioning, main memory is divided into partitions of fixed size and each partition can hold one process; instead of a single process occupying main memory, several processes can be resident at once, so the degree of multiprogramming increases.
Question 101
Function of memory management unit is:
A
Address translation
B
Memory allocation
C
Cache management
D
All of the above
       Operating-Systems       Memory-Management       UGC NET CS 2015 Dec - paper-3
Question 101 Explanation: 
The memory management unit (MMU) converts logical addresses into physical addresses. It does not allocate memory, nor is it responsible for cache management. Its task is to compare the logical address against the limit register; if the address is within the limit, the MMU adds it to the base value to produce the physical address.

Question 102
Match List-I with List-II
List-I                                  List-II
(a) Disk                         (i) Thread
(b) CPU                         (ii) Signal
(c) Memory                  (iii) File System
(d) Interrupt                 (iv) Virtual address
Choose the correct option from those given below:
A
(a)-(i); (b)-(ii); (c)-(iii); (d)-(iv)
B
(a)-(iii); (b)-(i); (c)-(iv); (d)-(ii)
C
(a)-(ii); (b)-(i); (c)-(iv); (d)-(iii)
D
(a)-(ii); (b)-(iv); (c)-(iii);(d)-(i)
       Operating-Systems       Memory-Management       UGC NET June-2019 CS Paper-2
Question 102 Explanation: 
Disk → File system
CPU → Thread
Memory → Virtual address
Interrupt → Signal
Question 103
What is the most appropriate function of Memory Management Unit (MMU)?
A
It is an associative memory to store TLB
B
It is a technique of supporting multiprogramming by creating dynamic partitions
C
It is a chip to map virtual address to physical address
D
It is an algorithm to allocate and deallocate main memory to a process
       Operating-Systems       Memory-Management       UGC NET CS 2015 June Paper-3
Question 103 Explanation: 
Memory Management Unit (MMU): It is a chip to map virtual address to physical address.
Question 104
Consider a paging system where translation lookaside buffer (TLB) a special type of associative memory is used with hit ratio of 80%.
Assume that memory reference takes 80 nanoseconds and reference time to TLB is 20 nanoseconds. What will be the effective memory access time given 80% hit ratio?
A
110 nanoseconds
B
116 nanoseconds
C
200 nanoseconds
D
100 nanoseconds
       Operating-Systems       Memory-Management       UGC-NET DEC-2019 Part-2
Question 104 Explanation: 
Tavg = TLB access time + miss ratio of TLB × memory access time + memory access time
= 20 + 0.2 × 80 + 80
= 20 + 16 + 80
= 116 ns
Question 105
Which of the following interprocess communication model is used to exchange messages among co-operative processes?
A
Shared memory model
B
Message passing model
C
Shared memory and message passing model.
D
Queues
       Operating-Systems       Memory-Management       UGC-NET DEC-2019 Part-2
Question 105 Explanation: 
A process can be of two types:
1. Independent process: It is not affected by the execution of other processes
2. Co-operating process: It can be affected by other executing processes.
Interprocess communication (IPC) allows co-operating processes to exchange data and information. There are two primary models of interprocess communication:
1. Shared memory.
2. Message passing.
Question 106
What does compaction refer to?
A
A technique for overcoming internal fragmentation
B
A paging technique
C
A technique for overcoming external fragmentation
D
A technique for compressing the data
       Operating-Systems       Memory-Management       ISRO CS 2020       Video-Explanation
Question 106 Explanation: 
Compaction is used to reduce the external fragmentation.
Question 107
The operating system and the other processes are protected from being modified by an already running process because
A
They run at different time instants and not in parallel
B
They are in different logical addresses
C
They use a protection algorithm in the scheduler
D
Every address generated by the CPU is being checked against the relocation and limit parameters
       Operating-Systems       Memory-Management       ISRO CS 2020       Video-Explanation
Question 107 Explanation: 
Relocation registers used to protect user processes from each other, and from changing operating-system code and data. Base register contains value of the smallest physical address. Limit register contains range of logical addresses and each logical address must be less than the limit register.
Question 108
Which of the following methods is used to control thrashing in demand paging systems?
A
estimating process-wise demand for frames using working set model and limiting the total demand
B
controlling the page fault frequency within safe range by adjusting degree of multi-programming
C
Banker’s Algorithm
D
estimating process-wise demand for frames using working set model and limiting the total demand and controlling the page fault frequency within safe range by adjusting degree of multi-programming but not Banker’s Algorithm
       Operating-Systems       Memory-Management       APPSC-2016-DL-CS
Question 108 Explanation: 
Banker's algorithm is a deadlock-avoidance scheme and is not used to control thrashing. Thrashing is a situation in which excessive page faults occur. To avoid it, we estimate the process-wise demand for frames using the working set model and keep the page fault frequency within a safe range by adjusting the degree of multiprogramming.
Question 109
Which of the following is solution for external fragmentation of disk space during contiguous file allocation?
A
Dynamic storage allocation
B
Disk space compaction
C
File Allocation Table
D
garbage collection
       Operating-Systems       Memory-Management       APPSC-2016-DL-CS
Question 109 Explanation: 
The solution to external fragmentation of disk space is disk space compaction: the compaction method combines all the small holes scattered at different places into one big hole.
Question 110

If the executing program size is greater than the existing RAM of a computer, it is still possible to execute the program, if the OS supports

A
Synchronization
B
fault tolerance
C
paging system
D
Scheduling
       Operating-Systems       Memory-Management       APPSC-2016-DL-CA
Question 110 Explanation: 
In a paging system (with virtual memory), a program larger than the existing RAM of the computer can still be executed.
Question 111

Variable partition memory management technique with compaction results in

A
reduction of fragmentation
B
minimal wastage
C
segment sharing
D
None of the given options
       Operating-Systems       Memory-Management       APPSC-2016-DL-CA
Question 111 Explanation: 
Variable partition memory management with compaction results in a reduction of fragmentation, since compaction removes external fragmentation.
Question 112

The working set model is used in memory management to implement the concept of:

A
Principle of locality
B
Thrashing
C
Paging
D
Segmentation
       Operating-Systems       Memory-Management       CIL 2020
Question 112 Explanation: 
The working set model states that a process can be in RAM if and only if all of the pages that it is currently using (often approximated by the most recently used pages) can be in RAM. The model is an all or nothing model, meaning if the pages it needs to use increases, and there is no room in RAM, the process is swapped out of memory to free the memory for other processes to use.
Or
The working set is the memory that's being accessed "frequently" by an application or set of applications.
Principle of Locality does the same thing as explained above.
In computer science, locality of reference, also known as the principle of locality, is the tendency of a processor to access the same set of memory locations repetitively over a short period of time. There are two basic types of reference locality – temporal and spatial locality. Temporal locality refers to the reuse of specific data and/or resources within a relatively small time duration. Spatial locality (also termed data locality) refers to the use of data elements within relatively close storage locations.
Question 113

Page fault occurs when

A
The page is in main memory
B
The page has an address, which cannot be loaded
C
The page is not in main memory
D
The page is not in cache memory
       Operating-Systems       Memory-Management       CIL 2020
Question 113 Explanation: 
A page fault occurs when a program attempts to access a block of memory that is not stored in the physical memory, or RAM. The fault notifies the operating system that it must locate the data in virtual memory, then transfer it from the storage device, such as an HDD or SSD, to the system RAM.
Question 114

Consider a system with page fault service time(S)=100 ns, main memory access time(M)=20 ns, and page fault rate(P)=65%. Calculate the effective memory access time.

A
62 ns
B
82 ns
C
80 ns
D
72 ns
       Operating-Systems       Memory-Management       CIL 2020
Question 114 Explanation: 
Let page fault rate = p
EMAT = (1 - p) × M + p × S
= 0.35 × 20 + 0.65 × 100
= 7 + 65
= 72 ns
Question 115

Which one from the following is a false statement about memory management?

A
Thrashing improves system performance.
B
Swapping increases system overhead.
C
Overhead is more in non-contiguous memory allocation compared to contiguous allocation.
D
Swapping is more effective in non-contiguous memory allocation.
       Operating-Systems       Memory-Management       APPSC-2012-DL CA
Question 115 Explanation: 
Thrashing decreases system performance because, during thrashing, a large number of page faults occur.
Question 116

Consider the following three statements about memory management
(I) Memory fragmentation results in poor utilization of memory.
(II) Memory fragmentation is the area of memory, which is allocated to a process but unused.
(III) Demand paging can increase the degree of multiprogramming.

A
Only (I) and (III)
B
Only (ii) and (III)
C
All (I), (II) and (III)
D
None from (I), (II) and (III)
       Operating-Systems       Memory-Management       APPSC-2012-DL CA
Question 116 Explanation: 
Memory fragmentation occurs when most of the free memory is split into a large number of non-contiguous blocks or chunks, leaving a good percentage of the total memory unallocated yet unusable for most typical requests. This results in out-of-memory exceptions or allocation errors (i.e. malloc returns null). Hence statements I and II are true.
Demand paging also increases the degree of multiprogramming. Hence statement III is true as well.
Question 117

What should be the access time of cache memory in order to achieve a 98% hit ratio, if the memory access time is 200 ns and the effective access time required is 20 ns?

A
16ns
B
18ns
C
20ns
D
10ns
       Operating-Systems       Memory-Management       APPSC-2012-DL CA
Question 117 Explanation: 
EMAT = 0.98 × Hit time of cache + 0.02 × (200 + Hit time of cache)
20 = 0.98 × H + 0.02 (200 + H)
20 = 0.98H + 0.02 × 200 + 0.02H
20 = 0.98H + 4 + 0.02H
16 = H
∴ Access time of cache memory = 16 ns
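A quick check of this rearrangement (the variable names are my own):

hit_ratio, mem_time, emat = 0.98, 200, 20
# emat = hit_ratio*H + (1 - hit_ratio)*(mem_time + H)  =>  H = emat - (1 - hit_ratio)*mem_time
h = emat - (1 - hit_ratio) * mem_time
print(round(h, 2))  # 16.0 ns cache access time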
Question 118
With respect to paging, which of the following is false
A
It is based on a linear logical memory addressing concept.
B
Entire program need not be loaded into memory before execution
C
It suffers from both internal and external fragmentations
D
Page table is not required once a program is loaded
       Operating-Systems       Memory-Management       APPSC-2012-DL-CS
Question 118 Explanation: 
Paging does not suffer from external fragmentation because the space allocated to a process need not be contiguous. However, paging does suffer from internal fragmentation because, on average, half of the last page of a process is wasted.
Question 119
The consistency model supported in IVT (Integrated shared virtual memory at yale) is
A
Sequential Consistency
B
General Consistency
C
Strict Consistency
D
Weak Consistency
E
Update it
       Operating-Systems       Memory-Management       APPSC-2012-DL-CS
Question 120
A linker is given object modules for a set of programs that were compiled separately. What information need not be included in an object module?
A
Object Code
B
Relocation Bits
C
Names and locations of all external symbols defined in the object module
D
Absolute address of internal symbols
       Operating-Systems       Memory-Management       APPSC-2012-DL-CS
Question 120 Explanation: 
The linker does not need the absolute addresses of internal symbols, because absolute addresses are assigned by the loader, not the linker. The linker works with relocatable addresses and uses them to compute other relocatable addresses.
Question 121
The technique which repeatedly uses the same block of internal storage during different stages of problem is called
A
Overlay
B
Overlapping
C
Swapping
D
Reuse
       Operating-Systems       Memory-Management       TNPSC-2012-Polytechnic-CS
Question 121 Explanation: 
In a general computing sense, overlaying means "the process of transferring a block of program code or other data into main memory, replacing what is already stored". Overlaying is a programming method that allows programs to be larger than the computer's main memory.
Question 122
Relocatable programs
A
Cannot be used with fixed partitions
B
Can be loaded almost anywhere in memory
C
Do not need a linker
D
Can be loaded only at one specific location
       Operating-Systems       Memory-Management       TNPSC-2012-Polytechnic-CS
Question 122 Explanation: 
Relocatable programs can be loaded almost anywhere in memory because they use relocatable addresses.
Question 123
The larger the RAM of a computer, the faster its speed, since it eliminates
A
Need for ROM
B
Need for external memory
C
Frequent disk I/O
D
Need for a data wide path
       Operating-Systems       Memory-Management       TNPSC-2012-Polytechnic-CS
Question 123 Explanation: 
Since the RAM size is large, there will be fewer page faults and hence less frequent disk I/O.
Question 124
Poor response time is caused by
A
Processor busy
B
High I/O rate
C
High paging rates
D
All of these
       Operating-Systems       Memory-Management       TNPSC-2012-Polytechnic-CS
Question 124 Explanation: 
Poor response times are usually caused by the processor being busy, high I/O rates and high paging rates. (In computing, a process is an instance of a computer program that is being executed; it contains the program code and its activity.)
Question 125
Thrashing is
A
A mechanism used by OS to boost its performance
B
A phenomenon where CPU utilization is very poor
C
A concept to improve CPU utilization
D
None
       Operating-Systems       Memory-Management       APPSC-2012-DL-CS
Question 125 Explanation: 
If the system has to swap pages at such a high rate that a major chunk of CPU time is spent on swapping, this state is known as thrashing. Effectively, during thrashing the CPU spends less time on actual productive work and more time on swapping.
Question 126
The process of assigning load address to the various parts of the program; and adjusting the code and data in the program to reflect the assigned addresses is called
A
Assembly
B
Parsing
C
Relocation
D
Symbol Resolution
       Operating-Systems       Memory-Management       APPSC-2012-DL-CS
Question 126 Explanation: 
Relocation is the process of assigning load addresses for position-dependent code and data of a program and adjusting the code and data to reflect the assigned addresses.
Question 127
The size of a page is typically a :
A
Multiple of 8
B
Power of 2
C
Any size depending on operating system
D
Any size depending on user program
       Operating-Systems       Memory-Management       TNPSC-2017-Polytechnic-CS
Question 127 Explanation: 
The size of a page is typically a power of 2 so that a virtual address can be split into a page number and an offset simply by taking its high-order and low-order bits, without any division.
Question 128
Which of the following possibilities for saving the return address of a sub – routine, support sub – routine recursion?
A
In a processor register
B
In a memory location associated with the call
C
On a stack
D
All of the above
       Operating-Systems       Memory-Management       TNPSC-2017-Polytechnic-CS
Question 128 Explanation: 
Saving the return address on a stack supports recursion: each nested call pushes its own return address, and returns pop them in reverse order. A single processor register or a fixed memory location associated with the call would be overwritten by a recursive call.
Question 129
If there are 64 pages, and the page size is 4096 words, the length of the logical address is _________.
A
16 bits
B
18 bits
C
20 bits
D
22 bits
       Operating-Systems       Memory-Management       TNPSC-2017-Polytechnic-CS
Question 129 Explanation: 
No. of bits required to indicate no. of pages = log2 64 = 6
No. of bits required to indicate page size = log2 4096 = 12
∴ Length of logical address is,
6 + 12 = 18
Question 130

Consider the following statements :

I. Re-construction operation used in mixed fragmentation satisfies commutative rule.

II. Re-construction operation used in vertical fragmentation satisfies commutative rule

Which of the following is correct
A
I
B
II
C
Both are correct
D
None of the statements are correct
       Operating-Systems       Memory-Management       UGC NET CS 2014 Dec - paper-3
Question 131
Let the page fault service time be 10 ms in a computer with average memory access time being 20 ns. If one page fault is generated for every 10^6 memory accesses, what is the closest effective access time for the memory?
A
21ns
B
30ns
C
23ns
D
35ns
       Operating-Systems       Memory-Management       HCU PHD CS 2018 December
Question 131 Explanation: 
p = page fault rate
EA = p × page fault service time + (1 - p) × memory access time
= (1/10^6) × (10 × 10^6 ns) + (1 - 1/10^6) × 20 ns ≅ 30 ns
Question 132
Which of the following is NOT possible?
A
TLB miss with no page fault
B
TLB hit with no page fault
C
TLB miss with page fault
D
TLB hit with page fault
       Operating-Systems       Memory-Management       HCU PHD CS MAY 2017
Question 132 Explanation: 
Whenever there is a TLB hit there cannot be a page fault, because the TLB holds entries only for pages that are present in main memory. Hence a page fault can never occur on a TLB hit, so option D is not possible.
Question 133
Which of following binding schemes has loss of efficiency if there is no TLB in the system?
A
Compile-time binding
B
Load-time binding
C
Run-time binding
D
None of the above
       Operating-Systems       Memory-Management       HCU PHD CS 2018 June
Question 133 Explanation: 
Address binding of instructions and data to memory addresses can happen at three different stages:
Compile time: If the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes.
Load time: Relocatable code must be generated if the memory location is not known at compile time.
Execution (run) time: Binding is delayed until run time if the process can be moved during its execution from one memory segment to another. The user program deals only with logical addresses, and each logical address is bound to a physical address when the reference is made.
With run-time binding, every memory reference therefore needs a translation through the page table; without a TLB to cache these translations, each reference costs extra memory accesses, so run-time binding loses efficiency.
Question 134
The number of page table entries for a 64-bit processor with 16KB page size is,
A
2^50
B
2^51
C
2^18
D
2^64
       Operating-Systems       Memory-Management       HCU PHD CS 2018 June
Question 134 Explanation: 
No. of page table entries is
2^64 / 2^14 = 2^50
Question 135
Which of the following conditions leads to thrashing? (WSS is Working Set Size)
A
All processes are allocated more memory than their WSS
B
The sum of the WSS of the processes is less than the main memory
C
One of the processes is allocated more memory than its WSS
D
The sum of the WSS of the processes is more than the main memory
       Operating-Systems       Memory-Management       HCU PHD CS 2018 June
Question 135 Explanation: 
Thrashing is a condition in which excessive page fault operations take place. A system that is thrashing can be perceived as either a very slow system or one that has come to a halt. If the sum of the WSS of the processes is more than the main memory, page faults occur constantly because the required number of frames is not available in main memory.
Question 136
If a system has a 32-bit processor, what are the number of page table entries if the page size is 16KB?
A
16K entries
B
256K entries
C
8K entries
D
64K entries
       Operating-Systems       Memory-Management       HCU PHD CS MAY 2015
Question 136 Explanation: 
Page size is 16 KB = 2^14 B, so the offset is 14 bits.
Hence the number of page table entries is 2^(32-14) = 2^18 = 256K entries.
Question 137
Thrashing can be reduced by
A
increasing the CPU power
B
increasing degree of multiprogramming
C
increasing memory
D
increasing the swap space
       Operating-Systems       Memory-Management       HCU PHD CS MAY 2015
Question 137 Explanation: 
Thrashing occurs due to excessive page faults, and page faults occur when there is too little main memory. Hence thrashing can be reduced by increasing the size of main memory.
Question 138
A disadvantage of an inverted page table as compared to a normal page table is
A
It is very large in size
B
It cannot support large virtual memory
C
It is inefficient in translation of logical to physical address
D
None of the above
       Operating-Systems       Memory-Management       HCU PHD CS MAY 2015
Question 138 Explanation: 
A disadvantage of an inverted page table as compared to a normal page table is that it is inefficient in translation of logical to physical address because look-up time in an inverted page table may be significantly higher when compared to a simple page table.
Question 139
A disadvantage of an inverted page table as compared to a normal page table is
A
It is very large in size
B
It cannot support large virtual memory
C
It is inefficient in translation of logical to physical address
D
None of the above
       Operating-Systems       Memory-Management       HCU PHD CS MAY 2015
Question 139 Explanation: 
A disadvantage of an inverted page table as compared to a normal page table is that it is inefficient in translation of logical to physical address because look-up time in an inverted page table may be significantly higher when compared to a simple page table.
Question 140
A system has a 24-bit processor and uses a page size of 2KB. Considering that all the registers in the processor are limited to 24 bits, how many entries may be expected in the page table?
A
8192
B
4096
C
2048
D
None of the above.
       Operating-Systems       Memory-Management       HCU PHD CS MAY 2013
Question 140 Explanation: 
Let's first find the offset bits:
2 KB = 2^11 B, so the offset is 11 bits.
Hence the number of page table entries is 2^(24-11) = 2^13 = 8192.
Question 141
Contiguous memory allocation having variable size partition suffers from:
A
External Fragmentation
B
Internal Fragmentation
C
Both External and Internal Fragmentation
D
None of the options
       Operating-Systems       Memory-Management       NIC-NIELIT Scientist-B 2020
Question 141 Explanation: 
1. Fixed partitioning suffers from internal as well as external fragmentation.
2. Variable partitioning suffers only from external fragmentation, not internal; this is why the paging concept was introduced, to avoid external fragmentation. External fragmentation is a more serious problem than internal fragmentation.
Question 142
A 26-bit address bus has maximum accessible memory capacity of _____.
A
64 MB
B
16 MB
C
1 GB
D
4 GB
       Operating-Systems       Memory-Management       NIC-NIELIT STA 2020
Question 142 Explanation: 
The maximum accessible memory capacity is 2^26 bytes = 64 MB.