Memory Management

Question 1
In the context of operating systems, which of the following statements is/are correct with respect to paging?
A
Page size has no impact on internal fragmentation.
B
Multilevel paging is necessary to support pages of different sizes.
C
Paging incurs memory overheads.
D
Paging helps solve the issue of external fragmentation.
Question 1 Explanation: 
  A. False. A large page size may lead to higher internal fragmentation.
  B. False. Supporting pages of different sizes is a property of the instruction set architecture/MMU; multilevel paging is not necessary for it.
  C. True. The page table itself has to be stored in main memory, which is a memory overhead.
  D. True. Paging avoids external fragmentation.
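To see concretely why statement A is false, here is a small illustrative Python sketch (the process size and page sizes are made-up example values):

def internal_fragmentation(process_size, page_size):
    # Unused bytes in the last page, assuming whole pages are allocated
    remainder = process_size % page_size
    return 0 if remainder == 0 else page_size - remainder

# Example: a 10,000-byte process wastes more space with larger pages
for page in (1024, 4096, 16384):
    print(page, internal_fragmentation(10_000, page))
# 1024 -> 240, 4096 -> 2288, 16384 -> 6384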
Question 2

A certain moving arm disk storage, with one head, has the following specifications.

 Number of tracks/recording surface = 200
 Disk rotation speed = 2400 rpm
 Track storage capacity = 62,500 bits 

The average latency of this device is P msec and the data transfer rate is Q bits/sec.
Write the value of P and Q.

A
P = 12.5, Q = 2.5×10^6
Question 2 Explanation: 
RPM = 2400
So the disk completes 2400 rotations in 60 seconds.
Average latency is the time for half a rotation
= 0.5 × 60/2400 s
= 12.5 ms
In one full rotation, entire data in a track can be transferred. Track storage capacity = 62500 bits
So, disk transfer rate
= 62500 × 2400/60
= 2.5 × 10^6 bps
So,
P = 12.5, Q = 2.5×10^6
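Both values can be verified with a few lines of Python (an illustrative check, not part of the original solution):

rpm = 2400
track_bits = 62_500

rotation_time_s = 60 / rpm                      # one full rotation
avg_latency_ms = rotation_time_s / 2 * 1000     # half a rotation, in ms
transfer_rate_bps = track_bits * rpm / 60       # one full track per rotation

print(avg_latency_ms)       # 12.5       -> P
print(transfer_rate_bps)    # 2500000.0  -> Q = 2.5 × 10^6 bits/sec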
Question 3

A computer installation has 1000K of main memory. The jobs arrive and finish in the following sequence.

 Job 1 requiring 200k arrives
 Job 2 requiring 350k arrives
 Job 3 requiring 300k arrives
 Job 1 finishes
 Job 4 requiring 120k arrives 
 Job 5 requiring 150k arrives
 Job 6 requiring 80k arrives 

(a) Draw the memory allocation table using Best Fit and First fit algorithms.
(b) Which algorithm performs better for this sequence?

A
Theory Explanation.
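Since no worked answer is given above, the following Python sketch simulates both policies under common textbook assumptions (holes are kept in address order, first fit scans from the lowest address, best fit picks the smallest adequate hole; coalescing of freed neighbours is omitted because it never arises in this sequence):

def allocate(holes, size, policy):
    # holes: list of [start, size] in address order; splits the chosen hole
    fits = [h for h in holes if h[1] >= size]
    if not fits:
        return None
    hole = fits[0] if policy == "first" else min(fits, key=lambda h: h[1])
    start = hole[0]
    hole[0] += size
    hole[1] -= size
    if hole[1] == 0:
        holes.remove(hole)
    return start

def free(holes, start, size):
    holes.append([start, size])
    holes.sort()                 # keep holes in address order

for policy in ("first", "best"):
    holes = [[0, 1000]]          # 1000K of memory, addresses in K
    j1 = allocate(holes, 200, policy)
    allocate(holes, 350, policy)             # job 2
    allocate(holes, 300, policy)             # job 3
    free(holes, j1, 200)                     # job 1 finishes
    for job, size in ((4, 120), (5, 150), (6, 80)):
        print(policy, "fit: job", job, "->", allocate(holes, size, policy))

Running it, first fit places jobs 4, 5 and 6 (at 0K, 850K and 120K), while best fit has no hole left for job 6 (it prints None); so first fit performs better for this sequence.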
Question 4

The capacity of a memory unit is defined by the number of words multiplied by the number of bits/word. How many separate address and data lines are needed for a memory of 4K × 16?

A
10 address, 16 data lines
B
11 address, 8 data lines
C
12 address, 16 data lines
D
12 address, 12 data lines
Question 4 Explanation: 
Memory unit size = 2^m × n
m = no. of address lines
n = no. of data lines
Given, 4K × 16 = 2^12 × 16
Address lines = 12
Data lines = 16
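A quick sanity check (illustrative):

import math

words, bits_per_word = 4 * 1024, 16
print(int(math.log2(words)), bits_per_word)   # 12 address lines, 16 data lines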
Question 5

In a paged segmented scheme of memory management, the segment table itself must have a page table because

A
the segment table is often too large to fit in one page
B
each segment is spread over a number of pages
C
segment tables point to page table and not to the physical locations of the segment
D
the processor’s description base register points to a page table
E
Both A and B
Question 5 Explanation: 
The segment table is often too large to fit in one page, so the segment table itself is divided into pages and a page table is maintained for those pages.
Note that segmented paging is different from paged segmentation.
Question 6

Consider a paging system that uses a 1-level page table residing in main memory and a TLB for address translation. Each main memory access takes 100 ns and TLB lookup takes 20 ns. Each page transfer to/from the disk takes 5000 ns. Assume that the TLB hit ratio is 95%, page fault rate is 10%. Assume that for 20% of the total page faults, a dirty page has to be written back to disk before the required page is read in from disk. TLB update time is negligible. The average memory access time in ns (round off to 1 decimal places) is ______.

A
154.5 ns
Question 6 Explanation: 
M=100ns
T=20ns
D=5000ns
h=0.95
p=0.1, 1-p=0.9
d=0.2, 1-d=0.8
EMAT = h×(T+M) + (1−h)×[T + (1−p)×2M + p×((1−d)×(D+M) + d×(2D+M))]
= 0.95×(20+100) + 0.05×[20 + 0.9×200 + 0.1×(0.8×5100 + 0.2×10100)]
= 114 + 0.05×810
= 154.5 ns
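The same computation, spelled out in Python (a sketch of the formula above; the variable names are ours):

M, T, D = 100, 20, 5000        # ns: memory access, TLB lookup, page transfer
h, p, d = 0.95, 0.10, 0.20     # TLB hit ratio, page fault rate, dirty fraction

no_fault    = 2 * M            # page-table access + memory access
clean_fault = D + M            # read the page in, then access memory
dirty_fault = 2 * D + M        # write back dirty page, read page in, access

tlb_miss = T + (1 - p) * no_fault + p * ((1 - d) * clean_fault + d * dirty_fault)
emat = h * (T + M) + (1 - h) * tlb_miss
print(emat)                    # 154.5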
Question 7

Consider allocation of memory to a new process. Assume that none of the existing holes in the memory will exactly fit the process’s memory requirement. Hence, a new hole of smaller size will be created if allocation is made in any of the existing holes. Which one of the following statements is TRUE?

A
The hole created by worst fit is always larger than the hole created by first fit.
B
The hole created by best fit is never larger than the hole created by first fit.
C
The hole created by first fit is always larger than the hole created by next fit.
D
The hole created by next fit is never larger than the hole created by best fit.
Question 7 Explanation: 
The size of the hole created using best fit is never greater than the size created by first fit, because best fit chooses the smallest available partition that fits the process, whereas first fit and next fit simply take the first adequate hole they encounter, regardless of its size.
Question 8

Assume that in a certain computer, the virtual addresses are 64 bits long and the physical addresses are 48 bits long. The memory is word addressable. The page size is 8 kB and the word size is 4 bytes. The Translation Look-aside Buffer (TLB) in the address translation path has 128 valid entries. At most how many distinct virtual addresses can be translated without any TLB miss?

A
8×2^20
B
4×2^20
C
16×2^10
D
256×2^10
Question 8 Explanation: 
A TLB has 128 valid entries.
So, it can refer to 27 pages.
Each page size is 8 kB & word is 4 bytes.
So, the total addresses of virtual address spaces that can be addressed
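The arithmetic as a quick Python check (illustrative):

tlb_entries = 128                             # = 2^7 pages covered
page_bytes, word_bytes = 8 * 1024, 4
words_per_page = page_bytes // word_bytes     # 2^11 word addresses per page
reach = tlb_entries * words_per_page
print(reach, reach == 256 * 2**10)            # 262144 True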
Question 9

(a) The access times of the main memory and the cache memory, in a computer system, are 500 ns and 50 ns, respectively. It is estimated that 80% of the main memory requests are for read and the rest are for write. The hit ratio for read accesses only is 0.9 and a write-through policy (where both main and cache memories are updated simultaneously) is used. Determine the average access time of the memory system.
(b) Three devices A, B and C are connected to the bus of a computer; input/output transfers for all three devices use interrupt control. Three interrupt request lines INTR1, INTR2 and INTR3 are available, with priority of INTR1 > priority of INTR2 > priority of INTR3.
Draw a schematic of the priority logic, using an interrupt mask register, in which Priority of A > Priority of B > Priority of C.

A
Theory Explanation.
Question 10

A disk has 200 tracks (numbered 0 through 199). At a given time, it was servicing the request of reading data from track 120, and at the previous request, service was for track 90. The pending requests (in order of their arrival) are for track numbers

 30 70 115 130 110 80 20 25 
How many times will the head change its direction for the disk scheduling policies SSTF (Shortest Seek Time First) and FCFS (First Come First Serve)?

A
2 and 3
B
3 and 3
C
3 and 4
D
4 and 4
Question 10 Explanation: 
SSTF: (90) 120 115 110 130 80 70 30 25 20
Direction changes at 120, 110, 130.
FCFS: (90) 120 30 70 115 130 110 80 20 25
Direction changes at 120, 30, 130, 20.
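A small Python sketch that counts the direction changes for both service orders (illustrative):

def direction_changes(order):
    changes, prev = 0, None
    for a, b in zip(order, order[1:]):
        cur = 1 if b > a else -1
        if prev is not None and cur != prev:
            changes += 1
        prev = cur
    return changes

sstf = [90, 120, 115, 110, 130, 80, 70, 30, 25, 20]
fcfs = [90, 120, 30, 70, 115, 130, 110, 80, 20, 25]
print(direction_changes(sstf), direction_changes(fcfs))   # 3 4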
Question 11

The storage area of a disk has innermost diameter of 10 cm and outermost diameter of 20 cm. The maximum storage density of the disk is 1400 bits/cm. The disk rotates at a speed of 4200 RPM. The main memory of a computer has 64-bit word length and 1µs cycle time. If cycle stealing is used for data transfer from the disk, the percentage of memory cycles stolen for transferring one word is

A
0.5%
B
1%
C
5%
D
10%
Question 11 Explanation: 
Let the data transfer time for one word be x μs; the memory cycle time is 1 μs. The percentage of memory cycles stolen is (1/x) × 100.
Maximum storage density is given, so consider the innermost track to get the track capacity:
= 2 × 3.14 × 5 × 1400 bits
= 3.14 × 14000 bits
Rotation time = 60/4200 s = 1/70 s
Therefore, to read 64 bits, the time required is
(10^6 × 64)/(70 × 3.14 × 14000) μs ≈ 20.8 μs
As the memory cycle time given is 1 μs,
% of memory cycles stolen = (1/20.8) × 100 ≈ 5%
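The same numbers in Python (illustrative; π is approximated as 3.14 to match the solution):

track_bits = 2 * 3.14 * 5 * 1400       # innermost circumference × density
rotation_s = 60 / 4200                 # one full rotation
word_time_us = rotation_s * (64 / track_bits) * 1e6   # time to read 64 bits
print(word_time_us)                    # ≈ 20.8 µs
print(1 / word_time_us * 100)          # ≈ 4.8% -> about 5% of cycles stolen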
Question 12

Consider a pipeline processor with 4 stages S1 to S4. We want to execute the following loop:
for (i = 1; i <= 1000; i++)
{ I1; I2; I3; I4; }
where the time taken (in ns) by instructions I1 to I4 for stages S1 to S4 are given below:

       S1   S2   S3   S4
I1:    1    2    1    2
I2:    2    1    2    1
I3:    1    1    2    1
I4:    2    1    2    1 
The output of I1 for i=2 will be available after

A
11 ns
B
12 ns
C
13 ns
D
28 ns
Question 12 Explanation: 
[The space-time diagram from the original solution is omitted here.]
Tracing the four instructions through S1-S4 with the given per-stage times, the result of I1 for i=2 comes out of S4 at t = 13 ns. So, the total time would be 13 ns.
Question 13

Which one of the following is NOT shared by the threads of the same process?

A
Stack
B
Address Space
C
File Descriptor Table
D
Message Queue
Question 13 Explanation: 
Threads cannot share the stack: each thread has its own function-call sequence, so each maintains a separate stack.
Question 14

A disk has 8 equidistant tracks. The diameters of the innermost and outermost tracks are 1 cm and 8 cm respectively. The innermost track has a storage capacity of 10 MB.
If the disk has 20 sectors per track and is currently at the end of the 5th sector of the inner-most track and the head can move at a speed of 10 meters/sec and it is rotating at constant angular velocity of 6000 RPM, how much time will it take to read 1 MB contiguous data starting from the sector 4 of the outer-most track?

A
13.5 ms
B
10 ms
C
9.5 ms
D
20 ms
Question 14 Explanation: 
Radius of inner track is 0.5cm (where the head is standing) and the radius of outermost track is 4cm.
So, the header has to seek (4 - 0.5) = 3.5cm.
For 10m ------- 1s
1m ------- 1/10 s
100cm ------- 1/(10×100) s
3.5cm ------- 3.5/1000 s = 3.5ms
So, the header will take 3.5ms.
Now, the angular velocity is constant and the head is at the end of the 5th sector. To reach the front of the 4th sector it must traverse 18 sectors (sectors 6 through 20, then 1 through 3).
6000 rotations in 60000 ms.
1 rotation in 10 ms (time to traverse 20 sectors).
So, to traverse 18 sectors, it takes 9 ms.
In 10ms, 10MB data is read.
So, 1MB data can be read in 1ms.
∴ Total time = 1+9+3.5 = 13.5ms
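The three components, computed in Python (illustrative):

seek_ms = 3.5 / 1000 * 1000            # 3.5 cm at 10 m/s = 1000 cm/s
rotation_ms = 60_000 / 6000            # 10 ms per rotation at 6000 RPM
latency_ms = rotation_ms * 18 / 20     # pass 18 of 20 sectors
read_ms = rotation_ms * 1 / 10         # 1 MB of the 10 MB track
print(seek_ms + latency_ms + read_ms)  # 13.5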
Question 15

A disk has 8 equidistant tracks. The diameters of the innermost and outermost tracks are 1 cm and 8 cm respectively. The innermost track has a storage capacity of 10 MB.
What is the total amount of data that can be stored on the disk if it is used with a drive that rotates it with (i) Constant Linear Velocity (ii) Constant Angular Velocity?

A
(i) 80 MB (ii) 2040 MB
B
(i) 2040 MB (ii) 80 MB
C
(i) 80 MB (ii) 360 MB
D
(i) 360 MB (ii) 80 MB
Question 15 Explanation: 
Constant linear velocity:
Diameter of inner track = d = 1 cm
Circumference of inner track
= 2 * 3.14 * d/2
= 3.14 cm
Storage capacity = 10 MB (given)
Circumference of all 8 equidistant tracks
= 2 × 3.14 × (0.5 + 1 + 1.5 + 2 + 2.5 + 3 + 3.5 + 4)
= 113.04 cm
Here, 3.14 cm holds 10 MB at the maximum density,
therefore 1 cm holds 10/3.14 ≈ 3.18 MB.
So, 113.04 cm holds
113.04 × 3.18 ≈ 360 MB
So, the total amount of data that can be held on the disk = 360 MB.
For constant angular velocity:
In case of CAV, the disk rotates at a constant angular speed. Same rotation time is taken by all the tracks.
Total amount of data that can be stored on the disk
= 8 * 10 = 80 MB
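Both capacities in Python (illustrative; π approximated as 3.14 as in the solution):

radii = [0.5 + 0.5 * i for i in range(8)]    # 0.5 .. 4.0 cm, equidistant
density = 10 / (2 * 3.14 * radii[0])         # MB per cm on the innermost track

clv = sum(2 * 3.14 * r for r in radii) * density   # every track at max density
cav = 8 * 10                                 # every track holds what the innermost holds
print(round(clv), cav)                       # 360 80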
Question 16

Consider a 2-way set associative cache memory with 4 sets and a total of 8 cache blocks (0-7), and a main memory with 128 blocks (0-127). What memory blocks will be present in the cache after the following sequence of memory block references, if the LRU policy is used for cache block replacement? Assume that initially the cache did not have any memory block from the current job.

 0 5 3 9 7 0 16 55 
A
0 3 5 7 16 55
B
0 3 5 7 9 16 55
C
0 5 7 9 16 55
D
3 5 7 9 16 55
Question 16 Explanation: 
The cache is 2-way associative, so in a set, there can be 2 block present at a time.
So,

Since, each set has only 2 places, 3 will be thrown out as its the least recently used block. So final content of cache will be
0 5 7 9 16 55
Hence, answer is (C).
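A compact LRU simulation of this reference string (an illustrative sketch; OrderedDict keeps per-set recency order):

from collections import OrderedDict

NUM_SETS, WAYS = 4, 2
sets = [OrderedDict() for _ in range(NUM_SETS)]

for block in [0, 5, 3, 9, 7, 0, 16, 55]:
    s = sets[block % NUM_SETS]
    if block in s:
        s.move_to_end(block)         # hit: mark as most recently used
    else:
        if len(s) == WAYS:
            s.popitem(last=False)    # evict the least recently used block
        s[block] = True

print(sorted(b for s in sets for b in s))   # [0, 5, 7, 9, 16, 55]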
Question 17

Consider six memory partitions of sizes 200 KB, 400 KB, 600 KB, 500 KB, 300 KB and 250 KB, where KB refers to kilobyte. These partitions need to be allotted to four processes of sizes 357 KB, 210 KB, 468 KB and 491 KB, in that order. If the best fit algorithm is used, which partitions are NOT allotted to any process?

A
200 KB and 300 KB
B
200 KB and 250 KB
C
250 KB and 300 KB
D
300 KB and 400 KB
Question 17 Explanation: 

Since the best fit algorithm is used, each process takes the smallest partition that is large enough:
357 KB will occupy 400 KB
210 KB will occupy 250 KB
468 KB will occupy 500 KB
491 KB will occupy 600 KB
So, partitions 200 KB and 300 KB are NOT allotted to any process.
Question 18
Consider a memory management system that uses a page size of 2 KB. Assume that both the physical and virtual addresses start from 0. Assume that the pages 0, 1, 2, and 3 are stored in the page frames 1, 3, 2, and 0, respectively. The physical address (in decimal format) corresponding to the virtual address 2500 (in decimal format) is _________
A
6596
Question 18 Explanation: 
Page number = Virtual address / Page size = 2500 / 2048 = 1
Offset = Virtual address mod Page size = 2500 mod 2048 = 452
Frame number for page 1 is 3; page size = 2048 bytes.
Physical address = (Frame number × Page size) + Offset = (3 × 2048) + 452 = 6596
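The translation in Python (illustrative):

PAGE = 2048
page_to_frame = {0: 1, 1: 3, 2: 2, 3: 0}

virtual = 2500
page, offset = divmod(virtual, PAGE)          # page 1, offset 452
print(page_to_frame[page] * PAGE + offset)    # 6596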
Question 19
Consider a system with byte-addressable memory, 32 bit logical addresses, 4 kilobyte page size and page table entries of 4 bytes each. The size of the page table in the system in megabytes is ___________
A
2
B
4
C
8
D
16
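Question 19 Explanation: 
Number of pages = 2^32 / 2^12 = 2^20, and each page table entry is 4 bytes, so the page table size = 2^20 × 4 B = 4 MB (option B). A quick illustrative check in Python:

logical_space = 2**32                 # bytes (32-bit logical addresses)
page_size = 4 * 1024                  # bytes
pte_size = 4                          # bytes per page table entry

entries = logical_space // page_size  # 2^20 pages
print(entries * pte_size / 2**20)     # 4.0 MB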
Question 20

Let the page fault service time be 10 ms in a computer with average memory access time being 20 ns. If one page fault is generated for every 10^6 memory accesses, what is the effective access time for the memory?

A
21 ns
B
30 ns
C
23 ns
D
35 ns
Question 20 Explanation: 
p = page fault rate = 1/10^6
EAT = p × page fault service time + (1 − p) × memory access time
= (1/10^6) × 10×10^6 ns + (1 − 1/10^6) × 20 ns ≈ 30 ns
Question 21

A computer uses 46-bit virtual address, 32-bit physical address, and a three-level paged page table organization. The page table base register stores the base address of the first-level table (T1), which occupies exactly one page. Each entry of T1 stores the base address of a page of the second-level table (T2). Each entry of T2 stores the base address of a page of the third-level table (T3). Each entry of T3 stores a page table entry (PTE). The PTE is 32 bits in size. The processor used in the computer has a 1 MB 16-way set associative virtually indexed physically tagged cache. The cache block size is 64 bytes.

What is the size of a page in KB in this computer?

A
2
B
4
C
8
D
16
Question 21 Explanation: 
Let the page size be 2^p B.
Then the number of entries in one page is 2^p/4, where 4 B is the page table entry size given in the question.
The virtual address space covered equals (no. of entries per page)^3 × page size, since each of the 3 levels multiplies the reach:
(2^p/4) × (2^p/4) × (2^p/4) × 2^p = 2^46
2^(4p−6) = 2^46
4p = 52, so p = 13
∴ Page size = 2^13 B = 8 KB
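The derivation can also be brute-forced in a few lines of Python (illustrative):

VAS_BITS, PTE = 46, 4

for p in range(8, 20):                       # candidate page-offset widths
    entries_per_page = 2**p // PTE           # PTEs that fit in one page
    covered = entries_per_page**3 * 2**p     # 3 levels, each entry maps a page
    if covered == 2**VAS_BITS:
        print(p, 2**p // 1024, "KB")         # 13 8 KB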
Question 22

A computer uses 46-bit virtual address, 32-bit physical address, and a three-level paged page table organization. The page table base register stores the base address of the first-level table (T1), which occupies exactly one page. Each entry of T1 stores the base address of a page of the second-level table (T2). Each entry of T2 stores the base address of a page of the third-level table (T3). Each entry of T3 stores a page table entry (PTE). The PTE is 32 bits in size. The processor used in the computer has a 1 MB 16-way set associative virtually indexed physically tagged cache. The cache block size is 64 bytes.

What is the minimum number of page colours needed to guarantee that no two synonyms map to different sets in the processor cache of this computer?

A
2
B
4
C
8
D
16
Question 22 Explanation: 
[Figures omitted: architecture of a physically indexed cache, and of a virtually indexed, physically tagged (VIPT) cache.]
VIPT caches are subject to the aliasing/synonym effect.
Alias: the same physical address is mapped to multiple virtual addresses.
Synonym: different virtual addresses are mapped to the same physical address (for data sharing).
Synonyms should land in the same cache set to avoid write-update problems.
In our problem, VA = 46 bits.
The cache is 1 MB, 16-way, with 64 B blocks, so the number of sets = 2^20/(2^6 × 2^4) = 2^10. Together with the 6 block-offset bits, the low 16 bits of the address are used for indexing into the cache.
For two synonyms to fall in the same set, the 16 index bits must be identical in the VA and the PA.
Assume the physical pages are coloured, and each set holds pages of only one colour, so that synonyms land in the same set.
Since the page size is 8 KB, the page offset is 13 bits. These 13 bits are not translated during VA→PA, so 13 of the 16 index bits are automatically identical; the remaining 16 − 13 = 3 bits must be made the same.
3 bits give 2^3 = 8 combinations, each of which can map to a different group of sets, so 8 different page colours are needed.
In a physically indexed cache, indexing uses physical address bits and the mapping is one-to-one. In a VIPT cache, the index comes from the (offset + set) bits of the virtual address; because of the extra untranslated bits the mapping is not one-to-one, so pages whose addresses agree in those bits are given the same colour and kept in the same set to avoid write-update problems.
Question 23

The memory access time is 1 nanosecond for a read operation with a hit in cache, 5 nanoseconds for a read operation with a miss in cache, 2 nanoseconds for a write operation with a hit in cache and 10 nanoseconds for a write operation with a miss in cache. Execution of a sequence of instructions involves 100 instruction fetch operations, 60 memory operand read operations and 40 memory operand write operations. The cache hit-ratio is 0.9.  The average memory access time (in nanoseconds) in executing the sequence of instructions is   __________.

A
1.68
B
1.69
C
1.70
D
1.71
Question 23 Explanation: 
Total operations = 100 instruction fetches + 60 memory operand reads + 40 memory operand writes
= 200 operations
Time for 100 instruction fetches (equivalent to reads) = 90×1 ns + 10×5 ns = 140 ns
Time for 60 operand reads = 0.9×60×1 ns + 0.1×60×5 ns = 54 ns + 30 ns = 84 ns
Time for 40 operand writes = 0.9×40×2 ns + 0.1×40×10 ns = 72 ns + 40 ns = 112 ns
Total time for 200 operations = 140 + 84 + 112 = 336 ns
∴ Average memory access time = 336 ns / 200 = 1.68 ns
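The weighted sum in Python (illustrative):

h = 0.9
ops = [(100, 1, 5),    # instruction fetches: read hit 1 ns, miss 5 ns
       (60, 1, 5),     # operand reads
       (40, 2, 10)]    # operand writes: hit 2 ns, miss 10 ns

total = sum(n * (h * hit + (1 - h) * miss) for n, hit, miss in ops)
print(total, total / 200)    # 336.0 1.68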
Question 24

Consider a paging hardware with a TLB. Assume that the entire page table and all the pages are in the physical memory. It takes 10 milliseconds to search the TLB and 80 milliseconds to access the physical memory. If the TLB hit ratio is 0.6, the effective memory access time (in milliseconds) is  _________.

A
122
B
123
C
124
D
125
Question 24 Explanation: 
Tavg = TLB access time + miss ratio of TLB × memory access time + memory access time
= 10 + 0.4 × 80 + 80
= 10 + 32 + 80
= 122 ms
Question 25
What does compaction refer to?
A
A technique for overcoming internal fragmentation
B
A paging technique
C
A technique for overcoming external fragmentation
D
A technique for compressing the data
Question 25 Explanation: 
Compaction is used to reduce the external fragmentation.
Question 26
The operating system and the other processes are protected from being modified by an already running process because
A
They run at different time instants and not in parallel
B
They are in different logical addresses
C
They use a protection algorithm in the scheduler
D
Every address generated by the CPU is being checked against the relocation and limit parameters
Question 26 Explanation: 
Relocation and limit registers are used to protect user processes from each other, and to prevent modification of operating-system code and data. The base (relocation) register contains the value of the smallest physical address; the limit register contains the range of logical addresses, and each logical address must be less than the limit register.
Question 27
Which of the following is solution for external fragmentation of disk space during contiguous file allocation?
A
Dynamic storage allocation
B
Disk space compaction
C
File Allocation Table
D
garbage collection
Question 27 Explanation: 
The solution for external fragmentation is disk space compaction. In the compaction method, all the small holes at different places are combined to make one big hole.
Question 28
Which of the following methods is used to control thrashing in demand paging systems?
A
estimating process-wise demand for frames using working set model and limiting the total demand
B
controlling the page fault frequency within safe range by adjusting degree of multi-programming
C
Banker’s Algorithm
D
estimating process-wise demand for frames using working set model and limiting the total demand and controlling the page fault frequency within safe range by adjusting degree of multi-programming but not Banker’s Algorithm
Question 28 Explanation: 
Banker's algorithm is a deadlock avoidance scheme and is not used for thrashing. Thrashing is the situation where an excessive number of page faults occur. To avoid it, we estimate the process-wise demand for frames using the working set model, and control the page fault frequency within a safe range by adjusting the degree of multiprogramming.
Question 29

If the executing program size is greater than the existing RAM of a computer, it is still possible to execute the program, if the OS supports

A
Synchronization
B
fault tolerance
C
paging system
D
Scheduling
Question 29 Explanation: 
In a paging system we can execute a program whose size is greater than the existing RAM of the computer, since only the currently needed pages must reside in memory.
Question 30

Variable partition memory management technique with compaction results in

A
reduction of fragmentation
B
minimal wastage
C
segment sharing
D
None of the given options
Question 30 Explanation: 
Variable partition memory management with compaction results in a reduction of fragmentation, by removing external fragmentation.
Question 31

Consider a system with page fault service time (S) = 100 ns, main memory access time (M) = 20 ns, and page fault rate (P) = 65%. Calculate the effective memory access time.

A
62 ns
B
82 ns
C
80 ns
D
72 ns
Question 31 Explanation: 
Let page fault rate = p
EMAT = (1 − p) × M + p × S
= 0.35 × 20 + 0.65 × 100
= 7 + 65
= 72 ns
Question 32

Page fault occurs when

A
The page is in main memory
B
The page has an address, which cannot be loaded
C
The page is not in main memory
D
The page is not in cache memory
Question 32 Explanation: 
A page fault occurs when a program attempts to access a block of memory that is not stored in the physical memory, or RAM. The fault notifies the operating system that it must locate the data in virtual memory, then transfer it from the storage device, such as an HDD or SSD, to the system RAM.
Question 33

The working set model is used in memory management to implement the concept of:

A
Principle of locality
B
Thrashing
C
Paging
D
Segmentation
Question 33 Explanation: 
The working set model states that a process can be in RAM if and only if all of the pages that it is currently using (often approximated by the most recently used pages) can be in RAM. The model is an all or nothing model, meaning if the pages it needs to use increases, and there is no room in RAM, the process is swapped out of memory to free the memory for other processes to use.
Or
The working set is the memory that's being accessed "frequently" by an application or set of applications.
Principle of Locality does the same thing as explained above.
In computer science, locality of reference, also known as the principle of locality, is the tendency of a processor to access the same set of memory locations repetitively over a short period of time. There are two basic types of reference locality – temporal and spatial locality. Temporal locality refers to the reuse of specific data and/or resources within a relatively small time duration. Spatial locality (also termed data locality) refers to the use of data elements within relatively close storage locations.
Question 34

What should be the access time of the cache memory in order to achieve a 98% hit ratio, if the memory access time is 200 ns and the effective access time required is 20 ns?

A
16ns
B
18ns
C
20ns
D
10ns
Question 34 Explanation: 
EMAT = 0.98 × Hit time of cache + 0.02 × (200 + Hit time of cache)
20 = 0.98 × H + 0.02 (200 + H)
20 = 0.98H + 0.02 × 200 + 0.02H
20 = 0.98H + 4 + 0.02H
16 = H
∴ Access time of cache memory = 16 ns
Question 35

Which one from the following is a false statement about memory management?

A
Thrashing improves system performance.
B
Swapping increases system overhead.
C
Overhead is more in non-contiguous memory allocation compared to contiguous allocation.
D
Swapping is more effective in non-contiguous memory allocation.
Question 35 Explanation: 
Thrashing decreases system performance, because in thrashing a large number of page faults occur.
Question 36

Consider the following three statements about memory management:
(I) Memory fragmentation results in poor utilization of memory.
(II) Memory fragmentation is the area of memory, which is allocated to a process but unused.
(III) Demand paging can increase the degree of multiprogramming.

A
Only (I) and (III)
B
Only (ii) and (III)
C
All (I), (II) and (III)
D
None from (I), (II) and (III)
Question 36 Explanation: 
Memory fragmentation is when most of your memory is allocated in a large number of non-contiguous blocks, or chunks, leaving a good percentage of your total memory unallocated but unusable for most typical scenarios. This results in out-of-memory exceptions or allocation errors (i.e., malloc returns null). Hence statements I and II are true.
Also, demand paging increases the degree of multiprogramming. Hence statement III is also true.
Question 37
The process of assigning load addresses to the various parts of a program, and adjusting the code and data in the program to reflect the assigned addresses, is called
A
Assembly
B
Parsing
C
Relocation
D
Symbol Resolution
Question 37 Explanation: 
Relocation is the process of assigning load addresses for position-dependent code and data of a program and adjusting the code and data to reflect the assigned addresses.
Question 38
Thrashing is
A
A mechanism used by OS to boost its performance
B
A phenomenon where CPU utilization is very poor
C
A concept to improve CPU utilization
D
None
Question 38 Explanation: 
If the system has to swap pages at such a high rate that a major chunk of CPU time is spent on swapping, this state is known as thrashing. Effectively, during thrashing the CPU spends less time on actual productive work and more time on swapping.
If you have registered and made your payment please contact solutionsadda.in@gmail.com to get access