Operating-Systems
Question 1 |
Consider three concurrent processes P1, P2 and P3 as shown below, which access a shared variable D that has been initialized to 100.
The processes are executed on a uniprocessor system running a time-shared operating system. If the minimum and maximum possible values of D after the three processes have completed execution are X and Y respectively, then the value of Y–X is _______.
10 | |
40 | |
60 | |
80 |

Minimum value:
P2 reads D = 100 and is preempted.
P1 executes D = D + 20, making D = 120.
P3 executes D = D + 10, making D = 130.
P2 resumes with its stale value D = 100 and executes
D = D - 50 = 100 - 50 = 50
P2 writes D = 50 as the final value. This is the minimum, X = 50.
Maximum value:
P2 reads D = 100 and is preempted before writing.
P1 reads D = 100 and writes D = 120; P3 then reads D = 120 and is preempted before writing.
P2 now writes D = 50, but P3's pending write overwrites it: P3 writes D = 120 + 10 = 130. This is the maximum, Y = 130.
So, Y - X = 130 - 50 = 80.
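The race can be reproduced with threads standing in for the three processes. This is only an illustrative sketch (pthreads; the names p1, p2, p3 and the explicit temporary t are mine, not part of the question): each update is written as a separate read, compute and write so that the interleavings are visible, and the printed result can be anything between the minimum 50 and the maximum 130 depending on scheduling.

#include <stdio.h>
#include <pthread.h>

int D = 100;                      /* shared variable, initialised to 100 */

void *p1(void *arg) { int t = D; t = t + 20; D = t; return NULL; }  /* D = D + 20 */
void *p2(void *arg) { int t = D; t = t - 50; D = t; return NULL; }  /* D = D - 50 */
void *p3(void *arg) { int t = D; t = t + 10; D = t; return NULL; }  /* D = D + 10 */

int main(void)
{
    pthread_t t1, t2, t3;
    pthread_create(&t1, NULL, p1, NULL);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_create(&t3, NULL, p3, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_join(t3, NULL);
    printf("D = %d\n", D);        /* 50 <= D <= 130 depending on the interleaving */
    return 0;
}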
Question 2 |
The following C program is executed on a Unix/Linux system:
#include <unistd.h>
int main()
{
    int i;
    for (i = 0; i < 10; i++)
        if (i % 2 == 0)
            fork();
    return 0;
}
The total number of child processes created is _____.
26 | |
33 | |
31 | |
28 |
#include <unistd.h>
int main( )
{
int i;
fork( );
fork( );
fork( );
fork( );
fork( );
}
n fork statements create 2^n - 1 child processes.
Here the loop calls fork() for i = 0, 2, 4, 6 and 8, so n = 5 ⇒ we have 2^5 - 1 = 31 child processes.
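One way to check the count empirically (a sketch, not part of the original solution): have every process print exactly one line and count the lines.

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int i;
    for (i = 0; i < 10; i++)
        if (i % 2 == 0)           /* fork on i = 0, 2, 4, 6, 8: five forks */
            fork();
    printf("pid %d\n", (int)getpid());   /* each process prints one line */
    return 0;
}

Running it as ./a.out | wc -l prints 32, i.e., 2^5 processes in total: the original process plus 31 children.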
Question 3 |
Assume that in a certain computer, the virtual addresses are 64 bits long and the physical addresses are 48 bits long. The memory is word addressable. The page size is 8 kB and the word size is 4 bytes. The Translation Look-aside Buffer (TLB) in the address translation path has 128 valid entries. At most how many distinct virtual addresses can be translated without any TLB miss?
8×2^20 | |
4×2^20 | |
16×2^10 | |
256×2^10 |
The TLB has 128 = 2^7 valid entries, so it can map 2^7 pages without a miss.
Each page is 8 kB and each word is 4 bytes, so a page holds 2^13 / 2^2 = 2^11 words, i.e., 2^11 distinct word addresses.
So, the total number of virtual addresses that can be translated without any TLB miss = 2^7 × 2^11 = 2^18 = 256 × 2^10.

Question 4 |
Consider the following four processes with arrival times (in milliseconds) and their length of CPU bursts (in milliseconds) as shown below:

These processes are run on a single processor using preemptive Shortest Remaining Time First scheduling algorithm. If the average waiting time of the processes is 1 millisecond, then the value of Z is _____.
2 | |
7 | |
1 | |
4 |

At this point P4 arrives with burst 'Z' & P3 is in queue with burst 3.
P1 and P2 have already executed, with P1 having waited 1 unit and P2 0 units.
Hence, Avg = (0 + 1 + x)/4 = 1
⇒ x = 3, i.e., the total remaining waiting time (of P3 and P4) must be 3. This is what happens if Z = 2.
It executes and completes at 6.
P3 will wait totally for 3units.
Hence, Avg=1.

Z=2
Question 5 |
The index node (inode) of a Unix-like file system has 12 direct, one single-indirect and one double-indirect pointers. The disk block size is 4 kB, and the disk block address is 32-bits long. The maximum possible file size is (rounded off to 1 decimal place) ______ GB.
7.0 | |
9.0 | |
2.0 | |
4.0 |
Each disk block holds 4 kB / 4 B = 1k disk block addresses.
Max. file size
= (12 + 1k + 1k × 1k) × 4 kB
≈ (12 + 1024 + 1024 × 1024) × 4 × 1024 bytes
≈ 4 GB
Question 6 |
Consider the following snapshot of a system running n concurrent processes. Process i is holding Xi instances of a resource R, 1 ≤ i ≤ n. Assume that all instances of R are currently in use. Further, for all i, process i can place a request for at most Yi additional instances of R while holding the Xi instances it already has. Of the n processes, there are exactly two processes p and q such that Yp = Yq = 0. Which one of the following conditions guarantees that no other process apart from p and q can complete execution?
Min (Xp, Xq) ≥ Min {Yk | 1 ≤ k ≤ n, k ≠ p, k ≠ q} | |
Xp + Xq < Max {Yk | 1 ≤ k ≤ n, k ≠ p, k ≠ q} | |
Min (Xp, Xq) ≤ Max {Yk | 1 ≤ k ≤ n, k ≠ p, k ≠ q} | |
Xp + Xq < Min {Yk | 1 ≤ k ≤ n, k ≠ p, k ≠ q} |
Pi holds Xi instances.
Pi can request additional Yi instances.
We are given two processes p and q whose additional requests are zero:
Yp = Yq = 0
{Yk | 1 ≤ k ≤ n, k ≠ p, k ≠ q} means that out of 'n' processes, we are left with (n-2) process (except p&q), i.e., Yk indicates additional request of all the processes (n-2) except p & q.
Since only p and q should be able to complete, after they finish the released resources must not satisfy any other process's pending request:
Xp + Xq < Min {Yk}
Option D is correct.
There are exactly two processes p and q which do not need any additional instances of resources.
So, p and q will complete their execution and will release Xp and Xq instances of resources.
Now to guarantee that no other process apart from p and q can complete execution, the no. of instances of resources available must be less than the minimum no. of instances of resources required by any other process, i.e.,
Xp + Xq < Min {Yk | 1 ≤ k ≤ n, k ≠ p, k ≠ q}.
Question 7 |
Consider a process executing on an operating system that uses demand paging. The average time for a memory access in the system is M units if the corresponding memory page is available in memory and D units if the memory access causes a page fault. It has been experimentally measured that the average time taken for a memory access in the process is X units.
Which one of the following is the correct expression for the page fault rate experienced by the process?
(D-M)/(X-M) | |
(X-M)/(D-M) | |
(D-X)/(D-M) | |
(X-M)/(D-X) |
X = (1 - P)M + D × P
X = M - PM + DP
(X - M) = P(D - M)
⇒ P = (X - M) / (D - M)
Question 8 |
Consider a system with 3 processes that share 4 instances of the same resource type. Each process can request a maximum of K instances. Resource instances can be requested and released only one at a time. The largest value of K that will always avoid deadlock is _________.
2 | |
3 | |
4 | |
5 |
No. of resources = 4
Let's give each process one less than its maximum demand, i.e., (k - 1) resources.
So, for three processes, 3(k - 1) resources.
To avoid deadlock, provide one additional resource to any one of the processes.
∴ Total resources required to avoid deadlock in every case is 3(k - 1) + 1 = 3k - 2
Now this 3k - 2 should be less than or equal to the available number of resources, i.e.,
3k – 2 ≤ 4
k ≤ 2
So maximum value of k = 2
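The same reasoning in general form (a standard sufficient condition, written here for reference; it is not part of the original solution): with n processes, m identical resource instances and a per-process maximum demand of k, deadlock can never occur when

\[
n(k-1) + 1 \le m
\]

For n = 3 and m = 4 this again gives 3(k - 1) + 1 ≤ 4, i.e., k ≤ 2.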
Question 9 |
Consider the following solution to the producer-consumer synchronization problem. The shared buffer size is N. Three semaphores empty, full and mutex are defined with respective initial values of 0, N and 1. Semaphore empty denotes the number of available slots in the buffer, for the consumer to read from. Semaphore full denotes the number of available slots in the buffer, for the producer to write to. The placeholder variables, denoted by P, Q, R and S, in the code below can be assigned either empty or full. The valid semaphore operations are: wait() and signal().

Which one of the following assignments to P, Q, R and S will yield the correct solution?
P: full, Q: full, R: empty, S: empty | |
P: empty, Q: empty, R: full, S: full | |
P: full, Q: empty, R: empty, S: full | |
P: empty, Q: full, R: full, S: empty |
Initial: mutex = 1
empty = 0
full = N

Question 10 |
Consider a storage disk with 4 platters (numbered as 0, 1, 2 and 3), 200 cylinders (numbered as 0, 1, … , 199), and 256 sectors per track (numbered as 0, 1, … 255). The following 6 disk requests of the form [sector number, cylinder number, platter number] are received by the disk controller at the same time:
- [120, 72, 2], [180, 134, 1], [60, 20, 0], [212, 86, 3], [56, 116, 2], [118, 16, 1]
Currently head is positioned at sector number 100 of cylinder 80, and is moving towards higher cylinder numbers. The average power dissipation in moving the head over 100 cylinders is 20 milliwatts and for reversing the direction of the head movement once is 15 milliwatts. Power dissipation associated with rotational latency and switching of head between different platters is negligible.
The total power consumption in milliwatts to satisfy all of the above disk requests using the Shortest Seek Time First disk scheduling algorithm is ______ .
85 | |
86 | |
87 | |
88 |
→ A storage disk with 4 platters (0, 1, 2 and 3), 200 cylinders (0, 1, …, 199) and 256 sectors per track.
In the above question the disk requests are given in the form of 〈sector no, cylinder no, platter no〉.
In SSTF (Shortest Seek Time First) Disk Scheduling algorithm, requests having shortest seek time are executed first.
So, the seek time of every request is calculated in advance in queue and then they are scheduled according to their calculated seek time. Head is positioned at 80 and moving towards higher cylinder no.

Head Movement in SSTF = (86 – 80) + (86 – 72) + (134 – 72) + (134 – 16) = 200
P1: Power dissipated by 200 cylinders of head movement = (20/100) × 200 = 40 mW
Power dissipated in reversing the head direction once = 15 mW
No. of times the head changes its direction = 3
P2: Power dissipated in reversing the head direction = 3 × 15 = 45 mW
Total power consumption is P1 + P2 = 85 mW
Question 11 |
In a system, there are three types of resources: E, F and G. Four processes P0, P1, P2 and P3 execute concurrently. At the outset, the processes have declared their maximum resource requirements using a matrix named Max as given below. For example, Max[P2, F] is the maximum number of instances of F that P2 would require. The number of instances of the resources allocated to the various processes at any given state is given by a matrix named Allocation.
Consider a state of the system with the Allocation matrix as shown below, and in which 3 instances of E and 3 instances of F are the only resources available.

From the perspective of deadlock avoidance, which one of the following is true?
The system is in safe state. | |
The system is not in safe state, but would be safe if one more instance of E were available. | |
The system is not in safe state, but would be safe if one more instance of F were available. | |
The system is not in safe state, but would be safe if one more instance of G were available. |

Safe sequence:
〈P0, P2, P1, P3〉
P0: P0 can be allotted 〈3,3,0〉.
After completion Available = 〈4,3,1〉
P2: P2 can be allotted 〈0,3,0〉.
After completion: Available = 〈5,3,4〉
P1: P1 can be allotted 〈1,0,2〉.
After completion: Available = 〈6,4,6〉
P3: P3 can be allotted 〈3,4,1〉.
After completion: Available = 〈8,4,6〉
Question 12 |
Threads of a process share
global variables but not heap. | |
heap but not global variables. | |
neither global variables nor heap. | |
both heap and global variables. |

Question 13 |
Consider the following CPU processes with arrival times (in milliseconds) and length of CPU bursts (in milliseconds) as given below:
If the pre-emptive shortest remaining time first scheduling algorithm is used to schedule the processes, then the average waiting time across all processes is __________ milliseconds.
3 | |
4 | |
5 | |
6 |

Question 14 |
A multithreaded program P executes with x number of threads and used y number of locks for ensuring mutual exclusion while operating on shared memory locations. All locks in the program are non-reentrant, i.e., if a thread holds a lock l, then it cannot re-acquire lock l without releasing it. If a thread is unable to acquire a lock, it blocks until the lock becomes available. The minimum value of x and the minimum value of y together for which execution of P can result in a deadlock are:
x = 1, y = 2 | |
x = 2, y = 1 | |
x = 2, y = 2 | |
x = 1, y = 1 |
A non-reentrant lock cannot be acquired again by the thread that already holds it; if the thread tries, it blocks waiting for itself, which is a deadlock.
So even x = 1 (a single thread) and y = 1 (a single lock) can produce a deadlock: the thread acquires the lock and then requests it a second time without releasing it.
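A minimal sketch of the x = 1, y = 1 situation using a non-recursive POSIX mutex (the mutex name and output strings are illustrative, not from the question): the single thread blocks forever on its second lock attempt.

#include <stdio.h>
#include <pthread.h>

int main(void)
{
    pthread_mutex_t l;
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_NORMAL);  /* non-reentrant */
    pthread_mutex_init(&l, &attr);

    pthread_mutex_lock(&l);       /* first acquisition succeeds             */
    printf("lock held once\n");
    pthread_mutex_lock(&l);       /* same thread tries to re-acquire:       */
                                  /* it blocks on itself -> deadlock        */
    printf("never reached\n");
    return 0;
}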
Question 15 |
Recall that Belady’s anomaly is that the page-fault rate may increase as the number of allocated frames increases. Now, consider the following statements:
-
- S1: Random page replacement algorithm (where a page chosen at random is replaced) suffers from Belady’s anomaly
- S2: LRU page replacement algorithm suffers from Belady’s anomaly
Which of the following is CORRECT?
S1 is true, S2 is true | |
S1 is true, S2 is false | |
S1 is false, S2 is true | |
S1 is false, S2 is false |
Page replacement algorithm suffers from Belady's anomaly when it is not a stack algorithm.
S1: Random page replacement algorithm is not a stack algorithm. So, S1 is true.
S2: LRU is a stack algorithm. Therefore, it doesn't suffer from Belady's anomaly. S2 is false.
Question 16 |
Which of the following is/are shared by all the threads in a process?
I. Program counter
II. Stack
III. Address space
IV. Registers
I and II only | |
III only | |
IV only | |
III and IV only |
A process, in the simplest terms, is an executing program.
One or more threads run in the context of the process.
A thread is the basic unit to which the operating system allocates processor time.
A thread can execute any part of the process code, including parts currently being executed by another thread.
Each thread has its own stack, registers and program counter.
So, of the items listed, only the address space (III) is shared by all the threads of a process.
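A small pthread sketch (illustrative only; the variable and function names are mine) showing that a global and a heap object are visible to both threads through the shared address space, while each thread keeps its own stack variables.

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

int shared_global = 0;            /* one copy, visible to every thread      */

void *worker(void *arg)
{
    int local_on_stack = 0;       /* private: each thread has its own stack */
    int *heap_counter = arg;      /* heap object passed in: shared          */

    local_on_stack++;
    shared_global++;
    (*heap_counter)++;
    printf("local=%d global=%d heap=%d\n",
           local_on_stack, shared_global, *heap_counter);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    int *heap_counter = malloc(sizeof *heap_counter);
    *heap_counter = 0;

    pthread_create(&t1, NULL, worker, heap_counter);
    pthread_join(t1, NULL);       /* run the threads one after the other    */
    pthread_create(&t2, NULL, worker, heap_counter);
    pthread_join(t2, NULL);

    free(heap_counter);
    return 0;                     /* local stays 1 in each thread; the      */
                                  /* global and heap counts reach 2         */
}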
Question 17 |
In a file allocation system, which of the following allocation scheme(s) can be used if no external fragmentation is allowed?
I. Contiguous
II. Linked
III. Indexed
I and III only | |
II only | |
III only | |
II and III only |
→ Contiguous allocation:
Advantage:
i) Both sequential and direct access is possible.
ii) Extremely fast.
Disadvantage:
i) Suffers from both internal and external fragmentation.
ii) Increasing file size is difficult, because it depends on the availability of contiguous memory at a particular instance.
→ Linked list allocation:
Advantage:
i) Flexible in terms of size.
ii) Does not suffer from external fragmentation.
Disadvantage:
i) Allocation is slower.
ii) Does not support random or direct access.
iii) Pointers require some extra overhead.
→ Indexed allocation:
Advantage:
i) Support direct access, so provides fast access to the file blocks.
ii) No external fragmentation.
Disadvantage:
i) The pointer overhead for indexed allocation is greater than linked allocation.
ii) Inefficient in terms of memory utilization.
Question 18 |
A system shares 9 tape drives. The current allocation and maximum requirement of tape drives for three processes are shown below:

Which of the following best describe current state of the system?
Safe, Deadlocked | |
Safe, Not Deadlocked | |
Not Safe, Deadlocked | |
Not Safe, Not Deadlocked |

Available: (9 - (3 + 1 + 3)) = 2, P3 can be satisfied.
New available = 3 + 2 = 5
Now, P2 can be satisfied.
New available: 5 + 1 = 6
Now, P1 can be satisfied. Thus safe sequence: P3 → P2 → P1
So the state is safe, and hence the system is not deadlocked.
Question 19 |
Consider the set of processes with arrival time (in milliseconds), CPU burst time (in milliseconds), and priority (0 is the highest priority) shown below. None of the processes have I/O burst time.

The average waiting time (in milliseconds) of all the processes using preemptive priority scheduling algorithm is __________.
29 | |
30 | |
31 | |
32 |

Waiting Time = 0 + (33 - 5) + (40 - 2) + (49 - 12) + (51 - 9) = 145
Average waiting time: 145/5 = 29
Question 20 |
Consider an arbitrary set of CPU-bound processes with unequal CPU burst lengths submitted at the same time to a computer system. Which one of the following process scheduling algorithms would minimize the average waiting time in the ready queue?
Shortest remaining time first | |
Round-robin with time quantum less than the shortest CPU burst | |
Uniform random | |
Highest priority first with priority proportional to CPU burst length |
We have to choose the scheduling algorithm that minimizes the average waiting time for a set of CPU-bound processes with unequal burst lengths, all submitted at the same time.
Waiting time is the time for which a process is ready to run but is not executing on the CPU.
Among CPU scheduling algorithms, shortest job first is optimal for average waiting time, and shortest remaining time first (SRTF) is its preemptive version.
SRTF can cause starvation when short processes keep arriving, because the currently running process keeps getting preempted; here, however, all processes arrive at the same time, so starvation is not an issue and SRTF behaves exactly like SJF.
So, A is the answer: shortest remaining time first.
Question 21 |
Consider a computer system with 40-bit virtual addressing and page size of sixteen kilobytes. If the computer system has a one-level page table per process and each page table entry requires 48 bits, then the size of the per-process page table is __________ megabytes.
384 MB | |
385 MB | |
386 MB | |
387 MB |
Size of the virtual address space = 2^40 B and page size = 16 KB = 2^14 B
No. of pages = virtual address space / page size = 2^40 / 2^14 = 2^26
Size of page table = 2^26 × 48/8 bytes = 2^26 × 6 B = 6 × 64 MB = 384 MB
Question 22 |
Consider a disk queue with requests for I/O to blocks on cylinders 47, 38, 121, 191, 87, 11, 92, 10. The C-LOOK scheduling algorithm is used. The head is initially at cylinder number 63, moving towards larger cylinder numbers on its servicing pass. The cylinders are numbered from 0 to 199. The total head movement (in number of cylinders) incurred while servicing these requests is ___________.
346 | |
347 | |
348 | |
349 |
I/O to blocks on cylinders 47, 38, 121, 191, 87, 11, 92, 10.
C-LOOK scheduling algorithm is used.
Head is initially at cylinder number 63.
First start from 63
63 → 87 → 92 → 121 → 191 = 24 + 5 + 29 + 70 movements = 128
191 → 10 movements = 181
10 → 11 → 38 → 47 = 1 + 27 + 9 movements = 37
Total = 128 + 181 + 37 = 346
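The same total can be cross-checked with a small sketch (the helper name clook_movement is mine, not part of the original solution) that sweeps towards higher cylinders, jumps back to the lowest pending request, and continues upwards, as C-LOOK does.

#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

/* total head movement for C-LOOK, head moving towards larger cylinder numbers */
int clook_movement(int head, int req[], int n)
{
    int moved = 0, start = 0;
    qsort(req, n, sizeof(int), cmp);
    while (start < n && req[start] < head)   /* first request at or above the head */
        start++;

    if (start < n) {
        moved += req[n - 1] - head;          /* sweep up to the highest request    */
        if (start > 0)                       /* jump back to the lowest request,   */
            moved += (req[n - 1] - req[0])   /* then sweep up to the last one      */
                   + (req[start - 1] - req[0]);   /* below the starting position   */
    } else {
        /* no request above the head: jump to the lowest and sweep upwards */
        moved += (head - req[0]) + (req[n - 1] - req[0]);
    }
    return moved;
}

int main(void)
{
    int req[] = {47, 38, 121, 191, 87, 11, 92, 10};
    printf("%d\n", clook_movement(63, req, 8));      /* prints 346 */
    return 0;
}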
Question 23 |
Consider a computer system with ten physical page frames. The system is provided with an access sequence (a1,a2,…,a20,a1,a2,…,a20), where each ai is a distinct virtual page number. The difference in the number of page faults between the last-in-first-out page replacement policy and the optimal page replacement policy is ___________.
1 | |
2 | |
3 | |
4 |
First we can consider LIFO (Last In First Out) →
a1 to a10 will result in page faults = 10 page faults,
Then a11 will replace a10 (the page brought in last), a12 will replace a11, and so on till a20, giving another 10 page faults. Now a20 is the most recently loaded page and a1…a9 remain in memory.
Then a1 to a9 are already there.
So, 0 page faults from a1 to a9.
a10 will replace a20, a11 will replace a10, and so on: a10 through a20 give 11 page faults.
So total faults will be 10+10+11 = 31.
Second Optimal Page Replacement Policy →
a1 to a10 = 10 page faults,
a11 will replace a10 because among a1 to a10, a10 will be used later, a12 will replace a11 and so on = 10 page faults.
a20 is now resident and a1…a9 remain as they were.
a1 to a9 = 0 page fault. a10 will replace a1 because it will not be used afterwards and so on, a10 to a19 will have 10 page faults.
So no page fault for a20.
Total = 10+ 10 +10 = 30.
So the difference between LIFO - Optimal = 1
Question 24 |
Consider the following proposed solution for the critical section problem. There are n processes: P0...P(n-1). In the code, function pmax returns an integer not smaller than any of its arguments. For all i, t[i] is initialized to zero.
Code for Pi:
do {
    c[i] = 1;
    t[i] = pmax(t[0], ..., t[n-1]) + 1;
    c[i] = 0;
    for every j ≠ i in {0, ..., n-1} {
        while (c[j]);
        while (t[j] != 0 && t[j] <= t[i]);
    }
    Critical Section;
    t[i] = 0;
    Remainder Section;
} while (true);
Which one of the following is TRUE about the above solution?
At most one process can be in the critical section at any time | |
The bounded wait condition is satisfied | |
The progress condition is satisfied | |
It cannot cause a deadlock |
Based on the above code, options B, C and D are not satisfied.
Consider the condition while (t[j] != 0 && t[j] <= t[i]);
Because t[i] = pmax(...) + 1 is not computed atomically, two processes can end up with the same ticket value, i.e., t[j] == t[i]. Each then waits on the other in this while loop, so a deadlock is possible.
A deadlock means no process can make progress, so the progress condition is violated.
Bounded waiting is also not satisfied, for the same reason: if t[j] == t[i] is possible, indefinite waiting (starvation) is possible.
Mutual exclusion, however, is satisfied.
Every process j that started before i must have t[j] < t[i], since pmax() returns an integer not smaller than any of its arguments.
Any such process j with a positive t[j] keeps process i spinning in the while (t[j] != 0 && t[j] <= t[i]) loop as long as j has not left its critical section.
When process j comes out of its critical section it sets t[j] = 0, and only then does process i move past j in the for loop.
So, when process i reaches its critical section, none of the processes that started before it is still inside its own critical section; at most one process can be in the critical section at any time.
So, A is the answer.
Question 25 |
In which one of the following page replacement algorithms it is possible for the page fault rate to increase even when the number of allocated frames increases?
LRU (Least Recently Used) | |
OPT (Optimal Page Replacement) | |
MRU (Most Recently Used) | |
FIFO (First In First Out) |
Belady's anomaly is the phenomenon in which increasing the number of page frames results in an increase in the number of page faults for certain memory access patterns.
It can occur with the First In First Out (FIFO) page replacement algorithm: for some reference strings, FIFO gives more page faults when the number of page frames is increased.
Question 26 |
Consider the following processes, with the arrival time and the length of the CPU burst given in milliseconds. The scheduling algorithm used is preemptive shortest remaining-time first.

The average turn around time of these processes is _________ milliseconds.
8.25 | |
8.26 | |
8.27 | |
8.28 |
To answer the question we need to design the gantt chart:

In this algorithm, the processes will be scheduled on the CPU which will be having least remaining burst time.
Turnaround Time (TAT) = Completion Time (CT) - Arrival Time (AT)
TAT for P1 = 20 - 0 = 20,
TAT for P2 = 10 - 3 = 7,
TAT for P3 = 8 - 7 = 1,
TAT for P4 = 13 - 8 = 5.
Total TAT = 20 + 7 + 1 + 5 = 33; Average TAT = 33/4 = 8.25 ms
Question 27 |
Consider the following two-process synchronization solution.
Process 0                            Process 1
---------                            ---------
Entry: loop while (turn == 1);       Entry: loop while (turn == 0);
       (critical section)                   (critical section)
Exit:  turn = 1;                     Exit:  turn = 0;
The shared variable turn is initialized to zero. Which one of the following is TRUE?
This is a correct two-process synchronization solution. | |
This solution violates mutual exclusion requirement. | |
This solution violates progress requirement. | |
This solution violates bounded wait requirement. |
B) Mutual exclusion is not violated: because of the strict alternation on turn, only one process can be in the critical section at a time. So False.
C) Progress means if one process does not want to enter the critical section then it should not stop other process to enter the critical section.
But we can see that if process 0 will not enter the critical section then value of turn will not become 1 and process 1 will not be able to enter critical section.
So progress not satisfied. True.
D) Bounded waiting solution as there is a strict alternation.
So, False.
Question 28 |
Consider a non-negative counting semaphore S. The operation P(S) decrements S, and V(S) increments S. During an execution, 20 P(S) operations and 12 V(S) operations are issued in some order. The largest initial value of S for which at least one P(S) operation will remain blocked is _________.
7 | |
8 | |
9 | |
10 |
For at least one P(S) operation to remain blocked, the semaphore must end up with the value -1.
The negative value of a counting semaphore indicates the number of processes in its suspended (blocked) list.
Whatever the order of the 20 P(S) and 12 V(S) operations, the final value of the semaphore is X - 20 + 12.
So, for at least one P(S) to remain blocked, X - 20 + 12 = -1
Here P(S) = 20 and V(S) = 12
X = 7
Question 29 |
The following two functions P1 and P2 that share a variable B with an initial value of 2 execute concurrently.
P1()                    P2()
{                       {
    C = B - 1;              D = 2 * B;
    B = 2 * C;              B = D - 1;
}                       }
The number of distinct values that B can possibly take after the execution is
3 | |
4 | |
5 | |
6 |
If P2 executes completely and then P1 executes, B = 4. If P1 executes completely and then P2 executes, B = 3.
With preemption between P1 and P2, B = 2 is possible (P1 is preempted after reading B, P2 then runs completely, and P1 finally writes B = 2) and B = 3 is also possible (P2 is preempted after reading B, P1 runs completely, and P2 finally writes B = 3).
So, the distinct values that B can possibly take after the execution are 2, 3 and 4, i.e., 3 values.
Question 30 |
Consider a uniprocessor system executing three tasks T1, T2 and T3, each of which is composed of an infinite sequence of jobs (or instances) which arrive periodically at intervals of 3, 7 and 20 milliseconds, respectively. The priority of each task is the inverse of its period and the available tasks are scheduled in order of priority, with the highest priority task scheduled first. Each instance of T1, T2 and T3 requires an execution time of 1, 2 and 4 milliseconds, respectively. Given that all tasks initially arrive at the beginning of the 1st millisecond and task preemptions are allowed, the first instance of T3 completes its execution at the end of ______________ milliseconds.
13 | |
12 | |
15 | |
16 |

∴ At the end of 12 milliseconds, 1st instance of T3 will complete its execution.
Question 31 |
Suppose the following disk request sequence (track numbers) for a disk with 100 tracks is given: 45, 20, 90, 10, 50, 60, 80, 25, 70. Assume that the initial position of the R/W head is on track 50. The additional distance that will be traversed by the R/W head when the Shortest Seek Time First (SSTF) algorithm is used compared to the SCAN (Elevator) algorithm (assuming that SCAN algorithm moves towards 100 when it starts execution) is _________ tracks.
10 | |
11 | |
12 | |
13 |

∴ Total head movements,
= 10 + 10 + 10 + 10 + 10 + 55 + 20 + 5 + 10
= 140
SSTF:

∴ Total head movements
= 5 + 15 + 10 + 10 + 10 + 65 + 5 + 10
= 130
∴ Additional distance that will be traversed by R/W head is
= 140 - 130
= 10
Question 32 |
Consider a main memory with five page frames and the following sequence of page references: 3, 8, 2, 3, 9, 1, 6, 3, 8, 9, 3, 6, 2, 1, 3. Which one of the following is true with respect to page replacement policies First-In-First Out (FIFO) and Least Recently Used (LRU)?
Both incur the same number of page faults | |
FIFO incurs 2 more page faults than LRU | |
LRU incurs 2 more page faults than FIFO | |
FIFO incurs 1 more page faults than LRU |

∴ Number of page faults = 9

∴ Number of page faults = 9
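Both counts can be reproduced with a short simulation (a sketch only; the function name count_faults is mine): with five frames, FIFO and LRU each give 9 faults for this reference string.

#include <stdio.h>

#define FRAMES 5

/* counts page faults for the given reference string;
   use_lru = 0 simulates FIFO, use_lru = 1 simulates LRU */
int count_faults(const int *ref, int n, int use_lru)
{
    int frame[FRAMES], age[FRAMES], faults = 0, time = 0, i, j;
    for (j = 0; j < FRAMES; j++) frame[j] = -1;

    for (i = 0; i < n; i++, time++) {
        int hit = -1, victim = 0;
        for (j = 0; j < FRAMES; j++)
            if (frame[j] == ref[i]) hit = j;
        if (hit >= 0) {                      /* page already resident        */
            if (use_lru) age[hit] = time;    /* LRU refreshes age on a hit   */
            continue;
        }
        faults++;
        for (j = 0; j < FRAMES; j++) {       /* choose an empty frame or     */
            if (frame[j] == -1) { victim = j; break; }   /* the oldest one   */
            if (age[j] < age[victim]) victim = j;
        }
        frame[victim] = ref[i];
        age[victim] = time;                  /* FIFO: age = load time only   */
    }
    return faults;
}

int main(void)
{
    int ref[] = {3, 8, 2, 3, 9, 1, 6, 3, 8, 9, 3, 6, 2, 1, 3};
    int n = (int)(sizeof ref / sizeof ref[0]);
    printf("FIFO faults: %d, LRU faults: %d\n",
           count_faults(ref, n, 0), count_faults(ref, n, 1));
    return 0;
}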
Question 33 |
A system has 6 identical resources and N processes competing for them. Each process can request at most 2 resources. Which one of the following values of N could lead to a deadlock?
1 | |
2 | |
3 | |
6 |
Question 34 |
A computer system implements a 40-bit virtual address, page size of 8 kilobytes, and a 128-entry translation look-aside buffer (TLB) organized into 32 sets, each having four ways. Assume that the TLB tag does not store any process id. The minimum length of the TLB tag in bits is _________.
22 | |
23 | |
24 | |
25 |

Question 35 |
A computer system implements 8 kilobyte pages and a 32-bit physical address space. Each page table entry contains a valid bit, a dirty bit, three permission bits, and the translation. If the maximum size of the page table of a process is 24 megabytes, the length of the virtual address supported by the system is ________ bits.
36 | |
37 | |
38 | |
39 |
PAS = 32 bits
∴ No. of frames = PAS / page size = 2^32 / 2^13 = 2^19
Also, it is given that each page table entry contains a valid bit, a dirty bit and three permission bits:
= 5 bits reserved
So one page table entry size is
= 19 + 5 = 24 bits = 3 bytes
Now, Page table size = No. of entries × Entry size
24 × 2^20 = No. of entries × 3
No. of entries = 8 × 2^20 = 2^23
∴ Virtual address space = No. of entries × Page size = 2^23 × 2^13 = 2^36 B
∴ Length of the virtual address = 36 bits
Question 36 |
The maximum number of processes that can be in Ready state for a computer system with n CPUs is
n | |
n2 | |
2n | |
Independent of n |
Maximum number of processes that will be in ready state is independent of number of processors.
Question 37 |
Consider the following policies for preventing deadlock in a system with mutually exclusive resources.
- I. Processes should acquire all their resources at the beginning of execution. If any resource is not available, all resources acquired so far are released.
II. The resources are numbered uniquely, and processes are allowed to request for resources only in increasing resource numbers.
III. The resources are numbered uniquely, and processes are allowed to request for resources only in decreasing resource numbers.
IV. The resources are numbered uniquely. A process is allowed to request only for a resource with resource number larger than its currently held resources.
Which of the above policies can be used for preventing deadlock?
Any one of I and III but not II or IV | |
Any one of I, III, and IV but not II | |
Any one of II and III but not I or IV | |
Any one of I, II, III, and IV |
Question 38 |
For the processes listed in the following table, which of the following scheduling schemes will give the lowest average turnaround time?

First Come First Serve | |
Non-preemptive Shortest Job First | |
Shortest Remaining Time | |
Round Robin with Quantum value two |

FCFS:

TAT for A = completion time(A) - AT(A) = 3 - 0 = 3
TAT of B = 9 - 1 = 8
TAT of C = 13 - 4 = 9
TAT of D = 15 - 6 = 9
∴ Avg. TAT = 29/4
SJF:

TAT of A = 3 - 0 = 3
TAT of B = 9 - 1 = 8
TAT of C = 15 - 4 = 11
TAT of D = 11 - 6 = 5
∴ Avg. TAT = 27/4
SRTF:

TAT of A = 3 - 0 = 3
TAT of B = 15 - 1 = 14
TAT of C = 8 - 4 = 4
TAT of D = 10 - 6 = 4
∴ Avg. TAT = 25/4
RR:

TAT of A = 5 - 0 = 5
TAT of B = 15 - 1 = 14
TAT of C = 13 - 4 = 9
TAT of D = 11 - 6 = 5
∴ Avg. TAT = 33/4
∴ SRTF has lowest Avg. TAT.
Question 39 |
Suppose a disk has 201 cylinders, numbered from 0 to 200. At some time the disk arm is at cylinder 100, and there is a queue of disk access requests for cylinders 30, 85, 90, 100, 105, 110, 135 and 145. If Shortest-Seek Time First (SSTF) is being used for scheduling the disk access, the request for cylinder 90 is serviced after servicing ____________ number of requests.
3 | |
4 | |
5 | |
6 |

90 is serviced after servicing three requests, i.e.,
100 → 105 → 110 → 90
Question 40 |
Which one of the following is FALSE?
User level threads are not scheduled by the kernel. | |
When a user level thread is blocked, all other threads of its process are blocked. | |
Context switching between user level threads is faster than context switching between kernel level threads. | |
Kernel level threads cannot share the code segment. |
Question 41 |
An operating system uses the Banker’s algorithm for deadlock avoidance when managing the allocation of three resource types X, Y, and Z to three processes P0, P1, and P2. The table given below presents the current system state. Here, the Allocation matrix shows the current number of resources of each type allocated to each process and the Max matrix shows the maximum number of resources of each type required by each process during its execution.

There are 3 units of type X, 2 units of type Y and 2 units of type Z still available. The system is currently in a safe state. Consider the following independent requests for additional resources in the current state:
REQ1: P0 requests 0 units of X, 0 units of Y and 2 units of Z REQ2: P1 requests 2 units of X, 0 units of Y and 0 units of Z
Which one of the following is TRUE?
Only REQ1 can be permitted. | |
Only REQ2 can be permitted. | |
Both REQ1 and REQ2 can be permitted. | |
Neither REQ1 nor REQ2 can be permitted. |
After allowing Req 1,

Available: X=3, Y=2, Z=0
With this we can satisfy P1's requirement. So available becomes: X=6, Y=4, Z=0.
Since Z is not available, neither P0's nor P2's requirement can be satisfied. So, it is an unsafe state.
Let's take the second case.
After allowing Req 2,

Available: X=1, Y=2, Z=2
With this we can satisfy any one of P1's or P2's requirement. Lets first satisfy P1's requirement. So Available now becomes:
X=6, Y=4, Z=2
Now with the availability we can satisfy P2's requirement. So Available now becomes,
X=8, Y=5, Z=3
With this availability P0 can also be satisfied. So, hence it is in safe state.
So from above two cases Req 1 cannot be permitted but Req 2 can be permitted.
Question 42 |
Consider the following set of processes that need to be scheduled on a single CPU. All the times are given in milliseconds.

Using the shortest remaining time first scheduling algorithm, the average process turnaround time (in msec) is ________.
7.2 | |
7.3 | |
7.4 | |
7.5 |

TAT(A) = 8-0 = 8, TAT(B)= 5-3=2, TAT(C)= 12-5=7, TAT(D)= 21-7= 14, TAT(E)=15-10=5
Average turnaround time = (8 + 2 + 7 + 14 + 5)/5 = 36/5 = 7.2 ms
Question 43 |
Assume that there are 3 page frames which are initially empty. If the page reference string is 1, 2, 3, 4, 2, 1, 5, 3, 2, 4, 6, the number of page faults using the optimal replacement policy is__________.
7 | |
8 | |
9 | |
10 |

Total 7 faults are there.
Question 44 |
A FAT (file allocation table) based file system is being used and the total overhead of each entry in the FAT is 4 bytes in size. Given a 100 × 10^6 bytes disk on which the file system is stored and data block size is 10^3 bytes, the maximum size of a file that can be stored on this disk in units of 10^6 bytes is ________.
99.60 | |
99.61 | |
99.62 | |
99.63 |
No. of disk blocks (= no. of FAT entries)
= Total disk size / disk block size
= 100 × 10^6 / 10^3
= 100 × 10^3
∴ Total overhead = 100 × 10^3 × 4 B
= 400 × 10^3 B
= 0.4 × 10^6 B
So, the maximum size of a file that can be stored on the disk is
Total disk size - Total overhead
= 100 × 10^6 - 0.4 × 10^6
= 99.6 × 10^6 B
Question 45 |
Consider the procedure below for the Producer-Consumer problem which uses semaphores:
semaphore n = 0;
semaphore s = 1;

void producer()                  void consumer()
{                                {
    while (true)                     while (true)
    {                                {
        produce();                       semWait(s);
        semWait(s);                      semWait(n);
        addToBuffer();                   removeFromBuffer();
        semSignal(s);                    semSignal(s);
        semSignal(n);                    consume();
    }                                }
}                                }
Which one of the following is TRUE?
The producer will be able to add an item to the buffer, but the consumer can never consume it. | |
The consumer will remove no more than one item from the buffer. | |
Deadlock occurs if the consumer succeeds in acquiring semaphore s when the buffer is empty. | |
The starting value for the semaphore n must be 1 and not 0 for deadlock-free operation. |
Question 46 |
Three processes A, B and C each execute a loop of 100 iterations. In each iteration of the loop, a process performs a single computation that requires tc CPU milliseconds and then initiates a single I/O operation that lasts for tio milliseconds. It is assumed that the computer where the processes execute has sufficient number of I/O devices and the OS of the computer assigns different I/O devices to each process. Also, the scheduling overhead of the OS is negligible. The processes have the following characteristics:
Process id    tc        tio
A             100 ms    500 ms
B             350 ms    500 ms
C             200 ms    500 ms
The processes A, B, and C are started at times 0, 5 and 10 milliseconds respectively, in a pure time sharing system (round robin scheduling) that uses a time slice of 50 milliseconds. The time in milliseconds at which process C would complete its first I/O operation is ___________.
1000 | |
1001 | |
1002 | |
1003 |

So 'C' completes its I/O at 1000 time units.
Question 47 |
A computer has twenty physical page frames which contain pages numbered 101 through 120. Now a program accesses the pages numbered 1, 2, …, 100 in that order, and repeats the access sequence THRICE. Which one of the following page replacement policies experiences the same number of page faults as the optimal page replacement policy for this program?
Least-recently-used | |
First-in-first-out | |
Last-in-first-out | |
Most-recently-used |

The first pass over pages 1 to 100 causes 100 page faults, and the later passes also fault heavily, since only 20 of the 100 pages can be resident at a time. The key observation is that whenever a replacement is needed, the page that was referred to most recently is the one whose next reference lies farthest in the future, so evicting it is exactly what the optimal policy does. Hence, for this reference string the most-recently-used (MRU) policy behaves the same as the optimal page replacement policy, and both incur the same number of page faults.
Question 48 |
A system contains three programs and each requires three tape units for its operation. The minimum number of tape units which the system must have such that deadlocks never arise is _________.
7 | |
8 | |
9 | |
10 |
Question 49 |
An operating system uses shortest remaining time first scheduling algorithm for pre-emptive scheduling of processes. Consider the following set of processes with their arrival times and CPU burst times (in milliseconds):

The average waiting time (in milliseconds) of the processes is
5.5 | |
5.6 | |
5.7 | |
5.8 |

WT - Waiting Time
CT - Completion Time
TAT - Turn Around Time
TAT = CT - AT
WT = TAT - BT
Gantt chart using Shortest remaining time first,

Avg. WT = (15 + 0 + 3 + 4)/4 = 22/4 = 5.5
Question 50 |
Consider a paging hardware with a TLB. Assume that the entire page table and all the pages are in the physical memory. It takes 10 milliseconds to search the TLB and 80 milliseconds to access the physical memory. If the TLB hit ratio is 0.6, the effective memory access time (in milliseconds) is _________.
122 | |
123 | |
124 | |
125 |
Effective memory access time
= TLB search time + TLB miss rate × memory access time (page-table lookup) + memory access time
= 10 + 0.4 × 80 + 80
= 10 + 32 + 80
= 122 ms
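The shorthand above is the collapsed form of the usual hit/miss weighted expression; written out with the figures from the question (hit ratio h = 0.6, TLB search 10 ms, memory access 80 ms):

\[
\text{EMAT} = h\,(t_{\mathrm{TLB}} + t_{\mathrm{mem}}) + (1-h)\,(t_{\mathrm{TLB}} + 2\,t_{\mathrm{mem}})
            = 0.6\,(10+80) + 0.4\,(10+80+80) = 54 + 68 = 122~\text{ms}
\]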
Question 51 |
Consider the basic block given below.
a = b + c
c = a + d
d = b + c
e = d - b
a = e + b
The minimum number of nodes and edges present in the DAG representation of the above basic block respectively are
6 and 6 | |
8 and 10 | |
9 and 12 | |
4 and 4 |
a = b + c
c = a + d
d = b + c
e = d - b = (b + c) - b = c
a = e + b = (d - b) + b = d
So, DAG representation of above basic block respectively,

So, number of nodes = 6 &
number of edges = 6
Question 52 |
The memory access time is 1 nanosecond for a read operation with a hit in cache, 5 nanoseconds for a read operation with a miss in cache, 2 nanoseconds for a write operation with a hit in cache and 10 nanoseconds for a write operation with a miss in cache. Execution of a sequence of instructions involves 100 instruction fetch operations, 60 memory operand read operations and 40 memory operand write operations. The cache hit-ratio is 0.9. The average memory access time (in nanoseconds) in executing the sequence of instructions is __________.
1.68 | |
1.69 | |
1.70 | |
1.71 |
Total memory operations = 100 instruction fetches + 60 operand reads + 40 operand writes
= 200 operations
Time taken for fetching 100 instructions (equivalent to read) = 90*1ns + 10*5ns = 140ns
Memory operand Read operations = 90% (60)*1ns + 10% (60) × 5ns = 54ns + 30 ns = 84ns
Memory operands Write operations time = 90% (40)*2ns + 10% (40)*10ns
= 72ns + 40ns = 112ns
Total time taken for executing 200 instructions = 140 + 84 + 112 = 336ns
∴ Average memory access time = 336 ns/200 = 1.68ns
Question 53 |
A scheduling algorithm assigns priority proportional to the waiting time of a process. Every process starts with priority zero (the lowest priority). The scheduler re-evaluates the process priorities every T time units and decides the next process to schedule. Which one of the following is TRUE if the processes have no I/O operations and all arrive at time zero?
This algorithm is equivalent to the first-come-first-serve algorithm. | |
This algorithm is equivalent to the round-robin algorithm. | |
This algorithm is equivalent to the shortest-job-first algorithm. | |
This algorithm is equivalent to the shortest-remaining-time-first algorithm. |
Question 54 |
Three concurrent processes X, Y, and Z execute three different code segments that access and update certain shared variables. Process X executes the P operation (i.e., wait) on semaphores a, b and c; process Y executes the P operation on semaphores b, c and d; process Z executes the P operation on semaphores c, d, and a before entering the respective code segments. After completing the execution of its code segment, each process invokes the V operation (i.e., signal) on its three semaphores. All semaphores are binary semaphores initialized to one. Which one of the following represents a deadlock-free order of invoking the P operations by the processes?
X: P(a)P(b)P(c) Y: P(b)P(c)P(d) Z: P(c)P(d)P(a) | |
X: P(b)P(a)P(c) Y: P(b)P(c)P(d) Z: P(a)P(c)P(d) | |
X: P(b)P(a)P(c) Y: P(c)P(b)P(d) Z: P(a)P(c)P(d) | |
X: P(a)P(b)P(c) Y: P(c)P(b)P(d) Z: P(c)P(d)P(a) |
Question 55 |
Consider a hard disk with 16 recording surfaces (0-15) having 16384 cylinders (0-16383) and each cylinder contains 64 sectors (0-63). Data storage capacity in each sector is 512 bytes. Data are organized cylinder-wise and the addressing format is <cylinder no., surface no., sector no.>. A file of size 42797 KB is stored in the disk and the starting disk location of the file is <1200, 9, 40>. What is the cylinder number of the last sector of the file, if it is stored in a contiguous manner?
1281 | |
1282 | |
1283 | |
1284 |
Each sector stores 512 B = 0.5 KB, so the file of 42797 KB occupies 42797 / 0.5 = 85594 sectors.
One cylinder holds 16 surfaces × 64 sectors = 1024 sectors.
Starting at <1200, 9, 40>, the sectors still available in cylinder 1200 are (64 - 40) + (15 - 9) × 64 = 408.
The remaining 85594 - 408 = 85186 sectors need 84 more cylinders: 83 completely filled cylinders (1201 to 1283) plus one partially filled cylinder, since 83 × 1024 = 84992 < 85186.
Hence the last sector of the file lies on cylinder 1200 + 84 = 1284.
Question 56 |
A shared variable x, initialized to zero, is operated on by four concurrent processes W, X, Y, Z as follows. Each of the processes W and X reads x from memory, increments by one, stores it to memory, and then terminates. Each of the processes Y and Z reads x from memory, decrements by two, stores it to memory, and then terminates. Each process before reading x invokes the P operation (i.e., wait) on a counting semaphore S and invokes the V operation (i.e., signal) on the semaphore S after storing x to memory. Semaphore S is initialized to two. What is the maximum possible value of x after all processes complete execution?
-2 | |
-1 | |
1 | |
2 |

Suppose W executes first: it does P(S) making S = 1, increments x to 1, stores it, and does V(S) making S = 2. Now X executes: it does P(S) (S becomes 1), reads x = 1 into a register R1, increments R1 to 2, and is then preempted before storing. Note that S stays at 1, because X has not yet executed V(S).
Now Y executes completely and then Z executes completely (each doing P(S) and V(S) in turn), so x becomes 1 - 2 - 2 = -3. Finally X resumes and stores the value 2 from R1 into x.
So the maximum possible value of x is 2.
Question 57 |
A certain computation generates two arrays a and b such that a[i]=f(i) for 0 ≤ i < n and b[i]=g(a[i]) for 0 ≤ i < n. Suppose this computation is decomposed into two concurrent processes X and Y such that X computes the array a and Y computes the array b. The processes employ two binary semaphores R and S, both initialized to zero. The array a is shared by the two processes. The structures of the processes are shown below.
Process X:                         Process Y:
private i;                         private i;
for (i = 0; i < n; i++) {          for (i = 0; i < n; i++) {
    a[i] = f(i);                       EntryY(R, S);
    ExitX(R, S);                       b[i] = g(a[i]);
}                                  }
Which one of the following represents the CORRECT implementations of ExitX and EntryY?
ExitX(R,S) { P(R); V(S); } EntryY(R,S) { P(S); V(R); } | |
ExitX(R,S) { V(R); V(S); } EntryY(R,S) { P(R); P(S); } | |
ExitX(R,S) { P(S); V(R); } EntryY(R,S) { V(S); P(R); } | |
ExitX(R,S) { V(R); P(S); } EntryY(R,S) { V(S); P(R); } |
Option C is correct: every value written into the array by process X is consumed by process Y before X produces the next one, and neither process can run ahead of the other. (Option A deadlocks immediately, since both processes begin with a P operation on a semaphore whose value is 0.)
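A minimal sketch of option C using POSIX semaphores, with two threads standing in for the processes (the array size N and the placeholder f and g are mine, not from the question): each a[i] written by X is consumed by Y before X moves on.

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define N 5
sem_t R, S;                      /* both initialised to 0, as in the question */
int a[N], b[N];

int f(int i) { return i * i; }   /* placeholder computations */
int g(int x) { return x + 1; }

void *X(void *arg)
{
    for (int i = 0; i < N; i++) {
        a[i] = f(i);
        sem_wait(&S);            /* ExitX:  P(S) */
        sem_post(&R);            /*         V(R) */
    }
    return NULL;
}

void *Y(void *arg)
{
    for (int i = 0; i < N; i++) {
        sem_post(&S);            /* EntryY: V(S) */
        sem_wait(&R);            /*         P(R) */
        b[i] = g(a[i]);
    }
    return NULL;
}

int main(void)
{
    pthread_t tx, ty;
    sem_init(&R, 0, 0);
    sem_init(&S, 0, 0);
    pthread_create(&tx, NULL, X, NULL);
    pthread_create(&ty, NULL, Y, NULL);
    pthread_join(tx, NULL);
    pthread_join(ty, NULL);
    for (int i = 0; i < N; i++) printf("b[%d] = %d\n", i, b[i]);
    return 0;
}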
Question 58 |
A computer uses 46-bit virtual address, 32-bit physical address, and a three-level paged page table organization. The page table base register stores the base address of the first-level table (T1), which occupies exactly one page. Each entry of T1 stores the base address of a page of the second-level table (T2). Each entry of T2 stores the base address of a page of the third-level table (T3). Each entry of T3 stores a page table entry (PTE). The PTE is 32 bits in size. The processor used in the computer has a 1 MB 16-way set associative virtually indexed physically tagged cache. The cache block size is 64 bytes.
What is the size of a page in KB in this computer?
2 | |
4 | |
8 | |
16 |
Let the page size be 2^p bytes. Each page table entry is 32 bits = 4 bytes, so the number of entries in one page is 2^p / 4.
The virtual address space size equals
No. of entries (over the three levels) × Page size
So, for the 3-level page table,
(2^p / 4) × (2^p / 4) × (2^p / 4) × 2^p = 2^46
2^(4p - 6) = 2^46
4p - 6 = 46
4p = 52
p = 13
∴ Page size = 2^13 B = 8 KB
Question 59 |
A computer uses 46-bit virtual address, 32-bit physical address, and a three-level paged page table organization. The page table base register stores the base address of the first-level table (T1), which occupies exactly one page. Each entry of T1 stores the base address of a page of the second-level table (T2). Each entry of T2 stores the base address of a page of the third-level table (T3). Each entry of T3 stores a page table entry (PTE). The PTE is 32 bits in size. The processor used in the computer has a 1 MB 16-way set associative virtually indexed physically tagged cache. The cache block size is 64 bytes.
What is the minimum number of page colours needed to guarantee that no two synonyms map to different sets in the processor cache of this computer?
2 | |
4 | |
8 | |
16 |

Architecture of virtual indexed physically tagged (VIPT):

VIPT cache and aliasing effect and synonym.
Alias: Same physical address can be mapped to multiple virtual addresses.
Synonym: Different virtual addresses mapped to same physical address (for data sharing).
So these synonyms should be in same set to avoid write-update problems.
In our problem VA = 46bits

We use 16 bits (set index + block offset) to index into the cache.
For two synonyms to fall in the same set, these 16 index bits must be the same whether taken from the VA or the PA.
Assume that the physical pages are coloured and each set holds pages of only one colour, so that synonyms always land in the same set.
Since the page size is 8 KB, the page offset is 13 bits.
These 13 bits are not translated during VA→PA, so 13 of the 16 index bits are automatically identical; the remaining 16 - 13 = 3 bits have to be made the same.
3 bits give 2^3 = 8 combinations, each of which can map to a different set, so we need 8 different colours for the pages.
In a physically indexed cache the index comes from physical address bits, so the mapping of a page to sets is one-to-one. In a virtually indexed cache the index comes from (offset + set) bits of the virtual address, and the extra untranslated bits mean the mapping is no longer one-to-one. They have to be handled so that two virtual addresses referring to the same physical page fall in the same set; pages of the same colour are kept in the same set, which avoids write-update problems.
Question 60 |
A process executes the code
fork(); fork(); fork();
The total number of child processes created is
3 | |
4 | |
7 | |
8 |
Three fork() calls create 2^3 = 8 processes in total, of which 2^3 - 1 = 7 are child processes.
Question 61 |
Consider the 3 processes, P1, P2 and P3 shown in the table.
Process    Arrival time    Time Units Required
P1         0               5
P2         1               7
P3         3               4
The completion order of the 3 processes under the policies FCFS and RR2 (round robin scheduling with CPU quantum of 2 time units) are
FCFS: P1, P2, P3 RR2: P1, P2, P3 | |
FCFS: P1, P3, P2 RR2: P1, P3, P2 | |
FCFS: P1, P2, P3 RR2: P1, P3, P2 | |
FCFS: P1, P3, P2 RR2: P1, P2, P3 |
FCFS is clear.
RR Queue: In RR queue time slot is of 2 units.
Processes are assigned in following order
P1, P2, P1, P3, P2, P1, P3, P2, P2
This question uses the ready-queue concept: at t = 2, P2 starts and P1 is sent to the ready queue; at t = 3, P3 arrives and is queued after P1. So at t = 4, P1 runs again, and P3 runs for the first time at t = 6.
RR2: P1, P3, P2
So option C.
Question 62 |
Fetch_And_Add(X,i) is an atomic Read-Modify-Write instruction that reads the value of memory location X, increments it by the value i, and returns the old value of X. It is used in the pseudocode shown below to implement a busy-wait lock. L is an unsigned integer shared variable initialized to 0. The value of 0 corresponds to lock being available, while any non-zero value corresponds to the lock being not available.
AcquireLock(L) {
    while (Fetch_And_Add(L, 1))
        L = 1;
}

ReleaseLock(L) {
    L = 0;
}
This implementation
fails as L can overflow | |
fails as L can take on a non-zero value when the lock is actually available | |
works correctly but may starve some processes | |
works correctly without starvation |
while (Fetch_And_Add (L,1))
L = 1; // A waiting process can be here just after
// the lock is released, and can make L = 1.
Assume P1 executes up to the while condition and is preempted before executing L = 1. Now P2 executes all statements, so L = 0 (the lock is released). Then P1, without re-checking L, sets L = 1 by executing the statement at which it was preempted.
It takes a non-zero value (L=1) when the lock is actually available (L = 0). So option B.
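For contrast, a minimal correct busy-wait lock sketch using a C11 atomic exchange (this is a standard test-and-set spinlock, not the Fetch_And_Add code from the question): the flag is only ever set to 1 by the atomic operation itself, so the stale "L = 1" problem above cannot occur.

#include <stdatomic.h>

atomic_int L = 0;                 /* 0 = lock available, 1 = lock held */

void AcquireLock(void)
{
    /* atomically set L to 1 and keep spinning while the previous value was 1 */
    while (atomic_exchange(&L, 1) == 1)
        ;                         /* busy-wait */
}

void ReleaseLock(void)
{
    atomic_store(&L, 0);
}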
Question 63 |
A file system with 300 GByte disk uses a file descriptor with 8 direct block addresses, 1 indirect block address and 1 doubly indirect block address. The size of each disk block is 128 Bytes and the size of each disk block address is 8 Bytes. The maximum possible file size in this file system is
3 KBytes | |
35 KBytes | |
280 KBytes | |
dependent on the size of the disk |
So, the 8 direct block addresses point to 8 disk blocks = 8 × 128 B = 1 KB.
The single indirect block address points to 1 disk block holding 128/8 = 16 disk block addresses = 16 × 128 B = 2 KB.
Doubly indirect block addressing will point to 1 disk block which has 128/8 addresses to disk blocks which in turn has 128/8 addresses to disk blocks = 16*16*128 B = 32 KB
Maximum possible file size = 1 KB + 2 KB + 32 KB = 35 KB
Question 64 |
Consider the virtual page reference string
1, 2, 3, 2, 4, 1, 3, 2, 4, 1
on a demand paged virtual memory system running on a computer whose main memory has 3 page frames, which are initially empty. Let LRU, FIFO and OPTIMAL denote the number of page faults under the corresponding page replacement policies. Then
OPTIMAL < LRU < FIFO | |
OPTIMAL < FIFO < LRU | |
OPTIMAL = LRU | |
OPTIMAL = FIFO |

No. of page faults with FIFO = 6
LRU:

No. of page faults with LRU = 9
Optimal:

No. of page faults with optimal = 5
∴ Optimal < FIFO < LRU
Question 65 |
Let the page fault service time be 10 ms in a computer with average memory access time being 20 ns. If one page fault is generated for every 10^6 memory accesses, what is the effective access time for the memory?
21 ns | |
30 ns | |
23 ns | |
35 ns |
EA = p × page fault service time + (1 – p) × Memory access time
= (1/10^6) × 10 × 10^6 ns + (1 - 1/10^6) × 20 ns ≅ 29.99 ns ≈ 30 ns
Question 66 |
A thread is usually defined as a "light weight process" because an operating system (OS) maintains smaller data structures for a thread than for a process. In relation to this, which of the following is TRUE?
On per-thread basis, the OS maintains only CPU register state | |
The OS does not maintain a separate stack for each thread | |
On per-thread basis, the OS does not maintain virtual memory state | |
On per thread basis, the OS maintains only scheduling and accounting information |
A) False, the OS maintains more than just the CPU register state per thread (for example, a stack and scheduling information).
B) False, the OS does maintain a separate stack for each thread.
C) True
D) False
Question 67 |
Let the time taken to switch between user and kernel modes of execution be t1 while the time taken to switch between two processes be t2. Which of the following is TRUE?
(t1) > (t2) | |
(t1) = (t2) | |
(t1) < (t2) | |
Nothing can be said about the relation between t1 and t2 |
Question 68 |
Consider the following table of arrival time and burst time for three processes P0, P1 and P2.

The pre-emptive shortest job first scheduling algorithm is used. Scheduling is carried out only at arrival or completion of processes. What is the average waiting time for the three processes?
5.0 ms | |
4.33 ms | |
6.33 ms | |
7.33 ms |

CT = Completion time
TAT = Turn Around Time
WT = Waiting Time
TAT = CT - AT
WT = TAT - BT
Gantt Chart using pre-emptive shortest job first scheduling algorithm,

Avg. WT = (4 + 0 + 11)/3 = 15/3 = 5 ms
Question 69 |
Consider the methods used by processes P1 and P2 for accessing their critical sections whenever needed, as given below. The initial values of shared boolean variables S1 and S2 are randomly assigned.

Which one of the following statements describes the properties achieved?
Mutual exclusion but not progress | |
Progress but not mutual exclusion | |
Neither mutual exclusion nor progress | |
Both mutual exclusion and progress |
Question 70 |
A system uses FIFO policy for page replacement. It has 4 page frames with no pages loaded to begin with. The system first accesses 100 distinct pages in some order and then accesses the same 100 pages but now in the reverse order. How many page faults will occur?
196 | |
192 | |
197 | |
195 |
Pages 1 …… 100 cause 100 page faults.
Now, when the pages are accessed in reverse order, pages 100, 99, 98 and 97 are already present, so those four accesses are hits; the remaining 96 accesses (pages 96 down to 1) are page faults. So, a total of 100 + 96 = 196 page faults.
Question 71 |
Which of the following statements are true?
- I. Shortest remaining time first scheduling may cause starvation
II. Preemptive scheduling may cause starvation
III. Round robin is better than FCFS in terms of response time
I only | |
I and III only | |
II and III only | |
I, II and III |
- Shortest remaining time first may cause starvation of long processes.
- Pre-emptive scheduling may cause starvation (for example, lower priority jobs may keep waiting).
- Round robin gives better response time than FCFS.
Question 72 |
The following program consists of 3 concurrent processes and 3 binary semaphores.The semaphores are initialized as S0 = 1, S1 = 0, S2 = 0.

How many times will process P0 print '0'?
At least twice | |
Exactly twice | |
Exactly thrice | |
Exactly once |
S1=0
S2=0
P0 enters the critical section first,
prints (‘0’)
releases S1,S2(i.e., S1=1 & S2=1)
Now P1 and P2 can both proceed; each of them releases S0, so P0 can enter again and print '0'.
This continues, hence the number of zeros printed is at least 2.
Question 73 |
A system has n resources R0,...,Rn-1,and k processes P0,....Pk-1.The implementation of the resource request logic of each process Pi is as follows:
if (i % 2 == 0) {
    if (i < n) request Ri
    if (i+2 < n) request Ri+2
}
else {
    if (i < n) request Rn-i
    if (i+2 < n) request Rn-i-2
}
In which one of the following situations is a deadlock possible?
n = 40, k = 26 | |
n = 21, k = 12 | |
n = 20, k = 10 | |
n = 41, k = 19 |
For n = 21 and k = 12, consider P9 and P10.
P9 (odd) requests R12 first and then R10; P10 (even) requests R10 first and then R12.
If P9 acquires R12 and P10 acquires R10, each then waits for the resource held by the other, so a deadlock is possible.
Question 74 |
In which one of the following page replacement policies, Belady’s anomaly may occur?
FIFO | |
Optimal | |
LRU | |
MRU |
Question 75 |
The essential content(s) in each entry of a page table is/are
Virtual page number | |
Page frame number | |
Both virtual page number and page frame number | |
Access right information |
The virtual page number is used as the index into the page table, so it need not be stored in the entry; the essential content of each page table entry is the page frame number.
Question 76 |
Consider a system with 4 types of resources R1 (3 units), R2 (2 units), R3 (3 units), R4 (2 units). A non-preemptive resource allocation policy is used. At any given instance, a request is not entertained if it cannot be completely satisfied. Three processes P1, P2, P3 request the resources as follows if executed independently.

Which one of the following statements is TRUE if all three processes run concurrently starting at time t=0?
All processes will finish without any deadlock. | |
Only P1 and P2 will be in deadlock. | |
Only P1 and P3 will be in a deadlock. | |
All three processes will be in deadlock. |
→ P1 requests 2 units of R2 which is granted.
→ P2 requests 2 units of R3 which is granted.
→ P3 requests 1 units of R4 which is granted.
Now Available resources are,

At t=1,
→ P1 requests 1 unit of R3 which is granted because it is available.
Now Available resources are,

At t=2,
→ P2 requests 1 unit of R4 which is granted.
→ P3 requests 2 units of R1.
Now Available resources are,

At t=3,
→ P1 requests 2 units of R1 which cannot be granted, and will wait for other processes to release.
Available resources are,

At t=4,
→ P2 requests 1 unit of R1, which is granted.
Available resources are

At t=5,
→ P3 releases 2 units of R1.
Now Available resources are,

→ Now P1 is waiting for R1, so now P1 will be granted 2 units of R1.
Now Available resources are,

→ Now P1 releases 1 unit of R2 and 1 unit of R1.
Now Available resources are

At t=6,
→ Now P2 releases 1 unit of R3.
Now available resources are,

At t=7,
→ P3 requests 1 unit of R2, which is granted.
→ P1 releases 1 unit of R3.
Now Available resources are,

At t=8,
→ P2 finishes and releases its remaining resources. Now Available resources are,

→ P3 requests 1 unit of R3 which is granted.
Now Available resources are,

→ P1 requests 2 units of R4 which cannot be granted, so it will wait for other process to release them.
At t=9,
→ P3 finishes, and releases rest of the resources.
Now Available resources are

→ Now, P1 can be granted with resources 2 units of R4 for which it was waiting for.
Now Available resources are,

At t=10,
→ P1 finishes its execution.
So, finally we can conclude that all processes will finish without any deadlock.
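Every step in the trace above applies the same rule: a request is granted only if all of it can be met from the currently available units, otherwise the process waits. A small C sketch of that check (the vectors below are illustrative values taken from the t = 0 step, not from the missing tables):

#include <stdio.h>
#include <stdbool.h>

#define NTYPES 4   /* resource types R1..R4 */

/* Grant a request only if every requested amount fits in what is available:
   the non-preemptive "all or nothing" policy stated in the question. */
static bool try_grant(int available[NTYPES], const int request[NTYPES]) {
    for (int r = 0; r < NTYPES; r++)
        if (request[r] > available[r])
            return false;              /* cannot be completely satisfied: wait */
    for (int r = 0; r < NTYPES; r++)
        available[r] -= request[r];    /* grant: deduct from the available pool */
    return true;
}

int main(void) {
    int available[NTYPES] = {3, 2, 3, 2};   /* all units of R1..R4 free at t = 0 */
    int p1_req[NTYPES]    = {0, 2, 0, 0};   /* e.g. P1 asks for 2 units of R2 */
    printf("P1 request %s\n", try_grant(available, p1_req) ? "granted" : "must wait");
    return 0;
}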
Question 77 |
Consider a disk system with 100 cylinders. The requests to access the cylinders occur in following sequence:
4, 34, 10, 7, 19, 73, 2, 15, 6, 20
Assuming that the head is currently at cylinder 50, what is the time taken to satisfy all requests if it takes 1ms to move from one cylinder to adjacent one and shortest seek time first policy is used?
95 ms | |
119 ms | |
233 ms | |
276 ms |
Request sequence: 4, 34, 10, 7, 19, 73, 2, 15, 6, 20
Arranged in order: 2, 4, 6, 7, 10, 15, 19, 20, 34, 73

⇒ It takes 1 ms to move from one cylinder to an adjacent one.
⇒ Starting at 50, SSTF serves 34, 20, 19, 15, 10, 7, 6, 4, 2 and finally 73:
⇒ (16×1)+(14×1)+(1×1)+(4×1)+(5×1)+(3×1)+(1×1)+(2×1)+(2×1)+(71×1)
⇒ 16+14+1+4+5+3+1+2+2+71
⇒ 119 ms
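The same schedule can be reproduced mechanically. A short C simulation of SSTF over the given request list (head starting at 50, 1 ms per cylinder moved) prints 119 ms:

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

int main(void) {
    int req[] = {4, 34, 10, 7, 19, 73, 2, 15, 6, 20};
    int n = sizeof(req) / sizeof(req[0]);
    bool done[10] = {false};
    int head = 50, total = 0;

    for (int served = 0; served < n; served++) {
        int best = -1, bestDist = 0;
        for (int i = 0; i < n; i++) {             /* pick the nearest pending request */
            if (done[i]) continue;
            int d = abs(req[i] - head);
            if (best == -1 || d < bestDist) { best = i; bestDist = d; }
        }
        done[best] = true;
        total += bestDist;                        /* 1 ms per cylinder moved */
        head = req[best];
    }
    printf("Total seek time = %d ms\n", total);   /* prints 119 */
    return 0;
}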
Question 78 |
In the following process state transition diagram for a uniprocessor system, assume that there are always some processes in the ready state:

Now consider the following statements:
- I. If a process makes a transition D, it would result in another process making transition A immediately.
II. A process P2 in blocked state can make transition E while another process P1 is in running state.
III. The OS uses preemptive scheduling.
IV. The OS uses non-preemptive scheduling.
Which of the above statements are TRUE?
I and II | |
I and III
| |
II and III | |
II and IV |
Statement I is false: a process making transition D does not force another process to make transition A immediately; dispatching a new process is a separate scheduler decision.
Statement II is true: a blocked process moves to the ready state when its I/O completes, irrespective of whether another process is currently running.
Statement III is true: transition C (running to ready) exists only under preemptive scheduling.
Statement IV is false for the same reason.
Correct answer is Option C (II and III are true).
Question 79 |
The enter_CS() and leave_CS() functions to implement critical section of a process are realized using test-and-set instruction as follows:
void enter_CS(X) { while test-and-set(X) ; } void leave_CS(X) { X = 0; }
In the above solution, X is a memory location associated with the CS and is initialized to 0. Now consider the following statements:
- I. The above solution to CS problem is deadlock-free.
II. The solution is starvation free.
III. The processes enter CS in FIFO order.
IV More than one process can enter CS at the same time.
Which of the above statements is TRUE?
I only | |
I and II | |
II and III | |
IV only |
I is true: some process always succeeds in the test-and-set, so the solution is deadlock-free.
II and III are false: there is no queue of waiting processes, so an unlucky process may spin forever (starvation is possible) and entry is certainly not in FIFO order.
IV is false: test-and-set is atomic, so only one process can be inside the CS at a time.
So, the answer is option A.
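A minimal sketch of the same lock written against GCC's __sync_lock_test_and_set builtin as the atomic test-and-set (using this particular builtin is an assumption; the question only defines test-and-set abstractly):

#include <stdio.h>

static volatile int X = 0;          /* 0 = lock free, as in the question */

/* Spin until test-and-set returns the old value 0, i.e. until we are the
   process that changed X from 0 to 1. Mutual exclusion and deadlock freedom
   hold, but an unlucky process may spin forever while others keep winning:
   no starvation freedom and no FIFO ordering. */
static void enter_CS(void) {
    while (__sync_lock_test_and_set(&X, 1))
        ;                           /* busy wait */
}

static void leave_CS(void) {
    __sync_lock_release(&X);        /* equivalent to X = 0 */
}

int main(void) {
    enter_CS();
    printf("in critical section\n");
    leave_CS();
    return 0;
}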
Question 80 |
A multilevel page table is preferred in comparison to a single level page table for translating virtual address to physical address because
It reduces the memory access time to read or write a memory location.
| |
It helps to reduce the size of page table needed to implement the virtual address space of a process.
| |
It is required by the translation lookaside buffer. | |
It helps to reduce the number of page faults in page replacement algorithms. |
Question 81 |
The data blocks of a very large file in the Unix file system are allocated using
contiguous allocation | |
linked allocation | |
indexed allocation | |
an extension of indexed allocation |
Question 82 |
The P and V operations on counting semaphores, where s is a counting semaphore, are defined as follows:
P(s): s = s - 1; if (s < 0) then wait; V(s): s = s + 1; if (s <= 0) then wakeup a process waiting on s;
Assume that Pb and Vb the wait and signal operations on binary semaphores are provided. Two binary semaphores Xb and Yb are used to implement the semaphore operations P(s) and V(s) as follows:
P(s): Pb(Xb); s = s - 1; if (s < 0) { Vb(Xb) ; Pb(Yb) ; } else Vb(Xb); V(s): Pb(Xb) ; s = s + 1; if (s <= 0) Vb(Yb) ; Vb(Xb) ;
The initial values of Xb and Yb are respectively
0 and 0 | |
0 and 1 | |
1 and 0 | |
1 and 1 |
Xb protects the update of s, so it must behave as a mutex and be initialized to 1.
Yb is used only to block processes when s goes negative, so it must be 0 initially; if Yb were 1, a process could get through Pb(Yb) without a matching V(s) and two processes could be in the critical section at the same time.
So the answer is Option (C): Xb = 1 and Yb = 0.
Question 83 |
Which of the following statements about synchronous and asynchronous I/O is NOT true?
An ISR is invoked on completion of I/O in synchronous I/O but not in asynchronous I/O
| |
In both synchronous and asynchronous I/O, an ISR (Interrupt Service Routine) is invoked after completion of the I/O | |
A process making a synchronous I/O call waits until I/O is complete, but a process making an asynchronous I/O call does not wait for completion of the I/O
| |
In the case of synchronous I/O, the process waiting for the completion of I/O is woken up by the ISR that is invoked after the completion of I/O
|
Synchronous I/O -- some execution vehicle (like a process or thread) that initiates the I/O also waits for the I/O to complete (and perhaps completes it). When the I/O completes, that same execution vehicle goes on to do something else, perhaps using the results of the I/O.
Asynchronous I/O -- no execution vehicle waits for the I/O to complete. When the I/O completes, whatever execution vehicle happens to complete the I/O may arrange for later things to happen.
Option A is the statement that is NOT true: an interrupt is raised on completion of the I/O in both the synchronous and the asynchronous case, so an ISR is invoked in both; the difference lies only in whether the requesting process waits for completion (options B, C and D are true).
Question 84 |
Which of the following is NOT true of deadlock prevention and deadlock avoidance schemes?
In deadlock prevention, the request for resources is always granted if the resulting state is safe | |
In deadlock avoidance, the request for resources is always granted if the result state is safe | |
Deadlock avoidance is less restrictive than deadlock prevention | |
Deadlock avoidance requires knowledge of resource requirements a priori |
Question 85 |
A process executes the following code
for(i=0; i<n; i++) fork();
The total number of child processes created is
n | |
2^n - 1 | |
2^n | |
2^(n+1) - 1 |
fork() is an operation by which a process creates a copy of itself.

After n fork() calls the number of child processes is 1, 3, 7, 15, 31, ... ⇒ 2^n - 1.
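The count can be verified with a small runnable variant (n kept small so the process tree stays manageable):

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int n = 3;                       /* small n: expect 2^3 - 1 = 7 children */
    for (int i = 0; i < n; i++)
        fork();                      /* every existing process forks once per iteration */

    printf("process %d\n", (int)getpid());   /* one line per process */
    while (wait(NULL) > 0)
        ;                            /* reap any children this process created */
    return 0;
}

Piping the output through wc -l counts 2^n = 8 lines, i.e. 7 children plus the original process.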
Question 86 |
A processor uses 36 bit physical addresses and 32 bit virtual addresses, with a page frame size of 4 Kbytes. Each page table entry is of size 4 bytes. A three level page table is used for virtual to physical address translation, where the virtual address is used as follows
- • Bits 30-31 are used to index into the first level page table
• Bits 21-29 are used to index into the second level page table
• Bits 12-20 are used to index into the third level page table, and
• Bits 0-11 are used as offset within the page
The number of bits required for addressing the next level page table (or page frame) in the page table entry of the first, second and third level page tables are respectively.
20, 20 and 20 | |
24, 24 and 24 | |
24, 24 and 20 | |
25, 25 and 24 |
From the question we can see the following:
Physical address size = 36 bits, so physical memory size = 2^36 bytes.
Page frame size = 4 KB = 2^12 bytes, so the page offset needs 12 bits.
No. of bits required to address a physical page frame = 36 - 12 = 24.
So a third-level page table entry needs 24 bits to point to a page frame.
A third-level page table (and likewise a second-level one) is indexed by 9 bits of the virtual address, and each entry is 4 bytes, so its size is 2^9 × 4 = 2^11 bytes. It can therefore be placed at any of 2^36 / 2^11 = 2^25 locations in physical memory, so the entry pointing to it (in the second-level and in the first-level page table respectively) needs 25 bits.
Hence the first, second and third level page table entries need 25, 25 and 24 bits respectively.
Answer - D
Question 87 |
Group 1 contains some CPU scheduling algorithms and Group 2 contains some applications. Match entries in Group 1 to entries in Group 2.
Group I Group II (P) Gang Scheduling (1) Guaranteed Scheduling (Q) Rate Monotonic Scheduling (2) Real-time Scheduling (R) Fair Share Scheduling (3) Thread Scheduling
P – 3 Q – 2 R – 1 | |
P – 1 Q – 2 R – 3 | |
P – 2 Q – 3 R – 1 | |
P – 1 Q – 3 R – 2 |
⇒ Gang scheduling schedules a set of related threads to run together, so it matches thread scheduling.
⇒ Rate monotonic scheduling is a classic real-time scheduling algorithm.
⇒ Fair share scheduling divides the CPU among users or groups so that each gets a guaranteed share, i.e. guaranteed scheduling.
Hence P–3, Q–2, R–1.
Question 88 |
Consider the following statements about user level threads and kernel level threads. Which one of the following statements is FALSE?
Context switch time is longer for kernel level threads than for user level threads. | |
User level threads do not need any hardware support.
| |
Related kernel level threads can be scheduled on different processors in a multi-processor system. | |
Blocking one kernel level thread blocks all related threads. |
A) True: a kernel-level thread switch involves the kernel, so it takes longer than a user-level thread switch.
B) True: user-level threads are managed entirely by a user-space library, so no special hardware support is needed.
C) True: the kernel can schedule related kernel-level threads on different processors.
D) False: blocking one kernel-level thread does not block the related threads, because the kernel schedules each of them independently.
Question 89 |
An operating system uses Shortest Remaining Time first (SRT) process scheduling algorithm. Consider the arrival times and execution times for the following processes:
Process Execution time Arrival time P1 20 0 P2 25 15 P3 10 30 P4 15 45
What is the total waiting time for process P2?
5 | |
15 | |
40 | |
55 |

With SRT the schedule is P1 (0-20), P2 (20-30), P3 (30-40), P2 (40-55), P4 (55-70), so P2 completes at 55.
Waiting time for process P2 = TAT - Execution time
= Completion time - AT - ET
= 55 - 15 - 25
= 15
Question 90 |
A virtual memory system uses First In First Out (FIFO) page replacement policy and allocates a fixed number of frames to a process. Consider the following statements:
P: Increasing the number of page frames allocated to a process sometimes increases the page fault rate. Q: Some programs do not exhibit locality of reference.
Which one of the following is TRUE?
Both P and Q are true, and Q is the reason for P
| |
Both P and Q are true, but Q is not the reason for P | |
P is false, but Q is true | |
Both P and Q are false |
Statement P: FIFO suffers from Belady's anomaly, in which increasing the number of page frames sometimes increases the number of page faults. So P is true.
Statement Q: Locality of reference (the principle of locality) is the tendency of a program to access the same set of memory locations repeatedly over a short period of time, and some programs indeed do not exhibit it. So Q is true.
However, Belady's anomaly is a property of the FIFO policy itself, not of locality, so Q is not the reason for P.
Question 91 |
A single processor system has three resource types X, Y and Z, which are shared by three processes. There are 5 units of each resource type. Consider the following scenario, where the column alloc denotes the number of units of each resource type allocated to each process, and the column request denotes the number of units of each resource type requested by a process in order to complete execution. Which of these processes will finish LAST?
alloc request X Y Z X Y Z P0 1 2 1 1 0 3 P1 2 0 1 0 1 2 P2 2 2 1 1 2 0
P0 | |
P1 | |
P2 | |
None of the above, since the system is in a deadlock.
|

Total allocation is (5, 4, 3) for (X, Y, Z), so Available = (5-5, 5-4, 5-3) = (0, 1, 2).
Only P1's request (0, 1, 2) can be met first; when P1 finishes, Available = (2, 1, 3), which satisfies P0's request (1, 0, 3); when P0 finishes, P2 can also finish.
The system is in a safe state with execution order P1 → P0 → P2.
P2 will execute last.
Question 92 |
Two processes, P1 and P2, need to access a critical section of code. Consider the following synchronization construct used by the processes:
/* P1 */ while (true) { wants1 = true; while (wants2 == true); /* Critical Section */ wants1=false; } /* Remainder section */ /* P2 */ while (true) { wants2 = true; while (wants1==true); /* Critical Section */ wants2 = false; } /* Remainder section */
Here, wants1 and wants2 are shared variables, which are initialized to false. Which one of the following statements is TRUE about the above construct?
It does not ensure mutual exclusion. | |
It does not ensure bounded waiting.
| |
It requires that processes enter the critical section in strict alternation. | |
It does not prevent deadlocks, but ensures mutual exclusion. |
Mutual exclusion is ensured: a process enters the critical section only after setting its own flag and observing the other's flag as false.
However, if P1 executes wants1 = true and is preempted, and P2 then executes wants2 = true, both processes spin forever in their while loops: a deadlock.
Hence the construct does not prevent deadlocks, but it ensures mutual exclusion (option D).
Question 93 |
A process has been allocated 3 page frames. Assume that none of the pages of the process are available in the memory initially. The process makes the following sequence of page references (reference string): 1, 2, 1, 3, 7, 4, 5, 6, 3, 1
If optimal page replacement policy is used, how many page faults occur for the above reference string?
7 | |
8 | |
9 | |
10 |
1, 2, 1, 3, 7, 4, 5, 6, 3, 1
No. of frames = 3
Using Optimal page replacement,

Total 7 page faults.
Question 94 |
A process has been allocated 3 page frames. Assume that none of the pages of the process are available in the memory initially. The process makes the following sequence of page references (reference string): 1, 2, 1, 3, 7, 4, 5, 6, 3, 1
Least Recently Used (LRU) page replacement policy is a practical approximation to optimal page replacement. For the above reference string, how many more page faults occur with LRU than with the optimal page replacement policy?
0 | |
1 | |
2 | |
3 |
1, 2, 1, 3, 7, 4, 5, 6, 3, 1
No. of frames = 3 (Using LRU)

No. of page faults with LRU = 9
⟶ In LRU we replace the least recently used page.
Using optimal page replacement
⟶ 1,2,1,3,7,4,5,6,3,1

No. of page faults with optimal = 7
Difference between optimal and LRU = 9 - 7 = 2
In optimal replacement we evict the page whose next use lies farthest in the future.
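The LRU figure can be cross-checked with a short C simulation of the reference string on 3 frames; it reports 9 faults:

#include <stdio.h>

#define FRAMES 3

int main(void) {
    int ref[] = {1, 2, 1, 3, 7, 4, 5, 6, 3, 1};
    int n = sizeof(ref) / sizeof(ref[0]);
    int frame[FRAMES], last_use[FRAMES];        /* page held and time of its last use */
    int used = 0, faults = 0;

    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int f = 0; f < used; f++)
            if (frame[f] == ref[t]) { hit = f; break; }
        if (hit >= 0) { last_use[hit] = t; continue; }   /* hit: refresh recency */

        faults++;
        if (used < FRAMES) {                    /* a free frame is still available */
            frame[used] = ref[t];
            last_use[used] = t;
            used++;
        } else {                                /* evict the least recently used page */
            int lru = 0;
            for (int f = 1; f < FRAMES; f++)
                if (last_use[f] < last_use[lru]) lru = f;
            frame[lru] = ref[t];
            last_use[lru] = t;
        }
    }
    printf("LRU page faults = %d\n", faults);   /* prints 9 for this string */
    return 0;
}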
Question 95 |
Consider three CPU-intensive processes, which require 10, 20 and 30 time units and arrive at times 0, 2 and 6, respectively. How many context switches are needed if the operating system implements a shortest remaining time first scheduling algorithm? Do not count the context switches at time zero and at the end.
1 | |
2 | |
3 | |
4 |

With SRTF, P1 runs from 0 to 10 (neither P2 nor P3 can preempt it), then P2 runs from 10 to 30, then P3 from 30 to 60. Ignoring the switches at time zero and at the end, the total no. of context switches is 2.
Question 96 |
The atomic fetch-and-set x, y instruction unconditionally sets the memory location x to 1 and fetches the old value of x in y without allowing any intervening access to the memory location x. Consider the following implementation of P and V functions on a binary semaphore S.
void P (binary_semaphore *s) { unsigned y; unsigned *x = &(s->value); do { fetch-and-set x, y; } while (y); } void V (binary_semaphore *s) { S->value = 0; }
Which one of the following is true?
The implementation may not work if context switching is disabled in P | |
Instead of using fetch-and-set, a pair of normal load/store can be used | |
The implementation of V is wrong | |
The code does not implement a binary semaphore |
A) Correct: if context switching is disabled while a process is spinning inside P, the process holding the semaphore never gets the CPU to execute V, so the implementation may not work.
B) Wrong: with a normal load/store pair, more than one process could read S->value as 0 and enter, so mutual exclusion would not be satisfied.
C) Wrong: V simply sets S->value back to 0, which is exactly what releasing a binary semaphore requires.
D) Wrong: the fetch-and-set in P guarantees that only one process at a time sees the old value 0, so the code does behave as a binary semaphore.
Question 97 |
A CPU generates 32-bit virtual addresses. The page size is 4 KB. The processor has a translation look-aside buffer (TLB) which can hold a total of 128 page table entries and is 4-way set associative. The minimum size of the TLB tag is:
11 bits | |
13 bits | |
15 bits | |
20 bits |
Virtual Address = 32 bits; page size = 4 KB, so the page offset is 12 bits.
No. of bits in the virtual page number = 32 - 12 = 20.
The TLB holds 128 entries and is 4-way set associative,
⇒ number of sets = 128/4 = 32 = 2^5
→ 5 bits of the virtual page number select the set.
→ The size of the TLB tag = 20 - 5 = 15 bits.
Question 98 |
A computer system supports 32-bit virtual addresses as well as 32-bit physical addresses. Since the virtual address space is of the same size as the physical address space, the operating system designers decide to get rid of the virtual memory entirely. Which one of the following is true?
Efficient implementation of multi-user support is no longer possible
| |
The processor cache organization can be made more efficient now
| |
Hardware support for memory management is no longer needed | |
CPU scheduling can be made more efficient now |
→ Hardware support for memory management (address-translation hardware such as an MMU and TLB) is needed only to implement virtual memory, so once virtual memory is removed it is no longer needed.
Question 99 |
Consider three processes (process id 0, 1, 2 respectively) with compute time bursts 2, 4 and 8 time units. All processes arrive at time zero. Consider the longest remaining time first (LRTF) scheduling algorithm. In LRTF ties are broken by giving priority to the process with the lowest process id. The average turnaround time is:
13 units | |
14 units | |
15 units | |
16 units
|

Completion times are 12, 13 and 14 for P0, P1 and P2 respectively, so Avg TAT = (12+13+14)/3 = 39/3 = 13 units.
Question 100 |
Consider three processes, all arriving at time zero, with total execution time of 10, 20 and 30 units, respectively. Each process spends the first 20% of execution time doing I/O, the next 70% of time doing computation, and the last 10% of time doing I/O again. The operating system uses a shortest remaining compute time first scheduling algorithm and schedules a new process either when the running process gets blocked on I/O or when the running process finishes its compute burst. Assume that all I/O operations can be overlapped as much as possible. For what percentage of time does the CPU remain idle?
0% | |
10.6% | |
30.0% | |
89.4% |

Total time needed to complete the execution = 47
Idle time = 2+3 = 5
Percentage of Idle time = 5/47 × 100 =10.6%
Question 101 |
Consider the following snapshot of a system running n processes. Process i is holding xi instances of a resource R, 1≤i≤n. Currently, all instances of R are occupied. Further, for all i, process i has placed a request for an additional yi instances while holding the xi instances it already has. There are exactly two processes p and q such that yp = yq = 0. Which one of the following can serve as a necessary condition to guarantee that the system is not approaching a deadlock?
min (xp, xq) < maxk≠p,qyk | |
xp + xq ≥ mink≠p,qyk | |
max (xp, xq) > 1 | |
min (xp, xq) > 1 |
→ A deadlock is approached when processes keep waiting for one another to release resources.
→ p and q need no additional instances (yp = yq = 0), so they can run to completion and release xp + xq instances of R.
→ If xp + xq ≥ min{yk | k ≠ p, q}, these released instances are enough to satisfy at least one of the remaining processes, which can then finish and release its own holdings, and so on.
→ Hence option B gives the required condition.
Question 102 |
Barrier is a synchronization construct where a set of processes synchronizes globally i.e. each process in the set arrives at the barrier and waits for all others to arrive and then all processes leave the barrier. Let the number of processes in the set be three and S be a binary semaphore with the usual P and V functions. Consider the following C implementation of a barrier with line numbers shown on left.
void barrier (void) { 1: P(S); 2: process_arrived++; 3. V(S); 4: while (process_arrived !=3); 5: P(S); 6: process_left++; 7: if (process_left==3) { 8: process_arrived = 0; 9: process_left = 0; 10: } 11: V(S); }
The variables process_arrived and process_left are shared among all processes and are initialized to zero. In a concurrent program all the three processes call the barrier function when they need to synchronize globally.
The above implementation of barrier is incorrect. Which one of the following is true?
The barrier implementation is wrong due to the use of binary semaphore S
| |
The barrier implementation may lead to a deadlock if two barriers in invocations are used in immediate succession
| |
Lines 6 to 10 need not be inside a critical section | |
The barrier implementation is correct if there are only two processes instead of three
|
After the first barrier, a fast process may immediately call barrier() again and increment process_arrived before the slower processes have finished leaving; when the last process of the first invocation then resets process_arrived to 0, that increment is lost, and in the second invocation process_arrived can never reach 3, so all processes spin forever at line 4.
Hence two barrier invocations in immediate succession may lead to a deadlock (option B).
Question 103 |
Barrier is a synchronization construct where a set of processes synchronizes globally i.e. each process in the set arrives at the barrier and waits for all others to arrive and then all processes leave the barrier. Let the number of processes in the set be three and S be a binary semaphore with the usual P and V functions. Consider the following C implementation of a barrier with line numbers shown on left.
void barrier (void) { 1: P(S); 2: process_arrived++; 3. V(S); 4: while (process_arrived !=3); 5: P(S); 6: process_left++; 7: if (process_left==3) { 8: process_arrived = 0; 9: process_left = 0; 10: } 11: V(S); }
The variables process_arrived and process_left are shared among all processes and are initialized to zero. In a concurrent program all the three processes call the barrier function when they need to synchronize globally.
Which one of the following rectifies the problem in the implementation?
Lines 6 to 10 are simply replaced by process_arrived-- | |
At the beginning of the barrier the first process to enter the barrier waits until process_arrived becomes zero before proceeding to execute P(S).
| |
Context switch is disabled at the beginning of the barrier and re-enabled at the end.
| |
The variable process_left is made private instead of shared. |
Question 104 |
Normally user programs are prevented from handling I/O directly by I/O instructions in them. For CPUs having explicit I/O instructions, such I/O protection is ensured by having the I/O instructions privileged. In a CPU with memory mapped I/O, there is no explicit I/O instruction. Which one of the following is true for a CPU with memory mapped I/O?
I/O protection is ensured by operating system routine(s)
| |
I/O protection is ensured by a hardware trap | |
I/O protection is ensured during system configuration
| |
I/O protection is not possible |
Question 105 |
Increasing the RAM of a computer typically improves performance because:
Virtual memory increases | |
Larger RAMs are faster | |
Fewer page faults occur | |
Fewer segmentation faults occur |
→ If the RAM size increases, more page frames can be allocated to processes, so fewer page faults occur and performance improves.
Question 106 |
Suppose n processes, P1, …., Pn share m identical resource units, which can be reserved and released one at a time. The maximum resource requirement of process Pi is si, where si>0. Which one of the following is a sufficient condition for ensuring that deadlock does not occur?
![]() | |
![]() | |
![]() | |
![]() |
Deadlock can never occur if, even when every process holds one unit less than its maximum need, at least one unit is still free:
Σ (si - 1) < m, i.e. Σ si < (m + n).
Question 107 |
Consider the following code fragment:
if (fork() == 0) { a = a + 5; printf(“%d, %d\n”, a, &a); } else { a = a – 5; printf(“%d, %d\n”, a, &a); }
Let u, v be the values printed by the parent process, and x, y be the values printed by the child process. Which one of the following is TRUE?
u = x + 10 and v = y
| |
u = x + 10 and v is ≠ y | |
u + 10 = x and v = y | |
u + 10 = x and v ≠ y |
The parent prints u = a - 5 and v = &a; the child prints x = a + 5 and y = &a.
So x = (a + 5) = (u + 5) + 5 = u + 10, i.e. u + 10 = x.
After fork() the child has its own copy of a (copy-on-write), but that copy sits at the same virtual address, so the printed addresses are equal: v = y.
Hence u + 10 = x and v = y.
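The behaviour can be checked with a complete, runnable version of the fragment (the initial value of a is an arbitrary assumption, and %p is used to print the address portably):

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int a = 100;                     /* any initial value will do */
    if (fork() == 0) {
        a = a + 5;                   /* child: prints x = a + 5 and y = &a */
        printf("child : %d, %p\n", a, (void *)&a);
    } else {
        a = a - 5;                   /* parent: prints u = a - 5 and v = &a */
        printf("parent: %d, %p\n", a, (void *)&a);
        wait(NULL);
    }
    return 0;
}

Both lines show the same virtual address for a, while the child's value exceeds the parent's by 10, confirming u + 10 = x and v = y.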
Question 108 |
Consider the following statements with respect to user-level threads and kernel supported threads
- (i) context switch is faster with kernel-supported threads
(ii) for user-level threads, a system call can block the entire process
(iii) Kernel supported threads can be scheduled independently
(iv) User level threads are transparent to the kernel
Which of the above statements are true?
(ii), (iii) and (iv) only | |
(ii) and (iii) only | |
(i) and (iii) only | |
(i) and (ii) only |
→ (i) is false: context switching between kernel-supported threads involves the kernel and is therefore slower than switching user-level threads.
→ (ii) is true: if one user-level thread makes a blocking system call, the kernel blocks the entire process.
→ (iii) is true: kernel-supported threads are scheduled independently by the OS.
→ (iv) is true: user-level threads are managed by a user-space library and are invisible (transparent) to the kernel.
Hence (ii), (iii) and (iv) only.
Question 109 |
Consider an operating system capable of loading and executing a single sequential user process at a time. The disk head scheduling algorithm used is First Come First Served (FCFS). If FCFS is replaced by Shortest Seek Time First (SSTF), claimed by the vendor to give 50% better benchmark results, what is the expected improvement in the I/O performance of user programs?
50% | |
40% | |
25% | |
0% |
→ The vendor's benchmark improvement does not carry over to this workload.
→ Since the OS runs a single sequential process, there is at most one pending disk request at any time; with only one request to choose from, FCFS and SSTF make exactly the same decision, so the expected improvement is 0%.
Question 110 |
The minimum number of page frames that must be allocated to a running process in a virtual memory environment is determined by
the instruction set architecture | |
page size | |
physical memory size | |
number of processes in memory |
→ The minimum number of frames is fixed by the instruction set architecture: a single instruction, together with all the operands and indirect references it can touch, must be resident for the instruction to complete, and how many different pages that can involve depends on the ISA.
Question 111 |
Consider the following set of processes, with the arrival times and the CPU-burst times given in milliseconds.
Process Arrival Time Burst Time P1 0 5 P2 1 3 P3 2 3 P4 4 1
What is the average turnaround time for these processes with the preemptive shortest remaining processing time first (SRPT) algorithm?
5.50 | |
5.75 | |
6.00 | |
6.25 |

Avg. TAT = (12+3+6+1)/4 = 22/4 = 5.50
Question 112 |
Consider a system with a two-level paging scheme in which a regular memory access takes 150 nanoseconds, and servicing a page fault takes 8 milliseconds. An average instruction takes 100 nanoseconds of CPU time, and two memory accesses. The TLB hit ratio is 90%, and the page fault rate is one in every 10,000 instructions. What is the effective average instruction execution time?
645 nanoseconds | |
1050 nanoseconds | |
1215 nanoseconds | |
2060 nanoseconds |
Effective instruction time = CPU time + 2 × EMAT = 100 ns + 2 × EMAT.
Now let us calculate the effective memory access time, taking the TLB access time as negligible and charging the page-fault overhead to each memory access:
EMAT = TLB miss rate × (2 × 150 ns) + 150 ns + (1/10,000) × 8 ms
= 0.1 × 300 ns + 150 ns + 800 ns
= 980 ns
∴ Effective average instruction time
= 100 ns + 2 × 980 ns
= 2060 ns
Question 113 |
Consider two processes P1 and P2 accessing the shared variables X and Y protected by two binary semaphores SX and SY respectively, both initialized to 1. P and V denote the usual semaphore operators, where P decrements the semaphore value, and V increments the semaphore value. The pseudo-code of P1 and P2 is as follows:
P1 : While true do { L1 : ................ L2 : ................ X = X + 1; Y = Y - 1; V(SX); V(SY); } P2 : While true do { L3 : ................ L4 : ................ Y = Y + 1; X = Y - 1; V(SY); V(SX); }
In order to avoid deadlock, the correct operators at L1, L2, L3 and L4 are respectively
P(Sy), P(Sx); P(Sx), P(Sy)
| |
P(Sx), P(Sy); P(Sy), P(Sx)
| |
P(Sx), P(Sx); P(Sy), P(Sy)
| |
P(Sx), P(Sy); P(Sx), P(Sy)
|
In option D both processes acquire the semaphores in the same order: P(Sx) first, then P(Sy).
If P1 executes L1 and holds Sx, P2 simply blocks at L3 until P1 finishes its critical section and executes V(Sx) and V(Sy); P2 then passes L3 and L4 and enters its own critical section.
Because neither process can hold one semaphore while waiting for the other in a circular fashion, deadlock cannot occur.
With options A and B the two processes acquire Sx and Sy in opposite orders, so each can hold one semaphore and wait for the other (deadlock); in option C a process does P on the same semaphore twice and blocks itself.
So option D is the answer.
Question 114 |
A Unix-style i-node has 10 direct pointers and one single, one double and one triple indirect pointers. Disk block size is 1 Kbyte, disk block address is 32 bits, and 48-bit integers are used. What is the maximum possible file size?
2^24 bytes
 | |
2^32 bytes
 | |
2^34 bytes | |
2^48 bytes |
Each indirect block holds 1 KB / 4 B = 2^8 block addresses.
Maximum number of data blocks = 10 + 2^8 + (2^8)^2 + (2^8)^3 ≈ 2^24 blocks.
Size of each block = 2^10 bytes.
Maximum file size = 2^24 × 2^10 = 2^34 bytes.
Question 115 |
In a system with 32 bit virtual addresses and 1 KB page size, use of one-level page tables for virtual to physical address translation is not practical because of
the large amount of internal fragmentation | |
the large amount of external fragmentation | |
the large memory overhead in maintaining page tables | |
the large computation overhead in the translation process
|
Virtual address space = 2^32 bytes, page size = 1 KB = 2^10 bytes.
No. of entries in a single-level page table = 2^32 / 2^10 = 2^22 = 4M entries,
so the page table itself occupies several megabytes per process: too large a memory overhead.
Question 116 |
A uni-processor computer system only has two processes, both of which alternate 10ms CPU bursts with 90ms I/O bursts. Both the processes were created at nearly the same time. The I/O of both processes can proceed in parallel. Which of the following scheduling strategies will result in the least CPU utilization (over a long period of time) for this system?
First come first served scheduling | |
Shortest remaining time first scheduling | |
Static priority scheduling with different priorities for the two processes | |
Round robin scheduling with a time quantum of 5 ms
|
(i) FCFS:
0-10: P1's CPU burst
10-20: P2's CPU burst
P1's I/O finishes at 100, so its next CPU burst is 100-110; P2's is 110-120, and so on.
CPU utilization = 20/100 (20 ms of CPU work in every 100 ms cycle) = 20%.
(ii) SRTF (and static priority): the bursts are scheduled essentially as in FCFS,
so CPU utilization = 20%.
(iii) Round robin with time quantum 5 ms:
0-5: Process P1
5-10: Process P2
10-15: Process P1
15-20: Process P2
P1's I/O now completes only at 15 + 90 = 105, so the next bursts are 105-110 (P1) and 110-115 (P2).
CPU utilization = 20/105 ≈ 19%, the least of the four options.
Question 117 |
A processor uses 2-level page tables for virtual to physical address translation. Page tables for both levels are stored in the main memory. Virtual and physical addresses are both 32 bits wide. The memory is byte addressable. For virtual to physical address translation, the 10 most significant bits of the virtual address are used as index into the first level page table while the next 10 bits are used as index into the second level page table. The 12 least significant bits of the virtual address are used as offset within the page. Assume that the page table entries in both levels of page tables are 4 bytes wide. Further, the processor has a translation look-aside buffer (TLB), with a hit rate of 96%. The TLB caches recently used virtual page numbers and the corresponding physical page numbers. The processor also has a physically addressed cache with a hit rate of 90%. Main memory access time is 10 ns, cache access time is 1 ns, and TLB access time is also 1 ns.
Assuming that no page faults occur, the average time taken to access a virtual address is approximately (to the nearest 0.5 ns)
1.5 ns | |
2 ns | |
3 ns | |
4 ns |
Average access time = (TLB hit, cache hit) + (TLB hit, cache miss) + (TLB miss, cache hit) + (TLB miss, cache miss)
= (0.96 × 0.9 × 2) + (0.96 × 0.1 × 12) + (0.04 × 0.9 × 22) + (0.04 × 0.1 × 32)
= 1.728 + 1.152 + 0.792 + 0.128
= 3.8 ns
≈ 4 ns (approximately)
(On a TLB hit the cost is 1 ns TLB + 1 ns cache, plus 10 ns memory on a cache miss; on a TLB miss the two page-table levels add 2 × 10 ns of memory accesses.)
Question 118 |
A processor uses 2-level page tables for virtual to physical address translation. Page tables for both levels are stored in the main memory. Virtual and physical addresses are both 32 bits wide. The memory is byte addressable. For virtual to physical address translation, the 10 most significant bits of the virtual address are used as index into the first level page table while the next 10 bits are used as index into the second level page table. The 12 least significant bits of the virtual address are used as offset within the page. Assume that the page table entries in both levels of page tables are 4 bytes wide. Further, the processor has a translation look-aside buffer (TLB), with a hit rate of 96%. The TLB caches recently used virtual page numbers and the corresponding physical page numbers. The processor also has a physically addressed cache with a hit rate of 90%. Main memory access time is 10 ns, cache access time is 1 ns, and TLB access time is also 1 ns.
Suppose a process has only the following pages in its virtual address space: two contiguous code pages starting at virtual address 0x00000000, two contiguous data pages starting at virtual address 0×00400000, and a stack page starting at virtual address 0×FFFFF000. The amount of memory required for storing the page tables of this process is:
8 KB | |
12 KB | |
16 KB | |
20 KB |
The 32-bit virtual address is broken up as: 10 bits (first-level index) | 10 bits (second-level index) | 12 bits (offset)
First code page:
0x00000000 = 0000 0000 00 | 00 0000 0000 | 0000 0000 0000
So next code page will start from
0x00001000 = 0000 0000 00 | 00 0000 0001 | 0000 0000 0000
First data page:
0x00400000 = 0000 0000 01 | 00 0000 0000 | 0000 0000 0000
So next data page will start from
0x00401000 = 0000 0000 01 | 00 0000 0001 | 0000 0000 0000
Only one stack page:
0xFFFFF000 = 1111 1111 11 | 11 1111 1111 | 0000 0000 0000
Now, the first-level page table needs just 1 page, and it is indexed by the 3 distinct top-10-bit values seen above, i.e. 0000 0000 00, 0000 0000 01 and 1111 1111 11.
For each of these distinct entries we need one separate second-level page table, i.e. 3 more pages.
Hence we need 4 pages in total, and each page (page table) is 2^12 = 4 KB.
Therefore, Memory required to store page table = 4*4KB = 16KB.
Question 119 |
Suppose we want to synchronize two concurrent processes P and Q using binary semaphores S and T. The code for the processes P and Q is shown below.
Process P: while (1) { W: print '0'; print '0'; X: } Process Q: while (1) { Y: print '1'; print '1'; Z: }
Synchronization statements can be inserted only at points W, X, Y and Z.
Which of the following will always lead to an output starting with '001100110011'?
P(S) at W, V(S) at X, P(T) at Y, V(T) at Z, S and T initially 1 | |
P(S) at W, V(T) at X, P(T) at Y, V(S) at Z, S initially 1, and T initially 0 | |
P(S) at W, V(T) at X, P(T) at Y, V(S) at Z, S and T initially 1 | |
P(S) at W, V(S) at X, P(T) at Y, V(T) at Z, S initially 1, and T initially 0
|
With S = 1 and T = 0 initially, only P can proceed first: P does P(S) at W, prints '00', and does V(T) at X; Q is blocked at Y on P(T) until then.
Q then does P(T), prints '11', and does V(S) at Z, letting P run again.
So P and Q strictly alternate, starting with P, and the output always begins 001100110011...
Hence option B.
Question 120 |
Suppose we want to synchronize two concurrent processes P and Q using binary semaphores S and T. The code for the processes P and Q is shown below.
Process P: while (1) { W: print '0'; print '0'; X: } Process Q: while (1) { Y: print '1'; print '1'; Z: }
Synchronization statements can be inserted only at points W, X, Y and Z.
Which of the following will ensure that the output string never contains a substring of the form 0 1^n 0 or 1 0^n 1, where n is odd?
P(S) at W, V(S) at X, P(T) at Y, V(T) at Z, S and T initially 1 | |
P(S) at W, V(T) at X, P(T) at Y, V(S) at Z, S and T initially 1 | |
P(S) at W, V(S) at X, P(S) at Y, V(S) at Z, S initially 1 | |
V(S) at W, V(T) at X, P(S) at Y, P(T) at Z, S and T initially 1 |
Question 121 |
Which of the following scheduling algorithms is non-preemptive?
Round Robin | |
First-In First-Out | |
Multilevel Queue Scheduling | |
Multilevel Queue Scheduling with Feedback |
Question 122 |
The optimal page replacement algorithm will select the page that
Has not been used for the longest time in the past. | |
Will not be used for the longest time in the future. | |
Has been used least number of times. | |
Has been used most number of times. |
Question 123 |
Dynamic linking can cause security concerns because
Security is dynamic | |
The path for searching dynamic libraries is not known till runtime | |
Linking is insecure | |
Cryptographic procedures are not available for dynamic linking |
Question 124 |
Which combination of the following features will suffice to characterize an OS as a multi-programmed OS?
- (a) More than one program may be loaded into main memory at the same time for execution.
(b) If a program waits for certain events such as I/O, another program is immediately scheduled for execution.
(c) If the execution of program terminates, another program is immediately scheduled for execution.
A | |
A and B | |
A and C | |
A, B and C |
Question 125 |
In the index allocation scheme of blocks to a file, the maximum possible size of the file depends on
the size of the blocks, and the size of the address of the blocks. | |
the number of blocks used for the index, and the size of the blocks. | |
the size of the blocks, the number of blocks used for the index, and the size of the address of the blocks. | |
None of the above |
No. of data blocks addressable for one file (B) = (no. of blocks used for the index) × (no. of block addresses that fit in one index block), where the latter equals block size / block-address size.
Maximum size of file = B × size of a block.
So the maximum file size depends on the block size, the number of index blocks, and the size of a block address.
Question 126 |
(a) Draw the process state transition diagram of an OS in which (i) each process is in one of the five states: created, ready, running, blocked (i.e. sleep or wait), or terminated, and (ii) only non-preemptive scheduling is used by the OS. Label the transitions appropriately.
(b) The functionality of atomic TEST-AND-SET assembly language instruction is given by the following C function.
int TEST-AND-SET (int *x) { int y; A1:y=*x; A2:*x=1; A3:return y; }
(i) Complete the following C functions for implementing code for entering and leaving critical sections based on the above TEST-AND-SET instruction.
int mutex=0; void enter-cs() { while (…………………………………); } void leave-cs() { …………………………………..; }
(ii) Is the above solution to the critical section problem deadlock free and starvation-free?
(iii) For the above solution, show by an example that mutual exclusion is not ensured if TEST-AND-SET instruction is not atomic.
Theory Explanation is given below. |
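No worked answer is reproduced for this question. A minimal sketch of part (i), renaming TEST-AND-SET to TEST_AND_SET (a hyphen is not legal in a C identifier), would be:

/* Assumed to execute atomically as a single instruction, as the question states. */
int TEST_AND_SET(int *x) {
    int y;
    y = *x;      /* A1 */
    *x = 1;      /* A2 */
    return y;    /* A3 */
}

int mutex = 0;

void enter_cs(void) {
    while (TEST_AND_SET(&mutex))
        ;                    /* spin until we are the one that set mutex to 1 */
}

void leave_cs(void) {
    mutex = 0;               /* release the lock */
}

For part (ii), this solution is deadlock-free but not starvation-free: an unlucky process can keep losing the race indefinitely. For part (iii), if A1-A3 were not atomic, two processes could both execute A1 and read *x as 0 before either executes A2, and both would enter the critical section, so mutual exclusion would not be ensured.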
Question 127 |
A computer system uses 32-bit virtual address, and 32-bit physical address. The physical memory is byte addressable, and the page size is 4 kbytes. It is decided to use two level page tables to translate from virtual address to physical address. Equal number of bits should be used for indexing first level and second level page table, and the size of each page table entry is 4 bytes.
- (a) Give a diagram showing how a virtual address would be translated to a physical address.
(b) What is the number of page table entries that can be contained in each page?
(c) How many bits are available for storing protection and other information in each page table entry?
Theory Explanation is given below. |
Question 128 |
The following solution to the single producer single consumer problem uses semaphores for synchronization.
#define BUFFSIZE 100 buffer buf[BUFFSIZE]; int first=last=0; semaphore b_full=0; semaphore b_empty=BUFFSIZE; void producer() { while (1) { produce an item; p1: …………………..; put the item into buff (first); first=(first+1)%BUFFSIZE; p2: …………………..; } } void consumer() { while (1) { c1:…………………….. take the item from buf[last]; last=(last+1)%BUFFSIZE; c2: ……………………..; consume the item; } }
- (a) Complete the dotted part of the above solution.
(b) Using another semaphore variable, insert one line statement each immediately after p1, immediately before p2, immediately after c1, and immediately before c2 so that the program works correctly for multiple producers and consumers.
Theory Explanation is given below. |
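No worked answer follows for this question either. A runnable POSIX sketch of part (a), filling p1, p2, c1, c2 with the standard single-producer single-consumer pattern (the bounded loops and item values are assumptions made only so the program terminates), is:

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define BUFFSIZE 100
static int buf[BUFFSIZE];
static int first = 0, last = 0;
static sem_t b_full, b_empty;

static void *producer(void *arg) {
    for (int item = 0; item < 10; item++) {
        sem_wait(&b_empty);                   /* p1: wait for a free slot */
        buf[first] = item;
        first = (first + 1) % BUFFSIZE;
        sem_post(&b_full);                    /* p2: announce one more full slot */
    }
    return NULL;
}

static void *consumer(void *arg) {
    for (int i = 0; i < 10; i++) {
        sem_wait(&b_full);                    /* c1: wait for a filled slot */
        int item = buf[last];
        last = (last + 1) % BUFFSIZE;
        sem_post(&b_empty);                   /* c2: announce one more empty slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&b_full, 0, 0);                  /* b_full = 0, b_empty = BUFFSIZE */
    sem_init(&b_empty, 0, BUFFSIZE);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}

For part (b), with multiple producers and consumers, a wait on an additional mutex semaphore would be inserted immediately after p1 and its signal immediately before p2 (and likewise immediately after c1 and before c2) so that updates of first and last are mutually exclusive.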
Question 129 |
Which of the following statements is false?
Virtual memory implements the translation of a program’s address space into physical memory address space | |
Virtual memory allows each program to exceed the size of the primary memory | |
Virtual memory increases the degree of multiprogramming | |
Virtual memory reduces the context switching overhead |
Question 130 |
Consider a set of n tasks with known runtimes r1, r2, ..., rn to be run on a uniprocessor machine. Which of the following processor scheduling algorithms will result in the maximum throughput?
Round-Robin | |
Shortest-Job-First | |
Highest-Response-Ratio-Next | |
First-Come-First-Served |
Question 131 |
Consider a virtual memory system with FIFO page replacement policy. For an arbitrary page access pattern, increasing the number of page frames in main memory will
always decrease the number of page faults | |
always increase the number of page faults | |
sometimes increase the number of page faults | |
never affect the number of page faults |
Belady's anomaly is the name given to the phenomenon where increasing the number of page frames results in an increase in the number of page faults for certain memory access patterns.
This phenomenon is commonly experienced in following page replacement algorithms:
First in first out (FIFO), Second chance algorithm, Random page replacement algorithm.
Question 132 |
Which of the following does not interrupt a running process?
A device | |
Timer | |
Scheduler process | |
Power failure |
Question 133 |
Consider a machine with 64 MB physical memory and a 32-bit virtual address space. If the page size is 4 KB, what is the approximate size of the page table?
16 MB | |
8 MB | |
2 MB | |
24 MB |
No. of frames = physical memory / page size = 2^26 / 2^12 = 2^14, so a frame number needs 14 bits ≈ 2 bytes per page table entry.
No. of pages = 2^32 / 2^12 = 2^20 entries, hence size of the page table ≈ 2^20 × 2 B = 2 MB.
Question 134 |
Consider Peterson’s algorithm for mutual exclusion between two concurrent processes i and j. The program executed by process i is shown below.
repeat flag[i] = true; turn = j; while (P) do no-op; Enter critical section, perform actions, then exit critical section flag[i] = false; Perform other non-critical section actions. until false;
For the program to guarantee mutual exclusion, the predicate P in the while loop should be.
flag[j]=true and turn=i | |
flag[j]=true and turn=j | |
flag[i]=true and turn=j | |
flag[i]=true and turn=i |
Question 135 |
Two concurrent processes P1 and P2 want to use two resources R1 and R2 in a mutually exclusive manner. Initially, R1 and R2 are free. The programs executed by the two processes are given below.
Program for P1: S1: While (R1 is busy) do no-op; S2: Set R1 ← busy; S3: While (R2 is busy) do no-op; S4: Set R2 ← busy; S5: Use R1 and R2; S6: Set R1 ← free; S7: Set R2 ← free; Program for P2: Q1: While (R2 is busy) do no-op; Q2: Set R2 ← busy; Q3: While (R1 is busy) do no-op; Q4: Set R1 ← busy; Q5: Use R1 and R2; Q6: Set R2 ← free; Q7: Set R1 ← free;
(a) Is mutual exclusion guaranteed for R1 and R2? If not, show a possible interleaving of the statements of P1 and P2 such that mutual exclusion is violated (i.e., both P1 and P2 use R1 or R2 at the same time).
(b) Can deadlock occur in the above program? If yes, show a possible interleaving of the statements of P1 and P2 leading to deadlock.
(c) Exchange the statements Q1 and Q3 and statements Q2 and Q4. Is mutual exclusion guaranteed now? Can deadlock occur?
Theory Explanation is given below. |
Question 136 |
Which of the following need not necessarily be saved on a context switch between processes?
General purpose registers | |
Translation look-aside buffer | |
Program counter | |
All of the above |
Question 137 |
Let m[0] … m[4] be mutexes (binary semaphores) and P[0] … P[4] be processes. Suppose each process P[i] executes the following:
wait (m[i]); wait (m[(i+1) mod 4]); ------ release (m[i]); release (m[(i+1) mod 4]);
This could cause:
Thrashing | |
Deadlock | |
Starvation, but not deadlock | |
None of the above |
P[0] needs m[0] then m[1], P[1] needs m[1] then m[2], P[2] needs m[2] then m[3] and P[3] needs m[3] then m[0]; if each of them grabs its first mutex, a circular wait arises, so this could cause a deadlock.
Question 138 |
A graphics card has on board memory of 1MB. Which of the following modes can the card not support?
1600 × 400 resolution with 256 colours on a 17 inch monitor | |
1600 × 400 resolution with 16 million colours on a 14 inch monitor | |
800 × 400 resolution with 16 million colours on a 17 inch monitor | |
800 × 800 resolution with 256 colours on a 14 inch monitor |
Option A:
1600 × 400 = 640,000 pixels ≈ 640 KB worth of pixels
256 colours ⇒ 8 bits = 1 byte per pixel
⇒ 640 × 1 = 640 KB (fits in the 1 MB we have)
Option B:
1600×400 = 640KB
16 million = 24 bits = 3 bytes
⇒ 640×3 = 1920KB (Not enough)
Option C:
800 × 400 = 320,000 pixels ≈ 320 KB
16 million colours = 3 bytes per pixel ⇒ 320 × 3 = 960 KB (fits in 1 MB)
Option D:
800 × 800 = 640,000 pixels ≈ 640 KB
256 colours = 1 byte per pixel ⇒ 640 × 1 = 640 KB (fits in 1 MB)
Only option B needs more than 1 MB, so the card cannot support it.
Question 139 |
Suppose the time to service a page fault is on the average 10 milliseconds, while a memory access takes 1 microsecond. Then a 99.99% hit ratio results in average memory access time of
1.9999 milliseconds | |
1 millisecond | |
9.999 microseconds | |
1.9999 microseconds |
Average access time = hit ratio × memory access time + miss ratio × page fault service time
= (0.9999 × 1) + [(1 - 0.9999) × 10000]
= (0.9999) + (0.0001 * 10000)
= 0.9999 + 1
= 1.9999 microseconds
Question 140 |
Which of the following is NOT a valid deadlock prevention scheme?
Release all resources before requesting a new resource | |
Number the resources uniquely and never request a lower numbered resource than the last one requested | |
Never request a resource after releasing any resource | |
Request and all required resources be allocated before execution |
Question 141 |
(a) Fill in the boxes below to get a solution for the readers-writers problem, using a single binary semaphore, mutex (initialized to 1) and busy waiting. Write the box numbers (1, 2 and 3), and their contents in your answer book.
int R = 0, W = 0; Reader () { L1: wait (mutex); If (W ==0) { R = R +1; ▭______(1) } else { ▭______(2) goto L1; } …./* do the read */ wait (mutex) R = R – 1; signal (mutex); } Writer () { L2: wait(mutex); If (▭) {______(3) signal (mutex); goto L2; } W=1; signal (mutex); …./*do the write*/ wait(mutex) W = 0; signal (mutex);
(b) Can the above solution lead to starvation of writers?
Theory Explanation is given below. |
(a) Box (1): signal(mutex), releasing the lock after the reader has registered itself with R = R + 1.
Box (2): signal(mutex), releasing the lock before retrying at L1.
Box (3): (R ≥ 1) || (W == 1), i.e. the writer must retry while any reader is active or another writer is writing.
(b) Yes. As long as at least one reader is inside, newly arriving readers keep incrementing R and the writer keeps failing the test at L2, so writers can starve.
Question 142 |
Listed below are some operating system abstractions (in the left column) and the hardware components or mechanism (in the right column) that they are abstractions of. Which of the following matching of pairs is correct?
A. Thread 1. Interrupt B. Virtual address space 2. Memory C. File system 3. CPU D. Signal 4. Disk
(A) – 2 (B) – 4 (C) – 3 (D) - 1 | |
(A) – 1 (B) – 2 (C) – 3 (D) – 4 | |
(A) – 3 (B) – 2 (C) – 4 (D) - 1 | |
(A) – 4 (B) – 1 (C) – 2 (D) – 3 |

⇒ A thread is an abstraction of the CPU (it is the unit the CPU executes).
⇒ A virtual address space is an abstraction of memory.
⇒ A file system is an abstraction of the disk.
⇒ A signal is an abstraction of an interrupt.
Question 143 |
Which of the following disk scheduling strategies is likely to give the best throughput?
Farthest cylinder next | |
Nearest cylinder next | |
First come first served | |
Elevator algorithm |
Nearest cylinder next - SSTF
First come first served - FCFS
Elevator - SCAN
SSTF always serves the nearest cylinder next, which minimises total head movement and therefore gives the best throughput.
Question 144 |
System calls are usually invoked by using
a software interrupt | |
polling | |
an indirect jump | |
a privileged instruction |
Question 145 |
A multi-user, multi-processing operating system cannot be implemented on hardware that does not support
Address translation | |
DMA for disk transfer | |
At least two modes of CPU execution (privileged and non-privileged) | |
Demand paging | |
Both A and C |
Question 146 |
Which of the following is/are advantage of virtual memory?
Faster access to memory on an average. | |
Processes can be given protected address spaces. | |
Linker can assign addresses independent of where the program will be loaded in physical memory. | |
Programs larger than the physical memory size can be run. | |
Both B and D |
A) False: on average, access is not faster; address translation and possible page faults add overhead.
B) True: address translation gives each process its own protected address space, so one process cannot interfere with another.
C) False: assigning addresses independent of the load location is a matter of relocation, not of virtual memory as such.
D) True: programs larger than physical memory can run because pages are brought in on demand.
Hence B and D.
Question 147 |
Which of the following actions is/are typically not performed by the operating system when switching context from process A to process B?
Saving current register values and restoring saved register values for process B.
| |
Changing address translation tables. | |
Swapping out the memory image of process A to the disk. | |
Invalidating the translation look-aside buffer. |
A) Performed: the register state of A must be saved and B's register state restored.
B) Performed: the address translation tables are switched to those of process B.
C) Not performed: swapping the memory image out to disk happens when a process is swapped or suspended, not on every context switch.
D) Performed: if the TLB is not invalidated, process B might use stale translations belonging to A; note that the TLB is invalidated, not saved and reused.
Hence option C is the action typically not performed.
Question 148 |
Consider the following program in a language that has dynamic scoping.
var x: real; procedure show: begin print(x);end; procedure small; var x: real; begin x: = 0.125; show; end; begin x:=0.25 show; small end.
Then the output of the program is:
0.125 0.125 | |
0.25 0.25 | |
0.25 0.125 | |
0.125 0.25 |
Question 149 |
A linker reads four modules whose lengths are 200, 800, 600 and 500 words, respectively. If they are loaded in that order, what are the relocation constants?
0, 200, 500, 600 | |
0, 200, 1000, 1600 | |
200, 500, 600, 800 | |
200, 700, 1300, 2100 |
The 1st module is loaded at address 0 and occupies addresses 0-199, so its relocation constant is 0.
The 2nd module starts loading at 200; since its size is 800, its last address is 999.
The 3rd module starts loading at 1000; since its size is 600, its last address is 1599.
The 4th module starts loading at 1600 and goes up to 2099.
Hence the relocation constants are 0, 200, 1000, 1600.
Question 150 |
Which of the following is an example of a spooled device?
The terminal used to enter the input data for the C program being executed | |
An output device used to print the output of a number of jobs | |
The secondary memory device in a virtual storage system | |
The swapping area on a disk used by the swapper |
For example, if a process attempts to print a document while the printer is busy with another document, the process, instead of waiting for the printer to become available, writes its output to disk; when the printer becomes free, the data on disk is printed.
Spooling thus allows a process to request an operation on a peripheral device without requiring that the device be ready to service the request, which is why a shared printer is the classic spooled device.
Question 151 |
When the result of a computation depends on the speed of the processes involved there is said to be
cycle stealing | |
race condition | |
a time lock | |
a deadlock |
When the result of a computation depends on the relative speeds (and hence the interleaving order) of the processes involved, the situation is called a race condition.
Question 152 |
A counting semaphore was initialized to 10. Then 6P (wait) operations and 4V (signal) operations were completed on this semaphore. The resulting value of the semaphore is
0 | |
8 | |
10 | |
12 |
S = 10
Each P operation decrements the semaphore and each V operation increments it, so after 6 P operations and 4 V operations the final value is
S = 10 - 6 + 4 = 8
Question 153 |
A computer has six tape drives, with n processes competing for them. Each process may need two drives. What is the maximum value of n for the system to be deadlock free?
6 | |
5 | |
4 | |
3 |
In the worst case each process holds one drive and waits for a second; deadlock is impossible as long as at least one process can still get its second drive, i.e. n × (2 - 1) < 6, giving n ≤ 5. So the maximum number of processes for the system to be deadlock free is 5.
Question 154 |
The overlay tree for a program is as shown below:

What will be the size of the partition (in physical memory) required to load (and run) this program?
12 KB | |
14 KB | |
10 KB | |
8 KB |
For an overlay structure, the partition must hold the costliest root-to-leaf path of the overlay tree, i.e. the maximum memory is needed when the code at a leaf runs together with all its ancestors.
Maximum requirement
= MAX(12, 14, 10, 14)
= 14 KB
Question 155 |
Consider n processes sharing the CPU in a round-robin fashion. Assuming that each process switch takes s seconds, what must be the quantum size q such that the overhead resulting from process switching is minimized but at the same time each process is guaranteed to get its turn at the CPU at least every t seconds?
q ≤ (t - ns)/(n - 1) | |
q ≥ (t - ns)/(n - 1) | |
q ≤ (t - ns)/(n + 1) | |
q ≥ (t - ns)/(n + 1) |
In the worst case a process waits while each of the other (n - 1) processes uses a full quantum q, and a context-switch overhead of s is incurred for each of the n switches in one round. To be guaranteed the CPU at least every t seconds:
(n - 1)q + ns ≤ t ⇒ q ≤ (t - ns)/(n - 1)
Question 156 |
If an instruction takes i microseconds and a page fault takes an additional j microseconds, the effective instruction time, if on the average a page fault occurs every k instructions, is:
i + j/k | |
i + j*k | |
(i + j)/k | |
(i + j)*k |
Effective instruction time = instruction time + page fault rate × page fault service time
= i + (1/k) × j
= i + j/k
Question 157 |
Locality of reference implies that the page reference being made by a process
will always be to the page used in the previous page reference | |
is likely to be to one of the pages used in the last few page references | |
will always be to one of the pages existing in memory | |
will always lead to a page fault |
Locality of reference appears in three forms:
(1) Temporal locality: recently used locations are likely to be used again.
(2) Spatial locality: locations near a recently used one are likely to be used soon.
(3) Sequential locality: instructions and data tend to be accessed in order.
In programs, related data are stored in consecutive locations and loops reuse the same locations repeatedly, so the next page reference is likely to be to one of the pages used in the last few references.
Question 158 |
The correct matching for the following pairs is
(A) Disk Scheduling (1) Round robin (B) Batch Processing (2) SCAN (C) Time sharing (3) LIFO (D) Interrupt processing (4) FIFO
A – 3 B – 4 C – 2 D – 1 | |
A – 4 B – 3 C – 2 D – 1 | |
A – 2 B – 4 C – 1 D – 3 | |
A – 3 B – 4 C – 3 D – 2 |
Disk scheduling - SCAN
Batch processing - FIFO
Time sharing - Round Robin
Interrupt processing - LIFO
Question 159 |
I/O redirection
implies changing the name of a file | |
can be employed to use an existing file as input file for a program | |
implies connection 2 programs through a pipe | |
None of the above
|
Question 160 |
Thrashing
reduces page I/O | |
decreases the degree of multiprogramming | |
implies excessive page I/O | |
improve the system performance |
Question 161 |
Dirty bit for a page in a page table
helps avoid unnecessary writes on a paging device | |
helps maintain LRU information | |
allows only read on a page | |
None of the above |
Question 162 |
An operating system contains 3 user processes each requiring 2 units of resource R. The minimum number of units of r such that no deadlocks will ever arise is
3 | |
5 | |
4 | |
6 |
Question 163 |
Each Process Pi, i = 1.......9 is coded as follows
repeat P(mutex) {Critical section} V(mutex) forever
The code for P10 is identical except it uses V(mutex) in place of P(mutex). What is the largest number of processes that can be inside the critical section at any moment?
1 | |
2 | |
3 | |
None of the above |
repeat
{
V(mutex)
C.S.
V(mutex)
}
forever
Say P1 does P(mutex) (mutex becomes 0) and is inside the CS.
P10 now executes V(mutex) (mutex becomes 1) and also enters the CS.
P2 can now do P(mutex) and enter (mutex back to 0).
P10 exits and re-enters, executing V(mutex) again, letting P3 do P(mutex) and enter.
Repeating this, every V(mutex) executed by P10 lets one more of P1...P9 into the CS.
So, by taking P10 in and out of the CS repeatedly, all 10 processes can be inside the critical section at the same time even though only a binary semaphore is used; hence "None of the above".
Question 164 |
The process state transition diagram in below figure is representative of

a batch operating system | |
an operating system with a preemptive scheduler | |
an operating system with a non-preemptive scheduler | |
a uni-programmed operating system |
The diagram includes a transition from running back to ready, which happens only when the scheduler forcibly takes the CPU away from the running process; so this represents an operating system with a preemptive scheduler.
Question 165 |
A critical section is a program segment
which should run in a certain specified amount of time | |
which avoids deadlocks | |
where shared resources are accessed | |
which must be enclosed by a pair of semaphore operations, P and V |
Question 166 |
Which of the following is an example of spooled device?
A line printer used to print the output of a number of jobs. | |
A terminal used to enter input data to a running program. | |
A secondary storage device in a virtual memory system. | |
A graphic display device. |
Question 167 |
The correct matching for the following pairs is
A. Activation record 1. Linking loader B. Location counter 2. Garbage collection C. Reference counts 3. Subroutine call D. Address relocation 4. Assembler
A – 3 B – 4 C – 1 D – 2 | |
A – 4 B – 3 C – 1 D – 2 | |
A – 4 B – 3 C – 2 D – 1 | |
A – 3 B – 4 C – 2 D – 1 |
An activation record holds the local variables, parameters and return information created for a subroutine call.
An assembler uses the location counter to assign an address to each instruction, which is needed for relative addressing as well as for jump labels.
Reference counts are used by a garbage collector to reclaim memory whose reference count becomes 0.
A linking loader loads several compiled modules and links them together into a single executable, so it performs address relocation of the object code.
Question 168 |
A 1000 Kbyte memory is managed using variable partitions but no compaction is done. It currently has two partitions of sizes 200 Kbytes and 260 Kbytes respectively. The smallest allocation request in Kbytes that could be denied is for
151 | |
181 | |
231 | |
541 |

The free memory is 1000 - 200 - 260 = 540 KB; in the worst case the two allocated partitions split it into three holes of 180 KB each, so any request up to 180 KB can always be satisfied, while a request of 181 KB could be denied. So the smallest allocation request which can be denied is 181 KB.
Question 169 |
A solution to the Dining Philosophers Problem which avoids deadlock is
ensure that all philosophers pick up the left fork before the right fork | |
ensure that all philosophers pick up the right fork before the left fork | |
ensure that one particular philosopher picks up the left fork before the right fork, and that all other philosophers pick up the right fork before the left fork | |
None of the above |
If every philosopher picks up the same-side fork first, each can hold one fork and wait for the other, forming a circular wait. To avoid this, at least one philosopher should pick up the forks in the opposite order, so that a circular wait cannot form; hence option C.
Question 170 |
Four jobs to be executed on a single processor system arrive at time 0+ in the order A, B, C, D. their burst CPU time requirements are 4, 1, 8, 1 time units respectively. The completion time of A under round robin scheduling with time slice of one time unit is
10 | |
4 | |
8 | |
9 |

With a time slice of 1 unit the round robin order is A B C D A C A C A ..., so A receives its fourth quantum in the interval 8-9.
∴ Completion time of A is 9 units.
Question 171 |
Which of the following statements is true?
ROM is a Read/Write memory | |
PC points to the last instruction that was executed | |
Stack works on the principle of LIFO | |
All instructions affect the flags |
Question 172 |
In a paged segmented scheme of memory management, the segment table itself must have a page table because
the segment table is often too large to fit in one page | |
each segment is spread over a number of pages | |
segment tables point to page table and not to the physical locations of the segment | |
the processor’s description base register points to a page table | |
Both A and B |
Segment paging is different from paged segmentation.
Question 173 |
Which of the following page replacement algorithms suffers from Belady’s anomaly?
Optimal replacement | |
LRU | |
FIFO | |
Both (A) and (C) |
Question 174 |
Which scheduling policy is most suitable for a time shared operating system?
Shortest Job First | |
Round Robin | |
First Come First Serve | |
Elevator |
Question 175 |
The sequence …………… is an optimal non-preemptive scheduling sequence for the following jobs which leaves the CPU idle for …………… unit(s) of time.

{3,2,1}, 1 | |
{2,1,3}, 0 | |
{3,2,1}, 0 | |
{1,2,3}, 5 |
So, (B) and (C) will be eliminated.
Now in (A) and (D):
For option (A),

So, the CPU is idle for 1 unit (from time 0 to 1), which matches option (A).
For option(D),

We can see there is no idle time at all, but the option states an idle time of 5, which does not match our chart. So option (D) is eliminated.
Therefore, the correct sequence is option (A).
Question 176 |
The address sequence generated by tracing a particular program executing in a pure demand paging system with 100 records per page with 1 free main memory frame is recorded as follows. What is the number of page faults?
0100, 0200, 0430, 0499, 0510, 0530, 0560, 0120, 0220, 0240, 0260, 0320, 0370
13 | |
8 | |
7 | |
10 |
Taking the first two digits of each address as the page number (100 records per page), the reference string is 01, 02, 04, 04, 05, 05, 05, 01, 02, 02, 02, 03, 03.
With a single free frame, a fault occurs every time the referenced page changes. Clearly 7 page faults.
Question 177 |
In a virtual memory system the address space specified by the address lines of the CPU must be __________ than the physical memory size and _______ than the secondary storage size.
smaller, smaller | |
smaller, larger | |
larger, smaller | |
larger, larger |
Question 178 |
A memory page containing a heavily used variable that was initialized very early and is in constant use is removed when
LRU page replacement algorithm is used | |
FIFO page replacement algorithm is used | |
LFU page replacement algorithm is used | |
None of the above |
In LRU, the page removed is the one least recently used; a page in constant use is never the LRU victim.
In LFU, this page has a high reference frequency because it is in constant use, so it cannot be replaced either.
FIFO, however, removes the oldest page regardless of how heavily it is used, so this page can be removed under FIFO.
Question 179 |
Consider the following heap (figure) in which blank regions are not in use and hatched region are in use.

The sequence of requests for blocks of size 300, 25, 125, 50 can be satisfied if we use
either first fit or best fit policy (any one) | |
first fit but not best fit policy | |
best fit but not first fit policy | |
None of the above |
In first fit strategy, a block request is satisfied by the first free block (in address order) that is large enough. So, request for 300 will be satisfied by the 350 size block, reducing the free size to 50.
Request for 25 is satisfied by the 150 size block, reducing it to 125.
Request for 125 satisfied by the 125 size block.
And request for 50 satisfied by 50 size block.
So, all requests can be satisfied.
In best fit strategy, a block request is satisfied by the smallest block in which can fit it. So, request for 300 will be satisfied by 350 size block reducing the free size to 50.
Request for 25, satisfied by 50 size block as its the smallest size that fits 25, reducing it to 25.
Request for 125, satisfied by 150 size block, reducing it to 25.
Now, request for 50 cannot be satisfied as the two 25 size blocks are not contiguous.
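A small C sketch that replays the request sequence under both policies. Since the heap figure is not reproduced here, the initial free holes {150, 350} (in address order) are an assumption read off the walkthrough above.

/* Replay the requests 300, 25, 125, 50 under first fit and best fit. */
#include <stdio.h>

static int serve(int holes[], int n, int req, int best_fit) {
    int chosen = -1;
    for (int i = 0; i < n; i++) {
        if (holes[i] < req) continue;
        if (chosen == -1) { chosen = i; if (!best_fit) break; }
        else if (holes[i] < holes[chosen]) chosen = i;
    }
    if (chosen == -1) return 0;   /* request denied */
    holes[chosen] -= req;         /* leftover stays where it was */
    return 1;
}

static void run(const char *name, int best_fit) {
    int holes[2] = {150, 350};    /* assumed initial free holes, address order */
    int reqs[4]  = {300, 25, 125, 50};
    for (int i = 0; i < 4; i++)
        printf("%s: request %3d -> %s\n", name, reqs[i],
               serve(holes, 2, reqs[i], best_fit) ? "ok" : "denied");
}

int main(void) {
    run("first fit", 0);   /* all four requests satisfied        */
    run("best fit ", 1);   /* the final request for 50 is denied */
    return 0;
}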
Question 180 |
Consider allocation of memory to a new process. Assume that none of the existing holes in the memory will exactly fit the process’s memory requirement. Hence, a new hole of smaller size will be created if allocation is made in any of the existing holes. Which one of the following statements is TRUE?
The hole created by worst fit is always larger than the hole created by first fit. | |
The hole created by best fit is never larger than the hole created by first fit. | |
The hole created by first fit is always larger than the hole created by next fit. | |
The hole created by next fit is never larger than the hole created by best fit. |
Question 181 |
Consider the following statements about process state transitions for a system using preemptive scheduling.
- I. A running process can move to ready state.
II. A ready process can move to running state.
III. A blocked process can move to running state.
IV. A blocked process can move to ready state.
Which of the above statements are TRUE?
II and III only | |
I, II and III only | |
I, II, III and IV | |
I, II and IV only |

Question 182 |
Consider the following set of processes, assumed to have arrived at time 0. Consider the CPU scheduling algorithms Shortest Job First (SJF) and Round Robin (RR). For RR, assume that the processes are scheduled in the order P1,P2,P3,P4.
If the time quantum for RR is 4 ms, then the absolute value of the difference between the average turnaround times (in ms) of SJF and RR (round off to 2 decimal places) is _____.
5.25 |

For SJF: Turn Around Time = (21 – 0) + (13 – 0) + (2 – 0) + (6 – 0), Average = 42/4 = 10.50

For RR (quantum = 4 ms): Turn Around Time (TAT) = (18 – 0) + (21 – 0) + (10 – 0) + (14 – 0), Average = 63/4 = 15.75
Absolute difference = |10.50-15.75| = 5.25
Question 183 |
Each of a set of n processes executes the following code using two semaphores a and b initialized to 1 and 0, respectively. Assume that count is a shared variable initialized to 0 and not used in CODE SECTION P.

wait (a); count = count+1; if (count==n) signal (b); signal (a); wait (b); signal (b);

What does the code achieve?
It ensures that all processes execute CODE SECTION P mutually exclusively. | |
It ensures that at most two processes are in CODE SECTION Q at any time. | |
It ensures that no process executes CODE SECTION Q before every process has finished CODE SECTION P. | |
It ensures that at most n-1 processes are in CODE SECTION P at any time. |
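For illustration, a runnable version of the given fragment with POSIX semaphores and an assumed n = 4 shows the barrier behaviour of option (C): no thread reaches CODE SECTION Q until every thread has finished CODE SECTION P.

/* The given fragment made runnable: a initialised to 1, b to 0, count to 0. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 4
sem_t a, b;
int count = 0;          /* shared, not used in CODE SECTION P */

void *worker(void *arg) {
    int id = *(int *)arg;
    printf("thread %d: CODE SECTION P\n", id);

    sem_wait(&a);
    count = count + 1;
    if (count == N) sem_post(&b);
    sem_post(&a);
    sem_wait(&b);
    sem_post(&b);

    printf("thread %d: CODE SECTION Q\n", id);
    return NULL;
}

int main(void) {
    pthread_t t[N]; int id[N];
    sem_init(&a, 0, 1); sem_init(&b, 0, 0);
    for (int i = 0; i < N; i++) { id[i] = i; pthread_create(&t[i], NULL, worker, &id[i]); }
    for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
    return 0;
}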
Question 184 |
Consider the following five disk access requests of the form (request id, cylinder number) that are present in the disk scheduler queue at a given time.
(P, 155), (Q, 85), (R, 110), (S, 30), (T, 115)
Assume the head is positioned at cylinder 100. The scheduler follows Shortest Seek Time First scheduling to service the requests.
Which one of the following statements is FALSE?
The head reverses its direction of movement between servicing of Q and P. | |
T is serviced before P. | |
R is serviced before P. | |
Q is serviced after S, but before T. |

Question 185 |
Consider a paging system that uses a 1-level page table residing in main memory and a TLB for address translation. Each main memory access takes 100 ns and TLB lookup takes 20 ns. Each page transfer to/from the disk takes 5000 ns. Assume that the TLB hit ratio is 95%, page fault rate is 10%. Assume that for 20% of the total page faults, a dirty page has to be written back to disk before the required page is read in from disk. TLB update time is negligible. The average memory access time in ns (round off to 1 decimal places) is ______.
154.5 ns |
T=20ns
D=5000ns
h=0.95
p=0.1, 1-p=0.9
d=0.2, 1-d=0.8
EMAT = h×(T+M)+(1-h)[(1-p)×2M+p[(1-d)[D+M]+d(2D+M)]+T]
= 0.95×(20+100)+(1-0.95)[(1-0.1)×200+(0.1)[(1-0.2)[5000+100]+(0.2)(10000+100)]+20]
= 154.5 ns
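The same arithmetic in a small C sketch, using the values from the question:

/* Average memory access time with TLB, page faults and dirty write-backs. */
#include <stdio.h>

int main(void) {
    double M = 100, T = 20, D = 5000;     /* memory, TLB, disk times (ns) */
    double h = 0.95, p = 0.10, d = 0.20;  /* TLB hit, page fault, dirty   */

    double miss = (1 - p) * 2 * M
                + p * ((1 - d) * (D + M) + d * (2 * D + M))
                + T;
    double emat = h * (T + M) + (1 - h) * miss;
    printf("EMAT = %.1f ns\n", emat);     /* prints 154.5 ns */
    return 0;
}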
Question 186 |
A certain moving arm disk storage, with one head, has the following specifications.
Number of tracks/recording surface = 200
Disk rotation speed = 2400 rpm
Track storage capacity = 62,500 bits
The average latency of this device is P msec and the data transfer rate is Q bits/sec.
Write the value of P and Q.
P = 12.5, Q = 2.5×106 |
So, in 60 s (one minute), the disk rotates 2400 times.
Average latency is the time for half a rotation
= 0.5×60/2400 s
= 12.5 ms
In one full rotation, entire data in a track can be transferred. Track storage capacity = 62500 bits
So, disk transfer rate
= 62500 × 2400/60
= 2.5 × 106 bps
So,
P = 12.5, Q = 2.5×106
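A short C sketch of the same computation:

/* Average rotational latency and transfer rate from the given specifications. */
#include <stdio.h>

int main(void) {
    double rpm = 2400, bits_per_track = 62500;
    double rotation_s = 60.0 / rpm;                 /* one full rotation      */
    double P_ms  = 0.5 * rotation_s * 1000;         /* half a rotation, in ms */
    double Q_bps = bits_per_track / rotation_s;     /* one track per rotation */
    printf("P = %.1f ms, Q = %.1e bits/s\n", P_ms, Q_bps);  /* 12.5, 2.5e6 */
    return 0;
}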
Question 187 |
Consider a system having m resources of the same type. These resources are shared by 3 processes A, B and C, which have peak demands of 3, 4 and 6 respectively. For what value of m deadlock will not occur?
7 | |
9 | |
13, 15 | |
13 | |
15 |
→ If A holds 2, B holds 3 and C holds 5, then all resources are in use and deadlock can occur, i.e., 2+3+5 = 10.
→ If we have one extra resource then deadlock will not occur, i.e., 10+1 = 11.
→ If we have 11 or more resources then deadlock will never occur, so both 13 and 15 work.
Question 188 |
Assume that the following jobs are to be executed on a single processor system

The jobs are assumed to have arrived at time 0+ and in the order p, q, r, s, t. Calculate the departure time (completion time) for job p if scheduling is round robin with time slice 1.
4 | |
10 | |
11 | |
12 | |
None of the above |

p departs at 11.
Question 189 |
At a particular time of computation the value of a counting semaphore is 7. Then 20 P operations and 15 V operations were completed on this semaphore. The resulting value of the semaphore is:
42 | |
2 | |
7 | |
12 |
Initial value of S is 7.
So after 20P operations and 15V operations the value of semaphore S will be,
S = 7 - 20 + 15 = 2
Question 190 |
A computer system has 6 tape drives, with n processes competing for them. Each process may need 3 tape drives. The maximum value of n for which the system is guaranteed to be deadlock free is:
2 | |
3 | |
4 | |
1 |
Question 191 |
Which of the following is an example of a spooled device?
The terminal used to enter the input data for a program being executed. | |
The secondary memory device in a virtual memory system | |
A line printer used to print the output of a number of jobs. | |
None of the above |
Question 192 |
Which page replacement policy sometimes leads to more page faults when size of memory is increased?
FIFO |
Question 193 |
Match the pairs in the following questions by writing the corresponding letters only.
(A) Buddy system      (P) Run-time type specification
(B) Interpretation    (Q) Segmentation
(C) Pointer type      (R) Memory allocation
(D) Virtual memory    (S) Garbage collection
A-R, B-P, C-S, D-Q |
B-P
C-S
D-Q
Buddy system: The buddy system is a technique to allocate the memory by dividing the memory into partitions to try to satisfy a memory request as suitably as possible.
Interpretation: Runtime.
Pointer type: garbage collection. Memory that is no longer reachable through any pointer is reclaimed by the garbage collector.
Virtual memory: Segmentation.
Question 194 |
Choose the correct alternatives (more than one may be correct) and write the corresponding letters only: The total size of the address space in a virtual memory system is limited by
the length of MAR | |
the available secondary storage | |
the available main memory | |
all of the above | |
none of the above | |
Both A and B |
MAR can hold the address that is generated from CPU and limits the total virtual memory address space.
Question 195 |
Choose the correct alternatives (more than one may be correct) and write the corresponding letters only: Indicate all the false statements from the statements given below:
The amount of virtual memory available is limited by the availability of secondary storage. | |
Any implementation of a critical section requires the use of an indivisible machine-instruction, such as test-and-set. | |
The LRU page replacement policy may cause thrashing for some types of programs. | |
The best fit techniques for memory allocation ensures the memory will never be fragmented. | |
B and D |
(B) is false, because Peterson's solution is a purely software-based solution to the critical section problem that does not need an indivisible machine instruction.
(C) Is true.
(D) False, memory can get fragmented with best fit technique.
Question 196 |
Match the pairs:
(a) Critical region    (p) Hoare's monitor
(b) Wait/Signal        (q) Mutual exclusion
(c) Working Set        (r) Principle of locality
(d) Deadlock           (s) Circular Wait
(a) - (q), (b) - (p), (c) - (r), (d) - (s) |
Question 197 |
State whether the following statements are TRUE or FALSE with reason:
The Link-load-and-go loading scheme required less storage space than the link-and-go loading scheme.
True | |
False |
In the link-and-go scheme the linker stays in main memory along with the program, whereas in the link-load-and-go scheme the linkage editor does not co-exist with the program in main memory while performing the linking task.
Question 198 |
Match the pairs in the following questions:
(A) Virtual memory      (p) Temporal Locality
(B) Shared memory       (q) Spatial Locality
(C) Look-ahead buffer   (r) Address Translation
(D) Look-aside buffer   (s) Mutual Exclusion
(A)-(r), (B)-(s), (C)-(q), (D)-(p) |
Shared Memory - Mutual Exclusion
Look-ahead buffer - Spatial Locality
Look-aside buffer - Temporal Locality
Question 199 |
A critical region is
One which is enclosed by a pair of P and V operations on semaphores. | |
A program segment that has not been proved bug-free. | |
A program segment that often causes unexpected system crashes. | |
A program segment where shared resources are accessed. |
Question 200 |
Consider the execution of the following commands in a shell on a linux operating system.
bash$ cat alpha
Mathematics
bash$ ln alpha beta
bash$ rm alpha
bash$ cat >> beta << SAME
Information Technology
SAME
bash$ cat beta
The output of the last command will be:
Mathematics Information Technology SAME. | |
Mathematics Information Technology. | |
Information Technology. | |
Information Technology SAME. |
Question 201 |
Assume that EA = (X)+ is the effective address equal to the contents of location X, with X incremented by one word length after the effective address is calculated; EA = −(X) is the effective address equal to the contents of location X, with X decremented by one word length before the effective address is calculated; EA = (X)− is the effective address equal to the contents of location X, with X decremented by one word length after the effective address is calculated. The format of the instruction is (opcode, source, destination), which means (destination ← source op destination). Using X as a stack pointer, which of the following instructions can pop the top two elements from the stack, perform the addition operation and push the result back to the stack.
6 and 1, 2, 3, 4 | |
7 and 1, 2, 4, 5 | |
8 and 1, 2, 4, 5 | |
9 and 1, 2, 3, 5 |
(Each page contains 16 bytes, so say for page0, it contains the virtual address 0-15)
0: page fault-1, pages in memory-0
4: page fault-1, pages in memory-0
8: page fault-1, pages in memory-0
20: page faults-2, pages in memory-0,1
24: page faults-2, pages in memory-0,1
36: page faults-3, pages in memory-0,1,2
44: page faults-3, pages in memory-0,1,2
12: page faults-3, pages in memory-1,2,0
68: page faults-4, pages in memory-1,2,0,4
72: page faults-4, pages in memory-1,2,0,4
80: page faults-5, pages in memory-2,0,4,5
84: page faults-5, pages in memory-2,0,4,5
28: page faults-6, pages in memory-0,4,5,1
32: page faults-7, pages in memory-4,5,1,2
88: page faults-7, pages in memory-4,1,2,5
92: page faults-7, pages in memory-4,1,2,5
Question 202 |
The following is a code with two threads, producer and consumer, that can run in parallel. Further, S and Q are binary semaphores equipped with the standard P and V operations.
semaphore S = 1, Q = 0; integer x;
producer: while(true) do P(S); x = produce(); V(Q); done
consumer: while(true) do P(Q); consume(x); V(S); done
Which of the following is TRUE about the program above?
The process can deadlock | |
One of the threads can starve | |
Some of the items produced by the producer may be lost | |
Values generated and stored in ‘x’ by the producer will always be consumed before the producer can generate a new value |
(B) No starvation happens because the producer and the consumer strictly alternate.
(C) No item is lost.
Question 203 |
An operating system implements a policy that requires a process to release all resources before making a request for another resource. Select the TRUE statement from the following:
Both starvation and deadlock can occur | |
Starvation can occur but deadlock cannot occur | |
Starvation cannot occur but deadlock can occur | |
Neither starvation nor deadlock can occur |
Now, the process may not yet have finished using the resources it was forced to release, and the same situation can repeat every time it requests another resource.
So the process can starve, never getting to use its resources properly.
Deadlock will not occur, as this policy is one of the deadlock prevention schemes (it breaks the hold-and-wait condition).
Question 204 |
If the time-slice used in the round-robin scheduling policy is more than the maximum time required to execute any process, then the policy will
degenerate to shortest job first | |
degenerate to priority scheduling | |
degenerate to first come first serve | |
none of the above |
Question 205 |
Match the following flag bits used in the context of virtual memory management on the left side with the different purposes on the right side of the table below.

I-d, II-a, III-b, IV-c | |
I-b, II-c, III-a, IV-d | |
I-c, II-d, III-a, IV-b | |
I-b, II-c, III-d, IV-a |
Dirty bit:
The bit indicates that its associated block of memory has been modified and has not yet been saved to storage. Dirty bits are used by the CPU cache and in the page replacement algorithms of an operating system.
R/W bit:
If the bit is set, the page is read/ write. Otherwise when it is not set, the page is read only.
Reference bit:
Reference bit is used in a version of FIFO called second chance policy, in order to avoid replacement of heavily used page.
Valid bit:
Valid bit is not used for page replacement; it is not part of any replacement policy. It only tells whether the page in memory is valid. If it is valid, it is used directly; if not, a fresh page is loaded. This is page initialization rather than replacement: nothing is knocked out, an empty slot is being filled. So, option (D).
Question 206 |
Processes P1 and P2 use critical_flag in the following routine to achieve mutual exclusion. Assume that critical_flag is initialized to FALSE in the main program.
get_exclusive_access ( ) { if (critical _flag == FALSE) { critical_flag = TRUE ; critical_region () ; critical_flag = FALSE; } }
Consider the following statements. i. It is possible for both P1 and P2 to access critical_region concurrently. ii. This may lead to a deadlock.
Which of the following holds?
(i) is false and (ii) is true | |
Both (i) and (ii) are false | |
(i) is true and (ii) is false | |
Both (i) and (ii) are true |
Question 207 |
Let a memory have four free blocks of sizes 4k, 8k, 20k, 2k. These blocks are allocated following the best-fit strategy. The allocation requests are stored in a queue as shown below.

The time at which the request for J7 will be completed will be
16 | |
19 | |
20 | |
37 |

At t = 8

At t = 10

At t = 11

J7 can be finished at t = 19.
Question 208 |
Consider n jobs J1, J2,......Jn such that job Ji has execution time ti and a non-negative integer weight wi. The weighted mean completion time of the jobs is defined to be (Σ wiTi)/(Σ wi), where Ti is the completion time of job Ji. Assuming that there is only one processor available, in what order must the jobs be executed in order to minimize the weighted mean completion time of the jobs?
Non-decreasing order of ti | |
Non-increasing order of wi | |
Non-increasing order of wi·ti | |
Non-increasing order of wi/ti |
Placing job i before job j is better exactly when wi·tj > wj·ti, i.e., when wi/ti > wj/tj. So the answer is the non-increasing order of wi/ti.
Question 209 |
A demand paging system takes 100 time units to service a page fault and 300 time units to replace a dirty page. Memory access time is 1 time unit. The probability of a page fault is p. In case of a page fault, the probability of page being dirty is also p. It is observed that the average access time is 3 time units. Then the value of p is
0.194 | |
0.233 | |
0.514 | |
0.981 | |
0.0194 |
Page fault service time = 100 (if the page is not dirty)
Page fault service time = 300
(if there is dirty page)
Probability of page fault = P
Probability of page being dirty = P
So, total page fault service time
= P(300) + (1 - P)100
= 300P + 100 - 100P
= 200P + 100
Now given,
Effective average memory access time = 3
So,
3 = m + P × total page fault service time
= 1 + P(200P + 100)
= 1 + 200P^2 + 100P
⇒ 200P^2 + 100P - 2 = 0
⇒ P = 0.0194
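A small C sketch that solves the quadratic above; the exact positive root is about 0.0193, essentially the quoted option 0.0194.

/* Solve 200p^2 + 100p - 2 = 0 for the positive root. */
#include <stdio.h>
#include <math.h>

int main(void) {
    double a = 200, b = 100, c = -2;
    double p = (-b + sqrt(b * b - 4 * a * c)) / (2 * a);
    printf("p = %.4f\n", p);   /* prints 0.0193, close to the quoted 0.0194 */
    return 0;
}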
Question 210 |
The head of a hard disk serves requests following the shortest seek time first (SSTF) policy. The head is initially positioned at track number 180.
Which of the request sets will cause the head to change its direction after servicing every request assuming that the head does not change direction if there is a tie in SSTF and all the requests arrive before the servicing starts?
11, 139, 170, 178, 181, 184, 201, 265 | |
10, 138, 170, 178, 181, 185, 201, 265 | |
10, 139, 169, 178, 181, 184, 201, 265 | |
10, 138, 170, 178, 181, 185, 200, 265 |

Hence, correct option is (B).
Question 211 |
The head of a hard disk serves requests following the shortest seek time first (SSTF) policy. The head is initially positioned at track number 180.
What is the maximum cardinality of the request set, so that the head changes its direction after servicing every request if the total number of tracks are 2048 and the head can start from any track?
9 | |
10 | |
11 | |
12 |
1) The head must alternate direction under the SSTF policy.
2) Maximize the number of requests.
The first condition is satisfied by never having two pending requests at equal distances from the current head position. As shown below, we must not have requests located at the circle-marked positions.
Now, to maximize the number of requests we need them to be located as compactly as possible, which is done by placing each request at the next position after the circle-marked position in the direction in which the head needs to move.
Now to satisfy the 1st criteria:

Seek length sequence for maximum cardinality and alternating head movements:
→ 1, 3, 7, 15, ...
→ or, 2^1 - 1, 2^2 - 1, 2^3 - 1, 2^4 - 1, ...
→ We have 2048 tracks, so the maximum swing (seek length) can be 2047,
→ which corresponds to a seek length of 2^11 - 1 in the 11th service.
Question 212 |
In the working-set strategy, which of the following is done by the operating system to prevent thrashing?
1. It initiates another process if there are enough extra frames.
2. It selects a process to suspend if the sum of the sizes of the working-sets exceeds the total number of available frames.
1 only | |
2 only | |
Neither 1 nor 2 | |
Both 1 and 2 |
Question 213 |
The process state transition diagram of an operating system is as given below.

Which of the following must be FALSE about the above operating system?
It is a multiprogrammed operating system | |
It uses preemptive scheduling | |
It uses non-preemptive scheduling | |
It is a multi-user operating system |
Question 214 |
The arrival time, priority, and duration of the CPU and I/O bursts for each of three processes P1, P2 and P3 are given in the table below. Each process has a CPU burst followed by an I/O burst followed by another CPU burst. Assume that each process has its own I/O resource.

The multi-programmed operating system uses preemptive priority scheduling. What are the finish times of the processes P1, P2 and P3 ?
11, 15, 9 | |
10, 15, 9 | |
11, 16, 10 | |
12, 17, 11 |

Hence, finish times of process.
P1 - 10
P2 - 11
P3 - 9
Question 215 |
Consider the solution to the bounded buffer producer/consumer problem by using general semaphores S, F, and E. The semaphore S is the mutual exclusion semaphore initialized to 1. The semaphore F corresponds to the number of free slots in the buffer and is initialized to N. The semaphore E corresponds to the number of elements in the buffer and is initialized to 0.
Producer Process: Produce an item; Wait(F); Wait(S); Append the items to the buffer; Signal(S); Signal(E);
Consumer Process: Wait(E); Wait(S); Remove an item from the buffer; Signal(S); Signal(F); Consume the item;
Which of the following interchange operations may result in a deadlock? 1. Interchanging Wait (F) and Wait (S) in the Producer process 2. Interchanging Signal (S) and Signal (F) in the Consumer process
1 only | |
2 only | |
Neither 1 nor 2 | |
Both 1 and 2 |
Suppose the buffer is full, so F = 0. Now if Wait(S) in the producer succeeds, the producer will block on Wait(F), which will never succeed because the consumer is blocked on Wait(S). So deadlock can happen.
If Signal(S) and Signal(F) are interchanged in consumer, deadlock won't happen. It will just give priority to a producer compared to the next consumer waiting.
Question 216 |
For each of the four processes P1, P2, P3 and P4. The total size in kilobytes (KB) and the number of segments are given below.

The page size is 1 KB. The size of an entry in the page table is 4 bytes. The size of an entry in the segment table is 8 bytes. The maximum size of a segment is 256 KB. The paging method for memory management uses two-level paging, and its storage overhead is P. The storage overhead for the segmentation method is S. The storage overhead for the segmentation and paging method is T. What is the relation among the overheads for the different methods of memory management in the concurrent execution of the above four processes?
P < S < T | |
S < P < T | |
S < T < P | |
T < S < P |
For P1,
Page size is 1KB. So, no. of pages required for P1=195. An entry in page table is of size 4 bytes and assuming an inner level page table takes the size of a page, we can have upto 256 entries in second level page table and we require only 195 for P1. Thus only 1 second level page table is enough. So, memory overhead = 1KB (for first level) + 1KB for second level = 2KB.
For P2 and P3 also, we get 2KB each and for P4 we get 1+2 = 3KB as it requires 1 first level page table and 2 second level page tables (364 > 256). So, total overhead for their concurrent execution = 2×3+3 = 9KB. Thus P = 9KB.
Case-2 (For segmentation method):
P1 uses 4 segments → 4 entries in segment table = 4×8 = 32Bytes
Similarly, for P2, P3 and P4 we get 5×8, 3×8 and 8×8 Bytes respectively and the total overhead will be
32+40+24+64 = 160 Bytes
So, S = 160 Bytes
Case-3 (For segmentation with paging):
Here, we segment first and then page. So, we need the page table size. We are given maximum size of a segment is 256KB and page size is 1KB and thus we require 256 entries in the page table. So, total size of page table = 256 × 4 = 1024 Bytes (exactly one page size).
So, now for P1 we require 1 segment table of size 32 Bytes plus 1 page table of size 1KB.
Similarly,
P2 - 40 Bytes and 1 KB
P3 - 24 Bytes and 1 KB
P4 - 64 Bytes and 1KB
Thus, total overhead = 160 Bytes + 4KB = 4256 Bytes
So, T = 4256 Bytes
So, answer should be S < T < P.
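The bookkeeping can be checked with a short C sketch. The segment counts and the sizes of P1 and P4 come from the explanation above; the exact sizes assumed for P2 and P3 (254 KB and 45 KB) are taken from the usual statement of this question, and only the fact that each needs a single second-level page table matters for the result.

/* Compute the three storage overheads P (two-level paging), S (segmentation)
   and T (segmentation with paging) described above. */
#include <stdio.h>

int main(void) {
    int size_kb[]  = {195, 254, 45, 364};   /* process sizes in KB (P2, P3 assumed) */
    int segments[] = {4, 5, 3, 8};          /* segments per process                 */
    int page_b = 1024, pte = 4, ste = 8;    /* page 1 KB, entry sizes in bytes      */
    int entries_per_pt = page_b / pte;      /* 256 entries per page-sized table     */

    int P = 0, S = 0, T = 0;
    for (int i = 0; i < 4; i++) {
        int pages = size_kb[i];                                    /* 1 KB pages    */
        int inner = (pages + entries_per_pt - 1) / entries_per_pt; /* 2nd level PTs */
        P += (1 + inner) * page_b;           /* outer table + inner tables          */
        S += segments[i] * ste;              /* one segment-table entry per segment */
        T += segments[i] * ste + page_b;     /* segment table + one page table      */
    }
    printf("P = %d KB, S = %d bytes, T = %d bytes\n", P / 1024, S, T);
    /* prints P = 9 KB, S = 160 bytes, T = 4256 bytes */
    return 0;
}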
Question 217 |
The wait and signal operations of a monitor are implemented using semaphores as follows. In the following,
x is a condition variable,
mutex is a semaphore initialized to 1,
x_sem is a semaphore initialized to 0,
x_count is the number of processes waiting on semaphore x_sem, initially 0, next is a semaphore initialized to 0,
next_count is the number of processes waiting on semaphore next, initially 0.
The body of each procedure that is visible outside the monitor is replaced with the following:
P(mutex); ⋮ body of procedure ⋮ if (next_count > 0) V(next); else V(mutex);
Each occurrence of x.wait is replaced with the following:
x_count = x_count + 1; if (next_count > 0) V(next) else V(mutex); ------------------------------------------------------------ E1; x_count = x_count - 1;
Each occurrence of x.signal is replaced with the following:
if (x_count > 0) { next_count = next_count + 1; ------------------- E2; P(next), next_count = next_count - 1; }
For correct implementation of the monitor, statements E1 and E2 are, respectively,
P(x_sem), V(next) | |
V(next), P(x_sem) | |
P(next), V(x_sem) | |
P(x_sem), V(x_sem) |
x_count is incremented and decremented in x_wait, which shows that in between them wait(x_sem) must happen which is P(x_sem). Correspondingly V(x_sem) must happen in x_signal. So, D choice.
Question 218 |
A student wishes to create symbolic links in a computer system running Unix. Three text files named "file 1", "file 2" and "file 3" exist in her current working directory, and the student has read and write permissions for all three files. Assume that file 1 contains information about her hobbies, file 2 contains information about her friends and file 3 contains information about her courses. The student executes the following sequence of commands from her current working directory
ln -s file 1 file 2 ln -s file 2 file 3
Which of the following types of information would be lost from her file system?
(I) Hobbies (II) Friends (III) Courses
(I) and (II) only | |
(II) and (III) only | |
(II) only | |
(I) and (III) only |
Question 219 |
The shell command
find -name passwd -print
is executed in /etc directory of a computer system running Unix. Which of the following shell commands will give the same information as the above command when executed in the same directory?
ls passwd | |
cat passwd | |
grep name passwd | |
grep print passwd |
Question 220 |
A user level process in Unix traps the signal sent on a Ctrl-C input, and has a signal handling routine that saves appropriate files before terminating the process. When a Ctrl-C input is given to this process, what is the mode in which the signal handling routine executes?
kernel mode | |
superuser mode | |
privileged mode | |
user mode |
Question 221 |
Two concurrent processes P1 and P2 use four shared resources R1, R2, R3 and R4, as shown below.
P1: P2: Compute: Compute; Use R1; Use R1; Use R2; Use R2; Use R3; Use R3; Use R4; Use R4;
Both processes are started at the same time, and each resource can be accessed by only one process at a time The following scheduling constraints exist between the access of resources by the processes:
P2 must complete use of R1 before P1 gets access to R1.
P1 must complete use of R2 before P2 gets access to R2.
P2 must complete use of R3 before P1 gets access to R3.
P1 must complete use of R4 before P2 gets access to R4.
There are no other scheduling constraints between the processes. If only binary semaphores are used to enforce the above scheduling constraints, what is the minimum number of binary semaphores needed?
1 | |
2 | |
3 | |
4 |

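For illustration, one arrangement that meets all four constraints with exactly two binary semaphores (the names s12 and s21 are ours, not from the question) is sketched below in C with POSIX semaphores.

/* s21 is signalled when P2 has finished its current resource, s12 when P1 has. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t s12, s21;                               /* both initialised to 0 */

void *p1(void *arg) {
    (void)arg;
    printf("P1: compute\n");
    sem_wait(&s21); printf("P1: use R1\n");   /* after P2 finished R1 */
    printf("P1: use R2\n"); sem_post(&s12);   /* let P2 use R2        */
    sem_wait(&s21); printf("P1: use R3\n");   /* after P2 finished R3 */
    printf("P1: use R4\n"); sem_post(&s12);   /* let P2 use R4        */
    return NULL;
}

void *p2(void *arg) {
    (void)arg;
    printf("P2: compute\n");
    printf("P2: use R1\n"); sem_post(&s21);   /* let P1 use R1        */
    sem_wait(&s12); printf("P2: use R2\n");   /* after P1 finished R2 */
    printf("P2: use R3\n"); sem_post(&s21);   /* let P1 use R3        */
    sem_wait(&s12); printf("P2: use R4\n");   /* after P1 finished R4 */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&s12, 0, 0); sem_init(&s21, 0, 0);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_join(t1, NULL); pthread_join(t2, NULL);
    return 0;
}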
Question 222 |
Which of the following input sequences will always generate a 1 at the output Z at the end of the third cycle?

000 101 111 | |
101 110 111 | |
011 101 111 | |
001 110 111 | |
None of these |

While filling done in reverse order, all operations are not satisfied.
Question 223 |
We wish to schedule three processes P1, P2 and P3 on a uniprocessor system. The priorities, CPU time requirements and arrival times of the processes are as shown below.
Process   Priority       CPU time required   Arrival time (hh:mm:ss)
P1        10 (highest)   20 sec              00:00:05
P2        9              10 sec              00:00:03
P3        8 (lowest)     15 sec              00:00:00
We have a choice of preemptive or non-preemptive scheduling. In preemptive scheduling, a late-arriving higher priority process can preempt a currently running process with lower priority. In non-preemptive scheduling, a late-arriving higher priority process must wait for the currently executing process to complete before it can be scheduled on the processor. What are the turnaround times (time from arrival till completion) of P2 using preemptive and non-preemptive scheduling respectively.
30 sec, 30 sec | |
30 sec, 10 sec | |
42 sec, 42 sec | |
30 sec, 42 sec |

TAT of P2 = Completion time of P2 - Arrival time of P2
= 33 - 3
= 30 sec
For non-preemptive,

TAT of P2 = Completion time of P2 - Arrival time of P2
= 45 - 3
= 42 sec
Question 224 |
Consider a 2-way set associative cache memory with 4 sets and total 8 cache blocks (0-7) and a main memory with 128 blocks (0-127). What memory blocks will be present in the cache after the following sequence of memory block references if LRU policy is used for cache block replacement. Assuming that initially the cache did not have any memory block from the current job?
0 5 3 9 7 0 16 55
0 3 5 7 16 55 | |
0 3 5 7 9 16 55 | |
0 5 7 9 16 55 | |
3 5 7 9 16 55 |
So,

Since, each set has only 2 places, 3 will be thrown out as its the least recently used block. So final content of cache will be
0 5 7 9 16 55
Hence, answer is (C).
Question 225 |
Two shared resources R1 and R2 are used by processes P1 and P2. Each process has a certain priority for accessing each resource. Let Tij denote the priority of Pi for accessing Rj. A process Pi can snatch a resource Rk from process Pj if Tik is greater than Tjk. Given the following:
(I) T11 > T21 (II) T12 > T22 (III) T11 < T21 (IV) T12 < T22
Which of the following conditions ensures that P1 and P2 can never deadlock?
(I) and (IV) | |
(II) and (III) | |
(I) and (II) | |
None of the above |
Question 226 |
A disk has 8 equidistant tracks. The diameters of the innermost and outermost tracks are 1 cm and 8 cm respectively. The innermost track has a storage capacity of 10 MB.
What is the total amount of data that can be stored on the disk if it is used with a drive that rotates it with (i) Constant Linear Velocity (ii) Constant Angular Velocity?
(i) 80 MB (ii) 2040 MB | |
(i) 2040 MB (ii) 80 MB | |
(i) 80 MB (ii) 360 MB | |
(i) 80 MB (ii) 360 MB |
For constant linear velocity:
Diameter of inner track = d = 1 cm
Circumference of inner track
= 2 * 3.14 * d/2
= 3.14 cm
Storage capacity = 10 MB (given)
Circumference of all equidistant tracks
= 2 * 3.14 * (0.5+1+1.5+2+2.5+3+3.5+4)
= 113.04 cm
Here, 3.14 cm holds 10 MB
Therefore, 1 cm holds 3.18 MB.
So, 113.04 cm holds
113.04 * 3.18 ≈ 360 MB
So, total amount of data that can be hold on the disk = 360 MB.
For constant angular velocity:
In case of CAV, the disk rotates at a constant angular speed. Same rotation time is taken by all the tracks.
Total amount of data that can be stored on the disk
= 8 * 10 = 80 MB
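A short C sketch of both computations:

/* CLV: density is fixed, so a track holds data in proportion to its
   circumference (i.e. its diameter). CAV: every track holds the same
   amount as the innermost one. */
#include <stdio.h>

int main(void) {
    double innermost_mb = 10.0;
    double clv = 0.0, cav = 0.0;
    for (int d = 1; d <= 8; d++) {      /* 8 equidistant tracks, diameters 1..8 cm */
        clv += innermost_mb * d;        /* scales with diameter */
        cav += innermost_mb;            /* same as innermost    */
    }
    printf("CLV = %.0f MB, CAV = %.0f MB\n", clv, cav);  /* 360 MB, 80 MB */
    return 0;
}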
Question 227 |
A disk has 8 equidistant tracks. The diameters of the innermost and outermost tracks are 1 cm and 8 cm respectively. The innermost track has a storage capacity of 10 MB.
If the disk has 20 sectors per track and is currently at the end of the 5th sector of the inner-most track and the head can move at a speed of 10 meters/sec and it is rotating at constant angular velocity of 6000 RPM, how much time will it take to read 1 MB contiguous data starting from the sector 4 of the outer-most track?
13.5 ms | |
10 ms | |
9.5 ms | |
20 ms |
So, the head has to seek from the innermost track (radius 0.5 cm) to the outermost track (radius 4 cm), i.e., (4 - 0.5) = 3.5 cm.
For 10m ------- 1s
1m ------- 1/10 s
100cm ------- 1/(10×100) s
3.5cm ------- 3.5/1000 s = 3.5ms
So, the head will take 3.5 ms.
Now, the angular velocity is constant and the head is currently at the end of the 5th sector. To reach the start of the 4th sector it must pass over 18 sectors.
6000 rotation in 60000ms.
1 rotation in 10ms (time to traverse 20 sectors).
So, to traverse 18 sectors, it takes 9ms.
In 10ms, 10MB data is read.
So, 1MB data can be read in 1ms.
∴ Total time = 1+9+3.5 = 13.5ms
Question 228 |
What is the bit rate of a video terminal unit with 80 characters/line, 8 bits/character and horizontal sweep time of 100 µs (including 20 µs of retrace time)?
8 Mbps | |
6.4 Mbps | |
0.8 Mbps | |
0.64 Mbps |
Total number of bits transmitted = 80 * 8 = 640 bits
Bit rate = 640 bits per 100 µs = (640 × 10^6)/100 bits/s = 6.4 Mbps
Question 229 |
Which one of the following is NOT shared by the threads of the same process?
Stack | |
Address Space | |
File Descriptor Table | |
Message Queue |
Question 230 |
Which of the following commands or sequences of commands will rename a file x to file y in a unix system?
I. mv y,x II. mv x,y III. cp y, x(rm x) IV. cp x, y (rm x)
II and III | |
II and IV | |
I and III | |
II only |
Question 231 |
Consider a pipeline processor with 4 stages S1 to S4. We want to execute the following loop:
for (i=1; i<=1000; i++)
{I1,I2,I3,I4}
where the time taken (in ns) by instructions I1 to I4 for stages S1 to S4 are given below:
     S1  S2  S3  S4
I1:  1   2   1   2
I2:  2   1   2   1
I3:  1   1   2   1
I4:  2   1   2   1
The output of I1 for i=2 will be available after
11 ns | |
12 ns | |
13 ns | |
28 ns |

So, total time would be 13 ns.
Question 232 |
Consider a fully associative cache with 8 cache blocks (numbered 0-7) and the following sequence of memory block requests:
4, 3, 25, 8, 19, 6, 25, 8, 16, 35, 45, 22, 8, 3, 16, 25, 7
If LRU replacement policy is used, which cache block will have memory block 7?
4 | |
5 | |
6 | |
7 |
4, 3, 25, 8, 19, 6, 16, 35
LRU order (first element being the least recently used)
[4, 3, 19, 6, 25, 8, 16, 35]
So, 45 replaces 4.
45, 3, 25, 8, 19, 6, 16, 35 [3, 19, 6, 25, 8, 16, 35, 45]
Similarly, 22 replaces 3 to give,
45, 22, 25, 8, 19, 6, 16, 35 [19, 6, 25, 8, 16, 35, 45, 22]
8 hits in cache.
45, 22, 25, 8, 19, 6, 16, 35 [19, 6, 25, 16, 35, 45, 22, 8]
3 replaces 19,
45, 22, 25, 8, 3, 6, 16, 35 [6, 25, 16, 35, 45, 22, 8, 3]
16 and 25 hits in cache,
45, 22, 25, 8, 3, 6, 16, 35 [6, 35, 45, 22, 8, 3, 16, 25]
Finally, 7 replaces 6, which is in block 5.
Question 233 |
A CPU has only three instructions I1, I2 and I3, which use the following signals in time steps T1-T5:
I1 : T1 : Ain, Bout, Cin  T2 : PCout, Bin  T3 : Zout, Ain  T4 : Bin, Cout  T5 : End
I2 : T1 : Cin, Bout, Din  T2 : Aout, Bin  T3 : Zout, Ain  T4 : Bin, Cout  T5 : End
I3 : T1 : Din, Aout  T2 : Ain, Bout  T3 : Zout, Ain  T4 : Dout, Ain  T5 : End
Which of the following logic functions will generate the hardwired control for the signal Ain ?
T1.I1 + T2.I3 + T4.I3 + T3 | |
(T1 + T2 + T3).I3 + T1.I1 | |
(T1 + T2 ).I1 + (T2 + T4).I3 + T3 | |
(T1 + T2 ).I2 + (T1 + T3).I1 + T3 |
So, Ain is 1 in T3 of I1, I2 and I3, in T1 of I1, and in T2 and T4 of I3. So, the answer will be
T1.I1 + T2.I3 + T4.I3 + T3.I1 + T3.I2 + T3.I3
Since, CPU is having only 3 instructions, T3.I1 + T3.I2 + T3.I3 can be replaced by T3.
So, T1.I1 + T2.I3 + T4.I3 + T3
Question 234 |
In an enhancement of a design of a CPU, the speed of a floating point unit has been increased by 20% and the speed of a fixed point unit has been increased by 10%. What is the overall speedup achieved if the ratio of the number of floating point operations to the number of fixed point operations is 2:3 and the floating point operation used to take twice the time taken by the fixed point operation in the original design?
1.155 | |
1.185 | |
1.255 | |
1.285 |
Let x be the time for a fixed point operation, so a floating point operation takes 2x.
Original average time per operation = (3x + 2×2x)/5 = 7x/5
New average time per operation = (3x/1.1 + 4x/1.2)/5 = 8x/(1.32×5)
So, speedup = 7×1.32/8 = 1.155
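The same computation as a small C sketch, taking a fixed point operation as 1 time unit:

/* Weighted average operation time before and after the enhancement. */
#include <stdio.h>

int main(void) {
    double old_time = 3 * 1.0 + 2 * 2.0;              /* 3 fixed + 2 floating per 5 ops */
    double new_time = 3 * (1.0 / 1.1) + 2 * (2.0 / 1.2);
    printf("speedup = %.3f\n", old_time / new_time);  /* prints 1.155 */
    return 0;
}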
Question 235 |
The storage area of a disk has innermost diameter of 10 cm and outermost diameter of 20 cm. The maximum storage density of the disk is 1400 bits/cm. The disk rotates at a speed of 4200 RPM. The main memory of a computer has 64-bit word length and 1µs cycle time. If cycle stealing is used for data transfer from the disk, the percentage of memory cycles stolen for transferring one word is
0.5% | |
1% | |
5% | |
10% |
Let x µs be the time to transfer one 64-bit word from the disk and y µs the memory cycle time.
The percentage of memory cycles stolen is
(y/x) × 100
Maximum storage density is given, so consider innermost track to get the capacity
= 2 × 3.14 × 5 × 1400 bits
= 3.14 × 14000 bits
Rotational latency = 60/4200 s = 1/70 s
Therefore, to read 64 bits, time required
(10^6 × 64)/(70 × 3.14 × 14000) µs ≈ 20.8 µs
As memory cycle time given is 1μs,
Therefore, % CPU cycle stolen = (1/20.8) × 100 ≈ 5%
Question 236 |
A disk has 200 tracks (numbered 0 through 199). At a given time, it was servicing the request of reading data from track 120, and at the previous request, service was for track 90. The pending requests (in order of their arrival) are for track numbers
30 70 115 130 110 80 20 25
How many times will the head change its direction for the disk scheduling policies SSTF (Shortest Seek Time First) and FCFS (First Come First Serve)?
2 and 3 | |
3 and 3 | |
3 and 4 | |
4 and 4 |
SSTF: (90) 120 115 110 130 80 70 30 25 20
Direction changes at 120, 110, 130.
FCFS: (90) 120 30 70 115 130 110 80 20 25
Direction changes at 120, 30, 130, 20.
Question 237 |
In a certain operating system, deadlock prevention is attempted using the following scheme. Each process is assigned a unique timestamp, and is restarted with the same timestamp if killed. Let Ph be the process holding a resource R, Pr be a process requesting for the same resource R, and T(Ph) and T(Pr) be their timestamps respectively. The decision to wait or preempt one of the processes is based on the following algorithm.
if T(Pr) < T(Ph) then kill Pr else wait
Which one of the following is TRUE?
The scheme is deadlock-free, but not starvation-free | |
The scheme is not deadlock-free, but starvation-free | |
The scheme is neither deadlock-free nor starvation-free | |
The scheme is both deadlock-free and starvation-free |
So, starvation is possible, but deadlock is not possible.
Question 238 |
A process executes the following segment of code:
for (i = 1; i <= n; i++) fork();
The number of new processes created is
n | |
n(n+1)/2 | |
2^n - 1 | |
3^n - 1 |
2^n - 1.
Question 239 |
The semaphore variables full, empty and mutex are initialized to 0, n and 1, respectively. Process P1 repeatedly adds one item at a time to a buffer of size n, and process P2 repeatedly removes one item at a time from the same buffer using the programs given below. In the programs, K, L, M and N are unspecified statements.
P1: while (1) { K; P(mutex); Add an item to the buffer; V(mutex); L; } P2: while (1) { M; P(mutex); Remove an item from the buffer; V(mutex); N; }
The statements K, L, M and N are respectively
P(full), V(empty), P(full), V(empty) | |
P(full), V(empty), P(empty), V(full) | |
P(empty), V(full), P(empty), V(full) | |
P(empty), V(full), P(full), V(empty) |
In process P1, wait on semaphore 'empty' signifies that if there is no space in buffer then P2 cannot produce more items. Signal on semaphore 'full' is to signify that one item has been added to the buffer.
In process P2, wait on semaphore 'full' signifies that if the buffer is empty then consumer cannot consume any item. Signal on semaphore 'empty' increments a space in the buffer after consumption of an item.
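For illustration, here is a runnable C sketch with K, L, M and N filled in as in option (D), using POSIX semaphores. The buffer size and the finite loop counts are assumptions added only so the sketch terminates.

/* Bounded buffer: full = 0, empty = N_SLOTS, mutex = 1, as in the question. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N_SLOTS 4
#define ITEMS   8

int buf[N_SLOTS], in = 0, out = 0;
sem_t full, empty, mutex;

void *producer(void *arg) {
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&empty);                 /* K: P(empty) */
        sem_wait(&mutex);
        buf[in] = i; in = (in + 1) % N_SLOTS;
        sem_post(&mutex);
        sem_post(&full);                  /* L: V(full)  */
    }
    return NULL;
}

void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full);                  /* M: P(full)  */
        sem_wait(&mutex);
        int v = buf[out]; out = (out + 1) % N_SLOTS;
        sem_post(&mutex);
        sem_post(&empty);                 /* N: V(empty) */
        printf("consumed %d\n", v);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&full, 0, 0); sem_init(&empty, 0, N_SLOTS); sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL); pthread_join(c, NULL);
    return 0;
}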
Question 240 |
In a virtual memory system, size of virtual address is 32-bit, size of physical address is 30-bit, page size is 4 Kbyte and size of each page table entry is 32-bit. The main memory is byte addressable. Which one of the following is the maximum number of bits that can be used for storing protection and other information in each page table entry?
2 | |
10 | |
12 | |
14 |
No. of frames = Physical memory size/Page size
= (2^30)/(2^12)
= 2^18
The frame number needs 18 bits, so 18 + k = 32 (PT entry size = 32 bits)
k = 14 bits
Question 241 |
Consider the following schedule S of transactions T1 and T2:
T1 T2 Read(A) A = A - 10 Read (A) Temp = 0.2*A Write(A) Read(B) Write(A) Read(B) B = B + 10 Write(B) B = B + Temp Write(B)
Which of the following is TRUE about the schedule S?
S is serializable only as T1, T2 | |
S is serializable only as T2, T1 | |
S is serializable both as T1, T2 and T2, T1 | |
S is serializable either as T1 or as T2 | |
None of these |
Question 242 |
Page size has no impact on internal fragmentation. | |
Multilevel paging is necessary to support pages of different sizes. | |
Paging incurs memory overheads. | |
Paging helps solve the issue of external fragmentation. |
- False, Large page size may lead to higher internal fragmentation.
- False, To support pages of different sizes, the Instruction set architecture should support it. Multi-level paging is not necessary.
- True, The page table has to be stored in main memory, which is an overhead.
- True, Paging avoids the external fragmentation.
Question 243 |
12 |
Question 244 |
sleep | |
strlen | |
malloc | |
exit |
- A sleep system call takes a time value as a parameter, specifying the minimum amount of time that the process is to sleep before resuming execution. The parameter typically specifies seconds, although some operating systems provide finer resolution, such as milliseconds or microseconds.
- strlen() is not a system call; it is a C library function.
- The function call malloc() is a library function call that further uses the brk() or sbrk() system call for memory allocation.
- exit() is also a system call.
Question 245 |
- int counter = 0
- Semaphore S = init(5);
- void parop(void)
- {
- wait(S);
- wait(S);
- counter++;
- signal(S);
- signal(S);
- }
The value of counter is 1 after all the threads successfully complete the execution of parop. | |
The value of counter is 5 after all the threads successfully complete the execution of parop. | |
The value of counter is 0 after all the threads successfully complete the execution of parop. | |
There is a deadlock involving all the threads. |
count =0
S=5
func()
{
wait();
wait();
count++;
signal();
signal();
}
Answer:
(1) Deadlock is possible.
For a deadlock free operation
No. of resources >= No. of process (Req-1) + 1
Here, no. of resources = 5 (semaphore value)
Each thread requires = 2 resources (wait call)
No. of threads = 5
For deadlock freedom we need 5 ≥ 5×(2-1)+1 = 6, which does not hold. So deadlock is possible.
This occurs when all 5 threads get blocked on first wait().
(2) count =5 is possible
When all threads enter CS and execute count++ sequentially.
(3) Count=1 is possible.
Assembly level:
read R0, count
inc R0, 1
write count, R0
Thread-1 reads count = 0 into R0, then is preempted.

Thread-2 reads count = 0 into R1, completes, count = 1.
Thread-3, 4 & 5 completes.
Thread-1 is given CPU
inc R0, 1, so R0 = 1
Write R0 to count.
So, Count=1.
Question 246 |
Deletion of an existing file from foo | |
Opening of an existing file in foo | |
Creation of a new file in foo | |
Renaming of an existing file in foo |
Question 247 |
- The fastest computer gets the toughest job and the slowest computer gets the easiest job.
- Every computer gets at least one job.
65 |
Let the levels be 1,2,3,4,5,6. (1 is the least difficult, 6 is the most difficult level)
Let the computers be F,M,S( fast, medium, slow).
As per the given constraint, the toughest level (6) must be given to F and the easiest level (1) must be given to S.
Now we are left with levels 2, 3, 4, 5, with F assigned level 6, S assigned level 1, and M assigned none.
Another constraint is that, every computer must be assigned atleast one.
So compute with assigning one job to M, two jobs to M, three jobs to M and four jobs to M.
Assigning one job to M: we can assign 1 out of 4 jobs in (4C1 ways) and remaining 3 jobs to F,S in 2*2*2 = 8 ways. (each job has two options, either F or S),
Assigning two jobs to M: we can select two jobs from 4 in 4C2 ways and remaining 2 can be distributed to F and S in 2*2 ways ( each job has two options either F or S)
Assigning three jobs to M: we can select 3 out of 4 in 4C3 ways and the remaining job can be given to F or S in 2 ways.
Assigning 4 jobs to M: it can be done in only one way.
Total : 4c1*8 + 4C2* 4 + 4C3*2 + 1
= 32+24+8+1
=65
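The count can be verified by brute force; the following C sketch enumerates all assignments of the six jobs (0 = easiest, 5 = toughest) to the three computers (0 = fastest, 2 = slowest) subject to the given constraints and prints 65.

/* Brute-force count of valid job-to-computer assignments. */
#include <stdio.h>

int main(void) {
    int count = 0;
    /* the four middle levels can each go to any of the three computers */
    for (int a1 = 0; a1 < 3; a1++)
    for (int a2 = 0; a2 < 3; a2++)
    for (int a3 = 0; a3 < 3; a3++)
    for (int a4 = 0; a4 < 3; a4++) {
        int assign[6] = {2, a1, a2, a3, a4, 0};  /* easiest -> slowest, toughest -> fastest */
        int used[3] = {0, 0, 0};
        for (int j = 0; j < 6; j++) used[assign[j]] = 1;
        if (used[0] && used[1] && used[2]) count++;  /* every computer gets at least one job */
    }
    printf("%d\n", count);   /* prints 65 */
    return 0;
}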
Question 248 |
function OWNRESOURCE(Resource R)
Acquire lock L // a global lock
if R is available then
Acquire R
Release lock L
else
if R is owned by another process P then
Terminate P, after releasing all resources owned by P
Acquire R
Restart P
Release lock L
end if
end if
end function
Which of the following choice(s) about the above scheme is/are correct?
The scheme ensures that deadlocks will not occur. | |
The scheme may lead to live-lock. | |
The scheme violates the mutual exclusive property. | |
The scheme may lead to starvation. |
(1) True,
The scheme ensures deadlock free operation, as there is no hold-and-wait condition possible.
(2) True,
The scheme may lead to priority inversion problems, and hence livelock is possible.
(3) False,
Mutual exclusion is satisfied as only one process can acquire and release locks at a time.
(4) True,
The scheme may lead to starvation. For example, a higher priority process can get scheduled repeatedly and keep on killing lower priority processes. Hence, a low priority process may starve.
Question 249 |
int x=0; // global
Lock L1; // global
main() {
create a thread to execute foo(); // Thread T1
create a thread to execute foo(); // Thread T2
wait for the two threads to finish execution;
print (x); }
foo() {
int y=0;
Acquire L1;
x = x + 1;
y = y + 1;
Release L1;
print (y);}
Which of the following statement(s) is/are correct?
At least one of P1 and P2 will print the value of x as 4. | |
Both T1 and T2, in both the processes, will print the value of y as 1. | |
Both P1 and P2 will print the value of x as 2. | |
At least one of the threads will print the value of y as 2. |
- True,
Execution order : P1->T1->T2; P2->T1->T2; P1-print(x),P2-print(x) | output is 4,4
- True,
y=y+1 can be treated as a critical section, and it is well synchronized by Acquire L1 and release L1.
- False, need not be true always.
- False,
Threads maintain their own stacks, and local variables (y) are stored on the stack.
Question 250 |

The page size is 4KB (1KB = 210bytes) and page table entry size at every level is 8 bytes. A process P is currently using 2GB (1GB = 230bytes) virtual memory which is mapped to 2GB of physical memory. The minimum amount of memory required for the page table of P across all levels is ________ KB.
8.01 |
A system can support upto 2^39 bytes of Virtual address space. Three level paging is used with Level 1 (9 bits), Level 2 (9 bits) and Level 3 (9 bits) and page offset of 12 bits.
Page table entry is 8 bytes at every level. Therefore,
Level 3 page table size : 2^9 * 8bytes = 2^12 bytes. (Fits in one frame)
Level 2 page table size : 2^9 * 8bytes = 2^12 bytes. (Fits in one frame)
Level 1 page table size : 2^9 * 8bytes = 2^12 bytes. (Fits in one frame).
However, the process P has a virtual address space of 2^31 bytes. Thus, it is clear that the last level page table does not occupy the frame completely. Let x be the number of bits indexed in the last level.
Therefore, we can have (2^x) * (2^9) * (2^9)* (2^12) = 2^31 => x =1. Thus, the minimum size of page table for process P would be,
Page table size at level-1 + Page table size at level-2 + Page table size at level-3 =
2^1 * 8bytes + 2^9 * 8bytes + 2^9 * 8bytes = (1026)*8 bytes = 8.01 KB
Question 251 |
Implementing preemptive scheduling needed hardware support. | |
Turnaround time includes waiting time | |
Round-robin policy can be used even when the CPU time required by each of the processes is not known apriori. | |
The goal is to only maximize CPU utilization and minimize throughput. |
- True. Preemptive scheduling needs hardware support such as a timer
- True. Turnaround time = Burst time + Waiting time
- True. RR gives the same time quantum to each process, so the burst sizes need not be known a priori.
- False. Maximize CPU utilization and throughput, and minimize waiting time etc.
Question 252 |
Unlike a regular file, named pipe is a special file | |
The data in a pipe is transient, unlike the content of a regular file | |
Pipes forbid random accessing, while regular files do allow this. | |
All of the above |
→ A FIFO special file (a named pipe) is similar to a pipe, except that it is accessed as part of the filesystem. It can be opened by multiple processes for reading or writing. When processes are exchanging data via the FIFO, the kernel passes all data internally without writing it to the filesystem. Thus, the FIFO special file has no contents on the filesystem; the filesystem entry merely serves as a reference point so that processes can access the pipe using a name in the filesystem.
→ The kernel maintains exactly one pipe object for each FIFO special file that is opened by at least one process. The FIFO must be opened on both ends (reading and writing) before data can be passed. Normally, opening the FIFO blocks until the other end is opened also.
Question 253 |

style="font-weight: 400;">Increasing the number of buffers is likely to do which of the following?
I. Increase the rate at which requests are satisfied (throughput)
II. Decrease the likelihood of deadlock
III. Increase the ease of achieving a correct implementation
III Only | |
II Only | |
I Only | |
II and III Only |
Question 254 |
booting up the priority of the process in multi-level of queue without feedback | |
keeping track of the following a page has been in memory for the purpose of LRU replacement | |
letting job reside in memory for a certain amount of time so that the number of pages required can be estimated accurately | |
gradually increasing the priority of jobs that wait in the system for a long time to remedy infinite blocking | |
In Operating systems, aging is a scheduling technique used to avoid starvation. In this, the priority of the jobs that have a longer waiting time is increased as compared to the newer processes, to avoid the starvation of older processes
|
Question 255 |
Round Robin | |
Shortest job first | |
Highest response ratio next | |
first come first served |
Shortest job first scheduling is a scheduling policy that selects the waiting process with the smallest execution time to execute next.
Thus, in shortest job first scheduling, shortest jobs are executed first. This means CPU utilization is maximum. So, maximum number of tasks are completed.
Question 256 |
Which of the following is not a feasible schedule without violating any job schedule?
J2, J4, J1, J3 | |
J4, J1, J2, J3. | |
J4, J2, J1, J3. | |
J4, J2, J3, J1 |
→ From the deadlines, we can deduce that jobs J2 and J4 must complete by time 2, whereas the remaining two must complete by time 4.
→ So J2 and J4 (in either order) must finish first, followed by J1 and J3 (in either order).
→ From the given options, options A, C and D are feasible because J1 and J3 are scheduled only after both J2 and J4 have completed.
→ But in option B the order of completing jobs is J4, J1, J2, J3, where J2 runs after J1; this is not possible, so it is not a feasible schedule.
Question 257 |
FIFO | |
Shortest job first | |
Shortest remaining time | |
Longest remaining time |
Question 258 |
Partitioning | |
Multi-tasking | |
Windowing | |
Paging |
Question 259 |
Part of Main Memory only used for swapping | |
A technique to allow a program, of size more than the size of main memory, to run | |
Part of secondary storage used in program execution | |
None of these |