Operating-Systems

Question 1

Consider three concurrent processes P1, P2 and P3 as shown below, which access a shared variable D that has been initialized to 100.

The processes are executed on a uniprocessor system running a time-shared operating system. If the minimum and maximum possible values of D after the three processes have completed execution are X and Y respectively, then the value of Y–X is _______.

A
10
B
40
C
60
D
80
       Operating-Systems       Process-Synchronization       GATE 2019
Question 1 Explanation: 

For the minimum:
P2 reads D=100 and is preempted.
P1 executes D=D+20, so D=120.
P3 executes D=D+10, so D=130.
P2 resumes with its stale copy D=100 and executes D = D-50 = 100-50 = 50.
P2 writes D=50 as the final value. This is the minimum, X = 50.
For the maximum:
P1 reads D=100. P2 then reads D=100, computes D=D-50 and writes D=50, but this write is lost:
P1 writes D = 100+20 = 120, then P3 reads 120 and executes D=D+10, giving the maximum value D=130, so Y = 130.
So, Y - X = 130 - 50 = 80.
Question 2

The following C program is executed on a Unix/Linux system:

            #include <unistd.h>
            int main ()
            {
                  int i ;
                  for (i=0; i<10; i++)
                         if (i%2 == 0) fork ( ) ;
                  return 0 ;
            }

The total number of child processes created is _____.

A
26
B
33
C
31
D
28
       Operating-Systems       System-Calls       GATE 2019
Question 2 Explanation: 
fork() is executed when 'i' is even, i.e., for i = 0, 2, 4, 6, 8. Thus the above code is equivalent to

            #include <unistd.h>
            int main ()
            {
                  fork () ;
                  fork () ;
                  fork () ;
                  fork () ;
                  fork () ;
                  return 0 ;
            }

n fork() calls create 2^n - 1 child processes, since every process alive executes each subsequent fork().
Hence, n = 5 ⇒ 2^5 - 1 = 31 child processes.
Question 3

Assume that in a certain computer, the virtual addresses are 64 bits long and the physical addresses are 48 bits long. The memory is word addressable. The page size is 8 kB and the word size is 4 bytes. The Translation Look-aside Buffer (TLB) in the address translation path has 128 valid entries. At most how many distinct virtual addresses can be translated without any TLB miss?

A
8×2^20
B
4×2^20
C
16×2^10
D
256×2^10
       Operating-Systems       Memory-Management       GATE 2019
Question 3 Explanation: 
The TLB has 128 = 2^7 valid entries, so it can translate addresses belonging to 2^7 pages without a miss.
Each page is 8 kB and the word size is 4 bytes, so each page holds 2^13 / 2^2 = 2^11 words.
Since memory is word addressable, the number of distinct virtual addresses that can be translated without a TLB miss
= 2^7 × 2^11 = 2^18 = 256 × 2^10.
Question 4

Consider the following four processes with arrival times (in milliseconds) and their length of CPU bursts (in milliseconds) as shown below:

These processes are run on a single processor using preemptive Shortest Remaining Time First scheduling algorithm. If the average waiting time of the processes is 1 millisecond, then the value of Z is _____.

A
2
B
7
C
1
D
4
       Operating-Systems       Process-Scheduling       GATE 2019
Question 4 Explanation: 
This is the Gantt chart till time = 4 units.

At t = 4, P4 arrives with burst Z, and P3 is waiting in the queue with burst 3.
P1 and P2 have already executed, with P1 having waited 1 unit and P2 0 units.
For the average waiting time to be 1 millisecond:
Avg = (0 + 1 + x)/4 = 1
⇒ x = 3, i.e., the remaining processes must wait 3 units in total.
This happens if Z = 2: P4 runs from t = 4 to t = 6 without waiting, and P3 then waits a total of 3 units before running.
Hence Avg = 1, and Z = 2.
Question 5

The index node (inode) of a Unix-like file system has 12 direct, one single-indirect and one double-indirect pointers. The disk block size is 4 kB, and the disk block address is 32-bits long. The maximum possible file size is (rounded off to 1 decimal place) ______ GB.

A
7.0
B
9.0
C
2.0
D
4.0
       Operating-Systems       File system-I/O-protection       GATE 2019
Question 5 Explanation: 
No. of disk block addresses per block = 4 kB / 4 B (a 32-bit address is 4 bytes) = 1 K
Max. file size
= (12 + 1K + 1K × 1K) × 4 kB
= (12 + 2^10 + 2^20) × 2^12 bytes
≈ 4.0 GB
Question 6

Consider the following snapshot of a system running n concurrent processes. Process i is holding Xi instances of a resource R, 1 ≤ i ≤ n. Assume that all instances of R are currently in use. Further, for all i, process i can place a request for at most Yi additional instances of R while holding the Xi instances it already has. Of the n processes, there are exactly two processes p and q such that Yp = Yq = 0. Which one of the following conditions guarantees that no other process apart from p and q can complete execution?

A
Min (Xp, Xq) ≥ Min {Yk | 1 ≤ k ≤ n, k ≠ p, k ≠ q}
B
Xp + Xq < Max {Yk | 1 ≤ k ≤ n, k ≠ p, k ≠ q}
C
Min (Xp, Xq) ≤ Max {Yk | 1 ≤ k ≤ n, k ≠ p, k ≠ q}
D
Xp + Xq < Min {Yk | 1 ≤ k ≤ n, k ≠ p, k ≠ q}
       Operating-Systems       Deadlock       GATE 2019
Question 6 Explanation: 
{P1, P2, ..., Pn}
Pi holds Xi instances.
Pi can request additional Yi instances.
Given two process p & q such that their additional requests are zero.
Yp = Yq = 0
{Yk | 1 ≤ k ≤ n, k ≠ p, k ≠ q} means that out of 'n' processes, we are left with (n-2) process (except p&q), i.e., Yk indicates additional request of all the processes (n-2) except p & q.
There are exactly two processes, p and q, that need no additional instances of resources, so p and q can run to completion and release their Xp and Xq instances.
To guarantee that no other process can complete, the Xp + Xq instances they release must be fewer than the minimum additional demand of every remaining process, i.e.,
Xp + Xq < Min {Yk | 1 ≤ k ≤ n, k ≠ p, k ≠ q}
Option D is correct.
Question 7

Consider a process executing on an operating system that uses demand paging. The average time for a memory access in the system is M units if the corresponding memory page is available in memory and D units if the memory access causes a page fault. It has been experimentally measured that the average time taken for a memory access in the process is X units.

Which one of the following is the correct expression for the page fault rate experienced by the process?

A
(D-M)/(X-M)
B
(X-M)/(D-M)
C
(D-X)/(D-M)
D
(X-M)/(D-X)
       Operating-Systems       Virtual Memory       GATE 2018
Question 7 Explanation: 
Let ‘P’ be the page fault rate. Then the average memory access time is
X = (1 - P)M + P·D
X = M - P·M + P·D
(X - M) = P(D - M)
⇒ P = (X - M) / (D - M)
Question 8

Consider a system with 3 processes that share 4 instances of the same resource type. Each process can request a maximum of K instances. Resource instances can be requested and released only one at a time. The largest value of K that will always avoid deadlock is _________.

A
2
B
3
C
4
D
5
       Operating-Systems       Deadlock       GATE 2018
Question 8 Explanation: 
No. of process = 3
No. of resources = 4
Let’s give each process one less than its maximum demand, i.e., (k – 1) resources.
So, for three processes, 3(k – 1) resources are held in the worst case.
To avoid deadlock, one additional instance must still be available for some process.
∴ 3(k – 1) + 1 ≤ 4
3k – 2 ≤ 4
k ≤ 2
So the largest value of k that will always avoid deadlock is 2.
Question 9

Consider the following solution to the producer-consumer synchronization problem. The shared buffer size is N. Three semaphores empty, full and mutex are defined with respective initial values of 0, N and 1. Semaphore empty denotes the number of available slots in the buffer, for the consumer to read from. Semaphore full denotes the number of available slots in the buffer, for the producer to write to. The placeholder variables, denoted by P, Q, R and S, in the code below can be assigned either empty or full. The valid semaphore operations are: wait() and signal().

Which one of the following assignments to P, Q, R and S will yield the correct solution?

A
P: full, Q: full, R: empty, S: empty
B
P: empty, Q: empty, R: full, S: full
C
P: full, Q: empty, R: empty, S: full
D
P: empty, Q: full, R: full, S: empty
       Operating-Systems       Process-Synchronization       GATE 2018
Question 9 Explanation: 
P = full, Q = empty, R = empty, S = full
Initial values: mutex = 1, empty = 0, full = N.
Note the unusual naming in this question: full counts the slots the producer may still write to, and empty counts the slots holding data for the consumer. So the producer must execute wait(full) ... signal(empty), and the consumer wait(empty) ... signal(full), which is option C.
Question 10

Consider a storage disk with 4 platters (numbered as 0, 1, 2 and 3), 200 cylinders (numbered as 0, 1, … , 199), and 256 sectors per track (numbered as 0, 1, … 255). The following 6 disk requests of the form [sector number, cylinder number, platter number] are received by the disk controller at the same time:

    [120, 72, 2], [180, 134, 1], [60, 20, 0], [212, 86, 3], [56, 116, 2], [118, 16, 1]

Currently head is positioned at sector number 100 of cylinder 80, and is moving towards higher cylinder numbers. The average power dissipation in moving the head over 100 cylinders is 20 milliwatts and for reversing the direction of the head movement once is 15 milliwatts. Power dissipation associated with rotational latency and switching of head between different platters is negligible.

The total power consumption in milliwatts to satisfy all of the above disk requests using the Shortest Seek Time First disk scheduling algorithm is ______ .

A
85
B
86
C
87
D
88
       Operating-Systems       Disk-Scheduling       GATE 2018
Question 10 Explanation: 
→ A storage disk with 4 platters (0, 1, 2, 3), 200 cylinders (0, 1, ..., 199) and 256 sectors per track; the requests are given in the form 〈sector no., cylinder no., platter no.〉. Only the cylinder number matters here, since power for rotational latency and platter switching is stated to be negligible.
In the SSTF (Shortest Seek Time First) disk scheduling algorithm, the pending request with the shortest seek time from the current head position is serviced first; the seek distance of every pending request is computed and the nearest one is scheduled next. The head starts at cylinder 80, moving towards higher cylinder numbers.

Head Movement in SSTF = (86 – 80) + (86 – 72) + (134 – 72) + (134 – 16) = 200
P1: Power dissipated by 200 cylinder movements = (20/100) × 200 = 40 mW
No. of times the head reverses its direction = 3
P2: Power dissipated in reversing the head direction = 3 × 15 = 45 mW
Total power consumption = P1 + P2 = 40 + 45 = 85 mW
Question 11

In a system, there are three types of resources: E, F and G. Four processes P0, P1, P2 and P3 execute concurrently. At the outset, the processes have declared their maximum resource requirements using a matrix named Max as given below. For example, Max[P2, F] is the maximum number of instances of F that P2 would require. The number of instances of the resources allocated to the various processes at any given state is given by a matrix named Allocation.

Consider a state of the system with the Allocation matrix as shown below, and in which 3 instances of E and 3 instances of F are the only resources available.

From the perspective of deadlock avoidance, which one of the following is true?

A
The system is in safe state.
B
The system is not in safe state, but would be safe if one more instance of E were available.
C
The system is not in safe state, but would be safe if one more instance of F were available.
D
The system is not in safe state, but would be safe if one more instance of G were available.
       Operating-Systems       Deadlock       GATE 2018
Question 11 Explanation: 
〈E, F, G〉 = 〈3, 3, 0〉

Safe sequence:
〈P0, P2, P1, P3〉
P0: P0 can be allotted 〈3,3,0〉.
After completion Available = 〈4,3,1〉
P2: P2 can be allotted 〈0,3,0〉.
After completion: Available = 〈5,3,4〉
P1: P1 can be allotted 〈1,0,2〉.
After completion: Available = 〈6,4,6〉
P3: P3 can be allotted 〈3,4,1〉.
After completion: Available = 〈8,4,6〉
Question 12

Threads of a process share

A
global variables but not heap.
B
heap but not global variables.
C
neither global variables nor heap.
D
both heap and global variables.
       Operating-Systems       Threads       GATE 2017 [Set-1]
Question 12 Explanation: 
Threads of a process share the code, global variables, heap and other process resources such as open files; each thread has only its own registers, stack and program counter.
Question 13

Consider the following CPU processes with arrival times (in milliseconds) and length of CPU bursts (in milliseconds) as given below:

If the pre-emptive shortest remaining time first scheduling algorithm is used to schedule the processes, then the average waiting time across all processes is __________ milliseconds.

A
3
B
4
C
5
D
6
       Operating-Systems       Process-Scheduling       GATE 2017 [Set-1]
Question 13 Explanation: 
Question 14

A multithreaded program P executes with x number of threads and used y number of locks for ensuring mutual exclusion while operating on shared memory locations. All locks in the program are non-reentrant, i.e., if a thread holds a lock l, then it cannot re-acquire lock l without releasing it. If a thread is unable to acquire a lock, it blocks until the lock becomes available. The minimum value of x and the minimum value of y together for which execution of P can result in a deadlock are:

A
x = 1, y = 2
B
x = 2, y = 1
C
x = 2, y = 2
D
x = 1, y = 1
       Operating-Systems       Process-Synchronization       GATE 2017 [Set-1]
Question 14 Explanation: 
A reentrant (recursive) mutex is a lock that may be acquired multiple times by the same thread without causing a deadlock. The locks here are non-reentrant: a thread that tries to re-acquire a lock it already holds blocks until the lock is released, which can never happen because the holder is the blocked thread itself.
So x = 1 and y = 1 are sufficient: a single thread acquires lock l and then requests l again, blocking forever while waiting for itself, a self-deadlock. Option D.
Question 15

Recall that Belady’s anomaly is that the page-fault rate may increase as the number of allocated frames increases. Now, consider the following statements:

    S1: Random page replacement algorithm (where a page chosen at random is replaced) suffers from Belady’s anomaly
    S2: LRU page replacement algorithm suffers from Belady’s anomaly

Which of the following is CORRECT?

A
S1 is true, S2 is true
B
S1 is true, S2 is false
C
S1 is false, S2 is true
D
S1 is false, S2 is false
       Operating-Systems       Page-Replacement-Algorithm       GATE 2017 [Set-1]
Question 15 Explanation: 
A page replacement algorithm can suffer from Belady's anomaly only if it is not a stack algorithm (even then it does not suffer on every reference string, only on some).
S1: Random page replacement is not a stack algorithm, so it can suffer from Belady's anomaly. S1 is true.
S2: LRU is a stack algorithm, so it does not suffer from Belady's anomaly. S2 is false.
Question 16

Which of the following is/are shared by all the threads in a process?

    I. Program counter
    II. Stack
    III. Address space
    IV. Registers
A
I and II only
B
III only
C
IV only
D
III and IV only
       Operating-Systems       Threads       GATE 2017 [Set-2]
Question 16 Explanation: 
First of all, you need to know about process and threads.
A process, in the simplest terms, is an executing program.
One or more threads run in the context of the process.
A thread is the basic unit to which the operating system allocates processor time.
A thread can execute any part of the process code, including parts currently being executed by another thread.
Each thread has its own stack, registers and program counter.
The address space (code, data and heap) is shared by all the threads of a process.
Question 17

In a file allocation system, which of the following allocation scheme(s) can be used if no external fragmentation is allowed?

    I. Contiguous
    II. Linked
    III. Indexed
A
I and III only
B
II only
C
III only
D
II and III only
       Operating-Systems       File-Allocation-Methods       GATE 2017 [Set-2]
Question 17 Explanation: 
→ Contiguous Allocation:
Advantage:
i) Both sequential and direct access is possible.
ii) Extremely fast.
Disadvantage:
i) Suffers from both internal and external fragmentation.
ii) Increasing file size is difficult, because it depends on the availability of contiguous memory at a particular instance.
→ Linked list allocation:
Advantage:
i) Flexible in terms of size.
ii) Does not suffer from external fragmentation.
Disadvantage:
i) Allocation is slower.
ii) Does not support random or direct access.
iii) Pointers require some extra overhead.
→ Indexed allocation:
Advantage:
i) Support direct access, so provides fast access to the file blocks.
ii) No external fragmentation.
Disadvantage:
i) The pointer overhead for indexed allocation is greater than linked allocation.
ii) Inefficient in terms of memory utilization.
Question 18

A system shares 9 tape drives. The current allocation and maximum requirement of tape drives for three processes are shown below:

Which of the following best describe current state of the system?

A
Safe, Deadlocked
B
Safe, Not Deadlocked
C
Not Safe, Deadlocked
D
Not Safe, Not Deadlocked
       Operating-Systems       deadlock       GATE 2017 [Set-2]
Question 18 Explanation: 

Available: (9 - (3 + 1 + 3)) = 2, P3 can be satisfied.
New available = 3 + 2 = 5
Now, P2 can be satisfied.
New available: 5 + 1 = 6
Now, P1 can be satisfied. Thus safe sequence: P3 → P2 → P1
Hence the state is safe and the system is not deadlocked.
Question 19

Consider the set of processes with arrival time (in milliseconds), CPU burst time (in milliseconds), and priority (0 is the highest priority) shown below. None of the processes have I/O burst time.

The average waiting time (in milliseconds) of all the processes using preemptive priority scheduling algorithm is __________.

A
29
B
30
C
31
D
32
       Operating-Systems       Process-Scheduling       GATE 2017 [Set-2]
Question 19 Explanation: 

Waiting Time = 0 + (33 - 5) + (40 - 2) + (49 - 12) + (51 - 9) = 145
Average waiting time: 145/5 = 29
Question 20

Consider an arbitrary set of CPU-bound processes with unequal CPU burst lengths submitted at the same time to a computer system. Which one of the following process scheduling algorithms would minimize the average waiting time in the ready queue?

A
Shortest remaining time first
B
Round-robin with time quantum less than the shortest CPU burst
C
Uniform random
D
Highest priority first with priority proportional to CPU burst length
       Operating-Systems       Process-Scheduling       GATE 2016 [Set-1]
Question 20 Explanation: 
Waiting time is the time for which a process is ready to run but is not executing on the CPU.
Among CPU scheduling algorithms, shortest job first (SJF) is optimal: it gives minimum average waiting time, minimum turnaround time and high throughput. Shortest remaining time first (SRTF) is the preemptive version of SJF.
SRTF can lead to starvation if short processes keep arriving, because a long-running process would be preempted indefinitely; but here all processes arrive at the same time, so starvation cannot occur and SRTF behaves exactly like SJF.
So, option A, shortest remaining time first, minimizes the average waiting time in the ready queue.
Question 21

Consider a computer system with 40-bit virtual addressing and page size of sixteen kilobytes. If the computer system has a one-level page table per process and each page table entry requires 48 bits, then the size of the per-process page table is __________ megabytes.

A
384 MB
B
385 MB
C
386 MB
D
387 MB
       Operating-Systems       Paging       GATE 2016 [Set-1]
Question 21 Explanation: 
Virtual address space = 2^40 bytes and page size = 16 KB = 2^14 bytes.
No. of pages = virtual address space / page size = 2^40 / 2^14 = 2^26
Size of page table = no. of pages × entry size = 2^26 × 48/8 bytes = 2^26 × 6 bytes = 384 MB
Question 22

Consider a disk queue with requests for I/O to blocks on cylinders 47, 38, 121, 191, 87, 11, 92, 10. The C-LOOK scheduling algorithm is used. The head is initially at cylinder number 63, moving towards larger cylinder numbers on its servicing pass. The cylinders are numbered from 0 to 199. The total head movement (in number of cylinders) incurred while servicing these requests is ___________.

A
346
B
347
C
348
D
349
       Operating-Systems       Disk-Scheduling       GATE 2016 [Set-1]
Question 22 Explanation: 
From the question, we got the following information.
I/O to blocks on cylinders 47, 38, 121, 191, 87, 11, 92, 10.
C-LOOK scheduling algorithm is used.
Head is initially at cylinder number 63.
First start from 63
63 → 87 → 92 → 121 → 191 = 24 + 5 + 29 + 70 movements = 128
191 → 10 movements = 181
10 → 11 → 38 → 47 = 1 + 27 + 9 movements = 37
Total = 128 + 181 + 37 = 346
Question 23

Consider a computer system with ten physical page frames. The system is provided with an access sequence (a1,a2,…,a20,a1,a2,…,a20), where each ai is a distinct virtual page number. The difference in the number of page faults between the last-in-first-out page replacement policy and the optimal page replacement policy is ___________.

A
1
B
2
C
3
D
4
       Operating-Systems       Page-Replacement-Algorithm       GATE 2016 [Set-1]
Question 23 Explanation: 
We have to calculate the difference between the last-in-first-out page replacement policy and the optimal page replacement policy.
First consider LIFO (Last-In-First-Out):
a1 to a10 cause 10 page faults. Then a11 replaces a10 (the last page brought in), a12 replaces a11, and so on until a20 replaces a19: 10 more faults. The frames now hold a1, ..., a9 and a20.
In the second pass, a1 to a9 are already present: 0 faults. Then a10 replaces a20, a11 replaces a10, and so on up to a20: 11 faults.
Total LIFO faults = 10 + 10 + 11 = 31.
Now the optimal policy:
a1 to a10 cause 10 faults. a11 replaces a10, since of the resident pages a10 is used farthest in the future; a12 replaces a11; and so on until a20 replaces a19: 10 faults. The frames again hold a1, ..., a9 and a20.
In the second pass, a1 to a9 hit: 0 faults. a10 replaces a1 (which is never used again), and similarly a11 to a19 each fault: 10 faults. a20 is still resident, so it causes no fault.
Total optimal faults = 10 + 10 + 10 = 30.
So the difference is LIFO − Optimal = 31 − 30 = 1.
Question 24

Consider the following proposed solution for the critical section problem. There are n processes: P0...P(n-1). In the code, function pmax returns an integer not smaller than any of its arguments. For all i, t[i] is initialized to zero.

Code for Pi:

do {

      c[i]=1; t[i] = pmax(t[0],...,t[n-1])+1; c[i]=0;
      for every j ≠ i in {0,...,n-1} {
           while (c[j]);
           while (t[j] != 0 && t[j]<=t[i]);
      }
      Critical Section;
      t[i]=0;
      Remainder Section;
} while (true);

Which one of the following is TRUE about the above solution?

A
At most one process can be in the critical section at any time
B
The bounded wait condition is satisfied
C
The progress condition is satisfied
D
It cannot cause a deadlock
       Operating-Systems       Process-Synchronization       GATE 2016 [Set-1]
Question 24 Explanation: 
We check the four options one by one.
Deadlock is possible: the timestamp assignment via pmax() is not mutually exclusive, so two processes can obtain equal timestamps, t[j] == t[i]. Each then spins forever in while (t[j] != 0 && t[j] <= t[i]); waiting for the other, so deadlock can occur and the progress condition is not satisfied (options C and D are false).
For the same reason a process may wait indefinitely, so bounded waiting is not satisfied either (option B is false).
Mutual exclusion is satisfied. Every process j that finished assigning its timestamp before i did must have t[j] < t[i], since pmax() returns an integer not smaller than any of its arguments. Process i waits at while (t[j] != 0 && t[j] <= t[i]); as long as any such j has a nonzero timestamp, i.e., has not yet left its critical section. When j leaves its critical section it sets t[j] = 0, and only then can i pass the check for j.
So when process i enters its critical section, no process with a smaller or equal nonzero timestamp is inside, which ensures at most one process executes its critical section at a time.
So, A is the answer.
Question 25

In which one of the following page replacement algorithms it is possible for the page fault rate to increase even when the number of allocated frames increases?

A
LRU (Least Recently Used)
B
OPT (Optimal Page Replacement)
C
MRU (Most Recently Used)
D
FIFO (First In First Out)
       Operating-Systems       Page-Replacement-Algorithm       GATE 2016 [Set-2]
Question 25 Explanation: 
To answer the question you have to know about Belady’s anomaly.
Belady’s anomaly is the phenomenon in which increasing the number of page frames results in an increase in the number of page faults for certain memory access patterns.
FIFO page replacement exhibits it: for some reference strings, FIFO incurs more page faults when the number of page frames is increased. LRU and OPT are stack algorithms and do not suffer from the anomaly.
Question 26

Consider the following processes, with the arrival time and the length of the CPU burst given in milliseconds. The scheduling algorithm used is preemptive shortest remaining-time first.

The average turn around time of these processes is _________ milliseconds.

A
8.25
B
8.26
C
8.27
D
8.28
       Operating-Systems       Process-Scheduling       GATE 2016 [Set-2]
Question 26 Explanation: 
Here the scheduling algorithm used is preemptive shortest remaining-time first.
To answer the question we need to design the gantt chart:

In this algorithm, the processes will be scheduled on the CPU which will be having least remaining burst time.
Turnaround Time (TAT) = Completion Time (CT) - Arrival Time (AT)
TAT for P1 = 20 - 0 = 20,
TAT for P2 = 10 - 3 = 7,
TAT for P3 = 8 - 7 = 1,
TAT for P4 = 13 - 8 = 5.
Total TAT = 20 + 7 + 1 + 5 = 33; Average TAT = 33/4 = 8.25 milliseconds.
Question 27

Consider the following two-process synchronization solution.

Process 0                                     Process 1
---------                                     ---------
Entry: loop while (turn == 1);                Entry: loop while (turn == 0);
  (critical section)                            (critical section)
Exit: turn = 1;                               Exit: turn = 0;

The shared variable turn is initialized to zero. Which one of the following is TRUE?

A
This is a correct two-process synchronization solution.
B
This solution violates mutual exclusion requirement.
C
This solution violates progress requirement.
D
This solution violates bounded wait requirement.
       Operating-Systems       Process-Synchronization       GATE 2016 [Set-2]
Question 27 Explanation: 
B) Mutual exclusion is satisfied because the value of turn variable cannot be 0 and 1 at the same time.
So False.
C) Progress means if one process does not want to enter the critical section then it should not stop other process to enter the critical section.
But we can see that if process 0 will not enter the critical section then value of turn will not become 1 and process 1 will not be able to enter critical section.
So progress not satisfied. True.
D) Bounded waiting is satisfied, since the strict alternation bounds how long each process can be bypassed.
So, False.
Question 28

Consider a non-negative counting semaphore S. The operation P(S) decrements S, and V(S) increments S. During an execution, 20 P(S) operations and 12 V(S) operations are issued in some order. The largest initial value of S for which at least one P(S) operation will remain blocked is _________.

A
7
B
8
C
9
D
10
       Operating-Systems       Process-Synchronization       GATE 2016 [Set-2]
Question 28 Explanation: 
Let X be the largest initial value of S for which at least one P(S) operation remains blocked.
A negative value of a counting semaphore indicates the number of processes in the suspended (blocked) list.
Whatever the order of the 20 P(S) and 12 V(S) operations, the final value of S is X − 20 + 12. For at least one P(S) to remain blocked, this must be −1 (or less):
X − 20 + 12 = −1
⇒ X = 7
Question 29

The following two functions P1 and P2 that share a variable B with an initial value of 2 execute concurrently.

P1() 
{ 
   C = B – 1; 
   B = 2*C;  
}

P2()
{
   D = 2 * B;
   B = D - 1; 
}

The number of distinct values that B can possibly take after the execution is

A
3
B
4
C
5
D
6
       Operating-Systems       Process-Synchronization       GATE 2015 [Set-1]
Question 29 Explanation: 
If P2 executes after P1 completes: P1 gives C = 1, B = 2; then P2 gives D = 4, B = 3.
If P1 executes after P2 completes: P2 gives D = 4, B = 3; then P1 gives C = 2, B = 4.
With preemption, one update can be lost: if P1 reads B = 2 (so C = 1), P2 then runs completely (B = 3), and P1 finally writes B = 2C = 2. The symmetric preemption from P2 to P1 again yields B = 3.
So B can end as 2, 3 or 4: the total number of distinct values that B can possibly take after the execution is 3.
Question 30

Consider a uniprocessor system executing three tasks T1, T2 and T3, each of which is composed of an infinite sequence of jobs (or instances) which arrive periodically at intervals of 3, 7 and 20 milliseconds, respectively. The priority of each task is the inverse of its period and the available tasks are scheduled in order of priority, with the highest priority task scheduled first. Each instance of T1, T2 and T3 requires an execution time of 1, 2 and 4 milliseconds, respectively. Given that all tasks initially arrive at the beginning of the 1st millisecond and task preemptions are allowed, the first instance of T3 completes its execution at the end of ______________ milliseconds.

A
13
B
12
C
15
D
16
       Operating-Systems       Process-Scheduling       GATE 2015 [Set-1]
Question 30 Explanation: 
All the tasks are available at the beginning of the 1st millisecond, i.e., at t = 0. Since priority is the inverse of the period, T1 > T2 > T3. The Gantt chart will be as follows:

∴ At the end of 12 milliseconds, 1st instance of T3 will complete its execution.
Question 31

Suppose the following disk request sequence (track numbers) for a disk with 100 tracks is given: 45, 20, 90, 10, 50, 60, 80, 25, 70. Assume that the initial position of the R/W head is on track 50. The additional distance that will be traversed by the R/W head when the Shortest Seek Time First (SSTF) algorithm is used compared to the SCAN (Elevator) algorithm (assuming that SCAN algorithm moves towards 100 when it starts execution) is _________ tracks.

A
10
B
11
C
12
D
13
       Operating-Systems       Disk-Scheduling       GATE 2015 [Set-1]
Question 31 Explanation: 
SCAN:

∴ Total head movements,
= 10 + 10 + 10 + 10 + 10 + 55 + 20 + 5 + 10
= 140
SSTF:

∴ Total head movements
= 0 + 5 + 15 + 10 + 10 + 10 + 65 + 5 + 10
= 130
∴ The difference in distance traversed by the R/W head under the two algorithms is
= 140 - 130
= 10 tracks
Question 32

Consider a main memory with five page frames and the following sequence of page references: 3, 8, 2, 3, 9, 1, 6, 3, 8, 9, 3, 6, 2, 1, 3. Which one of the following is true with respect to page replacement policies First-In-First Out (FIFO) and Least Recently Used (LRU)?

A
Both incur the same number of page faults
B
FIFO incurs 2 more page faults than LRU
C
LRU incurs 2 more page faults than FIFO
D
FIFO incurs 1 more page faults than LRU
       Operating-Systems       Page-Replacement-Algorithm       GATE 2015 [Set-1]
Question 32 Explanation: 

∴ FIFO: number of page faults = 9

∴ LRU: number of page faults = 9
Both incur the same number of page faults, so option A is correct.
Question 33

A system has 6 identical resources and N processes competing for them. Each process can request atmost 2 resources. Which one of the following values of N could lead to a deadlock?

A
1
B
2
C
3
D
6
       Operating-Systems       Deadlock       GATE 2015 [Set-2]
Question 33 Explanation: 
Suppose each process requests its maximum of 2 resources. In the worst case every process holds 1 resource and waits for 1 more. With N = 6 processes, all 6 resources are then allocated and every process waits for an instance that will never be released, so there is a deadlock.
In general, the system is deadlock-free as long as Sum(Max_i) − N + 1 ≤ R, i.e., 2N − N + 1 ≤ 6 ⇒ N ≤ 5. Hence N = 6 could lead to a deadlock.
Question 34

A computer system implements a 40-bit virtual address, page size of 8 kilobytes, and a 128-entry translation look-aside buffer (TLB) organized into 32 sets, each having four ways. Assume that the TLB tag does not store any process id. The minimum length of the TLB tag in bits is _________.

A
22
B
23
C
24
D
25
       Operating-Systems       Virtual Memory       GATE 2015 [Set-2]
Question 34 Explanation: 
Page size = 8 KB ⇒ 13 page-offset bits.
Virtual page number = 40 − 13 = 27 bits.
The TLB has 32 sets ⇒ 5 set-index bits.
∴ Minimum TLB tag length = 27 − 5 = 22 bits.
Question 35

A computer system implements 8 kilobyte pages and a 32-bit physical address space. Each page table entry contains a valid bit, a dirty bit, three permission bits, and the translation. If the maximum size of the page table of a process is 24 megabytes, the length of the virtual address supported by the system is ________ bits.

A
36
B
37
C
38
D
39
       Operating-Systems       Virtual Memory       GATE 2015 [Set-2]
Question 35 Explanation: 
Given page size = 8 KB = 2^13 B
Physical address = 32 bits
∴ No. of frames = PAS / page size = 2^32 / 2^13 = 2^19
Each page table entry also contains a valid bit, a dirty bit and 3 permission bits
= 5 extra bits
So one page table entry size is
= 19 + 5 = 24 bits = 3 bytes
Now, page table size = no. of entries × entry size
24 × 2^20 = no. of entries × 3
No. of entries = 8 × 2^20 = 2^23
∴ Virtual address space = no. of entries × page size = 2^23 × 2^13 = 2^36
∴ Virtual address length = 36 bits
Question 36

The maximum number of processes that can be in Ready state for a computer system with n CPUs is

A
n
B
n2
C
2n
D
Independent of n
       Operating-Systems       Process-Scheduling       GATE 2015 [Set-3]
Question 36 Explanation: 
At most n processes can be in the Running state, since there are n CPUs.
Any number of the remaining processes can wait in the Ready state, so the maximum number of Ready processes is independent of n.
Question 37

Consider the following policies for preventing deadlock in a system with mutually exclusive resources.

    I. Processes should acquire all their resources at the beginning of execution. If any resource is not available, all resources acquired so far are released.
    II. The resources are numbered uniquely, and processes are allowed to request for resources only in increasing resource numbers.
    III. The resources are numbered uniquely, and processes are allowed to request for resources only in decreasing resource numbers.
    IV. The resources are numbered uniquely. A process is allowed to request only for a resource with resource number larger than its currently held resources.

Which of the above policies can be used for preventing deadlock?

A
Any one of I and III but not II or IV
B
Any one of I, III, and IV but not II
C
Any one of II and III but not I or IV
D
Any one of I, II, III, and IV
       Operating-Systems       Deadlock       GATE 2015 [Set-3]
Question 37 Explanation: 
All four policies prevent deadlock. Policy I removes hold-and-wait; policies II, III and IV each impose a total ordering on resource requests, which makes a circular wait impossible.
Question 38

For the processes listed in the following table, which of the following scheduling schemes will give the lowest average turnaround time?

A
First Come First Serve
B
Non-preemptive Shortest Job First
C
Shortest Remaining Time
D
Round Robin with Quantum value two
       Operating-Systems       Process-Scheduling       GATE 2015 [Set-3]
Question 38 Explanation: 

FCFS:

TAT for A = completion time(A) - AT(A) = 3 - 0 = 3
TAT of B = 9 - 1 = 8
TAT of C = 13 - 4 = 9
TAT of D = 15 - 6 = 9
∴ Avg. TAT = 29/4
SJF:

TAT of A = 3 - 0 = 3
TAT of B = 9 - 1 = 8
TAT of C = 15 - 4 = 11
TAT of D = 11 - 6 = 5
∴ Avg. TAT = 27/4
SRTF:

TAT of A = 3 - 0 = 3
TAT of B = 15 - 1 = 14
TAT of C = 8 - 4 = 4
TAT of D = 10 - 6 = 4
∴ Avg. TAT = 25/4
RR:

TAT of A = 5 - 0 = 5
TAT of B = 15 - 1 = 14
TAT of C = 13 - 4 = 9
TAT of D = 11 - 6 = 5
∴ Avg. TAT = 33/4
∴ SRTF has lowest Avg. TAT.
Question 39

Suppose a disk has 201 cylinders, numbered from 0 to 200. At some time the disk arm is at cylinder 100, and there is a queue of disk access requests for cylinders 30, 85, 90, 100, 105, 110, 135 and 145. If Shortest-Seek Time First (SSTF) is being used for scheduling the disk access, the request for cylinder 90 is serviced after servicing ____________ number of requests.

A
3
B
4
C
5
D
6
       Operating-Systems       Disk-Scheduling       GATE 2014 [Set-1]
Question 39 Explanation: 

90 is serviced after servicing three requests, i.e.,
100 → 105 → 110 → 90
Question 40

Which one of the following is FALSE?

A
User level threads are not scheduled by the kernel.
B
When a user level thread is blocked, all other threads of its process are blocked.
C
Context switching between user level threads is faster than context switching between kernel level threads.
D
Kernel level threads cannot share the code segment.
       Operating-Systems       Threads       GATE 2014 [Set-1]
Question 40 Explanation: 
Kernel level threads shares the code segment.
Question 41

An operating system uses the Banker’s algorithm for deadlock avoidance when managing the allocation of three resource types X, Y, and Z to three processes P0, P1, and P2. The table given below presents the current system state. Here, the Allocation matrix shows the current number of resources of each type allocated to each process and the Max matrix shows the maximum number of resources of each type required by each process during its execution.

There are 3 units of type X, 2 units of type Y and 2 units of type Z still available. The system is currently in a safe state. Consider the following independent requests for additional resources in the current state:

REQ1: P0 requests 0 units of X,  
      0 units of Y and 2 units of Z
REQ2: P1 requests 2 units of X, 
      0 units of Y and 0 units of Z

Which one of the following is TRUE?

A
Only REQ1 can be permitted.
B
Only REQ2 can be permitted.
C
Both REQ1 and REQ2 can be permitted.
D
Neither REQ1 nor REQ2 can be permitted.
       Operating-Systems       Bankers-Algorithm       GATE 2014 [Set-1]
Question 41 Explanation: 
Let's take the first case.
After allowing Req 1,

Available: X=3, Y=2, Z=0
With this we can satisfy P1's requirement. So available becomes: X=6, Y=4, Z=0.
Since Z is not available, neither P0's nor P2's requirement can be satisfied. So, it is an unsafe state.
Now, let's take the second case.
After allowing Req 2,

Available: X=1, Y=2, Z=2
With this we can satisfy any one of P1's or P2's requirement. Lets first satisfy P1's requirement. So Available now becomes:
X=6, Y=4, Z=2
Now with the availability we can satisfy P2's requirement. So Available now becomes,
X=8, Y=5, Z=3
With this availability P0's requirement can also be satisfied, so the state is safe.
So, from the two cases above, REQ1 cannot be permitted but REQ2 can be permitted.
Question 42

Consider the following set of processes that need to be scheduled on a single CPU. All the times are given in milliseconds.

Using the shortest remaining time first scheduling algorithm, the average process turnaround time (in msec) is ________.

A
7.2
B
7.3
C
7.4
D
7.5
       Operating-Systems       Process-Scheduling       GATE 2014 [Set-1]
Question 42 Explanation: 
Using SRTF:

TAT(A) = 8 − 0 = 8, TAT(B) = 5 − 3 = 2, TAT(C) = 12 − 5 = 7, TAT(D) = 21 − 7 = 14, TAT(E) = 15 − 10 = 5
Average turnaround time = (8 + 2 + 7 + 14 + 5)/5 = 36/5 = 7.2 ms
Question 43

Assume that there are 3 page frames which are initially empty. If the page reference string is 1, 2, 3, 4, 2, 1, 5, 3, 2, 4, 6, the number of page faults using the optimal replacement policy is__________.

A
7
B
8
C
9
D
10
       Operating-Systems       Page-Replacement-Algorithm       GATE 2014 [Set-1]
Question 43 Explanation: 

Total 7 faults are there.
Question 44

A FAT (file allocation table) based file system is being used and the total overhead of each entry in the FAT is 4 bytes in size. Given a 100 × 10^6 bytes disk on which the file system is stored and data block size is 10^3 bytes, the maximum size of a file that can be stored on this disk in units of 10^6 bytes is ________.

A
99.60
B
99.61
C
99.62
D
99.63
       Operating-Systems       File-Allocation-Table       GATE 2014 [Set-2]
Question 44 Explanation: 
Number of entries in the FAT
= total disk size / disk block size
= 100 × 10^6 / 10^3
= 100 × 10^3
∴ Total overhead = 100 × 10^3 × 4 B
= 400 × 10^3 B
= 0.4 × 10^6 B
So, the maximum size of a file that can be stored on the disk is
total disk size − total overhead
= 100 × 10^6 − 0.4 × 10^6
= 99.6 × 10^6 B
Question 45

Consider the procedure below for the Producer-Consumer problem which uses semaphores:

semaphore n=0;
semaphore s=1;
void producer()          void consumer()
{                        {
     while(true)              while(true)
     {                        {
     produce();               semWait(s);
     semWait(s);              semWait(n);
     addToBuffer();           removeFromBuffer();
     semSignal(s);            semSignal(s);
     semSignal(n);            consume();
     }                        }
}                         }

Which one of the following is TRUE?

A
The producer will be able to add an item to the buffer, but the consumer can never consume it.
B
The consumer will remove no more than one item from the buffer.
C
Deadlock occurs if the consumer succeeds in acquiring semaphore s when the buffer is empty.
D
The starting value for the semaphore n must be 1 and not 0 for deadlock-free operation.
       Operating-Systems       Process-Synchronization       GATE 2014 [Set-2]
Question 45 Explanation: 
Answer is (C). If the consumer runs first when the buffer is empty, it executes semWait(s), making s = 0, and then blocks on semWait(n) since n = 0. The producer, in turn, blocks on semWait(s) because s = 0. Neither process can ever proceed, so a deadlock occurs.
Question 46

Three processes A, B and C each execute a loop of 100 iterations. In each iteration of the loop, a process performs a single computation that requires tc CPU milliseconds and then initiates a single I/O operation that lasts for tio milliseconds. It is assumed that the computer where the processes execute has sufficient number of I/O devices and the OS of the computer assigns different I/O devices to each process. Also, the scheduling overhead of the OS is negligible. The processes have the following characteristics:

 Process id      tc      tio
     A        100 ms    500 ms
     B        350 ms    500 ms
     C        200 ms    500 ms

The processes A, B, and C are started at times 0, 5 and 10 milliseconds respectively, in a pure time sharing system (round robin scheduling) that uses a time slice of 50 milliseconds. The time in milliseconds at which process C would complete its first I/O operation is ___________.

A
1000
B
1001
C
1002
D
1003
       Operating-Systems       Process-Scheduling       GATE 2014 [Set-2]
Question 46 Explanation: 
Gantt chart is shown below:

So 'C' completes its I/O at 1000 time units.
Question 47

A computer has twenty physical page frames which contain pages numbered 101 through 120. Now a program accesses the pages numbered 1, 2, …, 100 in that order, and repeats the access sequence THRICE. Which one of the following page replacement policies experiences the same number of page faults as the optimal page replacement policy for this program?

A
Least-recently-used
B
First-in-first-out
C
Last-in-first-out
D
Most-recently-used
       Operating-Systems       Page-Replacement-Algorithm       GATE 2014 [Set-2]
Question 47 Explanation: 
The 20 frames initially hold pages 101 through 120, none of which is referenced again, so the first pass evicts them all. For this reference pattern, whenever the optimal policy must replace a page it evicts the most recently referenced one, because that page's next reference lies farthest in the future. The optimal policy therefore makes exactly the same replacement decisions as Most-Recently-Used, and the two incur the same number of page faults.
Question 48

A system contains three programs and each requires three tape units for its operation. The minimum number of tape units which the system must have such that deadlocks never arise is _________.

A
7
B
8
C
9
D
10
       Operating-Systems       Deadlock       GATE 2014 [Set-3]
Question 48 Explanation: 
Worst case: each of the 3 programs holds 2 tape units and waits for a third, which deadlocks with only 6 units. One extra unit guarantees that some program can acquire its third unit and finish: (3 × 2) + 1 = 7 tape units.
Question 49

An operating system uses shortest remaining time first scheduling algorithm for pre-emptive scheduling of processes. Consider the following set of processes with their arrival times and CPU burst times (in milliseconds):

The average waiting time (in milliseconds) of the processes is

A
5.5
B
5.6
C
5.7
D
5.8
       Operating-Systems       CPU-Scheduling       GATE 2014 [Set-3]
Question 49 Explanation: 

WT - Waiting Time
CT - Completion Time
TAT - Turn Around Time
TAT = CT - AT
WT = TAT - BT
Gantt chart using Shortest remaining time first,

Avg. WT = (15 + 0 + 3 + 4)/4 = 22/4 = 5.5
Question 50

Consider a paging hardware with a TLB. Assume that the entire page table and all the pages are in the physical memory. It takes 10 milliseconds to search the TLB and 80 milliseconds to access the physical memory. If the TLB hit ratio is 0.6, the effective memory access time (in milliseconds) is  _________.

A
122
B
123
C
124
D
125
       Operating-Systems       Memory-Management       GATE 2014 [Set-3]
Question 50 Explanation: 
Tavg = TLB access time + miss ratio of TLB × memory access time + memory access time
= 10 + 0.4 × 80 + 80
= 10 + 32 + 80
= 122 ms
Question 51

Consider the basic block given below.

a = b + c
c = a + d
d = b + c
e = d - b
a = e + b

The minimum number of nodes and edges present in the DAG representation of the above basic block respectively are

A
6 and 6
B
8 and 10
C
9 and 12
D
4 and 4
       Operating-Systems       DAG       GATE 2014 [Set-3]
Question 51 Explanation: 
a = b + c
c = a + d
d = b + c
e = d - b = b + c - b = c
a = d - b + b = d
So, DAG representation of above basic block respectively,

So, number of nodes = 6 &
number of edges = 6
Question 52

The memory access time is 1 nanosecond for a read operation with a hit in cache, 5 nanoseconds for a read operation with a miss in cache, 2 nanoseconds for a write operation with a hit in cache and 10 nanoseconds for a write operation with a miss in cache. Execution of a sequence of instructions involves 100 instruction fetch operations, 60 memory operand read operations and 40 memory operand write operations. The cache hit-ratio is 0.9.  The average memory access time (in nanoseconds) in executing the sequence of instructions is   __________.

A
1.68
B
1.69
C
1.70
D
1.71
       Operating-Systems       Memory-Management       GATE 2014 [Set-3]
Question 52 Explanation: 
Total operations = 100 instruction fetches + 60 memory operand reads + 40 memory operand writes
= 200 memory operations
Time for 100 instruction fetches (reads) = 0.9 × 100 × 1 ns + 0.1 × 100 × 5 ns = 90 + 50 = 140 ns
Time for 60 memory operand reads = 0.9 × 60 × 1 ns + 0.1 × 60 × 5 ns = 54 + 30 = 84 ns
Time for 40 memory operand writes = 0.9 × 40 × 2 ns + 0.1 × 40 × 10 ns = 72 + 40 = 112 ns
Total time for the 200 operations = 140 + 84 + 112 = 336 ns
∴ Average memory access time = 336 ns / 200 = 1.68 ns
Question 53

A scheduling algorithm assigns priority proportional to the waiting time of a process. Every process starts with priority zero (the lowest priority). The scheduler re-evaluates the process priorities every T time units and decides the next process to schedule. Which one of the following is TRUE if the processes have no I/O operations and all arrive at time zero?

A
This algorithm is equivalent to the first-come-first-serve algorithm.
B
This algorithm is equivalent to the round-robin algorithm.
C
This algorithm is equivalent to the shortest-job-first algorithm.
D
This algorithm is equivalent to the shortest-remaining-time-first algorithm.
       Operating-Systems       CPU-Scheduling       GATE 2013
Question 53 Explanation: 
The given algorithm is equivalent to the round robin algorithm with time quantum T units.
Question 54

Three concurrent processes X, Y, and Z execute three different code segments that access and update certain shared variables. Process X executes the P operation (i.e., wait) on semaphores a, b and c; process Y executes the P operation on semaphores b, c and d; process Z executes the P operation on semaphores c, d, and a before entering the respective code segments. After completing the execution of its code segment, each process invokes the V operation (i.e., signal) on its three semaphores. All semaphores are binary semaphores initialized to one. Which one of the following represents a deadlock-free order of invoking the P operations by the processes?

A
X: P(a)P(b)P(c) Y: P(b)P(c)P(d) Z: P(c)P(d)P(a)
B
X: P(b)P(a)P(c) Y: P(b)P(c)P(d) Z: P(a)P(c)P(d)
C
X: P(b)P(a)P(c) Y: P(c)P(b)P(d) Z: P(a)P(c)P(d)
D
X: P(a)P(b)P(c) Y: P(c)P(b)P(d) Z: P(c)P(d)P(a)
       Operating-Systems       Process-Management       GATE 2013
Question 54 Explanation: 
Trick: to check the given orders for deadlock, execute the processes in parallel, i.e., first execute the first line of every process, then the second line of every process, and so on. If every P operation can eventually succeed, the order is deadlock-free.
Question 55

Consider a hard disk with 16 recording surfaces (0-15) having 16384 cylinders (0-16383) and each cylinder contains 64 sectors (0-63). Data storage capacity in each sector is 512 bytes. Data are organized cylinder-wise and the addressing format is <cylinder no., surface no., sector no.>. A file of size 42797 KB is stored in the disk and the starting disk location of the file is <1200, 9, 40>. What is the cylinder number of the last sector of the file, if it is stored in a contiguous manner?

A
1281
B
1282
C
1283
D
1284
       Operating-Systems       Input-and-output-system       GATE 2013
Question 55 Explanation: 
Each cylinder holds 16 surfaces × 64 sectors × 512 B = 512 KB.
The file starts at <1200, 9, 40>, so cylinder 1200 still has 24 sectors on surface 9 (sectors 40 to 63) plus 6 × 64 = 384 sectors on surfaces 10 to 15, i.e. 408 sectors = 204 KB for this file.
Remaining file size = 42797 − 204 = 42593 KB.
42593 / 512 ≈ 83.2, so the file fills 83 more complete cylinders (1201 to 1283) and spills its last 97 KB into one further cylinder.
Hence the last sector of the file lies on cylinder 1200 + 83 + 1 = 1284.
Question 56

A shared variable x, initialized to zero, is operated on by four concurrent processes W, X, Y, Z as follows. Each of the processes W and X reads x from memory, increments by one, stores it to memory, and then terminates. Each of the processes Y and Z reads x from memory, decrements by two, stores it to memory, and then terminates. Each process before reading x invokes the P operation (i.e., wait) on a counting semaphore S and invokes the V operation (i.e., signal) on the semaphore S after storing x to memory. Semaphore S is initialized to two. What is the maximum possible value of x after all processes complete execution?

A
-2
B
-1
C
1
D
2
       Operating-Systems       Process-Management       GATE 2013
Question 56 Explanation: 

Suppose W executes first: P(S) makes S = 1, W increments x to 1, stores it, and V(S) restores S = 2. Now X executes P(S) (S becomes 1), reads x = 1 into register R1 and increments R1 to 2, but is preempted before storing the result; S stays at 1 because X has not yet executed V(S).
Now Y executes completely (x = 1 − 2 = −1) and then Z executes completely (x = −1 − 2 = −3). Finally X resumes and stores R1, i.e. 2, into x.
So the maximum possible value of x is 2.
Question 57

A certain computation generates two arrays a and b such that a[i]=f(i) for 0 ≤ i < n and b[i]=g(a[i]) for 0 ≤ i < n. Suppose this computation is decomposed into two concurrent processes X and Y such that X computes the array a and Y computes the array b. The processes employ two binary semaphores R and S, both initialized to zero. The array a is shared by the two processes. The structures of the processes are shown below.

Process X:                         Process Y:
private i;                         private i;
for (i=0; i < n; i++) {            for (i=0; i < n; i++) {
   a[i] = f(i);                       EntryY(R, S);
   ExitX(R, S);                       b[i]=g(a[i]);
}                                 }

Which one of the following represents the CORRECT implementations of ExitX and EntryY?

A
ExitX(R,S) {
P(R);
V(S);
}
EntryY(R,S) {
P(S);
V(R);
}
B
ExitX(R,S) {
V(R);
V(S);
}
EntryY(R,S) {
P(R);
P(S);
}
C
ExitX(R,S) {
P(S);
V(R);
}
EntryY(R,S) {
V(S);
P(R);
}
D
ExitX(R,S) {
V(R);
P(S);
}
EntryY(R,S) {
V(S);
P(R);
}
       Operating-Systems       Process-Management       GATE 2013
Question 57 Explanation: 
Process Y must not read a[i] before process X has written it, so each iteration of Y has to wait for the corresponding iteration of X.
Option (C) is correct: ExitX performs P(S); V(R) and EntryY performs V(S); P(R), so the two processes run in lock step and every value inserted into the array by X is consumed by Y before X produces the next one.
Question 58

A computer uses 46-bit virtual address, 32-bit physical address, and a three-level paged page table organization. The page table base register stores the base address of the first-level table (T1), which occupies exactly one page. Each entry of T1 stores the base address of a page of the second-level table (T2). Each entry of T2 stores the base address of a page of the third-level table (T3). Each entry of T3 stores a page table entry (PTE). The PTE is 32 bits in size. The processor used in the computer has a 1 MB 16-way set associative virtually indexed physically tagged cache. The cache block size is 64 bytes.

What is the size of a page in KB in this computer?

A
2
B
4
C
8
D
16
       Operating-Systems       Memory-Management       GATE 2013
Question 58 Explanation: 
Let the page size be 2^p B.
Each PTE is 4 B (a 32-bit PTE, as given), so one page holds 2^p / 4 = 2^(p−2) entries.
The virtual address space equals
(no. of T1 entries) × (no. of T2 entries) × (no. of T3 entries) × page size:
2^(p−2) × 2^(p−2) × 2^(p−2) × 2^p = 2^46
2^(4p−6) = 2^46
4p − 6 = 46
p = 13
∴ Page size = 2^13 B = 8 KB
Question 59

A computer uses 46-bit virtual address, 32-bit physical address, and a three-level paged page table organization. The page table base register stores the base address of the first-level table (T1), which occupies exactly one page. Each entry of T1 stores the base address of a page of the second-level table (T2). Each entry of T2 stores the base address of a page of the third-level table (T3). Each entry of T3 stores a page table entry (PTE). The PTE is 32 bits in size. The processor used in the computer has a 1 MB 16-way set associative virtually indexed physically tagged cache. The cache block size is 64 bytes.

What is the minimum number of page colours needed to guarantee that no two synonyms map to different sets in the processor cache of this computer?

A
2
B
4
C
8
D
16
       Operating-Systems       Memory-Management       GATE 2013
Question 59 Explanation: 
Architecture of physically indexed cache:

Architecture of virtual indexed physically tagged (VIPT):

VIPT cache and aliasing effect and synonym.
Alias: Same physical address can be mapped to multiple virtual addresses.
Synonym: Different virtual addresses mapped to same physical address (for data sharing).
So these synonyms should be in same set to avoid write-update problems.
In our problem, VA = 46 bits.

We use 16 bits for indexing into the cache: 1 MB / (16 ways × 64 B) = 1024 sets ⇒ 10 set-index bits + 6 block-offset bits.
To have two synonyms in the same set, their 16 index bits must be identical whether taken from the VA or the PA.
Assume the physical pages are coloured, and each set holds pages of only one colour, so that synonyms always land in the same set.
Since page size = 8 KB ⇒ 13 offset bits. These 13 bits are not translated during VA→PA, so 13 of the 16 index bits are automatically the same; the remaining 16 − 13 = 3 bits must be made to match.
3 bits can produce 2^3 = 8 combinations, which can map to different sets, so we need 8 different colours for our pages.
In a physically indexed cache, indexing is done with physical address bits and the mapping is one-to-one (one index maps to one block). In a VIPT cache, the index is taken from the virtual (set + offset) bits, so the mapping is not one-to-one: two virtual addresses referring to the same physical page could index different sets. Treating such pages as the same colour and placing same-coloured pages in one set avoids write-update problems.
Question 60

A process executes the code

fork();
fork();
fork();

The total number of child processes created is

A
3
B
4
C
7
D
8
       Operating-Systems       System-Calls       GATE 2012
Question 60 Explanation: 
The number of child processes created = 2^n − 1 = 2^3 − 1 = 7 (where n is the number of fork() statements).
7 are child processes.
Question 61

Consider the 3 processes, P1, P2 and P3 shown in the table.

Process           Arrival time         Time Units Required
   P1                0                         5
   P2                1                         7
   P3                3                         4

The completion order of the 3 processes under the policies FCFS and RR2 (round robin scheduling with CPU quantum of 2 time units) are

A
FCFS: P1, P2, P3 RR2: P1, P2, P3
B
FCFS: P1, P3, P2 RR2: P1, P3, P2
C
FCFS: P1, P2, P3 RR2: P1, P3, P2
D
FCFS: P1, P3, P2 RR2: P1, P2, P3
       Operating-Systems       Process-Scheduling       GATE 2012
Question 61 Explanation: 
FCFS is - P1, P2, P3
FCFS is clear.
RR Queue: In RR queue time slot is of 2 units.
Processes are assigned in following order
P1, P2, P1, P3, P2, P1, P3, P2, P2
This question uses the ready queue: at t=2, P2 starts and P1 rejoins the ready queue; P3 arrives at t=3 and is queued behind P1. So at t=4 P1 runs again, and P3 executes for the first time at t=6.
RR2: P1, P3, P2
So option C.
Question 62

Fetch_And_Add(X,i) is an atomic Read-Modify-Write instruction that reads the value of memory location X, increments it by the value i, and returns the old value of X. It is used in the pseudocode shown below to implement a busy-wait lock. L is an unsigned integer shared variable initialized to 0. The value of 0 corresponds to lock being available, while any non-zero value corresponds to the lock being not available.

  AcquireLock(L){
         while (Fetch_And_Add(L,1))
               L = 1;
   }
  ReleaseLock(L){
         L = 0;
   }

This implementation

A
fails as L can overflow
B
fails as L can take on a non-zero value when the lock is actually available
C
works correctly but may starve some processes
D
works correctly without starvation
       Operating-Systems       Process-Synchronization       GATE 2012
Question 62 Explanation: 
Check the loop first:
while (Fetch_And_Add (L,1))
L = 1; // A waiting process can be here just after
// the lock is released, and can make L = 1.
Assume P1 evaluates the while condition and is preempted before executing L = 1. Now P2 runs AcquireLock and ReleaseLock completely, leaving L = 0 (lock available). Then P1 resumes at the statement where it was preempted and sets L = 1 without re-checking.
L thus takes a non-zero value (L = 1) when the lock is actually available (L = 0). So option B.
Question 63

A file system with 300 GByte disk uses a file descriptor with 8 direct block addresses, 1 indirect block address and 1 doubly indirect block address. The size of each disk block is 128 Bytes and the size of each disk block address is 8 Bytes. The maximum possible file size in this file system is

A
3 KBytes
B
35 KBytes
C
280 KBytes
D
dependent on the size of the disk
       Operating-Systems       File-System       GATE 2012
Question 63 Explanation: 
It’s given that each disk block is 128 B and each disk block address is 8 B.
The 8 direct addresses point to 8 disk blocks = 8 × 128 B = 1 KB.
The singly indirect block holds 128/8 = 16 disk block addresses = 16 × 128 B = 2 KB of data.
The doubly indirect block points to 16 blocks of addresses, each addressing 16 data blocks = 16 × 16 × 128 B = 32 KB.
Maximum possible file size = 1 KB + 2 KB + 32 KB = 35 KB
Question 64

Consider the virtual page reference string

1, 2, 3, 2, 4, 1, 3, 2, 4, 1  

On a demand paged virtual memory system running on a computer whose main memory has 3 page frames, which are initially empty. Let LRU, FIFO and OPTIMAL denote the number of page faults under the corresponding page replacement policy. Then

A
OPTIMAL < LRU < FIFO
B
OPTIMAL < FIFO < LRU
C
OPTIMAL = LRU
D
OPTIMAL = FIFO
       Operating-Systems       Page-Replacement       GATE 2012
Question 64 Explanation: 
FIFO:

No. of page faults with FIFO = 6
LRU:

No. of page faults with LRU = 9
Optimal:

No. of page faults with optimal = 5
∴ Optimal < FIFO < LRU
Question 65

Let the page fault service time be 10 ms in a computer with average memory access time being 20 ns. If one page fault is generated for every 10^6 memory accesses, what is the effective access time for the memory?

A
21 ns
B
30 ns
C
23 ns
D
35 ns
       Operating-Systems       Memory-Management       GATE 2011
Question 65 Explanation: 
p = page fault rate = 1/10^6
EA = p × page fault service time + (1 − p) × memory access time
= (1/10^6) × 10 × 10^6 ns + (1 − 1/10^6) × 20 ns ≈ 10 + 20 = 30 ns
Question 66

A thread is usually defined as a "light weight process" because an operating system (OS) maintains smaller data structures for a thread than for a process. In relation to this, which of the following is TRUE?

A
On per-thread basis, the OS maintains only CPU register state
B
The OS does not maintain a separate stack for each thread
C
On per-thread basis, the OS does not maintain virtual memory state
D
On per thread basis, the OS maintains only scheduling and accounting information
       Operating-Systems       Threads       GATE 2011
Question 66 Explanation: 
A) False: per thread, the OS maintains registers, stack and program counter, not the register state alone.
B) False: the OS does maintain a separate stack for each thread.
C) True: virtual memory state belongs to the process and is shared by all its threads.
D) False, for the same reason as (A).
Question 67

Let the time taken to switch between user and kernel modes of execution be t1 while the time taken to switch between two processes be t2. Which of the following is TRUE?

A
(t1) > (t2)
B
(t1) = (t2)
C
(t1) < (t2)
D
Nothing can be said about the relation between t1 and t2
       Operating-Systems       Context-Switching       GATE 2011
Question 67 Explanation: 
A context switch between two processes includes a user/kernel mode switch and, in addition, saving and restoring the process state, so t1 < t2.
Question 68

Consider the following table of arrival time and burst time for three processes P0, P1 and P2.

The pre-emptive shortest job first scheduling algorithm is used. Scheduling is carried out only at arrival or completion of processes. What is the average waiting time for the three processes?

A
5.0 ms
B
4.33 ms
C
6.33 ms
D
7.33 ms
       Operating-Systems       Process-Scheduling       GATE 2011
Question 68 Explanation: 

CT = Completion time
TAT = Turn Around Time
WT = Waiting Time
TAT = CT - AT
WT = TAT - BT
Gantt Chart using pre-emptive shortest job first scheduling algorithm,

Avg. WT = (4 + 0 + 11)/3 = 15/3 = 5.0 ms
Question 69

Consider the methods used by processes P1 and P2 for accessing their critical sections whenever needed, as given below. The initial values of shared boolean variables S1 and S2 are randomly assigned.

Which one of the following statements describes the properties achieved?

A
Mutual exclusion but not progress
B
Progress but not mutual exclusion
C
Neither mutual exclusion nor progress
D
Both mutual exclusion and progress
       Operating-Systems       Process-Synchronization       GATE 2010
Question 69 Explanation: 
Mutual exclusion is satisfied, because at any point of time either S1 = S2 or S1 ≠ S2, but not both, so only one process can be in its critical section. Progress, however, is not satisfied: suppose S1 = 1 and S2 = 0, P1 is not interested in entering its critical section, but P2 wants to enter. P2 cannot enter, because S1 will not become equal to S2 until P1 enters its critical section. Thus a process that is not interested in its critical section can indefinitely block another process that is, so progress is violated.
Question 70

A system uses FIFO policy for page replacement. It has 4 page frames with no pages loaded to begin with. The system first accesses 100 distinct pages in some order and then accesses the same 100 pages but now in the reverse order. How many page faults will occur?

A
196
B
192
C
197
D
195
       Operating-Systems       Page-Replacement       GATE 2010
Question 70 Explanation: 
The first 100 accesses (pages 1 … 100) cause 100 page faults.
When the same pages are then accessed in reverse order, pages 100, 99, 98 and 97 are already in the four frames, so the remaining 96 accesses each cause a fault.
Total = 100 + 96 = 196 page faults.
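The count is easy to verify mechanically. Below is a minimal FIFO simulator in C for exactly this forward-then-reverse access pattern (the function and constant names are my own, purely illustrative):

```c
#include <assert.h>

#define FRAMES 4
#define PAGES  100

/* FIFO page faults for pages 1..100 accessed in order,
 * then accessed again in reverse order 100..1. */
int fifo_faults_forward_then_reverse(void) {
    int frame[FRAMES];
    int next = 0;        /* FIFO pointer: the oldest loaded frame */
    int used = 0;        /* frames filled so far */
    int faults = 0;
    int refs[2 * PAGES];

    for (int i = 0; i < PAGES; i++) refs[i] = i + 1;             /* 1..100 */
    for (int i = 0; i < PAGES; i++) refs[PAGES + i] = PAGES - i; /* 100..1 */

    for (int i = 0; i < 2 * PAGES; i++) {
        int p = refs[i], hit = 0;
        for (int j = 0; j < used; j++)
            if (frame[j] == p) { hit = 1; break; }
        if (hit) continue;
        faults++;
        if (used < FRAMES) {
            frame[used++] = p;
        } else {
            frame[next] = p;             /* evict the oldest page */
            next = (next + 1) % FRAMES;
        }
    }
    return faults;
}
```

The forward pass faults 100 times; in the reverse pass pages 100–97 are still resident, so only 96 more faults occur, for a total of 196.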
Question 71

Which of the following statements are true?

    I. Shortest remaining time first scheduling may cause starvation
    II. Preemptive scheduling may cause starvation
    III. Round robin is better than FCFS in terms of response time
A
I only
B
I and III only
C
II and III only
D
I, II and III
       Operating-Systems       Process-Scheduling       GATE 2010
Question 71 Explanation: 
- In SRTF longer bursts will suffer from starvation.
- Pre-emptive scheduling causes starvation (for example lower priority jobs are waiting).
- Best Response time is given by RR.
Question 72

The following program consists of 3 concurrent processes and 3 binary semaphores.The semaphores are initialized as S0 = 1, S1 = 0, S2 = 0.

How many times will process P0 print '0'?

A
At least twice
B
Exactly twice
C
Exactly thrice
D
Exactly once
       Operating-Systems       Process-Synchronization       GATE 2010
Question 72 Explanation: 
S0=1
S1=0
S2=0
P0 enters its critical section first (S0 = 1),
prints '0',
and releases S1 and S2 (i.e., S1 = 1 and S2 = 1).
Now P1 and P2 can both proceed; each of them releases S0, so P0 can print '0' again.
Hence the number of zeros printed is at least 2.
Question 73

A system has n resources R0,...,Rn-1,and k processes P0,....Pk-1.The implementation of the resource request logic of each process Pi is as follows:

if (i % 2 == 0) {
      if (i < n) request Ri
      if (i+2 < n) request Ri+2
}
else {
      if (i < n) request Rn-i
      if (i+2 < n) request Rn-i-2
}

In which one of the following situations is a deadlock possible?

A
n = 40, k = 26
B
n = 21, k = 12
C
n = 20, k = 10
D
n = 41, k = 19
       Operating-Systems       Deadlock       GATE 2010
Question 73 Explanation: 
Consider n = 21, k = 12, and the processes P9 and P10.
P9 (odd) requests R12 (= R21-9) and then R10 (= R21-9-2).
P10 (even) requests R10 and then R12.
If P9 acquires R12 and P10 acquires R10, each then waits for the resource held by the other, so P9 and P10 are deadlocked.
Question 74

In which one of the following page replacement policies, Belady’s anomaly may occur?

A
FIFO
B
Optimal
C
LRU
D
MRU
       Operating-Systems       Page-Replacement       GATE 2009
Question 74 Explanation: 
Belady's anomaly is the phenomenon in which increasing the number of page frames results in an increase in the number of page faults for certain memory access patterns. This phenomenon is commonly experienced with the first-in first-out (FIFO) page replacement algorithm.
Question 75

The essential content(s) in each entry of a page table is/are

A
Virtual page number
B
Page frame number
C
Both virtual page number and page frame number
D
Access right information
       Operating-Systems       Page-Replacement       GATE 2009
Question 75 Explanation: 
Each page table entry must contain the page frame number.
The virtual page number is not stored in the entry; it serves as the index into the page table used to locate that entry.
Question 76

Consider a system with 4 types of resources R1 (3 units), R2 (2 units), R3 (3 units), R4 (2 units). A non-preemptive resource allocation policy is used. At any given instance, a request is not entertained if it cannot be completely satisfied. Three processes P1, P2, P3 request the sources as follows if executed independently.

Which one of the following statements is TRUE if all three processes run concurrently starting at time t=0?

A
All processes will finish without any deadlock.
B
Only P1 and P2 will be in deadlock.
C
Only P1 and P3 will be in a deadlock.
D
All three processes will be in deadlock.
       Operating-Systems       Deadlock       GATE 2009
Question 76 Explanation: 
At t=0,
→ P1 requests 2 units of R2 which is granted.
→ P2 requests 2 units of R3 which is granted.
→ P3 requests 1 units of R4 which is granted.
Now Available resources are,

At t=1,
→ P1 requests 1 unit of R3 which is granted because it is available.
Now Available resources are,

At t=2,
→ P2 requests 1 unit of R4 which is granted.
→ P3 requests 2 units of R1.
Now Available resources are,

At t=3,
→ P1 requests 2 units of R1 which cannot be granted, and will wait for other processes to release.
Available resources are,

At t=4,
→ P2 requests 1 unit of R1, which is granted.
Available resources are

At t=5,
→ P3 releases 2 units of R1.
Now Available resources are,

→ Now P1 is waiting for R1, so now P1 will be granted 2 units of R1.
Now Available resources are,

→ Now P1 releases 1 unit of R2 and 1 unit of R1.
Now Available resources are

At t=6,
→ Now P2 releases 1 unit of R3.
Now available resources are,

At t=7,
→ P3 requests 1 unit of R2, which is granted.
→ P1 releases 1 unit of R3.
Now Available resources are,

At t=8,
→ P2 finishes and releases its remaining resources. So now Available resources,

→ P3 requests 1 unit of R3 which is granted.
Now Available resources are,

→ P1 requests 2 units of R4 which cannot be granted, so it will wait for other process to release them.
At t=9,
→ P3 finishes, and releases rest of the resources.
Now Available resources are

→ Now, P1 can be granted with resources 2 units of R4 for which it was waiting for.
Now Available resources are,

At t=10,
→ P1 finishes its execution.
So, finally we can conclude that all processes will finish without any deadlock.
Question 77

Consider a disk system with 100 cylinders. The requests to access the cylinders occur in following sequence:

    4, 34, 10, 7, 19, 73, 2, 15, 6, 20 

Assuming that the head is currently at cylinder 50, what is the time taken to satisfy all requests if it takes 1ms to move from one cylinder to adjacent one and shortest seek time first policy is used?

A
95 ms
B
119 ms
C
233 ms
D
276 ms
       Operating-Systems       CPU-Scheduling       GATE 2009
Question 77 Explanation: 
The given sequence is
4, 34, 10,7, 19, 73, 2, 15, 6, 20
Arrange the sequence in order
2, 4, 6, 10, 15, 19, 20, 34, 73

⇒ 1 ms to move from one cylinder to adjacent one
⇒ (16*1)+(14*1)+(1*1)+(4*1)+(5*1)+(3*1)+(1*1)+(2*1)+(2*1)+(71*1)
⇒ 16+14+1+4+5+3+1+2+2+71
⇒ 119 ms
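The same total can be reproduced with a short shortest-seek-time-first simulation in C (an illustrative sketch; the names are mine, and it assumes at most 32 pending requests):

```c
#include <assert.h>
#include <stdlib.h>

/* Total head movement (in cylinders, i.e. ms at 1 ms per cylinder)
 * when serving requests in Shortest Seek Time First order. */
int sstf_total_movement(int head, const int *req, int n) {
    int done[32] = {0};          /* assumes n <= 32 */
    int total = 0;
    for (int served = 0; served < n; served++) {
        int best = -1, bestd = 0;
        for (int i = 0; i < n; i++) {
            if (done[i]) continue;
            int d = abs(req[i] - head);
            if (best < 0 || d < bestd) { best = i; bestd = d; }
        }
        done[best] = 1;          /* serve the nearest pending request */
        total += bestd;
        head = req[best];
    }
    return total;
}
```

Starting at cylinder 50 with the request list above, the head visits 34, 20, 19, 15, 10, 7, 6, 4, 2 and finally 73, for 119 cylinders of movement.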
Question 78

In the following process state transition diagram for a uniprocessor system, assume that there are always some processes in the ready state:

Now consider the following statements:

    I. If a process makes a transition D, it would result in another process making transition A immediately.
    II. A process P2 in blocked state can make transition E while another process P1 is in running state.
    III. The OS uses preemptive scheduling.
    IV. The OS uses non-preemptive scheduling.

Which of the above statements are TRUE?

A
I and II
B
I and III
C
II and III
D
II and IV
       Operating-Systems       Process-Scheduling       GATE 2009
Question 78 Explanation: 
Statement I is false: a process making transition D simply joins the ready queue; it does not force another process to make transition A immediately.
Statement II is true: a process can move to the ready state when its I/O completes, irrespective of another process being in the running state.
Statement III is true: transition C (running → ready) represents preemptive scheduling.
Statement IV is false, because the scheduling is preemptive.
Correct Answer is Option-C (II & III are true).
Question 79

The enter_CS() and leave_CS() functions to implement critical section of a process are realized using test-and-set instruction as follows:

void enter_CS(X)
{
    while test-and-set(X) ;
}
void leave_CS(X)
{
   X = 0;
}

In the above solution, X is a memory location associated with the CS and is initialized to 0. Now consider the following statements:

    I. The above solution to CS problem is deadlock-free.
    II. The solution is starvation free.
    III. The processes enter CS in FIFO order.
    IV More than one process can enter CS at the same time.

Which of the above statements is TRUE?

A
I only
B
I and II
C
II and III
D
IV only
       Operating-Systems       Process-Synchronization       GATE 2009
Question 79 Explanation: 
The given solution is the basic test-and-set lock, which guarantees mutual exclusion and is deadlock-free.
It uses no queue, so neither freedom from starvation nor FIFO ordering is guaranteed.
Only one process can be in the CS at a time, so IV is false.
So the answer is option (A).
Question 80

A multilevel page table is preferred in comparison to a single level page table for translating virtual address to physical address because

A
It reduces the memory access time to read or write a memory location.
B
It helps to reduce the size of page table needed to implement the virtual address space of a process.
C
It is required by the translation lookaside buffer.
D
It helps to reduce the number of page faults in page replacement algorithms.
       Operating-Systems       Virtual Memory       GATE 2009
Question 80 Explanation: 
When a single-level page table is too big to fit in contiguous memory, it is divided into multiple levels. A multilevel page table reduces the amount of memory needed to hold the page table for a process, since table pages for unused parts of the virtual address space need not be allocated.
Question 81

The data blocks of a very large file in the Unix file system are allocated using

A
contiguous allocation
B
linked allocation
C
indexed allocation
D
an extension of indexed allocation
       Operating-Systems       File-System       GATE 2008
Question 81 Explanation: 
The Unix file system uses an extension of indexed allocation: an inode holds direct block pointers together with single indirect, double indirect and triple indirect block pointers.
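As a rough illustration of why this scheme handles very large files, the sketch below computes how many data blocks such an inode can address. The layout (12 direct pointers, one single, one double and one triple indirect block, 4 KB blocks, 4-byte pointers) is an assumption for the example, not something stated in the question:

```c
#include <assert.h>

/* Maximum number of data blocks addressable by an inode with
 * `direct` direct pointers plus one single, one double and one
 * triple indirect block; p = block pointers that fit in one block. */
long long max_file_blocks(long long direct, long long p) {
    return direct + p + p * p + p * p * p;
}
```

With 4 KB blocks and 4-byte pointers, p = 1024, so the inode addresses 12 + 1024 + 1024² + 1024³ = 1,074,791,436 blocks, which at 4 KB each is roughly 4 TB of data.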
Question 82

The P and V operations on counting semaphores, where s is a counting semaphore, are defined as follows:

P(s): s =  s - 1;
       if (s < 0) then wait;
V(s): s = s + 1;
  if (s <= 0) then wakeup a process waiting on s;

Assume that Pb and Vb the wait and signal operations on binary semaphores are provided. Two binary semaphores Xb and Yb are used to implement the semaphore operations P(s) and V(s) as follows:

P(s): Pb(Xb);
  s = s - 1;
  if (s < 0) {
   Vb(Xb) ;
   Pb(Yb) ;
  }
  else Vb(Xb); 

V(s): Pb(Xb) ;
  s = s + 1;
  if (s <= 0) Vb(Yb) ;
  Vb(Xb) ;

The initial values of Xb and Yb are respectively

A
0 and 0
B
0 and 1
C
1 and 0
D
1 and 1
       Operating-Systems       Process-Synchronization       GATE 2008
Question 82 Explanation: 
Xb must be 1 initially, because both the P(s) operation and the V(s) operation begin with Pb(Xb); if Xb were 0, every process performing these operations would block immediately.
Yb must be 0 initially; if Yb were 1, two processes could complete P(s) even when s ≤ 0 and be in the critical section at the same time.
So the answer is option (C).
Question 83

Which of the following statements about synchronous and asynchronous I/O is NOT true?

A
An ISR is invoked on completion of I/O in synchronous I/O but not in asynchronous I/O
B
In both synchronous and asynchronous I/O, an ISR (Interrupt Service Routine) is invoked after completion of the I/O
C
A process making a synchronous I/O call waits until I/O is complete, but a process making an asynchronous I/O call does not wait for completion of the I/O
D
In the case of synchronous I/O, the process waiting for the completion of I/O is woken up by the ISR that is invoked after the completion of I/O
       Operating-Systems       I/O-Handling       GATE 2008
Question 83 Explanation: 
Synchronous I/O means that some flow of execution (such as a process or thread) is waiting for the operation to complete. Asynchronous I/O means that nothing is waiting for the operation to complete; the completion of the operation itself causes something to happen.
Synchronous I/O -- some execution vehicle (like a process or thread) that initiates the I/O also waits for the I/O to complete (and perhaps completes it). When the I/O completes, that same execution vehicle goes on to do something else, perhaps using the results of the I/O.
Asynchronous I/O -- no execution vehicle waits for the I/O to complete. When the I/O completes, whatever execution vehicle happens to complete the I/O may arrange for later things to happen.
Option B is the statement given as not true, since it is not the case that an ISR (Interrupt Service Routine) is invoked after completion of the I/O in both synchronous and asynchronous I/O.
Question 84

Which of the following is NOT true of deadlock prevention and deadlock avoidance schemes?

A
In deadlock prevention, the request for resources is always granted if the resulting state is safe
B
In deadlock avoidance, the request for resources is always granted if the result state is safe
C
Deadlock avoidance is less restrictive than deadlock prevention
D
Deadlock avoidance requires knowledge of resource requirements a priori
       Operating-Systems       Deadlock       GATE 2008
Question 84 Explanation: 
Deadlock prevention scheme handles deadlock by making sure that one of the four necessary conditions don't occur. So, it may be the case that a resource request might be rejected even if the resulting state is safe. Hence, option (A) is false.
Question 85

A process executes the following code

for (i = 0; i < n; i++) fork();

The total number of child processes created is

A
n
B
2^n - 1
C
2^n
D
2^(n+1) - 1
       Operating-Systems       System-Calls       GATE 2008
Question 85 Explanation: 
fork() is a system call, implemented in the kernel, by which a process creates a copy of itself.
Each fork() doubles the number of processes, so n fork calls produce 2^n processes in total.

1, 3, 7, 15, 31, … ⇒ the number of child processes is 2^n - 1.
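The 2^n - 1 figure can be observed empirically. The POSIX sketch below (my own helper, not part of the question) forks n times and has every resulting process report itself through a pipe; the original process then counts the reports:

```c
#include <assert.h>
#include <unistd.h>
#include <sys/wait.h>

/* Fork n times unconditionally and return the number of child
 * processes created, as seen by the original process. Every
 * process writes one byte to the pipe, so the original reads
 * 2^n bytes in total once all write ends are closed. */
int count_children(int n) {
    int fd[2];
    pid_t original = getpid();
    if (pipe(fd) != 0) return -1;
    for (int i = 0; i < n; i++)
        fork();                      /* every live process forks   */
    char b = 1;
    write(fd[1], &b, 1);             /* each process reports once  */
    if (getpid() != original)
        _exit(0);                    /* children leave after that  */
    close(fd[1]);                    /* parent: allow EOF on read  */
    int count = 0;
    char buf[64];
    ssize_t r;
    while ((r = read(fd[0], buf, sizeof buf)) > 0)
        count += (int)r;
    close(fd[0]);
    while (wait(NULL) > 0) ;         /* reap direct children       */
    return count - 1;                /* exclude the original       */
}
```

Calling count_children(5), as in this question, reports 31 children; the pipe reaches EOF only once every descendant has written its byte and exited.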
Question 86

A processor uses 36 bit physical addresses and 32 bit virtual addresses, with a page frame size of 4 Kbytes. Each page table entry is of size 4 bytes. A three level page table is used for virtual to physical address translation, where the virtual address is used as follows

    • Bits 30-31 are used to index into the first level page table
    • Bits 21-29 are used to index into the second level page table
    • Bits 12-20 are used to index into the third level page table, and
    • Bits 0-11 are used as offset within the page

The number of bits required for addressing the next level page table (or page frame) in the page table entry of the first, second and third level page tables are respectively.

A
20, 20 and 20
B
24, 24 and 24
C
24, 24 and 20
D
25, 25 and 24
       Operating-Systems       Virtual Memory       GATE 2008
Question 86 Explanation: 
Virtual address size = 32 bits
Physical address size = 36 bits
Physical memory size = 2^36 bytes
Page frame size = 4 KB = 2^12 bytes
No. of offset bits = 12
No. of bits required to address a physical page frame = 36 - 12 = 24
So a third-level page table entry needs 24 bits to address a page frame.
A second- or third-level page table is indexed by 9 bits of the virtual address and each entry is 4 bytes, so such a table occupies 2^9 × 4 = 2^11 bytes. Assuming page tables are aligned to their own size, there are 2^36 / 2^11 = 2^25 possible locations for one, so a first-level entry needs 25 bits to address a second-level table, and a second-level entry likewise needs 25 bits to address a third-level table.
Answer: (D) 25, 25 and 24.
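The same arithmetic can be packaged as a tiny helper (illustrative only; the name is mine). An entry must address an object aligned to its own size, so it needs the physical-address width minus log2 of that object's size:

```c
#include <assert.h>

/* Bits a page-table entry needs to address an aligned object:
 * physical address bits minus log2(object size in bytes). */
int entry_bits(int physical_bits, int log2_object_size) {
    return physical_bits - log2_object_size;
}
```

Second- and third-level tables occupy 2^11 bytes and page frames 2^12 bytes, so entry_bits(36, 11) = 25 for the first two levels and entry_bits(36, 12) = 24 for the third.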
Question 87

Group 1 contains some CPU scheduling algorithms and Group 2 contains some applications. Match entries in Group 1 to entries in Group 2.

     Group I                          Group II
(P) Gang Scheduling              (1) Guaranteed Scheduling
(Q) Rate Monotonic Scheduling    (2) Real-time Scheduling
(R) Fair Share Scheduling        (3) Thread Scheduling
A
P – 3 Q – 2 R – 1
B
P – 1 Q – 2 R – 3
C
P – 2 Q – 3 R – 1
D
P – 1 Q – 3 R – 2
       Operating-Systems       Match-the-Following       GATE 2007
Question 87 Explanation: 
⇒ Gang scheduling runs a set of related threads simultaneously on a parallel system, so it matches thread scheduling.
⇒ Rate-monotonic scheduling is used in real-time operating systems.
⇒ Fair-share scheduling distributes the CPU equally among users, which corresponds to guaranteed scheduling.
Question 88

Consider the following statements about user level threads and kernel level threads. Which one of the following statements is FALSE?

A
Context switch time is longer for kernel level threads than for user level threads.
B
User level threads do not need any hardware support.
C
Related kernel level threads can be scheduled on different processors in a multi-processor system.
D
Blocking one kernel level thread blocks all related threads.
       Operating-Systems       Threads       GATE 2007
Question 88 Explanation: 
A) True, because kernel-level threads are managed by the OS, and a context switch between them involves the kernel and its data structures.
B) True, because the kernel (and hence the hardware) is not involved in managing user-level threads.
C) True.
D) False: blocking one kernel-level thread does not block the related threads, because kernel-level threads are scheduled independently.
Question 89

An operating system uses Shortest Remaining Time first (SRT) process scheduling algorithm. Consider the arrival times and execution times for the following processes:

 
        Process     Execution time     Arrival time
          P1             20                0
          P2             25                15
          P3             10                30
          P4             15                45

What is the total waiting time for process P2?

A
5
B
15
C
40
D
55
       Operating-Systems       Process-Scheduling       GATE 2007
Question 89 Explanation: 

Waiting time for process P2 = TAT - Execution time
= Completion time - AT - ET
= 55 - 15 - 25
= 15
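A unit-step SRT simulation confirms the figure. This is a sketch with names of my own choosing; it assumes at most 8 processes and breaks ties in favour of the lowest index:

```c
#include <assert.h>

/* Shortest Remaining Time first, simulated one time unit at a
 * time. Returns the waiting time of process `which`:
 * completion - arrival - burst. Assumes n <= 8. */
int srt_waiting_time(const int *arrival, const int *burst, int n, int which) {
    int rem[8], finish[8], t = 0, done = 0;
    for (int i = 0; i < n; i++) rem[i] = burst[i];
    while (done < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)
            if (arrival[i] <= t && rem[i] > 0 &&
                (pick < 0 || rem[i] < rem[pick]))
                pick = i;                /* smallest remaining time */
        if (pick < 0) { t++; continue; } /* nothing arrived: idle   */
        rem[pick]--;
        t++;
        if (rem[pick] == 0) { finish[pick] = t; done++; }
    }
    return finish[which] - arrival[which] - burst[which];
}
```

With the arrival and execution times of this question, P2 (index 1) completes at t = 55, giving a waiting time of 55 - 15 - 25 = 15.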
Question 90

A virtual memory system uses First In First Out (FIFO) page replacement policy and allocates a fixed number of frames to a process. Consider the following statements:

P: Increasing the number of page frames allocated to a 
   process sometimes increases the page fault rate.
Q: Some programs do not exhibit locality of reference.

Which one of the following is TRUE?

A
Both P and Q are true, and Q is the reason for P
B
Both P and Q are true, but Q is not the reason for P
C
P is false, but Q is true
D
Both P and Q are false
       Operating-Systems       Page-Replacement       GATE 2007
Question 90 Explanation: 
Statement P:
FIFO suffers from Belady's anomaly: for certain reference strings, increasing the number of page frames increases the number of page faults. So P is true.
Statement Q:
Locality of reference, also known as the principle of locality, is the tendency of a processor to access the same set of memory locations repeatedly over a short period of time. Some programs indeed do not exhibit locality of reference, so Q is true.
Hence both P and Q are true, but Q is not the reason for P.
Question 91

A single processor system has three resource types X, Y and Z, which are shared by three processes. There are 5 units of each resource type. Consider the following scenario, where the column alloc denotes the number of units of each resource type allocated to each process, and the column request denotes the number of units of each resource type requested by a process in order to complete execution. Which of these processes will finish LAST?

 
    alloc      request
    X Y Z       X Y Z
P0  1 2 1       1 0 3
P1  2 0 1       0 1 2
P2  2 2 1       1 2 0
A
P0
B
P1
C
P2
D
None of the above, since the system is in a deadlock.
       Operating-Systems       Bankers-Algorithm       GATE 2007
Question 91 Explanation: 
Given that there are 5 units of each resource type.
Total allocated: X = 1+2+2 = 5, Y = 2+0+2 = 4, Z = 1+1+1 = 3, so Available = (0, 1, 2).
P1's request (0, 1, 2) can be satisfied; P1 finishes and releases its (2, 0, 1) allocation, so Available becomes (2, 1, 3).
P0's request (1, 0, 3) can now be satisfied; P0 finishes and releases its resources.
Finally P2's request is satisfied and it runs to completion.
The system is in a safe state with execution order P1 → P0 → P2, so P2 will execute last.
Question 92

Two processes, P1 and P2, need to access a critical section of code. Consider the following synchronization construct used by the processes:

  /* P1 */
while (true) {
  wants1 = true;
  while (wants2 == true);
  /* Critical
    Section */
  wants1=false;
}
/* Remainder section */       


/* P2 */
while (true) {
  wants2 = true;
  while (wants1==true);
  /* Critical
    Section */
  wants2 = false;
}
/* Remainder section */ 

Here, wants1 and wants2 are shared variables, which are initialized to false. Which one of the following statements is TRUE about the above construct?

A
It does not ensure mutual exclusion.
B
It does not ensure bounded waiting.
C
It requires that processes enter the critical section in strict alternation.
D
It does not prevent deadlocks, but ensures mutual exclusion.
       Operating-Systems       Process-Synchronization       GATE 2007
Question 92 Explanation: 
First of all it can be easily seen that mutual exclusion is satisfied.
Now, if in P1
wants1 = true ;
executed and preempted and now if in P2
wants2 = true ;
executed.
Then the deadlock situation will be created because both will fall into infinite loop.
Question 93

A process has been allocated 3 page frames. Assume that none of the pages of the process are available in the memory initially. The process makes the following sequence of page references (reference string): 1, 2, 1, 3, 7, 4, 5, 6, 3, 1

If optimal page replacement policy is used, how many page faults occur for the above reference string?

A
7
B
8
C
9
D
10
       Operating-Systems       Page-Replacement       GATE 2007
Question 93 Explanation: 
Given sequence is
1, 2, 1, 3, 7, 4, 5, 6, 3, 1
No. of frames = 3
Using Optimal page replacement,

Total 7 page faults.
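The trace can be cross-checked with a small optimal-replacement simulator (an illustrative sketch; names are mine, and it assumes few frames and a short reference string):

```c
#include <assert.h>

/* Page faults under optimal (Belady) replacement: on a fault with
 * all frames full, evict the page whose next use lies farthest in
 * the future (or that is never used again). Assumes nframes <= 8. */
int optimal_faults(const int *ref, int n, int nframes) {
    int frame[8], used = 0, faults = 0;
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frame[j] == ref[i]) { hit = 1; break; }
        if (hit) continue;
        faults++;
        if (used < nframes) { frame[used++] = ref[i]; continue; }
        int victim = 0, farthest = -1;
        for (int j = 0; j < used; j++) {
            int next = n;                 /* n means "never used again" */
            for (int k = i + 1; k < n; k++)
                if (ref[k] == frame[j]) { next = k; break; }
            if (next > farthest) { farthest = next; victim = j; }
        }
        frame[victim] = ref[i];
    }
    return faults;
}
```

For the reference string 1, 2, 1, 3, 7, 4, 5, 6, 3, 1 with 3 frames this yields 7 faults.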
Question 94

A process has been allocated 3 page frames. Assume that none of the pages of the process are available in the memory initially. The process makes the following sequence of page references (reference string): 1, 2, 1, 3, 7, 4, 5, 6, 3, 1

Least Recently Used (LRU) page replacement policy is a practical approximation to optimal page replacement. For the above reference string, how many more page faults occur with LRU than with the optimal page replacement policy?

A
0
B
1
C
2
D
3
       Operating-Systems       Page-Replacement       GATE 2007
Question 94 Explanation: 
Given sequence: 1, 2, 1, 3, 7, 4, 5, 6, 3, 1
No. of frames = 3 (using LRU)

No. of page faults with LRU = 9
⟶ In LRU we replace the least recently used page.
Using optimal page replacement
⟶ 1,2,1,3,7,4,5,6,3,1

No. of page faults with optimal = 7
Difference between LRU and optimal = 9 - 7 = 2
⟶ In optimal replacement we evict the page whose next use is farthest in the future.
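An LRU simulator in the same style (a sketch; names are mine) reproduces the 9 faults:

```c
#include <assert.h>

/* Page faults under LRU: on a fault with all frames full, evict
 * the page whose most recent use is oldest. Assumes nframes <= 8. */
int lru_faults(const int *ref, int n, int nframes) {
    int frame[8], last[8], used = 0, faults = 0;
    for (int i = 0; i < n; i++) {
        int hit = -1;
        for (int j = 0; j < used; j++)
            if (frame[j] == ref[i]) { hit = j; break; }
        if (hit >= 0) { last[hit] = i; continue; }  /* refresh use time */
        faults++;
        if (used < nframes) {
            frame[used] = ref[i]; last[used] = i; used++;
        } else {
            int victim = 0;
            for (int j = 1; j < used; j++)
                if (last[j] < last[victim]) victim = j;
            frame[victim] = ref[i]; last[victim] = i;
        }
    }
    return faults;
}
```

For 1, 2, 1, 3, 7, 4, 5, 6, 3, 1 with 3 frames this yields 9 faults, two more than optimal.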
Question 95

Consider three CPU-intensive processes, which require 10, 20 and 30 time units and arrive at times 0, 2 and 6, respectively. How many context switches are needed if the operating system implements a shortest remaining time first scheduling algorithm? Do not count the context switches at time zero and at the end.

A
1
B
2
C
3
D
4
       Operating-Systems       Process-Scheduling       GATE 2006
Question 95 Explanation: 

Total no. of context switches = 2.
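The two switches can be counted by simulation (a sketch with assumed names; as the question requires, the dispatch at time zero and the switch at the very end are not counted):

```c
#include <assert.h>

/* Context switches under unit-step SRTF, ignoring the initial
 * dispatch and the final completion. Assumes n <= 8. */
int srtf_context_switches(const int *arrival, const int *burst, int n) {
    int rem[8], t = 0, done = 0, running = -1, switches = 0;
    for (int i = 0; i < n; i++) rem[i] = burst[i];
    while (done < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)
            if (arrival[i] <= t && rem[i] > 0 &&
                (pick < 0 || rem[i] < rem[pick]))
                pick = i;
        if (pick < 0) { t++; continue; }
        if (running >= 0 && pick != running)
            switches++;              /* a different process dispatched */
        running = pick;
        rem[pick]--;
        t++;
        if (rem[pick] == 0) done++;
    }
    return switches;
}
```

With bursts 10, 20, 30 arriving at 0, 2, 6, the first process is never preempted, so the only switches are at t = 10 and t = 30.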
Question 96

The atomic fetch-and-set x, y instruction unconditionally sets the memory location x to 1 and fetches the old value of x in y without allowing any intervening access to the memory location x. Consider the following implementation of P and V functions on a binary semaphore S.

void P (binary_semaphore *s)
{
    unsigned y;
    unsigned *x = &(s->value);
    do
    {
        fetch-and-set x, y;
    }
    while (y);
}
void V (binary_semaphore *s)
{
    S->value = 0;
} 

Which one of the following is true?

A
The implementation may not work if context switching is disabled in P
B
Instead of using fetch-and-set, a pair of normal load/store can be used
C
The implementation of V is wrong
D
The code does not implement a binary semaphore
       Operating-Systems       Process-Synchronization       GATE 2006
Question 96 Explanation: 
A) Correct: if context switching is disabled in P, a process spinning in the do-while loop can never be preempted, so the process that would eventually execute V may never run and the spinning process loops forever. Context switching must remain enabled.
B) Wrong: with a normal load/store pair, more than one process could read S->value as 0 before either sets it, violating mutual exclusion; the atomicity of fetch-and-set is essential.
C) Wrong: V correctly resets S->value to 0.
D) Wrong: the code does implement a binary semaphore, since S->value only takes the values 0 and 1.
Question 97

A CPU generates 32-bit virtual addresses. The page size is 4 KB. The processor has a translation look-aside buffer (TLB) which can hold a total of 128 page table entries and is 4-way set associative. The minimum size of the TLB tag is:

A
11 bits
B
13 bits
C
15 bits
D
20 bits
       Operating-Systems       Virtual Memory       GATE 2006
Question 97 Explanation: 
Page size = 4 KB = 4 × 2^10 bytes = 2^12 bytes
Virtual address = 32 bits
No. of bits needed to address a virtual page = 32 - 12 = 20
The TLB holds 128 page table entries and is 4-way set associative
⇒ 128/4 = 32 = 2^5 sets
→ 5 bits are needed to select a set.
→ The size of the TLB tag = 20 - 5 = 15 bits
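The arithmetic generalises to a small helper (a sketch; names are mine, and all sizes must be powers of two):

```c
#include <assert.h>

/* log2 for exact powers of two */
static int lg2(unsigned long long x) {
    int b = 0;
    while (x > 1) { x >>= 1; b++; }
    return b;
}

/* Minimum TLB tag width: virtual page number bits minus the bits
 * used to select a set. */
int tlb_tag_bits(int va_bits, unsigned long long page_size,
                 int entries, int ways) {
    int vpn_bits = va_bits - lg2(page_size);
    int set_bits = lg2((unsigned long long)(entries / ways));
    return vpn_bits - set_bits;
}
```

For a 32-bit virtual address, 4 KB pages and a 128-entry 4-way TLB this gives 20 - 5 = 15 tag bits.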
Question 98

A computer system supports 32-bit virtual addresses as well as 32-bit physical addresses. Since the virtual address space is of the same size as the physical address space, the operating system designers decide to get rid of the virtual memory entirely. Which one of the following is true?

A
Efficient implementation of multi-user support is no longer possible
B
The processor cache organization can be made more efficient now
C
Hardware support for memory management is no longer needed
D
CPU scheduling can be made more efficient now
       Operating-Systems       Virtual Memory       GATE 2006
Question 98 Explanation: 
→ When designer decides to get rid of virtual memory entirely then hardware support is no longer needed.
→ Because special hardware support needed only for virtual memory.
Question 99

Consider three processes (process id 0, 1, 2 respectively) with compute time bursts 2, 4 and 8 time units. All processes arrive at time zero. Consider the longest remaining time first (LRTF) scheduling algorithm. In LRTF ties are broken by giving priority to the process with the lowest process id. The average turnaround time is:

A
13 units
B
14 units
C
15 units
D
16 units
       Operating-Systems       Process-Scheduling       GATE 2006
Question 99 Explanation: 
Algorithm: LRTF (Longest Remaining Time First)

Avg TAT = (12+13+14)/3 = 39/3 = 13 units
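A unit-step LRTF simulation gives the same turnaround times (a sketch; names are mine; ties go to the lowest process id, as the question specifies):

```c
#include <assert.h>

/* Longest Remaining Time First, one unit at a time; all processes
 * arrive at t = 0. Ties are broken toward the lowest id. Returns
 * the sum of turnaround times. Assumes n <= 8. */
int lrtf_total_tat(const int *burst, int n) {
    int rem[8], t = 0, done = 0, total = 0;
    for (int i = 0; i < n; i++) rem[i] = burst[i];
    while (done < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)
            if (rem[i] > 0 && (pick < 0 || rem[i] > rem[pick]))
                pick = i;            /* strict '>' keeps the lowest id */
        rem[pick]--;
        t++;
        if (rem[pick] == 0) { total += t; done++; }
    }
    return total;
}
```

For bursts 2, 4 and 8 the completion times are 12, 13 and 14, so the total turnaround is 39 and the average 39/3 = 13.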
Question 100

Consider three processes, all arriving at time zero, with total execution time of 10, 20 and 30 units, respectively. Each process spends the first 20% of execution time doing I/O, the next 70% of time doing computation, and the last 10% of time doing I/O again. The operating system uses a shortest remaining compute time first scheduling algorithm and schedules a new process either when the running process gets blocked on I/O or when the running process finishes its compute burst. Assume that all I/O operations can be overlapped as much as possible. For what percentage of time does the CPU remain idle?

A
0%
B
10.6%
C
30.0%
D
89.4%
       Operating-Systems       Process-Scheduling       GATE 2006
Question 100 Explanation: 

Total time needed to complete the execution = 47
Idle time = 2+3 = 5
Percentage of Idle time = 5/47 × 100 =10.6%
Question 101

Consider the following snapshot of a system running n processes. Process i is holding xi instances of a resource R, 1≤i≤n. Currently, all instances of R are occupied. Further, for all i, process i has placed a request for an additional yi instances while holding the xi instances it already has. There are exactly two processes p and q such that yp = yq = 0. Which one of the following can serve as a necessary condition to guarantee that the system is not approaching a deadlock?

A
min(x_p, x_q) < max{ y_k : k ≠ p, q }
B
x_p + x_q ≥ min{ y_k : k ≠ p, q }
C
max(x_p, x_q) > 1
D
min(x_p, x_q) > 1
       Operating-Systems       Deadlock       GATE 2006
Question 101 Explanation: 
A deadlock stops the execution of processes because the resources they need are unavailable: two or more processes each wait for another to release resources.
→ Since y_p = y_q = 0, processes p and q need no additional resources; they can run to completion and then release the x_p + x_q instances they hold.
→ If x_p + x_q ≥ min{ y_k : k ≠ p, q }, these released instances are enough to satisfy the smallest outstanding request among the remaining processes, so some process can always proceed and the system is not approaching a deadlock.
Hence option (B).
Question 102

Barrier is a synchronization construct where a set of processes synchronizes globally i.e. each process in the set arrives at the barrier and waits for all others to arrive and then all processes leave the barrier. Let the number of processes in the set be three and S be a binary semaphore with the usual P and V functions. Consider the following C implementation of a barrier with line numbers shown on left.

void barrier (void) {
1:   P(S);
2:   process_arrived++;
3.   V(S);
4:   while (process_arrived !=3);
5:   P(S);
6:   process_left++;
7:   if (process_left==3) {
8:      process_arrived = 0;
9:      process_left = 0;
10:  }
11:  V(S);
} 

The variables process_arrived and process_left are shared among all processes and are initialized to zero. In a concurrent program all the three processes call the barrier function when they need to synchronize globally.

The above implementation of barrier is incorrect. Which one of the following is true?

A
The barrier implementation is wrong due to the use of binary semaphore S
B
The barrier implementation may lead to a deadlock if two barriers in invocations are used in immediate succession
C
Lines 6 to 10 need not be inside a critical section
D
The barrier implementation is correct if there are only two processes instead of three
       Operating-Systems       Process-Synchronization       GATE 2006
Question 102 Explanation: 
If two barrier invocations are used in immediate succession, a fast process can leave the first barrier and re-enter it, incrementing process_arrived past 3 before the slower processes have observed it equal to 3 and before it has been reset. process_arrived may then never equal 3 again, so the processes spin forever at line 4.
Hence the implementation may lead to a deadlock.
Question 103

Barrier is a synchronization construct where a set of processes synchronizes globally i.e. each process in the set arrives at the barrier and waits for all others to arrive and then all processes leave the barrier. Let the number of processes in the set be three and S be a binary semaphore with the usual P and V functions. Consider the following C implementation of a barrier with line numbers shown on left.

void barrier (void) {
1:   P(S);
2:   process_arrived++;
3.   V(S);
4:   while (process_arrived !=3);
5:   P(S);
6:   process_left++;
7:   if (process_left==3) {
8:      process_arrived = 0;
9:      process_left = 0;
10:  }
11:  V(S);
} 

The variables process_arrived and process_left are shared among all processes and are initialized to zero. In a concurrent program all the three processes call the barrier function when they need to synchronize globally.

Which one of the following rectifies the problem in the implementation?

A
Lines 6 to 10 are simply replaced by process_arrived--
B
At the beginning of the barrier the first process to enter the barrier waits until process_arrived becomes zero before proceeding to execute P(S).
C
Context switch is disabled at the beginning of the barrier and re-enabled at the end.
D
The variable process_left is made private instead of shared.
       Operating-Systems       Process-Synchronization       GATE 2006
Question 103 Explanation: 
If the first process entering the barrier waits until process_arrived has been reset to zero before executing P(S), a process cannot re-enter the barrier early and push process_arrived past 3. The while loop at line 4 then always terminates, so option (B) rectifies the problem.
Question 104

Normally user programs are prevented from handling I/O directly by I/O instructions in them. For CPUs having explicit I/O instructions, such I/O protection is ensured by having the I/O instructions privileged. In a CPU with memory mapped I/O, there is no explicit I/O instruction. Which one of the following is true for a CPU with memory mapped I/O?

A
I/O protection is ensured by operating system routine(s)
B
I/O protection is ensured by a hardware trap
C
I/O protection is ensured during system configuration
D
I/O protection is not possible
       Operating-Systems       I/O-Handling       GATE 2005
Question 104 Explanation: 
With memory-mapped I/O, device registers appear as ordinary memory locations, so they can be protected by the same memory-protection mechanism the OS already manages: operating system routines simply never map the device addresses into a user process's address space. User programs must therefore go through system calls to perform I/O.
Question 105

Increasing the RAM of a computer typically improves performance because:

A
Virtual memory increases
B
Larger RAMs are faster
C
Fewer page faults occur
D
Fewer segmentation faults occur
       Operating-Systems       Page-Faults       GATE 2005
Question 105 Explanation: 
→ A larger RAM means more page frames can be allocated to processes.
→ With more frames resident, fewer memory references miss, so the number of page faults decreases and performance improves.
Question 106

Suppose n processes, P1, …., Pn share m identical resource units, which can be reserved and released one at a time. The maximum resource requirement of process Pi is si, where si>0. Which one of the following is a sufficient condition for ensuring that deadlock does not occur?

A
B
C
D
       Operating-Systems       Deadlock       GATE 2005
Question 106 Explanation: 
In the worst case, every process Pi holds si − 1 resources and is waiting for one more. Deadlock cannot occur if at least one extra unit is still available in this state, i.e. if Σ(si − 1) < m, equivalently Σ si < m + n. This is the required sufficient condition.
Question 107

Consider the following code fragment:

  if (fork() == 0)
      { a = a + 5; printf("%d, %d\n", a, &a); }
  else { a = a - 5; printf("%d, %d\n", a, &a); }

Let u, v be the values printed by the parent process, and x, y be the values printed by the child process. Which one of the following is TRUE?

A
u = x + 10 and v = y
B
u = x + 10 and v is ≠ y
C
u + 10 = x and v = y
D
u + 10 = x and v ≠ y
       Operating-Systems       System-Calls       GATE 2005
Question 107 Explanation: 
Let a be the value of the variable before the fork.
Parent: u = a − 5, and v is the address of a.
Child: x = a + 5, and y is the address of a.
So x = (a − 5) + 10 = u + 10, i.e. u + 10 = x.
fork() copies the parent's virtual address space, so the (virtual) address of a is identical in both processes: v = y.
Hence option (C).
Question 108

Consider the following statements with respect to user-level threads and kernel supported threads

    (i) context switch is faster with kernel-supported threads
    (ii) for user-level threads, a system call can block the entire process
    (iii) Kernel supported threads can be scheduled independently
    (iv) User level threads are transparent to the kernel

Which of the above statements are true?

A
(ii), (iii) and (iv) only
B
(ii) and (iii) only
C
(i) and (iii) only
D
(i) and (ii) only
       Operating-Systems       Threads       GATE 2004
Question 108 Explanation: 
→ (i) False: context switching between user-level threads is faster, since no kernel intervention is needed.
→ (ii) True: if one user-level thread makes a blocking system call, the kernel blocks the entire process, because it is unaware of the individual threads.
→ (iii) True: kernel-supported threads are known to the OS scheduler and can be scheduled independently.
→ (iv) True: user-level threads are created and managed entirely in user space, so they are transparent to the kernel.
Hence (ii), (iii) and (iv) only.
Question 109

Consider an operating system capable of loading and executing a single sequential user process at a time. The disk head scheduling algorithm used is First Come First Served (FCFS). If FCFS is replaced by Shortest Seek Time First (SSTF), claimed by the vendor to give 50% better benchmark results, what is the expected improvement in the I/O performance of user programs?

A
50%
B
40%
C
25%
D
0%
       Operating-Systems       Disk-Scheduling       GATE 2004
Question 109 Explanation: 
There is no improvement in the I/O performance.
→ The vendor's benchmark improvement does not carry over, because the benchmark assumes a queue of pending requests that SSTF can reorder.
→ With a single sequential user process there is at most one outstanding I/O request at any time, so SSTF has no choice to make and behaves exactly like FCFS.
Question 110

The minimum number of page frames that must be allocated to a running process in a virtual memory environment is determined by

A
the instruction set architecture
B
page size
C
physical memory size
D
number of processes in memory
       Operating-Systems       Virtual Memory       GATE 2004
Question 110 Explanation: 
→ The minimum number of frames per process is defined by the instruction set architecture: every page that a single instruction (together with its operands) can touch must be resident for that instruction to complete.
→ It does not depend on the page size, the physical memory size, or the number of processes in memory.
Question 111

Consider the following set of processes, with the arrival times and the CPU-burst times given in milliseconds.

  Process   Arrival Time    Burst Time
     P1          0              5
     P2          1              3
     P3          2              3
     P4          4              1 

What is the average turnaround time for these processes with the preemptive shortest remaining processing time first (SRPT) algorithm?

A
5.50
B
5.75
C
6.00
D
6.25
       Operating-Systems       Process-Scheduling       GATE 2004
Question 111 Explanation: 
Using the SRPT algorithm, the schedule is:
P1: 0–1, P2: 1–4, P4: 4–5, P3: 5–8, P1: 8–12
Turnaround times: P1 = 12, P2 = 3, P3 = 6, P4 = 1
Avg. TAT = (12+3+6+1)/4 = 22/4 = 5.50
Question 112

Consider a system with a two-level paging scheme in which a regular memory access takes 150 nanoseconds, and servicing a page fault takes 8 milliseconds. An average instruction takes 100 nanoseconds of CPU time, and two memory accesses. The TLB hit ratio is 90%, and the page fault rate is one in every 10,000 instructions. What is the effective average instruction execution time?

A
645 nanoseconds
B
1050 nanoseconds
C
1215 nanoseconds
D
2060 nanoseconds
       Operating-Systems       Virtual Memory       GATE 2004
Question 112 Explanation: 
Effective average instruction time = CPU time + 2 EMAT
= 100ns + 2EMAT
Now lets calculate EMAT,
EMAT = TLB access time + miss rate × (2 × 150ns) + 150ns + (1/10000) × 8ms
= 0 + 0.1 × 300ns + 150ns + 800ns
= 980ns
(The TLB access time is taken as negligible, and the page-fault penalty is charged per memory access.)
∴ Effective average instruction time,
= 100ns + 2 × 980ns
= 2060ns
Question 113

Consider two processes P1 and P2 accessing the shared variables X and Y protected by two binary semaphores SX and SY respectively, both initialized to 1. P and V denote the usual semaphone operators, where P decrements the semaphore value, and V increments the semaphore value. The pseudo-code of P1 and P2 is as follows:

P1 :
 While true do {
   L1 : ................
   L2 : ................
   X = X + 1;
   Y = Y - 1;
   V(SX);
   V(SY);             
 }
P2 :
 While true do {
   L3 : ................   
   L4 : ................
   Y = Y + 1;
   X = Y - 1;
   V(SY);
   V(SX);            
}

In order to avoid deadlock, the correct operators at L1, L2, L3 and L4 are respectively

A
P(Sy), P(Sx); P(Sx), P(Sy)
B
P(Sx), P(Sy); P(Sy), P(Sx)
C
P(Sx), P(Sx); P(Sy), P(Sy)
D
P(Sx), P(Sy); P(Sx), P(Sy)
       Operating-Systems       Process-Synchronization       GATE 2004
Question 113 Explanation: 
Option (D): both processes acquire the semaphores in the same order, SX first and then SY.
Whichever process executes P(SX) first can then also obtain SY, complete its critical section, and release both; the other process waits at P(SX) holding nothing, so a circular wait can never form.
Options (A) and (B) acquire the two semaphores in opposite orders in P1 and P2, which can deadlock: each process holds one semaphore and waits for the other.
Option (C) makes P1 execute P(SX) twice, so P1 blocks on its own second P operation.
So option (D) is the answer.
Question 114

A Unix-style i-node has 10 direct pointers and one single, one double and one triple indirect pointers. Disk block size is 1 Kbyte, disk block address is 32 bits, and 48-bit integers are used. What is the maximum possible file size?

A
2^24 bytes
B
2^32 bytes
C
2^34 bytes
D
2^48 bytes
       Operating-Systems       Disk-Scheduling       GATE 2004
Question 114 Explanation: 
Maximum size of file = 10 direct + 1 single indirect + 1 double indirect + 1 triple indirect
Each 1 KB block holds 2^10/4 = 2^8 block addresses, so:
= 10 + 2^8 + (2^8)^2 + (2^8)^3 ≈ 2^24 blocks
Size of each block = 2^10 bytes
Maximum file size = 2^24 × 2^10 = 2^34 bytes
Question 115

In a system with 32 bit virtual addresses and 1 KB page size, use of one-level page tables for virtual to physical address translation is not practical because of

A
the large amount of internal fragmentation
B
the large amount of external fragmentation
C
the large memory overhead in maintaining page tables
D
the large computation overhead in the translation process
       Operating-Systems       Virtual Memory       GATE 2003
Question 115 Explanation: 
Page size = 1KB = 2^10 bytes
Virtual address space = 2^32 bytes
No. of page table entries = 2^32 / 2^10 = 2^22 = 4M entries per process
At, say, 4 bytes per entry, that is 16 MB of page table per process — far too large a memory overhead to keep resident for every process.
Question 116

A uni-processor computer system only has two processes, both of which alternate 10ms CPU bursts with 90ms I/O bursts. Both the processes were created at nearly the same time. The I/O of both processes can proceed in parallel. Which of the following scheduling strategies will result in the least CPU utilization (over a long period of time) for this system?

A
First come first served scheduling
B
Shortest remaining time first scheduling
C
Static priority scheduling with different priorities for the two processes
D
Round robin scheduling with a time quantum of 5 ms
       Operating-Systems       Process-Scheduling       GATE 2003
Question 116 Explanation: 
We have two processes, each alternating a 10 ms CPU burst with a 90 ms I/O burst; their I/O can overlap.
(i) FCFS:
0–10: P1 on CPU, 10–20: P2 on CPU
10–100: P1 I/O, 20–110: P2 I/O (in parallel)
CPU utilization = 20/100 = 20% (the CPU is busy 20 ms in every 100 ms cycle)
(ii) SRTF: schedules P1 and P2 the same way as FCFS, so utilization = 20%
(iii) Static priority: again 20%; the lower-priority process simply always runs second
(iv) Round robin with TQ = 5 ms:
0–5: P1, 5–10: P2, 10–15: P1, 15–20: P2
P1's I/O runs 15–105 and P2's runs 20–110, so the cycle stretches to 105 ms
CPU utilization = 20/105 ≈ 19%, the least of the four
Question 117

A processor uses 2-level page tables for virtual to physical address translation. Page tables for both levels are stored in the main memory. Virtual and physical addresses are both 32 bits wide. The memory is byte addressable. For virtual to physical address translation, the 10 most significant bits of the virtual address are used as index into the first level page table while the next 10 bits are used as index into the second level page table. The 12 least significant bits of the virtual address are used as offset within the page. Assume that the page table entries in both levels of page tables are 4 bytes wide. Further, the processor has a translation look-aside buffer (TLB), with a hit rate of 96%. The TLB caches recently used virtual page numbers and the corresponding physical page numbers. The processor also has a physically addressed cache with a hit rate of 90%. Main memory access time is 10 ns, cache access time is 1 ns, and TLB access time is also 1 ns.

Assuming that no page faults occur, the average time taken to access a virtual address is approximately (to the nearest 0.5 ns)

A
1.5 ns
B
2 ns
C
3 ns
D
4 ns
       Operating-Systems       Virtual Memory       GATE 2003
Question 117 Explanation: 
Average access time = (TLB hit, cache hit) + (TLB hit, cache miss) + (TLB miss, cache hit) + (TLB miss, cache miss)
= (0.96 × 0.9 × 2) + (0.96 × 0.1 × 12) + (0.04 × 0.9 × 22) + (0.04 × 0.1 × 32)
= 1.728 + 1.152 + 0.792 + 0.128
= 3.8
(A TLB miss adds two 10 ns page-table accesses; a cache miss adds a 10 ns memory access.)
= 4 (approximately)
Question 118

A processor uses 2-level page tables for virtual to physical address translation. Page tables for both levels are stored in the main memory. Virtual and physical addresses are both 32 bits wide. The memory is byte addressable. For virtual to physical address translation, the 10 most significant bits of the virtual address are used as index into the first level page table while the next 10 bits are used as index into the second level page table. The 12 least significant bits of the virtual address are used as offset within the page. Assume that the page table entries in both levels of page tables are 4 bytes wide. Further, the processor has a translation look-aside buffer (TLB), with a hit rate of 96%. The TLB caches recently used virtual page numbers and the corresponding physical page numbers. The processor also has a physically addressed cache with a hit rate of 90%. Main memory access time is 10 ns, cache access time is 1 ns, and TLB access time is also 1 ns.

Suppose a process has only the following pages in its virtual address space: two contiguous code pages starting at virtual address 0x00000000, two contiguous data pages starting at virtual address 0×00400000, and a stack page starting at virtual address 0×FFFFF000. The amount of memory required for storing the page tables of this process is:

A
8 KB
B
12 KB
C
16 KB
D
20 KB
       Operating-Systems       Virtual Memory       GATE 2003
Question 118 Explanation: 
Breakup of given addresses into bit form:-
32 bits are broken up as 10 bits (first-level index) | 10 bits (second-level index) | 12 bits (offset)
First code page:
0x00000000 = 0000 0000 00 | 00 0000 0000 | 0000 0000 0000
So next code page will start from
0x00001000 = 0000 0000 00 | 00 0000 0001 | 0000 0000 0000
First data page:
0x00400000 = 0000 0000 01 | 00 0000 0000 | 0000 0000 0000
So next data page will start from
0x00401000 = 0000 0000 01 | 00 0000 0001 | 0000 0000 0000
Only one stack page:
0xFFFFF000 = 1111 1111 11 | 11 1111 1111 | 0000 0000 0000
The top 10 bits take only 3 distinct values — 0000 0000 00, 0000 0000 01 and 1111 1111 11 — so the single first-level page table has just 3 entries in use.
Each distinct first-level entry needs its own second-level page table, giving 3 second-level pages.
Hence we have 4 pages in total, and each page table fits in one page of size 2^12 = 4KB.
Therefore, memory required to store the page tables = 4 × 4KB = 16KB.
Question 119

Suppose we want to synchronize two concurrent processes P and Q using binary semaphores S and T. The code for the processes P and Q is shown below.

Process P:
while (1) {
W:
   print '0';
   print '0';
X:
}
	
Process Q:
while (1) {
Y:
   print '1';
   print '1';
Z:
}

Synchronization statements can be inserted only at points W, X, Y and Z.

Which of the following will always lead to an output staring with '001100110011'?

A
P(S) at W, V(S) at X, P(T) at Y, V(T) at Z, S and T initially 1
B
P(S) at W, V(T) at X, P(T) at Y, V(S) at Z, S initially 1, and T initially 0
C
P(S) at W, V(T) at X, P(T) at Y, V(S) at Z, S and T initially 1
D
P(S) at W, V(S) at X, P(T) at Y, V(T) at Z, S initially 1, and T initially 0
       Operating-Systems       Process-Synchronization       GATE 2003
Question 119 Explanation: 
Option (B) is correct.
At W: P(S), at X: V(T); at Y: P(T), at Z: V(S); S = 1 and T = 0 initially.
Since T starts at 0, Q blocks at Y until P has printed '00' and executed V(T); P then blocks at W until Q has printed '11' and executed V(S).
The two processes therefore alternate strictly, starting with P, and the output is always 001100110011…
Question 120

Suppose we want to synchronize two concurrent processes P and Q using binary semaphores S and T. The code for the processes P and Q is shown below.

Process P:
while (1) {
W:
   print '0';
   print '0';
X:
}
	
Process Q:
while (1) {
Y:
   print '1';
   print '1';
Z:
}

Synchronization statements can be inserted only at points W, X, Y and Z.

Which of the following will ensure that the output string never contains a substring of the form 01n0 or 10n1 where n is odd?

A
P(S) at W, V(S) at X, P(T) at Y, V(T) at Z, S and T initially 1
B
P(S) at W, V(T) at X, P(T) at Y, V(S) at Z, S and T initially 1
C
P(S) at W, V(S) at X, P(S) at Y, V(S) at Z, S initially 1
D
V(S) at W, V(T) at X, P(S) at Y, P(T) at Z, S and T initially 1
       Operating-Systems       Process-Synchronization       GATE 2003
Question 120 Explanation: 
A single semaphore S (option C) suffices: each process does P(S), prints its two identical characters, then V(S), so 0s and 1s are always emitted in blocks of two. Every maximal run of 0s or 1s therefore has even length, and substrings of the form 01^n0 or 10^n1 with n odd can never occur. The other options either allow interleaving within a pair or lead to deadlock.
Question 121

Which of the following scheduling algorithms is non-preemptive?

A
Round Robin
B
First-In First-Out
C
Multilevel Queue Scheduling
D
Multilevel Queue Scheduling with Feedback
       Operating-Systems       Process-Scheduling       GATE 2002
Question 121 Explanation: 
First-In First-Out scheduling is non-preemptive: whichever process enters the ready queue first is served first, and once scheduled it runs to completion (or until it blocks).
Question 122

The optimal page replacement algorithm will select the page that

A
Has not been used for the longest time in the past.
B
Will not be used for the longest time in the future.
C
Has been used least number of times.
D
Has been used most number of times.
       Operating-Systems       Page-Replacement       GATE 2002
Question 122 Explanation: 
The optimal page replacement algorithm selects the page that will not be used for the longest time in the future.
Question 123

Dynamic linking can cause security concerns because

A
Security is dynamic
B
The path for searching dynamic libraries is not known till runtime
C
Linking is insecure
D
Cryptographic procedures are not available for dynamic linking
       Operating-Systems       Linker       GATE 2002
Question 123 Explanation: 
With dynamic linking, the path used to search for dynamic libraries is not known until runtime, so a library other than the intended one could be loaded — a security concern.
Question 124

Which combination of the following features will suffice to characterize an OS as a multi-programmed OS?

    (a) More than one program may be loaded into main memory at the same time for execution.
    (b) If a program waits for certain events such as I/O, another program is immediately scheduled for execution.
    (c) If the execution of program terminates, another program is immediately scheduled for execution.
A
A
B
A and B
C
A and C
D
A, B and C
       Operating-Systems       Multi-Programming       GATE 2002
Question 124 Explanation: 
Statement (c) holds for a uniprogrammed OS as well, so it does not characterize multiprogramming. Statements (a) and (b) together suffice: multiple programs reside in memory, and the CPU is switched to another program whenever the current one waits for I/O.
Question 125

In the index allocation scheme of blocks to a file, the maximum possible size of the file depends on

A
the size of the blocks, and the size of the address of the blocks.
B
the number of blocks used for the index, and the size of the blocks.
C
the size of the blocks, the number of blocks used for the index, and the size of the address of the blocks.
D
None of the above
       Operating-Systems       File-System       GATE 2002
Question 125 Explanation: 
No. of blocks addressable by one index block (A) = size of block / size of block address.
No. of block addresses available for one file (B) = no. of blocks used for the index × A.
Maximum file size = B × size of block.
Hence the maximum file size depends on all three quantities: block size, number of index blocks, and block-address size.
Question 126

(a) Draw the process state transition diagram of an OS in which (i) each process is in one of the five states: created, ready, running, blocked (i.e. sleep or wait), or terminated, and (ii) only non-preemptive scheduling is used by the OS. Label the transitions appropriately.

(b) The functionality of atomic TEST-AND-SET assembly language instruction is given by the following C function.

 int TEST-AND-SET (int *x)
 {
    int y;
    A1:y=*x;
    A2:*x=1;
    A3:return y;
 } 

(i) Complete the following C functions for implementing code for entering and leaving critical sections based on the above TEST-AND-SET instruction.

 int mutex=0;
 void enter-cs()
 {
 while (…………………………………);
 }
void leave-cs()
 {
 …………………………………..;
 } 

(ii) Is the above solution to the critical section problem deadlock free and starvation-free?

(iii) For the above solution, show by an example that mutual exclusion is not ensured if TEST-AND-SET instruction is not atomic.

A
Theory Explanation is given below.
       Operating-Systems       Descriptive       GATE 2002
Question 127

A computer system uses 32-bit virtual address, and 32-bit physical address. The physical memory is byte addressable, and the page size is 4 kbytes. It is decided to use two level page tables to translate from virtual address to physical address. Equal number of bits should be used for indexing first level and second level page table, and the size of each page table entry is 4 bytes.

    (a) Give a diagram showing how a virtual address would be translated to a physical address.
    (b) What is the number of page table entries that can be contained in each page?
    (c) How many bits are available for storing protection and other information in each page table entry?
A
Theory Explanation is given below.
       Operating-Systems       Descriptive       GATE 2002
Question 128

The following solution to the single producer single consumer problem uses semaphores for synchronization.

 #define BUFFSIZE 100
 buffer buf[BUFFSIZE];
 int first=last=0;
 semaphore b_full=0;
 semaphore b_empty=BUFFSIZE;
 
void producer()
 {
 while (1) {
     produce an item;
     p1: …………………..;
     put the item into buff (first);
     first=(first+1)%BUFFSIZE;
     p2: …………………..;
        }
 }
 void consumer()
 { 
while (1) {
             c1:……………………..
             take the item from buf[last];
             last=(last+1)%BUFFSIZE;
             c2: ……………………..;
             consume the item;
          }
 } 
    (a) Complete the dotted part of the above solution.
    (b) Using another semaphore variable, insert one line statement each immediately after p1, immediately before p2, immediately after c1, and immediately before c2 so that the program works correctly for multiple procedures and consumers.
A
Theory Explanation is given below.
       Operating-Systems       Descriptive       GATE 2002
Question 129

Which of the following statements is false?

A
Virtual memory implements the translation of a program’s address space into physical memory address space
B
Virtual memory allows each program to exceed the size of the primary memory
C
Virtual memory increases the degree of multiprogramming
D
Virtual memory reduces the context switching overhead
       Operating-Systems       Virtual Memory       GATE 2001
Question 129 Explanation: 
With virtual memory, a context switch carries extra overhead (switching address spaces, flushing the TLB); virtual memory does not reduce it. Hence (D) is false.
Question 130

Consider a set of n tasks with known runtimes r1, r2, ..., rn to be run on a uniprocessor machine. Which of the following processor scheduling algorithms will result in the maximum throughput?

A
Round-Robin
B
Shortest-Job-First
C
Highest-Response-Ratio-Next
D
First-Come-First-Served
       Operating-Systems       Process-Scheduling       GATE 2001
Question 130 Explanation: 
Throughput is the number of tasks completed per unit time. Shortest-Job-First finishes the short jobs earliest, so at any point in time it has completed at least as many tasks as any other policy, maximizing throughput.
Question 131

Consider a virtual memory system with FIFO page replacement policy. For an arbitrary page access pattern, increasing the number of page frames in main memory will

A
always decrease the number of page faults
B
always increase the number of page faults
C
sometimes increase the number of page faults
D
never affect the number of page faults
       Operating-Systems       Page-Replacement       GATE 2001
Question 131 Explanation: 
This is Belady's anomaly: the phenomenon where increasing the number of page frames results in an increase in the number of page faults for certain memory access patterns.
It is exhibited by page replacement algorithms that are not stack algorithms, such as FIFO, the second chance algorithm and random page replacement.
Hence increasing the frames will sometimes — not always — increase the number of page faults.
Question 132

Which of the following does not interrupt a running process?

A
A device
B
Timer
C
Scheduler process
D
Power failure
       Operating-Systems       Process       GATE 2001
Question 132 Explanation: 
The scheduler process does not interrupt a running process; it only selects which process runs next (long-term, medium-term and short-term scheduling). Devices, the timer and a power failure all raise interrupts.
Question 133

Consider a machine with 64 MB physical memory and a 32-bit virtual address space. If the page size is 4 KB, what is the approximate size of the page table?

A
16 MB
B
8 MB
C
2 MB
D
24 MB
       Operating-Systems       Process       GATE 2001
Question 133 Explanation: 
No. of entries in page table = 2^32 / 2^12 = 2^20
No. of frames = 2^26 / 2^12 = 2^14, so a frame number needs 14 bits ≈ 2 bytes per entry
Size of page table = 2^20 × 2 bytes = 2 MB
Question 134

Consider Peterson’s algorithm for mutual exclusion between two concurrent processes i and j. The program executed by process is shown below.

   repeat   
      flag[i] = true; 
      turn = j; 
      while (P) do no-op; 
      Enter critical section, perform actions, then exit critical 
      section 
      flag[i] = false; 
      Perform other non-critical section actions. 
   until false; 

For the program to guarantee mutual exclusion, the predicate P in the while loop should be.

A
flag[j]=true and turn=i
B
flag[j]=true and turn=j
C
flag[i]=true and turn=j
D
flag[i]=true and turn=i
       Operating-Systems       Process-Synchronization       GATE 2001
Question 134 Explanation: 
To guarantee mutual exclusion, process i must busy-wait while the other process is interested and it is the other's turn, i.e. while flag[j] = true and turn = j. Hence option (B).
Question 135

Two concurrent processes P1 and P2 want to use two resources R1 and R2 in a mutually exclusive manner. Initially, R1 and R2 are free. The programs executed by the two processes are given below.

Program for P1:
                    S1: While (R1 is busy) do no-op;
                    S2: Set R1 ← busy;
                    S3: While (R2 is busy) do no-op;
                    S4: Set R2 ← busy;
                    S5: Use R1 and R2;
                    S6: Set R1 ← free;
                    S7: Set R2 ← free;
Program for P2:
                    Q1: While (R2 is busy) do no-op;
                    Q2: Set R2 ← busy;
                    Q3: While (R1 is busy) do no-op;
                    Q4: Set R1 ← busy;
                    Q5: Use R1 and R2;
                    Q6: Set R2 ← free;
                    Q7: Set R1 ← free; 

(a) Is mutual exclusion guaranteed for R1 and R2? If not, show a possible interleaving of the statements of P1 and P2 such that mutual exclusion is violated (i.e., both P1 and P2 use R1 or R2 at the same time).
(b) Can deadlock occur in the above program? If yes, show a possible interleaving of the statements of P1 and P2 leading to deadlock.
(c) Exchange the statements Q1 and Q3 and statements Q2 and Q4. Is mutual exclusion guaranteed now? Can deadlock occur?

A
Theory Explanation is given below.
       Operating-Systems       Descriptive       GATE 2001
Question 136

Which of the following need not necessarily be saved on a context switch between processes?

A
General purpose registers
B
Translation look-aside buffer
C
Program counter
D
All of the above
       Operating-Systems       Context-Switching       GATE 2000
Question 136 Explanation: 
We don't need to save the TLB or cache to ensure correct program resumption; they only affect performance. But the program counter, stack and general purpose registers must be saved, as otherwise the program cannot resume.
Question 137

Let m[0] … m[4] be mutexes (binary semaphores) and P[0] … P[4] be processes. Suppose each process P[i] executes the following:

  wait (m[i]); wait (m[(i+1) mod 4]);
  ------
  release (m[i]); release (m[(i+1) mod 4]);

This could cause:

A
Thrashing
B
Deadlock
C
Starvation, but not deadlock
D
None of the above
       Operating-Systems       Process-Synchronization       GATE 2000
Question 137 Explanation: 
Each process P[i] acquires m[i] and then waits for m[(i+1) mod 4]. If P[0], P[1], P[2] and P[3] each acquire their first mutex, every one of them then waits for a mutex held by the next process in the cycle — a circular wait. Hence this could cause deadlock.
Question 138

A graphics card has on board memory of 1MB. Which of the following modes can the card not support?

A
1600 × 400 resolution with 256 colours on a 17 inch monitor
B
1600 × 400 resolution with 16 million colours on a 14 inch monitor
C
800 × 400 resolution with 16 million colours on a 17 inch monitor
D
800 × 800 resolution with 256 colours on a 14 inch monitor
       Operating-Systems       Graphics       GATE 2000
Question 138 Explanation: 
For every option we calculate the frame-buffer memory needed (the monitor size is irrelevant).
Option A: 1600 × 400 = 640,000 pixels; 256 colours = 8 bits = 1 byte/pixel ⇒ 640,000 bytes ≈ 625 KB (fits in 1 MB)
Option B: 1600 × 400 = 640,000 pixels; 16 million colours = 24 bits = 3 bytes/pixel ⇒ 1,920,000 bytes ≈ 1875 KB (does not fit)
Option C: 800 × 400 = 320,000 pixels; 3 bytes/pixel ⇒ 960,000 bytes ≈ 938 KB (fits)
Option D: 800 × 800 = 640,000 pixels; 1 byte/pixel ⇒ 640,000 bytes ≈ 625 KB (fits)
Question 139

Suppose the time to service a page fault is on the average 10 milliseconds, while a memory access takes 1 microsecond. Then a 99.99% hit ratio results in average memory access time of

A
1.9999 milliseconds
B
1 millisecond
C
9.999 microseconds
D
1.9999 microseconds
       Operating-Systems       Virtual Memory       GATE 2000
Question 139 Explanation: 
Average memory access time = (P*t1) + [(1-P)t2]
= (0.9999*1) + [(1-0.9999) *10000]
= (0.9999) + (0.0001 * 10000)
= 0.9999 + 1
= 1.9999 microseconds
Question 140

Which of the following is NOT a valid deadlock prevention scheme?

A
Release all resources before requesting a new resource
B
Number the resources uniquely and never request a lower numbered resource than the last one requested
C
Never request a resource after releasing any resource
D
Request and all required resources be allocated before execution
       Operating-Systems       Deadlock       GATE 2000
Question 140 Explanation: 
Option (C) is not a valid prevention scheme: forbidding requests after a release does not break any of the four deadlock conditions. The other options do — (A) and (D) break hold-and-wait, and (B) (resource ordering) breaks circular wait.
Question 141

(a) Fill in the boxes below to get a solution for the readers-writers problem, using a single binary semaphore, mutex (initialized to 1) and busy waiting. Write the box numbers (1, 2 and 3), and their contents in your answer book.

     int R = 0, W = 0;
     Reader () {
 L1:        wait (mutex);
            If (W ==0) {
                R = R +1; 
                ▭______(1)
     }
     else {
                ▭______(2)
                goto L1;
                }
                …./* do the read */
                wait (mutex)
                R = R – 1;
                signal (mutex);
          }
          Writer () {
 L2:            wait(mutex);
                If (▭) {______(3)
                    signal (mutex);
                    goto L2;
                }
                W=1;
                signal (mutex);
                …./*do the write*/
                wait(mutex)
                W = 0;
                signal (mutex); 

(b) Can the above solution lead to starvation of writers?

A
Theory Explanation is given below.
       Operating-Systems       Process-Synchronization       GATE 2000
Question 141 Explanation: 
(a) (1) signal(mutex)
(2) signal(mutex)
(3) R ≥ 1 || W == 1
(b) Yes, writers can starve: as long as new readers keep arriving before R drops to 0, the writer's condition in box (3) remains true and it loops at L2 forever.
Question 142

Listed below are some operating system abstractions (in the left column) and the hardware components or mechanism (in the right column) that they are abstractions of. Which of the following matching of pairs is correct?

    A. Thread                      1. Interrupt
    B. Virtual address space       2. Memory
    C. File system                 3. CPU
    D. Signal                      4. Disk 
A
(A) – 2 (B) – 4 (C) – 3 (D) - 1
B
(A) – 1 (B) – 2 (C) – 3 (D) – 4
C
(A) – 3 (B) – 2 (C) – 4 (D) - 1
D
(A) – 4 (B) – 1 (C) – 2 (D) – 3
       Operating-Systems       Match-the-Following       GATE 1999
Question 142 Explanation: 

⇒ A thread is an abstraction of the CPU.
⇒ A virtual address space is an abstraction of memory.
⇒ A file system is an abstraction of the disk.
⇒ A signal is an abstraction of an interrupt.
Hence (A)–3, (B)–2, (C)–4, (D)–1.
Question 143

Which of the following disk scheduling strategies is likely to give the best through put?

A
Farthest cylinder next
B
Nearest cylinder next
C
First come first served
D
Elevator algorithm
       Operating-Systems       Disk-Scheduling       GATE 1999
Question 143 Explanation: 
Farthest cylinder next – longest seek time first
Nearest cylinder next – SSTF
First come first served – FCFS
Elevator – SCAN
SSTF always serves the request on the nearest cylinder first, which minimizes head movement and therefore gives the best throughput.
Question 144

System calls are usually invoked by using

A
a software interrupt
B
polling
C
an indirect jump
D
a privileged instruction
       Operating-Systems       System-Calls       GATE 1999
Question 144 Explanation: 
System calls are usually invoked via a software interrupt (trap): the instruction switches the CPU from user mode to kernel mode and transfers control to the OS's system-call handler.
Question 145

A  multi-user,  multi-processing  operating  system  cannot  be  implemented  on hardware that does not support

A
Address translation
B
DMA for disk transfer
C
At least two modes of CPU execution (privileged and non-privileged)
D
Demand paging
E
Both A and C
       Operating-Systems       Virtual Memory       GATE 1999
Question 145 Explanation: 
Address translation and at least two modes of CPU execution (privileged and non-privileged) are needed to implement a multi-user, multi-processing operating system: address translation provides memory protection, which ensures that one process does not interfere with another, and the privileged mode ensures that user programs cannot execute instructions reserved for the OS.
Question 146

Which of the following is/are advantage of virtual memory?

A
Faster access to memory on an average.
B
Processes can be given protected address spaces.
C
Linker can assign addresses independent of where the program will be loaded in physical memory.
D
Programs larger than the physical memory size can be run.
E
Both B and D
       Operating-Systems       Virtual Mem       GATE 1999
Question 146 Explanation: 
A) False. Virtual memory requires address translation on every access, which makes the average access slower, not faster.
B) True. Address translation gives each process a protected address space, so one process cannot interfere with another.
C) False. Load-independent addresses are provided by relocation at load time, not by virtual memory as such.
D) True. Only the currently needed pages must be resident, so programs larger than physical memory can run.
Question 147

Which of the following actions is/are typically not performed by the operating system when switching context from process A to process B?

A
Saving current register values and restoring saved register values for process B.
B
Changing address translation tables.
C
Swapping out the memory image of process A to the disk.
D
Invalidating the translation look-aside buffer.
       Operating-Systems       Context-Switching       GATE 1999
Question 147 Explanation: 
A) True.
B) True.
C) False, because swapping is done when the process is suspended and not during context switching.
D) True, Invalidation of TLB is necessary because, if the TLB is not invalidated then the new process might end up using the translation of old process. Note that Invalidation of TLB is necessary but saving and reuse of TLB is not necessary.
Question 148

Consider the following program in a language that has dynamic scoping.

   var x: real;
   procedure show:
       begin print(x);end;
   procedure small;
       var x: real;
           begin x: = 0.125; show; end;
   begin x:=0.25
       show; small
       end. 

Then the output of the program is:

A
0.125 0.125
B
0.25 0.25
C
0.25 0.125
D
0.125 0.25
       Operating-Systems       Dynamic-Scoping       GATE 1999
Question 148 Explanation: 
Note: Out of syllabus.
Question 149

A linker reads four modules whose lengths are 200, 800, 600 and 500 words, respectively. If they are loaded in that order, what are the relocation constants?

A
0, 200, 500, 600
B
0, 200, 1000, 1600
C
200, 500, 600, 800
D
200, 700, 1300, 2100
       Operating-Systems       Linker       GATE 1998
Question 149 Explanation: 
First module loaded starting at address 0. Size is 200. Hence it will occupy first 200 address, last address being 199.
Now 2nd will start loading at 200, since size is 800, so last address is 999.
Now 3rd module will start loading at 1000, since size is 600. So last address is 1599.
Now 4th module will start loading at 1600 and go till 2099.
Question 150

Which of the following is an example of a spooled device?

A
The terminal used to enter the input data for the C program being executed
B
An output device used to print the output of a number of jobs
C
The secondary memory device in a virtual storage system
D
The swapping area on a disk used by the swapper
       Operating-Systems       Deadlock-Prevention       GATE 1998
Question 150 Explanation: 
Spooling is a technique in which an intermediate device, such as a disk, is interposed between a process and a low-speed I/O device.
For example, if a process attempts to print a document while the printer is busy printing another, the process, instead of waiting for the printer to become available, writes its output to disk. When the printer becomes available, the data on disk is printed.
Spooling allows a process to request an operation from a peripheral device without requiring that the device be ready to service the request.
Question 151

When the result of a computation depends on the speed of the processes involved there is said to be

A
cycle stealing
B
a race condition
C
a time lock
D
a deadlock
       Operating-Systems       Process-Synchronization       GATE 1998
Question 151 Explanation: 
When the result depends on the ordering of processes, it is called a race condition.
The speed of the processes determines their ordering, hence the name.
Question 152

A counting semaphore was initialized to 10. Then 6P (wait) operations and 4V (signal) operations were completed on this semaphore. The resulting value of the semaphore is

A
0
B
8
C
10
D
12
       Operating-Systems       Process-Synchronization       GATE 1998
Question 152 Explanation: 
Let the semaphore be S which is initially 10.
S = 10
Now 6 P operations and 4 V operations were completed on this semaphore. So the final value of S will be
S = 10 - 6 + 4 = 8
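The arithmetic generalizes: the final value of a counting semaphore is its initial value minus the number of completed P operations plus the number of completed V operations. A minimal sketch (the function name is my own):

```python
# Final value of a counting semaphore after completed P (wait) and
# V (signal) operations: each P decrements, each V increments.
def semaphore_value(initial, p_ops, v_ops):
    return initial - p_ops + v_ops

print(semaphore_value(10, 6, 4))  # 8
```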
Question 153

A computer has six tape drives, with n processes competing for them. Each process may need two drives. What is the maximum value of n for the system to be deadlock free?

A
6
B
5
C
4
D
3
       Operating-Systems       Deadlock       GATE 1998
Question 153 Explanation: 
Each process needs 2 drives. Deadlock is possible when every process holds one drive and waits for a second: 6 processes holding one drive each leaves none free. With one process fewer, 5 processes hold at most 5 of the 6 drives, so some process can always obtain its second drive and finish.
So the maximum number of processes for the system to be deadlock free is 5.
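The same reasoning gives a general bound: with m identical drives and each process needing up to k, deadlock is impossible as long as n(k-1)+1 ≤ m, i.e., n ≤ (m-1)/(k-1). A small sketch (helper name is my own):

```python
# Largest n such that n processes, each needing up to k of m identical
# resources, can never deadlock: deadlock-free iff n*(k-1) + 1 <= m.
def max_deadlock_free(m, k):
    return (m - 1) // (k - 1)

print(max_deadlock_free(6, 2))  # 5
```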
Question 154

The overlay tree for a program is as shown below:

What will be the size of the partition (in physical memory) required to load (and run) this program?

A
12 KB
B
14 KB
C
10 KB
D
8 KB
       Operating-Systems       Overlays       GATE 1998
Question 154 Explanation: 
To enable a process which is larger than the amount of memory allocated to it, we can use overlays. The idea of overlays is to keep in memory only those instructions and data that are needed at any given time. When other instructions are needed, they are loaded into the space previously occupied by instructions that are no longer needed.
For the above program, maximum memory will be required when running code portion present at leaves.
Maximum requirement
= MAX(12, 14, 10, 14)
= 14
Question 155

Consider n processes sharing the CPU in a round-robin fashion. Assuming that each process switch takes s seconds, what must be the quantum size q such that the overhead resulting from process switching is minimized but at the same time each process is guaranteed to get its turn at the CPU at least every t seconds?

A
q ≤ (t-ns)/(n-1)
B
q ≥ (t-ns)/(n-1)
C
q ≤ (t-ns)/(n+1)
D
q ≥ (t-ns)/(n+1)
       Operating-Systems       Process-Scheduling       GATE 1998
Question 155 Explanation: 
In the worst case, after a process finishes its quantum it must wait for one context switch plus a quantum-and-switch for each of the other n-1 processes before running again, i.e., at most s + (n-1)(q+s) = (n-1)q + ns seconds between successive turns. Requiring this to be at most t gives
(n-1)q + ns ≤ t ⟹ q ≤ (t-ns)/(n-1).
Question 156

If an instruction takes i microseconds and a page fault takes an additional j microseconds, the effective instruction time if on the average a page fault occurs every k instruction is:

A
i + j/k
B
i + j*k
C
(i+j)/k
D
(i+j)*k
       Operating-Systems       Virtual Memory       GATE 1998
Question 156 Explanation: 
Effective memory access time
= service time + page fault rate * page fault service time
= i + 1/k * j
= i + j/k
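The formula is plain expected-value arithmetic: one fault every k instructions spreads the j-microsecond fault cost over k instructions. As a quick sketch (function name is my own):

```python
# Effective instruction time: base time i plus the page-fault cost j
# amortized over the k instructions between successive faults.
def effective_instruction_time(i, j, k):
    return i + j / k

print(effective_instruction_time(1.0, 10.0, 5))  # 3.0
```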
Question 157

Locality of reference implies that the page reference being made by a process

A
will always be to the page used in the previous page reference
B
is likely to be to one of the pages used in the last few page references
C
will always be to one of the pages existing in memory
D
will always lead to a page fault
       Operating-Systems       Locality-of-Reference       GATE 1997
Question 157 Explanation: 
Locality of reference is also called the principle of locality. There are three kinds of locality:
(1) Temporal locality
(2) Spatial locality
(3) Sequential locality
→ In programs, related data are stored in consecutive locations, and the locations inside loops are referenced repeatedly.
Question 158

The correct matching for the following pairs is

(A) Disk Scheduling        (1) Round robin
(B) Batch Processing       (2) SCAN
(C) Time sharing           (3) LIFO
(D) Interrupt processing   (4) FIFO  
A
A – 3 B – 4 C – 2 D – 1
B
A – 4 B – 3 C – 2 D – 1
C
A – 2 B – 4 C – 1 D – 3
D
A – 3 B – 4 C – 3 D – 2
       Operating-Systems       Match-the-Following       GATE 1997
Question 158 Explanation: 
Disk scheduling - SCAN
Batch processing - FIFO
Time sharing - Round Robin
Interrupt processing - LIFO
Question 159

I/O redirection

A
implies changing the name of a file
B
can be employed to use an existing file as input file for a program
C
implies connection 2 programs through a pipe
D
None of the above
       Operating-Systems       File-System       GATE 1997
Question 159 Explanation: 
Redirection means capturing the output of a file, command, program or code block within a script and sending it as input to another file, command, program or script. In particular, an existing file can be used as the input of a program.
Question 160

Thrashing

A
reduces page I/O
B
decreases the degree of multiprogramming
C
implies excessive page I/O
D
improve the system performance
       Operating-Systems       Thrashing       GATE 1997
Question 160 Explanation: 
Thrashing implies excessive page I/O.
Question 161

Dirty bit for a page in a page table

A
helps avoid unnecessary writes on a paging device
B
helps maintain LRU information
C
allows only read on a page
D
None of the above
       Operating-Systems       Virtual memory       GATE 1997
Question 161 Explanation: 
The dirty bit allows for a performance optimization i.e., Dirty bit for a page in a page table helps to avoid unnecessary writes on a paging device.
Question 162

An operating system contains 3 user processes each requiring 2 units of resource R. The minimum number of units of r such that no deadlocks will ever arise is

A
3
B
5
C
4
D
6
       Operating-Systems       Deadlock       GATE 1997
Question 162 Explanation: 
For deadlock to be possible, give each process one resource less than it requires; since each of the 3 processes requires 2 units, give each 1 unit. Thus with at most 3 units a deadlock can occur. Adding one more unit guarantees that some process can acquire its second unit and finish, releasing its resources. So a minimum of 4 units ensures deadlock will never occur.
Question 163

Each Process Pi, i = 1.......9 is coded as follows

    repeat 
           P(mutex)
           {Critical section}
           V(mutex)
    forever   

The code for P10 is identical except it uses V(mutex) in place of P(mutex). What is the largest number of processes that can be inside the critical section at any moment?

A
1
B
2
C
3
D
None of the above
       Operating-Systems       Process-Synchronization       GATE 1997
Question 163 Explanation: 
Since both codes (for P1 to P9, and for P10) can be executed any number of times, and the code for P10 is
repeat
       V(mutex)
       {Critical section}
       V(mutex)
forever
Say P1 is in the CS.
Then P10 comes and executes the CS (up on mutex).
Now P2 comes (down on mutex).
Now P10 moves out of the CS (the binary semaphore becomes 1 again).
Now P3 comes (down on mutex).
Now P10 comes (up on mutex).
Now P4 comes (down on mutex).

So, by taking P10 'in' and 'out' of the CS repeatedly, all 10 processes can be inside the CS at the same time using a binary semaphore only.
Question 164

The process state transition diagram in below figure is representative of

A
a batch operating system
B
an operating system with a preemptive scheduler
C
an operating system with a non-preemptive scheduler
D
a uni-programmed operating system
       Operating-Systems       Process-Scheduling       GATE 1996
Question 164 Explanation: 
Transaction from running → ready present.
So this is preemptive.
Question 165

A critical section is a program segment

A
which should run in a certain specified amount of time
B
which avoids deadlocks
C
where shared resources are accessed
D
which must be enclosed by a pair of semaphore operations, P and V
       Operating-Systems       Process-Synchronization       GATE 1996
Question 165 Explanation: 
In CS, share resources are accessed.
Question 166

Which of the following is an example of spooled device?

A
A line printer used to print the output of a number of jobs.
B
A terminal used to enter input data to a running program.
C
A secondary storage device in a virtual memory system.
D
A graphic display device.
       Operating-Systems       Deadlock-Prevention       GATE 1996
Question 166 Explanation: 
Example of spooled device is a line printer used to print the output of a number of jobs.
Question 167

The correct matching for the following pairs is

         A. Activation record	   1. Linking loader
         B. Location counter	   2. Garbage collection
         C. Reference counts	   3. Subroutine call
         D. Address relocation	   4. Assembler 
A
A – 3 B – 4 C – 1 D – 2
B
A – 4 B – 3 C – 1 D – 2
C
A – 4 B – 3 C – 2 D – 1
D
A – 3 B – 4 C – 2 D – 1
       Operating-Systems       Match-the-Following       GATE 1996
Question 167 Explanation: 
Each time a subroutine is called its activation record is created.
An assembler uses location counter value to give address to each instruction which is needed for relative addressing as well as for jump labels.
The reference count is used by a garbage collector to reclaim memory whose reference count becomes 0.
Linker loader is a loader which can load several compiled codes and link them together into a single executable. Thus it needs to do relocation of the object codes.
Question 168

A 1000 Kbyte memory is managed using variable partitions with no compaction. It currently has two partitions of sizes 200 Kbytes and 260 Kbytes respectively. The smallest allocation request in Kbytes that could be denied is for

A
151
B
181
C
231
D
541
       Operating-Systems       Virtual Memory       GATE 1996
Question 168 Explanation: 
200 KB and 260 KB are already held by other processes, leaving 1000 - 200 - 260 = 540 KB free. For the worst case, the 540 KB of free space is split into three equal holes of 180 KB each around the two allocated partitions. Any request of up to 180 KB then fits in some hole.
So, the smallest allocation request which can be denied is 181 KB.
Question 169

A solution to the Dining Philosophers Problem which avoids deadlock is

A
ensure that all philosophers pick up the left fork before the right fork
B
ensure that all philosophers pick up the right fork before the left fork
C
ensure that one particular philosopher picks up the left fork before the right fork, and that all other philosophers pick up the right fork before the left fork
D
None of the above
       Operating-Systems       Deadlock       GATE 1996
Question 169 Explanation: 
In the dining philosophers problem, each philosopher needs exactly two chopsticks to eat. If every philosopher picks up chopsticks in the same order — say, each takes the one on the right first — then each can hold one chopstick while waiting for a neighbour's, forming a circular wait, and circular waiting leads to deadlock.
To avoid this, at least one philosopher should pick up the first chopstick on the opposite side, so that the circular wait cannot form.
Question 170

Four jobs to be executed on a single processor system arrive at time 0+ in the order A, B, C, D. Their burst CPU time requirements are 4, 1, 8, 1 time units respectively. The completion time of A under round robin scheduling with time slice of one time unit is

A
10
B
4
C
8
D
9
       Operating-Systems       Process-Scheduling       GATE 1996
Question 170 Explanation: 
Time quantum is 1 unit. The schedule is A, B, C, D, A, C, A, C, A, ..., so A runs during the intervals 0-1, 4-5, 6-7 and 8-9.
∴ Completion time of A is 9 units.
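The round-robin schedule can be checked with a short simulation (a sketch; jobs are indexed 0..3 for A..D and all arrive at time 0):

```python
from collections import deque

def round_robin_completion(bursts, quantum):
    """Completion time of each job index; jobs arrive at time 0 in list order."""
    remaining = dict(enumerate(bursts))
    ready = deque(remaining)          # job indices in arrival order
    t, done = 0, {}
    while ready:
        j = ready.popleft()
        run = min(quantum, remaining[j])
        t += run
        remaining[j] -= run
        if remaining[j] == 0:
            done[j] = t               # job j completes at time t
        else:
            ready.append(j)           # re-queue with work remaining
    return done

print(round_robin_completion([4, 1, 8, 1], 1))  # done[0] == 9, i.e. A completes at 9
```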
Question 171

Which of the following statements is true?

A
ROM is a Read/Write memory
B
PC points to the last instruction that was executed
C
Stack works on the principle of LIFO
D
All instructions affect the flags
       Operating-Systems       General       GATE 1995
Question 171 Explanation: 
We know that stack works on the principle of LIFO.
Question 172

In a paged segmented scheme of memory management, the segment table itself must have a page table because

A
the segment table is often too large to fit in one page
B
each segment is spread over a number of pages
C
segment tables point to page table and not to the physical locations of the segment
D
the processor’s description base register points to a page table
E
Both A and B
       Operating-Systems       Memory-Management       GATE 1995
Question 172 Explanation: 
The segment table is often too large to fit in one page, so the segment table itself is divided into pages and a page table is kept for it.
Note that segmented paging is different from paged segmentation.
Question 173

Which of the following page replacement algorithms suffers from Belady’s anamoly?

A
Optimal replacement
B
LRU
C
FIFO
D
Both (A) and (C)
       Operating-Systems       Page-Replacement-Algorithm       GATE 1995
Question 173 Explanation: 
FIFO suffers from Belady's anomaly.
Question 174

Which scheduling policy is most suitable for a time shared operating system?

A
Shortest Job First
B
Round Robin
C
First Come First Serve
D
Elevator
       Operating-Systems       CPU-Scheduling       GATE 1995
Question 174 Explanation: 
In round robin, each process runs for at most one time quantum before the CPU is passed to the next process, so every user gets a regular share of the CPU. This makes it the most suitable policy for a time-shared operating system.
Question 175

The sequence …………… is an optimal non-preemptive scheduling sequence for the following jobs which leaves the CPU idle for …………… unit(s) of time.

A
{3,2,1}, 1
B
{2,1,3}, 0
C
{3,2,1}, 0
D
{1,2,3}, 5
       Operating-Systems       CPU-Scheduling       GATE 1995
Question 175 Explanation: 
Here, in option (B) and (C) they have given CPU idle time is 0 which is not possible as per schedule (B) and (C).
So, (B) and (C) will be eliminated.
Now in (A) and (D):
For option (A),

So, idle time is between 0 to 1 which in case of option (A).
For option(D),

We can see there is no idle time at all, but in option given idle time is 5, which is not matching with our chart. So option (D) is eliminated.
Therefore, the correct sequence is option (A).
Question 176

The address sequence generated by tracing a particular program executing in a pure demand paging system with 100 records per page with 1 free main memory frame is recorded as follows. What is the number of page faults?

 0100, 0200, 0430, 0499, 0510, 0530, 0560, 0120, 0220, 0240, 0260, 0320, 0370 
A
13
B
8
C
7
D
10
       Operating-Systems       Page-Replacement-Algorithm       GATE 1995
Question 176 Explanation: 
Pages are fitted in frames, so first we need to determine the pages; the given requests are just record numbers in decimal. Since each page holds 100 records, records 0000 to 0099 lie in one page, records 0100 to 0199 in the next, and so on. So the page request string is
01, 02, 04, 04, 05, 05, 05, 01, 02, 02, 02, 03, 03
With one frame, a fault occurs whenever the referenced page changes: clearly 7 page faults.
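With a single frame the count reduces to "how many times does the referenced page change". A sketch of that count (helper name is my own):

```python
# Page faults in pure demand paging with exactly one free frame:
# a fault occurs whenever the referenced page differs from the resident one.
def page_faults_one_frame(addresses, page_size=100):
    faults, current = 0, None
    for a in addresses:
        p = a // page_size            # 100 records per page
        if p != current:
            faults += 1
            current = p
    return faults

trace = [100, 200, 430, 499, 510, 530, 560, 120, 220, 240, 260, 320, 370]
print(page_faults_one_frame(trace))  # 7
```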
Question 177

In a virtual memory system the address space specified by the address lines of the CPU must be __________ than the physical memory size and _______ than the secondary storage size.

A
smaller, smaller
B
smaller, larger
C
larger, smaller
D
larger, larger
       Operating-Systems       Virtual Memory       GATE 1995
Question 177 Explanation: 
Primary memory < Virtual memory < Secondary memory.
Question 178

A memory page containing a heavily used variable that was initialized very early and is in constant use is removed when

A
LRU page replacement algorithm is used
B
FIFO page replacement algorithm is used
C
LFU page replacement algorithm is used
D
None of the above
       Operating-Systems       Page-Replacement-Algorithm       GATE 1994
Question 178 Explanation: 
In FIFO, the page that came in first is removed first. Since the variable was initialized very early, its page is among the first pages loaded, so FIFO removes it despite its constant use.
In LRU, the least recently used page is removed; a page in constant use is never the least recently used.
In LFU, the page's use frequency is high because it is in constant use, so it is not replaced either.
Question 179

Consider the following heap (figure) in which blank regions are not in use and hatched region are in use.

The sequence of requests for blocks of size 300, 25, 125, 50 can be satisfied if we use

A
either first fit or best fit policy (any one)
B
first fit but not best fit policy
C
best fit but not first fit policy
D
None of the above
       Operating-Systems       Page-Replacement-Algorithm       GATE 1994
Question 179 Explanation: 
In first fit, block request will be satisfied from the first free block that fits it.
So, request for 300 will be satisfied by 350 size block reducing the free size to 50.
Request for 25, satisfied by 125 size block, reducing it to 125.
Request for 125 satisfied by the 125 size block.
And request for 50 satisfied by 50 size block.
So, all requests can be satisfied.
In best fit strategy, a block request is satisfied by the smallest block in which can fit it. So, request for 300 will be satisfied by 350 size block reducing the free size to 50.
Request for 25, satisfied by 50 size block as its the smallest size that fits 25, reducing it to 25.
Request for 125, satisfied by 150 size block, reducing it to 25.
Now, request for 50 cannot be satisfied as the two 25 size blocks are not contiguous.
Question 180

Consider allocation of memory to a new process. Assume that none of the existing holes in the memory will exactly fit the process’s memory requirement. Hence, a new hole of smaller size will be created if allocation is made in any of the existing holes. Which one of the following statements is TRUE?

A
The hole created by worst fit is always larger than the hole created by first fit.
B
The hole created by best fit is never larger than the hole created by first fit.
C
The hole created by first fit is always larger than the hole created by next fit.
D
The hole created by next fit is never larger than the hole created by best fit.
       Operating-Systems       Memory-Management       GATE 2020
Question 180 Explanation: 
The hole created using best fit is never larger than the one created by first fit, because best fit chooses the smallest available partition that fits the process, whereas first fit and next fit don't consider the sizes of the available holes.
Question 181

Consider the following statements about process state transitions for a system using preemptive scheduling.

    I. A running process can move to ready state.
    II. A ready process can move to running state.
    III. A blocked process can move to running state.
    IV. A blocked process can move to ready state.

Which of the above statements are TRUE?

A
II and III only
B
I, II and III only
C
I, II, III and IV
D
I, II and IV only
       Operating-Systems       Process-Scheduling       GATE 2020
Question 181 Explanation: 
I. True: under preemptive scheduling a running process can be preempted and moved to the ready state.
II. True: the scheduler dispatches a ready process to the running state.
III. False: a blocked process must first move to ready when its event completes; it cannot go directly to running.
IV. True: when its I/O or event completes, a blocked process moves to the ready state.
Hence I, II and IV only.
Question 182

Consider the following set of processes, assumed to have arrived at time 0. Consider the CPU scheduling algorithms Shortest Job First (SJF) and Round Robin (RR). For RR, assume that the processes are scheduled in the order P1,P2,P3,P4.

If the time quantum for RR is 4 ms, then the absolute value of the difference between the average turnaround times (in ms) of SJF and RR (round off to 2 decimal places) is _____.

A
5.25
       Operating-Systems       Process-Scheduling       GATE 2020
Question 182 Explanation: 
SJF:

Turn Around Time = (21 – 0) + (13 – 0) + (2 – 0) + (6 – 0), Average = 42/4 = 10.50

RR (time quantum = 4 ms):
Turn Around Time (TAT) = (18 – 0) + (21 – 0) + (10 – 0) + (14 – 0), Average = 63/4 = 15.75
Absolute difference = |10.50-15.75| = 5.25
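Both averages can be reproduced with a short simulation. The burst times below (P1=8, P2=7, P3=2, P4=4) are inferred from the completion times quoted above, since the question's table is not reproduced here:

```python
from collections import deque

# Bursts inferred from the quoted completion times: P1=8, P2=7, P3=2, P4=4.
bursts = [8, 7, 2, 4]

def sjf_avg_tat(bursts):
    # Non-preemptive SJF with all arrivals at 0: run in increasing burst order.
    t, total = 0, 0
    for b in sorted(bursts):
        t += b
        total += t                    # completion time == turnaround time here
    return total / len(bursts)

def rr_avg_tat(bursts, q):
    remaining = dict(enumerate(bursts))
    ready = deque(remaining)
    t, total = 0, 0
    while ready:
        j = ready.popleft()
        run = min(q, remaining[j])
        t += run
        remaining[j] -= run
        if remaining[j] == 0:
            total += t
        else:
            ready.append(j)
    return total / len(bursts)

print(abs(sjf_avg_tat(bursts) - rr_avg_tat(bursts, 4)))  # 5.25
```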
Question 183

Each of a set of n processes executes the following code using two semaphores a and b initialized to 1 and 0, respectively. Assume that count is a shared variable initialized to 0 and not used in CODE SECTION P.

wait (a); count = count+1;
if (count==n) signal (b);
signal (a); wait (b); signal (b);

What does the code achieve?

A
It ensures that all processes execute CODE SECTION P mutually exclusively.
B
It ensures that at most two processes are in CODE SECTION Q at any time.
C
It ensures that no process executes CODE SECTION Q before every process has finished CODE SECTION P.
D
It ensures that at most n-1 processes are in CODE SECTION P at any time.
       Operating-Systems       Process-Synchronization       GATE 2020
Question 183 Explanation: 
The wait(a) ensures that count is incremented correctly (no race condition). The statement if (count==n) signal(b); is executed only by the last (nth) process; the other n-1 processes block on wait(b). Once the nth process executes signal(b), the blocked processes proceed one by one — each executes signal(b) after its wait(b), waking the next — and enter CODE SECTION Q. So no process executes CODE SECTION Q before every process has finished CODE SECTION P.
Question 184

Consider the following five disk access requests of the form (request id, cylinder number) that are present in the disk scheduler queue at a given time.

         (P, 155), (Q, 85), (R, 110), (S, 30), (T, 115) 

Assume the head is positioned at cylinder 100. The scheduler follows Shortest Seek Time First scheduling to service the requests.

Which one of the following statements is FALSE?

A
The head reverses its direction of movement between servicing of Q and P.
B
T is serviced before P.
C
R is serviced before P.
D
Q is serviced after S, but before T.
       Operating-Systems       Disk-Scheduling       GATE 2020
Question 184 Explanation: 
(P,155) (Q,85) (R,110) (S,30) (T,115), head at cylinder 100.
SSTF service order: R(110), T(115), Q(85), S(30), P(155).
Q is serviced after T and before S, so statement (D) is false.
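The SSTF order is easy to verify by repeatedly picking the pending request closest to the current head position (a sketch; function name is my own):

```python
# Shortest Seek Time First: always service the pending request whose
# cylinder is nearest to the current head position.
def sstf_order(requests, head):
    pending = list(requests)          # list of (id, cylinder)
    order = []
    while pending:
        nxt = min(pending, key=lambda r: abs(r[1] - head))
        pending.remove(nxt)
        head = nxt[1]
        order.append(nxt[0])
    return order

reqs = [('P', 155), ('Q', 85), ('R', 110), ('S', 30), ('T', 115)]
print(sstf_order(reqs, 100))  # ['R', 'T', 'Q', 'S', 'P']
```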
Question 185

Consider a paging system that uses a 1-level page table residing in main memory and a TLB for address translation. Each main memory access takes 100 ns and TLB lookup takes 20 ns. Each page transfer to/from the disk takes 5000 ns. Assume that the TLB hit ratio is 95%, page fault rate is 10%. Assume that for 20% of the total page faults, a dirty page has to be written back to disk before the required page is read in from disk. TLB update time is negligible. The average memory access time in ns (round off to 1 decimal places) is ______.

A
154.5 ns
       Operating-Systems       Memory-Management       GATE 2020
Question 185 Explanation: 
M=100ns
T=20ns
D=5000ns
h=0.95
p=0.1, 1-p=0.9
d=0.2, 1-d=0.8
EMAT = h×(T+M)+(1-h)[(1-p)×2M+p[(1-d)[D+M]+d(2D+M)]+T]
= 0.95×(20+100)+(1-0.95)[(1-0.1)×200+(0.1)[(1-0.2)[5000+100]+(0.2)(10000+100)]+20]
= 154.5 ns
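The same expression can be evaluated directly (variable names follow the explanation above):

```python
M, T, D = 100, 20, 5000          # memory access, TLB lookup, disk transfer (ns)
h, p, d = 0.95, 0.10, 0.20       # TLB hit ratio, page-fault rate, dirty fraction

# TLB hit: T+M. TLB miss: extra memory access for the page table,
# plus (on a fault) one disk read, or two disk transfers if dirty.
emat = h*(T + M) + (1 - h)*((1 - p)*2*M + p*((1 - d)*(D + M) + d*(2*D + M)) + T)
print(round(emat, 1))  # 154.5
```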
Question 186

A certain moving arm disk storage, with one head, has the following specifications.

 Number of tracks/recording surface = 200
 Disk rotation speed = 2400 rpm
 Track storage capacity = 62,500 bits 

The average latency of this device is P msec and the data transfer rate is Q bits/sec.
Write the value of P and Q.

A
P = 12.5, Q = 2.5×10⁶
       Operating-Systems       Memory-Management       GATE 1993
Question 186 Explanation: 
RPM = 2400
So, in 60 s, the disk rotates 2400 times.
Average latency is the time for half a rotation
= 0.5×60/2400 s
= 12.5 ms
In one full rotation, entire data in a track can be transferred. Track storage capacity = 62500 bits
So, disk transfer rate
= 62500 × 2400/60
= 2.5 × 10⁶ bps
So,
P = 12.5, Q = 2.5×10⁶
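Both values follow from the rotation time alone; a sketch of the arithmetic (variable names are my own):

```python
rpm = 2400
track_bits = 62500

rotation_time = 60 / rpm                      # seconds per revolution (0.025 s)
avg_latency_ms = (rotation_time / 2) * 1000   # half a revolution, in ms
transfer_rate = track_bits / rotation_time    # one full track per revolution

print(round(avg_latency_ms, 1), round(transfer_rate))  # 12.5 2500000
```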
Question 187

Consider a system having m resources of the same type. These resources are shared by 3 processes A, B and C, which have peak demands of 3, 4 and 6 respectively. For what value of m deadlock will not occur?

A
7
B
9
C
13, 15
D
13
E
15
       Operating-Systems       Deadlock       GATE 1993
Question 187 Explanation: 
A requires 3, B-4, C-6;
→ If A have 2, B have 3, C have 5 then deadlock will occur i.e., 2+3+5=10.
→ If we have one extra resource then deadlock will not occur i.e., 10+1=11.
→ If we have equal (or) more than 11 resources then deadlock will never occur.
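For unequal peak demands the bound generalizes: deadlock is impossible once m exceeds the sum of (peak demand − 1) over all processes, so the minimum safe m is that sum plus one. A sketch (helper name is my own):

```python
# Minimum number of identical resources so that processes with the given
# peak demands can never deadlock: sum(peak_i - 1) + 1.
def min_deadlock_free(demands):
    return sum(d - 1 for d in demands) + 1

print(min_deadlock_free([3, 4, 6]))  # 11
```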
Question 188

Assume that the following jobs are to be executed on a single processor system

The jobs are assumed to have arrived at time 0+ and in the order p, q, r, s, t. Calculate the departure time (completion time) for job p if scheduling is round robin with time slice 1.

A
4
B
10
C
11
D
12
E
None of the above
       Operating-Systems       Deadlock       GATE 1993
Question 188 Explanation: 
Algorithm: Round robin; TQ: 1

So p departs (completes) at time 11.
Question 189

At a particular time of computation the value of a counting semaphore is 7. Then 20 P operations and 15 V operations were completed on this semaphore. The resulting value of the semaphore is:

A
42
B
2
C
7
D
12
       Operating-Systems       Semaphores       GATE 1992
Question 189 Explanation: 
Let the semaphore be S.
Initial value of S is 7.
So after 20P operations and 15V operations the value of semaphore S will be,
S = 7 - 20 + 15 = 2
Question 190

A computer system has 6 tape drives, with n processes competing for them. Each process may need 3 tape drives. The maximum value of n for which the system is guaranteed to be deadlock free is:

A
2
B
3
C
4
D
1
       Operating-Systems       Deadlock       GATE 1992
Question 190 Explanation: 
Each process may need 3 drives. Deadlock can occur when every process holds 2 drives and waits for a third: 3 processes holding 2 drives each tie up all 6 drives with none able to finish. To guarantee deadlock freedom we need n(3-1)+1 ≤ 6, i.e., n ≤ 2. So the maximum n for which the system is guaranteed to be deadlock free is 2.
Question 191

Which of the following is an example of a spooled device?

A
The terminal used to enter the input data for a program being executed.
B
The secondary memory device in a virtual memory system
C
A line printer used to print the output of a number of jobs.
D
None of the above
       Operating-Systems       Spooled-Devices       GATE 1992
Question 191 Explanation: 
Spool stands for simultaneous peripheral operations online.
Question 192

Which page replacement policy sometimes leads to more page faults when size of memory is increased?

A
FIFO
       Operating-Systems       Page-Replacement-Algorithm       GATE 1992
Question 192 Explanation: 
FIFO, because it sometimes leads to more page faults when size of memory is increased.
Question 193

Match the pairs in the following questions by writing the corresponding letters only.

(A) Buddy system     (P) Run-time type specification
(B) Interpretation   (Q) Segmentation
(C) Pointer type     (R) Memory allocation
(D) Virtual memory   (S) Garbage collection
A
A-R, B-P, C-S, D-Q
       Operating-Systems       General       GATE 1991
Question 193 Explanation: 
A-R
B-P
C-S
D-Q
Buddy system: a memory-allocation technique that divides memory into partitions (by repeatedly halving blocks) to satisfy a request as suitably as possible — memory allocation.
Interpretation: types are checked while the program runs — run-time type specification.
Pointer type: languages with pointer/reference types rely on garbage collection to reclaim memory that is no longer reachable.
Virtual memory: implemented via segmentation.
Question 194

Choose the correct alternatives (more than one may be correct) and write the corresponding letters only: The total size of the address space in a virtual memory system is limited by

A
the length of MAR
B
the available secondary storage
C
the available main memory
D
all of the above
E
none of the above
F
Both A and B
       Operating-Systems       Virtual Memory       GATE 1991
Question 194 Explanation: 
The size of virtual memory is independent of the size of main memory and is limited by the available secondary storage.
The MAR holds the address generated by the CPU, so its length also limits the total virtual address space.
Question 195

Choose the correct alternatives (more than one may be correct) and write the corresponding letters only: Indicate all the false statements from the statements given below:

A
The amount of virtual memory available is limited by the availability of secondary storage.
B
Any implementation of a critical section requires the use of an indivisible machine-instruction, such as test-and-set.
C
The LRU page replacement policy may cause thrashing for some types of programs.
D
The best fit techniques for memory allocation ensures the memory will never be fragmented.
E
B and D
       Operating-Systems       Virtual Memory       GATE 1991
Question 195 Explanation: 
(A) Is true.
(B) Is false, because Peterson's solution is a purely software-based solution that implements a critical section without any indivisible machine instruction.
(C) Is true.
(D) False, memory can get fragmented with best fit technique.
Question 196

Match the pairs:

(a) Critical region     (p) Hoare's monitor
(b) Wait/Signal         (q) Mutual exclusion
(c) Working Set         (r) Principle of locality
(d) Deadlock            (s) Circular Wait
A
(a) - (q), (b) - (p), (c) - (r), (d) - (s)
       Operating-Systems       Match-the-Following       GATE 1990
Question 197

State whether the following statements are TRUE or FALSE with reason:

The link-load-and-go loading scheme requires less storage space than the link-and-go loading scheme.
A
True
B
False
       Operating-Systems       Linker       GATE 1990
Question 197 Explanation: 
In the link-and-go scheme the linkage editor coexists with the program in main memory while performing the linking task.
In the link-load-and-go scheme the linkage editor does not coexist with the program in main memory while performing the linking task.
Question 198

Match the pairs in the following questions:

(A) Virtual memory      (p) Temporal Locality
(B) Shared memory       (q) Spatial Locality
(C) Look-ahead buffer   (r) Address Translation
(D) Look-aside buffer   (s) Mutual Exclusion
A
(A)-(r), (B)-(s), (C)-(q), (D)-(p)
       Operating-Systems       Match-the-Following       GATE 1989
Question 198 Explanation: 
Virtual Memory - Address Translation
Shared Memory - Mutual Exclusion
Look-ahead buffer - Spatial Locality
Look-aside buffer - Temporal Locality
Question 199

A critical region is

A
One which is enclosed by a pair of P and V operations on semaphores.
B
A program segment that has not been proved bug-free.
C
A program segment that often causes unexpected system crashes.
D
A program segment where shared resources are accessed.
       Operating-Systems       Process-Synchronization       GATE 1987
Question 199 Explanation: 
A critical region is a program segment where shared resources are accessed.
Question 200

Consider the execution of the following commands in a shell on a linux operating system.

bash$cat alpha
Mathematics
bash$ ln alpha beta
bash$ rm alpha
bash$cat >> beta << SAME
Information Technology
SAME
bash$ cat beta
The output of the last command will be:
A
Mathematics Information Technology SAME.
B
Mathematics Information Technology.
C
Information Technology.
D
Information Technology SAME.
       Operating-Systems       LINUX       GATE 2008-IT
Question 200 Explanation: 
Note: Out of syllabus.
Question 201

Assume that EA = (X)+ is the effective address equal to the contents of location X, with X incremented by one word length after the effective address is calculated; EA = −(X) is the effective address equal to the contents of location X, with X decremented by one word length before the effective address is calculated; EA = (X)− is the effective address equal to the contents of location X, with X decremented by one word length after the effective address is calculated. The format of the instruction is (opcode, source, destination), which means (destination ← source op destination). Using X as a stack pointer, which of the following instructions can pop the top two elements from the stack, perform the addition operation and push the result back to the stack.

A
6 and 1, 2, 3, 4
B
7 and 1, 2, 4, 5
C
8 and 1, 2, 4, 5
D
9 and 1, 2, 3, 5
       Operating-Systems       Page-Replacement       GATE 2008-IT
Question 201 Explanation: 
Here the memory has 4 frames, so a page is replaced when a 5th distinct page is referenced.
(Each page contains 16 bytes, so page 0 covers the virtual addresses 0-15.)
0: page fault-1, pages in memory-0
4: page fault-1, pages in memory-0
8: page fault-1, pages in memory-0
20: page faults-2, pages in memory-0,1
24: page faults-2, pages in memory-0,1
36: page faults-3, pages in memory-0,1,2
44: page faults-3, pages in memory-0,1,2
12: page faults-3, pages in memory-1,2,0
68: page faults-4, pages in memory-1,2,0,4
72: page faults-4, pages in memory-1,2,0,4
80: page faults-5, pages in memory-2,0,4,5
84: page faults-5, pages in memory-2,0,4,5
28: page faults-6, pages in memory-0,4,5,1
32: page faults-7, pages in memory-4,5,1,2
88: page faults-7, pages in memory-4,1,2,5
92: page faults-7, pages in memory-4,1,2,5
Question 202

The following is a code with two threads, producer and consumer, that can run in parallel. Further, S and Q are binary semaphores equipped with the standard P and V operations.

semaphore S = 1, Q = 0; 
integer x; 
producer:                            consumer:
while(true) do                      while(true) do 
     P(S);                               P(Q); 
     x = produce ();                     consume (x); 
     V(Q);                               V(S); 
done                                done
Which of the following is TRUE about the program above?
A
The process can deadlock
B
One of the threads can starve
C
Some of the items produced by the producer may be lost
D
Values generated and stored in ‘x’ by the producer will always be consumed before the producer can generate a new value
       Operating-Systems       Process-Synchronization       GATE 2008-IT
Question 202 Explanation: 
(A) Deadlock cannot happen as producer and consumer block on different semaphores (no hold and wait).
(B) No starvation happens because there is strict alternation between producer and consumer.
(C) No item is lost.
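The strict alternation claimed in option (D) can be checked with a short Python sketch using counting semaphores from the standard library (the variable names and the loop bound N are ours):

```python
import threading

S = threading.Semaphore(1)   # like semaphore S = 1: producer's turn
Q = threading.Semaphore(0)   # like semaphore Q = 0: consumer's turn
log = []
N = 5

def producer():
    for i in range(N):
        S.acquire()                  # P(S)
        log.append(('produce', i))   # x = produce()
        Q.release()                  # V(Q)

def consumer():
    for i in range(N):
        Q.acquire()                  # P(Q)
        log.append(('consume', i))   # consume(x)
        S.release()                  # V(S)

t1, t2 = threading.Thread(target=producer), threading.Thread(target=consumer)
t1.start(); t2.start(); t1.join(); t2.join()

# every produced value is consumed before the next one is produced
assert log == [(op, i) for i in range(N) for op in ('produce', 'consume')]
```

The semaphores force the two threads to hand control back and forth, so the log is always produce, consume, produce, consume, ...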
Question 203

An operating system implements a policy that requires a process to release all resources before making a request for another resource. Select the TRUE statement from the following:

A
Both starvation and deadlock can occur
B
Starvation can occur but deadlock cannot occur
C
Starvation cannot occur but deadlock can occur
D
Neither starvation nor deadlock can occur
       Operating-Systems       Deadlock       GATE 2008-IT
Question 203 Explanation: 
Starvation can occur as each time a process requests a resource it has to release all its resources.
Now, maybe the process has not used the resources it released yet. This may happen again when the process requests another resource.
So, the process can be starved of proper utilization of its resources.
Deadlock will not occur, as this policy breaks the hold-and-wait condition and is a deadlock prevention scheme.
Question 204

If the time-slice used in the round-robin scheduling policy is more than the maximum time required to execute any process, then the policy will

A
degenerate to shortest job first
B
degenerate to priority scheduling
C
degenerate to first come first serve
D
none of the above
       Operating-Systems       Round-Robin       GATE 2008-IT
Question 204 Explanation: 
Round robin executes processes in FCFS manner with a time slice. If the time slice is long enough that every process finishes within it, the policy degenerates to FCFS.
Question 205

Match the following flag bits used in the context of virtual memory management on the left side with the different purposes on the right side of the table below.

A
I-d, II-a, III-b, IV-c
B
I-b, II-c, III-a, IV-d
C
I-c, II-d, III-a, IV-b
D
I-b, II-c, III-d, IV-a
       Operating-Systems       Virtual Memory       GATE 2008-IT
Question 205 Explanation: 
Dirty bit:
The bit indicates that its associated block of memory has been modified and has not been saved to storage yet. Dirty bits are used by the CPU cache and in the page replacement algorithms of an operating system.
R/W bit:
If the bit is set, the page is read/ write. Otherwise when it is not set, the page is read only.
Reference bit:
Reference bit is used in a version of FIFO called second chance policy, in order to avoid replacement of heavily used page.
Valid bit:
Valid bit is not used for page replacement. It tells whether the page in memory is valid: if it is valid the page is used directly, and if not a fresh page is loaded into the empty frame. This is page initialization rather than replacement; nothing is knocked out, an empty space is filled. Hence option (D).
Question 206

Processes P1 and P2 use critical_flag in the following routine to achieve mutual exclusion. Assume that critical_flag is initialized to FALSE in the main program.

        get_exclusive_access ( ) 
{ 
  if (critical _flag == FALSE) { 
      critical_flag = TRUE ; 
      critical_region () ; 
      critical_flag = FALSE; 
      } 
}

Consider the following statements. i. It is possible for both P1 and P2 to access critical_region concurrently. ii. This may lead to a deadlock.

Which of the following holds?

A
(i) is false and (ii) is true
B
Both (i) and (ii) are false
C
(i) is true and (ii) is false
D
Both (i) and (ii) are true
       Operating-Systems       Process-Synchronization       GATE 2007-IT
Question 206 Explanation: 
If P1 is preempted just after checking critical_flag == FALSE but before setting it to TRUE, P2 can pass the same check, so both processes can be in the critical region concurrently; (i) is true. Neither process ever blocks waiting for the flag, so a deadlock cannot occur; (ii) is false.
Question 207

Let a memory have four free blocks of sizes 4k, 8k, 20k, 2k. These blocks are allocated following the best-fit strategy. The allocation requests are stored in a queue as shown below.

The time at which the request for J7 will be completed will be

A
16
B
19
C
20
D
37
       Operating-Systems       Process-Scheduling       GATE 2007-IT
Question 207 Explanation: 
At t = 0

At t = 8

At t = 10

At t = 11

J7 can be finished at t = 19.
Question 208

Consider n jobs J1, J2, ..., Jn such that job Ji has execution time ti and a non-negative integer weight wi. The weighted mean completion time of the jobs is defined to be (∑wiTi)/(∑wi), where Ti is the completion time of job Ji. Assuming that there is only one processor available, in what order must the jobs be executed in order to minimize the weighted mean completion time of the jobs?

A
Non-decreasing order of ti
B
Non-increasing order of wi
C
Non-increasing order of witi
D
Non-increasing order of wi/ti
       Operating-Systems       Process-Scheduling       GATE 2007-IT
Question 208 Explanation: 
Like OS concept, execute job which have more completion time in 1 sec i.e., find wi/ti for every process then arrange it in the decreasing order. So, we get complete time is minimum always.
So answer is the non-increasing order of wi/ti.
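Smith's rule can be sanity-checked by brute force over all schedules of a small job set; the four (ti, wi) pairs below are made up for illustration:

```python
from itertools import permutations

jobs = [(3, 1), (1, 4), (2, 2), (5, 3)]   # hypothetical (t_i, w_i) pairs

def weighted_sum(order):
    # sum of w_i * T_i; dividing by sum(w_i) does not change the ordering
    t = total = 0
    for ti, wi in order:
        t += ti              # T_i: completion time of this job
        total += wi * t
    return total

best = min(weighted_sum(p) for p in permutations(jobs))
# Smith's rule: non-increasing order of w_i / t_i
smith = sorted(jobs, key=lambda j: j[1] / j[0], reverse=True)
assert weighted_sum(smith) == best
```

The exhaustive minimum always coincides with the wi/ti ordering, which is exactly option (D)'s claim.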
Question 209

A demand paging system takes 100 time units to service a page fault and 300 time units to replace a dirty page. Memory access time is 1 time unit. The probability of a page fault is p. In case of a page fault, the probability of page being dirty is also p. It is observed that the average access time is 3 time units. Then the value of p is

A
0.194
B
0.233
C
0.514
D
0.981
E
0.0194
       Operating-Systems       Virtual Memory       GATE 2007-IT
Question 209 Explanation: 
Given page fault service time = 100
(if the page is not dirty)
Page fault service time = 300
(if there is dirty page)
Probability of page fault = P
Probability of page being dirty = P
So, total page fault service time
= P(300) + (1 - P)100
= 300P + 100 - 100P
= 200P + 100
Now given,
Effective average memory access time = 3
So,
3 = m + P × total page fault service time
= 1 + P(200P + 100)
= 1 + 200P² + 100P
⇒ 200P² + 100P - 2 = 0
⇒ P = 0.0194
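The quadratic can be solved numerically; a few lines of Python (a sketch, not part of the original solution) confirm both the root and the stated average access time:

```python
import math

a, b, c = 200.0, 100.0, -2.0                        # 200P^2 + 100P - 2 = 0
p = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)   # positive root

# effective access time: 1 + P * (300 if dirty else 100), dirty with prob. P
avg = 1 + p * (p * 300 + (1 - p) * 100)
assert abs(avg - 3) < 1e-9
assert abs(p - 0.0194) < 1e-3                       # option (E)
```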
Question 210

The head of a hard disk serves requests following the shortest seek time first (SSTF) policy. The head is initially positioned at track number 180.
Which of the request sets will cause the head to change its direction after servicing every request assuming that the head does not change direction if there is a tie in SSTF and all the requests arrive before the servicing starts?

A
11, 139, 170, 178, 181, 184, 201, 265
B
10, 138, 170, 178, 181, 185, 201, 265
C
10, 139, 169, 178, 181, 184, 201, 265
D
10, 138, 170, 178, 181, 185, 200, 265
       Operating-Systems       Disk-Scheduling       GATE 2007-IT
Question 210 Explanation: 

Hence, correct option is (B).
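The head movement can also be simulated. The helper below is our own sketch, applying the tie rule from the question (on a tie the head keeps its current direction); only option (B) alternates after every request:

```python
def sstf_alternates(requests, head=180):
    """True if SSTF changes direction after servicing every request."""
    pending = set(requests)
    pos, direction = head, 0           # +1 moving up, -1 moving down
    while pending:
        d = min(abs(t - pos) for t in pending)
        cands = sorted(t for t in pending if abs(t - pos) == d)
        if len(cands) == 2 and direction:
            nxt = pos + direction * d  # tie: do not change direction
        else:
            nxt = cands[0]
        new_dir = 1 if nxt > pos else -1
        if direction and new_dir == direction:
            return False               # head kept going the same way
        pending.remove(nxt)
        pos, direction = nxt, new_dir
    return True

options = {'A': [11, 139, 170, 178, 181, 184, 201, 265],
           'B': [10, 138, 170, 178, 181, 185, 201, 265],
           'C': [10, 139, 169, 178, 181, 184, 201, 265],
           'D': [10, 138, 170, 178, 181, 185, 200, 265]}
assert [k for k, v in options.items() if sstf_alternates(v)] == ['B']
```

Options (A), (C) and (D) each contain a tie that forces the head to continue in the same direction, breaking the alternation.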
Question 211

The head of a hard disk serves requests following the shortest seek time first (SSTF) policy. The head is initially positioned at track number 180.
What is the maximum cardinality of the request set, so that the head changes its direction after servicing every request if the total number of tracks are 2048 and the head can start from any track?

A
9
B
10
C
11
D
12
       Operating-Systems       Disk-Scheduling       GATE 2007-IT
Question 211 Explanation: 
We need two conditions to satisfy:
1) The alternating direction with SSTF policy.
2) Maximize the no. of requests.
The first condition can be satisfied by not having two requests in the equal distance from the current location. As shown below, we must not have request located in the circle marked positions.
Now to maximize the no. of request we need the requests to be located as compact as possible, which can be done by just placing the request in the next position after the circle marked position in particular direction (the direction in which the head need to move).
Now to satisfy the 1st criteria:

Seek length sequence for maximum cardinality and alternating head movements:
→ 1, 3, 7, 15, ...
→ or, 2¹-1, 2²-1, 2³-1, 2⁴-1, ...
→ We have 2048 tracks so the maximum swing (seek length) can be 2047.
→ Which corresponds to a seek length of 2¹¹-1 in the 11th service.
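Counting how many of these geometrically growing seek lengths fit on 2048 tracks (a small arithmetic sketch):

```python
tracks = 2048
# successive seek lengths must be 2^1 - 1, 2^2 - 1, ..., each fitting
# within the maximum possible swing of tracks - 1 = 2047
count, k = 0, 1
while 2 ** k - 1 <= tracks - 1:
    count += 1
    k += 1
assert count == 11   # maximum cardinality of the request set, option (C)
```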
Question 212

In the working-set strategy, which of the following is done by the operating system to prevent thrashing?
1. It initiates another process if there are enough extra frames.
2. It selects a process to suspend if the sum of the sizes of the working-sets exceeds the total number of available frames.

A
1 only
B
2 only
C
Neither 1 nor 2
D
Both 1 and 2
       Operating-Systems       Thrashing       GATE 2006-IT
Question 212 Explanation: 
According to the working-set strategy, both statements (1) and (2) are true.
Question 213

The process state transition diagram of an operating system is as given below.

Which of the following must be FALSE about the above operating system?

A
It is a multiprogrammed operating system
B
It uses preemptive scheduling
C
It uses non-preemptive scheduling
D
It is a multi-user operating system
       Operating-Systems       Process-Scheduling       GATE 2006-IT
Question 213 Explanation: 
Statement (B) must be false: the given state transition diagram has no Running → Ready transition, so the OS uses non-preemptive scheduling.
Question 214

The arrival time, priority, and duration of the CPU and I/O bursts for each of three processes P1, P2 and P3 are given in the table below. Each process has a CPU burst followed by an I/O burst followed by another CPU burst. Assume that each process has its own I/O resource.

The multi-programmed operating system uses preemptive priority scheduling. What are the finish times of the processes P1, P2 and P3 ?

A
11, 15, 9
B
10, 15, 9
C
11, 16, 10
D
12, 17, 11
       Operating-Systems       Process-Scheduling       GATE 2006-IT
Question 214 Explanation: 
Gantt-chart:

Hence, finish times of process.
P1 - 10
P2 - 11
P3 - 9
Question 215

Consider the solution to the bounded buffer producer/consumer problem by using general semaphores S, F, and E. The semaphore S is the mutual exclusion semaphore initialized to 1. The semaphore F corresponds to the number of free slots in the buffer and is initialized to N. The semaphore E corresponds to the number of elements in the buffer and is initialized to 0.

Producer Process                Consumer Process
Produce an item;                Wait(E);
Wait(F);                        Wait(S);
Wait(S);
Append the items to the buffer; Signal(S);
Signal(S);                      Signal(F);
Signal(E);                      Consume the item;  

Which of the following interchange operations may result in a deadlock?
1. Interchanging Wait (F) and Wait (S) in the Producer process
2. Interchanging Signal (S) and Signal (F) in the Consumer process

A
1 only
B
2 only
C
Neither 1 nor 2
D
Both 1 and 2
       Operating-Systems       Process-Synchronization       GATE 2006-IT
Question 215 Explanation: 
Suppose Wait(F) and Wait(S) are interchanged, and let all the slots be full, so F = 0.
Now if Wait(S) in the producer succeeds, the producer will wait on Wait(F), which can never succeed because the consumer is waiting on Wait(S). So deadlock can happen.
If Signal(S) and Signal(F) are interchanged in the consumer, deadlock won't happen; it just lets a producer proceed slightly earlier than the next waiting consumer.
Question 216

For each of the four processes P1, P2, P3 and P4. The total size in kilobytes (KB) and the number of segments are given below.

The page size is 1 KB. The size of an entry in the page table is 4 bytes. The size of an entry in the segment table is 8 bytes. The maximum size of a segment is 256 KB. The paging method for memory management uses two-level paging, and its storage overhead is P. The storage overhead for the segmentation method is S. The storage overhead for the segmentation and paging method is T. What is the relation among the overheads for the different methods of memory management in the concurrent execution of the above four processes?

A
P < S < T
B
S < P < T
C
S < T < P
D
T < S < P
       Operating-Systems       Virtual memory       GATE 2006-IT
Question 216 Explanation: 
Case-1 (Two level paging):
For P1,
Page size is 1KB. So, no. of pages required for P1 = 195. An entry in the page table is 4 bytes and, assuming an inner-level page table takes the size of a page, we can have up to 256 entries in a second-level page table while P1 requires only 195. Thus only 1 second-level page table is enough. So, memory overhead = 1KB (for first level) + 1KB for second level = 2KB.
For P2 and P3 also, we get 2KB each and for P4 we get 1+2 = 3KB as it requires 1 first level page table and 2 second level page tables (364 > 256). So, total overhead for their concurrent execution = 2×3+3 = 9KB. Thus P = 9KB.
Case-2 (For segmentation method):
P1 uses 4 segments → 4 entries in segment table = 4×8 = 32Bytes
Similarly, for P2, P3 and P4 we get 5×8, 3×8 and 8×8 Bytes respectively and the total overhead will be
32+40+24+64 = 160 Bytes
So, S = 160 Bytes
Case-3 (For segmentation with paging):
Here, we segment first and then page. So, we need the page table size. We are given maximum size of a segment is 256KB and page size is 1KB and thus we require 256 entries in the page table. So, total size of page table = 256 × 4 = 1024 Bytes (exactly one page size).
So, now for P1 we require 1 segment table of size 32 Bytes plus 1 page table of size 1KB.
Similarly,
P2 - 40 Bytes and 1 KB
P3 - 24 Bytes and 1 KB
P4 - 64 Bytes and 1KB
Thus, total overhead = 160 Bytes + 4KB = 4256 Bytes
So, T = 4256 Bytes
So, answer should be S < T < P.
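The three overheads can be recomputed in a few lines. The per-process sizes and segment counts come from the table that is not reproduced above; the figures (195 KB/4, 254 KB/5, 45 KB/3, 364 KB/8) are assumed from the standard version of this question:

```python
from math import ceil

PAGE, PTE, STE = 1024, 4, 8     # page size, page/segment table entry sizes
# assumed (size in KB, segment count) for P1..P4 -- table not shown above
procs = [(195, 4), (254, 5), (45, 3), (364, 8)]

P = S = T = 0
for kb, segs in procs:
    pages = kb                                  # 1 KB pages -> kb pages
    inner = ceil(pages / (PAGE // PTE))         # 256 entries per inner table
    P += (1 + inner) * PAGE                     # outer + inner page tables
    S += segs * STE                             # segment table alone
    T += segs * STE + PAGE                      # segment table + page table
assert (P, S, T) == (9 * 1024, 160, 4256)
assert S < T < P                                # option (C)
```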
Question 217

The wait and signal operations of a monitor are implemented using semaphores as follows. In the following,
x is a condition variable,
mutex is a semaphore initialized to 1,
x_sem is a semaphore initialized to 0,
x_count is the number of processes waiting on semaphore x_sem, initially 0, next is a semaphore initialized to 0,
next_count is the number of processes waiting on semaphore next, initially 0.
The body of each procedure that is visible outside the monitor is replaced with the following:

P(mutex); 
      ⋮
body of procedure 
      ⋮
if (next_count > 0)
    V(next);
else
    V(mutex); 
Each occurrence of x.wait is replaced with the following:
x_count = x_count + 1;
if (next_count > 0)
    V(next)
else
    V(mutex);
------------------------------------------------------------ E1;
x_count = x_count - 1; 
Each occurrence of x.signal is replaced with the following:
if (x_count > 0)
{
    next_count = next_count + 1;
    ------------------- E2;
    P(next);
    next_count = next_count - 1;
} 
For correct implementation of the monitor, statements E1 and E2 are, respectively,

A
P(x_sem), V(next)
B
V(next), P(x_sem)
C
P(next), V(x_sem)
D
P(x_sem), V(x_sem)
       Operating-Systems       Process-Synchronization       GATE 2006-IT
Question 217 Explanation: 
x_count is the no. of processes waiting on semaphore x_sem, initially 0.
x_count is incremented and decremented in x_wait, which shows that in between them wait(x_sem) must happen which is P(x_sem). Correspondingly V(x_sem) must happen in x_signal. So, D choice.
Question 218

A student wishes to create symbolic links in a computer system running Unix. Three text files named "file 1", "file 2" and "file 3" exist in her current working directory, and the student has read and write permissions for all three files. Assume that file 1 contains information about her hobbies, file 2 contains information about her friends and file 3 contains information about her courses. The student executes the following sequence of commands from her current working directory

ln -s file 1 file 2
ln -s file 2 file 3 

Which of the following types of information would be lost from her file system?

(I) Hobbies (II) Friends (III) Courses
A
(I) and (II) only
B
(II) and (III) only
C
(II) only
D
(I) and (III) only
       Operating-Systems       Unix       GATE 2005-IT
Question 218 Explanation: 
Note: Out of syllabus.
Question 219

The shell command

 find -name passwd -print 

is executed in /etc directory of a computer system running Unix. Which of the following shell commands will give the same information as the above command when executed in the same directory?

A
ls passwd
B
cat passwd
C
grep name passwd
D
grep print passwd
       Operating-Systems       Shell-Command       GATE 2005-IT
Question 219 Explanation: 
Note: Out of syllabus.
Question 220

A user level process in Unix traps the signal sent on a Ctrl-C input, and has a signal handling routine that saves appropriate files before terminating the process. When a Ctrl-C input is given to this process, what is the mode in which the signal handling routine executes?

A
kernel mode
B
superuser mode
C
privileged mode
D
user mode
       Operating-Systems       Unix       GATE 2005-IT
Question 220 Explanation: 
Note: Out of syllabus.
Question 221

Two concurrent processes P1 and P2 use four shared resources R1, R2, R3 and R4, as shown below.

P1:	              P2:
Compute:              Compute; 
Use R1;               Use R1;
Use R2;               Use R2;  
Use R3;               Use R3;
Use R4;	              Use R4; 

Both processes are started at the same time, and each resource can be accessed by only one process at a time The following scheduling constraints exist between the access of resources by the processes:
P2 must complete use of R1 before P1 gets access to R1.
P1 must complete use of R2 before P2 gets access to R2.
P2 must complete use of R3 before P1 gets access to R3.
P1 must complete use of R4 before P2 gets access to R4.
There are no other scheduling constraints between the processes. If only binary semaphores are used to enforce the above scheduling constraints, what is the minimum number of binary semaphores needed?

A
1
B
2
C
3
D
4
       Operating-Systems       Process-Management       GATE 2005-IT
Question 221 Explanation: 
Only two binary semaphores are needed, e.g., X = 0 and Y = 0.
Question 222

Which of the following input sequences will always generate a 1 at the output Z at the end of the third cycle?

A
000
101
111
B
101
110
111
C
011
101
111
D
001
110
111
E
None of these
       Operating-Systems       Sequential-Circuits       GATE 2005-IT
Question 222 Explanation: 

Working backwards through the table shows that none of the given input sequences satisfies all the required conditions, hence none of these.
Question 223

We wish to schedule three processes P1, P2 and P3 on a uniprocessor system. The priorities, CPU time requirements and arrival times of the processes are as shown below.

 Process	 Priority	 CPU time required	 Arrival time (hh:mm:ss)
   P1	        10(highest)	      20 sec	              00:00:05
   P2	        9	              10 sec	              00:00:03
   P3	        8 (lowest)	      15 sec	              00:00:00 
We have a choice of preemptive or non-preemptive scheduling. In preemptive scheduling, a late-arriving higher priority process can preempt a currently running process with lower priority. In non-preemptive scheduling, a late-arriving higher priority process must wait for the currently executing process to complete before it can be scheduled on the processor. What are the turnaround times (time from arrival till completion) of P2 using preemptive and non-preemptive scheduling respectively.

A
30 sec, 30 sec
B
30 sec, 10 sec
C
42 sec, 42 sec
D
30 sec, 42 sec
       Operating-Systems       PU-Scheduling       GATE 2005-IT
Question 223 Explanation: 
For preemptive,

TAT of P2 = Completion time of P2 - Arrival time of P2
= 33 - 3
= 30 sec
For non-preemptive,

TAT of P2 = Completion time of P2 - Arrival time of P2
= 45 - 3
= 42 sec
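A one-second-at-a-time simulation of both policies reproduces these turnaround times (the helper below is our own sketch):

```python
procs = {'P1': (10, 20, 5), 'P2': (9, 10, 3), 'P3': (8, 15, 0)}
# process -> (priority, CPU time in sec, arrival in sec from 00:00:00)

def completion_times(preemptive):
    left = {p: v[1] for p, v in procs.items()}
    done, t, running = {}, 0, None
    while left:
        ready = [p for p in left if procs[p][2] <= t]
        if not ready:
            t += 1
            continue
        if preemptive or running not in ready:
            # pick the highest-priority ready process; a non-preemptive
            # scheduler only does this when no process is mid-execution
            running = max(ready, key=lambda p: procs[p][0])
        left[running] -= 1
        t += 1
        if left[running] == 0:
            done[running] = t
            del left[running]
            running = None
    return done

tat_pre = completion_times(True)['P2'] - 3    # P2 arrives at t = 3
tat_non = completion_times(False)['P2'] - 3
assert (tat_pre, tat_non) == (30, 42)         # option (D)
```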
Question 224

Consider a 2-way set associative cache memory with 4 sets and total 8 cache blocks (0-7) and a main memory with 128 blocks (0-127). What memory blocks will be present in the cache after the following sequence of memory block references if LRU policy is used for cache block replacement. Assuming that initially the cache did not have any memory block from the current job?

 0 5 3 9 7 0 16 55 
A
0 3 5 7 16 55
B
0 3 5 7 9 16 55
C
0 5 7 9 16 55
D
3 5 7 9 16 55
       Operating-Systems       Memory-Management       GATE 2005-IT
Question 224 Explanation: 
The cache is 2-way set associative, so each set can hold 2 blocks at a time.
So,

Since, each set has only 2 places, 3 will be thrown out as its the least recently used block. So final content of cache will be
0 5 7 9 16 55
Hence, answer is (C).
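The trace can be replayed with a small 2-way set-associative LRU simulator (set index taken as block number mod 4, as the explanation assumes):

```python
def lru_2way(refs, sets=4, ways=2):
    cache = [[] for _ in range(sets)]   # each set kept in LRU order
    for b in refs:
        s = cache[b % sets]
        if b in s:
            s.remove(b)                 # hit: refresh recency
        elif len(s) == ways:
            s.pop(0)                    # miss on a full set: evict LRU block
        s.append(b)
    return sorted(x for s in cache for x in s)

assert lru_2way([0, 5, 3, 9, 7, 0, 16, 55]) == [0, 5, 7, 9, 16, 55]
```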
Question 225

Two shared resources R1 and R2 are used by processes P1 and P2. Each process has a certain priority for accessing each resource. Let Tij denote the priority of Pi for accessing Rj. A process Pi can snatch a resource Rk from process Pj if Tik is greater than Tjk. Given the following:

(I) T11 > T21
(II) T12 > T22
(III) T11 < T21
(IV) T12 < T22 
Which of the following conditions ensures that P1 and P2 can never deadlock?

A
(I) and (IV)
B
(II) and (III)
C
(I) and (II)
D
None of the above
       Operating-Systems       Deadlock       GATE 2005-IT
Question 225 Explanation: 
A deadlock requires a circular wait. If both resources can always be snatched by the same process, deadlock can never occur: with (I) and (II), P1 can take both R1 and R2, run to completion, and then both resources go to P2. Similarly, (III) and (IV) favour P2. So either (I) and (II) or (III) and (IV) ensures that deadlock will never occur.
Question 226

A disk has 8 equidistant tracks. The diameters of the innermost and outermost tracks are 1 cm and 8 cm respectively. The innermost track has a storage capacity of 10 MB.
What is the total amount of data that can be stored on the disk if it is used with a drive that rotates it with (i) Constant Linear Velocity (ii) Constant Angular Velocity?

A
(i) 80 MB (ii) 2040 MB
B
(i) 2040 MB (ii) 80 MB
C
(i) 80 MB (ii) 360 MB
D
(i) 80 MB (ii) 360 MB
       Operating-Systems       Memory-Management       GATE 2005-IT
Question 226 Explanation: 
Constant linear velocity:
Diameter of inner track = d = 1 cm
Circumference of inner track
= 2 * 3.14 * d/2
= 3.14 cm
Storage capacity = 10 MB (given)
Circumference of all equidistant tracks
= 2 * 3.14 * (0.5+1+1.5+2+2.5+3+3.5+4)
= 113.04 cm
Here, 3.14 cm holds 10 MB
Therefore, 1 cm holds 10/3.14 = 3.18 MB.
So, 113.04 cm holds
113.04 * 3.18 ≈ 360 MB
So, total amount of data that can be held on the disk = 360 MB.
For constant angular velocity:
In case of CAV, the disk rotates at a constant angular speed. Same rotation time is taken by all the tracks.
Total amount of data that can be stored on the disk
= 8 * 10 = 80 MB
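Both capacities follow from a few lines of arithmetic (a sketch; using π instead of 3.14 makes the CLV total come out as exactly 360 MB):

```python
import math

tracks, inner_cap = 8, 10.0               # innermost track holds 10 MB
circum = [math.pi * d for d in range(1, tracks + 1)]   # diameters 1..8 cm
density = inner_cap / circum[0]           # MB/cm at maximum density

clv = density * sum(circum)               # every track at maximum density
cav = inner_cap * tracks                  # every track equals the innermost
assert round(clv) == 360 and cav == 80
```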
Question 227

A disk has 8 equidistant tracks. The diameters of the innermost and outermost tracks are 1 cm and 8 cm respectively. The innermost track has a storage capacity of 10 MB.
If the disk has 20 sectors per track and is currently at the end of the 5th sector of the inner-most track and the head can move at a speed of 10 meters/sec and it is rotating at constant angular velocity of 6000 RPM, how much time will it take to read 1 MB contiguous data starting from the sector 4 of the outer-most track?

A
13.5 ms
B
10 ms
C
9.5 ms
D
20 ms
       Operating-Systems       Memory-Management       GATE 2005-IT
Question 227 Explanation: 
Radius of inner track is 0.5cm (where the head is standing) and the radius of outermost track is 4cm.
So, the header has to seek (4 - 0.5) = 3.5cm.
For 10m ------- 1s
1m ------- 1/10 s
100cm ------- 1/(10×100) s
3.5cm ------- 3.5/1000 s = 3.5ms
So, the header will take 3.5ms.
Now, angular velocity is constant and the head is at the end of sector 5. To reach the front of sector 4 it must rotate over 18 sectors.
6000 rotation in 60000ms.
1 rotation in 10ms (time to traverse 20 sectors).
So, to traverse 18 sectors, it takes 9ms.
In 10ms, 10MB data is read.
So, 1MB data can be read in 1ms.
∴ Total time = 1+9+3.5 = 13.5ms
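The three components add up as follows (a sketch of the same arithmetic):

```python
seek_ms = (4.0 - 0.5) / 1.0          # 3.5 cm at 10 m/s = 1 cm/ms
rotation_ms = 60_000 / 6000          # 10 ms per rotation at 6000 RPM
rot_wait_ms = rotation_ms * 18 / 20  # rotate past 18 of the 20 sectors
read_ms = rotation_ms / 10           # 10 MB per rotation -> 1 MB in 1 ms
total = seek_ms + rot_wait_ms + read_ms
assert abs(total - 13.5) < 1e-9      # option (A)
```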
Question 228

What is the bit rate of a video terminal unit with 80 characters/line, 8 bits/character and horizontal sweep time of 100 µs (including 20 µs of retrace time)?

A
8 Mbps
B
6.4 Mbps
C
0.8 Mbps
D
0.64 Mbps
       Operating-Systems       I/O-Handling       GATE 2004-IT
Question 228 Explanation: 
Horizontal sweep time = 100µs
Total number of bits transmitted per sweep = 80 * 8 = 640 bits
Bit rate = 640 bits / 100 µs = 6.4 × 10⁶ bps = 6.4 Mbps
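The same computation in a few lines of Python:

```python
bits_per_sweep = 80 * 8            # 80 characters x 8 bits
sweep_s = 100e-6                   # horizontal sweep time, incl. retrace
rate_bps = bits_per_sweep / sweep_s
assert abs(rate_bps - 6.4e6) < 1   # 6.4 Mbps, option (B)
```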
Question 229

Which one of the following is NOT shared by the threads of the same process?

A
Stack
B
Address Space
C
File Descriptor Table
D
Message Queue
       Operating-Systems       Memory-Management       GATE 2004-IT
Question 229 Explanation: 
Threads cannot share the stack: each thread needs its own stack to maintain its own sequence of function calls.
Question 230

Which of the following commands or sequences of commands will rename a file x to file y in a unix system?

I. mv y,x
II. mv x,y
III. cp y, x(rm x)
IV. cp x, y (rm x) 
A
II and III
B
II and IV
C
I and III
D
II only
       Operating-Systems       Unix       GATE 2004-IT
Question 230 Explanation: 
Note: Out of syllabus.
Question 231

Consider a pipeline processor with 4 stages S1 to S4. We want to execute the following loop:
For(i=1; i<=1000; i++)
{I1,I2,I3,I4}
where the time taken (in ns) by instructions I1 to I4 for stages S1 to S4 are given below:

       S1   S2   S3   S4
I1:    1    2    1    2
I2:    2    1    2    1
I3:    1    1    2    1
I4:    2    1    2    1 
The output of I1 for i=2 will be available after

A
11 ns
B
12 ns
C
13 ns
D
28 ns
       Operating-Systems       Memory-Management       GATE 2004-IT
Question 231 Explanation: 

So, total time would be 13 ns.
Question 232

Consider a fully associative cache with 8 cache blocks (numbered 0-7) and the following sequence of memory block requests:
4, 3, 25, 8, 19, 6, 25, 8, 16, 35, 45, 22, 8, 3, 16, 25, 7
If LRU replacement policy is used, which cache block will have memory block 7?

A
4
B
5
C
6
D
7
       Operating-Systems       Page-Replacement-Algorithm       GATE 2004-IT
Question 232 Explanation: 
When 45 comes, the cache contents are:
4, 3, 25, 8, 19, 6, 16, 35
LRU order (first element being least recently used)
[4, 3, 19, 6, 25, 8, 16, 35]
So, 45 replaces 4.
45, 3, 25, 8, 19, 6, 16, 35 [3, 19, 6, 25, 8, 16, 35, 45]
Similarly, 22 replaces 3 to give,
45, 22, 25, 8, 19, 6, 16, 35 [19, 6, 25, 8, 16, 35, 45, 22]
8 hits in cache.
45, 22, 25, 8, 19, 6, 16, 35 [19, 6, 25, 16, 35, 45, 22, 8]
3 replaces 19,
45, 22, 25, 8, 3, 6, 16, 35 [6, 25, 16, 35, 45, 22, 8, 3]
16 and 25 hits in cache,
45, 22, 25, 8, 3, 6, 16, 35 [6, 35, 45, 22, 8, 3, 16, 25]
Finally, 7 replaces 6, which is in block 5.
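The trace can be replayed mechanically; the sketch below tracks both the cache-block assignment and the LRU order:

```python
def final_cache(refs, blocks=8):
    cache = [None] * blocks      # cache block index -> memory block
    order = []                   # memory blocks, least recently used first
    for b in refs:
        if b in cache:
            order.remove(b)                      # hit: refresh recency
        elif None in cache:
            cache[cache.index(None)] = b         # cold miss: first free block
        else:
            victim = order.pop(0)                # evict the LRU memory block
            cache[cache.index(victim)] = b       # reuse its cache block
        order.append(b)
    return cache

refs = [4, 3, 25, 8, 19, 6, 25, 8, 16, 35, 45, 22, 8, 3, 16, 25, 7]
assert final_cache(refs).index(7) == 5   # block 7 lands in cache block 5
```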
Question 233

A CPU has only three instructions I1, I2 and I3, which use the following signals in time steps T1-T5:

I1 : 
T1 : Ain, Bout, Cin 
T2 : PCout, Bin 
T3 : Zout, Ain 
T4 : Bin, Cout 
T5 : End 
I2 : 
T1 : Cin, Bout, Din 
T2 : Aout, Bin 
T3 : Zout, Ain 
T4 : Bin, Cout 
T5 : End 
I3 : 
T1 : Din, Aout 
T2 : Ain, Bout 
T3 : Zout, Ain 
T4 : Dout, Ain 
T5 : End  
Which of the following logic functions will generate the hardwired control for the signal Ain ?

A
T1.I1 + T2.I3 + T4.I3 + T3
B
(T1 + T2 + T3).I3 + T1.I1
C
(T1 + T2 ).I1 + (T2 + T4).I3 + T3
D
(T1 + T2 ).I2 + (T1 + T3).I1 + T3
       Operating-Systems       Pipeling-and-addressing-modes       GATE 2004-IT
Question 233 Explanation: 
We just have to see which option gives 1 whenever Ain is 1 and 0 otherwise.
So, Ain is 1 during T3 of I1, I2 and I3, during T1 of I1, and during T2 and T4 of I3. So, the answer will be
T1.I1 + T2.I3 + T4.I3 + T3.I1 + T3.I2 + T3.I3
Since, CPU is having only 3 instructions, T3.I1 + T3.I2 + T3.I3 can be replaced by T3.
So, T1.I1 + T2.I3 + T4.I3 + T3
Question 234

In an enhancement of a design of a CPU, the speed of a floating point unit has been increased by 20% and the speed of a fixed point unit has been increased by 10%. What is the overall speedup achieved if the ratio of the number of floating point operations to the number of fixed point operations is 2:3 and the floating point operation used to take twice the time taken by the fixed point operation in the original design?

A
1.155
B
1.185
C
1.255
D
1.285
       Operating-Systems       Pipeling-and-addressing-modes       GATE 2004-IT
Question 234 Explanation: 
Speedup = Original time taken/New time taken
Let x be the time taken by a fixed point operation; a floating point operation then takes 2x.
Original time taken = (3x + 2×2x)/5 = 7x/5
New time taken = (3x/1.1 + 2×2x/1.2)/5 = 8x/(1.32×5)
So, speedup = (7x/5)/(8x/(1.32×5)) = 7×1.32/8 = 1.155
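The speedup arithmetic can be verified directly (a Python check; 2 floating point operations of cost 2x per 3 fixed point operations of cost x, with speedups 1.2 and 1.1 respectively):

```python
x = 1.0
original = 3 * x + 2 * (2 * x)                  # 7x per 5 operations
enhanced = 3 * x / 1.1 + 2 * (2 * x) / 1.2      # each class sped up separately
speedup = original / enhanced
print(round(speedup, 3))  # 1.155
```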
Question 235

The storage area of a disk has innermost diameter of 10 cm and outermost diameter of 20 cm. The maximum storage density of the disk is 1400 bits/cm. The disk rotates at a speed of 4200 RPM. The main memory of a computer has 64-bit word length and 1µs cycle time. If cycle stealing is used for data transfer from the disk, the percentage of memory cycles stolen for transferring one word is

A
0.5%
B
1%
C
5%
D
10%
       Operating-Systems       Memory-Management       GATE 2004-IT
Question 235 Explanation: 
Maximum storage density is given, so consider the innermost track to get its capacity:
= 2 × 3.14 × 5 × 1400 bits = 3.14 × 14000 bits (≈ 44,000 bits per track)
Time for one rotation = 60/4200 s = 1/70 s
Therefore, the time required to read one 64-bit word
= (10^6 × 64)/(70 × 3.14 × 14000) μs ≈ 20.8 μs
Since the memory cycle time is 1 μs, the disk steals one cycle every 20.8 μs.
Therefore, % of memory cycles stolen = (1/20.8) × 100 ≈ 5%
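The figures above can be reproduced numerically (a Python sketch of the same arithmetic; variable names are mine):

```python
import math

density = 1400                 # bits/cm, usable on every track at max density
inner_diameter = 10            # cm, the innermost (capacity-limiting) track
track_bits = math.pi * inner_diameter * density   # bits on one track
rotation_us = 60 / 4200 * 1e6                     # one revolution in microseconds
word_time_us = 64 / track_bits * rotation_us      # time to stream one 64-bit word
stolen_pct = 1 / word_time_us * 100               # one 1 us memory cycle per word
print(round(word_time_us, 1), round(stolen_pct, 1))  # 20.8 4.8 (about 5%)
```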
Question 236

A disk has 200 tracks (numbered 0 through 199). At a given time, it was servicing the request of reading data from track 120, and at the previous request, service was for track 90. The pending requests (in order of their arrival) are for track numbers

 30 70 115 130 110 80 20 25 
How many times will the head change its direction for the disk scheduling policies SSTF(Shortest Seek Time First) and FCFS (First Come Fist Serve)

A
2 and 3
B
3 and 3
C
3 and 4
D
4 and 4
       Operating-Systems       Memory-Management       GATE 2004-IT
Question 236 Explanation: 
SSTF: (90) 120 115 110 130 80 70 30 25 20
Direction changes at 120, 110, 130.
FCFS: (90) 120 30 70 115 130 110 80 20 25
Direction changes at 120, 30, 130, 20.
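The direction counts can be confirmed by simulating both policies (a Python sketch; `sstf_order` and `direction_changes` are hypothetical helper names, and the initial direction is upward since the head moved 90 → 120):

```python
def direction_changes(start, prev, order):
    # Count reversals of head movement, given where the head came from.
    changes = 0
    direction = 1 if start > prev else -1
    pos = start
    for track in order:
        d = 1 if track > pos else -1
        if d != direction:
            changes += 1
            direction = d
        pos = track
    return changes

def sstf_order(start, pending):
    # Repeatedly service the pending request closest to the current position.
    pending, pos, order = pending[:], start, []
    while pending:
        nxt = min(pending, key=lambda t: abs(t - pos))
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

reqs = [30, 70, 115, 130, 110, 80, 20, 25]
print(direction_changes(120, 90, sstf_order(120, reqs)))  # SSTF: 3
print(direction_changes(120, 90, reqs))                   # FCFS: 4
```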
Question 237

In a certain operating system, deadlock prevention is attempted using the following scheme. Each process is assigned a unique timestamp, and is restarted with the same timestamp if killed. Let Ph be the process holding a resource R, Pr be a process requesting for the same resource R, and T(Ph) and T(Pr) be their timestamps respectively. The decision to wait or preempt one of the processes is based on the following algorithm.

if T(Pr) < T(Ph) then 
      kill Pr
else 
      wait
Which one of the following is TRUE?

A
The scheme is deadlock-free, but not starvation-free
B
The scheme is not deadlock-free, but starvation-free
C
The scheme is neither deadlock-free nor starvation-free
D
The scheme is both deadlock-free and starvation-free
       Operating-Systems       Process-Management       GATE 2004-IT
Question 237 Explanation: 
A killed process restarts with the same timestamp it had originally, and that timestamp never changes. Since the requester Pr is killed whenever T(Pr) < T(Ph), a restarted process may again and again find its resource held by a process with a larger timestamp and be killed once more, indefinitely. Deadlock, however, cannot occur: a process waits only when T(Pr) > T(Ph), so timestamps strictly decrease along any wait-for chain and a cycle can never form.
So, starvation is possible, but deadlock is not possible.
Question 238

A process executes the following segment of code:

for(i = 1; i < = n; i++)
   fork();
The number of new processes created is

A
n
B
n(n+1)/2
C
2^n - 1
D
3^n - 1
       Operating-Systems       Process-Management       GATE 2004-IT
Question 238 Explanation: 
The number of new processes or child processes created is,
2^n - 1.
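The 2^n − 1 count can be seen by doubling the process population at each unconditional fork (a counting sketch in Python, not an actual fork):

```python
def children_created(n_forks):
    # Every fork() duplicates each currently existing process.
    procs = 1
    for _ in range(n_forks):
        procs *= 2
    return procs - 1   # everything except the original parent is a child

# n = 5 matches Question 2's five executed fork() calls.
print(children_created(1), children_created(5))  # 1 31
```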
Question 239

The semaphore variables full, empty and mutex are initialized to 0, n and 1, respectively. Process P1 repeatedly adds one item at a time to a buffer of size n, and process P2 repeatedly removes one item at a time from the same buffer using the programs given below. In the programs, K, L, M and N are unspecified statements.

P1:  while (1) {     
                 K; P(mutex); 
                 Add an item to the buffer; 
                 V(mutex); L; 
     } 
P2:  while (1) {    
                 M; P(mutex); 
                 Remove an item from the buffer; 
                 V(mutex); N; 
     }  
The statements K, L, M and N are respectively

A
P(full), V(empty), P(full), V(empty)
B
P(full), V(empty), P(empty), V(full)
C
P(empty), V(full), P(empty), V(full)
D
P(empty), V(full), P(full), V(empty)
       Operating-Systems       Process-Management       GATE 2004-IT
Question 239 Explanation: 
Process P1 is the producer and process P2 is the consumer. Semaphore 'full' is initialized to 0, meaning there is no item in the buffer. Semaphore 'empty' is initialized to n, meaning there is space for n items in the buffer.
In process P1, wait on semaphore 'empty' (K) ensures that if there is no space in the buffer then P1 cannot produce more items. Signal on semaphore 'full' (L) signifies that one item has been added to the buffer.
In process P2, wait on semaphore 'full' (M) ensures that if the buffer is empty then the consumer cannot consume any item. Signal on semaphore 'empty' (N) frees up one space in the buffer after consumption of an item.
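The chosen statements (K = P(empty), L = V(full), M = P(full), N = V(empty)) can be exercised with threads as a sketch; `threading.Semaphore` and `Lock` stand in for the counting semaphores and the binary mutex:

```python
import threading

n = 5
empty = threading.Semaphore(n)   # counts free slots, initialized to n
full = threading.Semaphore(0)    # counts buffered items, initialized to 0
mutex = threading.Lock()         # initialized to 1 (unlocked)
buf, consumed = [], []

def producer(items):             # plays P1
    for x in items:
        empty.acquire()          # K: P(empty) -- wait for a free slot
        with mutex:              # P(mutex) ... V(mutex)
            buf.append(x)
        full.release()           # L: V(full) -- announce a new item

def consumer(count):             # plays P2
    for _ in range(count):
        full.acquire()           # M: P(full) -- wait for an item
        with mutex:
            consumed.append(buf.pop(0))
        empty.release()          # N: V(empty) -- free one slot

items = list(range(20))
t1 = threading.Thread(target=producer, args=(items,))
t2 = threading.Thread(target=consumer, args=(len(items),))
t1.start(); t2.start(); t1.join(); t2.join()
print(consumed == items)  # True: all items cross the bounded buffer in order
```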
Question 240

In a virtual memory system, size of virtual address is 32-bit, size of physical address is 30-bit, page size is 4 Kbyte and size of each page table entry is 32-bit. The main memory is byte addressable. Which one of the following is the maximum number of bits that can be used for storing protection and other information in each page table entry?

A
2
B
10
C
12
D
14
       Operating-Systems       Process-Management       GATE 2004-IT
Question 240 Explanation: 
A page table entry must contain bits for the frame number plus bits for information like the dirty bit, reference bit, protection bits, etc.
No. of frames = Physical memory size/Page size = 2^30/2^12 = 2^18
So 18 bits are needed for the frame number. With 18 + k = 32 (PT entry size = 32 bits),
k = 14 bits are left for protection and other information.
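The bit accounting above can be restated in a few lines (a Python sketch; variable names are mine):

```python
phys_bits = 30
page_size_bytes = 4 * 1024
pte_bits = 32

offset_bits = page_size_bytes.bit_length() - 1   # 12, since 4096 = 2**12
frame_bits = phys_bits - offset_bits             # 18 bits identify a frame
spare_bits = pte_bits - frame_bits               # left for protection/dirty/valid bits
print(spare_bits)  # 14
```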
Question 241

Consider the following schedule S of transactions T1 and T2:

   T1	         T2
Read(A)
A = A - 10	
               Read (A)
               Temp = 0.2*A 
               Write(A) 
               Read(B)
Write(A)
Read(B) 
B = B + 10 
Write(B)	
               B = B + Temp
               Write(B) 
Which of the following is TRUE about the schedule S?

A
S is serializable only as T1, T2
B
S is serializable only as T2, T1
C
S is serializable both as T1, T2 and T2, T1
D
S is serializable either as T1 or as T2
E
None of these
       Operating-Systems       Process-Management       GATE 2004-IT
Question 241 Explanation: 
The given schedule is not serializable because a cycle exists in the precedence graph. Therefore, options A, B, C and D are not correct.
Question 242
The difference between a named pipe and a regular file in Unix is that
A
Unlike a regular file, named pipe is a special file
B
The data in a pipe is transient, unlike the content of a regular file
C
Pipes forbid random accessing, while regular files do allow this.
D
All of the above
       Operating-Systems       UNIX-Operating-System       ISRO-2018
Question 242 Explanation: 
→ Named pipe is a special instance of a file that has no contents on the filesystem.
→ A FIFO special file (a named pipe) is similar to a pipe, except that it is accessed as part of the filesystem. It can be opened by multiple processes for reading or writing. When processes are exchanging data via the FIFO, the kernel passes all data internally without writing it to the filesystem. Thus, the FIFO special file has no contents on the filesystem; the filesystem entry merely serves as a reference point so that processes can access the pipe using a name in the filesystem.
→ The kernel maintains exactly one pipe object for each FIFO special file that is opened by at least one process. The FIFO must be opened on both ends (reading and writing) before data can be passed. Normally, opening the FIFO blocks until the other end is opened also.
Question 243
Processes P1 and P2 have a producer-consumer relationship, communicating by the use of a set of shared buffers. Increasing the number of buffers is likely to do which of the following? I. Increase the rate at which requests are satisfied (throughput) II. Decrease the likelihood of deadlock III. Increase the ease of achieving a correct implementation
A
III Only
B
II Only
C
I Only
D
II and III Only
       Operating-Systems       Deadlock       ISRO-2018
Question 243 Explanation: 
Only statement I is satisfied: increasing the number of buffers increases the rate at which requests are satisfied (throughput), but it neither alters the possibility of deadlock nor plays any role in the ease of a correct implementation.
Question 244
The term ‘aging’ refers to
A
booting up the priority of the process in multi-level of queue without feedback
B
keeping track of the following a page has been in memory for the purpose of LRU replacement
C
letting job reside in memory for a certain amount of time so that the number of pages required can be estimated accurately
D
gradually increasing the priority of jobs that wait in the system for a long time to remedy infinite blocking
       Operating-Systems       Memory-Management       ISRO-2007
Question 244 Explanation: 
In operating systems, aging is a scheduling technique used to avoid starvation: the priority of jobs that have waited in the system for a long time is gradually increased relative to newer processes, remedying infinite blocking.
Question 245
Consider a set of n tasks with known runtimes r1, r2….rn to be run on a uniprocessor machine. Which of the following processor scheduling algorithms will result in the maximum throughput?
A
Round Robin
B
Shortest job first
C
Highest response ratio next
D
first come first served
       Operating-Systems       CPU-Scheduling       ISRO-2007
Question 245 Explanation: 
Throughput means the total number of tasks completed per unit time.
Shortest job first scheduling is a scheduling policy that selects the waiting process with the smallest execution time to execute next.
Thus, in shortest job first scheduling, the shortest jobs are executed first, so the maximum number of tasks complete in a given time, which maximizes throughput.
Question 246
Consider a job scheduling problem with 4 jobs J1, J2, J3, J4 and with corresponding deadlines: ( d1, d2, d3, d4) = (4, 2, 4, 2).
Which of the following is not a feasible schedule without violating any job schedule?
A
J2, J4, J1, J3
B
J4, J1, J2, J3.
C
J4, J2, J1, J3.
D
J4, J2, J3, J1
       Operating-Systems       Deadlock       ISRO-2007
Question 246 Explanation: 
→ Feasible schedule is completing all the jobs within deadline.
→ From the deadlines, we can deduce that jobs J2 and J4 must complete by time 2, whereas the remaining two require time 4.
→ So J2 and J4 (in either order) must run first, followed by J1 and J3 in either order.
→ From the given options, A, C and D satisfy this, so they are feasible schedules.
→ In option B the order is J4, J1, J2, J3: J2 finishes third, at time 3, missing its deadline of 2, so B is not a feasible schedule.
Question 247
Round Robin schedule is essentially the preemptive version of
A
FIFO
B
Shortest job first
C
Shortest remaining time
D
Longest remaining time
       Operating-Systems       CPU-Scheduling       ISRO-2007
Question 247 Explanation: 
Round robin is essentially FIFO (FCFS) with preemption: each process runs for one time quantum and is then preempted and placed at the tail of the ready queue.
Question 248
What is the name of the technique in which the operating system of a computer executes several programs concurrently by switching back and forth between them?
A
Partitioning
B
Multi-tasking
C
Windowing
D
Paging
       Operating-Systems       Multiprogramming       ISRO-2007
Question 248 Explanation: 
In a multitasking system, a computer executes several programs simultaneously by switching them back and forth to increase the user interactivity. Processes share the CPU and execute in an interleaving manner. This allows the user to run more than one program at a time.
Question 249
Virtual memory is
A
Part of Main Memory only used for swapping
B
A technique to allow a program, of size more than the size of main memory, to run
C
Part of secondary storage used in program execution
D
None of these
       Operating-Systems       Virtual Memory       ISRO-2007
Question 249 Explanation: 
A computer can address more memory than the amount physically installed on the system. This extra memory is actually called virtual memory and it is a section of a hard disk that's set up to emulate the computer's RAM.
The main visible advantage of this scheme is that programs can be larger than physical memory. Virtual memory serves two purposes. First, it allows us to extend the use of physical memory by using disk. Second, it allows us to have memory protection, because each virtual address is translated to a physical address
Question 250
Disk requests are received by a disk drive for cylinder 5, 25, 18, 3, 39, 8 and 35 in that order. A seek takes 5 msec per cylinder moved. How much seek time is needed to serve these requests for a Shortest Seek First (SSF) algorithm? Assume that the arm is at cylinder 20 when the last of these requests is made with none of the requests yet served
A
125 msec
B
295 msec
C
575 msec
D
750 msec
       Operating-Systems       Disk-Scheduling       ISRO-2007
Question 250 Explanation: 
The arm is at cylinder 20, so the service order = 18, 25, 35, 39, 8, 5, 3.
Seek time = (20−18) + (25−18) + (35−25) + (39−35) + (39−8) + (8−5) + (5−3)
= 2 + 7 + 10 + 4 + 31 + 3 + 2 = 59
Total seek time = 59 × 5 = 295 msec
Question 251
Consider a system having ‘m’ resources of the same type. The resources are shared by 3 processes A, B, C, which have peak time demands of 3, 4, 6 respectively. The minimum value of ‘m’ that ensures that deadlock will never occur is
A
11
B
12
C
13
D
14
       Operating-Systems       Deadlock       ISRO-2007
Question 251 Explanation: 
Minimum resources required to avoid deadlock = (m1 − 1) + (m2 − 1) + … + (my − 1) + 1
where mi = peak resource demand of process i
y = number of processes
So, the number of resources that ensures deadlock will never occur = (3−1) + (4−1) + (6−1) + 1 = 11
Option (A) is correct.
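The formula is one line of code (a Python sketch; the function name is mine):

```python
def min_resources_deadlock_free(demands):
    # Worst case: every process holds one less than its peak demand and is
    # stuck. One extra resource lets some process reach its peak, finish,
    # and release everything, so deadlock can never occur.
    return sum(d - 1 for d in demands) + 1

print(min_resources_deadlock_free([3, 4, 6]))  # 11
```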
Question 252
A task in a blocked state
A
is executable
B
is running
C
must still be placed in the run queues
D
is waiting for some temporarily unavailable resources
       Operating-Systems       Peocess-state-transition-diagram       ISRO-2007
Question 252 Explanation: 
Waiting or Blocked state is when a process has requested some input/output and is waiting for the resource.
Question 253
Semaphores
A
synchronize critical resources to prevent deadlock
B
synchronize critical resources to prevent contention
C
are used to do I/O
D
are used for memory management
       Operating-Systems       Process-Synchronization       ISRO-2007
Question 253 Explanation: 
Semaphore is a variable and is used to solve critical section problem and to achieve process synchronization in the multi processing environment.
Question 254
On a system using non-preemptive scheduling, processes with expected run times of 5, 18, 9 and 12 are in the ready queue. In what order should they be run to minimize wait time?
A
5, 12, 9, 18
B
5, 9, 12, 18
C
12, 18, 9, 5
D
9, 12, 18, 5
       Operating-Systems       CPU-Scheduling       ISRO-2007
Question 254 Explanation: 
The processes should execute in SJF manner to get the lowest waiting time. So, the order should be 5, 9, 12, 18.
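That the SJF order minimizes total waiting time can be checked by brute force over all orderings (a Python sketch; `total_wait` is a hypothetical helper):

```python
from itertools import permutations

def total_wait(order):
    # Each job waits for the sum of all bursts scheduled before it.
    elapsed, wait = 0, 0
    for burst in order:
        wait += elapsed
        elapsed += burst
    return wait

runs = [5, 18, 9, 12]
best = min(permutations(runs), key=total_wait)
print(best, total_wait(best))  # (5, 9, 12, 18) 45
```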
Question 255
The number of page frames that must be allocated to a running process in a virtual memory environment is determined by
A
the instruction set architecture
B
page size
C
number of processes in memory
D
physical memory size
       Operating-Systems       Virtual Memory       ISRO-2007
Question 255 Explanation: 
There are two important tasks in virtual memory management: a page-replacement strategy and a frame-allocation strategy. The frame-allocation strategy determines the minimum number of frames that must be allocated. The absolute minimum number of frames a process must be allocated depends on the system architecture, and corresponds to the number of pages that could be touched by a single (machine) instruction. So, it is determined by the instruction set architecture.
Question 256
Consider a small 2-way set-associative cache memory, consisting of four blocks. For choosing the block to be replaced, use the least recently (LRU) scheme. The number of cache misses for the following sequence of block addresses is 8, 12, 0, 12, 8
A
2
B
3
C
4
D
5
       Operating-Systems       Page-Replacement-algorithm        ISRO-2007
Question 256 Explanation: 
It is given that the cache is 2-way set associative, so there will be 2 blocks within each cache set.
It is also given that there are 4 blocks in the cache.
So, no of sets = no. of blocks in the cache / no. of blocks in a set = 4 / 2 = 2
In set associative cache mapping, the given main memory block is mapped to a cache set which is obtained by: (block number) mod (number of sets).
Here there are 2 sets and all the given main memory blocks are even numbers so all of them are mapped to set-0, as 8 mod 2 = 0, 12 mod 2 = 0
and 0 mod 2 = 0.
The given block sequence 8, 12, 0, 12, 8 therefore competes for the two blocks of set-0 under LRU.
The first accesses of 8, 12 and 0 are all misses (0 evicts the least recently used block, 8). The second access of 12 is a hit, but the second access of 8 is a miss because 8 was evicted.
So in total we have 4 misses.
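The miss count can be checked with a small set-associative LRU simulation (a Python sketch; the function name is mine):

```python
def set_assoc_misses(blocks, nsets=2, ways=2):
    # Each set is a list ordered from least to most recently used.
    sets = [[] for _ in range(nsets)]
    misses = 0
    for b in blocks:
        s = sets[b % nsets]        # set index = block number mod number of sets
        if b in s:
            s.remove(b)
            s.append(b)            # hit: refresh recency
        else:
            misses += 1
            if len(s) == ways:
                s.pop(0)           # evict the LRU way
            s.append(b)
    return misses

print(set_assoc_misses([8, 12, 0, 12, 8]))  # 4
```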
Question 257
Which of the following RAID level provides the highest Data Transfer Rate (Read/Write)
A
RAID 1
B
RAID 3
C
RAID 4
D
RAID 5
       Operating-Systems       RAID       ISRO-2007
Question 257 Explanation: 
Disk mirroring, also known as RAID 1, is the replication of data to two or more disks. Disk mirroring is a good choice for applications that require high performance and high availability, such as transactional applications, email and operating systems.
Question 258
Feedback queues
A
are very simple to implement
B
dispatch tasks according to execution characteristics
C
are used to favour real time tasks
D
require manual intervention to implement properly
       Operating-Systems       CPU-Scheduling       ISRO-2007
Question 258 Explanation: 
In a multilevel queue-scheduling algorithm, processes are permanently assigned to a queue on entry to the system. Processes do not move between queues. This setup has the advantage of low scheduling overhead, but the disadvantage of being inflexible.

Multilevel feedback queue scheduling, however, allows a process to move between queues. The idea is to separate processes with different CPU-burst characteristics. If a process uses too much CPU time, it will be moved to a lower-priority queue. Similarly, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. This form of aging prevents starvation.
Question 259
A particular parallel program computation requires 100 sec when executed on a single processor. If 40% of this computation is inherently sequential (i.e. will not benefit from additional processors), then theoretically best possible elapsed times of this program running with 2 and 4 processors, respectively, are
A
20 sec and 10 sec
B
30 sec and 15 sec
C
50 sec and 25 sec
D
70 sec and 55 sec
       Operating-Systems       ISRO-2018
Question 259 Explanation: 
→ Since the computation requires 100 seconds on a single processor, the 40% sequential part takes 40 seconds regardless of the number of processors, while the remaining 60% takes 60/(#processors) seconds in parallel: 30 seconds on two processors and 15 seconds on four.
→ Hence, in total, the computation takes 40+30=70 seconds on two processors and 40+15=55 seconds on four processors.
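This is Amdahl's law, and the two elapsed times fall out directly (a Python sketch; the function name is mine):

```python
def elapsed_time(total, seq_fraction, processors):
    # Amdahl's law: the sequential part gains nothing from extra processors;
    # only the parallel remainder is divided among them.
    seq = total * seq_fraction
    return seq + (total - seq) / processors

print(elapsed_time(100, 0.4, 2), elapsed_time(100, 0.4, 4))  # 70.0 55.0
```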
Question 260
The Operating System of a computer may periodically collect all the free memory space to form contiguous block of free space. This is called:
A
Concatenation
B
Garbage collection
C
Collision
D
Dynamic Memory Allocation
       Operating-Systems       Memory-Management       ISRO-2018
Question 260 Explanation: 
→ The Operating System of a computer may periodically collect all the free memory space to form a contiguous block of free space. This is called garbage collection
→ We can also use compaction to minimize the probability of external fragmentation.
→ In compaction, all the free partitions are made contiguous and all the loaded partitions are brought together.
Question 261
The following C program If we execute this core segment, how many times the string yes will be printed ?
A
Only once
B
2 times
C
4 times
D
8 times
       Operating-Systems       System-Calls       ISRO-2018
Question 261 Explanation: 
By the direct formula, the number of child processes created is 2^n − 1 = 3 here (n = 2 fork() calls), and the parent also prints 'yes' once. So 'yes' will be printed 4 times.
Question 262
Determine the number of page faults when references to pages occur in order – 1, 2, 4, 5, 2, 1, 2, 4. Assume that the main memory can accommodate 3 pages and the main memory already has the pages 1 and 2, with page 1 having brought earlier than page 2.(assume LRU algorithm is used)
A
3
B
4
C
5
D
None of these
       Operating-Systems       Page-Replacement-algorithm        ISRO-2018
Question 262 Explanation: 
The main memory already has pages 1 and 2, with page 1 having been brought in earlier than page 2. Tracing LRU over the reference string, faults occur on 4, 5, 1 and the final 4. So there are 4 page faults.
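The trace can be checked with an LRU simulation that starts from the preloaded frames (a Python sketch; the function name is mine, and the preload order encodes that page 1 arrived before page 2):

```python
def lru_faults(refs, frames=3, preloaded=(1, 2)):
    mem = list(preloaded)      # ordered least to most recently used
    faults = 0
    for p in refs:
        if p in mem:
            mem.remove(p)
            mem.append(p)      # hit: refresh recency
        else:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)     # evict the least recently used page
            mem.append(p)
    return faults

print(lru_faults([1, 2, 4, 5, 2, 1, 2, 4]))  # 4
```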
Question 263
Consider a system having m resources of the same type. These resources are shared by 3 processes A, B, C which have peak time demands of 3, 4, 6 respectively. The minimum value of m that ensures deadlock will never occur is
A
11
B
12
C
13
D
14
       Operating-Systems       Deadlock       ISRO-2018
Question 263 Explanation: 
A requires 3, B-4, C-6;
→ If A has 2, B has 3, C has 5 then a deadlock will occur i.e., 2+3+5=10.
→ If we have one extra resource then a deadlock will not occur i.e., 10+1=11.
→ If we have equal (or) more than 11 resources then deadlock will never occur.
Question 264
A computer has 1000 K of main memory. The jobs arrive and finish in the sequence Job 1 requiring 200 K arrives Job 2 requiring 350 K arrives Job 3 requiring 300 K arrives Job 1 finishes Job 4 requiring 120 K arrives Job 5 requiring 150 K arrives Job 6 requiring 80 K arrives Among the best fit and first fit, which performs better for this sequence?
A
First fit
B
Best fit
C
Both perform the same
D
None of the above
       Operating-Systems       Memory-Management       ISRO-2018
Question 264 Explanation: 
Main memory = 1000K
Job 1 requiring 200 K arrives
Job 2 requiring 350 K arrives
Job 3 requiring 300 K arrives and assuming continuous allocation:
Free memory = 1000 − (200 + 350 + 300) = 150 K (up to this point first fit and best fit behave the same)
Since, job 1 finishes, Free memory = 200 K and 150 K
Case 1: First fit
Job 4 requiring 120 K arrives
Since 200 K will be the first slot, so Job 4 will acquire this slot only. Remaining memory = 200 – 120 = 80 K
Job 5 requiring 150 K arrives
It will acquire 150 K slot
Job 6 requiring 80 K arrives
It will occupy 80 K slot, so, all jobs will be allocated successfully.
Case 2: Best fit
Job 4 requiring 120 K arrives
It will occupy best fit slot which is 150 K. So, remaining memory = 150 − 120 = 30 K
Job 5 requiring 150 K arrives
It will occupy 200 K slot. So, free space = 200 − 150 = 50 K
Job 6 requiring 80 K arrives
There is no continuous 80 K memory free. So, it will not be able to allocate.
So, first fit is better.
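Both policies can be replayed on the two free holes left after Job 1 finishes (a Python sketch; the assumption, as in the explanation, is contiguous allocation with Job 1's 200 K hole at the lower address, followed by the 150 K hole):

```python
def allocate(holes, jobs, policy):
    # holes: free hole sizes in address order; jobs: request sizes in arrival order.
    holes = holes[:]
    for size in jobs:
        fits = [h for h in holes if h >= size]
        if not fits:
            return False                       # this job cannot be placed
        chosen = fits[0] if policy == "first" else min(fits)
        holes[holes.index(chosen)] = chosen - size
    return True

holes = [200, 150]        # after Job 1 frees its 200 K block
jobs = [120, 150, 80]     # Jobs 4, 5, 6
print(allocate(holes, jobs, "first"), allocate(holes, jobs, "best"))  # True False
```

First fit places all three jobs; best fit strands Job 6 with only 50 K and 30 K fragments.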
Question 265
Disk requests come to a disk driver for cylinders in the order 10, 22, 20, 2, 40, 6, and 38 at a time when the disk drive is reading from cylinder 20. The seek time is 6 ms/cylinder. The total seek time, if the disk arm scheduling algorithm is first-come-first-served, is
A
360 ms
B
850 ms
C
900 ms
D
None of the above
       Operating-Systems       CPU-Scheduling       ISRO-2018
Question 265 Explanation: 
FCFS services the requests in arrival order, starting from cylinder 20:
20 → 10 → 22 → 20 → 2 → 40 → 6 → 38
Head movement = 10 + 12 + 2 + 18 + 38 + 34 + 32 = 146 cylinders.
Total seek time = 146 × 6 ms = 876 ms, which matches none of the listed values.
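The FCFS head movement is a straight accumulation (a Python sketch of the arithmetic):

```python
pos, moved = 20, 0                        # head starts at cylinder 20
for cyl in [10, 22, 20, 2, 40, 6, 38]:    # requests served in arrival order
    moved += abs(cyl - pos)
    pos = cyl
print(moved, moved * 6)  # 146 cylinders, 876 ms at 6 ms per cylinder
```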
Question 266
In multiprogramming systems, it is advantageous if some programs such as editors and compilers can be shared by several users. Which of the following must be true of multi-programmed systems in order that a single copy of a program can be shared by several users. I. The program is a macro II. The program is recursive III. The program is reentrant
A
I only
B
II only
C
Ill only
D
I, II and III
       Operating-Systems       Multiprogramming       ISRO-2018
Question 266 Explanation: 
→ Reentrant code is commonly required in operating systems and in applications intended to be shared in multi-use systems. A programmer writes a reentrant program by making sure that no instructions modify the contents of variable values in other instructions within the program.
→ Each time the program is entered for a user, a data area is obtained which keep all the variable values for that user. The data area is in another part of memory from the program itself. When the program is interrupted to give another user a turn to use the program, information about the data area associated with that user is saved.
→ When the interrupted user of the program is once again given control of the program, information in the saved data area is recovered and the program can be re-entered without concern that the previous user has changed some instruction within the program.
Question 267
Four jobs to be executed on a single processor system arrive at time 0 in the order A, B, C, D . Their burst CPU time requirements are 4, 1, 8, 1 time units respectively. The completion time of A under round robin scheduling with time slice of one time unit is
A
10
B
4
C
8
D
9
       Operating-Systems       CPU-scheduling       ISRO CS 2008
Question 267 Explanation: 
1. All processes are arrived at time 0.
2. Algorithm used for scheduling is round robin with time quantum of one unit time.
3. The order of execution of the processes is: A B C D A C A C A C C C C C
4. A receives its fourth and final time slice during the ninth time unit, so its completion time is 9.
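The round-robin schedule can be replayed mechanically (a Python sketch; the function name is mine, and all arrivals are assumed at time 0 in the order given):

```python
from collections import deque

def rr_completion(bursts, target, quantum=1):
    # Returns the completion time of process `target` under round robin.
    t = 0
    ready = deque(range(len(bursts)))     # all processes arrive at time 0, in order
    left = list(bursts)
    while ready:
        i = ready.popleft()
        run = min(quantum, left[i])
        t += run
        left[i] -= run
        if left[i] > 0:
            ready.append(i)               # not finished: back to the tail
        elif i == target:
            return t

print(rr_completion([4, 1, 8, 1], target=0))  # completion time of A: 9
```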
Question 268
What problem is solved by Dijkstra banker’s algorithm?
A
Cache coherence
B
Mutual exclusion
C
Deadlock recovery
D
Deadlock avoidance
       Operating-Systems       Deadlock       ISRO-2017 May
Question 268 Explanation: 
Deadlock avoidance is the problem solved by Dijkstra's banker's algorithm.
Question 269
What is the output of the following program?
A
10 and 11
B
10
C
11
D
11 and 11
       Operating-Systems       System-Calls       ISRO-2017 May
Question 269 Explanation: 
→ The purpose of the fork system call is to create a child process.
→ In the parent process, the fork call returns the child's process ID, which makes the if condition false, so the parent prints 10.
→ In the child process, fork returns 0, so execution continues with the a++ instruction and the child prints 11.
Question 270
Given the reference to the following pages by a program 0, 9, 0, 1, 8, 1, 8, 7, 8, 7, 1, 2, 8, 2, 7, 8, 2, 3, 8, 3 How many page faults will occur if the program has three-page frames available to it and uses an optimal replacement?
A
7
B
8
C
9
D
None of these
       Operating-Systems       Page-Replacement-algorithm        ISRO-2017 May
Question 270 Explanation: 
Optimal page replacement: replace the page that will not be used for the longest period of time.
Applying this policy to the given reference string with three page frames gives a total of 7 page faults.
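Belady's optimal policy can be simulated by looking ahead in the reference string (a Python sketch; the function name is mine):

```python
def optimal_faults(refs, frames=3):
    mem, faults = [], 0
    for i, p in enumerate(refs):
        if p in mem:
            continue                      # hit: nothing to do
        faults += 1
        if len(mem) == frames:
            future = refs[i + 1:]
            # Evict the resident page whose next use is farthest away
            # (pages never used again sort past the end of `future`).
            victim = max(mem, key=lambda q: future.index(q) if q in future else len(future))
            mem.remove(victim)
        mem.append(p)
    return faults

refs = [0, 9, 0, 1, 8, 1, 8, 7, 8, 7, 1, 2, 8, 2, 7, 8, 2, 3, 8, 3]
print(optimal_faults(refs))  # 7
```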
Question 271
Mutual exclusion problem occurs
A
Between two disjoint processes that do not interact
B
Among processes that share resources
C
Among processes that do not use the same resource
D
Between two processes that uses different resources of different machine
       Operating-Systems       Process-Synchronization       ISRO-2017 May
Question 271 Explanation: 
→ Mutual exclusion is used to avoid concurrent use of the same resource. The mutual exclusion problem therefore arises among processes that share resources; disjoint processes that do not interact, or that use different resources, have nothing to mutually exclude.
Question 272
In a resident – OS computer, which of the following systems must reside in the main memory under all situations?
A
Assembler
B
Linker
C
Loader
D
Compiler
       Operating-Systems       Linker-loader       ISRO CS 2008
Question 272 Explanation: 
Loader is the part of an operating system that is responsible for loading programs and libraries. It is one of the essential stages in the process of starting a program, as it places programs into memory and prepares them for execution.
Loading a program involves tasks such as reading the contents of the executable file containing the program instructions into memory, and then carrying out other required preparatory tasks to prepare the executable for running.
The operating system starts the program by passing control to the loaded program code once the loading process is completed.
Question 273
Raid configurations of the disks are used to provide
A
Fault-tolerance
B
High speed
C
High data density
D
A & B
       Operating-Systems       RAID       ISRO CS 2008
Question 273 Explanation: 
RAID stands for Redundant Array of Independent Disks. A RAID-enabled system uses two or more hard disks to improve the performance and provide some level of fault tolerance for a machine.
Question 274
A critical region
A
is a piece of code which only one process executes at a time
B
is a region prone to deadlock