Time Complexity
Question 1 
Consider the recurrence T(n) = 2T(√n) + 1.
Then T(n) in terms of Θ notation is
Θ(loglogn)  
Θ(logn)  
Θ(√n)  
Θ(n) 
T(n) = 2T(n^{1/2}) + 1
Here, assume n = 2^{k}:
T(2^{k}) = 2T((2^{k})^{1/2}) + 1
T(2^{k}) = 2T(2^{k/2}) + 1
Assume T(2^{k}) = S(k):
S(k) = 2S(k/2) + 1
Apply Master's theorem, Case 1:
a=2, b=2, and a > b^{0} = 1
S(k) = θ(k^{log_{2}2})
S(k) = θ(k)
But S(k) = T(2^{k}), so
T(2^{k}) = θ(k)
But n = 2^{k}, so
k = log n
T(n) = θ(log n)
Question 2 
Consider the following C function.
int fun(int n) {
    int i, j;
    for (i = 1; i <= n; i++) {
        for (j = 1; j < n; j += i) {
            printf("%d %d", i, j);
        }
    }
}
Time complexity of fun in terms of Θ notation is
Θ(n√n)  
Θ(n^{2})  
Θ(n logn)  
Θ(n^{2} logn) 
int fun(int n)
{
int i, j;
for (i = 1; i <= n ; i++) /* The outer loop runs n times */
{
for (j = 1; j < n; j += i) /* For a given i, the inner loop runs about n/i times */
{
printf("%d %d", i, j); /* It takes O(1) time */
}
}
}
So the total work is n/1 + n/2 + n/3 + ... + n/n = n(1 + 1/2 + ... + 1/n) = n·H_n, and the overall complexity of the program is θ(n log n).
Question 3 
The given diagram shows the flowchart for a recursive function A(n). Assume that all statements, except for the recursive calls, have O(1) time complexity. If the worst case time complexity of this function is O(n^{α}), then the least possible value (accurate upto two decimal positions) of α is _________.
2.3219280  
2.3219281  
2.3219282  
2.3219283 
According to the flowchart, in the worst case the function makes 5 recursive calls, each on an input of size n/2.
The remaining statements and return paths execute in only constant time.
So the total number of recursive calls per invocation is 5.
The Recurrence will be
A(n) = 5A(n/2) + O(1)
Apply Master’s theorem,
a=5, b=2, k=0, p=0
Since a > b^{k}, Case 1 applies:
A(n) = θ(n^{log_{b}a}) = θ(n^{log_{2}5}) = θ(n^{2.3219280...})
∴ α = 2.3219280
Question 4 
An algorithm performs (log N)^{1/2} find operations, N insert operations, (log N)^{1/2} delete operations, and (log N)^{1/2} decreasekey operations on a set of data items with keys drawn from a linearly ordered set. For a delete operation, a pointer is provided to the record that must be deleted. For the decreasekey operation, a pointer is provided to the record that has its key decreased. Which one of the following data structures is the most suited for the algorithm to use, if the goal is to achieve the best total asymptotic complexity considering all the operations?
Unsorted array  
Min-heap  
Sorted array  
Sorted doubly linked list 
Question 5 
Consider a rooted n-node binary tree represented using pointers. The best upper bound on the time required to determine the number of subtrees having exactly 4 nodes is O(n^{a}log^{b}n). Then the value of a+10b is ________.
1  
2  
3  
4 
The number of subtrees with exactly 4 nodes can be counted with a single traversal of the tree, which takes O(n) = O(n^{1}log^{0}n) time. So a = 1 and b = 0.
Substitute the values of a and b into the expression:
⇒ a + 10b
⇒ 1 + 10×0
⇒ 1 + 0
⇒ 1
Question 6 
Consider the following pseudo code. What is the total number of multiplications to be performed?
D = 2
for i = 1 to n do
    for j = i to n do
        for k = j + 1 to n do
            D = D * 3
Half of the product of the 3 consecutive integers.  
One-third of the product of the 3 consecutive integers.  
One-sixth of the product of the 3 consecutive integers.  
None of the above. 
for i = 1 to n do
    for j = i to n do
        for k = j + 1 to n do
            D = D * 3;
The multiplication D = D * 3 executes once for every triple (i, j, k) with 1 ≤ i ≤ j < k ≤ n.
The number of such triples is Σ_{k=2..n} Σ_{j=1..k−1} j = (n−1)n(n+1)/6, which is one-sixth of the product of 3 consecutive integers.
Also, you can verify this by trying smaller values of n.
Question 7 
Which one of the following correctly determines the solution of the recurrence relation with T(1) = 1?
T(n) = 2T(n/2) + log n
Θ(n)  
Θ(nlog n)  
Θ(n^{2})  
Θ(logn) 
Apply Master’s theorem (extended form),
a=2, b=2, k=0, p=1
Since a > b^{k}, Case 1 applies:
T(n) = θ(n^{log_{b}a}) = θ(n^{log_{2}2}) = θ(n)
Question 8 
Suppose you want to move from 0 to 100 on the number line. In each step, you either move right by a unit distance or you take a shortcut. A shortcut is simply a prespecified pair of integers i, j with i < j. Given a shortcut i, j if you are at position i on the number line, you may directly move to j. Suppose T(k) denotes the smallest number of steps needed to move from k to 100. Suppose further that there is at most 1 shortcut involving any number, and in particular from 9 there is a shortcut to 15. Let y and z be such that T(9) = 1 + min(T(y),T(z)). Then the value of the product yz is _____.
150  
151  
152  
153 
Now, it is given that
T(9) = 1 + min(T(y),T(z))
where,
T(y) = steps from y to 100
T(z) = steps from z to 100
where y and z are two possible values that can be reached from 9.
One number that can be reached from 9 is 10 (by moving right one unit). The other is 15, via the shortcut from 9 given in the question.
∴ The values of y and z are 10 and 15.
So, y × z = 10 × 15 = 150
Question 9 
Consider the following operation along with Enqueue and Dequeue operations on queues, where k is a global parameter.
MultiDequeue(Q) {
    m = k
    while (Q is not empty and m > 0) {
        Dequeue(Q)
        m = m - 1
    }
}
What is the worst case time complexity of a sequence of n MultiDequeue() operations on an initially empty queue?
Θ(n)  
Θ(n + k)  
Θ(nk)  
Θ(n^{2}) 
i. One option is to perform only Enqueue operations, i.e. n Enqueues. Complexity will be Θ(n).
or
ii. We can perform a mix of Enqueue and Dequeue operations: Enqueue for the first n/2 operations and then Dequeue for the next n/2, or Enqueue and Dequeue alternately, or any permutation of Enqueues and Dequeues totalling n operations. Complexity will be Θ(n).
or
iii. We can perform Enqueues and MultiDequeues. A general pattern could be: Enqueue k times, then one MultiDequeue; Enqueue k times, then one MultiDequeue; and so on, up to n operations in total. That is, k items enqueued, k items deleted, k items enqueued, k items deleted, and so on.
The number of times this cycle of k Enqueues plus one MultiDequeue is performed = n/(k+1).
So the complexity will be (cost of k Enqueues + 1 MultiDequeue) × n/(k+1),
which is (2k × n/(k+1)) = Θ(n).
or
iv. We can just perform n MultiDequeues (or n Dequeues, for that matter):
each time the while condition is false (empty queue), the condition is checked just once per operation. So Θ(n).
Question 10 
Let W(n) and A(n) denote respectively, the worst case and average case running time of an algorithm executed on an input of size n. Which of the following is ALWAYS TRUE?
A(n) = Ω (W(n))  
A(n) = Θ (W(n))  
A(n) = O (W(n))  
A(n) = o (W(n)) 
The average case running time can never exceed the worst case, so A(n) is upper bounded by W(n). The bound is not strict, since the two can coincide (e.g. in bubble sort and merge sort, the average and worst cases have the same order).
A(n) = O(W(n))
Note: Option A is wrong because A(n) = Ω(W(n)) does not always hold.
Question 11 
Two alternative packages A and B are available for processing a database having 10^{k} records. Package A requires 0.0001n^{2} time units and package B requires 10nlog_{10}n time units to process n records. What is the smallest value of k for which package B will be preferred over A?
12  
10  
6  
5 
Let n = 10^{k} records. Package B is preferred when 10nlog_{10}n ≤ 0.0001n^{2}:
10(10^{k})log_{10}10^{k} ≤ 0.0001(10^{k})^{2}
k·10^{k+1} ≤ 0.0001 × 10^{2k}
k ≤ 10^{2k−(k+1)−4}
k ≤ 10^{k−5}
For k = 5 this gives 5 ≤ 10^{0} = 1, which is false; for k = 6 it gives 6 ≤ 10^{1} = 10, which holds. So the smallest value of k is 6.
Question 12 
The running time of an algorithm is represented by the recurrence relation T(n) = T(n/3) + cn.
Which one of the following represents the time complexity of the algorithm?
θ(n)  
θ(n log n)  
θ(n^{2})  
θ(n^{2}log n) 
The recurrence relation is T(n) = T(n/3) + cn
Here a=1, b=3, k=1, p=0.
Since a < b^{k} and p ≥ 0, Case 3 applies.
Therefore T(n) = θ(n^{k}log^{p}n) = θ(n^{1}log^{0}n) = θ(n)
Question 13 
Consider the following functions:

f (n) = 2^{n}
g(n) = n!
h (n) = n^{logn}
Which of the following statements about the asymptotic behaviour of f(n), g(n), and h(n) is true?
f(n) = O(g(n)); g(n) = O(h(n))  
f(n) = Ω(g(n)); g(n) = O(h(n))  
g(n) = O(f(n)); h(n) = O(f(n))  
h(n) = O(f(n)); g(n) = Ω(f(n)) 
1. n! grows faster than 2^{n} (for n ≥ 4, n! ≥ 2^{n}), so g(n) = Ω(f(n)).
2. h(n) = n^{logn} = 2^{log^{2}n}, which grows slower than 2^{n}, so h(n) = O(f(n)).
So the order of growth is h(n) < f(n) < g(n), and option D is correct.
Note: when comparing growth rates, it helps to test with large values of n.
Question 14 
The minimum number of comparisons required to determine if an integer appears more than n/2 times in a sorted array of n integers is
θ(n)  
θ(logn)  
θ(log*n)  
θ(1) 
1. The first occurrence of the element can be found in O(log n) time using the divide and conquer technique; call its index i.
2. The last occurrence of the element can be found in O(log n) time using the divide and conquer technique; call its index j.
3. The number of occurrences of that element (the count) is (j − i + 1).
Overall time complexity = log n + log n + 1 = O(log n)
Program:
/* If x is present in arr[low...high] then returns the index of the first occurrence of x, otherwise returns -1 */
int binarySearch(int arr[], int low, int high, int x)
{
    if (high >= low) {
        int mid = low + (high - low)/2;   /* avoids overflow of (low + high)/2 */
        /* arr[mid] is the first occurrence of x if either of the following is true:
           (i)  mid == 0 and arr[mid] == x
           (ii) arr[mid-1] < x and arr[mid] == x */
        if ((mid == 0 || x > arr[mid-1]) && (arr[mid] == x))
            return mid;
        else if (x > arr[mid])
            return binarySearch(arr, mid + 1, high, x);
        else
            return binarySearch(arr, low, mid - 1, x);
    }
    return -1;
}
Note: The code finds the first occurrence of the element in the array; one then checks whether the element n/2 positions to its right is the same (valid because the array is sorted). This takes O(log n) time.
Question 15 
Consider the following C program
int f1(int n)
{
    if (n == 0 || n == 1)
        return n;
    else
        return (2*f1(n-1) + 3*f1(n-2));
}

int f2(int n)
{
    int i;
    int X[N], Y[N], Z[N];
    X[0] = Y[0] = Z[0] = 0;
    X[1] = 1; Y[1] = 2; Z[1] = 3;
    for (i = 2; i <= n; i++) {
        X[i] = Y[i-1] + Z[i-2];
        Y[i] = 2*X[i];
        Z[i] = 3*X[i];
    }
    return X[n];
}
The running time of f1(n) and f2(n) are
θ(n) and θ(n)  
θ(2^{n}) and θ(n)  
θ(n) and θ(2^{n})  
θ(2^{n}) and θ(2^{n}) 
T(n) = T(n-1) + T(n-2) (the multiplications by 2 and 3 don't affect the complexity, as they are constant-time operations)
T(0) = T(1) = 1
The recurrence is similar to the Fibonacci series, so the time complexity is O(2^{n}). In more detail:
T(n) = T(n-1) + T(n-2) + c
T(0) = 1
Assume T(n-1) ≈ T(n-2). (In reality T(n-1) > T(n-2), so this assumption still yields an upper bound.)
Then the expression becomes
T(n) = 2T(n-1) + c
= 4T(n-2) + 3c
= 8T(n-3) + 7c
= 2^{k}T(n-k) + (2^{k}-1)c
n-k = 0 ⇒ k = n
T(n) = 2^{n}T(0) + (2^{n}-1)c
= 2^{n} + (2^{n}-1)c
⇒ T(n) = O(2^{n})
The time required for f2 is O(n), because it consists of a single loop that runs n times.
Question 16 
Which of the following sorting algorithms has the lowest worst-case complexity?
Merge sort  
Bubble Sort  
Quick Sort  
Selection Sort 
Merge sort→ O(nlogn)
Bubble sort→ O(n^{2})
Quick sort→ O(n^{2})
Selection sort→ O(n^{2})
Question 17 
Consider the following segment of C code:
int j, n;
j = 1;
while (j <= n)
    j = j*2;
The number of comparisons made in the execution of the loop for any n > 0 is: Base of Log is 2 in all options.
⌈log_{2}n⌉ + 1
 
n  
⌈log_{2}n⌉  
⌊log_{2}n⌋ + 1 
Take n = 6 as an example:
1 <= 6 (✔️)
2 <= 6 (✔️)
4 <= 6 (✔️)
8 <= 6 (❌)
So 4 comparisons are made.
Option A: ⌈log_{2}n⌉ + 1 = ⌈log_{2}6⌉ + 1 = 3 + 1 = 4 (✔)
Option B: n = 6 (❌)
Option C: ⌈log_{2}n⌉ = ⌈log_{2}6⌉ = 3 (❌)
Option D: ⌊log_{2}n⌋ + 1 = ⌊log_{2}6⌋ + 1 = 2 + 1 = 3 (❌)
Question 18 
In an unweighted, undirected connected graph, the shortest path from a node S to every other node is computed most efficiently, in terms of time complexity, by
Dijkstra’s algorithm starting from S.  
Warshall’s algorithm.  
Performing a DFS starting from S.  
Performing a BFS starting from S. 
→ Warshall’s algorithm takes O(n^{3}) time; it is used to find all-pairs shortest paths, which is more work than this problem needs.
→ DFS cannot be used for finding shortest paths.
→ BFS runs in O(V+E) time and finds shortest paths from S in an unweighted graph, so it is the most efficient choice.
Question 19 
In the following C function, let n >= m.
int gcd(n, m)
{
    if (n % m == 0)
        return m;
    n = n % m;
    return gcd(m, n);
}
How many recursive calls are made by this function?
θ (log_{2} n)  
Ω (n)  
θ (log_{2}log_{2} n)
 
θ(√n) 
The worst case occurs when n and m are consecutive Fibonacci numbers: each call then shrinks the arguments as slowly as possible, and the number of recursive calls grows as θ(log_{2}n).
Question 20 
What is the time complexity of the following recursive function:
int DoSomething(int n)
{
    if (n <= 2)
        return 1;
    else
        return (DoSomething(floor(sqrt(n))) + n);
}
Ω (n^{2})  
θ (nlog_{2} n)
 
θ (log_{2} n)  
θ (log_{2}log_{2} n) 
The recurrence is T(n) = T(√n) + c. Put n = 2^{m}, so √n = 2^{m/2}:
T(2^{m}) = T(2^{m/2}) + c
Let S(m) = T(2^{m}):
S(m) = S(m/2) + c
Now apply Master's theorem:
a=1, b=2, k=0, p=0
Since a = b^{k} and p > -1, Case 2 applies:
S(m) = θ(m^{k}log^{p+1}m) = θ(log m)
Since m = log n, T(n) = θ(log_{2}log_{2}n).
Question 21 
An array of n numbers is given, where n is an even number. The maximum as well as the minimum of these n numbers needs to be determined. Which of the following is TRUE about the number of comparisons needed?
At least 2n  c comparisons, for some constant c, are needed.  
At most 1.5n  2 comparisons are needed.  
At least nlog_{2}n comparisons are needed.
 
None of the above. 
→ First compare the first two elements with each other (1 comparison) and initialize min and max from them.
→ Now take the next two elements and compare them with each other (1 comparison); compare the smaller one with min and the larger one with max (2 more comparisons), updating min and max accordingly. So 3 comparisons are needed per pair, and we do this for all the remaining (n−2)/2 pairs.
So in total, the number of comparisons required
= 3(n−2)/2 + 1
= 3n/2 − 3 + 1
= 3n/2 − 2
= 1.5n − 2
Question 22 
Consider the following C code segment:
int IsPrime(n)
{
    int i, n;
    for (i = 2; i <= sqrt(n); i++)
        if (n % i == 0) {
            printf("Not Prime\n");
            return 0;
        }
    return 1;
}
Let T(n) denote the number of times the for loop is executed by the program on input n. Which of the following is TRUE?
T(n) = O(√n) and T(n) = Ω(√n)
 
T(n) = O(√n) and T(n) = Ω(1)
 
T(n) = O(n) and T(n) = Ω(√n)  
None of the above 
Question 23 
Consider the following C program fragment in which i, j and n are integer variables.
for (i = n, j = 0; i >0; i /= 2, j += i);
Let val(j) denote the value stored in the variable j after termination of the for loop. Which one of the following is true?
val(j) = θ(logn)  
val(j) = θ(√n)  
val(j) = θ(n)  
val(j) = θ(n logn) 
Question 24 
An element in an array X is called a leader if it is greater than all elements to the right of it in X. The best algorithm to find all leaders in an array
Solves it in linear time using a left to right pass of the array  
Solves it in linear time using a right to left pass of the array  
Solves it using divide and conquer in time θ(nlogn)  
Solves it in time θ(n^{2}) 
Question 25 
Consider the following recurrence: T(n) = 2T(√n) + 1
Which one of the following is true?
T(n) = θ(log logn)
 
T(n) = θ(logn)  
T(n) = θ(√n)  
T(n) = θ(n) 
Let n = 2^{m}
T(2^{m}) = 2T(2^{m/2}) + 1 ...(1)
Let T(2^{m}) = S(m)
Then equation (1) becomes
S(m) = 2S(m/2) + 1
Now apply Master's theorem:
a=2, b=2, k=0
Since a > b^{k}, Case 1 applies:
S(m) = θ(m^{log_{2}2}) = θ(m)
But m = log n
So finally the answer is θ(log n).
Question 26 
The median of n elements can be found in O(n) time. Which one of the following is correct about the complexity of quick sort, in which median is selected as pivot?
θ(n)  
θ(n logn)  
θ(n^{2})  
θ(n^{3}) 
Since the median is chosen as the pivot, each partition splits the array into two halves, and the O(n) term covers finding the median and partitioning:
T(n) = 2T(n/2) + O(n)
In Master's theorem form: a=2, b=2, k=1, p=0. Since a = b^{k} and p > -1, Case 2 applies:
T(n) = θ(n^{k}log^{p+1}n) = θ(n log n)
Question 27 
Given two arrays of numbers a_{1}, a_{2}, a_{3},...a_{n} and b_{1}, b_{2}, .. b_{n} where each number is 0 or 1, the fastest algorithm to find the largest span (i, j) such that a_{i} + a_{i+1} + ... + a_{j} = b_{i} + b_{i+1} + ... + b_{j}, or report that there is no such span,
Takes O(3^{n}) and Ω(2^{n}) time if hashing is permitted  
Takes O(n^{3}) and Ω(n^{2.5}) time in the key comparison model  
Takes θ(n) time and space  
Takes O(√n) time only if the sum of the 2n elements is an even number 
1. Since each array has n elements, each of which is 0 or 1, the maximum prefix sum is n for both arrays.
2. The difference between the two prefix sums therefore varies from −n to n, so there are 2n + 1 possible difference values.
3. If the difference between the prefix sums of the two arrays is the same at two indices, then the subarrays between those two indices have equal sums. Recording the first index of each difference value gives a θ(n) time and space algorithm.
Question 28 
The time complexity of computing the transitive closure of a binary relation on a set of n elements is known to be:
O(n)  
O(n log n)  
O(n^{3/2})  
O(n^{3}) 
For example, if X is a set of airports and x R y means "there is a direct flight from airport x to airport y" (for x and y in X), then the transitive closure of R on X is the relation R^{+} such that x R^{+} y means "it is possible to fly from x to y in one or more flights". Informally, the transitive closure gives you the set of all places you can get to from any starting place.
To find the transitive closure of the given relation, represent it as a directed graph: if x R y, add a directed edge from x to y. Then apply Warshall's algorithm to compute the transitive closure of this graph.
The time complexity of this algorithm is O(V^{3}), where V is the number of vertices in the graph; here V equals the number of elements in the set, i.e. n. Therefore the time complexity for the given question is O(n^{3}).
Question 29 
Suppose T(n) = 2T (n/2) + n, T(0) = T(1) = 1
Which one of the following is FALSE?
T(n) = O(n^{2})  
T(n) = θ(n log n)  
T(n) = Ω(n^{2})
 
T(n) = O(n log n) 
a=2, b=2, k=1, p=0. Since a = b^{k}, Case 2 applies and T(n) = θ(n log n).
Hence options A, B and D are all true, and T(n) = Ω(n^{2}) is FALSE.
Question 30 
Suppose there are ⌈log n⌉ sorted lists of ⌊n/log n⌋ elements each. The time complexity of producing a sorted list of all these elements is: (Hint: Use a heap data structure)
O(nlog log n)  
θ(nlog n)  
Ω(nlog n)  
Ω(n^{3/2}) 
Let P = ⌈log n⌉ be the number of lists and Q = ⌊n/log n⌋ the length of each, so there are about P × Q = n elements in total.
Build a min-heap of the P list heads; each of the n elements is extracted (and replaced from its list) in O(log P) = O(log log n) time.
∴ Total time = n × O(log log n) = O(n log log n)
Question 31 
The tightest lower bound on the number of comparisons, in the worst case, for comparison-based sorting is of the order of
n  
n^{2}  
nlogn  
nlog^{2}n 
→ The question asks for the tightest lower bound (big Ω); the tightest upper bound would be big O.
→ A decision tree for comparison-based sorting of n elements has n! leaves, so its height, and hence the worst-case number of comparisons, is at least log_{2}(n!) = Ω(n log n).