Time-Complexity

Question 1
Let P be an array containing n integers. Let t be the lowest upper bound on the number of comparisons of the array elements, required to find the minimum and maximum values in an arbitrary array of n elements. Which one of the following choices is correct?
A
B
C
D
Question 1 Explanation: 

Let us assume n = 512.

Method-1: 

Using standard recursive algorithm:

MaxMin is a recursive algorithm that finds the maximum and minimum of the set of elements {a(i), a(i+1), ..., a(j)}. The situations of set size one (i = j) and size two (i = j-1) are handled separately. For sets containing more than two elements, the midpoint is determined (just as in binary search) and two new subproblems are generated. When the maxima and minima of these subproblems are determined, the two maxima are compared and the two minima are compared to obtain the solution for the entire set.

To count the number of element comparisons needed by MaxMin, let T(n) represent this number. The resulting recurrence relation is T(n) = 2T(n/2) + 2 for n > 2, with T(2) = 1 and T(1) = 0.

When n is a power of two, n = 2^k for some positive integer k, then

T(n) = 2T(n/2) + 2

        = 2(2T(n/4) + 2) + 2

        = 4T(n/4) + 4 + 2

           ⋮

        = 2^(k-1) T(2) + Σ_{i=1}^{k-1} 2^i

        = 2^(k-1) + 2^k - 2

        = 3n/2 - 2 

→ For the given example n = 512, substituting into 3n/2 - 2:

= (3*512)/2 -2

= 768-2

= 766
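A small C sketch of this recursive MaxMin (not part of the original solution; the function name, sample array and comparison counter are illustrative):

#include <stdio.h>

/* Recursively find min and max of a[lo..hi], counting element comparisons. */
static int maxmin(const int a[], int lo, int hi, int *min, int *max) {
    if (lo == hi) {                 /* one element: 0 comparisons */
        *min = *max = a[lo];
        return 0;
    }
    if (hi == lo + 1) {             /* two elements: 1 comparison */
        if (a[lo] < a[hi]) { *min = a[lo]; *max = a[hi]; }
        else               { *min = a[hi]; *max = a[lo]; }
        return 1;
    }
    int mid = (lo + hi) / 2, min1, max1, min2, max2;
    int c = maxmin(a, lo, mid, &min1, &max1)
          + maxmin(a, mid + 1, hi, &min2, &max2);
    *min = (min1 < min2) ? min1 : min2;   /* 1 comparison */
    *max = (max1 > max2) ? max1 : max2;   /* 1 comparison */
    return c + 2;
}

int main(void) {
    int a[8] = {7, 2, 9, 4, 1, 8, 6, 3}, mn, mx;
    int cmps = maxmin(a, 0, 7, &mn, &mx);
    printf("min=%d max=%d comparisons=%d\n", mn, mx, cmps);  /* 3*8/2 - 2 = 10 */
    return 0;
}

For n = 512 this performs 3*512/2 - 2 = 766 comparisons, matching the calculation above.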

Method-2: 

Find the minimum and maximum independently, using n-1 comparisons for each, for a total of 2n-2 comparisons. In fact, at most 3⌊n/2⌋ comparisons are sufficient to find both the minimum and the maximum. The strategy is to maintain the minimum and maximum elements seen thus far. Rather than processing each element of the input by comparing it against the current minimum and maximum, at a cost of 2 comparisons per element, we process elements in pairs. We compare pairs of elements from the input first with each other, and then we compare the smaller to the current minimum and the larger to the current maximum, at a cost of 3 comparisons for every 2 elements. 

Setting up initial values for the current minimum and maximum depends on whether n is odd or even. If n is odd, we set both the minimum and maximum to the value of the first element, and then we process the rest of the elements in pairs. If n is even, we perform 1 comparison on the first 2 elements to determine the initial values of the minimum and maximum, and then process the rest of the elements in pairs as in the case for odd n.

Let us analyze the total number of comparisons. If n is odd, then we perform 3⌊n/2⌋ comparisons. If n is even, we perform 1 initial comparison followed by 3(n-2)/2 comparisons, for a total of (3n/2)-2.  Thus, in either case, the total number of comparisons is at most 3⌊n/2⌋.

Here n = 512 is even, so the number of comparisons is 3n/2 - 2:

= (3*512)/2 -2

= 768-2

= 766
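A C sketch of the pair-wise strategy described in Method-2 (not from the original; names and sample data are illustrative):

#include <stdio.h>

/* Pair-wise min/max: about 3*floor(n/2) comparisons in total. */
static int pair_minmax(const int a[], int n, int *min, int *max) {
    int i, cmps = 0;
    if (n % 2) {                       /* odd n: seed with the first element */
        *min = *max = a[0];
        i = 1;
    } else {                           /* even n: 1 comparison on the first pair */
        if (a[0] < a[1]) { *min = a[0]; *max = a[1]; }
        else             { *min = a[1]; *max = a[0]; }
        cmps = 1;
        i = 2;
    }
    for (; i + 1 < n; i += 2) {        /* 3 comparisons per remaining pair */
        int lo = a[i], hi = a[i + 1];
        if (lo > hi) { lo = a[i + 1]; hi = a[i]; }
        if (lo < *min) *min = lo;
        if (hi > *max) *max = hi;
        cmps += 3;
    }
    return cmps;
}

int main(void) {
    int a[6] = {5, 3, 8, 1, 9, 2}, mn, mx;
    int cmps = pair_minmax(a, 6, &mn, &mx);
    printf("min=%d max=%d comparisons=%d\n", mn, mx, cmps);  /* 3*6/2 - 2 = 7 */
    return 0;
}

For even n this does 1 + 3(n-2)/2 = 3n/2 - 2 comparisons, and for odd n it does 3⌊n/2⌋.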

 

Method-3:

By using Tournament Method: 

Step-1: Find the minimum element with a knockout (minimum) tournament; this takes n-1 comparisons.

We are given 512 elements, so finding the minimum takes 512-1 = 511 comparisons.

Step-2: Find the largest element among the first-round losers.

  1. After the first round of the tournament, there are exactly n/2 = 256 numbers that lose their pairwise comparison (the larger element of each pair).
  2. The largest number of the whole array must be among these 256 losers, so finding it takes (n/2)-1 = 256-1 = 255 further comparisons.

Total: 511 + 255 = 766

Question 2
Consider the following array.

Which algorithm out of the following options uses the least number of comparisons (among the array elements) to sort the above array in ascending order?
A
Quicksort using the last element as pivot
B
Insertion sort
C
Selection sort
D
Mergesort
Question 2 Explanation: 

Quicksort (using the last element as pivot) → exhibits its worst-case behaviour here, taking O(n^2) comparisons.

Merge sort → does not depend on the input order and always takes O(n log n) comparisons.

Insertion sort → achieves its best case when the input array is already (almost) sorted. If the array is already sorted, the inversion count is 0; if the array is sorted in reverse order, the inversion count is maximum.

Note: Formally, two elements A[i] and A[j] form an inversion if A[i] > A[j] and i < j.

For the given input array the number of comparisons is roughly n-1 plus the number of inversions, so it does not exceed about "n", the least among the options.

Selection sort → does not depend on the input order and always takes O(n^2) comparisons.
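A small C sketch that counts comparisons in insertion sort, illustrating why an almost-sorted input needs only about n comparisons while a reversed input needs about n(n-1)/2 (the sample arrays are illustrative, not the array from the question):

#include <stdio.h>

/* Insertion sort that counts element comparisons. */
static long insertion_sort_count(int a[], int n) {
    long cmps = 0;
    for (int i = 1; i < n; i++) {
        int key = a[i], j = i - 1;
        while (j >= 0) {
            cmps++;                 /* one comparison of key against a[j]      */
            if (a[j] <= key) break; /* key already in place: inner loop stops  */
            a[j + 1] = a[j];        /* shift larger element to the right       */
            j--;
        }
        a[j + 1] = key;
    }
    return cmps;
}

int main(void) {
    int sorted[]   = {1, 2, 3, 4, 5, 6, 7, 8};
    int reversed[] = {8, 7, 6, 5, 4, 3, 2, 1};
    printf("sorted input:   %ld comparisons\n", insertion_sort_count(sorted, 8));   /* n-1 = 7 */
    printf("reversed input: %ld comparisons\n", insertion_sort_count(reversed, 8)); /* n(n-1)/2 = 28 */
    return 0;
}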

Question 3
Consider the following recurrence relation.

Which one of the following options is correct?
A
B
C
D
Question 3 Explanation: 
Question 4

The recurrence relation that arises in relation with the complexity of binary search is:

A
T(n) = T(n/2) + k, k a constant
B
T(n) = 2T(n/2) + k, k a constant
C
T(n) = T(n/2) + log n
D
T(n) = T(n/2) + n
Question 4 Explanation: 
In binary search, each step searches only one half of the list and spends constant time on the comparison. So,
∴ T(n) = T(n/2) + k, k a constant
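A minimal recursive binary search in C whose structure mirrors the recurrence T(n) = T(n/2) + k (not from the original; names and data are illustrative):

#include <stdio.h>

/* Recursive binary search on a sorted array; returns index of key or -1.
   Each call does constant work plus one recursive call on half the range,
   i.e., T(n) = T(n/2) + k. */
static int bsearch_rec(const int a[], int lo, int hi, int key) {
    if (lo > hi) return -1;
    int mid = lo + (hi - lo) / 2;
    if (a[mid] == key) return mid;
    if (a[mid] < key)  return bsearch_rec(a, mid + 1, hi, key);
    return bsearch_rec(a, lo, mid - 1, key);
}

int main(void) {
    int a[] = {2, 5, 8, 12, 16, 23, 38, 56};
    printf("%d\n", bsearch_rec(a, 0, 7, 23));  /* prints 5 */
    return 0;
}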
Question 5

Let W(n) and A(n) denote respectively, the worst case and average case running time of an algorithm executed on an input of size n. Which of the following is ALWAYS TRUE?

A
A(n) = Ω (W(n))
B
A(n) = Θ (W(n))
C
A(n) = O (W(n))
D
A(n) = o (W(n))
Question 5 Explanation: 
The average case time can be less than or equal to the worst case time.
So, A(n) is upper bounded by W(n), and the bound need not be strict, since the two can be of the same order (e.g., bubble sort and merge sort).
Hence A(n) = O(W(n)).
Note: Option A is wrong because A(n) need not be Ω(W(n)) for every algorithm.
Question 6

Quicksort is run on two inputs shown below to sort in ascending order taking first element as pivot,

(i) 1,2,3,...,n
(ii) n,n-1,n-2,...,2,1 

Let C1 and C2 be the number of comparisons made for the inputs (i) and (ii) respectively. Then,

A
C1 < C2
B
C1 > C2
C
C1 = C2
D
we cannot say anything for arbitrary n.
Question 6 Explanation: 
Both inputs are worst cases for quicksort with the first element as pivot: whether the elements are already in increasing order or already in decreasing order, every partition is maximally unbalanced, so the same number of comparisons is made in both cases.
So, option (C) is correct.
Question 7

The recurrence relation

 T(1) = 2
 T(n) = 3T(n/4)+n 

has the solution, T(n) equals to

A
O(n)
B
O(log n)
C
O(n^(3/4))
D
None of the above
Question 7 Explanation: 
Apply the Master theorem with a = 3, b = 4, f(n) = n. Since n^(log_4 3) ≈ n^0.79 grows more slowly than f(n) = n, case 3 applies and T(n) = Θ(n), i.e., O(n).
Question 8

The average number of key comparisons done in a successful sequential search of a list of length n is

A
log n
B
(n-1)/2
C
n/2
D
(n+1)/2
Question 8 Explanation: 
Total comparisons required
= No. of comparisons if element present in 1st position + No. of comparisons if element present in 2nd position + ............. + No. of comparisons if element present in nth position
= 1 + 2 + 3 + ... + n
= n(n+1)/2
Since there are n elements in the list, so average no. of comparisons
= Total comparisons/Total no. of elements
= (n(n+1)/2)/n
= (n+1)/2
Question 9

Which of the following is false?

A
B
C
D
Question 9 Explanation: 
Question 10

(a) Consider the following algorithm. Assume procedure A and procedure B take O(1) and O(1/n) unit of time respectively. Derive the time complexity of the algorithm in O-notation.

         algorithm what(n)
         begin
             if n = 1 then call A
             else begin
                 what(n - 1);
                 call B(n)
             end
         end.

(b) Write a constant time algorithm to insert a node with data D just before the node with address p of a singly linked list.

A
Theory Explanation.
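A possible way to answer both parts, offered as a sketch rather than the original solution. For (a), the recurrence is T(n) = T(n-1) + c/n with T(1) = O(1), so T(n) = O(1) + c(1/2 + 1/3 + ... + 1/n) = O(log n) by the harmonic series. For (b), a node can be inserted "before" the node at address p in O(1) time by inserting a new node after p and then moving p's data, as in this C sketch (struct and function names are illustrative):

#include <stdlib.h>

struct node { int data; struct node *next; };

/* Insert data D logically before node p in O(1): link a copy of p after p,
   then overwrite p's data with D. */
void insert_before(struct node *p, int D) {
    struct node *q = malloc(sizeof *q);
    q->data = p->data;     /* q takes over p's old value */
    q->next = p->next;     /* q slots in right after p   */
    p->next = q;
    p->data = D;           /* p now holds the new value  */
}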
Question 11

If T1 = O(1), give the correct matching for the following pairs:

   (M) Tn = Tn−1 + n          (U) Tn = O(n)
   (N) Tn = Tn/2 + n          (V) Tn = O(n log n)
   (O) Tn = Tn/2 + n log n    (W) Tn = O(n^2)
   (P) Tn = Tn−1 + log n      (X) Tn = O(log^2 n)
A
M – W N – V O – U P - X
B
M – W N – U O – X P - V
C
M – V N – W O – X P - U
D
None of the above
Question 11 Explanation: 
(M) T(n) = sum of the first n natural numbers = n(n+1)/2 = O(n^2)
(N) Apply the Master theorem with a = 1, b = 2, f(n) = n:
T(n) = Θ(n) = O(n)
(O) Apply the Master theorem with a = 1, b = 2, f(n) = n log n:
T(n) = Θ(n log n) = O(n log n)
(P) Here we are adding the logarithms of the first n natural numbers.
So,
Tn = log 1 + log 2 + log 3 + ... + log n
= log(1 × 2 × ... × n)
= log(n!)
= Θ(n log n)
Since this matching does not appear among options A, B or C, the answer is D.
Question 12

If n is a power of 2, then the minimum number of multiplications needed to compute a^n is

A
log2 n
B
√n
C
n-1
D
n
Question 12 Explanation: 

We require 4 multiplications to calculate a^16 (a → a^2 → a^4 → a^8 → a^16) .....(I)
→ Similarly, 3 multiplications are required to calculate a^8 .....(II)
(I) and (II) agree with option A, i.e., log2 n multiplications.
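A sketch of repeated squaring in C; when n is a power of 2 it uses exactly log2 n multiplications (not from the original; names are illustrative and overflow is ignored):

#include <stdio.h>

/* Compute a^n by repeated squaring when n is a power of 2:
   a -> a^2 -> a^4 -> ... -> a^n, i.e., log2(n) multiplications. */
static long long power_of_two_exp(long long a, unsigned n, int *mults) {
    long long result = a;
    *mults = 0;
    while (n > 1) {            /* n is assumed to be a power of 2 */
        result *= result;      /* one multiplication per squaring */
        (*mults)++;
        n /= 2;
    }
    return result;
}

int main(void) {
    int mults;
    long long v = power_of_two_exp(2, 16, &mults);
    printf("2^16 = %lld using %d multiplications\n", v, mults);  /* 65536, 4 */
    return 0;
}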
Question 13

Suppose we want to arrange the n numbers stored in an array such that all negative values occur before all positive ones. Minimum number of exchanges required in the worst case is

A
n - 1
B
n
C
n + 1
D
None of the above
Question 13 Explanation: 
The worst case for the minimum number of exchanges occurs when the first half of the array contains all positive numbers and the second half contains all negative numbers.
Then we swap the 1st number with the nth, the 2nd with the (n-1)th, the 3rd with the (n-2)th, and so on. This needs n/2 exchanges in the worst case, so the answer is none of the given options.
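A two-pointer partition sketch in C that counts exchanges; on the worst-case input described above it performs n/2 swaps (not from the original; zero is treated as positive and the data is illustrative):

#include <stdio.h>

/* Move all negative values before all positive ones, counting swaps. */
static int partition_by_sign(int a[], int n) {
    int i = 0, j = n - 1, swaps = 0;
    while (i < j) {
        while (i < j && a[i] < 0) i++;   /* already negative: leave in place */
        while (i < j && a[j] >= 0) j--;  /* already positive: leave in place */
        if (i < j) {                     /* a[i] >= 0 and a[j] < 0: exchange */
            int t = a[i]; a[i] = a[j]; a[j] = t;
            swaps++;
        }
    }
    return swaps;
}

int main(void) {
    int a[8] = {5, 6, 7, 8, -1, -2, -3, -4};   /* worst case: positive half, then negative half */
    printf("swaps = %d\n", partition_by_sign(a, 8));  /* prints 4 = n/2 */
    return 0;
}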
Question 14

Consider the following functions

Which of the following is true?

A
h(n) is O (f(n))
B
h(n) is O (g(n))
C
g(n) is not O (f(n))
D
f(n) is O(g(n))
Question 14 Explanation: 
Consider n = 2^10. Then
f(n) = 3·(2^10)^32 = 3·2^320
g(n) = 2^320
h(n) = 1024!
So the relation between the functions is:
f(n) and g(n) are of the same order, so f(n) = O(g(n)) and g(n) = O(f(n)); hence option C is wrong.
h(n) is n!, which is of a higher order than f(n) and g(n), so options A and B are wrong.
Question 15

Let f(n) = n^2 log n and g(n) = n(log n)^10 be two positive functions of n. Which of the following statements is correct?

A
f(n) = O(g(n)) and g(n) ≠ O(f(n))
B
g(n) = O(f(n)) and f(n) ≠ O(g(n))
C
f(n) ≠ O(g(n)) and g(n) ≠ O(f(n))
D
f(n) = O(g(n)) and g(n) = O(f(n))
Question 15 Explanation: 
f(n) = n^2 log n; g(n) = n(log n)^10
Cancel the common factor n log n from f(n) and g(n):
f(n) → n; g(n) → (log n)^9
n grows much faster than (log n)^9.
So f(n) ≠ O(g(n)) and g(n) = O(f(n)).
Question 16

The running time of the following algorithm

  Procedure A(n)
  If n <= 2 return(1) else return (A(⌈√n⌉)); 

is best described by:

A
O(n)
B
O(log n)
C
O(log log n)
D
O(1)
Question 16 Explanation: 
T(n) = T(√n) + 1
Let n = 2^m.
So, T(2^m) = T(2^(m/2)) + 1
We substitute T(2^m) = S(m),
so the equation becomes
S(m) = S(m/2) + 1
Applying the Master theorem,
S(m) = O(log m)
Since m = log n, T(n) = O(log m) = O(log log n)
Question 17

The cube root of a natural number n is defined as the largest natural number m such that m^3 ≤ n. The complexity of computing the cube root of n (n is represented in binary notation) is:

A
O(n) but not O(n^0.5)
B
O(n^0.5) but not O((log n)^k) for any constant k > 0
C
O((log n)^k) for some constant k > 0, but not O((log log n)^m) for any constant m > 0
D
O((log log n)^k) for some constant k > 0.5, but not O((log log n)^0.5)
Question 17 Explanation: 
Binary search over the candidate range takes O(log n) iterations, and each iteration computes m^3 and compares it with n, which takes time polynomial in the number of bits of n, i.e., polynomial in log n. So the time to find the cube root is O((log n)^k) for some constant k ... (I)
The time can never be less than Ω(log n), because just reading the binary representation of n already takes log n steps ... (II)
From (I), options A and B are false.
From (II), option D is false.
Question 18

The usual Θ(n2) implementation of Insertion Sort to sort an array uses linear search to identify the position where an element is to be inserted into the already sorted part of the array. If, instead, we use binary search to identify the position, the worst case running time will

A
remain Θ(n2)
B
become Θ(n (log n)2)
C
become Θ(n log n)
D
become Θ(n)
Question 18 Explanation: 
Insertion sort with linear search to find the insertion position has worst-case running time Θ(n^2).
If binary search is used instead, finding the position takes Θ(log n) comparisons, but up to n element shifts may still be needed to make room for each of the n inserted elements,
i.e., n × (log n + n)
= Θ(n log n + n^2)
= Θ(n^2)
So the worst-case running time remains the same.
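A sketch of insertion sort that uses binary search for the insertion point (sometimes called binary insertion sort); comparisons drop to O(n log n), but the element shifts still cost Θ(n^2) in the worst case (not from the original; names and data are illustrative):

#include <stdio.h>

/* Insertion sort using binary search to locate the insertion position.
   Comparisons: O(n log n); element shifts: still Theta(n^2) in the worst case. */
static void binary_insertion_sort(int a[], int n) {
    for (int i = 1; i < n; i++) {
        int key = a[i];
        int lo = 0, hi = i;               /* binary search in a[0..i-1] */
        while (lo < hi) {
            int mid = (lo + hi) / 2;
            if (a[mid] <= key) lo = mid + 1;
            else               hi = mid;
        }
        for (int j = i; j > lo; j--)      /* shifting dominates the cost */
            a[j] = a[j - 1];
        a[lo] = key;
    }
}

int main(void) {
    int a[] = {9, 4, 7, 1, 8, 2};
    binary_insertion_sort(a, 6);
    for (int i = 0; i < 6; i++) printf("%d ", a[i]);  /* 1 2 4 7 8 9 */
    printf("\n");
    return 0;
}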
Question 19

Consider the following three claims

I. (n + k)^m = Θ(n^m), where k and m are constants
II. 2^(n+1) = O(2^n)
III. 2^(2n+1) = O(2^n)

Which of these claims are correct?

A
I and II
B
I and III
C
II and III
D
I, II, and III
Question 19 Explanation: 
I) (n + k)^m = Θ(n^m)
This is true: expanding the polynomial, the leading term is n^m (k and m are constants).
II) 2^(n+1) = 2·2^n = O(2^n) → True
III) 2^(2n+1) = 2·(2^n)^2, which cannot be written as O(2^n).
So III is false, and the correct claims are I and II.
Question 20

where O(n) stands for order n is:

A
O(n)
B
O(n^2)
C
O(n^3)
D
O(3n^2)
E
O(1.5n^2)
F
B, C, D and E
Question 20 Explanation: 

⇒ Here 'n' is a constant inside the summation, so n is added to itself n times, giving n·n = O(n^2).
Hence (A) is wrong, and the rest, (B), (C), (D) and (E), are correct, since constant factors do not matter in O-notation.
Question 21

Consider the function F(n) for which the pseudo code is given below:

     Function F(n)
     begin
         F1 ← 1
         if (n = 1) then F ← 3
         else
             for i = 1 to n do
             begin
                 C ← 0
                 for j = 1 to F(n - 1) do
                 begin
                     C ← C + 1
                 end
                 F1 ← F1 * C
             end
             F ← F1
     end
     [n is a positive integer greater than zero]

(a) Derive a recurrence relation for F(n)
(b) Solve the recurrence relation for a closed form solutions of F(n).

A
Theory Explanation.
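A possible derivation, offered as a sketch rather than the original solution: in each iteration of the outer loop the inner loop counts up to F(n-1), so C = F(n-1) and F1 is multiplied by F(n-1); after n iterations, F1 = (F(n-1))^n. This gives the recurrence F(1) = 3 and F(n) = (F(n-1))^n for n > 1. Unrolling: F(2) = 3^2, F(3) = 3^(2·3), F(4) = 3^(2·3·4), and in general the closed form is F(n) = 3^(n!).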
Question 22

Choose the correct alternatives (more than one may be correct) and write the corresponding letters only: Kruskal's algorithm for finding a minimum spanning tree of a weighted graph G with n vertices and m edges has the time complexity of:

A
O(n2)
B
O(mn)
C
O(m+n)
D
O(m log n)
E
O(m^2)
F
B, D and E
Question 22 Explanation: 
Even though the edges can be sorted in O(m log m) time, the union-find operations make the overall complexity O(m log n), where m is the number of edges and n the number of vertices (since m = O(n^2), log m = O(log n)).
→ O(m log n) is in turn bounded by O(mn) and by O(m^2), so B, D and E are all correct, with O(m log n) the tightest of them.
Question 23
 Let H be a binary min-heap consisting of n elements implemented as an array. What is the worst case time complexity of an optimal algorithm to find the maximum element in H?
A
B
C
D
Question 23 Explanation: 

A binary min-heap is a complete binary tree stored in an array, and there is no ordering among the leaf elements, so finding the maximum requires a linear scan.

In a min-heap the parent key is less than or equal to (≤) the keys of its children, so the maximum element must be located at one of the leaf nodes.

Since about half of the n elements are leaves and they are in no particular order, an optimal algorithm still has to examine all of them, which takes O(n) time in the worst case.
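A C sketch of the idea: in the array representation the leaves occupy indices ⌊n/2⌋ .. n-1 (0-based), so it suffices to scan that half, which is still Θ(n) in the worst case (not from the original; names and data are illustrative):

#include <stdio.h>

/* Maximum element of a binary min-heap stored in a[0..n-1].
   The maximum must be a leaf; leaves are at indices n/2 .. n-1 (0-based). */
static int max_in_min_heap(const int a[], int n) {
    int best = a[n / 2];
    for (int i = n / 2 + 1; i < n; i++)
        if (a[i] > best) best = a[i];
    return best;
}

int main(void) {
    int heap[] = {1, 3, 2, 7, 4, 5, 9};   /* a valid min-heap */
    printf("%d\n", max_in_min_heap(heap, 7));  /* prints 9 */
    return 0;
}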

Question 24
What is the worst-case number of arithmetic operations performed by recursive binary search on a sorted array of size n?
A
B
C
D
Question 24 Explanation: 

The question fixes three main constraints:

  1. The input array is in sorted order.
  2. Recursive binary search is used.
  3. We want the worst-case number of operations.

For a sorted array, binary search takes O(log n) steps in the worst case, and each step performs only a constant number of arithmetic operations (computing the middle index and comparing), so the total is O(log n).

Ex: 10, 20, 30

Each step of the binary search (whether recursive or iterative) works as follows:

Step-1: If the element equals the middle element, it is found; return the index.

Step-2: If the element is greater than the middle element, search the sub-array from index middle+1 to n.

Step-3: If the element is less than the middle element, search the sub-array from index 0 to middle-1.

Note: The worst case occurs when the search narrows all the way down to a single element (for example, the smallest or largest value); even then no more than O(log n) steps, and hence O(log n) arithmetic operations, are performed.

Question 25

Let f(n), g(n) and h(n) be functions defined for positive integers such that f(n) = O(g(n)), g(n) ≠ O(f(n)), g(n) = O(h(n)), and h(n) = O(g(n)). Which one of the following statements is FALSE?

A
f(n) + g(n) = O(h(n) + h(n))
B
f(n) = O(h(n))
C
h(n) ≠ O(f(n))
D
f(n)h(n) ≠ O(g(n)h(n))
Question 25 Explanation: 
f(n) = O(h(n)) [using transitivity: f(n) = O(g(n)) and g(n) = O(h(n))].
Also, since f(n) = O(g(n)), we have f(n)h(n) = O(g(n)h(n)).
Hence statement (D), which claims f(n)h(n) ≠ O(g(n)h(n)), is false.
Question 26

When n = 2^(2k) for some k ≥ 0, the recurrence relation

 T(n) = √(2)T(n/2) + √n, T(1) = 1
evaluates to :
A
√n (log n + 1)
B
√n log n
C
√n log √n
D
n log √n
Question 26 Explanation: 
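A verification sketch (not the original solution): guess T(n) = √n (log n + 1), with logs base 2. Then √2·T(n/2) + √n = √2·√(n/2)·(log(n/2) + 1) + √n = √n·(log n - 1 + 1) + √n = √n log n + √n = √n (log n + 1), and T(1) = 1·(0 + 1) = 1 matches the base case. So the recurrence evaluates to √n (log n + 1), i.e., option A.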
Question 27

Arrange the following functions in increasing asymptotic order:

A. n^(1/3)
B. e^n
C. n^(7/4)
D. n log^9 n
E. 1.0000001^n
A
A, D, C, E, B
B
D, A, C, E, B
C
A, C, D, E, B
D
A, C, D, B, E
Question 27 Explanation: 
A < D < C < E < B
n^(1/3) < n log^9 n < n^(7/4) < 1.0000001^n < e^n
Question 28

Choose the correct alternatives (More than one may be correct). The complexity of comparison based sorting algorithms is:

A
Θ(n logn)
B
Θ(n)
C
Θ(n^2)
D
Θ(n√n)
E
Both (A) and (C)
Question 28 Explanation: 
Question 29

A program takes as input a balanced binary search tree with n leaf nodes and computes the value of a function g(x) for each node x. If the cost of computing g(x) is min{no. of leaf-nodes in left-subtree of x, no. of leaf-nodes in right-subtree of x} then the worst-case time complexity of the program is

A
Θ(n)
B
Θ(n log n)
C
Θ(n^2)
D
Θ(n^2 log n)
Question 29 Explanation: 
The program runs on a balanced binary search tree with n leaf nodes.
⇒ cost of g(x) = min(no. of leaf nodes in the left subtree of x, no. of leaf nodes in the right subtree of x)
→ Total no. of leaves = n. In a balanced tree the cost at the root is about n/2, and the total cost over a subtree with n leaves satisfies T(n) = 2T(n/2) + n/2, which solves to Θ(n log n).
So the worst-case time complexity of the program is Θ(n log n).
Question 30

The time complexity of the following C function is (assume n > 0)

int recursive (int n) {
   if (n == 1)
   return (1);
   else
   return (recursive (n - 1) + recursive (n - 1));
} 
A
O(n)
B
O(n log n)
C
O(n2)
D
O(2n)
Question 30 Explanation: 
if (n==1)
return(1)
else
return(recursive(n-1) + recursive(n-1))
For n > 1, the recurrence is T(n) = 2T(n-1) with T(1) = 1:
T(2) = T(1) + T(1) = 2T(1)
T(3) = T(2) + T(2) = 2T(2) = 2·2T(1) = 2^2 T(1)
T(4) = 2^3·T(1)
⋮
T(n) = 2^(n-1)·T(1) = O(2^n)
Question 31

Let A[1,...,n] be an array storing a bit (1 or 0) at each location, and f(m) is a function whose time complexity is θ(m). Consider the following program fragment written in a C like language:

counter = 0;
for (i = 1; i <= n; i++)
{ if (A[i] == 1) counter++;
      else { f(counter); counter = 0; }
}

The complexity of this program fragment is

A
Ω(n^2)
B
Ω(n log n) and O(n^2)
C
θ(n)
D
o(n)
Question 31 Explanation: 
If the array contains all 0's, f(0) is called n times and each call takes constant time, so the fragment runs in O(n) time.
If the array contains all 1's, counter++ executes n times, which is again O(n).
In general, each call f(counter) costs Θ(counter), and the counter values passed to f add up to at most the total number of 1's in the array, so the overall running time is Θ(n) in every case.
Question 32

Two matrices M1 and M2 are to be stored in arrays A and B respectively. Each array can be stored either in row-major or column-major order in contiguous memory locations. The time complexity of an algorithm to compute M1 × M2 will be

A
best if A is in row-major, and B is in column major order
B
best if both are in row-major order
C
best if both are in column-major order
D
independent of the storage scheme
Question 32 Explanation: 
The running time of an algorithm is independent of the storage scheme: while computing the running time we assume that accessing any element takes the same constant time. So the answer is (D).
If the question had instead asked about the best performance of an implementation (taking memory-access locality into account), option (A) would be the appropriate choice.
Question 33

The tightest lower bound on the number of comparisons, in the worst case, for comparison-based sorting is of the order of

A
n
B
n^2
C
n log n
D
n log^2 n
Question 33 Explanation: 
For comparison-based sorting, the tightest lower bound on the number of comparisons in the worst case is Ω(n log n), so it is of the order n log n.
→ A tightest upper bound is expressed with big O; a tightest lower bound with big Ω.