Sorting
Question 1 |
An array of 25 distinct elements is to be sorted using quicksort. Assume that the pivot element is chosen uniformly at random. The probability that the pivot element gets placed in the worst possible location in the first round of partitioning (rounded off to 2 decimal places) is _____.
0.08 | |
0.01 | |
1 | |
8 |
The pivot is chosen uniformly at random. The worst possible location for the pivot after the first round of partitioning is either the first or the last position, which happens exactly when the pivot is the smallest or the largest of the 25 distinct elements. So there are 2 favourable choices among 25 equally likely ones:
= 2/25
= 0.08
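As a quick sanity check, here is a minimal Python simulation (the function name and trial count are our own, not part of the question) estimating the probability that a uniformly random pivot is the minimum or maximum, i.e. would land in the first or last position after one round of partitioning:

```python
import random

def worst_pivot_probability(n=25, trials=200_000):
    """Estimate P(random pivot is the min or max of n distinct elements),
    which is exactly the probability that the pivot ends up in the worst
    possible (first or last) position after one partitioning round."""
    hits = 0
    for _ in range(trials):
        arr = random.sample(range(10 * n), n)  # n distinct elements
        pivot = random.choice(arr)
        if pivot == min(arr) or pivot == max(arr):
            hits += 1
    return hits / trials

print(worst_pivot_probability())  # ~0.08 = 2/25
```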
Question 2 |
There are n unsorted arrays: A1, A2, ..., An. Assume that n is odd. Each of A1, A2, ..., An contains n distinct elements. There are no common elements between any two arrays. The worst-case time complexity of computing the median of the medians of A1, A2, ..., An is
O(n) | |
O(n log n) | |
Ω(n2 log n) | |
O(n2) |
Finding the median of one unsorted array of n elements takes O(n) time with a linear-time selection algorithm (this is similar to quicksort's partitioning idea, without the extra recursive sorting work).
→ For an array spanning positions i to j, the median position is:
1. Ceil((i+j)/2) if the number of elements is odd
2. Floor((i+j)/2) if the number of elements is even
→ There are n arrays in total, so computing all n medians takes
⇒ O(n)*O(n)
⇒ O(n2)
Finding the median of the n medians then adds only O(n) more.
Note:
The question clearly states that all elements are distinct and that no two arrays share a common element.
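A minimal Python sketch of the idea (function names are our own; quickselect gives expected linear time, while a deterministic median-of-medians pivot would make each selection worst-case O(n)):

```python
import random

def quickselect(arr, k):
    """Return the k-th smallest element (0-indexed) of an unsorted list.
    Expected O(len(arr)) time with a random pivot."""
    pivot = random.choice(arr)
    lows = [x for x in arr if x < pivot]
    highs = [x for x in arr if x > pivot]
    if k < len(lows):
        return quickselect(lows, k)
    if k >= len(arr) - len(highs):
        return quickselect(highs, k - (len(arr) - len(highs)))
    return pivot  # the pivot itself is the k-th smallest

def median_of_medians_of_arrays(arrays):
    """n arrays of n elements each: n linear-time medians = O(n^2) total,
    plus one more linear-time median of the n medians = O(n)."""
    medians = [quickselect(a, len(a) // 2) for a in arrays]  # O(n) * O(n)
    return quickselect(medians, len(medians) // 2)           # O(n)
```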
Question 3 |
The worst case running times of Insertion sort, Merge sort and Quick sort, respectively, are:
Θ(nlogn), Θ(nlogn), and Θ(n2) | |
Θ(n2 ), Θ(n2 ), and Θ(nlogn) | |
Θ(n2), Θ(nlogn), and Θ(nlogn) | |
Θ(n2), Θ(nlogn), and Θ(n2) |

Question 4 |
Assume that the algorithms considered here sort the input sequences in ascending order. If the input is already in ascending order, which of the following are TRUE?
I. Quicksort runs in Θ(n2) time
II. Bubblesort runs in Θ(n2) time
III. Mergesort runs in Θ(n) time
IV. Insertion sort runs in Θ(n) time
I and II only | |
I and III only | |
II and IV only | |
I and IV only |
→ The recurrence relation for Quicksort, if the elements are already sorted, is
T(n) = T(n-1) + O(n); by the substitution method this evaluates to O(n2).
→ The recurrence relation for Merge sort, if the elements are already sorted, is
T(n) = 2T(n/2) + O(n); by the substitution method this evaluates to O(nlogn).
We can also apply the Master theorem [a=2, b=2, k=1, p=0] to the above recurrence.
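To see statement IV concretely, here is an illustrative Python sketch (names are our own) of insertion sort instrumented to count key comparisons; on already-sorted input the inner loop exits after one comparison per element, giving n-1 comparisons, i.e. Θ(n):

```python
def insertion_sort_comparisons(a):
    """Insertion sort that counts key comparisons; on already-sorted
    input the inner while loop exits immediately, so the total is n-1
    comparisons, i.e. Theta(n)."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1
            if a[j] > key:
                a[j + 1] = a[j]  # shift larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return a, comparisons

print(insertion_sort_comparisons(range(10))[1])  # 9 comparisons for n=10
```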
Question 5 |
You have an array of n elements. Suppose you implement quicksort by always choosing the central element of the array as the pivot. Then the tightest upper bound for the worst case performance is
O(n2) | |
O(n log n) | |
Θ(n logn) | |
O(n3) |
Question 6 |
Which one of the following is the tightest upper bound that represents the number of swaps required to sort n numbers using selection sort?
O(log n) | |
O(n) | |
O(n log n) | |
O(n2) |
Selection sort makes O(n2) comparisons, but each of the n-1 scans performs at most one swap (the final element is already in place), so the number of swaps is O(n).
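A short Python sketch (our own illustration) that counts swaps confirms the O(n) bound:

```python
def selection_sort_swaps(a):
    """Selection sort counting swaps: one swap per scan of the unsorted
    suffix, so at most n-1 swaps (O(n)), even though comparisons are O(n^2)."""
    a = list(a)
    swaps = 0
    n = len(a)
    for i in range(n - 1):
        m = min(range(i, n), key=a.__getitem__)  # index of the minimum
        if m != i:
            a[i], a[m] = a[m], a[i]
            swaps += 1
    return a, swaps

print(selection_sort_swaps([5, 3, 1, 4, 2]))  # ([1, 2, 3, 4, 5], 3)
```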
Question 7 |
Which one of the following is the tightest upper bound that represents the time complexity of inserting an object into a binary search tree of n nodes?
O(1) | |
O(log n) | |
O(n) | |
O(n log n) |
Question 8 |
What is the time complexity of Bellman-Ford single-source shortest path algorithm on a complete graph of n vertices?
Θ(n2) | |
Θ(n2 log n) | |
Θ(n3) | |
Θ(n3 log n) |
Bellman–Ford runs in O(|V| * |E|) time, where |V| and |E| are the number of vertices and edges respectively.
Note: For a complete graph, |E| = n(n-1)/2 = Θ(n2), so the running time is Θ(n · n2) = Θ(n3).
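A compact sketch of Bellman-Ford (our own illustrative code, omitting the negative-cycle check) that makes the O(|V| * |E|) structure visible:

```python
def bellman_ford(n, edges, source):
    """Bellman-Ford: relax every edge |V|-1 times -> O(|V| * |E|).
    On a complete graph |E| = Theta(n^2), so this is Theta(n^3).
    edges: list of (u, v, weight) tuples; vertices are 0..n-1."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):          # |V| - 1 relaxation rounds
        for u, v, w in edges:       # O(|E|) work per round
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist
```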
Question 9 |
The number of elements that can be sorted in Θ(log n) time using heap sort is
Θ(1) | |
Θ(√logn) | |
Θ (logn/loglogn) | |
Θ(log n) |
Let m = Θ(logn/log logn) be the number of elements. Heap sort then takes
Θ((logn/log logn) · log(logn/log logn))
= Θ((logn/log logn) · (log logn - log log logn))
= Θ((logn/log logn) · log logn)
= Θ(log n)
Hence, option (C) is the correct answer.
Question 10 |
What is the number of swaps required to sort n elements using selection sort, in the worst case?
θ(n) | |
θ(n log n) | |
θ(n2) | |
θ(n2 logn) |
Question 11 |
In quick sort, for sorting n elements, the (n/4)th smallest element is selected as pivot using an O(n) time algorithm. What is the worst case time complexity of the quick sort?
θ(n) | |
θ(n log n) | |
θ(n2) | |
θ(n2 log n) |

The pivot splits the n elements into parts of size n/4 and 3n/4. Following the larger part, the problem size shrinks by a factor of 4/3 at each level:
n → n/(4/3) → n/(4/3)2 → n/(4/3)3 → ... → n/(4/3)k = 1
n/(4/3)k = 1 ⇒ n = (4/3)k ⇒ k = log4/3n [k = number of levels]
The work done at each level is Cn.
So, total work done = Cn · log4/3n = Θ(nlog4/3n) = Θ(n log n)
Question 12 |
Consider the Quicksort algorithm. Suppose there is a procedure for finding a pivot element which splits the list into two sub-lists each of which contains at least one-fifth of the elements. Let T(n) be the number of comparisons required to sort n elements. Then
T(n) ≤ 2T(n/5) + n | |
T(n) ≤ T(n/5) + T(4n/5) + n | |
T(n) ≤ 2T(4n/5) + n | |
T(n) ≤ 2T(n/2) + n |
So T(n/5) comparisons are needed for the first sub-list and T(4n/5) for the second, plus n comparisons for the partitioning step itself.
Now suppose one sub-list contains more than n/5 elements; then the other contains fewer than 4n/5 elements, so the split is more balanced than (n/5, 4n/5) and the recursion tree is more balanced. The number of comparisons is therefore at most
T(n/5) + T(4n/5) + n
Question 13 |
Which one of the following in place sorting algorithms needs the minimum number of swaps?
Quick sort | |
Insertion sort | |
Selection sort | |
Heap sort |
Question 14 |
In a permutation a1...an of n distinct integers, an inversion is a pair (ai, aj) such that i < j and ai > aj.
If all permutations are equally likely, what is the expected number of inversions in a randomly chosen permutation of 1...n ?
n(n-1)/2 | |
n(n-1)/4 | |
n(n+1)/4 | |
2n[log2n] |
Question 15 |
In a permutation a1...an of n distinct integers, an inversion is a pair (ai, aj) such that i < j and ai > aj.
What would be the worst case time complexity of the Insertion Sort algorithm, if the inputs are restricted to permutations of 1...n with at most n inversions?
Θ(n2) | |
Θ(n log n) | |
Θ(n1.5) | |
Θ(n) |
Question 16 |
Randomized quicksort is an extension of quicksort where the pivot is chosen randomly. What is the worst case complexity of sorting n numbers using randomized quicksort?
O(n) | |
O(n log n) | |
O(n2) | |
O(n!) |
Question 17 |
Let s be a sorted array of n integers. Let t(n) denote the time taken for the most efficient algorithm to determine if there are two elements with sum less than 1000 in s. Which of the following statements is true?
t(n) is O(1) | |
n ≤ t(n) ≤ n log2 n | |
n log2 n ≤ t(n) < nC2 | |
t(n) = nC2 |
Question 18 |
A sorting technique is called stable if
it takes O (nlog n) time | |
it maintains the relative order of occurrence of non-distinct elements | |
it uses divide and conquer paradigm | |
it takes O(n) space |
Question 19 |
If one uses straight two-way merge sort algorithm to sort the following elements in ascending order:
20, 47, 15, 8, 9, 4, 40, 30, 12, 17
then the order of these elements after second pass of the algorithm is:
8, 9, 15, 20, 47, 4, 12, 17, 30, 40 | |
8, 15, 20, 47, 4, 9, 30, 40, 12, 17 | |
15, 20, 47, 4, 8, 9, 12, 30, 40, 17 | |
4, 8, 9, 15, 20, 47, 12, 17, 30, 40 |
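A small Python sketch (our own; it prints the array after every pass) reproduces the passes of straight two-way merge sort and confirms option B after the second pass:

```python
from heapq import merge

def two_way_merge_sort_passes(a):
    """Straight two-way (bottom-up) merge sort; the run width doubles
    on every pass: 1 -> 2 -> 4 -> ..."""
    a = list(a)
    width = 1
    while width < len(a):
        for lo in range(0, len(a), 2 * width):
            mid = min(lo + width, len(a))
            hi = min(lo + 2 * width, len(a))
            a[lo:hi] = list(merge(a[lo:mid], a[mid:hi]))  # merge two sorted runs
        print("after pass with run width", width, ":", a)
        width *= 2
    return a

two_way_merge_sort_passes([20, 47, 15, 8, 9, 4, 40, 30, 12, 17])
# after the second pass: [8, 15, 20, 47, 4, 9, 30, 40, 12, 17]
```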

Question 20 |
Merge sort uses
Divide and conquer strategy | |
Backtracking approach | |
Heuristic search | |
Greedy approach |
Question 21 |
Following algorithm(s) can be used to sort n integers in the range [1...n3] in O(n) time
Heapsort | |
Quicksort | |
Mergesort | |
Radixsort |
Radix sort is not a comparison-based sort (it uses counting sort as a stable subroutine). Writing each key in base n, every integer in [1...n3] has only a constant number of digits, so a constant number of O(n) counting-sort passes suffice: O(n) overall.
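An illustrative Python sketch of the idea (our own code): write each key in base n and make a constant number of stable bucket passes, each costing O(n):

```python
def radix_sort_linear(a):
    """Sort n non-negative integers bounded by n^3 in O(n) time: using
    base n, 4 stable passes safely cover all values up to n^4 - 1,
    which is still a constant number of O(n) passes."""
    n = len(a)
    if n <= 1:
        return list(a)
    for d in range(4):                          # constant number of passes
        buckets = [[] for _ in range(n)]
        for x in a:
            buckets[(x // n**d) % n].append(x)  # stable distribution by digit d
        a = [x for b in buckets for x in b]     # concatenate buckets in order
    return a

print(radix_sort_linear([27, 1, 64, 8, 125, 3, 9]))  # [1, 3, 8, 9, 27, 64, 125]
```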
Question 22 |
The minimum number of comparisons required to sort 5 elements is _____
7 |
Minimum number of comparisons in the worst case
= ⌈log2(n!)⌉
= ⌈log2(5!)⌉
= ⌈log2(120)⌉
= 7
Question 23 |
Quicksort is ________ efficient than heapsort in the worst case.
LESS. |
Question 24 |
Let P be a quicksort program to sort numbers in ascending order. Let t1 and t2 be the time taken by the program for the inputs [1 2 3 4] and [5 4 3 2 1] respectively. Which of the following holds?
t1 = t2 | |
t1 > t2 | |
t1 < t2 | |
t1 = t2 + 5 log 5 |
Question 25 |
If we use Radix Sort to sort n integers in the range [nk/2, nk], for some k>0 which is independent of n, the time taken would be?
Θ(n) | |
Θ(kn) | |
Θ(nlogn) | |
Θ(n2) |
where n = number of keys, w = key length in bits
w = log2(nk) = k × log2(n)
Complexity Θ(wn) = Θ(k × log2(n) × n) = Θ(n log n), since k is a constant independent of n.
Question 26 |
Let a and b be two sorted arrays containing n integers each, in non-decreasing order. Let c be a sorted array containing 2n integers obtained by merging the two arrays a and b. Assuming the arrays are indexed starting from 0, consider the following four statements
I. a[i] ≥ b[i] ⇒ c[2i] ≥ a[i]
II. a[i] ≥ b[i] ⇒ c[2i] ≥ b[i]
III. a[i] ≥ b[i] ⇒ c[2i] ≤ a[i]
IV. a[i] ≥ b[i] ⇒ c[2i] ≤ b[i]
Which of the following is TRUE?
only I and II | |
only I and IV | |
only II and III | |
only III and IV |
Since both a and b are sorted, the i elements a[0..i-1] are ≤ a[i], and the i elements b[0..i-1] are ≤ b[i]. If a[i] ≥ b[i], then b[0..i] and a[0..i-1] (at least 2i+1 elements in total) are all ≤ a[i], so in the merged array a[i] can appear no earlier than position 2i+1. Hence c[2i] ≤ a[i], i.e. statement III holds.
Similarly, a[i] ≥ b[i] means at most 2i elements (at most i from each array) are smaller than b[i], so b[i] ≤ c[2i], i.e. statement II holds.
So, option (C) is correct.
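A quick empirical check in Python (our own sketch, not part of the original solution) of statements II and III on random sorted arrays:

```python
import random

def check_merge_claims(trials=1000, n=8):
    """Empirically check, for random sorted arrays a and b of length n,
    that a[i] >= b[i] implies b[i] <= c[2i] <= a[i] (statements II and III)."""
    for _ in range(trials):
        a = sorted(random.sample(range(1000), n))
        b = sorted(random.sample(range(1000), n))
        c = sorted(a + b)                 # the merged array
        for i in range(n):
            if a[i] >= b[i]:
                assert b[i] <= c[2 * i] <= a[i]
    print("statements II and III held in all trials")

check_merge_claims()
```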
Question 27 |
The maximum number of comparisons needed to merge two sorted lists of sizes m and n into a single sorted list is
m*n | |
maximum of m and n | |
minimum of m and n | |
m+n–1 |
For example, take two sorted sub-arrays of size 3, say 1,3,5 and 2,4,6. Merging them requires 5 comparisons, which is m+n-1; the worst case can therefore be taken as O(m+n).
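A tiny Python sketch (our own) of the standard merge, counting comparisons:

```python
def merge_count(a, b):
    """Merge two sorted lists, counting comparisons; in the worst case
    every output element except the last needs one comparison: m+n-1."""
    out, i, j, comparisons = [], 0, 0, 0
    while i < len(a) and j < len(b):
        comparisons += 1
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out += a[i:] + b[j:]   # one list is exhausted; no more comparisons
    return out, comparisons

print(merge_count([1, 3, 5], [2, 4, 6]))  # ([1, 2, 3, 4, 5, 6], 5) -> m+n-1
```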
Question 28 |
Merge Sort | |
Insertion Sort | |
Selection Sort | |
Quick Sort |

Question 29 |
Greedy method | |
Divide-and-conquer | |
Dynamic Programming | |
Backtracking |
Selection sort maintains two subarrays:
1. The subarray which is already sorted.
2. The remaining subarray, which is unsorted.
In every iteration, the minimum element (considering ascending order) from the unsorted subarray is picked and moved to the sorted subarray. Always choosing the current minimum is clearly a greedy approach to sorting the array.
Question 30 |
O(n2), O(n2) | |
O(n2), O(nlog2n) | |
O(n log2n), O(n2) | |
O(n log2n), O(n log2n) |
Question 31 |
Insertion Sort | |
Quick Sort | |
Heap Sort | |
Selection Sort |

Question 32 |
11 | |
12 | |
13 | |
10 |
Step-2: 8,7,9,22,31,5,13
Step-3: 8,7,9,22,5,31,13
Step-4: 8,7,9,22,5,13,31
Step-5: 7,8,9,22,5,13,31
Step-6: 7,8,9,5,22,13,31
Step-7: 7,8,9,5,13,22,31
Step-8: 7,8,5,9,13,22,31
Step-9: 7,5,8,9,13,22,31
Step-10: 5,7,8,9,13,22,31
Note: A total of 10 swaps are required to sort the array.
Question 33 |
1 | |
5 | |
10 | |
20 |
The 1st iteration compares 4 numbers with the 5th,
the 2nd iteration compares 3 numbers with the 4th,
the 3rd iteration compares 2 numbers with the 3rd,
the 4th iteration compares 1 number with the 2nd.
So, the total number of comparisons is 4 + 3 + 2 + 1 = 10.
In general, this is the sum of the first (n-1) positive integers:
S = ((1 + (n-1)) * (n-1)) / 2 = n(n-1)/2
S = 10 for n = 5
Question 34 |
Dynamic programming | |
Backtracking | |
Divide and conquer | |
Greedy method |
1. Quick sort
2. Merge sort
3. Heap sort
Note: Binary search also follows the divide and conquer approach.
Question 35 |
Quick sort | |
Insertion sort | |
Selection sort | |
Heap sort |
The algorithm finds the minimum value, swaps it with the value in the first position, and repeats these steps for the remainder of the list. It does no more than n-1 swaps, and is thus useful where swapping is very expensive.
Question 36 |
Greedy method | |
Backtracking | |
Dynamic programming | |
Divide and Conquer |
Conceptually, a merge sort works as follows:
1. Divide the unsorted list into n sublists, each containing one element (a list of one element is considered sorted).
2. Repeatedly merge sublists to produce new sorted sublists until there is only one sublist remaining. This will be the sorted list.
Question 37 |
Insertion sort, Quick sort | |
Quick sort, Quick sort | |
Quick sort, Insertion sort | |
Insertion sort, Insertion sort |

Question 38 |
1. Quicksort
2. Heapsort
3. Mergesort
1 and 2 only | |
2 and 3 only | |
3 only | |
1, 2 and 3 |

Question 39 |
45 | |
72 | |
360 | |
450 |
Each octal number has 5 digits.
The base (radix) of the octal number system = 8.
The maximum number of comparisons = (number of items) * (number of digits) * (radix)
= 9 * 5 * 8
= 360
Question 40 |
True,True | |
False, False | |
True,False | |
False,False |


Question 41 |
Quick sort | |
Merge sort | |
Shell sort | |
Bubble sort |
Question 42 |
O(nlogn),O(nlogn) and O(n2) | |
O(n2),O(n2) and O(nlogn) | |
O(n2),O(nlogn) and O(nlogn) | |
O(n2),O(nlogn) and O(n2) |
merge sort → O(nlogn)
quick sort → O(n2)
Question 43 |
Insertion sort | |
merge sort | |
Quick sort | |
bubble sort |

Question 44 |
Place references to them in an array and sort the array | |
place them in a linked list and sort the linked list | |
place pointers to them in an array and sort the array | |
place them in an array and sort the array |
Option B is not correct because a linked list adds extra time overhead and makes searching very difficult.
Option D is suitable only for a small number of small objects/elements; copying large records around is expensive, which is why sorting pointers to them (option C) is preferred.
Question 45 |
No. of inputs | |
Arrangement of elements in an array | |
Size of elements | |
Pivot element |
Question 46 |
50.2 sec | |
6.7 sec | |
72.7 sec | |
11.2 sec |
The worst case time complexity of quicksort is O(n2);
the average and best case time complexity is O(n logn).
Step-1: Time taken to sort 1000 names by quicksort:
c * 1000 * log 1000 = 100
⇒ c = 100 / (1000 * log 1000) = 1 / (10 * log 1000)
Step-2: Time taken to sort 100 names by quicksort:
= c * n * logn
= (1 / (10 * log 1000)) * 100 * log 100
= (10 * 2) / 3 (taking logarithms to base 10)
≈ 6.7 sec
Question 47 |
If one uses straight two-way merge sort algorithm to sort the following elements in ascending order:
20, 47, 15, 8, 9, 4, 40, 30, 12, 17
then the order of these elements after second pass of the algorithm is:
8, 9, 15, 20, 47, 4, 12, 17, 30, 40 | |
8, 15, 20, 47, 4, 9, 30, 40, 12, 17 | |
15, 20, 47, 4, 8, 9, 12, 30, 40, 17 | |
4, 8, 9, 15, 20, 47, 12, 17, 30, 40 |

Question 48 |
Which of the following data structures is most suitable for radix sort?
Stack | |
Binary search tree | |
Tree | |
Linked list |
→ Radix sort repeatedly distributes keys into buckets and then concatenates the buckets in order; linked lists support both operations in O(1) time. (Trie-based variants exist as well: a circularly linked trie structure, similar in concept to a threaded binary tree, is called a circularly threaded trie.)
Question 49 |
Bubble sort and Insertion sort | |
Bubble sort and Quicksort | |
Bubble sort and merge sort | |
Quicksort and merge sort |
Quick sort and merge sort are both based on the divide and conquer paradigm and work in quite a similar way:
in both algorithms, the larger problem is divided into smaller subproblems until they can be solved directly, and the smaller solutions are then combined into the final solution.
Question 50 |
Consider an array with n elements. Which of the following options gives the maximum number of swap(aj, aj+1) operations to sort the array elements using efficient bubble sort algorithm?
n2/2 | |
n(n-1)/2 | |
n2 | |
n(n+1)/2 |
→ In the first pass, bubble sort has to look at n elements; after that it looks at n-1 elements (since the biggest integer is now at the end), and so on, until only 1 comparison occurs.
→ For n numbers, the total number of comparisons (and hence the maximum number of swaps) is (n-1) + ... + 2 + 1
= (n * (n-1)) / 2
= O(n2)
Question 51 |
Linear search | |
Hash search | |
Binary search | |
All of these |
→ Binary search requires the elements to be in sorted order; at every step it compares the key with the middle element and continues searching in the half that can still contain the key.
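A minimal iterative binary search sketch in Python (our own illustration):

```python
def binary_search(a, key):
    """Binary search on a sorted list: halve the search range each step,
    so at most O(log n) comparisons. Returns the index of key or -1."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == key:
            return mid
        if a[mid] < key:
            lo = mid + 1   # key can only be in the right half
        else:
            hi = mid - 1   # key can only be in the left half
    return -1              # not found

print(binary_search([2, 4, 7, 10, 23], 10))  # 3
```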
Question 52 |
A process of rearranging a given set of objects in a specific order | |
To facilitate the later search for members of the sorted set | |
Is a relevant and essential activity, particularly in data processing | |
All of these |
● ordering: arranging items in a sequence ordered by some criterion;
● categorizing: grouping items with similar properties.
The most common uses of sorted sequences are:
● making lookup or search efficient;
● making merging of sequences efficient.
● enable processing of data in a defined order.
Question 53 |

Bubble | |
Shake | |
Tree | |
Insertion |
Question 54 |
Insertion sort | |
Heap sort | |
Merge sort | |
Bubble sort |
1. Mergesort
2. Quicksort.
Question 55 |
floor ((i+1)/2) | |
ceiling ((i+1)/2) | |
floor (i/2) | |
ceiling (i/2) |
With the root stored at index 1, the left child of node i is at index 2i and the right child at index 2i+1.
Hence the parent of the node at index i is at index floor(i/2).
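In code (a trivial sketch of the 1-indexed scheme):

```python
def parent(i):
    return i // 2      # floor(i/2)

def left(i):
    return 2 * i

def right(i):
    return 2 * i + 1
```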
Question 56 |
O(n log n) | |
O(n2 log n) | |
O(n2 + log n) | |
O(n3) |
1. Each string has length n, so a single comparison of two strings takes O(n) time.
2. At each level of the merge (in bottom-up order), O(n) comparisons are performed in total, each costing O(n), i.e. O(n2) per level.
3. Copying the strings from one level to the next also takes O(n2).
So each level costs O(n2) + O(n2) = O(n2), and there are O(log n) levels.
Final answer: (log n) * O(n2) = O(n2 logn)
Question 57 |
Stage (i) 12,3,40,2
Stage (ii) 12,3,2,40
Stage (iii) 3,12,2,40
Stage (iv) 3,2,12,40
Stage (v) 2,3,12,40
The array has been sorted using
Insertion sort | |
Selection sort | |
Quick sort | |
Bubble sort |
→ The stages above show the individual swap steps of the first pass of bubble sort.
→ In bubble sort, after each pass is completed the largest remaining element reaches its final position at the end of the array.
Question 58 |
O(log2 n) | |
O(n log2 n) | |
O(1) | |
O(n) |
1. Stable: Equal keys aren't reordered.
2. Operates in place, requiring O(1) extra space.
3. Worst case O(nlog(n)) key comparisons.
4. Worst case O(n) swaps.
5. Adaptive: Speeds up to O(n) when data is nearly sorted or when there are few unique keys.
Question 59 |
Selection sort | |
Bubble sort | |
Radix sort | |
Insertion sort |
Phase 1:
1. We use n bins, one for each of the integers 0, 1, ..., n-1. We place each integer i of the list to be sorted into the bin numbered (i mod n); each bin will then contain a list of integers leaving the same remainder when divided by n.
2. At the end, we concatenate the bins in order to obtain a list L.
Phase 2:
1. The integers on the list L are redistributed into bins, this time using the bin selection function ⌊i/n⌋.
2. Now append integers to the ends of the lists. Finally, concatenate the lists to get the final sorted sequence.
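A small Python sketch of the two-phase scheme (our own code; it assumes the n integers lie in the range [0, n²), so two stable passes suffice):

```python
def two_phase_radix_sort(a):
    """Two-phase radix (bin) sort for n integers in [0, n^2):
    distribute by (i mod n), concatenate, then distribute by (i // n)."""
    n = len(a)
    for key in (lambda i: i % n, lambda i: i // n):
        bins = [[] for _ in range(n)]
        for x in a:
            bins[key(x)].append(x)            # stable append to the bin
        a = [x for b in bins for x in b]      # concatenate bins in order
    return a

print(two_phase_radix_sort([17, 3, 44, 0, 25, 9, 38]))
# [0, 3, 9, 17, 25, 38, 44]
```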
Question 60 |
O(1) | |
O(Log n) | |
O(n) | |
O(n2) |
→ The heapify function does not take O(logn) space; it takes O(logn) time in the worst case. Heapify need not be implemented through recursive calls: with an iterative method no auxiliary space is required, and only the input array is used.
So heap sort can be done with O(1) space complexity.
Question 61 |
Bubble sort | |
Selection sort | |
Quick sort | |
Insertion sort |
We can eliminate option B (selection sort) and option C (quick sort) because they still take their worst complexity, O(n2), when the array is almost sorted.
Bubble sort and insertion sort both give the best case complexity O(n) on sorted input, but the question adds the constraint that "there are few random elements", so insertion sort is the more appropriate answer.
Question 62 |
The array elements form a heap. | |
Elements in each half of the array are sorted amongst themselves. | |
Elements in the first half of the array are less than or equal to elements in second half of the array. | |
All of the above |
→Divide the unsorted list into n sublists, each containing one element (a list of one element is considered sorted).
→Repeatedly merge sublists to produce new sorted sublists until there is only one sublist remaining. This will be the sorted list.
As per merge sort, the elements in each half of the array are sorted amongst themselves.
Question 63 |
0.5 β lg n | |
0.5 (1 – β) lg n | |
– (lg n)/(lg β) | |
– (lg n)/lg (1 – β) |
Question 64 |
45 | |
72 | |
360 | |
450 |
Each octal number has 5 digits.
The base (radix) of the octal number system = 8.
The maximum number of comparisons = (number of items) * (number of digits) * (radix)
= 9 * 5 * 8
= 360
Question 65 |
O(log n) | |
O(n log n) | |
O(n) | |
None of the above |
Question 66 |
O(nm log m) | |
O(mn log n) | |
O(m + n) | |
O(mn) |
Another strategy to achieve complexity O(mn log n) is to build a min-heap of size n, consisting of the first element in each of the n lists. We then extract the minimum element from the heap to the output and replace it in the heap with the next element from the list from which the minimum came. Thus, it takes time O(log n) to generate each element in the output and there are O(mn) output elements in all, so the overall time is O(mn log n).
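A sketch of this min-heap strategy in Python (our own illustrative code, using the standard heapq module):

```python
import heapq

def merge_n_sorted_lists(lists):
    """Merge n sorted lists of length m with a min-heap of size n:
    each of the m*n output elements costs O(log n), so O(mn log n) total."""
    heap = [(lst[0], k, 0) for k, lst in enumerate(lists) if lst]
    heapq.heapify(heap)                      # O(n) to build the initial heap
    out = []
    while heap:
        val, k, i = heapq.heappop(heap)      # extract current minimum
        out.append(val)
        if i + 1 < len(lists[k]):            # refill from the same list
            heapq.heappush(heap, (lists[k][i + 1], k, i + 1))
    return out

print(merge_n_sorted_lists([[1, 4, 7], [2, 5, 8], [3, 6, 9]]))
```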
On the other hand, if we can apply min() to n elements at a time, we can merge all n lists in parallel. Let the n lists be l1, l2, ..., ln. We maintain indices i1, i2, ..., in into these n lists, initially set to 1. At each stage we add j = min(l1[i1], l2[i2], ..., ln[in]) to the sorted output list and increment the corresponding index. Overall, we add mn elements to the sorted output and each element is generated in constant time, so the time taken is O(mn).
Question 67 |
Descriptive Explanation |
First of all, the question is interesting only if the lists have duplicate elements. Otherwise, we just merge the two lists as usual and after k steps we have the answer.
Let us first solve a simpler problem and find the kth smallest number in a single sorted list of length n. The first number in the list is clearly the smallest number. Let us call this number x1. The next number must be at least x1+1, so we search for x1+1 in the list using binary search. (Recall that binary search is an example of divide and conquer.) Either we find the number, in which case the second smallest number x2 is x1+1, or we fail to find it, in which case we will be at the boundary between the last occurrence of x1 and the first occurrence of the next larger number, which is x2. We now search for x2+1 and discover x3. Repeat this k-1 times to find xk. Each binary search takes time log n, so overall this procedure takes time k log n, which is O(log n) if we treat k as a constant.
If we have two lists, we can find the first k numbers in the first list, which takes time O(log m), and then find the first k numbers in the second list, which takes time O(log n), and then merge these in time O(k) (which is a constant!) to find the kth smallest number overall.
In fact, we can reduce the number of binary searches to k and avoid the merge step. Maintain indices i1 and i2 for the two lists, pointing initially to the first element in lists 1 and 2, respectively. At each stage, the smaller of the numbers pointed to by i1 and i2 is the next number to be enumerated. We then advance i1 or i2, as the case may be, using binary search as described above. After k such steps, we would have found the kth smallest number overall. Some steps may involve a binary search in list 1, which takes time O(log m), and others may involve a binary search in list 2, which takes time O(log n), so each step is bounded by max(O(log m), O(log n)). This gives an overall complexity of max(O(log m), O(log n)), which is equivalent to O(log m + log n).
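A sketch of this index-advancing scheme in Python (our own code, using bisect for the binary searches; at each stage it skips past all occurrences of the enumerated value in both lists, so it enumerates distinct values in order):

```python
from bisect import bisect_right

def kth_smallest(l1, l2, k):
    """Return the k-th smallest distinct value across two sorted lists.
    Assumes 1 <= k <= number of distinct values. Each of the k stages
    does two binary searches: O(log m + log n) per stage."""
    i1 = i2 = 0
    x = None
    for _ in range(k):
        h1 = l1[i1] if i1 < len(l1) else float("inf")
        h2 = l2[i2] if i2 < len(l2) else float("inf")
        x = min(h1, h2)                 # next distinct value in order
        i1 = bisect_right(l1, x, i1)    # skip duplicates of x in l1: O(log m)
        i2 = bisect_right(l2, x, i2)    # skip duplicates of x in l2: O(log n)
    return x

print(kth_smallest([1, 1, 2, 5], [1, 3, 5, 7], 3))  # 3 (distinct: 1, 2, 3, ...)
```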
Question 68 |
Insertion in a sorted array takes constant time. | |
Insertion in a sorted linear linked list takes constant time. | |
Searching for a key in a sorted array can be done in O(log n) time. | |
Searching for a key in a sorted linear linked list can be done in O(log n) time. |
Because the array elements are in sorted order, binary search can be used, and it takes only O(logn) time in the worst case.
Question 69 |
[(9, 0, 9), (7, 1, 8), (6, 1, 4), (0, 2, 5), (6, 5, 9), (3, 5, 7)] | |
[(0, 2, 5), (3, 5, 7), (6, 1, 4), (6, 5, 9), (7, 1, 8), (9, 0, 9)] | |
[(9, 0, 9), (7, 1, 8), (6, 1, 4), (0, 2, 5), (3, 5, 7), (6, 5, 9)] | |
[(9, 0, 9), (6, 1, 4), (7, 1, 8), (0, 2, 5), (3, 5, 7), (6, 5, 9)] |
Question 70 |
Ω(n) | |
Ω(n/k) | |
Ω(nlogk) | |
Ω((n/k)log(n/k)) |
(k!)^(n/k) ≤ 2^h
Taking the logarithm of both sides, we get:
h ≥ lg((k!)^(n/k))
= (n/k) lg(k!)
≥ (n/k)(k/2) lg(k/2)
= (1/2)(n lg k) - (1/2)n
= Ω(nlogk)
Question 71 |
Ω(lg n) | |
Ω(n) | |
Ω(n lg n) | |
Ω(n2) |
Proof:
→ The number of leaves l ≥ n!
→ Any binary tree of height h has ≤ 2^h leaves.
→ Hence, n! ≤ l ≤ 2^h
→ Take logs: h ≥ lg(n!)
→ Use Stirling's approximation: n! > (n/e)^n
→ We get:
h ≥ lg((n/e)^n)
= n lg(n/e)
= n lg n - n lg e
= Ω(n lg n)
Question 72 |

a-iii, b-i, c-ii, d-iv | |
a-ii, b-i, c-iv, d-iii | |
a-ii, b-i, c-iii, d-iv | |
a-iii, b-ii, c-i, d-iv |
Huffman coding→ Greedy approach
Optimal polygon triangulation→ Dynamic programming
Subset sum problem → Backtracking
Note: The subset sum problem can also be solved with dynamic programming as well as with backtracking.
Question 73 |
O(k(n + d)) | |
O(d(n + k)) | |
O((n + k) lg d) | |
O((n + d) lg k) |

Question 74 |
O(d n k) | |
O(d nk) | |
O((d +n) k) | |
O(d(n + k)) |
Question 75 |
Greedy algorithm, θ(V3) | |
Greedy algorithm, θ(V2 lgn) | |
Dynamic programming, θ(V3) | |
Dynamic programming, θ(V2 lgn) |
Question 76 |
O(log n) | |
O(n) | |
O(n log n) | |
O(n2) |
But using radix sort it can be done in linear time only: O(n).
Question 77 |
Merge sort, Quick sort | |
Quick sort, Insertion sort | |
Insertion sort, Quick sort | |
insertion sort, Merge sort |
Method-1:
Merge sort has best, average, and worst case time complexity O(nlogn), just like heap sort. Quick sort matches heap sort and merge sort except that its worst case time complexity is O(n2). According to the question, P is merge sort and Q is quick sort.
Method-2:
Heap sort and insertion sort do not follow the divide and conquer technique; in that respect they are like P.
Merge sort follows divide and conquer, so it is not like Q.
Note: The official key gives Option-4 as the answer.
Question 78 |
(The four answer options were images and could not be recovered.)
Question 79 |
Insertion sort | |
Quick sort | |
Merge sort | |
Selection sort |

Question 80 |
67, 12, 10, 5, 4, 7, 23 | |
4, 7, 10, 23, 67, 12, 5 | |
4, 5, 7, 67, 10, 12, 23 | |
10, 7, 4, 67, 23, 12, 5 |
Pass-1: 4, 10, 7, 23, 67, 12, 5
Pass-2: 4, 7, 10, 23, 67, 12, 5
Pass-3: 4, 7, 10, 23, 67, 12, 5 [no change, because the element 23 is already in its correct position]
Question 81 |
O(n) | |
O(1) | |
O(n2) | |
O(n log n) |
Question 82 |
Merge sort | |
Radix sort | |
Heap sort | |
Address Calculation Sort |
Merge sort has space complexity of O(n). Hence it is not an in-place sorting algorithm.
Question 83 |
The worst case time complexity of Quick Sort is __________
O(n2) | |
O(log n) | |
O(n) | |
O(n logn) |
The worst case of quicksort typically arises in two situations:
1. The elements are already in sorted (or reverse sorted) order.
2. All elements are equal.
Question 84 |
The worst case time complexity of Merge Sort is _______
O(n2) | |
O(log n) | |
O(n) | |
O(n logn) |
Merge sort does not depend on the distribution of the input: its best, average, and worst case time complexity are all O(nlogn).
Question 85 |
Which of the following sorting procedures is the slowest?
Quick sort | |
Heap sort | |
Shell sort | |
Bubble sort |
Question 86 |
Which of the following sorting methods would be most suitable for sorting a list which is almost sorted?
Bubble Sort | |
Insertion Sort | |
Selection Sort | |
Quick Sort |
Question 87 |
The running time of insertion sort is
O(n2) | |
O(n) | |
O(log n) | |
O(n log n) |
Question 88 |
A sort which compares adjacent elements in a list and switches where necessary is _________
Insertion sort | |
Heap sort | |
Quick sort | |
Bubble sort |
Question 89 |
The correct order of the efficiency of the following sorting algorithms according to their overall running time comparison is
insertion>selection>bubble | |
insertion>bubble>selection | |
selection>bubble>insertion | |
bubble>selection>insertion |
Question 90 |
A sort which iteratively passes through a list to exchange the first element with any element less than it and then repeats with a new first element is called
insertion sort | |
selection sort | |
heap sort | |
quick sort |
Question 91 |
The algorithm that will efficiently sort an array that is nearly sorted except for the interchange of some adjacent pairs of numbers like {1,3,2,5,4,6} is
quick sort | |
bubble sort | |
merge sort | |
selection sort |
Best case of bubble sort is O(n).
Best case of merge sort is O(n logn).
Best case of selection sort is O(n2).
So the algorithm that will efficiently sort an array that is nearly sorted except for the interchange of some adjacent pairs of numbers is bubble sort.
Question 92 |
Assume that the algorithms considered here sort the input sequences in ascending order. If the input is already in ascending order, which of the following are TRUE?
I. Quick sort runs in Θ(n2) time
II. Bubble sort runs in Θ(n2) time
III. Merge sort runs in Θ(n2) time
IV. Insertion sort runs in Θ(n2) time
I and II only | |
I and III only | |
II and IV only | |
I and IV only | |
None of the above. |
Bubble sort runs in O(n) time if the input is already in sorted order.
Merge sort runs in O(nlogn) time.
Insertion sort runs in O(n) time.
Only statement I is true, so none of the listed combinations is correct.
Question 93 |
How much extra space is used by heap sort?
O(1) | |
O(log n) | |
O(n) | |
O(n2) |
Heap sort uses the Max-Heapify function, which is usually written recursively but can be rewritten as a simple while loop, making it iterative. Heapify therefore does not take O(logn) space; it takes O(logn) time in the worst case. With the iterative method no auxiliary space is needed and only the input array is used.
So, the space complexity of heap sort can be reduced to O(1).
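An illustrative in-place heap sort with an iterative sift-down (our own sketch), showing that only O(1) auxiliary space is needed:

```python
def heapsort(a):
    """In-place heap sort; sift_down is iterative, so no recursion stack
    and no auxiliary array: O(1) extra space, O(n log n) time."""
    n = len(a)

    def sift_down(i, end):
        while True:
            child = 2 * i + 1                  # left child (0-indexed heap)
            if child >= end:
                return
            if child + 1 < end and a[child + 1] > a[child]:
                child += 1                     # pick the larger child
            if a[i] >= a[child]:
                return                         # heap property restored
            a[i], a[child] = a[child], a[i]
            i = child

    for i in range(n // 2 - 1, -1, -1):        # build the max-heap
        sift_down(i, n)
    for end in range(n - 1, 0, -1):            # repeatedly extract the max
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)
    return a

print(heapsort([9, 4, 7, 1, 8, 2]))  # [1, 2, 4, 7, 8, 9]
```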
Question 94 |
What is meant by stable sorting algorithm?
A sorting algorithm is stable if it preserves the order of all keys | |
A sorting algorithm is stable if it preserves the order of non-duplicate keys | |
A sorting algorithm is stable if it preserves the order of duplicate keys | |
A sorting algorithm is stable if it doesn't preserve the order of duplicate keys |
Examples of stable sorting algorithms are insertion sort, merge sort, and counting sort.
Question 95 |
Consider the following statements
1. Insertion sort and merge sort are stable
2. Heap sort is inherently unstable
3. Selection sort is not inherently stable, but may be coded in such a way that it is stable
Only statement 1 is correct | |
All statements are correct | |
Statement 3 is not correct | |
Only statements 1 and 2 are correct |
S1 is true.
S2 is true: heap sort is not stable, because operations on the heap can change the relative order of equal items. When sorting in ascending order, heap sort first picks the largest element and puts it at the end of the list.
S3 is true: selection sort is unstable by nature but can be made stable by making some changes to the algorithm; hence it is not inherently unstable.
An inherently unstable algorithm is one that cannot be made stable even by modifying the algorithm.
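A sketch of statement 3 in Python (our own code): selection sort made stable by shifting instead of swapping.

```python
def stable_selection_sort(a):
    """Selection sort made stable: instead of swapping the minimum into
    place (which can reorder equal keys), remove it and shift the
    intervening elements right, preserving their relative order."""
    a = list(a)
    for i in range(len(a)):
        m = min(range(i, len(a)), key=a.__getitem__)  # leftmost minimum
        a.insert(i, a.pop(m))   # shift, don't swap
    return a
```

Because min with a key returns the first (leftmost) minimum, and pop/insert shifts the block between positions i and m right by one, equal keys keep their original relative order.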
Question 96 |
What is the complexity of merge sort?
O(n) | |
O(n2 logn) | |
O(nlogn) | |
O(n2) |
Question 97 |
Time complexity of Merge sort is
O(n) | |
O(n2) | |
O(log n) | |
O(n log n) |
Question 98 |
In which sorting algorithm is the average number of comparisons (n2 – n)/2?
Insertion sort | |
Selection sort | |
Merge sort | |
Quick sort |
Question 99 |
Quick Sort | |
Heap Sort | |
Merge sort | |
All the above |
Question 100 |
Dynamic programming | |
Back tracking | |
Divide and conquer | |
Greedy method |
Question 101 |
Bubble Sort | |
Quick Sort | |
Merge Sort | |
Heap Sort |
Question 102 |
Quick sort is run on two inputs shown below to sort in ascending order:
I: 1, 2, 3, ..., n
II: n, n-1, ..., 2, 1
Let k1 and k2 be the number of comparisons made for the inputs I and II respectively. Then
k1 < k2 | |
k1 = k2 | |
k1 > k2 | |
None |
Question 103 |
Heap Sort | |
Quick Sort | |
Insertion Sort | |
None of the above |
Merge sort is an example of an external sorting algorithm.