Sorting

Question 1

An array of 25 distinct elements is to be sorted using quicksort. Assume that the pivot element is chosen uniformly at random. The probability that the pivot element gets placed in the worst possible location in the first round of partitioning (rounded off to 2 decimal places) is _____.

A
0.08
B
0.01
C
1
D
8
       Algorithms       Sorting       GATE 2019       Video-Explanation
Question 1 Explanation: 
Step-1: Given, 25 distinct elements are to be sorted using quicksort.
Step-2: The pivot element is chosen uniformly at random.
Step-3: The worst possible position for the pivot after partitioning is the first or the last position (i.e., the pivot is the minimum or the maximum element).
Step-4: So there are 2 favourable choices among 25 distinct elements
= 2/25
= 0.08
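As a quick sanity check (not part of the original solution; the helper name is hypothetical), a short Monte Carlo simulation in Python estimates the same 2/25 probability:

import random

def estimate_worst_pivot_probability(n=25, trials=100_000):
    # The pivot lands in the worst possible position exactly when it is the
    # minimum or the maximum of the n distinct elements.
    hits = 0
    for _ in range(trials):
        elements = random.sample(range(10 * n), n)   # n distinct values
        pivot = random.choice(elements)
        if pivot == min(elements) or pivot == max(elements):
            hits += 1
    return hits / trials

print(estimate_worst_pivot_probability())   # about 0.08, matching 2/25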
Question 2

There are n unsorted arrays: A1, A2, ..., An. Assume that n is odd. Each of A1, A2, ..., An contains n distinct elements. There are no common elements between any two arrays. The worst-case time complexity of computing the median of the medians of A1, A2, ..., An is

A
O(n)
B
O(n log n)
C
Ω(n2 log n)
D
O(n2)
       Algorithms       Sorting       GATE 2019       Video-Explanation
Question 2 Explanation: 
Finding the median of a single unsorted array of n elements takes O(n) time.
(The linear-time selection procedure is similar to quicksort's partitioning, but it recurses on only one side and never fully sorts the array.)
→ For a sorted range with endpoints i and j, the median index is (i+j)/2:
1. If the number of elements is odd, the median is at ⌈(i+j)/2⌉.
2. If the number of elements is even, the (lower) median is at ⌊(i+j)/2⌋.
→ Here, there are n arrays and each median takes O(n) time to find
⇒ O(n)*O(n)
⇒ O(n2)
Finding the median of the resulting n medians adds only another O(n), so the worst-case total is O(n2).
Note:
They clearly state that all elements are distinct and that there are no common elements between any two arrays.
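A minimal sketch (assumed code, not from the source) of the O(n2) computation; the helper names are hypothetical, and an expected-linear quickselect is used for brevity (a worst-case O(n) selection such as median-of-medians gives the stated worst-case bound):

import random

def median(arr):
    # select the middle element (index len(arr)//2) in expected O(len(arr)) time
    k = len(arr) // 2
    a = list(arr)
    while True:
        pivot = random.choice(a)
        lows = [x for x in a if x < pivot]
        highs = [x for x in a if x > pivot]
        if k < len(lows):
            a = lows
        elif k < len(a) - len(highs):
            return pivot            # k falls on the pivot
        else:
            k -= len(a) - len(highs)
            a = highs

def median_of_medians(arrays):
    # n arrays of n elements each: n * O(n) = O(n2) to get all the medians,
    # plus one more O(n) selection for the median of those medians
    medians = [median(a) for a in arrays]
    return median(medians)

print(median_of_medians([[9, 1, 5], [2, 8, 4], [7, 3, 6]]))   # medians 5, 4, 6 -> 5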
Question 3

The worst case running times of Insertion sort, Merge sort and Quick sort, respectively, are:

A
Θ(nlogn), Θ(nlogn), and Θ(n2)
B
Θ(n2 ), Θ(n2 ), and Θ(nlogn)
C
Θ(n2), Θ(nlogn), and Θ(nlogn)
D
Θ(n2), Θ(nlogn), and Θ(n2)
       Algorithms       Sorting       GATE 2016 [Set-1]       Video-Explanation
Question 3 Explanation: 
Question 4
Assume that the algorithms considered here sort the input sequences in ascending order. If the input is already in ascending order, which of the following are TRUE?
I. Quicksort runs in Θ(n2) time
II. Bubblesort runs in Θ(n2) time
III. Mergesort runs in Θ(n) time
IV. Insertion sort runs in Θ(n) time
A
I and II only
B
I and III only
C
II and IV only
D
I and IV only
       Algorithms       Sorting       GATE 2016 [Set-2]       Video-Explanation
Question 4 Explanation: 
If the input sequence is already sorted, then Quick sort takes O(n2), Bubble sort (with the early-exit flag) takes O(n), Merge sort takes O(nlogn) and Insertion sort takes O(n).
→ The recurrence relation for Quicksort, if the elements are already sorted, is
T(n) = T(n-1) + O(n); with the help of the substitution method it gives O(n2).
→ The recurrence relation for Merge sort, if the elements are already sorted, is
T(n) = 2T(n/2) + O(n); with the help of the substitution method it gives O(nlogn).
We can also use the Master theorem [a=2, b=2, k=1, p=0] for the above recurrence.
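A small illustrative check (assumed code, not part of the original explanation): counting the comparisons made by insertion sort on an already sorted input shows the Θ(n) behaviour claimed in statement IV.

def insertion_sort_comparisons(a):
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1            # compare key with a[j]
            if a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            else:
                break
        a[j + 1] = key
    return comparisons

print(insertion_sort_comparisons(range(1000)))   # 999, i.e. n-1 comparisons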
Question 5

You have an array of n elements. Suppose you implement quicksort by always choosing the central element of the array as the pivot. Then the tightest upper bound for the worst case performance is

A
O(n2)
B
O(n log n)
C
Θ(n log⁡n)
D
O(n3)
       Algorithms       Sorting       GATE 2014 [Set-3]
Question 5 Explanation: 
The worst case time complexity of quicksort is O(n2). For any fixed pivot position (including the central element), an input can be arranged so that the chosen pivot is always the smallest (or largest) remaining element, producing maximally unbalanced partitions; hence the tightest upper bound on the worst case is O(n2).
Question 6

Which one of the following is the tightest upper bound that represents the number of swaps required to sort n numbers using selection sort?

A
O(log n)
B
O(n)
C
O(n log n)
D
O(n2)
       Algorithms       Sorting       GATE 2013
Question 6 Explanation: 
In the best, average and worst case, selection sort performs at most O(n) swaps.
Selection sort makes O(n2) comparisons, but each of its n-1 passes performs at most one swap (the final element is already in place).
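A minimal sketch (assumed code, not from the source) that counts the swaps and illustrates the O(n) bound:

def selection_sort_swaps(a):
    a = list(a)
    swaps = 0
    n = len(a)
    for i in range(n - 1):
        # find the index of the minimum element in a[i..n-1]  (O(n) comparisons)
        m = min(range(i, n), key=lambda j: a[j])
        if m != i:
            a[i], a[m] = a[m], a[i]     # at most one swap per pass
            swaps += 1
    return swaps

print(selection_sort_swaps([5, 4, 3, 2, 1]))   # 2 swaps here, never more than n-1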
Question 7

Which one of the following is the tightest upper bound that represents the time complexity of inserting an object into a binary search tree of n nodes?

A
O(1)
B
O(log n)
C
O(n)
D
O(n log n)
       Algorithms       Sorting       GATE 2013
Question 7 Explanation: 
For skewed binary search tree on n nodes, the tightest upper bound to insert a node is O(n).
Question 8

What is the time complexity of Bellman-Ford single-source shortest path algorithm on a complete graph of n vertices?

A
Θ(n2)
B
Θ(n2 log n)
C
Θ(n3)
D
Θ(n3 log n)
       Algorithms       Sorting       GATE 2013
Question 8 Explanation: 
The Bellman–Ford algorithm is an algorithm that computes shortest paths from a single source vertex to all of the other vertices in a weighted digraph. It is capable of handling graphs in which some of the edge weights are negative numbers.
Bellman–Ford runs in O(|V| * |E|) time, where |V| and |E| are the number of vertices and edges respectively.
Note: For a complete graph, |E| = n(n-1)/2 = O(n2), so the running time is O(|V| * |E|) = O(n · n2) = Θ(n3).
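A minimal Bellman-Ford sketch in Python (an assumed illustration, not from the source) that makes the O(|V| * |E|) cost visible; on a complete graph |E| = Θ(n2), giving Θ(n3) overall:

def bellman_ford(n, edges, source):
    # edges: list of (u, v, weight) tuples; vertices are numbered 0..n-1
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):              # |V| - 1 rounds
        for u, v, w in edges:           # |E| relaxations per round
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

print(bellman_ford(3, [(0, 1, 4), (0, 2, 7), (1, 2, -2)], 0))   # [0, 4, 2]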
Question 9

The number of elements that can be sorted in Θ(log n) time using heap sort is

A
Θ(1)
B
Θ(√log⁡n)
C
Θ (log⁡n/log⁡log⁡n)
D
Θ(log n)
       Algorithms       Sorting       GATE 2013
Question 9 Explanation: 
Heap sort on k elements takes O(k log k) time. Assume there are log n / log log n elements in the heap.
So, Θ((logn/ log log n)log(logn/log log n))
= Θ(logn/log logn (log logn - log log logn))
= Θ((log n/log logn) × log log n)
= Θ(log n)
Hence, option (C) is correct answer.
Question 10

What is the number of swaps required to sort n elements using selection sort, in the worst case?

A
θ(n)
B
θ(n log n)
C
θ(n2)
D
θ(n2 logn)
       Algorithms        Sorting       GATE 2009
Question 10 Explanation: 
Selection sort – There is no especially bad ("worst case") input for selection sort with respect to swaps, since in the kth iteration it searches for the index of the kth minimum element and then, with one swap, places that element in its correct position. Over its n-1 iterations, selection sort performs at most O(n) swaps. It does a significant number of comparisons before moving each element directly to its final intended position, but at most n swaps in total; once an element is swapped into place, it is never moved again. So it is great for writes, O(n), but not so great (at all) for reads, O(n2). It is not well-suited to generalized sorting, but might work well in specialized situations like EEPROM, where writes are inordinately expensive.
Question 11

In quick sort, for sorting n elements, the (n/4)th smallest element is selected as pivot using an O(n) time algorithm. What is the worst case time complexity of the quick sort? 

A
θ(n)
B
θ(n log n)
C
θ(n2)
D
θ(n2 log n)
       Algorithms        Sorting       GATE 2009
Question 11 Explanation: 

With the (n/4)th smallest element as pivot, each partition splits the array into pieces of size n/4 and 3n/4, so the larger piece shrinks as
n → n/(4/3) → n/(4/3)^2 → n/(4/3)^3 → ... → n/(4/3)^k = 1
n/(4/3)^k = 1
⇒ n = (4/3)^k ⇒ k = log4/3 n [k = number of levels]
At each level the work done is cn.
So, total work done = cn · log4/3 n = Θ(n log n)
Question 12

Consider the Quicksort algorithm. Suppose there is a procedure for finding a pivot element which splits the list into two sub-lists each of which contains at least one-fifth of the elements. Let T(n) be the number of comparisons required to sort n elements. Then

A
T(n) ≤ 2T(n/5) + n
B
T(n) ≤ T(n/5) + T(4n/5) + n
C
T(n) ≤ 2T(4n/5) + n
D
T(n) ≤ 2T(n/2) + n
       Algorithms       Sorting       GATE 2008
Question 12 Explanation: 
Consider the case where one part contains n/5 elements and the other part contains the remaining 4n/5 elements.
Then T(n/5) comparisons are needed for the first part and T(4n/5) comparisons for the second part, plus n comparisons for the partitioning itself.
Now suppose one part contains more than n/5 elements; then the other part contains fewer than 4n/5 elements, and the recursion tree is more balanced, so the number of comparisons can only be smaller than
T(n/5) + T(4n/5) + n.
Hence T(n) ≤ T(n/5) + T(4n/5) + n.
Question 13

Which one of the following in place sorting algorithms needs the minimum number of swaps?

A
Quick sort
B
Insertion sort
C
Selection sort
D
Heap sort
       Algorithms        Sorting       GATE 2006
Question 13 Explanation: 
Selection sort requires minimum number of swaps i.e O(n). The algorithm finds the minimum value, swaps it with the value in the first position, and repeats these steps for the remainder of the list. It does no more than n swaps, and thus is useful where swapping is very expensive.
Question 14

In a permutation a1...an of n distinct integers, an inversion is a pair (ai, aj) such that i < j and ai > aj.

If all permutations are equally likely, what is the expected number of inversions in a randomly chosen permutation of 1...n ?

A
n(n-1)/2
B
n(n-1)/4
C
n(n+1)/4
D
2n[log2n]
       Algorithms        Sorting       GATE 2003
Question 14 Explanation: 
For any pair (ai, aj) with i < j, the probability that it forms an inversion is 1/2, and there are n(n-1)/2 such pairs. Expected no. of inversions = (1/2) × (n(n-1)/2) = n(n-1)/4
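A brute-force check (not part of the original solution) of the n(n-1)/4 formula for a small n, using the definition of an inversion directly:

from itertools import permutations

def expected_inversions(n):
    total, count = 0, 0
    for perm in permutations(range(n)):
        total += sum(1 for i in range(n) for j in range(i + 1, n)
                     if perm[i] > perm[j])
        count += 1
    return total / count

print(expected_inversions(4))   # 3.0, which equals 4*3/4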
Question 15

In a permutation a1...an of n distinct integers, an inversion is a pair (ai, aj) such that i < j and ai > aj.

What would be the worst case time complexity of the Insertion Sort algorithm, if the inputs are restricted to permutations of 1...n with at most n inversions?

A
Θ(n2)
B
Θ(n log n)
C
Θ(n1.5)
D
Θ(n)
       Algorithms        Sorting       GATE 2003
Question 15 Explanation: 
Here the inputs are restricted to permutations of 1...n with at most n inversions. Insertion sort runs in Θ(n + d) time, where d is the number of inversions, so with d ≤ n the worst case time complexity reduces to Θ(n).
Question 16

Randomized quicksort is an extension of quicksort where the pivot is chosen randomly. What is the worst case complexity of sorting n numbers using randomized quicksort?

A
O(n)
B
O(n log n)
C
O(n2)
D
O(n!)
       Algorithms        Sorting       GATE 2001
Question 16 Explanation: 
In the worst case, the running time of randomized quicksort is the same as that of ordinary quicksort, i.e. O(n2): the random pivot can still repeatedly turn out to be the smallest or largest element.
Question 17

Let s be a sorted array of n integers. Let t(n) denote the time taken for the most efficient algorithm to determined if there are two elements with sum less than 1000 in s. Which of the following statements is true?

A
t(n) is O(1)
B
n ≤ t(n) ≤ n log2 n
C
n log2 n ≤ t(n) < nC2
D
t(n) = nC2
       Algorithms        Sorting       GATE 2000
Question 17 Explanation: 
Since the array is sorted, just pick the first two (i.e. the two smallest) elements and check whether their sum is less than 1000. If it is, such a pair exists; if not, no other pair can have a smaller sum, so no such pair exists. Hence it takes constant time, and the correct option is (A).
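An illustrative constant-time check (the function name is assumed, not from the source):

def has_pair_with_sum_below_1000(s):
    # s is sorted in ascending order, so s[0] + s[1] is the smallest possible pair sum
    return len(s) >= 2 and s[0] + s[1] < 1000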
Question 18

A sorting technique is called stable if

A
it takes O (nlog n) time
B
it maintains the relative order of occurrence of non-distinct elements
C
it uses divide and conquer paradigm
D
it takes O(n) space
       Algorithms        Sorting       GATE 1999
Question 18 Explanation: 
A sorting technique is said to be stable if it maintains the relative order of occurrence of non-distinct elements.
Question 19

If one uses straight two-way merge sort algorithm to sort the following elements in ascending order:

   20, 47, 15, 8, 9, 4, 40, 30, 12, 17 

then the order of these elements after second pass of the algorithm is:

A
8, 9, 15, 20, 47, 4, 12, 17, 30, 40
B
8, 15, 20, 47, 4, 9, 30, 40, 12, 17
C
15, 20, 47, 4, 8, 9, 12, 30, 40, 17
D
4, 8, 9, 15, 20, 47, 12, 17, 30, 40
       Algorithms        Sorting       GATE 1999
Question 19 Explanation: 
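The explanation is left blank in the source; as a sketch (an assumed implementation), a bottom-up "straight two-way" merge sort that prints the list after every pass reproduces option (B) after the second pass:

def two_way_merge_sort(a):
    a = list(a)
    width = 1
    while width < len(a):
        merged = []
        for lo in range(0, len(a), 2 * width):
            left = a[lo:lo + width]
            right = a[lo + width:lo + 2 * width]
            out = []                      # standard merge of two sorted runs
            while left and right:
                out.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
            merged += out + left + right
        a = merged
        print(a)
        width *= 2
    return a

two_way_merge_sort([20, 47, 15, 8, 9, 4, 40, 30, 12, 17])
# pass 1: [20, 47, 8, 15, 4, 9, 30, 40, 12, 17]
# pass 2: [8, 15, 20, 47, 4, 9, 30, 40, 12, 17]  <- option (B); two more passes finish the sort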
Question 20

A two dimensional array A[1...n][1...n] of integers is partially sorted if

    ∀i, j ∈ [1...n−1],   A[i][j] < A[i][j+1] and 
                           A[i][j] < A[i+1][j] 

Fill in the blanks:
(a) The smallest item in the array is at A[i][j] where i=............and j=..............
(b) The smallest item is deleted. Complete the following O(n) procedure to insert item x (which is guaranteed to be smaller than any item in the last row or column) still keeping A partially sorted.

procedure  insert (x: integer);
var        i,j: integer;
begin
(1) i:=1; j:=1, A[i][j]:=x;
(2) while (x > ...... or x > ......) do
(3) if A[i+1][j] < A[i][j] ......... then begin
(4) A[i][j]:=A[i+1][j]; i:=i+1;
(5) end
(6) else begin
(7) ............
(8) end
(9) A[i][j]:= .............
    end  
A
Theory Explanation.
       Algorithms       Sorting       GATE 1996
Question 21

Merge sort uses

A
Divide and conquer strategy
B
Backtracking approach
C
Heuristic search
D
Greedy approach
       Algorithms       Sorting       GATE 1995
Question 21 Explanation: 
Merge sort uses the divide and conquer strategy.
Question 22

Consider the following sequence of numbers

  92, 37, 52, 12, 11, 25  

Use bubble sort to arrange the sequence in ascending order. Give the sequence at the end of each of the first five passes.

A
Theory Explanation.
       Algorithms       Sorting       GATE 1995
Question 23

Consider the recursive algorithm given below:

 procedure bubblersort (n);
 var i,j: index; temp : item;
 begin
    for i:=1 to n-1 do
    if A[i] > A [i+1] then
 begin
    temp : A[i];
    A[i]:=A[i+1]; 
    A[i+1]:=temp
    end;
   bubblesort (n-1)
 end 

Let an be the number of times the ‘if…then….’ Statement gets executed when the algorithm is run with value n. Set up the recurrence relation by defining an in terms of an-1. Solve for an.

A
Theory Explanation.
       Algorithms       Sorting       GATE 1993
Question 24

Following algorithm(s) can be used to sort n integers in the range [1...n3] in O(n) time

A
Heapsort
B
Quicksort
C
Mergesort
D
Radixsort
       Algorithms       Sorting       GATE 1992
Question 24 Explanation: 
No comparison-based sort can ever do better than O(nlogn); Ω(nlogn) is the lower bound for comparison-based sorting, so options (A), (B) and (C) are eliminated.
Radix sort is not a comparison-based sort (it uses counting sort as a stable subroutine), so integers in the range [1...n3] can be sorted in linear time.
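A sketch (assumed code, not from the source) of how radix sort achieves O(n) here: write each key in base n, so keys in the range [1...n3] have only a constant number of base-n digits, and each stable bucket-distribution pass costs O(n):

def radix_sort_base_n(a):
    # keys are assumed to lie in [1 .. n^3] where n = len(a)
    n = max(len(a), 2)
    a = list(a)
    d = 0
    while a and n ** d <= max(a):                   # a constant number of passes for this range
        buckets = [[] for _ in range(n)]
        for x in a:
            buckets[(x // n ** d) % n].append(x)    # stable distribution by base-n digit d
        a = [x for b in buckets for x in b]
        d += 1
    return a

print(radix_sort_base_n([27, 3, 19, 8, 1]))   # [1, 3, 8, 19, 27]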
Question 25

The minimum number of comparisons required to sort 5 elements is _____

A
7
       Algorithms       Sorting       GATE 1991
Question 25 Explanation: 
Minimum no. of comparisons
= ⌈log2(n!)⌉
= ⌈log2(5!)⌉
= ⌈log2(120)⌉
= 7
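A one-line check (illustrative only) of the information-theoretic ⌈log2(n!)⌉ bound:

import math
print(math.ceil(math.log2(math.factorial(5))))   # 7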
Question 26

Quicksort is ________ efficient than heapsort in the worst case.

A
LESS.
       Algorithms        Sorting       GATE 1988
Question 26 Explanation: 
The worst case time for quicksort is O(n2), whereas the worst case for heap sort is O(n logn), so quicksort is LESS efficient than heapsort in the worst case.
Question 27

Let P be a quicksort program to sort numbers in ascending order. Let t1 and t2 be the time taken by the program for the inputs [1 2 3 4] and [5 4 3 2 1] respectively. Which of the following holds?

A
t1 = t2
B
t1 > t2
C
t1 < t2
D
t1 = t2 + 5 log 5
       Algorithms        Sorting       GATE 1987
Question 27 Explanation: 
Both inputs are already sorted (one ascending, one descending), which is the worst case for quicksort, and the second input has more elements, so t1 < t2.
Question 28

If we use Radix Sort to sort n integers in the range [nk/2, nk], for some k>0 which is independent of n, the time taken would be?

A
Θ(n)
B
Θ(kn)
C
Θ(nlogn)
D
Θ(n2)
       Algorithms        Sorting       GATE 2008-IT
Question 28 Explanation: 
Time complexity for radix sort = Θ(wn),
where n = number of keys and w = number of digits (word size).
Here w = log2(nk) = k × log2(n), with k a constant independent of n.
Complexity Θ(wn) = Θ(k × log2(n) × n) = Θ(n log n)
Question 29

Let a and b be two sorted arrays containing n integers each, in non-decreasing order. Let c be a sorted array containing 2n integers obtained by merging the two arrays a and b. Assuming the arrays are indexed starting from 0, consider the following four statements

1. a[i] ≥ b [i] => c[2i] ≥ a [i]
2. a[i] ≥ b [i] => c[2i] ≥ b [i]
3. a[i] ≥ b [i] => c[2i] ≤ a [i]
4. a[i] ≥ b [i] => c[2i] ≤ b [i] 
Which of the following is TRUE?

A
only I and II
B
only I and IV
C
only II and III
D
only III and IV
       Programming       Sorting       GATE 2005-IT
Question 29 Explanation: 
a[i] ≥ b[i]
Since both 'a' and 'b' are sorted, there are i+1 elements (indices 0..i) smaller than or equal to a[i] in a, and because a[i] ≥ b[i], at least i+1 elements of b are also smaller than or equal to a[i]. So at least 2i+2 elements of the merged array are ≤ a[i], and hence c[2i] ≤ a[i].
Similarly, a[i] ≥ b[i] means that at most i elements of a and at most i elements of b are strictly smaller than b[i], i.e. not more than 2i elements of the merged array come before b[i]. So b[i] ≤ c[2i].
So, option (C) is correct.
Question 30
Given two sorted list of size m and n respectively. The number of comparisons needed the worst case by the merge sort algorithm will be
A
m*n
B
maximum of m and n
C
minimum of m and n
D
m+n–1
       Algorithms       Sorting       ISRO-2018       Video-Explanation
Question 30 Explanation: 
Here the maximum number of comparisons is m+n-1.
For example, take two sub-arrays of size 3, say 1,3,5 and 2,4,6. Merging them requires 5 comparisons, which is m+n-1; in general this is O(m+n).
Question 31
Of the following sorting algorithms, which has a running time that is least dependent on the initial ordering of the input?
A
Merge Sort
B
Insertion Sort
C
Selection Sort
D
Quick Sort
       Algorithms       Sorting       ISRO-2018       Video-Explanation
Question 31 Explanation: 
Question 32
Selection sort algorithm design technique is an example of
A
Greedy method
B
Divide-and-conquer
C
Dynamic Programming
D
Backtracking
       Algorithms       Sorting       ISRO-2007
Question 32 Explanation: 
The selection sort algorithm sorts an array by repeatedly finding the minimum element (considering ascending order) from unsorted part and putting it at the beginning. The algorithm maintains two subarrays in a given array.
1) The subarray which is already sorted.
2) Remaining subarray which is unsorted.
In every iteration of selection sort, the minimum element (considering ascending order) from the unsorted subarray is picked and moved to the sorted subarray. Clearly, it is a greedy approach to sort the array.
Question 33
The average case and worst case complexities for Merge sort algorithm are
A
O(n2), O(n2)
B
O(n2), O(nlog2n)
C
O(n log2n), O(n2)
D
O(n log2n), O(n log2n)
       Algorithms       Sorting       ISRO-2007
Question 33 Explanation: 
The best case, average case and worst case complexities for Merge sort algorithm are O( nlog2n ).
Question 34
Which one of the following in-place sorting algorithms needs the minimum number of swaps?
A
Insertion Sort
B
Quick Sort
C
Heap Sort
D
Selection Sort
       Algorithms       Sorting       ISRO-2017 May
Question 34 Explanation: 
Selection sort requires minimum number of swaps i.e O(n)

Question 35
The number of swappings needed to sort the numbers 8, 22, 7, 9, 31, 5, 13 in ascending order, using bubble sort is
A
11
B
12
C
13
D
10
       Algorithms       Sorting       ISRO-2017 May
Question 35 Explanation: 
Step-1: 8,7,22,9,31,5,13
Step-2: 8,7,9,22,31,5,13
Step-3: 8,7,9,22,5,31,13
Step-4: 8,7,9,22,5,13,31
Step-5: 7,8,9,22,5,13,31
Step-6: 7,8,9,5,22,13,31
Step-7: 7,8,9,5,13,22,31
Step-8: 7,8,5,9,13,22,31
Step-9: 7,5,8,9,13,22,31
Step-10: 5,7,8,9,13,22,31
Note: A total of 10 swaps are required to sort the array.
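A quick check (assumed helper, not part of the original explanation) that counts the bubble sort swaps on this input:

def bubble_sort_swaps(a):
    a = list(a)
    swaps = 0
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swaps += 1
    return swaps

print(bubble_sort_swaps([8, 22, 7, 9, 31, 5, 13]))   # 10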
Question 36
How many comparisons are needed to sort an array of length 5 if a straight selection sort is used and array is already in the opposite order?
A
1
B
5
C
10
D
20
       Algorithms       Sorting       ISRO CS 2008
Question 36 Explanation: 
Consider the array: 5 4 3 2 1
1st iteration will compare 4 numbers with the 5
2nd iteration will compare 3 numbers with the 4
3rd iteration will compare 2 numbers with the 3
4th iteration will compare 1 number with the 2
So, the total number of comparisons is 4 + 3 + 2 + 1 = 10
This is the sum of the first (n-1) natural numbers:
S = ((1 + (n-1)) * (n-1)) / 2
S = 10
Question 37
Algorithm design technique used in quicksort algorithm is?
A
Dynamic programming
B
Backtracking
C
Divide and conquer
D
Greedy method
       Algorithms       Sorting       ISRO-2016
Question 37 Explanation: 
Divide and conquer sorting algorithms:
1. Quick sort
2. Merge sort
(Heap sort, in contrast, is based on the heap data structure rather than on divide and conquer.)
****Binary search also follows divide and conquer.
Question 38
Which one of the following in place sorting algorithms needs the minimum number of swaps?
A
Quick sort
B
Insertion sort
C
Selection sort
D
Heap sort
       Database-Management-System       Sorting       ISRO CS 2011
Question 38 Explanation: 
Selection sort requires the minimum number of swaps, i.e. O(n).
The algorithm finds the minimum value, swaps it with the value in the first position, and repeats these steps for the remainder of the list. It does no more than n swaps, and thus is useful where swapping is very expensive.
Question 39
Which of the following algorithm design technique is used in merge sort?
A
Greedy method
B
Backtracking
C
Dynamic programming
D
Divide and Conquer
       Algorithms       Sorting       ISRO CS 2011
Question 39 Explanation: 
Merge sort is a divide-and-conquer algorithm based on the idea of breaking down a list into several sub-lists until each sublist consists of a single element and merging those sublists in a manner that results into a sorted list.
Conceptually, a merge sort works as follows:
1. Divide the unsorted list into n sublists, each containing one element (a list of one element is considered sorted).
2. Repeatedly merge sublists to produce new sorted sublists until there is only one sublist remaining. This will be the sorted list.
Question 40
Which of the following sorting algorithms has the minimum running time complexity in the best and average case?
A
Insertion sort, Quick sort
B
Quick sort, Quick sort
C
Quick sort, Insertion sort
D
Insertion sort, Insertion sort
       Algorithms       Sorting       ISRO CS 2013
Question 40 Explanation: 
Question 41
Consider the following sorting algorithms.
  1. Quicksort
  2. Heapsort
  3. Mergesort
Which of them perform in least time in the worst case?
A
1 and 2 only
B
2 and 3 only
C
3 only
D
1, 2 and 3
       Algorithms       Sorting       ISRO CS 2014
Question 41 Explanation: 
Question 42
The maximum number of comparisons needed to sort 9 items using radix sort is (assume each item is 5 digit octal number) :
A
45
B
72
C
360
D
450
       Algorithms       Sorting       UGC-NET CS 2018 JUNE Paper-2
Question 42 Explanation: 
Total sort items = 9
Octal number having → 5 digits
The octal number system base value = 8
The maximum number of comparison = (number of items) * (radix) * (number of digits)
= 9 * 5 * 8
= 360
Question 43
Selection sort and quick sort are stable sorting methods (state True/False for each)
A
True,True
B
False, False
C
True,False
D
False,False
       Algorithms       Sorting       Nielit Scientist-B IT 4-12-2016
Question 43 Explanation: 

Question 44
Which of the following sorting procedures is the slowest?
A
Quick sort
B
Merge sort
C
Shell sort
D
Bubble sort
       Algorithms       Sorting       Nielit Scientist-B IT 4-12-2016
Question 44 Explanation: 
Bubble sort executes in O(n2) time in the worst case, and every pass still makes up to n-1 comparisons. So, among the given procedures, bubble sort is the slowest.
Question 45
The worst case running times of insertion sort, merge sort and quick sort respectively are
A
O(nlogn),O(nlogn) and O(n2)
B
O(n2),O(n2) and O(nlogn)
C
O(n2),O(nlogn) and O(nlogn)
D
O(n2),O(nlogn) and O(n2)
       Algorithms       Sorting       Nielit Scientist-B CS 22-07-2017
Question 45 Explanation: 
→ worst case running times of insertion sort → O(n2)
merge sort → O(nlogn)
quick sort → O(n2)
Question 46
Which of the following sorting algorithms does not have a worst case running time of O(n2)?
A
Insertion sort
B
merge sort
C
Quick sort
D
bubble sort
       Algorithms       Sorting       Nielit Scientist-B CS 2016 march
Question 46 Explanation: 
Question 47
To sort many large object or structures, it would be most efficient to
A
Place references to them in an array and sort the array
B
place them in a linked list and sort the linked list
C
place pointers to them in an array and sort the array
D
place them in an array and sort the array
       Algorithms       Sorting       NieLit STA 2016 March 2016
Question 47 Explanation: 
Option-A is suitable only for a small number of objects, not for large ones.
Option-B is not correct because a linked list adds extra overhead and makes searching and rearranging difficult.
Option-D is suitable only for a small number of objects/elements, since moving large structures during the sort is expensive.
Question 48
A sorting technique is called stable if
A
If it takes O(n log n) time
B
It uses divide and conquer technique
C
Relative order of occurrence of non-distinct elements is maintained
D
It takes O(n) space
       Algorithms       Sorting       ISRO DEC 2017 22- Soon
Question 48 Explanation: 
→ Stable sorting algorithms maintain the relative order of records with equal keys (i.e. values).
→ That is, a sorting algorithm is stable if whenever there are two records R and S with the same key and with R appearing before S in the original list, R will appear before S in the sorted list.
Question 49
Quick sort is run on 2 inputs shown below to sort in ascending order
A.     1, 2, 3……n
B.     n, n-1, n-2 …… 1
Let C1 and C2 be the number of comparisons made for A and B respectively. Then
A
C1>C2
B
C1=C2
C
C1 < C2
D
Cannot say anything for arbitrary n
       Algorithms       Sorting       ISRO DEC 2017 22- Soon
Question 49 Explanation: 
→ The above question is clearly about quicksort's worst case complexity.
→ If the elements are already in sorted order (ascending or descending), quicksort exhibits its worst case complexity, O(n2); this holds for both inputs, and both have the same length n, so C1 = C2.
→ If all elements have the same value, quicksort also exhibits its worst case complexity.
Question 50
The running time of quick sort algorithm depends heavily on the selection of:
A
No. of inputs
B
Arrangement of elements in an array
C
Size of elements
D
Pivot element
       Algorithms       Sorting       Nielit Scientist-B CS 4-12-2016
Question 50 Explanation: 
The running time of Quicksort will depend on how balanced the partitions are. If you are unlucky and select the greatest or the smallest element as the pivot, then each partition separates only one element at a time, so the running time will be similar to insertion sort. However, Quicksort will usually pick a pivot that is mid-range, and it will partition the array into two parts.
Question 51
A machine needs a minimum of 100 sec to sort 1000 names by quick sort. The minimum time needed to sort 100 names will be approximately
A
50.2 sec
B
6.7 sec
C
72.7 sec
D
11.2 sec
       Algorithms       Sorting       ISRO CS 2015
Question 51 Explanation: 
We need to find the minimum time to sort the names by using quick sort.
The Worst case time complexity is O(n2)
Average and best case time complexity is O(n logn)
Step-1: Time taken to sort 1000 names by quicksort:
100 = c*n*logn = c*1000*log1000
⇒ c = 100/(1000*log1000) = 1/(10*log1000)
Step-2: Time taken to sort 100 names by quicksort:
= c*n*logn
= (1/(10*log1000))*100*log100
≈ 6.7 sec
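A quick numeric check (illustrative only) of the scaling argument above, using base-10 logarithms:

import math
c = 100 / (1000 * math.log10(1000))          # from 100 s = c * 1000 * log 1000
print(round(c * 100 * math.log10(100), 1))   # 6.7 s (approximately)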
Question 52
If one uses straight two-way merge sort algorithm to sort the following elements in ascending order:
20, 47, 15, 8, 9, 4, 40, 30, 12, 17
then the order of these elements after second pass of the algorithm is:
A
8, 9, 15, 20, 47, 4, 12, 17, 30, 40
B
8, 15, 20, 47, 4, 9, 30, 40, 12, 17
C
15, 20, 47, 4, 8, 9, 12, 30, 40, 17
D
4, 8, 9, 15, 20, 47, 12, 17, 30, 40
       Algorithms       Sorting       ISRO CS 2015
Question 52 Explanation: 
Question 53

Which of the following data structures is most suitable for radix sort?

A
Stack
B
Binary search tree
C
Tree
D
Linked list
       Algorithms       Sorting       JT(IT) 2018 PART-B Computer Science
Question 53 Explanation: 
→ Forming the circularly linked list requires more memory but allows the keys to be visited more directly in either ascending or descending order via a linear traversal of the linked list rather than a depth-first traversal of the entire trie.
→ This concept of a circularly linked trie structure is similar to the concept of a threaded binary tree. This structure will be called a circularly threaded trie.
Question 54
Which of the following algorithms use recursion for sorting an array of integers?
A
Bubble sort and Insertion sort
B
Bubble sort and Quicksort
C
Bubble sort and merge sort
D
Quicksort and merge sort
       Algorithms       Sorting       KVS 22-12-2018 Part-B
Question 54 Explanation: 
Recursion is used in quicksort and merge sort.
The quick sort and merge sort algorithms are based on the divide and conquer paradigm, which works in quite a similar way in both.
In both algorithms, the larger problem is divided into smaller sub-problems until they can be solved directly, and the smaller solutions are then combined into the final solution.
Question 55

Consider an array with n elements. Which of the following options gives the maximum number of swap(aj, aj+1) operations to sort the array elements using efficient bubble sort algorithm?

A
n2 /2
B
n(n-1)/2
C
n2
D
n(n+1)/2
       Algorithms       Sorting       JT(IT) 2018 PART-B Computer Science
Question 55 Explanation: 
→ The worst case is when the array is already sorted, but in descending order.
→ In the first pass the algorithm makes n-1 comparisons (each causing a swap), in the next pass n-2 (since the biggest element is already at the end), and so on down to 1.
→ For n numbers, the total number of swaps is therefore (n-1) + ... + 2 + 1
= n(n-1)/2
= O(n2)
Question 56
Which of the following search algorithm requires a sorted array?
A
Linear search
B
Hash search
C
Binary search
D
All of these
       Algorithms       Sorting       KVS DEC-2013
Question 56 Explanation: 
→ In linear search, the elements are examined one by one, so no ordering is required.
→ Binary search requires the elements to be in sorted order; at every step the search space is halved based on the middle value.
Question 57
Sorting is
A
A process of rearranging a given set of objects in a specific order
B
To facilitate the later search for members of the sorted set
C
Is a relevant and essential activity, particularly in data processing
D
All of these
       Algorithms       Sorting       KVS DEC-2013
Question 57 Explanation: 
Sorting is any process of arranging items systematically, and has two common, yet distinct meanings:
● ordering: arranging items in a sequence ordered by some criterion;
● categorizing: grouping items with similar properties.
The most common uses of sorted sequences are:
● making lookup or search efficient;
● making merging of sequences efficient.
● enable processing of data in a defined order.
Question 58
The figure below represents a ____ sort
A
Bubble
B
Shake
C
Tree
D
Insertion
       Algorithms       Sorting       KVS DEC-2013
Question 58 Explanation: 
Bubble sorting is a simple sorting algorithm that repeatedly steps through the list, compares adjacent pairs and swaps them if they are in the wrong order.
Question 59
Which of the following sorting algorithms uses recursion?
A
Insertion sort
B
Heap sort
C
Merge sort
D
Bubble sort
       Algorithms       Sorting       KVS DEC-2017
Question 59 Explanation: 
Among the given options, merge sort uses recursion.
The classic recursive (divide and conquer) sorting algorithms are:
1. Merge sort
2. Quick sort
Question 60
Consider an array representation of an n element binary heap where the elements are stored from index 1 to index n of the array. For the element stored at index i of the array (i<=n), the index of the parent is:
A
floor ((i+1)/2)
B
ceiling ((i+1)/2)
C
floor (i/2)
D
ceiling (i/2)
       Algorithms       Sorting       UGC NET CS 2017 Nov- paper-2
Question 60 Explanation: 
Parent Node is at index: ⌊i/2⌋
Left Child is at index: 2i
Right child is at index: 2*i+1
Question 61
A list of n strings, each of length n, is sorted into lexicographic order using merge – sort algorithm. The worst case running time of this computation is :
A
O(n log n)
B
O(n2 log n)
C
O(n2 + log n)
D
O(n3)
       Algorithms       Sorting       UGC NET CS 2017 Nov- paper-2
Question 61 Explanation: 
1.The comparison is strictly follow the Dictionary based order.
2.The length of the string is n, the time taken is to be O(n) for each comparison.
3. For level -1(bottom-up order): Every element has to be compared with other element the number of comparisons is O(n/2).
4. For level -1(bottom-up order): Total time taken by one level is O(n​ 2​ ).
5. For copying level to level the time taken by O(n​ 2​ ).
So, For level -1= O(n​ 2​ )+ O(n​ 2​ )
6. Second level O(n​ 2​ )+ O(n​ 2​ ).
;
;
;
Final n level (logn)*O(n​ 2​ ) = O(n​ 2​ logn)
Question 62
Given the array: 12,40,3,2 and intermediate states of the array while sorting
Stage (i) 12,3,40,2
Stage (ii) 12,3,2,40
Stage (iii) 3,12,2,40
Stage (iv) 3,2,12,40
Stage (v) 2,3,12,40
The array has been sorted using
A
Insertion sort
B
Selection sort
C
Quick sort
D
Bubble sort
       Algorithms       Sorting       KVS 30-12-2018 Part B
Question 62 Explanation: 
→ In bubble sort, we compare two adjacent elements and swap them if the first element is larger.
→ The stages shown above are exactly the intermediate states produced by the successive swaps of bubble sort.
→ In bubble sort, after each completed pass the largest remaining element reaches its final position at the end.
Question 63
An ideal sort is an in-place-sort whose additional space requirement is __________.
A
O(log2 n)
B
O(n log2 n)
C
O(1)
D
O(n)
       Algorithms       Sorting       UGC NET CS 2015 Dec- paper-2
Question 63 Explanation: 
The ideal sorting algorithm would have the following properties:
1. Stable: Equal keys aren't reordered.
2. Operates in place, requiring O(1) extra space.
3. Worst case O(nlog(n)) key comparisons.
4. Worst case O(n) swaps.
5. Adaptive: Speeds up to O(n) when data is nearly sorted or when there are few unique keys.
Question 64
Which of the following algorithms sort n integers, having the range 0 to (n2 - 1), in ascending order in O(n) time?
A
Selection sort
B
Bubble sort
C
Radix sort
D
Insertion sort
       Algorithms       Sorting       UGC NET CS 2015 Jun- paper-2
Question 64 Explanation: 
Consider sorting n integers in the range 0 to n2 - 1. We do it in two phases.
Phase 1:
1. We use n bins, one for each of the integers 0, 1, ..., n-1. We place each integer i of the list to be sorted into the bin numbered (i mod n); each bin will then contain a list of integers leaving the same remainder when divided by n.
At the end, we concatenate the bins in order to obtain a list L.
Phase 2:
1. The integers on the list L are redistributed into bins, but now using the bin-selection function ⌊i/n⌋.
2. Integers are appended to the ends of the lists, so each phase is stable. Finally, concatenate the lists to get the final sorted sequence.
Question 65
How much extra space is used by heapsort ?
A
O(1)
B
O(Log n)
C
O(n)
D
O(n2)
       Algorithms       Sorting       UGC NET CS 2004 Dec-Paper-2
Question 65 Explanation: 
→ Heap sort uses the Max-Heapify function, which calls itself, but it can be written with a simple while loop, making it an iterative function that uses no extra space. So the space complexity of heap sort can be reduced to O(1).
→ The heapify step does not take O(logn) space; it takes O(logn) time in the worst case. If heapify is done iteratively rather than through recursion, no auxiliary space is required; only the input array is used.
So, heap sort can be done with O(1) space complexity.
Question 66
You have to sort a list L, consisting of a sorted list followed by a few ‘random’ elements. Which of the following sorting method would be most suitable for such a task ?
A
Bubble sort
B
Selection sort
C
Quick sort
D
Insertion sort
       Algorithms       Sorting       UGC NET CS 2014 Dec-Paper-2
Question 66 Explanation: 
Here, they have given a sorted list followed by a few 'random' elements.
So we can eliminate Option-B and Option-C, because selection sort always takes O(n2) and quicksort gives its worst case, O(n2), when the array is (almost) sorted.
Bubble sort and insertion sort both give their best-case complexity, O(n), on nearly sorted input, but since there are a few random elements at the end, insertion sort is the more appropriate answer.
Question 67
Mergesort makes two recursive calls. Which statement is true after these two recursive calls finish, but before the merge step ?
A
The array elements form a heap.
B
Elements in each half of the array are sorted amongst themselves.
C
Elements in the first half of the array are less than or equal to elements in second half of the array.
D
All of the above
       Algorithms       Sorting       UGC NET CS 2014 June-paper-2
Question 67 Explanation: 
A merge sort works as follows:
→Divide the unsorted list into n sublists, each containing one element (a list of one element is considered sorted).
→Repeatedly merge sublists to produce new sorted sublists until there is only one sublist remaining. This will be the sorted list.
As per the merge sort, The elements in each half of the array are sorted amongst themselves.
Question 68
Suppose that the splits at every level of Quicksort are in proportion 1-β to β, where 0 < β ≤ 0.5 is a constant. The number of elements in an array is n. The maximum depth is approximately
A
0.5 β lg n
B
0.5 (1 – β) lg n
C
– (lg n)/(lg β)
D
– (lg n)/lg (1 – β)
       Algorithms       Sorting       UGC NET CS 2013 Sep-paper-2
Question 69
The maximum number of comparisons needed to sort 9 items using radix sort is (assume each item is 5 digit octal number) :
A
45
B
72
C
360
D
450
       Algorithms       Sorting       UGC NET CS 2018 JUNE Paper-2
Question 69 Explanation: 
Total items to sort = 9
Each octal number has 5 digits.
The octal number system base (radix) = 8
The maximum number of comparisons = (number of items) * (number of digits) * (radix)
= 9 * 5 * 8
= 360
Question 70
The total number of comparisons in a bubble sort is
A
O(log n)
B
O(n log n)
C
O(n)
D
None of the above
       Algorithms       Sorting       UGC NET CS 2011 Dec-Paper-2
Question 70 Explanation: 
The total number of comparisons in a bubble sort is O(n2)
Question 71
You have n lists, each consisting of m integers sorted in ascending order. Merging these lists into a single sorted list will take time:
A
O(nm log m)
B
O(mn log n)
C
O(m + n)
D
O(mn)
       Algorithms       Sorting       CHENNAI MATHEMATICAL INSTITUTE (M.Sc. / Ph.D. Programme in Computer Science)15 May 2013
Question 71 Explanation: 
We can merge two sorted lists of sizes k and l in time O(k + l). We begin by merging the lists in pairs to generate n/2 lists of size 2m each, in total time O(mn). If we repeat this, we get n/4 lists of size 4m each, again in total time O(mn). Thus, in O(log n) rounds, we converge to a single sorted list. Each round takes time O(mn), so the total time is O(mn log n).
Another strategy to achieve complexity O(mn log n) is to build a min-heap of size n, consisting of the first element in each of the n lists. We then extract the minimum element from the heap to the output and replace it in the heap with the next element from the list from which the minimum came. Thus, it takes time O(log n) to generate each element in the output and there are O(mn) output elements in all, so the overall time is O(mn log n).
On the other hand, if we can apply min() to n elements at a time, we can merge all n lists in parallel. Let the n lists be l1, l2, ..., ln. We maintain indices i1, i2, ..., in into these n lists, initially set to 1. At each stage we add j = min(l1[i1], l2[i2], ..., ln[in]) to the sorted output list and increment the corresponding index. Overall, we add mn elements to the sorted output and each element is generated in constant time, so the time taken is O(mn).
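A minimal sketch (assumed code, not from the source) of the heap-based strategy: a min-heap of size n holds the current front element of each list, so each of the mn output elements costs O(log n), for O(mn log n) in total. (Python's heapq.merge provides essentially the same routine in the standard library.)

import heapq

def merge_n_sorted_lists(lists):
    # seed the heap with the first element of every non-empty list
    heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
    heapq.heapify(heap)
    out = []
    while heap:
        value, i, j = heapq.heappop(heap)
        out.append(value)
        if j + 1 < len(lists[i]):
            heapq.heappush(heap, (lists[i][j + 1], i, j + 1))
    return out

print(merge_n_sorted_lists([[1, 4, 7], [2, 5, 8], [3, 6, 9]]))   # [1, 2, ..., 9]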
Question 72
You are given two sorted lists of integers of size m and n. Describe a divide and conquer algorithm for computing the k th smallest element in the union of the two lists in time O(log m + log n).
A
Descriptive Explanation
       Algorithms       Sorting       CHENNAI MATHEMATICAL INSTITUTE (M.Sc. / Ph.D. Programme in Computer Science)15 May 2013
Question 72 Explanation: 
Note that k is treated as a constant in this problem.
First of all, the question is interesting only if the lists have duplicate elements. Otherwise, we just merge the two lists as usual and after k steps we have the answer.
Let us first solve a simpler problem and find the kth smallest number in a single sorted list of length n. The first number in the list is clearly the smallest number. Let us call this number x1. The next number must be at least x1+1, so we search for x1+1 in the list using binary search. (Recall that binary search is an example of divide and conquer.) Either we find the number, in which case the second smallest number x2 is x1+1, or we fail to find it, in which case we will be at the boundary between the last occurrence of x1 and the first occurrence of the next larger value, which is x2. We now search for x2+1 and discover x3. Repeat this k-1 times to find xk. Each binary search takes time log n, so overall this procedure takes time k log n, which is O(log n) if we treat k as a constant.
If we have two lists, we can find the first k numbers in the first list, which takes time O(log m), then find the first k numbers in the second list, which takes time O(log n), and then merge these in time O(k) (which is a constant!) to find the kth smallest number overall.
In fact, we can reduce the number of binary searches to k and avoid the merge step. Maintain indices i1 and i2 for the two lists, pointing initially to the first element in lists 1 and 2, respectively. At each stage, the smaller of the numbers pointed to by i1 and i2 is the next number to be enumerated. We then advance i1 or i2, as the case may be, using binary search as described above. After k such steps, we would have found the kth smallest number overall. Some steps may involve a binary search in list 1, which takes time O(log m), and others may involve a binary search in list 2, which takes time O(log n), so each step is bounded by max(O(log m), O(log n)). This gives an overall complexity of max(O(log m), O(log n)), which is equivalent to O(log m + log n).
Question 73
Which of the following is true for a sorted list with ‘n’ elements ?
A
Insertion in a sorted array takes constant time.
B
Insertion in a sorted linear linked list takes constant time.
C
Searching for a key in a sorted array can be done in O(log n) time.
D
Searching for a key in a sorted linear linked list can be done in O(log n) time.
       Algorithms       Sorting       UGC NET CS 2008-june-Paper-2
Question 73 Explanation: 
TRUE: Searching for a key in a sorted array can be done in O(log n) time.
Because the elements are in sorted order, we can use binary search, which takes only O(log n) time in the worst case.
Question 74
A stable sort preserves the order of values that are equal with respect to the comparison function. We have a list of three dimensional points [(7, 1, 8), (3, 5, 7), (6, 1, 4), (6, 5, 9), (0, 2, 5), (9, 0, 9)]. We sort these in ascending order by the second coordinate. Which of the following corresponds to a stable sort of this input?
A
[(9, 0, 9), (7, 1, 8), (6, 1, 4), (0, 2, 5), (6, 5, 9), (3, 5, 7)]
B
[(0, 2, 5), (3, 5, 7), (6, 1, 4), (6, 5, 9), (7, 1, 8), (9, 0, 9)]
C
[(9, 0, 9), (7, 1, 8), (6, 1, 4), (0, 2, 5), (3, 5, 7), (6, 5, 9)]
D
[(9, 0, 9), (6, 1, 4), (7, 1, 8), (0, 2, 5), (3, 5, 7), (6, 5, 9)]
       Algorithms       Sorting       CHENNAI MATHEMATICAL INSTITUTE M.Sc. / Ph.D. Programme in Computer Science (18 May 2017)
Question 74 Explanation: 
In a stable sort, the original relative order of the pairs (7, 1, 8), (6, 1, 4) and (3, 5, 7), (6, 5, 9), which have equal second coordinates, must be preserved in the sorted output.
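Python's built-in sort is stable, so it reproduces the stable result directly (an illustrative check, not part of the original explanation):

points = [(7, 1, 8), (3, 5, 7), (6, 1, 4), (6, 5, 9), (0, 2, 5), (9, 0, 9)]
print(sorted(points, key=lambda p: p[1]))
# [(9, 0, 9), (7, 1, 8), (6, 1, 4), (0, 2, 5), (3, 5, 7), (6, 5, 9)]  -- option (C)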
Question 75
You are given a sorted array of n elements which has been circularly shifted. For example, {35, 42, 5, 12, 23, 26} is a sorted array that has been circularly shifted by 2 positions.
Give an O(log n) time algorithm to find the largest element in a circularly shifted array. (The number of positions through which it has been shifted is unknown to you.)
A
Descriptive Explanation
       Algorithms       Sorting       CHENNAI MATHEMATICAL INSTITUTE M.Sc. / Ph.D. Programme in Computer Science (15 May 2018)
Question 75 Explanation: 
Let n be the number of elements in the array. If the array is not shifted at all, then the first element is smaller than the last element. Say the array is shifted by m positions, for m > 0. If m < n/2 , then the first element would be larger than the middle element (and the largest element lies in the first half). If m >= n/2 then the last element would be smaller than the middle element (and the largest element lies in the second half). This suggests a recursive algorithm which reduces the search space by half at each call, and hence gives a running time of O(log n).
The following code describes this algorithm: a/b gives the integer part of dividing a by b. The required solution is obtained by calling f(T, 0, n-1), where n is the length of T.
function f(T, first, last)
if (T[first] <= T[last])
return T[last]
else {
mid := first + (last - first)/2
if (T[mid] > T[mid+1])
return T[mid]
else if (T[first] > T[mid])
return f(T, first, mid-1)
else if (T[mid] >= T[last])
return f(T, mid+1, last)
}
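An equivalent Python version of the pseudocode above (an illustrative translation, assuming 0-based indexing):

def largest_in_rotated(t, first, last):
    if t[first] <= t[last]:                 # this segment is not rotated
        return t[last]
    mid = first + (last - first) // 2
    if t[mid] > t[mid + 1]:                 # the "drop" is just after mid
        return t[mid]
    if t[first] > t[mid]:                   # the rotation point is in the left half
        return largest_in_rotated(t, first, mid - 1)
    return largest_in_rotated(t, mid + 1, last)   # otherwise it is in the right half

print(largest_in_rotated([35, 42, 5, 12, 23, 26], 0, 5))   # 42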
Question 76
You are given a sequence of n elements to sort. The input sequence consists of n/k subsequences, each containing k elements. The elements in a given subsequence are all smaller than the elements in the succeeding subsequence and larger than the elements in the preceding subsequence. Thus, all that is needed to sort the whole sequence of length n is to sort the k elements in each of the n/k subsequences. The lower bound on the number of comparisons needed to solve this variant of the sorting problem is
A
Ω(n)
B
Ω(n/k)
C
Ω(nlogk )
D
Ω(n/klogn/k)
       Algorithms       Sorting       UGC NET CS 2017 Nov- paper-3
Question 76 Explanation: 
There are n/k subsequences and each can be ordered in k! ways, so there are (k!)^(n/k) possible outputs. We can use the same reasoning as for the general comparison-sorting lower bound:
(k!)^(n/k) ≤ 2^h
Taking the logarithm of both sides, we get:
h ≥ lg (k!)^(n/k)
= (n/k) lg(k!)
≥ (n/k)(k/2) lg(k/2)
= (1/2)(n lg k) - (1/2)n
= Ω(n log k)
Question 77
Any decision tree that sorts n elements has height ________.
A
Ω(lg n)
B
Ω(n)
C
Ω(n lg n)
D
Ω(n2)
       Algorithms       Sorting       UGC NET CS 2017 Jan- paper-3
Question 77 Explanation: 
Theorem: Any decision tree that sorts n elements has height Ω(n lg n).
Proof:
→ The number of leaves l ≥ n!
→ Any binary tree of height h has ≤ 2^h leaves.
→ Hence, n! ≤ l ≤ 2^h
→ Take logs: h ≥ lg(n!)
→ Use Stirling's approximation: n! > (n/e)^n
→ We get:
h ≥ lg (n/e)^n
= n lg(n/e)
= n lg n - n lg e
= Ω(n lg n)
Question 78
Match the following with respect to algorithm paradigms :
A
a-iii, b-i, c-ii, d-iv
B
a-ii, b-i, c-iv, d-iii
C
a-ii, b-i, c-iii, d-iv
D
b-iii, b-ii, c-i, d-iv
       Algorithms       Sorting       UGC NET CS 2017 Jan- paper-3
Question 78 Explanation: 
Merge sort→ Divide and conquer
Huffman coding→ Greedy approach
Optimal polygon triangulation→ Dynamic programming
Subset sum problem → Backtracking
Note: The subset sum problem can also be solved using dynamic programming as well as backtracking.
Question 79
If there are n integers to sort, each integer has d digits, and each digit is in the set {1, 2, ...,k}, radix sort can sort the numbers in:
A
O (k (n + d))
B
O(d (n + k))
C
O((n + k) lg d)
D
O((n + d) lg k)
       Algorithms       Sorting       UGC NET CS 2016 Aug- paper-3
Question 79 Explanation: 

Question 80
If there are n integers to sort, each integer has d digits and each digit is in the set {1, 2, ..., k}, radix sort can sort the numbers in :
A
O(d n k)
B
O(d nk)
C
O((d +n) k)
D
O(d (n + k))
       Algorithms       Sorting       UGC NET CS 2015 Dec - paper-3
Question 80 Explanation: 
The radix sort running time depends on the stable sort used as the intermediate sorting algorithm. When each digit is in the range 0 to k-1(It takes k possible values), and ‘k’ is not too large, counting sort is the obvious choice. Each pass over n d-digits numbers then takes time O(n+k). There are ‘d’ passes, and so total time for radix sort is O(d (n + k))
Question 81
Floyd-Warshall algorithm utilizes __________ to solve the all-pairs shortest paths problem on a directed graph in __________ time.
A
Greedy algorithm, θ (V3)
B
Greedy algorithm, θ (V2 lgn)
C
Dynamic programming, θ (V3)
D
Dynamic programming, θ (V2 lgn)
       Algorithms       Sorting       UGC NET CS 2015 Dec - paper-3
Question 81 Explanation: 
Floyd-Warshall algorithm utilizes Dynamic Programming to solve the all-pairs shortest paths problem on a directed graph in θ(V3) time, where “V” is the number of vertices.
Question 82
Which of the following is best running time to sort n integers in the range 0 to n2-1
A
O(log n)
B
O(n)
C
O(n log n)
D
O(n2)
       Algorithms       Sorting       UGC NET June-2019 CS Paper-2
Question 82 Explanation: 
Using merge sort or heap sort we get O(nlogn).
But using radix sort, integers in the range 0 to n2-1 can be sorted in linear time, i.e. O(n).
Question 83
There are many sorting algorithms based on comparison. The running time of the heap sort algorithm is O(nlogn). Like P, but unlike Q, heapsort sorts in place, where (P,Q) is equal to
A
Merge sort, Quick sort
B
Quick sort, Insertion sort
C
Insertion sort, Quick sort
D
insertion sort, Merge sort
       Algorithms       Sorting       UGC NET June-2019 CS Paper-2
Question 83 Explanation: 
This is an ambiguous question.
Method-1:
Merge sort has best, average and worst case time complexity O(nlogn), just like heap sort. Quick sort is O(nlogn) on average but O(n2) in the worst case. By this reading, P is merge sort and Q is quick sort.
Method-2:
Heapsort sorts in place, needing only O(1) extra space, as does insertion sort, whereas merge sort requires O(n) extra space. So "Like P, but unlike Q, heapsort sorts in place" gives P = insertion sort and Q = merge sort.
Note: The official key gives Option-4 (insertion sort, Merge sort).
Question 84
In order to sort list of numbers using radix sort algorithm, we need to get the individual digits of each number ‘n’ of the list. If n is a positive decimal integer, then ith digit , from right , of the number n is:
A
B
C
D
       Algorithms       Sorting       CIL Part - B
Question 85
Of the following sort algorithms, which has execution time that is least dependant on initial ordering of the input?
A
Insertion sort
B
Quick sort
C
Merge sort
D
Selection sort
       Algorithms       Sorting       ISRO CS 2020       Video-Explanation
Question 85 Explanation: 
Question 86
If the array A contains the items 10, 4, 7, 23, 67, 12 and 5 in that order, what will be the resultant array A after third pass of insertion sort?
A
67, 12, 10, 5, 4, 7, 23
B
4, 7, 10, 23, 67, 12, 5
C
4, 5, 7, 67, 10, 12, 23
D
10, 7, 4, 67, 23, 12, 5
       Algorithms       Sorting       ISRO CS 2020       Video-Explanation
Question 86 Explanation: 
Initial positions: 10, 4, 7, 23, 67, 12 , 5
Pass-1: 4, 10, 7, 23, 67, 12, 5
Pass-2: 4, 7, 10, 23, 67, 12, 5
Pass-3: 4, 7, 10, 23, 67, 12, 5 [ No change because the value 10 is placed in the same position ]
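A short trace (assumed helper, not from the source) that prints the array after each insertion sort pass and confirms the state after pass 3:

def insertion_sort_trace(a):
    a = list(a)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
        print("Pass", i, ":", a)
    return a

insertion_sort_trace([10, 4, 7, 23, 67, 12, 5])
# Pass 3 prints [4, 7, 10, 23, 67, 12, 5], matching the explanation above.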
Question 87
The best case time complexity of insertion sort algorithms is ______
A
O(n)
B
O(1)
C
O(n2)
D
O(n log n)
       Algorithms       Sorting       APPSC-2016-DL-CS
Question 87 Explanation: 
The best case of insertion sort occurs when all the elements in the given array are already sorted. In that case only one comparison is made for each element of the array, so for n elements roughly n comparisons are made in total. Hence the time complexity is O(n).
Question 88
Which of the following sorting algorithms does in-place sorting with minimal space overhead?
A
Merge sort
B
Radix sort
C
Heap sort
D
Address Calculation Sort
       Algorithms       Sorting       APPSC-2016-DL-CS
Question 88 Explanation: 
An in-place sorting algorithm is one which requires only O(1) extra space in order to sort n elements.
Merge sort has a space complexity of O(n), so it is not an in-place sorting algorithm; heap sort sorts within the input array using only O(1) extra space.
Question 89

The worst case time complexity of Quick Sort is __________

A
O(n2)
B
O(log n)
C
O(n)
D
O(n logn)
       Algorithms       Sorting       APPSC-2016-DL-CA
Question 89 Explanation: 
The worst case time complexity of Quick Sort is O(n2).
The worst case of quicksort typically arises in two situations:
1. The elements are already in sorted order (with the first or last element chosen as pivot).
2. All elements have the same value.
Question 90

The worst case time complexity of Merge Sort is _______

A
O(n2)
B
O(log n)
C
O(n)
D
O(n logn)
       Algorithms       Sorting       APPSC-2016-DL-CA
Question 90 Explanation: 
The worst case time complexity of Merge Sort is O(n logn).
The merge sort will not depend on the type of input. It will give the best, average and worst case time complexity is O(nlogn) only.
Question 91

Which of the following sorting procedures is the slowest?

A
Quick sort
B
Heap sort
C
Shell sort
D
Bubble sort
       Algorithms       Sorting       APPSC-2016-DL-CA
Question 91 Explanation: 
Bubble sort is the slowest because its average-case time complexity is O(n2), whereas the average-case complexity of the other sorting algorithms given in the options is lower.
Question 92

Which of the following sorting methods would be most suitable for sorting a list which is almost sorted?

A
Bubble Sort
B
Insertion Sort
C
Selection Sort
D
Quick Sort
       Algorithms       Sorting       APPSC-2016-DL-CA
Question 92 Explanation: 
If the elements of the list are almost sorted then insertion is the best to be chosen because it will take O(n) time.
Question 93

The running time of insertion sort is

A
O(n2)
B
O(n)
C
O(log n)
D
O(n log n)
       Algorithms       Sorting       APPSC-2016-DL-CA
Question 93 Explanation: 
The worst case time complexity of insertion sort algorithm is O(n2).
Question 94

A sort which compares adjacent elements in a list and switches where necessary is _________

A
Insertion sort
B
Heap sort
C
Quick sort
D
Bubble sort
       Algorithms       Sorting       APPSC-2016-DL-CA
Question 94 Explanation: 
The given description in question is about bubble sort in which we compare adjacent elements in the list and switches where necessary.
Question 95

The correct order of the efficiency of the following sorting algorithms according to their overall running time comparison is

A
insertion>selection>bubble
B
insertion>bubble>selection
C
selection>bubble>insertion
D
bubble >selection>insertion
       Algorithms       Sorting       APPSC-2016-DL-CA
Question 95 Explanation: 
Option B is the most satisfactory: the best case time complexity of insertion sort and bubble sort is O(n), while for selection sort it is O(n2), so selection sort is the least efficient; and on average insertion sort performs fewer comparisons and writes than bubble sort, which places it above bubble sort.
Question 96

A sort which iteratively passes through a list to exchange the first element with any element less than it and then repeats with a new first element is called

A
insertion sort
B
selection sort
C
heap sort
D
quick sort
       Algorithms       Sorting       APPSC-2016-DL-CA
Question 96 Explanation: 
The given characteristics in question are of selection sort.
Question 97

The algorithm that will efficiently sort an array that is nearly sorted except for the interchange of some adjacent pairs of numbers like {1,3,2,5,4,6} is

A
quick sort
B
bubble sort
C
merge sort
D
selection sort
       Algorithms       Sorting       APPSC-2016-DL-CA
Question 97 Explanation: 
Best case of quick sort is O(n logn)
Best case of Bubble sort is O(n)
Best case of merge sort is O(n logn)
Best case of selection sort is O(n2)
So the algorithm that will efficiently sort an array that is nearly sorted except for the interchange of some adjacent pairs of numbers is Bubble sort.
Question 98

Assume that the algorithms considered here sort the input sequences in ascending order. If the input is already in ascending order, which of the following are TRUE?
I. Quick sort runs in Θ(n2) time
II. Bubble sort runs in Θ(n2) time
III. Merge-sort runs in Θ(n2) time
IV. Insertion sort runs in Θ(n2) time

A
I and II only
B
I and III only
C
II and IV only
D
I and IV only
E
None of the above.
       Algorithms       Sorting       APPSC-2016-DL-CA
Question 98 Explanation: 
Quick sort runs in O(n2) if the input is already in sorted order.
Bubble sort runs in O(n) time if the input is already in sorted order.
Merge sort runs in O(nlogn) time.
Insertion sort runs in O(n) time.
Question 99

How much extra space is used by heap sort?

A
O(1)
B
O(log n)
C
O(n)
D
O(n2)
       Algorithms       Sorting       APPSC-2016-DL-CA
Question 99 Explanation: 
Heap sort's space complexity is O(1).
Heap sort uses the Max-Heapify function, which calls itself, but it can be written with a simple while loop, making it an iterative function that takes no extra space.
So, the space complexity of heap sort can be reduced to O(1).
--> The heapify step does not take O(logn) space; it takes O(logn) time in the worst case. If heapify is done iteratively rather than through recursion, no auxiliary space is required; only the input array is used.
So, we do heap sort in O(1) space complexity
Question 100

What is meant by stable sorting algorithm?

A
A sorting algorithm is stable if it preserves the order of all keys
B
A sorting algorithm is stable if it preserves the order of non-duplicate keys
C
A sorting algorithm is stable if it preserves the order of duplicate keys
D
A sorting algorithm is stable if it doesn't preserve the order of duplicate keys
       Algorithms       Sorting       CIL 2020
Question 100 Explanation: 
A sorting algorithm is called stable if it preserves the order of duplicate keys.
Examples of stable sorting algorithms are insertion sort, merge sort, counting sort, etc.
Question 101

Consider the following statements
1. Insertion sort and merge sort are stable
2. Heap sort is inherently unstable
3. Selection sort is not inherently stable, but may be coded in such a way that it is stable

A
Only statement 1 is correct
B
All statements are correct
C
Statement 3 is not correct
D
Only statement 1 and 2 is correct
       Algorithms       Sorting       CIL 2020
Question 101 Explanation: 
A sorting algorithm is said to be stable if two objects with equal keys appear in the sorted output in the same order as they appear in the input array.
S1 is true: insertion sort and merge sort are stable.
S2 is true: heap sort is not stable, because operations on the heap can change the relative order of equal items. When sorting in ascending order, heap sort repeatedly picks the largest element and places it at the end of the list, which can reorder equal keys.
S3 is true: selection sort is unstable as usually written, but it can be coded to be stable with small changes, so it is not inherently unstable.
An inherently unstable algorithm is one that cannot be made stable even by modifying the algorithm.
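To illustrate statement 3, here is a sketch (in C) of a stable selection-sort variant: instead of swapping the minimum into position i, which can jump an element over an equal-keyed one, the minimum is pulled out and the elements in between are shifted one place to the right. The struct rec type and its tag field are only for demonstration.

#include <stdio.h>

/* A record with a key and a tag, so we can see whether equal keys keep
 * their original relative order. */
struct rec { int key; char tag; };

/* Stable variant of selection sort: the minimum is inserted at position i
 * by shifting, not swapped into place. Strict '<' keeps the first of two
 * equal keys in front. */
void stable_selection_sort(struct rec a[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int min = i;
        for (int j = i + 1; j < n; j++)
            if (a[j].key < a[min].key)      /* strictly smaller only */
                min = j;
        struct rec m = a[min];
        for (int k = min; k > i; k--)       /* shift instead of swap */
            a[k] = a[k - 1];
        a[i] = m;
    }
}

int main(void) {
    /* two records with key 2: 'a' must stay before 'b' after sorting */
    struct rec a[] = {{2,'a'}, {1,'x'}, {2,'b'}, {0,'y'}};
    int n = sizeof a / sizeof a[0];
    stable_selection_sort(a, n);
    for (int i = 0; i < n; i++)
        printf("%d%c ", a[i].key, a[i].tag);    /* prints: 0y 1x 2a 2b */
    printf("\n");
    return 0;
}

Swapping a[i] with a[min] instead of shifting would sort the same keys but could reverse the order of the two records with key 2.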
Question 102

What is the complexity of merge sort?

A
O(n)
B
O(n2 logn)
C
O(nlogn)
D
O(n2)
       Algorithms       Sorting       CIL 2020
Question 102 Explanation: 
Worst case time complexity of merge sort is O(nlogn).
Question 103

Time complexity of Merge sort is

A
O(n)
B
O(n2)
C
O(log n)
D
O(n log n)
       Algorithms       Sorting       APPSC-2012-DL CA
Question 103 Explanation: 
Time complexity of merge sort is O(nlogn).
Question 104

In which sorting algorithm average number of comparison is (n2 – n) / 2?

A
Insertion sort
B
Selection sort
C
Merge sort
D
Quick sort
       Algorithms       Sorting       APPSC-2012-DL CA
Question 104 Explanation: 
Insertion sort has (n² – n)/2 comparisons on average.
Question 105
Which sorting method is best suited for external sorting?
A
Quick Sort
B
Heap Sort
C
Merge sort
D
All the above
       Algorithms       Sorting       APPSC-2012-DL-CS
Question 105 Explanation: 
External sorting is a term for a class of sorting algorithms that can handle massive amounts of data. External sorting is required when the data being sorted do not fit into the main memory of a computing device (usually RAM) and instead they must reside in the slower external memory (usually a hard drive). External sorting typically uses a hybrid sort-merge strategy. In the sorting phase, chunks of data small enough to fit in main memory are read, sorted, and written out to a temporary file. In the merge phase, the sorted sub-files are combined into a single larger file.
Question 106
The algorithm design technique used in the quick sort algorithm is
A
Dynamic programming
B
Back tracking
C
Divide and conquer
D
Greedy method
       Algorithms       Sorting       TNPSC-2012-Polytechnic-CS
Question 106 Explanation: 
The algorithm design technique used in the quick sort algorithm is divide and conquer.
Question 107
Which of the following sorting algorithms does not have a worst case running time of O(n2)?
A
Bubble Sort
B
Quick Sort
C
Merge Sort
D
Heap Sort
       Algorithms       Sorting       APPSC-2012-DL-CS
Question 107 Explanation: 
The worst-case time complexity of merge sort is O(n log n).
Question 108

Quick sort is run on two inputs shown below to sort in ascending order:

I : 1,2,3,…n II : n, n-1,…, 2, 1

Let k1 and k2 be the number of comparisons made for the inputs I and II respectively. Then
A
k1 < k2
B
k1 = k2
C
k1 > k2
D
None
       Algorithms       Sorting       APPSC-2012-DL-CS
Question 108 Explanation: 
For quick sort (with the first or last element as the pivot), both an ascending and a descending input produce the worst case, and the number of comparisons is the same in both cases, so k1 = k2.
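A small counting experiment illustrates this; the sketch below assumes the textbook Lomuto partition with the last element as the pivot, under which both the ascending and the descending input trigger the worst case and perform exactly n(n-1)/2 comparisons.

#include <stdio.h>

static long comparisons;    /* counts element comparisons made by partition */

/* Lomuto partition with the last element as pivot. */
static int partition(int a[], int lo, int hi) {
    int pivot = a[hi], i = lo - 1;
    for (int j = lo; j < hi; j++) {
        comparisons++;
        if (a[j] <= pivot) {
            i++;
            int t = a[i]; a[i] = a[j]; a[j] = t;
        }
    }
    int t = a[i + 1]; a[i + 1] = a[hi]; a[hi] = t;
    return i + 1;
}

static void quicksort(int a[], int lo, int hi) {
    if (lo < hi) {
        int p = partition(a, lo, hi);
        quicksort(a, lo, p - 1);
        quicksort(a, p + 1, hi);
    }
}

int main(void) {
    enum { N = 10 };
    int asc[N], desc[N];
    for (int i = 0; i < N; i++) {
        asc[i]  = i + 1;        /* input I : 1, 2, ..., n   */
        desc[i] = N - i;        /* input II: n, n-1, ..., 1 */
    }

    comparisons = 0; quicksort(asc, 0, N - 1);
    long k1 = comparisons;
    comparisons = 0; quicksort(desc, 0, N - 1);
    long k2 = comparisons;

    /* both inputs hit the worst case: k1 == k2 == n(n-1)/2 = 45 */
    printf("k1 = %ld, k2 = %ld\n", k1, k2);
    return 0;
}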
Question 109
Which Sorting method is an external Sort?
A
Heap Sort
B
Quick Sort
C
Insertion Sort
D
None of the above
       Algorithms       Sorting       TNPSC-2017-Polytechnic-CS
Question 109 Explanation: 
External sorting is a term for a class of sorting algorithms that can handle massive amounts of data. External sorting is required when the data being sorted do not fit into the main memory of a computing device (usually RAM) and instead they must reside in the slower external memory (usually a hard drive). External sorting typically uses a hybrid sort-merge strategy.
Merge sort is the standard example of an external sorting method; since merge sort is not among the given options, the answer is "None of the above".
Question 110
Suppose that the splits at every level of quicksort are in the proportion (1 – α) to α, where 0 < α ≤ ½ is a constant. The minimum depth of a leaf in the recursion tree is approximately given by
A
–lgn/ lg(1 – α)
B
–(lg (1 – α) / lgn)
C
–lgn / lgα
D
–lgα / lgn
       Algorithms       Sorting       UGC NET CS 2014 June-paper-3
Question 110 Explanation: 
Along the shortest root-to-leaf path of the recursion tree, the subproblem size shrinks by the smaller factor α at every level (α is the smaller fraction, since 0 < α ≤ ½). After d levels the size is n·α^d; setting n·α^d ≈ 1 gives d ≈ lg n / lg(1/α) = −lg n / lg α.
Question 111
In the code below reverse(A,i,j) takes an array A, indices i and j with i ≤ j, and reverses the segment A[i], A[i+1],. . . ,A[j]. For instance if A = [0,1,2,3,4,5,6,7] then, after we apply reverse(A,3,6), the contents of the array will be A = [0,1,2,6,5,4,3,7].
function mystery (A[0..99]) {
    int i,j,m;
    for (i = 0; i < 100; i++) {
        m = i;
        for (j = i; j < 100; j++) {
            if (A[j] > A[m]) {
                m = j;
            }
        }
        reverse(A,i,m);
    }
    return;
}
When the procedure terminates, the array A has been:
A
Sorted in descending order
B
Sorted in ascending order
C
Reversed
D
Left unaltered
       Algorithms       Sorting       CMI 2019
Question 111 Explanation: 
This procedure is a form of pancake sort: in each iteration of the outer loop the inner loop finds the index m of the maximum value in A[i..99], and reverse(A,i,m) then moves that maximum to position A[i]. Hence the array ends up sorted in descending order.
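For completeness, here is a small runnable C version of the same procedure (shrunk from 100 elements to 8), with reverse implemented as described in the question; running it confirms the descending result.

#include <stdio.h>

/* reverse(A, i, j): reverses the segment A[i..j], as defined in the question */
static void reverse(int A[], int i, int j) {
    while (i < j) {
        int t = A[i]; A[i] = A[j]; A[j] = t;
        i++; j--;
    }
}

/* Same procedure as 'mystery', on an array of size n: the inner loop finds
 * the index m of the maximum of A[i..n-1], and reverse(A, i, m) flips
 * A[i..m] so that maximum lands at position i. */
static void mystery(int A[], int n) {
    for (int i = 0; i < n; i++) {
        int m = i;
        for (int j = i; j < n; j++)
            if (A[j] > A[m])
                m = j;
        reverse(A, i, m);
    }
}

int main(void) {
    int A[] = {0, 1, 2, 3, 4, 5, 6, 7};
    int n = sizeof A / sizeof A[0];
    mystery(A, n);
    for (int i = 0; i < n; i++) printf("%d ", A[i]);    /* 7 6 5 4 3 2 1 0 */
    printf("\n");
    return 0;
}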
Question 112
An array of n distinct elements is said to be un-sorted if for every index i such that 2 ≤ i ≤ n−1, either A[i] > max{A[i−1], A[i+1]}, or A[i] < min{A[i−1], A[i+1]}. What is the time-complexity of the fastest algorithm that takes as input a sorted array A with n distinct elements, and un-sorts A?
A
O(n log n) but not O(n)
B
O(n) but not O(√n)
C
O(√n) but not O(log n)
D
O(log n) but not O(1)
E
O(1)
       Algorithms       Sorting       TIFR PHD CS & SS 2017
Question 113
An array A contains n integers. We wish to sort A in ascending order. We are told that initially no element of A is more than a distance k away from its final position in the sorted list. Assume that n and k are large and k is much smaller than n. Which of the following is true for the worst case complexity of sorting A?
A
A can be sorted with constant · kn comparisons but not with fewer comparisons
B
A cannot be sorted with less than constant · n log n comparisons
C
A can be sorted with constant · n comparisons
D
A can be sorted with constant · n log k comparisons but not with fewer comparisons
E
A can be sorted with constant · k2n comparisons but not fewer
       Algorithms       Sorting       TIFR PHD CS & SS 2012
Question 114
Consider the quicksort algorithm on a set of n numbers, where in every recursive subroutine of the algorithm, the algorithm chooses the median of that set as the pivot. Then which of the following statements is TRUE?
A
The running time of the algorithm is Θ(n).
B
The running time of the algorithm is Θ(n log n)
C
The running time of the algorithm is Θ(n1.5).
D
The running time of the algorithm is Θ(n2)
E
None of the above.
       Algorithms       Sorting       TIFR PHD CS & SS 2012
Question 115
The best case performance of heap sort for sorting a given list of numbers into descending order occurs if the given list is in
A
ascending order
B
descending order
C
random order
D
all of the above
       Algorithms       Sorting       HCU PHD CS MAY 2017
Question 115 Explanation: 
Whatever the order of the given list, heap sort always takes O(n log n) time.
Question 116
Let (a1,a2,a3,......,an-1,an) be a list of numbers in ascending order except for an, where an < a1. Then the time taken by insertion sort and bubble sort algorithms respectively to sort the list are
A
O(n),O(n)
B
O(n),O(n2)
C
O(n2),O(n)
D
O(1),O(n)
       Algorithms       Sorting       HCU PHD CS MAY 2015
Question 116 Explanation: 
Insertion sort handles this almost-sorted list in O(n) time: every element except the last is already in place, and the last element is inserted at the front with at most n−1 comparisons, so the total work is linear.
Bubble sort, however, moves the smallest element (which sits at the end) only one position to the left per pass, so it needs about n passes and O(n²) time.
Question 117
What is the minimum number of swaps that would take place, using Bubble sort on a list of n numbers ?
A
(n-1)
B
n/2
C
0
D
log n
       Algorithms       Sorting       HCU PHD CS MAY 2016
Question 117 Explanation: 
The minimum number of swaps performed by bubble sort on a list of n numbers is 0, which happens when the elements are already in sorted order.
Question 118
Heap sort may be considered as
A
Insertion sort done on a heap data structure instead of a list.
B
Selection sort done on a heap data structure instead of a list.
C
Bubble sort done on a heap data structure instead of a list.
D
None of the above.
       Algorithms       Sorting       HCU PHD CS MAY 2014
Question 118 Explanation: 
In heap sort, heapify brings the minimum (or maximum) element to the root; we exchange it with the last element and repeat the procedure on the remaining elements.
Selection sort does the same thing without a heap: it finds the minimum (or maximum) element, exchanges it with the first element, and repeats on the remaining elements. So heap sort is essentially selection sort performed on a heap data structure.
Question 119

The correct result of Lexicographically sorting

109, 20, 119, 2,207, 27, 19

in ascending order is:
A
19, 109, 119, 2, 20, 27, 207
B
109, 119, 19, 2,20,27,207
C
109, 119, 19, 2,20,207,27
D
2, 19, 20, 27, 109, 119 , 207
       Algorithms       Sorting       HCU PHD CS MAY 2014
Question 119 Explanation: 
Lexicographic sorting of numbers is the same as the lexicographic (dictionary) sorting of words: the numbers are compared digit by digit as strings.
So the correct order is:
109, 119, 19, 2, 20, 207, 27
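This order is easy to reproduce by comparing the numbers as character strings; the sketch below simply hands strcmp to the standard qsort.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Lexicographic (dictionary) order compares the numbers digit by digit
 * as character strings, not by numeric value. */
static int cmp(const void *p, const void *q) {
    const char *a = *(const char * const *)p;
    const char *b = *(const char * const *)q;
    return strcmp(a, b);
}

int main(void) {
    const char *nums[] = {"109", "20", "119", "2", "207", "27", "19"};
    int n = sizeof nums / sizeof nums[0];
    qsort(nums, n, sizeof nums[0], cmp);
    for (int i = 0; i < n; i++)
        printf("%s ", nums[i]);     /* 109 119 19 2 20 207 27 */
    printf("\n");
    return 0;
}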
Question 120
Which sort uses features of the key to operate in Linear time relative to the number of elements in the array?
A
Quick Sort
B
Merge Sort
C
Radix Sort
D
Bubble Sort
       Algorithms       Sorting       HCU PHD CS MAY 2014
Question 120 Explanation: 
Examples of sorting algorithms that run in linear time are counting sort, radix sort and bucket sort. Counting sort and radix sort assume that the input consists of integers in a small range.
Question 121
When the data to be sorted is larger than the main memory capacity, which sorting algorithm would you prefer?
A
Heap sort
B
Insertion sort
C
Quick sort
D
Merge sort
       Algorithms       Sorting       HCU PHD CS MAY 2014
Question 121 Explanation: 
External sorting is a term for a class of sorting algorithms that can handle massive amounts of data. External sorting is required when the data being sorted do not fit into the main memory of a computing device (usually RAM) and instead they must reside in the slower external memory (usually a hard drive). External sorting typically uses a hybrid sort-merge strategy. In the sorting phase, chunks of data small enough to fit in main memory are read, sorted, and written out to a temporary file. In the merge phase, the sorted sub-files are combined into a single larger file.
One example of external sorting is the external merge sort algorithm, which sorts chunks that each fit in RAM, then merges the sorted chunks together. We first divide the file into runs such that the size of a run is small enough to fit into main memory. Then sort each run in main memory using merge sort sorting algorithm. Finally merge the resulting runs together into successively bigger runs, until the file is sorted.
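The sketch below is a toy C illustration of this sort-merge strategy: a pretend input file is split into RAM-sized runs (here an assumed capacity of M = 4 integers), each run is sorted with qsort and written to its own temporary file, and the runs are then merged k ways by repeatedly taking the smallest front element. Error checks and buffered output are omitted; a real external sort would work with much larger runs and proper disk I/O.

#include <stdio.h>
#include <stdlib.h>

enum { M = 4, MAX_RUNS = 64 };      /* M = pretend RAM capacity, in ints */

static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

int main(void) {
    /* build a pretend "input file" that is larger than RAM (M ints) */
    int data[] = {20, 47, 15, 8, 9, 4, 40, 30, 12, 17, 3, 25};
    int n = sizeof data / sizeof data[0];
    FILE *in = tmpfile();
    fwrite(data, sizeof(int), n, in);
    rewind(in);

    /* Sorting phase: read one RAM-sized chunk at a time, sort it, and
     * write it out as a sorted run in its own temporary file. */
    FILE *runs[MAX_RUNS];
    int nruns = 0;
    int buf[M];
    size_t got;
    while ((got = fread(buf, sizeof(int), M, in)) > 0) {
        qsort(buf, got, sizeof(int), cmp_int);
        runs[nruns] = tmpfile();
        fwrite(buf, sizeof(int), got, runs[nruns]);
        rewind(runs[nruns]);
        nruns++;
    }

    /* Merge phase: k-way merge, keeping one "front" element per run in memory */
    int front[MAX_RUNS], alive[MAX_RUNS];
    for (int r = 0; r < nruns; r++)
        alive[r] = (fread(&front[r], sizeof(int), 1, runs[r]) == 1);
    for (;;) {
        int best = -1;
        for (int r = 0; r < nruns; r++)
            if (alive[r] && (best < 0 || front[r] < front[best]))
                best = r;
        if (best < 0) break;                    /* all runs exhausted */
        printf("%d ", front[best]);             /* "write" to the output */
        alive[best] = (fread(&front[best], sizeof(int), 1, runs[best]) == 1);
    }
    printf("\n");
    return 0;
}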
Question 122
Which of the following sorting algorithms yield approximately the same worst-case and average-case running time behaviour of O(n log n)?
A
Quicksort and Radix sort
B
Heap sort and Merge sort
C
Bubble sort and Selection sort
D
None of the above.
       Algorithms       Sorting       HCU PHD CS MAY 2013
Question 122 Explanation: 
Heap sort has worst case time complexity as O(nlogn) and average case time complexity as O(nlogn)
Merge sort has worst case time complexity as O(nlogn) and average case time complexity as O(nlogn)
Bubble sort has worst case time complexity as O(n^2) and average case time complexity as O(n^2)
Selection sort has worst case time complexity as O(n^2) and average case time complexity as O(n^2)
Quick sort has worst case time complexity as O(n^2) and average case time complexity as O(nlogn)
Radix sort has worst case time complexity as O(n+K) and average case time complexity as O(n+K)
Question 123
Suppose that a list of n sorted integers is given. Which of the following algorithms can detect that the input is sorted in O(n) time?
A
insertion sort only
B
selection sort only
C
bubble sort and selection sort
D
bubble sort and insertion sort
       Algorithms       Sorting       HCU PHD CS MAY 2013
Question 123 Explanation: 
Since the input is already sorted, insertion sort and bubble sort take only O(n) time and can therefore detect a sorted input in linear time; selection sort takes O(n²) time in every case.
Question 124
Which one of these sorting algorithms involves minimum record swaps during execution on an average?
A
bubble sort
B
insertion sort
C
selection sort
D
quicksort
       Algorithms       Sorting       HCU PHD CS MAY 2012
Question 124 Explanation: 
The selection sort algorithm involves the minimum number of record swaps on average (and also in the worst case): it performs only O(n) swaps.
By contrast, the average number of swaps is O(n²) for bubble sort, O(n²) for insertion sort, and O(n log n) for quick sort.
Question 125
Which sort will operate in quadratic time relative to the number of elements in the array (on the average)?
A
Quick Sort
B
Merge Sort
C
Radix Sort
D
Bubble Sort
       Algorithms       Sorting       HCU PHD CS MAY 2012
Question 125 Explanation: 
The average time complexity of quick sort is O(n log n), which is not quadratic.
The average time complexity of merge sort is O(n log n), which is not quadratic.
The average time complexity of radix sort is O(d(n + k)), which is linear for a fixed number of digits, not quadratic.
The average time complexity of bubble sort is O(n²), which is quadratic.
Question 126
Which of the following statements best describes quicksort, Mergesort and Heapsort algorithms, respectively?
A
the first two use recursive partitioning, while the third uses static tree-Iike structure
B
All of them use recursive partitioning
C
the first two use static tree-like structure, while the third uses recursive partitioning
D
the first two use nested do loop where both loops are bounded by the array size, while the third uses static tree-like structure
       Algorithms       Sorting       HCU PHD CS MAY 2012
Question 126 Explanation: 
All of the sorting methods discussed above use recursive partitioning.
Question 127
Suppose S consists of the following n = 5 letters in the order: A B C D E. What is the number of comparisons to alphabetically sort S using the Quicksort algorithm?
A
15
B
12 to 13
C
10
D
25
       Algorithms       Sorting       HCU PHD CS MAY 2012
Question 127 Explanation: 
Beginning with E, it takes n−1 = 4 comparisons to recognize that A is already in its correct position. Sorting S is then reduced to sorting the remaining sublist of n−1 = 4 letters:
B C D E
Again beginning with E, it takes n−2 = 3 comparisons to recognize that B is already in its correct position, leaving a sublist of n−2 = 3 letters, and so on.
Consequently we have 4 + 3 + 2 + 1 = 10 comparisons.
In general, quicksort on an already-sorted sequence (ascending or descending) is in its worst case and takes
C = (n−1) + (n−2) + … + 2 + 1 = n(n−1)/2 = O(n²)
comparisons; for n = 5 this is 5·4/2 = 10.
Question 128
How many comparisons are required to sort an array of length 5 if selection sort is used and the array is already sorted in the opposite order?
A
0
B
1
C
10
D
20
       Algorithms       Sorting       HCU PHD CS MAY 2012
Question 128 Explanation: 
In every case, the number of comparisons required by selection sort on n elements is
(n−1) + (n−2) + (n−3) + … + 1 = n(n−1)/2, which for n = 5 is 5·4/2 = 10.
Question 129
Heap sort algorithm is the same as which of the following algorithms, except for the fact that it uses the heap data structure
A
bubble sort
B
Shell sort
C
Merge Sort
D
Selection sort
       Algorithms       Sorting       HCU PHD CS MAY 2011
Question 129 Explanation: 
Heap sort is essentially the same as selection sort: in selection sort we find the minimum (or maximum) element of the unsorted part, swap it with the first element of that part, and continue with the remaining n−1 elements; in heap sort we extract the minimum (or maximum) via the heap, swap it with the last element of the heap, and continue with the remaining n−1 elements.
Question 130
Find the lexicographic ordering of the bit strings given below based on the ordering 0 < 1. A) 001 B) 010 C) 011 D) 0001 E) 0101 Choose the correct answer from the options given below:
A
001<010<011<0001<0101
B
0001<001<010<0101<011
C
0001<0101<001<010<011
D
001<010<0001<0101<011
       Algorithms       Sorting       UGC NET JRF November 2020 Paper-2
Question 130 Explanation: 
The lexicographic order of the given bit strings (using 0 < 1) is:
0001 < 001 < 010 < 0101 < 011
Note: lexicographic order is simply dictionary-based order.
Question 131
Which sorting algorithm sorts by moving the current data element past the already sorted values and repeatedly interchanges it with the preceding value until it is in its correct place?
A
Insertion sort
B
Internal sort
C
External sort
D
Radix sort
       Algorithms       Sorting       NIC-NIELIT Scientist-B 2020
Question 131 Explanation: 
The insertion sort algorithm sorts by moving the current data element past the already-sorted values, repeatedly interchanging it with the preceding value until it reaches its correct place.
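A minimal sketch of insertion sort in C, written exactly in the exchange-with-the-preceding-value style that the question describes:

#include <stdio.h>

/* Insertion sort: each pass takes the current element and moves it past
 * the already-sorted prefix, repeatedly exchanging it with the preceding
 * value until it reaches its correct place. */
void insertion_sort(int a[], int n) {
    for (int i = 1; i < n; i++) {
        for (int j = i; j > 0 && a[j - 1] > a[j]; j--) {
            int t = a[j - 1];
            a[j - 1] = a[j];
            a[j] = t;
        }
    }
}

int main(void) {
    int a[] = {5, 2, 4, 6, 1, 3};
    int n = sizeof a / sizeof a[0];
    insertion_sort(a, n);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}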
Question 132
Which of the following techniques deals with sorting the data stored in the computer’s memory?
A
Distribution sort
B
Internal sort
C
External sort
D
Radix sort
       Algorithms       Sorting       NIC-NIELIT Scientist-B 2020
Question 132 Explanation: 
Internal sorting works entirely on data that fits in the computer's main memory, which is what the question describes, so the answer is internal sort. External sorting is used when the data is too large for main memory and must reside on external storage (such as a disk) while it is being sorted.
Question 133
If one uses straight two-way merge sort algorithm to sort the following elements in ascending order 20, 47, 15, 8, 9, 4, 40, 30, 12, 17 then the order of these elements after the second pass of the algorithm is:
A
8, 9, 15, 20, 47, 4, 12, 17, 30, 40
B
8, 15, 20, 47, 4, 9, 30, 40, 12, 17
C
15, 20, 47, 4, 8, 9, 12, 30, 40, 17
D
4, 8, 9, 15, 20, 47, 12, 17, 30, 40
       Algorithms       Sorting       NIC-NIELIT Scientist-B 2020
Question 133 Explanation: 
Straight two-way merge sort first forms sorted runs of length 2 and then, in each later pass, merges adjacent runs two at a time (a two-way merge) into runs of double the length.
Pass 1: (20,47), (8,15), (4,9), (30,40), (12,17)
Pass 2: (8,15,20,47), (4,9,30,40), (12,17)
Pass 3: (4,8,9,15,20,30,40,47), (12,17)
Pass 4: (4,8,9,12,15,17,20,30,40,47)
So after 4 passes all the elements are sorted, and the order after the second pass matches option B.
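The passes can be reproduced with a short bottom-up (straight two-way) merge sort that prints the array after every pass; this is only a sketch, using a temporary array for each merge.

#include <stdio.h>
#include <string.h>

/* merge two adjacent sorted runs a[lo..mid) and a[mid..hi) into out[lo..hi) */
static void merge(const int *a, int lo, int mid, int hi, int *out) {
    int i = lo, j = mid, k = lo;
    while (i < mid && j < hi) out[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) out[k++] = a[i++];
    while (j < hi)  out[k++] = a[j++];
}

int main(void) {
    int a[] = {20, 47, 15, 8, 9, 4, 40, 30, 12, 17};
    int n = sizeof a / sizeof a[0];
    int tmp[10];                         /* same length as a */

    /* Straight (bottom-up) two-way merge sort: pass p merges runs of
     * length 2^(p-1) into runs of length 2^p. */
    int pass = 0;
    for (int width = 1; width < n; width *= 2) {
        for (int lo = 0; lo < n; lo += 2 * width) {
            int mid = (lo + width < n) ? lo + width : n;
            int hi  = (lo + 2 * width < n) ? lo + 2 * width : n;
            merge(a, lo, mid, hi, tmp);
        }
        memcpy(a, tmp, sizeof a);
        printf("after pass %d: ", ++pass);
        for (int i = 0; i < n; i++) printf("%d ", a[i]);
        printf("\n");
    }
    return 0;
}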
Question 134
What is the advantage of bubble sort over other sorting techniques?
A
It is faster
B
Consumes less memory
C
Detects whether the input is already sorted
D
All of the options
       Algorithms       Sorting       NIC-NIELIT Scientist-B 2020
Question 134 Explanation: 
The options are loosely worded.
Option A is definitely false.
Option B is not an advantage compared to other sorting techniques.
Option C is the intended answer: bubble sort (with an early-exit check) detects an already-sorted input in a single O(n) pass.
Question 135
What is the best case complexity of QuickSort?
A
O(nlogn)
B
O(logn)
C
O(n)
D
O(n2)
       Algorithms       Sorting       NIC-NIELIT Scientist-B 2020
Question 135 Explanation: 
The best case of quick sort occurs when each partitioning pass places the pivot in the middle, splitting the array into two nearly equal halves:
T(n) = 2T(n/2) + n
= O(n log n)
Question 136
Consider an array of positive integers between 123456 and 876543. Which sorting algorithm can be used to sort these numbers in linear time?
A
Impossible to sort in linear time
B
Radix Sort
C
Insertion Sort
D
Bubble Sort
       Algorithms       Sorting       NIC-NIELIT STA 2020
Question 136 Explanation: 
Radix sort's worst-case time complexity is O(d(n + k)), where d is the number of digits and k is the radix.
Here every key has exactly d = 6 decimal digits (all values lie between 123456 and 876543), so the running time is O(6(n + 10)) = O(n), i.e. linear. Radix sort even remains linear when the keys range from 0 up to n^6, by treating each key as a 6-digit base-n number.
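A sketch of LSD radix sort in C for these 6-digit keys: one stable counting-sort pass per decimal digit, so six passes in total, each costing O(n + 10).

#include <stdio.h>
#include <string.h>

/* One stable counting-sort pass on the digit selected by 'exp'
 * (exp = 1, 10, 100, ...). */
static void counting_pass(int a[], int n, int exp) {
    int out[n];                         /* C99 variable-length array; fine for a sketch */
    int count[10] = {0};
    for (int i = 0; i < n; i++) count[(a[i] / exp) % 10]++;
    for (int d = 1; d < 10; d++) count[d] += count[d - 1];
    for (int i = n - 1; i >= 0; i--)    /* walking backwards keeps the pass stable */
        out[--count[(a[i] / exp) % 10]] = a[i];
    memcpy(a, out, n * sizeof a[0]);
}

/* LSD radix sort: one counting pass per digit. With d = 6 digits fixed,
 * the total work is O(d * (n + 10)) = O(n). */
void radix_sort(int a[], int n, int digits) {
    int exp = 1;
    for (int d = 0; d < digits; d++, exp *= 10)
        counting_pass(a, n, exp);
}

int main(void) {
    int a[] = {876543, 123456, 500000, 345678, 234561, 876542};
    int n = sizeof a / sizeof a[0];
    radix_sort(a, n, 6);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}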
Question 137
The given array is arr = {1, 2, 4, 3}. Bubble sort is used to sort the array elements. How many passes will be done to sort the array?
A
4
B
2
C
1
D
3
       Algorithms       Sorting       NIC-NIELIT STA 2020
Question 137 Explanation: 
A plain bubble sort on n = 4 elements always makes n − 1 = 3 passes, although for this input the only swap (of 4 and 3) happens in the first pass; an optimised bubble sort with an early-exit flag would stop after the second pass.
Note: the official answer key gives Option-D as the correct answer.
Question 138
The procedure operates on three arrays A[0..99], B[0..99] and C[0..99], which are initialized with integer values.
procedure mystery() {
    for (i = 0; i < 100; i++) { C[i] = A[i]; }
    p = 99;
    for (i = 0; i < 100; i++) {
        B[p] = C[0];
        p = p-1;
        for (j = 1; j < 100; j++) {
            C[j-1] = C[j];
        }
    }
}
When the procedure terminates, which of the following statements can be asserted about the array B?
A
All elements of B are equal to A[0]
B
B contains the elements of A sorted in descending order
C
All values of B are the same
D
B contains the elements of A in reverse order
       Algorithms       Sorting       CMI 2021
Question 138 Explanation: 
We copy A into C to begin with. In each iteration of the outer for loop, we place the first element of C at position p of B. Since we decrement p in each iteration, we are essentially placing the elements in positions 99, 98, ..., 0 of B, in order. At each iteration, we shift C left by one place, so what was C[i+1] in the previous iteration is C[i] in the current iteration. So essentially we are copying A[0], A[1], . . . , A[99] into B[99], B[98], . . . , B[0].
Question 139
The procedure operates on three arrays A[0..99], B[0..99] and C[0..99], which are initialized with integer values.
procedure mystery() {
    for (i = 0; i < 100; i++) { C[i] = A[i]; }
    p = 99;
    for (i = 0; i < 100; i++) {
        B[p] = C[0];
        p = p-1;
        for (j = 1; j < 100; j++) {
            C[j-1] = C[j];
        }
    }
}
When the procedure terminates, which of the following statements can be asserted about the array C?
A
It contains the elements of A sorted in ascending order
B
It contains the elements of A sorted in descending order
C
All values are equal to A[99]
D
All values are equal to A[0]
       Algorithms       Sorting       CMI 2021
Question 139 Explanation: 
As explained above, each iteration of the outer loop shifts C to the left by one place.
At the end of iteration i, C[99], which is the same as A[99], is copied to positions 99-i, 99-i+1, . . . , 99 of C. Since the outer loop is run for 100 iterations, the result follows.
Question 140
Let A = A[1]A[2] · · · A[n] be an array of n numbers where n = 2k − 1 for some k > 0. Consider a binary tree T built from the array in the following manner. The root of T is the middle element of A. Let AL and AR denote the left and right subarrays resulting from the removal of the middle element from A. The left and right subtrees of the root node are binary trees defined similarly on AL and AR, respectively. Each element A[i] of the array is a node in this tree T. Let Si denote the indices occurring in the unique path from the root node to the element A[i]. We say that index i is good if for every j1, j2 in Si with j1 < j2, we have A[j1] ≤ A[j2]. Prove that the subarray A[i1]A[i2] · · · A[im] consisting only of good indices is sorted.
A
Descriptive explanation
       Algorithms       Sorting       CMI 2021
Question 140 Explanation: 
Consider any two good indices i and j, with i < j. Let A[k] be the lowest common ancestor of A[i] and A[j] in the tree T (it is possible that k ∈ {i, j}). Clearly i ≤ k ≤ j.
Now i, k ∈ Si and j, k ∈ Sj, and both i and j are good. Therefore A[i] ≤ A[k] and A[k] ≤ A[j], and hence A[i] ≤ A[j]. The desired result follows.
Question 141
Consider the problem of sorting n single digit integers (base 10). This problem can be solved in time
A
O(n log n) but not O(n log log n)
B
O(n log log n) but not O(n)
C
O(n) but not O(n/ log log n)
D
O(n/ log log n)
E
None of the above.
       Algorithms       Sorting       TIFR PHD 2022
Question 142
There is an unsorted list of n integers. You are given 3 distinct integers and you have to check if all 3 integers are present in the list or not. The only operation that you are allowed to perform is a comparison. Let A be an algorithm for this task that performs the least number of comparisons. Let c be the number of comparisons done by A. Then,
A
c = 3n
B
c = 2n + 5
C
c ≥ 3n − 1
D
c ≤ n
E
c ≤ 2n + 3
       Algorithms       Sorting       TIFR PHD 2022
Question 143
Consider the following function count, that takes as input a, an array of integers, and N, the size of the array.

Further, let count_IS be the number of comparisons made by the insertion sort algorithm on the array a.
Which of the following statements is TRUE for some constant c?
A
For all N ≥ c, there exists an array of size N for which count_IS ≥ N2/c, while count_FN ≤ cN
B
For all N ≥ c, there exists an array of size N for which count_FN ≥ N2/c, while count_IS ≤ cN
C
For all N ≥ c, for all arrays of size N, count_FN ≤ count_IS ≤ c × count_FN
D
For all N ≥ c, for all arrays of size N, count_FN ≥ N2/c
E
None of the above
       Algorithms       Sorting       TIFR PHD 2022