Sorting

Question 1

Merge sort uses

A
Divide and conquer strategy
B
Backtracking approach
C
Heuristic search
D
Greedy approach
Question 1 Explanation: 
Merge sort uses the divide and conquer strategy.
Question 2

Consider the following sequence of numbers

  92, 37, 52, 12, 11, 25  

Use bubble sort to arrange the sequence in ascending order. Give the sequence at the end of each of the first five passes.

A
Theory Explanation.
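A minimal sketch, assuming the common variant of bubble sort in which each pass bubbles the largest remaining element to the right end (the function name bubble_sort_passes is just illustrative):

# Bubble sort on the given sequence, printing the array after each pass.
def bubble_sort_passes(a):
    a = list(a)
    n = len(a)
    for p in range(1, n):          # passes 1 .. n-1
        for j in range(n - p):     # compare adjacent pairs
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
        print("after pass", p, ":", a)
    return a

bubble_sort_passes([92, 37, 52, 12, 11, 25])

With this variant the passes are: pass 1: 37, 52, 12, 11, 25, 92; pass 2: 37, 12, 11, 25, 52, 92; pass 3: 12, 11, 25, 37, 52, 92; pass 4: 11, 12, 25, 37, 52, 92; pass 5: 11, 12, 25, 37, 52, 92 (no swaps).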
Question 3

A two dimensional array A[1...n][1...n] of integers is partially sorted if

    ∀i, j ∈ [1...n−1],   A[i][j] < A[i][j+1] and 
                           A[i][j] < A[i+1][j] 

Fill in the blanks:
(a) The smallest item in the array is at A[i][j] where i=............and j=..............
(b) The smallest item is deleted. Complete the following O(n) procedure to insert item x (which is guaranteed to be smaller than any item in the last row or column) still keeping A partially sorted.

procedure  insert (x: integer);
var        i,j: integer;
begin
(1) i:=1; j:=1, A[i][j]:=x;
(2) while (x > ...... or x > ......) do
(3) if A[i+1][j] < A[i][j] ......... then begin
(4) A[i][j]:=A[i+1][j]; i:=i+1;
(5) end
(6) else begin
(7) ............
(8) end
(9) A[i][j]:= .............
    end  
A
Theory Explanation.
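For part (a), the smallest item is at i = 1, j = 1 (the top-left corner). Below is a sketch of the idea behind the insert procedure, transcribed to Python with 0-based indexing; the function name and the float("inf") sentinel are illustrative choices, not the official blank-for-blank answer. The hole left by the deleted minimum starts at the top-left corner; while x is larger than a neighbour below or to the right, the smaller of those neighbours moves into the hole, and finally x is written into the hole.

# Young-tableau style insertion: push the hole along the smaller neighbour
# until x fits, keeping the order A[i][j] < A[i][j+1] and A[i][j] < A[i+1][j].
def insert(A, x):
    n = len(A)
    i = j = 0
    while (i + 1 < n and x > A[i + 1][j]) or (j + 1 < n and x > A[i][j + 1]):
        down  = A[i + 1][j] if i + 1 < n else float("inf")
        right = A[i][j + 1] if j + 1 < n else float("inf")
        if down < right:
            A[i][j] = down      # smaller neighbour moves up into the hole
            i += 1
        else:
            A[i][j] = right
            j += 1
    A[i][j] = x

Each step moves the hole one cell down or right, so the procedure runs in O(n) time.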
Question 4

A sorting technique is called stable if

A
it takes O (nlog n) time
B
it maintains the relative order of occurrence of non-distinct elements
C
it uses divide and conquer paradigm
D
it takes O(n) space
Question 4 Explanation: 
A sorting technique is said to be stable if it maintains the relative order of occurrence of non-distinct (equal) elements.
Question 5

If one uses straight two-way merge sort algorithm to sort the following elements in ascending order:

   20, 47, 15, 8, 9, 4, 40, 30, 12, 17 

then the order of these elements after second pass of the algorithm is:

A
8, 9, 15, 20, 47, 4, 12, 17, 30, 40
B
8, 15, 20, 47, 4, 9, 30, 40, 12, 17
C
15, 20, 47, 4, 8, 9, 12, 30, 40, 17
D
4, 8, 9, 15, 20, 47, 12, 17, 30, 40
Question 5 Explanation: 
Pass 1 merges runs of length 1 into runs of length 2: (20,47) (8,15) (4,9) (30,40) (12,17) → 20, 47, 8, 15, 4, 9, 30, 40, 12, 17.
Pass 2 merges runs of length 2 into runs of length 4: (8,15,20,47) (4,9,30,40) (12,17) → 8, 15, 20, 47, 4, 9, 30, 40, 12, 17.
Hence option (B).
Question 6

Let s be a sorted array of n integers. Let t(n) denote the time taken for the most efficient algorithm to determine if there are two elements with sum less than 1000 in s. Which of the following statements is true?

A
t(n) is O(1)
B
n ≤ t(n) ≤ n log₂ n
C
n log₂ n ≤ t(n) < C(n, 2)
D
t(n) = C(n, 2)
Question 6 Explanation: 
Since the array is sorted, the two smallest elements are the first two. Check whether their sum is less than 1000: if it is, we have found such a pair; if not, no other pair can have a smaller sum, so no such pair exists. This takes constant time, hence the correct option is (A).
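A sketch of the constant-time check described above (the function name is just illustrative):

# s is sorted, so s[0] and s[1] are the two smallest elements; if even their
# sum is not below 1000, no other pair can be.
def has_pair_with_small_sum(s, limit=1000):
    return len(s) >= 2 and s[0] + s[1] < limit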
Question 7

Randomized quicksort is an extension of quicksort where the pivot is chosen randomly. What is the worst case complexity of sorting n numbers using randomized quicksort?

A
O(n)
B
O(n log n)
C
O(n²)
D
O(n!)
Question 7 Explanation: 
In the worst case, randomized quicksort has the same time complexity as ordinary quicksort, i.e. O(n²): the random pivot can still split the array as badly as possible in every step.
Question 8

In a permutation a₁...aₙ of n distinct integers, an inversion is a pair (aᵢ, aⱼ) such that i < j and aᵢ > aⱼ.

If all permutations are equally likely, what is the expected number of inversions in a randomly chosen permutation of 1...n ?

A
n(n-1)/2
B
n(n-1)/4
C
n(n+1)/4
D
2n[log₂n]
Question 8 Explanation: 
For any pair (aᵢ, aⱼ) with i < j, the probability that it forms an inversion is 1/2. There are n(n−1)/2 such pairs, so the expected number of inversions = (1/2) × n(n−1)/2 = n(n−1)/4.
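A small brute-force sanity check of this expectation for small n (it enumerates all n! permutations, so it is only illustrative):

from itertools import permutations

def expected_inversions(n):
    perms = list(permutations(range(1, n + 1)))
    total = sum(sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
                for p in perms)
    return total / len(perms)

for n in range(2, 7):
    print(n, expected_inversions(n), n * (n - 1) / 4)   # the two values agree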
Question 9

In a permutation a₁...aₙ of n distinct integers, an inversion is a pair (aᵢ, aⱼ) such that i < j and aᵢ > aⱼ.

What would be the worst case time complexity of the Insertion Sort algorithm, if the inputs are restricted to permutations of 1...n with at most n inversions?

A
Θ(n²)
B
Θ(n log n)
C
Θ(n^1.5)
D
Θ(n)
Question 9 Explanation: 
Here the inputs are restricted to permutations of 1...n with at most n inversions. Insertion sort runs in Θ(n + d) time, where d is the number of inversions in the input; with d ≤ n, the worst-case time complexity reduces to Θ(n).
Question 10

Consider the recursive algorithm given below:

 procedure bubblesort (n);
 var i, j: index; temp: item;
 begin
    for i := 1 to n-1 do
       if A[i] > A[i+1] then
       begin
          temp := A[i];
          A[i] := A[i+1];
          A[i+1] := temp
       end;
    bubblesort (n-1)
 end

Let aₙ be the number of times the 'if…then…' statement gets executed when the algorithm is run with value n. Set up the recurrence relation by defining aₙ in terms of aₙ₋₁. Solve for aₙ.

A
Theory Explanation.
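As a sketch of the intended answer, assuming the recursion stops at n = 1: the call on n executes the if-test n−1 times and then recurses on n−1, so aₙ = aₙ₋₁ + (n−1) with a₁ = 0, which solves to aₙ = n(n−1)/2. A quick check mirroring that recurrence:

# a_n = a_{n-1} + (n - 1), a_1 = 0   (assuming the recursion bottoms out at n = 1)
def if_count(n):
    return 0 if n <= 1 else if_count(n - 1) + (n - 1)

assert all(if_count(n) == n * (n - 1) // 2 for n in range(1, 20))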
Question 11

Following algorithm(s) can be used to sort n integers in the range [1...n³] in O(n) time

A
Heapsort
B
Quicksort
C
Mergesort
D
Radixsort
Question 11 Explanation: 
No comparison-based sort can do better than Ω(n log n); that is the lower bound for comparison-based sorting, so options (A), (B) and (C) are eliminated.
Radix sort is not a comparison-based sort (it uses counting sort as a stable subroutine). With base n, each key in [1...n³] has only a constant number of digits, so the sorting can be done in linear time.
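A sketch of how radix sort achieves O(n) here: write each key in base n, so a key below n³ has at most 3 digits, and perform 3 stable distribution passes of Θ(n) each. The code assumes the slightly simpler range [0, n³) and is only illustrative:

# Radix sort in base n: 3 stable bucket passes, each Theta(n).
def radix_sort_base_n(a):
    n = len(a)
    for d in range(3):                              # digit 0 (least significant) .. 2
        buckets = [[] for _ in range(n)]
        for x in a:
            buckets[(x // n**d) % n].append(x)      # stable distribution by d-th digit
        a = [x for b in buckets for x in b]
    return a

print(radix_sort_base_n([7, 0, 26, 3, 15, 9, 1, 22, 4, 11]))   # n = 10, keys in [0, 1000)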
Question 12

Let a and b be two sorted arrays containing n integers each, in non-decreasing order. Let c be a sorted array containing 2n integers obtained by merging the two arrays a and b. Assuming the arrays are indexed starting from 0, consider the following four statements

I.   a[i] ≥ b[i] => c[2i] ≥ a[i]
II.  a[i] ≥ b[i] => c[2i] ≥ b[i]
III. a[i] ≥ b[i] => c[2i] ≤ a[i]
IV.  a[i] ≥ b[i] => c[2i] ≤ b[i]
Which of the following is TRUE?

A
only I and II
B
only I and IV
C
only II and III
D
only III and IV
Question 12 Explanation: 
a[i] ≥ b[i].
Since both a and b are sorted, every element of a[0..i] and of b[0..i] is ≤ a[i]; that is at least 2i+2 elements of c, so c[2i+1] ≤ a[i] and in particular c[2i] ≤ a[i]. Hence statement III holds.
Similarly, at most i elements of a and at most i elements of b are strictly smaller than b[i], so at most 2i elements of c precede the value b[i]; hence c[2i] ≥ b[i] and statement II holds.
So, option (C) is correct.
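A quick empirical check of statements II and III (the array sizes and value range below are arbitrary, purely for illustration): whenever a[i] ≥ b[i] we should observe b[i] ≤ c[2i] ≤ a[i].

import random

for _ in range(1000):
    n = 8
    a = sorted(random.sample(range(100), n))
    b = sorted(random.sample(range(100), n))
    c = sorted(a + b)                         # the merged array
    for i in range(n):
        if a[i] >= b[i]:
            assert b[i] <= c[2 * i] <= a[i]   # statements II and III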
Question 13

If we use Radix Sort to sort n integers in the range [n^(k/2), n^k], for some k > 0 which is independent of n, the time taken would be?

A
Θ(n)
B
Θ(kn)
C
Θ(nlogn)
D
Θ(n²)
Question 13 Explanation: 
Time complexity of radix sort = Θ(wn),
where n = number of keys and w = word size (number of digits per key).
Here w = log₂(n^k) = k × log₂(n), so
Θ(wn) = Θ(k × log₂(n) × n) = Θ(n log n), since k is a constant.
Question 14

Let P be a quicksort program to sort numbers in ascending order. Let t1 and t2 be the time taken by the program for the inputs [1 2 3 4] and [5 4 3 2 1] respectively. Which of the following holds?

A
t1 = t2
B
t1 > t2
C
t1 < t2
D
t1 = t2 + 5 log 5
Question 14 Explanation: 
Both inputs are already sorted (ascending and descending, respectively), which is the worst case for quicksort with a first- or last-element pivot, and the second input has more elements (5 vs. 4). Hence t1 < t2.
Question 15

Quicksort is ________ efficient than heapsort in the worst case.

A
LESS.
Question 15 Explanation: 
The worst-case time for quicksort is O(n²), while the worst case for heapsort is O(n log n).
Question 16

The minimum number of comparisons required to sort 5 elements is _____

A
7
Question 16 Explanation: 
Minimum no. of comparisons
= ⌈log₂(n!)⌉
= ⌈log₂(5!)⌉
= ⌈log₂(120)⌉
= 7
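The same information-theoretic bound computed directly (for n = 5 the bound of 7 comparisons is actually achievable, e.g. by merge-insertion sort):

import math

n = 5
print(math.ceil(math.log2(math.factorial(n))))   # 7, since log2(120) ≈ 6.91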
Question 17
Assume that the algorithms considered here sort the input sequences in ascending order. If the input is already in ascending order, which of the following are TRUE?
I. Quicksort runs in Θ(n²) time
II. Bubblesort runs in Θ(n²) time
III. Mergesort runs in Θ(n) time
IV. Insertion sort runs in Θ(n) time
A
I and II only
B
I and III only
C
II and IV only
D
I and IV only
Question 17 Explanation: 
If the input sequence is already sorted, quicksort (with the first or last element as pivot) takes O(n²), bubble sort (with the usual early-exit optimization) takes O(n), merge sort takes O(n log n), and insertion sort takes O(n).
→ The recurrence relation for quicksort, if the elements are already sorted, is
T(n) = T(n-1) + O(n); with the help of the substitution method it gives O(n²).
→ The recurrence relation for merge sort, if the elements are already sorted, is
T(n) = 2T(n/2) + O(n); with the help of the substitution method it gives O(n log n).
We can also use the Master theorem [a=2, b=2, k=1, p=0] for the above recurrence.
Question 18

An array of 25 distinct elements is to be sorted using quicksort. Assume that the pivot element is chosen uniformly at random. The probability that the pivot element gets placed in the worst possible location in the first round of partitioning (rounded off to 2 decimal places) is _____.

A
0.08
B
0.01
C
1
D
8
Question 18 Explanation: 
Step-1: Given, 25 distinct elements are to be sorted using quicksort.
Step-2: Pivot element = uniformly random.
Step-3: Worst case position in the pivot element is either first (or) last.
Step-4: So total 2 possibilities among 25 distinct elements
= 2/25
= 0.08
Question 19

There are n unsorted arrays: A₁, A₂, ..., Aₙ. Assume that n is odd. Each of A₁, A₂, ..., Aₙ contains n distinct elements. There are no common elements between any two arrays. The worst-case time complexity of computing the median of the medians of A₁, A₂, ..., Aₙ is

A
O(n)
B
O(n log n)
C
Ω(n² log n)
D
O(n²)
Question 19 Explanation: 
Finding the median of a single unsorted array of n elements takes O(n) time in the worst case (e.g. with the median-of-medians selection algorithm; plain quickselect-style partitioning is O(n) only on average).
There are n arrays, so computing all n medians takes
⇒ O(n) * O(n)
⇒ O(n²)
Finding the median of those n medians then adds only another O(n), so the overall worst-case complexity is O(n²).
Note:
They clearly state that all elements are distinct and that there are no common elements between any two arrays.
Question 20

The worst case running times of Insertion sort, Merge sort and Quick sort, respectively, are:

A
Θ(n log n), Θ(n log n), and Θ(n²)
B
Θ(n²), Θ(n²), and Θ(n log n)
C
Θ(n²), Θ(n log n), and Θ(n log n)
D
Θ(n²), Θ(n log n), and Θ(n²)
Question 20 Explanation: 
Insertion sort is Θ(n²) in the worst case (e.g. reverse-sorted input), merge sort is Θ(n log n) on every input, and quicksort is Θ(n²) in the worst case. Hence option (D).
Question 21

What is the number of swaps required to sort n elements using selection sort, in the worst case?

A
θ(n)
B
θ(n log n)
C
θ(n²)
D
θ(n² log n)
Question 21 Explanation: 
Selection sort – There is no especially bad input for selection sort. In the k-th iteration it searches for the index of the k-th minimum element and then, with one swap, places that element in its correct position. Over its n−1 iterations it therefore performs at most n−1 = O(n) swaps; once an element is swapped into place, it is never moved again. Selection sort still does Θ(n²) comparisons before moving each element directly to its final position, so it is great for writes, O(n), but not at all great for reads, O(n²). It is not well suited to general-purpose sorting, but can work well in specialized situations such as EEPROM, where writes are inordinately expensive.
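A sketch that counts selection-sort swaps; on any input it performs at most n−1 of them (the function name is illustrative):

def selection_sort_swap_count(a):
    a, swaps = list(a), 0
    for i in range(len(a) - 1):
        m = min(range(i, len(a)), key=lambda j: a[j])   # index of the minimum of a[i..]
        if m != i:
            a[i], a[m] = a[m], a[i]
            swaps += 1
    return swaps

print(selection_sort_swap_count([5, 4, 3, 2, 1]))   # 2 swaps here; never more than n - 1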
Question 22

In quick sort, for sorting n elements, the (n/4)th smallest element is selected as pivot using an O(n) time algorithm. What is the worst case time complexity of the quick sort? 

A
θ(n)
B
θ(n log n)
C
θ(n²)
D
θ(n² log n)
Question 22 Explanation: 

The pivot splits the n elements into pieces of size n/4 and 3n/4, so the longest recursion path shrinks by a factor of 4/3 at each level:
n → n/(4/3) → n/(4/3)² → n/(4/3)³ → ... → n/(4/3)^k = 1
n/(4/3)^k = 1 ⇒ n = (4/3)^k ⇒ k = log_{4/3} n   [k = no. of levels]
At each level the work done (the O(n) pivot selection plus partitioning) is at most Cn.
So, total work done = Cn ⋅ log_{4/3} n = Θ(n log n).
Question 23

Consider the Quicksort algorithm. Suppose there is a procedure for finding a pivot element which splits the list into two sub-lists each of which contains at least one-fifth of the elements. Let T(n) be the number of comparisons required to sort n elements. Then

A
T(n) ≤ 2T(n/5) + n
B
T(n) ≤ T(n/5) + T(4n/5) + n
C
T(n) ≤ 2T(4n/5) + n
D
T(n) ≤ 2T(n/2) + n
Question 23 Explanation: 
Consider the case where one sub-list contains n/5 elements and the other contains 4n/5 elements.
Then T(n/5) comparisons are needed for the first sub-list and T(4n/5) for the second, plus at most n comparisons for the partitioning step.
Now suppose one sub-list contains more than n/5 elements, so the other contains less than 4n/5. The recursion tree is then more balanced, and the number of comparisons is even smaller than
T(n/5) + T(4n/5) + n.
Hence T(n) ≤ T(n/5) + T(4n/5) + n.
Question 24

Which one of the following is the tightest upper bound that represents the number of swaps required to sort n numbers using selection sort?

A
O(log n)
B
O(n)
C
O(n log n)
D
O(n²)
Question 24 Explanation: 
In the best, average and worst case, selection sort needs at most O(n) swaps.
Its time complexity is O(n²) in terms of the number of comparisons, but each of the n−1 scans requires at most one swap (the final element is already in place).
Question 25

The number of elements that can be sorted in Θ(log n) time using heap sort is

A
Θ(1)
B
Θ(√(log n))
C
Θ(log n / log log n)
D
Θ(log n)
Question 25 Explanation: 
Heap sort on k elements takes Θ(k log k) time. Assume there are k = log n / log log n elements in the heap.
So, Θ((log n / log log n) · log(log n / log log n))
= Θ((log n / log log n) · (log log n − log log log n))
= Θ((log n / log log n) × log log n)
= Θ(log n)
Hence, option (C) is the correct answer.
Question 26

You have an array of n elements. Suppose you implement quicksort by always choosing the central element of the array as the pivot. Then the tightest upper bound for the worst case performance is

A
O(n²)
B
O(n log n)
C
Θ(n log n)
D
O(n³)
Question 26 Explanation: 
The worst-case time complexity of quicksort is O(n²), and choosing the central element as pivot does not change this: for any fixed, deterministic pivot rule there exist input orderings that make the chosen pivot the smallest or largest element of every sub-array, giving the recurrence T(n) = T(n−1) + O(n) = O(n²). (An already sorted array is not a worst case for the central pivot; the bad inputs simply look different.)
Question 27

Which one of the following in place sorting algorithms needs the minimum number of swaps?

A
Quick sort
B
Insertion sort
C
Selection sort
D
Heap sort
Question 27 Explanation: 
Selection sort requires the minimum number of swaps, i.e. O(n). The algorithm finds the minimum value, swaps it with the value in the first position, and repeats these steps for the remainder of the list. It does no more than n−1 swaps, and is thus useful where swapping is very expensive.
Question 28
In order to sort a list of numbers using the radix sort algorithm, we need to get the individual digits of each number n in the list. If n is a positive decimal integer, then the i-th digit, from the right, of the number n is:
A
B
C
D
Question 29
Find the lexicographic ordering of the bit strings given below based on the ordering 0 < 1. A) 001 B) 010 C) 011 D) 0001 E) 0101 Choose the correct answer from the options given below:
A
001<010<011<0001<0101
B
0001<001<010<0101<011
C
0001<0101<001<010<011
D
001<010<0001<0101<011
Question 29 Explanation: 
The lexicographic order of the given strings is:
0001 < 001 < 010 < 0101 < 011
Note: lexicographic order is simply dictionary order, comparing the strings character by character with 0 < 1.
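Python's built-in string comparison is exactly this character-by-character dictionary order with '0' < '1', so sorting reproduces the ordering above:

strings = ["001", "010", "011", "0001", "0101"]
print(sorted(strings))   # ['0001', '001', '010', '0101', '011']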
Question 30
Consider an array of positive integers between 123456 and 876543. Which sorting algorithm can be used to sort these numbers in linear time?
A
Impossible to sort in linear time
B
Radix Sort
C
Insertion Sort
D
Bubble Sort
Question 30 Explanation: 
Radix sort's worst-case time complexity is O(d(n+k)), where d is the number of digits and k is the base.
Here every key has at most 6 decimal digits, so d is a constant and the sort runs in linear time; more generally, radix sort remains linear even when the input values range up to about n^6 (by using a larger base such as n).
Question 31
The given array is arr = {1, 2, 4, 3}. Bubble sort is used to sort the array elements. How many passes will be done to sort the array?
A
4
B
2
C
1
D
3
Question 31 Explanation: 
In the worst case bubble sort needs n−1 passes, but for this particular input a single pass is enough to sort the array (only 4 and 3 need to be exchanged).

Note: Official answer key given, Option-D as the correct answer.
Question 32
In the code below reverse(A,i,j) takes an array A, indices i and j with i ≤ j, and reverses the segment A[i], A[i+1],. . . ,A[j]. For instance if A = [0,1,2,3,4,5,6,7] then, after we apply reverse(A,3,6), the contents of the array will be A = [0,1,2,6,5,4,3,7].
function mystery (A[0..99]) {
    int i, j, m;
    for (i = 0; i < 100; i++) {
        m = i;
        for (j = i; j < 100; j++) {
            if (A[j] > A[m]) {
                m = j;
            }
        }
        reverse(A, i, m);
    }
    return;
}
When the procedure terminates, the array A has been:
A
Sorted in descending order
B
Sorted in ascending order
C
Reversed
D
Left unaltered
Question 32 Explanation: 
This procedure is called pancake sort. Each iteration of the outer for loop finds the maximum value in A[i..99], and the call reverse(A,i,m) moves it to position A[i], so when the procedure terminates the array is sorted in descending order.
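A Python transcription of mystery() (illustrative only; the original is the C-like pseudocode above), showing that the array ends up in descending order:

def mystery(A):
    n = len(A)
    for i in range(n):
        m = max(range(i, n), key=lambda j: A[j])   # index of the maximum of A[i..n-1]
        A[i:m + 1] = A[i:m + 1][::-1]              # reverse(A, i, m)
    return A

print(mystery([0, 1, 2, 3, 4, 5, 6, 7]))   # [7, 6, 5, 4, 3, 2, 1, 0]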
Question 33
Of the following sorting algorithms, which has an execution time that is least dependent on the initial ordering of the input?
A
Insertion sort
B
Quick sort
C
Merge sort
D
Selection sort
Question 33 Explanation: 
Question 34
If the array A contains the items 10, 4, 7, 23, 67, 12 and 5 in that order, what will be the resultant array A after third pass of insertion sort?
A
67, 12, 10, 5, 4, 7, 23
B
4, 7, 10, 23, 67, 12, 5
C
4, 5, 7, 67, 10, 12, 23
D
10, 7, 4, 67, 23, 12, 5
Question 34 Explanation: 
Initial positions: 10, 4, 7, 23, 67, 12 , 5
Pass-1: 4, 10, 7, 23, 67, 12, 5
Pass-2: 4, 7, 10, 23, 67, 12, 5
Pass-3: 4, 7, 10, 23, 67, 12, 5 [no change, because 23 is already in its correct position relative to the sorted prefix]
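A short tracer for the same input, assuming the convention used above that pass k inserts the element at index k into the already-sorted prefix:

def insertion_sort_passes(a):
    a = list(a)
    for k in range(1, len(a)):
        x, j = a[k], k - 1
        while j >= 0 and a[j] > x:     # shift larger elements one place right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x
        print("after pass", k, ":", a)

insertion_sort_passes([10, 4, 7, 23, 67, 12, 5])   # pass 3 prints 4, 7, 10, 23, 67, 12, 5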
Question 35
The best case time complexity of insertion sort algorithms is ______
A
O(n)
B
O(1)
C
O(n²)
D
O(n log n)
Question 35 Explanation: 
The best case of insertion sort occurs when the elements in the array are already sorted. In that case only one comparison is made for each element, so for n elements roughly n comparisons are made in total and the time complexity is O(n).
Question 36
Which of the following sorting algorithms does in-place sorting with minimal space overhead?
A
Merge sort
B
Radix sort
C
Heap sort
D
Address Calculation Sort
Question 36 Explanation: 
An in-place sorting algorithm is one which requires only O(1) extra space to sort n elements.
Merge sort has a space complexity of O(n), so it is not an in-place sorting algorithm; heap sort sorts within the input array using only O(1) extra space.
There are 36 questions to complete.
