Sorting

Question 1

An array of 25 distinct elements is to be sorted using quicksort. Assume that the pivot element is chosen uniformly at random. The probability that the pivot element gets placed in the worst possible location in the first round of partitioning (rounded off to 2 decimal places) is _____.

A
0.08
B
0.01
C
1
D
8
       Algorithms       Sorting       GATE 2019
Question 1 Explanation: 
Step-1: Given, 25 distinct elements are to be sorted using quicksort.
Step-2: Pivot element = uniformly random.
Step-3: The worst possible location for the pivot is the first or the last position, i.e., the pivot is the smallest or the largest of the 25 elements.
Step-4: So there are 2 favourable choices among 25 distinct elements
= 2/25
= 0.08
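As a sanity check, the 2/25 value can be approximated by simulation (a sketch; the pivot's rank among the 25 distinct elements is modelled directly as a uniform random draw):

```python
import random

random.seed(0)
n, trials = 25, 100_000
# the pivot lands in the worst possible location iff it is the minimum
# or the maximum of the n distinct elements (rank 0 or n-1)
hits = sum(1 for _ in range(trials) if random.randrange(n) in (0, n - 1))
print(hits / trials)  # close to 2/25 = 0.08
```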
Question 2

There are n unsorted arrays: A1, A2, ..., An. Assume that n is odd. Each of A1, A2, ..., An contains n distinct elements. There are no common elements between any two arrays. The worst-case time complexity of computing the median of the medians of A1, A2, ..., An is

A
O(n)
B
O(n log n)
C
Ω(n2 log n)
D
O(n2)
       Algorithms       Sorting       GATE 2019
Question 2 Explanation: 
Finding the median of one unsorted array of n elements takes O(n) time, using a linear-time selection algorithm (quickselect is similar to quicksort, but it recurses into only one partition and skips the rest).
→ There are n arrays, so computing all n medians takes O(n) * O(n) = O(n²).
→ Finding the median of those n medians adds only O(n) more, so the total worst-case time is O(n²).
Note:
The question states that all elements are distinct and that no two arrays share a common element.
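A minimal sketch of the computation (statistics.median stands in for an O(n) selection routine; with a true linear-time select, the n medians cost O(n²) overall):

```python
from statistics import median

def median_of_medians(arrays):
    # one median per array (n arrays of n elements each), then the
    # median of those n medians
    return median(median(a) for a in arrays)

# n = 3 arrays of 3 distinct elements, no common elements (toy data)
arrays = [[9, 1, 5], [4, 8, 6], [2, 7, 3]]
print(median_of_medians(arrays))  # medians are 5, 6, 3 → result 5
```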
Question 3

The worst case running times of Insertion sort, Merge sort and Quick sort, respectively, are:

A
Θ(nlogn), Θ(nlogn), and Θ(n2)
B
Θ(n2 ), Θ(n2 ), and Θ(nlogn)
C
Θ(n2), Θ(nlogn), and Θ(nlogn)
D
Θ(n2), Θ(nlogn), and Θ(n2)
       Algorithms       Sorting       GATE 2016 [Set-1]
Question 3 Explanation: 
Question 4

Assume that the algorithms considered here sort the input sequences in ascending order. If the input is already in ascending order, which of the following are TRUE?

    I. Quicksort runs in Θ(n2) time
    II. Bubblesort runs in Θ(n2) time
    III. Mergesort runs in Θ(n) time
    IV. Insertion sort runs in Θ(n) time
A
I and II only
B
I and III only
C
II and IV only
D
I and IV only
       Algorithms       Sorting       GATE 2016 [Set-2]
Question 4 Explanation: 
If the input sequence is already sorted, Quicksort (with a first- or last-element pivot) takes O(n²), Bubble sort (with the early-exit optimization) takes O(n), Merge sort takes O(nlogn), and Insertion sort takes O(n). So statements I and IV are true.
→ The recurrence relation for Quicksort when the elements are already sorted is
T(n) = T(n-1) + O(n); by the substitution method this gives O(n²).
→ The recurrence relation for Merge sort (whether or not the elements are sorted) is
T(n) = 2T(n/2) + O(n); by the substitution method this gives O(nlogn).
We can also use the Master theorem [a=2, b=2, k=1, p=0] for the above recurrence.
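The Θ(n) behaviour of insertion sort on sorted input can be checked by counting comparisons (a sketch with an instrumented sort):

```python
def insertion_sort_comparisons(a):
    # insertion sort that counts element comparisons
    a = list(a)
    comps = 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0:
            comps += 1
            if a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            else:
                break
        a[j + 1] = key
    return a, comps

_, c = insertion_sort_comparisons(list(range(10)))
print(c)  # 9 comparisons (= n-1) on already-sorted input of n = 10
```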
Question 5

You have an array of n elements. Suppose you implement quicksort by always choosing the central element of the array as the pivot. Then the tightest upper bound for the worst case performance is

A
O(n2)
B
O(n log n)
C
Θ(n log⁡n)
D
O(n3)
       Algorithms       Sorting       GATE 2014 [Set-3]
Question 5 Explanation: 
The worst-case time complexity of quicksort is O(n²) regardless of which fixed position supplies the pivot: for any such rule there exist inputs on which the chosen pivot is always the smallest or largest remaining element, so the partitions stay maximally unbalanced. (An already sorted array is actually a good case for the central pivot — it gives balanced splits — but other orderings still force the O(n²) behaviour, so O(n²) remains the tightest upper bound.)
Question 6

Which one of the following is the tightest upper bound that represents the number of swaps required to sort n numbers using selection sort?

A
O(log n)
B
O(n)
C
O(n log n)
D
O(n2)
       Algorithms       Sorting       GATE 2013
Question 6 Explanation: 
In the best, average and worst cases, selection sort performs at most O(n) swaps.
Selection sort makes O(n²) comparisons, but each of its n-1 scans ends in at most one swap (the final element is already in place), so the tightest upper bound on the number of swaps is O(n).
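An instrumented selection sort (a sketch) shows the swap count never exceeds n-1:

```python
def selection_sort_swaps(a):
    # selection sort that counts swaps: at most one swap per outer pass
    a = list(a)
    swaps = 0
    for i in range(len(a) - 1):
        m = min(range(i, len(a)), key=a.__getitem__)
        if m != i:
            a[i], a[m] = a[m], a[i]
            swaps += 1
    return a, swaps

s, k = selection_sort_swaps([5, 4, 3, 2, 1])
print(s, k)  # sorted using at most n-1 = 4 swaps
```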
Question 7

Which one of the following is the tightest upper bound that represents the time complexity of inserting an object into a binary search tree of n nodes?

A
O(1)
B
O(log n)
C
O(n)
D
O(n log n)
       Algorithms       Sorting       GATE 2013
Question 7 Explanation: 
For skewed binary search tree on n nodes, the tightest upper bound to insert a node is O(n).
Question 8

What is the time complexity of Bellman-Ford single-source shortest path algorithm on a complete graph of n vertices?

A
Θ(n2)
B
Θ(n2 log n)
C
Θ(n3)
D
Θ(n3 log n)
       Algorithms       Sorting       GATE 2013
Question 8 Explanation: 
The Bellman–Ford algorithm is an algorithm that computes shortest paths from a single source vertex to all of the other vertices in a weighted digraph. It is capable of handling graphs in which some of the edge weights are negative numbers.
Bellman–Ford runs in O(|V| * |E|) time, where |V| and |E| are the number of vertices and edges respectively.
Note: For a complete graph, |E| = n(n-1)/2 = O(n²), so the running time is O(n) * O(n²) = Θ(n³).
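A minimal Bellman-Ford sketch showing the O(|V|·|E|) structure (the example graph is made up for illustration):

```python
def bellman_ford(n, edges, src):
    # relax every edge |V|-1 times: O(|V| * |E|) overall
    INF = float('inf')
    dist = [INF] * n
    dist[src] = 0
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

edges = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 1), (2, 3, 5)]
print(bellman_ford(4, edges, 0))  # [0, 3, 1, 4]
```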
Question 9

The number of elements that can be sorted in Θ(log n) time using heap sort is

A
Θ(1)
B
Θ(√(log n))
C
Θ(log n / log log n)
D
Θ(log n)
       Algorithms       Sorting       GATE 2013
Question 9 Explanation: 
Heap sort on k elements takes Θ(k log k) time. Take k = log n / log log n elements in the heap.
Then Θ((log n/log log n) · log(log n/log log n))
= Θ((log n/log log n) · (log log n − log log log n))
= Θ((log n/log log n) × log log n)
= Θ(log n)
Hence, option (C) is correct answer.
Question 10

What is the number of swaps required to sort n elements using selection sort, in the worst case?

A
θ(n)
B
θ(n log n)
C
θ(n2)
D
θ(n2 logn)
       Algorithms        Sorting       GATE 2009
Question 10 Explanation: 
Selection sort – There is no specifically bad input for selection sort: in the kth iteration it finds the index of the kth minimum element and, with one swap, places that element in its correct position. Over its n-1 iterations, selection sort therefore performs at most Θ(n) swaps. The algorithm does many comparisons before moving each element, but it moves each element directly to its final intended position; once an element is swapped into place, it is never moved again. This makes selection sort excellent for writes, O(n), but poor for reads, O(n²). It is not well suited to general-purpose sorting, but can work well in specialized situations such as EEPROM, where writes are inordinately expensive.
Question 11

In quick sort, for sorting n elements, the (n/4)th smallest element is selected as pivot using an O(n) time algorithm. What is the worst case time complexity of the quick sort? 

A
θ(n)
B
θ(n log n)
C
θ(n2)
D
θ(n2 log n)
       Algorithms        Sorting       GATE 2009
Question 11 Explanation: 

The pivot splits the array into parts of size n/4 and 3n/4, so the deepest recursion path shrinks n by a factor of 3/4 per level:
n → n/(4/3) → n/(4/3)² → n/(4/3)³ → ... → n/(4/3)^k = 1
n/(4/3)^k = 1 ⇒ n = (4/3)^k ⇒ k = log4/3 n   [k = number of levels]
At each level the work done (partitioning plus the O(n) pivot selection) is Cn.
So the total work done is Cn · log4/3 n = Θ(n log n).
Question 12

Consider the Quicksort algorithm. Suppose there is a procedure for finding a pivot element which splits the list into two sub-lists each of which contains at least one-fifth of the elements. Let T(n) be the number of comparisons required to sort n elements. Then

A
T(n) ≤ 2T(n/5) + n
B
T(n) ≤ T(n/5) + T(4n/5) + n
C
T(n) ≤ 2T(4n/5) + n
D
T(n) ≤ 2T(n/2) + n
       Algorithms       Sorting       GATE 2008
Question 12 Explanation: 
Consider the case where one part contains n/5 elements and the other part contains 4n/5 elements.
Then T(n/5) comparisons are needed for the first part and T(4n/5) for the second, plus n comparisons for the partitioning itself.
If one part contains more than n/5 elements, the other contains fewer than 4n/5, and the recursion tree is more balanced, so the number of comparisons can only be smaller. Hence
T(n) ≤ T(n/5) + T(4n/5) + n.
Question 13

Which one of the following in place sorting algorithms needs the minimum number of swaps?

A
Quick sort
B
Insertion sort
C
Selection sort
D
Heap sort
       Algorithms        Sorting       GATE 2006
Question 13 Explanation: 
Selection sort requires minimum number of swaps i.e O(n). The algorithm finds the minimum value, swaps it with the value in the first position, and repeats these steps for the remainder of the list. It does no more than n swaps, and thus is useful where swapping is very expensive.
Question 14

In a permutation a1...an of n distinct integers, an inversion is a pair (ai, aj) such that i < j and ai > aj.

If all permutations are equally likely, what is the expected number of inversions in a randomly chosen permutation of 1...n ?

A
n(n-1)/2
B
n(n-1)/4
C
n(n+1)/4
D
2n[log2n]
       Algorithms        Sorting       GATE 2003
Question 14 Explanation: 
Each pair (ai, aj) with i < j is an inversion with probability 1/2, and there are n(n-1)/2 such pairs.
Expected number of inversions = (1/2) × (n(n-1)/2) = n(n-1)/4.
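A quick simulation (a sketch; the value of n and the trial count are arbitrary) agrees with n(n-1)/4:

```python
import random
from itertools import combinations

def inversions(p):
    # count pairs (i, j) with i < j and p[i] > p[j]
    return sum(1 for i, j in combinations(range(len(p)), 2) if p[i] > p[j])

random.seed(0)
n, trials = 6, 20_000
avg = sum(inversions(random.sample(range(n), n)) for _ in range(trials)) / trials
print(avg, n * (n - 1) / 4)  # empirical mean vs. the exact value 7.5
```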
Question 15

In a permutation a1...an of n distinct integers, an inversion is a pair (ai, aj) such that i < j and ai > aj.

What would be the worst case time complexity of the Insertion Sort algorithm, if the inputs are restricted to permutations of 1...n with at most n inversions?

A
Θ(n2)
B
Θ(n log n)
C
Θ(n1.5)
D
Θ(n)
       Algorithms        Sorting       GATE 2003
Question 15 Explanation: 
Insertion sort runs in O(n + d) time, where d is the number of inversions in the input. With the inputs restricted to permutations of 1...n with at most n inversions, this is O(n + n), so the worst-case time complexity of insertion sort reduces to Θ(n).
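The link between insertion sort's running time and inversions can be seen by counting element shifts (a sketch; the example permutation is illustrative):

```python
def insertion_sort_shifts(a):
    # each shift in the inner loop removes exactly one inversion
    a = list(a)
    shifts = 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
            shifts += 1
        a[j + 1] = key
    return a, shifts

# a permutation of 1..8 with exactly 3 inversions
s, shifts = insertion_sort_shifts([2, 1, 3, 5, 4, 6, 8, 7])
print(shifts)  # 3 shifts, plus O(n) loop overhead → Θ(n) when inversions ≤ n
```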
Question 16

Randomized quicksort is an extension of quicksort where the pivot is chosen randomly. What is the worst case complexity of sorting n numbers using randomized quicksort?

A
O(n)
B
O(n log n)
C
O(n2)
D
O(n!)
       Algorithms        Sorting       GATE 2001
Question 16 Explanation: 
In the worst case, the time complexity of randomized quicksort is the same as that of ordinary quicksort, O(n²): randomization makes the worst case unlikely, but a run of bad random pivots can still produce maximally unbalanced partitions.
Question 17

Let s be a sorted array of n integers. Let t(n) denote the time taken for the most efficient algorithm to determined if there are two elements with sum less than 1000 in s. Which of the following statements is true?

A
t(n) is O(1)
B
n ≤ t(n) ≤ n log2 n
C
n log2 n ≤ t(n) < (n/2)
D
t(n) = (n/2)
       Algorithms        Sorting       GATE 2000
Question 17 Explanation: 
Since the array is sorted, just pick the first two elements (the two minimum values) and check whether their sum is less than 1000. If it is, we have found such a pair; if not, no other pair can have a smaller sum, so no such pair exists. This takes constant time, so t(n) is O(1) and the correct option is (A).
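The constant-time check described above, as a sketch:

```python
def has_pair_with_sum_below(s, limit=1000):
    # s is sorted ascending: s[0] + s[1] is the smallest possible pair sum,
    # so it alone decides the answer in O(1) time
    return len(s) >= 2 and s[0] + s[1] < limit

print(has_pair_with_sum_below([3, 900, 1200]))   # True  (3 + 900 < 1000)
print(has_pair_with_sum_below([600, 700, 800]))  # False (600 + 700 >= 1000)
```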
Question 18

A sorting technique is called stable if

A
it takes O (nlog n) time
B
it maintains the relative order of occurrence of non-distinct elements
C
it uses divide and conquer paradigm
D
it takes O(n) space
       Algorithms        Sorting       GATE 1999
Question 18 Explanation: 
A sorting technique is said to be stable if it maintains the relative order of occurrence of non-distinct elements.
Question 19

If one uses straight two-way merge sort algorithm to sort the following elements in ascending order:

   20, 47, 15, 8, 9, 4, 40, 30, 12, 17 

then the order of these elements after second pass of the algorithm is:

A
8, 9, 15, 20, 47, 4, 12, 17, 30, 40
B
8, 15, 20, 47, 4, 9, 30, 40, 12, 17
C
15, 20, 47, 4, 8, 9, 12, 30, 40, 17
D
4, 8, 9, 15, 20, 47, 12, 17, 30, 40
       Algorithms        Sorting       GATE 1999
Question 19 Explanation: 
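The passes can be traced with a short sketch (helper names are illustrative); after two passes the result is option B:

```python
def merge(a, b):
    # standard merge of two sorted runs
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def merge_pass(runs):
    # one pass of straight two-way merge sort: merge adjacent runs pairwise
    out = []
    for k in range(0, len(runs), 2):
        pair = runs[k:k + 2]
        out.append(merge(pair[0], pair[1]) if len(pair) == 2 else pair[0])
    return out

runs = [[x] for x in [20, 47, 15, 8, 9, 4, 40, 30, 12, 17]]
runs = merge_pass(runs)  # after pass 1: sorted pairs
runs = merge_pass(runs)  # after pass 2: sorted runs of length 4 (plus leftover)
print(sum(runs, []))     # [8, 15, 20, 47, 4, 9, 30, 40, 12, 17] → option B
```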
Question 20

Merge sort uses

A
Divide and conquer strategy
B
Backtracking approach
C
Heuristic search
D
Greedy approach
       Algorithms       Sorting       GATE 1995
Question 20 Explanation: 
Merge sort uses the divide and conquer strategy.
Question 21

Following algorithm(s) can be used to sort n integers in the range [1...n3] in O(n) time

A
Heapsort
B
Quicksort
C
Mergesort
D
Radixsort
       Algorithms       Sorting       GATE 1992
Question 21 Explanation: 
No comparison-based sort can ever do better than O(nlogn) — Ω(nlogn) is the lower bound for comparison-based sorting — so options (A), (B) and (C) are eliminated.
Radix sort is not comparison-based (it uses counting sort on each digit as a subroutine), and each integer in the range [1...n³] has at most 3 digits in base n, so it sorts in O(3n) = O(n) time.
Question 22

The minimum number of comparisons required to sort 5 elements is _____

A
7
       Algorithms       Sorting       GATE 1991
Question 22 Explanation: 
Minimum no. of comparisons
= ⌈log(n!)⌉
= ⌈log(5!)⌉
= ⌈log(120)⌉
= 7
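The bound can be evaluated directly (and for n = 5 it is actually achievable, e.g. by the Ford-Johnson merge-insertion algorithm):

```python
import math

# information-theoretic lower bound: ceil(log2(n!)) comparisons
n = 5
print(math.ceil(math.log2(math.factorial(n))))  # ceil(log2 120) = 7
```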
Question 23

Quicksort is ________ efficient than heapsort in the worst case.

A
LESS.
       Algorithms        Sorting       GATE 1988
Question 23 Explanation: 
As worst case time for quicksort is O(n2) and worst case for heap sort is O(n logn).
Question 24

Let P be a quicksort program to sort numbers in ascending order. Let t1 and t2 be the time taken by the program for the inputs [1 2 3 4] and [5 4 3 2 1] respectively. Which of the following holds?

A
t1 = t2
B
t1 > t2
C
t1 < t2
D
t1 = t2 + 5 log 5
       Algorithms        Sorting       GATE 1987
Question 24 Explanation: 
Both inputs are already sorted, which is the worst case for quicksort (with a first- or last-element pivot), and the second input has more elements (5 vs. 4), so t1 < t2.
Question 25

If we use Radix Sort to sort n integers in the range [nk/2, nk], for some k>0 which is independent of n, the time taken would be?

A
Θ(n)
B
Θ(kn)
C
Θ(nlogn)
D
Θ(n2)
       Algorithms        Sorting       GATE 2008-IT
Question 25 Explanation: 
Time complexity of radix sort = Θ(wn),
where n = number of keys and w = word size (number of digits).
w = log2(n^k) = k × log2(n)
Complexity Θ(wn) = Θ(k × log2(n) × n) = Θ(n log n), since k is a constant independent of n.
Question 26

Let a and b be two sorted arrays containing n integers each, in non-decreasing order. Let c be a sorted array containing 2n integers obtained by merging the two arrays a and b. Assuming the arrays are indexed starting from 0, consider the following four statements

1. a[i] ≥ b [i] => c[2i] ≥ a [i]
2. a[i] ≥ b [i] => c[2i] ≥ b [i]
3. a[i] ≥ b [i] => c[2i] ≤ a [i]
4. a[i] ≥ b [i] => c[2i] ≤ b [i] 
Which of the following is TRUE?

A
only I and II
B
only I and IV
C
only II and III
D
only III and IV
       Programming       Sorting       GATE 2005-IT
Question 26 Explanation: 
a[i] ≥ b[i]
Since both 'a' and 'b' are sorted, there are i elements smaller than or equal to a[i] in a (namely a[0..i-1]) and likewise i elements smaller than or equal to b[i] in b. So a[i] ≥ b[i] means at least 2i elements are smaller than or equal to a[i], and hence in the merged array a[i] comes at or after position 2i. So c[2i] ≤ a[i] — statement 3 holds.
Similarly, a[i] ≥ b[i] implies there are at most 2i elements smaller than b[i] in the merged array (at most i from each of a and b). So b[i] ≤ c[2i], i.e. c[2i] ≥ b[i] — statement 2 holds.
So, option (C) is correct.
Question 27
Given two sorted list of size m and n respectively. The number of comparisons needed the worst case by the merge sort algorithm will be
A
m*n
B
maximum of m and n
C
minimum of m and n
D
m+n–1
       Algorithms       Sorting       ISRO-2018
Question 27 Explanation: 
The maximum number of comparisons needed to merge two sorted lists of sizes m and n is m+n-1.
For example, take two sub-arrays of size 3, 1,3,5 and 2,4,6. Merging them requires 5 comparisons, which is m+n-1.
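The m+n-1 worst case from the example above can be reproduced with an instrumented merge (a sketch):

```python
def merge_count(a, b):
    # merge two sorted lists, counting element comparisons
    out, i, j, comps = [], 0, 0, 0
    while i < len(a) and j < len(b):
        comps += 1
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:], comps

merged, c = merge_count([1, 3, 5], [2, 4, 6])
print(c)  # 5 comparisons = m + n - 1 for this interleaved worst case
```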
Question 28
Of the following sorting algorithms, which has a running time that is least dependent on the initial ordering of the input?
A
Merge Sort
B
Insertion Sort
C
Selection Sort
D
Quick Sort
       Algorithms       Sorting       ISRO-2018
Question 28 Explanation: 
Question 29
Selection sort algorithm design technique is an example of
A
Greedy method
B
Divide-and-conquer
C
Dynamic Programming
D
Backtracking
       Algorithms       Sorting       ISRO-2007
Question 29 Explanation: 
The selection sort algorithm sorts an array by repeatedly finding the minimum element (considering ascending order) from unsorted part and putting it at the beginning. The algorithm maintains two subarrays in a given array.
1) The subarray which is already sorted.
2) Remaining subarray which is unsorted.
In every iteration of selection sort, the minimum element (considering ascending order) from the unsorted subarray is picked and moved to the sorted subarray. Clearly, it is a greedy approach to sort the array.
Question 30
The average case and worst case complexities for Merge sort algorithm are
A
O(n2), O(n2)
B
O(n2), O(nlog2n)
C
O(n log2n), O(n2)
D
O(n log2n), O(n log2n)
       Algorithms       Sorting       ISRO-2007
Question 30 Explanation: 
The best case, average case and worst case complexities of the Merge sort algorithm are all O(n log2n).
Question 31
Which one of the following in-place sorting algorithms needs the minimum number of swaps?
A
Insertion Sort
B
Quick Sort
C
Heap Sort
D
Selection Sort
       Algorithms       Sorting       ISRO-2017 May
Question 31 Explanation: 
Selection sort requires minimum number of swaps i.e O(n)

Question 32
The number of swappings needed to sort the numbers 8, 22, 7, 9, 31, 5, 13 in ascending order, using bubble sort is
A
11
B
12
C
13
D
10
       Algorithms       Sorting       ISRO-2017 May
Question 32 Explanation: 
Step-1: 8,7,22,9,31,5,13
Step-2: 8,7,9,22,31,5,13
Step-3: 8,7,9,22,5,31,13
Step-4: 8,7,9,22,5,13,31
Step-5: 7,8,9,22,5,13,31
Step-6: 7,8,9,5,22,13,31
Step-7: 7,8,9,5,13,22,31
Step-8: 7,8,5,9,13,22,31
Step-9: 7,5,8,9,13,22,31
Step-10: 5,7,8,9,13,22,31
Note: A total of 10 swaps are required to sort the array.
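The 10-swap count equals the number of adjacent swaps bubble sort performs (its inversion count), which an instrumented version confirms (a sketch):

```python
def bubble_sort_swaps(a):
    # bubble sort counting swaps; the total equals the number of inversions
    a = list(a)
    swaps = 0
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swaps += 1
    return a, swaps

s, k = bubble_sort_swaps([8, 22, 7, 9, 31, 5, 13])
print(k)  # 10
```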
Question 33
How many comparisons are needed to sort an array of length 5 if a straight selection sort is used and array is already in the opposite order?
A
1
B
5
C
10
D
20
       Algorithms       Sorting       ISRO CS 2008
Question 33 Explanation: 
Consider the array: 5 4 3 2 1
1st iteration compares 4 numbers with the 5
2nd iteration compares 3 numbers with the 4
3rd iteration compares 2 numbers with the 3
4th iteration compares 1 number with the 2
So, the total number of comparisons is 4 + 3 + 2 + 1 = 10.
This is the sum of the first (n-1) positive integers:
S = ((1 + (n-1)) × (n-1))/2 = (n(n-1))/2
S = 10
Question 34
Algorithm design technique used in quicksort algorithm is?
A
Dynamic programming
B
Backtracking
C
Divide and conquer
D
Greedy method
       Algorithms       Sorting       ISRO-2016
Question 34 Explanation: 
Divide and conquer sorting algorithms:
1. Quick sort
2. Merge sort
(Heap sort is not divide and conquer — it is based on the heap data structure.)
****Binary search also follows divide and conquer.
Question 35
Which one of the following in place sorting algorithms needs the minimum number of swaps?
A
Quick sort
B
Insertion sort
C
Selection sort
D
Heap sort
       Database-Management-System       Sorting       ISRO CS 2011
Question 35 Explanation: 
Selection sort requires the minimum number of swaps, i.e., O(n).
The algorithm finds the minimum value, swaps it with the value in the first position, and repeats these steps for the remainder of the list. It does no more than n swaps, and thus is useful where swapping is very expensive.
Question 36
Which of the following algorithm design technique is used in merge sort?
A
Greedy method
B
Backtracking
C
Dynamic programming
D
Divide and Conquer
       Algorithms       Sorting       ISRO CS 2011
Question 36 Explanation: 
Merge sort is a divide-and-conquer algorithm based on the idea of breaking down a list into several sub-lists until each sublist consists of a single element and merging those sublists in a manner that results into a sorted list.
Conceptually, a merge sort works as follows:
1. Divide the unsorted list into n sublists, each containing one element (a list of one element is considered sorted).
2. Repeatedly merge sublists to produce new sorted sublists until there is only one sublist remaining. This will be the sorted list.
Question 37
Which of the following sorting algorithms has the minimum running time complexity in the best and average case?
A
Insertion sort, Quick sort
B
Quick sort, Quick sort
C
Quick sort, Insertion sort
D
Insertion sort, Insertion sort
       Algorithms       Sorting       ISRO CS 2013
Question 37 Explanation: 
Question 38
Consider the following sorting algorithms.
  1. Quicksort
  2. Heapsort
  3. Mergesort
Which of them perform in least time in the worst case?
A
1 and 2 only
B
2 and 3 only
C
3 only
D
1, 2 and 3
       Algorithms       Sorting       ISRO CS 2014
Question 38 Explanation: 
Question 39
The maximum number of comparisons needed to sort 9 items using radix sort is (assume each item is 5 digit octal number) :
A
45
B
72
C
360
D
450
       Algorithms       Sorting       UGC-NET CS 2018 JUNE Paper-2
Question 39 Explanation: 
Total sort items = 9
Octal number having → 5 digits
The octal number system base value = 8
The maximum number of comparisons = (number of items) × (number of digits) × (radix)
= 9 × 5 × 8
= 360
Question 40
Selection sort, quick sort — is each a stable sorting method?
A
True,True
B
False, False
C
True,False
D
False,False
       Algorithms       Sorting       Nielit Scientist-B IT 4-12-2016
Question 40 Explanation: 
Neither is stable in its typical implementation: selection sort's long-distance swaps and quicksort's partitioning can both reorder equal elements. So the answer is False, False.
Question 41
Which of the following sorting procedures is the slowest?
A
Quick sort
B
Merge sort
C
Shell sort
D
Bubble sort
       Algorithms       Sorting       Nielit Scientist-B IT 4-12-2016
Question 41 Explanation: 
Bubble sort executes in O(n²) time in the worst case, performing O(n²) comparisons and swaps, so among the given procedures bubble sort is the slowest.
Question 42
The worst case running times of insertion sort, merge sort and quick sort respectively are
A
O(nlogn),O(nlogn) and O(n2)
B
O(n2),O(n2) and O(nlogn)
C
O(n2),O(nlogn) and O(nlogn)
D
O(n2),O(nlogn) and O(n2)
       Algorithms       Sorting       Nielit Scientist-B CS 22-07-2017
Question 42 Explanation: 
→ Worst-case running time of insertion sort → O(n²)
→ merge sort → O(nlogn)
→ quick sort → O(n²)
Question 43
Which of the following sorting algorithms does not have a worst case running time of O(n²)?
A
Insertion sort
B
merge sort
C
Quick sort
D
bubble sort
       Algorithms       Sorting       Nielit Scientist-B CS 2016 march
Question 43 Explanation: 
Question 44
To sort many large object or structures, it would be most efficient to
A
Place references to them in an array and sort the array
B
place them in a linked list and sort the linked list
C
place pointers to them in an array and sort the array
D
place them in an array and sort the array
       Algorithms       Sorting       NieLit STA 2016 March 2016
Question 44 Explanation: 
Option-A is suitable for a small number of objects but not for large ones.
Option-B is not correct because sorting a linked list adds extra time complexity and makes searching difficult.
Option-D is suitable for a small number of objects/elements, since swapping large structures directly is expensive. Sorting an array of pointers moves only the pointers, so option (C) is most efficient.
Question 45
The running time of quick sort algorithm depends heavily on the selection of:
A
No. of inputs
B
Arrangement of elements in an array
C
Size of elements
D
Pivot element
       Algorithms       Sorting       Nielit Scientist-B CS 4-12-2016
Question 45 Explanation: 
The running time of Quicksort depends on how balanced the partitions are. If you are unlucky and select the greatest or the smallest element as the pivot, each partition separates only one element at a time, and the running time degrades to that of insertion sort. Usually, however, Quicksort picks a pivot that is mid-range and partitions the array into two comparable parts.
Question 46
A machine needs a minimum of 100 sec to sort 1000 names by quick sort. The minimum time needed to sort 100 names will be approximately
A
50.2 sec
B
6.7 sec
C
72.7 sec
D
11.2 sec
       Algorithms       Sorting       ISRO CS 2015
Question 46 Explanation: 
We need the minimum time, so we use quicksort's best/average-case running time, c·n·logn (the worst case is O(n²)).
Step-1: Time taken to sort 1000 names by quicksort:
100 = c × 1000 × log 1000
⇒ c = 100/(1000 × log 1000) = 1/(10 × log 1000)
Step-2: Time taken to sort 100 names by quicksort:
= c × 100 × log 100
= (1/(10 × log 1000)) × 100 × log 100
≈ 6.7 sec
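The arithmetic can be checked directly (base-10 logs; any base gives the same ratio):

```python
import math

c = 100 / (1000 * math.log10(1000))  # constant fitted from the 1000-name run
t = c * 100 * math.log10(100)        # predicted time for 100 names
print(round(t, 1))  # 6.7 seconds
```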
Question 47
If one uses straight two-way merge sort algorithm to sort the following elements in ascending order:
20, 47, 15, 8, 9, 4, 40, 30, 12, 17
then the order of these elements after second pass of the algorithm is:
A
8, 9, 15, 20, 47, 4, 12, 17, 30, 40
B
8, 15, 20, 47, 4, 9, 30, 40, 12, 17
C
15, 20, 47, 4, 8, 9, 12, 30, 40, 17
D
4, 8, 9, 15, 20, 47, 12, 17, 30, 40
       Algorithms       Sorting       ISRO CS 2015
Question 47 Explanation: 
Question 48

Which of the following data structures is most suitable for radix sort?

A
Stack
B
Binary search tree
C
Tree
D
Linked list
       Algorithms       Sorting       JT(IT) 2018 PART-B Computer Science
Question 48 Explanation: 
→ Radix sort distributes the keys into buckets by digit value and then collects the buckets in order, repeating for each digit position.
→ Linked lists suit the buckets well: each bucket can grow without pre-allocated space, and appending a key or concatenating the buckets takes constant time. Hence the linked list is the most suitable data structure.
Question 49
Which of the following algorithms use recursion for sorting an array of integers?
A
Bubble sort and Insertion sort
B
Bubble sort and Quicksort
C
Bubble sort and merge sort
D
Quicksort and merge sort
       Algorithms       Sorting       KVS 22-12-2018 Part-B
Question 49 Explanation: 
Recursion is used in quicksort and merge sort.
Both are based on the divide and conquer strategy and work in quite similar ways.
In both algorithms, the larger problem is divided into smaller subproblems until they are trivially solvable, and the smaller solutions are then combined into the final solution.
Question 50

Consider an array with n elements. Which of the following options gives the maximum number of swap(aj, aj+1) operations to sort the array elements using efficient bubble sort algorithm?

A
n2 /2
B
n(n-1)/2
C
n2
D
n(n+1)/2
       Algorithms       Sorting       JT(IT) 2018 PART-B Computer Science
Question 50 Explanation: 
→ The worst case is an array that is already sorted, but in descending order.
→ In the first pass bubble sort compares (and swaps) n-1 adjacent pairs; after that the largest element is at the end, so the next pass needs n-2 comparisons, and so on down to 1.
→ For a descending input every comparison causes a swap, so the total number of swaps is (n-1) + ... + 2 + 1
= (n(n-1))/2
= O(n²)
Question 51
Which of the following search algorithm requires a sorted array?
A
Linear search
B
Hash search
C
Binary search
D
All of these
       Algorithms       Sorting       KVS DEC-2013
Question 51 Explanation: 
→ Linear search examines the elements one by one and does not require a sorted array.
→ Binary search requires the elements to be in sorted order: each step compares the key with the middle element and continues in the half that could contain it.
Question 52
Sorting is
A
A process of rearranging a given set of objects in a specific order
B
To facilitate the later search for members of the sorted set
C
Is a relevant and essential activity, particularly in data processing
D
All of these
       Algorithms       Sorting       KVS DEC-2013
Question 52 Explanation: 
Sorting is any process of arranging items systematically, and has two common, yet distinct meanings:
● ordering: arranging items in a sequence ordered by some criterion;
● categorizing: grouping items with similar properties.
The most common uses of sorted sequences are:
● making lookup or search efficient;
● making merging of sequences efficient.
● enable processing of data in a defined order.
Question 53
The figure below represents a ____ sort
A
Bubble
B
Shake
C
Tree
D
Insertion
       Algorithms       Sorting       KVS DEC-2013
Question 53 Explanation: 
Bubble sorting is a simple sorting algorithm that repeatedly steps through the list, compares adjacent pairs and swaps them if they are in the wrong order.
Question 54
Which of the following sorting algorithms uses recursion?
A
Insertion sort
B
Heap sort
C
Merge sort
D
Bubble sort
       Algorithms       Sorting       KVS DEC-2017
Question 54 Explanation: 
Of the given options, merge sort uses recursion: it recursively sorts the two halves of the array and then merges them. (Quicksort is the other classic recursive sorting algorithm, but it is not among the options.)
Question 55
Consider an array representation of an n element binary heap where the elements are stored from index 1 to index n of the array. For the element stored at index i of the array (i<=n), the index of the parent is:
A
floor ((i+1)/2)
B
ceiling ((i+1)/2)
C
floor (i/2)
D
ceiling (i/2)
       Algorithms       Sorting       UGC NET CS 2017 Nov- paper-2
Question 55 Explanation: 
Parent node is at index: ⌊i/2⌋
Left child is at index: 2i
Right child is at index: 2i + 1
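The index arithmetic for a 1-based array heap, as a small sketch:

```python
def parent(i):
    return i // 2  # floor(i/2), valid for i >= 2 in a 1-based heap

def left(i):
    return 2 * i

def right(i):
    return 2 * i + 1

print(parent(5), left(3), right(3))  # 2 6 7
```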
Question 56
A list of n strings, each of length n, is sorted into lexicographic order using merge – sort algorithm. The worst case running time of this computation is :
A
O(n log n)
B
O(n² log n)
C
O(n² + log n)
D
O(n³)
       Algorithms       Sorting       UGC NET CS 2017 Nov- paper-2
Question 56 Explanation: 
1. The comparisons strictly follow dictionary (lexicographic) order.
2. Each string has length n, so a single comparison takes O(n) time.
3. Merge sort performs O(n) comparisons per level of merging, and copying a level also costs O(n) string moves; with O(n) cost per comparison, each level costs O(n) × O(n) = O(n²).
4. There are log n levels of merging.
Final: (log n) × O(n²) = O(n² log n)
Question 57
Given the array: 12,40,3,2 and intermediate states of the array while sorting
Stage (i) 12,3,40,2
Stage (ii) 12,3,2,40
Stage (iii) 3,12,2,40
Stage (iv) 3,2,12,40
Stage (v) 2,3,12,40
The array has been sorted using
A
Insertion sort
B
Selection sort
C
Quick sort
D
Bubble sort
       Algorithms       Sorting       KVS 30-12-2018 Part B
Question 57 Explanation: 
→ In bubble sort, we compare each pair of adjacent elements and swap them if the first is larger.
→ The stages above are exactly the successive swaps bubble sort performs over its passes.
→ After each completed pass of bubble sort, the largest remaining element reaches its final position.
Question 58
An ideal sort is an in-place-sort whose additional space requirement is __________.
A
O(log2 n)
B
O(n log2 n)
C
O (1)
D
O (n)
       Algorithms       Sorting       UGC NET CS 2015 Dec- paper-2
Question 58 Explanation: 
The ideal sorting algorithm would have the following properties:
1. Stable: Equal keys aren't reordered.
2. Operates in place, requiring O(1) extra space.
3. Worst case O(nlog(n)) key comparisons.
4. Worst case O(n) swaps.
5. Adaptive: Speeds up to O(n) when data is nearly sorted or when there are few unique keys.
Question 59
Which of the following algorithms sorts n integers, having the range 0 to (n² - 1), in ascending order in O(n) time?
A
Selection sort
B
Bubble sort
C
Radix sort
D
Insertion sort
       Algorithms       Sorting       UGC NET CS 2015 Jun- paper-2
Question 59 Explanation: 
Consider sorting n integers in the range 0 to n² - 1 (radix sort with base n, i.e. two bucket-sort passes). We do it in two phases.
Phase 1:
1. We use n bins, one for each of the integers 0, 1, ..., n-1. We place each integer i of the list to be sorted into the bin numbered (i mod n); each bin then contains a list of integers leaving the same remainder when divided by n.
2. At the end, we concatenate the bins in order to obtain a list L.
Phase 2:
1. The integers on the list L are redistributed into bins, but now using the bin selection function ⌊i/n⌋.
2. Now append integers to the ends of the lists. Finally, concatenate the lists to get the final sorted sequence.
Question 60
How much extra space is used by heapsort ?
A
O(1)
B
O(Log n)
C
O(n)
D
O(n²)
       Algorithms       Sorting       UGC NET CS 2004 Dec-Paper-2
Question 60 Explanation: 
→ Heap sort uses the Max-Heapify routine, which is often written recursively but can be implemented with a simple while loop; the iterative form needs no recursion stack, so the space complexity of heap sort can be reduced to O(1).
→ Heapify takes O(logn) time in the worst case, not O(logn) space, when written iteratively; no auxiliary array is required, since the sort works in place on the input array.
So, heap sort can be done with O(1) extra space.
Question 61
You have to sort a list L, consisting of a sorted list followed by a few ‘random’ elements. Which of the following sorting method would be most suitable for such a task ?
A
Bubble sort
B
Selection sort
C
Quick sort
D
Insertion sort
       Algorithms       Sorting       UGC NET CS 2014 Dec-Paper-2
Question 61 Explanation: 
Here, we are given a sorted list followed by a few 'random' elements.
We can eliminate option B and option C because they give worst-case complexity O(n²) when the array is already (almost) sorted.
Bubble sort and insertion sort both give best-case complexity O(n) on sorted input, but given the constraint that only a few random elements follow the sorted part, insertion sort is the more appropriate answer.
Question 62
Mergesort makes two recursive calls. Which statement is true after these two recursive calls finish, but before the merge step ?
A
The array elements form a heap.
B
Elements in each half of the array are sorted amongst themselves.
C
Elements in the first half of the array are less than or equal to elements in second half of the array.
D
All of the above
       Algorithms       Sorting       UGC NET CS 2014 June-paper-2
Question 62 Explanation: 
A merge sort works as follows:
→ Divide the unsorted list into n sublists, each containing one element (a list of one element is considered sorted).
→ Repeatedly merge sublists to produce new sorted sublists until there is only one sublist remaining. This will be the sorted list.
As per the merge sort, The elements in each half of the array are sorted amongst themselves.
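A minimal merge sort sketch, with the point of the question marked at the spot between the two recursive calls and the merge:

```python
def merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])    # first recursive call
    right = merge_sort(a[mid:])   # second recursive call
    # At this point each half is sorted amongst itself (option B), but
    # elements in the left half need not be <= those in the right half.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```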
Question 63
Suppose that the splits at every level of Quicksort are in proportion 1-β to β, where 0 < β ≤ 0.5 is a constant. The number of elements in an array is n. The maximum depth is approximately
A
0.5 β lg n
B
0.5 (1 – β) lg n
C
– (lg n)/(lg β)
D
– (lg n)/lg (1 – β)
       Algorithms       Sorting       UGC NET CS 2013 Sep-paper-2
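No explanation is given above; a short numeric check of the standard depth argument (hypothetical helper names) supports option D: the deepest path keeps the larger fraction 1−β at every level, so n(1−β)^d ≈ 1 gives d ≈ −(lg n)/lg(1−β).

```python
import math

def quicksort_max_depth(n, beta):
    """Length of the deepest recursion path when every split keeps the
    fraction (1 - beta) on its larger side (0 < beta <= 0.5)."""
    depth, size = 0, n
    while size > 1:
        size = math.floor(size * (1 - beta))  # follow the larger part
        depth += 1
    return depth

def closed_form(n, beta):
    """Solve n * (1 - beta)**d = 1 for d: the formula in option D."""
    return -math.log2(n) / math.log2(1 - beta)

print(quicksort_max_depth(2 ** 20, 0.5), closed_form(2 ** 20, 0.5))  # 20 20.0
```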
Question 64
The maximum number of comparisons needed to sort 9 items using radix sort is (assume each item is 5 digit octal number) :
A
45
B
72
C
360
D
450
       Algorithms       Sorting       UGC NET CS 2018 JUNE Paper-2
Question 64 Explanation: 
Total items to sort = 9
Each item is a 5-digit octal number.
The octal number system has base (radix) 8.
Maximum number of comparisons = (number of items) × (number of digits) × (radix)
= 9 × 5 × 8
= 360
Question 65
The total number of comparisons in a bubble sort is
A
O(log n)
B
O(n log n)
C
O(n)
D
None of the above
       Algorithms       Sorting       UGC NET CS 2011 Dec-Paper-2
Question 65 Explanation: 
The total number of comparisons in a bubble sort is O(n2)
Question 66
You have n lists, each consisting of m integers sorted in ascending order. Merging these lists into a single sorted list will take time:
A
O(nm log m)
B
O(mn log n)
C
O(m + n)
D
O(mn)
       Algorithms       Sorting       CHENNAI MATHEMATICAL INSTITUTE (M.Sc. / Ph.D. Programme in Computer Science)15 May 2013
Question 66 Explanation: 
We can merge two sorted lists of sizes k and ℓ in time O(k + ℓ). We begin by merging the lists in pairs to generate n/2 lists of size 2m each, in total time O(mn). If we repeat this, we get n/4 lists of size 4m each, again in total time O(mn). Thus, in O(log n) rounds, we converge to a single sorted list. Each round takes time O(mn), so the total time is O(mn log n).
Another strategy to achieve complexity O(mn log n) is to build a min-heap of size n, consisting of the first element of each of the n lists. We then extract the minimum element from the heap to the output and replace it in the heap with the next element from the list from which the minimum came. Thus, it takes time O(log n) to generate each element of the output, and there are mn output elements in all, so the overall time is O(mn log n).
On the other hand, if we could apply min() to n elements at a time, we could merge all n lists in parallel. Let the n lists be l1, l2, ..., ln. We maintain indices i1, i2, ..., in into these n lists, initially set to 1. At each stage we add j = min(l1[i1], l2[i2], ..., ln[in]) to the sorted output list and increment the corresponding index. Overall, we add mn elements to the sorted output and each element is generated in constant time, so the time taken would be O(mn).
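The min-heap strategy can be sketched as follows (a minimal sketch; `merge_n_lists` is a hypothetical helper name):

```python
import heapq

def merge_n_lists(lists):
    """Merge n sorted lists of m integers each using a min-heap of size n.
    Each of the m*n output elements costs O(log n) heap work, so the
    total time is O(mn log n)."""
    heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
    heapq.heapify(heap)
    out = []
    while heap:
        val, i, j = heapq.heappop(heap)      # smallest remaining element
        out.append(val)
        if j + 1 < len(lists[i]):            # refill from the same list
            heapq.heappush(heap, (lists[i][j + 1], i, j + 1))
    return out
```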
Question 67
You are given two sorted lists of integers of size m and n. Describe a divide and conquer algorithm for computing the k th smallest element in the union of the two lists in time O(log m + log n).
A
Descriptive Explanation
       Algorithms       Sorting       CHENNAI MATHEMATICAL INSTITUTE (M.Sc. / Ph.D. Programme in Computer Science)15 May 2013
Question 67 Explanation: 
Note that k is treated as a constant in this problem.
First of all, the question is interesting only if the lists have duplicate elements. Otherwise, we just merge the two lists as usual and after k steps we have the answer.
Let us first solve a simpler problem and find the kth smallest number in a single sorted list of length n. The first number in the list is clearly the smallest number; call it x1. The next distinct number must be at least x1 + 1, so we search for x1 + 1 in the list using binary search. (Recall that binary search is an example of divide and conquer.) Either we find the number, in which case the second smallest number x2 is x1 + 1, or we fail to find it, in which case we end up at the boundary between the last occurrence of x1 and the first occurrence of the next larger number, which is x2. We now search for x2 + 1 and discover x3. Repeat this k−1 times to find xk. Each binary search takes time log n, so overall this procedure takes time k log n, which is O(log n) if we treat k as a constant.
If we have two lists, we can find the first k numbers in the first list, which takes time O(log m), then find the first k numbers in the second list, which takes time O(log n), and then merge these in time O(k) (a constant!) to find the kth smallest number overall.
In fact, we can reduce the number of binary searches to k and avoid the merge step. Maintain indices i1 and i2 for the two lists, pointing initially to the first element of lists 1 and 2, respectively. At each stage, the smaller of the numbers pointed to by i1 and i2 is the next number to be enumerated. We then advance i1 or i2, as the case may be, using binary search as described above. After k such steps, we will have found the kth smallest number overall. Some steps involve a binary search in list 1, which takes time O(log m), and others a binary search in list 2, which takes time O(log n), so each step is bounded by max(O(log m), O(log n)). This gives an overall complexity of max(O(log m), O(log n)), which is equivalent to O(log m + log n).
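The enumeration with binary searches can be sketched as follows (a hedged sketch; `kth_smallest` is a hypothetical name returning the k-th smallest distinct value, with duplicates skipped by searching for value + 1):

```python
from bisect import bisect_left

def kth_smallest(a, b, k):
    """k-th smallest distinct value in the union of sorted lists a and b,
    advancing past duplicates with binary search: O(k(log m + log n))."""
    i, j = 0, 0
    while True:
        if i < len(a) and (j >= len(b) or a[i] <= b[j]):
            x = a[i]                 # next distinct value comes from a
        elif j < len(b):
            x = b[j]                 # next distinct value comes from b
        else:
            return None              # fewer than k distinct values
        k -= 1
        if k == 0:
            return x
        # binary-search past every copy of x in both lists
        i = bisect_left(a, x + 1, i)
        j = bisect_left(b, x + 1, j)
```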
Question 68
Which of the following is true for a sorted list with ‘n’ elements ?
A
Insertion in a sorted array takes constant time.
B
Insertion in a sorted linear linked list takes constant time.
C
Searching for a key in a sorted array can be done in O(log n) time.
D
Searching for a key in a sorted linear linked list can be done in O(log n) time.
       Algorithms       Sorting       UGC NET CS 2008-june-Paper-2
Question 68 Explanation: 
TRUE: Searching for a key in a sorted array can be done in O(log n) time.
Because the elements are in sorted order, we can use binary search, which takes O(log n) time in the worst case.
Question 69
A stable sort preserves the order of values that are equal with respect to the comparison function. We have a list of three dimensional points [(7, 1, 8), (3, 5, 7), (6, 1, 4), (6, 5, 9), (0, 2, 5), (9, 0, 9)]. We sort these in ascending order by the second coordinate. Which of the folllowing corresponds to a stable sort of this input?
A
[(9, 0, 9), (7, 1, 8), (6, 1, 4), (0, 2, 5), (6, 5, 9), (3, 5, 7)]
B
[(0, 2, 5), (3, 5, 7), (6, 1, 4), (6, 5, 9), (7, 1, 8), (9, 0, 9)]
C
[(9, 0, 9), (7, 1, 8), (6, 1, 4), (0, 2, 5), (3, 5, 7), (6, 5, 9)]
D
[(9, 0, 9), (6, 1, 4), (7, 1, 8), (0, 2, 5), (3, 5, 7), (6, 5, 9)]
       Algorithms       Sorting       CHENNAI MATHEMATICAL INSTITUTE M.Sc. / Ph.D. Programme in Computer Science (18 May 2017)
Question 69 Explanation: 
In a stable sort, the original relative order of the pairs (7,1,8), (6,1,4) and (3,5,7), (6,5,9), which have equal second coordinates, must be preserved in the sorted output.
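Since Python's built-in `sorted()` is guaranteed stable, the answer can be checked directly:

```python
points = [(7, 1, 8), (3, 5, 7), (6, 1, 4), (6, 5, 9), (0, 2, 5), (9, 0, 9)]

# sorted() is stable, so ties on the second coordinate keep their
# original relative order.
result = sorted(points, key=lambda p: p[1])
print(result)
# [(9, 0, 9), (7, 1, 8), (6, 1, 4), (0, 2, 5), (3, 5, 7), (6, 5, 9)]
```

This matches the option in which (7,1,8) precedes (6,1,4) and (3,5,7) precedes (6,5,9).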
Question 70
You are given a sequence of n elements to sort. The input sequence consists of n/k subsequences,each containing k elements. The elements in a given subsequence are all smaller than the elements in the succeeding subsequence and larger than the elements in the preceding subsequence. Thus, all that is needed to sort the whole sequence of length n is to sort the k elements in each of the n/k subsequences. The lower bound on the number of comparisons needed to solve this variant of the sorting problem is
A
Ω(n)
B
Ω(n/k)
C
Ω(n log k)
D
Ω((n/k) log (n/k))
       Algorithms       Sorting       UGC NET CS 2017 Nov- paper-3
Question 70 Explanation: 
There are n/k subsequences and each can be ordered in k! ways, giving (k!)^(n/k) possible outputs. We can use the same decision-tree reasoning:
(k!)^(n/k) ≤ 2^h
Taking the logarithm of both sides, we get:
h ≥ lg((k!)^(n/k))
= (n/k) lg(k!)
≥ (n/k)(k/2) lg(k/2)
= (1/2) n lg k − (1/2) n
= Ω(n log k)
Question 71
Any decision tree that sorts n elements has height ________.
A
Ω(lg n)
B
Ω(n)
C
Ω(n lg n)
D
Ω(n2)
       Algorithms       Sorting       UGC NET CS 2017 Jan- paper-3
Question 71 Explanation: 
Theorem: Any decision tree that sorts n elements has height Ω(n lg n).
Proof:
→ The number of leaves l ≥ n!
→ Any binary tree of height h has ≤ 2^h leaves.
→ Hence, n! ≤ l ≤ 2^h
→ Take logs: h ≥ lg(n!)
→ Use Stirling’s approximation: n! > (n/e)^n
→ We get:
h ≥ lg((n/e)^n)
= n lg(n/e)
= n lg n − n lg e
= Ω(n lg n)
Question 72
Match the following with respect to algorithm paradigms :
A
a-iii, b-i, c-ii, d-iv
B
a-ii, b-i, c-iv, d-iii
C
a-ii, b-i, c-iii, d-iv
D
b-iii, b-ii, c-i, d-iv
       Algorithms       Sorting       UGC NET CS 2017 Jan- paper-3
Question 72 Explanation: 
Merge sort→ Divide and conquer
Huffman coding→ Greedy approach
Optimal polygon triangulation→ Dynamic programming
Subset sum problem → Backtracking
Note: The subset sum problem can also be solved using dynamic programming as well as backtracking.
Question 73
If there are n integers to sort, each integer has d digits, and each digit is in the set {1, 2, ...,k}, radix sort can sort the numbers in:
A
O (k (n + d))
B
O(d (n + k))
C
O((n + k) lg d)
D
O((n + d) lg k)
       Algorithms       Sorting       UGC NET CS 2016 Aug- paper-3
Question 73 Explanation: 
Radix sort makes d passes of a stable counting sort, one per digit; each pass over n numbers with digit values in a range of size k takes O(n + k), so the total time is O(d(n + k)).
Question 74
If there are n integers to sort, each integer has d digits and each digit is in the set {1, 2, ..., k}, radix sort can sort the numbers in :
A
O(d n k)
B
O(d nk)
C
O((d +n) k)
D
O(d (n + k))
       Algorithms       Sorting       UGC NET CS 2015 Dec - paper-3
Question 74 Explanation: 
The radix sort running time depends on the stable sort used as the intermediate sorting algorithm. When each digit is in the range 0 to k−1 (it takes k possible values) and k is not too large, counting sort is the obvious choice. Each pass over n d-digit numbers then takes time O(n + k). There are d passes, so the total time for radix sort is O(d(n + k)).
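The d-pass scheme can be sketched with counting sort on base-k digits (digit values 0..k−1 for simplicity; `radix_sort` is a hypothetical name):

```python
def radix_sort(nums, d, k):
    """d passes of a stable counting sort over digit values 0..k-1;
    each pass is O(n + k), so the total time is O(d(n + k))."""
    for p in range(d):                       # least-significant digit first
        digit = lambda x: (x // k ** p) % k
        count = [0] * k
        for x in nums:
            count[digit(x)] += 1
        for i in range(1, k):                # prefix sums -> end positions
            count[i] += count[i - 1]
        out = [0] * len(nums)
        for x in reversed(nums):             # reversed scan keeps it stable
            count[digit(x)] -= 1
            out[count[digit(x)]] = x
        nums = out
    return nums
```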
Question 75
Floyd-Warshall algorithm utilizes __________ to solve the all-pairs shortest paths problem on a directed graph in __________ time.
A
Greedy algorithm, θ (V3)
B
Greedy algorithm, θ (V2 lgn)
C
Dynamic programming, θ (V3)
D
Dynamic programming, θ (V2 lgn)
       Algorithms       Sorting       UGC NET CS 2015 Dec - paper-3
Question 75 Explanation: 
Floyd-Warshall algorithm utilizes Dynamic Programming to solve the all-pairs shortest paths problem on a directed graph in θ(V3) time, where “V” is the number of vertices.
Question 76
Which of the following is best running time to sort n integers in the range 0 to n2-1
A
O(log n)
B
O(n)
C
O(n log n)
D
O(n2)
       Algorithms       Sorting       UGC NET June-2019 CS Paper-2
Question 76 Explanation: 
Using merge sort or heap sort we get O(n log n).
But using radix sort, integers in the range 0 to n²−1 can be sorted in linear time, O(n).
Question 77
There are many sorting algorithms based on comparison. The running time of the heapsort algorithm is O(n log n). Like P, but unlike Q, heapsort sorts in place, where (P, Q) is equal to
A
Merge sort, Quick sort
B
Quick sort, Insertion sort
C
Insertion sort, Quick sort
D
insertion sort, Merge sort
       Algorithms       Sorting       UGC NET June-2019 CS Paper-2
Question 77 Explanation: 
This is an ambiguous question.
Method-1:
Merge sort has best, average and worst case time complexity O(n log n), just like heap sort. Quick sort is similar, except that its worst case time complexity is O(n²). By this reading, P is merge sort and Q is quick sort.
Method-2:
Heap sort and insertion sort do not follow the divide and conquer technique, so insertion sort plays the role of P.
Merge sort follows divide and conquer, so it plays the role of Q.
Note: The official key gives Option-4.
Question 78
In order to sort a list of numbers using the radix sort algorithm, we need to get the individual digits of each number n of the list. If n is a positive decimal integer, then the ith digit, from the right, of the number n is:
A
B
C
D
       Algorithms       Sorting       CIL Part - B
Question 79
Of the following sort algorithms, which has execution time that is least dependant on initial ordering of the input?
A
Insertion sort
B
Quick sort
C
Merge sort
D
Selection sort
       Algorithms       Sorting       ISRO CS 2020
Question 79 Explanation: 
Question 80
If the array A contains the items 10, 4, 7, 23, 67, 12 and 5 in that order, what will be the resultant array A after third pass of insertion sort?
A
67, 12, 10, 5, 4, 7, 23
B
4, 7, 10, 23, 67, 12, 5
C
4, 5, 7, 67, 10, 12, 23
D
10, 7, 4, 67, 23, 12, 5
       Algorithms       Sorting       ISRO CS 2020
Question 80 Explanation: 
Initial positions: 10, 4, 7, 23, 67, 12 , 5
Pass-1: 4, 10, 7, 23, 67, 12, 5
Pass-2: 4, 7, 10, 23, 67, 12, 5
Pass-3: 4, 7, 10, 23, 67, 12, 5 [ No change because the value 10 is placed in the same position ]
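The passes can be verified with a short sketch that records the array after each insertion (hypothetical helper name):

```python
def insertion_sort_passes(a):
    """Yield a snapshot after each pass; pass i inserts a[i] into the
    already-sorted prefix a[0..i-1]."""
    a = list(a)
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:     # shift larger elements right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
        yield list(a)

passes = list(insertion_sort_passes([10, 4, 7, 23, 67, 12, 5]))
print(passes[2])   # after the third pass: [4, 7, 10, 23, 67, 12, 5]
```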
Question 81
The best case time complexity of insertion sort algorithms is ______
A
O(n)
B
O(1)
C
O(n2)
D
O(n log n)
       Algorithms       Sorting       APPSC-2016-DL-CS
Question 81 Explanation: 
The best case of insertion sort occurs when the elements in the array are already sorted. In that case only one comparison is made for each element of the array, so for n elements, n comparisons are made. Hence the time complexity is O(n).
Question 82
Which of the following sorting algorithms does in-place sorting with minimal space overhead?
A
Merge sort
B
Radix sort
C
Heap sort
D
Address Calculation Sort
       Algorithms       Sorting       APPSC-2016-DL-CS
Question 82 Explanation: 
An in-place sorting algorithm is one which requires only O(1) extra space in order to sort n elements.
Merge sort has a space complexity of O(n), so it is not an in-place sorting algorithm; heap sort, in contrast, sorts within the input array using only O(1) extra space.
Question 83

The worst case time complexity of Quick Sort is __________

A
O(n2)
B
O(log n)
C
O(n)
D
O(n logn)
       Algorithms       Sorting       APPSC-2016-DL-CA
Question 83 Explanation: 
The worst case time complexity of Quick Sort is O(n²).
The worst case of quicksort occurs in two situations:
1. The elements are already in sorted (or reverse sorted) order.
2. All elements have the same value.
Question 84

The worst case time complexity of Merge Sort is _______

A
O(n2)
B
O(log n)
C
O(n)
D
O(n logn)
       Algorithms       Sorting       APPSC-2016-DL-CA
Question 84 Explanation: 
The worst case time complexity of Merge Sort is O(n logn).
The merge sort will not depend on the type of input. It will give the best, average and worst case time complexity is O(nlogn) only.
Question 85

Which of the following sorting procedures is the slowest?

A
Quick sort
B
Heap sort
C
Shell sort
D
Bubble sort
       Algorithms       Sorting       APPSC-2016-DL-CA
Question 85 Explanation: 
Bubble sort is the slowest because the average time complexity of every other sorting algorithm in the options is less than O(n²), but for bubble sort it is O(n²).
Question 86

Which of the following sorting methods would be most suitable for sorting a list which is almost sorted?

A
Bubble Sort
B
Insertion Sort
C
Selection Sort
D
Quick Sort
       Algorithms       Sorting       APPSC-2016-DL-CA
Question 86 Explanation: 
If the elements of the list are almost sorted then insertion is the best to be chosen because it will take O(n) time.
Question 87

The running time of insertion sort is

A
O(n2)
B
O(n)
C
O(log n)
D
O(n log n)
       Algorithms       Sorting       APPSC-2016-DL-CA
Question 87 Explanation: 
The worst case time complexity of insertion sort algorithm is O(n2).
Question 88

A sort which compares adjacent elements in a list and switches where necessary is _________

A
Insertion sort
B
Heap sort
C
Quick sort
D
Bubble sort
       Algorithms       Sorting       APPSC-2016-DL-CA
Question 88 Explanation: 
The given description in question is about bubble sort in which we compare adjacent elements in the list and switches where necessary.
Question 89

The correct order of the efficiency of the following sorting algorithms according to their overall running time comparison is

A
insertion>selection>bubble
B
insertion>bubble>selection
C
selection>bubble>insertion
D
bubble >selection>insertion
       Algorithms       Sorting       APPSC-2016-DL-CA
Question 89 Explanation: 
Option 2 is the best choice because the best case time complexity of insertion sort and bubble sort is O(n), while for selection sort it is O(n²) even on sorted input; so selection sort is the least efficient. Between the first two, insertion sort performs fewer comparisons and swaps on average than bubble sort, giving insertion > bubble > selection.
Question 90

A sort which iteratively passes through a list to exchange the first element with any element less than it and then repeats with a new first element is called

A
insertion sort
B
selection sort
C
heap sort
D
quick sort
       Algorithms       Sorting       APPSC-2016-DL-CA
Question 90 Explanation: 
The given characteristics in question are of selection sort.
Question 91

The algorithm that will efficiently sort an array that is nearly sorted except for the interchange of some adjacent pairs of numbers like {1,3,2,5,4,6} is

A
quick sort
B
bubble sort
C
merge sort
D
selection sort
       Algorithms       Sorting       APPSC-2016-DL-CA
Question 91 Explanation: 
Best case of quick sort is O(n log n).
Best case of bubble sort is O(n) (with the early-exit optimization).
Best case of merge sort is O(n log n).
Best case of selection sort is O(n²).
So the algorithm that will efficiently sort an array that is nearly sorted, except for the interchange of some adjacent pairs of numbers, is bubble sort.
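With the standard early-exit flag (an optimization the basic algorithm may omit), bubble sort finishes such an input in two passes, i.e. O(n) work:

```python
def bubble_sort(a):
    """Bubble sort with an early-exit flag: on input where only some
    adjacent pairs are interchanged, one pass fixes every pair and a
    second pass confirms no swaps remain, so it stops after ~2n steps."""
    a = list(a)
    for end in range(len(a) - 1, 0, -1):
        swapped = False
        for i in range(end):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        if not swapped:        # no swaps: already sorted, stop early
            break
    return a

print(bubble_sort([1, 3, 2, 5, 4, 6]))   # [1, 2, 3, 4, 5, 6]
```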
Question 92

Assume that the algorithms considered here sort the input sequences in ascending order. If the input is already in ascending order, which of the following are TRUE?
I. Quick sort runs in Θ(n²) time
II. Bubble sort runs in Θ(n²) time
III. Merge sort runs in Θ(n²) time
IV. Insertion sort runs in Θ(n²) time

A
I and II only
B
I and III only
C
II and IV only
D
I and IV only
E
None of the above.
       Algorithms       Sorting       APPSC-2016-DL-CA
Question 92 Explanation: 
Quick sort runs in Θ(n²) time if the input is already sorted (with the first or last element chosen as pivot).
Bubble sort runs in O(n) time if the input is already sorted (with the early-exit optimization).
Merge sort runs in O(n log n) time regardless of input order.
Insertion sort runs in O(n) time on sorted input.
So only statement I is true, and none of the listed combinations matches.
Question 93

How much extra space is used by heap sort?

A
O(1)
B
O(log n)
C
O(n)
D
O(n2)
       Algorithms       Sorting       APPSC-2016-DL-CA
Question 93 Explanation: 
Heap sort's space complexity is O(1).
→ Heap sort uses a Max-Heapify function which calls itself, but it can be rewritten with a simple while loop, making it an iterative function that takes no extra space.
So the space complexity of heap sort can be reduced to O(1).
→ The heapify function does not take O(log n) space; it takes O(log n) time in the worst case. Here we don't do heapify through any recursive call. We do it through an iterative method, and we don't require any auxiliary space in this sorting. We use only the input array.
So, we do heap sort in O(1) space complexity.
Question 94

What is meant by stable sorting algorithm?

A
A sorting algorithm is stable if it preserves the order of all keys
B
A sorting algorithm is stable if it preserves the order of non-duplicate keys
C
A sorting algorithm is stable if it preserves the order of duplicate keys
D
A sorting algorithm is stable if it doesn't preserve the order of duplicate keys
       Algorithms       Sorting       CIL 2020
Question 94 Explanation: 
A sorting algorithm is called stable if it preserves the order of duplicate keys.
The examples of sorting algorithms are insertion sort,merge sort,counting sort etc.
Question 95

Consider the following statements
1. Insertion sort and merge sort are stable
2. Heap sort is inherently unstable
3. Selection sort is not inherently stable, but may be coded in such a way that it is stable

A
Only statement 1 is correct
B
All statements are correct
C
Statement 3 is not correct
D
Only statement 1 and 2 is correct
       Algorithms       Sorting       CIL 2020
Question 95 Explanation: 
A sorting algorithm is said to be stable if two objects with equal keys appear in the same order in sorted output as they appear in the input array to be sorted.
S1 is true.
S2 is True. Heapsort is not stable because operations on the heap can change the relative order of equal items: when sorting in ascending order, heapsort first picks the largest element and puts it at the end of the list.
S3 is True. Selection sort is unstable by nature but can be made stable by making some changes in the algorithm; hence it is not inherently unstable.
An inherently unstable algorithm is one which cannot be made stable even by modifying the algorithm.
Question 96

What is the complexity of merge sort?

A
O(n)
B
O(n2 logn)
C
O(nlogn)
D
O(n2)
       Algorithms       Sorting       CIL 2020
Question 96 Explanation: 
Worst case time complexity of merge sort is O(nlogn).
Question 97

Time complexity of Merge sort is

A
O(n)
B
O(n2)
C
O(log n)
D
O(n log n)
       Algorithms       Sorting       APPSC-2012-DL CA
Question 97 Explanation: 
Time complexity of merge sort is O(nlogn).
Question 98

In which sorting algorithm average number of comparison is (n2 – n) / 2?

A
Insertion sort
B
Selection sort
C
Merge sort
D
Quick sort
       Algorithms       Sorting       APPSC-2012-DL CA
Question 98 Explanation: 
Insertion sort has an average number of comparisons of (n² – n)/2.
Question 99
Which sorting method is best suited for external sorting?
A
Quick Sort
B
Heap Sort
C
Merge sort
D
All the above
       Algorithms       Sorting       APPSC-2012-DL-CS
Question 99 Explanation: 
External sorting is a term for a class of sorting algorithms that can handle massive amounts of data. External sorting is required when the data being sorted do not fit into the main memory of a computing device (usually RAM) and instead they must reside in the slower external memory (usually a hard drive). External sorting typically uses a hybrid sort-merge strategy. In the sorting phase, chunks of data small enough to fit in main memory are read, sorted, and written out to a temporary file. In the merge phase, the sorted sub-files are combined into a single larger file.
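A toy sketch of the sort-merge strategy, with `chunk_size` standing in for the main-memory limit (hypothetical names; a real external sort streams its input rather than holding `values` in memory):

```python
import heapq
import os
import tempfile

def external_sort(values, chunk_size):
    """Sort phase: sort chunks that 'fit in memory' and write each
    sorted run to a temporary file. Merge phase: k-way merge the runs."""
    paths = []
    for start in range(0, len(values), chunk_size):
        run = sorted(values[start:start + chunk_size])      # sort phase
        fd, path = tempfile.mkstemp(text=True)
        with os.fdopen(fd, "w") as f:
            f.writelines(f"{x}\n" for x in run)
        paths.append(path)
    files = [open(p) for p in paths]                        # merge phase
    result = [int(line) for line in heapq.merge(*files, key=int)]
    for f in files:
        f.close()
    for p in paths:
        os.remove(p)
    return result
```

`heapq.merge` consumes the sorted runs lazily, so at any moment only one line per run needs to be in memory.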
Question 100
The algorithm design technique used in the quick sort algorithm is
A
Dynamic programming
B
Back tracking
C
Divide and conquer
D
Greedy method
       Algorithms       Sorting       TNPSC-2012-Polytechnic-CS
Question 100 Explanation: 
The algorithm design technique used in the quick sort algorithm is divide and conquer.
Question 101
Which of the following sorting algorithms does not have a worst case running time of O(n2)?
A
Bubble Sort
B
Quick Sort
C
Merge Sort
D
Heap Sort
       Algorithms       Sorting       APPSC-2012-DL-CS
Question 101 Explanation: 
The worst case time complexity of merge sort is O(n log n).
Question 102

Quick sort is run on two inputs shown below to sort in ascending order:

I : 1,2,3,…n II : n, n-1,…, 2, 1

Let k1 and k2 be the number of comparisons made for the inputs I and II respectively. Then
A
k1 < k2
B
k1 = k2
C
k1 > k2
D
None
       Algorithms       Sorting       APPSC-2012-DL-CS
Question 102 Explanation: 
In quicksort, whether the input is in ascending or descending order, both are worst-case inputs (with the first or last element as pivot), so the numbers of comparisons are equal: k1 = k2.
Question 103
Which Sorting method is an external Sort?
A
Heap Sort
B
Quick Sort
C
Insertion Sort
D
None of the above
       Algorithms       Sorting       TNPSC-2017-Polytechnic-CS
Question 103 Explanation: 
External sorting is a term for a class of sorting algorithms that can handle massive amounts of data. External sorting is required when the data being sorted do not fit into the main memory of a computing device (usually RAM) and instead they must reside in the slower external memory (usually a hard drive). External sorting typically uses a hybrid sort-merge strategy.
Example of external sorting is merge sort.
There are 103 questions to complete.