Algorithms
Question 1 
Divide and Conquer  
Dynamic Programming  
Greedy Algorithm  
Branch and Bound 
Sum of subset problem:
Given a set of nonnegative integers, and a value sum, determine if there is a subset of the given set with the sum equal to the given sum.
Example:
Array set[ ]= {1,2,3,4,5} and sum=6
Here the possible subsets are (2,4), (1,2,3) and (1,5).
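A minimal sketch of this check in Python (the helper name is my own, not part of the question), using the classic reachable-sums idea:

```python
def has_subset_sum(nums, target):
    # reachable holds every sum formable from a subset of the items seen so far
    reachable = {0}
    for x in nums:
        reachable |= {s + x for s in reachable}
    return target in reachable
```

For the example above, has_subset_sum([1, 2, 3, 4, 5], 6) returns True.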
Question 2 
m*n  
maximum of m and n  
minimum of m and n  
m+n–1 
For example: take 2 sorted subarrays of size 3, such as 1,3,5 and 2,4,6. In this case the number of comparisons is 5, which is m+n−1 and can be taken as O(m+n).
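A short merge that counts comparisons reproduces this bound (a sketch; the function name is my own):

```python
def merge_count(a, b):
    # merge two sorted lists, counting element comparisons
    merged, i, j, comparisons = [], 0, 0, 0
    while i < len(a) and j < len(b):
        comparisons += 1
        if a[i] <= b[j]:
            merged.append(a[i]); i += 1
        else:
            merged.append(b[j]); j += 1
    merged += a[i:] + b[j:]   # leftovers need no comparison
    return merged, comparisons
```

merge_count([1,3,5], [2,4,6]) merges with exactly 5 comparisons, i.e. m+n−1.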
Question 3 
Merge Sort  
Insertion Sort  
Selection Sort  
Quick Sort 
Question 4 
T(1) = T(2) = T(3)  
T(1) + T(3) = 2 * T(2)  
T(1) – T(3) = T(2)  
T(1) + T(2) = T(3) 
Solution:
A better way to solve this is by substitution.
T(1) = 1
T(2) = 2
T(3) = 3
T(4) = T(3) + T(2) − T(1) = 4
T(5) = T(4) + T(3) − T(2) = 5
:
:
T(n) = T(n−1) + T(n−2) − T(n−3) = (n−1) + (n−2) − (n−3) = n
So the order is O(n)
Question 5 
O(n)  
O(n log n)  
O(n^{3/2})  
O(n^{3}) 
→ For example, if X is a set of airports and x R y means "there is a direct flight from airport x to airport y" (for x and y in X), then the transitive closure of R on X is the relation R+ such that x R+ y means "it is possible to fly from x to y in one or more flights". Informally, the transitive closure gives you the set of all places you can get to from any starting place.
→ To find the transitive closure of the given relation, we can represent the relation by a graph such that if x R y then there is a directed edge from x to y in the graph.
→ The time complexity of the Floyd-Warshall algorithm is O(V^{3}), where V is the number of vertices in the given graph. Take V as the number of elements in the set, i.e., N.
→ Therefore the time complexity for the given question is O(n^{3}).
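The O(n^{3}) bound comes from three nested loops over the vertices, as in this Warshall-style closure sketch (helper name mine):

```python
def transitive_closure(adj):
    # Warshall's algorithm: three nested loops over the n vertices -> O(n^3)
    n = len(adj)
    reach = [row[:] for row in adj]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return reach
```

For a chain 0 → 1 → 2, the closure correctly reports that 2 is reachable from 0.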
Question 6 
Detailed  
Aggregate  
Qualitative  
None of the above 
Question 7 
Greedy method  
Divide-and-conquer  
Dynamic Programming  
Backtracking 
1) The subarray which is already sorted.
2) Remaining subarray which is unsorted.
In every iteration of selection sort, the minimum element (considering ascending order) from the unsorted subarray is picked and moved to the sorted subarray. Clearly, it is a greedy approach to sort the array.
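The greedy step described above is visible in a minimal selection sort sketch:

```python
def selection_sort(a):
    a = a[:]
    for i in range(len(a) - 1):
        # greedy step: pick the minimum of the unsorted suffix a[i:]
        m = min(range(i, len(a)), key=a.__getitem__)
        a[i], a[m] = a[m], a[i]
    return a
```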
Question 8 
O(n^{2}), O(n^{2})  
O(n^{2}), O(nlog_{2}n)  
O(n log_{2}n), O(n^{2})  
O(n log_{2}n), O(n log_{2}n) 
Question 9 
O ( log_{2}n )  
O ( n )  
O ( n log_{2}n )  
O ( n_{2} ) 
Question 10 
1, 3, 5, 7, 9, 11, 13  
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55  
0, 1, 3, 4, 7, 11, 18, 29, 47  
0, 1, 3, 7, 15 
with seed values
F0 = 0 and F1 = 1.
The Fibonacci numbers are the numbers in the following integer sequence.
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144..
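The sequence can be generated iteratively from the seed values:

```python
def fibonacci(n):
    # first n Fibonacci numbers, seeded with F0 = 0 and F1 = 1
    a, b, seq = 0, 1, []
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq
```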
Question 11 
Both NP-complete  
Both in P  
NP-complete and in P, respectively  
Undecidable and NP-complete, respectively 
→ The 3-SAT problem is an NP-complete problem
Question 12 
T(n)=2T(n/2)+k , where k is constant  
T(n)=T(n/2) +k, where k is constant  
T(n)=T(n/2)+logn  
T(n)=T(n/2)+n 
The time to search in an array of ‘n’ elements is equal to the time to search in an array of n/2 elements plus k comparison.
T(n)=T(n/2)+k // k is constant
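An iterative binary search makes the recurrence concrete (a sketch, not from the question):

```python
def binary_search(a, x):
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2   # each probe halves the range: T(n) = T(n/2) + k
        if a[mid] == x:
            return mid
        elif a[mid] < x:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1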
Question 13 
Insertion Sort  
Quick Sort  
Heap Sort  
Selection Sort 
Question 14 
11  
12  
13  
10 
Step2: 8,7,9,22,31,5,13
Step3: 8,7,9,22,5,31,13
Step4: 8,7,9,22,5,13,31
Step5: 7,8,9,22,5,13,31
Step6: 7,8,9,5,22,13,31
Step7: 7,8,9,5,13,22,31
Step8: 7,8,5,9,13,22,31
Step9: 7,5,8,9,13,22,31
Step10: 5,7,8,9,13,22,31
Note: Total 10 swaps are required to sort the array.
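The swap count can be verified mechanically with a swap-counting bubble sort (a sketch; the question's starting array is not shown above, so as an illustration it is run from the Step 2 state, which reproduces the 8 swaps of Steps 3-10):

```python
def bubble_sort_swaps(a):
    # bubble sort that counts adjacent swaps (equal to the number of inversions)
    a, swaps = a[:], 0
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swaps += 1
    return a, swaps
```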
Question 15 
Prim’s algorithm  
Dijkstra's algorithm  
Bellman-Ford’s algorithm  
Floyd-Warshall’s algorithm 
Dijkstra's algorithm → Single source shortest path, for positive edge weights only
Bellman-Ford’s algorithm → Single source shortest path, for positive or negative edge weights but no negative weight cycle.
Floyd-Warshall’s algorithm → All pairs shortest path problem
Question 16 
O(nlogn)  
O(n^{3/2})  
O(n^{3})  
O(n) 
Question 17 
Dynamic programming  
Backtracking  
Greedy  
Divide and Conquer 
→ A single execution of the algorithm will find the lengths (summed weights) of the shortest paths between all pairs of vertices.
→ The Floyd-Warshall algorithm is an example of dynamic programming.
Question 18 
1  
5  
10  
20 
1st iteration will compare 4 numbers with the 5
2nd iteration will compare 3 numbers with the 4
3rd iteration will compare 2 numbers with the 3
4th iteration will compare 1 number with the 2
So, the total number of comparisons is 4 + 3 + 2 + 1 = 10
It can be viewed as the sum of the sequence of the first (n−1) numbers starting from 1
S = ((1 + (n−1)) * (n−1))/2
S = 10
Question 19 
T(n) = 2T(n−1), if n > 0
= 1, otherwise
Then T(n) is (in big O order)
O(n)  
O(2^{n})  
O(1)  
O(log n) 
Question 20 
r, s, p, q  
r, p, s, q  
q, s, p, r  
s, p, q, r 
Insertion sort→ Decrease and Conquer
Gaussian Elimination→ Transform and Conquer
Floyd Shortest path algorithm→ Dynamic programming
Question 21 
n/2  
(n1)/2  
(n+1)/2  
None of these 
= No. of comparisons if element present in 1^{st} position + No. of comparisons if element present in 2^{nd} position + ............. + No. of comparisons if element present in n^{th} position
= 1+2+3+ ... + n
= n(n+1)/2
Since there are n elements in the list, so average no. of comparisons
= Total comparisons/Total no. of elements
= (n(n+1)/2)/n
= (n+1)/2
Question 22 
Dynamic programming  
Backtracking  
Divide and conquer  
Greedy method 
1. Quick sort
2. Merge sort
3. Heap sort
Note: Binary search also follows divide and conquer.
Question 23 
T(n) = 2T(√n) + 1
T(1) = 1
Which of the following is true?
T(n)= O(log log n)  
T(n) = O(log n)  
T(n) = O(√n)  
T(n)= O(n) 
Question 24 
L  
L/2  
(L+1)/2  
2L 
A linear search runs in at worst linear time and makes at most n comparisons, where n is the length of the list, whether the element is found or not.
Question 25 
O(n)  
O(n log n)  
O(n^{2})  
O(n^{3}) 
T(n + 1) = 2n + T(n)
By substitution method:
n=1, T(2)=2x1+T(1)=2+10=12
n=2,T(3)=2x2+T(2)=4+12=16
n=3,T(4)=2x3+T(3)=6+16=22
n=4,T(5)=2x4+T(4)=8+22=30
n=5,T(6)=2x5+T(5)=10+30=40
n=6,T(7)=2x6+T(6)=12+40=52
n=7,T(8)=2x7+T(7)=14+52=66
The above values satisfy T(n) = n^{2} − n + 10, which grows quadratically, so the function is O(n^{2}).
Question 26 
Greedy method  
Backtracking  
Dynamic programming  
Divide and Conquer 
Conceptually, a merge sort works as follows:
1. Divide the unsorted list into n sublists, each containing one element (a list of one element is considered sorted).
2. Repeatedly merge sublists to produce new sorted sublists until there is only one sublist remaining. This will be the sorted list.
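The two steps above can be sketched as (a minimal recursive version):

```python
def merge_sort(a):
    # 1. split into halves until single elements; 2. merge sorted halves
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```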
Question 27 
Insertion sort, Quick sort  
Quick sort, Quick sort  
Quick sort, Insertion sort  
Insertion sort, Insertion sort 
Question 28 
3.00  
3.46  
2.81  
3.33 
Each level has 2^{i} nodes, where i is the level number
Level 0 has one node so one comparison(1)
Level 1 has two nodes so for each node two comparisons and total four comparisons (4)
Level 2 has four nodes and total comparisons are 4x3 =12 comparisons
Level 3 would have 8 nodes, but the question has only 11 items and 7 nodes are already covered, so only 4 nodes remain at level 3. Total comparisons at level 3 are 4*4 = 16
So, total number of comparisons at each level are = 1+4+12+16= 33
Average comparisons required for 11 items = 33/11 = 3
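The level-by-level count above can be checked with a small helper (name mine):

```python
def average_comparisons(n):
    # a node on level i (root = level 0) costs i + 1 comparisons to reach
    total, level, placed = 0, 0, 0
    while placed < n:
        nodes = min(2 ** level, n - placed)   # a full level holds 2^i nodes
        total += nodes * (level + 1)
        placed += nodes
        level += 1
    return total / n
```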
Question 29 
 Quicksort
 Heapsort
 Mergesort
1 and 2 only  
2 and 3 only  
3 only  
1, 2 and 3 
Question 30 
int module(int n)
{
    if (n == 1)
        return 1;
    else
        return (n + module(n - 1));
}
O(n)  
O(log n)  
O(n^{2})  
O(n!) 
Question 31 
The solution of the recurrence relation T(m) = T(3m/4) + 1 is :
θ(lg m)  
θ(m)  
θ(mlg m)  
θ(lglg m)

a = 1, b = 4/3, k = 0, p = 0
Here, a = b^{k}
So, T(m) = θ(m^{log_{b} a} · log^{p+1} m)
T(m) = θ(log m)
Question 32 
Which of the following algorithms solves the single-source shortest paths problem?
Prim’s algorithm  
Floyd  Warshall algorithm  
Johnson’s algorithm  
Dijkstra’s algorithm

Question 33 
A text is made up of the characters A, B, C, D, E each occurring with the probability 0.08, 0.40, 0.25, 0.15 and 0.12 respectively. The optimal coding technique will have the average length of :
2.4  
1.87  
3.0  
2.15 
Step 1: Select two characters with smallest probabilities and merge them.
Step 2: Select two characters from Step 1 with smallest probabilities and merge them.
Step 3: Select two characters (from Step 2) with smallest probabilities and merge them.
Step 4: Merge the remaining two probabilities.
A = 1000 (4bits)
E = 1001 (4bits)
D = 101 (3bits)
C = 11 (2bits)
B = 0 (1bit)
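The average length 0.40·1 + 0.25·2 + 0.15·3 + 0.12·4 + 0.08·4 = 2.15 can also be computed with a heap, using the fact that the average code length equals the sum of all merged-node weights (a sketch; the helper name is my own):

```python
import heapq

def huffman_average_length(probs):
    # each merge adds (p1 + p2): one extra bit for every symbol below that node,
    # so the running total of merge weights equals the average code length
    heap = list(probs)
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        a, b = heapq.heappop(heap), heapq.heappop(heap)
        total += a + b
        heapq.heappush(heap, a + b)
    return total
```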
Question 34 
A binary search tree in which every nonleaf node has nonempty left and right subtrees is called a strictly binary tree. Such a tree with 19 leaves :
cannot have more than 37 nodes  
has exactly 37 nodes
 
has exactly 35 nodes  
cannot have more than 35 nodes 
Using the formula L = I(n − 1) + 1,
Where,
L = Number of leaf nodes
n = N (for an N-ary tree; here n = 2)
I = Number of intermediate nodes
Now,
19 = I(2 − 1) + 1
19 = 2I − I + 1
19 = I + 1
I = 18
Total Number of nodes in a tree = Number of Intermediate nodes + Number of Leaf nodes
Total Number of nodes in a tree = 19 + 18
Total Number of nodes in a tree = 37
Question 35 
Match the following with respect to algorithm paradigms :
List-I                                  List-II
(a) The 8-Queens problem                (i) Dynamic programming
(b) Single-Source shortest paths        (ii) Divide and conquer
(c) STRASSEN’s Matrix multiplication    (iii) Greedy approach
(d) Optimal binary search trees         (iv) Backtracking
(a)(iv), (b)(i), (c)(iii), (d)(ii)  
(a)(iv), (b)(iii), (c)(i), (d)(ii)  
(a)(iii), (b)(iv), (c)(ii), (d)(i)  
(a)(iv), (b)(iii), (c)(ii), (d)(i) 
Single-Source shortest paths use the Greedy approach.
STRASSEN’s Matrix multiplication uses the Divide and conquer technique.
Optimal binary search trees use Dynamic programming.
Question 36 
45  
72  
360  
450 
Octal number having → 5 digits
The octal number system base value = 8
The maximum number of comparisons = (number of items) * (number of digits) * (radix)
= 9 * 5 * 8
= 360
Question 37 
A 5-ary tree is a tree in which every internal node has exactly 5 children. The number of leaf nodes in such a tree with 8 internal nodes will be :
30  
33  
45  
None of the above 
L = I(n − 1) + 1
Where,
L = Number of leaf nodes
n = N (for an N-ary tree)
I = Number of intermediate nodes
Now, L = 8(5 − 1) + 1
L = 8(4) + 1
L = 33
Total Number of nodes in a tree = Number of Intermediate nodes + Number of Leaf nodes
Total Number of nodes in a tree = 8 + 33
Total Number of nodes in a tree = 41
Question 38 
Consider a Boolean function of ‘n’ variables. The order of an algorithm that determines whether the Boolean function produces an output of 1 is :
Logarithmic  
Linear
 
Quadratic  
Exponential 
Set 1 has “n” variables, and each variable in set 1 can be mapped to one boolean value in set 2, i.e., each variable in set 1 has 2 choices and we have “n” such variables in set 1.
→ So, the total number of choices is at most 2^{n}, which means exponential.
Question 39 
E is the number of edges in the graph and f is the maximum flow in the graph. When the capacities are integers, the runtime of the Ford-Fulkerson algorithm is bounded by :
O(E∗f)
 
O(E^{2}∗f)
 
O(E∗f^{2})
 
None of the above 
→ The Ford-Fulkerson method or Ford-Fulkerson algorithm (FFA) is a greedy algorithm that computes the maximum flow in a flow network.
→ It is called a "method" instead of an "algorithm" because the approach to finding augmenting paths in a residual graph is not fully specified, or is specified in several implementations with different running times.
→ The runtime of Ford-Fulkerson is bounded by O(E*f), where E is the number of edges in the graph and f is the maximum flow in the graph. This is because each augmenting path can be found in O(E) time and increases the flow by an integer amount of at least 1, with the upper bound f.
→ The variation of the Ford-Fulkerson algorithm with guaranteed termination and a runtime independent of the maximum flow value is the Edmonds-Karp algorithm, which runs in O(VE^{2}) time.
Question 40 
T(n) = 2T(n−1), if n > 0.
= 1, otherwise
O(nlogn)  
O(n^{2})  
O(2^{n})  
O(n) 
T(n) = 2T(n − 1) + 1
= 2 [2T(n − 2) + 1] + 1
= 2^{2} T(n − 2) + 3
⋮
= 2^{k} T(n − k) + (2^{k} − 1)
Put n − k = 1, i.e., k = n − 1:
= 2^{n−1} T(1) + (2^{n−1} − 1)
= 2^{n−1} + 2^{n−1} − 1
= 2^{n} − 1
≅ O(2^{n})
Question 41 
O(nlogn)  
O(logn)  
O(n)  
O(n^{2}) 
Question 42 
O(n)  
O(nlogn)  
O(logn) for search and insert, and O(n) for delete  
None of these 
→ The question does not mention whether the elements are sorted or unsorted, so for the worst case we must assume unsorted elements.
→ All operations (i.e., search, update and delete) will then take O(n).
Question 43 
Greedy approach  
Dynamic programming  
Backtracking paradigm  
Divide and conquer paradigm 
→ Dijkstra’s algorithm solves the single source shortest path problem.
Question 44 
89,19,50,17,12,15,2,5,7,11,6,9,100
Minimum _____ number of interchanges are needed to convert it into a maxheap
3  
4  
5  
6 
1st swap is: 100 and 15
2nd swap is: 100 and 50
3rd swap is: 100 and 89
Question 45 
Selection sort  
Bubble sort  
Merge sort  
Quick sort 
Question 46 
12  
10  
6  
5 
Question 47 
Greedy  
Branch and bound  
Dynamic Programming  
Backtracking 
Backtracking is a general algorithm for finding all (or some) solutions to some computational problems, notably constraint satisfaction problems, that incrementally builds candidates to the solutions, and abandons a candidate ("backtracks") as soon as it determines that the candidate cannot possibly be completed to a valid solution.
Question 48 
True, True  
False, False  
True, False  
False, True 
Question 49 
Quick sort  
Merge sort  
Shell sort  
Bubble sort 
Question 50 
T(n)=2T(n−2)+2  
T(n)=2T(n/2)+1  
T(n)=2T(n−2)+n  
T(n)=2T(n−1)+1 
T(n) = 2T(n – 1) + 1
= 2 [2T(n – 2) + 1] + 1
= 2^{2} T(n – 2) + 3
⋮
= 2^{k} T(n – k) + (2^{k} – 1)
Put n – k = 1, i.e., k = n – 1:
= 2^{n–1} T(1) + (2^{n–1} – 1)
= 2^{n–1} + 2^{n–1} – 1
= 2^{n} – 1
≌ O(2^{n})
Question 51 
O(mn)  
O(m)  
O(m+n)  
O(n) 
Step 1: Sorting the edges takes O(E log E) time.
Step 2: After sorting, we iterate through all edges and apply the find-union algorithm.
The find and union operations take nearly O(1) (amortized) time if a disjoint set structure is used. So the overall complexity is O(E log E + E) time.
Given the edges are already sorted, we only need to do the second step, i.e., iterate through all edges applying find-union. So the total time complexity will be O(E).
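The second step can be sketched as follows (a minimal union-find with path halving; names are mine):

```python
def kruskal_presorted(n, edges):
    # edges must already be sorted by weight; each edge is (u, v, w)
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    total = 0
    for u, v, w in edges:
        ru, rv = find(u), find(v)
        if ru != rv:          # adding the edge creates no cycle
            parent[ru] = rv
            total += w
    return total
```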
Question 52 
12  
10  
6  
5 
because n^{2} is asymptotically larger than nlogn. Finally, 10nlog_{10}n ≤ 0.0001n^{2}
Let n = 10^{k} records. Substitute into 10nlog_{10}n ≤ 0.0001n^{2}:
10(10^{k})log_{10}10^{k} ≤ 0.0001(10^{k})^{2}
10^{k+1}k ≤ 0.0001 × 10^{2k}
k ≤ 10^{2k−k−1−4}
k ≤ 10^{k−5}
The smallest value satisfying this is k = 6.
Question 53 
O(logn)  
O(n)  
O(nlogn)  
none of the above, as the tree cannot be be uniquely determined. 
T(n) = T(k) + T(n−k−1) + log n
In the worst case T(n) = O(n log n), when k = 0.
But searching for an element in the inorder traversal of the given BST can be done in O(1), because the elements are exactly 1...n, so there is no need to search: if the last element in the postorder is, say, 5, we take it as the root, and since 4 elements are smaller we split the postorder array into two parts (the first 4 elements, and the 6th element onward) and solve recursively. The recurrence for this would be
T(n) = T(k) + T(n−k−1) + O(1)
which gives T(n) = O(n).
Since we know that all elements must be traversed at least once, T(n) = θ(n).
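An O(n) reconstruction can be sketched by consuming the postorder from the back with value bounds (names are mine, not from the question):

```python
def bst_from_postorder(post):
    # consume the postorder from the back: root first, then right subtree, then left
    idx = [len(post) - 1]
    def build(lo, hi):
        if idx[0] < 0 or not (lo < post[idx[0]] < hi):
            return None
        val = post[idx[0]]
        idx[0] -= 1
        right = build(val, hi)   # reverse postorder visits the right subtree first
        left = build(lo, val)
        return (left, val, right)
    return build(float('-inf'), float('inf'))

def inorder(t):
    return [] if t is None else inorder(t[0]) + [t[1]] + inorder(t[2])
```

Each element is pushed and popped once, so the pass is linear.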
Question 54 
O(n)  
O(m)  
O(m+n)  
O(mn) 
If an adjacency matrix is used instead, it takes Θ(n^{2}).
Question 55 
solves it in linear time using a left to right pass of the array  
solves in linear time using a right to left pass of the array  
solves it using divide and conquer in time O(nlogn)  
solves it in time O(n ^{2} ) 
● Start from the right, keeping track of the largest element so far (currentLeader). If a larger element is found, print it and set currentLeader to the current element.
● The last element is always a leader, since there is no element to its right. This right-to-left pass solves the problem in linear time, O(n).
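The right-to-left pass can be sketched as (helper name mine):

```python
def leaders(A):
    # right-to-left pass keeping the largest element seen so far
    result, current_leader = [], float('-inf')
    for x in reversed(A):
        if x > current_leader:
            result.append(x)
            current_leader = x
    return result[::-1]   # restore left-to-right order
```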
Question 56 
Consider the following terminology and match List 1 and List 2 and choose the correct answer from the code given below
 b = branch factor
d = depth of shallowest solution
M = Maximum depth of the search tree
l = depth limit
List 1                            List 2
(a) BFS                           (i) O(bd)
(b) DFS                           (ii) O(b^{d})
(c) Depth Limited Search          (iii) O(bm)
(d) Iterative deepening Search    (iv) O(bl)
Code:
(a)(iii), (b)(ii), (c)(iv), (d)(i)
 
(a)(ii), (b)(iii), (c)(iv), (d)(i)
 
(a)(i), (b)(ii), (c)(iv), (d)(iii)
 
(a)(i), (b)(iii), (c)(iv), (d)(ii)

DFS → O(bm) worst case space complexity
Depth-Limited Search → O(bl)
Iterative deepening Search → O(bd)
Note:
Based upon BFS and DFS we can find the solution.
Question 57 
Match List1 with List2 and choose the correct answer from the code given below
List 1                               List 2
(a) Greedy Best-first Search         (i) Selects a node for expansion if optimal path to that node has been found
(b) A* Search                        (ii) Avoids substantial overhead associated with keeping the sorted queue of nodes
(c) Recursive Best-First Search      (iii) Suffers from excessive node generation
(d) Iterative-deepening A* Search    (iv) Time complexity depends upon the quality of heuristic
Code:
(a)(iv), (b)(iii), (c)(ii), (d)(i)  
(a)(i), (b)(iv), (c)(iii), (d)(ii)
 
(a)(iv), (b)(i), (c)(ii), (d)(iii)
 
(a)(i), (b)(ii), (c)(iii), (d)(iv)

→ A* Search: Time complexity depends upon the quality of heuristic.
→ Recursive BestFirst Search: Suffers from excessive node generation. RBFS depends on how widely the promising nodes are separated in the search tree, and is harder to anticipate.
→ Iterativedeepening A* Search: Avoids substantial overhead associated with keeping the sorted queue of nodes.
Question 58 
Consider the graph shown below :
Use Kruskal’s algorithm to find the minimum spanning tree of the graph. The weight of this minimum spanning tree is
13  
16  
17  
14 
The weight of this minimum spanning tree is 16.
Question 59 
Match List 1 with List 2 and choose the correct answer from the code given below :
List I (Graph Algorithm)        List II (Time Complexity)
(a) Dijkstra’s algorithm        (i) O(E lg E)
(b) Kruskal’s algorithm         (ii) Θ(V^{3})
(c) Floyd-Warshall algorithm    (iii) O(V^{2})
(d) Topological sorting         (iv) Θ(V + E)
Where V and E are the number of vertices and edges in the graph respectively.
Code :
(a)(i), (b)(iii), (c)(iv), (d)(ii)
 
(a)(i), (b)(iii), (c)(ii), (d)(iv)
 
(a)(iii), (b)(i), (c)(ii), (d)(iv)
 
(a)(iii), (b)(i), (c)(iv), (d)(ii) 
→ Kruskal's algorithm is a minimumspanningtree algorithm which finds an edge of the least possible weight that connects any two trees in the forest and time complexity is O(E log E).
→ A topological sort or topological ordering of a directed graph is a linear ordering of its vertices such that for every directed edge uv from vertex u to vertex v, u comes before v in the ordering; it runs in Θ(V + E).
→ Floyd–Warshall algorithm is an algorithm for finding shortest paths in a weighted graph with positive or negative edge weights. A single execution of the algorithm will find the lengths of shortest paths between all pairs of vertices.
Question 60 
The second smallest of n elements can be found with _______ comparisons in worst case.
n + ⌈lg n⌉ − 2
 
n  1  
lg n  
3n/2

This takes n − 1 comparisons, and each element involved is compared at most log n times. Finally, we get the largest element. This element, being the largest, has won a comparison with the second largest element at some point. So, keeping track of the items compared with the largest element gives us a list of log n elements. In this list we find the largest, which will be the second largest of all.
So, no. of comparisons = (n − 1) + log_{2}(n) − 1
Example: 512 elements
In this case no. of comparisons will be (512 − 1) + log_{2}(512) − 1
= 511 + (9 − 1)
= 511 + 8
= 519
Question 61 
Consider two sequences X and Y :
 X = <0,1,2,1,3,0,1 >
Y = <1,3,2,0,1,0 >
The length of longest common subsequence between X and Y is
4  
2  
3  
5 
Possible common subsequences are <1,3,0>, <1,2,1,0>, <1,3,0,1>, and the length of the longest common subsequence is 4.
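The standard dynamic-programming table confirms the answer (a sketch; function name mine):

```python
def lcs_length(X, Y):
    # dp[i][j] = length of the LCS of X[:i] and Y[:j]
    m, n = len(X), len(Y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```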
Question 62 
The solution of recurrence relation : T(n) = 2T(sqrt(n)) + lg(n) is
O(lg (n) lg(n))
 
O(lg (n))
 
O(n lg (n))
 
O(lg (n) lg(lg(n)))

Question 63 
O(nlogn),O(nlogn) and O(n^{2})  
O(n^{2}),O(n^{2}) and O(nlogn)  
O(n^{2}),O(nlogn) and O(nlogn)  
O(n^{2}),O(nlogn) and O(n^{2}) 
merge sort → O(nlogn)
quick sort → O(n^{2})
Question 64 
Bellman-Ford algorithm for single source shortest path  
Floyd-Warshall for all pairs shortest paths  
0-1 knapsack problem  
Prim’s minimum spanning tree 
Question 65 
Maximum sum subsequence in an array  
Maximum sum subarray in an array  
Maximum product subsequence in an array  
Maximum product subarray in an array 
def max_subarray(A):
    max_ending_here = max_so_far = A[0]
    for x in A[1:]:
        max_ending_here = max(x, max_ending_here + x)
        max_so_far = max(max_so_far, max_ending_here)
    return max_so_far
Question 66 
O(nlogd)  
O(n^{2} logd)  
O(nd)  
O(n^{2}d) 
→ Every element is at most d positions away from its final sorted position, so each insertion shifts an element at most d places; insertion sort therefore takes O(nd) time.
Note: Using a heap of size d, the time complexity would be O(nlogd), which is more efficient than insertion sort.
Question 67 
Every minimum spanning tree of G must contain e_{min} and e_{max}  
If e_{max} is in a minimum spanning tree, then its removal must disconnect G  
No minimum spanning tree contains e_{max}  
G has multiple minimum spanning trees 
Question 68 
nlog(n/2)  
nlogn  
(nlogn)/2  
logn 
Question 69 
O(m*n)  
O(m+n)  
O(m^{2})  
O(m^{n}) 
1. DFS search produces a DFS tree/forest
2. Strongly Connected Components form subtrees of the DFS tree.
3. If we can find head of such subtrees, we can print/store all the nodes in that subtree (including head) and that will be one SCC.
4. There is no back edge from one SCC to another (There can be cross edges, but cross edges will not be used while processing the graph).
Note: Time complexity for Tarjan’s algorithm is O(V+E)
Kosaraju’s algorithm
1) Create an empty stack ‘S’ and do DFS traversal of a graph. In DFS traversal, after calling recursive DFS for adjacent vertices of a vertex, push the vertex to stack. In the above graph, if we start DFS from vertex 0, we get vertices in stack as 1, 2, 4, 3, 0.
2) Reverse directions of all arcs to obtain the transpose graph.
3) One by one pop a vertex from S while S is not empty. Let the popped vertex be ‘v’. Take v as source and do DFS (call DFSUtil(v)). The DFS starting from v prints strongly connected component of v. In the above example, we process vertices in order 0, 3, 4, 2, 1 (One by one popped from stack).
Note: Time complexity for Kosaraju’s algorithm is O(V+E)
Question 70 
Insertion sort  
merge sort  
Quick sort  
bubble sort 
Question 71 
P  
NP  
Both (A) and (B)  
None of these 
Question 72 
logn  
n  
n ^{2 }  
n^{ n} 
Question 73 
Place references to them in an array and sort the array  
place them in a linked list and sort the linked list  
place pointers to them in an array and sort the array  
place them in an array and sort the array 
Option-B is not correct because it adds extra time complexity and makes searching very difficult.
Option-D is suitable only for a small number of objects/elements.
Question 74 
b = branch factor
d = depth of shallowest solution
m = maximum depth of the search tree
l = depth limit
(a)(iii), (b)(ii), (c)(iv), (d)(i)  
(a)(ii), (b)(iii), (c)(iv), (d)(i)  
(a)(i), (b)(ii), (c)(iv), (d)(iii)  
(a)(i), (b)(iii), (c)(iv), (d)(ii) 
DFS → O(bm) worst case space complexity
Depth Limited Search → O(bl)
Iterative deepening Search → O(bd)
Note: Based upon BFS and DFS we can find the solution.
Question 75 
(a)(iv), (b)(iii), (c) (ii), (d)(i)  
(a)(i), (b)(iv), (c) (iii), (d)(ii)  
(a)(iv), (b)(i), (c) (ii), (d)(iii)  
(a)(i), (b)(ii), (c) (iii), (d)(iv) 
→ A* Search: Time complexity depends upon the quality of heuristic.
→ Recursive BestFirst Search: Suffers from excessive node generation. RBFS depends on how widely the promising nodes are separated in the search tree, and is harder to anticipate.
→ Iterativedeepening A* Search: Avoids substantial overhead associated with keeping the sorted queue of nodes
Question 76 
11  
12  
10  
9 
Number of leaf nodes = Sum of degrees of all internal nodes − number of internal nodes + 1
= (4*1 + 3*2 + 3*3) − 10 + 1
= 19 − 10 + 1 = 10
Formula: nd − n + 1, where n is the number of internal nodes and d is the degree of the tree.
Question 77 
Use Kruskal’s algorithm to find the minimum spanning tree of the graph. The weight of this minimum spanning tree is
13  
16  
17  
14 
The weight of this minimum spanning tree is 16.
Question 78 
n + ⌈lg n⌉ − 2  
n1  
lg n  
3n/2 
This takes n − 1 comparisons, and each element involved is compared at most log n times. Finally, we get the largest element. This element, being the largest, has won a comparison with the second largest element at some point. So, keeping track of the items compared with the largest element gives us a list of log n elements. In this list we find the largest, which will be the second largest of all.
So, no. of comparisons = (n − 1) + log_{2}(n) − 1
Example: 512 elements
In this case no. of comparisons will be (512 − 1) + log_{2}(512) − 1
= 511 + (9 − 1)
=511+8
=519
Question 79 
X = <0,1,2,1,3,0,1 >
Y = <1,3,2,0,1,0 >
The length of longest common subsequence between X and Y is
4  
2  
3  
5 
Possible common subsequences are <1,3,0>, <1,2,1,0>, <1,3,0,1>, and the length of the longest common subsequence is 4.
Question 80 
(1) Input (x1, y1) and (x2, y2)
(2) y = y1
(3) d = f(x1 + 1, y1 + 1/2) // f is the implicit form of a line
(4) for x = x1 to x2
(5) do
(6)   plot(x, y)
(7)   if (d < 0)
(8)   then
(9)     y = y + 1
(10)    d = d + (y1 − y2) + (x2 − x1)
(11)  else
(12)    d = d + (y1 − y2)
(13) end
(14) end
Which statements are true ?
P: For a line with slope m>1, we should change the outer loop in line (4) to be over y.
Q: Lines (10) and (12) update the decision variable d through an incremental evaluation of the line equation f.
R: The algorithm fails if d is ever 0.
Q and R only  
P only  
P and Q only  
P,Q and R 
Lines (10) and (12) update the value of d, so statement Q is true.
Question 81 
splitting  
completion  
beginning  
joining 
→ The critical path method (CPM), or critical path analysis (CPA), is an algorithm for scheduling a set of project activities. It is commonly used in conjunction with the program evaluation and review technique (PERT).
→ A critical path is determined by identifying the longest stretch of dependent activities and measuring the time required to complete them from start to finish.
→ Event: An event represents a point in time signifying the completion of some activities and the beginning of new ones. This is usually represented by a circle in a network which is also called a node or connector. The events are classified into three categories
1. Merge event – When more than one activity comes and joins an event such an event is known as merge event.
2. Burst event – When more than one activity leaves an event such an event is known as burst event.
3. Merge and Burst event – An activity may be merge and burst event at the same time as with respect to some activities it can be a merge event and with respect to some other activities it may be a burst event.
Question 82 
O(lg (n) lg(n))  
O(lg (n))  
O(n lg (n))  
O(lg (n) lg(lg(n))) 
T(n) = O(lg(n) · lg(lg(n)))
Question 83 
Optimization  
NP complete  
Linear Solution  
Sorting 
Question 84 
No. of inputs  
Arrangement of elements in an array  
Size of elements  
Pivot element 
Question 85 
Processor and Memory  
Complexity and Capacity  
Time and Space  
Data and Space 
Time complexity is a function describing the amount of time an algorithm takes in terms of the amount of input to the algorithm. "Time" can mean the number of memory accesses performed, the number of comparisons between integers, the number of times some inner loop is executed, or some other natural unit related to the amount of real time the algorithm will take. We try to keep this idea of time separate from "wall clock" time, since many factors unrelated to the algorithm itself can affect the real time (like the language used, type of computing hardware, proficiency of the programmer, optimization in the compiler, etc.). It turns out that, if we chose the units wisely, all of the other stuff doesn't matter and we can get an independent measure of the efficiency of the algorithm.
Space complexity is a function describing the amount of memory (space) an algorithm takes in terms of the amount of input to the algorithm. We often speak of "extra" memory needed, not counting the memory needed to store the input itself. Again, we use natural (but fixedlength) units to measure this. We can use bytes, but it's easier to use, say, number of integers used, number of fixedsized structures, etc. Space complexity is sometimes ignored because the space used is minimal and/or obvious, but sometimes it becomes as important an issue as time.
Question 86 
O(log n)  
O(n)  
O(n logn)  
None of these 
a = 1, b = 2, k = 1, p = 0
Here, a < b^{k}
So, T(n) = O(n^{k} log^{p} n) = O(n)
Question 87 
It can be used to decide the best algorithm that solves a given problem  
It is the lower bound of the growth rate of algorithm  
It determines the maximum size of a problem that can be solved in a given amount of time  
Both (A) and (B) 
● The letter O is used because the growth rate of a function is also referred to as the order of the function.
● A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function.
Question 88 
19  
18  
17  
20 
→ According to the profit/weight ratio above, item1 has the maximum ratio. So, pick item1.
→ Now M=7. Pick item2 because it has more profit.
→ Now M=5. According to the profit/weight ratio we could pick item3 or item5, but item5 has more weight, so it exceeds the capacity, and selecting item3 would waste 1 slot. So here the profit/weight ratio fails, and we fall back to brute force at this position: pick item4, because its weight is exactly 5.
→ Now M=0 and Profit=19. [item1 + item2 + item4]
Question 89 
void Function(int n)
{
    int i = 1;
    int s = 1;
    while (s <= n)
    {
        i++;
        s = s + i;
    }
}
O(n)  
O(n^{ 2} )  
O(1)  
O(√n) 
i = 1, 2, 3, 4, 5, 6, …
Here, ‘s’ is nothing but the sum of the first natural numbers: after k iterations, s = k(k+1)/2.
For example, assume n = 21; then the corresponding k value is 6, since 6(6+1)/2 = 21.
The loop terminates whenever k(k+1)/2 > n, i.e., when k^{2} is of the order of n.
k = O(√n)
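A direct replay of the loop confirms the count (helper name mine):

```python
def loop_iterations(n):
    # mirrors the while loop in the question, counting iterations
    i, s, count = 1, 1, 0
    while s <= n:
        i += 1
        s += i
        count += 1
    return count
```

For n = 21 the loop runs 6 times, matching k = 6 above, and the count grows like √(2n).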
Question 90 
50.2 sec  
6.7 sec  
72.7 sec  
11.2 sec 
The worst case time complexity is O(n^{2})
Average and best case time complexity is O(n logn)
Step 1: Time taken to sort 1000 names by quicksort:
100 = c*nlogn = c*1000*log1000
⇒ c = 1/(10*log1000)
Step 2: Time taken to sort 100 names by quicksort:
= c*nlogn
= (1/(10*log1000))*100*log100
= 6.7 sec
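The arithmetic checks out numerically (base-10 logs, as in the derivation above):

```python
import math

# c is fixed by the measurement: 100 s = c * 1000 * log10(1000)
c = 100 / (1000 * math.log10(1000))
time_100 = c * 100 * math.log10(100)   # predicted time for 100 names
```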
Question 91 
F3, F4, F1, F5, F6, F2  
F2, F6, F5, F1, F4, F3  
F1, F2, F3, F4, F5, F6  
Ordering is immaterial as all files are accessed with the same frequency. 
→ Final order is F3, F4, F1, F5, F6, F2
Question 92 
int recursive(int n)
{
    if (n == 1)
        return (1);
    else
        return (recursive(n - 1) + recursive(n - 1));
}
O(n)  
O(n log n)  
O(n^{2})  
O(2^{n}) 
T(n) = 2T(n − 1) + 1
= 2 [2T(n − 2) + 1] + 1
= 2^{2} T(n − 2) + 3
⋮
= 2^{k} T(n − k) + (2^{k} − 1)
= 2^{n−1} T(1) + (2^{n−1} − 1)
= 2^{n−1} + 2^{n−1} − 1
= 2^{n} − 1
≅ O(2^{n})
Question 93 
2^{5}  
7^{5}  
3^{5}  
2^{2×5} 
Here, there are 7 nodes. By Cayley's formula, the number of labeled trees is n^{n−2}:
= 7^{7−2}
= 7^{5}
=16,807
Question 94 
20, 47, 15, 8, 9, 4, 40, 30, 12, 17
then the order of these elements after second pass of the algorithm is:
8, 9, 15, 20, 47, 4, 12, 17, 30, 40  
8, 15, 20, 47, 4, 9, 30, 40, 12, 17  
15, 20, 47, 4, 8, 9, 12, 30, 40, 17  
4, 8, 9, 15, 20, 47, 12, 17, 30, 40 
Question 95 
What is the total number of spanning trees of a complete graph of 4 vertices (K_{4})?
16  
8  
4  
15 
n^{n−2} = 4^{4−2} = 4^{2} = 16
Question 96 
Which of the following data structures is most suitable for radix sort?
Stack
 
Binary search tree
 
Tree
 
Linked list

→ This concept of a circularly linked trie structure is similar to the concept of a threaded binary tree. This structure will be called a circularly threaded trie.
Question 97 
Which of the following greedy strategies gives the optimal solution for the following 0/1 knapsack problem?
W=30, w1=10, p1=11, w2=15, p2=12, w3=20, p3=24, w4=25, p4=25
Largest profit per unit weight first  
Largest profit first  
Lightest item first  
Heaviest item first 
Note: Even though the question asks about 0/1 knapsack, we always take the largest profit/weight ratio first.
→ W = 30; we cannot exceed this weight, and we cannot take fractions of an item.
→ By profit/weight ratio, item 3 gives the most (24/20 = 1.2), followed by item 1 (11/10 = 1.1); together they weigh exactly 30.
→ 0/1 knapsack optimal solution = 24 + 11 = 35
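The greedy-by-ratio selection can be sketched as follows (`greedy_knapsack` is an illustrative name; greedy by ratio is only a heuristic for 0/1 knapsack in general, but it happens to be optimal for this instance):

```c
#include <assert.h>

/* Repeatedly take the unused item with the largest profit/weight ratio that
   still fits in the remaining capacity W. Returns the total profit taken. */
int greedy_knapsack(const int w[], const int p[], int n, int W)
{
    int used[16] = {0};      /* assumes n <= 16 for this sketch */
    int profit = 0;

    for (;;)
    {
        int best = -1;
        for (int i = 0; i < n; i++)
            if (!used[i] && w[i] <= W &&
                (best < 0 || (long)p[i] * w[best] > (long)p[best] * w[i]))
                best = i;    /* compares ratios p[i]/w[i] without floats */
        if (best < 0)
            break;           /* nothing left that fits */
        used[best] = 1;
        W -= w[best];
        profit += p[best];
    }
    return profit;
}
```

On the instance above it picks item 3 then item 1, for a profit of 35.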
Question 98 
Which of the following is the time complexity to find the determinant of an upper triangular matrix of order n*n?
Θ(n^{2.5})  
Θ(n)  
Θ(n^{2})  
Θ(n) 
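The determinant of an upper triangular matrix is the product of its diagonal entries, so only Θ(n) multiplications are needed. A minimal sketch (fixed 3×3 size via `N` for illustration):

```c
#include <assert.h>

#define N 3

/* For an upper (or lower) triangular matrix the determinant is the product of
   the diagonal entries: entries below the diagonal are zero, so cofactor
   expansion collapses to a single product. */
double det_upper_triangular(double a[N][N], int n)
{
    double det = 1.0;
    for (int i = 0; i < n; i++)
        det *= a[i][i];
    return det;
}
```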
Question 99 
Consider an array with n elements. Which of the following options gives the maximum number of swap(a_{j}, a_{j+1}) operations to sort the array elements using efficient bubble sort algorithm?
n^{2}/2  
n(n−1)/2  
n^{2}  
n(n+1)/2 
→ This means that in the first iteration it has to look at n elements; after that it looks at n − 1 elements (since the biggest element is already at the end), and so on, until only 1 comparison remains.
→ For n numbers, the total number of comparisons done is (n−1) + … + 2 + 1
= n(n−1)/2
= O(n^{2})
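This worst case can be checked by counting swaps on a reverse-sorted array, which forces every comparison to swap (a sketch; `bubble_sort_swaps` is an illustrative name):

```c
#include <assert.h>

/* Bubble sort that also counts swap(a[j], a[j+1]) operations. A reverse-sorted
   input is the worst case and yields exactly n(n-1)/2 swaps. */
int bubble_sort_swaps(int a[], int n)
{
    int swaps = 0;
    for (int pass = 0; pass < n - 1; pass++)
        for (int j = 0; j < n - 1 - pass; j++)
            if (a[j] > a[j + 1])
            {
                int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t;
                swaps++;
            }
    return swaps;
}
```

For n = 10 reversed elements this counts 45 = 10·9/2 swaps.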
Question 100 
T(n)=8T(n/2)+qn, if n>1
=p, if n=1
Where p,q are constants. The order of this algorithm is
n^{2}  
n^{n}  
n^{3}  
N 
a = 8, b = 2, k = 1, p = 0 (since f(n) = qn = Θ(n^{1}))
So, according to Master's theorem, a > b^{k} (8 > 2):
= O(n^{log_2 8})
= O(n^{3})
Question 101 
f(int Y[10], int x)
{
    int i, j, k;
    i = 0; j = 9;
    do
    {
        k = (i + j) / 2;
        if (Y[k] < x)
            i = k;
        else
            j = k;
    } while ((Y[k] != x) && (i < j));
    if (Y[k] == x)
        printf("x is in the array");
    else
        printf("x is not in the array");
}
On which of the following contents of 'Y' and 'x' does the program fail?
Y is [1 2 3 4 5 6 7 8 9 10] and x<10  
Y is [1 3 5 7 9 11 13 15 17 19] and x<1  
Y is [ 2 2 2 2 2 2 2 2 2 2] and x>2  
Y is [2 4 6 8 10 12 14 16 18 20] and 2 < x < 20 and x is even 
First iteration:
i = 0, j = 9
k = (0+9)/2 = 4
With all elements equal to "2" and x > 2, the condition if (Y[k] < x) is true, so i = k = 4.
Second iteration:
i = 4, j = 9, k = (4+9)/2 = 6, so i = k = 6.
Third iteration:
i = 6, j = 9, k = (6+9)/2 = 7, so i = k = 7.
Fourth iteration:
i = 7, j = 9, k = (7+9)/2 = 8, so i = k = 8.
From the fifth iteration onwards, k = (8+9)/2 = 8 every time, so i stays at 8 and the while condition (Y[k] != x) && (i < j) remains true.
● The above program does not work for the cases where the element to be searched is the last element of Y[ ] or is greater than the last (maximum) element of Y[ ].
● For such cases, the program goes into an infinite loop, because i is assigned k in every iteration and i never becomes equal to or greater than j.
● So the while condition never becomes false.
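For contrast, a standard binary search avoids the infinite loop by moving i past the middle (i = k + 1) and shrinking j to k − 1, so the search interval strictly shrinks every iteration (a corrected sketch, not the code from the question):

```c
#include <assert.h>

/* Returns the index of x in the sorted array Y[0..n-1], or -1 if absent.
   Unlike the buggy version, i and j always move past k, so the interval
   [i, j] strictly shrinks and the loop must terminate. */
int binary_search(const int Y[], int n, int x)
{
    int i = 0, j = n - 1;
    while (i <= j)
    {
        int k = i + (j - i) / 2;
        if (Y[k] == x)
            return k;
        else if (Y[k] < x)
            i = k + 1;
        else
            j = k - 1;
    }
    return -1;
}
```

It terminates (returning -1) even on the all-2s array with x > 2 that hangs the original.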
Question 102 
max(f(n),g(n))  
min(f(n),g(n))  
f(n)+g(n)  
f(n) * g(n) 
f(n) is the order of M1.
g(n) is the order of M2.
To find the order of the combined algorithm, there are three possible cases for f(n) and g(n):
Case 1: f(n) > g(n)
In this case we take O(f(n)) as the complexity of the algorithm; since g(n) is a lower order term, we can ignore it.
Case 2: f(n) < g(n)
In this case we take O(g(n)) as the complexity of the algorithm; since f(n) is a lower order term, we can ignore it.
Case 3: f(n) = g(n)
The time complexity can be either O(g(n)) or O(f(n)), which are asymptotically equal. So the order of the algorithm is max(f(n), g(n)).
Question 103 
Ω(logn)  
Ω(nlogn)  
Ω(n)  
Ω(n^{2}) 
Question 104 
Greedy  
Dynamic  
Divide and conquer  
Backtracking 
→ Dijkstra’s algorithm is a classic example of the greedy method.
→ Floyd–Warshall is a classic example of dynamic programming.
→ Quicksort is a classic example of divide and conquer.
Question 105 
Divide and conquer  
Backtracking  
Heuristic approach  
Greedy approach 
→ The solutions to the subproblems are then combined to give a solution to the original problem.
→ So Merge Sort first divides the array into equal halves and then combines them in a sorted manner.
Question 106 
m*n  
minimum of m,n  
Maximum of m,n  
m+n−1 
→ Since we need to merge 2 at a time, the optimal strategy would be to take smallest size lists first.
→ The reason for picking smallest two items is to carry minimum items for repetition in merging.
Question 107 
Bubble sort and Insertion sort  
Bubble sort and Quicksort  
Bubble sort and merge sort  
Quicksort and merge sort 
Quick sort and merge sort are both based on the divide and conquer paradigm, which works in quite a similar way for both.
In both algorithms, the larger problem is divided into smaller subproblems until a base case is reached, and then the smaller solutions are combined into the final solution.
Question 108 
int IsPrime(int n)
{
    int i;
    for (i = 2; i <= sqrt(n); i++)   /* needs #include <math.h> */
        if (n % i == 0)
        {
            printf("Not Prime\n");
            return 0;
        }
    return 1;
}
Let T(n) denotes the number of times the for loop is executed by the program on input n.
Which of the following is TRUE?
T(n) = O( √n) and T(n) = Ω ( √n)  
T(n) = O( √n) and T(n) = Ω (1)  
T(n) = O(n) and T(n) = Ω ( √n)  
None of the above 
Question 109 
A) Dijkstra's algorithm
B) Floyd's algorithm
C) Prim's algorithm
D) Warshall's algorithm
Both A and B  
Both A and C  
Both C and D  
Both B and D 
Question 110 
max(f(n),g(n))  
min(f(n),g(n))  
f(n)+g(n)  
f(n) * g(n) 
So, we will prefer max(f(n),g(n)).
Question 111 
T(n)=8T(n/2)+qn, if n>1
=p, if n=1
Where p,q are constants. The order of this algorithm is
n^{ 2}  
n^{ n}  
n ^{3}  
N 
Case 1: a > b^{k}
Here, a = 8, b = 2, k = 1, p = 0
O(n^{log_2 8}) = O(n^{log_2 2^3}) = O(n^{3 log_2 2})   [log_2 2 = 1]
= O(n^{3})
Question 112 
f(int Y[10], int x)
{
    int i, j, k;
    i = 0; j = 9;
    do
    {
        k = (i + j) / 2;
        if (Y[k] < x)
            i = k;
        else
            j = k;
    } while ((Y[k] != x) && (i < j));
    if (Y[k] == x)
        printf("x is in the array");
    else
        printf("x is not in the array");
}
On which of the following contents of 'Y' and 'x' does the program fail?
Y is [1 2 3 4 5 6 7 8 9 10] and x<10  
Y is [1 3 5 7 9 11 13 15 17 19] and x<1  
Y is [ 2 2 2 2 2 2 2 2 2 2] and x>2  
Y is [2 4 6 8 10 12 14 16 18 20] and 2 < x < 20 and x is even 
First iteration:
i = 0, j = 9
k = (0+9)/2 = 4
With all elements equal to "2" and x > 2, the condition if (Y[k] < x) is true, so i = k = 4.
Second iteration:
i = 4, j = 9, k = (4+9)/2 = 6, so i = k = 6.
Third iteration:
i = 6, j = 9, k = (6+9)/2 = 7, so i = k = 7.
Fourth iteration:
i = 7, j = 9, k = (7+9)/2 = 8, so i = k = 8.
From the fifth iteration onwards, k = (8+9)/2 = 8 every time, so i stays at 8 and the while condition (Y[k] != x) && (i < j) remains true.
→ The above program does not work for the cases where the element to be searched is the last element of Y[ ] or is greater than the last (maximum) element of Y[ ].
→ For such cases, the program goes into an infinite loop, because i is assigned k in every iteration and i never becomes equal to or greater than j.
So, the while condition never becomes false.
Question 113 
Linear search  
Hash search  
Binary search  
All of these 
→ Binary search requires the elements to be in sorted order; at every step we search within half of the remaining elements, based on the middle value.
Question 114 
A process of rearranging a given set of objects in a specific order  
To facilitate the later search for members of the sorted set  
Is a relevant and essential activity, particularly in data processing  
All of these 
● ordering: arranging items in a sequence ordered by some criterion;
● categorizing: grouping items with similar properties.
The most common uses of sorted sequences are:
● making lookup or search efficient;
● making merging of sequences efficient;
● enabling processing of data in a defined order.
Question 115 
Bubble  
Shake  
Tree  
Insertion 
Question 116 
Knight's tour  
Eight Queens problem  
Kings problem  
Rooks problem 
● The eight queens puzzle is an example of the more general n queens problem of placing n nonattacking queens on an n×n chessboard, for which solutions exist for all natural numbers n with the exception of n=2 and n=3
Question 117 
Insertion sort  
Heap sort  
Merge sort  
Bubble sort 
1. Mergesort
2. Quicksort.
Question 118 
O(nlogn)  
O(n)  
O(logn)  
O(n*n) 
→ So, it takes time complexity O(n)
Question 119 
2  
10  
8  
5 
Step 2: The maximum number of comparisons to locate a particular record among 32 sorted records using binary search is O(log_{2} 32) = 5.
Question 120 
For the recurrence relation T(n) = 2 + T(n  1), where T(0)=2, T(n)=?
n^{2}  
2n+2  
log(n)  
2^{n} 
T(0) = 2
T(n) = T(n−1) + 2
= [T(n−2) + 2] + 2 = T(n−2) + 4
= [T(n−3) + 2] + 4 = T(n−3) + 6
⋮
= T(n−k) + 2k
Note: Let k = n
Then T(n) = T(0) + 2n = 2 + 2n
∴ T(n) = 2n + 2
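Evaluating the recurrence from the question (T(0) = 2, T(n) = 2 + T(n−1)) directly confirms the closed form 2n + 2 (a minimal sketch):

```c
#include <assert.h>

/* Direct evaluation of T(n) = 2 + T(n-1) with T(0) = 2, which unrolls to
   T(n) = 2n + 2. */
int T(int n)
{
    if (n == 0)
        return 2;
    return 2 + T(n - 1);
}
```

For example, T(5) = 12 = 2·5 + 2.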
Question 121 
Travelling salesperson problem belongs to which category of problems?
Satisfiable  
Non Solvable  
Decision  
Optimization 
Question 122 
Stage (i) 12,3,40,2
Stage (ii) 12,3,2,40
Stage (iii) 3,12,2,40
Stage (iv) 3,2,12,40
Stage (v) 2,3,12,40
The array has been sorted using
Insertion sort  
Selection sort  
Quick sort  
Bubble sort 
→ The stages above show the successive swaps made by bubble sort (several stages per pass).
→ In bubble sort, after each pass the largest remaining element reaches its final position at the end.
Question 123 
2*log_{2} n  
log_{2} 2n  
log_{2} (2n+1)  
2*log_{2} n+1 
Question 124 
floor ((i+1)/2)  
ceiling ((i+1)/2)  
floor (i/2)  
ceiling (i/2) 
Left child of node i is at index: 2*i
Right child of node i is at index: 2*i+1
So the parent of node i is at index: floor(i/2)
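With the same 1-based array layout, the parent index is the inverse of the child formulas; a minimal sketch (illustrative function names):

```c
#include <assert.h>

/* 1-based array representation of a binary heap: the children of node i sit
   at 2i and 2i+1, so the parent of node i is floor(i/2), which is exactly
   C's integer division. */
int parent(int i)      { return i / 2; }
int left_child(int i)  { return 2 * i; }
int right_child(int i) { return 2 * i + 1; }
```

Both children of node 3 (indices 6 and 7) map back to 3 under integer division by 2.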
Question 125 
3  
4  
5  
6 
Height of the binary search tree = 3
Question 126 
Every minimum spanning tree of G must contain E_{ min}.  
If E _{max} is in minimum spanning tree, then its removal must disconnect G.  
No minimum spanning tree contains E_{ max}.  
G has a unique minimum spanning tree. 
Minimum Spanning Tree:
Question 127 
O(n log n)  
O(n ^{2} log n)  
O(n ^{2} + log n)  
O(n ^{3} ) 
2. The length of each string is n, so the time taken for one comparison is O(n).
3. For one level (in bottom-up order): every element has to be compared against another element, so the number of comparisons per level is O(n).
4. For one level (in bottom-up order): with O(n) comparisons of O(n) each, the total time taken by one level is O(n^{2}).
5. Copying from level to level also takes O(n^{2}).
So, for level 1: O(n^{2}) + O(n^{2}).
6. Second level: O(n^{2}) + O(n^{2}).
⋮
Finally, with log n levels: (log n)*O(n^{2}) = O(n^{2} log n)
Question 128 
3, 4, 5, 7, 9, 14, 20, 18, 17, 16, 15  
20, 18, 17, 16, 15, 14, 3, 4, 5, 7, 9  
20, 18, 17, 16, 15, 14, 9, 7, 5, 4, 3  
3, 4, 5, 7, 9, 14, 15, 16, 17, 18, 20 
Question 129 
↔~∧pq∨ ~ ~ pq, pq∧~p~q~∨↔  
↔~∧pq∨ ~ p~q, pq∧~p~q~∨↔  
↔~∧pq∨ ~ ~ pq, pq∧~p~~q∨↔  
↔~∧pq∨ ~ p~ q, pq∧~p~ ~q∨↔ 
(~(p ∧ q)) ↔ (~p ∨ ~q)
It clearly specifies that
↔ is the root,
(~(p ∧ q)) is the left subtree,
(~p ∨ ~q) is the right subtree.
Step 2: Finally, the tree looks like
Step 3: The prefix traversal visits Root, Left, Right: ↔ ~ ∧ p q ∨ ~ p ~ q
Step 4: The postfix traversal visits Left, Right, Root: p q ∧ ~ p ~ q ~ ∨ ↔
Question 130 
x * 3 && 3 || j | k
-1  
0  
1  
2 
Step 2: && is logical AND. If both operands are TRUE, it returns 1; otherwise 0.
1.5 && 3 is TRUE. So, it returns 1.
Step 3: j | k is the bitwise OR operator. It returns 1.
Step 4: ((x * 3) && 3) || (j | k) is the logical OR operator. It returns 1.
Note: The precedence grouping is ((x * 3) && 3) || (j | k)
Question 131 
O(lg n)  
O(n lg n)  
O(n)  
O(n ^{2} ) 
→ Traversing all the nodes of a binary search tree with n nodes and printing them in order using inorder traversal.
→ Inorder traversal will print all elements in sorted order. It takes worst case time complexity is O(n).
Possibility 2:
If the elements were inserted in sorted order, the tree may be left-skewed or right-skewed; the traversal still visits every node once, so the worst-case time complexity is O(n).
Question 132 
Both time and space complexities are better in recursive than in nonrecursive program.  
Both time and space complexities are better in nonrecursive than in recursive program  
Time complexity is better in recursive version but space complexity is better in nonrecursive version of the program.  
Space complexity is better in recursive version but time complexity is better in nonrecursive version of the program. 
→ Recursion uses more memory but is sometimes clearer and more readable. Using loops improves performance, but recursion can sometimes be better for the programmer (and their productivity). It depends on what you want to implement and on what is more important to you (readability, performance, ...).
Example: Factorial of a number.
Question 133 
It shows which of the following depth first forest?
{a, b, e} {c, d, f, g, h}  
{a, b, e} {c, d, h} {f, g}  
{a, b, e} {f, g} {c, d} {h}  
{a, b, c, d} {e, f, g} {h} 
Based on the definition of a DFS forest, option A is correct.
Question 134 
O (log _{2} n)  
O (n log _{2} n)  
O (1)  
O (n) 
1. Stable: Equal keys aren't reordered.
2. Operates in place, requiring O(1) extra space.
3. Worst case O(nlog(n)) key comparisons.
4. Worst case O(n) swaps.
5. Adaptive: Speeds up to O(n) when data is nearly sorted or when there are few unique keys.
Question 135 
According to Kruskal’s algorithm, first we have to sort the edges by weight.
The edge weights in sorted order are: 10, 12, 14, 16, 18, 22, 24, 25, 28
Question 136 
The item to be searched is in somewhere middle of the array  
The item to be searched is not in the array  
The item to be searched is in the last of the array  
The item to be searched is either in the last or not in the array 
→ A linear search runs in at worst linear time and makes at most n comparisons, where ‘n’ is the length of the list. If each element is equally likely to be searched, then linear search has an average case of n/2 comparisons, but the average case can be affected if the search probabilities for each element vary.
→ Linear search is rarely practical because other search algorithms and schemes, such as the binary search algorithm and hash tables, allow significantly faster searching for all but short lists.
Question 137 
Counting microseconds  
Counting number of key operations  
Counting number of statements  
Counting kilobytes of algorithm 
Question 138 
Selection sort  
Bubble sort  
Radix sort  
Insertion sort 
Phase 1:
1. We use n bins, one for each of the integers 0, 1, …, n−1. We place each integer i of the list to be sorted into the bin numbered (i mod n); each bin will then contain a list of integers leaving the same remainder when divided by n.
At the end, we concatenate the bins in order to obtain a list L.
Phase 2:
1. The integers on the list L are redistributed into bins, but using the bin selection
function: ⌊i/n⌋
2. Now append integers to the ends of lists. Finally, concatenate the lists to get the final sorted sequence.
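The two phases above can be sketched as stable counting-sort passes over the same bin keys, for n integers in the range [0, n·n) (a minimal sketch; `radix2_sort` is an illustrative name, and it uses C99 variable-length arrays):

```c
#include <assert.h>
#include <string.h>

/* Two-phase radix sort for n integers in [0, n*n): phase 0 bins by
   (value mod n), phase 1 bins by (value / n). Each phase is a stable
   counting-sort pass, matching the bin description above. */
void radix2_sort(int a[], int n)
{
    int tmp[n], count[n];

    for (int phase = 0; phase < 2; phase++)
    {
        memset(count, 0, sizeof count);
        for (int i = 0; i < n; i++)
            count[phase ? a[i] / n : a[i] % n]++;
        for (int b = 1; b < n; b++)
            count[b] += count[b - 1];          /* prefix sums = bin boundaries */
        for (int i = n - 1; i >= 0; i--)       /* backwards keeps the pass stable */
        {
            int key = phase ? a[i] / n : a[i] % n;
            tmp[--count[key]] = a[i];
        }
        memcpy(a, tmp, n * sizeof a[0]);
    }
}
```

Stability of each pass is what makes the second (more significant) key finish the sort.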
Question 139 
O(1)  
O(Log n)  
O(n)  
O(n ^{2} ) 
→ The heapify function does not take O(log n) space; it takes O(log n) time in the worst case. Here we do not implement heapify through recursion: we use an iterative method, so no auxiliary space is required, and only the input array is used.
So, heap sort can be done in O(1) space complexity.
Question 140 
Binary search  
Maximum of n number  
Quick sort  
Fibonacci search 
Question 141 
Bubble sort  
Selection sort  
Quick sort  
Insertion sort 
So, we can eliminate option B and option C because they give the worst-case complexity O(n^{2}) when the array is already sorted.
Bubble sort and insertion sort both give the best-case complexity O(n), but the question adds the small constraint that “there are a few random elements”. So, insertion sort is the more appropriate answer.
Question 142 
O(n!) and O(n log n)  
O(n^{n}) and O(n log n)  
O(n!) and O(log n!)  
O(n^{n}) and O(log n!) 
→ Given the logarithm of the factorial function, log n!.
log n! ≤ log n^{n} = n log n, and log n! ≥ (n/2) log(n/2),
so log n! = Θ(n log n).
Writing this in asymptotic order, the value is O(n log n).
Question 143 
Search  
Search and Insert  
Search and Delete  
Search, Insert and Delete 
Search (or) Find: For the Search (or) Find operation, we perform a normal BST find followed by a splay operation on the node found (or the leaf node last encountered, if the key was not found). We can charge the cost of going down the tree to the splay operation. Thus the amortized cost of find is O(log n).
Insert/delete: The amortized cost of the splay operation is also O(log n), and thus the amortized cost of insert/delete is O(log n).
Question 144 
0.5 β lg n  
0.5 (1 – β) lg n  
– (lg n)/(lg β)  
– (lg n)/lg (1 – β) 
Question 145 
θ(lg m)  
θ(m)  
θ(mlg m)  
θ(lglg m) 
a = 1, b = 4/3, k = 0, p = 0
Here, a = b^{k}.
So, T(m) = θ(m^{k} log^{p+1} m)
T(m) = θ(log m)
Question 146 
Prim’s algorithm  
Floyd  Warshall algorithm  
Johnson’s algorithm  
Dijkstra’s algorithm 
Question 147 
2.4  
1.87  
3.0  
2.15 
Step 1: Select two characters with smallest probabilities and merge them.
Step 2: Select two characters from Step 1 with smallest probabilities and merge them.
Step 3: Select two characters (from Step 2) with smallest probabilities and merge them.
Step 4: Merge the remaining two probabilities.
A = 1000 (4bits)
E = 1001 (4bits)
D = 101 (3bits)
C = 11 (2bits)
B = 0 (1bit)
Average length = ((0.08×4)+(0.12×4)+(0.15×3)+(0.25×2)+(0.40×1)) / (0.08+0.12+0.15+0.25+0.40)
= 2.15/1.0
= 2.15
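The average-length computation above can be checked with a short helper (a sketch; `average_code_length` is an illustrative name):

```c
#include <assert.h>

/* Weighted average code length: the sum of (probability * code length) over
   all symbols. For a valid probability distribution the divisor is 1. */
double average_code_length(const double prob[], const int len[], int n)
{
    double total = 0.0;
    for (int i = 0; i < n; i++)
        total += prob[i] * len[i];
    return total;
}
```

With the probabilities and code lengths derived above (A=4, E=4, D=3, C=2, B=1 bits), the result is 2.15.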
Question 148 
(a)(iv),(b)(i), (c)(iii), (d)(ii)  
(a)(iv),(b)(iii), (c)(i), (d)(ii)  
(a)(iii),(b)(iv), (c)(ii), (d)(i)  
(a)(iv),(b)(iii), (c)(ii), (d)(i) 
Single-source shortest paths use the Greedy approach.
Strassen’s matrix multiplication uses the Divide and conquer technique.
Optimal binary search trees use Dynamic programming.
Question 149 
45  
72  
360  
450 
Octal number having→ 5 digits
The octal number system base value= 8
The maximum number of comparisons = (number of items) * (number of digits) * (radix)
= 9*5*8
= 360
Question 150 
O(x log x)  
O(x^{2})  
O(x^{3})  
O(x^{2} log x) 
Question 151 
O(log n)  
O(n log n)  
O(n)  
None of the above 
Question 152 
Carnot function  
Riemann function  
Bounded function  
Ackermann function 
Question 153 
O(n^{5}+2n^{2}) & O(n^{3}*n!)  
O(n^{5}) & O(n^{3}*2^{n})  
O(n^{5}) & O(n^{3}* n!)  
O(n^{5}+2n^{2}) & O(n^{3}*2^{n}) 
Here, it is a product of two polynomials.
(n log n + n^{2}) → The dominant term is n^{2}; we can write it asymptotically as O(n^{2}).
(n^{3} + 2) → The dominant term is n^{3}; we can write it asymptotically as O(n^{3}).
= O(n^{2}) * O(n^{3})
= O(n^{5})
Step 2: Second function (n! + 2^{n})(n^{3} + log(n^{2}+1))
Here, it is a product of two polynomials.
(n! + 2^{n}) → The dominant term is n!; we can write it asymptotically as O(n!).
(n^{3} + log(n^{2}+1)) → The dominant term is n^{3}; we can write it asymptotically as O(n^{3}).
= O(n!) * O(n^{3})
= O(n^{3} * n!)
Question 154 
Complete graph  
Hamiltonian graph  
Euler graph  
None of the above 
→ Hamiltonian graph will get more than one spanning tree.
→ Euler graph will get more than one spanning tree.
Question 155 
125  
64  
36  
16 
n = 5
= 5^{5−2}
= 5^{3}
= 125
Question 156 
O (n)  
O (n log n)  
O (e log n)  
O (e) 
1. A = ∅
2. for each vertex v ∈ G.V
3.     MAKE-SET(v)
4. sort the edges of G.E into nondecreasing order by weight w
5. for each edge (u,v) ∈ G.E, taken in nondecreasing order by weight
6.     if FIND-SET(u) ≠ FIND-SET(v)
7.         A = A ∪ {(u,v)}
8.         UNION(u,v)
9. return A
Analysis:
The running time of Kruskal’s algorithm for a graph G=(V,E) depends on how we implement the disjoint-set data structure.
→ Initializing the set A in line 1 takes O(1) time, and the time to sort the edges in line 4 is O(E lg E). (We account for the cost of the |V| MAKE-SET operations in the for loop of lines 2–3 in a moment.)
→ The for loop of lines 5–8 performs O(E) FIND-SET and UNION operations on the disjoint-set forest.
→ Along with the |V| MAKE-SET operations, these take a total of O((V+E)α(V)) time, where α is a very slowly growing function.
→ Because we assume that G is connected, we have E ≥ V−1, and so the disjoint-set operations take O(E α(V)) time.
→ Moreover, since α(V) = O(lg V) = O(lg E), the total running time of Kruskal’s algorithm is O(E lg E).
→ Observing that E < V^{2}, we have lg E = O(lg V), and so we can restate the running time of Kruskal’s algorithm as O(E lg V).
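The pseudocode above can be sketched in C with a simple disjoint-set forest (illustrative names; path halving only, no union by rank, so the worst-case union cost is weaker than the α(V) bound quoted above; the fixed-size parent array assumes at most 64 vertices):

```c
#include <assert.h>
#include <stdlib.h>

struct edge { int u, v, w; };

static int parent_of[64];                 /* assumes at most 64 vertices */

/* FIND-SET with path halving. */
static int find_set(int x)
{
    while (parent_of[x] != x)
        x = parent_of[x] = parent_of[parent_of[x]];
    return x;
}

static int by_weight(const void *a, const void *b)
{
    return ((const struct edge *)a)->w - ((const struct edge *)b)->w;
}

/* Kruskal: sort edges by weight, accept each edge joining two different
   trees. Returns the total weight of the minimum spanning tree. */
int kruskal_mst_weight(struct edge e[], int m, int n)
{
    int total = 0;
    for (int v = 0; v < n; v++)
        parent_of[v] = v;                  /* MAKE-SET */
    qsort(e, m, sizeof e[0], by_weight);   /* O(E lg E) sort */
    for (int i = 0; i < m; i++)
    {
        int ru = find_set(e[i].u), rv = find_set(e[i].v);
        if (ru != rv)                      /* different trees: safe to add */
        {
            parent_of[ru] = rv;            /* UNION */
            total += e[i].w;
        }
    }
    return total;
}
```

On a small 4-vertex graph with edges (0,1,1), (1,2,2), (2,3,3), (0,3,4), (0,2,5), the MST uses the first three edges for a total weight of 6.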
Question 157 
Insertion in a sorted array takes constant time.  
Insertion in a sorted linear linked list takes constant time.  
Searching for a key in a sorted array can be done in O(log n) time.  
Searching for a key in a sorted linear linked list can be done in O(log n) time. 
Because the elements are in sorted order, we can use binary search; in the worst case it takes only O(log n) time.
Question 158 
O(e)  
O(n)  
O(e^{2})  
O(n^{2}) 
→ We know that E ≤ V^{2} (the bound is reached when the graph is complete), and so the time complexity is O(n^{2}).
Question 159 
Binary search  
Maximum of n numbers  
Quick sort  
Fibonacci search 
Question 160 
Ω(n)  
Ω(n/k)  
Ω(nlogk )  
Ω(n/klogn/k) 
(k!)^{n/k} ≤ 2^{h}
Taking the logarithm of both sides, we get:
h ≥ lg (k!)^{n/k}
= (n/k) lg(k!)
≥ (n/k)(k/2) lg(k/2)
= (1/2) n lg k − (1/2) n
= Ω(n log k)
Question 161 
n  
n^{2}  
n log n  
n^{3} 
a = 8, b = 2, k = 0 and p = 0
Case 1: a > b^{k}, i.e., 8 > 2^{0}
T(n) = O(n^{log_b a}) = O(n^{log_2 8})
= O(n^{3})
Question 162 
X = < B, C, D, C, A, B, C > and
Y = < C, A, D, B, C, B >
The length of longest common subsequence of X and Y is :
5  
3  
4  
2 
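The LCS length for these sequences can be computed with the standard dynamic-programming table (a minimal sketch; `lcs_length` is an illustrative name and the fixed 8×8 table assumes short inputs), confirming that the answer is 4:

```c
#include <assert.h>

/* Bottom-up dynamic programming for the length of the longest common
   subsequence: L[i][j] extends the match when X[i-1] == Y[j-1], otherwise
   takes the better of dropping one character from either string. */
int lcs_length(const char *X, int m, const char *Y, int n)
{
    int L[8][8] = {{0}};     /* assumes m, n < 8 for this sketch */

    for (int i = 1; i <= m; i++)
        for (int j = 1; j <= n; j++)
            if (X[i - 1] == Y[j - 1])
                L[i][j] = L[i - 1][j - 1] + 1;
            else
                L[i][j] = L[i - 1][j] > L[i][j - 1] ? L[i - 1][j] : L[i][j - 1];
    return L[m][n];
}
```

One such common subsequence of length 4 is C, D, C, B.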
Question 163 
2.40  
2.16  
2.26  
2.15 
Question 164 
88  
91  
49  
21 
Question 165 
O(b+m)  
O(bm)  
O(b^{m})  
O(m^{m}) 
Question 166 
Ω(lg n)  
Ω(n)  
Ω(n lg n)  
Ω(n^{2}) 
Proof:
→ The number of leaves l ≥ n!
→ Any binary tree of height h has ≤ 2^{h} leaves.
→ Hence, n! ≤ l ≤ 2^{h}
→ Take logs: h ≥ lg(n!)
→ Use Stirling’s approximation: n! > (n/e)^{n}
→ We get:
h ≥ lg (n/e)^{n}
= n lg(n/e)
= n lg n − n lg e
= Ω(n lg n)
Question 167 
O(1)  
O(lg n)  
O(n)  
O(n lg n) 
Question 168 
630  
580  
480  
405 
Question 169 
Divide and conquer paradigm  
Dynamic programming  
Greedy Approach  
Backtracking paradigm 
Question 170 
aiii, bi, cii, div  
aii, bi, civ, diii  
aii, bi, ciii, div  
biii, bii, ci, div 
Huffman coding→ Greedy approach
Optimal polygon triangulation→ Dynamic programming
Subset sum problem → Backtracking
Note: The subset sum problem can be solved with dynamic programming as well as with backtracking.
Question 171 
14875  
21000  
9375  
11875 
Question 172 
4n^{2}  
n  
4n4  
2n2 
2) Weight of the minimum spanning tree = 4|2−1| + 4|3−2| + 4|4−3| + .... + 4|n−(n−1)| = 4(n−1) = 4n−4
Question 173 
O (k (n + d))  
O(d (n + k))  
O((n + k) lg d)  
O((n + d) lg k) 
Question 174 
ai, biii, civ, dii  
ai, biii, cii, div  
aiii, bi, civ, dii  
aiii, bi, cii, div 
Bellman–Ford algorithm → O(V^{2}E)
Floyd–Warshall algorithm → O(V^{3})
Johnson’s algorithm → O(VE lg V)
Question 175 
X = < a, b, c, b, d, a, b >
Y = < b, d, c, a, b, a >
The longest common subsequence of X and Y is :
< b, c, a >  
< c, a, b >  
< b, c, a, a >  
< b, c, b, a > 
Question 176 
O(d n k)  
O(d n^{k})  
O((d +n) k)  
O(d (n + k)) 
Question 177 
O(lg n)  
O(n)  
O(n lg n)  
None of the above 
T(n) ≤ T(n/5)+T((7n/10)+6)+O(n)
Inductively verify that T(n)≤cn for some constant c.
T(n) ≤ c(n/5)+c(7n/10+6)+O(n)
≤ 9cn/10+6c+O(n)
≤ cn
In the above, choose c so that c((n/10) − 6) dominates the O(n) term for all n.
Note: The question writes ‘s’ in place of 5.
Question 178 
Greedy algorithm, θ (V^{3})  
Greedy algorithm, θ (V^{2} lg n)  
Dynamic programming, θ (V^{3})  
Dynamic programming, θ (V^{2} lg n) 