Algorithms
Question 1 
A message is made up entirely of characters from the set X = {P, Q, R, S, T}. The table of probabilities for each of the characters is shown below:
If a message of 100 characters over X is encoded using Huffman coding, then the expected length of the encoded message in bits is __________.
225  
226  
227  
228 
General procedure to solve Huffman coding problem
Step1: Arrange the characters in ascending (or descending) order of their probabilities.
Step2: Apply the optimal merge pattern procedure (repeatedly merge the two smallest probabilities).
Step3: Label each left branch 0 and each right branch 1, or vice versa.
= 2 × 0.34 + 2 × 0.22 + 2 × 0.19 + 3 × 0.17 + 3 × 0.08
= 2.25
∴ So, for 100 characters, 2.25 * 100 = 225
Question 2 
Consider a sequence of 14 elements: A = [−5, −10, 6, 3, −1, −2, 13, 4, −9, −1, 4, 12, −3, 0]. The subsequence sum S(i, j) = A[i] + A[i+1] + … + A[j]. Determine the maximum of S(i,j), where 0 ≤ i ≤ j < 14. (Divide and conquer approach may be used)
19  
39  
29  
09 
Ex:
{A, B, C, D}
{A, AB, AC, AD, ABC, ABD, ACD, B, BC, BD, BCD, C, CD, D }
Step1: Array of elements A = [−5, −10, 6, 3, −1, −2, 13, 4, −9, −1, 4, 12, −3, 0]
Step2: If the question asked for the maximum subsequence (not necessarily contiguous), we would simply pick all the positive elements:
{6, 3, 13, 4, 4, 12}
= 42
Step3: But according to the given recurrence relation, the sequence must be contiguous, and {6, 3, 13, 4, 4, 12} is not a contiguous subsequence.
Step4: The contiguous subsequence with maximum sum is {6, 3, −1, −2, 13, 4, −9, −1, 4, 12},
whose total is 29.
Note: No contiguous subsequence has a sum greater than 29.
Question 3 
An array of 25 distinct elements is to be sorted using quicksort. Assume that the pivot element is chosen uniformly at random. The probability that the pivot element gets placed in the worst possible location in the first round of partitioning (rounded off to 2 decimal places) is _____.
0.08  
0.01  
1  
8 
Step1: The pivot element is chosen uniformly at random.
Step2: The worst possible location for the pivot after the first partition is the first or the last position (i.e., the pivot is the minimum or the maximum element).
Step3: So there are 2 such choices among 25 distinct elements
= 2/25
= 0.08
Question 4 
There are n unsorted arrays: A_{1}, A_{2}, ..., A_{n}. Assume that n is odd. Each of A_{1}, A_{2}, ..., A_{n} contains n distinct elements. There are no common elements between any two arrays. The worst-case time complexity of computing the median of the medians of A_{1}, A_{2}, ..., A_{n} is
O(n)  
O(n log n)  
Ω(n^{2} log n)  
O(n^{2}) 
The median of one unsorted array of n elements can be found in O(n) worst-case time using the median-of-medians selection algorithm (it is similar in spirit to quicksort, but quicksort spends extra time completing the full partition).
→ For elements stored from index i to j, the median position is (i+j)/2:
1. If the count is odd, the position is Ceil((i+j)/2)
2. If the count is even, the position is Floor((i+j)/2)
> Here, the total number of arrays is n, and each median costs O(n), so computing all n medians takes
⇒ O(n)*O(n)
⇒ O(n^{2})
Finding the median of those n medians adds only O(n) more, so the overall worst case is O(n^{2}).
Note:
They clearly state that all elements are distinct,
and that there are no common elements between any two arrays.
Question 5 
Consider the following undirected graph G:
Choose a value for x that will maximize the number of minimum weight spanning trees (MWSTs) of G. The number of MWSTs of G for this value of x is _________.
4  
5  
6  
7 
If x = 5 then the total number of MWSTs are 4.
(Checking each candidate value x = 1, 2, 3, 4, 5 in turn confirms that x = 5 maximizes the number of MWSTs.)
Question 6 
Consider the weights and values of items listed below. Note that there is only one unit of each item.
The task is to pick a subset of these items such that their total weight is no more than 11 Kgs and their total value is maximized. Moreover, no item may be split. The total value of items picked by an optimal algorithm is denoted by V_{opt}. A greedy algorithm sorts the items by their value-to-weight ratios in descending order and packs them greedily, starting from the first item in the ordered list. The total value of items picked by the greedy algorithm is denoted by V_{greedy}.
The value of V_{opt} − V_{greedy} is ______ .
16  
17  
18  
19 
V_{opt} is clearly = 60
For V_{greedy} use the table (Do not take the fraction as per the question),
Item 4 picked,
Profit = 24
Remaining weight = 11 – 2 = 9
Next, item 3 is picked (item 1 cannot be picked since its weight exceeds the remaining capacity),
Profit = 24 + 20 = 44
Remaining capacity = 9 – 4 = 5
Now no item can be picked with available capacity.
So V_{greedy} = 44
∴ V_{opt} – V_{greedy} = 60 – 44 = 16
Question 7 
Consider the following functions from positive integers to real numbers
The CORRECT arrangement of the above functions in increasing order of asymptotic complexity is:
Step1: Take n = 2048 = 2^{11} (always take n to be a very large number)
Step2: Divide functions into 2 ways
1. Polynomial functions
2. Exponential functions
Step3: The above functions are all polynomial or logarithmic, so simply substitute the value of n.
First compare the constant-like terms:
→ 100/2048 ≈ 0.0488
→ 10 > 100/2048
→ log_{2} 2048 = 11
→ √n = √2048 ≈ 45.25
→ n = 2048
So, Option B is correct
Question 8 
Consider the following table
Match the algorithm to design paradigms they are based on:
(P)↔(ii), Q↔(iii), (R)↔(i)  
(P)↔(iii), Q↔(i), (R)↔(ii)  
(P)↔(ii), Q↔(i), (R)↔(iii)  
(P)↔(i), Q↔(ii), (R)↔(iii) 
(Q) QuickSort is a Divide and Conquer algorithm.
(R) Floyd Warshall Algorithm is for solving the All Pairs Shortest Path problem using Dynamic Programming.
Some important points regarding Greedy Vs Dynamic Programming
Greedy:
→ It always gives polynomial time complexity.
→ It does not always give an optimal result.
→ At each step it selects only the minimum or the maximum among the available possibilities.
→ Ex: Dijkstra’s algorithm for SSSP, Optimal Merge Pattern, Huffman coding, Fractional Knapsack problem, etc.
Dynamic Programming:
→ It gives either polynomial or exponential time complexity.
→ It always gives an optimal result.
→ It checks all possibilities of a problem.
→ Ex: Longest Common Subsequence, Matrix Chain Multiplication, Travelling Salesman Problem, etc.
Question 9 
Let G = (V, E) be any connected undirected edgeweighted graph. The weights of the edges in E are positive and distinct. Consider the following statements:

(I) Minimum Spanning Tree of G is always unique.
(II) Shortest path between any two vertices of G is always unique.
Which of the above statements is/are necessarily true?
(I) only  
(II) only  
both (I) and (II)  
neither (I) nor (II) 
Let us take an example
Step 1:
Using Kruskal’s algorithm, arrange the edge weights in ascending order.
17, 18, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30
Step 2:
Step 3:
17+18+20+21+22+23+26 = 147
Step 4:
Here, all the edge weights are distinct, so the MST is unique: exactly 1 possible MST.
Statement II: Not always true. Even with distinct edge weights, two different paths between the same pair of vertices can have equal total length (e.g., 3 + 5 = 2 + 6 = 8), so the shortest path need not be unique. Take an example graph and try it.
So, we have to pick the most appropriate answer: (I) only.
Question 10 
Let A be an array of 31 numbers consisting of a sequence of 0’s followed by a sequence of 1’s. The problem is to find the smallest index i such that A[i] is 1 by probing the minimum number of locations in A. The worst case number of probes performed by an optimal algorithm is _________.
5  
6  
7  
8 
→ Since the array is a sequence of 0’s followed by a sequence of 1’s, it is already sorted, so we can apply binary search directly without sorting it.
Number of probes = ceil(log_{2} 31) = ceil(4.95…) = 5
Question 11 
Match the algorithms with their time complexities:
Algorithm | Time complexity
(P) Towers of Hanoi with n disks | (i) Θ(n^{2})
(Q) Binary search given n sorted numbers | (ii) Θ(n log n)
(R) Heap sort given n numbers in the worst case | (iii) Θ(2^{n})
(S) Addition of two n×n matrices | (iv) Θ(log n)
P→(iii), Q→(iv), R→(i), S→(ii)  
P→(iv), Q→(iii), R→(i), S→(ii)  
P→(iii), Q→(iv), R→(ii), S→(i)  
P→(iv), Q→(iii), R→(ii), S→(i) 
→ Tower of Hanoi with n disks takes θ(2^{n}) time
It is a mathematical game or puzzle.
It consists of three rods and a number of disks of different sizes, which can slide onto any rod.
The puzzle starts with the disks in a neat stack in ascending order of size on one rod, the smallest at the top, thus making a conical shape.
The objective of the puzzle is to move the entire stack to another rod, obeying the following simple rules:
1. Only one disk can be moved at a time.
2. Each move consists of taking the upper disk from one of the stacks and placing it on top of another stack.
3. No disk may be placed on top of a smaller disk.
With 3 disks, the puzzle can be solved in 7 moves.
The minimal number of moves required to solve a Tower of Hanoi puzzle is 2^{n} − 1, where n is the number of disks.
→ Binary search given n sorted numbers takes Θ(log_{2} n)
→ Heap sort given n numbers in the worst case takes Θ(n log n)
→ Addition of two n×n matrices takes Θ(n^{2})
Question 12 
Consider the recurrence function
Then T(n) in terms of Θ notation is
Θ(loglogn)  
Θ(logn)  
Θ(√n)  
Θ(n) 
T(n) = 2T(n^{(1⁄2)}) + 1
Here, Assume n = 2^{k}
T(2^{k}) = 2T((2^{k})^{(1⁄2)}) + 1
T(2^{k}) = 2T(2^{(k/2)} ) + 1
Assume T(2^{k}) = S(k)
S(k) = 2S(k/2) + 1
Apply Master’s theorem, Case 1:
a = 2, b = 2
S(k) = Θ(k^{log_{2}2}) = Θ(k)
but S(k) = T(2^{k}), so
T(2^{k}) = Θ(k)
but n = 2^{k}
k = logn
T(n) = θ(logn)
Question 13 
Consider the following C function.
int fun (int n)
{
  int i, j;
  for (i = 1; i <= n; i++) {
    for (j = 1; j < n; j += i) {
      printf("%d %d", i, j);
    }
  }
}
Time complexity of fun in terms of Θ notation is
Θ(n√n)  
Θ(n^{2})  
Θ(n logn)  
Θ(n^{2} logn) 
int fun(int n)
{
int i, j;
for (i = 1; i <= n ; i++) /* It is independent loop. So, it takes O(n) time */
{
for (j = 1; j < n; j += i) /* Dependent loop: the step grows with i, so it runs about n/i times; summed over i = 1..n this gives n(1 + 1/2 + … + 1/n) ≈ n log n iterations in total */
{
printf("%d %d", i, j); /* It takes O(1) time */
}
}
}
So, the overall complexity of the program is Θ(n log n).
Question 14 
The worst case running times of Insertion sort, Merge sort and Quick sort, respectively, are:
Θ(nlogn), Θ(nlogn), and Θ(n^{2})  
Θ(n^{2} ), Θ(n^{2} ), and Θ(nlogn)  
Θ(n^{2}), Θ(nlogn), and Θ(nlogn)  
Θ(n^{2}), Θ(nlogn), and Θ(n^{2}) 
Question 15 
Let G be a complete undirected graph on 4 vertices, having 6 edges with weights being 1, 2, 3, 4, 5, and 6. The maximum possible weight that a minimum weight spanning tree of G can have is __________.
7  
8  
9  
10 
Assign the weights so that the cheap edges form a cycle and a heavy edge is forced into the tree.
As the weights are 1, 2, 3, 4, 5, 6, let the three edges incident on A take the maximum weights: AB = 4, AD = 5, AC = 6.
The remaining edges among B, C, D take the small weights: BD = 2, BC = 3, CD = 1.
Kruskal’s algorithm then picks CD = 1 and BD = 2, skips BC = 3 (it would form a cycle), and must take AB = 4 to connect A.
So 1 + 2 + 4 = 7 is the maximum possible weight of a minimum spanning tree of this graph.
Question 16 
G = (V,E) is an undirected simple graph in which each edge has a distinct weight, and e is a particular edge of G. Which of the following statements about the minimum spanning trees (MSTs) of G is/are TRUE?

I. If e is the lightest edge of some cycle in G, then every MST of G includes e
II. If e is the heaviest edge of some cycle in G, then every MST of G excludes e
I only  
II only  
both I and II  
neither I nor II 
Statement I: False. The MSTs of G may or may not include the lightest edge of a cycle.
Counterexample: take a rectangular graph labelled with P, Q, R, S.
Connect with PQ = 5, QR = 6, RS = 8, SP = 9, and diagonal PR = 7.
In the cycle R–S–P–R, PR = 7 is the lightest edge.
But the MST is {PQ, QR, RS} with cost 19,
and it does not include PR.
Statement II: True.
Suppose some minimum spanning tree contains e, the heaviest edge of a cycle C.
Removing e splits the tree into two components; since e lies on cycle C, some other edge e′ of C reconnects the two components, and w(e′) < w(e) because e is the heaviest edge of C (all weights are distinct).
Replacing e by e′ gives a cheaper spanning tree, a contradiction. So every MST excludes e.
Question 17 
Assume that the algorithms considered here sort the input sequences in ascending order. If the input is already in ascending order, which of the following are TRUE?

I. Quicksort runs in Θ(n^{2}) time
II. Bubblesort runs in Θ(n^{2}) time
III. Mergesort runs in Θ(n) time
IV. Insertion sort runs in Θ(n) time
I and II only  
I and III only  
II and IV only  
I and IV only 
→ The recurrence relation for Quicksort, if the elements are already sorted, is
T(n) = T(n − 1) + O(n); with the help of the substitution method it solves to O(n^{2}).
→ The recurrence relation for Merge sort, if the elements are already sorted, is
T(n) = 2T(n/2) + O(n); with the help of the substitution method it solves to O(n log n).
We can also use master's theorem [a=2, b=2, k=1, p=0] for above recurrence.
Question 18 
The Floyd–Warshall algorithm for all-pairs shortest paths computation is based on
Greedy paradigm.  
DivideandConquer paradigm.  
Dynamic Programming paradigm.  
Neither Greedy nor DivideandConquer nor Dynamic Programming paradigm. 
It takes O(V^{3}) worst-case time and O(V^{2}) worst-case space.
→ The Floyd–Warshall algorithm is an algorithm for finding shortest paths in a weighted graph with positive or negative edge weights (but with no negative cycles).
→ A single execution of the algorithm will find the lengths (summed weights) of the shortest paths between all pairs of vertices.
Although it does not return details of the paths themselves, it is possible to reconstruct the paths with simple modifications to the algorithm.
Question 19 
Let A_{1}, A_{2}, A_{3} and A_{4} be four matrices of dimensions 10 × 5, 5 × 20, 20 × 10, and 10 × 5, respectively. The minimum number of scalar multiplications required to find the product A_{1}A_{2}A_{3}A_{4} using the basic matrix multiplication method is _________.
1500  
1501  
1502  
1503 
The optimal parenthesized sequence is A_{1}((A_{2}A_{3})A_{4}) out of many possibilities, the possibilities are
1. ((A_{1}A_{2})A_{3})A_{4}
2. ((A_{1}(A_{2}A_{3}))A_{4})
3. (A_{1}A_{2})(A_{3}A_{4})
4. A_{1}((A_{2}A_{3})A_{4})
5. A_{1}(A_{2}(A_{3}A_{4}))
→ A_{1}((A_{2}A_{3})A_{4}) = (5 x 20 x 10) + (5 x 10 x 5) + (10 x 5 x 5) = 1000 + 250 + 250 = 1500
Question 20 
The given diagram shows the flowchart for a recursive function A(n). Assume that all statements, except for the recursive calls, have O(1) time complexity. If the worst case time complexity of this function is O(n^{α}), then the least possible value (accurate up to two decimal places) of α is _________.
2.3219280  
2.3219281  
2.3219282  
2.3219283 
According to the flowchart, there are 5 recursive calls in the worst case.
The remaining statements and return paths execute in constant time.
So, the total number of recursive calls per invocation is 5.
The Recurrence will be
A(n) = 5A(n/2) + O(1)
Apply Master’s theorem,
a = 5, b = 2, k = 0, p = 0
This is the a > b^{k} case, so
A(n) = Θ(n^{log_{b}a}) = Θ(n^{log_{2}5}) = Θ(n^{2.3219280…})
∴ α = 2.3219280
Question 21 
Which one of the following is the recurrence equation for the worst case time complexity of the Quicksort algorithm for sorting n(≥ 2) numbers? In the recurrence equations given in the options below, c is a constant.
T(n) = 2T(n/2) + cn  
T(n) = T(n – 1) + T(1) + cn  
T(n) = 2T(n – 1) + cn  
T(n) = T(n/2) + cn

Hence the recurrence relation is T(n) = T(n − 1) + T(1) + cn.
Question 22 
Match the following:
(P) Prim's algorithm for minimum spanning tree | (i) Backtracking
(Q) Floyd–Warshall algorithm for all pairs shortest paths | (ii) Greedy method
(R) Mergesort | (iii) Dynamic programming
(S) Hamiltonian circuit | (iv) Divide and conquer
Piii, Qii, Riv, Si  
Pi, Qii, Riv, Siii  
Pii, Qiii, Riv, Si  
Pii, Qi, Riii, Siv 
Floyd–Warshall updates its distance matrix at each iteration, which is exactly dynamic programming.
Merge sort: we first divide the array and then merge the halves to perform sorting, hence divide and conquer.
A Hamiltonian circuit must visit every vertex exactly once; if some vertex would repeat on the current path, the algorithm backtracks.
Question 23 
An algorithm performs (log N)^{1/2} find operations, N insert operations, (log N)^{1/2} delete operations, and (log N)^{1/2} decreasekey operations on a set of data items with keys drawn from a linearly ordered set. For a delete operation, a pointer is provided to the record that must be deleted. For the decreasekey operation, a pointer is provided to the record that has its key decreased. Which one of the following data structures is the most suited for the algorithm to use, if the goal is to achieve the best total asymptotic complexity considering all the operations?
Unsorted array  
Minheap  
Sorted array  
Sorted doubly linked list 
Question 24 
The graph shown below has 8 edges with distinct integer edge weights. The minimum spanning tree (MST) is of weight 36 and contains the edges: {(A, C), (B, C), (B, E), (E, F), (D, F)}. The edge weights of only those edges which are in the MST are given in the figure shown below. The minimum possible sum of weights of all 8 edges of this graph is ______________.
69  
70  
71  
72 
⇒ Total sum = 10 + 9 + 2 + 15 + 7 + 16 + 4 + 6 = 69
> First, compare AB with AC: AC = 9 is in the MST, so AB must be greater than 9; the minimum possible integer weight is AB = 10.
> Second, compare BE and CD: BE = 15 is in the MST, so the minimum possible weight for CD is 16.
> Third, compare ED and FD: FD = 6 is in the MST, so ED must be greater than 6; the minimum possible value is ED = 7.
Note: Add First + Second + Third = (AB = 10) + (CD = 16) + (ED = 7)
Question 25 
Given below are some algorithms, and some algorithm design paradigms
Algorithms:
1. Dijkstra’s Shortest Path
2. Floyd–Warshall algorithm to compute all pairs shortest paths
3. Binary search on a sorted array
4. Backtracking search on a graph
Design paradigms:
i. Divide and Conquer
ii. Dynamic Programming
iii. Greedy design
iv. Depth-first search
v. Breadth-first search
Match the above algorithms on the left to the corresponding design paradigm they follow Codes:
1-i, 2-iii, 3-i, 4-v.  
1-iii, 2-iii, 3-i, 4-v.  
1-iii, 2-ii, 3-i, 4-iv.  
1-iii, 2-ii, 3-i, 4-v.
(2) → The all-pairs shortest path (Floyd–Warshall) algorithm uses the Dynamic Programming technique. It takes O(V^{3}) worst-case time and O(V^{2}) worst-case space.
→ The Floyd–Warshall algorithm is an algorithm for finding shortest paths in a weighted graph with positive or negative edge weights (but with no negative cycles).
→ A single execution of the algorithm will find the lengths (summed weights) of the shortest paths between all pairs of vertices. Although it does not return details of the paths themselves, it is possible to reconstruct the paths with simple modifications to the algorithm.
(3) Binary search on a sorted array:
Search a sorted array by repeatedly dividing the search interval in half. Begin with an interval covering the whole array. If the value of the search key is less than the item in the middle of the interval, narrow the interval to the lower half. Otherwise narrow it to the upper half. Repeatedly check until the value is found or the interval is empty.
(4) Backtracking search on a graph uses Depth-first search
Question 26 
Suppose you are provided with the following function declaration in the C programming language.
int partition (int a[], int n);
The function treats the first element of a[] as a pivot, and rearranges the array so that all elements less than or equal to the pivot are in the left part of the array, and all elements greater than the pivot are in the right part. In addition, it moves the pivot so that the pivot is the last element of the left part. The return value is the number of elements in the left part. The following partially given function in the C programming language is used to find the kth smallest element in an array a[] of size n using the partition function. We assume k ≤ n.
int kth_smallest (int a[], int n, int k)
{
  int left_end = partition (a, n);
  if (left_end+1 == k) {
    return a[left_end];
  }
  if (left_end+1 > k) {
    return kth_smallest (____________________);
  } else {
    return kth_smallest (____________________);
  }
}
The missing argument lists are respectively
(a, left_end, k) and (a + left_end + 1, n – left_end – 1, k – left_end – 1)
 
(a, left_end, k) and (a, n – left_end – 1, k – left_end – 1)  
(a + left_end + 1, n – left_end – 1, k – left_end – 1) and (a, left_end, k)  
(a, n – left_end – 1, k – left_end – 1) and (a, left_end, k) 
Question 27 
Consider a typical disk that rotates at 15000 rotations per minute (RPM) and has a transfer rate of 50×10^{6} bytes/sec. If the average seek time of the disk is twice the average rotational delay and the controller’s transfer time is 10 times the disk transfer time, the average time (in milliseconds) to read or write a 512-byte sector of the disk is _____________.
6  
6.1  
6.2  
6.3 
1 rotation → 60 /15000 sec = 4 ms
∴ Average rotational delay = 1/2 × 4 ms = 2 ms
Average seek time = 2 × Average rotational delay
= 2 × 2 ms = 4 ms
Time to transfer 512 bytes:
50 × 10^{6} B per 1 s
512 B take 512/(50 × 10^{6}) s ≈ 0.01 ms
∴ Controller’s transfer time
= 10 × disk transfer time
= 10 × 0.01 ms
= 0.1 ms
∴ Avg. time = (4 + 2 + 0.1 + 0.01) ms
≈ 6.11 ms
Question 28 
Assume that a mergesort algorithm in the worst case takes 30 seconds for an input of size 64. Which of the following most closely approximates the maximum input size of a problem that can be solved in 6 minutes?
256  
512  
1024  
2048 
Merge sort’s worst-case running time is cn log n, where c is some constant.
It is given that for n = 64,
cn log n = 30
c × 64 log 64 = 30
c × 64 × 6 = 30
c = 5/64
Now, the value of n for
c n log n = 360
5/64 × n log n = 360
n log n = 360×64/5 = 4608
= 2^{9} × 9
So n = 2^{9} satisfies it, since log_{2} 2^{9} = 9.
So, n = 2^{9} = 512
Question 29 
Let G be a connected undirected graph of 100 vertices and 300 edges. The weight of a minimum spanning tree of G is 500. When the weight of each edge of G is increased by five, the weight of a minimum spanning tree becomes __________.
995  
996  
997  
998 
Question 30 
There are 5 bags labeled 1 to 5. All the coins in a given bag have the same weight. Some bags have coins of weight 10 gm, others have coins of weight 11 gm. I pick 1, 2, 4, 8, 16 coins respectively from bags 1 to 5. Their total weight comes out to 323 gm. Then the product of the labels of the bags having 11 gm coins is _______.
12  
13  
14  
15 
No. of coins picked: 1 2 4 8 16
There are two types of weights of coin, i.e., 10gm and 11gm. And total weight of picked coins are 323gm.
Let there be x number of 10gm coins and y number of 11gm coins.
So, 10x + 11y = 323 … (1)
And we also know that the total number of coins picked is
1 + 2 + 4 + 8 + 16 = 31, which equals x + y, so
x + y = 31 … (2)
Solving equation (1) and (2), we get
y = 13
Means total there are 13 coins of 11gm.
The bags holding 11 gm coins must together account for 13 coins; bags 1, 3 and 4 give 1 + 4 + 8 = 13 coins.
So the product of the bag labels = 1 × 3 × 4 = 12
Question 31 
Suppose a polynomial time algorithm is discovered that correctly computes the largest clique in a given graph. In this scenario, which one of the following represents the correct Venn diagram of the complexity classes P, NP and NP Complete (NPC)?
Question 32 
The minimum number of comparisons required to find the minimum and the maximum of 100 numbers is __________.
148  
149  
150  
151 
(∵ T(n) = T(floor(n/2)) + T(ceil(n/2)) + 2, which gives 3n/2 − 2 = 148 comparisons for n = 100)
Question 33 
Which one of the following correctly determines the solution of the recurrence relation with T(1) = 1?
T(n) = 2T(n/2) + log n
Θ(n)  
Θ(nlog n)  
Θ(n^{2})  
Θ(logn) 
Apply Master’s theorem (extended form),
a = 2, b = 2, k = 0, p = 1
Since a > b^{k}, Case I applies:
T(n) = Θ(n^{log_{b}a}) = Θ(n^{log_{2}2}) = Θ(n)
Question 34 
Consider two strings A = "qpqrr" and B = "pqprqrp". Let x be the length of the longest common subsequence (not necessarily contiguous) between A and B and let y be the number of such longest common subsequences between A and B. Then x+10y = _________.
34  
35  
36  
37 
A = “qpqrr” B = “pqprqrp”
The longest common subsequence (not necessarily contiguous) between A and B has length 4, so x = 4, and the common subsequences of that length are:
(1) qpqr
(2) pqrr
(3) qprr
So y = 3 (the number of longest common subsequences); hence x + 10y = 4 + 10×3 = 34
Question 35 
Suppose P, Q, R, S, T are sorted sequences having lengths 20, 24, 30, 35, 50 respectively. They are to be merged into a single sequence by merging together two sequences at a time. The number of comparisons that will be needed in the worst case by the optimal algorithm for doing this is ______.
358  
359  
360  
361 
In the above implementation, the total number of comparisons is
(44 − 1) + (65 − 1) + (94 − 1) + (159 − 1) = 43 + 64 + 93 + 158 = 358
Hint: The number of comparisons for merging two sorted sequences of lengths m and n is at most m + n − 1.
Question 36 
The number of distinct minimum spanning trees for the weighted graph below is _______.
6  
7  
8  
9 
Minimum Spanning Tree:
From the diagram, C–F–D–A gives the minimum weight, so we leave those edges alone; but to include BE = 1 we have 3 different choices (ABE / DBE / DEB), and to include HI = 1, the smallest weight, we have 2 choices (GHI / GIH).
So there are 3 × 2 = 6 ways of forming a minimum spanning tree with total weight 11.
Question 37 
Consider the following rooted tree with the vertex labeled P as the root:
The order in which the nodes are visited during inorder traversal is
SQPTRWUV  
SQPTUWRV  
SQPTWUVR  
SQPTRUWV 
Inorder Traversal: Left, Root, Middle, Right.
So, during inorder traversal whenever we visit the node second time then print it.
So, output will be,
S Q P T R W U V
Question 38 
Suppose depth first search is executed on the graph below starting at some unknown vertex. Assume that a recursive call to visit a vertex is made only after first checking that the vertex has not been visited earlier. Then the maximum possible recursion depth (including the initial call) is _________.
19  
20  
21  
22 
Note: Back edges are not followed (their target vertex has already been visited), so they do not add to the recursion depth.
So the maximum possible recursion depth is 19.
Question 39 
You have an array of n elements. Suppose you implement quicksort by always choosing the central element of the array as the pivot. Then the tightest upper bound for the worst case performance is
O(n^{2})  
O(n log n)  
Θ(n logn)  
O(n^{3}) 
Question 40 
Suppose we have a balanced binary search tree T holding n numbers. We are given two numbers L and H and wish to sum up all the numbers in T that lie between L and H. Suppose there are m such numbers in T. If the tightest upper bound on the time to compute the sum is O(n^{a}log^{b}n + m^{c}log^{d}n), the value of a + 10b + 100c + 1000d is ____.
110  
111  
112  
113 
Using the balanced BST, find:
1. n_{1}: the smallest number in T that is greater than or equal to L.
2. n_{2}: the largest number in T that is less than or equal to H.
Each of these searches takes O(log n) time in a balanced BST.
Since there are m elements between n_{1} and n_{2}, it takes O(m) time to add all of them.
So the time complexity is O(log n + m),
and the given expression becomes O(n^{0}log^{1}n + m^{1}log^{0}n).
Because a = 0, b = 1, c = 1 and d = 0,
a + 10b + 100c + 1000d = 0 + 10×1 + 100×1 + 1000×0 = 110
Question 41 
Consider a hash table with 100 slots. Collisions are resolved using chaining. Assuming simple uniform hashing, what is the probability that the first 3 slots are unfilled after the first 3 insertions?
(97×97×97)/100^{3}  
(99×98×97)/100^{3}  
(97×96×95)/100^{3}  
(97×96×95)/(3!×100^{3}) 
Each slot is picked with equal probability by the hash function, since hashing is simple uniform.
For each of the 3 insertions, the probability of avoiding the first 3 slots is 97/100, so the probability that the first 3 slots stay unfilled
= 97/100 × 97/100 × 97/100 = (97×97×97)⁄100^{3}
Question 42 
Consider the pseudocode given below. The function DoSomething() takes as argument a pointer to the root of an arbitrary tree represented by the leftMostChildrightSibling representation. Each node of the tree is of type treeNode.
typedef struct treeNode* treeptr;
struct treeNode
{
  treeptr leftMostChild, rightSibling;
};
int DoSomething (treeptr tree)
{
  int value = 0;
  if (tree != NULL) {
    if (tree->leftMostChild == NULL)
      value = 1;
    else
      value = DoSomething(tree->leftMostChild);
    value = value + DoSomething(tree->rightSibling);
  }
  return (value);
}
When the pointer to the root of a tree is passed as the argument to DoSomething, the value returned by the function corresponds to the
number of internal nodes in the tree.  
height of the tree.  
number of nodes without a right sibling in the tree.  
number of leaf nodes in the tree. 
Here, that condition is if (tree->leftMostChild == NULL)
⇒ Which means if there is no left most child of the tree (or the subtree or the current node as called in recursion)
⇒ Which means there is no child to that particular node (since if there is no left most child, there is no child at all).
⇒ Which means the node under consideration is a leaf node.
⇒ The function recursively counts, and adds to value, whenever a leaf node is encountered.
⇒ The function returns the number of leaf nodes in the tree.
Question 43 
Consider the C function given below. Assume that the array listA contains n (> 0) elements, sorted in ascending order.
int ProcessArray(int *listA, int x, int n)
{
  int i, j, k;
  i = 0;
  j = n-1;
  do {
    k = (i+j)/2;
    if (x <= listA[k]) j = k-1;
    if (listA[k] <= x) i = k+1;
  } while (i <= j);
  if (listA[k] == x) return(k);
  else return -1;
}
Which one of the following statements about the function ProcessArray is CORRECT?
It will run into an infinite loop when x is not in listA.  
It is an implementation of binary search.  
It will always find the maximum element in listA.  
It will return −1 even when x is present in listA. 
k = (i+j)/2;
where k tracks the current middle element, while i and j track the left and right boundaries of the current subarray.
So it is an implementation of Binary search.
Question 44 
Which one of the following is the tightest upper bound that represents the number of swaps required to sort n numbers using selection sort?
O(log n)  
O(n)  
O(n log n)  
O(n^{2}) 
Selection sort makes O(n^{2}) comparisons, but each of the n − 1 scans performs at most one swap (the final element is already in place), so the number of swaps is O(n).
Question 45 
Which one of the following is the tightest upper bound that represents the time complexity of inserting an object into a binary search tree of n nodes?
O(1)  
O(log n)  
O(n)  
O(n log n) 
Question 46 
Consider the languages L_{1} = ϕ and L_{2} = {a}. Which one of the following represents L_{1}L_{2}* ∪ L_{1}*?
{є}  
ϕ  
a*  
{є,a} 
Rϕ = ϕR = ϕ
So L_{1} L_{2} * = ϕ
and L_{1}* = ϕ* = {ϵ}
So L_{1}L_{2}* ∪ L_{1}* = {ϵ}
Question 47 
What is the time complexity of BellmanFord singlesource shortest path algorithm on a complete graph of n vertices?
Θ(n^{2})  
Θ(n^{2} log n)  
Θ(n^{3})  
Θ(n^{3} log n) 
Bellman–Ford runs in O(V × E) time, where V and E are the numbers of vertices and edges respectively.
Note: For a complete graph, E = n(n−1)/2 = O(n^{2}), so the running time is O(n × n^{2}) = Θ(n^{3}).
Question 48 
Consider an undirected random graph of eight vertices. The probability that there is an edge between a pair of vertices is 1/2. What is the expected number of unordered cycles of length three?
1/8  
1  
7  
8 
The probability that a given edge exists between two vertices = 1/2.
So the probability that all three edges of a particular triangle exist
= 1/2 × 1/2 × 1/2 (as they are independent events)
= 1/8
Total number of ways in which we can select ‘3’ such vertices among ‘8’ vertices = ^{8}C_{3} = 56
Expected number of cycles of length ‘3’ out of 8 vertices = 56 × 1/8 = 7
Question 49 
Which of the following statements is/are TRUE for undirected graphs?
 P: Number of odd degree vertices is even.
Q: Sum of degrees of all vertices is even.
P only  
Q only  
Both P and Q  
Neither P nor Q 
The sum of the degrees of all the vertices of a graph is an even number (exactly twice the number of edges).
In every graph, the number of vertices of odd degree must be even.
Question 50 
The line graph L(G) of a simple graph G is defined as follows:
· There is exactly one vertex v(e) in L(G) for each edge e in G.
· For any two edges e and e′ in G, L(G) has an edge between v(e) and v(e′) if and only if e and e′ are incident with the same vertex in G.
Which of the following statements is/are TRUE?
 (P) The line graph of a cycle is a cycle.
(Q) The line graph of a clique is a clique.
(R) The line graph of a planar graph is planar.
(S) The line graph of a tree is a tree.
P only  
P and R only  
R only  
P, Q and S only 
R) False. We can give a counterexample. Let G have 5 vertices and 9 edges and be planar; assume one vertex has degree 2 and the other four have degree 4. Now, L(G) has 9 vertices (because G has 9 edges) and Σ C(d_{i}, 2) = 1 + 4×6 = 25 edges. But for a simple planar graph,
E ≤ 3V − 6
For 9 vertices, E ≤ 3 × 9 − 6
⇒ E ≤ 27 − 6
⇒ E ≤ 21. But L(G) has 25 edges and so is not planar.
As (R) is false, option (B) & (C) are eliminated.
Question 51 
The number of elements that can be sorted in Θ(log n) time using heap sort is
Θ(1)  
Θ(√logn)  
Θ (logn/loglogn)  
Θ(log n) 
So, Θ((log n/log log n) log(log n/log log n))
= Θ((log n/log log n) (log log n - log log log n))
= Θ((log n/log log n) × log log n)
= Θ(log n)
Hence, option (C) is correct answer.
Question 52 
What is the return value of f(p,p), if the value of p is initialized to 5 before the call? Note that the first parameter is passed by reference, whereas the second parameter is passed by value.
int f(int &x, int c) {
    c = c - 1;
    if (c == 0) return 1;
    x = x + 1;
    return f(x, c) * x;
}
3024  
6561  
55440  
161051 
f(5,5) → f(x,4)*x
→ f(x,3)*x*x
→ f(x,2)*x*x*x
→ f(x,1)*x*x*x*x
⇒ 1*x*x*x*x = x^{4}
Now x = x+1, there are 4 different calls namely f(6,4),f(7,3),f(8,2),f(9,1)
Because of pass by reference, x in all earlier functions is replaced with 9.
⇒ 9*9*9*9 = 9^{4} = 6561
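The trace can be checked by simulating pass-by-reference with a one-element list (a sketch; Python has no reference parameters, so the list stands in for the C++ int&, and the recursive call is evaluated before the multiplication, as the solution assumes):

```python
def f(x, c):
    # x is a one-element list standing in for the C++ reference parameter
    c = c - 1
    if c == 0:
        return 1
    x[0] = x[0] + 1
    r = f(x, c)        # the recursive call runs first...
    return r * x[0]    # ...so x is read after all increments, i.e. as 9

p = [5]
print(f(p, 5))  # 6561
```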
Question 53 
The preorder traversal sequence of a binary search tree is 30, 20, 10, 15, 25, 23, 39, 35, 42. Which one of the following is the postorder traversal sequence of the same tree?
10, 20, 15, 23, 25, 35, 42, 39, 30  
15, 10, 25, 23, 20, 42, 35, 39, 30  
15, 20, 10, 23, 25, 42, 35, 39, 30  
15, 10, 23, 25, 20, 35, 42, 39, 30 
From the data,
The Postorder traversal sequence is:
→ 15, 10, 23, 25, 20, 35, 42, 39, 30.
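The conversion can be done without drawing the tree: in a BST preorder sequence the first value is the root, the smaller keys that follow form the left subtree's preorder, and the larger keys form the right subtree's. A short recursive sketch:

```python
def post_from_pre(pre):
    # BST property: first element is the root; smaller keys go left,
    # larger keys go right (all keys are distinct here).
    if not pre:
        return []
    root, rest = pre[0], pre[1:]
    left = [x for x in rest if x < root]
    right = [x for x in rest if x > root]
    return post_from_pre(left) + post_from_pre(right) + [root]

print(post_from_pre([30, 20, 10, 15, 25, 23, 39, 35, 42]))
# [15, 10, 23, 25, 20, 35, 42, 39, 30]
```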
Question 54 
Consider the following operation along with Enqueue and Dequeue operations on queues, where k is a global parameter.
MultiDequeue(Q) {
    m = k
    while (Q is not empty and m > 0) {
        Dequeue(Q)
        m = m - 1
    }
}
What is the worst case time complexity of a sequence of n MultiDequeue() operations on an initially empty queue?
Θ(n)  
Θ(n + k)  
Θ(nk)  
Θ(n^{2}) 
i. One option is to perform all Enqueue operations i.e. n Enqueue operations. Complexity will be Ѳ(n).
or
ii. We can perform a mix of Enqueue and Dequeue operations: Enqueue for the first n/2 operations and then Dequeue for the next n/2, or Enqueue and Dequeue alternately, or any permutation of Enqueues and Dequeues totaling 'n' operations. Complexity will be Θ(n).
or iii. We can perform Enqueues and MultiDequeues. A general pattern could be: Enqueue (k times), then MultiDequeue, then Enqueue (k times), then MultiDequeue, and so on, up to a total of n operations: k items enqueued, k items deleted, k items enqueued, k items deleted, and so on.
The number of times this (k Enqueues + 1 MultiDequeue) cycle is performed = n/(k+1)
So, complexity will be (cost of k Enqueues + 1 MultiDequeue) × n/(k+1)
which is (2k × n/(k+1)) = Θ(n)
or
iv. We can just perform n MultiDequeues (or n Dequeues for the matter):
Each time, the while condition is false (empty queue), so the condition is checked just once for each of the 'n' operations. So Θ(n).
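The amortized argument behind cases (iii) and (iv) can be checked with a small simulation (an illustrative sketch, not part of the original question): every Dequeue step removes an element that some earlier operation enqueued, so across any n operations the total work is at most 2n.

```python
from collections import deque
import random

k = 5            # global parameter, as in the question
Q = deque()
steps = 0        # total unit work across all operations

def multi_dequeue():
    global steps
    m = k
    while Q and m > 0:
        Q.popleft()   # each step here removes an element enqueued earlier
        steps += 1
        m -= 1

random.seed(1)
n = 1000
for _ in range(n):
    if random.random() < 0.5:
        Q.append(1)   # Enqueue
        steps += 1
    else:
        multi_dequeue()
        steps += 1    # count the operation itself even on an empty queue

# Dequeue steps never exceed Enqueues, so steps <= 2n regardless of k
assert steps <= 2 * n
print(steps)
```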
Question 55 
Assuming P ≠ NP, which of the following is TRUE?
NPcomplete = NP  
NPcomplete ∩ P = ∅  
NPhard = NP  
P = NPcomplete 
The definition of NPcomplete is,
A decision problem p is NPcomplete if:
1. p is in NP, and
2. Every problem in NP is reducible to p in polynomial time.
Given the assumption P ≠ NP, NP-complete ∩ P = ∅.
Suppose instead that NP-complete ∩ P ≠ ∅, i.e., some problem (say P1) is both in P and NP-complete. Then P1 can be solved in polynomial time (it is in P), and since every problem in NP is reducible to P1 in polynomial time, every NP problem could be solved in polynomial time.
That would give P = NP, which contradicts the given assumption that P ≠ NP.
Question 56 
Let W(n) and A(n) denote respectively, the worst case and average case running time of an algorithm executed on an input of size n. Which of the following is ALWAYS TRUE?
A(n) = Ω (W(n))  
A(n) = Θ (W(n))  
A(n) = O (W(n))  
A(n) = o (W(n)) 
So, A(n) is upper bounded by W(n), and the bound need not be strict since the two can coincide (e.g., bubble sort and merge sort).
A(n) = O(W(n))
Note: Option A is wrong because A(n) = Ω(W(n)) does not always hold.
Question 57 
Consider the Quicksort algorithm. Suppose there is a procedure for finding a pivot element which splits the list into two sublists each of which contains at least onefifth of the elements. Let T(n) be the number of comparisons required to sort n elements. Then
T(n) ≤ 2T(n/5) + n
 
T(n) ≤ T(n/5) + T(4n/5) + n  
T(n) ≤ 2T(4n/5) + n  
T(n) ≤ 2T(n/2) + n

So, T(n/5) comparisons are needed for the first subset and T(4n/5) comparisons needed for second subset.
Now, suppose one sublist contains more than n/5 elements; then the other contains fewer than 4n/5 elements. The recursion tree is then more balanced, so the number of comparisons is still at most
T(n/5) + T(4n/5) + n
Question 58 
The subset-sum problem is defined as follows: Given a set S of n positive integers and a positive integer W, determine whether there is a subset of S whose elements sum to W.
An algorithm Q solves this problem in O(nW) time. Which of the following statements is false?
Q solves the subsetsum problem in polynomial time when the input is encoded in unary
 
Q solves the subsetsum problem in polynomial time when the input is encoded in binary
 
The subset sum problem belongs to the class NP  
The subset sum problem is NPhard 
Question 59 
Dijkstra’s single source shortest path algorithm when run from vertex a in the above graph, computes the correct shortest path distance to
only vertex a  
only vertices a, e, f, g, h  
only vertices a, b, c, d  
all the vertices 
A → B ⇒ 1
A → C ⇒ 1 + 2 = 3
A → E ⇒ 1 - 3 = -2
A → F ⇒ 1 - 3 + 2 = 0
A → G ⇒ 1 - 3 + 2 + 3 = 3
A → H ⇒ 1 + 2 - 5 = -2
Question 60 
Merge sort uses
Divide and conquer strategy  
Backtracking approach  
Heuristic search  
Greedy approach 
Question 61 
The postfix expression for the infix expression
A + B*(C + D)/F + D*E is:
AB + CD + *F/D + E*  
ABCD + *F/DE*++  
A *B + CD/F *DE++  
A + *BCD/F* DE++  
None of the above 
A B C D + * F / + D E * +
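The answer can be reproduced mechanically with the shunting-yard algorithm (a sketch for single-letter operands and the operators used here):

```python
def to_postfix(expr):
    # Shunting-yard for single-letter operands and left-associative + - * /
    prec = {'+': 1, '-': 1, '*': 2, '/': 2}
    out, stack = [], []
    for tok in expr.replace(' ', ''):
        if tok.isalpha():
            out.append(tok)
        elif tok == '(':
            stack.append(tok)
        elif tok == ')':
            while stack[-1] != '(':
                out.append(stack.pop())
            stack.pop()   # discard the '('
        else:
            while stack and stack[-1] != '(' and prec[stack[-1]] >= prec[tok]:
                out.append(stack.pop())
            stack.append(tok)
    return ''.join(out + stack[::-1])

print(to_postfix('A + B*(C + D)/F + D*E'))  # ABCD+*F/+DE*+
```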
Question 62 
Which of the following statements is true?
 I. As the number of entries in a hash table increases, the number of collisions increases.
II. Recursive programs are efficient
III. The worst case complexity for Quicksort is O(n^{2})
I and II  
II and III  
I and IV  
I and III 
A recursive program requires a stack for storing the function state. Any recursive program can be rewritten as an iterative one. The advantage of recursion is less programming effort, while it always lags behind the iterative version in terms of performance.
Question 63 
FORTRAN implementation do not permit recursion because
they use static allocation for variables  
they use dynamic allocation for variables  
stacks are not available on all machines  
it is not possible to implement recursion on all machines 
→ Recursion requires dynamic allocation of data.
Question 64 
The recurrence relation that arises in relation with the complexity of binary search is:
T(n) = T(n/2) + k, k a constant  
T(n) = 2T(n/2) + k, k a constant  
T(n) = T(n/2) + log n  
T(n) = T(n/2) + n 
∴ T(n) = T(n/2) + k, k a constant
Question 65 
Which of the following algorithm design techniques is used in the quicksort algorithm?
Dynamic programming  
Backtracking  
Divide and conquer  
Greedy method 
Question 66 
In which one of the following cases is it possible to obtain different results for callby reference and callbyname parameter passing methods?
Passing a constant value as a parameter  
Passing the address of an array as a parameter  
Passing an array element as a parameter  
Passing an array as a parameter 
{
    ........
    a[ ] = {1, 2, 3, 4};
    i = 0;
    fun(a[i]);
    print a[0];
}
fun(int x)
{
    int i = 1;
    x = 8;
}
O/p:
Call-by-reference = 8
Call-by-name = 1
Question 67 
Which one of the following statements is false?
Optimal binary search tree construction can be performed efficiently using dynamic programming.  
Breadth-first search cannot be used to find connected components of a graph.  
Given the prefix and postfix walks over a binary tree, the binary tree cannot be uniquely constructed.  
Depthfirst search can be used to find connected components of a graph. 
Question 68 
Consider the following two functions:
Which of the following is true?
g_{1}(n) is O(g_{2}(n))  
g_{1} (n) is O(n^{3})  
g_{2} (n) is O(g_{1} (n))  
g_{2} (n) is O(n)  
Both A and B 
Growth rate of g_{1} is less than that of g_{2} i.e., g_{1}(n) = O(g_{2}(n)) = O(n).
Question 69 
For parameters a and b, both of which are ω(1), T(n) = T(n^{1/a})+1, and T(b)=1.
Then T(n) is
θ(log_{a} log_{b} n)  
θ(log_{b} log_{a} n)
 
θ(log_{2} log_{2} n)
 
θ(log_{ab} n)

T(n) = [T(n^{1/a^2}) + 1] + 1
= [T(n^{1/a^3}) + 1] + 2
= T(n^{1/a^3}) + 3
⋮
= T(n^{1/a^k}) + k
The recursion bottoms out when n^{1/a^k} = b, i.e.
log n = a^{k} log b ⇒ a^{k} = log_{b} n
⇒ k = log_{a} log_{b} n
T(n) = 1 + log_{a} log_{b} n
T(n) = Θ(log_{a} log_{b} n)
Question 70 
Let G = (V,E) be a weighted undirected graph and let T be a Minimum Spanning Tree (MST) of G maintained using adjacency lists. Suppose a new weighted edge (u,v) ∈ V×V is added to G. The worst case time complexity of determining if T is still an MST of the resultant graph is
θ(E+V)  
θ(E logV)  
θ(EV)  
θ(V) 
• T is a minimum spanning tree, and a new weighted edge (u,v) is added to G.
• To check whether T is still an MST, add (u,v) to T; this creates exactly one cycle. T remains an MST iff (u,v) is a maximum-weight edge on that cycle.
• Finding this cycle and its maximum-weight edge requires traversing the tree, which takes O(V) time since T has only V - 1 edges.
Method2:
Time Complexity:
Total vertices: V, Total Edges : E
• O(logV) – to extract each vertex from the queue. So for V vertices – O(VlogV)
• O(logV) – each time a new pair object with a new key value of a vertex and will be done at most once for each edge. So for total E edge – O(ElogV)
• So overall complexity: O(VlogV) + O(ElogV) = O((E+V)logV) = O(ElogV)
Note: Method 1 is the most appropriate answer for the given question.
Question 71 
Consider a graph G = (V, E), where V = {v_{1}, v_{2}, …, v_{100}}, E = {(v_{i}, v_{j})  1 ≤ i < j ≤ 100}, and weight of the edge (v_{i}, v_{j}) is i  j. The weight of the minimum spanning tree of G is ______.
99 
• N = 100
• The weight of edge (v_i, v_j) is |i - j|, for 1 ≤ i < j ≤ 100.
• The weight of edge (v1,v2) is 1, of edge (v5,v6) is 1, and so on.
• An MST can use the 99 edges (v_i, v_{i+1}), each of weight 1, so its total weight is 99.
Question 72 
The root directory of a disk should be placed
at a fixed address in main memory  
at a fixed location on the disk  
anywhere on the disk  
at a fixed location on the system disk  
anywhere on the system disk 
Question 73 
Consider a simple connected graph G with n vertices and n edges (n > 2). Then, which of the following statements are true?
G has no cycles.  
The graph obtained by removing any edge from G is not connected.  
G has at least one cycle.  
The graph obtained by removing any two edges from G is not connected.  
Both C and D. 
For example let us consider, n=3.
Question 74 
where O(n) stands for order n is:
O(n)  
O(n^{2})  
O(n^{3})  
O(3n^{2})  
O(1.5n^{2})  
B, C, D and E 
⇒ Here 'n' is added to itself n times, which is O(n^{2}).
Hence, (A) is wrong, and the rest, (B), (C), (D), (E), are correct.
Question 75 
Complexity of Kruskal’s algorithm for finding the minimum spanning tree of an undirected graph containing n vertices and m edges if the edges are sorted is __________
O(m log n) 
Question 76 
Following algorithm(s) can be used to sort n integers in the range [1...n^{3}] in O(n) time
Heapsort  
Quicksort  
Mergesort  
Radixsort 
Radix sort is not a comparison-based sort (it uses counting sort on each digit), so keys in the range [1...n^{3}], which have at most three base-n digits, can be sorted in linear time.
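A sketch of the idea: write each key in base n (three digits suffice for keys up to n^3) and do three stable counting-sort passes, each O(n).

```python
def radix_sort_cubed(a):
    # Keys lie in [1..n^3]; treat key-1 as three base-n digits and run
    # three stable counting-sort (bucket) passes, each O(n).
    n = len(a)
    if n <= 1:
        return a[:]
    for d in range(3):
        buckets = [[] for _ in range(n)]
        for x in a:
            buckets[((x - 1) // n ** d) % n].append(x)
        a = [x for b in buckets for x in b]
    return a

data = [27, 1, 64, 8, 125, 30, 2]   # n = 7, all keys within [1..343]
print(radix_sort_cubed(data))
```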
Question 77 
Assume that the last element of the set is used as partition element in Quicksort. If n distinct elements from the set [1…..n] are to be sorted, give an input for which Quicksort takes maximum time.
n 
→ The array is already sorted in ascending order.
→ The array is already sorted in descending order.
Question 78 
The minimum number of comparisons required to sort 5 elements is _____
7 
= ⌈log_{2}(n!)⌉
= ⌈log_{2}(5!)⌉
= ⌈log_{2}(120)⌉
= 7
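This is the information-theoretic lower bound: a comparison sort must distinguish all n! orderings, so it needs at least ⌈log2(n!)⌉ comparisons. A one-line check:

```python
from math import ceil, factorial, log2

# Lower bound on comparisons for sorting n distinct elements
n = 5
bound = ceil(log2(factorial(n)))
print(bound)  # 7
```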
Question 79 
The weighted external path length of the binary tree in figure is _____
144 
Question 80 
If the binary tree in figure is traversed in inorder, then the order in which the nodes will be visited is ______
4, 1, 6, 7, 3, 2, 5, 8 
(Left, Root, Right)
So, the order will be
4, 1, 6, 7, 3, 2, 5, 8
Question 81 
Match the pairs in the following questions by writing the corresponding letters only.
(A) The number of distinct binary trees with n nodes          (P) n!/2
(B) The number of binary strings of length 2n with an equal number of 0's and 1's          (Q) C(3n, n)
(C) The number of even permutations of n objects          (R) C(2n, n)
(D) The number of binary strings of length 6m which are palindromes with 2n 0's          (S) 1/(n+1) C(2n, n)
AS, BR, CP, DQ 
Question 82 
Choose the correct alternatives (more than one may be correct) and write the corresponding letters only: Kruskal’s algorithm for finding a minimum spanning tree of a weighted graph G with vertices and m edges has the time complexity of:
O(n^{2})  
O(mn)  
O(m+n)  
O(m log n)  
O(m^{2})  
B, D and E 
→ where m is the number of edges and n the number of vertices. For a graph with a spanning tree, m ≥ n - 1, so n = O(m).
→ Hence O(m log n) ⊆ O(mn) ⊆ O(m^{2}), and all of B, D and E are valid upper bounds.
Question 83 
Choose the correct alternatives (more than one may be correct) and write the corresponding letters only: The following sequence of operations is performed on a stack:
PUSH (10), PUSH (20), POP, PUSH (10), PUSH (20), POP, POP, POP, PUSH (20), POPThe sequence of values popped out is:
20, 10, 20, 10, 20  
20, 20, 10, 10, 20  
10, 20, 20, 10, 20  
20, 20, 10, 20, 10 
→ PUSH(10), PUSH(20), POP = 20 (i)
→ PUSH(10), PUSH(20), POP = 20 (ii)
→ POP = 10 (iii)
→ POP = 10 (iv)
→ PUSH(20), POP = 20 (v)
⇒ 20, 20, 10, 10, 20
Question 84 
Match the pairs in the following questions:
(a) Strassen's matrix multiplication algorithm          (p) Greedy method
(b) Kruskal's minimum spanning tree algorithm          (q) Dynamic programming
(c) Biconnected components algorithm          (r) Divide and Conquer
(d) Floyd's shortest path algorithm          (s) Depth first search
(a)  (r), (b)  (p), (c)  (s), (d)  (q) 
Kruskal's minimum spanning tree algorithm  Greedy method
Biconnected components algorithm  Depth first search
Floyd's shortest path algorithm  Dynamic programming
Question 85 
The numbers 1, 2, .... n are inserted in a binary search tree in some order. In the resulting tree, the right subtree of the root contains p nodes. The first number to be inserted in the tree must be
p  
p + 1  
n  p  
n  p + 1 
Right subtree contains = p nodes
Root contains = 1 node
Left subtree contains = n - (p + 1) nodes
Since all p values greater than the root go into the right subtree, the root, which is the first number inserted, must be n - p.
Question 86 
In a depthfirst traversal of a graph G with n vertices, k edges are marked as tree edges. The number of connected components in G is
k  
k + 1  
n  k  1  
n  k 
A depth-first traversal of each connected component of G (whether G is connected or disconnected) produces a spanning tree of that component, so with p components the total number of tree edges is
k = n - p
p = n - k
Question 87 
In the following table, the left column contains the names of standard graph algorithms and the right column contains the time complexities of the algorithms. Match each algorithm with its time complexity.
1. Bellman-Ford algorithm          A: O(m log n)
2. Kruskal's algorithm          B: O(n^{3})
3. Floyd-Warshall algorithm          C: O(nm)
4. Topological sorting          D: O(n + m)
1→ C, 2 → A, 3 → B, 4 → D  
1→ B, 2 → D, 3 → C, 4 → A  
1→ C, 2 → D, 3 → A, 4 → B  
1→ B, 2 → A, 3 → C, 4 → D 
Bellman-Ford algorithm → O(nm)
Kruskal's algorithm → O(m log n)
FloydWarshall algorithm → O(n^{3})
Topological sorting → O(n+m)
Question 88 
A hash table contains 10 buckets and uses linear probing to resolve collisions. The key values are integers and the hash function used is key % 10. If the values 43, 165, 62, 123, 142 are inserted in the table, in what location would the key value 142 be inserted?
2  
3  
4  
6 
43%10 = 3 [occupies 3]
165%10 = 5 [occupies 5]
62%10 = 2 [occupies 2]
123%10 = 3 [3 already occupied, so occupies 4]
142%10 = 2 [2, 3, 4, 5 are occupied, so it occupies 6]
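The probe sequence can be simulated directly (a small sketch of linear probing):

```python
def insert_linear_probing(keys, size=10):
    # hash = key % size; on collision, scan forward cyclically
    table = [None] * size
    placed = {}
    for k in keys:
        i = k % size
        while table[i] is not None:
            i = (i + 1) % size
        table[i] = k
        placed[k] = i
    return placed

print(insert_linear_probing([43, 165, 62, 123, 142]))
# {43: 3, 165: 5, 62: 2, 123: 4, 142: 6}
```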
Question 89 
In a binary tree, for every node the difference between the number of nodes in the left and right subtrees is at most 2. If the height of the tree is h > 0, then the minimum number of nodes in the tree is:
2^{h1}  
2^{h1} + 1  
2^{h}  1  
2^{h} 
Above tree satisfies the property given in the question. Hence, only option (B) satisfies it.
Question 90 
Let T(n) be a function defined by the recurrence T(n) = 2T(n/2) + √n for n ≥ 2 and T(1) = 1 Which of the following statements is TRUE?
T(n) = θ(log n)  
T(n) = θ(√n)  
T(n) = θ(n)  
T(n) = θ(n log n) 
Question 91 
Let G be a weighted undirected graph and e be an edge with maximum weight in G. Suppose there is a minimum weight spanning tree in G containing the edge e. Which of the following statements is always TRUE?
There exists a cutset in G having all edges of maximum weight  
There exists a cycle in G having all edges of maximum weight  
Edge e cannot be contained in a cycle  
All edges in G have the same weight 
(B) False, because the cutset of heaviest edge may contain only one edge.
(C) False. Edge e may be contained in a cycle with other edges.
(D) False. Not always true.
Question 92 
A binary search tree contains the numbers 1, 2, 3, 4, 5, 6, 7, 8. When the tree is traversed in preorder and the values in each node printed out, the sequence of values obtained is 5, 3, 1, 2, 4, 6, 8, 7. If the tree is traversed in postorder, the sequence obtained would be
8, 7, 6, 5, 4, 3, 2, 1  
1, 2, 3, 4, 8, 7, 6, 5  
2, 1, 4, 3, 6, 7, 8, 5  
2, 1, 4, 3, 7, 8, 6, 5 
5, 3, 1, 2, 4, 6, 8, 7
Inorder is the sorted sequence, i.e.,
1, 2, 3, 4, 5, 6, 7, 8
And we know that with given inorder and preorder, we can draw a unique BST.
So, the BST will be,
Hence, the postorder traversal sequence is,
2, 1, 4, 3, 7, 8, 6, 5
Question 93 
Let G be a directed graph whose vertex set is the set of numbers from 1 to 100. There is an edge from a vertex i to a vertex j iff either j = i + 1 or j = 3i. The minimum number of edges in a path in G from vertex 1 to vertex 100 is
4  
7  
23  
99 
j = i +1
(or)
j = 3i
Second option will help us reach from 1 to 100 rapidly. The trick to solve this question is to think in reverse way. Instead of finding a path from 1 to 100, try to find a path from 100 to 1.
So, the edge sequence with minimum number of edges is
1 → 3 → 9 → 10 → 11 → 33 → 99 → 100
which consists of 7 edges.
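A breadth-first search over the implicit graph confirms that no shorter path exists (a small sketch):

```python
from collections import deque

def min_edges(src, dst):
    # BFS over the implicit graph: from i there are edges to i+1 and 3i.
    # Both moves only increase the vertex, so anything above dst is useless.
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            return dist[u]
        for v in (u + 1, 3 * u):
            if v <= dst and v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return None

print(min_edges(1, 100))  # 7
```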
Question 94 
To carry out white box testing of a program, its flow chart representation is obtained as shown in the figure below:
For basis path based testing of this program, its cyclomatic complexity is
5  
4  
3  
2 
Question 95 
An array of integers of size n can be converted into a heap by adjusting the heaps rooted at each internal node of the complete binary tree starting at the node ⌊(n - 1)/2⌋, and doing this adjustment up to the root node (root node is at index 0) in the order ⌊(n - 1)/2⌋, ⌊(n - 3)/2⌋, ....., 0. The time required to construct a heap in this manner is
O(log n)  
O(n)  
O(n log log n)  
O(n log n) 
And we know that time complexity for building the heap is O(n).
Question 96 
Which one of the following binary trees has its inorder and preorder traversals as BCAD and ABCD, respectively?
Inorder traversal is,
Left root right.
Preorder traversal is,
Root left right.
Question 97 
Let f(n), g(n) and h(n) be functions defined for positive integers such that f(n) = O(g(n)), g(n) ≠ O(f(n)), g(n) = O(h(n)), and h(n) = O(g(n)). Which one of the following statements is FALSE?
f(n) + g(n) = O(h(n) + h(n))  
f(n) = O(h(n))  
h(n) ≠ O(f(n))  
f(n)h(n) ≠ O(g(n)h(n)) 
So, f(n)h(n) = O(g(n)h(n)) is True.
Hence, (D) is false.
Question 98 
Consider the undirected graph below:
Using Prim's algorithm to construct a minimum spanning tree starting with node A, which one of the following sequences of edges represents a possible order in which the edges would be added to construct the minimum spanning tree?
(E, G), (C, F), (F, G), (A, D), (A, B), (A, C)  
(A, D), (A, B), (A, C), (C, F), (G, E), (F, G)  
(A, B), (A, D), (D, F), (F, G), (G, E), (F, C)  
(A, D), (A, B), (D, F), (F, C), (F, G), (G, E)

(C) produces a connected component every time a new edge is added, but when the first vertex is chosen (the first vertex is chosen randomly), the first edge added must be the minimum-weight edge incident on it. Therefore, (A, D) must be chosen before (A, B). Therefore (C) is false.
Question 99 
Consider a list of recursive algorithms and a list of recurrence relations as shown below. Each recurrence relation corresponds to exactly one algorithm and is used to derive the time complexity of the algorithm.
RECURSIVE ALGORITHM          RECURRENCE RELATION
P. Binary search          I. T(n) = T(n-k) + T(k) + cn
Q. Merge sort          II. T(n) = 2T(n-1) + 1
R. Quick sort          III. T(n) = 2T(n/2) + cn
S. Tower of Hanoi          IV. T(n) = T(n/2) + 1
PII, QIII, RIV, SI  
PIV, QIII, RI, SII  
PIII, QII, RIV, SI  
PIV, QII, RI, SIII 
Binary search → T(n) = T(n/2) + 1
Merge sort → T(n) = 2T(n/2) + cn
Quick sort → T(n) = T(n-k) + T(k) + cn
Tower of Hanoi → T(n) = 2T(n-1) + 1
Question 100 
f_{2}, f_{3}, f_{1}  
f_{3}, f_{2}, f_{1}  
f_{2}, f_{1}, f_{3}  
f_{1}, f_{2}, f_{3} 
The asymptotic notation order should be
Constant < logarithmic < linear < polynomial < exponential < factorial
F2 and F3 → Polynomial
F1 → Exponential
By the order of asymptotic notations F1 is definitely larger.
Method1:
Consider n=100
F1: 100^100 = 10^200
F2: 100^(log_{2} 100) = 100^6.6438... ≈ 1.94 × 10^13
F3: 100^√100 = 100^10 = 10^20
Method2:
We can apply "log" on both sides.
log_{2}(F1) = n log_{2} n
log_{2}(F2) = (log_{2} n)^2 = log_{2} n × log_{2} n
log_{2}(F3) = √n × log_{2} n
Answer: F2< F3< F1
Question 101 
R_{7} is achieved by three different solutions.  
R_{7} = 18  
R_{7} = 19  
R_{7} cannot be achieved by a solution consisting of three pieces. 
There are 3 different possible ways to get the maximum amount.
P[6] + P[1] → 17+1 = 18
P[2] + P[2] + P[3] → 5+5+8 = 18
P[7] → 18
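The three solutions can be confirmed with the standard rod-cutting DP. Note the prices for lengths 4 and 5 (9 and 10) are assumed for illustration; only P[1]=1, P[2]=5, P[3]=8, P[6]=17 and P[7]=18 appear in the solution above.

```python
def max_revenue(prices, L):
    # Classic rod-cutting DP: R[l] = max over the first cut i of p[i] + R[l-i]
    R = [0] * (L + 1)
    for l in range(1, L + 1):
        R[l] = max(prices[i] + R[l - i] for i in range(1, l + 1))
    return R[L]

# prices[i] = price of a piece of length i; entries for lengths 4 and 5
# are assumed, the others are taken from the solution above.
prices = [0, 1, 5, 8, 9, 10, 17, 18]
print(max_revenue(prices, 7))  # 18
```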
Question 102 
3 
To find the number of spanning trees using 2 methods:
 If the graph is complete, use Cayley's formula n^{n-2}.
 If the graph is not complete, use Kirchhoff's theorem.
Steps in Kirchoff’s Approach:
(i) Build the adjacency matrix.
(ii) Replace every non-diagonal 1 by -1.
(iii) Replace every diagonal 0 by the degree of the corresponding vertex.
(iv) The cofactor of any element gives the number of spanning trees.
Using Kirchhoff's theorem here would take a lot of time because there are 9 vertices.
So, we use a brute-force technique with the help of Kruskal's algorithm.
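Kirchhoff's theorem is nonetheless easy to mechanize. A sketch (not tied to the 9-vertex graph of the question, which is not reproduced here): build the Laplacian, delete one row and column, and take the determinant; on the triangle K3 it agrees with Cayley's formula 3^(3-2) = 3.

```python
from fractions import Fraction

def spanning_trees(adj):
    # Kirchhoff / Matrix-Tree: delete row/col 0 of the Laplacian and take
    # the determinant (exact arithmetic via Fraction).
    n = len(adj)
    L = [[Fraction(-adj[i][j]) if i != j else Fraction(sum(adj[i]))
          for j in range(n)] for i in range(n)]
    M = [row[1:] for row in L[1:]]
    det = Fraction(1)
    for c in range(n - 1):
        piv = next((r for r in range(c, n - 1) if M[r][c] != 0), None)
        if piv is None:
            return 0
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            det = -det
        det *= M[c][c]
        for r in range(c + 1, n - 1):
            f = M[r][c] / M[c][c]
            for j in range(c, n - 1):
                M[r][j] -= f * M[c][j]
    return int(det)

triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(spanning_trees(triangle))  # 3
```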
Question 103 
Question 104 
Quicksort using the last element as pivot  
Insertion sort  
Selection sort  
Mergesort 
Quick sort(with last element as pivot) → will give the worst case time complexity as O(n^2).
Merge Sort → The merge sort will not depend upon the input order and will give O(nlogn) time complexity.
Insertion Sort → Insertion sort achieves its best-case time when the input array is already sorted: the inversion count is then 0, while for a reverse-sorted array the inversion count is maximum.
Note: Formally, two elements A[i] and A[j] form an inversion if A[i] > A[j] and i < j.
For the given (sorted) input array, the number of comparisons will not exceed n.
Selection Sort → Selection sort will not depend upon the input order and will give O(n^2) time complexity.
Question 105 
Let assume n=512
Method1:
Using standard recursive algorithm:
MaxMin is a recursive algorithm that finds the maximum and minimum of the set of elements {a(i), a(i+1), ..., a(j)}. The situations of set sizes one (i = j) and two (i = j-1) are handled separately. For sets containing more than two elements, the midpoint is determined (just as in binary search) and two new subproblems are generated. When the maxima and minima of these subproblems are determined, the two maxima are compared and the two minima are compared to achieve the solution for the entire set.
To find the number of element comparisons needed for Maxmin, T(n) represents this number, then the resulting recurrence relation is
When n is a power of two, n = 2^{k} for some positive integer k, then
T(n) = 2T(n/2) + 2
= 2(2T(n/4) + 2) + 2
= 4T(n/4) + 4 + 2
⋮
= 2^{k-1} T(2) + Σ_{i=1}^{k-1} 2^{i}    (with T(2) = 1)
= 2^{k-1} + 2^{k} - 2
= 3n/2 - 2
→ For the given example, n = 512.
Applying 3n/2 - 2:
= (3 × 512)/2 - 2
= 768 - 2
= 766
Method2:
Find the minimum and maximum independently, using n - 1 comparisons for each, for a total of 2n - 2 comparisons. In fact, at most 3⌊n/2⌋ comparisons are sufficient to find both the minimum and the maximum. The strategy is to maintain the minimum and maximum elements seen thus far. Rather than processing each element of the input by comparing it against the current minimum and maximum, at a cost of 2 comparisons per element, we process elements in pairs. We compare pairs of elements from the input first with each other, and then we compare the smaller to the current minimum and the larger to the current maximum, at a cost of 3 comparisons for every 2 elements.
Setting up initial values for the current minimum and maximum depends on whether n is odd or even. If n is odd, we set both the minimum and maximum to the value of the first element,and then we process the rest of the elements in pairs. If n is even, we perform 1 comparison on the first 2 elements to determine the initial values of the minimum and maximum, and then process the rest of the elements in pairs as in the case for odd n.
Let us analyze the total number of comparisons. If n is odd, then we perform 3⌊n/2⌋ comparisons. If n is even, we perform 1 initial comparison followed by 3(n - 2)/2 comparisons, for a total of 3n/2 - 2. Thus, in either case, the total number of comparisons is at most 3⌊n/2⌋.
Given an even number of elements, so 3n/2 - 2 comparisons:
= (3 × 512)/2 - 2
= 768 - 2
= 766
Method3:
By using Tournament Method:
Step 1: Finding the minimum element via a knockout tournament takes n - 1 comparisons.
We are given 512 elements, so finding the minimum takes 512 - 1 = 511 comparisons.
Step 2: Find the largest element among the first-round losers.
 After the first round of the tournament, exactly n/2 = 256 numbers lose their match.
 The biggest loser (the largest number) must be among these 256 losers. Finding it takes (n/2) - 1 = 256 - 1 = 255 comparisons.
Total: 511 + 255 = 766
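The pairs strategy of Method 2 can be sketched directly, counting comparisons as it goes (an illustrative sketch):

```python
import random

def minmax_with_count(a):
    # Pairs method: 1 comparison per pair internally, then one against the
    # current min and one against the current max: 3 per 2 elements.
    n = len(a)
    comps = 0
    if n % 2:
        lo = hi = a[0]
        i = 1
    else:
        comps += 1
        lo, hi = (a[0], a[1]) if a[0] < a[1] else (a[1], a[0])
        i = 2
    while i < n:
        comps += 1
        small, big = (a[i], a[i + 1]) if a[i] < a[i + 1] else (a[i + 1], a[i])
        comps += 1
        if small < lo:
            lo = small
        comps += 1
        if big > hi:
            hi = big
        i += 2
    return lo, hi, comps

data = random.sample(range(10 ** 6), 512)
lo, hi, comps = minmax_with_count(data)
print(comps)  # 3*512/2 - 2 = 766
```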
Question 106 
Divide and Conquer  
Dynamic Programming  
Greedy Algorithm  
Branch and Bound 
Sum of subset problem:
Given a set of nonnegative integers, and a value sum, determine if there is a subset of the given set with the sum equal to the given sum.
Example:
Array set[ ]= {1,2,3,4,5} and sum=6
Here the possible subsets are {2,4}, {1,2,3} and {5,1}.
Question 107 
m*n  
maximum of m and n  
minimum of m and n  
m+n–1 
For example, take two sorted subarrays of size 3, such as 1,3,5 and 2,4,6. In this case the number of comparisons is 5, which is m + n - 1 and can be taken as O(m + n).
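The worst case of m + n - 1 comparisons shows up directly in the standard two-pointer merge (a small sketch):

```python
def merge_with_count(a, b):
    # Standard two-pointer merge; one comparison per element emitted
    # until one list is exhausted, hence at most m + n - 1 comparisons.
    i = j = comps = 0
    out = []
    while i < len(a) and j < len(b):
        comps += 1
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])
    out.extend(b[j:])
    return out, comps

print(merge_with_count([1, 3, 5], [2, 4, 6]))  # ([1, 2, 3, 4, 5, 6], 5)
```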
Question 108 
Merge Sort  
Insertion Sort  
Selection Sort  
Quick Sort 
Question 109 
Detailed  
Aggregate  
Qualitative  
None of the above 
Question 110 
Greedy method  
Divideandconquer  
Dynamic Programming  
Backtracking 
1) The subarray which is already sorted.
2) Remaining subarray which is unsorted.
In every iteration of selection sort, the minimum element (considering ascending order) from the unsorted subarray is picked and moved to the sorted subarray. Clearly, it is a greedy approach to sort the array.
Question 111 
O(n^{2}), O(n^{2})  
O(n^{2}), O(nlog_{2}n)  
O(n log_{2}n), O(n^{2})  
O(n log_{2}n), O(n log_{2}n) 
Question 112 
O ( log_{2}n )  
O ( n )  
O ( n log_{2}n )  
O ( n^{2} ) 
Question 113 
1, 3, 5, 7, 9, 11, 13  
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55  
0, 1, 3, 4, 7, 11, 18, 29, 47  
0, 1, 3, 7, 15 
with seed values
F0 = 0 and F1 = 1.
The Fibonacci numbers are the numbers in the following integer sequence.
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144..
Question 114 
T(1) = T(2) = T(3)  
T(1) + T(3) = 2 * T(2)  
T(1) – T(3) = T(2)  
T(1) + T(2) = T(3) 
Solution:
Better way of solving is substitute.
T(1) = 1
T(2) = 2
T(3) = 3
T(4) = T(3) + T(2)  T(1) = 4
T(5) = T(4) + T(3)  T(2) = 5
:
:
T(n) = T(n-1) + T(n-2) - T(n-3) = (n-1) + (n-2) - (n-3) = n
So order is O(n)
Question 115 
O(n)  
O(n log n)  
O(n^{3/2})  
O(n^{3}) 
→ For example, if X is a set of airports and x R y means "there is a direct flight from airport x to airport y" (for x and y in X), then the transitive closure of R on X is the relation R+ such that x R+ y means "it is possible to fly from x to y in one or more flights". Informally, the transitive closure gives you the set of all places you can get to from any starting place.
→ To find the transitive closure of given relation, we can represent the given relation by the graph such that if x R y then there should be directed edge between x and y in a graph.
→ The time complexity of the Floyd-Warshall algorithm is O(V^{3}), where V is the number of vertices in the given graph. Take V as the number of elements in the set, i.e., N.
→ Therefore the time complexity for the given question is O(N^{3}).
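For a boolean relation, the Floyd-Warshall scheme reduces to Warshall's transitive-closure algorithm (a small sketch using the flight example above):

```python
def transitive_closure(n, pairs):
    # Warshall's algorithm: reach[i][j] |= reach[i][k] and reach[k][j]
    reach = [[False] * n for _ in range(n)]
    for i, j in pairs:
        reach[i][j] = True
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if reach[i][k] and reach[k][j]:
                    reach[i][j] = True
    return reach

# Direct flights 0 -> 1 and 1 -> 2, so 0 -> 2 is in the closure
R = transitive_closure(3, [(0, 1), (1, 2)])
print(R[0][2])  # True
```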
Question 116 
Dynamic programming  
Backtracking  
Greedy  
Divide and Conquer 
→ A single execution of the algorithm will find the lengths (summed weights) of shortest paths between all pairs of vertices
→ The FloydWarshall algorithm is an example of dynamic programming.
Question 117 
Both NPcomplete  
Both in P  
NPcomplete and in P, respectively  
Undecidable and NPcomplete, respectively 
→ The 3-SAT problem is an NP-complete problem
Question 118 
T(n)=2T(n/2)+k , where k is constant  
T(n)=T(n/2) +k, where k is constant  
T(n)=T(n/2)+logn  
T(n)=T(n/2)+n 
The time to search in an array of ‘n’ elements is equal to the time to search in an array of n/2 elements plus k comparison.
T(n)=T(n/2)+k // k is constant
Question 119 
Insertion Sort  
Quick Sort  
Heap Sort  
Selection Sort 
Question 120 
11  
12  
13  
10 
Step2: 8,7,9,22,31,5,13
Step3: 8,7,9,22,5,31,13
Step4: 8,7,9,22,5,13,31
Step5: 7,8,9,22,5,13,31
Step6: 7,8,9,5,22,13,31
Step7: 7,8,9,5,13,22,31
Step8: 7,8,5,9,13,22,31
Step9: 7,5,8,9,13,22,31
Step10: 5,7,8,9,13,22,31
Note: A total of 10 swaps is required to sort the array.
Question 121 
Prim’s algorithm  
Dijkstra's algorithm  
BellmanFord’s algorithm  
FloydWarshall’s algorithm 
Dijkstra's algorithm → Single-source shortest paths, only non-negative edge weights
Bellman-Ford's algorithm → Single-source shortest paths; edge weights may be positive or negative, but no negative-weight cycle.
Floyd-Warshall's algorithm → All-pairs shortest path problem
Question 122 
O(nlogn)  
O(n^{3/2})  
O(n^{3})  
O(n) 
Question 123 
1  
5  
10  
20 
The 1st iteration compares 4 numbers with the current minimum,
the 2nd iteration compares 3,
the 3rd iteration compares 2,
and the 4th iteration compares 1.
So, the total number of comparisons is 4 + 3 + 2 + 1 = 10.
This is the sum of the first (n - 1) numbers starting from 1:
S = ((1 + (n - 1)) × (n - 1))/2 = n(n - 1)/2
For n = 5, S = 10.
Question 124 
T(n) = 2T(n-1), if n > 0
= 1, otherwise
Then T(n) is (in big O order)
O(n)  
O(2^{n})  
O(1)  
O(log n) 
Question 125 
r, s, p, q  
r, p, s, q  
q, s, p, r  
s, p, q, r 
Insertion sort→ Decrease and Conquer
Gaussian Elimination→ Transform and Conquer
Floyd Shortest path algorithm→ Dynamic programming
Question 126 
n/2  
(n1)/2  
(n+1)/2  
None of these 
= No. of comparisons if the element is in the 1^{st} position + No. of comparisons if it is in the 2^{nd} position + ............. + No. of comparisons if it is in the n^{th} position
= 1 + 2 + 3 + ... + n
= n(n+1)/2
Since there are n elements in the list, the average number of comparisons
= total comparisons/total number of elements
= (n(n+1)/2)/n
= (n+1)/2
Question 127 
Dynamic programming  
Backtracking  
Divide and conquer  
Greedy method 
1. Quick sort
2. Merge sort
3. Heap sort
Note: Binary search also follows divide and conquer.
Question 128 
T(n) = 2T(√n) + 1
T(1) = 1
Which of the following is true?
T(n)= O(log log n)  
T(n) = O(log n)  
T(n) = O(√n)  
T(n)= O(n) 
Question 129 
Explanation 
Step 1 Sort array A.
Step 2 Traverse through A to find the most frequent element by keeping counters. Since numbers with the same value appear in consecutive locations of the sorted array A, this can be achieved in one pass. Let the maximum frequency be MAX.
Step 3 Traverse through A once more to report all the elements whose frequencies match MAX.
Since Step 1 needs O(n log n) time, the overall time complexity is O(n log n).
Question 130 
Explanation 
For i ∈ 1, 2, . . . , n, let LAS[i] denote the length of the longest ascending subsequence ending at position i. We can set up the following inductive calculation for LAS[i].
• LAS[1] = 1
• For i ∈ 2, 3, . . . , n, LAS[i] = 1 + max{LAS[j] : 1 ≤ j < i, S[j] < S[i]} (taking the max of an empty set to be 0)
We can use this to compute LAS[i] for i ∈ 1, 2, . . . , n in time O(n^2) — computing LAS[i] takes time O(i), so the total time is O(n^2). The final answer is the maximum value among all the LAS[i]’s, which can be found in time O(n).
The computation of LAS can be improved to O(n log n) by maintaining auxiliary intermediate information about the longest ascending sequences computed at each stage. Refer to any standard textbook on algorithms.
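The O(n^2) dynamic program described above can be sketched in Python (the sample sequence is only an illustration):

```python
def longest_ascending(S):
    n = len(S)
    LAS = [1] * n                      # LAS[i]: longest ascending subsequence ending at i
    for i in range(1, n):
        for j in range(i):
            if S[j] < S[i]:
                LAS[i] = max(LAS[i], LAS[j] + 1)
    return max(LAS)                    # best over all ending positions

print(longest_ascending([5, 2, 8, 6, 3, 6, 9, 7]))  # → 4
```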
Question 131 
L  
L/2  
(L+1)/2  
2L 
A linear search runs in at worst linear time and makes at most n comparisons, where n is the length of the list, whether the element is found or not.
Question 132 
O(n)  
O(n log n)  
O(n^{2})  
O(n^{3}) 
T(n + 1) = 2n + T(n)
By substitution method:
n=1, T(2)=2x1+T(1)=2+10=12
n=2,T(3)=2x2+T(2)=4+12=16
n=3,T(4)=2x3+T(3)=6+16=22
n=4,T(5)=2x4+T(4)=8+22=30
n=5,T(6)=2x5+T(5)=10+30=40
n=6,T(7)=2x6+T(6)=12+40=52
n=7,T(8)=2x7+T(7)=14+52=66
The closed form is T(n) = n^{2} − n + 10, which is bounded by n^{2} for large n, so the function is O(n^{2}).
Question 133 
Greedy method  
Backtracking  
Dynamic programming  
Divide and Conquer 
Conceptually, a merge sort works as follows:
1. Divide the unsorted list into n sublists, each containing one element (a list of one element is considered sorted).
2. Repeatedly merge sublists to produce new sorted sublists until there is only one sublist remaining. This will be the sorted list.
Question 134 
Insertion sort, Quick sort  
Quick sort, Quick sort  
Quick sort, Insertion sort  
Insertion sort, Insertion sort 
Question 135 
3.00  
3.46  
2.81  
3.33 
Each level i has 2^{i} nodes.
Level 0 has one node, so one comparison (1).
Level 1 has two nodes; each needs two comparisons, so four in total (4).
Level 2 has four nodes; total comparisons are 4×3 = 12.
Level 3 has 8 slots, but only 11 items need to be placed and 7 nodes are already used, so 4 nodes remain at level 3. Total comparisons at level 3 are 4×4 = 16.
So, the total number of comparisons over all levels is 1 + 4 + 12 + 16 = 33.
Average comparisons required for 11 items = 33/11 = 3
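The level-by-level arithmetic can be tabulated directly (a small sketch; the nodes-per-level list comes from the count above):

```python
nodes_per_level = [1, 2, 4, 4]  # 11 items laid out level by level in a balanced BST
# a node at level i costs i+1 comparisons to reach
total = sum((level + 1) * count for level, count in enumerate(nodes_per_level))
print(total, total / 11)  # → 33 3.0
```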
Question 136 
 Quicksort
 Heapsort
 Mergesort
1 and 2 only  
2 and 3 only  
3 only  
1, 2 and 3 
Question 137 
int module(int n)
{
    if (n == 1)
        return 1;
    else
        return (n + module(n - 1));
}
O(n)  
O(log n)  
O(n^{2})  
O(n!) 
Question 138 
The solution of the recurrence relation T(m) = T(3m/4) + 1 is :
θ(lg m)  
θ(m)  
θ(mlg m)  
θ(lglg m)

a = 1, b = 4/3, k = 0, p = 0
Here, a = b^{k}
So, T(m) = θ(m^{log a / log b} log^{p+1} m)
T(m) = θ(log m)
Question 139 
Which of the following algorithms solves the singlesource shortest paths ?
Prim’s algorithm  
Floyd  Warshall algorithm  
Johnson’s algorithm  
Dijkstra’s algorithm

Question 140 
A text is made up of the characters A, B, C, D, E each occurring with the probability 0.08, 0.40, 0.25, 0.15 and 0.12 respectively. The optimal coding technique will have the average length of :
2.4  
1.87  
3.0  
2.15 
Step 1: Select two characters with smallest probabilities and merge them.
Step 2: Select two characters from Step 1 with smallest probabilities and merge them.
Step 3: Select two characters (from Step 2) with smallest probabilities and merge them.
Step 4: Merge the remaining two probabilities.
A = 1000 (4bits)
E = 1001 (4bits)
D = 101 (3bits)
C = 11 (2bits)
B = 0 (1bit)
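A quick way to confirm the average length: it equals the sum of the weights produced by the Huffman merges. A minimal sketch using a heap (the function name is ours):

```python
import heapq

def huffman_avg_length(probs):
    # each merge's combined weight contributes one bit to every symbol beneath it,
    # so the sum of merged weights equals the average code length
    heap = list(probs)
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        a, b = heapq.heappop(heap), heapq.heappop(heap)
        total += a + b
        heapq.heappush(heap, a + b)
    return total

print(round(huffman_avg_length([0.08, 0.40, 0.25, 0.15, 0.12]), 2))  # → 2.15
```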
Question 141 
A binary search tree in which every nonleaf node has nonempty left and right subtrees is called a strictly binary tree. Such a tree with 19 leaves :
cannot have more than 37 nodes  
has exactly 37 nodes
 
has exactly 35 nodes  
cannot have more than 35 nodes 
Where,
L = Number of leaf nodes
n = Nary tree
I = Number of intermediate nodes
Now,
19 = I(2 − 1) + 1
19 = 2I − I + 1
19 = I + 1
I = 18
Total Number of nodes in a tree = Number of Intermediate nodes + Number of Leaf nodes
Total Number of nodes in a tree = 19 + 18
Total Number of nodes in a tree = 37
Question 142 
Match the following with respect to algorithm paradigms :
List-I                               List-II
(a) The 8-Queens problem             (i) Dynamic programming
(b) Single-Source shortest paths     (ii) Divide and conquer
(c) STRASSEN’s Matrix multiplication (iii) Greedy approach
(d) Optimal binary search trees      (iv) Backtracking
(a)(iv), (b)(i), (c)(iii), (d)(ii)  
(a)(iv), (b)(iii), (c)(i), (d)(ii)  
(a)(iii), (b)(iv), (c)(ii), (d)(i)  
(a)(iv), (b)(iii), (c)(ii), (d)(i) 
The 8-Queens problem uses Backtracking.
Single-Source shortest paths use the Greedy approach.
STRASSEN’s Matrix multiplication uses the Divide and conquer technique.
Optimal binary search trees use Dynamic programming.
Question 143 
45  
72  
360  
450 
Octal number having → 5 digits
The octal number system base value = 8
The maximum number of comparison = (number of items) * (radix) * (number of digits)
= 9 * 5 * 8
= 360
Question 144 
A 5-ary tree is a tree in which every internal node has exactly 5 children. The number of leaf nodes in such a tree with 8 internal nodes will be :
30  
33  
45  
None of the above 
L = I(n  1) + 1
Where,
L = Number of leaf nodes
n = Nary tree
I = Number of intermediate nodes
Now, L = 8(5 − 1) + 1
L = 8(4) + 1
L = 33
Total Number of nodes in a tree = Number of Intermediate nodes + Number of Leaf nodes
Total Number of nodes in a tree = 8 + 33
Total Number of nodes in a tree = 41
Question 145 
Consider a Boolean function of ‘n’ variables. The order of an algorithm that determines whether the Boolean function produces an output of 1 is :
Logarithmic  
Linear
 
Quadratic  
Exponential 
Set 1 has “n” variables, and each variable in set 1 can be mapped to one boolean value in set 2, i.e., each variable in set 1 has 2 choices, and there are “n” such variables in set 1.
→ So, the total number of input combinations is at most 2^{n}. It means exponential.
Question 146 
E is the number of edges in the graph and f is maximum flow in the graph. When the capacities are integers, the runtime of FordFulkerson algorithm is bounded by :
O(E∗f)
 
O(E^{2}∗f)
 
O(E∗f^{2})
 
None of the above 
→ The Ford-Fulkerson method or Ford-Fulkerson algorithm (FFA) is a greedy algorithm that computes the maximum flow in a flow network.
→ It is called a "method" instead of an "algorithm" because the approach to finding augmenting paths in a residual graph is not fully specified, or is specified in several implementations with different running times.
→ The runtime of Ford-Fulkerson is bounded by O(E*f), where E is the number of edges in the graph and f is the maximum flow in the graph. This is because each augmenting path can be found in O(E) time and increases the flow by an integer amount of at least 1, with the upper bound f.
→ The variation of the Ford-Fulkerson algorithm with guaranteed termination and a runtime independent of the maximum flow value is the Edmonds-Karp algorithm, which runs in O(VE^{2}) time.
Question 147 
T(n) = 2T(n−1), if n > 0
     = 1, otherwise
O(nlogn)  
O(n^{2})  
O(2^{n})  
O(n) 
T(n) = 2T(n − 1)
= 2 [2T(n − 2)] = 2^{2} T(n − 2)
⋮
= 2^{k} T(n − k)
Let k = n:
= 2^{n} T(0) = 2^{n}
≅ O( 2^{n} )
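For the recurrence as stated, T(n) = 2T(n−1) with T(n) = 1 otherwise, the closed form is 2^n; a quick numeric check (a sketch):

```python
def t(n):
    # T(n) = 2·T(n-1) for n > 0, T(0) = 1
    return 1 if n == 0 else 2 * t(n - 1)

# the recurrence matches the closed form 2^n, i.e. O(2^n)
assert all(t(n) == 2 ** n for n in range(10))
```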
Question 148 
O(nlogn)  
O(logn)  
O(n)  
O(n^{2}) 
Question 149 
O(n)  
O(nlogn)  
O(logn) for search and insert, and O(n) for delete  
None of these 
→ The question does not state whether the elements are sorted, so in the worst case we must assume they are unsorted.
→ All operations (i.e. search, insert and delete) will then take O(n).
Question 150 
Greedy approach  
Dynamic programming  
Backtracking paradigm  
Divide and conquer paradigm 
→ Dijkstra’s algorithm is solving the problem of single source shortest path.
Question 151 
89,19,50,17,12,15,2,5,7,11,6,9,100
Minimum _____ number of interchanges are needed to convert it into a maxheap
3  
4  
5  
6 
1st swap is: 100 and 15
2nd swap is: 100 and 50
3rd swap is: 100 and 89
So, a minimum of 3 interchanges is needed.
Question 152 
Selection sort  
Bubble sort  
Merge sort  
Quick sort 
Question 153 
12  
10  
6  
5 
Question 154 
Greedy  
Branch and bound  
Dynamic Programming  
Backtracking 
Backtracking is a general algorithm for finding all (or some) solutions to some computational problems, notably constraint satisfaction problems, that incrementally builds candidates to the solutions, and abandons a candidate ("backtracks") as soon as it determines that the candidate cannot possibly be completed to a valid solution.
Question 155 
True,True  
False, False  
True,False  
False,False 
Question 156 
Quick sort  
Merge sort  
Shell sort  
Bubble sort 
Question 157 
T(n)=2T(n−2)+2  
T(n)=2T(n/2)+1  
T(n)=2T(n−2)+n  
T(n)=2T(n−1)+1 
T(n) = 2T(n − 1) + 1
= 2 [2T(n − 2) + 1] + 1
= 2^{2} T(n − 2) + 3
⋮
= 2^{k} T(n − k) + (2^{k} − 1)
Let n − k = 1, i.e. k = n − 1:
= 2^{n−1} T(1) + (2^{n−1} − 1)
= 2^{n−1} + 2^{n−1} − 1
= 2^{n} − 1
≌ O(2^{n})
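The unrolling can be verified numerically against the closed form 2^n − 1 (a sketch):

```python
def t(n):
    # T(n) = 2·T(n-1) + 1 for n > 1, T(1) = 1 — the Tower-of-Hanoi recurrence
    return 1 if n == 1 else 2 * t(n - 1) + 1

# closed form: 2^n − 1, hence O(2^n)
assert all(t(n) == 2 ** n - 1 for n in range(1, 12))
```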
Question 158 
O(mn)  
O(m)  
O(m+n)  
O(n) 
Step1: Sorting the edges takes O(E log E) time.
Step2: After sorting, we iterate through all edges and apply the find-union algorithm.
The find and union operations take nearly O(1) (amortized) time with a disjoint set. So the overall complexity is O(E log E + E) time.
Given the edges are already sorted, we only need the second step, i.e., we iterate through all edges and apply the find-union algorithm. The find and union operations take nearly O(1) time. So, the total time complexity will be O(E).
Question 159 
Consider the following terminology and match List 1 and List 2 and choose the correct answer from the code given below
 b = branch factor
d = depth of shallowest solution
M = Maximum depth of the search tree
l = depth limit
List 1                            List 2
(a) BFS                           (i) O(bd)
(b) DFS                           (ii) O(b^{d})
(c) Depth Limited Search          (iii) O(bm)
(d) Iterative deepening Search    (iv) O(bl)
Code:
(a)(iii), (b)(ii), (c)(iv), (d)(i)
 
(a)(ii), (b)(iii), (c)(iv), (d)(i)
 
(a)(i), (b)(ii), (c)(iv), (d)(iii)
 
(a)(i), (b)(iii), (c)(iv), (d)(ii)

BFS → O(b^{d}) worst-case space complexity
DFS → O(bm) worst-case space complexity
Depth-Limited Search → O(bl)
Iterative deepening Search → O(bd)
Note:
Based upon BFS and DFS we can find the solution.
Question 160 
12  
10  
6  
5 
Merge sort becomes preferable once 10n·log_{10} n ≤ 0.0001·n^{2}, because n^{2} is asymptotically larger than n·log n.
Let n = 10^{k} records. Substituting into 10n·log_{10} n ≤ 0.0001·n^{2}:
10(10^{k})·log_{10} 10^{k} ≤ 0.0001(10^{k})^{2}
10^{k+1}·k ≤ 0.0001 × 10^{2k}
k ≤ 10^{2k−k−1−4}
k ≤ 10^{k−5}
According to the problem, 6 is the smallest suitable value of k.
Question 161 
O(nlogn),O(nlogn) and O(n^{2})  
O(n^{2}),O(n^{2}) and O(nlogn)  
O(n^{2}),O(nlogn) and O(nlogn)  
O(n^{2}),O(nlogn) and O(n^{2}) 
merge sort → O(nlogn)
quick sort → O(n^{2})
Question 162 
O(logn)  
O(n)  
O(nlogn)  
none of the above, as the tree cannot be be uniquely determined. 
T(n) = T(k) + T(n−k−1) + O(log n)
In the worst case T(n) = O(n log n), when k = 0.
But searching for the root's position in the inorder sequence of this BST can be done in O(1), because the elements are exactly 1...n: if the last element of the postorder is, say, 5, we take it as the root, and the value 5 itself tells us that the first 4 elements of the postorder belong to the left subtree and the rest (except the root) to the right, so we solve recursively. The time complexity for this would be
T(n) = T(k) + T(n−k−1) + O(1)
which gives T(n) = O(n).
Since we know that all elements must be traversed at least once, T(n) = θ(n).
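A minimal sketch of the O(n) reconstruction, consuming the postorder from the right under value bounds (the helper names are ours):

```python
def build_bst(post, lo=float('-inf'), hi=float('inf')):
    # the last remaining element is this subtree's root iff it lies in (lo, hi)
    if not post or not (lo < post[-1] < hi):
        return None
    root = post.pop()
    right = build_bst(post, root, hi)   # right-subtree nodes sit just before the root
    left = build_bst(post, lo, root)
    return (left, root, right)

def inorder(t):
    return [] if t is None else inorder(t[0]) + [t[1]] + inorder(t[2])

tree = build_bst([1, 3, 2, 5, 7, 6, 4])  # postorder of a BST on 1..7
print(inorder(tree))  # → [1, 2, 3, 4, 5, 6, 7]
```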
Question 163 
O(n)  
O(m)  
O(m+n)  
O(mn) 
Suppose if we are using Adjacency matrix means it takes θ(n ^{2} ).
Question 164 
solves it in linear time using a left to right pass of the array  
solves in linear time using a right to left pass of the array  
solves it using divide and conquer in time O(nlogn)  
solves it in time O(n ^{2} ) 
● Start from the right, keeping track of the largest element seen so far (currentLeader). When a larger element is found, print it and set currentLeader to the current element.
● The last element is always a leader, since there is no element to its right. Solving it with a right-to-left pass of the array takes linear time, O(n).
Question 165 
Bellman ford algorithm for single source shortest path  
floyd Warshall for all pairs shortest paths  
01 knapsack problem  
Prim’s minimum spanning tree 
Question 166 
Maximum sum subsequence in an array  
Maximum sum subarray in an array  
Maximum product subsequence in an array  
Maximum product subarray in an array 
def max_subarray(A):
    max_ending_here = max_so_far = A[0]
    for x in A[1:]:
        max_ending_here = max(x, max_ending_here + x)
        max_so_far = max(max_so_far, max_ending_here)
    return max_so_far
Question 167 
O(nlogd)  
O(n^{2} logd)  
O(nd)  
O(n^{2}d) 
→ With insertion sort, each element is at most d places from its sorted position, so every insertion shifts at most d elements; the total is O(n·d).
Note: Using a min-heap of size d, the time complexity will be O(n log d), which is more efficient than insertion sort.
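A sketch of the O(n log d) heap approach (the function name is ours; it assumes every element is at most d places from its sorted position):

```python
import heapq

def sort_d_sorted(a, d):
    # the global minimum is always among the first d+1 candidates,
    # so a (d+1)-element min-heap suffices; each pop/push is O(log d)
    heap = a[:d + 1]
    heapq.heapify(heap)
    out = []
    for x in a[d + 1:]:
        out.append(heapq.heappop(heap))
        heapq.heappush(heap, x)
    while heap:
        out.append(heapq.heappop(heap))
    return out

print(sort_d_sorted([2, 1, 4, 3, 6, 5], 1))  # → [1, 2, 3, 4, 5, 6]
```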
Question 168 
Every minimum spanning tree of G must contain e_{min} and e_{max}  
If e_{max} is in a minimum spanning tree, then its removal must disconnect G  
No minimum spanning tree contains e_{max}  
G has a multiple minimum spanning trees 
If the weights were all distinct, we would get only one MST. So, Option C is the most appropriate solution.
Question 169 
nlog(n/2)  
nlogn  
(nlogn)/2  
logn 
Question 170 
Match List1 with List2 and choose the correct answer from the code given below
List 1                               List 2
(a) Greedy Best-first Search         (i) Selects a node for expansion if optimal path to that node has been found
(b) A* Search                        (ii) Avoids substantial overhead associated with keeping the sorted queue of nodes
(c) Recursive Best-First Search      (iii) Suffers from excessive node generation
(d) Iterative-deepening A* Search    (iv) Time complexity depends upon the quality of heuristic
Code:
(a)(iv), (b)(iii), (c)(ii), (d)(i)  
(a)(i), (b)(iv), (c)(iii), (d)(ii)
 
(a)(iv), (b)(i), (c)(ii), (d)(iii)
 
(a)(i), (b)(ii), (c)(iii), (d)(iv)

→ Greedy Best-first Search: Selects a node for expansion if the optimal path to that node has been found.
→ A* Search: Time complexity depends upon the quality of the heuristic.
→ Recursive Best-First Search: Suffers from excessive node generation. RBFS depends on how widely the promising nodes are separated in the search tree, and is harder to anticipate.
→ Iterative-deepening A* Search: Avoids the substantial overhead associated with keeping the sorted queue of nodes.
Question 171 
Insertion sort  
merge sort  
Quick sort  
bubble sort 
Question 172 
O(m*n)  
O(m+n)  
O(m^{2})  
O(m^{n}) 
1. DFS search produces a DFS tree/forest
2. Strongly Connected Components form subtrees of the DFS tree.
3. If we can find head of such subtrees, we can print/store all the nodes in that subtree (including head) and that will be one SCC.
4. There is no back edge from one SCC to another (There can be cross edges, but cross edges will not be used while processing the graph).
Note: Time complexity for Tarjan’s algorithm is O(V+E)
Kosaraju’s algorithm
1) Create an empty stack ‘S’ and do DFS traversal of a graph. In DFS traversal, after calling recursive DFS for adjacent vertices of a vertex, push the vertex to stack. In the above graph, if we start DFS from vertex 0, we get vertices in stack as 1, 2, 4, 3, 0.
2) Reverse directions of all arcs to obtain the transpose graph.
3) One by one pop a vertex from S while S is not empty. Let the popped vertex be ‘v’. Take v as source and do DFS (call DFSUtil(v)). The DFS starting from v prints strongly connected component of v. In the above example, we process vertices in order 0, 3, 4, 2, 1 (One by one popped from stack).
Note: Time complexity for Kosaraju’s algorithm is O(V+E)
Question 173 
P  
NP  
Both (A) and (B)  
None of these 
Question 174 
logn  
n  
n ^{2 }  
n^{ n} 
Question 175 
Consider the graph shown below :
Use Kruskal’s algorithm to find the minimum spanning tree of the graph. The weight of this minimum spanning tree is
13  
16  
17  
14 
The weight of this minimum spanning tree is 16.
Question 176 
Match List 1 with List 2 and choose the correct answer from the code given below :
List I (Graph Algorithm)         List II (Time Complexity)
(a) Dijkstra’s algorithm         (i) O(E lg E)
(b) Kruskal’s algorithm          (ii) Θ(V^{3})
(c) Floyd-Warshall algorithm     (iii) O(V^{2})
(d) Topological sorting          (iv) Θ(V + E)
Where V and E are the number of vertices and edges in graph respectively.
Code :(a)(i), (b)(iii), (c)(iv), (d)(ii)
 
(a)(i), (b)(iii), (c)(ii), (d)(iv)
 
(a)(iii), (b)(i), (c)(ii), (d)(iv)
 
(a)(iii), (b)(i), (c)(iv), (d)(ii) 
→ Kruskal's algorithm is a minimum-spanning-tree algorithm which finds an edge of the least possible weight that connects any two trees in the forest; its time complexity is O(E log E).
→ A topological sort or topological ordering of a directed graph is a linear ordering of its vertices such that for every directed edge (u, v) from vertex u to vertex v, u comes before v in the ordering; it takes Θ(V + E).
→ The Floyd–Warshall algorithm finds shortest paths in a weighted graph with positive or negative edge weights. A single execution of the algorithm finds the lengths of the shortest paths between all pairs of vertices, in Θ(V^{3}).
Question 177 
The second smallest of n elements can be found with _______ comparisons in worst case.
n + ceil(lg n) − 2
 
n  1  
lg n  
3n/2

Run a knockout tournament: compare the elements in pairs, then the winners in pairs, and so on. This takes n − 1 comparisons, and each element is involved in at most log n of them; at the end we have the overall winner (the smallest). The second smallest must have lost its comparison directly to the winner at some point, so keeping track of the (log n) elements compared with the winner and taking the best of them, with log n − 1 more comparisons, gives the second smallest.
So, no. of comparisons = (n − 1) + log_{2}(n) − 1
Example: 512 elements
In this case the no. of comparisons will be (512 − 1) + log_{2}(512) − 1
= 511 + (9 − 1)
= 511 + 8
= 519
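The formula n + ⌈lg n⌉ − 2 applied to the example (a sketch; the function name is ours):

```python
import math

def second_smallest_comparisons(n):
    # (n − 1) comparisons for the tournament, then ⌈lg n⌉ − 1 among
    # the elements that were compared directly with the winner
    return (n - 1) + math.ceil(math.log2(n)) - 1

print(second_smallest_comparisons(512))  # → 519
```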
Question 178 
Place references to them in an array and sort the array  
place them in a linked list and sort the linked list  
place pointers to them in an array and sort the array  
place them in an array and sort the array 
Option B is not correct because a linked list adds extra time complexity and makes searching very difficult.
Option D is suitable only for a small number of objects/elements, since moving large objects during sorting is expensive.
Question 179 
Consider two sequences X and Y :
 X = <0,1,2,1,3,0,1 >
Y = <1,3,2,0,1,0 >
The length of longest common subsequence between X and Y is
4  
2  
3  
5 
Possible common subsequences are 130, 1210 and 1301; the length of the longest common subsequence is 4.
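The length can be checked with the standard LCS dynamic program (a sketch):

```python
def lcs_len(X, Y):
    m, n = len(X), len(Y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]   # dp[i][j]: LCS of X[:i], Y[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_len([0, 1, 2, 1, 3, 0, 1], [1, 3, 2, 0, 1, 0]))  # → 4
```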
Question 180 
The solution of recurrence relation : T(n) = 2T(sqrt(n)) + lg(n) is
O(lg (n) lg(n))
 
O(lg (n))
 
O(n lg (n))
 
O(lg (n) lg(lg(n)))

Question 181 
Optimization  
NP complete  
Linear Solution  
Sorting 
Question 182 
No. of inputs  
Arrangement of elements in an array  
Size of elements  
Pivot element 
Question 183 
Processor and Memory  
Complexity and Capacity  
Time and Space  
Data and Space 
Time complexity is a function describing the amount of time an algorithm takes in terms of the amount of input to the algorithm. "Time" can mean the number of memory accesses performed, the number of comparisons between integers, the number of times some inner loop is executed, or some other natural unit related to the amount of real time the algorithm will take. We try to keep this idea of time separate from "wall clock" time, since many factors unrelated to the algorithm itself can affect the real time (like the language used, type of computing hardware, proficiency of the programmer, optimization in the compiler, etc.). It turns out that, if we chose the units wisely, all of the other stuff doesn't matter and we can get an independent measure of the efficiency of the algorithm.
Space complexity is a function describing the amount of memory (space) an algorithm takes in terms of the amount of input to the algorithm. We often speak of "extra" memory needed, not counting the memory needed to store the input itself. Again, we use natural (but fixedlength) units to measure this. We can use bytes, but it's easier to use, say, number of integers used, number of fixedsized structures, etc. Space complexity is sometimes ignored because the space used is minimal and/or obvious, but sometimes it becomes as important an issue as time.
Question 184 
O(log n)  
O(n)  
O(n logn)  
None of these 
a = 1, b = 2, k = 1, p = 0
Since a < b^{k},
T(n) = O(n^{k}) = O(n)
Question 185 
It can be used to decide the best algorithm that solves a given problem  
It is the lower bound of the growth rate of algorithm  
It determines the maximum size of a problem that can be solved in a given amount of time  
Both (A) and (B) 
● The letter O is used because the growth rate of a function is also referred to as the order of the function.
● A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function.
Question 186 
50.2 sec  
6.7 sec  
72.7 sec  
11.2 sec 
The Worst case time complexity is O(n^{2})
Average and best case time complexity is O(n logn)
Step1: Time taken to sort 1000 names by quicksort:
c·n·log n = c·1000·log 1000 = 100
⇒ c = 1 / (10·log 1000)
Now, time taken to sort 100 names by quicksort:
= c·n·log n
= (1 / (10·log 1000)) × 100 × log 100
= (100 × 2) / (10 × 3)    [taking base-10 logarithms]
≈ 6.7
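The same arithmetic in code, taking base-10 logarithms as the explanation does (a sketch):

```python
import math

c = 100 / (1000 * math.log10(1000))   # calibrate the constant from the 1000-name run
t_100 = c * 100 * math.log10(100)     # predicted time for 100 names
print(round(t_100, 1))  # → 6.7
```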
Question 187 
F3, F4, F1, F5, F6, F2  
F2, F6, F5, F1, F4, F3  
F1, F2, F3, F4, F5, F6  
Ordering is immaterial as all files are accessed with the same frequency. 
→ Final order is F3, F4, F1, F5, F6, F2
Question 188 
int recursive (int n)
{
    if (n == 1)
        return (1);
    else
        return (recursive(n - 1) + recursive(n - 1));
}
O(n)  
O(n log n)  
O(n^{2})  
O(2^{n}) 
T(n) = 2T(n − 1) + 1
= 2 [2T(n − 2) + 1] + 1
= 2^{2} T(n − 2) + 3
⋮
= 2^{k} T(n − k) + (2^{k} − 1)
= 2^{n−1} T(1) + (2^{n−1} − 1)
= 2^{n−1} + 2^{n−1} − 1
= 2^{n} − 1
≅ O( 2^{n} )
Question 189 
2^{5}  
7^{5}  
3^{5}  
2^{2×5} 
Here, there are 7 labelled nodes. By Cayley’s formula, the number of labelled trees is n^{n−2}:
= 7^{7−2}
= 7^{5}
= 16,807
Question 190 
20, 47, 15, 8, 9, 4, 40, 30, 12, 17
then the order of these elements after second pass of the algorithm is:
8, 9, 15, 20, 47, 4, 12, 17, 30, 40  
8, 15, 20, 47, 4, 9, 30, 40, 12, 17  
15, 20, 47, 4, 8, 9, 12, 30, 40, 17  
4, 8, 9, 15, 20, 47, 12, 17, 30, 40 
Question 191 
19  
18  
17  
20 
→ According to the profit/weight ratios above, item1 has the maximum ratio, so pick item1.
→ Now M = 7; pick item2 because it has more profit.
→ Now M = 5. By profit/weight ratio we would pick item3 or item5, but item5 weighs too much and exceeds the capacity, while selecting item3 wastes 1 unit of capacity. So here the profit/weight ratio fails and we fall back to brute force: pick item4, which has weight exactly 5.
→ Now M = 0 and Profit = 19. [item1 + item2 + item4]
Question 192 
What is the total number of spanning trees of a complete graph of 4 vertices (K_{4})?
16  
8  
4  
15 
By Cayley’s formula, n^{n−2} = 4^{4−2} = 4^{2} = 16
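Cayley's formula in code (a sketch; the function name is ours):

```python
def labelled_trees(n):
    # Cayley's formula: the complete graph K_n has n^(n-2) spanning trees
    return n ** (n - 2)

print(labelled_trees(4), labelled_trees(7))  # → 16 16807
```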
Question 193 
void Function(int n)
{
    int i = 1;
    int s = 1;
    while (s <= n)
    {
        i++;
        s = s + i;
    }
}
O(n)  
O(n^{ 2} )  
O(1)  
O(√n) 
i = 1, 2, 3, 4, 5, 6, ...
Here, ‘s’ is nothing but the sum of the first k natural numbers: after k iterations, s = k(k+1)/2. For example, 6(6+1)/2 = 21.
Assume n = 21; then the corresponding k value is 6, and k^{2} = 36 is already greater than n.
The loop terminates whenever k(k+1)/2 > n, so
k = O(√n)
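The loop can be instrumented to confirm the iteration count (a sketch mirroring Function(n)):

```python
def loop_iterations(n):
    # mirrors Function(n): s accumulates 1 + 2 + 3 + ... until it exceeds n
    i, s, count = 1, 1, 0
    while s <= n:
        i += 1
        s += i
        count += 1
    return count

print(loop_iterations(21))      # → 6
print(loop_iterations(10**6))   # the count grows like √(2n), i.e. O(√n)
```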
Question 194 
T(n)=8T(n/2)+qn, if n>1
=p, if n=1
Where p,q are constants. The order of this algorithm is
n^{2}  
n^{n}  
n^{3}  
N 
a = 8, b = 2, k = 1, p = 0;
So, according to the master theorem, a > b^{k}
= O(n^{log_{2} 8})
= O(n^{3})
Question 195 
Which of the following data structures is most suitable for radix sort?
Stack
 
Binary search tree
 
Tree
 
Linked list

→ Radix sort repeatedly distributes the keys into buckets by digit and collects them back in order. Linked lists are the most suitable structure because each bucket behaves as a queue, and appending to and concatenating linked lists takes O(1).
Question 196 
Ω(logn)  
Ω(nlogn)  
Ω(n)  
Ω(n^{2}) 
Question 197 
f(int Y[10], int x)
{
    int i, j, k;
    i = 0; j = 9;
    do
    {
        k = (i + j) / 2;
        if (Y[k] < x)
            i = k;
        else
            j = k;
    } while ((Y[k] != x) && (i < j));
    if (Y[k] == x)
        printf("x is in the array");
    else
        printf("x is not in the array");
}
On which of the following contents of 'Y' and 'x' does the program fail?
Y is [1 2 3 4 5 6 7 8 9 10] and x<10  
Y is [1 3 5 7 9 11 13 15 17 19] and x<1  
Y is [ 2 2 2 2 2 2 2 2 2 2] and x>2  
Y is [ 2 4 6 8 10 12 14 16 18 20] and 2 
First iteration:
i = 0, j = 9
k = (0 + 9)/2 = 4
With every element equal to “2” and x > 2, the condition (Y[k] < x) is true each time, so i = k:
Second iteration:
i = k = 4, j = 9
k = (4 + 9)/2 = 6
Third iteration:
i = k = 6, j = 9
k = (6 + 9)/2 = 7
Fourth iteration:
i = k = 7, j = 9
k = (7 + 9)/2 = 8
From here on k stays at 8, so the while condition (Y[k] != x && i < j) remains true.
● The above program doesn’t work for the cases where the element to be searched is the last element of Y[ ] or greater than the last (i.e. maximum) element in Y[ ].
● For such cases, the program goes into an infinite loop because i is assigned the value k in every iteration, and i never becomes equal to or greater than j.
● So the while condition never becomes false.
Question 198 
max(f(n),g(n))  
min(f(n),g(n))  
f(n)+g(n)  
f(n) * g(n) 
f(n) is order of M1
g(n) is order of M2.
In order to find the order of the algorithm, there are three possible cases with f(n) and g(n)
Case 1: f(n) > g(n)
In this case we take O(f(n)) as the complexity of the algorithm, since g(n) is the lower-order term and can be ignored.
Case 2: f(n) < g(n)
In this case we take O(g(n)) as the complexity of the algorithm, since f(n) is the lower-order term and can be ignored.
Case 3: f(n) = g(n)
The time complexity can be written as either O(g(n)) or O(f(n)) (they are asymptotically equal). So the order of the algorithm is max(f(n), g(n)).
Question 199 
Which of the following greedy strategies gives the optimal solution for the following 01 knapsack problem?
w=30, w1=10, p1=11, w2=15, p2=12, w3=20, p3=24, w4=25, p4=25
Largest profit per unit weight first
 
Largest profit first
 
Lightest item first
 
Heaviest item first

Note: Even though this is 0/1 knapsack, we still try the largest profit/weight ratio first.
→ W = 30; we cannot exceed the weight, and we cannot take fractions of items.
→ According to profit/weight, w3 gives the most profit, and then w1 fits exactly in the remaining capacity (20 + 10 = 30).
→ 0/1 knapsack optimal solution = 24 + 11 = 35
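Since there are only 4 items, the optimum can be confirmed by brute force over all subsets (a sketch; items are (weight, profit) pairs):

```python
from itertools import combinations

def knapsack_01(capacity, items):
    # brute force over all subsets — fine for 4 items
    best = 0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            if sum(w for w, _ in subset) <= capacity:
                best = max(best, sum(p for _, p in subset))
    return best

items = [(10, 11), (15, 12), (20, 24), (25, 25)]
print(knapsack_01(30, items))  # → 35
```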
Question 200 
Which of the following is the time complexity to find the determinant of an upper triangular matrix of order n*n?
Θ(n^{2.5})  
Θ(n)  
Θ(n^{2})
 
Θ(n) 
Question 201 
Greedy  
Dynamic  
Divide and conquer  
Backtracking 
→ Dijkstra’s algorithm is best example for greedy
→ Floyd-Warshall is the best example for dynamic programming.
→ Quicksort is best example for divide and conquer.
Question 202 
Divide and conquer  
Backtracking  
Heuristic approach  
Greedy approach 
→ The solutions to the subproblems are then combined to give a solution to the original problem.
→ So Merge Sort first divides the array into equal halves and then combines them in a sorted manner.
Question 203 
m*n  
minimum of m,n  
Maximum of m,n  
M+n1 
→ Since we need to merge 2 lists at a time, the optimal strategy would be to take the two smallest lists first.
→ The reason for picking the smallest two items is to carry the minimum number of items through repeated merging.
Question 204 
Bubble sort and Insertion sort  
Bubble sort and Quicksort  
Bubble sort and merge sort  
Quicksort and merge sort 
The quick sort and merge sort algorithms are both based on the divide and conquer paradigm, and work in quite a similar way.
In both algorithms, the larger problem is divided into smaller ones until a solution is found; the smaller solutions are then combined into the final solution.
Question 205 
int IsPrime(int n)
{
    int i;
    for (i = 2; i <= √n; i++)
        if (n % i == 0)
        {
            printf(“Not Prime\n”);
            return 0;
        }
    return 1;
}
Let T(n) denotes the number of times the for loop is executed by the program on input n.
Which of the following is TRUE?
T(n) = O( √n) and T(n) = Ω ( √n)  
T(n) = O( √n) and T(n) = Ω (1)  
T(n) = O(n) and T(n) = Ω ( √n)  
None of the above 
Question 206 
A) Dijkstra's algorithm
B) Floyd's algorithm
C) prim's algorithm
D) Warshall's algorithm
Both A and B  
Both A and C  
Both C and D  
Both B and D 
Question 207 
max(f(n),g(n))  
min(f(n),g(n))  
f(n)+g(n)  
f(n) * g(n) 
So, we will prefer max(f(n),g(n)).
Question 208 
T(n)=8T(n/2)+qn, if n>1
=p, if n=1
Where p,q are constants. The order of this algorithm is
n^{ 2}  
n^{ n}  
n ^{3}  
N 
Case 1: a > b^{k}
Here, a = 8, b = 2, k = 1, p = 0
O(n^{log_{2} 8}) = O(n^{log_{2} 2^{3}}) = O(n^{3})    [ log_{2} 2 = 1 ]
Question 209 
Consider an array with n elements. Which of the following options gives the maximum number of swap(a_{j}, a_{j+1}) operations to sort the array elements using efficient bubble sort algorithm?
n^{2} /2  
n(n1)/2  
n^{2}
 
n(n+1)/2 
→ In the first pass, bubble sort would have to look at n elements, then n − 1 elements (since the biggest element is at the end), and so on, until a single comparison occurs.
→ For n numbers, the total number of comparisons (and, in the worst case, swaps) done will be (n−1) + ... + 2 + 1
= (n(n−1))/2
= O(n^{2})
Question 210 
f(int Y[10], int x)
{
    int i, j, k;
    i = 0; j = 9;
    do
    {
        k = (i + j) / 2;
        if (Y[k] < x)
            i = k;
        else
            j = k;
    } while ((Y[k] != x) && (i < j));
    if (Y[k] == x)
        printf("x is in the array");
    else
        printf("x is not in the array");
}
On which of the following contents of 'Y' and 'x' does the program fail?
Y is [1 2 3 4 5 6 7 8 9 10] and x<10  
Y is [1 3 5 7 9 11 13 15 17 19] and x<1  
Y is [ 2 2 2 2 2 2 2 2 2 2] and x>2  
Y is [ 2 4 6 8 10 12 14 16 18 20] and 2 
First iteration:
i = 0, j = 9
k = (0 + 9)/2 = 4
With every element equal to “2” and x > 2, the condition (Y[k] < x) is true each time, so i = k:
Second iteration:
i = k = 4, j = 9
k = (4 + 9)/2 = 6
Third iteration:
i = k = 6, j = 9
k = (6 + 9)/2 = 7
Fourth iteration:
i = k = 7, j = 9
k = (7 + 9)/2 = 8
From here on k stays at 8, so the while condition (Y[k] != x && i < j) remains true.
→ The above program doesn’t work for the cases where the element to be searched is the last element of Y[ ] or greater than the last (i.e. maximum) element in Y[ ].
→ For such cases, the program goes into an infinite loop because i is assigned the value k in every iteration, and i never becomes equal to or greater than j.
So, the while condition never becomes false.
Question 211 
Linear search  
Hash search  
Binary search  
All of these 
→ Binary search requires the elements to be in sorted order; each time we search within half of the remaining elements, based on the mid value.
Question 212 
A process of rearranging a given set of objects in a specific order  
To facilitate the later search for members of the sorted set  
Is a relevant and essential activity, particularly in data processing  
All of these 
● ordering: arranging items in a sequence ordered by some criterion;
● categorizing: grouping items with similar properties.
The most common uses of sorted sequences are:
● making lookup or search efficient;
● making merging of sequences efficient.
● enable processing of data in a defined order.
Question 213 
Bubble  
Shake  
Tree  
Insertion 
Question 214 
Knight's tour  
Eight Queens problem  
Kings problem  
Rooks problem 
● The eight queens puzzle is an example of the more general n queens problem of placing n nonattacking queens on an n×n chessboard, for which solutions exist for all natural numbers n with the exception of n=2 and n=3
Question 215 
Insertion sort  
Heap sort  
Merge sort  
Bubble sort 
1. Mergesort
2. Quicksort.
Question 216 
For the recurrence relation T(n) = 2 + T(n  1), where T(0)=2, T(n)=?
n^{2}
 
2n+2  
log(n)  
2^{n} 
T(0) = 2
T(n) = T(n − 1) + 2
= [T(n − 2) + 2] + 2 = T(n − 2) + 4
= T(n − 3) + 6
⋮
= T(n − k) + 2k
Note: Let k = n
Then T(n) = T(0) + 2n = 2n + 2
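The closed form 2n + 2 can be checked directly against the recurrence (a sketch):

```python
def t(n):
    # T(n) = 2 + T(n - 1), T(0) = 2
    return 2 if n == 0 else 2 + t(n - 1)

# closed form: T(n) = 2n + 2
assert all(t(n) == 2 * n + 2 for n in range(20))
```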
Question 217 
Travelling salesperson problem belongs to which category of problems?
Satisfiable
 
Non Solvable
 
Decision  
Optimization

Question 218 
O(nlogn)  
O(n)  
O(logn)  
O(n*n) 
→ So, the time complexity is O(n)
Question 219 
2  
10  
8  
5 
Step2: The maximum number of comparisons to locate a particular record among 32 sorted records through the binary search method will be log_{2} 32 = 5
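The count of 5 can be confirmed by simulating the halvings (a sketch; the function name is ours):

```python
def worst_case_halvings(n):
    # each comparison halves the remaining search range
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

print(worst_case_halvings(32))  # → 5
```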
Question 220 
floor ((i+1)/2)  
ceiling ((i+1)/2)  
floor (i/2)  
ceiling (i/2) 
With 1-based indexing:
Left child of node i is at index: 2i
Right child of node i is at index: 2i+1
Parent of node i is at index: floor(i/2)
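These 1-based index formulas can be checked mechanically; a minimal sketch:

```python
def parent(i):
    # 1-based heap indexing, matching the child formulas above
    return i // 2

def left(i):
    return 2 * i

def right(i):
    return 2 * i + 1

# The parent formula inverts both child formulas for every node.
assert all(parent(left(i)) == i and parent(right(i)) == i
           for i in range(1, 1000))
print(parent(5), left(5), right(5))  # 2 10 11
```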
Question 221 
3  
4  
5  
6 
Height of the binary search tree = 3
Question 222 
Every minimum spanning tree of G must contain E_{ min}.  
If E _{max} is in minimum spanning tree, then its removal must disconnect G.  
No minimum spanning tree contains E_{ max}.  
G has a unique minimum spanning tree. 
Minimum Spanning Tree:
Question 223 
O(n log n)  
O(n ^{2} log n)  
O(n ^{2} + log n)  
O(n ^{3} ) 
1. Each string has length n, so a single comparison of two strings takes O(n) time.
2. At one level of the bottom-up merge, every element has to be compared with another element, giving O(n/2) comparisons, each costing O(n): one level performs O(n^{2}) comparison work.
3. Copying the strings from one level to the next also takes O(n^{2}).
4. So each level costs O(n^{2}) + O(n^{2}) = O(n^{2}).
5. There are O(log n) levels in all, so the total is (log n)·O(n^{2}) = O(n^{2} log n).
Question 224 
Stage (i) 12,3,40,2
Stage (ii) 12,3,2,40
Stage (iii) 3,12,2,40
Stage (iv) 3,2,12,40
Stage (v) 2,3,12,40
The array has been sorted using
Insertion sort  
Selection sort  
Quick sort  
Bubble sort 
→ The stages above show the swaps performed during bubble sort.
→ In bubble sort, after each pass the largest remaining element reaches its final position at the end.
Question 225 
2*log_{2} n  
log_{2} 2n  
log_{2} (2n+1)  
2*log_{2} n+1 
Question 226 
3, 4, 5, 7, 9, 14, 20, 18, 17, 16, 15  
20, 18, 17, 16, 15, 14, 3, 4, 5, 7, 9  
20, 18, 17, 16, 15, 14, 9, 7, 5, 4, 3  
3, 4, 5, 7, 9, 14, 15, 16, 17, 18, 20 
Question 227 
↔~∧pq∨ ~ ~ pq, pq∧~p~q~∨↔  
↔~∧pq∨ ~ p~q, pq∧~p~q~∨↔  
↔~∧pq∨ ~ ~ pq, pq∧~p~~q∨↔  
↔~∧pq∨ ~ p~ q, pq∧~p~ ~q∨↔ 
(~(p ∧ q))↔(~ p ∨ ~ q)
It is clearly specifying that
↔ is a root
(~(p ∧ q)) is left subtree
(~p ∨ ~q) is right subtree.
Step2: Finally the tree is looks like
Step3: The prefix form is obtained by traversing Root, Left, Right: ↔~∧pq∨~p~q
Step4: The postfix form is obtained by traversing Left, Right, Root: pq∧~p~q~∨↔
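Both traversals can be reproduced on a small expression tree; an illustrative sketch (the Node class is my own, not from the question):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def preorder(n):
    # Root, Left, Right -> prefix form
    return "" if n is None else n.value + preorder(n.left) + preorder(n.right)

def postorder(n):
    # Left, Right, Root -> postfix form
    return "" if n is None else postorder(n.left) + postorder(n.right) + n.value

# Tree for (~(p ∧ q)) ↔ (~p ∨ ~q)
tree = Node("↔",
            Node("~", Node("∧", Node("p"), Node("q"))),
            Node("∨", Node("~", Node("p")), Node("~", Node("q"))))

print(preorder(tree))   # ↔~∧pq∨~p~q
print(postorder(tree))  # pq∧~p~q~∨↔
```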
Question 228 
O(lg n)  
O(n lg n)  
O(n)  
O(n ^{2} ) 
→ Traversing all the nodes of a binary search tree with n nodes and printing them in order is done with an inorder traversal.
→ An inorder traversal prints all elements in sorted order, and it visits each node exactly once, so its worst-case time complexity is O(n).
Possibility 2:
Even if the tree happens to be left-skewed or right-skewed, the traversal still visits every node once, so the worst-case time complexity remains O(n).
Question 229 
Both time and space complexities are better in recursive than in nonrecursive program.  
Both time and space complexities are better in nonrecursive than in recursive program  
Time complexity is better in recursive version but space complexity is better in nonrecursive version of the program.  
Space complexity is better in recursive version but time complexity is better in nonrecursive version of the program. 
→ Recursion uses more memory, but is sometimes clearer and more readable. Using loops increases performance, but recursion can sometimes be better for the programmer (and their productivity). It depends on what you want to implement and what is more important to you (readability, performance, ...).
Example: Factorial of a number.
Question 230 
It shows which of the following depth first forest?
{a, b, e} {c, d, f, g, h}  
{a, b, e} {c, d, h} {f, g}  
{a, b, e} {f, g} {c, d} {h}  
{a, b, c, d} {e, f, g} {h} 
Based on the definition of a DFS forest, option A is correct.
Question 231 
O (log _{2} n)  
O (n log _{2} n)  
O (1)  
O (n) 
1. Stable: Equal keys aren't reordered.
2. Operates in place, requiring O(1) extra space.
3. Worst case O(n^{2}) key comparisons.
4. Worst case O(n^{2}) swaps.
5. Adaptive: Speeds up to O(n) when data is nearly sorted or when there are few unique keys.
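A minimal insertion sort sketch illustrating these properties (stable, in-place with O(1) extra space, and adaptive on nearly sorted input):

```python
def insertion_sort(a):
    """In-place, stable insertion sort using O(1) extra space.

    Nearly sorted input triggers few shifts, giving the adaptive O(n) behaviour."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]   # shift larger elements one slot to the right
            j -= 1
        a[j + 1] = key
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```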
Question 232 
According to Kruskal's algorithm, first we have to sort the edges by weight.
The edge weights in sorted order are: 10, 12, 14, 16, 18, 22, 24, 25, 28
Question 233 
The item to be searched is in somewhere middle of the array  
The item to be searched is not in the array  
The item to be searched is in the last of the array  
The item to be searched is either in the last or not in the array 
→ A linear search runs in at worst linear time and makes at most n comparisons, where n is the length of the list. If each element is equally likely to be searched, then linear search has an average case of n/2 comparisons, but the average case can be affected if the search probabilities for each element vary.
→ Linear search is rarely practical because other search algorithms and schemes, such as the binary search algorithm and hash tables, allow significantly faster searching for all but short lists.
Question 234 
Counting microseconds  
Counting number of key operations  
Counting number of statements  
Counting kilobytes of algorithm 
Question 235 
Selection sort  
Bubble sort  
Radix sort  
Insertion sort 
Phase 1:
1. We use n bins, one for each of the integers 0, 1, ..., n-1. We place each integer i on the list to be sorted into the bin numbered (i mod n). Each bin will then contain a list of integers leaving the same remainder when divided by n.
At the end, we concatenate the bins in order to obtain a list L.
Phase 2:
1. The integers on the list L are redistributed into bins, but using the bin selection
function: ⌊i/n⌋
2. Now append integers to the ends of lists. Finally, concatenate the lists to get the final sorted sequence.
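The two phases above can be sketched as follows, assuming the integers lie in the range [0, n²):

```python
def two_phase_radix_sort(items, n):
    """Sorts integers in [0, n*n) using the two bin-selection functions
    described above: (i mod n) in phase 1, then (i // n) in phase 2."""
    # Phase 1: distribute by least significant "digit" (i mod n), then concatenate.
    bins = [[] for _ in range(n)]
    for i in items:
        bins[i % n].append(i)
    l = [i for b in bins for i in b]
    # Phase 2: redistribute by most significant "digit" (i // n),
    # appending to the ends of the lists, then concatenate.
    bins = [[] for _ in range(n)]
    for i in l:
        bins[i // n].append(i)
    return [i for b in bins for i in b]

print(two_phase_radix_sort([15, 3, 9, 0, 12, 7], 4))  # [0, 3, 7, 9, 12, 15]
```

Appending preserves the phase-1 order within each phase-2 bin, which is exactly the stability that makes the two-pass radix sort correct.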
Question 236 
O(1)  
O(Log n)  
O(n)  
O(n ^{2} ) 
→ The heapify operation does not take O(log n) space; it takes O(log n) time in the worst case. Here we do not implement heapify through recursive calls: we use an iterative method, so no auxiliary space is required and the sort works entirely within the input array.
So, we can do heap sort in O(1) space complexity.
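An illustrative in-place heap sort with an iterative sift-down (no recursion, no auxiliary array):

```python
def heapsort(a):
    """In-place heap sort; the iterative sift-down keeps extra space at O(1)."""
    n = len(a)

    def sift_down(root, end):
        while 2 * root + 1 < end:              # while root has at least one child
            child = 2 * root + 1
            if child + 1 < end and a[child] < a[child + 1]:
                child += 1                      # pick the larger child
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return

    for start in range(n // 2 - 1, -1, -1):     # build the max-heap
        sift_down(start, n)
    for end in range(n - 1, 0, -1):             # repeatedly move the max to the end
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)
    return a

print(heapsort([12, 3, 40, 2, 7]))  # [2, 3, 7, 12, 40]
```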
Question 237 
Binary search  
Maximum of n number  
Quick sort  
Fibonacci search 
Question 238 
Bubble sort  
Selection sort  
Quick sort  
Insertion sort 
So, we can eliminate option B and option C because they give worst-case complexity O(n^{2}) when the array is already sorted.
Bubble sort and insertion sort both give best-case complexity O(n) on sorted input, but the question adds the constraint that "there are few random elements". So insertion sort is the more appropriate answer.
Question 239 
b= branch factor
d= depth of shallowest solution
M= Maximum depth of the search tree
I= depth limit
(a)(iii), (b)(ii), (c)(iv), (d)(i)  
(a)(ii), (b)(iii), (c)(iv), (d)(i)  
(a)(i), (b)(ii), (c)(iv), (d)(iii)  
(a)(i), (b)(iii), (c)(iv), (d)(ii) 
DFS → O(bm) worst case space complexity
Depth Limited Search → O(bl)
Iterative deepening Search → O(bd)
Note: Based upon BFS and DFS we can find the solution.
Question 240 
(a)(iv), (b)(iii), (c) (ii), (d)(i)  
(a)(i), (b)(iv), (c) (iii), (d)(ii)  
(a)(iv), (b)(i), (c) (ii), (d)(iii)  
(a)(i), (b)(ii), (c) (iii), (d)(iv) 
→ A* Search: Time complexity depends upon the quality of the heuristic.
→ Recursive Best-First Search: Suffers from excessive node generation. RBFS depends on how widely the promising nodes are separated in the search tree, and its behaviour is harder to anticipate.
→ Iterative-deepening A* Search: Avoids the substantial overhead associated with keeping a sorted queue of nodes.
Question 241 
11  
12  
10  
9 
Number of leaf nodes = sum of degrees of all internal nodes - number of internal nodes + 1
= (4*1 + 3*2 + 3*3) - 10 + 1
= 19 - 10 + 1 = 10
Formula: nd - n + 1, where n is the number of internal nodes and d is the degree of the tree.
Question 242 
Use Kruskal’s algorithm to find the minimum spanning tree of the graph. The weight of this minimum spanning tree is
13  
16  
17  
14 
The weight of this minimum spanning tree is 16.
Question 243 
O(n!) and O(n log n)  
O(n^{n}) and O(n log n)  
O(n!) and O(log n!)  
O(n^{n}) and O(log n!) 
→ Consider the logarithm of the factorial function, log n!.
log n! ≤ log n^{n} = n log n, and also log n! ≥ (n/2)·log(n/2),
so written in asymptotic order the value is the same: O(n log n).
Question 244 
n + ceil(lg n) - 2  
n - 1  
lg n  
3n/2 
This takes n-1 comparisons, and each element is compared at most log₂ n times. Finally, we get the largest element. This element, being the largest, won a comparison against the second largest element at some point. So keeping track of the items compared with the largest element gives us a list of log₂ n elements. In this list we find the largest, which will be the second largest of all.
So, number of comparisons = (n-1) + log₂(n) - 1
Example: 512 elements
In this case the number of comparisons will be (512-1) + log₂(512) - 1
= 511 + (9-1)
= 511 + 8
= 519
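This tournament method can be sketched directly, counting comparisons as we go (the bookkeeping of "beaten" lists is my own illustration; the input is assumed to have at least two elements):

```python
def second_largest(a):
    """Tournament method: (n - 1) + ceil(lg n) - 1 comparisons when n is a power of 2."""
    comparisons = 0
    # Each survivor carries the list of elements it has beaten so far.
    players = [(x, []) for x in a]
    while len(players) > 1:
        nxt = []
        for i in range(0, len(players) - 1, 2):
            (x, bx), (y, by) = players[i], players[i + 1]
            comparisons += 1
            if x > y:
                nxt.append((x, bx + [y]))
            else:
                nxt.append((y, by + [x]))
        if len(players) % 2 == 1:       # odd player advances without a comparison
            nxt.append(players[-1])
        players = nxt
    champion, beaten = players[0]
    # The runner-up lost directly to the champion; scan the champion's beaten list.
    second = beaten[0]
    for z in beaten[1:]:
        comparisons += 1
        if z > second:
            second = z
    return second, comparisons

value, count = second_largest(list(range(512)))
print(value, count)  # 510 519
```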
Question 245 
X = <0,1,2,1,3,0,1 >
Y = <1,3,2,0,1,0 >
The length of longest common subsequence between X and Y is
4  
2  
3  
5 
Common subsequences include ⟨1,3,0⟩, ⟨1,2,1,0⟩ and ⟨1,3,0,1⟩, and the length of the longest common subsequence is 4.
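The standard dynamic-programming table confirms this; a short sketch:

```python
def lcs_length(x, y):
    """Classic O(len(x) * len(y)) dynamic program for the LCS length."""
    m, n = len(x), len(y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

X = [0, 1, 2, 1, 3, 0, 1]
Y = [1, 3, 2, 0, 1, 0]
print(lcs_length(X, Y))  # 4
```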
Question 246 
(1) Input (x _{1} ,y_{ 1} ) and (x_{ 2} ,y_{ 2} )
(2) y=y_{ 1}
(3) d=f(x _{1} +1, y _{1} +1⁄2) // f is the implicit form of a line
(4) for x=x_{ 1} to x_{ 2}
(5) do
(6) plot(x,y)
(7) if(d<0)
(8) then
(9) y=y+1
(10) d=d+(y_{1} - y_{2}) + (x_{2} - x_{1})
(11) else
(12) d=d+(y_{1} - y_{2})
(13) end
(14) end
Which statements are true ?
P: For a line with slope m>1, we should change the outer loop in line (4) to be over y.
Q: Lines (10) and (12) update the decision variable d through an incremental evaluation of the line equation f.
R: The algorithm fails if d is ever 0.
Q and R only  
P only  
P and Q only  
P,Q and R 
P: For a line with slope m > 1, several y values correspond to each x, so the roles of x and y must be interchanged and the outer loop in line (4) should run over y. P is true.
Q: Lines (10) and (12) update the decision variable d through an incremental evaluation of the line equation f, so Q is true.
R: If d = 0, the line passes exactly through the midpoint and either pixel may be plotted; the algorithm does not fail, so R is false.
Question 247 
splitting  
completion  
beginning  
joining 
→ The critical path method (CPM), or critical path analysis (CPA), is an algorithm for scheduling a set of project activities. It is commonly used in conjunction with the program evaluation and review technique (PERT).
→ A critical path is determined by identifying the longest stretch of dependent activities and measuring the time required to complete them from start to finish.
→ Event: An event represents a point in time signifying the completion of some activities and the beginning of new ones. This is usually represented by a circle in a network which is also called a node or connector. The events are classified into three categories
1. Merge event – When more than one activity comes and joins an event such an event is known as merge event.
2. Burst event – When more than one activity leaves an event such an event is known as burst event.
3. Merge and Burst event – An activity may be merge and burst event at the same time as with respect to some activities it can be a merge event and with respect to some other activities it may be a burst event.
Question 248 
O(lg (n) lg(n))  
O(lg (n))  
O(n lg (n))  
O(lg (n) lg(lg(n))) 
T(n) = log(n) · log log(n)
Question 249 
The array elements form a heap.  
Elements in each half of the array are sorted amongst themselves.  
Elements in the first half of the array are less than or equal to elements in second half of the array.  
All of the above 
→Divide the unsorted list into n sublists, each containing one element (a list of one element is considered sorted).
→Repeatedly merge sublists to produce new sorted sublists until there is only one sublist remaining. This will be the sorted list.
As per the merge sort, The elements in each half of the array are sorted amongst themselves.
Question 250 
min(h(n), g(n))  
max(h(n), g(n))  
h(n) + g(n)  
h(n) * g(n) 
Take constants c_{1}, c_{2}, n_{1}, n_{2} such that
T(n) ≤ c_{1}·h(n), for all n ≥ n_{1}.
T(n) ≤ c_{2}·g(n), for all n ≥ n_{2}.
Let N = max(n_{1}, n_{2}) and C = max(c_{1}, c_{2}).
So,
T(n) ≤ C·h(n) for all n ≥ N
T(n) ≤ C·g(n) for all n ≥ N
Adding the two inequalities, T(n) ≤ C/2·(h(n)+g(n)).
Without loss of generality, let max(h(n), g(n)) = h(n).
So, T(n) ≤ C/2·(h(n)+h(n)) = C·h(n).
So the complexity of T(n) is O(max(h(n), g(n))).
Question 251 
Search  
Search and Insert  
Search and Delete  
Search, Insert and Delete 
Search (or) Find: For the Search (or) Find operation, we perform a normal BST find followed by a splay operation on the node found (or the leaf node last encountered, if the key was not found). We can charge the cost of going down the tree to the splay operation. Thus the amortized cost of find is O(log n).
Insert/delete: The amortized cost of the splay operation is also O(log n), and thus the amortized cost of insert/delete is O(log n).
Question 252 
0.5 β lg n  
0.5 (1 – β) lg n  
– (lg n)/(lg β)  
– (lg n)/lg (1 – β) 
Question 253 
θ(lg m)  
θ(m)  
θ(mlg m)  
θ(lglg m) 
a = 1, b = 4/3, k = 0, p = 0
Here, a = b^{k}
So, T(m) = θ(m^{log a/log b} · log^{p+1} m)
T(m) = θ(log m)
Question 254 
Prim’s algorithm  
Floyd  Warshall algorithm  
Johnson’s algorithm  
Dijkstra’s algorithm 
Question 255 
2.4  
1.87  
3.0  
2.15 
Step 1: Select two characters with smallest probabilities and merge them.
Step 2: Select two characters from Step 1 with smallest probabilities and merge them.
Step 3: Select two characters (from Step 2) with smallest probabilities and merge them.
Step 4: Merge the remaining two probabilities.
A = 1000 (4bits)
E = 1001 (4bits)
D = 101 (3bits)
C = 11 (2bits)
B = 0 (1bit)
Average length = ((0.08×4)+(0.12×4)+(0.15×3)+(0.25×2)+(0.40×1)) / (0.08+0.12+0.15+0.25+0.40)
=2.15/1.0
= 2 .15
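The average length can be checked with a heap, using the fact that the weighted average code length equals the sum of all the merge costs (each merge adds one bit to every symbol beneath it). An illustrative sketch:

```python
import heapq

def huffman_average_length(probs):
    """Average Huffman code length = sum of the combined probabilities
    produced by each merge, so the tree never needs to be built explicitly."""
    heap = list(probs)
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        total += a + b          # every symbol under this merge gains one bit
        heapq.heappush(heap, a + b)
    return total

print(round(huffman_average_length([0.08, 0.12, 0.15, 0.25, 0.40]), 2))  # 2.15
```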
Question 256 
(a)(iv),(b)(i), (c)(iii), (d)(ii)  
(a)(iv),(b)(iii), (c)(i), (d)(ii)  
(a)(iii),(b)(iv), (c)(ii), (d)(i)  
(a)(iv),(b)(iii), (c)(ii), (d)(i) 
Single-source shortest paths use the Greedy approach
Strassen's matrix multiplication uses the Divide and Conquer technique
Optimal binary search trees use Dynamic Programming
Question 257 
45  
72  
360  
450 
Each octal number has → 5 digits
The octal number system base (radix) = 8
The maximum number of comparisons = (number of items) × (number of digits) × (radix)
= 9 × 5 × 8
= 360
Question 258 
O(x log x)  
O(x^{2})  
O(x^{3})  
O(x^{2} log x) 
Question 259 
0(log n)  
0(n log n)  
0(n)  
None of the above 
Question 260 
O(n^{5}+2n^{2}) & O(n^{3}*n!)  
O(n^{5}) & O(n^{3}*2^{n})  
O(n^{5}) & O(n^{3}* n!)  
O(n^{5}+2n^{2}) & O(n^{3}*2^{n}) 
Step1: First function (nlogn+n^{2})(n^{3}+2)
Here, it is the product of two polynomials.
(nlogn+n^{2}) → The dominant term is n^{2}. We can write asymptotically O(n^{2})
(n^{3}+2) → The dominant term is n^{3}. We can write asymptotically O(n^{3})
= O(n^{2})*O(n^{3})
= O(n^{5})
Step2: Second function (n!+2^{n})(n^{3}+log(n^{2}+1))
Here, it is again the product of two factors.
(n!+2^{n}) → The dominant term is n!. We can write asymptotically O(n!)
(n^{3}+log(n^{2}+1)) → The dominant term is n^{3}. We can write asymptotically O(n^{3})
= O(n!)*O(n^{3})
= O(n^{3}* n!)
Question 261 
Complete graph  
Hamiltonian graph  
Euler graph  
None of the above 
→ A Hamiltonian graph has more than one spanning tree.
→ An Euler graph has more than one spanning tree.
Question 262 
Carnot function  
Riemann function  
Bounded function  
Ackermann function 
Question 263 
125  
64  
36  
16 
n = 5
By Cayley's formula, the number of spanning trees of the complete graph on n vertices is n^{n-2}
= 5^{5-2}
= 5^{3}
= 125
Question 264 
500  
1000  
5000  
20000 
Question 265 
O(nm log m)  
O(mn log n)  
O(m + n)  
O(mn) 
Another strategy to achieve complexity O(mn log n) is to build a min-heap of size n, consisting of the first element in each of the n lists. We then extract the minimum element from the heap to the output and replace it in the heap with the next element from the list from which the minimum came. Thus it takes time O(log n) to generate each element of the output, and there are O(mn) output elements in all, so the overall time is O(mn log n).
On the other hand, if we can apply min() to n elements at a time, we can merge all n lists in parallel. Let the n lists be l_{1}, l_{2}, ..., l_{n}. We maintain indices i_{1}, i_{2}, ..., i_{n} into these n lists, initially set to 1. At each stage we add j = min(l_{1}[i_{1}], l_{2}[i_{2}], ..., l_{n}[i_{n}]) to the sorted output list and increment the corresponding index. Overall, we add mn elements to the sorted output and each element is generated in constant time, so the time taken is O(mn).
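The heap-based strategy described above can be sketched as follows (an illustration using 0-based positions):

```python
import heapq

def merge_sorted_lists(lists):
    """Min-heap of one (value, list index, position) entry per list;
    each output element costs O(log n), giving O(mn log n) overall."""
    heap = [(lst[0], k, 0) for k, lst in enumerate(lists) if lst]
    heapq.heapify(heap)
    out = []
    while heap:
        value, k, pos = heapq.heappop(heap)
        out.append(value)
        if pos + 1 < len(lists[k]):
            # Replace the extracted minimum with the next element of list k.
            heapq.heappush(heap, (lists[k][pos + 1], k, pos + 1))
    return out

print(merge_sorted_lists([[1, 4, 7], [2, 5, 8], [3, 6, 9]]))
```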
Question 266 
Descriptive Explanation 
First of all, the question is interesting only if the lists have duplicate elements. Otherwise, we just merge the two lists as usual and after k steps we have the answer.
Let us first solve a simpler problem and find the k th smallest number in a single sorted list of length n. The first number in the list is clearly the smallest number. Let us call this number x_{ 1} . The next number must be at least x _{1} +1, so we search for x _{1} +1 in the list using binary search. (Recall that binary search is an example of divide and conquer.) Either we find the number, in which case the second smallest number x _{2} is x_{ 1} +1, or we fail to find it, in which case we will be at a boundary between the last occurrence of x _{1} and the first occurrence of the next smaller number, which is x _{2} . We now search for x_{ 2} +1 and discover x _{3} . Repeat this k−1 times to find x _{k} . Each binary search takes time log n, so overall this procedure takes time k log n, which is O(log n) if we treat k as a constant.
If we have two lists, we can find the first k numbers in the first list, which takes time O(log m) and then find the first k numbers in the second list, which takes time O(log n) and then merge these in time O(k) (which is a constant!) to find the k th smallest number overall.
In fact, we can reduce the number of binary searches to k and avoid the merge step. Maintain indices i _{1} and i_{ 2} for the two lists, pointing initially to the first element in lists 1 and 2, respectively. At each stage, the smaller of the numbers pointed to by i _{1} and i_{ 2} is the next number to be enumerated. We then advance i _{1} or i_{ 2} , as the case may be, using binary search as described above. After k such steps, we would have found the k th smallest number overall. Some steps may involve a binary search in list 1, which takes time O(log m) and others may involve a binary search in list 2, which takes time O(log n), so each step is bounded by max(O(log m), O(log n)). This gives an overall complexity of max(O(log m), O(log n)), which is equivalent to O(log m + log n).
Question 267 
Descriptive Explanation 
(ii) In any dag, there must be some vertex with no incoming edge (indegree 0). Any vertex of indegree 0 represents a task that has no pending dependencies and can hence be immediately completed. Once it is completed, we can remove it from the graph and operate on the tasks that remain. The algorithm is thus the following.
• Initialize a counter c to 0.
• While the dag is not empty
– Remove all vertices of indegree 0
– Recompute indegrees of remaining vertices
– Increment c
The final value of c is the answer we seek.
The complexity of this algorithm depends on how we represent the graph. For G = (V, E) let |V| = n and |E| = m. If we use an adjacency matrix, for each vertex we remove, we have to scan a row of the matrix to determine which indegrees to decrement, so it will take time O(n^{2}). If we use adjacency lists, for each vertex we delete, we can scan its list of outgoing edges and directly decrement indegrees for its outgoing neighbours. Across the n vertices we delete, we scan each of the m edges once, so the overall time is O(n + m).
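The round-counting algorithm above, with the adjacency-list representation, might look like this sketch (vertex labels 0..n-1 and the edge-pair format are my own assumptions):

```python
def minimum_rounds(n, edges):
    """Counts how many rounds of 'remove all indegree-0 vertices' empty the dag.
    With adjacency lists every edge is touched once overall: O(n + m)."""
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    frontier = [v for v in range(n) if indeg[v] == 0]
    rounds = 0
    while frontier:
        rounds += 1
        nxt = []
        for u in frontier:                 # remove this round's vertices
            for v in adj[u]:
                indeg[v] -= 1
                if indeg[v] == 0:          # v becomes removable next round
                    nxt.append(v)
        frontier = nxt
    return rounds

# A chain 0 -> 1 -> 2 needs 3 rounds; the isolated vertex 3 changes nothing.
print(minimum_rounds(4, [(0, 1), (1, 2)]))  # 3
```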
Question 268 
Descriptive Explanation 
(ii) Let Best[i] denote the maximum number of matches you can watch in the set {i, i + 1, . . . , n}. The overall answer we are looking for is Best[1].
Consider the set of matches {i, i + 1, . . . , n}. We have two options initially, and we must pick the better of the two.
• Watch match i. In this case, we get to watch one match immediately and as many matches as we can manage if we start with the match Next[i], so the overall number of matches is 1 + Best[Next[i]].
• Skip match i. In this case, the number of matches we get to watch is Best[i + 1].
Clearly Best[n] = 1 since there is no advantage in skipping the last match.
We can express this as a recurrence for Best[i] as follows, where Best[n] is the base case.
Best[i] = max(1 + Best[Next[i]], Best[i + 1])
Best[n] = 1
We can now start with the base case Best[n] = 1 and work backwards to compute Best[n − 1], Best[n − 2], . . . , Best[1] using dynamic programming, in time O(n).
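The recurrence above can be sketched with 0-based indexing, where nxt[i] is assumed to be the index of the first match that starts after match i ends (with nxt[i] = n when there is no such match); best[n] = 0 serves as the sentinel:

```python
def max_matches(n, nxt):
    """best[i] = max(1 + best[nxt[i]], best[i + 1]); best[n] = 0 is a sentinel.

    nxt[i] is assumed to index the first match watchable after match i,
    with nxt[i] = n meaning none remain."""
    best = [0] * (n + 1)
    for i in range(n - 1, -1, -1):
        best[i] = max(1 + best[nxt[i]],   # watch match i
                      best[i + 1])        # skip match i
    return best[0]

# If no two matches overlap, we can watch all of them.
print(max_matches(4, [1, 2, 3, 4]))  # 4
# If watching a match always makes us miss the next one, we watch every other match.
print(max_matches(4, [2, 3, 4, 4]))  # 2
```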
Question 269 
If the best algorithm for B takes exponential time, there is no polynomial time algorithm for A.  
If the best algorithm for A takes exponential time, there is no polynomial time algorithm for B.  
If we have a polynomial time algorithm for A, we must also have a polynomial time algorithm for B.  
If we don’t know whether there is a polynomial time algorithm for B, there cannot be a polynomial time algorithm for A. 
Question 270 
Descriptive Explanation 
Algorithm 1: Algorithm to check if a non-increasing sequence is graphical
function IsGraphical(s)
    Let the input sequence be s = (d_{1}, d_{2}, ..., d_{n})
    if d_{1} = d_{2} = ... = d_{n} = 0 then
        return true
    else if n = 1 and d_{1} > 0 then
        return false
    else
        Form the sequence s' = (d_{2} - 1, d_{3} - 1, ..., d_{d_{1}+1} - 1, d_{d_{1}+2}, ..., d_{n})
        Let s'' be s' sorted in descending order
        return IsGraphical(s'')
    end if
end function
There are n recursive calls to the function IsGraphical, and the majority of the time in each call is spent sorting the input sequence. Thus the algorithm takes time O(n^{2} log n).
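A runnable version of this Havel–Hakimi style test might look like the following sketch (the example degree sequences are my own):

```python
def is_graphical(seq):
    """Havel-Hakimi test: repeatedly remove the largest degree d and
    subtract 1 from the next d degrees, re-sorting each time.

    Re-sorting on every step gives the O(n^2 log n) bound discussed above."""
    s = sorted(seq, reverse=True)
    while s:
        d = s.pop(0)            # largest remaining degree
        if d == 0:
            return True         # all remaining degrees are 0
        if d > len(s):
            return False        # not enough vertices left to connect to
        for i in range(d):
            s[i] -= 1
            if s[i] < 0:
                return False
        s.sort(reverse=True)
    return True

print(is_graphical([3, 3, 2, 2, 2]))  # True
print(is_graphical([3, 1, 1]))        # False
```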
Question 271 
Descriptive Explanation 
The language of the above grammar is the set of strings with an equal number of a's and b's.
A sequence of acid and base molecules is just a string over {a, b}. To check whether a single virus could have produced the sequence, you can use the CYK algorithm to check membership in the above context-free grammar.
Question 272 
O (n)  
O (n log n)  
O (e log n)  
O (e) 
1. A = ∅
2. for each vertex v ∈ G.V
3. MAKE-SET(v)
4. sort the edges of G.E into nondecreasing order by weight w
5. for each edge (u,v) ∈ G.E, taken in nondecreasing order by weight
6. if FIND-SET(u) ≠ FIND-SET(v)
7. A = A ∪ {(u,v)}
8. UNION(u,v)
9. return A
Analysis:
The running time of Kruskal’s algorithm for a graph G=(V,E) depends on how we implement the disjointset data structure.
→ Initializing the set A in line 1 takes O(1) time, and the time to sort the edges in line 4 is O(E lg E). (We will account for the cost of the |V| MAKE-SET operations in the for loop of lines 2–3 in a moment.)
→ The for loop of lines 5–8 performs O(E) FIND-SET and UNION operations on the disjoint-set forest.
→ Along with the |V| MAKE-SET operations, these take a total of O((V+E)·α(V)) time, where α is a very slowly growing function.
→ Because we assume that G is connected, we have E ≥ V-1, and so the disjoint-set operations take O(E·α(V)) time.
→ Moreover, since α(V) = O(lg V) = O(lg E), the total running time of Kruskal's algorithm is O(E lg E).
→ Observing that E < V^{2}, we have lg E = O(lg V), and so we can restate the running time of Kruskal's algorithm as O(E lg V).
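Kruskal's algorithm with a simple disjoint-set (union-find) structure can be sketched as follows; the edge format (weight, u, v) and the example graph are illustrative assumptions:

```python
def kruskal(n, edges):
    """Kruskal's algorithm; edges are (weight, u, v) triples.

    Sorting dominates the running time: O(E lg E) = O(E lg V)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):          # nondecreasing order by weight
        ru, rv = find(u), find(v)
        if ru != rv:                        # u and v lie in different trees
            parent[ru] = rv                 # UNION
            mst.append((u, v))
            total += w
    return total, mst

# Triangle 0-1-2 plus a pendant vertex 3.
total, mst = kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)])
print(total)  # 7
```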
Question 273 
Insertion in a sorted array takes constant time.  
Insertion in a sorted linear linked list takes constant time.  
Searching for a key in a sorted array can be done in O(log n) time.  
Searching for a key in a sorted linear linked list can be done in O(log n) time. 
Because the elements of the array are in sorted order, we can use binary search, which takes only O(log n) time in the worst case.
Question 274 
[(9, 0, 9), (7, 1, 8), (6, 1, 4), (0, 2, 5), (6, 5, 9), (3, 5, 7)]  
[(0, 2, 5), (3, 5, 7), (6, 1, 4), (6, 5, 9), (7, 1, 8), (9, 0, 9)]  
[(9, 0, 9), (7, 1, 8), (6, 1, 4), (0, 2, 5), (3, 5, 7), (6, 5, 9)]  
[(9, 0, 9), (6, 1, 4), (7, 1, 8), (0, 2, 5), (3, 5, 7), (6, 5, 9)] 
Question 275 
If the best algorithm for B takes exponential time, there is no polynomial time algorithm for A  
If the best algorithm for A takes exponential time, there is no polynomial time algorithm for B.  
If we have a polynomial time algorithm for A, we must also have a polynomial time algorithm for B.  
If we don’t know whether there is a polynomial time algorithm for B, there cannot be a polynomial time algorithm for A. 
Question 276 
Descriptive Explanation 
Question 277 