1 Algorithms (328)
Give a method for finding and printing the cycle formed if the edge $e = (u,v)$ of $G$ not in $T$ (i.e., $e \in G - T$) is now added to $T$.
Time taken by your method must be proportional to the length of the cycle.
Describe the algorithm in a PASCAL $(C)$ – like language. Assume that the variables have been suitably declared.
An array $A$ contains $n$ integers in locations $A[0], A[1], \dots, A[n-1]$. It is required to shift the elements of the array cyclically to the left by $K$ places, where $1\leq K \leq n-1$. An incomplete algorithm for doing this in linear time, without using another array is given below. Complete the algorithm by filling in the blanks. Assume all variables are suitably declared.
min := n; i := 0;
while _____ do
begin
    temp := A[i]; j := i;
    while ____ do
    begin
        A[j] := ____;
        j := (j+K) mod n;
        if j < min then min := j;
    end;
    A[(n+i-K) mod n] := ____;
    i := ______;
end;
An element in an array $X$ is called a leader if it is greater than all elements to the right of it in $X$. The best algorithm to find all leaders in an array
 solves it in linear time using a left to right pass of the array
 solves it in linear time using a right to left pass of the array
 solves it using divide and conquer in time $\Theta (n\log n)$
 solves it in time $\Theta( n^2)$
Given two arrays of numbers $a_{1},...,a_{n}$ and $b_{1},...,b_{n}$ where each number is $0$ or $1$, the fastest algorithm to find the largest span $(i, j)$ such that $ a_{i}+a_{i+1}+\dots+a_{j}=b_{i}+b_{i+1}+\dots+b_{j}$, or report that there is no such span,
 Takes $ O(3^{n})$ and $\Omega(2^{n})$ time if hashing is permitted
 Takes $ O(n^{3})$ and $\Omega(n^{2.5})$ time in the key comparison mode
 Takes $\Theta (n)$ time and space
 Takes $O(\sqrt n)$ time only if the sum of the $2n$ elements is an even number
Answer: ___________
You are given ten rings numbered from $1$ to $10$, and three pegs labeled $A$, $B$, and $C$. Initially all the rings are on peg $A$, arranged from top to bottom in ascending order of their numbers. The goal is to move all the rings to peg $B$ in the minimum number of moves obeying the following constraints:
 In one move, only one ring can be moved.
 A ring can only be moved from the top of its peg to the top of a new peg.
 At no point can a ring be placed on top of another ring with a lower number.
How many moves are required?
Asha and Lata play a game in which Lata first thinks of a natural number between $1$ and $1000$. Asha must find out that number by asking Lata questions, but Lata can only reply by saying “Yes” or “No”. Assume that Lata always tells the truth. What is the least number of questions that Asha needs to ask within which she can always find out the number Lata has thought of?
Answers: Algorithm Design
Union-Find algorithm can be used to find the cycle.
Ref: http://www.geeksforgeeks.org/unionfind/
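An alternative that meets the stated time bound: if $T$ is kept rooted with parent pointers and depths, the cycle is just the tree path between $u$ and $v$ plus the new edge $(u,v)$, and it can be printed by climbing from both endpoints toward their lowest common ancestor. A minimal C sketch, assuming precomputed parent[] and depth[] arrays (the names and the MAXN bound are illustrative):

#include <stdio.h>

#define MAXN 1000   /* hypothetical upper bound on the number of vertices */

/* Sketch: prints the cycle created by adding edge (u,v) to the tree T,
   assuming parent[] and depth[] of the rooted tree are precomputed. */
void print_cycle(int u, int v, const int parent[], const int depth[])
{
    int up[MAXN], vp[MAXN];        /* vertices on the u-side and v-side paths */
    int nu = 0, nv = 0, i;
    int x = u, y = v;

    while (depth[x] > depth[y]) { up[nu++] = x; x = parent[x]; }
    while (depth[y] > depth[x]) { vp[nv++] = y; y = parent[y]; }
    while (x != y) {               /* climb in lock-step until the LCA */
        up[nu++] = x; x = parent[x];
        vp[nv++] = y; y = parent[y];
    }
    for (i = 0; i < nu; i++) printf("%d ", up[i]);      /* u ... towards LCA */
    printf("%d ", x);                                   /* the LCA itself */
    for (i = nv - 1; i >= 0; i--) printf("%d ", vp[i]); /* ... down to v */
    printf("%d\n", u);             /* close the cycle via the new edge (v,u) */
}

Every loop iteration visits one vertex of the cycle, so the running time is proportional to the cycle length.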
The inner loop left-rotates elements that are $K$ positions apart, i.e., A[i+K] goes to A[i], A[i+2K] goes to A[i+K], and so on.
This loop stops when the saved A[i] gets assigned to its final place, which is A[n+i-K]. (We do not need mod n here because i < K.)
If $n$ is a multiple of $K,$ the inner loop iterates $n/K$ times and outer loop iterates $K$ times.
Otherwise inner loop iterates more than $n/K$ times and correspondingly the outer loop gets adjusted using the min variable.
min := n; i := 0;
while i < min do
begin
    temp := A[i]; j := i;
    while (j != n+i-K) do    // we completed a cycle when this equals
    begin
        A[j] := A[(j+K) mod n];
        j := (j+K) mod n;
        if j < min then      // updates the iteration count for the i loop
            min := j;
    end;
    A[(n+i-K) mod n] := temp;   // we do not need mod n, because i <= K
    i := i+1;
end;
C code for the problem is as follows:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main()
{
    int *A;
    int n, i, j, K, min;
    time_t t;
    time(&t);    /* get current time */
    srand(t);    /* initialize random seed using current time */
    printf("Please enter n and K: ");
    scanf("%d%d", &n, &K);
    A = malloc(n * sizeof(int));
    for (i = 0; i < n; i++)
        A[i] = rand() % 1000;    /* fill with n random elements */
    printf("n elements are: \n");
    for (i = 0; i < n; i++)
        printf("%d ", A[i]);
    i = 0, min = n;
    while (i < min) {
        int temp = A[i];
        j = i;
        while (j != n+i-K) {    /* we completed a cycle when this equals */
            A[j] = A[(j+K) % n];
            j = (j+K) % n;
            if (j < min)
                min = j;
        }
        A[n+i-K] = temp;
        i++;
    }
    printf("\nThe numbers left rotated by %d places are: \n", K);
    for (i = 0; i < n; i++)
        printf("%d ", A[i]);
    free(A);
}
Option B) We can move from right to left, while keeping a note of the maximum element seen so far (let's call it current_max).
 Starting from the rightmost element, we initialize our current_max with it, since the rightmost element will always be a leader.
 Moving from right to left, if an element $x$ is greater than our current_max, then $x$ is also a leader. Add this element to the list of leaders (or simply print it). Set current_max to $x$ and carry on leftward.
Time Complexity would be $\Theta(n)$.
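A minimal C sketch of this right-to-left pass (the array and function names are illustrative):

#include <stdio.h>

/* Prints the leaders of X[0..n-1], scanning right to left. */
void print_leaders(const int X[], int n)
{
    int current_max = X[n-1];      /* the rightmost element is always a leader */
    printf("%d ", current_max);
    for (int i = n-2; i >= 0; i--) {
        if (X[i] > current_max) {  /* greater than every element to its right */
            current_max = X[i];
            printf("%d ", X[i]);
        }
    }
}

int main()
{
    int X[] = {16, 17, 4, 3, 5, 2};
    print_leaders(X, 6);           /* prints 2 5 17: the leaders, right to left */
}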
Answer is (C). The following algorithm would do.
Since the array is binary, the maximum prefix sum can be at most $n$, and so the difference of the two arrays' prefix sums can vary between $-n$ and $n$. We use array $start$ to keep the starting index of each possible sum (hence of size $2n+1$) and array $end$ to keep the ending index (these two arrays work like hash tables, and since we have only $2n+1$ possible keys, we can do perfect hashing). So, our required solution will be $\max(end[i] - start[i])$, provided both are assigned values.
The algorithm works as follows:
 Initialize the $diff$ array to contain the difference of the running sums of arrays $a$ and $b$, i.e., $diff[i] = \sum_{k=0}^{i} \left(a[k] - b[k]\right)$.
 Now $diff[i]$ can have values from $-n$ to $n$, which gives $2n+1$ possible values; the first occurrence of a $diff$ value marks the beginning of a span and the last occurrence marks the end. We use the $start$ and $end$ arrays for storing these two positions for the $2n+1$ possible values.
 Now, the largest value of $end[i]  start[i]$ for any $i$, will be the largest span and the start of it will be $start[i] + 1$, and end will be $end[i]$. If the span is starting from first position itself (arrays $a$ and $b$ have same first elements), then it will start from $start[i]$ itself.
#include <stdio.h>
#define size 100    /* assume n is less than 100 */

int main()
{
    int n, a[size], b[size];
    int start[2*size+1], end[2*size+1];
    int sum1 = 0, sum2 = 0, i;
    int diff[size];
    printf("Enter n: ");
    scanf("%d", &n);
    for (i = 0; i < n; i++) {
        printf("Enter a[%d]: ", i);
        scanf("%d", &a[i]);
    }
    for (i = 0; i < n; i++) {
        printf("Enter b[%d]: ", i);
        scanf("%d", &b[i]);
    }
    for (i = 0; i < n; i++) {
        if (a[i]) sum1++;
        if (b[i]) sum2++;
        diff[i] = sum1 - sum2;
    }
    for (i = 0; i <= 2*n; i++)
        start[i] = -1, end[i] = -1;
    start[n] = end[n] = 0;    /* initially the sum is 0 at the beginning of the
                                 array; the first n-1 elements of start and end
                                 are used if the sum of a till the ith element
                                 is less than the sum of b till the ith element */
    for (i = 0; i < n; i++) {
        if (start[diff[i] + n] == -1)    /* interested only in the first occurrence of diff[i] */
            start[diff[i] + n] = i;
        end[diff[i] + n] = i;            /* interested in the last occurrence of diff[i] */
    }
    int max = -1;
    int savei = -1;    /* savei stores the sum having the largest span */
    for (i = 0; i <= 2*n; i++) {
        if (start[i] > -1 && (end[i] - start[i] > max)) {
            max = end[i] - start[i];
            savei = i;
        }
    }
    if (savei >= 0) {
        printf("The largest span is from %d to %d\n",
               start[savei] + (savei != n), end[savei]);
        /* when sum zero has the largest span, the span starts from the first
           element itself; else, the span starts from the next element, from
           which the sum difference does not change */
    } else {
        printf("No span\n");
    }
}
Suppose $X$ is the number of coins of $11$ gm and $Y$ is the number of $10$ gm coins.
According to question,
$\qquad 11X+10Y= 323 \qquad \to (1)$
$ \qquad X+Y=31\qquad \qquad \to (2)$
Solving $(1),(2),$ we get $X=13$ and $Y=18.$ So, here number of coins of $11$ gm is $13$ and the only possible combination for $13$ coins is
 $1$ coin from bag $1$
 $4$ coins from bag $3$
 $8$ coins from bag $4$
So, product of label of bags will be $=1\times 3\times 4=12.$
Subsequence : A subsequence is a sequence that can be derived from another sequence by deleting some or no elements without changing the order of the remaining elements.
Subarray : A subarray of an $n$-element array is an array composed from a contiguous block of the original array's elements.
Question is asking for subsequence.
$S(i,j) = \sum_{k=i}^j A[k]$
What does $\sum_{k=i}^j $ mean? It means you start from index $i$ and go till index $j$ with unit increment each time.
Ultimately they are asking 'Maximum subarray sum'.
int maxsum = A[0], sum = A[0], n = 14;
for (int i = 1; i < n; i++) {
    if (A[i] > A[i-1]) {
        if (sum > 0)
            sum += A[i];
        else
            sum = A[i];
    } else
        sum = A[i];
    if (maxsum < sum)
        maxsum = sum;
}
sum $=6 + 3 - 1 - 2 + 13 + 4 - 9 - 1 + 4 + 12 = 29$
Therefore, the total number of moves (equivalently, recursive calls) is $2^{n} - 1 = 2^{10} - 1 = 1023$.
$\therefore$ Option $B$.
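This is the standard Tower of Hanoi recurrence: to move $n$ rings, first move the top $n-1$ rings to the spare peg, move ring $n$, then move the $n-1$ rings back on top of it:
$$M(n) = 2M(n-1) + 1,\quad M(1)=1 \;\implies\; M(n) = 2^n - 1,\quad M(10) = 1023.$$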
By using binary search, Asha needs only $ \left \lceil \log_2 1000 \right \rceil = 10$ questions.
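A minimal C sketch of the strategy (the secret value 707 is an arbitrary example): each question "Is the number greater than mid?" halves the candidate range, so $\lceil \log_2 1000 \rceil = 10$ questions always suffice.

#include <stdio.h>

int main()
{
    int secret = 707;                 /* the number Lata thought of (example) */
    int lo = 1, hi = 1000, questions = 0;
    while (lo < hi) {                 /* more than one candidate remains */
        int mid = lo + (hi - lo) / 2;
        questions++;                  /* ask: "Is your number greater than mid?" */
        if (secret > mid) lo = mid + 1;   /* "Yes" */
        else hi = mid;                    /* "No"  */
    }
    printf("Found %d in %d questions\n", lo, questions);
}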
$\left|\sum_{i \in A} a_{i} - \sum_{i \in B} a_{i}\right|$ is minimised
Consider a greedy algorithm for solving this problem. The numbers are ordered so that $a_{1} \geq a_{2} \geq \dots \geq a_{n},$ and at the $i^{th}$ step, $a_i$ is placed in the part whose sum is smaller at that step. Give an example with $n=5$ for which the solution produced by the greedy algorithm is not optimal.
The correct matching for the following pairs is$$\begin{array}{llll}\hline \text{A.} & \text{All pairs shortest path} & \text{1.} & \text{Greedy} \\\hline \text{B.} & \text{Quick Sort} & \text{2.}& \text{Depth-First Search} \\\hline \text{C.}& \text{Minimum weight spanning tree} & \text{3.} & \text{Dynamic Programming} \\\hline \text{D.} & \text{Connected Components} &\text{4.} & \text{Divide and Conquer} \\\hline \end{array}$$
Match the following:
$$\begin{array}{llll}\hline \text{P.} & \text{Prim's algorithm for minimum spanning tree} & \text{i.} & \text{Backtracking} \\\hline \text{Q.} & \text{Floyd-Warshall algorithm for all pairs shortest path} & \text{ii.}& \text{ Greedy method} \\\hline \text{R.} & \text{ Merge sort } & \text{iii.} & \text{ Dynamic programming} \\\hline \text{S.}& \text{Hamiltonian circuit } & \text{iv.} & \text{ Divide and conquer} \\\hline \end{array}$$
Given below are some algorithms, and some algorithm design paradigms.$$\begin{array}{llll}\hline \text{1.} & \text{Dijkstra's Shortest Path} & \text{i.} & \text{Divide and Conquer} \\\hline \text{2.} & \text{Floyd-Warshall algorithm to compute} & \text{ii.}& \text{Dynamic Programming} \\& \text{ all pairs shortest path} \\\hline \text{3.}& \text{Binary search on a sorted array} & \text{iii.} & \text{Greedy design} \\\hline \text{4.} & \text{Backtracking search on a graph} &\text{iv.} & \text{Depth-first search} \\\hline & &\text{v.} & \text{Breadth-first search} \\\hline \end{array}$$
Match the above algorithms on the left to the corresponding design paradigm they follow.
Consider the following table:$$\begin{array}{llll}\hline \rlap{\textbf {Algorithms}} & & \rlap{\textbf{Design Paradigms }} & \\\hline \text{P.} & \text{Kruskal} & \text{i.}& \text{Divide and Conquer} \\\hline \text{Q.} & \text{Quicksort} & \text{ii.}& \text{Greedy} \\\hline \text{R.} & \text{Floyd-Warshall} & \text{iii.}& \text{Dynamic Programming}\\\hline \end{array}$$
Match the algorithms to the design paradigms they are based on.
 $(P) \leftrightarrow (ii), (Q) \leftrightarrow (iii), (R) \leftrightarrow (i)$
 $(P) \leftrightarrow (iii), (Q) \leftrightarrow (i), (R) \leftrightarrow (ii)$
 $(P) \leftrightarrow (ii), (Q) \leftrightarrow (i), (R) \leftrightarrow (iii)$
 $(P) \leftrightarrow (i), (Q) \leftrightarrow (ii), (R) \leftrightarrow (iii)$
Answers: Algorithm Design Techniques
Let $S = \{12,11,8,8,8\}$
As per the given greedy algorithm we get $A = \{12,8 \}$ and $B = \{11,8,8\}$ and $\left|\sum_{i \in A} a_{i} - \sum_{i \in B} a_{i}\right| = |20 - 27| = 7.$
This is not minimal, because if we partition $S$ as $A = \{12,11\}$ and $B = \{8,8,8\}$ we get $\left|\sum_{i \in A} a_{i} - \sum_{i \in B} a_{i}\right| = |23 - 24| = 1.$
Thus greedy algorithm fails for this example.
Working algorithm: https://www.geeksforgeeks.org/partitionasetintotwosubsetssuchthatthedifferenceofsubsetsumsisminimum/
$$\begin{array}{llll}\hline (a) & \text{Strassen's matrix multiplication algorithm} & (r) & \text{Divide and Conquer} \\\hline (b) & \text{Kruskal's minimum spanning tree algorithm} & (p) & \text{Greedy method} \\\hline (c) & \text{ Biconnected components algorithm} & (s) & \text{Depth-first search} \\\hline (d) & \text{Floyd's shortest path algorithm} & (q) & \text{Dynamic programming} \\\hline \end{array}$$
Reference: Read the introduction/algorithm sub-heading of each:
 https://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm#History_and_naming
 https://en.wikipedia.org/wiki/Quicksort#Algorithm
 https://en.wikipedia.org/wiki/Minimum_spanning_tree#Algorithms
 https://en.wikipedia.org/wiki/Connected_component_(graph_theory)#Algorithms
P. Prim's algorithm - ii. Greedy method
http://www.geeksforgeeks.org/greedyalgorithmsset5primsminimumspanningtreemst2/
Q. Floyd-Warshall - iii. Dynamic programming
http://www.geeksforgeeks.org/dynamicprogrammingset16floydwarshallalgorithm/
R. Merge Sort - iv. Divide & Conquer
http://www.geeksforgeeks.org/mergesort/
S. Hamiltonian Circuit - i. Backtracking
http://www.geeksforgeeks.org/backtrackingset7hamiltoniancycle/
Option is $C$.
Dijkstra's shortest path algorithm uses greedy design (always choosing the nearest unvisited vertex next), while the Floyd-Warshall algorithm to compute all pairs' shortest paths uses Dynamic Programming.
Binary search uses Divide and Conquer (though we eliminate one part each time), and backtracking search on a graph uses Depth-First Search (in DFS we backtrack to a node after searching through all its descendant nodes if the searched node is not found).
In Kruskal's algorithm, in every iteration an edge of minimum possible weight (the greedy choice) is selected and added to the MST under construction. Hence, greedy.
In Quick Sort, we partition the problem into subproblems, solve them and then combine. Hence, it is Divide & Conquer.
Floyd-Warshall uses Dynamic Programming.
Hence, the correct answer is Option (C).
Consider the following two functions:
$g_1(n) = \begin{cases} n^3 \text{ for } 0 \leq n \leq 10,000 \\ n^2 \text{ for } n \geq 10,000 \end{cases}$
$g_2(n) = \begin{cases} n \text{ for } 0 \leq n \leq 100 \\ n^3 \text{ for } n > 100 \end{cases}$
Which of the following is true?
Which of the following is false?
Consider the following functions
 $f(n) = 3n^{\sqrt{n}}$
 $g(n) = 2^{\sqrt{n}{\log_{2}n}}$
 $h(n) = n!$
Which of the following is true?
Let $f(n) = n^2 \log n$ and $g(n) = n(\log n)^{10}$ be two positive functions of $n$. Which of the following statements is correct?
Consider the following three claims:
 $(n+k)^m = \Theta(n^m)$ where $k$ and $m$ are constants
 $2^{n+1} = O(2^n)$
 $2^{2n+1} = O(2^n)$
Which of the following claims are correct?
Let $f(n)$, $g(n)$ and $h(n)$ be functions defined for positive integers such that
$f(n) = O(g(n))$, $g(n) \neq O(f(n))$, $g(n) = O(h(n))$, and $h(n) = O(g(n))$.
Which one of the following statements is FALSE?
Consider the following functions:
 $f(n) = 2^n$
 $g(n) = n!$
 $h(n) = n^{\log n}$
Which of the following statements about the asymptotic behavior of $f(n)$, $g(n)$ and $h(n)$ is true?
 $f\left(n\right)=O\left(g\left(n\right)\right); g\left(n\right)=O\left(h\left(n\right)\right)$
 $f\left(n\right) = \Omega\left(g\left(n\right)\right); g(n) = O\left(h\left(n\right)\right)$
 $g\left(n\right) = O\left(f\left(n\right)\right); h\left(n\right)=O\left(f\left(n\right)\right)$
 $h\left(n\right)=O\left(f\left(n\right)\right); g\left(n\right) = \Omega\left(f\left(n\right)\right)$
Arrange the following functions in increasing asymptotic order:
Which of the given options provides the increasing order of asymptotic complexity of functions $f_1, f_2, f_3$ and $f_4$?
 $f_1(n) = 2^n$
 $f_2(n) = n^{3/2}$
 $f_3(n) = n \log_2 n$
 $f_4(n) = n^{\log_2 n}$
Let $W(n) $ and $A(n)$ denote respectively, the worst case and average case running time of an algorithm executed on an input of size $n$. Which of the following is ALWAYS TRUE?
Consider the equality $\displaystyle{\sum_{i=0}^n} i^3 = X$ and the following choices for $X$:
 $\Theta(n^4)$
 $\Theta(n^5)$
 $O(n^5)$
 $\Omega(n^3)$
The equality above remains correct if $X$ is replaced by
Let $f(n) = n$ and $g(n) = n^{(1 + \sin \: n)}$, where $n$ is a positive integer. Which of the following statements is/are correct?
 $f(n) = O(g(n))$
 $f(n) = \Omega(g(n))$
Consider the following functions from positive integers to real numbers:
$10$, $\sqrt{n}$, $n$, $\log_{2}n$, $\frac{100}{n}$.
The CORRECT arrangement of the above functions in increasing order of asymptotic complexity is:
Let $n$ be a large integer. Which of the following statements is TRUE?
Let $n$ be a large integer. Which of the following statements is TRUE?
Which of these functions grows fastest with $n$?
Let $n = m!$. Which of the following is TRUE?
 $m = \Theta (\log n / \log \log n)$
 $m = \Omega (\log n / \log \log n)$ but not $m = O(\log n / \log \log n)$
 $m = \Theta (\log^2 n)$
 $m = \Omega (\log^2 n)$ but not $m = O(\log^2 n)$
 $m = \Theta (\log^{1.5} n)$
Which of the following functions asymptotically grows the fastest as $n$ goes to infinity?
Which of the following statements is TRUE for all sufficiently large integers n ?
Stirling’s approximation for $n!$ states that for some constants $c_1,c_2$
$$c_1 n^{n+\frac{1}{2}}e^{-n} \leq n! \leq c_2 n^{n+\frac{1}{2}}e^{-n}.$$
What are the tightest asymptotic bounds that can be placed on $n!$ $?$
Among the following asymptotic expressions, which of these functions grows the slowest (as a function of $n$) asymptotically?
Answers: Asymptotic Notations
Options $A$ and $B$ are TRUE here.
 $100n \log n=O(\frac{n\log n}{100})$ : Big$O$ denotes the growth rate of functions and multiplication or division by a constant does not change the growth rate. So, this is TRUE and here $O$ can even be replaced by $\Theta$ or $\Omega$.
 $\sqrt{\log n} = O(\log\log n)$ : FALSE. $\sqrt{\log n} = (\log n)^{0.5}$ grows faster than $\log \log n$, as any positive polynomial function $($including powers between $0$ and $1)$ grows faster than any polylogarithmic function. (Ref: Section 3.2 Logarithms, Cormen). We can also substitute values here, but we must do so for at least $2$ large values and ensure it works for any larger $n$ also.
 $0 < x < y \text{ then } n^x = O\left(n^y\right)$ : TRUE since $y$ is always greater than $x$. So, RHS is always greater than LHS.
 $2^n \neq O\left(n^k\right)$ : TRUE since $k$ is a constant. So, for large values of $n$, LHS is much higher than RHS (an exponential function is eventually greater than any polynomial).
Only B is FALSE.
$$\begin{array}{lll}\hline \text{} & \textbf{n= 256} & \textbf{n = 65536} \\ \hline \text{$f(n) = 3n^{\sqrt{n}}$} & \text{$3 \times 256^{16}$} & \text{$3 \times 65536^{256}$}\\ & \text{$=3 \times {2^{128}}$} & \text{$ = 3 \times 2^{16 \times 256}$} \\ & & \text{$ = 3 \times 2^{4096}$}\\\hline \text{$g(n) = 2^{\sqrt{n}{\log_{2}n}}$} & \text{$2^{16 \times 8}$} & \text{$2^{256 \times 16}$} \\ & \text{$=2^{128}$} & \text{$ = 2^{4096}$}\\ \hline \text{$h(n) = n!$} & \text{$256!$} & \text{$65536!$}\\ & \text{$= O\left(\left(2^8\right)^{256}\right)$} & \text{$= O\left(\left(2^{16}\right)^{65536}\right)$} \\ & \text{$=O\left(2^{2048}\right)$} & \text{$=O\left(2^{1M}\right)$} \\ \hline\end{array}$$
Case of $h(n)$ is given only by an upper bound but factorial has higher growth rate than exponential.
$f(n)$ and $g(n)$ are having same order of growth as $f(n)$ is simply $3\times g(n)$ (we can prove this by taking $\log$ also). So, (d) is correct and all other choices are false.
A more formal approach:
$f(n) = n^2 \log n$
$g(n) = n (\log n)^{10}$
We can use the limit definition of $O$-notation
http://cse.unl.edu/~choueiry/S06235/files/AsymptoticsHandoutNoNotes.pdf
$\displaystyle{\lim_{n \to \infty }} \frac{f(n)}{g(n)} = 0, \implies f(n) = o(g(n))$
small $o$ implying $f$ is strictly asymptotically lower than $g$. Also by definition, $o \implies O \text{ but } O \not\implies o$.
$\displaystyle{\lim_{n\to \infty}} \frac{f(n)}{g(n)} = c, c > 0 \implies f(n) = \Theta(g(n))$
$\displaystyle{\lim_{n \to \infty}} \frac{f(n)}{g(n)} = \infty, \implies f(n) = \omega(g(n))$
small $\omega$ implying $f$ is strictly asymptotically higher than $g$. Also by definition, $\omega \implies \Omega, \text{ but } \Omega \not \implies \omega.$
We can use this to prove the above question
For any $k > 0$
$\displaystyle{\lim_{n \to \infty}} \frac{(\log n)^{k}}{ n}$
Applying L'Hôpital's rule,
$\implies \displaystyle{ \lim_{n \to \infty}} \frac{ k(\log n)^{k-1}}{ n}$
and applying it repeatedly, $k$ times in total,
$= \displaystyle{\lim_{n \to \infty}} \frac{ k!}{n} = 0$
So, $(\log n)^k = o(n)$
Now for large $n$, $n >> (\log n)^9$
i.e., $n^{2}\log n >> n(\log n)^{10}$ (Multiplying both sides by $n \log n)$
So, $n(\log n )^{10} = o(n^{2}\log n)$ and $n(\log n )^{10} \neq \Theta(n^{2}\log n)$ (making LHS strictly asymptotically lower than RHS)
Or
$n(\log n )^{10} = O(n^{2}\log n)$ and $n^{2}\log n \neq O(n(\log n )^{10})$
Option $B$.
$$\begin{array}{lcc} \hline \text {} & \textit{f(n)} & \textit{g(n)} \\\hline \text{$n = 2^{10}$} & \text{$10 \times 2^{10} \times 2^{10}$} & \text{$2^{10} \times 10^{10}$}\\\hline \text{$n = 2^{256}$} & \text{$256 \times 2^{256} \times 2^{256}$} &\text{$2^{256} \times 256^{10}$} \\\hline \end{array}$$
As $n$ is going larger, $f(n)$ is overtaking $g(n)$ and the growth rate of $f$ is faster than that of $g$. So, $g(n) = O(f(n))$ and $f(n) \neq O(g(n))$.
B choice.
I. The rate of growth of $(n+k)^m$ is the same as that of $n^m$, as $k$ and $m$ are constants. (If either $k$ or $m$ is a variable then the equality does not hold.) That is, for sufficiently large values of $n,$
$$(n+k)^m \leq a n^m \text{ and}$$
$$n^m \leq b (n+k)^m$$
where $a$ and $b$ are positive constants. Here, $a$ can be $(1+k)^m$ and $b$ can be $1.$
So, TRUE.
II. $2^{n+1} = 2\times (2^n) = \Theta\left(2^n\right)$ as $2$ is a constant here.
As $2^{n+1}$ is both upper and lower bounded by $2^n$ we can say $2^{n+1} = O\left(2^n\right).$ $(\Theta$ implies both $O$ as well as $\Omega)$
So, TRUE.
III. $2^{2n+1}$ has same rate of growth as $2^{2n}$.
$2^{2n} = \left(2^n\right)^2 = 2^n \times 2^n$
$2^n$ is upper bounded by $(2^n)^2$, not the other way round as $2^{2n}$ is increasing by a factor of $2^n$ which is not a constant.
So, FALSE.
Correct Answer: $A$
Answer is (D).
We can verify as : $f \leqslant g$ but $g \nleqslant f$. Therefore, $f<g$.
Also, $g=h$, as $g=O(h)$ and $h=O(g)$.
$g(n) = n!$.
On expanding the factorial we get $g(n) = O(n^n)$ :$$\begin{align*} n^n &> n^{\log n} \\ n^n &> 2^n \end{align*}$$This condition is violated by the first statement of each of options $A$, $B$ and $C$. Hence, they cannot be TRUE.
Second statement of option $D$ says that $g(n)$ is asymptotically biggest of all.
Answer is option (D).
$E < B$ and $C, D < E$ as $E$ and $B$ are exponential functions and $B$ having a larger base.
Now, we just need to see if $C$ or $D$ is larger.
In $C$ we have a term $n^{3/4}$ and correspondingly in D we have $\log^9n$ (after taking $n$ out).
$n^{3/4}$ is asymptotically larger than $\log^9 n$: when $n = 10^{100}$, $\log^9 n$ gives $100^9$, while $n^{3/4}$ gives $10^{75} > 100^{37}$, a much higher value, and this holds for all higher values of $n$. So, $D < C$.
Thus, $A$ is correct.
$n \log_2 n < n^{3/2}$ is quite straightforward, as $n^{{3}/{2}} = n \times n^{1/2}$ and $\log n < n^{1/2}$, since logarithmic growth is smaller than polynomial growth however small the exponent.
Also, $n^{3/2} < n^{\log_2 n} $ and $n^{3/2} < 2^n$.
Now only $n^{\log_2 n}$ and $ 2^n$ need to be compared.
Taking $\log_2$ of both functions gives $(\log_2 n)^2$ and $n$ respectively, and
$n > (\log_2 n)^2$ for large $n$.
Hence, $2^n > n^{\log_2 n} $.
NOTE: We cannot compare two functions for asymptotic growth by taking $\log$ if they are giving constants after $\log$ operation.
$A(n) = O(W(n))$.
Sum of the cubes of the first $n$ natural numbers is given by $\left(\frac{n(n+1)}{2}\right)^2$, which is $\Theta(n^4)$. So, I, III and IV are correct. II is wrong.
$\therefore$ (C) is correct.
The answer is option (D).
Since the value of $\sin(n)$ will always range from $1$ to $+1$, hence $g(n)$ can take values $1$, $n$, $n^2$.
Hence, if $g(n) = 1$, then statement I is incorrect.
And, if $g(n) = n^2$, then statement II is incorrect.
$10$ is constant. $\therefore$ Growth rate is $0$.
$\sqrt{n}$ grows slower than linear but faster than $\log$. (Consider $\frac {\sqrt {n^2}}{\sqrt n} =\sqrt n$, whereas $\frac {\log \left(n^2\right)}{\log n} = 2$)
$n$ : Growth rate is linear.
$\log_{2}n$ : Growth rate is logarithmic. For asymptotic growth, the base does not matter.
$\frac{100}{n}$ : Growth rate decreases with $n$.
So, correct answer is (B).
NOTE: Please never substitute large values of $n$ in such questions. If ever you do, at least do for $2$ such values and take ratio to get the growth rate or plot a graph. Remember ${1.01}^{n} \neq O\left(n^{100}\right)$.
 $10$, Constant
 $\sqrt{n}$, Square root
 $n$, Polynomial
 $\log_{2}n$, Logarithmic
 $\frac{100}{n}$. Constant divided by polynomial (clearly less than the constant for every value of $n > 100$)
Now we know: constant divided by polynomial < constant < logarithmic < square root < polynomial.
So, the correct order is $\frac{100}{n}$, $10$, $\log_{2}n$, $\sqrt{n}$, $n$.
PS: If these functions are plotted, for asymptotic comparison any curve can be multiplied by a constant, so the $\textbf{SLOPE}$ is the important factor. If any plot has a higher slope, it'll eventually overtake the lower-slope one at some value of $x$, and hence is asymptotically larger than the other.
Let $n = 2^x$. Then, $\log_2 n = x$.
$$\begin{array}{rlllcll} f(n) &=& n^{1/\sqrt{\log_2 n}} &=& \left (2^x \right )^{1/\sqrt{x}} = 2^{x/\sqrt{x}} &=& 2^{\sqrt{x}}\\[1em] g(n) &=& \sqrt{\log_2 n} &=& \sqrt{\log_2 {\left ( 2^x \right )}}&=& \sqrt{x}\\[1em] h(n) &=& n^{1/100} &=& \left ( 2^x \right )^{1/100} &=& 2^{x/100} \end{array}$$Since exponentials grow faster than polynomials, $h(n) > g(n)$ for large $n$.
Since linear functions grow faster than square roots, $\frac{x}{100} > \sqrt{x}$ for large $x$. Thus, $h(n) > f(n)$ for large $n$.
Since exponentials grow faster than polynomials, $2^{\sqrt{x}} > \sqrt{x}$ for large $\sqrt{x}$. Thus, $f(n) > g(n)$ for large $n$.
Hence, the relation is,
$g(n) < f(n) < h(n)$
Thus, option (D) is correct.
Take $n=2^{1024}$
Now, $2^{\sqrt{(2 \log n)}} \approx 2^{45}$
$n^{\frac{1}{3}} \approx 2^{341}$
$n / \log n = 2^{1024}/1024 \approx 2^{1014}$
Just one value is not enough to confirm growth rate. So, take $n = 1024$.
Now, $2^{\sqrt{(2 \log n)}} \approx 2^{4}$
$n^{\frac{1}{3}}\approx 2^{3}$
$n / \log n = 2^{10}/10 \approx 2^{7}$
So, as $n$ increases, the gap between second and third function increases and also the second function overtakes the first. So, $f1 < f2 < f3$.
Assuming that the base of the $\log$ in the question is $e$.
Let us try to rewrite each of these functions in the form $e^{\,\small\rm something}$, to make the comparison easier.
$$\def\d{\displaystyle\,}
\begin{array}{llll}
a. & e^{n} / n & = \dfrac{e^{\d n}}{e^{\d(\ln n)}} & = e^{\d (n - \ln n)}\\[1em]
b. & e^{n - 0.9 \ln n} && = e^{\d (n - 0.9 \ln n)}\\[1em]
c. & 2^n & = \left (e^{\d\ln 2} \right )^{\d n} & = e^{\d (n \ln 2)}\\[1em]
d. & (\ln n)^{n-1} & = \left (e^{\d\ln \ln n} \right )^{\d n-1} & = e^{\d (n \ln \ln n - \ln\ln n)}
\end{array}$$
Now, if we just compare the exponents of all, we can clearly see that $(n \ln \ln n - \ln \ln n)$ grows faster than the rest. Note that in option $(C)$, the multiplicative $\ln 2$ is a constant, and hence grows slower than the multiplicative $\ln \ln n$ from option $(D)$.
This implies that $e^{(n \ln \ln n - \ln\ln n)}$ grows the fastest, and hence, $(\ln n)^{n-1}$ grows the fastest.
Thus, option (D) is the correct answer.
Correct Answer: $A$
Let $N = 2^{256}$
 $(\log \log N)! = 8!$
 $(\log \log N)^{\log N} = (8)^{256} = 2^{768}$
 $(\log \log N)^{\log\log \log N} = 8^{3}=512$
 $(\log N)^{\log\log N} = (256)^{8}=2^{64}$
 $2^{\sqrt{\log \log N}} = 2^{2\sqrt 2}$
Let $N = 2^{16}$
 $(\log \log N)! = 4!$
 $(\log \log N)^{\log N} = (4)^{16} = 2^{32}$
 $(\log \log N)^{\log\log \log N} = 4^{2}=16$
 $(\log N)^{\log\log N} = (16)^{4}=2^{16}$
 $2^{\sqrt{\log \log N}} = 2^{2}=4$
Taking ratio for both $N$ values,
 $(\log \log N)! \rightarrow 8!/4!$
 $(\log \log N)^{\log N} \rightarrow 2^{736}$
 $(\log \log N)^{\log\log \log N} \rightarrow 32$
 $(\log N)^{\log\log N} \rightarrow 2^{48}$
 $2^{\sqrt{\log \log N}} \rightarrow \approx 2$
Option $B$ = $(\log \log N)^{\log N}$ asymptotically grows the fastest, as for the same change in $N$ the value increased the most for option $B$, and this growth is monotonic.
Now, option $(A)$ gives $n = 2^{2^{k}}$
Option $(B)$ gives $2^{\sqrt{\log2^{2^{k}}}}= 2^{\sqrt{2^{k}}}$
Option $(C)$ gives $2^{2^{\sqrt{\log(\log2^{2^{k}})}}}= 2^{2^{\sqrt{k}}}$
Now, check power only for $(B)$ and $(C)$.
Take $\log$ of power of both functions, i.e.,
$\log\left(2^{\sqrt{k}}\right) = O\left(\log\left(\sqrt{2^{k}}\right)\right)$, i.e.,
$\sqrt{k} = O(\frac{k}{2})$
Clearly, $(C)<(B)<(A)$.
Option $(A)$ is answer.
$c_1n^{n+\frac{1}{2}}e^{-n}\leq n!\leq c_2n^{n+\frac{1}{2}}e^{-n}$.
$n!=\Theta\left(\frac{n^{n+1/2}}{e^n}\right)$
We can divide it by $e^{1/2}$ (multiplying or dividing by a positive constant doesn't affect the asymptotic nature of a function):
$\implies n!=\Theta\left(\frac{n^{n+1/2}}{e^{n+1/2}}\right)=\Theta\left(\left(\frac{n}{e}\right)^{n+\frac{1}{2}}\right)$
Option (D) is the answer.
For large values of $n$, option (E) grows the slowest.
Put $n=2^{64}$, so $\log n = 64$ and $\log \log n = 6$ (all logs base $2$):
$A)$ $2^{\log n}= 2^{64}$
$B)$ $n^{10}= 2^{640}$
$C)$ $(\sqrt{\log n})^{\log^2 n}= 8^{64 \times 64}= 2^{3\times 64\times 64}=2^{12288}$
$D)$ $(\log n)^{\sqrt{\log n}}= 64^{\sqrt{64}} = 64^{8}= 2^{48}$
$E)$ $2^{2^{\sqrt{\log \log n}}}= 2^{2^{\sqrt{6}}}\approx 2^{5.5}$
The subsetsum problem is defined as follows. Given a set of $n$ positive integers, $S = \{ a_1, a_2, a_3, \dots , a_n \}$, and positive integer $W$, is there a subset of $S$ whose elements sum to $W$? A dynamic program for solving this problem uses a $\text{2dimensional}$ Boolean array, $X$, with $n$ rows and $W+1$ columns. $X[i, j], 1 \leq i \leq n, 0 \leq j \leq W$, is TRUE, if and only if there is a subset of $\{a_1, a_2, \dots, a_i\}$ whose elements sum to $j$.
Which of the following is valid for $2 \leq i \leq n$, and $a_i \leq j \leq W$?
 $X[i, j] = X[i1, j] \vee X[i, ja_i]$
 $X[i, j] = X[i1, j] \vee X[i1, ja_i]$
 $X[i, j] = X[i1, j] \wedge X[i, ja_i]$
 $X[i, j] = X[i1, j] \wedge X[i1, ja_i]$
The subsetsum problem is defined as follows. Given a set of $n$ positive integers, $S = \{ a_1, a_2, a_3, \dots , a_n \}$, and positive integer $W$, is there a subset of $S$ whose elements sum to $W$? A dynamic program for solving this problem uses a $\text{2dimensional}$ Boolean array, $X$, with $n$ rows and $W+1$ columns. $X[i, j], 1 \leq i \leq n, 0 \leq j \leq W$, is TRUE, if and only if there is a subset of $\{a_1, a_2, \dots, a_i\}$ whose elements sum to $j$.
Which entry of the array $X$, if TRUE, implies that there is a subset whose elements sum to $W$?
A subsequence of a given sequence is just the given sequence with some elements (possibly none or all) left out. We are given two sequences $X[m]$ and $Y[n]$ of lengths $m$ and $n$, respectively with indexes of $X$ and $Y$ starting from $0$.
We wish to find the length of the longest common subsequence (LCS) of $X[m]$ and $Y[n]$ as $l(m, n)$, where an incomplete recursive definition for the function $I(i, j)$ to compute the length of the LCS of $X[m]$ and $Y[n]$ is given below:
l(i,j) = 0,     if either i = 0 or j = 0
       = expr1, if i,j > 0 and X[i-1] = Y[j-1]
       = expr2, if i,j > 0 and X[i-1] ≠ Y[j-1]
Which one of the following options is correct?
A subsequence of a given sequence is just the given sequence with some elements (possibly none or all) left out. We are given two sequences $X[m]$ and $Y[n]$ of lengths $m$ and $n$, respectively with indexes of $X$ and $Y$ starting from $0$.
We wish to find the length of the longest common subsequence (LCS) of $X[m]$ and $Y[n]$ as $l(m, n)$, where an incomplete recursive definition for the function $I(i, j)$ to compute the length of the LCS of $X[m]$ and $Y[n]$ is given below:
$l(i,j) = 0$, if either $i = 0$ or $j = 0$
$= expr1,$ if $i,j > 0$ and $X[i-1] = Y[j-1]$
$= expr2,$ if $i,j > 0$ and $X[i-1] ≠ Y[j-1]$
The value of $l(i,j)$ could be obtained by dynamic programming based on the correct recursive definition of $l(i,j)$ of the form given above, using an array $L[M,N]$, where $M = m+1$ and $N = n+1$, such that $L[i, j] = l(i, j)$.
Which one of the following statements would be TRUE regarding the dynamic programming solution for the recursive definition of $l(i, j)$?
 All elements of $L$ should be initialized to 0 for the values of $l(i, j)$ to be properly computed.
 The values of $l(i, j)$ may be computed in a row major order or column major order of $L[M, N]$.
 The values of $l(i, j)$ cannot be computed in either row major order or column major order of $L[M, N]$.
 $L[p, q]$ needs to be computed before $L[r, s]$ if either $p<r$ or $q < s$.
The weight of a sequence $a_0,a_1, \dots, a_{n-1}$ of real numbers is defined as $a_0+a_1/2+ \dots + a_{n-1}/2^{n-1}$. A subsequence of a sequence is obtained by deleting some elements from the sequence, keeping the order of the remaining elements the same. Let $X$ denote the maximum possible weight of a subsequence of $a_0,a_1, \dots, a_{n-1}$ and $Y$ the maximum possible weight of a subsequence of $a_1,a_2, \dots, a_{n-1}$. Then $X$ is equal to
An algorithm to find the length of the longest monotonically increasing sequence of numbers in an array $A[0:n-1]$ is given below.
Let $L_i$, denote the length of the longest monotonically increasing sequence starting at index $i$ in the array.
Initialize $L_{n1} = 1$.
For all $i$ such that $0 \leq i \leq n-2$
$ L_i = \begin{cases} 1+ L_{i+1} & \quad\text{if A[i] < A[i+1]} \\ 1 & \quad\text{Otherwise}\end{cases} $
Finally, the length of the longest monotonically increasing sequence is $\text{max} \:(L_0, L_1, \dots , L_{n-1}).$
Which of the following statements is TRUE?
 The algorithm uses dynamic programming paradigm
 The algorithm has a linear complexity and uses branch and bound paradigm
 The algorithm has a nonlinear polynomial complexity and uses branch and bound paradigm
 The algorithm uses divide and conquer paradigm
Four Matrices $M_1, M_2, M_3$ and $M_4$ of dimensions $ p \times q, \:\:q \times r, \:\:r \times s$ and $s \times t$ respectively can be multiplied in several ways with different number of total scalar multiplications. For example when multiplied as $((M_1 \times M_2) \times (M_3 \times M_4))$, the total number of scalar multiplications is $pqr + rst + prt$. When multiplied as $(((M_1 \times M_2) \times M_3) \times M_4)$, the total number of scalar multiplications is $pqr + prs + pst$.
If $p=10, q=100, r=20, s=5$ and $t=80$, then the minimum number of scalar multiplications needed is
Consider two strings $A$="qpqrr" and $B$="pqprqrp". Let $x$ be the length of the longest common subsequence (not necessarily contiguous) between $A$ and $B$ and let $y$ be the number of such longest common subsequences between $A$ and $B$. Then $x +10y=$ ___.
Suppose you want to move from $0$ to $100$ on the number line. In each step, you either move right by a unit distance or you take a shortcut. A shortcut is simply a prespecified pair of integers $i,\:j \:\text{with}\: i <j$. Given a shortcut $(i,j)$, if you are at position $i$ on the number line, you may directly move to $j$. Suppose $T(k)$ denotes the smallest number of steps needed to move from $k$ to $100$. Suppose further that there is at most $1$ shortcut involving any number, and in particular, from $9$ there is a shortcut to $15$. Let $y$ and $z$ be such that $T(9) = 1 + \min(T(y),T(z))$. Then the value of the product $yz$ is _____.
The Floyd-Warshall algorithm for all-pairs shortest paths computation is based on
 Greedy paradigm.
 Divide-and-conquer paradigm.
 Dynamic Programming paradigm.
 Neither Greedy nor Divide-and-Conquer nor Dynamic Programming paradigm.
Assume that multiplying a matrix $G_1$ of dimension $ p \times q$ with another matrix $G_2$ of dimension $q \times r$ requires $pqr$ scalar multiplications. Computing the product of $n$ matrices $G_1G_2G_3 \dots G_n$ can be done by parenthesizing in different ways. Define $G_i G_{i+1}$ as an explicitly computed pair for a given parenthesization if they are directly multiplied. For example, in the matrix multiplication chain $G_1G_2G_3G_4G_5G_6$ using parenthesization $(G_1(G_2G_3))(G_4(G_5G_6))$, $G_2G_3$ and $G_5G_6$ are the only explicitly computed pairs.
Consider a matrix multiplication chain $F_1F_2F_3F_4F_5$, where matrices $F_1, F_2, F_3, F_4$ and $F_5$ are of dimensions $ 2 \times 25, 25 \times 3, 3 \times 16, 16 \times 1 $ and $ 1 \times 1000$, respectively. In the parenthesization of $F_1F_2F_3F_4F_5$ that minimizes the total number of scalar multiplications, the explicitly computed pairs is/are
Answers: Dynamic Programming
This is analogous to the dynamic programming solution to $0/1$ knapsack problem.
Consider the capacity of the knapsack, i.e., $W$, to be analogous to $j$ (the target sum here).
The solution exploits the optimal substructure of the problem.
At each stage we can have $2$ options:
Case $(1)$: Either we take the item (in this question, the element $a_i$) together with the solution to the previous subproblem (the sums obtainable using the first $i-1$ elements), in which case we choose $X[i-1]\left[j-a_{i}\right]$:
$X[i-1]$ indicates we are considering the solution to the previous subproblem, and
$[j-a_i]$ means we have included the element $a_i$, so the remaining sum $j-a_i$ has to be formed from the first $i-1$ elements.
Case $(2)$: Or we do not take the item (the element $a_{i}$), in which case we only consider the solution to the previous subproblem, which is $X[i-1][j]$.
Since the whole solution to this subset-sum problem is the logical OR of cases 1 and 2, we eliminate options $C$ and $D$ because both take the logical AND of the two parts of the solution.
Now, in the given question we are given a Boolean array $X[n][W+1]$.
So, an entry $X[i][j]$ is TRUE only if sum $j$ is possible with array elements from $a_1$ to $a_i$.
So, for each element $a_{i}$, we consider two possibilities:
(1) Either we can ignore it and still get our required sum: $X[i-1][j]$,
OR
(2) we could include element $a_i$ and get the remaining required sum from the earlier elements: $X[i-1][j-a_i]$.
And finally, to get $X[i][j]$, we take the logical OR of the above two cases.
Hence, answer is option B.
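A minimal C sketch of this recurrence (row $0$ is used here for the empty prefix, a slight shift from the question's $1$-based rows; names are illustrative):

#include <stdio.h>
#include <stdbool.h>

bool subset_sum(const int a[], int n, int W)
{
    bool X[n+1][W+1];                       /* X[i][j]: can a_1..a_i sum to j? */
    for (int j = 0; j <= W; j++)
        X[0][j] = (j == 0);                 /* the empty set sums only to 0 */
    for (int i = 1; i <= n; i++)
        for (int j = 0; j <= W; j++) {
            X[i][j] = X[i-1][j];            /* case 2: ignore a_i */
            if (j >= a[i-1] && X[i-1][j - a[i-1]])
                X[i][j] = true;             /* case 1: include a_i */
        }
    return X[n][W];                         /* the last row, last column entry */
}

int main()
{
    int a[] = {3, 34, 4, 12, 5, 2};
    printf("%d\n", subset_sum(a, 6, 9));    /* prints 1, since 4 + 5 = 9 */
}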
Answer is C.
If the LAST ROW and LAST COLUMN entry, i.e., $X[n, W]$, is TRUE, then there exists a subset whose elements sum to $W$.
Answer is C. When the currently compared elements don't match, we have two possibilities for the LCS: one including $X[i]$ but not $Y[j]$, and the other including $Y[j]$ but not $X[i]$.
/* Returns length of LCS for X[0..m-1], Y[0..n-1] */
int lcs(char *X, char *Y, int m, int n)
{
    if (m == 0 || n == 0)
        return 0;
    if (X[m-1] == Y[n-1])
        return 1 + lcs(X, Y, m-1, n-1);
    else
        return max(lcs(X, Y, m, n-1), lcs(X, Y, m-1, n));
}
$\text{expr2} = \max\left(l\left(i-1, j\right), l\left(i,j-1\right)\right)$
Answer is B. Dynamic programming is used to save the previously found LCS values, so for any index $[p,q]$ all smaller indices should have been computed earlier. Option D is not correct, as the condition given would require even $L[3,2]$ to be computed before $L[2,4]$, which is not a necessity if we follow row-major order.
int lcs(char *X, char *Y, int m, int n)
{
    int L[m+1][n+1];
    int i, j;
    /* Following steps build L[m+1][n+1] in bottom-up fashion. Note that
       L[i][j] contains the length of the LCS of X[0..i-1] and Y[0..j-1] */
    for (i = 0; i <= m; i++) {
        for (j = 0; j <= n; j++) {
            if (i == 0 || j == 0)
                L[i][j] = 0;
            else if (X[i-1] == Y[j-1])
                L[i][j] = L[i-1][j-1] + 1;
            else
                L[i][j] = max(L[i-1][j], L[i][j-1]);
        }
    }
    /* L[m][n] contains the length of the LCS of X[0..m-1] and Y[0..n-1] */
    return L[m][n];
}
$$\begin{align}
&S = \langle a_0, S_1 \rangle\\
&S_1 = \langle a_1, a_2, a_3 \ldots a_{n1} \rangle
\end{align}$$
Two possible cases arise:
 $\bf{a_0}$ is included in the max weight subsequence of $\textbf{S:}$
In this case, $X = \text{weight} \Bigl( \langle a_0, S_1 \rangle \Bigr) = a_0 + \dfrac Y 2$
 $\bf{a_0}$ is not included in the max weight subsequence of $\textbf{S:}$
In this case, $X = \text{weight}(S_1) = Y$
Since the value of $a_0$ can be anything (negative or $<\frac Y 2$ in general) $\{ \because a_i \in \mathbb{R} \}$, it is possible that $Y > a_0 + \frac Y 2$.
The maximum possible weight of a subsequence of $S$ is given by: $$X = \max \left ( Y, a_0 + \frac Y 2 \right )$$
Thus, option B is correct.
$(A)$ is the answer.
The algorithm is storing the optimal solutions to subproblems at each point (for each $i$), and then using it to derive the optimal solution of a bigger problem. And that is dynamic programming approach. And the program has linear time complexity.
http://stackoverflow.com/questions/1065433/whatisdynamicprogramming
Now, branch and bound comes in when we explore all possible solutions (branch) and prune a branch as soon as we realise it cannot lead to a better solution (in classical backtracking we retreat only when the current path cannot be extended to a solution). In backtracking: at each step, you check if this step satisfies all the conditions.
If it does: you continue generating subsequent solutions.
If not: you go one step backward to check for another path.
So, backtracking gives all possible solutions while branch and bound will give only the optimal one.
The given algorithm here is neither backtracking nor branch and bound. Because we are not branching anywhere in the solution space.
And the algorithm is not divide and conquer as we are not dividing the problem and then merging the solution as in the case of merge sort (where merge is the conquer step).
https://en.wikipedia.org/wiki/Divide_and_conquer_algorithms
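A minimal C sketch of the given algorithm, filling $L_i$ from right to left in a single linear pass (names are illustrative):

#include <stdio.h>

int longest_increasing_run(const int A[], int n)
{
    int L[n];                      /* L[i]: longest increasing run starting at i */
    L[n-1] = 1;
    int best = 1;
    for (int i = n-2; i >= 0; i--) {
        L[i] = (A[i] < A[i+1]) ? 1 + L[i+1] : 1;
        if (L[i] > best) best = L[i];
    }
    return best;                   /* max(L_0, L_1, ..., L_{n-1}) */
}

int main()
{
    int A[] = {2, 4, 1, 5, 6, 7, 0};
    printf("%d\n", longest_increasing_run(A, 7));  /* prints 4, for 1 5 6 7 */
}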
Answer is C.
Ordering:
 First multiply $M_2 \times M_3.$ This requires $100 \times 20 \times 5 = 10000$ multiplications.
 Then multiply $M_1 \times (M_2 \times M_3).$ This requires $10 \times 100 \times 5 = 5000$ multiplications.
 Then multiply $(M_1 \times (M_2 \times M_3)) \times M_4.$ This requires $10 \times 5 \times 80 = 4000$ multiplications.
Total: $19000$ multiplications.
Brute Force approach  anyone can do.
No. of possible ordering for 4 matrices is $C_3$ where $C_3$ is the $3^{rd}$ Catalan number and given by $n=3$ in $\frac{1}{n+1} {}^{2n}C_n = 5.$
So, here we have
 $(M_1 \times M_2) \times (M_3 \times M_4)$
 $(M_1 \times (M_2 \times M_3)) \times M_4$
 $((M_1 \times M_2) \times M_3) \times M_4$
 $M_1 \times (M_2 \times (M_3 \times M_4))$
 $M_1 \times ((M_2 \times M_3) \times M_4)$
Each of these would give no. of multiplications required as
 $pqr + rst + prt $
 $qrs + pqs + pst$
 $pqr + prs + pst$
 $rst + qrt + pqt$
 $qrs + qst + pqt$
The last 2 are having $qt$ terms which are the highest terms by far and hence we can avoid them from consideration $qt = 8000$ multiplied by one other term would be larger than any value in choice. So, just find the value of first 3 terms.
 $pqr + rst + prt = 20000 + 8000 + 16000 = 44000$
 $qrs + pqs + pst = 10000 + 5000 + 4000 = 19000$  smallest value in choice, we can stop here.
 $pqr + prs + pst$
Dynamic Programming Solution (should know Matrix Chain Ordering algorithm)
Here we have a chain of length 4.
Dynamic programming solution of Matrix chain ordering has the solution
$m[i, j] = \begin{cases} 0 &\text{ if } i=j\\ \min_{i\leq k < j }\left( m[i][k] + m[k+1][j] + p_{i-1}p_{k}p_{j}\right) &\text{ if } i < j\end{cases}$
So, we can fill the following table starting with the diagonals and moving upward diagonally. Here $k < j$ but $\geq i.$
$$\begin{array}{lllll} \hline \text{} & \textbf{j=1} & \textbf{j=2} & \textbf{j=3} & \textbf{j=4}\\\hline
\textbf{i=1} & 0 & p_0p_1p_2 & \min(10000+p_0p_1p_3, & \min(50000+ p_0p_1p_4, \\
&&=20000 & 20000+p_0p_2p_3) & 20000+8000+p_0p_2p_4, \\
&&& =15000 & 15000+p_0p_3p_4) = 19000
\\\hline
\textbf{i=2} & \text{} & 0 & p_1p_2p_3 = 10000 & \min(8000 + p_1p_2p_4, 10000 + p_1p_3p_4)\\
&&&& =50000
\\\hline
\textbf{i=3} &\text{} & \text{} & 0 &p_2p_3p_4 = 8000 \\\hline \textbf{i=4} &\text{} &\text{} & \text{} & 0 \\\hline \end{array}$$
Our required answer is given by $m[1,4] = 19000.$
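A minimal C sketch of this recurrence (matrix $M_i$ has dimensions $p_{i-1} \times p_i$; names are illustrative):

#include <stdio.h>
#include <limits.h>

int matrix_chain(const int p[], int n)      /* n = number of matrices */
{
    int m[n+1][n+1];
    for (int i = 1; i <= n; i++) m[i][i] = 0;
    for (int len = 2; len <= n; len++)      /* chain length */
        for (int i = 1; i + len - 1 <= n; i++) {
            int j = i + len - 1;
            m[i][j] = INT_MAX;
            for (int k = i; k < j; k++) {   /* split point */
                int cost = m[i][k] + m[k+1][j] + p[i-1]*p[k]*p[j];
                if (cost < m[i][j]) m[i][j] = cost;
            }
        }
    return m[1][n];
}

int main()
{
    int p[] = {10, 100, 20, 5, 80};         /* p, q, r, s, t from the question */
    printf("%d\n", matrix_chain(p, 4));     /* prints 19000 */
}

Running it on $p = (10, 100, 20, 5, 80)$ reproduces the $m[1,4] = 19000$ entry of the table above.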
In first string, if we want to get $4$ as maximum length then LCS should end with either "$rr$" or "$qr$".
Only $4$ combinations are possible for LCS with length $4$:
"$qpqr$", "$qqrr$","$pqrr$","$qprr$"
Now, check for matching subsequences in the second string: except for "$qqrr$", all are possible. So $x = 4$, $y = 3$, and $x + 10y = 34$.
Now, it is given that $y$ and $z$ are two numbers such that,
$T(9) = 1 + \min (T(y), T(z))$ , i.e.,
$T(9) = 1 + \min ($Steps from $y$ to $100$, Steps from $z$ to $100)$, where $y$ and $z$ are two possible values that can be reached from $9$.
One number that can be reached from $9$ is $10$, which is the number obtained if we simply move one position right on the number line. Another number is $15$, the shortcut path from $9$, as given in the question. So, we have two paths from $9$, one is $10$ and the other is $15$.
Therefore, the value of $y$ and $z$ is $10$ and $15$ (either variable may take either of the values).
Thus, $yz = 150$.
$T(9)=1+ \min(T(y),T(z))=1+\min($distance from $y$ to $100$, distance from $z$ to $100)$.
There are only two values we can reach from $9$: one is the simple step to the right on the number line, i.e., $10$, and the other is $15$ (the given shortcut).
Hence, $y=10$, $z=15$, and
$yz=10 \times 15 = 150$.
Correct Answer: $C$
The answer is $1500$.
Matrix Parenthesizing : $A_1 ((A_2 A_3) A_4)$
Check my solution below, using dynamic programming $$\begin{array}{ll}\hline A_{1} &A_{2} & A_{3} &A_{4}\\\hline 10 \times 5 & 5 \times 20 & 20 \times 10 & 10\times 5 \\\hline \end{array}$$
 $A_{12} = 10 \times 5 \times 20 =1000$
 $A_{23} = 5 \times 20 \times 10 =1000$
 $A_{34} = 20 \times 10 \times 5 =1000$
$$A_{13}=\min\left\{\begin{matrix} A_{12} + A_{33}+ 10 \times 20 \times 10 = 3000\\ A_{11} + A_{23} + 10 \times 5 \times 10 = 1500 \end{matrix}\right.$$
$$A_{24}=\min\left\{\begin{matrix} A_{23} + A_{44}+ 5 \times 10 \times 5 = 1250\\ A_{22} + A_{34} + 5 \times 20 \times 5 = 1500 \end{matrix}\right.$$
$$A_{14}=\min\left\{\begin{matrix} A_{11} + A_{24}+ 10 \times 5 \times 5 = 1500\\ A_{12} + A_{34} + 10 \times 20\times 5 = 3000\\ A_{13}+A_{44} + 10 \times 10\times 5 = 2000 \end{matrix}\right.$$
Answer is 1500.
So, here is the sequence giving minimal cost:
$\left(F_{1}\left(F_{2}\left(F_{3}F_{4}\right)\right)\right)F_{5} = 48 + 75 + 50 + 2000 = 2173$
The only explicitly computed pair is $(F_{3}F_{4})$.
Correct Answer: $C$
Which of the following statements is false?
 Optimal binary search tree construction can be performed efficiently using dynamic programming
 Breadth-first search cannot be used to find connected components of a graph
 Given the prefix and postfix walks over a binary tree, the binary tree cannot be uniquely constructed
 Depth-first search can be used to find connected components of a graph
An independent set in a graph is a subset of vertices such that no two vertices in the subset are connected by an edge. An incomplete scheme for a greedy algorithm to find a maximum independent set in a tree is given below:
V: Set of all vertices in the tree;
I := ϕ;
while V ≠ ϕ do
begin
    select a vertex u ∊ V such that _______;
    V := V - {u};
    if u is such that ________ then
        I := I ∪ {u}
end;
Output(I);

 Complete the algorithm by specifying the property of vertex $u$ in each case.
 What is the time complexity of the algorithm?
Let $G$ be the directed, weighted graph shown in the figure below.
We are interested in the shortest paths from $A$.
 Output the sequence of vertices identified by Dijkstra's algorithm for single source shortest path when the algorithm is started at node $A$.
 Write down the sequence of vertices in the shortest path from $A$ to $E$.
 What is the cost of the shortest path from $A$ to $E$?
Which one of the following algorithm design techniques is used in finding all pairs of shortest distances in a graph?
The most appropriate matching for the following pairs
$$\begin{array}{ll}\hline \text{X: depth first search} & \text{1: heap } \\\hline \text{Y: breadth first search} & \text{2: queue} \\\hline \text{Z: sorting} & \text{3: stack} \\\hline \end{array}$$
is:
Consider an undirected, unweighted graph $G$. Let a breadthfirst traversal of $G$ be done starting from a node $r$. Let $d(r,u)$ and $d(r,v)$ be the lengths of the shortest paths from $r$ to $u$ and $v$ respectively in $G$. If $u$ is visited before $v$ during the breadthfirst traversal, which of the following statements is correct?
Fill in the blanks in the following template of an algorithm to compute all pairs shortest path lengths in a directed graph $G$ with $n*n$ adjacency matrix $A$. $A[i,j]$ equals $1$ if there is an edge in $G$ from $i$ to $j$, and $0$ otherwise. Your aim in filling in the blanks is to ensure that the algorithm is correct.
INITIALIZATION: For i = 1 ... n
    {For j = 1 ... n
        { if a[i,j] = 0 then P[i,j] =_______ else P[i,j] =_______; }
    }

ALGORITHM: For i = 1 ... n
    {For j = 1 ... n
        {For k = 1 ... n
            {P[__,__] = min{_______,______}; }
        }
    }
 Copy the complete line containing the blanks in the Initialization step and fill in the blanks.
 Copy the complete line containing the blanks in the Algorithm step and fill in the blanks.
 Fill in the blank: The running time of the Algorithm is $O$(___).
Consider the following graph:
Among the following sequences:
 abeghf
 abfehg
 abfhge
 afghbe
Which are the depthfirst traversals of the above graph?
Let $G =(V,E)$ be an undirected graph with a subgraph $G_1 = (V_1, E_1)$. Weights are assigned to edges of $G$ as follows.
$$w(e) = \begin{cases} 0 \text{, if } e \in E_1 \\1 \text{, otherwise} \end{cases}$$
A singlesource shortest path algorithm is executed on the weighted graph $(V,E,w)$ with an arbitrary vertex $v_1$ of $V_1$ as the source. Which of the following can always be inferred from the path costs computed?
 The number of edges in the shortest paths from $v_1$ to all vertices of $G$
 $G_1$ is connected
 $V_1$ forms a clique in $G$
 $G_1$ is a tree
Let $G= (V,E)$ be a directed graph with $n$ vertices. A path from $v_i$ to $v_j$ in $G$ is a sequence of vertices ($v_{i},v_{i+1}, \dots , v_j$) such that $(v_k, v_{k+1}) \in E$ for all $k$ in $i$ through $j1$. A simple path is a path in which no vertex appears more than once.
Let $A$ be an $n \times n$ array initialized as follows:
$$A[j,k] = \begin{cases} 1 \text {, if } (j,k) \in E \\ 0 \text{, otherwise} \end{cases}$$
Consider the following algorithm:
for i = 1 to n
    for j = 1 to n
        for k = 1 to n
            A[j,k] = max(A[j,k], A[j,i] + A[i,k]);
Which of the following statements is necessarily true for all $j$ and $k$ after termination of the above algorithm?
 $A[j,k] \leq n$
 If $A[j,j] \geq n1$ then $G$ has a Hamiltonian cycle
 If there exists a path from $j$ to $k$, $A[j,k]$ contains the longest path length from $j$ to $k$
 If there exists a path from $j$ to $k$, every simple path from $j$ to $k$ contains at most $A[j,k]$ edges
Suppose we run Dijkstra’s single source shortest path algorithm on the following edge-weighted directed graph with vertex $P$ as the source.
In what order do the nodes get included into the set of vertices for which the shortest path distances are finalized?
Let $G_1=(V,E_1)$ and $G_2 =(V,E_2)$ be connected graphs on the same vertex set $V$ with more than two vertices. If $G_1 \cap G_2= (V,E_1\cap E_2)$ is not a connected graph, then the graph $G_1\cup G_2=(V,E_1\cup E_2)$
Consider the undirected graph below:
Using Prim's algorithm to construct a minimum spanning tree starting with node A, which one of the following sequences of edges represents a possible order in which the edges would be added to construct the minimum spanning tree?
 $\text{(E, G), (C, F), (F, G), (A, D), (A, B), (A, C)}$
 $\text{(A, D), (A, B), (A, C), (C, F), (G, E), (F, G)}$
 $\text{(A, B), (A, D), (D, F), (F, G), (G, E), (F, C)}$
 $\text{(A, D), (A, B), (D, F), (F, C), (F, G), (G, E)}$
Let $G(V,E)$ be an undirected graph with positive edge weights. Dijkstra’s single source shortest path algorithm can be implemented using the binary heap data structure with time complexity:
Let $s$ and $t$ be two vertices in an undirected graph $G=(V,E)$ having distinct positive edge weights. Let $[X,Y]$ be a partition of $V$ such that $s \in X$ and $t \in Y$. Consider the edge $e$ having the minimum weight amongst all those edges that have one vertex in $X$ and one vertex in $Y$.
The edge $e$ must definitely belong to:
Let $s$ and $t$ be two vertices in an undirected graph $G=(V,E)$ having distinct positive edge weights. Let $[X,Y]$ be a partition of $V$ such that $s \in X$ and $t \in Y$. Consider the edge $e$ having the minimum weight amongst all those edges that have one vertex in $X$ and one vertex in $Y$.
Let the weight of an edge $e$ denote the congestion on that edge. The congestion on a path is defined to be the maximum of the congestions on the edges of the path. We wish to find the path from $s$ to $t$ having minimum congestion. Which of the following paths is always such a path of minimum congestion?
In a depthfirst traversal of a graph $G$ with $n$ vertices, $k$ edges are marked as tree edges. The number of connected components in $G$ is
In the following table, the left column contains the names of standard graph algorithms and the right column contains the time complexities of the algorithms. Match each algorithm with its time complexity.$$\begin{array}{llll}\hline \text{1.} & \text{Bellman-Ford algorithm} & \text{A:} & \text{$O(m\log n)$} \\\hline \text{2.} & \text{Kruskal’s algorithm} & \text{B:}& \text{$O(n^3)$} \\\hline \text{3.}& \text{Floyd-Warshall algorithm} & \text{C:} & \text{$O(nm)$} \\\hline \text{4.} & \text{Topological sorting} &\text{D:} & \text{$O(n+m)$} \\\hline \end{array}$$
A sink in a directed graph is a vertex i such that there is an edge from every vertex $j \neq i$ to $i$ and there is no edge from $i$ to any other vertex. A directed graph $G$ with $n$ vertices is represented by its adjacency matrix $A$, where $A[i] [j] = 1$ if there is an edge directed from vertex $i$ to $j$ and $0$ otherwise. The following algorithm determines whether there is a sink in the graph $G$.
i = 0;
do {
    j = i + 1;
    while ((j < n) && E1) j++;
    if (j < n) E2;
} while (j < n);
flag = 1;
for (j = 0; j < n; j++)
    if ((j != i) && E3) flag = 0;
if (flag) printf("Sink exists");
else printf("Sink does not exist");
Choose the correct expressions for $E_1$ and $E_2$
A sink in a directed graph is a vertex i such that there is an edge from every vertex $j \neq i$ to $i$ and there is no edge from $i$ to any other vertex. A directed graph $G$ with $n$ vertices is represented by its adjacency matrix $A$, where $A[i] [j] = 1$ if there is an edge directed from vertex $i$ to $j$ and $0$ otherwise. The following algorithm determines whether there is a sink in the graph $G$.
i = 0;
do {
    j = i + 1;
    while ((j < n) && E1) j++;
    if (j < n) E2;
} while (j < n);
flag = 1;
for (j = 0; j < n; j++)
    if ((j != i) && E3) flag = 0;
if (flag) printf("Sink exists");
else printf("Sink does not exist");
Choose the correct expression for $E_3$
To implement Dijkstra’s shortest path algorithm on unweighted graphs so that it runs in linear time, the data structure to be used is:
Let $T$ be a depth first search tree in an undirected graph $G$. Vertices $u$ and $v$ are leaves of this tree $T$. The degrees of both $u$ and $v$ in $G$ are at least $2$. Which one of the following statements is true?
 There must exist a vertex $w$ adjacent to both $u$ and $v$ in $G$
 There must exist a vertex $w$ whose removal disconnects $u$ and $v$ in $G$
 There must exist a cycle in $G$ containing $u$ and $v$
 There must exist a cycle in $G$ containing $u$ and all its neighbours in $G$
Which of the following is the correct decomposition of the directed graph given below into its strongly connected components?
 $\left \{ P, Q, R, S \right \}, \left \{ T \right \},\left \{ U \right \}, \left \{ V \right \}$
 $\left \{ P,Q, R, S, T, V \right \}, \left \{ U \right \}$
 $\left \{ P, Q, S, T, V \right \}, \left \{ R \right \},\left \{ U \right \}$
 $\left \{ P, Q, R, S, T, U, V \right \}$
Consider the depth-first search of an undirected graph with $3$ vertices $P$, $Q$, and $R$. Let discovery time $d(u)$ represent the time instant when the vertex $u$ is first visited, and finish time $f(u)$ represent the time instant when the vertex $u$ is last visited. Given that
$$\begin{array}{ll}\hline \text{$d(P) = 5$ units } & \text{ $f(P) = 12$ units } \\\hline \text{$d(Q) = 6$ units} & \text{ $f(Q) = 10$ units} \\\hline \text{$d(R) = 14$ units} & \text{$f(R) = 18$ units} \\\hline \end{array}$$
Which one of the following statements is TRUE about the graph?
 There is only one connected component
 There are two connected components, and $P$ and $R$ are connected
 There are two connected components, and $Q$ and $R$ are connected
 There are two connected components, and $P$ and $Q$ are connected
In an unweighted, undirected connected graph, the shortest path from a node $S$ to every other node is computed most efficiently, in terms of time complexity, by
Consider the DAG with $V = \{1,2,3,4,5,6\}$ shown below.
Which of the following is not a topological ordering?
A depthfirst search is performed on a directed acyclic graph. Let $d[u]$ denote the time at which vertex $u$ is visited for the first time and $f[u]$ the time at which the DFS call to the vertex $u$ terminates. Which of the following statements is always TRUE for all edges $(u, v)$ in the graph ?
Consider a weighted, undirected graph with positive edge weights and let $uv$ be an edge in the graph. It is known that the shortest path from the source vertex $s$ to $u$ has weight 53 and the shortest path from $s$ to $v$ has weight 65. Which one of the following statements is always TRUE?
The Breadth First Search algorithm has been implemented using the queue data structure. One possible order of visiting the nodes of the following graph is:
Dijkstra's single source shortest path algorithm when run from vertex $a$ in the above graph, computes the correct shortest path distance to
The most efficient algorithm for finding the number of connected components in an undirected graph on $n$ vertices and $m$ edges has time complexity
Consider the following sequence of nodes for the undirected graph given below:
 $a$ $b$ $e$ $f$ $d$ $g$ $c$
 $a$ $b$ $e$ $f$ $c$ $g$ $d$
 $a$ $d$ $g$ $e$ $b$ $c$ $f$
 $a$ $d$ $b$ $c$ $g$ $e$ $f$
A Depth First Search (DFS) is started at node $a$. The nodes are listed in the order they are first visited. Which of the above is/are possible output(s)?
Which of the following statement(s) is/are correct regarding the Bellman-Ford shortest path algorithm?
P: Always finds a negative weighted cycle, if one exists.
Q: Finds whether any negative weighted cycle is reachable from the source.
Consider the directed graph shown in the figure below. There are multiple shortest paths between vertices $S$ and $T$. Which one will be reported by Dijkstra’s shortest path algorithm? Assume that, in any iteration, the shortest path to a vertex $v$ is updated only when a strictly shorter path to $v$ is discovered.
What is the time complexity of the Bellman-Ford single-source shortest path algorithm on a complete graph of $n$ vertices?
Let $G$ be a graph with $n$ vertices and $m$ edges.What is the tightest upper bound on the running time of Depth First Search on $G$, when $G$ is represented as an adjacency matrix?
Consider the directed graph given below.
Which one of the following is TRUE?
 The graph does not have any topological ordering.
 Both PQRS and SRQP are topological orderings.
 Both PSRQ and SPRQ are topological orderings.
 PSRQ is the only topological ordering.
Consider the tree arcs of a BFS traversal from a source node $W$ in an unweighted, connected, undirected graph. The tree $T$ formed by the tree arcs is a data structure for computing
 the shortest path between every pair of vertices.
 the shortest path from $W$ to every vertex in the graph.
 the shortest paths from $W$ to only those nodes that are leaves of $T$.
 the longest path in the graph.
Suppose depth first search is executed on the graph below starting at some unknown vertex. Assume that a recursive call to visit a vertex is made only after first checking that the vertex has not been visited earlier. Then the maximum possible recursion depth (including the initial call) is _________.
Let $G = (V, E)$ be a simple undirected graph, and $s$ be a particular vertex in it called the source. For $x \in V$, let $d(x)$ denote the shortest distance in $G$ from $s$ to $x$. A breadth first search (BFS) is performed starting at $s$. Let $T$ be the resultant BFS tree. If $(u, v)$ is an edge of $G$ that is not in $T$, then which one of the following CANNOT be the value of $d(u) - d(v)$?
Consider the following directed graph:
The number of different topological orderings of the vertices of the graph is _____________.
In an adjacency list representation of an undirected simple graph $G=(V, E)$, each edge $(u, v)$ has two adjacency list entries: $[v]$ in the adjacency list of $u$, and $[u]$ in the adjacency list of $v$. These are called twins of each other. A twin pointer is a pointer from an adjacency list entry to its twin. If $|E|=m$ and $|V|=n$, and the memory size is not a constraint, what is the time complexity of the most efficient algorithm to set the twin pointer in each entry in each adjacency list?
Let $G=\left ( V,E \right )$ be $any$ connected, undirected, edgeweighted graph. The weights of the edges in $E$ are positive and distinct. Consider the following statements:
 Minimum Spanning Tree of $G$ is always unique.
 Shortest path between any two vertices of $G$ is always unique.
Which of the above statements is/are necessarily true?
The Breadth First Search (BFS) algorithm has been implemented using the queue data structure. Which one of the following is a possible order of visiting the nodes in the graph below?
Given a weighted directed graph with $n$ vertices where edge weights are integers (positive, zero, or negative), determining whether there are paths of arbitrarily large weight can be performed in time
Consider the following directed graph.
Suppose a depthfirst traversal of this graph is performed, assuming that whenever there is a choice, the vertex earlier in the alphabetical order is to be chosen. Suppose the number of tree edges is $T$, the number of back edges is $B$ and the number of cross edges is $C$. Then
Answers: Graph Algorithms
Answer is $B$.
 True.
 False.
 True.
 True.
 While adding vertex $u$ to $I$, it should not have an edge with any node already in $I$.
 The algorithm runs till $V$ is empty (in $O(n)$ time) and checks $u$ against each vertex $v$ in the set $I$ (in $O(n)$ time). So, the overall complexity is $O(n^2)$. A sketch of this greedy construction is given below.
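Below is a minimal C sketch of the greedy construction this explanation describes; the adjacency matrix adj[][], the bound MAXN, and the marker array in_I[] are illustrative assumptions, not part of the original question.

#define MAXN 100
int adj[MAXN][MAXN];   /* adj[u][v] = 1 if edge (u,v) exists */
int in_I[MAXN];        /* in_I[u] = 1 if u was added to the independent set I */

/* Greedy independent set: scan vertices in order; add u to I only if u has
   no edge to any vertex already in I. O(n) vertices x O(n) checks = O(n^2). */
void build_independent_set(int n)
{
    for (int u = 0; u < n; u++) {
        int ok = 1;
        for (int v = 0; v < n; v++)
            if (in_I[v] && adj[u][v]) { ok = 0; break; }
        if (ok)
            in_I[u] = 1;
    }
}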
$\quad\textbf{DIJKSTRA}{(G,w,s)}$
$\quad 1 \quad \textbf{INITIALIZE-SINGLE-SOURCE}(G,s)$
$\quad 2 \quad S = \emptyset$
$\quad 3\quad Q = G.V$
$\quad 4\quad \textbf{while } Q \neq \emptyset$
$\quad 5\quad \quad u = \textbf{EXTRACT-MIN}(Q)$
$\quad 6\quad \quad S = S \cup \{u\}$
$\quad 7\quad \quad \textbf{for each vertex } v \in G.Adj[u]$
$\quad 8\quad \quad\quad \textbf{RELAX}(u,v,w)$
Correct Solutions:
$(A).$
$Q=\left\{\overset{\boxed{0}}{\text{A}},\overset{\boxed{\infty}}{\text{B}}, \overset{\boxed{\infty}}{\text{C}}, \overset{\boxed{\infty}}{\text{D}}, \overset{\boxed{\infty}}{\text{E}}, \overset{\boxed{\infty}}{\text{F}} \right\}$
 First we visit $\text{A}$ as it is the one with smallest distance $0$
 Relax operation updates distances to $\text{B,C,F}$ as $6,90,70$ respectively
$Q=\left\{\overset{\boxed{6}}{\text{B}}, \overset{\boxed{90}}{\text{C}}, \overset{\boxed{\infty}}{\text{D}}, \overset{\boxed{\infty}}{\text{E}}, \overset{\boxed{70}}{\text{F}} \right\}$  $\text{B}$ is next visited as its distance is now $6$
 Relax operation updates distance to $\text{D}$ as $41+6=47$
$Q=\left\{ \overset{\boxed{90}}{\text{C}}, \overset{\boxed{47}}{\text{D}}, \overset{\boxed{\infty}}{\text{E}}, \overset{\boxed{70}}{\text{F}} \right\}$  $\text{D}$ is next visited as its distance is now $47$
 Relax operation updates distance to $\text{C}$ as $47+12=59$
$Q=\left\{ \overset{\boxed{59}}{\text{C}}, \overset{\boxed{\infty}}{\text{E}}, \overset{\boxed{70}}{\text{F}} \right\}$  $\text{C}$ is next visited as its distance is now $59$
 Relax operation updates distance to $\text{F}$ as $59+10=69$
$Q=\left\{ \overset{\boxed{\infty}}{\text{E}}, \overset{\boxed{69}}{\text{F}} \right\}$  $\text{F}$ is next visited as its distance is now $69$
 Relax operation updates distance to $\text{E}$ as $69+15=84$
$Q=\left\{\overset{\boxed{84}}{\text{E}}\right\}$  Finally $\text{E}$ is visited.
So, the sequence of node visits is $\text{A}, \text{B}, \text{D}, \text{C}, \text{F}, \text{E}$
$(B).$ Sequence of vertices in the shortest path from $\text{A}$ to $\text{E}$: $\text{A} \to \text{B} \to \text{D} \to \text{C} \to \text{F} \to \text{E}$
$(C).$ Cost of the shortest path from $\text{A}$ to $\text{E} = 84.$
Answer is $C$.
X - 3: DFS uses a stack implicitly
Y - 2: BFS uses a queue explicitly in the algorithm
Z - 1: Heap - Heapsort
Answer is $(C)$.
BFS computes shortest paths from the source (when all edge costs are $1$).
Now, if $u$ is visited before $v$, it means one of $2$ things:
 Either $u$ is closer to $r$ than $v$ is, or,
 if $u$ and $v$ are at the same distance from $r$, then our BFS algorithm chose to visit $u$ before $v$.
INITIALIZATION:
For i = 1 ... n {
    For j = 1 ... n {
        if a[i,j] = 0 then P[i,j] = infinite   // i.e. if there is no direct path then put infinite
        else P[i,j] = a[i,j];
    }
}
ALGORITHM:
For k = 1 ... n {          // k (the intermediate vertex) must be the outermost loop
    For i = 1 ... n {
        For j = 1 ... n {
            P[i,j] = min( P[i,j], P[i,k] + P[k,j] );
        }
    }
}
Time complexity $O(n^3)$
This algorithm is for weighted graph but it will work for unweighted graph too because if $p[i,j]=1$, $p[i,k]=1$ and $p[k,j]=1$ then according to the algorithm $p[i,j] = \min(p[i,j] ,p[i,k] + p[k,j]) = \min(1,2) =1$
And all the other cases are also satisfied (like if $p[i,j]$ was $0$ in the last iteration and there exists a path via $k$).
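To make the pseudocode above concrete, here is a minimal C sketch under the same conventions (a[i][j] == 0 meaning "no direct path"); the names N and INF are illustrative assumptions:

#include <limits.h>
#define N 100
#define INF (INT_MAX/2)   /* halved so INF + INF does not overflow */

/* Floyd-Warshall as described above: p[i][j] ends as the cost of the
   cheapest path from i to j (INF if none exists). */
void floyd_warshall(int n, int a[N][N], int p[N][N])
{
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            p[i][j] = (a[i][j] == 0) ? INF : a[i][j];
    for (int k = 0; k < n; k++)        /* intermediate vertex, outermost */
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (p[i][k] + p[k][j] < p[i][j])
                    p[i][j] = p[i][k] + p[k][j];
}

For an unweighted graph, set a[i][j] = 1 for every edge; the same code then computes the number of edges on a shortest path.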
In DFS, we go in depth first i.e., one node to another in depth first order.
Here, $a-b-f-e-h-g$ is not possible as we cannot go from $f$ to $e$ directly.
Thus, option $(D)$ is correct.
In all the other options, each node in the sequence is directly reachable from the previous one, which can be checked by inspection.
After applying the shortest path algorithm, check the cost from the source to every vertex in $G_1$. If $G_1$ is connected, all these costs must be $0$, as the edge weights in the subgraph $G_1$ are $0$ and those must be the shortest paths. If the cost to at least one vertex in $G_1$ (not necessarily in $G$) is not $0$, then $G_1$ is disconnected.
Answer is B.
D is correct.
Consider a graph with $2$ nodes and one edge from $V_1$ to $V_2$.
Running the above algorithm will result in $A$ being
$$\begin{array}{lll} \hline \textbf{A} & \textbf{1} & \textbf{2} \\\hline \textbf{1} & \text{1} & \text{2} \\\hline \textbf{2} & \text{1} & \text{2} \\\hline \end{array}$$
Clearly options B and C are wrong, since:
 $A[1][1]$ and $A[2][2] > n-1$ and there exists no Hamiltonian cycle. Hence invalid.
 The longest path between $V_1$ and $V_2$ is $1$, but $A[1][2]$ is $2$, which is invalid. Also, there is no path from $V_2$ to $V_1$, yet $A[2][1] = 1$. (It should be the max-cost path between $j$ and $k$, not the path length.)
Hence A or D could be valid.
Now consider a graph with $2$ nodes and two edges, one from $V_1$ to $V_2$ and the other from $V_2$ to $V_1$. Running the above algorithm will result in $A$ being
$$\begin{array}{lll} \hline \textbf{A} & \textbf{1} & \textbf{2} \\\hline \textbf{1} & \text{2} & \text{3} \\\hline \textbf{2} & \text{3} & \text{4} \\\hline \end{array}$$
Hence option $A$ is invalid, as $A[i][j]$ can be $>n.$
D is correct
Answer is (B). In Dijkstra's algorithm, at each step we pick the unvisited vertex with the smallest tentative distance, add it to the shortest-path tree, and relax its outgoing edges.
Take a tree for example
 False. Every vertex of a tree (other than the leaves) is a cut vertex.
 True.
 False. Without $E$ in $G_1$ and $G_2$, $G_1 \cup G_2$ has no bridge.
 False. $G_1 \cup G_2$, $G_1$ and $G_2$ all three have the same chromatic number of $2$.
Answer is D.
$A$ and $B$ produce disconnected components with the given order in the options, which is never allowed by Prim's algorithm.
$C$ produces a connected component every time a new edge is added, BUT when the first vertex is chosen (the first vertex is chosen randomly), the first edge must be the minimum-weight edge incident on it. Therefore, $(A,D)$ must be chosen before $(A,B)$, and $C$ is false.
Option $(B)$: Fibonacci heap. $E$ decrease-key operations, each taking $O(1)$ amortized time, $+$ $V$ extract-min operations, each taking $O(\log V)$.
Option $(A)$: Array. Finding the min vertex in each iteration takes $O(V)$, and this needs to be done $V$ times.
A Binomial heap behaves the same as a Binary heap here, as the critical operations are decrease-key and extract-min.
Correct Answer: $D$
The answer should be Option A because edge $e$ is the lightest safe edge connecting $X$ and $Y$ so the minimum spanning tree of $G$ must contain $e$ (Greedy and optimal choice).
While option $(B)$ might seem correct, it is not always true. One such case is when $G$ is not connected, so there might not be any path between $s$ and $t$.
Since the question is about definitely TRUE, $(B)$ is incorrect and $(A)$ is the only correct option
Let's say $AC = 1$, $CD = 2$, $BD = 3$ and $AB = 4$.
Then if $s = A$ and $t = B$, $AC$ is the lightest edge crossing $X$ and $Y$, where $X = \{A\}$ and $Y = \{C, B, D\}$.
But clearly $AC$ is not on the shortest path from $A$ to $B$. The shortest path is $AB = 4$.
Here answer should be A.
Here, shortest path will give $6$.
Spanning tree contains edges of weights $2$,$3$,$4$ so congestion in this case is $ \max (2,3,4)$, that is, $4$. For path $s$ to $t$, overall congestion is $\max(3,4) =4$ but total weight is $7$.
Options $C$ and $D$ are not related to this question.
Tree edges are those edges which appear in the final DFS forest. For example in case of connected graph (i.e. resulting DFS forest containing only one tree), if we run DFS over it, edges belonging to the resulting DFS tree are called tree edges.
Let us assume the graph has $x$ number of connected (or strongly connected in case of a directed graph) components. And assume 1st component has $K_1$ tree edges, 2nd component has $K_2$ tree edges and xth component has $K_x$ tree edges.
Such that $K_1 + K_2 + K_3 + \ldots + K_x = K $ ( = total)
Or in other way we can imagine like, the final DFS forest has $x$ trees and those trees are having $K_1$ , $K_2$ , $K_3, \ldots, K_x$ edges respectively.
Now, we know that a tree having $K_i$ edges contains $K_i + 1$ nodes, for each component $i$.
So, summation of nodes in each tree $= n$
$(K_1 + 1) + (K_2 + 1) + (K_3 + 1) + \ldots+ (K_x + 1) = n \\ \ \ \ \ \implies (K_1+K_2+K_3+\ldots+K_x) + x = n \\ \ \ \ \ \implies x = n - K$
Correct Answer: $D$
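To see the $x = n - K$ relation in running code, here is a small C sketch; the adjacency-matrix representation, the bound MAXN and the global counters are illustrative assumptions:

#define MAXN 100
int g[MAXN][MAXN];      /* g[u][v] = 1 if edge (u,v) exists */
int visited[MAXN];
int n, tree_edges;      /* n vertices; tree_edges counts k */

void dfs(int u)
{
    visited[u] = 1;
    for (int v = 0; v < n; v++)
        if (g[u][v] && !visited[v]) {
            tree_edges++;   /* (u,v) becomes a tree edge */
            dfs(v);
        }
}

int components(void)
{
    int x = 0;
    for (int u = 0; u < n; u++)
        if (!visited[u]) { x++; dfs(u); }
    return x;               /* always equals n - tree_edges */
}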
 Bellman-Ford algorithm $\implies \text {option} (C)$, $O(nm)$. With $n$ vertices and $m$ edges, we relax all edges in each of the $n-1$ passes: $O(nm)$.
 Kruskal’s algorithm $\implies$ remaining option $(A)$: $O(m \log n)$.
 Floyd-Warshall algorithm $\implies$ option $(B)$, a dynamic programming algorithm, $O(n^3)$.
 Topological sorting $\implies \text {option} (D)$, boils down to DFS, $O(n+m)$.
Answer (A).
If there is a sink in the graph, the adjacency matrix will contain all $1$s (except the diagonal) in one column and all $0$s (except the diagonal) in the corresponding row of that vertex. The given algorithm is a smart way of checking this, as it finds the sink in $O(n)$ time.
The first part of the code finds a vertex that has no outgoing edge to any vertex coming after it in the adjacency matrix. The smart part of the code is $E_2$, which skips rows that cannot form a sink because there is no edge from $i$ to them. This is done through
 $E_1: !A[i][j]$ and $E_2: i = j;$
$E_1$ makes sure that there is no edge from $i$ to $j$, and $i$ remains a potential sink until $A[i][j]$ becomes $1$. If $A[i][j]$ becomes $1$, $i$ can no longer be a sink; similarly, all previous $j$ can also not be a sink (as there was no edge from $i$ to them, and a sink requires an edge from all other vertices). Now, the next potential candidate for a sink is $j$. So, in $E_2$, we must make $i = j$.
So, answer is (C)
For $E_3$, https://gateoverflow.in/3857/gate2005it_84b
If there is a sink in the graph, the adjacency matrix will contain all $1$s (except the diagonal) in one column and all $0$s (except the diagonal) in the corresponding row of that vertex. The given algorithm is a smart way of checking this, as it finds the sink in $O(n)$ time.
The first part of the code finds a vertex that has no outgoing edge to any vertex coming after it in the adjacency matrix. The smart part of the code is $E_2$, which skips rows that cannot form a sink because there is no edge from $i$ to them. This is done through
$E_1: \ !A[i][j]$
and
$E_2: i = j$;
$E_1$ makes sure that there is no edge from $i$ to $j$ and $i$ is a potential sink till $A[i][j]$ becomes $1$. If $A[i][j]$ becomes $1$, $i$ can no longer be a sink, similarly all previous $j$ can also not be a sink (as there was no edge from $i$ to them and a sink requires an edge from all other vertices). Now, the next potential candidate for sink is $j$. So, in $E_2$, we must make $i = j$.
Now, the loop breaks when we have found a potential sink, that is, a vertex which does not have any outgoing edge to any vertex coming after it in the adjacency matrix. So, if the column in which this vertex appears is all $1$s and its row is all $0$s (except the diagonal), this is the sink; otherwise there is no sink in the graph. So, $E_3$ is checking this condition.
But in the code $\text{flag}$ is used for storing the state that sink is present or not. And as per the usage of flag in code, by default sink is considered present. So, the condition in $E_3$ must make flag = 0, if the found $i$ is not a sink. So, the condition should be:
$A[i][j] \ || \ !A[j][i]$
So, (D) is the answer.
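Putting the three expressions together, here is a runnable C sketch of the whole procedure (the bound MAXN and the function name are illustrative assumptions):

#define MAXN 100

/* E1 = !A[i][j], E2 = i = j, E3 = A[i][j] || !A[j][i].
   Returns 1 if a sink exists (vertex i), 0 otherwise. Runs in O(n):
   j only ever moves forward across outer iterations, and the final
   check scans one row and one column. */
int has_sink(int n, int A[MAXN][MAXN])
{
    int i = 0, j;
    do {
        j = i + 1;
        while ((j < n) && !A[i][j])   /* E1: i is still a candidate */
            j++;
        if (j < n)
            i = j;                    /* E2: j becomes the new candidate */
    } while (j < n);
    for (j = 0; j < n; j++)
        if ((j != i) && (A[i][j] || !A[j][i]))   /* E3 */
            return 0;                 /* "Sink does not exist" */
    return 1;                         /* "Sink exists" */
}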
Answer is A: Queue
We can find single source shortest path in unweighted graph by using Breadth First Search (BFS) algorithm by using "Queue" data structure , in time $O(m+n)$ (i.e. linear with respect to the number of vertices and edges. )
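A minimal C sketch of this BFS follows; the edge-array adjacency-list layout head[]/nxt[]/to[] and the size bounds are illustrative assumptions:

#include <string.h>
#define MAXV 1000
#define MAXE 100000

int head[MAXV];            /* head[u] = first edge index of u, or -1 */
int nxt[MAXE], to[MAXE];   /* edge e: endpoint to[e], next edge nxt[e] */
int dist[MAXV], q[MAXV];

void bfs(int n, int s)
{
    memset(dist, -1, n * sizeof(int));    /* -1 marks "not visited" */
    int front = 0, back = 0;
    dist[s] = 0;
    q[back++] = s;
    while (front < back) {
        int u = q[front++];
        for (int e = head[u]; e != -1; e = nxt[e])
            if (dist[to[e]] == -1) {
                dist[to[e]] = dist[u] + 1;   /* one more edge than u */
                q[back++] = to[e];
            }
    }
}

Each vertex enters the queue once and each edge is scanned once, which gives the $O(m+n)$ bound.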
A single counterexample diagram eliminates options A, B and C.
Hence, D is the answer.
Here the answer is B.
A graph is said to be strongly connected if every vertex is reachable from every other vertex.
A strongly connected component is always maximal: if $X$ is a strongly connected component, there must not exist another strongly connected component that strictly contains $X$.
If we take $\{R\}$ alone, it is strongly connected, but it is part of $\{P,Q,R,S\}$, and $\{P,Q,R,S\}$ is part of $\{P,Q,R,S,T,V\}$.
As seen in the question, after time $10$ we return to $P$ again; since $P$ is finished (at $12$) before $R$ is started (at $14$), $R$ must be disconnected from $P$ and $Q$. If there were an edge from $Q$ to $R$, then $R$ would have been visited before $Q$ and $P$ finished.
D is the answer.
Dijkstra's and Warshall's algorithms are meant for weighted graphs.
Both $DFS$ and $BFS$ can find a path between $2$ vertices in an undirected, unweighted graph, but only $BFS$ is guaranteed to give the shortest path, as asked in the question. So, BFS is the answer.
Note: finding just any path ($DFS$) and finding the shortest path ($BFS$) are very different things.
Must Read:
https://www.quora.com/WhataretheadvantagesofusingBFSoverDFSorusingDFSoverBFSWhataretheapplicationsanddownsidesofeach
Correct Answer: $D$
Pick a vertex with in-degree $0$, output it, remove it along with all edges going out of it, and repeat the procedure.
We see that $3$ cannot come first because its in-degree is not $0$. So, D is the answer here.
All other options are in topological order.
Only $1$ and $4$ order matter for this question.
Let us disprove the wrong options with one counterexample:
 $d[u] < d[v]$: if DFS starts at $v$ first, and a later DFS call on some other vertex $x$ visits $u$, then $d[u] > d[v]$.
 $d[u] < f[v]$: same counterexample as A.
 $f[u] < f[v]$: same counterexample as A again.
So, answer is D.
If weight$(u, v) < 12$, then the path $s \to u \to v$ would have weight $=$ weight of $(s, u)$ $+$ weight of $(u, v) = 53 + (<12) < 65$, contradicting the fact that the shortest path from $s$ to $v$ has weight $65$. Hence, weight$(u,v) \geq 12$.
 MNOPQR: If you try to run BFS, after M, you must traverse NQR (In some order). Here, P is traversed before Q, which is wrong.
 NQMPOR: This is also not BFS. P is traversed before O.
 QMNPRO: Correct.
 QMNPOR: Incorrect. Because R needs to be traversed before O.(Because M is ahead of N in queue).
Answer : C
Run DFS to find connected components. Its time complexity is $\Theta(m+n)$, hence (C) is the answer.
Answer: B
1. After $f$ is visited, $c$ or $g$ should be visited next. So, the traversal is incorrect.
4. After $c$ is visited, $e$ or $f$ should be visited next. So, the traversal is incorrect.
$2$ and $3$ are correct.
$\underline{\text{Bellman-Ford Algorithm}}$
Single-source shortest paths: $O(VE)$.
Every edge is relaxed once in each iteration, and there are $(V-1)$ iterations, since a shortest path can contain at most $(V-1)$ edges:
$E \times (V-1) = O(V \cdot E)$
$$\begin{array}{lllll}\hline & \textbf{A} & \textbf{B} & \textbf{C} & \textbf{D} \\\hline & \text{0} & \text{$\infty$} &\text{$\infty$} & \text{$\infty$}\\ &\text{null} & \text{null} & \text{null} & \text{null}\\\hline \text{i=0}&\text{0} & \text{50} & \text{70} & \text{$\infty$}\\ & \text{X} & \text{A} & \text{A} & \text{X}\\\hline \text{i=1} & \text{0} & \text{2} & \text{70} & \text{53} \\ & \text{X} & \text{C} & \text{A} & \text{B}\\\hline \text{i=2} & \text{0} & \text{2} & \text{70} & \text{5} \\ & \text{X} & \text{C} & \text{A} & \text{B}\\\hline \text{i=3} & \text{0} & \text{2} & \text{70} & \text{5} \\ & \text{X} & \text{C} & \text{A} & \text{B}\\\hline \end{array}$$
As we can see, the last step is the verification step; in that step, the values remained unchanged. If there were a negative edge-weight cycle reachable from the source, then at the verification step those values would still change.
In case the cycle is not reachable from the source, its vertices stay at distance (or cost) $\infty$ from the source from the beginning till the last step, since relaxing through $\infty$ still leaves $\infty$.
But there can also be vertices that do not lie on any cycle and are still unreachable from the source; those too stay at distance $\infty$ from beginning to end.
Hence, we cannot distinguish between such a cycle and such vertices. Thus, we say that this algorithm can detect negative edge-weight cycles only if they are reachable from the source.
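A minimal C sketch of Bellman-Ford with this verification pass (the edge-list arrays u[], v[], w[] and INF are illustrative assumptions):

#include <limits.h>
#define INF (INT_MAX/2)

/* Returns 1 iff a negative-weight cycle is reachable from source s;
   otherwise dist[] holds the shortest distances. m edges, n vertices. */
int bellman_ford(int n, int m, const int u[], const int v[],
                 const int w[], int s, int dist[])
{
    for (int i = 0; i < n; i++) dist[i] = INF;
    dist[s] = 0;
    for (int pass = 0; pass < n - 1; pass++)      /* n-1 relaxation rounds */
        for (int e = 0; e < m; e++)
            if (dist[u[e]] != INF && dist[u[e]] + w[e] < dist[v[e]])
                dist[v[e]] = dist[u[e]] + w[e];
    for (int e = 0; e < m; e++)                   /* verification round */
        if (dist[u[e]] != INF && dist[u[e]] + w[e] < dist[v[e]])
            return 1;   /* a value still improves: reachable negative cycle */
    return 0;
}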
Answer is option B
Relaxation at every vertex is as follows:
Note that the next picked vertex corresponds to the next row in Table
$$\scriptsize{\begin{array}{cc} \hline \text{} & \textbf{A} & \textbf{B} & \textbf{C} & \textbf{D} & \textbf{E} & \textbf{F} & \textbf{G} & \textbf{T} \\\hline
\textbf{S} & \text{4} & \boxed{\text{$3$}} & \text{$\infty$} & \text{7} & \text{$\infty$} & \text{$\infty$} & \text{$\infty$} & \text{$\infty$}\\
&S\to A&\boxed{\bf{S\to B}}&&S \to D\\
\\\hline
\textbf{B} & \boxed{\text{$4$}} & & \text{$\infty$} & 7 & \text{$\infty$} & \text{$\infty$} & \text{$\infty$} & \text{$\infty$}\\
&\boxed{\bf{S \to A}} &&& S \to D \\\hline
\textbf{A} & \text{} & \text{} & \boxed{5} & \text{7} & \text{$\infty$} & \text{$\infty$} & \text{$\infty$} & \text{$\infty$}\\
&&&\boxed{\bf{S\to A \to C}}&S \to D
\\\hline
\textbf{C} & \text{} & \text{} & \text{} & \text{7} & \boxed{6} & \text{$\infty$} & \text{$\infty$} & \text{$\infty$}\\
&&&&S \to D&\boxed{\bf{S\to A\\\to C\to E}}\\
\hline
\textbf{E} & \text{} & \text{} & \text{} & \boxed{\text{$7$}} & \text{} & \text{$\infty$} & \text{$8$} & \text{$10$} \\
&&&&\boxed{\bf{S \to D}}&&&S\to A \to C &S\to A \to C
\\&&&&&&&\to E \to G & \to E \to T
\\ \hline
\textbf{D} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{$12$} & \boxed{8} & \text{$10$}\\
&&&&&&{S \to D \to F}&\boxed{\bf{S\to A \to C\\ \to E \to G}}&S\to A \to C\\
&&&&&&&& \to E \to T
\\\hline
\textbf{G} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{12} & \text{} & \boxed{10}
\\
&&&&&&{S \to D \to F}&&\boxed{\bf{S\to A \to C \\\to E \to T}}
\\\hline
\textbf{T} & \text{} & \text{} & \text{} & & \text{} & \boxed{\text{12}} & \text{} & \text{}\\
&&&&&&\boxed{\bf{S \to D \to F}}&\\
\hline \end{array}}$$
For $S$ to $T$ shortest path is $\boxed{S \to A \to C \to E \to T}$
Option : D
Correct Answer: $C$
Depth First Search of a graph takes $O(m+n)$ time when the graph is represented using an adjacency list. In the adjacency matrix representation, the graph is represented as an $n \times n$ matrix. To do DFS, for every vertex we traverse the row corresponding to that vertex to find all adjacent vertices (in the adjacency list representation we traverse only the adjacent vertices of the vertex). Therefore the time complexity becomes $O(n^2)$.
Correct Answer: $C$
C. Both PSRQ and SPRQ are topological orderings
 Apply DFS by choosing P or S as starting vertices
 As the vertex gets a finishing time assign it to the head of a linked list
 The linked list is your required topological ordering
Correct Answer: $B$
There are $21$ nodes in total, and $2$ nodes require backtracking in this question.
So, the maximum recursion depth is $21-2 = 19$.
(Do DFS from an extreme end so that the maximum recursion depth occurs, i.e., take the leftmost top node as the initial node for $DFS$, as shown in the image below.)
Note: backtracking reduces the recursion depth in the stack.
$d(u) - d(v) = 0$ is possible when both $u$ and $v$ have an edge from $t$, and $t$ is in the shortest path from $s$ to $u$ or $v$.
$d(u) - d(v) = 1$ is possible when $v$ and $t$ are in the shortest path from $s$ to $u$, and $t$ and $v$ are siblings at the same distance from $s$, causing the $t-u$ edge to be in the BFS tree and not $v-u$.
$d(u) - d(v) = -1$ is possible as explained above, by interchanging $u$ and $v$.
$d(u) - d(v) = 2$ is not possible. This is because on the BFS traversal we visit either $u$ or $v$ first. Say we take $u$ first: we put all neighbours of $u$ on the queue, and since $v$ is a neighbour and not visited before (as assumed), $d(v)$ becomes $d(u) + 1$. Similarly when $v$ is visited first.
$$a \;\_ \; \_ \; \_ \; \_ \; f$$
Blank spaces are to be filled with $b$, $c$, $d$, $e$ such that $b$ comes before $c$, and $d$ comes before $e$.
Number of ways to arrange $b$, $c$, $d$, $e$ such that $b$ comes before $c$ and $d$ comes before $e$, will be = $4!/(2!*2!) = 6$
In topological sorting all nodes are like tasks and edges show the dependency among the tasks.
An edge from node $i$ to node $j$ means task $i$ must complete before task $j$ (in the meantime some other task may complete after task $i$ and before task $j$, but the $i$-before-$j$ sequence must be maintained).
The $6$ orderings above are exactly the ways all $6$ tasks can be completed.
No. of nodes at level $1$ of the tree $\Rightarrow 2$
No. of nodes at level $2$ of the tree $\Rightarrow 4$
No. of nodes at level $3$ of the tree $\Rightarrow 8$
No. of nodes at level $4$ of the tree $\Rightarrow 16$
The last node in level $4$ is the node we are looking for $\Rightarrow 1+2+4+8+16 \Rightarrow 31$
We can take an extra array of size $V^2$, as memory is not a constraint.
Do the BFS traversal, and for each edge $(u,v)$ in the adjacency list of the graph, store the address of the entry for $v$ in $a[u][v]$.
For e.g., if we find $1\to 2$ as an edge, then store the address of node $2$'s entry in location $a[1][2]$.
Now once we have stored the addresses of all the edge $(u,v)$ entries, do BFS again.
Now when we encounter $1\to 2$, go to $a[2][1]$, which holds the twin's address, and set this twin pointer in node $2$'s entry in list $1$.
Do the same for all edges.
Remember, we have not initialized the memory, as we will use only the slots we need; initializing the memory would take $O(V^2)$ time. We can also use one hash table per node to store the addresses during the first traversal: for an edge $(u,v)$, the key in $u$'s hash table is the vertex $v$, and the value is the address of the entry for $v$.
Correct Answer: $B$
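A sketch of this two-pass idea in C: keep an $n \times n$ table of entry addresses, record each entry's address in pass 1, and look up the twin in $O(1)$ per entry in pass 2. The struct layout and the bound MAXN are illustrative assumptions:

#include <stddef.h>
#define MAXN 1000

typedef struct entry {
    int v;                    /* the vertex this entry names */
    struct entry *next;       /* next entry in the same list */
    struct entry *twin;       /* to be filled in */
} entry;

entry *slot[MAXN][MAXN];      /* slot[u][v] = address of entry [v] in u's list */

void set_twins(int n, entry *adj[])
{
    for (int u = 0; u < n; u++)            /* pass 1: record addresses */
        for (entry *e = adj[u]; e != NULL; e = e->next)
            slot[u][e->v] = e;
    for (int u = 0; u < n; u++)            /* pass 2: set twin pointers */
        for (entry *e = adj[u]; e != NULL; e = e->next)
            e->twin = slot[e->v][u];
}

Both passes do $\Theta(n+m)$ work, matching the answer; as noted above, the table is never initialized slot by slot.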
Answer is A.
 The MST can fail to be unique only when edge weights are not distinct; here the edge weights are distinct, so the MST is unique. Be careful about the keyword DISTINCT.
 Shortest Path can be different even if the edges are distinct. Example is shown below. Shortest path from A to C is not unique here.
In BFS, starting from a node, we first traverse all nodes adjacent to it, then repeat the same for the next nodes.
Here, we can see that only option (D) follows the BFS sequence properly.
 As per BFS, if we start from $M$, then $R$, $Q$, $N$ (the immediate neighbours of $M$) have to come after it in any order, but in $A$ here, $O$ comes in between. So, it is not BFS.
 As per BFS, if we start from $N$ then $QMO$ has to come after it in any order but in $B$ here, $P$ comes. So, it is not BFS.
 As per BFS, if we start from $Q$ then $MNOP$ has to come after it in any order but in $C$ here, $R$ comes. So, it is not BFS.
But $D$ is following the sequences.
So, D is the correct answer.
Paths of arbitrarily large weight exist exactly when a positive-weight cycle is reachable. By changing the sign of all the edge weights, this becomes the problem of detecting a reachable negative-weight cycle, which the Bellman-Ford algorithm solves in $O(VE)$ time. So, Bellman-Ford can be used.
Correct Answer: $D$
Since whenever there is a choice the alphabetically earlier vertex must be selected, we choose the starting node as $A$.
The DFS tree then becomes $A-B-E-C$. Therefore the number of tree edges is $3$, that is, $T=3$.
Now, there is one cycle $B-E-C$, so we will get a back edge from $C$ to $B$ while performing DFS. Hence $B=1$.
Now, $D$ is left as a disconnected node in the DFS forest, and it can only contribute cross edges. There are $2$ cross edges, $D \to A$ and $D \to B$. Therefore $C=2$.
Answer is Option D.
Answers: Graph Connectivity
Answer: 109
Explanation:
We have to find $2$ things here: the degree of every vertex (which will be the same for all vertices) and the number of connected components.
Instead of $100$, let's solve this by taking lesser value, say $4$.
With $4!$ vertices, each vertex is a permutation of $\left \{ 1,2,3,4 \right \}$. So, we have vertices like $\left \{ 1,2,3,4 \right \}$, $\left \{ 1,3,2,4 \right \}$, $\left \{ 4,1,3,2 \right \}$, ... etc.
Here $\left \{ 1,2,3,4 \right \}$ will be connected with
$\left \{ 2,1,3,4 \right \}$
$\left \{ 1,3,2,4 \right \}$
$\left \{ 1,2,4,3 \right \}$
To get this list, just take $2$ adjacent numbers and swap them, e.g., from $\left \{ 1,2,3,4 \right \}$ swap $1$ and $2$ to get $\left \{ 2,1,3,4 \right \}$.
The given $3$ are the only permutations we can get by swapping only $2$ adjacent numbers of $\left \{ 1,2,3,4 \right \}$. So, the degree of vertex $\left \{ 1,2,3,4 \right \}$ will be $3$. Similarly, any vertex's degree will be $3$.
We got $3$ here because we can choose any of the $3$ pairs of adjacent numbers. So, with $n$, we have $n-1$ adjacent pairs to swap, and the degree will be $n-1$.
In our question, the degree will be $100 - 1 = 99$.
Now let's see how many connected components we have.
It will be $1$. Why?
If one can reach from one vertex to any other vertex, then that means that the graph is connected.
Now if we start with a vertex say $\left \{ 1,2,3,4 \right \}$ we can reach to other vertex, say $\left \{ 4,3,2,1 \right \}$ by the following path:
$\left \{ 1234 \right \} \to \left \{ 1243 \right \} \to \left \{ 1423 \right \} \to \left \{ 4123 \right \} \to \left \{ 4132 \right \} \to \left \{ 4312 \right \} \to \left \{ 4321 \right \}$
Just take two adjacent numbers and swap them. With this operation you can create any permutation, from any given initial permutation.
This way you can show that from any given vertex we can reach any other vertex. This shows that the graph is connected and the number of connected components is 1.
$y = 99$ and $z = 1$
$y + 10z = 99 + 10 \times 1 = 109$
The minimum number of record movements required to merge five files A (with $10$ records), B (with $20$ records), C (with $15$ records), D (with $5$ records) and E (with $25$ records) is:
The following are the starting and ending times of activities $A, B, C, D, E, F, G$ and $H$ respectively in chronological order: $“a_s \: b_s \: c_s \: a_e \: d_s \: c_e \: e_s \: f_s \: b_e \: d_e \: g_s \: e_e \: f_e \: h_s \: g_e \: h_e”$. Here, $x_s$ denotes the starting time and $x_e$ denotes the ending time of activity X. We need to schedule the activities in a set of rooms available to us. An activity can be scheduled in a room only if the room is reserved for the activity for its entire duration. What is the minimum number of rooms required?
We are given $9$ tasks $T_1, T_2, \dots, T_9$. The execution of each task requires one unit of time. We can execute one task at a time. Each task $T_i$ has a profit $P_i$ and a deadline $d_i$. Profit $P_i$ is earned if the task is completed before the end of the $d_i^{th}$ unit of time.
$$\begin{array}{lllllllll}\hline \textbf{Task} & \text{$T_1$ } & \text{$T_2$}& \text{$T_3$} & \text{$T_4$} & \text{$T_5$} & \text{$T_6$} & \text{$T_7$} & \text{$T_8$} & \text{$T_9$} \\\hline \textbf{Profit} & \text{$15$ } & \text{$20$}& \text{$30$} & \text{$18$} & \text{$18$} & \text{$10$} & \text{$23$} & \text{$16$} & \text{$25$} \\\hline \textbf{Deadline} & \text{$7$} & \text{$2$} & \text{$5$} & \text{$3$} & \text{$4$} & \text{$5$} & \text{$2$} & \text{$7$} & \text{$3$} \\\hline \end{array}$$
Are all tasks completed in the schedule that gives maximum profit?
We are given $9$ tasks $T_1, T_2, \dots, T_9$. The execution of each task requires one unit of time. We can execute one task at a time. Each task $T_i$ has a profit $P_i$ and a deadline $d_i$. Profit $P_i$ is earned if the task is completed before the end of the $d_i^{th}$ unit of time.
$$\begin{array}{lllllllll}\hline \textbf{Task} & \text{$T_1$ } & \text{$T_2$}& \text{$T_3$} & \text{$T_4$} & \text{$T_5$} & \text{$T_6$} & \text{$T_7$} & \text{$T_8$} & \text{$T_9$} \\\hline \textbf{Profit} & \text{$15$ } & \text{$20$}& \text{$30$} & \text{$18$} & \text{$18$} & \text{$10$} & \text{$23$} & \text{$16$} & \text{$25$} \\\hline \textbf{Deadline} & \text{$7$} & \text{$2$} & \text{$5$} & \text{$3$} & \text{$4$} & \text{$5$} & \text{$2$} & \text{$7$} & \text{$3$} \\\hline \end{array}$$
What is the maximum profit earned?
The characters $a$ to $h$ have the set of frequencies based on the first $8$ Fibonacci numbers as follows
$a : 1$, $b : 1$, $c : 2$, $d : 3$, $e : 5$, $f : 8$, $g : 13$, $h : 21$
A Huffman code is used to represent the characters. What is the sequence of characters corresponding to the following code?
$110111100111010$
Suppose the letters $a, \,b, \,c, \,d, \,e, \,f$ have probabilities $\frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \frac{1}{16}, \frac{1}{32}, \frac{1}{32}$, respectively.
Which of the following is the Huffman code for the letter $a, \,b, \,c, \,d, \,e, \,f$?
$$\begin{array}{ccc}\hline \textbf{Item number} & \textbf{Weight (in Kgs) }& \textbf{Value (in rupees)} \\\hline \text{$1$} & \text{$10$} & \text{$60$} \\\hline \text{$2$} & \text{$7$} & \text{$28$} \\\hline \textbf{$3$} & \text{$4$} & \text{$20$} \\\hline \text{$4$} & \text{$2$} & \text{$24$} \\\hline \end{array}$$
The task is to pick a subset of these items such that their total weight is no more than $11$ Kgs and their total value is maximized. Moreover, no item may be split. The total value of items picked by an optimal algorithm is denoted by $V_{opt}$. A greedy algorithm sorts the items by their valuetoweight ratios in descending order and packs them greedily, starting from the first item in the ordered list. The total value of items picked by the greedy algorithm is denoted by $V_{greedy}$.
The value of $V_{opt}V_{greedy}$ is ____
Answers: Greedy Algorithm
$\overset{\boxed5}{\text{D}}\quad \overset{\boxed{10}}{\text{A}}\quad\overset{\boxed{15}}{\text{C}}\quad\overset{\boxed{20}}{\text{B}}\quad\overset{\boxed{25}}{\text{E}}$
$\qquad\qquad\qquad\qquad\color{blue}{75}$
$\qquad\qquad\quad\color{blue}{30}\qquad\qquad\qquad\color{blue}{45}$
$\qquad\quad\color{blue}{15}\qquad\overset{\boxed{\text{C}}}{15}\qquad\qquad\overset{\boxed{\text{B}}}{20}\qquad\overset{\boxed{\text{E}}}{25}$
$\quad\overset{\boxed{\text{D}}}{5}\qquad\overset{\boxed{\text{A}}} {10}$
No. of movements $=15+30+45+75=165.$
Correct Answer: $A$
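The same computation in a short C sketch; the function name and the simple two-smallest scan (fine for five files, a min-heap would be used for many) are illustrative choices:

/* Optimal merge as above: repeatedly merge the two smallest files,
   Huffman-style, accumulating the record movements. Mutates sizes[]. */
int merge_cost(int k, int sizes[])
{
    int total = 0;
    while (k > 1) {
        int i = (sizes[0] <= sizes[1]) ? 0 : 1;   /* i: smallest, j: second */
        int j = 1 - i;
        for (int t = 2; t < k; t++) {
            if (sizes[t] < sizes[i]) { j = i; i = t; }
            else if (sizes[t] < sizes[j]) { j = t; }
        }
        sizes[i] += sizes[j];     /* merge j into i ... */
        total += sizes[i];        /* ... moving sizes[i]+sizes[j] records */
        sizes[j] = sizes[--k];    /* delete slot j */
    }
    return total;
}

For the files above, merge_cost(5, (int[]){10, 20, 15, 5, 25}) evaluates to 165.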
Solution: B
The problem can be modeled as a graph coloring problem. Construct a graph with one node corresponding to each activity $A,B,C,D,E,F,G$ and $H$, and connect two activities whose time intervals overlap. Now, the chromatic number of the graph is the number of rooms required.
$ \begin{array}{cccc}\hline
\textbf{Task}&\textbf{Profit}&\textbf{Deadline}&\textbf{Scheduled?}\\ \hline
T_3&30&5&\checkmark \\ \hline
T_9&25&3&\checkmark \\ \hline
T_7&23&2&\checkmark \\ \hline
T_2&20&2&\checkmark \\ \hline
T_5&18&4&\checkmark \\ \hline
T_4&18&3& ✗ \\ \hline
T_8&16&7&\checkmark \\ \hline
T_1&15&7&\checkmark \\ \hline
T_6&10&5& ✗ \\ \hline
\end{array}$
Step 1: Sort the tasks in decreasing order of profit; if any conflict arises between two or more tasks, resolve it by placing the task with the greater deadline first (because we have more time to complete a task with a greater deadline and the same profit).
Step 2: Since the maximum deadline given is $7$, we have $7$ time slots ranging from $0-7$, where a task $T_i$ having deadline, say, $2$ can be filled in slot $0-1$ or $1-2$ and not beyond $2$, because such a task has to be completed by time $T=2$ at the latest.
Now, according to the question, since each task completes in unit time, a single task takes exactly one slot, as shown.
Take the first task in the list, i.e. $T_3$, which has a deadline of $5$, and place it in slot $4-5$, the latest slot by which this task can be completed.
Task $T_9$ with deadline $3$ is similarly placed in slot $2-3$.
Task $T_7$ with deadline $2$ is placed in slot $1-2$.
Task $T_2$ with deadline $2$ can be placed in either $0-1$ or $1-2$ (occupied by $T_7$), so $T_2$ occupies slot $0-1$.
Task $T_5$ with deadline $4$ is placed in slot $3-4$.
Now comes task $T_4$, which has deadline $3$ and can only go in slot $0-1$, $1-2$ or $2-3$, and not beyond that. Unfortunately, all such slots are occupied, so $\bf{T_4}$ is left out.
Task $T_8$ with deadline $7$ goes in slot $6-7$.
Task $T_1$ with deadline $7$ is placed in slot $5-6$.
Now all time slots are full, so task $\bf{T_6}$ is also left out.
So, option (d) is the answer.
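The greedy placement above is easy to mechanize. A short C sketch, assuming profit[] and deadline[] are already sorted by decreasing profit (ties broken by larger deadline) and maxd is the largest deadline; the function name is an illustrative choice:

#include <string.h>

int max_profit(int n, const int profit[], const int deadline[], int maxd)
{
    int slot[maxd + 1];            /* slot[t] = 1 if unit (t-1, t] is taken */
    memset(slot, 0, sizeof slot);
    int total = 0;
    for (int i = 0; i < n; i++)
        for (int t = deadline[i]; t >= 1; t--)   /* latest free slot first */
            if (!slot[t]) {
                slot[t] = 1;
                total += profit[i];
                break;
            }
    return total;
}

For the nine tasks above it returns $147$, leaving out $T_4$ and $T_6$, matching the schedule worked out by hand.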
The most important statement in question is
each task requires one unit of time
This shows that we can greedily choose the better task and that should give us the optimal solution. The best task would be the one with maximum profit. Thus we can sort the tasks based on deadline and then profit as follows:
$$\begin{array}{lllllllll}\hline \textbf{Task} & \text{$T_7$ } & \text{$T_2$}& \text{$T_9$} & \text{$T_4$} & \text{$T_5$} & \text{$T_3$} & \text{$T_6$} & \text{$T_8$} & \text{$T_1$} \\\hline \textbf{Deadline} & \text{2} & \text{2}& \text{3} & \text{3} & \text{4} & \text{5} & \text{5} & \text{7} & \text{7} \\\hline \end{array}$$
$$0 \overset{T_7}{} 1 \overset{T_2}{} 2 \overset{T_9}{} 3 \overset{T_5}{} 4 \overset{T_3}{} 5 \overset{T_8}{} 6 \overset{T_1}{} 7$$
Thus, $T_4$ and $T_6$ are left.
So, maximum profit will not include those of $T_4$ and $T_6$ and will be $=15 + 20 +30 +18 + 16 + 23 + 25 = 147$
A is the answer.
Answer is A. Huffman's tree is as follows. The two least frequent characters are taken as the children of a newly made node and the frequency of the newly made node is made equal to the sum of those two child nodes. Then the same procedure is repeated till all nodes are finished.
$110111100111010 = 110\;11110\;0\;1110\;10 = fdheg$
Based on the probabilities, we can say the probable frequency of the letters will be
$16$, $8$, $4$, $2$, $1$, $1$
Now, the Huffman tree can be constructed as follows:
So, A is the answer for $76$.
https://en.wikipedia.org/wiki/Huffman_coding
$V_{opt}$ is clearly $60$ (item $1$ alone, weighing $10$ Kgs); this can be verified by brute force or by inspection.
Now solving for $V_{greedy}$.
$$\begin{array}{cccc}\hline \textbf{Item name} & \textbf{Weight (in Kgs) }& \textbf{Value (in Rupees)} & \textbf{Value/Weight} \\\hline \text{1} & \text{10} & \text{60} & \text{6} \\\hline \text{2} & \text{7} & \text{28} & \text{4} \\\hline \text{3} & \text{4} & \text{20} & \text{5} \\\hline \text{4} & \text{2} & \text{24} & \text{12} \\\hline \end{array}$$
Sort them in descending order of Value/Weight as per the question.
$$\begin{array}{cccc}\hline \textbf{Item name} & \textbf{Weight (in Kgs) }& \textbf{Value (in Rupees)} & \textbf{Value/Weight} \\\hline \text{4} & \text{2} & \text{24} & \text{12} \\\hline \text{1} & \text{10} & \text{60} & \text{6} \\\hline \text{3} & \text{4} & \text{20} & \text{5} \\\hline \text{2} & \text{7} & \text{28} & \text{4} \\\hline \end{array}$$
Now start picking items. (Note: you cannot take a fraction of the given weight, as per the question.) The maximum weight is given as $11$ kg (inclusive).
 Item $4$ is picked. Weight remaining $= 11-2 = 9$ kg.
 Item $1$ cannot be picked as $10$ kg $> 9$ kg.
 Item $3$ can be picked as $4$ kg $< 9$ kg. Weight remaining $= 9-4 = 5$ kg.
 Item $2$ cannot be picked as $7$ kg $> 5$ kg.
So, item $4$ and item $3$ are picked. Their values are $24$ and $20$ respectively.
$\implies V_{greedy} = 24+20 = 44.$
$V_{opt} - V_{greedy} = 60 - 44 = 16.$
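The greedy pass above, as a short C sketch; the function name and the pre-sorted input are illustrative assumptions:

/* 0/1 greedy by value/weight ratio: items are assumed already sorted by
   ratio descending; pick each item that still fits (no splitting). */
int greedy_value(int n, const int weight[], const int value[], int capacity)
{
    int total = 0;
    for (int i = 0; i < n; i++)
        if (weight[i] <= capacity) {
            capacity -= weight[i];
            total += value[i];
        }
    return total;
}

With weights {2, 10, 4, 7}, values {24, 60, 20, 28} (the sorted table above) and capacity 11, this returns 44, giving $V_{opt} - V_{greedy} = 16$.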
Consider a hash table with chaining scheme for overflow handling:
 What is the worstcase timing complexity of inserting $n$ elements into such a table?
 For what type of instance does this hashing scheme take the worstcase time for insertion?
Answers: Hashing
1.) The worst case occurs when all $n$ elements hash to the same slot, forming a single chain: inserting the $i^{th}$ element then takes $O(i)$ time, so inserting $n$ elements takes $1+2+\dots+n = O(n^2)$ time.
2.) This happens when we are not using a uniformly distributed hash function, or when our hash table has only a single slot into which all the elements are chained up.
Suppose the letters $a, \,b, \,c, \,d, \,e, \,f$ have probabilities $\dfrac{1}{2}, \dfrac{1}{4}, \dfrac{1}{8}, \dfrac{1}{16}, \dfrac{1}{32}, \dfrac{1}{32}$, respectively.
What is the average length of the Huffman code for the letters $a, \,b, \,c, \,d, \,e, \,f$?
Answers: Huffman Code
$a: 0.19,\ b: 0.05,\ c: 0.17,\ d: 0.08,\ e: 0.40,\ f: 0.11$
Since these are relative frequencies, we can multiply each by $100$ and the result remains the same:
$a: 19,\ b: 5,\ c: 17,\ d: 8,\ e: 40,\ f: 11$
To assign the prefix binary codes, let's first create the Huffman tree (the two entries merged at each step are shown in red):
$(1) \ a:19,\ {\color{Red} {b:5}},\ c:17,\ {\color{Red} {d:8}},\ e:40,\ f:11$
$(2) \ a:19,\ {\color{Red} {(b,d):13}},\ c:17,\ e:40,\ {\color{Red} {f:11}}$
$(3) \ {\color{Red} {a:19}},\ (b,d,f):24,\ {\color{Red} {c:17}},\ e:40$
$(4) \ {\color{Red} {(a,c):36,\ (b,d,f):24}},\ e:40$
$(5) \ {\color{Red} {(a,b,c,d,f):60,\ e:40}}$
Now put $0$ on each of the left edges and $1$ on each of the right edges as shown below
$\text{Prefix code}$ = Traverse from the root node to the leaf node and write the symbol $(1 \ or \ 0)$ present on each edge.
 For $a$ the prefix code is $111$ i.e. $3$ bits
 For $b$ the prefix code is $1010$ i.e. $4$ bits
 For $c$ the prefix code is $110$ i.e. $3$ bits
 For $d$ the prefix code is $1011$ i.e. $4$ bits
 For $e$ the prefix code is $0$ i.e. $1$ bits
 For $f$ the prefix code is $100$ i.e. $3$ bits
And the average length of encoded words $=\dfrac{(19\times 3)+(5\times 4)+(17\times 3)+(8\times 4)+(40\times 1)+(11\times 3)}{100}$
$\quad =\dfrac{57+20+51+32+40+33}{100}$
$\quad =\dfrac{233}{100}$
$\quad =2.33$
$$\begin{array}{cc} \hline \textbf{Letter} & \textbf{Probability} \\\hline a & 1/2 \\\hline b & 1/4 \\\hline c & 1/8 \\\hline d & 1/16 \\\hline e & 1/32 \\\hline f & 1/32 \\\hline \end{array}$$
$\begin{array}{cc} \textbf{Prefix Code} & \textbf{Code Length} \\ a = 0 & 1 \\ b =10 & 2 \\ c = 110 & 3 \\ d = 1110 & 4 \\ e = 11110 & 5 \\ f = 11111 & 5 \end{array}$
Avg length $= \dfrac{1}{2} \times 1 + \dfrac{1}{4} \times 2+ \dfrac{1}{8} \times 3+ \dfrac{1}{16} \times 4+ \dfrac{1}{32} \times 5+ \dfrac{1}{32} \times 5 \\ = \dfrac{16 + 16 + 12 + 8 + 5 + 5}{32} \\ = 1.9375 $
Correct Answer: $D$
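A quick sanity check on the result above: when every probability is a power of $\frac{1}{2}$ (a dyadic distribution), each Huffman code length equals $-\log_2 p_i$, so the average length is exactly the entropy:
$$\sum_i p_i \log_2 \frac{1}{p_i} = \frac{1}{2}(1)+\frac{1}{4}(2)+\frac{1}{8}(3)+\frac{1}{16}(4)+\frac{1}{32}(5)+\frac{1}{32}(5) = \frac{62}{32} = 1.9375$$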
$X = \{ P, Q, R, S, T\}$
$∴ \text{Expected length of an encoded character}$
$\qquad \qquad= (0.22 \times 2) + (0.34 \times 2) + (0.17 \times 3) + (0.19 \times 2) + (0.08 \times 3) \hspace{0.1cm} \text{ bits }$
$\qquad \qquad= 0.44 + 0.68 + 0.51+ 0.38 + 0.24 \hspace{0.1cm} \text{ bits}$
$ \qquad \qquad= 2.25 \hspace{0.1cm} \text{ bits } $
$\therefore \text{Expected length of a encoded message of $100$ characters in bits} = 100 \times 2.25 = 225$
What is the output produced by the following program, when the input is "HTGATE"
Function what(s: string): string;
var n: integer;
begin
    n := s.length;
    if n <= 1 then
        what := s
    else
        what := contact(what(substring(s, 2, n)), s.C[1])
end;
Note
 type string = record
    length: integer;
    C: array[1..100] of char
 end;
 Substring(s, i, j): this yields the string made up of the $i^{th}$ through $j^{th}$ characters in $s$, for appropriately defined $i$ and $j$.
 Contact$(s_{1}, s_{2})$: this function yields a string of length $s_1.\text{length} + s_2.\text{length}$, obtained by concatenating $s_{1}$ with $s_{2}$ such that $s_{1}$ precedes $s_{2}$.
The following program computes values of a mathematical function $f(x)$. Determine the form of $f(x)$.
main()
{
    int m, n;
    float x, y, t;
    scanf("%f%d", &x, &n);
    t = 1;
    y = 0;
    m = 1;
    do {
        t *= (-x/m);
        y += t;
    } while (m++ < n);
    printf("The value of y is %f", y);
}
Choose the correct alternatives (more than one may be correct) and write the corresponding letters only:
Consider the following Pascal function:
Function X(M: integer): integer;
Var i: integer;
Begin
    i := 0;
    while i*i < M do
        i := i + 1;
    X := i
end
The function call $X(N)$, if $N$ is positive, will return
What does the following code do?
var a, b: integer;
begin
    a := a + b;
    b := a - b;
    a := a - b;
end;
What function of $x$, $n$ is computed by this program?
Function what(x, n: integer): integer;
Var value: integer;
begin
    value := 1;
    if n > 0 then
    begin
        if n mod 2 = 1 then
            value := value * x;
        value := value * what(x*x, n div 2);
    end;
    what := value;
end;
In the following Pascal program segment, what is the value of X after the execution of the program segment?
X := 10; Y := 20; If X > Y then if X < 0 then X := abs(X) else X := 2*X;
Assume that $X$ and $Y$ are nonzero positive integers. What does the following Pascal program segment do?
while X <> Y do
    if X > Y then
        X := X - Y
    else
        Y := Y - X;
write(X);
 Consider the following Pascal function where $A$ and $B$ are nonzero positive integers. What is the value of $GET(3, 2)$?
function GET(A, B: integer): integer;
begin
    if B = 0 then
        GET := 1
    else if A < B then
        GET := 0
    else
        GET := GET(A-1, B) + GET(A-1, B-1)
end;
 The Pascal procedure given for computing the transpose of an $N \times N, (N>1)$ matrix $A$ of integers has an error. Find the error and correct it. Assume that the following declarations are made in the main program
const MAXSIZE=20; type INTARR=array [1..MAXSIZE,1..MAXSIZE] of integer; Procedure TRANSPOSE (var A: INTARR; N : integer); var I, J, TMP: integer; begin for I:=1 to N – 1 do for J:=1 to N do begin TMP:= A[I, J]; A[I, J]:= A[J, I]; A[J, I]:= TMP end end;
What value would the following function return for the input $x=95$?
Function fun (x:integer):integer; Begin If x > 100 then fun = x – 10 Else fun = fun(fun (x+11)) End;
Consider the following $C$ function definition
int Trial (int a, int b, int c) { if ((a>=b) && (c<b)) return b; else if (a>=b) return Trial(a, c, b); else return Trial(b, a, c); }
The functional Trial:
Suppose you are given an array $s[1....n]$ and a procedure reverse $(s, i, j)$ which reverses the order of elements in $s$ between positions $i$ and $j$ (both inclusive). What does the following sequence do, where $1 \leqslant k \leqslant n$:
reverse (s, 1, k); reverse (s, k+1, n); reverse (s, 1, n);
Consider the following $C$ function.
For large values of $y$, the return value of the function $f$ best approximates
float f(float x, int y)
{
    float p, s;
    int i;
    for (s = 1, p = 1, i = 1; i < y; i++) {
        p *= x/i;
        s += p;
    }
    return s;
}
In the following $C$ program fragment, $j$, $k$, $n$ and TwoLog_n are integer variables, and $A$ is an array of integers. The variable $n$ is initialized to an integer $\geqslant 3$, and TwoLog_n is initialized to the value of $2 \times \lceil \log_2(n) \rceil$
for (k = 3; k <= n; k++)
    A[k] = 0;
for (k = 2; k <= TwoLog_n; k++)
    for (j = k+1; j <= n; j++)
        A[j] = A[j] || (j % k);
for (j = 3; j <= n; j++)
    if (!A[j])
        printf("%d", j);
The set of numbers printed by this program fragment is
Consider the following C program
main()
{
    int x, y, m, n;
    scanf("%d %d", &x, &y);   /* Assume x > 0 and y > 0 */
    m = x;
    n = y;
    while (m != n) {
        if (m > n)
            m = m - n;
        else
            n = n - m;
    }
    printf("%d", n);
}
The program computes
What does the following algorithm approximate? (Assume $m > 1, \epsilon >0$).
x = m;
y = 1;
while (x - y > ϵ) {
    x = (x + y)/2;
    y = m/x;
}
print(x);
Consider the following Cprogram:
void foo(int n, int sum)
{
    int k = 0, j = 0;
    if (n == 0)
        return;
    k = n % 10;
    j = n / 10;
    sum = sum + k;
    foo(j, sum);
    printf("%d,", k);
}

int main()
{
    int a = 2048, sum = 0;
    foo(a, sum);
    printf("%d\n", sum);
}
What does the above program print?
What is the output printed by the following program?
#include <stdio.h>

int f(int n, int k)
{
    if (n == 0)
        return 0;
    else if (n % 2)
        return f(n/2, 2*k) + k;
    else
        return f(n/2, 2*k) - k;
}

int main()
{
    printf("%d", f(20, 1));
    return 0;
}
A set $X$ can be represented by an array $x[n]$ as follows:
$x\left [ i \right ]=\begin {cases} 1 & \text{if } i \in X \\ 0 & \text{otherwise} \end{cases}$
Consider the following algorithm in which $x$, $y$, and $z$ are Boolean arrays of size $n$:
algorithm zzz(x[], y[], z[]) { int i; for(i=0; i<n; ++i) z[i] = (x[i] ∧ ~y[i]) ∨ (~x[i] ∧ y[i]); }
The set $Z$ computed by the algorithm is:
Consider the following Cfunction in which $a[n]$ and $b[m]$ are two sorted integer arrays and $c[n+m]$ be another integer array,
void xyz(int a[], int b[], int c[])
{
    int i, j, k;
    i = j = k = 0;
    while ((i < n) && (j < m))
        if (a[i] < b[j])
            c[k++] = a[i++];
        else
            c[k++] = b[j++];
}
Which of the following condition(s) hold(s) after the termination of the while loop?
 $j< m, k=n+j-1$ and $a[n-1]< b[j]$ if $i=n$
 $i<n, k=m+i-1$ and $b[m-1]\leq a[i]$ if $j=m$
The following function computes the value of $\binom{m}{n}$ correctly for all legal values $m$ and $n$ ($m ≥1, n ≥ 0$ and $m > n$)
int func(int m, int n)
{
    if (E)
        return 1;
    else
        return (func(m-1, n) + func(m-1, n-1));
}
In the above function, which of the following is the correct expression for E?
Consider the code fragment written in C below :
void f (int n) { if (n <=1) { printf ("%d", n); } else { f (n/2); printf ("%d", n%2); } }
What does f(173) print?
Consider the code fragment written in C below :
void f (int n) { if (n <= 1) { printf ("%d", n); } else { f (n/2); printf ("%d", n%2); } }
Which of the following implementations will produce the same output for $f(173)$ as the above code?
P1:
void f (int n) {
    if (n/2) {
        f(n/2);
    }
    printf("%d", n%2);
}
P2:
void f (int n) {
    if (n <= 1) {
        printf("%d", n);
    } else {
        printf("%d", n%2);
        f(n/2);
    }
}
Consider the program below:
#include <stdio.h>

int fun(int n, int *f_p)
{
    int t, f;
    if (n <= 1) {
        *f_p = 1;
        return 1;
    }
    t = fun(n-1, f_p);
    f = t + *f_p;
    *f_p = t;
    return f;
}

int main()
{
    int x = 15;
    printf("%d\n", fun(5, &x));
    return 0;
}
The value printed is:
What is the value printed by the following C program?
#include <stdio.h>

int f(int *a, int n)
{
    if (n <= 0)
        return 0;
    else if (*a % 2 == 0)
        return *a + f(a+1, n-1);
    else
        return *a - f(a+1, n-1);
}

int main()
{
    int a[] = {12, 7, 13, 4, 11, 6};
    printf("%d", f(a, 6));
    return 0;
}
Consider the following recursive C function that takes two arguments.
unsigned int foo(unsigned int n, unsigned int r) { if (n>0) return ((n%r) + foo(n/r, r)); else return 0; }
What is the return value of the function $\text{foo}$ when it is called as $\text{foo(345, 10)}$?
Consider the following recursive C function that takes two arguments.
unsigned int foo(unsigned int n, unsigned int r) { if (n>0) return ((n%r) + foo(n/r, r)); else return 0; }
What is the return value of the function $\text{foo}$ when it is called as $\text{foo(513, 2)}$?
Consider the following function:
int unknown(int n){ int i, j, k=0; for (i=n/2; i<=n; i++) for (j=2; j<=n; j=j*2) k = k + n/2; return (k); }
The return value of the function is
Consider the following C function in which size is the number of elements in the array E:
int MyX(int *E, unsigned int size)
{
    int Y = 0;
    int Z;
    int i, j, k;
    for (i = 0; i < size; i++)
        Y = Y + E[i];
    for (i = 0; i < size; i++)
        for (j = i; j < size; j++) {
            Z = 0;
            for (k = i; k <= j; k++)
                Z = Z + E[k];
            if (Z > Y)
                Y = Z;
        }
    return Y;
}
The value returned by the function MyX is the
 maximum possible sum of elements in any subarray of array E.
 maximum element in any subarray of array E.
 sum of the maximum elements in all possible subarrays of array E.
 the sum of all the elements in the array E.
Consider the function func shown below:
int func(int num) { int count = 0; while (num) { count++; num>>= 1; } return (count); }
The value returned by func($435$) is ________
Let $A$ be the square matrix of size $n \times n$. Consider the following pseudocode. What is the expected output?
C = 100;
for i = 1 to n do
    for j = 1 to n do
    {
        Temp = A[i][j] + C;
        A[i][j] = A[j][i];
        A[j][i] = Temp - C;
    }
for i = 1 to n do
    for j = 1 to n do
        output(A[i][j]);
 The matrix $A$ itself
 Transpose of the matrix $A$
 Adding $100$ to the upper diagonal elements and subtracting $100$ from lower diagonal elements of $A$
 None of the above
Consider the following C function.
int fun1 (int n) { int i, j, k, p, q = 0; for (i = 1; i < n; ++i) { p = 0; for (j = n; j > 1; j = j/2) ++p; for (k = 1; k < p; k = k * 2) ++q; } return q; }
Which one of the following most closely approximates the return value of the function fun1?
Consider the following C function.
int fun(int n)
{
    int x = 1, k;
    if (n == 1)
        return x;
    for (k = 1; k < n; ++k)
        x = x + fun(k) * fun(n-k);
    return x;
}
The return value of $fun(5)$ is ______.
Suppose $c = \langle c[0], \dots, c[k-1]\rangle$ is an array of length $k$, where all the entries are from the set $\{0, 1\}$. For any positive integers $a \text{ and } n$, consider the following pseudocode.
DOSOMETHING (c, a, n)
$z \leftarrow 1$
for $i \leftarrow 0 \text{ to } k-1$
do $z \leftarrow z^2 \text{ mod } n$
if $c[i]=1$
then $z \leftarrow (z \times a) \text{ mod } n$
return z
If $k=4, c = \langle 1, 0, 1, 1 \rangle , a = 2, \text{ and } n=8$, then the output of DOSOMETHING(c, a, n) is _______.
Consider the following C function.
void convert(int n)
{
    if (n < 0)
        printf("%d", n);
    else {
        convert(n/2);
        printf("%d", n % 2);
    }
}
Which one of the following will happen when the function convert is called with any positive integer $n$ as argument?
 It will print the binary representation of $n$ and terminate
 It will print the binary representation of $n$ in the reverse order and terminate
 It will print the binary representation of $n$ but will not terminate
 It will not print anything and will not terminate
Consider the following program operating on four variables $u, v, x, y$, and two constants $X$ and $Y$.
x, y, u, v := X, Y, Y, X;
while (x ≠ y) do
    if (x > y) then
        x, v := x - y, v + u;
    else if (y > x) then
        y, u := y - x, u + v;
od;
print((x + y) / 2);
print((u + v) / 2);
Given $X > 0 \land Y > 0$, pick the true statement out of the following:
 The program prints $\text{gcd}(X, Y)$ and the first prime larger than both $X$ and $Y$.
 The program prints $\text{gcd}(X, Y)$ followed by $\text{lcm}(X, Y)$.
 The program prints $\text{gcd}(X, Y)$ followed by $\frac1 2 \times \text{lcm}(X, Y)$.
 The program prints $\frac1 2 \times \text{gcd}(X, Y)$ followed by $\frac1 2 \times \text{lcm}(X, Y)$.
 The program does none of the above.
Consider the following code.
def brian(n):
    count = 0
    while n != 0:
        n = n & (n - 1)
        count = count + 1
    return count
Here $n$ is meant to be an unsigned integer. The operator & considers its arguments in binary and computes their bit wise $AND$. For example, $22$ & $15$ gives $6$, because the binary (say 8bit) representation of $22$ is $00010110$ and the binary representation of $15$ is $00001111$, and the bitwise $AND$ of these binary strings is $00000110$, which is the binary representation of $6$. What does the function $\text{brian}$ return?
 The highest power of $2$ dividing $n$, but zero if $n$ is zero.
 The number obtained by complementing the binary representation of $n$.
 The number of ones in the binary representation of $n$.
 The code might go into an infinite loop for some $n$.
 The result depends on the number of bits used to store unsigned integers.
Consider the following game. There is a list of distinct numbers. At any round, a player arbitrarily chooses two numbers $a, b$ from the list and generates a new number $c$ by subtracting the smaller number from the larger one. The numbers $a$ and $b$ are put back in the list. If the number $c$ is nonzero and is not yet in the list, $c$ is added to the list. The player is allowed to play as many rounds as the player wants. The score of a player at the end is the size of the final list.
Suppose at the beginning of the game the list contains the following numbers: $48, 99, 120, 165$ and $273$. What is the score of the best player for this game?
Consider the following program modifying an $n \times n$ square matrix $A$:
for i=1 to n:
    for j=1 to n:
        temp = A[i][j] + 10
        A[i][j] = A[j][i]
        A[j][i] = temp - 10
    end for
end for
Which of the following statements about the contents of matrix $A$ at the end of this program must be TRUE?
 the new $A$ is the transpose of the old $A$
 all elements above the diagonal have their values increased by $10$ and all the values below have their values decreased by $10$
 all elements above the diagonal have their values decreased by $10$ and all the values below have their values increased by $10$
 the new matrix $A$ is symmetric, that is, $A[i][j]=A[j][i]$ for all $1 \leq i, j \leq n$
 $A$ remains unchanged
Answers: Identify Function
This function is reversing the string "HTGATE".
Here, the function substring() gives the substring from the 2nd character of the original string till the end, and contact(p, q) concatenates the strings p and q.
n = s.length means n is the length of the string, and c[1] is the 1st character of the passed string.
So, it will work like this:
what(HTGATE)
contact(what(TGATE),H)
contact(contact(what(GATE),T),H)
contact(contact(contact(what(ATE),G),T),H)
contact(contact(contact(contact(what(TE),A),G),T),H)
contact(contact(contact(contact(contact(what(E),T),A),G),T),H)
contact(contact(contact(contact(contact(E,T),A),G),T),H)
contact(contact(contact(contact(ET,A),G),T),H)
contact(contact(contact(ETA,G),T),H)
contact(contact(ETAG,T),H)
contact(ETAGT,H)
ETAGTH
Here, we have used the post-increment operator for the variable $m$. So, first we check whether $m<n$ is true and then increase the value of $m$ as $m=m+1$.
So, for $n=2$, first we go into the loop.
So, $t = 1\cdot\left(\frac{-x}{1}\right)$ and $y = 0 + 1\cdot\left(\frac{-x}{1}\right) = \frac{-x}{1}$
Now, we check the condition $(1<2)$. Since it is true, we go into the loop again and also increment the value of $m$. So, now $m=2$.
Now again, in the body of the loop,
$t = \left(\frac{-x}{1}\right)\cdot \left(\frac{-x}{2}\right)$ and $y = \frac{-x}{1} + \left(\frac{-x}{1}\right)\cdot\left(\frac{-x}{2}\right) = (-1)^{1}\frac{x^{1}}{1} + (-1)^{2}\frac{x^{2}}{1\cdot 2}$
Continuing like this, at some point we check $((n-1) < n)$ for the value $m=n-1$. Since it is true, we go into the loop again and increment the value of $m$. So, now $m=n$.
So, now, $y = \frac{(-1)^{1}x^{1}}{1} + \frac{(-1)^{2}x^{2}}{1\cdot 2} + \frac{(-1)^{3}x^{3}}{1\cdot 2\cdot 3}+\ldots+\frac{(-1)^{n}x^{n}}{1\cdot 2\cdot 3\cdots n} = \sum_{i=1}^{n} \frac{(-1)^{i}x^{i}}{i!}$
Now again, we check the condition. Since $(n<n)$ is false, we do not go into the loop again, and we print the value of $y$, which is $\sum_{i=1}^{n} \frac{(-1)^{i}x^{i}}{i!}$
For $N=9$, it returns $3$.
For $N=10$ it returns $4$.
For $N=16$ it returns $4$.
For $N=17$ it returns $5$.
So answer should be C.
The value of $X$ remains unchanged, as the if condition becomes false.
X := 10
Answer is C. This is a classic example of the dangling-else issue: in both C and Pascal, an $else$ always matches the closest unmatched $if$.
https://en.wikipedia.org/wiki/Dangling_else
if (x>y) {
    if (x<0)
        x = abs(x)
    else
        x = 2*x
}
Answer: C
Let $X = 3$ and $Y = 7$.
1st pass: $X = 3$, $Y = 4$
2nd pass: $X = 3$, $Y = 1$
3rd pass: $X = 2$, $Y = 1$
4th pass: $X = 1$, $Y = 1$
write $(X)$, which writes $1$.
Ref: https://en.wikipedia.org/wiki/Euclidean_algorithm
 =3
begin
    for I := 2 to N do
        for J := 1 to (I - 1) do
        begin
            TMP := A[I, J];
            A[I, J] := A[J, I];
            A[J, I] := TMP
        end
end
Should be the condition...
$\qquad\qquad= \text{fun}(96)$
$\qquad\qquad= \text{fun}(\text{fun}(107))$
$\qquad\qquad= \text{fun}(97)$
$\qquad\qquad= \text{fun}(\text{fun}(108))$
$\qquad\qquad= \text{fun}(98)$
$\qquad\qquad = \text{fun}(\text{fun}(109))$
$\qquad\qquad= \text{fun}(99)$
$\qquad\qquad= \text{fun}(\text{fun}(110))$
$\qquad\qquad= \text{fun}(100) $
$\qquad\qquad= \text{fun}(\text{fun}(111)) $
$\qquad\qquad= \text{fun}(101) = 91.$
Correct Answer: $C$
Answer D.
Answer is A.
Effect of the above $3$ reversals for any $K$ is equivalent to left rotation of the array of size $n$ by $k$.
Let , $S[1\ldots7] = \begin{array}{ccccccc}\hline1&2&3&4&5&6&7\\ \hline\end{array}$
So, $n=7,k = 2$
reverse $(S,1,2)$ we get $[2,1,3,4,5,6,7]$
reverse $(S,3,7)$ we get $[2,1,7,6,5,4,3]$
reverse $(S,1,7)$ we get $[3,4,5,6,7,1,2]$
Hence, option $(A)$ rotates $s$ left by $k$ positions and is correct.
A simplified version of the given program can be written as:
float f(float x, int y) { float p=1, s=1; int i; for (i=1; i<y; i++) { p = p * (x/i); s = s + p; } return s; }
We can take $p=1,s=1$ initialization outside of for loop because there is no condition checking in for loop involving $p,s$ .
$$\begin{array}{lll} \hline \text{$i$} & \text{$p= p $*$ (x/i)$} & \text{$s=s+p$}\\\hline \text{1} & \text{$x$} & \text{$1+x$} \\\hline \text{2} & \text{$\frac{x^{2}}{2}$} & \text{$1+x+\frac{x^{2}}{2}$}\\\hline \text{3} & \text{$\frac{x^{3}}{6}$} & \text{$1+x+\frac{x^{2}}{2}+\frac{x^{3}}{6}$} \\\hline \text{4} & \text{$\frac{x^{4}}{24}$} & \text{$1+x+\frac{x^{2}}{2}+\frac{x^{3}}{6}+\frac{x^{4}}{24}$}\\\hline \text{n} & \text{$\frac{x^{n}}{n!}$} & \text{$e^x$} \\\hline \end{array}$$
$$\begin{array}{l} \hline \text{$e^x = \displaystyle{\sum_{n=0}^{\infty} \frac{x^n}{n!}} = 1+x+\frac{x^{2}}{2}+\frac{x^{3}}{6} +\frac{x^{4}}{24}+ \ldots+\frac{x^{n}}{n!}$} \\\hline \end{array}$$
Hence, option B is answer.
The nested loop takes all integers from $2$ to $2 \log_2 n$, takes all their non-multiples before $n$, and makes the corresponding entries in $A$ equal to $1$. For example, for $2$ and $n = 10$: $A[3], A[5], A[7]$, and $A[9]$ are made $1$. Similarly for $3$, $4$, $\ldots$ till $2 \log n$. So, if any entry $A[p]$ were still $0$, then $p$ would have to be a multiple of every one of $2, 3, \ldots, 2\log_2 n$, and the smallest such common multiple is greater than $n$. So, for no index $p$ will $A[p]$ be $0$. So, answer is D.
Suppose the line
A[j] = A[j]  (j%k);
is replaced with
A[j] = A[j]  !(j%k);
Now, the nested loop takes all integers from $2$ to $2\log_2 n$, takes all their multiples before $n$, and makes the corresponding entries in $A$ equal to $1$. For example, for $2$ and $n = 10$: $A[4], A[6], A[8]$ and $A[10]$ are made $1$. Similarly for $3, 4, \ldots$ till $2\log n$. So, for all non-prime indices of $A$ we will have a $1$, and for prime indices we have a $0$. And we print an index if its entry in $A$ is $0$, meaning that index is prime.
It is an algorithm for $\gcd$ computation.
Here, while loop executes until $m=n$.
We can test by taking any two numbers as $m,n$.
Answer will be (C).
$\quad x= ( x + m/x ) /2$
$\implies 2x^2= x^2 + m$
$\implies x = m^{1/2}$
We can also check by putting $2$ or $3$ different values also.
Correct Answer: $C$
Option is D.
$foo$ is printing the lowest digit. But the $printf$ inside it is after the recursive call. This forces the output to be in reverse order
$2, 0, 4, 8$
The final value $sum$ printed will be $0$ as $C$ uses pass by value and hence the modified value inside $foo$ won't be visible inside $main$.
See the following calling sequence:
Hence, answer is option C.
Option (D)
In the given algorithm the for loop contains a logical expression
z[i] = (x[i] ∧ ~y[i]) ∨ (~x[i] ∧ y[i]);
The equivalent set representation of a given logical expression if we assume $z[i] = Z, x[i] = X, y[i] = Y$ then
$Z = ( X \wedge \neg Y ) \vee ( \neg X \wedge Y)$
$\implies Z = ( X  Y ) \cup ( Y  X ) [\because A \wedge \neg B = A  B]$
The while loop adds elements from $a$ and $b$ (whichever is smaller) to $c$ and terminates when either of them exhausts. So, when loop terminates either $i = n$ or $j = m$.
Suppose $i = n$. This would mean all elements from array $a$ are added to $c => k$ must be incremented by $n$. $c$ would also contain $j$ elements from array $b$. So, number of elements in $c$ would be $n+j$ and hence $k = n + j$.
Similarly, when $j = m$, $k = m + i$.
Hence, option (D) is correct. (Had $k$ started from $1$ and not $0$ and we used $++k$ inside loop, answer would have been option (C))
Answer: C
Because $\binom{m}{0}$ = $1$ and $\binom{n}{n}$ = $1$.
Answer: D
The function prints the binary equivalent of the number $n$.
Binary equivalent of $173$ is $10101101$.
The given code fragment, written in C, and P1 both print the binary equivalent of the number n.
P2 prints the binary equivalent of the number n in reverse.
Here, $P1$ and $P2$ print in opposite directions, as shown in the diagram.
And the given code fragment will print like $P1$ and not like $P2$.
Hence, answer will be (C).
The answer is $B$.
Let the address of $x$ be $1000.$
 $f(5,1000)$ $=$ $8$
 $f(4,1000)$ $=$ $5$
 $f(3,1000)$ $=$ $3$
 $f(2,1000)$ $=$ $2$
 $f(1,1000)$ $=$ $1$.
The evaluation is done from $5$ down to $1$, since recursion is used.
Follow below image. Red color shows the return value.
So, $15$ is the answer.
Correct Answer: $C$
$12 + ( 7 - (13 - (4 + (11 - ( 6 + 0)))))$
$\quad = 12 + (7 - (13 - ( 4 + ( 11 - 6))))$
$\quad= 12 + 7 - 13 + 9$
$\quad= 15$
Red colour represents return values.
Answer is 12.
so $1+0+0+0+0+0+0+0+0+1 = 2$
Correct Answer: $D$
Now, in each iteration $k$ is incremented by $n/2$, and the loops execute $\frac{n}{2}\log n$ iterations in all. So, starting from an initial value of $0$, the final value of $k$ will be $\frac{n}{2}\log n \cdot \frac{n}{2} = \Theta(n^2 \log n)$.
Correct Answer: $B$
Answer is (A) maximum possible sum of elements in any subarray of array E.
int MyX (int *E, unsigned int size)
{
    int Y = 0;
    int z;
    int i, j, k;
    // calculate the sum of the elements of the array E and store it in Y
    for (i = 0; i < size; i++)
        Y = Y + E[i];
    // calculate the sum of all possible subarrays
    // (starting from position 0..n-1)
    for (i = 0; i < size; i++)
        for (j = i; j < size; j++)
        {
            z = 0;
            for (k = i; k <= j; k++)
                z = z + E[k];
            // checks whether the sum of elements of each subarray is greater
            // than the current max; if so, assign it to the current max
            if (z > Y)
                Y = z;
        }
    // ultimately returns the maximum possible sum of elements
    // in any subarray of the given array E
    return Y;
}
Answer is $9$.
$435 = (110110011)_2$
num >>= 1 shifts num one bit to the right in every execution of the while loop. The while loop executes successfully $9$ times, and on the $10$th check num is zero.
So, count is incremented $9$ times.
Note: shifting a number $1$ bit position to the right has the effect of dividing it by $2$:
8 >> 1 = 4   // In binary: (00001000) >> 1 = (00000100)
A.
In the computation of the given pseudocode, for each row and column of matrix $A$, each upper-triangular element is interchanged with its mirror image in the lower triangle, and afterwards the same lower-triangular element is interchanged back with its mirror image in the upper triangle. The final computed matrix $A$ is therefore the same as the input matrix $A$.
The $i$ loop executes $n$ times. The $j$ loop executes $\log n$ times for each $i$, so the value of $p$ is $\log n$. The $k$ loop executes $\log p$ times, which is $\log \log n$ times, for each iteration of $i$. In each of these, $q$ is incremented. So, over all iterations of $i$, $q$ is incremented $n \log \log n$ times. So, D choice.
$\text{fun}(1) = 1$;
$\text{fun}(2) = 1 + \text{fun}(1) * \text{fun}(1) = 1 + 1 = 2$;
$\text{fun}(3) = 1 + \text{fun}(1) * \text{fun}(2) + \text{fun}(2) * \text{fun}(1) = 5$;
$\text{fun}(4) = 1 + \text{fun}(1) * \text{fun}(3) + \text{fun}(2) * \text{fun}(2) + \text{fun}(3) * \text{fun}(1)$
$\qquad = 1 + 5 + 4 + 5 = 15$;
$\text{fun}(5) = 1 + \text{fun}(1) * \text{fun}(4) + \text{fun}(2) * \text{fun}(3) + \text{fun}(3) * \text{fun}(2) + \text{fun}(4) * \text{fun}(1)$
$\qquad = 1 + 15 + 10 + 10 + 15 = 51$;
More formal way:
The recurrence relation is
$f(n) = \begin{cases} 1, & n = 1 \\ 1 + \sum_{i=1}^{n-1} f(i) \times f(n-i), & n > 1 \end{cases}$
$f(1) = 1$
$f(2) = 1+ f(1).f(1)$
$\qquad = 1 + 1.1 = 2$
$f(3) = 1 + f(1). f(2) + f(2).f(1)$
$\qquad= 1+ 1.2 + 2.1 = 5$
$f(4) = 1 + f(1).f(3) + f(2).f(2) + f(3).f(1)$
$ \qquad= 1+ 1.5 + 2.2 + 5.1 = 15$
$f(5) = 1 + f(1).f(4) + f(2).f(3) + f(3). f(2)+ f(4).f(1)$
$\qquad = 1 + 1.15 + 2.5 + 5.2 + 15.1 = 51$
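For anyone who wants to cross-check the value, here is a small bottom-up C sketch of the same recurrence (our illustration, not the question's code); it fills a table instead of recursing:

#include <stdio.h>

int fun_table(int n) {
    int f[16];                         // assumes n <= 15 for this illustration
    f[1] = 1;
    for (int i = 2; i <= n; i++) {
        f[i] = 1;                      // the constant 1 in the recurrence
        for (int k = 1; k < i; k++)
            f[i] += f[k] * f[i - k];   // the sum of f(k)*f(i-k)
    }
    return f[n];
}

int main(void) {
    printf("%d\n", fun_table(5));      // prints 51
    return 0;
}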
Now let's iterate through the function step by step :
$z = 1$ (at the start of dosomething)
$i = 0$ (start of external for loop)
In the do loop
$z = 1*1 \% 8 = 1$
$c[0] = 1$, so in the if clause, $z = 1*2 \% 8 = 2$
In the next iteration ($i = 1$)
$z = 2*2 \% 8 = 4$; since $c[1] = 0$, the if clause is skipped
In the next iteration ($i = 2$), $z = 4*4 \% 8 = 0$; $c[2] = 1$, so in the if clause, $z = 0*2 \% 8 = 0$
Now no need to check further :
Reason : All the operations that update $Z$ are multiplicative operations and hence the value of $Z$ will never change from $0$.
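The pseudocode is just left-to-right square-and-multiply (modular exponentiation). A minimal C rendering of it (our sketch, not part of the original answer):

#include <stdio.h>

int dosomething(const int c[], int k, int a, int n) {
    int z = 1;
    for (int i = 0; i < k; i++) {
        z = (z * z) % n;          // z <- z^2 mod n
        if (c[i] == 1)
            z = (z * a) % n;      // z <- (z * a) mod n
    }
    return z;
}

int main(void) {
    int c[] = {1, 0, 1, 1};
    printf("%d\n", dosomething(c, 4, 2, 8));   // prints 0
    return 0;
}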
As per the question, the function convert needs to be called with a positive integer n as argument.
So, when a positive int is passed as argument, the function will compute convert$(n/2)$, where the division returns the quotient of integer division.
Again this result of integer division is passed to convert$(n/2)$, and so on. Each time the number is divided by $2$ and gradually becomes smaller and closer to $0$, but it never becomes less than $0$.
Hence, the program will not print anything (as after every function call, the value returned is greater than $0$) and will not terminate.
PS: Being a C function and hence when run on a computer each recursive call will require creation of activation record consuming memory space. So, eventually the stack memory will run out and program terminates. But such an option is not there. D is the best option here.
Answer (B)
Consider $x,y,u,v=17,3,3,17$.
$x=14$, $v=20$
$x=11$, $v=23$
$x=8$, $v=26$
$x=5$, $v=29$
$x=2$, $v=32$
$y=1$, $u=35$
$x=1$, $v=67$
These are the values obtained.
Lastly, printing $(x+y)/2$ and $(u+v)/2$ gives $1 = \gcd(17, 3)$ and $51 = \text{lcm}(17, 3)$.
Option C. It returns no of 1's in binary representation of $n$.
Here, $n \& (n-1)$ clears the rightmost set bit of $n$ in each iteration.
For example,
Suppose $n=15=00001111$ (binary), then $n-1=14=00001110$:
$$\begin{array}{r} 00001111 \\ \wedge\; 00001110 \\ \hline 00001110 \end{array}$$
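As a runnable illustration (ours, not from the original answer), the same loop in C is the classic Kernighan bit-count:

#include <stdio.h>

unsigned brian(unsigned n) {
    unsigned count = 0;
    while (n != 0) {
        n = n & (n - 1);    // clears the rightmost set bit
        count = count + 1;
    }
    return count;
}

int main(void) {
    printf("%u\n", brian(15));   // prints 4, since 15 = 1111 in binary
    return 0;
}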
Option D is correct.
Here the list is $(48, 99, 120, 165, 273)$.
$\gcd(48,99)=3$, which means that if we subtract ($99-48=51$), that number is also divisible by $3$.
So the numbers $(3, 6, \ldots, 99)$ are added. Total numbers $=99/3=33$.
$\gcd(48,120)=24$, so the numbers divisible by $24$ are added, like $(24, 48, \ldots, 120)$. Total numbers $=120/24=5$.
$\gcd(48,165)=3$, so the numbers $(3, 6, 9, \ldots, 165)$ are added. Totally, $165/3=55$.
At the end, $\gcd(48,273)=3$, so the numbers $(3, 6, 9, \ldots, 273)$ are added (which covers all the numbers above).
So the total number of elements in the final list $=273/3=91$.
1. for i=1 to n:
2.     for j=1 to n:
3.         temp = A[i][j] + 10
4.         A[i][j] = A[j][i]
5.         A[j][i] = temp - 10
6.     end for
7. end for
The $3,4,5$ lines swap $A [ j ] [ i ]$ and $A [ i ] [ j ]$.
The same variables are swapped twice. For eg when: $i =5$, $j = 10$. $A [10] [5]$ and $A [5] [10]$ will be swapped. They will be swapped again when $i = 10$, $j =5$.
Two times swap of same elements will lead to $A$ remaining unchanged.
Hence, E is correct.
Given a set of $n$ distinct numbers, we would like to determine both the smallest and the largest number. Which of the following statements is TRUE?
 These two elements can be determined using $O\left(\log^{100}n\right)$ comparisons.
 $O\left(\log^{100}n\right)$ comparisons do not suffice, however these two elements can be determined using $n + O(\log n)$ comparisons.
 $n+O(\log n)$ comparisons do not suffice, however these two elements can be determined using $3\lceil n/2 \rceil$ comparisons.
 $3\lceil n/2 \rceil$ comparisons do not suffice, however these two elements can be determined using $2(n  1)$ comparisons.
 None of the above.
Consider the problem of computing the minimum of a set of $n$ distinct numbers. We choose a permutation uniformly at random (i.e., each of the $n!$ permutations of $\left \langle 1,\ldots,n \right \rangle$ is chosen with probability $1/n!$) and we inspect the numbers in the order given by this permutation. We maintain a variable MIN that holds the minimum value seen so far. MIN is initialized to $\infty$ and if we see a value smaller than MIN during our inspection, then MIN is updated. For example, in the inspection given by the following sequence, MIN is updated four times.
$5$ $9$ $4$ $2$ $6$ $8$ $0$ $3$ $1$ $7$
What is the expected number of times MIN is updated?
Given a set of $n$ distinct numbers, we would like to determine the smallest three numbers in this set using comparisons. Which of the following statements is TRUE?
 These three elements can be determined using $O\left(\log^{2}n\right)$ comparisons.
 $O\left(\log^{2}n\right)$ comparisons do not suffice, however these three elements can be determined using $n+O(1)$ comparisons.
 $n + O(1)$ comparisons do not suffice, however these three elements can be determined using $n +O(\log n)$ comparisons.
 $n+O(\log n)$ comparisons do not suffice, however these three elements can be determined using $O(n)$ comparisons.
 None of the above.
Answers: Minimum Maximum
We can solve this question by using the tournament method technique (illustrated here for $n = 100$):
1. To find the smallest element in the array will take $n-1$ comparisons $= 99.$
2. To find the largest element:
 After the first round of the tournament, there will be exactly $n/2$ numbers $= 50$ that lose the round.
 The biggest loser (the largest number) must be among these $50$ losers, since the largest number loses its very first comparison. To find the largest among them will take $n/2 - 1$ comparisons $= 49.$
Total: $99+49 = 148.$
I think answer will be C.
To be accurate, it will need $3n/2 - 2$ comparisons.
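A sketch of the pairing version of the same idea in C (our illustration): process elements two at a time, spending one comparison inside the pair and one each against the running min and max, for about $3\lceil n/2\rceil - 2$ comparisons in total.

#include <stdio.h>

void min_max(const int a[], int n, int *mn, int *mx) {
    int i;
    if (n % 2) {                        // odd n: seed min and max with a[0]
        *mn = *mx = a[0];
        i = 1;
    } else {                            // even n: seed with the first pair (1 comparison)
        if (a[0] < a[1]) { *mn = a[0]; *mx = a[1]; }
        else             { *mn = a[1]; *mx = a[0]; }
        i = 2;
    }
    for (; i < n; i += 2) {             // 3 comparisons per remaining pair
        int lo = a[i], hi = a[i + 1];
        if (lo > hi) { lo = a[i + 1]; hi = a[i]; }
        if (lo < *mn) *mn = lo;
        if (hi > *mx) *mx = hi;
    }
}

int main(void) {
    int a[] = {4, 9, 1, 7, 8, 2};
    int mn, mx;
    min_max(a, 6, &mn, &mx);
    printf("min=%d max=%d\n", mn, mx);  // min=1 max=9
    return 0;
}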
Let us consider $3$ numbers $\{1,2,3\}$
We will consider each permutation along with the number of times MIN is updated.
$$\begin{array}{cc} \hline \textbf{Permutation} & \textbf{Number of times MIN is updated} \\\hline
1,2,3 & 1 \\ 1,3,2 & 1 \\ 2,1,3 & 2\\ 2,3,1 & 2 \\ 3,1,2 & 2 \\ 3,2,1 & 3 \\\hline \end{array}$$
Total number of times MIN is updated: $11$.
Average number of times MIN is updated: $11/6$.
Now going by the options, I am getting B.
$H_3 = 1 + 1/2 + 1/3 = 11/6$.
$H_3$ is the answer and that is option B.
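A rough Monte Carlo check of this expectation (our sketch, with arbitrary constants N and TRIALS): shuffle $1..n$ uniformly with Fisher-Yates, count the MIN updates, and compare the average against $H_n$.

#include <stdio.h>
#include <stdlib.h>

#define N 10
#define TRIALS 100000

int main(void) {
    int a[N];
    double total = 0;
    srand(42);
    for (int t = 0; t < TRIALS; t++) {
        for (int i = 0; i < N; i++) a[i] = i + 1;
        for (int i = N - 1; i > 0; i--) {        // Fisher-Yates shuffle
            int j = rand() % (i + 1);
            int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
        }
        int min = N + 1, updates = 0;
        for (int i = 0; i < N; i++)
            if (a[i] < min) { min = a[i]; updates++; }
        total += updates;
    }
    double h = 0;
    for (int i = 1; i <= N; i++) h += 1.0 / i;
    printf("simulated %.4f vs H_%d = %.4f\n", total / TRIALS, N, h);
    return 0;
}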
Option (C) is correct. Reason is as follows :
Here, at first level we are Given $n$ elements, out of which we have to find smallest $3$ numbers.
We compare elements in pairs, as shown in the figure, and get $n/2$ elements at the second level.
Note: Minimum element is in these $n/2$ elements.
So, comparisons for this is $n/2$.
Similarly for next level we have $n/4$ Comparisons & $n/2$ elements and so on.
Total comparisons till now: $n/2 + n/4 + n/8 + \ldots + 4 + 2 +1 = 2^{\log n}-1 = n-1$ (sum of a G.P.)
Now we have to get smallest $3$.
We already have the 1st smallest at the last level $\Rightarrow$ $0$ comparisons for this.
$\Rightarrow$ The $2$nd & $3$rd smallest can be found in $O(\log n)$ time as shown below:
Minimum Element must have descended down from some path from top to Bottom.
$\Rightarrow$ SQUARES (in the figure) represent candidates for the $2^{\text{nd}}$ minimum.
Every element that is just below $m_1$ (the first minimum) is a candidate for the second minimum.
So, $O(\log n)$ comparisons for finding the second smallest.
$\Rightarrow$ Similarly, for the $3^{\text{rd}}$ minimum we get $O(\log n)$ time, as every element that is just below the 1st and 2nd minimums is a candidate for the $3^{\text{rd}}$ minimum.
Consider the following undirected graph $G$:
Choose a value for $x$ that will maximize the number of minimum weight spanning trees (MWSTs) of $G$. The number of MWSTs of $G$ for this value of $x$ is ____.
Let $n\geq 3,$ and let $G$ be a simple, connected, undirected graph with the same number $n$ of vertices and edges. Each edge of $G$ has a distinct real weight associated with it. Let $T$ be the minimum weight spanning tree of $G.$ Which of the following statements is NOT ALWAYS TRUE ?
 The minimum weight edge of $G$ is in $T.$
 The maximum weight edge of $G$ is not in $T.$
 $G$ has a unique cycle $C$ and the minimum weight edge of $C$ is also in $T.$
 $G$ has a unique cycle $C$ and the maximum weight edge of $C$ is not in $T.$
 $T$ can be found in $O(n)$ time from the adjacency list representation of $G.$
How many distinct minimum weight spanning trees does the following undirected, weighted graph have ?
Answers: Minimum Spanning Trees
The number of possible MSTs increases when we have multiple edges with the same edge weight.
To maximize the number of MST, $x$ should be $5.$
In the question, the number of MSTs is asked for this value of $x$.
So, number of MSTs $= 2\times 2 = 4$ (Answer)
$($Because one $4$ forms a cycle and can't be included in any way. Now, from the two $4$s and the two $5$s, we can select one each in $2\times 2 = 4$ ways$)$
Take $2$ random graphs with the same number of vertices and edges:
Now check the options one by one:
 The minimum weight edge of $G$ is in $T.$ Always true
 The maximum weight edge of $G$ is not in $T.$ True in the 1st graph but false in the 2nd graph: the maximum weight edge will be in $T$ if it is not part of the cycle.
 $G$ has a unique cycle $C$ and the minimum weight edge of $C$ is also in $T.$ Always true
 $G$ has a unique cycle $C$ and the maximum weight edge of $C$ is not in $T.$ Always true
 $T$ can be found in $O(n)$ time from the adjacency list representation of $G.$ Always true. The given graph is actually a tree with one extra edge. So, using BFS we can find the cycle in the graph and just eliminate the maximum edge weight in it. Time complexity will be $O(n).$
Hence, B is answer
Two choices for the edge of weight $4$ in each of the four outer triangles, i.e., $ \binom{2}{1}\cdot\binom{2}{1}\cdot\binom{2}{1}\cdot\binom{2}{1}=2^4$, and two choices for the edge of weight $2$ in each of the two inner triangles, i.e., $\binom{2}{1}\cdot\binom{2}{1}=2^2$.
$2^4\times 2^2=64$
Choose the correct alternatives (more than one may be correct) and write the corresponding letters only:
Which of the following problems is not $NP$hard?
Ram and Shyam have been asked to show that a certain problem $\Pi$ is NPcomplete. Ram shows a polynomial time reduction from the $3$SAT problem to $\Pi$, and Shyam shows a polynomial time reduction from $\Pi$ to $3$SAT. Which of the following can be inferred from these reductions?
The problem $3$SAT and $2$SAT are
Let S be an NPcomplete problem and Q and R be two other problems not known to be in NP. Q is polynomial time reducible to S and S is polynomialtime reducible to R. Which one of the following statements is true?
The subsetsum problem is defined as follows: Given a set $S$ of $n$ positive integers and a positive integer $W$, determine whether there is a subset of $S$ whose elements sum to $W$. An algorithm $Q$ solves this problem in $O(nW)$ time. Which of the following statements is false?
 $Q$ solves the subsetsum problem in polynomial time when the input is encoded in unary
 $Q$ solves the subsetsum problem in polynomial time when the input is encoded in binary
 The subset sum problem belongs to the class NP
 The subset sum problem is NPhard
Suppose a language $L$ is $\mathbf{NP}$ complete. Then which of the following is FALSE?
 $L \in \mathbf{NP}$
 Every problem in $\mathbf{P}$ is polynomial time reducible to $L$.
 Every problem in $\mathbf{NP}$ is polynomial time reducible to $L$.
 The Hamilton cycle problem is polynomial time reducible to $L$.
 $\mathbf{P} \neq \mathbf{NP}$ and $L \in \mathbf{P}$.
Given an integer $n\geq 3$, consider the problem of determining if there exist integers $a,b\geq 2$ such that $n=a^{b}$. Call this the forward problem. The reverse problem is: given $a$ and $b$, compute $a^{b}$ (mod b). Note that the input length for the forward problem is $\lfloor\log n\rfloor + 1$, while the input length for the reverse problem is $\lfloor\log a\rfloor + \lfloor\log b\rfloor + 2$. Which of the following statements is TRUE?
 Both the forward and reverse problems can be solved in time polynomial in the lengths of their respective inputs.
 The forward problem can be solved in polynomial time, however the reverse problem is $NP$hard.
 The reverse problem can be solved in polynomial time, however the forward problem is $NP$hard.
 Both the forward and reverse problem are $NP$hard.
 None of the above.
This question concerns the classes $P$ and $NP.$ If you are familiar with them, you may skip the definitions and go directly to the question.
Let $L$ be a set. We say that $L$ is in $P$ if there is some algorithm which, given input $x,$ decides if $x$ is in $L$ or not in time bounded by a polynomial in the length of $x.$ For example, the set of all connected graphs is in $P,$ because there is an algorithm which, given a graph, can decide if it is connected or not in time roughly proportional to the number of edges of the graph.
The class $NP$ is a superset of class $P.$ It contains those sets that have membership witnesses that can be verified in polynomial time. For example, the set of composite numbers is in $NP.$ To see this, take the witness for a composite number to be one of its divisors. Then the verification process consists of performing just one division using two reasonable size numbers. Similarly, the set of those graphs that have a Hamilton cycle, i.e. a cycle containing all the vertices of the graph, is in $NP.$ To verify that the graph has a Hamilton cycle we just check if the witnessing sequence of vertices is indeed a cycle of the graph that passes through all the vertices of the graph. This can be done in time that is polynomial in the size of the graph.
More precisely, if $L$ is a set in $P$ consisting of elements of the form $(x, w),$ then the set $$M = \{x : \exists w, |w| \leq |x|^k\text{ and }(x, w) \in L\},$$
is in $NP.$
Let $G = (V, E)$ be a graph. $G$ is said to have perfect matching if there is a subset $M$ of the edges of $G$ so that
 No two edges in $M$ intersect (have a vertex in common); and
 Every vertex of $G$ has an edge in $M.$
Let $\text{MATCH}$ be the set of all graphs that have a perfect matching. Let $\overline{\text{MATCH}}$ be the set of graphs that do not have a perfect matching. Let $o(G)$ be the number of components of $G$ that have an odd number of vertices.
Tutte’s Theorem: $G \in \text{MATCH}$ if and only if for all subsets $S$ of $V,$ the number of components in $G - S$ (the graph formed by deleting the vertices in $S$) with an odd number of vertices is at most $|S|.$ That is, $$G \in \text{MATCH} \leftrightarrow \forall S \subseteq V,\; o(G - S) \leq |S|.$$
Which of the following is true?
Which of the following is not implied by $P=NP$?
 $3$SAT can be solved in polynomial time.
 Halting problem can be solved in polynomial time.
 Factoring can be solved in polynomial time.
 Graph isomorphism can be solved in polynomial time.
 Travelling salesman problem can be solved in polynomial time.
A multivariate polynomial in $n$ variables with integer coefficients has a binary root if it is possible to assign each variable either 0 or 1, so that the polynomial evaluates to 0. For example, the multivariate polynomial $-2x_1^3 - x_1x_2+2$ has the binary root $(x_1=1, x_2=0)$. Then determining whether a multivariate polynomial, given as the sum of monomials, has a binary root:
Consider the following statements:
 Checking if a given $undirected$ graph has a cycle is in $\mathsf{P}$
 Checking if a given $undirected$ graph has a cycle is in $\mathsf{NP}$
 Checking if a given $directed$ graph has a cycle is in $\mathsf{P}$
 Checking if a given $directed$ graph has a cycle is in $\mathsf{NP}$
Which of the above statements is/are TRUE? Choose from the following options.
A formula is said to be a $3$CFformula if it is a conjunction (i.e., an AND) of clauses, and each clause has at most $3$ literals. Analogously, a formula is said to be a $3$DFformula if it is a disjunction (i.e., an OR) of clauses of at most $3$ literals each.
Define the languages $3$CFSAT and $3$DFSAT as follows:
$$3 \text{CFSAT}=\{ \Phi \mid \Phi \text{ is a } \textit{ satisfiable } 3\text{CF formula} \}$$
$$3\text{DFSAT}=\{ \Phi \mid \Phi \text{ is a } \textit{satisfiable } 3\text{DF formula} \}$$
Which of the following best represents our current knowledge of these languages ?
 Both $\text{3CFSAT}$ and $\text{3DFSAT}$ are in NP but only $\text{3CFSAT}$ is NPcomplete
 Both $\text{3CFSAT}$ and $\text{3DFSAT}$ are in NPcomplete
 Both $\text{3CFSAT}$ and $\text{3DFSAT}$ are in P
 Both $\text{3CFSAT}$ and $\text{3DFSAT}$ are in NP but only $\text{3DFSAT}$ is NPcomplete
 Neither $\text{3CFSAT}$ nor $\text{3DFSAT}$ are in P
Answers: P Np Npc Nph
 Is NPC and hence NP hard.
 Is again NP hard (optimization version is NP hard and decision version is NPC). Ref: http://stackoverflow.com/questions/3907545/howtounderstandtheknapsackproblemisnpcomplete
 Is in P. See the algorithm here based on DFS: http://en.wikipedia.org/wiki/Biconnected_component
 NPC and hence NP hard.
Correct Answer: $C$
Ram's reduction shows that $\Pi$ is NP hard because it must be at least as hard as $3$SAT which is a known NPComplete problem. Here, $\Pi$ need not be NPComplete.
Now, Shyam's reduction shows that $3$SAT problem is at least as hard as $\Pi$ or equivalently $\Pi$ is not harder than $3$SAT. Since $3$SAT is in NP, $\Pi$ must also be in NP.
Thus, $\Pi$ is NPhard and also in NP $\implies$ $\Pi$ is NPComplete.
Answer B.
As $S$ is NPC, i.e., both NP-hard and in NP:
 We know that, if an NP-hard problem is reducible to another problem in polynomial time, then that problem is also NP-hard, which means every NP problem can be reduced to that problem in polynomial time.
Therefore, $R$ is NP-hard.
Now $Q$ is reduced to $S$ in polynomial time.
 If $Q$ is reducible to $S$ in polynomial time, $Q$ could be in NP, because all NP problems can be reduced to $S$. Since $Q$ could be in NP, $Q$ could also be in P, as P is a subset of NP. Also, $Q$ could be NPC, because every NPC problem can be reduced to another NPC problem in polynomial time.
So, nothing can be concluded about $Q$.
 Input is encoded in unary. So, length of input is equal to the value of the input. So, complexity $= O(nW)$ where both $n$ and $W$ are linear multiples of the length of the inputs. So, the complexity is polynomial in terms of the input length. So, (A) is true.
 Input is encoded in binary. So, length of W will be $\lg W$. (for $W=1024$, input length will be just $10$). So, now $W$ is exponential in terms of the input length of $W$ and $O(nW)$ also becomes exponential in terms of the input lengths. So, $Q$ is not a polynomial time algorithm. So, (B) is false.
Correct Answer: $B$
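For concreteness, here is the standard $O(nW)$ table that an algorithm like $Q$ could use (our sketch, not necessarily the exact $Q$ of the question): reach[w] records whether some subset of the items seen so far sums to $w$. The table has $W+1$ entries, which is polynomial in the unary length of $W$ but exponential in its binary length.

#include <stdio.h>
#include <string.h>
#include <stdbool.h>

bool subset_sum(const int s[], int n, int W) {
    bool reach[W + 1];                     // C99 variable-length array
    memset(reach, 0, sizeof reach);
    reach[0] = true;                       // the empty subset sums to 0
    for (int i = 0; i < n; i++)
        for (int w = W; w >= s[i]; w--)    // downward, so each item is used once
            if (reach[w - s[i]]) reach[w] = true;
    return reach[W];
}

int main(void) {
    int s[] = {3, 34, 4, 12, 5, 2};
    printf("%d\n", subset_sum(s, 6, 9));   // prints 1, since 4 + 5 = 9
    return 0;
}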
Option E leads to a contradiction, hence is false.
We know that $L$ is $\mathbf{NPC}$, hence $\in \mathbf{NP}$. If $\mathbf{P} \neq \mathbf{NP}$, then $L$ can't be in $\mathbf{P}$
The reverse problem can be solved in polynomial time as $a^b$ requires at most $\log b$ recursive calls using the approach given below:
pow(int a, int b) {
    if (b == 0) return 1;       /* base case */
    if (b % 2) return a * pow(a*a, b/2);
    else return pow(a*a, b/2);
}
Now, the forward problem is also solvable in polynomial time. For each exponent $b$ from $2$ to $\log n$, we need to check whether the $b$-th root of $n$ (from $\sqrt n$ down to $n^{\frac{1}{\log n}}$) is an integer. Each of these checks can be done in about $\log n$ steps using a binary search on the integers from $2$ to $n$, so the overall complexity is $(\log n)^2$, which is polynomial in $\log n$ ($\log n$ being the size of the input). So, (a) must be the answer.
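A sketch of that forward check in C (our code, under the answer's assumptions: try every exponent $b \leq \log_2 n$ and binary search for an integer base $a$):

#include <stdio.h>

// computes a^b but caps the result just above limit to avoid overflow
unsigned long long power_capped(unsigned long long a, unsigned b,
                                unsigned long long limit) {
    unsigned long long r = 1;
    while (b--) {
        if (r > limit / a) return limit + 1;
        r *= a;
    }
    return r;
}

int is_perfect_power(unsigned long long n) {
    for (unsigned b = 2; (1ULL << b) <= n; b++) {    // b <= log2(n)
        unsigned long long lo = 2, hi = n;
        while (lo <= hi) {                           // binary search on a
            unsigned long long a = lo + (hi - lo) / 2;
            unsigned long long p = power_capped(a, b, n);
            if (p == n) return 1;
            if (p < n) lo = a + 1; else hi = a - 1;
        }
    }
    return 0;
}

int main(void) {
    printf("%d %d\n", is_perfect_power(64), is_perfect_power(63));  // prints 1 0
    return 0;
}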
First of all let us see some things asked in question:
 We say $\text{MATCH} \in P$ if given a graph $G$ we can say in polynomial time using a deterministic Turing machine if $G$ has a perfect matching.
 We say $\text{MATCH} \in NP$ if given a graph $G$ we can say in polynomial time using a nondeterministic Turing machine if $G$ has a perfect matching.
 Equivalently given a subset $M$ of edges of $G$ we can say in polynomial time using a deterministic Turing machine, if it is a perfect matching for $G.$
 Similarly, we say $\overline{\text{MATCH}} \in NP$ if given a subset $M$ of edges of $G$ we can say in polynomial time using a deterministic Turing machine, if it is NOT a perfect matching for $G.$
So, we can see if point $2.1$ given above is possible. To do this for the given subset of edges $M$ we have to see
 No two edges in $M$ intersect (have a vertex in common): this can be done just by traversing the edges and checking for a common vertex. A naive implementation marks the two vertices of each edge and checks if any vertex gets marked twice, which takes $O(VM)$ time and is polynomial.
 Every vertex of $G$ has an edge in $M$: we can do the same procedure as above and finally check if any vertex is left unmarked. So, this is also possible in polynomial time.
If the answer to steps $1$ and $2$ are "Yes", then $M$ is a perfect matching for $G$ and we proved that $\text{MATCH} \in NP.$
If the answer to any one of steps $1$ or $2$ above is "No", then $M$ is not a perfect matching for $G$ and we proved that $\overline{\text{MATCH}} \in NP.$
Thus, both the problems are in $NP.$
Option C is correct.
Further Read: http://mathworld.wolfram.com/PerfectMatching.html
I believe Umang is right, option B is the correct answer.
Intractability : We are looking for EFFICIENT algorithms.
Intractable problems are problems that are decidable; the algorithm to decide such a problem might be efficient (P) or not known to be efficient (NP), but at least an algorithm exists to solve these problems.
Here we talk about efficient vs inefficient computations.
Thus the language of problems in P and NP classes is the language of Decidable Problems i.e. Recursive Language.
Undecidability: We are looking for algorithms.
Undecidable Problems are problems for which there is no algorithm to solve these problems.
Here we talk about what can or can not be computed.
The language of Undecidable Problems are "Recursively Enumerable but not recursive languages" & "Not Recursively Enumerable Languages".
Clearly, we can talk about the intractability of a problem only if we know at least one algorithm to solve it; if there is no algorithm to solve a problem, how can we talk about efficiency?
Halting Problem is undecidable.
I guess, all other problems mentioned here are decidable.
I don't know the most efficient algorithms to solve these problems but at least I can say that Brute force approach will work on all the other options except the Halting Problem.
What P = NP implies?
"Any problem that is solved by a non deterministic Turing machine in polynomial time also be solved by some deterministic Turing machine in polynomial time, even if the degree of the polynomial is higher."
and
There is neither a Non Deterministic Turing Machine nor Deterministic Turing Machine that can solve the Halting Problem.
So any inference about P & NP is not going to affect the solvability of Halting Problem, since it is undecidable.
This problem is similar to the Boolean Satisfiability Problem (SAT), which was the first NP-complete problem.
A problem is NP-complete if it is in NP as well as NP-hard, i.e., NP-complete is the intersection of NP and NP-hard.
Hence, option (E) is right.
E. All of them, because all of them can be solved by depth-first traversal.
Every P problem is a subset of NP.
3 SAT where all the clauses are in conjunctive normal form is the well known NPcomplete problem.
3-SAT with clauses in disjunctive normal form lies in the class P. A doubt you might get while solving this question: if 3-SAT with CNF is NP-complete and 3-SAT with DNF belongs to P, then why not convert the CNF formula to an equivalent DNF formula? If we could do this conversion efficiently, then we could solve 3-SAT CNF too in polynomial time using the algorithm for checking 3-SAT DNF.
But please note that this conversion process is not polynomial. When you convert a CNF formula to an equivalent DNF formula, it can have far more terms than the original CNF formula; the conversion is exponential, not polynomial. Hence the idea of converting 3-CNF to 3-DNF won't work.
Option A is correct.
Nice read: https://math.stackexchange.com/questions/159591/solvingsatbyconvertingtodisjunctivenormalform
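To see why 3-DF-SAT is easy, note that a DNF formula is satisfiable iff at least one of its clauses contains no complementary pair of literals. A sketch in C (our encoding, not from the original answer: a literal is $+v$ or $-v$ for variable $v$):

#include <stdio.h>

int clause_satisfiable(const int lits[], int len) {
    for (int i = 0; i < len; i++)
        for (int j = i + 1; j < len; j++)
            if (lits[i] == -lits[j]) return 0;   // clause contains x AND !x
    return 1;
}

int dnf_satisfiable(const int *clauses[], const int lens[], int m) {
    for (int c = 0; c < m; c++)                  // any good clause suffices
        if (clause_satisfiable(clauses[c], lens[c])) return 1;
    return 0;
}

int main(void) {
    int c1[] = {1, -1, 2};           // (x1 AND !x1 AND x2): unsatisfiable
    int c2[] = {-2, 3};              // (!x2 AND x3): satisfiable
    const int *cs[] = {c1, c2};
    int lens[] = {3, 2};
    printf("%d\n", dnf_satisfiable(cs, lens, 2));  // prints 1
    return 0;
}

This check is $O(m \cdot \ell^2)$ for $m$ clauses of length $\ell \leq 3$, i.e., clearly polynomial.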
Consider the recursive quicksort algorithm with "random pivoting". That is, in each recursive call, a pivot is chosen uniformly at random from the subarray being sorted.When this randomized algorithm is applied to an array of size $n$ all whose elements are distinct, what is the probability that the smallest and the largest elements in the array are compared during a run of the algorithm ?
Answers: Quicksort
Total elements $= 25$
For worst case number of candidates $= 2$
$P = \frac{2}{25} = 0.08 $
Probability that the smallest and the largest elements in the array are compared during a run of the randomized quicksort algorithm.
After the first split, first subpart of array will contain the minimum element and second subpart contains the maximum element. (Why?)
Choose any element, say $x$, greater than the smallest element, as pivot. After partitioning, the smallest element will be present to the left of it. Similarly, the largest element will be present in the right subarray.
After the first split, there will be no more comparison between the largest and smallest element.
To have a comparison between the largest and smallest elements, we must choose one of them as the pivot in the first split itself.
Among the $n$ elements we have to choose either the smallest or the largest element, and the pivot selection is uniform.
Probability $= 2/n.$
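A quick Monte Carlo sanity check of the $2/n$ claim (our sketch, with arbitrary constants): run randomized quicksort with Lomuto partitioning on a permutation of $1..N$ and record whether the values $1$ and $N$ are ever compared.

#include <stdio.h>
#include <stdlib.h>

#define N 25
static int hit;                        // set when values 1 and N are compared

void qs(int a[], int lo, int hi) {
    if (lo >= hi) return;
    int r = lo + rand() % (hi - lo + 1);             // random pivot index
    int t = a[r]; a[r] = a[hi]; a[hi] = t;           // move pivot to the end
    int p = a[hi], i = lo;
    for (int j = lo; j < hi; j++) {                  // Lomuto partition
        if ((a[j] == 1 && p == N) || (a[j] == N && p == 1)) hit = 1;
        if (a[j] < p) { t = a[i]; a[i] = a[j]; a[j] = t; i++; }
    }
    t = a[i]; a[i] = a[hi]; a[hi] = t;
    qs(a, lo, i - 1);
    qs(a, i + 1, hi);
}

int main(void) {
    int a[N], hits = 0, trials = 200000;
    srand(7);
    for (int t = 0; t < trials; t++) {
        for (int i = 0; i < N; i++) a[i] = i + 1;    // pivots are random,
        hit = 0;                                     // so no shuffle is needed
        qs(a, 0, N - 1);
        hits += hit;
    }
    printf("observed %.4f, expected %.4f\n", (double)hits / trials, 2.0 / N);
    return 0;
}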
Solve the recurrence equations:
 $T(n) = T(n  1)+ n$
 $T(1) = 1$
Solve the recurrence equations:
 $T(n)= T( \frac{n}{2})+1$
 $T(1)=1$
$T(n)=\sqrt{n}+T(\frac{n}{2})$
$T(1)=1$
$T(n)=\frac{n+1}{n} T(n-1)+1$, for $n \geq 2$ and $T(1) = 1$
What is the asymptotic behaviour of $T(n)$ as a function of $n$ ?
Consider the function $F(n)$ for which the pseudocode is given below :
Function F(n)
begin
    F1 ← 1
    if (n = 1) then F ← 3
    else
        For i = 1 to n do
        begin
            C ← 0
            For j = 1 to n - 1 do
            begin
                C ← C + 1
            end
            F1 = F1 * C
        end
        F = F1
end
[$n$ is a positive integer greater than zero]
(a) Derive a recurrence relation for $F(n)$
Consider the function $F(n)$ for which the pseudocode is given below :
Function F(n)
begin
    F1 ← 1
    if (n = 1) then F ← 3
    else
        For i = 1 to n do
        begin
            C ← 0
            For j = 1 to n - 1 do
            begin
                C ← C + 1
            end
            F1 = F1 * C
        end
        F = F1
end
[$n$ is a positive integer greater than zero]
Solve the recurrence relation for a closed form solution of $F(n)$.
Consider the recursive algorithm given below:
procedure bubblesort (n);
var i, j : index; temp : item;
begin
    for i := 1 to n-1 do
        if A[i] > A[i+1] then
        begin
            temp := A[i];
            A[i] := A[i+1];
            A[i+1] := temp;
        end;
    bubblesort (n-1)
end
Let $a_n$ be the number of times the ‘if…then…’ statement gets executed when the algorithm is run with value $n$. Set up the recurrence relation by defining $a_n$ in terms of $a_{n1}$. Solve for $a_n$.
The recurrence relation that arises in relation with the complexity of binary search is:
The recurrence relation
 $T(1) = 2$
 $T(n) = 3T (\frac{n}{4}) +n$
has the solution $T(n)$ equal to
Consider the following function.
Function F(n, m : integer) : integer;
begin
    If (n <= 0) or (m <= 0) then F := 1
    else F := F(n-1, m) + F(n, m-1);
end;
Use the recurrence relation $\begin{pmatrix} n \\ k \end{pmatrix} = \begin{pmatrix} n1 \\ k \end{pmatrix} + \begin{pmatrix} n1 \\ k1 \end{pmatrix}$ to answer the following questions. Assume that $n, m$ are positive integers. Write only the answers without any explanation.

What is the value of $F(n, 2)$?

What is the value of $F(n, m)$?

How many recursive calls are made to the function $F$, including the original call, when evaluating $F(n, m)$?
Let $T(n)$ be the function defined by $T(1) =1, \: T(n) = 2T (\lfloor \frac{n}{2} \rfloor ) + \sqrt{n}$ for $n \geq 2$.
Which of the following statements is true?
$x_n = 2x_{n-1}-1, n>1$
$x_1=2$
If $T_1 = O(1)$, give the correct matching for the following pairs:
$$\begin{array}{ll}\hline \text{(M) $T_n = T_{n1} + n$} & \text{(U) $T_n = O(n)$} \\\hline \text{(N) $T_n = T_{n/2} + n$} & \text{(V) $T_n = O(n \log n)$} \\\hline \text{(O) $T_n = T_{n/2} + n \log n$} & \text{(W) $T_n = O(n^2)$} \\\hline \text{(P) $T_n = T_{n1} + \log n$} & \text{(X) $T_n = O(\log^2 n)$} \\\hline \end{array}$$
The solution to the recurrence equation $T(2^k) = 3T(2^{k-1})+1, T(1) =1$ is
The running time of the following algorithm
Procedure $A(n)$
If $n \leqslant 2$ return ($1$) else return $(A( \lceil \sqrt{n} \rceil))$;
is best described by
Consider the following recurrence relation
$T(1)=1$
$T(n+1) = T(n)+\lfloor \sqrt{n+1} \rfloor$ for all $n \geq 1$
The value of $T(m^2)$ for $m \geq 1$ is
The time complexity of the following C function is (assume $n > 0$)
int recursive (int n) {
    if (n == 1)
        return (1);
    else
        return (recursive(n-1) + recursive(n-1));
}
The recurrence equation
$ T(1) = 1$
$T(n) = 2T(n-1) + n, n \geq 2$
evaluates to
Consider a list of recursive algorithms and a list of recurrence relations as shown below. Each recurrence relation corresponds to exactly one algorithm and is used to derive the time complexity of the algorithm.
$$\begin{array}{llll}\hline \text{} & \textbf{Recursive Algorithm } & \text{} & \textbf{Recurrence Relation} \\\hline \text{P} & \text{Binary search} & \text{l.} & \text{$T(n) = T(n-k) +T(k) +cn$} \\\hline \text{Q.} & \text{Merge sort} &\text{ll.} & \text{$T(n) = 2T(n-1) + 1$ }\\\hline\text{R.} & \text{Quick sort}& \text{lll.} & \text{$T(n) = 2T(n/2) + cn$}\\\hline \text{S.} & \text{Tower of Hanoi} & \text{lV.} & \text{$T(n) = T(n/2) + 1$} \\\hline \end{array}$$
Which of the following is the correct match between the algorithms and their recurrence relations?
Suppose $T(n) =2T (\frac{n}{2}) + n$, $T(0) = T(1) =1$
Which one of the following is FALSE?
Let $T(n)$ be a function defined by the recurrence
$T(n) = 2T(n/2) + \sqrt n$ for $n \geq 2$ and
$T(1) = 1$
Which of the following statements is TRUE?
Consider the following recurrence:
$ T(n)=2T\left ( \sqrt{n}\right )+1,$ $T(1)=1$
Which one of the following is true?
Let $x_n$ denote the number of binary strings of length $n$ that contain no consecutive 0s.
Which of the following recurrences does $x_n$ satisfy?
Let $x_n$ denote the number of binary strings of length $n$ that contain no consecutive 0s.
The value of $x_5$ is
When $n = 2^{2k}$ for some $k \geqslant 0$, the recurrence relation
$T(n) = \sqrt{2}\, T(n/2) + \sqrt n$, $T(1) = 1$
evaluates to :
The running time of an algorithm is represented by the following recurrence relation:
$T(n) = \begin{cases}
n & n \leq 3 \\
T(\frac{n}{3})+cn & \text{ otherwise }
\end{cases}$
Which one of the following represents the time complexity of the algorithm?
The recurrence relation capturing the optimal execution time of the $Towers \ of \ Hanoi$ problem with $n$ discs is
Which one of the following correctly determines the solution of the recurrence relation with $T(1) = 1$?
$$T(n)= 2T\left(\frac {n} {2}\right) + \log n$$
Which one of the following is the recurrence equation for the worst case time complexity of the quick sort algorithm for sorting $n$ ( $\geq $ 2) numbers? In the recurrence equations given in the options below, $c$ is a constant.
Let $a_{n}$ represent the number of bit strings of length $n$ containing two consecutive $1$s. What is the recurrence relation for $a_{n}$?
Consider the following recursive C function.
void get(int n) {
    if (n < 1) return;
    get(n-1);
    get(n-3);
    printf("%d", n);
}
If $get(6)$ function is being called in $main()$ then how many times will the $get()$ function be invoked before returning to the $main()$?
The given diagram shows the flowchart for a recursive function $A(n)$. Assume that all statements, except for the recursive calls, have $O(1)$ time complexity. If the worst case time complexity of this function is $O(n^{\alpha})$, then the least possible value (accurate up to two decimal positions) of $\alpha$ is ________.
Flow chart for Recursive Function $A(n)$.
Consider the recurrence function
$$T(n) = \begin{cases} 2T(\sqrt{n})+1, & n>2 \\ 2, & 0 < n \leq 2 \end{cases}$$
Then $T(n)$ in terms of $\Theta$ notation is
Consider the following recurrence relation:
$T\left(n\right)=
\begin{cases}
T\left(\frac{n}{k}\right)+ T\left(\frac{3n}{4}\right)+ n & \text{if } n \geq 2 \\
1& \text{if } n=1
\end{cases}$
Which of the following statements is FALSE?
Consider the following recurrence relation:
$T(n)
= \begin{cases}
2T (\lfloor\sqrt{n}\rfloor)+ \log n & \text{if }n \geq 2 \\
1& \text{if }n = 1
\end{cases}$
Which of the following statements is TRUE?
 $T(n)$ is $O(\log n)$.
 $T(n)$ is $O(\log n. \log \log n)$ but not $O(\log n)$.
 $T(n)$ is $O(\log^{3/2} n)$ but not $O(\log n. \log \log n)$.
 $T(n)$ is $O(\log^{2} n)$ but not $O(\log^{3/2} n)$.
 $T(n)$ is $O(\log^{2} n. \log \log n)$ but not $O(\log^{2} n)$.
Let $T(a, b)$ be the function with two arguments (both nonnegative integral powers of 2) defined by the following recurrence:
 $ T(a, b) = T \left( \frac{a}{2}, b \right) +T\left( a, \frac{b}{2} \right) \quad \quad \text{ if } a, b \geq 2$;
 $ T(a, 1) = T \left( \frac{a}{2}, 1 \right) \quad \quad \quad \text{ if } a \geq 2$;
 $ T(1, b) = T \left( 1, \frac{b}{2} \right) \quad \quad \quad \text{ if } b \geq 2$;
 $T(1, 1)=1$.
What is $T(2^r, 2^s)$?
Which of the following functions, given by their recurrences, grows the fastest asymptotically?
Answers: Recurrence
$T(n) = T(n-1)+n$
$=T(n-2)+(n-1)+n$
$=T(n-3)+(n-2)+(n-1)+n$
$\vdots$
$=T(n-k)+\left[(n-k+1)+ (n-k+2)+\ldots+(n-1) + n\right]$
Recurrence stops at
$n-k=1$
$k=n-1$
$\therefore T(n) = T(1)+\left[2+3+\ldots+n\right]
\\= 1+ 2+3+\ldots + n
\\= \frac{n(n+1)}{2}$
PS: Unless explicitly asked for asymptotic bound, we should always try to get the exact answer.
$T(n) = T(n/2) + 1$
$=T(n/4) + 2$
$= T(n/8) + 3$
$\vdots$
$=T(n/{2^k}) + k.$
Recurrence stops when $2^k \geq n$.
When $2^k = n, k = \lg n$.
So, $T(n) = T(1) + \lg n \\= 1 + \lg n$
PS: Unless explicitly asked for asymptotic bound, we should give exact answers for solutions of recurrence equations.
$T(n) = T(\frac{n}{2}) + \sqrt n$
$\quad = T(\frac{n}{4}) + \sqrt n + \sqrt {(n/2)}$
$\ldots = \sqrt n + \sqrt {(n/2)}+\sqrt {(n/4)}+\sqrt {(n/8)}+\ldots + \sqrt{(n/2^{\lg n-1})} + T(1)$
$ \quad = \sqrt n \left[ 1+\frac{1}{\sqrt 2} + \frac{1}{{\sqrt 2}^2} + \ldots + \frac{1}{{\sqrt 2}^{\lg n}}\right]$
$\quad = \sqrt n \left[ \frac{1 - (\frac{1}{\sqrt 2})^{\lg n+1}}{1-\frac{1}{\sqrt 2}}\right]$ (sum of $\lg n +1$ terms of a G.P. with $a = 1$ and $r = \frac{1}{\sqrt 2})$
$\quad = \sqrt n \left[ \frac{1 - \frac{1}{\sqrt 2\sqrt n}}{1 - \frac{1}{\sqrt 2}}\right]$
$\quad = \frac{\sqrt {2n} -1}{\sqrt 2 - 1}$
$\quad = \left({\sqrt {2n} -1}\right)\left({\sqrt 2 + 1}\right)$
$\quad = \sqrt n \left(2+\sqrt 2\right)-\sqrt 2-1$
$T(n) = \frac{n+1}{n}T(n-1) + 1\qquad \to(1)$
$T(n-1) = \frac{n}{n-1}T(n-2) + 1\qquad \to(2)$
$T(n-2) = \frac{n-1}{n-2}T(n-3) + 1\qquad \to(3)$
Substituting the value of $T(n-1)$ from $(2)$ in $(1)$:
$\implies T(n) = \frac{n+1}{n}\cdot\frac{n}{n-1}T(n-2) + \frac{n+1}{n} + 1$
$\implies T(n) = \frac{n+1}{n-1}T(n-2) + \frac{n+1}{n} + 1$
Now substituting the value of $T(n-2)$ in the above equation:
$\implies T(n) = \frac{n+1}{n-1}\cdot \frac{n-1}{n-2}T(n-3) + \frac{n+1}{n-1} + \frac{n+1}{n} + 1$
$\implies T(n) = \frac{n+1}{n-2}T(n-3) + \frac{n+1}{n-1} + \frac{n+1}{n} + 1$
$\displaystyle{\vdots}$
and so on:
$T(n) = \frac{n+1}{n-k+1}T(n-k) + \frac{n+1}{n} + \frac{n+1}{n-1} + \ldots + \frac{n+1}{n-k+2} + 1$
$T(n) = \frac{n+1}{n-k+1}T(n-k) + (n+1)\left( \frac{1}{n} + \frac{1}{n-1} +\ldots+ \frac{1}{n-k+2}\right) + 1$
Now let $n-k=1$, so $k = n-1$; substitute this value of $k$ in the above equation:
$\implies T(n) = \frac{n+1}{2}T(1) + (n+1)\left( \frac{1}{n} + \frac{1}{n-1} +\ldots+ \frac{1}{3}\right) + 1$
$\implies T(n) = \frac{n+1}{2} + (n+1)\times \left(H_n - \frac{1}{2} - 1\right) + 1$
$\implies T(n) = \frac{n+1}{2} + (n+1)\times H_n - \frac{n+1}{2} - (n+1) + 1$
$\implies \mathbf{T(n) = (n+1)\times H_n - n}$
Now, $H_n \approx \log n+\gamma$
where $\gamma$ is the Euler-Mascheroni constant.
$\mathbf{T(n) = O(n \log n)}$
 The function $F(n)$ is NOT a recursive function. You can't have a recurrence relation for it in the first place!
 $F(n)$ calculates $(n-1)^n$.
The equivalent C++ code is as follows: (You can try it out here: http://ideone.com/w0u4lk)
long F(long n) {
    long F1 = 1;
    if (n == 1) {
        return 3;
    } else {
        for (long i = 1; i <= n; i++) {
            long C = 0;
            // Note: the below for loop body is a single statement
            for (long j = 1; j <= n-1; j++) {
                C = C + 1;
            }
            // At the end of this for loop, C will be = (n-1)
            F1 = F1 * C;
        }
    }
    return F1;
}
It is clear that the inner for loop can be replaced by a single statement as follows:
long F(long n) {
    long F1 = 1;
    if (n == 1) {
        return 3;
    } else {
        for (long i = 1; i <= n; i++)
            F1 = F1 * (n-1);
    }
    return F1;
}
And this calculates $(n-1)^n$
Function F(n)
begin
    F1 ← 1
    if (n = 1) then F ← 3          // if (n == 1) then return 3
    else
        For i = 1 to n do           // outer loop runs n times
        begin
            C ← 0
            For j = 1 to n - 1 do   // inner loop runs n-1 times
            begin
                C ← C + 1
            end                     // means C = n-1
            F1 = F1 * C             // n-1 is multiplied n times, so the answer is (n-1)^n for n >= 2
        end
        F = F1
end
$a_n= a_{n-1} + (n-1)$
$a_n= a_{n-2} + (n-2)+(n-1)$
$a_n= a_{n-3} + (n-3)+(n-2)+(n-1)$
$\vdots$
$a_n= a_{n-n} + (n-n) + (n-(n-1)) + \ldots + (n-3)+(n-2)+(n-1)$
$a_n= 0 + 1 + 2 + \ldots+ (n-3) + (n-2) + (n-1)$
which gives $a_n= \frac{(n-1) \times n}{2}$
It is B: searching only one half of the list leads to $T(n/2)$ plus constant time for comparing and finding the middle element.
Answer: A
According to Master theorem,
$T(n) = aT(\frac{n}{b}) + f(n)$ can be expressed as:
$T(n) = [n^{\log_ba}][T(1) + u(n)]$
where $u(n) = \Theta(h(n))$ where $h(n) = \frac{f(n)}{n^{\log_ba}} = \frac{n}{n^{\log_43}} = n^{1-\log_43}$ as $h(n) = n^r$ where $r>0$.
So, $T(n) = [n^{\log_ba}][T(1) + u(n)] = [n^{\log_43}][T(1) + \Theta(n^{1-\log_43})] = \Theta(n)$.
b) $F(n, m) = \binom{n+m}{m} = \frac{(n+1)(n+2)\cdots(n+m)}{m!}$
Answer is $B$.
Using the Master method (case 1) with $a = 2$ and $b = 2$:
$f(n) = O(n^{1/2})$ and $n^{\log_b a} = n^{\log_2 2} = n$, so $O(n^{1/2}) < O(n^{\log_b a})$,
and hence $T(n) = \Theta\left(n^{\log_2 2}\right) = \Theta(n)$.
$T(n) = 2T(n-1) - 1$
$\qquad =2(2T(n-2)-1) - 1$
$\qquad =2^2T(n-2) - 2 - 1$
$\qquad =2^2(2T(n-3)-1) - 2 - 1$
$\qquad =2^3T(n-3) - 2^2 - 2 - 1$
$\qquad \dots $
$\qquad =2^{n-1}T(n-(n-1)) - \left(2^{n-2} +2^{n-3}+\dots+2^2 + 2 +1\right) $
$\qquad =2^{n-1}\times 2 - \frac{2^{n-1}-1}{2-1} \qquad \because T(1) = 2, S_n = \frac{a\left(r^n -1\right)}{r-1}$
$\qquad =2^n - \left(2^{n-1} -1\right)$
$\qquad =2^{n-1} + 1$
(N) $T(n) = \Theta(n) =O(n)$, third case of Master theorem
($f(n) = n = \Omega \left(n^{\log_b a+ \epsilon}\right) = \Omega \left(n^{\log_2 1 + \epsilon}\right) = \Omega\left(n^{0 + \epsilon}\right) $, satisfied for any positive $\epsilon \leq 1$. Also, $af\left(\frac{n}{b}\right) \leq cf(n) \implies f\left(\frac{n}{2}\right) \leq cf(n) \implies \frac{n}{2} \leq cn$, satisfied for any $c$ with $\frac{1}{2} \leq c < 1$)
(O) $T(n) = \Theta(n \log n) = O (n \log n)$, third case of Master theorem
($f(n) = n \log n =\Omega\left(n^{\log_b a+ \epsilon}\right) = \Omega \left(n^{\log_2 1 + \epsilon}\right) = \Omega\left(n^{0 + \epsilon}\right) $, satisfied for, say, $\epsilon = 0.5$. Also, $af\left(\frac{n}{b}\right) \leq cf(n) \implies f\left(\frac{n}{2}\right) \leq cf(n) \implies \frac{n}{2} \log \frac{n}{2} \leq cn \log n$, satisfied for $c = 0.5$)
(P) Like in (M), here we are adding the log of the first n natural numbers. So,
$T_n = \log 1 + \log 2 + \log 3 + \dots + \log n$
$=\log (1\times 2\times \dots \times n)$
$=\log(n!)$
$=\Theta(n \log n)$ (Stirling's Approximation)
Let $x = 2^k$
$T(x) = 3T\left(\frac{x}{2}\right) + 1$
We can apply Master's theorem case 1 with $a = 3$ and $b = 2$ as $f(x) = 1 = O\left(x^{\log_2 3 - \epsilon} \right), \epsilon > 0 $
So, $T(x) = \Theta \left(x^{\log_2 3}\right) \\= \Theta \left( {2^k}^{\log_2 3} \right)\\= \Theta \left({2^{\log_2 3}}^k \right)\\ = \Theta \left(3^k \right)$
So, only option possible is B.
We can also directly solve as follows:
$T(x) = 3T\left(\frac{x}{2}\right) + 1$
$\\\quad= 9T \left (\frac{x}{4}\right) + 1 + 3$
$ \\\quad \vdots$
$ \\\quad= 3^{\log_2 2^k}\,T(1) + \left( 1 + 3 + 9 + \dots + 3^{\log_2 {2^k}-1}\right)$
$\\\quad \left(\text{recursion depth is }\log_2 x \text{ and } x = 2^k\right)$
$ \\\quad= 3^{k} + \frac{3^{\log_2 {2^k}} - 1}{3-1}$
$ \\\quad \left(\text{sum to } n \text{ terms of a G.P. with } a = 1 \text{ and } r = 3 \right)$
$\\\quad =3^k + \frac{3^k - 1}{2}$
$\\\quad=\frac{3\cdot 3^k - 1}{2}$
$\\\quad=\frac{3^{k+1} -1}{2} $
OR
$T\left(2^k\right) = 3T\left(2^{k-1}\right) + 1$
$ \\\quad= 3^2T\left(2^{k-2}\right) + 1 +3$
$ \quad\vdots$
$ \\\quad= 3^k T\left(2^{k-k}\right) + \left( 1 + 3 + 9 + \dots + 3^{k-1}\right)$
$ \\\quad \left(\text{recursion depth is }k\right)$
$\\\quad= 3^k + \frac{3^{k} - 1} {3-1}$
$\\\quad\left(\text{sum to } n \text{ terms of a G.P. with } a = 1 \text{ and } r = 3 \right)$
$\\\quad=3^k + \frac{3^k -1}{2}$
$\\\quad=\frac{3\cdot 3^k - 1}{2}$
$\\=\frac{3^{k+1} -1}{2} $
The complexity will be the number of times the recursion happens which is equal to the number of times we can take square root of n recursively, till n becomes $2$.
$T(n) = T\left(\lceil \sqrt n \rceil \right) + 1$
$T(2) = 1$
$T\left(2^2\right) = T(2) + 1 = 2$
$T\left(2^{2^2}\right) = T(4) + 1 = 3$
$T\left(2^{2^3}\right) = T(16) + 1 = 4$
So, $T(n) = \lg\lg n + 1 = O\left(\log \log n\right)$
Answer : Option C
$T(m^2) = T(m^2-1) + \left\lfloor\sqrt{m^2} \right\rfloor$
$= T(m^2 - 2) + \left\lfloor\sqrt{m^2 - 1} \right\rfloor +\left\lfloor\sqrt{m^2} \right\rfloor$
$= T(m^2 - 3) + \left\lfloor\sqrt{m^2 - 2} \right\rfloor + \left\lfloor\sqrt{m^2 - 1} \right\rfloor +\left\lfloor\sqrt{m^2} \right\rfloor$
$\vdots$
$= T(1) + \left\lfloor\sqrt{2} \right\rfloor + \left\lfloor\sqrt{3} \right\rfloor + \ldots + \left\lfloor\sqrt{m^2} \right\rfloor$
$= 3 \times 1 + 5 \times 2 + \ldots + \left(2m - 1\right) \times (m-1) +m $
(We are taking the floor of the square root of each number, and between successive squares the counts of numbers follow the series $3,5,7, \dots$: $3$ numbers in $1\ldots3$, $5$ numbers in $4\ldots8$, and so on.)
We can try out options here or solve as shown at end:
Put $m = 5$, $T(25) = 3 \times 1 + 5 \times 2 + 7 \times 3 + 9 \times 4 + 5 = 75$
 $59$
 $75$
 noninteger
 $297.5$
So, answer must be B.
$T(m^2) = 3 \times 1 + 5 \times 2 + \dots + \left(2m - 1\right) \times (m-1) +m$
$= m + \displaystyle \sum_{i=1}^{m-1} \left[ (2i+1)\cdot i \right] $
$= m + \displaystyle \sum_{i=1}^{m-1} \left[2i^2 + i\right]$
$= m + \frac{(m-1) m (2m-1)}{3} + \frac{(m-1)m}{2}$
$= \frac{m}{6} \left(6 + 4m^2 -6m + 2 + 3m - 3\right)$
$= \frac{m}{6} \left(4m^2 -3m + 5\right) $
 Sum of the first $n$ natural numbers $=\frac{n. (n+1)}{2}.$
 Sum of the squares of first $n$ natural numbers $ = \frac{n. (n+1). (2n+1)}{6}.$
Answer is D) $O(2^n)$
int recursive (int n) {
    if (n == 1)              // takes constant time, say 'A' time
        return (1);          // takes constant time, say 'A' time
    else                     // takes T(n-1) + T(n-1) time
        return (recursive(n-1) + recursive(n-1));
}
$T(n) = 2T(n  1) + a$ is the recurrence equation found from the pseudo code. Note: $a$ is a constant $O(1)$ cost that the nonrecursive part of the function takes.
Solving the recurrence by Back Substitution:
$$\begin{align}
T(n) &= 2T(n - 1) + a \\[1em]
T(n - 1) &= 2T(n - 2) + a \\[1em]
T(n - 2) &= 2T(n - 3) + a \\[1em]
&\vdots
\end{align}$$
Thus, we can rewrite the equation for $T(n)$ as follows
$$\begin{align*}
T(n)
&= 2 \Bigl [ 2T(n - 2) + a \Bigr ] + a &= 4T(n - 2) + 2a + a \\[1em]
&= 4 \Bigl [ 2T(n - 3) + a \Bigr ] + 3a &= 8T(n - 3) + 4a + 2a + a \\[1em]
&\vdots \\[1em]
&= 2^k T(n - k) + (2^k - 1) a
\end{align*}$$
On Substituting Limiting Condition
$$
T(1) = 1 \\
\implies n - k = 1 \\
\implies k = n - 1
$$
Therefore, our solution becomes
$$2^{n - 1} + \Bigl ( 2^{n - 1} - 1 \Bigr ) a \\ = O(2^n)$$
$ T(n) = n + 2(n-1) + 2^2 (n-2) + \dots + 2^{n-1}(n- (n-1))$
$=n( 1 + 2 + \dots + 2^{n-1}) - (1\cdot 2 + 2\cdot 2^{2} +3\cdot 2^{3}+\dots+ (n-1)\cdot 2^{n-1})$
$=n(2^n-1) - (n\cdot 2^n-2^{n+1}+2)$
$=2^{n+1}-n -2$
Correct Answer: $A$
T(2) = 4
T(3) = 11
T(4) = 26
T(5) = 57
T(6) = 120
T(7) = 247
So, $$T(n) = 2^{n+1}  n  2$$
Answer is B.
Applying the Master theorem,
$T(n) = \Theta (n \log n)$
So, it cannot be $\Omega(n^2)$.
Hence, answer is (C).
Option $C$ is the answer. It can be done by Master's theorem.
$n^{\log_b a} = n^{\log_2 2} = n$.
$f(n) = \sqrt n = n^{\frac{1}{2}}$.
So, $f(n) = O\left(n^{\log_b a \epsilon}\right)$ is true for any real $\epsilon$, $0 < \epsilon < \frac{1}{2}$. Hence Master theorem Case 1 satisfied,
$$T(n) = \Theta\left(n^{\log_b a}\right) = \Theta (n).$$
$T(n) = 2T\left(n^{\frac{1}{2}}\right)+1$
$ = 2\left(2T\left(n^{\frac{1}{2^2}}\right) + 1\right) + 1 = 4\, T\left(n^{\frac{1}{2^2}}\right) + 3 $
$ = 8\, T\left(n^{\frac{1}{2^3}}\right) + 7 \\ \cdots$
$= 2^k\, T\left(n^{\frac{1}{2^k}}\right) + 2^k - 1 = 2^{\lg \lg n}\, T(2) + 2^{\lg \lg n} - 1 \text{ (with } k = \lg \lg n \text{, proved below)} $
$= \Theta(\lg n) \quad \left(\because 2^{\lg \lg n} = \lg n\right)$
$n^{\frac{1}{2^k}} = 2 \\ \text{(Putting 2 so that we can take log.}\\\text{One more step of recurrence can't change the complexity.)} \\\implies \frac{1}{{2^k}} \lg n = 1 \text{ (taking log on both sides)}\\\implies \lg n = 2^k \\\implies k = \lg \lg n$
So, answer is B, $T(n) = \Theta(\log n)$
$01 \quad 10 \quad 11 \rightarrow 3$
$010 \quad 011 \quad 101 \quad 110 \quad 111 \rightarrow 5$
$0101 \quad 0110 \quad 0111 \quad 1010 \quad 1011 \quad 1101 \quad 1110 \quad 1111 \rightarrow 8$
So, $x_n = x_{n1} + x_{n2}$ (For all the strings ending in $1$, we get two new strings ending in either $0$ or $1$ and for all strings ending in $0$, we get a new string ending in $1$. So, the new set of strings for $x_{n}$, will have exactly $x_{n1}$ strings ending in $1$ and $x_{n2}$ strings ending in $0)$
$x_{5}= 8+5 = 13$
Correct Answer: $D$
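A short iterative check of this recurrence in C (our sketch), using $x_1 = 2$ and $x_2 = 3$ as seeds:

#include <stdio.h>

int x(int n) {
    int a = 2, b = 3;                 // x_1 and x_2
    if (n == 1) return a;
    for (int i = 3; i <= n; i++) {
        int c = a + b;                // x_i = x_{i-1} + x_{i-2}
        a = b; b = c;
    }
    return b;
}

int main(void) {
    printf("%d\n", x(5));             // prints 13
    return 0;
}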
$x_2 = 01, 10, 11$ (total $3$)
$x_3 = 010, 011, 101, 110, 111$ (total $5$)
Here, $x_3 = x_2 + x_1$
$x_4 = 3+5 = 8$
$x_5 = 5+8 = 13$
No option matching here.
$T(n) = \sqrt{2}\, T\left(\frac{n}{2}\right) + \sqrt n$
$= {\sqrt 2}^2 T\left(\frac{n}{2^2} \right) +\sqrt {2} \sqrt {\frac{n}{2}} + \sqrt n$
$\vdots$
$= {\sqrt 2}^ {\lg n} T(1) +\lg n \sqrt n$
$=\sqrt n + \lg n \sqrt n$
$= \sqrt n \left( \lg n + 1\right)$
If we use the Master theorem we get option B. But one must know that the Master theorem gives an asymptotic bound, not an EXACT value, and the question here explicitly says "evaluates to".
$a=1, b=3, \log_b a=0$
So $n^{\log_b a} = n^0=1$
$f(n)=n$
So, $f(n)=\Omega\left(n^{\log_b a + \epsilon}\right)$ for any $0 < \epsilon \leq 1$
To check the regularity condition of Master theorem Case 3, we need some $c<1$ such that
$a\,f(n/3)\leq c\,f(n)$
Here $\frac{n}{3}\leq c\,n$ holds with $c=\frac{1}{3}$
So, using Case 3 of the Master theorem,
$T(n)= \Theta(f(n)) = \Theta(n)$
answer is A.
Recurrence relation for Towers of Hanoi is
$T(1) = 1$
$T(n) = 2 T( n-1 ) +1$
So Answer should be (D)
$f(n) = \log n$
$a = 2, b = 2 \implies n^{\log_b a} = n$
So, $f(n) = \log n = O\left(n^{1-\epsilon} \right)$; we can take any $\epsilon$ in $(0, 1)$, for example $0.5$, which gives $\log n = O(\sqrt n)$, whose proof is given here: http://math.stackexchange.com/questions/145739/provethatlognosqrtn
So, Master theorem Case 1, and answer will be $O\left(n^{\log_2 2}\right) = O(n)$
Alternate way:
$T(1) = 1 \\ T(2) = 2T(1) + \log 2 = 3 = 3n - 3\\T(4) = 2T(2) + \log 4 = 8 = 3n - 4 \\T(8) = 2T(4) + \log 8 = 19 = 3n - 5\\ T(16) = 2T(8) + \log 16 = 42 = 3n - 6$
In general $T(n) = 3n - \log n - 2$; the terms being subtracted grow more slowly than the first term. So, we can say $T(n) = O(n)$.
Correct Answer: $A$
B.
Worst case for quick sort happens when $1$ element is on one list and $n-1$ elements on the other list.
Counting the number of bit strings NOT containing two consecutive $1$'s. $($It is easy to derive a recurrence relation for the NOT case as shown below$)$
$0 \quad 1$
$00 \quad 01 \quad 10 \;-\; 3$ $($append both $0$ and $1$ to any string ending in $0$, and append $0$ to any string ending in $1$$)$
$000 \quad 001 \quad 010 \quad 100 \quad 101 \;-\; 5$ $($all strings ending in $0$ give two strings and those ending in $1$ give $1$ string$)$
$0000 \quad 0001 \quad 0010 \quad 0100 \quad 0101 \quad 1000 \quad 1001 \quad 1010 \;-\; 8$
$\vdots$
$a_n' = a_{n-1}' + a_{n-2}' $ $($where $a_n'$ denotes the number of bit strings of length $n$ NOT containing two consecutive $1$s, so $a_n = 2^n - a_n'$ counts those containing two consecutive $1$s$)$
$2^n - a_n = (2^{n-1} - a_{n-1}) + (2^{n-2} - a_{n-2})$
$a_n= 2^{n-2}(4 - 2 - 1) + a_{n-1} +a_{n-2}$
$a_n= a_{n-1} + a_{n-2} + 2^{n-2}$
So, option A is the correct choice.
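The derived recurrence can be cross-checked by brute-force enumeration; a sketch, with the base cases $a_1 = 0$ and $a_2 = 1$ read off directly:

#include <stdio.h>

/* Cross-check a_n = a_(n-1) + a_(n-2) + 2^(n-2) against a direct count of
   n-bit strings that contain two consecutive 1s. */
static int has_consecutive_ones(unsigned s, int n) {
    for (int i = 0; i + 1 < n; i++)
        if (((s >> i) & 3u) == 3u) return 1; /* found adjacent 1 bits */
    return 0;
}

int main(void) {
    long long a[17];
    a[1] = 0; a[2] = 1; /* "11" is the only such 2-bit string */
    for (int n = 3; n <= 16; n++) {
        long long brute = 0;
        for (unsigned s = 0; s < (1u << n); s++)
            brute += has_consecutive_ones(s, n);
        a[n] = a[n - 1] + a[n - 2] + (1LL << (n - 2));
        printf("n=%2d  recurrence=%lld  brute force=%lld\n", n, a[n], brute);
    }
    return 0;
}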
Answer : Option B
$T(n) = T(n-1) + T(n-3) + 2$, where $T(n)$ denotes the number of recursive calls made for input $n$, and $2$ counts the two direct recursive calls.
$T(n \leq 0) = 0$
$T(1) = 2$
$T(2) = 4$
$T(3) = 6$
$T(4) = 10$
$T(5) = 16$
$T(6) = 24$
So, the answer is $24 + 1$ (the call from main) $= 25$.
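The count is easy to verify by evaluating the recurrence directly; a minimal sketch:

#include <stdio.h>

/* Evaluate T(n) = T(n-1) + T(n-3) + 2 with T(n <= 0) = 0, as derived above. */
static long long T(int n) {
    if (n <= 0) return 0;
    return T(n - 1) + T(n - 3) + 2;
}

int main(void) {
    for (int n = 1; n <= 6; n++)
        printf("T(%d) = %lld\n", n, T(n));
    printf("answer for n = 6, including the call from main: %lld\n",
           T(6) + 1); /* 24 + 1 = 25 */
    return 0;
}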
By calling $A(n)$ we get $A(n/2)$ $5$ times,
$A(n)=5A(n/2) + O(1)$
Hence, by applying the Master theorem, Case 1 applies: $a > b^k$, since $5 > 2^1$, giving
$A(n) = \Theta\left(n^{\log_2 5}\right)$
Thus the value of $\alpha$ will be $\log_2 5 \approx 2.32$
$T(n)=2T({\sqrt{n}})+1$
Put, $n=2^m$
$T(2^m)=2T(2^{m/2})+1$
put, $T(2^m) =s(m)$
$s(m)=2s(m/2)+1$
Using case 1 of master method ,
$=\Theta(m) = \Theta(\log n)$
https://gateoverflow.in/1829/gate200651isro201634?show=37791#c37791
Correct Answer: $B$
Considering when $k=3$
$T(n)=T\left(\frac{n}{3}\right)+T\left(\frac{3n}{4}\right)+n$
Below is the recursion tree
Since the combine work, according to the recurrence, is linear in the size of the subproblem,
at level 1 the total combine work is $n$.
Combine work at level 2 $=\frac{n}{3}+\frac{3n}{4}=\frac{13n}{12}$
At level 3, combine work $=\frac{n}{9}+\frac{3n}{12}+\frac{3n}{12}+\frac{9n}{16}=\left(\frac{13}{12}\right)^2 n$
So, at level $k$, when the recursion bottoms out at $T(1)$, the combine work will be $\left(\frac{13}{12}\right)^{k-1} n$
The left height of the tree is determined by when the subproblem $\frac{n}{3}$ bottoms out, i.e., when $\frac{n}{3^k}=1$,
which gives $k=\log_3 n$
The right height of the tree is $\log_{\frac{4}{3}}n$
For the purpose of an upper bound, consider the right height and sum up the total combine work from the root till the leaf level:
$n+\frac{13}{12}n+\left(\frac{13}{12}\right)^2 n+\ldots+\left(\frac{13}{12}\right)^{\log_{\frac{4}{3}}n-1}n$
$=n\left [ 1+\frac{13}{12}+\ldots +\left(\frac{13}{12}\right)^{\log_{\frac{4}{3}}n-1} \right ]$
$=n \left[ \frac{\left(\frac{13}{12}\right)^{\log_{\frac{4}{3}}n}-1}{\frac{13}{12}-1} \right]$
Using property
$a^{\log_bc}=c^{\log_ba}$
and ignoring denominator as constant term for upper bound
$n\cdot\left [ n^{\log_{\frac{4}{3}}\frac{13}{12}} -1\right ]=n^{1.27}$ (approximately)
Option (A) : $O(n^{1.5})$ is surely an upper bound when k=3. Correct option.
Option (B): $O(n\log n)$
this means $n \cdot n^{0.27} \leq c \cdot n \log n$
$n^{0.27}\leq c\log n$
This is clearly false, as polynomials grow faster than polylogarithms.
Option (B) can never be an upper bound; a lower bound is possible when $k=3.$
When $k=4.$
$T(n)=T(\frac{n}{4})+T(\frac{3n}{4})+n$
If we draw recursion tree for this, at every level cost comes to be n and there would be maximum $\log_{\frac{4}{3}}n$ levels
So, in this case $T(n)=O(n\log n)$. Option (C) correct.
When $k=5,$
$T(n)=T(\frac{n}{5})+T(\frac{3n}{4})+n$
Its recursion tree would look like the one below:
Level 1 Total cost $=n$
Level 2 total cost $=\frac{19n}{20}$
Level 3 total cost $=(\frac{19}{20})^2.n$
Level $k$ cost $=\left(\frac{19}{20}\right)^{k-1} n$
Number of levels on right $=\log_{\frac{4}{3}}n$
Number of levels on left $=\log_5n$
For the purpose of upper bound I'll consider right height.
Computing total cost from root till leaf
$n\left[1+\frac{19}{20}+\ldots +\left(\frac{19}{20}\right)^{\log_{\frac{4}{3}}n-1} \right]$
$=n\left[ \frac{1-\left(\frac{19}{20}\right)^{\log_{\frac{4}{3}}n}}{1-\frac{19}{20}}\right]$
Ignoring denominator as we are only interested in some function of $n.$
$=n\left[ 1-\left(\frac{19}{20}\right)^{\log_{\frac{4}{3}}n} \right]$
$=n\left[1-n^{\log_{\frac{4}{3}}\frac{19}{20}} \right]$
$=n\left[1-n^{-0.18}\right]$
$=n-n^{0.82}$
So, option(E) is correct.
Option(D) which says $O(n\log n)$ might not look convincing but
$n-n^{0.82} \leq c\cdot n\log n$
$1-\frac{1}{n^{0.18}} \leq c\log n$
For $c=1, n_0=2$ this satisfies the big-O definition.
Option (D) is also correct.
Answer: option(B)
Let $n=2^k$
$T\left(2^k\right)= 2T\left(2^{k/2}\right) + k $
Let $T\left(2^k\right)=S(k) \implies T\left(2^{k/2}\right) = S(k/2)$
$S(k)=2S(k/2)+k$
This gives $S(k) = \Theta(k \log k)$ (Master theorem case 2)
$ = \Theta(\log n \log \log n)$
So, the answer is B.
Value of $T(2^r,2^s)$ is nothing but the number of leaf nodes in its recursion tree. The reason for this is that there is only one base condition $T(1,1)$ which returns $1$ which are all ultimately added. Here is an example recursion tree for $T(2^2,2^3)$.
We will try to use some combinatorial technique to count the number of leaf nodes. One way of counting the number of leaf nodes in the recurrence tree is by counting all possible paths that one can take from the root to the leaf node. The reason for doing this would be clear in a moment. This idea would be used to establish a link with a very common combinatorial problem.
Claim: If we consider a grid of size $r \times s$ then the possible paths from $(r,s)$ to $(0,0)$ in that grid such that we can either move backwards or downward is nothing but all possible paths in recurrence tree from the root node to leaf node.
This happens to be true because the arguments of the recurrence relation are powers of two, and at each level of the recurrence tree, the power of one of the arguments gets reduced by one.
$$\begin{align} T(2^r, 2^s) &= T(2^{r1},2^s) + T(2^r, 2^{s1}) \\ T(1,2^s) &= T(1,2^{s1}) \\ T(2^r,1) &= T(2^{r1},1) \\ T(1,1) &= 1 \\ \end{align}$$
Continuing from our previous example here is a grid of size $2 \times 3$ with two valid paths highlighted.
The number of valid paths possible in this grid gives us the count of the leaf nodes in the recurrence tree. Interestingly, this problem, when interpreted in exactly the opposite direction, happens to be a very well known problem in combinatorics.
In a grid of size $r \times s$, count the number of possible paths from $(0,0)$ to $(r,s)$ such that we can either move forward(F) or upward(U).
So there are $\binom{r+s}{r}$ many different valid paths on this grid to reach $(r,s)$ from $(0,0)$ or vice versa. And in turn, there are same numbers of leaf nodes in the recursion tree. Therefore, we can conclude with  $$T(2^r,2^s) = \binom{r+s}{r}$$
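The identity can be spot-checked by evaluating the recurrence on the exponents. A sketch, where t(r, s) mirrors $T(2^r,2^s)$ and binom is a hypothetical helper computing $\binom{n}{k}$:

#include <stdio.h>

/* t(r, s) mirrors T(2^r, 2^s): each call reduces one exponent by one,
   and the chains T(1, 2^s) and T(2^r, 1) collapse to T(1,1) = 1. */
static long long t(int r, int s) {
    if (r == 0 || s == 0) return 1;
    return t(r - 1, s) + t(r, s - 1);
}

/* binom(n, k): exact binomial coefficient via the multiplicative formula. */
static long long binom(int n, int k) {
    long long c = 1;
    for (int i = 1; i <= k; i++)
        c = c * (n - k + i) / i; /* division is exact at every step */
    return c;
}

int main(void) {
    for (int r = 0; r <= 6; r++)
        for (int s = 0; s <= 6; s++)
            if (t(r, s) != binom(r + s, r)) {
                printf("mismatch at r=%d s=%d\n", r, s);
                return 1;
            }
    printf("t(2,3) = %lld = C(5,2)\n", t(2, 3)); /* prints 10 */
    return 0;
}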
(a) $T(n) = \Theta(n^2)$
(b) $T(n) = \Theta(n^2)$
(c) $T(n) = \Theta(n^2 \log n)$
(d) $T(n) = \Theta(n^2)$
C is growing fastest.
Consider the following program written in pseudocode. Assume that $x$ and $y$ are integers.
Count (x, y) {
    if (y != 1) {
        if (x != 1) {
            print("*");
            Count (x/2, y);
        } else {
            y = y - 1;
            Count (1024, y);
        }
    }
}
The number of times that the $print$ statement is executed by the call $Count(1024, 1024)$ is _____
Answers: Recursion
Count (x, y) {
    if (y != 1) {
        if (x != 1) {
            print("*");
            Count (x/2, y);
        } else {
            y = y - 1;
            Count (1024, y);
        }
    }
}
Here, for each value of $y$, print("$*$") runs $10$ times as $x$ halves from $1024$ down to $2$. Once $x$ reaches $1$, $Count(1024, y-1)$ is called. Variable $y$ can take values $[2 \ldots 1024]$, i.e., $1023$ values in total. So,
The total number of times '$*$' will be printed $= \big($number of times '$*$' printed per $y$ value$\big) \times \big($number of values $y$ takes$\big)$.
Number of times '$*$' printed $= 10 \times 1023 = 10230$
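The count is easy to confirm by running the procedure with the print replaced by a counter; a C sketch (the fully nested call chain is roughly $11{,}000$ frames deep, which is fine on a typical stack):

#include <stdio.h>

static long long stars = 0; /* stands in for print("*") */

static void Count(int x, int y) {
    if (y != 1) {
        if (x != 1) {
            stars++;
            Count(x / 2, y);
        } else {
            Count(1024, y - 1);
        }
    }
}

int main(void) {
    Count(1024, 1024);
    printf("%lld\n", stars); /* prints 10230 */
    return 0;
}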
Consider the following program that attempts to locate an element $x$ in an array $a[ ]$ using binary search. Assume $N > 1$. The program is erroneous. Under what conditions does the program fail?
var i, j, k: integer; x: integer;
    a: array [1..N] of integer;
begin
    i := 1; j := N;
    repeat
        k := (i + j) div 2;
        if a[k] < x then i := k else j := k
    until (a[k] = x) or (i >= j);
    if (a[k] = x) then
        writeln ('x is in the array')
    else
        writeln ('x is not in the array')
end;
The average number of key comparisons required for a successful search for sequential search on $n$ items is
Consider the following algorithm for searching for a given number $x$ in an unsorted array $A[1..n]$ having $n$ distinct values:
 Choose an $i$ at random from $1..n$
 If $A[i] = x$, then Stop else Goto 1;
Assuming that $x$ is present in $A$, what is the expected number of comparisons made by the algorithm before it terminates?
Consider the following C program that attempts to locate an element $x$ in an array $Y[ \ ]$ using binary search. The program is erroneous.
f (int Y[10], int x)
{
    int i, j, k;
    i = 0; j = 9;
    do {
        k = (i + j) / 2;
        if (Y[k] < x) i = k; else j = k;
    } while ((Y[k] != x) && (i < j));
    if (Y[k] == x) printf("x is in the array");
    else printf("x is not in the array");
}
On which of the following contents of $Y$ and $x$ does the program fail?
 $Y$ is $[1 \ 2 \ 3 \ 4 \ 5 \ 6 \ 7 \ 8 \ 9 \ 10]$ and $x < 10 $
 $Y$ is $[1 \ 3 \ 5 \ 7 \ 9 \ 11 \ 13 \ 15 \ 17 \ 19]$ and $x < 1 $
 $Y$ is $[2 \ 2 \ 2 \ 2 \ 2 \ 2 \ 2 \ 2 \ 2 \ 2]$ and $x > 2$
 $Y$ is $[2 \ 4 \ 6 \ 8 \ 10 \ 12 \ 14 \ 16 \ 18 \ 20]$ and $ 2 < x < 20$ and $x$ is even
Consider the following C program that attempts to locate an element $x$ in an array $Y[ \ ]$ using binary search. The program is erroneous.
f (int Y[10], int x)
{
    int i, j, k;
    i = 0; j = 9;
    do {
        k = (i + j) / 2;
        if (Y[k] < x) i = k; else j = k;
    } while ((Y[k] != x) && (i < j));
    if (Y[k] == x) printf("x is in the array");
    else printf("x is not in the array");
}
The correction needed in the program to make it work properly is
 Change line 6 to: if $(Y[k] < x) i = k + 1$; else $j = k - 1$;
 Change line 6 to: if $(Y[k] < x) i = k - 1$; else $ j = k + 1$;
 Change line 6 to: if $(Y[k] < x) i = k$; else $j = k$;
 Change line 7 to: } while $((Y[k] == x) \&\& (i < j))$ ;
Let $A$ be an array of $31$ numbers consisting of a sequence of $0$'s followed by a sequence of $1$'s. The problem is to find the smallest index $i$ such that $A\left [i \right ]$ is $1$ by probing the minimum number of locations in $A$. The worst case number of probes performed by an optimal algorithm is ____________.
Suppose you are given an array $A$ with $2n$ numbers.
The numbers in odd positions are sorted in ascending order, that is, $A[1] \leq A[3] \leq \ldots \leq A[2n - 1]$.
The numbers in even positions are sorted in descending order, that is, $A[2] \geq A[4] \geq \ldots \geq A[2n]$.
What is the method you would recommend for determining if a given number is in the array?
 Sort the array using quicksort and then use binary search.
 Merge the sorted lists and perform binary search.
 Perform a single binary search on the entire array.
 Perform separate binary searches on the odd positions and the even positions.
 Search sequentially from the end of the array.
Consider the following three version of the binary search program. Assume that the elements of type $T$ can be compared with each other; also assume that the array is sorted.
i, j, k : integer; a : array [1....N] of T; x : T;

Program 1 :
i := 1; j := N;
repeat
    k := (i + j) div 2;
    if a[k] < x then i := k else j := k
until (a[k] = x) or (i > j)

Program 2 :
i := 1; j := N;
repeat
    k := (i + j) div 2;
    if x < a[k] then j := k - 1;
    if a[k] < x then i := k + 1;
until i > j

Program 3 :
i := 1; j := N;
repeat
    k := (i + j) div 2;
    if x < a[k] then j := k else i := k + 1
until i > j
A binary search program is called correct provided it terminates with $a[k] = x$ whenever such an element exists, or it terminates with $a\left [ k \right ]\neq x$ if there exists no array element with value $x$. Which of the following statements is correct?
Answers: Searching
The code is wrong here
k=(i+j) / 2; if (a[k] < x) then i = k; else j = k;
The (correct) code should be:
k=(i+j) / 2; if (a[k] < x) then i = k + 1; else j = k - 1;
We can try an example with the given code in question
Let the array be $\qquad a[1,2,3,4,5,6,7,8,9,10]$
Index numbers$\qquad \quad1,2,3,4,5,6,7,8,9,10$
Let $x=10$; now run the code;
Initially $i = 1, j=10$;
first time: $k = (i+j)/2 = 11/2 = 5$ (because of integer division); $a[5] < x$, so $i = 5$
second time: $k = (5+10)/2 = 15/2 = 7$, so $i = 7$
third time: $k = (7+10)/2 = 17/2 = 8$, so $i = 8$
fourth time: $k = (8+10)/2 = 18/2 = 9$, so $i = 9$
fifth time: $k = (9+10)/2 = 19/2 = 9$, so $i = 9$
sixth time: $k = 9$, so $i = 9$, and so on forever
The loop never terminates: $a[k] = a[9] = 9 \neq 10$ and $i < j$ always, so we get an infinite loop.
For terminating the loop, it should be $i = k + 1$ instead of $i =k$ and $j = k  1$ instead of $j = k$;
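For concreteness, here is a minimal corrected version in C with the $i = k + 1$ / $j = k - 1$ fix applied; a sketch using the question's 1-based indexing:

#include <stdio.h>

/* Corrected binary search over a[1..n]; returns the index of x, or -1. */
static int search(const int a[], int n, int x) {
    int i = 1, j = n;
    while (i <= j) {
        int k = (i + j) / 2;
        if (a[k] == x) return k;
        if (a[k] < x) i = k + 1; /* discard a[i..k] */
        else          j = k - 1; /* discard a[k..j] */
    }
    return -1; /* x is not in the array */
}

int main(void) {
    int a[] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}; /* a[0] unused */
    printf("%d\n", search(a, 10, 10)); /* prints 10; no infinite loop */
    return 0;
}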
$= 1 \times$ Probability of first element being $x + 2 \times$ Probability of second element being $x + \ldots +n \times$ Probability of last element being $x$.
$= \frac{1}{n} + \frac{2}{n} + \frac{3}{n} +\ldots +\frac{n}{n} $
$ = \frac{\left ( \frac{n\times (n+1)}{2} \right ) } {n} $
$ = \frac{n+1}{2}$
Correct Answer: $C$
Expected number of comparisons $(E) = 1 \times $ Probability of find on first comparison $+ 2 \times $Probability of find on second comparison $+ ... + i \times $ Probability of find on ith comparison $+ ...$
$= 1 \times \frac{1}{n} + 2 \times \frac{n-1}{n^2} + 3 \times \frac{ (n-1)^2}{n^3} + \dots $
$ = \frac{1/n} {1 - \frac{n-1}{n} } + \frac{(n-1)/n^2}{\left( {1- \frac{n-1}{n} }\right)^2} \\ \left( \text{sum to infinity of an arithmetico-geometric series with }\\a = \frac{1}{n}, r = \frac{n-1}{n} \text{ and } d = \frac{1}{n} \right) \\= 1 + n - 1 = n$
Reference: https://en.wikipedia.org/wiki/Arithmeticogeometric_sequence
Or we can also do,
$E= 1 \times \frac{1}{n} + 2 \times \frac{n-1}{n^2} + 3 \times \frac{ (n-1)^2}{n^3} + \dots $
$E\, \frac{n-1}{n} = \frac{n-1}{n^2} + 2 \times \frac{(n-1)^2}{n^3} + 3 \times \frac{(n-1)^3}{n^4} + \dots $
$ E - E\,\frac{n-1}{n} = \frac{1}{n} + \frac{n-1}{n^2} + \frac{ (n-1)^2}{n^3} + \dots $
$E \cdot \frac{1}{n} = \frac{1/n}{1 - \frac{n-1}{n}} = 1 \left( \text{sum to infinity of a GP with } a = \frac{1}{n} \text{ and } r = \frac{n-1}{n} \right) \\ \implies E = n$
Correct Answer: $A$
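The result $E = n$ can also be sanity-checked by simulation; a Monte Carlo sketch, with the array size and trial count chosen arbitrarily:

#include <stdio.h>
#include <stdlib.h>

/* Repeatedly probe uniformly random indices until the (unique) position
   holding x is hit; average the number of probes over many trials. */
int main(void) {
    const int n = 10, trials = 1000000;
    long long total = 0;
    srand(42); /* fixed seed, purely illustrative */
    for (int t = 0; t < trials; t++) {
        int target = rand() % n; /* position of x in A[1..n] */
        int probes = 0;
        do { probes++; } while (rand() % n != target);
        total += probes;
    }
    printf("average probes = %.3f (theory: %d)\n",
           (double)total / trials, n);
    return 0;
}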
for Q.84
When it is option C, the control will continue to iterate with $i=8$ and $j=9$:
again and again $i$ will be assigned $k$, which itself equals $8$, as $\frac{8+9}{2}$, being stored in an integer type variable, evaluates to $8$.
For option A, with $x=9$, $k$ will take the following values:
 $4$
 $6$
 $7$
 $8  y[8] = 9$, $x$ found
For option D, with $x=10$, $k$ will take the following values:
 $4$, $y[4] = 10$, $x$ found
Answer should be A.
if( Y[k] < x) then i = k + 1;
If the element we are searching for is greater, the search continues in the upper half of the array;
otherwise $\mathbf{j = k - 1}$;
in the lower half.
Take a few cases into consideration, i.e.
 All elements are same
 Increasing order with no repetition
 Increasing order with repetition.
At each stage we compare with the element at index $\left\lfloor\frac{low + high}{2}\right\rfloor$; if it is $1$ we check the left half and if it is $0$ we check the right half.
Total worst case no. of probes is $\lceil \log_{2}{31}\rceil=5.$
So, answer is $5.$
It should not take more than $5$ probes, as the simulation below confirms.
#include <bits/stdc++.h>
using namespace std;
#define N 100

void display(bool a[]) {
    int i = 0;
    while (i < 31) printf("%2d ", a[i++]);
}

void assign(bool a[], int zeros) {
    for (int i = 0; i < 31; i++)
        (i < zeros) ? a[i] = 0 : a[i] = 1;
}

int main() {
    srand(time(NULL));
    bool a[31];
    // header row of indices
    for (int i = 0; i < 31; i++) printf("%2d ", i);
    printf("\n\n");
    int max_probes = 0;
    for (int iteration = 1; iteration <= N; iteration++) {
        int zeros = rand() % 32;   // random split point between 0s and 1s
        assign(a, zeros);
        sort(a, a + 31);
        int low, high, mid, ans;
        vector<int> seq;           // indices probed in this run
        low = 0;
        high = 31;
        while (low < high) {
            mid = low + (high - low) / 2;
            seq.push_back(mid);
            if (a[mid] == 0) {
                low = mid + 1;
                ans = low;
                if (a[mid + 1] == 1) break;
            } else {
                high = mid;
                ans = high;
                if (mid > 0 && a[mid - 1] == 0) break;
            }
        }
        display(a);
        printf("  probes=%d ", (int)seq.size());
        for (auto e : seq) printf("%d ", e);
        printf("  at = %dth\n", ans);
        max_probes = max(max_probes, (int)seq.size());
        seq.clear();
    }
    printf("%d\n", max_probes);
}
Option D is the correct answer.
We can simply use clever indexing to binary search the element in the odd positions, and in the even positions separately.
This will take $O(\log n)$ time and $O(1)$ space in the worst case.
A: Sorting using Quicksort will take $O(n^2)$ time.
B: Merging will take $O(n)$ time and $O(n)$ space.
C: Binary search only works on a sorted array.
E: Sequential search will take $O(n)$ time.
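A sketch of option D's approach: binary search over the odd positions (ascending) and, if that fails, over the even positions (descending). The function name and the 0-based index mapping are illustrative:

#include <stdio.h>

/* A has n2 = 2n elements; 1-based odd positions map to 0-based indices
   0, 2, 4, ... and even positions to 1, 3, 5, ... */
static int find(const int A[], int n2, int x) {
    int lo = 0, hi = (n2 - 1) / 2;       /* odd positions: ascending */
    while (lo <= hi) {
        int mid = (lo + hi) / 2, v = A[2 * mid];
        if (v == x) return 2 * mid;
        if (v < x) lo = mid + 1; else hi = mid - 1;
    }
    lo = 0; hi = n2 / 2 - 1;             /* even positions: descending */
    while (lo <= hi) {
        int mid = (lo + hi) / 2, v = A[2 * mid + 1];
        if (v == x) return 2 * mid + 1;
        if (v > x) lo = mid + 1; else hi = mid - 1;
    }
    return -1;
}

int main(void) {
    /* odd positions hold 2,5,7,9 (ascending); even hold 8,6,4,1 (descending) */
    int A[] = {2, 8, 5, 6, 7, 4, 9, 1};
    printf("%d %d %d\n", find(A, 8, 7), find(A, 8, 6), find(A, 8, 3));
    /* prints 4 3 -1 */
    return 0;
}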
The first program won't work if the array has repeated elements; it may go into an infinite loop. To make it work properly we have to make the following changes: $j=k-1$ and $i=k+1$.
For the second program, the $a[k]==x$ condition is missing, so it is wrong.
The third program is also wrong, as $j \neq k-1$ and the condition $a[k]==x$ is missing.
So, answer is E.
Let $G=(V,E)$ be a DIRECTED graph, where each edge $\large e$ has a positive weight $\large\omega(e),$ and all vertices can be reached from vertex $\large s.$ For each vertex $\large v,$ let $\large \phi(v)$ be the length of the shortest path from $\large s$ to $\large v.$ Let $G'=(V,E)$ be a new weighted graph with the same vertices and edges, but with the edge weight of every edge $\large e=(u\to v)$ changed to $\large \omega'(e)=\omega(e)+\phi(v)-\phi(u).$ Let $P$ be a path from $\large s$ to a vertex $\large v,$ and let $\large \omega(P)=\sum_{e\in P} \omega_{e},$ and $\large \omega'(P)=\sum_{e\in P} \omega'_{e}.$
Which of the following options is NOT NECESSARILY TRUE ?
 If $P$ is a shortest path in $G,$ then $P$ is a shortest path in $G'.$
 If $P$ is a shortest path in $G',$ then $P$ is a shortest path in $G.$
 If $P$ is a shortest path in $G,$ then $\omega'(P)=2\times \omega(P).$
 If $P$ is NOT a shortest path in $G,$ then $\omega'(P)<2\times \omega(P).$
 All of the above options are necessarily TRUE.
Answers: Shortest Path
Answer should be $E,$ i.e., all statements $A,B,C,D$ are correct.
Explanation:
$\omega\left ( P \right ) =\sum_{e} \omega_e$
$\omega'\left ( P \right ) =\sum_{e} \omega_e'$
$\qquad =\sum_{e} (\omega (e) + \phi(v) - \phi(u))$
$\qquad =\sum \omega(e) + \sum ( \phi(v) - \phi(u) )$
Given that $P$ is a path from $s$ to $v.$ Let $P$ be comprised of $S \rightarrow V_1 \rightarrow V_2 \rightarrow V_3 \cdots \rightarrow V_n \rightarrow V $
$\require{cancel} \omega'\left ( P \right ) =\sum_{e} \omega(e) + \phi(v) - \cancel {\phi(v_n)}$
$\qquad +\cancel {\phi(v_n)} - \cancel{\phi(v_{n-1})}$
$\qquad + \cdots$
$\qquad \vdots$
$\qquad +\cancel {\phi(v_1)} - \phi(s) $
$\omega'(P)=\sum \omega(e) + \phi(v) - \phi(s)$
Since, $s$ is the source (in the context of the shortest path) $\phi (s) = 0.$
$ \therefore \omega'(P) = \sum \omega(e) + \phi(v) = \omega(P) + \phi(v)$
If $P$ is the shortest path in $G,$ $\omega'(P) = \sum \omega(e) + \phi(v)$
$\qquad = \phi(v) + \phi(v)$
$\qquad = 2. \phi(v)$
$\qquad = 2. \sum_{e} \omega_e$
$\qquad =2. \omega(P)$
(C is correct)
$\omega'(P) = \omega(P) + \phi(v)$
$\qquad < \omega(P) + \omega(P) \quad$ (if $P$ is not a shortest path, then $\phi(v) < \omega(P)$)
$\qquad = 2\,\omega(P)$
or $\omega'(P) < 2\omega(P)$
(D is correct)
If $P$ is the shortest path in $G,$ (Say from $s$ to $v)$
 $\omega(P) = \phi(v)$
 $\omega'(P) = \omega(P) + \phi(v)$
Suppose, for contradiction, that $P$ is not a shortest path in $G'$, i.e., there is a path $P'$ in $G'$ with
$\omega'(P') < \omega'(P)$
$\Rightarrow \omega(P') + \phi(v) < \omega(P) + \phi(v)$
$\Rightarrow \omega(P') < \omega(P) \Rightarrow$ Contradiction, as we assumed $P$ to be a shortest path in $G$
$\therefore$ $P$ is the shortest path in $G'$ as well.
(A is correct)
Similarly say $P$ is the shortest path in $G'$ $(s$ to $v)$
$\omega'(P) = \omega(P) + \phi(v)$
We claim $P$ is a shortest path in $G$ as well. By contradiction, let us assume some path $P'$ in $G$ is strictly shorter:
$\omega(P') < \omega(P)$
$\omega(P') + \phi(v) < \omega(P) + \phi(v)$
$\omega'(P') < \omega'(P) \Rightarrow$ Contradiction
Hence what we assumed was wrong, and $P$ is a shortest path in $G$
(B is correct)
$\therefore$ E is the answer
Let $P$ be a quicksort program to sort numbers in ascending order. Let $t_{1}$ and $t_{2}$ be the time taken by the program for the inputs $\left[1 \ 2 \ 3 \ 4\right]$ and $\left[5 \ 4 \ 3 \ 2 \ 1\right]$, respectively. Which of the following holds?
An input file has $10$ records with keys as given below:
$25\quad 7\quad 34\quad 2\quad 70\quad 9\quad 61\quad 16\quad 49\quad 19$
This is to be sorted in nondecreasing order.
 Sort the input file using QUICKSORT by correctly positioning the first element of the file/subfile. Show the subfiles obtained at all intermediate steps. Use square brackets to demarcate subfiles.
 Sort the input file using 2way MERGESORT showing all major intermediate steps. Use square brackets to demarcate subfiles.
Choose the correct alternatives (More than one may be correct).
The complexity of comparison based sorting algorithms is:
Choose the correct alternatives (more than one may be correct) and write the corresponding letters only:
Following algorithm(s) can be used to sort $n$ numbers in the range $[1\ldots n^3]$ in $O(n)$ time
Algorithm design technique used in quicksort algorithm is?
For merging two sorted lists of sizes $m$ and $n$ into a sorted list of size $m+n$, we require comparisons of
A two dimensional array $A[1..n][1..n]$ of integers is partially sorted if $\forall i, j\in [1..n-1], A[i][j] < A[i][j+1] \text{ and } A[i][j] < A[i+1][j]$
Fill in the blanks:
 The smallest item in the array is at $A[i][j]$ where $i=__$ and $j=__$.
 The smallest item is deleted. Complete the following $O(n)$ procedure to insert item $x$ (which is guaranteed to be smaller than any item in the last row or column) still keeping $A$ partially sorted.
procedure insert (x: integer);
var i, j: integer;
begin
    i := 1; j := 1; A[i][j] := x;
    while (x > __ or x > __) do
        if A[i+1][j] < A[i][j] ___ then
        begin
            A[i][j] := A[i+1][j]; i := i + 1;
        end
        else begin
            _____
        end
    A[i][j] := ____
end
Quicksort is run on two inputs shown below to sort in ascending order taking first element as pivot
 $1, 2, 3, \dots n$
 $n, n1, n2, \dots, 2, 1$
Let $C_1$ and $C_2$ be the number of comparisons made for the inputs (i) and (ii) respectively. Then,
Give the correct matching for the following pairs: $$\begin{array}{llll}\hline \text{(A)} & \text{$O (\log n)$} & \text{(P)} & \text{Selection sort} \\\hline \text{(B)} & \text{$O (n)$} & \text{(Q)}& \text{Insertion sort} \\\hline \text{(C)}& \text{$O (n \log n)$} & \text{(R)} & \text{Binary search} \\\hline \text{(D)} & \text{$O (n^2)$} &\text{(S)} & \text{Merge sort} \\\hline \end{array}$$
A sorting technique is called stable if
 it takes $O (n \log n)$ time
 it maintains the relative order of occurrence of non-distinct elements
 it uses divide and conquer paradigm
 it takes $O(n)$ space
If one uses straight twoway merge sort algorithm to sort the following elements in ascending order:
$20, \ 47, \ 15, \ 8, \ 9, \ 4, \ 40, \ 30, \ 12, \ 17$
then the order of these elements after the second pass of the algorithm is:
 $8, \ 9, \ 15, \ 20, \ 47, \ 4, \ 12, \ 17, \ 30, \ 40$
 $8, \ 15, \ 20, \ 47, \ 4, \ 9, \ 30, \ 40, \ 12, \ 17$
 $15, \ 20, \ 47, \ 4, \ 8, \ 9, \ 12, \ 30, \ 40, \ 17$
 $4, \ 8, \ 9, \ 15, \ 20, \ 47, \ 12, \ 17, \ 30, \ 40$
Let $A$ be an $n \times n$ matrix such that the elements in each row and each column are arranged in ascending order. Draw a decision tree which finds the $1^{\text{st}}$, $2^{\text{nd}}$ and $3^{\text{rd}}$ smallest elements in the minimum number of comparisons.
An array contains four occurrences of $0$, five occurrences of $1$, and three occurrences of $2$ in any order. The array is to be sorted using swap operations (elements that are swapped need to be adjacent).
 What is the minimum number of swaps needed to sort such an array in the worst case?
 Give an ordering of elements in the above array so that the minimum number of swaps needed to sort the array is maximum.
Randomized quicksort is an extension of quicksort where the pivot is chosen randomly. What is the worst case complexity of sorting n numbers using Randomized quicksort?
The usual $\Theta(n^2)$ implementation of Insertion Sort to sort an array uses linear search to identify the position where an element is to be inserted into the already sorted part of the array. If, instead, we use binary search to identify the position, the worst case running time will
In a permutation \(a_1 ... a_n\), of n distinct integers, an inversion is a pair \((a_i, a_j)\) such that \(i < j\) and \(a_i > a_j\).
If all permutations are equally likely, what is the expected number of inversions in a randomly chosen permutation of \(1. . . n\)?
In a permutation \(a_1 ... a_n\), of $n$ distinct integers, an inversion is a pair \((a_i, a_j)\) such that \(i < j\) and \(a_i > a_j\).
What would be the worst case time complexity of the Insertion Sort algorithm, if the inputs are restricted to permutations of \(1. . . n\) with at most $n$ inversions?
The tightest lower bound on the number of comparisons, in the worst case, for comparisonbased sorting is of the order of
Suppose there are $\lceil \log n \rceil$ sorted lists of $\lfloor n /\log n \rfloor$ elements each. The time complexity of producing a sorted list of all these elements is: (Hint:Use a heap data structure)
Let $a$ and $b$ be two sorted arrays containing $n$ integers each, in nondecreasing order. Let $c$ be a sorted array containing $2n$ integers obtained by merging the two arrays $a$ and $b$. Assuming the arrays are indexed starting from $0$, consider the following four statements
 $a[i] \geq b [i] \Rightarrow c[2i] \geq a [i]$
 $a[i] \geq b [i] \Rightarrow c[2i] \geq b [i]$
 $a[i] \geq b [i] \Rightarrow c[2i] \leq a [i]$
 $a[i] \geq b [i] \Rightarrow c[2i] \leq b [i]$
Which of the following is TRUE?
Which one of the following in place sorting algorithms needs the minimum number of swaps?
The median of $n$ elements can be found in $O(n)$ time. Which one of the following is correct about the complexity of quick sort, in which median is selected as pivot?
Which of the following sorting algorithms has the lowest worst-case complexity?
Consider the Quicksort algorithm. Suppose there is a procedure for finding a pivot element which splits the list into two sublists each of which contains at least onefifth of the elements. Let $T(n)$ be the number of comparisons required to sort $n$ elements. Then
If we use Radix Sort to sort $n$ integers in the range $\left (n^{k/2}, n^k \right ]$, for some $k > 0$ which is independent of $n$, the time taken would be?
What is the number of swaps required to sort $n$ elements using selection sort, in the worst case?
In quicksort, for sorting $n$ elements, the $\left(n/4\right)^{th}$ smallest element is selected as pivot using an $O(n)$ time algorithm. What is the worst case time complexity of the quick sort?
A list of $n$ strings, each of length $n$, is sorted into lexicographic order using the mergesort algorithm. The worst case running time of this computation is
The number of elements that can be sorted in $\Theta(\log n)$ time using heap sort is
Which one of the following is the tightest upper bound that represents the number of swaps required to sort $n$ numbers using selection sort?
Let $P$ be quicksort program to sort numbers in ascending order using the first element as the pivot. Let $t_1$ and $t_2$ be the number of comparisons made by P for the inputs $[1 \ 2 \ 3 \ 4 \ 5]$ and $[4 \ 1 \ 5 \ 3 \ 2]$ respectively. Which one of the following holds?
You have an array of $n$ elements. Suppose you implement quicksort by always choosing the central element of the array as the pivot. Then the tightest upper bound for the worst case performance is
Suppose you are provided with the following function declaration in the C programming language.
int partition(int a[], int n);
The function treats the first element of $a[\:]$ as a pivot and rearranges the array so that all elements less than or equal to the pivot is in the left part of the array, and all elements greater than the pivot is in the right part. In addition, it moves the pivot so that the pivot is the last element of the left part. The return value is the number of elements in the left part.
The following partially given function in the C programming language is used to find the $k^{th}$ smallest element in an array $a[\:]$ of size $n$ using the partition function. We assume $k \leq n$.
int kth_smallest (int a[], int n, int k)
{
    int left_end = partition (a, n);
    if (left_end+1 == k) {
        return a[left_end];
    }
    if (left_end+1 > k) {
        return kth_smallest (___________);
    }
    else {
        return kth_smallest (___________);
    }
}
The missing arguments lists are respectively
Assume that a mergesort algorithm in the worst case takes $30$ seconds for an input of size $64$. Which of the following most closely approximates the maximum input size of a problem that can be solved in $6$ minutes?
The worst case running times of Insertion sort , Merge sort and Quick sort, respectively are:
 $\Theta (n \log n)$, $\Theta (n \log n)$ and $\Theta(n^2)$
 $\Theta (n^2)$, $\Theta (n^2)$ and $\Theta(n \log n)$
 $\Theta (n^2)$, $\Theta (n \log n)$ and $\Theta (n \log n)$
 $\Theta (n^2)$, $\Theta (n \log n)$ and $\Theta (n^2)$
Assume that the algorithms considered here sort the input sequences in ascending order. If the input is already in the ascending order, which of the following are TRUE?
 Quicksort runs in $\Theta (n^2)$ time
 Bubblesort runs in $\Theta (n^2)$ time
 Mergesort runs in $\Theta (n)$ time
 Insertion sort runs in $\Theta (n)$ time
Suppose you are given $n$ numbers and you sort them in descending order as follows:
First find the maximum. Remove this element from the list and find the maximum of the remaining elements, remove this element, and so on, until all elements are exhausted. How many comparisons does this method require in the worst case?
Consider the Insertion Sort procedure given below, which sorts an array $L$ of size $n\left ( \geq 2 \right )$ in ascending order:
begin
    for xindex := 2 to n do
        x := L[xindex];
        j := xindex - 1;
        while j > 0 and L[j] > x do
            L[j + 1] := L[j];
            j := j - 1;
        end {while}
        L[j + 1] := x;
    end {for}
end
It is known that insertion sort makes at most $n (n  1) / 2$ comparisons. Which of the following is true?
 There is no input on which insertion Sort makes $n (n  1) / 2 $ comparisons.
 Insertion Sort makes $n (n  1) / 2$ comparisons when the input is already sorted in ascending order.
 Insertion Sort makes $n (n  1) / 2$ comparisons only when the input is sorted in descending order.
 There are more than one input orderings where insertion sort makes $n (n  1) / 2 $ comparisons.
 Insertion Sort makes $n (n  1) / 2$ comparisons whenever all the elements of $L$ are not distinct.
Let $S=\left \{ x_{1},....,x_{n} \right \}$ be a set of $n$ numbers. Consider the problem of storing the elements of $S$ in an array $A\left [ 1...n \right ]$ such that the following minheap property is maintained for all $2\leq i\leq n: A [\lfloor i/2\rfloor]\leq A[i]$. (Note that $\lfloor x\rfloor$ is the largest integer that is at most $x$). Which of the following statements is TRUE?
 This problem can be solved in $O(\log n)$ time.
 This problem can be solved in $O (n)$ time but not in $O(\log n)$ time.
 This problem can be solved in $O(n\log n)$ time but not in $O (n)$ time.
 This problem can be solved in $O \left ( n^{2} \right )$ time but not in $O(n\log n)$ time.
 None of the above.
Given a set of $n=2^{k}$ distinct numbers, we would like to determine the smallest and the second smallest using comparisons. Which of the following statements is TRUE?
 Both these elements can be determined using $2k$ comparisons.
 Both these elements can be determined using $n  2$ comparisons.
 Both these elements can be determined using $n + k - 2$ comparisons.
 $2n - 3$ comparisons are necessary to determine these two elements.
 $nk$ comparisons are necessary to determine these two elements.
The first $n$ cells of an array $L$ contain positive integers sorted in decreasing order, and the remaining $m - n$ cells all contain 0. Then, given an integer $x$, in how many comparisons can one find the position of $x$ in $L$?
 At least $n$ comparisons are necessary in the worst case.
 At least $\log m$ comparisons are necessary in the worst case.
 $O (\log (m - n))$ comparisons suffice.
 $O (\log n)$ comparisons suffice.
 $O (\log (m / n))$ comparisons suffice.
An array $A$ contains $n$ integers. We wish to sort $A$ in ascending order. We are told that initially no element of $A$ is more than a distance $k$ away from its final position in the sorted list. Assume that $n$ and $k$ are large and $k$ is much smaller than $n$. Which of the following is true for the worst case complexity of sorting $A$?
 $A$ can be sorted with constant $\cdot \, kn$ comparisons but not with fewer comparisons.
 $A$ cannot be sorted with less than constant $\cdot \, n \log n$ comparisons.
 $A$ can be sorted with constant $\cdot \, n$ comparisons.
 $A$ can be sorted with constant $\cdot \, n \log k$ comparisons but not with fewer comparisons.
 $A$ can be sorted with constant $\cdot \, k^{2}n$ comparisons but not fewer.
Consider the quick sort algorithm on a set of $n$ numbers, where in every recursive subroutine of the algorithm, the algorithm chooses the median of that set as the pivot. Then which of the following statements is TRUE?
 The running time of the algorithm is $\Theta (n).$
 The running time of the algorithm is $\Theta (n\log n)$.
 The running time of the algorithm is $\Theta (n^{1.5})$.
 The running time of the algorithm is $\Theta (n^{2})$.
 None of the above.
Suppose $n$ processors are connected in a linear array as shown below. Each processor has a number. The processors need to exchange numbers so that the numbers eventually appear in ascending order (the processor $\rm P1$ should have the minimum value and the the processor $\rm Pn$ should have the maximum value).
The algorithm to be employed is the following. Odd numbered processors and even numbered processors are activated in alternate steps; assume that in the first step all the even numbered processors are activated. When a processor is activated, the number it holds is compared with the number held by its right-hand neighbour (if one exists); the smaller of the two numbers is retained by the activated processor and the bigger is stored in its right-hand neighbour.
How long does it take for the processors to sort the values?
An array of $n$ distinct elements is said to be unsorted if for every index $i$ such that $ 2 \leq i \leq n1$, either $A[i] > max \{A [i1], A[i+1]\}$, or $A[i] < min \{A[i1], A[i+1]\}$. What is the timecomplexity of the fastest algorithm that takes as input a sorted array $A$ with $n$ distinct elements, and unsorts $A$?
Answers: Sorting
Correct Answer: $C$
Answer is LESS.
The worst case time for Quicksort is $O(n^2)$, while the worst case for Heapsort is $O(n \log n)$.
Quick Sort
$\textbf{Pseudocode for quicksort}$
$\text{QUICKSORT(A,p,r)}$
$\quad \textbf{if } p < r$
$\quad\quad \textbf{then} q \leftarrow \text{PARTITION}(A,p,r)$
$\quad\quad\quad \text{QUICKSORT}(A,p,q1)$
$\quad\quad\quad\text{QUICKSORT}(A,q+1,r)$
${\color{Red}{\text{Initial call: }} }\text{QUICKSORT}(A,1,n)$
 ${\color{Red} {25}}\ 7\ 34\ 2\ 70\ 9\ 61\ 16\ 49\ 19$
 ${\color{Red} {25}}\ 7\ \color{blue}{2\ 34}\ 70\ 9\ 61\ 16\ 49\ 19$
 ${\color{Red} {25}}\ 7\ 2\ \color{blue}{9}\ 70\; \color{blue}{34}\; 61\ 16\ 49\ 19$
 ${\color{Red} {25}}\ 7\ 2\ 9 \ \color{blue}{16}\ 34\ 61\ \color{blue}{70}\ 49\ 19$
 ${\color{Red} {25}}\ 7\ 2\ 9\ 16\ \color{blue}{19} \; 61\ 70\ 49 \ \color{blue}{34}$
 $[7\ 2\ 9\ 16$ $19]$ ${\color{Red} {25}}\ [61\ 70\ 49$ $34]$
 $[{\color{Red} {7}}\ 2\ 9\ 16\ 19]\ {\color{Red} {25}}\ [61\ 70\ 49\ 34]$
 $[[2]\ {\color{Red} {7}}\ [9\ 16\ 19]]\ {\color{Red} {25}}\ [61\ 70\ 49\ 34]$
 $[{\color{Red} {2\ 7}}\ [{\color{Red} {9}}\ [16\ 19]]]\ {\color{Red} {25}}\ [61\ 70\ 49\ 34]$
 $[{\color{Red} {2\ 7}}\ [{\color{Red} {9}}\ [{\color{Red} {16}} \ 19]]]\ {\color{Red} {25}}\ [61\ 70\ 49\ 34]$
 $[{\color{Red} {2}}\ {\color{Red} {7}}\ [{\color{Red} {9}}\ [{\color{Red} {16}} \ {\color{Red} {19}}]]]\ {\color{Red} {25}}\ [61\ 70\ 49\ 34]$
 ${\color{Red} {[2}}\ {\color{Red} {7[}}\ {\color{Red} {9}}\ {\color{Red} {16}} \ {\color{Red} {19]]}}\ {\color{Red} {25}}\ [61\ 70\ 49\ 34]$
 ${\color{Red} {[2}}\ {\color{Red} {7}}\ {\color{Red} {9}}\ {\color{Red} {16}} \ {\color{Red} {19]}}\ {\color{Red} {25}}\ [61\ 70\ 49\ 34]$
 ${\color{Red} {[2\ 7\ 9\ 16\ 19]\ 25\ }}[{\color{Red} {61}}\ \color{blue}{49\ 70}\ 34]$
 ${\color{Red} {[2\ 7\ 9\ 16\ 19]\ 25\ }}[{\color{Red} {61}}\ 49\; \color{blue}{34\ 70}]$
 ${\color{Red} {[2\ 7\ 9\ 16\ 19]\ 25\ }} [[49\ 34]\ {\color{Red} {61}}\ [70] ]$
 ${\color{Red} {[2\ 7\ 9\ 16\ 19]\ 25\ }}[[ {\color{Red} {49}}\ 34]\ {\color{Red} {61}}\ [70] ]$
 ${\color{Red} {[2\ 7\ 9\ 16\ 19]\ 25\ }}[[ [34]\ {\color{Red} {49}}]\ {\color{Red} {61}}\ [70] ]$
 ${\color{Red} {[2\ 7\ 9\ 16\ 19]\ 25\ }}[ {\color{Red} {[ [34]\ 49]}}\ {\color{Red} {61}}\ [70] ]$
 ${\color{Red} {[2\ 7\ 9\ 16\ 19]\ 25\ }}[ {\color{Red} {[34\ 49]}}\ {\color{Red} {61}}\ [70] ]$
 ${\color{Red} {[2\ 7\ 9\ 16\ 19]\ 25\ }} [{\color{Red} {34\ 49}}\ {\color{Red} {61\ [70 ]}}] $
 ${\color{Red} {[2\ 7\ 9\ 16\ 19]\ 25\ }} {\color{Red} {[34\ 49}}\ {\color{Red} {61\ 70 ]}}\ $
 ${\color{Red} {[2\ 7\ 9\ 16\ 19\ 25\ }} {\color{Red} {34\ 49}}\ {\color{Red} {61\ 70] }}\ $
2-way MergeSort [Iterative version]
 $[25\ 7]\ [34\ 2]\ [70\ 9]\ [61\ 16]\ [49\ 19]$
 $[{\color{Red} {[7\ 25]}}\ {\color{Red} {[2\ 34]}}]\ [{\color{Red} {[9\ 70]}}\ {\color{Red} {[16\ 61]}}]\ [{\color{Red} {[19\ 49]}}]$
 $[{\color{Green} {[2\ 7\ 25\ 34 ]}}\ {\color{Green} {[9\ 16\ 61\ 70]}}]\ {\color{Green} {[19\ 49]}}$
 $[{\color{Blue} {[2\ 7\ 9\ 16\ 25\ 34\ 61\ 70]}}\ {\color{Green} {[19\ 49]}}]$
 $[{\color{Blue} {[2\ 7\ 9\ 16\ 19\ 25\ 34\ 49\ 61\ 70]}}$
$\textbf{MergeSort [Recursive version]}$
$\textbf{MERGE  SORT A[1...n]}$
 $\text{If } n = 1,$ done.
 Recursively sort $A[1...\left \lceil n/2 \right \rceil]$ and $A[\left \lceil n/2 \right \rceil + 1 ... n]. $
 ${\color{Red}{\text{“Merge"}}}$ the $2$ sorted lists.
$\qquad{\color{Red}{\text{Key subroutine: }}} \textbf{MERGE}$
 $[25\ 7\ 34\ 2\ 70]\ [9\ 61\ 16\ 49\ 19]$
 $[[25\ 7\ 34]\ 2\ 70]\ [9\ 61\ 16\ 49\ 19]$
 $[[{\color{Red} {[7\ 25]}}\ [34]]\ [2\ 70]]\ [9\ 61\ 16\ 49\ 19]$
 $[[{\color{Red} {[7\ 25]}}\ {\color{Red} {[34]}}]\ [2\ 70]]\ [9\ 61\ 16\ 49\ 19]$
 $[[{\color{Red} {[7\ 25\ 34]}}]\ [2\ 70]]\ [9\ 61\ 16\ 49\ 19]$
 $[{\color{Red} {[7\ 25\ 34]}}\ {\color{Red} {[2\ 70]}}]\ [9\ 61\ 16\ 49\ 19]$
 ${\color{Red} {[2\ 7\ 25\ 34\ 70]}}\ [9\ 61\ 16\ 49\ 19]$
 ${\color{Red} {[2\ 7\ 25\ 34\ 70]}}\ [[9\ 61\ 16]\ [49\ 19]]$
 ${\color{Red} {[2\ 7\ 25\ 34\ 70]}}\ [[[9\ 61]\ [16]]\ [49\ 19]]$
 ${\color{Red} {[2\ 7\ 25\ 34\ 70]}}\ [[{\color{Red} {[9\ 61]}}\ [16]]\ [49\ 19]]$
 ${\color{Red} {[2\ 7\ 25\ 34\ 70]}}\ [[{\color{Red} {[9\ 61]\ [16]}}\ ]\ [49\ 19]]$
 ${\color{Red} {[2\ 7\ 25\ 34\ 70]}}\ [{\color{Red} {[9\ 16\ 61]}}\ [49\ 19]]$
 ${\color{Red} {[2\ 7\ 25\ 34\ 70]}}\ [{\color{Red} {[9\ 16\ 61]}}\ {\color{Red} {[19\ 49]}}]$
 ${\color{Red} {[2\ 7\ 25\ 34\ 70]}}\ {\color{Red} {[9\ 16\ 19\ 49\ 61]}}$
 ${\color{Red} {[2\ 7\ 9\ 16\ 19\ 25\ 34\ 49\ 61\ 70]}}$
First of all, which sorting algorithm is being asked about? It is not just the usual ones we see in algorithm textbooks; it can be any sorting algorithm which uses comparisons. So, in essence, we cannot rely on any specific algorithm for this question.
Now, coming to the options, we can see that $\Theta$ notation is used. We use $\Theta$ notation for a problem when
 we have a lower bound for the solution  that is any algorithm which solves the problem must have minimum this much complexity (time complexity being implied)
 there exist an algorithm which solves the problem with above minimum time complexity (worst case input being implied)
For comparison based sorting we already have known algorithms like heap sort and merge sort, which solve the problem in $O(n \log n),$ and now if we show that this is the best possible for any algorithm, our answer becomes $\Theta (n \log n).$
And it is indeed proved that for comparison based sorting minimum $\Omega(n \log n)$ comparisons are required and considering time taken being proportional to the number of comparisons, the time complexity is also $\Omega(n \log n).$ Proof of this is given here but in essence it says
 The output of sorting $n$ elements can be any of the $n!$ permutations.
 Each comparison reduces the possibility by $2$
 So, to distinguish among all $n!$ permutations we need at least $\lg n!$ comparisons, which is $\Omega (n \lg n)$
Now, if someone asks the minimum number of comparisons to sort $5$ elements, the answer is definitely $\geq \lceil \lg 5! \rceil = \lceil \lg 120 \rceil = 7$. We can only use $\geq$ here and not $=$ unless we prove that we can do it in $7$ comparisons. But interestingly, this $\geq$ becomes $=$ up to $n = 11$, as given in the Wikipedia link below.
Ref: https://en.wikipedia.org/wiki/Comparison_sort
Answer A. $\Theta (n \log n)$
Answer is 7.
Minimum number of comparisons = $\lceil \log(n!) \rceil$ = $\lceil \log(5!) \rceil$ = $\lceil \log(120) \rceil$ = 7.
Reference: http://en.wikipedia.org/wiki/Comparison_sort#Number_of_comparisons_required_to_sort_a_list
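The bound $\lceil \lg (n!) \rceil$ for small $n$ can be tabulated with a few lines of C (compile with -lm); a sketch:

#include <stdio.h>
#include <math.h>

/* Tabulate the information-theoretic lower bound ceil(lg n!) for small n. */
int main(void) {
    double lg_fact = 0.0;
    for (int n = 2; n <= 12; n++) {
        lg_fact += log2((double)n);
        printf("n=%2d  ceil(lg n!) = %2d\n", n, (int)ceil(lg_fact));
    }
    return 0; /* n = 5 gives ceil(lg 120) = 7 */
}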