1 Algorithms (276)
Let $A$ be an array of $n$ integers, sorted so that $A[1] \leq A[2] \leq \dots \leq A[n]$. Suppose you are given a number $x$ and you wish to find out if there exist indices $k, \: l$ such that $A[k]+A[l] = x$.
 Design an $O(n)$ algorithm for this problem.
Here is an algorithm that returns True if there exist indices $k, \: l$ such that $\mathbf{A[k]+A[l] == x}$, and returns False otherwise.
// Consider this algorithm is called as AlgorithmCheck(A, 1, size, x),
// where A is the sorted array.
Bool AlgorithmCheck(A, L, K, x) {
    while (L < K) {
        if (A[L] + A[K] == x)
            return true;
        else if (A[L] + A[K] < x)
            L++;
        else
            K--;
    }
    return false;
}
This will take only $ \mathbf {O(n)}$ time to do its work.
An element in an array $X$ is called a leader if it is greater than all elements to the right of it in $X$. The best algorithm to find all leaders in an array
 Solves it in linear time using a left to right pass of the array
 Solves it in linear time using a right to left pass of the array
 Solves it using divide and conquer in time $\Theta (n\log n)$
 Solves it in time $\Theta( n^2)$
Option B. We can move from right to left, keeping note of the maximum element seen so far (call it current_max). Starting from the rightmost element, we initialize current_max with it, since the rightmost element is always a leader. Moving from right to left, if an element $x$ is greater than current_max, then $x$ is also a leader: add it to the list of leaders (or simply print it), set current_max to $x$, and carry on leftward.
Time Complexity would be $\Theta(n)$.
Given two arrays of numbers $a_{1},...,a_{n}$ and $b_{1},...,b_{n}$ where each number is $0$ or $1$, the fastest algorithm to find the largest span $(i, j)$ such that $ a_{i}+a_{i+1}+\dots+a_{j}=b_{i}+b_{i+1}+\dots+b_{j}$, or report that there is no such span,
 Takes $O(3^{n})$ and $\Omega(2^{n})$ time if hashing is permitted
 Takes $O(n^{3})$ and $\Omega(n^{2.5})$ time in the key comparison model
 Takes $\Theta (n)$ time and space
 Takes $O(\sqrt n)$ time only if the sum of the $2n$ elements is an even number
Answer is (C). Following algorithm would do.
Since the arrays are binary, the maximum sum can be at most $n$, and so the sum difference of the two arrays can vary between $-n$ and $n$. We use array start to keep the starting index of each possible sum (hence of size $2n+1$) and array end to keep the ending index (these two arrays work like hash tables, and since we have only $2n+1$ possible keys, we can do a perfect hashing). So, our required solution will be max(end[i] - start[i]), provided both are assigned values.
The algorithm works as follows:
 Initialize the diff array to contain the running difference of the sums of elements of arrays a and b, i.e., $diff[i] = \sum_{k=0}^{i} (a[k] - b[k])$.
 Now diff[i] can have values from $-n$ to $n$, which gives $2n+1$ possible values, and the first occurrence of a diff value marks the beginning of a span and the last occurrence marks the end. We use the start and end arrays for storing these two positions for the $2n+1$ possible values.
 Now, the largest value of end[i] - start[i] over all i will be the largest span, and the span will run from start[i] + 1 to end[i]. If the span starts from the first position itself (arrays a and b have the same first elements), then it starts from start[i] itself.
#include <stdio.h>
#define size 100 // assume n is less than 100

int main() {
    int n, a[size], b[size];
    int start[2*size+1], end[2*size+1];
    int sum1 = 0, sum2 = 0, i;
    int diff[size];
    printf("Enter n: ");
    scanf("%d", &n);
    for (i = 0; i < n; i++) {
        printf("Enter a[%d]: ", i);
        scanf("%d", &a[i]);
    }
    for (i = 0; i < n; i++) {
        printf("Enter b[%d]: ", i);
        scanf("%d", &b[i]);
    }
    for (i = 0; i < n; i++) {
        if (a[i]) sum1++;
        if (b[i]) sum2++;
        diff[i] = sum1 - sum2;
    }
    for (i = 0; i <= 2*n; i++)
        start[i] = -1, end[i] = -1;
    // initially the sum is 0 at the beginning of the array, and the first n-1
    // entries of start and end are used when the sum of a till the ith element
    // is less than the sum of b till the ith element
    start[n] = end[n] = 0;
    for (i = 0; i < n; i++) {
        if (start[diff[i] + n] == -1)   // interested only in the first occurrence of diff[i]
            start[diff[i] + n] = i;
        end[diff[i] + n] = i;           // interested in the last occurrence of diff[i]
    }
    int max = -1;
    int savei = -1;  // savei stores the sum value having the largest span
    for (i = 0; i <= 2*n; i++) {
        if (start[i] > -1 && (end[i] - start[i] > max)) {
            max = end[i] - start[i];
            savei = i;
        }
    }
    if (savei >= 0) {
        printf("The largest span is from %d to %d\n",
               start[savei] + (savei != n), end[savei]);
        // when sum zero has the largest span, the span starts from the first
        // element itself; else, the span starts from the element following
        // the first occurrence of that sum difference
    } else {
        printf("No span\n");
    }
}
You are given ten rings numbered from 1 to 10, and three pegs labeled A, B, and C. Initially all the rings are on peg A, arranged from top to bottom in ascending order of their numbers. The goal is to move all the rings to peg B in the minimum number of moves obeying the following constraints:
 In one move, only one ring can be moved.
 A ring can only be moved from the top of its peg to the top of a new peg.
 At no point can a ring be placed on top of another ring with a lower number.
How many moves are required?
 501
 1023
 2011
 10079
 None of the above.
This is the Tower of Hanoi problem with $n = 10$ rings. Therefore the total number of moves is $2^{n} - 1 = 2^{10} - 1 = 1023$: option B.
The correct matching for the following pairs is
(A) All pairs shortest path  (1) Greedy 
(B) Quick Sort  (2) Depth-First Search 
(C) Minimum weight spanning tree  (3) Dynamic Programming 
(D) Connected Components  (4) Divide and Conquer 

A-2 B-4 C-1 D-3

A-3 B-4 C-1 D-2

A-3 B-4 C-2 D-1

A-4 B-1 C-2 D-3
(A) All pairs shortest path  (3) Dynamic Programming 
(B) Quick Sort  (4) Divide and Conquer 
(C) Minimum weight spanning tree  (1) Greedy 
(D) Connected Components  (2) Depth-First Search 
Reference : Read the Intro/Algo SubHeading.
Match the following:
P. Prim's algorithm for minimum spanning tree  i. Backtracking 
Q. Floyd-Warshall algorithm for all pairs shortest paths  ii. Greedy method 
R. Mergesort  iii. Dynamic programming 
S. Hamiltonian circuit  iv. Divide and conquer 
 P-iii, Q-ii, R-iv, S-i
 P-i, Q-ii, R-iv, S-iii
 P-ii, Q-iii, R-iv, S-i
 P-ii, Q-i, R-iii, S-iv
P. Prim  ii. Greedy
http://www.geeksforgeeks.org/greedyalgorithmsset5primsminimumspanningtreemst2/
Q. Floyd-Warshall  iii. Dynamic Programming
http://www.geeksforgeeks.org/dynamicprogrammingset16floydwarshallalgorithm/
R. Mergesort  iv. Divide & Conquer
http://www.geeksforgeeks.org/mergesort/
S. Hamiltonian circuit  i. Backtracking
http://www.geeksforgeeks.org/backtrackingset7hamiltoniancycle/
Option C
Given below are some algorithms, and some algorithm design paradigms.
1. Dijkstra's Shortest Path  i. Divide and Conquer 
2. Floyd-Warshall algorithm to compute all pairs shortest path  ii. Dynamic Programming 
3. Binary search on a sorted array  iii. Greedy design 
4. Backtracking search on a graph  iv. Depth-first search 
v. Breadth-first search 
Match the above algorithms on the left to the corresponding design paradigm they follow.
 1-i, 2-iii, 3-i, 4-v
 1-iii, 2-iii, 3-i, 4-v
 1-iii, 2-ii, 3-i, 4-iv
 1-iii, 2-ii, 3-i, 4-v
Consider the following table:
Algorithms  Design Paradigms 

(P) Kruskal  (i) Divide and Conquer 
(Q) Quicksort  (ii) Greedy 
(R) Floyd-Warshall  (iii) Dynamic Programming 
Match the algorithms to the design paradigms they are based on.
 (P) $\leftrightarrow$ (ii), (Q) $\leftrightarrow$ (iii), (R) $\leftrightarrow$ (i)
 (P) $\leftrightarrow$ (iii), (Q) $\leftrightarrow$ (i), (R) $\leftrightarrow$ (ii)
 (P) $\leftrightarrow$ (ii), (Q) $\leftrightarrow$ (i), (R) $\leftrightarrow$ (iii)
 (P) $\leftrightarrow$ (i), (Q) $\leftrightarrow$ (ii), (R) $\leftrightarrow$ (iii)
In Kruskal's algorithm, in every iteration an edge of minimum weight (the greedy choice) is selected and added to the MST under construction. Hence, GREEDY.
In Quicksort we partition the problem into subproblems, solve them, and then combine. Hence it is DIVIDE and CONQUER.
Floyd-Warshall uses Dynamic Programming.
Hence the correct answer is: OPTION (C).
Consider the following two functions:
$g_1(n) = \begin{cases} n^3 \text{ for } 0 \leq n \leq 10,000 \\ n^2 \text{ for } n \geq 10,000 \end{cases}$
$g_2(n) = \begin{cases} n \text{ for } 0 \leq n \leq 100 \\ n^3 \text{ for } n > 100 \end{cases}$
Which of the following is true?

$g_1(n) \text{ is } O(g_2(n))$

$g_1(n) \text{ is } O(n^3)$

$g_2(n) \text{ is } O(g_1(n))$

$g_2(n) \text{ is } O(n)$
Options A and B are true here: asymptotically $g_1(n)$ grows as $n^2$ while $g_2(n)$ grows as $n^3$, so $g_1(n) = O(g_2(n))$, and $n^2$ is trivially $O(n^3)$.
Which of the following is false?

$100n \log n=O(\frac{n\log n}{100})$

$\sqrt{\log n} = O(\log\log n)$

If $0 < x < y \text{ then } n^x = O\left(n^y\right)$

$2^n \neq O\left(n^k\right)$
 $100n \log n=O(\frac{n\log n}{100})$ : Big-O denotes the growth rate of functions, and multiplication or division by a constant does not change the growth rate. So, this is TRUE, and here $O$ can even be replaced by $\Theta$ or $\Omega$.
 $\sqrt{\log n} = O(\log\log n)$ : FALSE. Take a large value like $n = 2^{256}$: the LHS gives $16$ but the RHS gives only $8$, and the gap keeps widening as $n$ grows, since $\sqrt{x}$ grows faster than $\log x$.
 If $0 < x < y \text{ then } n^x = O\left(n^y\right)$ : TRUE, since $y$ is always greater than $x$, the RHS always dominates the LHS.
 $2^n \neq O\left(n^k\right)$ : TRUE, since $k$ is a constant; for large $n$, an exponential function always outgrows any polynomial.
Only B is false
If $T_1 = O(1)$, give the correct matching for the following pairs:
(M) $T_n = T_{n-1} + n$  (U) $T_n = O(n)$ 
(N) $T_n = T_{n/2} + n$  (V) $T_n = O(n \log n)$ 
(O) $T_n = T_{n/2} + n \log n$  (W) $T_n = O(n^2)$ 
(P) $T_n = T_{n-1} + \log n$  (X) $T_n = O(\log^2 n)$ 
 M-W N-V O-U P-X
 M-W N-U O-X P-V
 M-V N-W O-X P-U
 M-W N-U O-V P-X
(N) $T(n) = \Theta(n) =O(n)$, third case of Master theorem
($f(n) = n = \Omega \left(n^{\log_b a+ \epsilon}\right) = \Omega \left(n^{\log_2 1 + \epsilon}\right) = \Omega\left(n^{0 + \epsilon}\right) $, satisfied for any positive $\epsilon \leq 1$. Also, $af\left(\frac{n}{b}\right) < cf(n) \implies f\left(\frac{n}{2}\right) < cf(n) \implies \frac{n}{2} < cn$, satisfied for any $c$ between $0$ and $0.5$)
(O) $T(n) = \Theta(n \log n) = O (n \log n)$, third case of Master theorem
($f(n) = n \log n =\Omega\left(n^{\log_b a+ \epsilon}\right) = \Omega \left(n^{\log_2 1 + \epsilon}\right) = \Omega\left(n^{0 + \epsilon}\right) $, satisfied for positive $\epsilon = 0.5$. Also, $af\left(\frac{n}{b}\right) < cf(n) \implies f\left(\frac{n}{2}\right) < cf(n) \implies \frac{n}{2} \log \frac{n}{2} < cn \log n$, satisfied for $c = 0.5$)
(P) Like in (M), here we are adding the log of the first n natural numbers. So,
$T_n = \log 1 + \log 2 + \log 3 + \dots + \log n$
$=\log (1\times 2\times \dots \times n)$
$=\log(n!)$
$=\Theta(n \log n)$ (Stirling's Approximation)
Consider the following functions
 $f(n) = 3n^{\sqrt{n}}$
 $g(n) = 2^{\sqrt{n}{\log_{2}n}}$
 $h(n) = n!$
Which of the following is true?
 $h(n)$ is $O(f(n))$
 $h(n)$ is $O(g(n))$
 $g(n)$ is not $O(f(n))$
 $f(n)$ is $O(g(n))$
|  | $n = 256$ | $n = 65536$ |
|---|---|---|
| $f(n) = 3n^{\sqrt{n}}$ | $3 \times 256^{16} = 3 \times 2^{128}$ | $3 \times 65536^{256} = 3 \times 2^{16 \times 256} = 3 \times 2^{4096}$ |
| $g(n) = 2^{\sqrt{n}\log_{2}n}$ | $2^{16 \times 8} = 2^{128}$ | $2^{256 \times 16} = 2^{4096}$ |
| $h(n) = n!$ | $256! = O\left(\left(2^8\right)^{256}\right) = O\left(2^{2048}\right)$ | $65536! = O\left(\left(2^{16}\right)^{65536}\right) = O\left(2^{1M}\right)$ |
The row for $h(n)$ gives only an upper bound, but factorial has a higher growth rate than exponential.
$f(n)$ and $g(n)$ have the same order of growth, since $f(n)$ is simply $3g(n)$ (we can also prove this by taking logs). So, (d) is correct and all other choices are false.
1. $f(n) = 3n^{\sqrt{n}}$
2. $g(n) = 2^{\sqrt{n}{\log_{2}n}} = (2^{\log_{2}n})^\sqrt{n} = (n^{\log_{2}2})^\sqrt{n} = n^{\sqrt{n}} $
3. $h(n) = n! = o(n^n)$
So, $f(n) = g(n) < h(n) $
Therefore, (D) is the correct option!
Let $f(n) = n^2 \log n$ and $g(n) = n(\log n)^{10}$ be two positive functions of $n$. Which of the following statements is correct?
 $f(n) = O(g(n)) \text{ and } g(n) \neq O(f(n))$
 $g(n) = O(f(n)) \text{ and } f(n) \neq O(g(n))$
 $f(n) \neq O(g(n)) \text{ and } g(n) \neq O(f(n))$
 $f(n) =O(g(n)) \text{ and } g(n) = O(f(n))$
A more formal approach:
$f(n) = n^2 \log n$
$g(n) = n (\log n)^{10}$
We can use the limit definition of O-notation
http://cse.unl.edu/~choueiry/S06235/files/AsymptoticsHandoutNoNotes.pdf
$\lim_{n \to \infty } \frac{f(n)}{g(n)} = 0, \implies f(n) = o(g(n))$
small $o$ implying $f$ is strictly asymptotically lower than $g$. Also by definition, $o \implies O \text{ but } O \not \implies o$.
$\lim_{n\to \infty} \frac{f(n)}{g(n)} = c, c > 0 \implies f(n) = \Theta(g(n))$
$\lim_{n \to \infty} \frac{f(n)}{g(n)} = \infty, \implies f(n) = \omega(g(n))$
small $\omega$ implying $f$ is strictly asymptotically higher than $g$. Also by definition, $\omega \implies \Omega, \text{ but } \Omega \not \implies \omega.$
We can use this to prove the above question
For any $k > 0$
$\lim_{n \to \infty} \frac{(\log n)^{k}}{ n}$
Applying L'Hopital's rule repeatedly:
$= \lim_{n \to \infty} \frac{ k(\log n)^{k-1}}{ n}$
$= \dots = \lim_{n \to \infty} \frac{ k!}{n} = 0$
So $(\log n)^k = o(n)$
Now for large $n$, $n > (\log n)^9$,
i.e., $n^{2}\log n > n(\log n)^{10}$.
So $n(\log n )^{10} = o(n^{2}\log n)$
Or
$n(\log n )^{10} = O(n^{2}\log n)$
and
$n^{2}\log n \neq O\left(n(\log n )^{10}\right)$
Option B.
|  | $f(n)$ | $g(n)$ |
|---|---|---|
| $n = 2^{10}$ | $10 \ast 2^{10} \ast 2^{10}$ | $2^{10} \ast 10^{10}$ |
| $n = 2^{256}$ | $256 \ast 2^{256} \ast 2^{256}$ | $2^{256} \ast 256^{10}$ |
So, as $n$ is going larger, $f(n)$ is overtaking $g(n)$ and the growth rate of $f$ is faster than that of $g$. So, $g(n) = O(f(n))$ and $f(n) \neq O(g(n))$.
B choice.
Consider the following three claims
 $(n+k)^m = \Theta(n^m)$ where $k$ and $m$ are constants
 $2^{n+1} = O(2^n)$
 $2^{2n+1} = O(2^n)$
Which of the following claims are correct
 I and II
 I and III
 II and III
 I, II, and III
1) Clearly the rate of growth of $(n+k)^m$ equals that of $n^m$, as $k$ and $m$ are constants,
so TRUE
2) $2^{n+1} = 2*(2^n) = \Theta\left(2^n\right)$ as $2$ is a constant here
As $2^{n+1}$ is both upper and lower bounded by $2^n$ we can say $2^{n+1} = O\left(2^n\right)$
so TRUE
3) $2^{2n+1}$ has same rate of growth as $2^{2n}$
$2^{2n} = \left(2^n\right)^2$
$2^n$ is upper bounded by $(2^n)^2$, not the other way round
so FALSE
Let $f(n)$, $g(n)$ and $h(n)$ be functions defined for positive integers such that
$f(n) = O(g(n))$, $g(n) \neq O(f(n))$, $g(n) = O(h(n))$, and $h(n) = O(g(n))$.
Which one of the following statements is FALSE?
 $f(n) + g(n) = O(h(n) + h(n))$
 $f(n) = O(h(n))$
 $h(n) \neq O(f(n))$
 $f(n)h(n) \neq O(g(n)h(n))$
Answer is (D)
We can verify as: $f \leqslant g$ BUT $g \nleqslant f$. Therefore $f<g$.
Also $g=h$ asymptotically, as $g=O(h)$ and $h=O(g)$.
So $h(n) = \Omega(f(n))$ and $h(n) \neq O(f(n))$, making (C) true.
$f(n)h(n) = O(g(n)h(n))$ is true, since $f(n) = O(g(n))$.
So (D) is false.
Suppose $T(n) =2T (\frac{n}{2}) + n$, $T(0) = T(1) =1$
Which one of the following is FALSE?

$T(n)=O(n^2)$

$T(n)=\Theta(n \log n)$

$T(n)=\Omega(n^2)$

$T(n)=O(n \log n)$
Applying the Master theorem, $T(n) = \Theta (n \log n)$. So, it cannot be $\Omega(n^2)$.
Hence answer is C.
Consider the following functions:
$f(n) = 2^n$
$g(n) = n!$
$h(n) = n^{\log n}$
Which of the following statements about the asymptotic behavior of $f(n)$, $g(n)$ and $h(n)$ is true?
 $f\left(n\right)=O\left(g\left(n\right)\right); g\left(n\right)=O\left(h\left(n\right)\right)$

$f\left(n\right) = \Omega\left(g\left(n\right)\right); g(n) = O\left(h\left(n\right)\right)$

$g\left(n\right) = O\left(f\left(n\right)\right); h\left(n\right)=O\left(f\left(n\right)\right)$

$h\left(n\right)=O\left(f\left(n\right)\right); g\left(n\right) = \Omega\left(f\left(n\right)\right)$
$g(n) = n!$ on expanding factorial we get $g(n) = \mathcal{O}(n^n)$ :
$$\begin{align*} n^n &> n^{\log n} \\ n^n &> 2^n \end{align*}$$
This condition is violated by the first statement of each of options A, B and C, hence they cannot be true.
The second statement of option D says that $g(n)$ is asymptotically the biggest of all.
Answer = option D
|  | $f(n)$ | $g(n)$ | $h(n)$ |
|---|---|---|---|
| $n = 2^{10}$ | $2^{1024}$ | $1024!$ | $\left(2^{10}\right)^{10} = 2^{100}$ |
| $n = 2^{11}$ | $2^{2048}$ | $2048!$ | $2^{121}$ |
| Increase | $2^{1024}$ | $1025 \times 1026 \times \dots \times 2048$ | $2^{21}$ |
So, growth rate is highest for $g(n)$ and lowest for $h(n)$. Option D.
Arrange the following functions in increasing asymptotic order:
 $n^{1/3}$
 $e^n$
 $n^{7/4}$
 $n \log^9n$
 $1.0000001^n$
 a, d, c, e, b
 d, a, c, e, b
 a, c, d, e, b
 a, c, d, b, e
E < B
and
C, D < E as E is exponential function.
Now, we just need to see if C or D is larger.
In C we have a term $n^{3/4}$ and correspondingly in D we have $\log^9n$ (after taking $n$ out).
$n^{3/4}$ is asymptotically larger than $\log^9n$ as when $n = 10^{100}, \log^9 n$ gives $100^9$, while $n^{3/4}$ gives $10^{75} > 100^{37}$ a much higher value and this is true for all higher values of $n$. So, D < C.
Thus A is correct.
Which of the given options provides the increasing order of asymptotic complexity of functions $f_1, f_2, f_3$ and $f_4$?
 $f_1(n) = 2^n$
 $f_2(n) = n^{3/2}$
 $f_3(n) = n \log_2 n$
 $f_4(n) = n^{\log_2 n}$
 $f_3, f_2, f_4, f_1$
 $f_3, f_2, f_1, f_4$
 $f_2, f_3, f_1, f_4$
 $f_2, f_3, f_4, f_1$
Answer A.
$n \log_2 n < n^{3/2}$ is quite straightforward.
Also $n^{3/2} < n^{\log_2 n} $ and $n^{3/2} < 2^n$.
Now only $n^{\log_2 n}$ and $ 2^n$ need to be compared.
Taking the log of both gives $(\log_2 n)^2$ and $n$, and
$n > (\log_2 n)^2$ for large $n$,
hence $2^n > n^{\log_2 n}$.
Let $W(n) $ and $A(n)$ denote respectively, the worst case and average case running time of an algorithm executed on an input of size $n$. Which of the following is ALWAYS TRUE?
 $A(n) = \Omega (W(n))$
 $A(n) = \Theta (W(n))$
 $A(n) = \text{O} (W(n))$
 $A(n) = \text{o} (W(n))$
Worst case complexity can never be lower than the average case complexity, but it can be higher. So, (C) is the answer.
$$A(n) = O(W(n)) $$
Consider the equality $\sum_{i=0}^{n} i^3 = X$ and the following choices for $X$
 $\Theta(n^4)$
 $\Theta(n^5)$
 $O(n^5)$
 $\Omega(n^3)$
The equality above remains correct if $X$ is replaced by
 Only I
 Only II
 I or III or IV but not II
 II or III or IV but not I
Sum of the cubes of the first n natural numbers is given by $(n(n+1) / 2) ^ 2$ which is $\Theta(n^4)$. So, I, III and IV are correct. II is wrong. C choice.
Let $f(n) = n$ and $g(n) = n^{(1 + \sin n)}$ where $n$ is a positive integer. Which of the following statements is/are correct?
 $f(n) = O(g(n))$
 $f(n) = \Omega(g(n))$
 Only I
 Only II
 Both I and II
 Neither I nor II
The answer is option D.
Since the value of $\sin(n)$ oscillates between $-1$ and $+1$, $g(n)$ can take values ranging from $n^0 = 1$ up to $n^2$.
Hence, if $g(n) = 1$, Statement I is incorrect.
And, if $g(n) = n^2$, then Statement II is incorrect.
Consider the following functions from positive integers to real numbers:
$10$, $\sqrt{n}$, $n$, $\log_{2}n$, $\frac{100}{n}$.
The CORRECT arrangement of the above functions in increasing order of asymptotic complexity is:
 $\log_{2}n$, $\frac{100}{n}$, $10$, $\sqrt{n}$, $n$
 $\frac{100}{n}$, $10$, $\log_{2}n$, $\sqrt{n}$, $n$
 $10$, $\frac{100}{n}$, $\sqrt{n}$, $\log_{2}n$, $n$
 $\frac{100}{n}$, $\log_{2}n$, $10$, $\sqrt{n}$, $n$
$10$  constant. Growth rate is $0$.
$\sqrt{n}$  grows slower than linear but faster than $\log$. (Consider $\frac {\sqrt {n^2}}{\sqrt n} =\sqrt n$, whereas $\frac {\log \left(n^2\right)}{\log n} = 2$.)
$n$  growth rate is linear.
$\log_{2}n$  growth rate is logarithmic; for asymptotic growth the base does not matter.
$\frac{100}{n}$  growth rate decreases with $n$.
So, correct answer is B.
PS: Please never substitute large values of n in such questions. If ever you do, at least do for $2$ such values and take ratio to get the growth rate or plot a graph. Remember ${1.01}^{n} \neq O\left(n^{100}\right)$
 $10$, constant
 $\sqrt{n}$, square root
 $n$, polynomial
 $\log_{2}n$, logarithmic
 $\frac{100}{n}$, constant divided by polynomial (clearly less than any constant for every value of $n > 100$)
Now we know: constant divided by polynomial < constant < logarithmic < square root < polynomial.
So the correct order is $\frac{100}{n}$, $10$, $\log_{2}n$, $\sqrt{n}$, $n$
Let $n$ be a large integer. Which of the following statements is TRUE?
 $n^{1 / \sqrt{\log_2 n}} < \sqrt{\log_2 n} < n^{1/100}$
 $n^{1/100} < n^{1 / \sqrt{\log_2 n}} < \sqrt{\log_2 n}$
 $n^{1 / \sqrt{\log_2 n}} < n^{1/100} < \sqrt{\log_2 n}$
 $\sqrt{\log_2 n} < n^{1 / \sqrt{\log_2 n}} < n^{1/100}$
 $\sqrt{\log_2 n} < n^{1/100} < n^{1 / \sqrt{\log_2 n}}$
Let $n = 2^x$. Then, $\log_2 n = x$
$$\begin{array}{rlllcll} f(n) &=& n^{1/\sqrt{\log_2 n}} &=& \left (2^x \right )^{1/\sqrt{x}} = 2^{x/\sqrt{x}} &=& 2^{\sqrt{x}}\\[1em] g(n) &=& \sqrt{\log_2 n} &=& \sqrt{\log_2 {\left ( 2^x \right )}}&=& \sqrt{x}\\[1em] h(n) &=& n^{1/100} &=& \left ( 2^x \right )^{1/100} &=& 2^{x/100} \end{array}$$
Since exponentials grow faster than polynomials, $h(n) > g(n)$ for large $n$.
Since linear functions grow faster than square roots, $\frac{x}{100} > \sqrt{x}$ for large $x$. Thus, $h(n) > f(n)$ for large $n$.
Since exponentials grow faster than polynomials, $2^{\sqrt{x}} > \sqrt{x}$ for large $\sqrt{x}$. Thus, $f(n) > g(n)$ for large $n$.
Hence, the relation is,
$g(n) < f(n) < h(n)$
Thus, option D is correct.
Let $n$ be a large integer. Which of the following statements is TRUE?
 $2^{\sqrt{2\log n}}< \frac{n}{\log n}< n^{1/3}$
 $\frac{n}{\log n}< n^{1/3}< 2^{\sqrt{2\log n}}$
 $2^{\sqrt{2\log n}}< n^{1/3}< \frac{n}{\log n}$
 $n^{1/3}< 2^{\sqrt{2\log n}}<\frac{n}{\log n}$
 $\frac{n}{\log n}< 2^{\sqrt{2\log n}}<n^{1/3}$
Ans will be C
Take $n=2^{1024}$
Now, $2^{\sqrt{(2 \log n)}} \approx 2^{45}$
$n^{\frac{1}{3}} \approx 2^{341}$
$n / \log n = 2^{1024}/1024 \approx 2^{1014}$
Just one value is not enough to confirm growth rate. So, take $n = 1024$.
Now, $2^{\sqrt{(2 \log n)}} \approx 2^{4}$
$n^{\frac{1}{3}}\approx 2^{3}$
$n / \log n = 2^{10}/10 \approx 2^{7}$
So, as $n$ increases the gap between second and third function increases and also the second function overtakes the first. So, $f1 < f2 < f3$.
Which of these functions grows fastest with $n$?
 $e^{n}/n$.
 $e^{n-0.9 \log n}$.
 $2^{n}$.
 $\left(\log n\right)^{n-1}$.
 None of the above.
Assuming that the base of the $\log$ in the question is $e$.
Let us try to rewrite each of these functions in the form $e^{\,\small\rm something}$, to make the comparison easier.
$$\def\d{\displaystyle\,}
\begin{array}{lll}
a. & e^{n} / n & = \dfrac{e^{\d n}}{e^{\d(\ln n)}} & = e^{\d (n - \ln n)}\\[1em]
b. & e^{n - 0.9 \ln n} &&= e^{\d (n - 0.9 \ln n)}\\[1em]
c. & 2^n &= \left (e^{\d\ln 2} \right )^{\d n} &= e^{\d (n \ln 2)}\\[1em]
d. & (\ln n)^{n-1} &= \left (e^{\d\ln \ln n} \right )^{\d n-1} &= e^{\d (n \ln \ln n - \ln\ln n)}
\end{array}$$
Now, if we just compare the exponents of all, we can clearly see that $(n \ln \ln n - \ln \ln n)$ grows faster than the rest. Note that in option c. the multiplicative $\ln 2$ is a constant, and hence grows slower than the multiplicative $\ln \ln n$ from option d.
This implies that $e^{\,(n \ln \ln n - \ln\ln n)}$ grows the fastest, and hence, $(\ln n)^{n-1}$ grows the fastest.
Thus, option d. is the correct answer.
Let $n = m!$. Which of the following is TRUE?
 $m = \Theta (\log n / \log \log n)$
 $m = \Omega (\log n / \log \log n)$ but not $m = O(\log n / \log \log n)$
 $m = \Theta (\log^2 n)$
 $m = \Omega (\log^2 n)$ but not $m = O(\log^2 n)$
 $m = \Theta (\log^{1.5} n)$
| $m$ | $n=m!$ | $\log n / \log \log n$ |
|---|---|---|
| $4$ | $24$ | $4/2 = 2$ |
| $6$ | $720$ | $9/3 = 3$ |
| $8$ | $720*56$ | $15/3 = 5$ |
| $10$ | $720*56*90$ | $21/4 = 5$ |
If we see, $m$ is growing at same rate as $\log n / \log \log n$.
Which of the following functions asymptotically grows the fastest as $n$ goes to infinity?
 $(\log \: \log \: n)!$
 $(\log \: \log \: n)^ {\log \: n}$
 $(\log \: \log \: n)^{\log \: \log \: \log \: n}$
 $(\log \: n)^{\log \: \log \: n}$
 $2^{\sqrt{\log \: \log \: n}}$
Let $N = 2^{256}$
 $(\log \log N)! = 8!$
 $(\log \log N)^{\log N} = (8)^{256} = 2^{768}$
 $(\log \log N)^{\log\log \log N} = 8^{3}=512$
 $(\log N)^{\log\log N} = (256)^{8}=2^{64}$
 $2^{\sqrt{\log \log N}} = 2^{2\sqrt 2}$
Let $N = 2^{16}$
 $(\log \log N)! = 4!$
 $(\log \log N)^{\log N} = (4)^{16} = 2^{32}$
 $(\log \log N)^{\log\log \log N} = 4^{2}=16$
 $(\log N)^{\log\log N} = (16)^{4}=2^{16}$
 $2^{\sqrt{\log \log N}} = 2^{2}=4$
Taking ratio for both $N$ values,
 $(\log \log N)! \rightarrow 8!/4!$
 $(\log \log N)^{\log N} \rightarrow 2^{736}$
 $(\log \log N)^{\log\log \log N} \rightarrow 32$
 $(\log N)^{\log\log N} \rightarrow 2^{48}$
 $2^{\sqrt{\log \log N}} \rightarrow \approx 2$
Option $B$ = $(\log \log N)^{\log N}$ asymptotically grows the fastest as $N$ goes to infinity as for the same change in $N$, the value increased the most (growth) for option $B$.
We are given a set of $n$ distinct elements and an unlabeled binary tree with $n$ nodes. In how many ways can we populate the tree with the given set so that it becomes a binary search tree?
 $0$
 $1$
 $n!$
 $\frac{1} {n+1}\binom{2n}{n}$
With $n$ nodes, there are $\frac{1}{n+1}\binom{2n}{n}$ distinct tree structures possible.
Corresponding to each structure only $1$ binary search tree can be formed because inorder is fixed.
Here we are already given one such structure therefore only $1$ tree possible.
If general binary trees (not BSTs) were asked, $n!$ populations would be possible for each distinct tree structure. Here the tree structure is fixed, and hence we have only 1 possibility for a BST, as the elements are distinct. For general cases:
http://gatecse.in/wiki/Number_of_Binary_trees_possible_with_n_nodes
The subset-sum problem is defined as follows. Given a set of $n$ positive integers, $S = \{ a_1, a_2, a_3, \dots , a_n \}$, and a positive integer $W$, is there a subset of $S$ whose elements sum to $W$? A dynamic program for solving this problem uses a 2-dimensional Boolean array, $X$, with $n$ rows and $W+1$ columns. $X[i, j], 1 \leq i \leq n, 0 \leq j \leq W$, is TRUE, if and only if there is a subset of $\{a_1, a_2, \dots, a_i\}$ whose elements sum to $j$.
Which of the following is valid for $2 \leq i \leq n$, and $a_i \leq j \leq W$?
 $X[i, j] = X[i-1, j] \vee X[i, j-a_i]$
 $X[i, j] = X[i-1, j] \vee X[i-1, j-a_i]$
 $X[i, j] = X[i-1, j] \wedge X[i, j-a_i]$
 $X[i, j] = X[i-1, j] \wedge X[i-1, j-a_i]$
This is analogous to the dynamic programming solution to the 0/1 knapsack problem.
Consider the capacity $W$ of the knapsack analogous to $j$ (the target sum here).
The solution exploits the optimal substructure of the problem.
At each stage we have 2 options:
Case (1): Either we take the item (in this question, the element $a_i$) along with the solution to the previous subproblem,
in which case we choose $X[i-1][j-a_i]$:
$i-1$ indicates we are considering the solution to the previous subproblem, and
$j-a_i$ means we have included element $a_i$, so the remaining sum $j-a_i$ has to be formed from the earlier elements.
Case (2): Or we do not take the item (the element $a_i$), in which case we only consider the solution to the previous subproblem,
which is $X[i-1][j]$.
Since the whole solution to this subset-sum problem is the logical OR of cases 1 and 2,
we eliminate options C and D, because both take the logical AND of the two parts of the solution.
Now, since in the given question we are given a Boolean array X[n][W+1],
an entry X[i][j] is true only if sum j is possible with array elements from 1 to i.
So for each element of the array, $a_i$, we consider two possibilities:
(1) Either we can ignore it and still get our possible sum,
which is $X[i-1][j]$,
OR
(2) we could include element $a_i$ and then get our required sum,
which is $X[i-1][j-a_i]$.
And finally, to get X[i][j], we take the logical OR of the above two cases.
Hence answer is option B.
The subset-sum problem is defined as follows. Given a set of $n$ positive integers, $S = \{ a_1, a_2, a_3, \dots , a_n \}$, and a positive integer $W$, is there a subset of $S$ whose elements sum to $W$? A dynamic program for solving this problem uses a 2-dimensional Boolean array, $X$, with $n$ rows and $W+1$ columns. $X[i, j], 1 \leq i \leq n, 0 \leq j \leq W$, is TRUE, if and only if there is a subset of $\{a_1, a_2, \dots, a_i\}$ whose elements sum to $j$.
Which entry of the array $X$, if TRUE, implies that there is a subset whose elements sum to $W$?
 $X[1, W]$
 $X[n, 0]$
 $X[n, W]$
 $X[n-1, n]$
Answer is C.
If the LAST ROW and LAST COLUMN entry, $X[n, W]$, is TRUE, then there exists a subset whose elements sum to $W$.
A subsequence of a given sequence is just the given sequence with some elements (possibly none or all) left out. We are given two sequences $X[m]$ and $Y[n]$ of lengths $m$ and $n$, respectively with indexes of $X$ and $Y$ starting from $0$.
We wish to find the length of the longest common subsequence (LCS) of $X[m]$ and $Y[n]$ as $l(m, n)$, where an incomplete recursive definition for the function $l(i, j)$ to compute the length of the LCS of $X[m]$ and $Y[n]$ is given below:
$l(i,j) = 0$, if either $i = 0$ or $j = 0$
$= \text{expr1}$, if $i,j > 0$ and $X[i-1] = Y[j-1]$
$= \text{expr2}$, if $i,j > 0$ and $X[i-1] \neq Y[j-1]$
Which one of the following options is correct?

$\text{expr1} = l\left(i-1, j\right) +1$

$\text{expr1} = l\left(i, j-1\right)$

$\text{expr2} = \max\left(l\left(i-1, j\right), l\left(i,j-1\right)\right)$

$\text{expr2} = \max\left(l\left(i-1, j-1\right), l\left(i,j\right)\right)$
Answer is C. When the currently compared elements don't match, we have two possibilities for the LCS: one including $X[i]$ but not $Y[j]$, and the other including $Y[j]$ but not $X[i]$.
/* Returns length of LCS for X[0..m-1], Y[0..n-1] */
int lcs(char *X, char *Y, int m, int n)
{
    if (m == 0 || n == 0)
        return 0;
    if (X[m-1] == Y[n-1])
        return 1 + lcs(X, Y, m-1, n-1);
    else
        return max(lcs(X, Y, m, n-1), lcs(X, Y, m-1, n));
}
A subsequence of a given sequence is just the given sequence with some elements (possibly none or all) left out. We are given two sequences $X[m]$ and $Y[n]$ of lengths $m$ and $n$, respectively with indexes of $X$ and $Y$ starting from $0$.
We wish to find the length of the longest common subsequence (LCS) of $X[m]$ and $Y[n]$ as $l(m, n)$, where an incomplete recursive definition for the function $l(i, j)$ to compute the length of the LCS of $X[m]$ and $Y[n]$ is given below:
$l(i,j) = 0$, if either $i = 0$ or $j = 0$
$= \text{expr1}$, if $i,j > 0$ and $X[i-1] = Y[j-1]$
$= \text{expr2}$, if $i,j > 0$ and $X[i-1] \neq Y[j-1]$
The value of $l(i,j)$ could be obtained by dynamic programming based on the correct recursive definition of $l(i,j)$ of the form given above, using an array $L[M,N]$, where $M = m+1$ and $N = n+1$, such that $L[i, j] = l(i, j)$.
Which one of the following statements would be TRUE regarding the dynamic programming solution for the recursive definition of $l(i, j)$?
 All elements of $L$ should be initialized to 0 for the values of $l(i, j)$ to be properly computed.
 The values of $l(i, j)$ may be computed in a row major order or column major order of $L[M, N]$.
 The values of $l(i, j)$ cannot be computed in either row major order or column major order of $L[M, N]$.
 $L[p, q]$ needs to be computed before $L[r, s]$ if either $p<r$ or $q < s$.
$\text{expr2} = \max\left(l\left(i-1, j\right), l\left(i,j-1\right)\right)$
When the currently compared elements don't match, we have two possibilities for the LCS: one including $X[i]$ but not $Y[j]$, and the other including $Y[j]$ but not $X[i]$.
/* Returns length of LCS for X[0..m-1], Y[0..n-1] */
int lcs(char *X, char *Y, int m, int n)
{
    if (m == 0 || n == 0)
        return 0;
    if (X[m-1] == Y[n-1])
        return 1 + lcs(X, Y, m-1, n-1);
    else
        return max(lcs(X, Y, m, n-1), lcs(X, Y, m-1, n));
}
Answer is B. Dynamic programming is used to save the previously found LCS values. So, for any index $[p,q]$, all smaller indices should have been computed earlier, and this is satisfied by either row-major or column-major order. Option D is not correct, as the condition given requires even $L[3,2]$ to be computed before $L[2,4]$, which is not a necessity if we follow row-major order.
int lcs(char *X, char *Y, int m, int n)
{
    int L[m+1][n+1];
    int i, j;
    /* Following steps build L[m+1][n+1] in bottom-up fashion.
       Note that L[i][j] contains length of LCS of X[0..i-1] and Y[0..j-1] */
    for (i = 0; i <= m; i++) {
        for (j = 0; j <= n; j++) {
            if (i == 0 || j == 0)
                L[i][j] = 0;
            else if (X[i-1] == Y[j-1])
                L[i][j] = L[i-1][j-1] + 1;
            else
                L[i][j] = max(L[i-1][j], L[i][j-1]);
        }
    }
    /* L[m][n] contains length of LCS for X[0..m-1] and Y[0..n-1] */
    return L[m][n];
}
Answer is B. We can use either row-major or column-major order.
Issue with option D: read option D carefully.
L[p,q] needs to be computed before L[r,s] if either p < r or q < s.
Assume we want to compute L(3,3). We need not compute L(4,2) first if we are using row-major order! Here L(4,2) = L[p,q] and L(3,3) = L[r,s]; then q < s, yet we still need not compute L(4,2) before L(3,3). So D IS FALSE.
The weight of a sequence $a_0,a_1, \dots, a_{n-1}$ of real numbers is defined as $a_0+a_1/2+ \dots + a_{n-1}/2^{n-1}$. A subsequence of a sequence is obtained by deleting some elements from the sequence, keeping the order of the remaining elements the same. Let $X$ denote the maximum possible weight of a subsequence of $a_0,a_1, \dots, a_{n-1}$ and $Y$ the maximum possible weight of a subsequence of $a_1,a_2, \dots, a_{n-1}$. Then $X$ is equal to
 $max(Y, a_0+Y)$
 $max(Y, a_0+Y/2)$
 $max(Y, a_0 +2Y)$
 $a_0+Y/2$
$$\begin{align}
S &= \langle a_0, S_1 \rangle\\
S_1 &= \langle a_1, a_2, a_3, \ldots, a_{n-1} \rangle
\end{align}$$
Two possible cases arise:
 $a_0$ is included in the max weight subsequence of $S$:
In this case, $X = \text{weight} \Bigl( \langle a_0, S_1 \rangle \Bigr) = a_0 + \dfrac Y 2$
 $a_0$ is not included in the max weight subsequence of $S$:
In this case, $X = \text{weight}(S_1) = Y$
Since the value of $a_0$ can be anything (negative or $<\frac Y 2$ in general) $\{ \because a_i \in \mathbb{R} \}$, it is possible that $Y > a_0 + \frac Y 2$.
The maximum possible weight of a subsequence of $S$ is given by: $$X = \max \left ( Y, a_0 + \frac Y 2 \right )$$
Thus, option B is correct.
The answer will be of the form $\max(Y, \cdot)$; we just need to express $X$ in terms of $Y$, because $X$ itself is not given inside the $\max$ function in any option.
Worked example: suppose the sequence is $30, 20, 40, 32$, so $Y$ is the maximum possible weight of a subsequence of $20, 40, 32$. Here $Y = 40 + 32/2^1 = 56$, achieved by the subsequence $(40, 32)$.
Now consider adding $a_0$ in front. $a_0$ can be negative or positive because the $a_i$ are real numbers; if $a_0$ is $-$ve then $a_0 + Y/2 < Y$, so the best subsequence may simply exclude $a_0$.
Here $a_0 = 30$: prefixing it to the best subsequence of the tail halves each later term's weight, giving $30 + 40/2^1 + 32/2^2 = 30 + 20 + 8 = 58$. Checking all subsequences of $30, 20, 40, 32$ confirms $X = 58$.
In terms of $Y$: $X = \max(Y, a_0 + Y/2) = \max(56, 30 + 56/2) = 58$.
Hence $\max(Y, a_0+ Y/2)$ is the correct answer!
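The recurrence can be sketched in C; `max_weight` is a name of our choosing, scanning right to left so that `w` always holds the maximum weight over subsequences of the current suffix (treating the empty subsequence as weight $0$):

```c
#include <assert.h>

/* Max possible weight of a subsequence of a[0..n-1], using
   X_i = max(X_{i+1}, a[i] + X_{i+1}/2), with X_n = 0 (empty subsequence). */
double max_weight(const double *a, int n) {
    double w = 0.0;                        /* X_{i+1}, starting from the empty suffix */
    for (int i = n - 1; i >= 0; i--) {
        double with_ai = a[i] + w / 2.0;   /* include a[i] in front */
        if (with_ai > w)
            w = with_ai;                   /* X_i = max(X_{i+1}, a[i] + X_{i+1}/2) */
    }
    return w;
}
```

For $(30, 20, 40, 32)$ this yields $58$, and on the tail $(20, 40, 32)$ it yields $56$, matching $X = \max(Y, a_0 + Y/2)$.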
An algorithm to find the length of the longest monotonically increasing sequence of numbers in an array $A[0:n-1]$ is given below.
Let $L_i$ denote the length of the longest monotonically increasing sequence starting at index $i$ in the array.
Initialize $L_{n-1} = 1$.
For all $i$ such that $0 \leq i \leq n-2$
$ L_i = \begin{cases} 1+ L_{i+1} & \quad\text{if A[i] < A[i+1]} \\ 1 & \quad\text{Otherwise}\end{cases} $
Finally, the length of the longest monotonically increasing sequence is $\text{Max} \:(L_0, L_1, \dots , L_{n-1}).$
Which of the following statements is TRUE?
 The algorithm uses dynamic programming paradigm
 The algorithm has a linear complexity and uses branch and bound paradigm
 The algorithm has a nonlinear polynomial complexity and uses branch and bound paradigm
 The algorithm uses divide and conquer paradigm
(A) is the answer.
The algorithm is storing the optimal solutions to subproblems at each point (for each $i$), and then using it to derive the optimal solution of a bigger problem. And that is dynamic programming approach. And the program has linear time complexity.
http://stackoverflow.com/questions/1065433/what-is-dynamic-programming
Now, branch and bound comes in when we explore all possible solutions (branch) and backtrack as soon as we find we won't get a solution (in classical backtracking we retreat only when we won't find the solution). In backtracking: in each step, you check if this step satisfies all the conditions.
If it does : you continue generating subsequent solutions
If not : you go one step backward to check for another path
So, backtracking gives all possible solutions while branch and bound will give only the optimal one. http://www.cs.cornell.edu/~wdtseng/icpc/notes/bt2.pdf
The given algorithm here is neither backtracking nor branch and bound. Because we are not branching anywhere in the solution space.
And the algorithm is not divide and conquer as we are not dividing the problem and then merging the solution as in the case of merge sort (where merge is the conquer step).
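The algorithm described above can be sketched in C (function name ours); it computes the $L_i$ values right to left in linear time, keeping only the previous value:

```c
#include <assert.h>

/* Length of the longest monotonically increasing (contiguous) run in A[0..n-1],
   per the recurrence: L_i = 1 + L_{i+1} if A[i] < A[i+1], else 1. */
int longest_inc_run(const int *A, int n) {
    int L = 1;                 /* L_{n-1} = 1 */
    int best = 1;
    for (int i = n - 2; i >= 0; i--) {
        L = (A[i] < A[i + 1]) ? 1 + L : 1;
        if (L > best)
            best = L;
    }
    return best;               /* max over all L_i */
}
```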
Four Matrices $M_1, M_2, M_3,$ and $M_4$ of dimensions $ p \times q, \:\:q \times r, \:\:r \times s$ and $s \times t$ respectively can be multiplied in several ways with different number of total scalar multiplications. For example when multiplied as $((M_1 \times M_2) \times (M_3 \times M_4))$, the total number of scalar multiplications is $pqr + rst + prt$. When multiplied as $(((M_1 \times M_2) \times M_3) \times M_4)$, the total number of scalar multiplications is $pqr + prs + pst$.
If $p=10, q=100, r=20, s=5$ and $t=80$, then the minimum number of scalar multiplications needed is
 248000
 44000
 19000
 25000
Answer is C.
$\text{Ordering:}\\ \text{First multiply } M_2 \times M_3. \text{ This requires } 100*20*5 \text{ multiplications.}$ $\text{Then multiply } M_1 \times (M_2 \times M_3). \text{ This requires } 10*100*5 \text{ multiplications.}$ $\text{Then multiply } (M_1 \times (M_2 \times M_3)) \times M_4. \text{ This requires } 10*5*80 \text{ multiplications.}$ $\text{Total: } 19000 \text{ multiplications.}$
Brute force approach (anyone can do it):
The number of possible orderings for 4 matrices is $C_3$, where $C_3$ is the $3^{rd}$ Catalan number, given by $n=3$ in $\frac{1}{n+1} {}^{2n}C_n = 5.$
So, here we have
 $(M_1 \times M_2) \times (M_3 \times M_4)$
 $(M_1 \times (M_2 \times M_3)) \times M_4$
 $((M_1 \times M_2) \times M_3) \times M_4$
 $M_1 \times (M_2 \times (M_3 \times M_4))$
 $M_1 \times ((M_2 \times M_3) \times M_4))$
Each of these would give no. of multiplications required as
 $pqr + rst + prt $
 $qrs + pqs + pst$
 $pqr + prs + pst$
 $rst + qrt + pqt$
 $qrs + qst + pst$
The last 2 have $qt$ terms, which are by far the highest, and hence we can drop them from consideration: $qt = 8000$, and multiplied by any other term this would be larger than any value in the choices. So, just find the value of the first 3.
 $pqr + rst + prt = 20000 + 8000 + 16000 = 44000$
 $qrs + pqs + pst = 10000 + 5000 + 4000 = 19000$ (smallest value in the choices, so we can stop here)
 $pqr + prs + pst$
Dynamic Programming Solution (should know Matrix Chain Ordering algorithm)
Here we have a chain of length 4.
Dynamic programming solution of Matrix chain ordering has the solution
$m[i, j] = \begin{cases} 0 &\text{ if } i=j\\ \min_{i\leq k < j } \left( m[i][k] + m[k+1][j] + p_{i-1}p_{k}p_{j} \right) &\text{ if } i < j\end{cases}$
So, we can fill the following table starting with the diagonals and moving upward diagonally. Here $k < j$ but $\geq i.$
|       | j=1 | j=2 | j=3 | j=4 |
|---|---|---|---|---|
| i=1 | 0 | $p_0p_1p_2 = 20000$ | $\min(10000+p_0p_1p_3,\; 20000+p_0p_2p_3) = 15000$ | $\min(50000+p_0p_1p_4,\; 20000+8000+p_0p_2p_4,\; 15000+p_0p_3p_4) = 19000$ |
| i=2 |  | 0 | $p_1p_2p_3 = 10000$ | $\min(8000+p_1p_2p_4,\; 10000+p_1p_3p_4) = 50000$ |
| i=3 |  |  | 0 | $p_2p_3p_4 = 8000$ |
| i=4 |  |  |  | 0 |
Our required answer is given by $m[1,4] = 19000.$
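The recurrence can be coded directly; a minimal C sketch (function name ours), with `p[0..n]` holding the dimensions so that matrix $M_i$ is $p[i-1] \times p[i]$:

```c
#include <assert.h>

#define MAXN 8   /* chain-length bound, enough for this sketch */

/* Minimum scalar multiplications to multiply the chain M_1 ... M_n,
   where M_i has dimensions p[i-1] x p[i]. */
long matrix_chain(const long p[], int n) {
    long m[MAXN + 1][MAXN + 1];
    for (int i = 1; i <= n; i++)
        m[i][i] = 0;                           /* single matrix: no cost */
    for (int len = 2; len <= n; len++) {       /* chain length */
        for (int i = 1; i + len - 1 <= n; i++) {
            int j = i + len - 1;
            m[i][j] = -1;
            for (int k = i; k < j; k++) {      /* split point */
                long cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j];
                if (m[i][j] < 0 || cost < m[i][j])
                    m[i][j] = cost;
            }
        }
    }
    return m[1][n];
}
```

With $p = (10, 100, 20, 5, 80)$ this returns $19000$, matching the table above.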
Consider two strings $A$="qpqrr" and $B$="pqprqrp". Let $x$ be the length of the longest common subsequence (not necessarily contiguous) between $A$ and $B$ and let $y$ be the number of such longest common subsequences between $A$ and $B$. Then $x +10y=$ ___.
"qprr"
"pqrr"
"qpqr"
So $x = 4$ and $y = 3$, giving $x + 10y = 34$.
In the first string:
if we want 4 as the max length, then the LCS should end with either "rr" or "qr".
Only 4 combinations are possible for an LCS of length 4:
"qpqr"
"qqrr"
"pqrr"
"qprr"
Now check for matching sequences in the second string; except for "qqrr", all are possible.
I hope this helps. :)
Suppose you want to move from $0$ to $100$ on the number line. In each step, you either move right by a unit distance or you take a shortcut. A shortcut is simply a prespecified pair of integers $i,\:j \:\text{with}\: i <j$. Given a shortcut $i,j$ if you are at position $i$ on the number line, you may directly move to $j$. Suppose $T(k)$ denotes the smallest number of steps needed to move from $k$ to $100$. Suppose further that there is at most $1$ shortcut involving any number, and in particular from $9$ there is a shortcut to $15$. Let $y$ and $z$ be such that $T(9) = 1 + \min(T(y),T(z))$. Then the value of the product $yz$ is _____.
$T(9)=1+ \min(T(y),T(z))=1+$min(Distance from $y$ to $100$ , Distance from $z$ to $100$)
There are only two positions we can reach from $9$: one is a simple step to the right on the number line, i.e., $10$, and the other is $15$ (the given shortcut).
Hence , $y=10$ , $z=15$
$yz=10 \times 15 = 150$
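The reasoning can be checked with a small right-to-left DP over the number line; a sketch assuming (as given) that $9 \to 15$ is the only shortcut:

```c
#include <assert.h>

enum { GOAL = 100 };

/* T(k) = smallest number of steps from k to 100, computed right to left.
   Assumption for this sketch: the only shortcut is 9 -> 15. */
int steps_from_9(void) {
    int T[GOAL + 1];
    T[GOAL] = 0;
    for (int k = GOAL - 1; k >= 0; k--) {
        T[k] = 1 + T[k + 1];                  /* move right by one unit */
        if (k == 9 && 1 + T[15] < T[k])
            T[k] = 1 + T[15];                 /* take the shortcut 9 -> 15 */
    }
    return T[9];                              /* = 1 + min(T(10), T(15)) */
}
```

Here $T(10) = 90$ and $T(15) = 85$, so $T(9) = 1 + \min(90, 85) = 86$.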
The FloydWarshall algorithm for allpair shortest paths computation is based on
 Greedy paradigm.
 Divideandconquer paradigm.
 Dynamic Programming paradigm.
 Neither Greedy nor DivideandConquer nor Dynamic Programming paradigm.
Answer is 1500 !
Matrix Parenthesizing => A1 ((A2 A3) A4)
Check my solution below, using dynamic programming
$A_1$  $A_2$  $A_3$  $A_4$ 
$10 \times 5$  $5 \times 20$  $20 \times 10$  $10 \times 5$ 
$A_{12} = 10 \times 5 \times 20 =1000$
$A_{23} = 5 \times 20 \times 10 =1000$
$A_{34} = 20 \times 10 \times 5 =1000$
$$A_{13}=\min\left\{\begin{matrix} A_{12} + A_{33}+ 10 \times 20 \times 10 = 3000\\ A_{11} + A_{23} + 10 \times 5 \times 10 = 1500 \end{matrix}\right.$$
$$A_{24}=\min\left\{\begin{matrix} A_{23} + A_{44}+ 5 \times 10 \times 5 = 1250\\ A_{22} + A_{34} + 5 \times 20 \times 5 = 1500 \end{matrix}\right.$$
$$A_{14}=\min\left\{\begin{matrix} A_{11} + A_{24}+ 10 \times 5 \times 5 = 1500\\ A_{12} + A_{34} + 10 \times 20\times 5 = 3000\\ A_{13}+A_{44} + 10 \times 10\times 5 = 2000 \end{matrix}\right.$$
Answer is 1500.
Consider the following directed graph:
The number of different topological orderings of the vertices of the graph is _____________.
Here Start with $a$ and End with $f$.
$a$ _ _ _ _ $f$
Blank spaces are to be filled with $b$, $c$, $d$, $e$ such that $b$ comes before $c$, and $d$ comes before $e$.
Number of ways to arrange $b$, $c$, $d$, $e$ such that $b$ comes before $c$ and $d$ comes before $e$. will be = $4!/(2!*2!) = 6$
In topological sorting, all nodes are like tasks and the edges show the dependencies among the tasks.
An edge from node $i$ to node $j$ means task $i$ must complete before task $j$ (in the meantime some other task may complete after task $i$ and before task $j$, but the relative order of tasks $i$ and $j$ must be maintained).
In the following $6$ ways, all the tasks can be completed.
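The $4!/(2! \cdot 2!)$ count can be verified by brute force over all permutations of $b, c, d, e$ (helper names ours):

```c
#include <assert.h>
#include <string.h>

static void permute(char *s, int l, int r, int *count) {
    if (l == r) {
        /* valid iff b precedes c and d precedes e */
        if (strchr(s, 'b') < strchr(s, 'c') && strchr(s, 'd') < strchr(s, 'e'))
            (*count)++;
        return;
    }
    for (int i = l; i <= r; i++) {
        char t = s[l]; s[l] = s[i]; s[i] = t;   /* place s[i] at position l */
        permute(s, l + 1, r, count);
        t = s[l]; s[l] = s[i]; s[i] = t;        /* restore */
    }
}

/* Count arrangements of b, c, d, e with b before c and d before e. */
int count_orderings(void) {
    char s[] = "bcde";
    int count = 0;
    permute(s, 0, 3, &count);
    return count;
}
```

Each of the two independent "before" constraints halves the $4! = 24$ permutations, giving $6$.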
Which of the following statements is false?

Optimal binary search tree construction can be performed efficiently using dynamic programming

Breadthfirst search cannot be used to find connected components of a graph

Given the prefix and postfix walks over a binary tree, the binary tree cannot be uniquely constructed.

Depthfirst search can be used to find connected components of a graph
Answer: B
 True.
 False.
 True.
 True.
An independent set in a graph is a subset of vertices such that no two vertices in the subset are connected by an edge. An incomplete scheme for a greedy algorithm to find a maximum independent set in a tree is given below:
V: Set of all vertices in the tree;
I := ϕ;
while V ≠ ϕ do
begin
    select a vertex u ∊ V such that _______;
    V := V - {u};
    if u is such that ________ then I := I ∪ {u}
end;
Output(I);

Complete the algorithm by specifying the property of vertex $u$ in each case.

What is the time complexity of the algorithm
 For the first blank: select a vertex $u \in V$ of minimum degree (in a tree, a leaf). For the second blank: while adding vertex $u$ to $I$, it should not have an edge with any node already in $I$.
 The algorithm runs till $V$ is empty ($O(n)$ iterations), and each iteration checks $u$ against every vertex $v$ in set $I$ (in $O(n)$ time). So, the overall complexity is $O(n^2)$.
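A sketch of the completed greedy scheme in C (all names ours), using an adjacency matrix; $u$ is added to $I$ only when it has no edge to any vertex already in $I$:

```c
#include <assert.h>

enum { N_IS = 8 };   /* vertex bound for this sketch */

/* Greedy pass: scan vertices (for a tree, ideally leaves / min-degree first;
   here simply in index order for illustration) and add u to I
   when u is not adjacent to any vertex already in I.  O(n^2) overall. */
int greedy_independent_set(int n, int adj[N_IS][N_IS], int in_I[]) {
    int size = 0;
    for (int u = 0; u < n; u++) {
        int ok = 1;
        for (int v = 0; v < n; v++)
            if (in_I[v] && adj[u][v]) { ok = 0; break; }   /* edge into I */
        in_I[u] = ok;
        if (ok) size++;
    }
    return size;
}

int demo_greedy_is(void) {
    int adj[N_IS][N_IS] = {0}, in_I[N_IS] = {0};
    /* path 0 - 1 - 2: the independent set found is {0, 2} */
    adj[0][1] = adj[1][0] = 1;
    adj[1][2] = adj[2][1] = 1;
    return greedy_independent_set(3, adj, in_I);
}
```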
Let $G$ be the directed, weighted graph shown in below figure
We are interested in the shortest paths from $A$.

Output the sequence of vertices identified by the Dijkstra’s algorithm for single source shortest path when the algorithm is started at node $A$

Write down sequence of vertices in the shortest path from $A$ to $E$

What is the cost of the shortest path from $A$ to $E$?
Correct graph:
Dijkstra's Algorithm:
Correct Solutions:
part a:
Part b:
ABDCFE
Part c. 84
Which one of the following algorithm design techniques is used in finding all pairs of shortest distances in a graph?

Dynamic programming

Backtracking

Greedy

Divide and Conquer
The most appropriate matching for the following pairs
X: depth first search 1: heap Y: breadthfirst search 2: queue Z: sorting 3: stack
is:
 X  1 Y  2 Z  3
 X  3 Y  1 Z  2
 X  3 Y  2 Z  1
 X  2 Y  3 Z  1
Answer C:
X - 3: DFS uses a stack implicitly.
Y - 2: BFS uses a queue explicitly in the algorithm.
Z - 1: Heap - Heapsort uses a heap.
Consider an undirected unweighted graph $G$. Let a breadthfirst traversal of $G$ be done starting from a node $r$. Let $d(r,u)$ and $d(r,v)$ be the lengths of the shortest paths from $r$ to $u$ and $v$ respectively in $G$. if $u$ visited before $v$ during the breadthfirst traversal, which of the following statements is correct?
 $d(r,u) < d(r,v)$
 $d(r,u) > d(r,v)$
 $d(r,u) \leq d(r,v)$
 None of the above
Ans: C.
BFS computes shortest paths from the source when all edge costs are $1$.
Now if $u$ is visited before $v$, it means one of $2$ things:
 Either $u$ is closer to $r$ than $v$ is, or
 $u$ and $v$ are at the same distance from $r$, and our BFS algorithm chose to visit $u$ before $v$.
In both cases $d(r,u) \leq d(r,v)$. Hence option C is the answer.
Fill in the blanks in the following template of an algorithm to compute all pairs shortest path lengths in a directed graph $G$ with $n*n$ adjacency matrix $A$. $A[i,j]$ equals $1$ if there is an edge in $G$ from $i$ to $j$, and $0$ otherwise. Your aim in filling in the blanks is to ensure that the algorithm is correct.
INITIALIZATION: For i = 1 ... n {For j = 1 ... n { if a[i,j] = 0 then P[i,j] =_______ else P[i,j] =_______;} } ALGORITHM: For i = 1 ... n {For j = 1 ... n {For k = 1 ... n {P[__,__] = min{_______,______}; } } }
 Copy the complete line containing the blanks in the Initialization step and fill in the blanks.
 Copy the complete line containing the blanks in the Algorithm step and fill in the blanks.
 Fill in the blank: The running time of the Algorithm is $O$(___).
INITIALIZATION:
For i = 1 ... n
    {For j = 1 ... n
        { if a[i,j] = 0 then P[i,j] = infinite  // i.e., if there is no direct path, put infinite
          else P[i,j] = a[i,j];
        }
    }

ALGORITHM:
For i = 1 ... n
    {For j = 1 ... n
        {For k = 1 ... n
            { P[i,j] = min(P[i,j], P[i,k] + P[k,j]); }
        }
    }
Time complexity: $O(n^3)$.
This algorithm is for weighted graphs, but it will work for unweighted graphs too, because if $p[i,j]=1$, $p[i,k]=1$ and $p[k,j]=1$, then according to the algorithm $p[i,j] = \min(p[i,j], p[i,k] + p[k,j]) = \min(1,2) = 1$.
All the other cases are also satisfied (e.g., if there was no direct edge from $i$ to $j$ in the last iteration but there exists a path via $k$).
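For reference, the standard Floyd-Warshall formulation puts $k$ (the intermediate vertex) in the outermost loop; a minimal C sketch under that convention (names ours):

```c
#include <assert.h>
#include <limits.h>

enum { NV_FW = 4 };
#define INF_FW (INT_MAX / 2)   /* large, but safe to add without overflow */

/* Standard Floyd-Warshall: P[i][j] holds the edge weight (1 in the 0/1
   adjacency case), INF_FW if there is no edge, 0 on the diagonal. */
void floyd_warshall(int P[NV_FW][NV_FW]) {
    for (int k = 0; k < NV_FW; k++)          /* intermediate vertex: outermost */
        for (int i = 0; i < NV_FW; i++)
            for (int j = 0; j < NV_FW; j++)
                if (P[i][k] + P[k][j] < P[i][j])
                    P[i][j] = P[i][k] + P[k][j];
}

int demo_fw(void) {
    int P[NV_FW][NV_FW];
    for (int i = 0; i < NV_FW; i++)
        for (int j = 0; j < NV_FW; j++)
            P[i][j] = (i == j) ? 0 : INF_FW;
    P[0][1] = P[1][2] = P[2][3] = 1;         /* directed path 0 -> 1 -> 2 -> 3 */
    floyd_warshall(P);
    return P[0][3];                          /* shortest path length 0 to 3 */
}
```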
Consider the following graph:
Among the following sequences:
 abeghf
 abfehg
 abfhge
 afghbe
which are depth first traversals of the above graph?
 I, II and IV only
 I and IV only
 II, III and IV only
 I, III and IV only
For GATE purpose, without actually Applying DFS, you could answer by just seeing options.
In DFS, we go in depth first i.e., one node to another in depth first order.
Here, abfehg is not possible as we cannot go from f to e directly.
Thus, option D is correct.
In all the other options we could reach directly from the node to the next node.
So, just visualize and do.
Let $G =(V,E)$ be an undirected graph with a subgraph $G_1 = (V_1, E_1)$. Weights are assigned to edges of $G$ as follows.
$$w(e) = \begin{cases} 0 \text{ if } e \in E_1 \\1 \text{ otherwise} \end{cases}$$
A singlesource shortest path algorithm is executed on the weighted graph $(V,E,w)$ with an arbitrary vertex $v_1$ of $V_1$ as the source. Which of the following can always be inferred from the path costs computed?
 The number of edges in the shortest paths from $v_1$ to all vertices of $G$
 $G_1$ is connected

$V_1$ forms a clique in $G$

$G_1$ is a tree
After applying the shortest path algorithm, check cost of vertex from source to every vertex in $G_1$. If $G_1$ is connected all these costs must be $0$ as edge weights of subgraph $G_1$ is $0$ and that should be the shortest path. If cost is not $0$, to at least one vertex in $G_1$ (not necessarily $G$), then $G_1$ is disconnected.
Ans is B.
Let $G= (V,E)$ be a directed graph with $n$ vertices. A path from $v_i$ to $v_j$ in $G$ is a sequence of vertices ($v_{i},v_{i+1}, \dots , v_j$) such that $(v_k, v_{k+1}) \in E$ for all $k$ in $i$ through $j-1$. A simple path is a path in which no vertex appears more than once.
Let $A$ be an $n \times n$ array initialized as follows.
$$A[j,k] = \begin{cases} 1 \text { if } (j,k) \in E \\ 0 \text{ otherwise} \end{cases}$$
Consider the following algorithm.
for i = 1 to n
    for j = 1 to n
        for k = 1 to n
            A[j,k] = max(A[j,k], A[j,i] + A[i,k]);
Which of the following statements is necessarily true for all j and k after termination of the above algorithm?
 $A[j,k] \leq n$
 If $A[j,j] \geq n1$ then $G$ has a Hamiltonian cycle
 If there exists a path from $j$ to $k$, $A[j,k]$ contains the longest path length from $j$ to $k$
 If there exists a path from $j$ to $k$, every simple path from $j$ to $k$ contains at most $A[j,k]$ edges
D is correct.
Consider a graph with $2$ nodes and one edge from $V_1$ to $V_2$.
Running the above algorithm will result in $A$ being

| A | 1 | 2 |
|---|---|---|
| 1 | 1 | 2 |
| 2 | 1 | 2 |
Clearly options B and C are wrong. Since
 $A[1][1]$ and $A[2][2] \geq n-1$, yet there exists no Hamiltonian cycle. Hence invalid.
 The longest path between $V_1$ and $V_2$ is $1$, but $A[1][2]$ is $2$, which is invalid. And there is no path from $V_2$ to $V_1$, yet $A[2][1] = 1$ (it would need to hold the longest path length between $j$ and $k$, not just some value).
Hence A or D could be valid.
Now consider a graph with $2$ nodes and two edges, one from $V_1$ to $V_2$ and the other from $V_2$ to $V_1$. Running the above algorithm will result in $A$ being

| A | 1 | 2 |
|---|---|---|
| 1 | 2 | 3 |
| 2 | 3 | 4 |
Hence option A is invalid, as $A[i][j]$ can be $>n$
D is correct
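The two-node checks above can be reproduced by running the question's algorithm directly (demo names ours):

```c
#include <assert.h>

/* The algorithm from the question on a 2-vertex graph:
   A[j][k] = max(A[j][k], A[j][i] + A[i][k]) over all i, j, k. */
void run_algo2(int A[2][2]) {
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            for (int k = 0; k < 2; k++) {
                int s = A[j][i] + A[i][k];
                if (s > A[j][k])
                    A[j][k] = s;
            }
}

/* One edge V1 -> V2: returns A[r][c] after termination. */
int demo_one_edge(int r, int c) {
    int A[2][2] = {{0, 1}, {0, 0}};
    run_algo2(A);
    return A[r][c];
}

/* Edges both ways between V1 and V2. */
int demo_two_edges(int r, int c) {
    int A[2][2] = {{0, 1}, {1, 0}};
    run_algo2(A);
    return A[r][c];
}
```

The second demo gives $A[2][2] = 4 > n$, confirming that option A fails too.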
Suppose we run Dijkstra’s single source shortestpath algorithm on the following edgeweighted directed graph with vertex P as the source.
In what order do the nodes get included into the set of vertices for which the shortest path distances are finalized?
 P,Q,R,S,T,U
 P,Q,R,U,S,T
 P,Q,R,U,T,S
 P,Q,T,R,U,S
Ans is (B). In Dijkstra's algorithm, at each step we pick the unfinalized vertex with the smallest tentative distance from the source, finalize its shortest path, and relax the edges leaving it.
Let $G_1=(V,E_1)$ and $G_2 =(V,E_2)$ be connected graphs on the same vertex set $V$ with more than two vertices. If $G_1 \cap G_2= (V,E_1\cap E_2)$ is not a connected graph, then the graph $G_1\cup G_2=(V,E_1\cup E_2)$
 cannot have a cut vertex
 must have a cycle
 must have a cutedge (bridge)
 Has chromatic number strictly greater than those of $G_1$ and $G_2$
There are two connected graphs $G_1$ and $G_2$ with the same vertex set. In the minimal case, both graphs have $n-1$ edges on $n$ vertices, as both the given graphs are connected.
When we take the union $G = G_1 \cup G_2$, $G$ has at most $2n-2$ edges (when the two graphs have no edges in common), and in any case more than $n-1$ edges, because the intersection of the two graphs is not connected.
Indeed, if both graphs $G_1$ and $G_2$ had exactly the same edges, their intersection would be a connected graph.
A graph with $n$ vertices and more than $n-1$ edges will definitely have a cycle.
Hence (B) is the correct option!
Take a tree for example:
 False. Every non-leaf vertex of a tree is a cut vertex.
 True.
 False. Without $E$ in $G_1$ and $G_2$, $G_1 \cup G_2$ has no bridge.
 False. $G_1 \cup G_2$, $G_1$, and $G_2$ can all three have the same chromatic number of $2$.
Consider the undirected graph below:
Using Prim's algorithm to construct a minimum spanning tree starting with node A, which one of the following sequences of edges represents a possible order in which the edges would be added to construct the minimum spanning tree?
 (E, G), (C, F), (F, G), (A, D), (A, B), (A, C)
 (A, D), (A, B), (A, C), (C, F), (G, E), (F, G)
 (A, B), (A, D), (D, F), (F, G), (G, E), (F, C)
 (A, D), (A, B), (D, F), (F, C), (F, G), (G, E)
Answer is D
A and B produce disconnected components with the GIVEN order in the options, which is NEVER allowed by Prim's algorithm.
C produces a connected component every instant a new edge is added, BUT when the first vertex is chosen (the first vertex is chosen arbitrarily), the first edge must be the minimum-weight edge incident on it. Therefore (A, D) MUST be chosen BEFORE (A, B). Therefore C is FALSE.
Let $G(V,E)$ be an undirected graph with positive edge weights. Dijkstra’s single source shortest path algorithm can be implemented using the binary heap data structure with time complexity:

$O\left(V^2\right)$

$O\left(E+V\log V\right)$

$O\left(V\log V\right)$

$O\left(\left(E+V\right)\log V\right)$
For the binary heap asked about here: $E$ decrease-key operations and $V$ extract-min operations, each taking $O(\log V)$ time, give $O\left(\left(E+V\right)\log V\right)$, which is option D.
B. Fibonacci heap: $E$ decrease-key operations each taking $O(1)$ amortized time, plus $V$ extract-min operations each taking $O\left(\log V\right)$.
A. Array: finding the min vertex in each iteration takes $O(V)$, and this needs to be done $V$ times.
A binomial heap behaves the same as a binary heap here, as the critical operations are decrease-key and extract-min.
Let $s$ and $t$ be two vertices in a undirected graph $G=(V,E)$ having distinct positive edge weights. Let $[X,Y]$ be a partition of $V$ such that $s \in X$ and $t \in Y$. Consider the edge $e$ having the minimum weight amongst all those edges that have one vertex in $X$ and one vertex in $Y$.
The edge $e$ must definitely belong to:

the minimum weighted spanning tree of $G$

the weighted shortest path from $s$ to $t$

each path from $s$ to $t$

the weighted longest path from $s$ to $t$
For part (a): the answer should be option A, because edge $e$ is the lightest safe edge connecting $X$ and $Y$, so the minimum spanning tree of $G$ must contain $e$ (greedy, optimal choice).
While B might seem correct, it is not always true. One such case is when $G$ is not connected: there might then be no path between $s$ and $t$ at all.
Since the question asks about what is definitely true, B is incorrect and A is the only correct option.
Let's say $AC =1$, $CD = 2$, $BD = 3$ and $AB=4$.
Then if $s= A$ and $t= B$, $AC$ is the lightest edge crossing $X$ and $Y$, where $X = \{ A \}$ and $Y = \{ C, B, D \}$.
But clearly $AC$ is not on the shortest path from $A$ to $B$. The shortest path is $AB = 4$.
Let $s$ and $t$ be two vertices in a undirected graph $G=(V,E)$ having distinct positive edge weights. Let $[X,Y]$ be a partition of $V$ such that $s \in X$ and $t \in Y$. Consider the edge $e$ having the minimum weight amongst all those edges that have one vertex in $X$ and one vertex in $Y$.
Let the weight of an edge $e$ denote the congestion on that edge. The congestion on a path is defined to be the maximum of the congestions on the edges of the path. We wish to find the path from $s$ to $t$ having minimum congestion. Which of the following paths is always such a path of minimum congestion?

a path from $s$ to $t$ in the minimum weighted spanning tree

a weighted shortest path from $s$ to $t$

an Euler walk from $s$ to $t$

a Hamiltonian path from $s$ to $t$
Here the answer should be A.
In the example figure, the shortest path gives a congestion of $6$.
The spanning tree contains edges of weights $2, 3, 4$, so the congestion in this case is $\max(2,3,4) = 4$; for the path from $s$ to $t$ the overall congestion is $\max(3,4) = 4$, though its total weight is $7$.
Options C and D are, I think, not related to this question.
In a depthfirst traversal of a graph $G$ with $n$ vertices, $k$ edges are marked as tree edges. The number of connected components in $G$ is
 $k$
 $k+1$
 $nk1$
 $nk$
Tree edges are those edges which appear in the final DFS forest. For example in case of connected graph (i.e. resulting dfs forest containing only one tree), if we run DFS over it, edges belonging to the resulting DFS tree are called tree edges.
Let us assume the graph has $x$ connected (or, in the case of a directed graph, strongly connected) components. Assume the 1st component has $K_1$ tree edges, the 2nd component has $K_2$ tree edges, and the $x$th component has $K_x$ tree edges,
such that $K_1 + K_2 + K_3 + \dots + K_x = k$ (the total number of tree edges).
In other words, the final DFS forest has $x$ trees having $K_1, K_2, K_3, \dots, K_x$ edges respectively.
Now we know that a tree having $K_i$ edges contains $K_i + 1$ nodes, for each tree.
So, the summation of nodes over all trees $= n$:
$(K_1 + 1) + (K_2 + 1) + (K_3 + 1) + \dots + (K_x + 1) = n \\ \implies (K_1+K_2+K_3+\dots+K_x) + x = n \\ \implies x = n - k$
Why?
If the graph is connected, then while doing DFS we visit some spanning tree of the graph, so the number of tree edges is $n-1$, and
the number of components $= n - (n-1) = 1$.
If the graph is not connected, then when we do DFS on the disconnected graph:
for every connected component with, say, $x$ vertices, we get $x-1$ tree edges. Summing over all components, the vertices add up to the total number of vertices, and the tree edges add up to the total number of vertices minus the number of connected components (since each component contributes its vertex count minus $1$ tree edges).
So, the total number of connected components is (D) $n-k$.
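The $x = n - k$ relation can be checked with a tiny DFS in C (all names ours):

```c
#include <assert.h>

enum { NV_CC = 6 };

static int cc_adj[NV_CC][NV_CC], cc_visited[NV_CC], cc_tree_edges;

static void cc_dfs(int u) {
    cc_visited[u] = 1;
    for (int v = 0; v < NV_CC; v++)
        if (cc_adj[u][v] && !cc_visited[v]) {
            cc_tree_edges++;        /* edge (u, v) becomes a tree edge */
            cc_dfs(v);
        }
}

/* Runs DFS over the whole graph and returns the component count,
   after checking that it equals n - k (k = number of tree edges). */
int demo_components(void) {
    /* two components: path 0-1-2 and triangle 3-4-5 */
    int edges[][2] = {{0, 1}, {1, 2}, {3, 4}, {4, 5}, {3, 5}};
    for (int e = 0; e < 5; e++) {
        cc_adj[edges[e][0]][edges[e][1]] = 1;
        cc_adj[edges[e][1]][edges[e][0]] = 1;
    }
    int comp = 0;
    for (int u = 0; u < NV_CC; u++)
        if (!cc_visited[u]) { comp++; cc_dfs(u); }
    return (comp == NV_CC - cc_tree_edges) ? comp : -1;
}
```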
In the following table, the left column contains the names of standard graph algorithms and the right column contains the time complexities of the algorithms. Match each algorithm with its time complexity.
| Algorithm | Time complexity |
|---|---|
| 1. Bellman-Ford algorithm | A: $O(m \log n)$ |
| 2. Kruskal’s algorithm | B: $O(n^3)$ |
| 3. Floyd-Warshall algorithm | C: $O(nm)$ |
| 4. Topological sorting | D: $O(n + m)$ |
 1→ C, 2 → A, 3 → B, 4 → D
 1→ B, 2 → D, 3 → C, 4 → A
 1→ C, 2 → D, 3 → A, 4 → B
 1→ B, 2 → A, 3 → C, 4 → D
 Bellman-Ford algorithm => C, $O(nm)$: with $n$ vertices and $m$ edges, every edge is relaxed $n-1$ times, giving $O(nm)$.
 Kruskal’s algorithm => the remaining option, A: $O(m \log n)$.
 Floyd-Warshall algorithm => B: a dynamic programming algorithm, $O(n^3)$.
 Topological sorting => D: boils down to DFS, $O(V+E) = O(n+m)$.
Answer A
A sink in a directed graph is a vertex i such that there is an edge from every vertex $j \neq i$ to $i$ and there is no edge from $i$ to any other vertex. A directed graph $G$ with $n$ vertices is represented by its adjacency matrix $A$, where $A[i] [j] = 1$ if there is an edge directed from vertex $i$ to $j$ and $0$ otherwise. The following algorithm determines whether there is a sink in the graph $G$.
i = 0;
do {
    j = i + 1;
    while ((j < n) && E1) j++;
    if (j < n) E2;
} while (j < n);
flag = 1;
for (j = 0; j < n; j++)
    if ((j != i) && E3) flag = 0;
if (flag) printf("Sink exists");
else printf("Sink does not exist");
Choose the correct expressions for $E_1$ and $E_2$
 $E_1 : A[i][j]$ and $E_2 : i = j$;
 $E_1 :\ !A[i][j]$ and $E_2 : i = j + 1$;
 $E_1:\ !A[i][j]$ and $E_2 : i = j$;
 $E_1 : A[i][j]$ and $E_2 : i = j + 1$;
If there is a sink in the graph, the adjacency matrix will contain all 1's (except diagonal) in one column and all 0's (except diagonal) in the corresponding row of that vertex. The given algorithm is a smart way of doing this as it finds the sink in $O(n)$ time complexity.
The first part of the code finds whether there is any vertex which doesn't have an outgoing edge to any vertex coming after it in the adjacency matrix. The smart part of the code is $E_2$, which skips rows when there is no edge from $i$ to them, making it impossible for them to form a sink. This is done through
$E_1: \ !A[i][j]$
and
$E_2: i = j$;
$E_1$ makes sure that there is no edge from $i$ to $j$ and $i$ is a potential sink till $A[i][j]$ becomes $1$. If $A[i][j]$ becomes $1$, $i$ can no longer be a sink, similarly all previous j can also not be a sink (as there was no edge from $i$ to them and a sink requires an edge from all other vertices). Now, the next potential candidate for sink is $j$. So, in $E_2$, we must make $i = j$.
So, answer is (C)
For $E_3$,
https://gateoverflow.in/3857/gate2005it_84b
A sink in a directed graph is a vertex i such that there is an edge from every vertex $j \neq i$ to $i$ and there is no edge from $i$ to any other vertex. A directed graph $G$ with $n$ vertices is represented by its adjacency matrix $A$, where $A[i] [j] = 1$ if there is an edge directed from vertex $i$ to $j$ and $0$ otherwise. The following algorithm determines whether there is a sink in the graph $G$.
i = 0;
do {
    j = i + 1;
    while ((j < n) && E1) j++;
    if (j < n) E2;
} while (j < n);
flag = 1;
for (j = 0; j < n; j++)
    if ((j != i) && E3) flag = 0;
if (flag) printf("Sink exists");
else printf("Sink does not exist");
Choose the correct expression for $E_3$
 $(A[i][j] \ \&\& \ !A[j][i])$
 $(!A[i][j] \ \&\& \ A[j][i])$
 $(!A[i][j] \ || \ A[j][i])$
 $(A[i][j] \ || \ !A[j][i])$
If there is a sink in the graph, the adjacency matrix will contain all 1s (except diagonal) in one column and all 0s (except diagonal) in the corresponding row of that vertex. The given algorithm is a smart way of doing this as it finds the sink in $O(n)$ time complexity.
The first part of the code finds whether there is any vertex which does not have an outgoing edge to any vertex coming after it in the adjacency matrix. The smart part of the code is $E_2$, which skips rows when there is no edge from $i$ to them, making it impossible for them to form a sink. This is done through
$E_1: \ !A[i][j]$
and
$E_2: i = j$;
$E_1$ makes sure that there is no edge from $i$ to $j$ and $i$ is a potential sink till $A[i][j]$ becomes $1$. If $A[i][j]$ becomes $1$, $i$ can no longer be a sink, similarly all previous $j$ can also not be a sink (as there was no edge from $i$ to them and a sink requires an edge from all other vertices). Now, the next potential candidate for sink is $j$. So, in $E_2$, we must make $i = j$.
Now, the loop breaks when we found a potential sink that is a vertex which does not have any outgoing edge to any coming after it in adjacency matrix. So, if the column in which this vertex comes is all 1s and the row is all 0s (except diagonal), this is the sink. Otherwise there is no sink in the graph. So, $E_3$ is checking this condition.
But in the code $\text{flag}$ is used for storing the state that sink is present or not. And as per the usage of flag in code, by default sink is considered present. So, the condition in $E_3$ must make flag = 0, if the found $i$ is not a sink. So, the condition should be:
$A[i][j] \ || \ !A[j][i]$
So, (D) is the answer.
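Putting $E_1$, $E_2$, and $E_3$ together gives the complete $O(n)$ sink check; a C sketch on a fixed-size matrix (function and demo names ours):

```c
#include <assert.h>

enum { SN = 4 };

/* Returns the universal sink's index, or -1 if none.
   E1 = !A[i][j], E2 = i = j, E3 = A[i][j] || !A[j][i]. */
int find_sink(int A[SN][SN]) {
    int i = 0, j;
    do {
        j = i + 1;
        while (j < SN && !A[i][j])    /* E1: i still a potential sink */
            j++;
        if (j < SN)
            i = j;                    /* E2: j is the next candidate */
    } while (j < SN);
    for (j = 0; j < SN; j++)
        if (j != i && (A[i][j] || !A[j][i]))   /* E3: i is not a sink */
            return -1;
    return i;
}

int demo_sink(void) {
    /* vertex 2 is a sink: all others point to it, it points nowhere */
    int A[SN][SN] = {
        {0, 1, 1, 0},
        {0, 0, 1, 0},
        {0, 0, 0, 0},
        {1, 0, 1, 0},
    };
    return find_sink(A);
}
```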
To implement Dijkstra’s shortest path algorithm on unweighted graphs so that it runs in linear time, the data structure to be used is:
 Queue
 Stack
 Heap
 BTree
Answer is A: Queue
We can find single-source shortest paths in an unweighted graph by using the breadth-first search (BFS) algorithm, which uses the "queue" data structure and takes $O(m+n)$ time (i.e., linear with respect to the number of vertices and edges).
Reference: GeeksforGeeks
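A minimal C sketch of BFS shortest paths with a plain array as the queue (names ours; an adjacency matrix is used only to keep the sketch short — with adjacency lists the traversal is $O(V+E)$):

```c
#include <assert.h>

enum { MAXV = 16 };

/* BFS single-source shortest path lengths in an unweighted graph. */
void bfs_dist(int n, int adj[MAXV][MAXV], int src, int dist[]) {
    int queue[MAXV], head = 0, tail = 0;
    for (int v = 0; v < n; v++)
        dist[v] = -1;                 /* -1 = not yet reached */
    dist[src] = 0;
    queue[tail++] = src;
    while (head < tail) {
        int u = queue[head++];        /* dequeue */
        for (int v = 0; v < n; v++)
            if (adj[u][v] && dist[v] < 0) {
                dist[v] = dist[u] + 1;
                queue[tail++] = v;    /* enqueue */
            }
    }
}

int demo_bfs(void) {
    int adj[MAXV][MAXV] = {0}, dist[MAXV];
    /* path 0-1-2-3 plus chord 0-2 */
    adj[0][1] = adj[1][0] = 1;
    adj[1][2] = adj[2][1] = 1;
    adj[2][3] = adj[3][2] = 1;
    adj[0][2] = adj[2][0] = 1;
    bfs_dist(4, adj, 0, dist);
    return dist[3];                   /* 0 -> 2 -> 3 */
}
```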
Let $T$ be a depth first search tree in an undirected graph $G$. Vertices $u$ and $ν$ are leaves of this tree $T$. The degrees of both $u$ and $ν$ in $G$ are at least $2$. which one of the following statements is true?
 There must exist a vertex $w$ adjacent to both $u$ and $ν$ in $G$
 There must exist a vertex $w$ whose removal disconnects $u$ and $ν$ in $G$
 There must exist a cycle in $G$ containing $u$ and $ν$
 There must exist a cycle in $G$ containing $u$ and all its neighbours in $G$
A single counterexample diagram eliminates options A, B, and C (the figures are missing here). Option D must hold: since $u$ is a leaf of the DFS tree but has degree at least $2$ in $G$, every edge of $G$ at $u$ other than its tree edge is a back edge to an ancestor of $u$. Taking the back edge to the highest such ancestor together with the tree path back down to $u$ gives a cycle through $u$ and all its neighbours.
Hence D is the answer.
Which of the following is the correct decomposition of the directed graph given below into its strongly connected components?
 $\left \{ P, Q, R, S \right \}, \left \{ T \right \},\left \{ U \right \}, \left \{ V \right \}$
 $\left \{ P, Q, R, S, T, V \right \}, \left \{ U \right \}$
 $\left \{ P, Q, S, T, V \right \}, \left \{ R \right \},\left \{ U \right \}$
 $\left \{ P, Q, R, S, T, U, V \right \}$
Here the answer is B.
A graph is said to be strongly connected if every vertex is reachable from every other vertex.
A strongly connected component is always maximal: if $X$ is a strongly connected component, there must not exist a larger strongly connected subgraph that contains $X$.
Here $\{R\}$ is strongly connected, but it is part of $\{P,Q,R,S\}$, which in turn is part of $\{P,Q,R,S,V,T\}$; so the components are $\{P,Q,R,S,T,V\}$ and $\{U\}$.
Consider the depthfirstsearch of an undirected graph with $3$ vertices $P$, $Q$, and $R$. Let discovery time $d(u)$ represent the time instant when the vertex $u$ is first visited, and finish time $f(u)$ represent the time instant when the vertex $u$ is last visited. Given that
$d(P) = 5$ units  $f(P) = 12$ units 
$d(Q) = 6$ units  $f(Q) = 10$ units 
$d(R) = 14$ units  $f(R) = 18$ units 
Which one of the following statements is TRUE about the graph
 There is only one connected component
 There are two connected components, and $P$ and $R$ are connected
 There are two connected components, and $Q$ and $R$ are connected
 There are two connected components, and $P$ and $Q$ are connected
As seen in the question, after time $10$ the search returns to $P$; $P$ finishes at $12$, and only then does $R$ start at $14$. So $R$ must be disconnected from $P$ and $Q$: if there were an edge from $Q$ (or $P$) to $R$, then $R$ would have been visited before $P$ and $Q$ finished.
D is the answer.
In an unweighted, undirected connected graph, the shortest path from a node $S$ to every other node is computed most efficiently, in terms of time complexity, by
 Dijkstra’s algorithm starting from $S$.
 Warshall’s algorithm.
 Performing a DFS starting from $S$.
 Performing a BFS starting from $S$.
Dijkstra's and Warshall's algorithms are designed for weighted graphs; on an unweighted graph they do more work than necessary.
Both DFS and BFS can find a path between two vertices in an undirected, unweighted graph, but only BFS guarantees a shortest path, which is what the question asks for. So BFS is the answer.
Note: finding just any path (DFS) and finding a shortest path (BFS) are different problems.
Must Read:
https://www.quora.com/WhataretheadvantagesofusingBFSoverDFSorusingDFSoverBFSWhataretheapplicationsanddownsidesofeach
Consider the DAG with $V = \{1,2,3,4,5,6\}$ shown below.
Which of the following is not a topological ordering?
 1 2 3 4 5 6
 1 3 2 4 5 6
 1 3 2 4 6 5
 3 2 4 1 6 5
Pick a vertex with indegree $0$, remove that vertex along with all edges going out of it, and repeat the procedure.
We see that $3$ cannot come first because its indegree is not $0$, so D is the answer here.
All other options are valid topological orders; only the relative order of $1$ and $4$ needs checking in this question.
A depthfirst search is performed on a directed acyclic graph. Let $d[u]$ denote the time at which vertex $u$ is visited for the first time and $f[u]$ the time at which the dfs call to the vertex $u$ terminates. Which of the following statements is always true for all edges $(u, v)$ in the graph ?
 $d[u] < d[v]$
 $d[u] < f[v]$
 $f[u] < f[v]$
 $f[u] > f[v]$
I'm going to disprove all the wrong options with one counterexample: start DFS at $v$ first; later, a DFS call on some other vertex $x$ reaches $u$, where $(u, v)$ is an edge.
 $d[u] < d[v]$: fails, since $v$ is discovered before $u$.
 $d[u] < f[v]$: fails for the same reason; $v$ finishes before $u$ is even discovered.
 $f[u] < f[v]$: fails again.
Option D always holds: a DAG has no back edges, so for every edge $(u, v)$ the DFS call on $v$ completes before the call on $u$ completes, giving $f[u] > f[v]$. So the answer is D.
Consider a weighted undirected graph with positive edge weights and let $uv$ be an edge in the graph. It is known that the shortest path from the source vertex $s$ to $u$ has weight 53 and the shortest path from $s$ to $v$ has weight 65. Which one of the following statements is always true?
 Weight $(u,v) \leq 12$
 Weight $(u,v) = 12$
 Weight $(u,v) \geq 12$
 Weight $(u,v) > 12$
If weight $(u, v) < 12$, then the path $s \to u \to v$ would have weight $53 + \text{weight}(u,v) < 65$, contradicting the fact that the shortest path from $s$ to $v$ has weight $65$. Hence weight $(u,v) \geq 12$, and option C is always true.
The Breadth First Search algorithm has been implemented using the queue data structure. One possible order of visiting the nodes of the following graph is
 MNOPQR
 NQMPOR
 QMNPRO
 QMNPOR
 MNOPQR $\to$ Not BFS: after M, the neighbours N, Q, R must be traversed (in some order) before P; here P is traversed before Q, which is wrong.
 NQMPOR $\to$ This is also not BFS: P is traversed before O.
 QMNPRO $\to$ Correct.
 QMNPOR $\to$ Incorrect, because R needs to be traversed before O (M is ahead of N in the queue).
Answer: C
Dijkstra's single source shortest path algorithm when run from vertex $a$ in the above graph, computes the correct shortest path distance to
 only vertex $a$
 only vertices $a, e, f, g, h$
 only vertices $a, b, c, d$
 all the vertices
The most efficient algorithm for finding the number of connected components in an undirected graph on $n$ vertices and $m$ edges has time complexity
 $\Theta(n)$
 $\Theta(m)$
 $\Theta(m+n)$
 $\Theta(mn)$
Run DFS to find connected components. Its time complexity is $\Theta(m+n)$, hence (C) is the answer.
Either BFS or DFS can be used to find the number of connected components; both take $\Theta(m+n)$ time.
Hence C is the answer.
Consider the following sequence of nodes for the undirected graph given below.
 a b e f d g c
 a b e f c g d
 a d g e b c f
 a d b c g e f
A Depth First Search (DFS) is started at node a. The nodes are listed in the order they are first visited. Which all of the above is (are) possible output(s)?
 1 and 3 only
 2 and 3 only
 2, 3 and 4 only
 1, 2 and 3 only
Answer: B
1. After f is visited, c or g should be visited next. So, the traversal is incorrect.
4. After c is visited, e or f should be visited next. So, the traversal is incorrect.
2 and 3 are correct.
Which of the following statement(s) is/are correct regarding BellmanFord shortest path algorithm?
P: Always finds a negative weighted cycle, if one exists.
Q: Finds whether any negative weighted cycle is reachable from the source.
 P only
 Q only
 Both P and Q
 Neither P nor Q
As we can see, the last step is the verification step, and in that step the values remain unchanged. If there were a negative-weight cycle reachable from the source, some values would still change during this verification step.
If the cycle is not reachable from the source, its vertices stay at distance (cost) $\infty$ from the source from the beginning to the last step, since taking anything away from $\infty$ still leaves it infinite.
But vertices that do not lie on any cycle and are unreachable from the source are also at distance $\infty$ from the beginning to the end.
Hence we cannot distinguish an unreachable negative cycle from such ordinary unreachable vertices. Thus the algorithm can detect negative-weight cycles only if they are reachable from the source.
answer = option B
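The argument above can be sketched in C (a hedged sketch, not the original code; the edge-list representation and names are assumptions): the verification pass flags a negative cycle only when some distance can still be improved, which never happens for vertices stuck at infinity.

```c
#include <limits.h>

struct edge { int u, v, w; };

/* Bellman-Ford from src over an edge list. Returns 1 if a negative-weight
 * cycle is REACHABLE from src, else 0 (dist[] then holds shortest
 * distances). Vertices unreachable from src keep dist == LLONG_MAX and
 * are skipped, so an unreachable negative cycle is never reported --
 * exactly the limitation discussed above. */
int bellman_ford(int n, int m, const struct edge e[], int src, long long dist[])
{
    const long long INF = LLONG_MAX;
    for (int i = 0; i < n; i++)
        dist[i] = INF;
    dist[src] = 0;
    for (int pass = 0; pass < n - 1; pass++)       /* n-1 relaxation rounds */
        for (int j = 0; j < m; j++)
            if (dist[e[j].u] != INF && dist[e[j].u] + e[j].w < dist[e[j].v])
                dist[e[j].v] = dist[e[j].u] + e[j].w;
    for (int j = 0; j < m; j++)                    /* verification round */
        if (dist[e[j].u] != INF && dist[e[j].u] + e[j].w < dist[e[j].v])
            return 1;                              /* still improving: cycle */
    return 0;
}
```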
Consider the directed graph shown in the figure below. There are multiple shortest paths between vertices $S$ and $T$. Which one will be reported by Dijkstra’s shortest path algorithm? Assume that, in any iteration, the shortest path to a vertex $v$ is updated only when a strictly shorter path to $v$ is discovered.
 SDT
 SBDT
 SACDT
 SACET
Relaxation proceeds as follows; the leftmost entry on each line is the vertex extracted next, followed by the current distances of the not-yet-extracted vertices:
 Extract $S$: $A=4$, $B=3$ (by $S$), $C=\infty$, $D=7$, $E=\infty$, $F=\infty$, $G=\infty$, $T=\infty$
 Extract $B$: $A=4$ (by $S$), $C=\infty$, $D=7$ $(\because 4+3 \ also = 7$, $S \to D$ kept$)$, $E=F=G=T=\infty$
 Extract $A$: $C=5$ $(S \to B \to A)$, $D=7$, $E=F=G=T=\infty$
 Extract $C$: $D=7$, $E=6$ $(S \to B \to C)$, $F=G=T=\infty$
 Extract $E$: $D=7$ $(S \to D)$, $F=\infty$, $G=8$ $(S \to A \to C \to E)$, $T=10$ $(S \to A \to C \to E)$
 Extract $D$: $F=12$ $(S \to B \to D)$, $G=8$, $T=10$ (same, so no change)
 Extract $G$: $F=12$, $T=10$
 Extract $T$: $F=12$
Now we see that the path reported from $S$ to $T$ is $S \to A \to C \to E \to T$,
which is option D.
What is the time complexity of BellmanFord singlesource shortest path algorithm on a complete graph of n vertices?
 $\theta(n^2)$
 $\theta(n^2\log n)$
 $\theta(n^3)$
 $\theta(n^3\log n)$
Let $G$ be a graph with $n$ vertices and $m$ edges.What is the tightest upper bound on the running time of Depth First Search on $G$, when $G$ is represented as an adjacency matrix?
 $\Theta(n)$
 $\Theta(n+m)$
 $\Theta(n^2)$
 $\Theta(m^2)$
Depth First Search of a graph takes $O(m+n)$ time when the graph is represented using an adjacency list. In the adjacency matrix representation, the graph is represented as an $n \times n$ matrix, and to do DFS we traverse, for every vertex, the entire row corresponding to that vertex to find all adjacent vertices (in the adjacency list representation we traverse only the adjacent vertices of the vertex). Therefore the time complexity becomes $O(n^{2})$.
Ans: (C)
http://web.eecs.utk.edu/~huangj/CS302S04/notes/graphsearching.html
Consider the directed graph below given.
Which one of the following is TRUE?
 The graph does not have any topological ordering.
 Both PQRS and SRQP are topological orderings.
 Both PSRQ and SPRQ are topological orderings.
 PSRQ is the only topological ordering.
Option C has been copied wrongly; it should read:
C. Both PSRQ and SPRQ are topological orderings
 Apply DFS by choosing P or S as starting vertices
 As the vertex gets a finishing time assign it to the head of a linked list
 The linked list is your required topological ordering
Choose a vertex of the graph with indegree $0$. This graph has two such vertices, namely $P$ and $S$, so we can start the topological sort from either $P$ or $S$.
First start from vertex $P$ and remove it from the graph; we are left with the three vertices $Q$, $S$, $R$. Of these, $S$ has indegree $0$, so the sequence is $P, S, R, Q$.
Repeating the above steps starting from vertex $S$, we get the sequence $S, P, R, Q$.
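The indegree-based procedure described above is Kahn's algorithm; a minimal C sketch (the adjacency-matrix representation and names are illustrative assumptions):

```c
#define MAXV 16

/* Kahn's algorithm, i.e. the procedure above: repeatedly output a vertex
 * of indegree 0 and delete its outgoing edges. order[] receives one valid
 * topological ordering; returns 0 if the graph contains a cycle. */
int topo_sort(int n, int adj[][MAXV], int order[])
{
    int indeg[MAXV] = {0}, done[MAXV] = {0};
    for (int u = 0; u < n; u++)
        for (int v = 0; v < n; v++)
            if (adj[u][v])
                indeg[v]++;
    for (int cnt = 0; cnt < n; cnt++) {
        int u = -1;
        for (int v = 0; v < n; v++)        /* pick any indegree-0 vertex */
            if (!done[v] && indeg[v] == 0) { u = v; break; }
        if (u < 0)
            return 0;                      /* only cycles remain */
        done[u] = 1;
        order[cnt] = u;
        for (int v = 0; v < n; v++)        /* delete u's outgoing edges */
            if (adj[u][v])
                indeg[v]--;
    }
    return 1;
}
```

Picking $P$ or $S$ at the choice points above corresponds to which indegree-$0$ vertex the inner scan returns first.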
Consider the tree arcs of a BFS traversal from a source node $W$ in an unweighted, connected, undirected graph. The tree $T$ formed by the tree arcs is a data structure for computing
 the shortest path between every pair of vertices.
 the shortest path from $W$ to every vertex in the graph.
 the shortest paths from $W$ to only those nodes that are leaves of $T$.
 the longest path in the graph.
Suppose depth first search is executed on the graph below starting at some unknown vertex. Assume that a recursive call to visit a vertex is made only after first checking that the vertex has not been visited earlier. Then the maximum possible recursion depth (including the initial call) is _________.
There are $21$ nodes in total, and $2$ of them require backtracking in this question.
So the maximum recursion depth is $21 - 2 = 19$.
(Do DFS from the extreme ends so that the maximum recursion depth occurs, i.e., take the left-top node as the initial node for DFS, as shown in the image below.)
Note: backtracking reduces the recursion depth on the stack.
Let $G = (V, E)$ be a simple undirected graph, and $s$ be a particular vertex in it called the source. For $x \in V$, let $d(x)$ denote the shortest distance in $G$ from $s$ to $x$. A breadth first search (BFS) is performed starting at $s$. Let $T$ be the resultant BFS tree. If $(u, v)$ is an edge of $G$ that is not in $T$, then which one of the following CANNOT be the value of $d(u) - d(v)$?
 $-1$
 $0$
 $1$
 $2$
$d(u) - d(v) = 0$ is possible when both $u$ and $v$ have an edge from some vertex $t$, and $t$ is on the shortest paths from $s$ to both $u$ and $v$.
$d(u) - d(v) = 1$ is possible when $v$ and $t$ are on the shortest path from $s$ to $u$, and $t$ and $v$ are siblings (the same distance from $s$), causing the edge $tu$ to be in the BFS tree instead of $vu$.
$d(u) - d(v) = -1$ is possible as explained above by interchanging $u$ and $v$.
$d(u) - d(v) = 2$ is not possible. In the BFS traversal we visit either $u$ or $v$ first; say we take $u$ first. We then put all neighbours of $u$ on the queue; since $v$ is a neighbour and not visited before (as assumed), $d(v)$ becomes $d(u) + 1$. Similarly when $v$ is visited first.
(B) Take distances from $S$: $d(U) = 1$ and $d(V) = 1$, so $d(U) - d(V) = 0$; both are at shortest distance $1$, and the BFS traversal will not take the edge $UV$.
(C) Now assuming $d(U) - d(V) = 1$, we similarly calculate the distances from $S$ (in the next figure).
(A) In the previous figure, interchange the positions of $U$ and $V$ to get $d(U) - d(V) = -1$.
(D) But $2$ is not possible: there is always the direct edge from $U$ to $V$, so no graph can give a difference of $2$; the maximum possible difference is $1$.
So, the answer is (D).
Number of nodes at level $1$ of the tree $\Rightarrow 2$
Number of nodes at level $2$ of the tree $\Rightarrow 4$
Number of nodes at level $3$ of the tree $\Rightarrow 8$
Number of nodes at level $4$ of the tree $\Rightarrow 16$
The last node in level $4$ is the node we are looking for $\Rightarrow 1+2+4+8+16 = 31$
In an adjacency list representation of an undirected simple graph $G=(V, E)$, each edge $(u, v)$ has two adjacency list entries: $[v]$ in the adjacency list of $u$, and $[u]$ in the adjacency list of $v$. These are called twins of each other. A twin pointer is a pointer from an adjacency list entry to its twin. If $E=m$ and $V=n$, and the memory size is not a constraint, what is the time complexity of the most efficient algorithm to set the twin pointer in each entry in each adjacency list?
 $\Theta\left(n^{2}\right)$
 $\Theta\left(n+m\right)$
 $\Theta\left(m^{2}\right)$
 $\Theta\left(n^{4}\right)$
Applying BFS on the undirected graph gives you the twin pointers: visit every vertex level-wise and, for every vertex, fill in the adjacent vertices in the adjacency list, linking each new entry to its twin. BFS takes $\Theta\left(n+m\right)$ time.
Keeping an extra field per entry to record its position in the other list costs an additional $O(n+m)$ time ($n$ vertices and $m$ edges).
So B is the answer.
Let $G=\left ( V,E \right )$ be $any$ connected undirected edgeweighted graph. The weights of the edges in $E$ are positive and distinct. Consider the following statements:
 Minimum Spanning Tree of $G$ is always unique.
 Shortest path between any two vertices of $G$ is always unique.
Which of the above statements is/are necessarily true?
 I only
 II only
 both I and II
 neither I nor II
Ans: A
> The MST fails to be unique only when some edge weights are repeated. Here the edge weights are distinct, so the MST is unique; be careful about the keyword DISTINCT.
> The shortest path can be non-unique even when the edge weights are distinct. An example is shown below, where the shortest path from A to C is not unique.
The Breadth First Search (BFS) algorithm has been implemented using the queue data structure. Which one of the following is a possible order of visiting the nodes in the graph below?
 MNOPQR
 NQMPOR
 QMNROP
 POQNMR
In BFS, starting from a node we first traverse all nodes adjacent to it, then repeat the same for the next nodes. Here only option D follows the BFS sequence properly:
 In A, starting from M, the neighbours N, Q, R must come right after it in some order, but O comes instead, so it is not BFS.
 In B, starting from N, the neighbours Q, M, O must come right after it in some order, but P comes instead, so it is not BFS.
 In C, starting from Q, P must be traversed before R, so it is not BFS.
 D follows the sequence correctly.
So D is the correct answer.
Let $G$ be an undirected graph. Consider a depthfirst traversal of $G$, and let $T$ be the resulting depthfirst search tree. Let $u$ be a vertex in $G$ and let $v$ be the first new (unvisited) vertex visited after visiting $u$ in the traversal. Which of the following statement is always true?
 $\{u, v\}$ must be an edge in $G$, and $u$ is a descendant of $v$ in $T$
 $\{u, v\}$ must be an edge in $G$, and $v$ is a descendant of $u$ in $T$
 If $\{u, v\}$ is not an edge in $G$ then $u$ is a leaf in $T$
 If $\{u, v\}$ is not an edge in $G$ then $u$ and $v$ must have the same parent in $T$
Let this be the DFS order of the tree, with $u = D$ and $v = F$.
What we can conclude:
 It is not necessary that there is an edge between $u$ and $v$.
 If there is no edge, then $u$ must be a leaf; i.e., $D$ is a leaf here.
 It is not always the case that $u$ and $v$ have the same parent, but they do have a common ancestor.
Hence option C is correct.
Given a weighted directed graph with $n$ vertices where edge weights are integers (positive, zero, or negative), determining whether there are paths of arbitrarily large weight can be performed in time
 $O(n)$
 $O(n . \log(n))$ but not $O (n)$
 $O(n^{1.5})$ but not $O (n \log n)$
 $O(n^{3})$ but not $O(n^{1.5})$
 $O(2^{n})$ but not $O(n^{3})$
Consider the following directed graph.
Suppose a depthfirst traversal of this graph is performed, assuming that whenever there is a choice, the vertex earlier in the alphabetical order is to be chosen. Suppose the number of tree edges is $T$, the number of back edges is $B$ and the number of cross edges is $C$. Then
 $B = 1$, $C = 1$, and $T = 4$.
 $B = 0$, $C = 2$, and $T = 4$.
 $B = 2$, $C = 1$, and $T = 3$.
 $B = 1$, $C = 2$, and $T = 3$.
 $B = 2$, $C = 2$, and $T = 1$.
Since the question says that whenever there is a choice we have to select the vertex that comes earlier alphabetically, we choose the starting node as $A$.
The DFS tree then becomes $A \to B \to E \to C$, so the number of tree edges is $3$, i.e., $T=3$.
There is one cycle $B E C$, so we get a back edge from $C$ to $B$ while performing DFS; hence $B=1$.
Now $D$ is left as a disconnected node, and its edges can only become cross edges. There are $2$ cross edges, $D \to A$ and $D \to B$; therefore $C=2$.
The answer is option D.
Correct me if I am going wrong.
 Sort the degrees in nonincreasing order.
 Pick up the highest degree ( let us say it a), remove it from the list of degrees and subtract $1$ from next a degrees in list.
 Repeat Step $2$ until :
 If we get all $0$ entries in list $\color{red}{\Rightarrow}$ Simple Graph exits
 If we get a negative entry or not enough entries to subtract $1$ in step 2 $\color{red}{\Rightarrow}$ Simple Graph does not exist
Read More : http://goo.gl/4u3nfh
Let's take a example : $3, 2, 1, 2$
Step 1 : Sort the degree sequence : $3, 2, 2, 1$
Step 2: Pick $3$, Remove $3$ from list and from next $3$ elements subtract $1$, $ Result : (1,1,0) $
Again repeat Step 2: select $1$, remove it, and subtract $1$ from the next $1$ entry. $Result : (0,0)$
Thus, a simple graph exists for the following degreesequence.
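The steps above (the Havel–Hakimi test) can be sketched as follows; the function name is illustrative, and the sequence is modified in place:

```c
/* Havel-Hakimi test, following the steps above: sort non-increasing,
 * remove the largest degree d, subtract 1 from the next d entries,
 * repeat. Returns 1 iff some simple graph realizes the sequence. */
int havel_hakimi(int n, int deg[])
{
    while (1) {
        for (int i = 1; i < n; i++)        /* sort non-increasing */
            for (int j = i; j > 0 && deg[j] > deg[j - 1]; j--) {
                int t = deg[j]; deg[j] = deg[j - 1]; deg[j - 1] = t;
            }
        if (deg[0] == 0)
            return 1;                      /* all zeros: graph exists */
        int d = deg[0];
        if (d > n - 1)
            return 0;                      /* not enough entries left */
        for (int i = 1; i <= d; i++)
            if (--deg[i] < 0)
                return 0;                  /* a degree went negative */
        for (int i = 0; i < n - 1; i++)    /* drop the used entry */
            deg[i] = deg[i + 1];
        n--;
    }
}
```

On the example sequence $3, 2, 1, 2$ the loop reproduces the steps shown above and reports that a simple graph exists.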
The minimum number of record movements required to merge five files A (with $10$ records), B (with $20$ records), C (with $15$ records), D (with $5$ records) and E (with $25$ records) is:
 $165$
 $90$
 $75$
 $65$
Arrange the files in increasing order of records:
D $(5)$, A $(10)$, C $(15)$, B $(20)$, E $(25)$
Repeatedly merge the two smallest files:
$5$ (D) $+ \ 10$ (A) $= 15$ movements
$15 + 15$ (C) $= 30$ movements
$20$ (B) $+ \ 25$ (E) $= 45$ movements
$30 + 45 = 75$ movements
Number of movements $=15+30+45+75=165$
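The merge pattern above is the optimal merge (Huffman-style) strategy: always combine the two smallest files. A minimal C sketch (for large inputs a min-heap would replace the linear scan; names are illustrative):

```c
/* Optimal merge pattern: always merge the two smallest files, exactly
 * the tree built above. sizes[] is modified in place; returns the total
 * number of record movements (every record in a merge moves once). */
int optimal_merge_cost(int n, int sizes[])
{
    int total = 0;
    while (n > 1) {
        int lo = (sizes[0] <= sizes[1]) ? 0 : 1;   /* two smallest */
        int lo2 = 1 - lo;
        for (int i = 2; i < n; i++) {
            if (sizes[i] < sizes[lo])       { lo2 = lo; lo = i; }
            else if (sizes[i] < sizes[lo2]) { lo2 = i; }
        }
        int merged = sizes[lo] + sizes[lo2];
        total += merged;                   /* cost of this merge */
        sizes[lo] = merged;                /* keep the merged file ... */
        sizes[lo2] = sizes[n - 1];         /* ... and delete the other */
        n--;
    }
    return total;
}
```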
The following are the starting and ending times of activities $A, B, C, D, E, F, G$ and $H$ respectively in chronological order: $“a_s \: b_s \: c_s \: a_e \: d_s \: c_e \: e_s \: f_s \: b_e \: d_e \: g_s \: e_e \: f_e \: h_s \: g_e \: h_e”$. Here, $x_s$ denotes the starting time and $x_e$ denotes the ending time of activity X. We need to schedule the activities in a set of rooms available to us. An activity can be scheduled in a room only if the room is reserved for the activity for its entire duration. What is the minimum number of rooms required?
 3
 4
 5
 6
Solution: B
The problem can be modeled as a graph coloring problem. Construct a graph with one node corresponding to each activity $A,B,C,D,E,F,G$ and $H$. Connect the activities that occur between the start and end time of an activity now the chromatic number of the graph is the number of rooms required.
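Equivalently, since the activities form an interval graph, the chromatic number is simply the maximum number of activities running at once, which one sweep over the chronological event sequence gives. A sketch (the 's'/'e' encoding of the event string is an assumption made here):

```c
/* Sweep over the chronological event string: +1 at every start ('s'),
 * -1 at every end ('e'). The running maximum is the minimum number of
 * rooms, i.e. the chromatic number of the interval graph above. */
int min_rooms(const char *events)
{
    int running = 0, best = 0;
    for (; *events; events++) {
        if (*events == 's') running++;       /* an activity starts */
        else if (*events == 'e') running--;  /* an activity ends */
        if (running > best) best = running;
    }
    return best;
}
```

The sequence $a_s b_s c_s a_e d_s c_e e_s f_s b_e d_e g_s e_e f_e h_s g_e h_e$ encodes as "sssesesseeseesee", and the sweep peaks at $4$, matching option B.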
We are given 9 tasks $T_1, T_2, \dots, T_9$. The execution of each task requires one unit of time. We can execute one task at a time. Each task $T_i$ has a profit $P_i$ and a deadline $d_i$. Profit $P_i$ is earned if the task is completed before the end of the $d_i^{th}$ unit of time.
Task  $T_1$  $T_2$  $T_3$  $T_4$  $T_5$  $T_6$  $T_7$  $T_8$  $T_9$ 
Profit  $15$  $20$  $30$  $18$  $18$  $10$  $23$  $16$  $25$ 
Deadline  $7$  $2$  $5$  $3$  $4$  $5$  $2$  $7$  $3$ 
Are all tasks completed in the schedule that gives maximum profit?
 All tasks are completed
 $T_1$ and $T_6$ are left out
 $T_1$ and $T_8$ are left out
 $T_4$ and $T_6$ are left out
Step 1: Sort the tasks in decreasing order of profit; if two or more tasks tie, resolve the conflict by placing the task with the greater deadline first (because we have more time to complete a task with a greater deadline and the same profit).
Step 2: Since the maximum deadline given is $7$, we consider $7$ time slots ranging from $0$ to $7$, where a task $T_i$ with deadline, say, $2$ can be filled into slot $0$–$1$ or $1$–$2$ and not beyond $2$, because it has to be completed by time $T=2$ at the latest.
Now according to question, since Each task completes in Unit time, so a single tasks takes only one slot as shown.
Now Take the first task in the list i.e. $T_3$ which has a deadline of $5$, so it can be completed in maximum $5$ time units, so place it in slot $45$ which is the maximum deadline by which this task can be completed.
Task $T_9$ with deadline $3$ is similarly placed in slot $23$.
Task $T_7$ with deadline $2$ is placed in slot $12$.
Now for task $T_2$ having deadline $2$ can be placed in either $01$ or $12$ (Occupied by $T_7$). So $T_2$ will occupy slot $01$.
Task $T_5$ with deadline $4$ is placed in slot $34$.
Now comes task $T_4$ which has deadline $3$ can be put in slots $01$ or $12$ or $23$ and not beyond that.Unfortunately, all such slots are occupied so $T_4$ will be left out.
Task $T_8$ with deadline $7$ goes in slot $67$.
Task $T_1$ with deadline $7$ can be placed in slot $56$.
Now all time slots are full.
So, Task $T_6$ will be left out.
So, option (d) is the answer.
The most important statement in question is
each task requires one unit of time
This shows that we can greedily choose the better task and that should give us the optimal solution. The best task would be the one with maximum profit. Thus we can sort the tasks based on deadline and then profit as follows:
Task  T7  T2  T9  T4  T5  T3  T6  T8  T1 
Deadline  2  2  3  3  4  5  5  7  7 
Slots: 0 [T7] 1 [T2] 2 [T9] 3 [T5] 4 [T3] 5 [T8] 6 [T1] 7
T4 and T6 left out
D is answer.
We are given $9$ tasks $T_1, T_2, \dots, T_9$. The execution of each task requires one unit of time. We can execute one task at a time. Each task $T_i$ has a profit $P_i$ and a deadline $d_i$. Profit $P_i$ is earned if the task is completed before the end of the $d_i^{th}$ unit of time.
Task  $T_1$  $T_2$  $T_3$  $T_4$  $T_5$  $T_6$  $T_7$  $T_8$  $T_9$ 
Profit  $15$  $20$  $30$  $18$  $18$  $10$  $23$  $16$  $25$ 
Deadline  $7$  $2$  $5$  $3$  $4$  $5$  $2$  $7$  $3$ 
What is the maximum profit earned?
 $147$
 $165$
 $167$
 $175$
The most important statement in question is
each task requires one unit of time
This shows that we can greedily choose the better task and that should give us the optimal solution. The best task would be the one with maximum profit. Thus we can sort the tasks based on deadline and then profit as follows:
Task  $T_7$  $T_2$  $T_9$  $T_4$  $T_5$  $T_3$  $T_6$  $T_8$  $T_1$ 
Deadline  $2$  $2$  $3$  $3$  $4$  $5$  $5$  $7$  $7$ 
Slots: $0$ [$T_7$] $1$ [$T_2$] $2$ [$T_9$] $3$ [$T_5$] $4$ [$T_3$] $5$ [$T_8$] $6$ [$T_1$] $7$
So we know that $T_4$ and $T_6$ are left out,
and the profit excludes $T_4$ and $T_6$: $15 + 20 + 30 + 18 + 16 + 23 + 25 = 147$.
A is the answer.
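The greedy schedule above can be sketched as: sort by profit and drop each task into the latest free unit slot at or before its deadline (function and array names are illustrative):

```c
/* Greedy job sequencing: consider tasks in non-increasing profit order
 * and place each into the latest free unit slot at or before its
 * deadline; tasks that find no slot are dropped. Returns total profit. */
int job_sequencing(int n, const int profit[], const int deadline[])
{
    int order[32], slot[32], total = 0;
    for (int i = 0; i < n; i++) order[i] = i;
    for (int i = 1; i < n; i++)            /* sort indices by profit, desc */
        for (int j = i; j > 0 && profit[order[j]] > profit[order[j - 1]]; j--) {
            int t = order[j]; order[j] = order[j - 1]; order[j - 1] = t;
        }
    for (int s = 0; s < 32; s++) slot[s] = -1;
    for (int i = 0; i < n; i++) {
        int task = order[i];
        for (int s = deadline[task] - 1; s >= 0; s--)
            if (slot[s] < 0) {             /* latest free slot wins */
                slot[s] = task;
                total += profit[task];
                break;
            }
    }
    return total;
}
```

On the nine tasks of this question the total comes out to $147$, with $T_4$ and $T_6$ dropped.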
The characters $a$ to $h$ have the set of frequencies based on the first $8$ Fibonacci numbers as follows
$a : 1$, $b : 1$, $c : 2$, $d : 3$, $e : 5$, $f : 8$, $g : 13$, $h : 21$
A Huffman code is used to represent the characters. What is the sequence of characters corresponding to the following code?
$110111100111010$
 $fdheg$
 $ecgdf$
 $dchfg$
 $fehdg$
Answer is A. Huffman's tree is as follows. The two least frequent characters are taken as the children of a newly made node and the frequency of the newly made node is made equal to the sum of those two child nodes. Then the same procedure is repeated till all nodes are finished.
$110111100111010 = 110 11110 0 1110 10 = fdheg$
Suppose the letters $a, \,b, \,c, \,d, \,e, \,f$ have probabilities $\frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \frac{1}{16}, \frac{1}{32}, \frac{1}{32}$, respectively.
Which of the following is the Huffman code for the letter $a, \,b, \,c, \,d, \,e, \,f$?
 $0$, $10$, $110$, $1110$, $11110$, $11111$
 $11$, $10$, $011$, $010$, $001$, $000$
 $11$, $10$, $01$, $001$, $0001$, $0000$
 $110$, $100$, $010$, $000$, $001$, $111$
Based on the probabilities, we can say the probable frequency of the letters will be
$16$, $8$, $4$, $2$, $1$, $1$
Now, the Huffman tree can be constructed as follows:
So, A is the answer.
https://www.siggraph.org/education/materials/HyperGraph/video/mpeg/mpegfaq/huffman_tutorial.html
Suppose the letters $a, \,b, \,c, \,d, \,e, \,f$ have probabilities $\frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \frac{1}{16}, \frac{1}{32}, \frac{1}{32}$, respectively.
What is the average length of the Huffman code for the letters $a, \,b, \,c, \,d, \,e, \,f$?
 $3$
 $2.1875$
 $2.25$
 $1.9375$
Ans should be D.
Avg length $= \frac{1}{2} \times 1 + \frac{1}{4} \times 2+ \frac{1}{8} \times 3+ \frac{1}{16} \times 4+ \frac{1}{32} \times 5+ \frac{1}{32} \times 5 \\ = \frac{16 + 16 + 12 + 8 + 5 + 5}{32} \\ = 1.9375 $
A message is made up entirely of characters from the set $X=\{P, Q, R, S, T\}$. The table of probabilities for each of the characters is shown below:
Character  Probability 
P  0.22 
Q  0.34 
R  0.17 
S  0.19 
T  0.08 
Total  1.00 
If a message of 100 characters over $X$ is encoded using Huffman coding, then the expected length of the encoded message in bits is ______.
Build the Huffman tree: merge $T(0.08) + R(0.17) = 0.25$; then $S(0.19) + P(0.22) = 0.41$; then $Q(0.34) + 0.25 = 0.59$; finally $0.41 + 0.59 = 1.00$. The resulting code lengths are $2$ bits for $P$, $Q$, $S$ and $3$ bits for $R$, $T$.
Expected length per character $= 0.22(2) + 0.34(2) + 0.17(3) + 0.19(2) + 0.08(3) = 2.25$ bits.
For a message of $100$ characters, the expected length $= 2.25 \times 100 = 225$ bits.
So the answer is $225$.
Choose the correct alternatives (more than one may be correct) and write the corresponding letters only:
Consider the following Pascal function:
Function X(M: integer): integer;
Var i: integer;
Begin
    i := 0;
    while i*i < M do
        i := i + 1;
    X := i
end
The function call $X(N)$, if $N$ is positive, will return
 $\lfloor\sqrt N \rfloor$
 $\lfloor\sqrt N \rfloor +1$
 $\lceil \sqrt N \rceil$
 $\lceil \sqrt N \rceil +1$
 None of the above
For $N=9$, it returns $3$.
For $N=10$ it returns $4$.
For $N=16$ it returns $4$.
For $N=17$ it returns $5$.
So answer should be C.
What function of $x$, $n$ is computed by this program?
Function what(x, n: integer): integer;
Var value: integer;
begin
    value := 1;
    if n > 0 then
    begin
        if n mod 2 = 1 then
            value := value * x;
        value := value * what(x*x, n div 2);
    end;
    what := value;
end;
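The routine computes $x^n$ by repeated squaring: the low bit of $n$ decides whether the current base is multiplied into the result, then the base is squared and $n$ halved, so only $O(\log n)$ multiplications are performed. A direct C transcription (a sketch):

```c
/* C transcription of the Pascal routine above: computes x to the power n
 * by repeated squaring. 'n mod 2 = 1' becomes n % 2 == 1, and
 * 'n div 2' becomes n / 2 (integer division). */
int what(int x, int n)
{
    int value = 1;
    if (n > 0) {
        if (n % 2 == 1)                   /* odd exponent: multiply base in */
            value = value * x;
        value = value * what(x * x, n / 2);
    }
    return value;
}
```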
In the following Pascal program segment, what is the value of X after the execution of the program segment?
X := -10; Y := 20;
If X > Y then
    if X < 0 then
        X := abs(X)
    else
        X := 2*X;
 $10$
 $20$
 $-10$
 None
$X$ remains unchanged, as the if condition (X > Y) is false:
X := -10
Ans is C. This is a classic example of the dangling-else issue: in both C and Pascal, an else always matches the nearest unmatched if.
https://en.wikipedia.org/wiki/Dangling_else
if (x > y) {
    if (x < 0)
        x = abs(x);
    else
        x = 2*x;
}
What value would the following function return for the input $x=95$?
Function fun(x: integer): integer;
Begin
    If x > 100 then
        fun := x - 10
    Else
        fun := fun(fun(x + 11))
End;
 $89$
 $90$
 $91$
 $92$
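This is the classic McCarthy 91 recursion: for every $x \leq 100$ the nested calls bottom out at $91$, so the call with $x = 95$ returns $91$ (option C). A direct C transcription:

```c
/* The function above in C: the McCarthy 91 function. For x > 100 it
 * returns x - 10; for every x <= 100 the double recursion returns 91. */
int fun(int x)
{
    if (x > 100)
        return x - 10;
    return fun(fun(x + 11));
}
```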
Consider the following $C$ function definition
int Trial(int a, int b, int c)
{
    if ((a >= b) && (c < b))
        return b;
    else if (a >= b)
        return Trial(a, c, b);
    else
        return Trial(b, a, c);
}
The functional Trial:
 Finds the maximum of $a$, $b$, and $c$
 Finds the minimum of $a$, $b$, and $c$
 Finds the middle number of $a$, $b$, $c$
 None of the above
Try $a = b = c$, for example $(1, 1, 1)$: the condition $c < b$ is never true, so Trial$(1, 1, 1)$ keeps calling Trial$(1, 1, 1)$ and never returns. Since the function does not even terminate on all inputs, none of A, B, C holds, and the answer is D.
Suppose you are given an array $s[1....n]$ and a procedure reverse $(s, i, j)$ which reverses the order of elements in $s$ between positions $i$ and $j$ (both inclusive). What does the following sequence do, where $1 \leqslant k \leqslant n$:
reverse (s, 1, k); reverse (s, k+1, n); reverse (s, 1, n);
 Rotates $s$ left by $k$ positions
 Leaves $s$ unchanged
 Reverses all elements of $s$
 None of the above
Answer is A.
The effect of the above $3$ reversals, for any $k$, is equivalent to a left rotation of the array of size $n$ by $k$ positions.
Let $S[1 \ldots 7] = [1, 2, 3, 4, 5, 6, 7]$,
so $n = 7$ and $k = 2$.
reverse $(S,1,2)$ we get $[2,1,3,4,5,6,7]$
reverse $(S,3,7)$ we get $[2,1,7,6,5,4,3]$
reverse $(S,1,7)$ we get $[3,4,5,6,7,1,2]$
Hence option A, rotates $s$ left by $k$ positions, is correct.
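The three reversals can be transcribed directly (0-based indexing here, versus 1-based in the question; names are illustrative):

```c
/* The three-reversal rotation from the question, 0-based: reversing
 * s[0..k-1], then s[k..n-1], then the whole array rotates s left by k. */
static void reverse(int s[], int i, int j)
{
    while (i < j) {
        int t = s[i]; s[i] = s[j]; s[j] = t;
        i++; j--;
    }
}

void rotate_left(int s[], int n, int k)
{
    reverse(s, 0, k - 1);     /* reverse the first k elements */
    reverse(s, k, n - 1);     /* reverse the remaining n-k elements */
    reverse(s, 0, n - 1);     /* reverse everything */
}
```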
Consider the following $C$ function.
For large values of $y$, the return value of the function $f$ best approximates
float f(float x, int y)
{
    float p, s;
    int i;
    for (s = 1, p = 1, i = 1; i < y; i++) {
        p *= x / i;
        s += p;
    }
    return s;
}
 $x^y$
 $e^x$
 $\text{ln} (1+x)$
 $x^x$
The simplified version of the above program can be rewritten as
float f(float x, int y)
{
    float p = 1, s = 1;
    int i;
    for (i = 1; i < y; i++) {
        p = p * (x / i);
        s = s + p;
    }
    return s;
}
We can move the initializations p = 1, s = 1 out of the for loop because the loop condition does not involve p or s.
i  p= p * (x/i)  s= s + p 
1  x  1+x 
2  $\frac{x^{2}}{2}$  1+x+$\frac{x^{2}}{2}$ 
3  $\frac{x^{3}}{6}$  1+x+$\frac{x^{2}}{2}$+$\frac{x^{3}}{6}$ 
4  $\frac{x^{4}}{24}$  1+x+$\frac{x^{2}}{2}$+$\frac{x^{3}}{6}$+$\frac{x^{4}}{24}$ 
n  $\frac{x^{n}}{n!}$  $1+x+\frac{x^{2}}{2}+\frac{x^{3}}{6}+\dots+\frac{x^{n}}{n!}$ 
As $n$ grows, $s = 1+x+\frac{x^{2}}{2}+\frac{x^{3}}{6}+\frac{x^{4}}{24}+\dots+\frac{x^{n}}{n!} \to \sum_{k=0}^{\infty}\frac{x^{k}}{k!} = e^{x}$ 
Hence Option B is Ans.
$$\begin{array}{lllll}
i = 1 &\text{then}& p = x &\&& s = 1 + x\\[1em]
i = 2 &\text{then}& p = \frac{x^2}{2} &\&& s = 1+x+\frac{x^2}{2}
\end{array}$$
As $y$ tends to infinity, $s$ tends to $e^x$.
Hence, the correct answer is option B.
In the following $C$ program fragment, $j$, $k$, $n$ and TwoLog_n are integer variables, and $A$ is an array of integers. The variable $n$ is initialized to an integer $\geqslant 3$, and TwoLog_n is initialized to the value of $2^*\lceil \log_2(n) \rceil$
for (k = 3; k <= n; k++)
    A[k] = 0;
for (k = 2; k <= TwoLog_n; k++)
    for (j = k+1; j <= n; j++)
        A[j] = A[j] || (j % k);
for (j = 3; j <= n; j++)
    if (!A[j])
        printf("%d", j);
The set of numbers printed by this program fragment is

$\left\{m \mid m \leq n, (\exists i)\left[m=i!\right]\right\}$

$\left\{m \mid m \leq n, (\exists i) \left[m=i^2\right]\right\}$

$\left\{m \mid m \leq n, \text{m is prime} \right\}$

{ }
The nested loop takes every integer $k$ from $2$ to $2\lceil\log_2 n\rceil$ and, for every $j \leq n$ that is not a multiple of $k$, makes the corresponding entry $A[j]$ equal to $1$. For example, for $k=2$ and $n = 10$: $A[3], A[5], A[7]$, and $A[9]$ are made $1$; similarly for $3, 4, \dots$ up to $2\lceil\log_2 n\rceil$. So an entry $A[p]$ can remain $0$ only if $p$ is a multiple of every $k$ in $2, 3, \dots, 2\lceil\log_2 n\rceil$, i.e., a multiple of their least common multiple, which is greater than $n$. So for no index $p \leq n$ is $A[p]$ equal to $0$, nothing is printed, and the answer is D.
Suppose the line
A[j] = A[j] || (j%k);
is replaced with
A[j] = A[j] || !(j%k);
Now the nested loop takes every integer $k$ from $2$ to $2\lceil\log_2 n\rceil$ and, for every multiple of $k$ up to $n$, makes the corresponding entry of $A$ equal to $1$. For example, for $k=2$ and $n = 10$: $A[4], A[6], A[8]$ and $A[10]$ are made $1$; similarly for $3, 4, \dots$ up to $2\lceil\log_2 n\rceil$. So the entries at composite indices of $A$ become $1$ while the entries at prime indices stay $0$, and we print $j$ whenever $A[j]$ is $0$, i.e., whenever $j$ is prime.
What does the following algorithm approximate? (Assume $m > 1, \epsilon >0$).
x = m; y = 1;
while (x - y > ϵ) {
    x = (x + y) / 2;
    y = m / x;
}
print(x);
 $\log \, m$
 $m^2$
 $m^{\frac{1}{2}}$
 $m^{\frac{1}{3}}$
At convergence, $y = m/x$ equals $x$, so
$x = (x + m/x)/2$
$\Rightarrow 2x^2 = x^2 + m$
$\Rightarrow x = m^{1/2}$
Or we can check by plugging in $2$–$3$ different values of $m$ as well.
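The algorithm is the Babylonian (Newton) iteration for square roots; a C sketch, reading the loop condition as $x - y > \epsilon$ (names are illustrative):

```c
/* Babylonian (Newton) iteration for sqrt(m), m > 1: x starts high, y low,
 * and each step averages x with m/x. At the fixed point x = (x + m/x)/2,
 * i.e. x*x = m, so the loop converges to sqrt(m). */
double bab_sqrt(double m, double eps)
{
    double x = m, y = 1.0;
    while (x - y > eps) {
        x = (x + y) / 2.0;    /* new upper estimate */
        y = m / x;            /* new lower estimate */
    }
    return x;
}
```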
Consider the following Cprogram:
void foo(int n, int sum)
{
    int k = 0, j = 0;
    if (n == 0)
        return;
    k = n % 10;
    j = n / 10;
    sum = sum + k;
    foo(j, sum);
    printf("%d,", k);
}

int main()
{
    int a = 2048, sum = 0;
    foo(a, sum);
    printf("%d\n", sum);
}
What does the above program print?

8, 4, 0, 2, 14

8, 4, 0, 2, 0

2, 0, 4, 8, 14

2, 0, 4, 8, 0
Option D.
foo prints the lowest digit of its argument at each level, but the printf comes after the recursive call. This forces the digits to be printed in reverse order
2, 0, 4, 8
The final value "sum" printed will be 0 as C uses pass by value and hence the modified value inside foo won't be visible inside main.
What is the output printed by the following program?
#include <stdio.h>

int f(int n, int k) {
    if (n == 0) return 0;
    else if (n % 2) return f(n/2, 2*k) + k;
    else return f(n/2, 2*k) - k;
}

int main () {
    printf("%d", f(20, 1));
    return 0;
}
 5
 8
 9
 20
See the following calling sequence, unwinding from the innermost call:
1.) f(0,32) = 0
2.) f(1,16) = f(0,32) + 16 = 16
3.) f(2,8) = f(1,16) - 8 = 8
4.) f(5,4) = f(2,8) + 4 = 12
5.) f(10,2) = f(5,4) - 2 = 10
6.) f(20,1) = f(10,2) - 1 = 9
Hence Ans is Option C
A set $X$ can be represented by an array $x[n]$ as follows:
x$\left [ i \right ]=\begin {cases} 1 & \text{if } i \in X \\ 0 & otherwise \end{cases}$
Consider the following algorithm in which $x$, $y$, and $z$ are Boolean arrays of size $n$:
algorithm zzz(x[], y[], z[]) {
    int i;
    for (i = 0; i < n; ++i)
        z[i] = (x[i] ∧ ~y[i]) ∨ (~x[i] ∧ y[i]);
}
The set $Z$ computed by the algorithm is:
 $(X\cup Y)$
 $(X\cap Y)$
 $(X-Y)\cap (Y-X)$
 $(X-Y)\cup (Y-X)$
Option (D)
In the given algorithm the for loop contains a logical expression
z[i] = (x[i] ∧ ~y[i]) ∨ (~x[i] ∧ y[i]);
The equivalent set representation of the logical expression, if we write $Z$ for $z[i]$, $X$ for $x[i]$, and $Y$ for $y[i]$, is:
$Z = (X \cap Y') \cup (X' \cap Y)$
$Z = (X - Y) \cup (Y - X) \qquad [\because A \cap B' = A - B]$
Consider the following Cfunction in which $a[n]$ and $b[m]$ are two sorted integer arrays and $c[n+m]$ be another integer array,
void xyz(int a[], int b[], int c[]) {
    int i, j, k;
    i = j = k = 0;
    while ((i < n) && (j < m))
        if (a[i] < b[j]) c[k++] = a[i++];
        else c[k++] = b[j++];
}
Which of the following condition(s) hold(s) after the termination of the while loop?
 $j < m, k = n + j - 1$ and $a[n-1] < b[j]$ if $i = n$
 $i < n, k = m + i - 1$ and $b[m-1] \leq a[i]$ if $j = m$
 only (i)
 only (ii)
 either (i) or (ii) but not both
 neither (i) nor (ii)
The while loop adds elements from $a$ and $b$ (whichever is smaller) to $c$, and terminates when either of them is exhausted. So, when the loop terminates, either $i = n$ or $j = m$.
Suppose $i = n$. This means all elements from array $a$ have been added to $c$, so $k$ has been incremented $n$ times. $c$ also contains $j$ elements from array $b$. So, the number of elements in $c$ is $n + j$, and hence $k = n + j$.
Similarly, when $j = m$, $k = m + i$.
Hence, option (D) is correct. (Had $k$ started from $1$ and not $0$ and we used $++k$ inside loop, answer would have been option (C))
Take the array contents $a = \{1\}$, $b = \{2\}$, so $n = 1, m = 1$.
$a[0]$ is smaller, so it is copied to $c$, and the loop terminates in the next iteration because $i < n$ no longer holds. Contents of the variables after the while loop:
$c = \{1\}$, $i = 1, j = 0, k = 1, n = 1, m = 1$
Check condition (i): $i = n$ holds and $j < m$ holds, but $k = n + j - 1$ does not ($1 \neq 0$). So condition (i) is false.
Now take $a = \{2\}$, $b = \{1\}$, so $n = 1, m = 1$.
$b[0]$ is smaller, so it is copied to $c$, and the loop terminates because $j < m$ no longer holds. Contents of the variables after the while loop:
$c = \{1\}$, $i = 0, j = 1, k = 1, n = 1, m = 1$
Check condition (ii): $j = m$ holds and $i < n$ holds, but $k = m + i - 1$ does not ($1 \neq 0$). So condition (ii) is false.
Neither condition necessarily holds. Hence option (D) is the correct answer.
The following function computes the value of $\binom{m}{n}$ correctly for all legal values $m$ and $n$ ($m ≥1, n ≥ 0$ and $m > n$)
int func(int m, int n) {
    if (E) return 1;
    else return (func(m-1, n) + func(m-1, n-1));
}
In the above function, which of the following is the correct expression for E?
 (n == 0) || (m == 1)
 (n == 0) && (m == 1)
 (n == 0) || (m == n)
 (n == 0) && (m == n)
Answer: C
Because $\binom{m}{0}$ = 1 and $\binom{n}{n}$ = 1.
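A runnable version with option C substituted for E makes this easy to check against known binomial values:

```c
#include <assert.h>

/* The question's func with E = (n == 0) || (m == n):
 * C(m, 0) = 1 and C(n, n) = 1 are the base cases of Pascal's rule
 * C(m, n) = C(m-1, n) + C(m-1, n-1). */
int func(int m, int n) {
    if (n == 0 || m == n)
        return 1;
    else
        return func(m - 1, n) + func(m - 1, n - 1);
}
```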
Consider the code fragment written in C below :
void f (int n) {
    if (n <= 1) {
        printf ("%d", n);
    } else {
        f (n/2);
        printf ("%d", n%2);
    }
}
What does f(173) print?
 010110101
 010101101
 10110101
 10101101
Answer: D
The function prints the binary equivalent of the number n.
Binary equivalent of $173$ is $10101101$.
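For checking, the printf can be redirected into a buffer; the buffer-based wrapper below is our own addition, the recursion is the question's:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* The question's f, appending each printed digit to a string instead of
 * stdout so the output can be asserted on. */
static void f(int n, char *out) {
    if (n <= 1) {
        sprintf(out + strlen(out), "%d", n);
    } else {
        f(n / 2, out);                        /* high-order bits first */
        sprintf(out + strlen(out), "%d", n % 2);
    }
}

/* convenience wrapper returning the binary string of n */
static const char *binary_of(int n) {
    static char buf[64];
    buf[0] = '\0';
    f(n, buf);
    return buf;
}
```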
Consider the code fragment written in C below :
void f (int n) {
    if (n <= 1) {
        printf ("%d", n);
    } else {
        f (n/2);
        printf ("%d", n%2);
    }
}
Which of the following implementations will produce the same output for $f(173)$ as the above code?
P1  P2 

void f (int n) {
    if (n/2) {
        f(n/2);
    }
    printf ("%d", n%2);
}

void f (int n) {
    if (n <= 1) {
        printf ("%d", n);
    } else {
        printf ("%d", n%2);
        f (n/2);
    }
}

 Both P1 and P2
 P2 only
 P1 only
 Neither P1 nor P2
P1 and P2 print the bits in opposite orders: P1 prints after the recursive call (most significant bit first), while P2 prints before recursing (least significant bit first).
The given code fragment prints like P1 and not like P2.
Hence the answer is (C).
The code fragment written in C and P1 prints the binary equivalent of the number n.
P2 prints the binary equivalent of the number n in reverse.
Consider the program below:
#include <stdio.h>

int fun(int n, int *f_p) {
    int t, f;
    if (n <= 1) {
        *f_p = 1;
        return 1;
    }
    t = fun(n-1, f_p);
    f = t + *f_p;
    *f_p = t;
    return f;
}

int main() {
    int x = 15;
    printf("%d\n", fun(5, &x));
    return 0;
}
The value printed is
 $6$
 $8$
 $14$
 $15$
Let the address of x be 1000. Since recursion is used, the evaluation completes from the innermost call upward:
f(1,1000) = 1
f(2,1000) = 2
f(3,1000) = 3
f(4,1000) = 5
f(5,1000) = 8
At each level, *f_p holds the previous value, so fun computes Fibonacci numbers; the value printed is $8$.
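The function is runnable as-is; a small wrapper (ours) makes the Fibonacci behaviour easy to assert:

```c
#include <assert.h>

/* The program's fun: t becomes fib(n-1), *f_p carries fib(n-2) up the
 * recursion, so f = t + *f_p = fib(n). */
int fun(int n, int *f_p) {
    int t, f;
    if (n <= 1) {
        *f_p = 1;
        return 1;
    }
    t = fun(n - 1, f_p);
    f = t + *f_p;
    *f_p = t;
    return f;
}

/* wrapper: the initial value of x is irrelevant, as in the program's main */
int fib_via_fun(int n) {
    int x = 0;
    return fun(n, &x);
}
```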
What is the value printed by the following C program?
#include <stdio.h>

int f(int *a, int n) {
    if (n <= 0) return 0;
    else if (*a % 2 == 0) return *a + f(a+1, n-1);
    else return *a - f(a+1, n-1);
}

int main() {
    int a[] = {12, 7, 13, 4, 11, 6};
    printf("%d", f(a, 6));
    return 0;
}
 $9$
 $5$
 $15$
 $19$
Suppose each int takes $4$ bytes and the array is stored at base address $100$; each recursive call advances the pointer by one element.
Even elements are added and odd elements are subtracted from the rest of the evaluation:
12 + (7 - (13 - (4 + (11 - (6 + 0)))))
= 12 + (7 - (13 - (4 + 5)))
= 12 + (7 - (13 - 9))
= 12 + (7 - 4)
= 15
So $15$ is the answer.
Consider the following recursive C function that takes two arguments.
unsigned int foo(unsigned int n, unsigned int r) {
    if (n > 0)
        return ((n % r) + foo(n/r, r));
    else
        return 0;
}
What is the return value of the function $\text{foo}$ when it is called as $\text{foo(345, 10)}$?
 $345$
 $12$
 $5$
 $3$
foo adds the last decimal digit and recurses on the rest:
foo(345,10) = 5 + foo(34,10) = 5 + 4 + foo(3,10) = 5 + 4 + 3 + foo(0,10) = 5 + 4 + 3 = 12
Ans is $12$.
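foo simply sums the digits of $n$ written in base $r$; running it confirms both this call and the foo(513, 2) call of the next question:

```c
#include <assert.h>

/* foo(n, r): sum of the base-r digits of n
 * (n % r is the lowest digit, n / r drops it). */
unsigned int foo(unsigned int n, unsigned int r) {
    if (n > 0)
        return (n % r) + foo(n / r, r);
    else
        return 0;
}
```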
Consider the following recursive C function that takes two arguments.
unsigned int foo(unsigned int n, unsigned int r) {
    if (n > 0)
        return ((n % r) + foo(n/r, r));
    else
        return 0;
}
What is the return value of the function $\text{foo}$ when it is called as $\text{foo(513, 2)}$?
 $9$
 $8$
 $5$
 $2$
foo(513, 2) sums the binary digits of $513 = (1000000001)_2$, so $1+0+0+0+0+0+0+0+0+1 = 2$.
Consider the following function:
int unknown(int n) {
    int i, j, k = 0;
    for (i = n/2; i <= n; i++)
        for (j = 2; j <= n; j = j*2)
            k = k + n/2;
    return (k);
}
The return value of the function is
 $\Theta(n^2)$
 $\Theta(n^2\log n)$
 $\Theta(n^3)$
 $\Theta(n^3\log n)$
The outer loop runs about $n/2$ times and the inner loop $\log_2 n$ times, and in each iteration $k$ is incremented by $n/2$. So, overall, starting from $0$, $k$ accumulates roughly $(n/2) \cdot \log n \cdot (n/2)$. So, the final value of $k$ is $\Theta(n^2 \log n)$.
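The count can be confirmed for a concrete $n$; e.g. for $n = 8$ the outer loop runs $5$ times ($i = 4 \dots 8$), the inner loop $3$ times ($j = 2, 4, 8$), adding $4$ each time:

```c
#include <assert.h>

/* The question's function: k accumulates n/2 once per (i, j) iteration,
 * i.e. about (n/2) * log2(n) * (n/2) in total. */
int unknown(int n) {
    int i, j, k = 0;
    for (i = n / 2; i <= n; i++)
        for (j = 2; j <= n; j = j * 2)
            k = k + n / 2;
    return k;
}
```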
Consider the following C function in which size is the number of elements in the array E:
int MyX(int *E, unsigned int size) {
    int Y = 0;
    int Z;
    int i, j, k;
    for (i = 0; i < size; i++)
        Y = Y + E[i];
    for (i = 0; i < size; i++)
        for (j = i; j < size; j++) {
            Z = 0;
            for (k = i; k <= j; k++)
                Z = Z + E[k];
            if (Z > Y)
                Y = Z;
        }
    return Y;
}
The value returned by the function MyX is the
 maximum possible sum of elements in any subarray of array E.
 maximum element in any subarray of array E.
 sum of the maximum elements in all possible subarrays of array E.
 the sum of all the elements in the array E.
answer is (A) maximum possible sum of elements in any subarray of array E.
int MyX(int *E, unsigned int size) {
    int Y = 0;
    int Z;
    int i, j, k;
    // calculate the sum of the elements of the array E and store it in Y
    for (i = 0; i < size; i++)
        Y = Y + E[i];
    // calculate the sum of every possible subarray E[i..j]
    for (i = 0; i < size; i++)
        for (j = i; j < size; j++) {
            Z = 0;
            for (k = i; k <= j; k++)
                Z = Z + E[k];
            // if this subarray's sum exceeds the current maximum, keep it
            if (Z > Y)
                Y = Z;
        }
    // ultimately returns the maximum possible sum of elements
    // in any subarray of the given array E
    return Y;
}
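As an aside (not part of the question), the same maximum-subarray value can be computed in $O(n)$ with Kadane's algorithm; the sketch below assumes a nonempty array:

```c
#include <assert.h>

/* Kadane's algorithm: cur is the best sum of a subarray ending at index i;
 * either extend the previous subarray (if its sum is positive) or restart
 * at E[i]. best tracks the overall maximum over nonempty subarrays. */
int max_subarray(const int *E, unsigned int size) {
    int best = E[0], cur = E[0];
    for (unsigned int i = 1; i < size; i++) {
        cur = (cur > 0) ? cur + E[i] : E[i];
        if (cur > best)
            best = cur;
    }
    return best;
}
```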
Consider the function func shown below:
int func(int num) {
    int count = 0;
    while (num) {
        count++;
        num >>= 1;
    }
    return (count);
}
The value returned by func($435$) is ________
Ans  $9$
$435 = (110110011)_2$
num >>= 1; shifts num one bit to the right on every execution of the while loop. The loop body executes $9$ times; on the $10$th test, num is zero.
So count is incremented $9$ times.
Note:
Shifting a number "1"
bit position to the right will have the effect of dividing by $2$:
8 >> 1 = 4 // In binary: (00001000) >> 1 = (00000100)
int func(int num)       // takes a decimal number
{
    int count = 0;
    while (num)         // until all bits are zero
    {
        count++;        // count this bit
        num >>= 1;      // shift bits, removing the lower bit
    }
    return (count);     // returns the total number of bits
}
$(435)_{10} = (110110011)_{2}$
So, the given program counts the total number of bits in the binary representation of the number (its bit-length). Hence, the answer is $9$.
http://stackoverflow.com/questions/109023/how-to-count-the-number-of-set-bits-in-a-32-bit-integer
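A runnable copy for checking (note that, unlike the set-bit counters discussed at the link, this func counts all binary digits of num, i.e. its bit-length):

```c
#include <assert.h>

/* Shift right until num is zero, counting iterations: the number of
 * binary digits, i.e. position of the highest set bit plus one. */
int bit_length(int num) {
    int count = 0;
    while (num) {
        count++;
        num >>= 1;
    }
    return count;
}
```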
Let $A$ be the square matrix of size $n \times n$. Consider the following pseudocode. What is the expected output?
C = 100;
for i = 1 to n do
    for j = 1 to n do
    {
        Temp = A[i][j] + C;
        A[i][j] = A[j][i];
        A[j][i] = Temp - C;
    }
for i = 1 to n do
    for j = 1 to n do
        output (A[i][j]);
 The matrix $A$ itself
 Transpose of the matrix $A$
 Adding $100$ to the upper diagonal elements and subtracting $100$ from lower diagonal elements of $A$
 None of the above
A.
For each pair of indices, the body simply swaps $A[i][j]$ and $A[j][i]$ (the $+C$ and $-C$ cancel). Since the loops visit every ordered pair $(i, j)$, each upper-triangular element is first exchanged with its mirror image in the lower triangle, and later exchanged back when the mirrored pair is visited. The final computed matrix is therefore the same as the input matrix $A$.
Take the matrix
1 2
3 4
and trace the iterations:
i=1, j=1: no change in the matrix, since A[i][j] = A[j][i]
i=1, j=2: resultant matrix
1 3
2 4
i=2, j=1: resultant matrix
1 2
3 4
i=2, j=2: no change in the matrix, since A[i][j] = A[j][i]
We get the original matrix back as it is. So, option A is the answer.
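The double-swap argument can also be verified mechanically; a sketch with the size fixed to $2 \times 2$ (names are ours):

```c
#include <assert.h>

#define N 2

/* One run of the pseudocode's loops: each visit swaps A[i][j] and A[j][i]
 * (the +C / -C cancel out), and every ordered pair is visited, so every
 * off-diagonal element is swapped twice and the matrix is restored. */
void swap_pass(int A[N][N]) {
    int C = 100;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            int temp = A[i][j] + C;
            A[i][j] = A[j][i];
            A[j][i] = temp - C;
        }
}
```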
Consider the following C function.
int fun1 (int n) {
    int i, j, k, p, q = 0;
    for (i = 1; i < n; ++i) {
        p = 0;
        for (j = n; j > 1; j = j/2)
            ++p;
        for (k = 1; k < p; k = k*2)
            ++q;
    }
    return q;
}
Which one of the following most closely approximates the return value of the function fun1?
 $n^3$
 $n(\log n)^2$
 $n \log n$
 $n \log(\log n)$
The $i$ loop executes $n$ times. The $j$ loop executes $\log n$ times for each $i$, so the value of $p$ is $\log n$. The $k$ loop executes $\log p$ times, which is $\log \log n$ times, for each iteration of $i$, and in each of these $q$ is incremented. So, over all iterations of $i$, $q$ is incremented $n \log \log n$ times. So, D choice.
Here the for loop of $i$ executes $n$ times and the $j$ loop executes $\log n$ times.
Now, $k$ increments $\log p$ times. What is $p$? It is the number of times $j$ executes, which is $\log n$. So, from here, $k$ executes $\log \log n$ times.
So, the return value of fun1 is $n \log \log n$, while the time complexity of the program itself is $O(n \log n)$ (dominated by the $j$ loop).
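The return value can be spot-checked by running the function; for $n = 16$: $p = 4$ ($j$ takes $16, 8, 4, 2$), the $k$ loop adds $2$ per outer iteration ($k = 1, 2$), and $i$ runs $15$ times, so fun1(16) $= 30$:

```c
#include <assert.h>

/* The question's fun1: p ends up as floor(log2 n), and q grows by about
 * log2(p) = log2(log2 n) on each of the n-1 outer iterations. */
int fun1(int n) {
    int i, j, k, p, q = 0;
    for (i = 1; i < n; ++i) {
        p = 0;
        for (j = n; j > 1; j = j / 2)
            ++p;
        for (k = 1; k < p; k = k * 2)
            ++q;
    }
    return q;
}
```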
Consider the following C function.
int fun(int n) {
    int x = 1, k;
    if (n == 1) return x;
    for (k = 1; k < n; ++k)
        x = x + fun(k) * fun(n-k);
    return x;
}
The return value of $fun(5)$ is ______.
$fun(1) = 1$;
$fun(2) = 1 + fun(1) * fun(1) = 1 + 1 = 2$;
$fun(3) = 1 + fun(1) * fun(2) + fun(2) * fun(1) = 5$;
$fun(4) = 1 + fun(1) * fun(3) + fun(2) * fun(2) + fun(3) * fun(1) = 1 + 5 + 4 + 5 = 15$;
$fun(5) = 1 + fun(1) * fun(4) + fun(2) * fun(3) + fun(3) * fun(2) + fun(4) * fun(1) = 1 + 15 + 10 + 10 + 15 = 51$;
More formal way:
The recurrence relation is
$f(n) = \begin{cases} 1, & n = 1 \\ 1 + \sum_{i=1}^{n-1} f(i) \times f(n-i), & n > 1 \end{cases}$
$f(1) = 1$
$f(2) = 1+ f(1).f(1) \\= 1 + 1.1 = 2$
$f(3) = 1 + f(1). f(2) + f(2).f(1) \\= 1+ 1.2 + 2.1 = 5$
$f(4) = 1 + f(1).f(3) + f(2).f(2) + f(3).f(1) \\= 1+ 1.5 + 2.2 + 5.1 = 15$
$f(5) = 1 + f(1).f(4) + f(2).f(3) + f(3). f(2)+ f(4).f(1) \\= 1 + 1.15 + 2.5 + 5.2 + 15.1 = 51$
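The recurrence values are easy to confirm by running the function itself:

```c
#include <assert.h>

/* The question's fun: f(1) = 1, f(n) = 1 + sum_{k=1}^{n-1} f(k) * f(n-k). */
int fun(int n) {
    int x = 1, k;
    if (n == 1)
        return x;
    for (k = 1; k < n; ++k)
        x = x + fun(k) * fun(n - k);
    return x;
}
```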
Suppose $c = \langle c[0], \dots, c[k1]\rangle$ is an array of length $k$, where all the entries are from the set $\{0, 1\}$. For any positive integers $a \text{ and } n$, consider the following pseudocode.
DOSOMETHING (c, a, n)
$z \leftarrow 1$
for $i \leftarrow 0 \text{ to } k1$
do $z \leftarrow z^2 \text{ mod } n$
if c[i]=1
then $z \leftarrow (z \times a) \text{ mod } n$
return z
If $k=4, c = \langle 1, 0, 1, 1 \rangle , a = 2, \text{ and } n=8$, then the output of DOSOMETHING(c, a, n) is _______.
Now let's trace the procedure step by step.
$z = 1$ (at the start of DOSOMETHING)
$i = 0$: $z = 1^2 \bmod 8 = 1$; since $c[0] = 1$, $z = (1 \times 2) \bmod 8 = 2$
$i = 1$: $z = 2^2 \bmod 8 = 4$; $c[1] = 0$, so $z$ is unchanged
$i = 2$: $z = 4^2 \bmod 8 = 16 \bmod 8 = 0$
Now there is no need to check further: all the operations that update $z$ are multiplications modulo $n$, so once $z$ is $0$ it will never change. The output is $0$.
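DOSOMETHING is left-to-right square-and-multiply modular exponentiation: it computes $a^c \bmod n$, where the bits of the exponent are $c[0] \dots c[k-1]$, most significant first. A C rendering (names are ours):

```c
#include <assert.h>

/* Left-to-right binary exponentiation: square at every bit position, and
 * multiply by a when the bit is 1. Computes a^(c read as binary) mod n. */
unsigned long dosomething(const int *c, int k, unsigned long a, unsigned long n) {
    unsigned long z = 1;
    for (int i = 0; i < k; i++) {
        z = (z * z) % n;
        if (c[i] == 1)
            z = (z * a) % n;
    }
    return z;
}
```

Here $\langle 1, 0, 1, 1 \rangle$ is $11$ in binary, and $2^{11} \bmod 8 = 0$, matching the trace above.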
Ans: The minimum number of comparisons required to find both the minimum and the maximum. The approach is divide and conquer:

T(n) = T(floor(n/2)) + T(ceil(n/2)) + 2
T(2) = 1   // two elements: compare them once; the larger is the max, the smaller the min
T(1) = 0   // one element: it is both the max and the min
If $n$ is a power of $2$, then we can write $T(n)$ as:
T(n) = 2T(n/2) + 2
After solving the above recurrence, we get
$T(n) = \frac{3n}{2} - 2$
Thus, the approach does $\frac{3n}{2} - 2$ comparisons if $n$ is a power of $2$, and more than $\frac{3n}{2} - 2$ comparisons if $n$ is not a power of $2$.
So, here, put $n = 100$ and we get $\frac{3}{2}(100) - 2 = 148$ comparisons.
Another easier way to find it is by using Tournament Method Technique 
1. Finding the smallest element in the array takes $n - 1$ comparisons $= 99$.
2. To find the largest element:
a. After the first round of the tournament, exactly $n/2$ numbers $= 50$ will have lost their round.
b. The biggest loser (the largest number) must be among these $50$ losers. Finding the largest of them takes $n/2 - 1$ comparisons $= 49$.
Total: $99 + 49 = 148$.
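The pairwise scheme can be instrumented to count its own comparisons; this sketch (names ours) assumes $n$ is even and at least $2$:

```c
#include <assert.h>

/* Pairwise min-max: the first pair initializes min and max with one
 * comparison; every further pair costs 3 comparisons (one inside the
 * pair, one against min, one against max) -> 3n/2 - 2 total for even n. */
int minmax_comparisons(const int *a, int n, int *mn, int *mx) {
    int cmp = 1, i;
    if (a[0] < a[1]) { *mn = a[0]; *mx = a[1]; }
    else             { *mn = a[1]; *mx = a[0]; }
    for (i = 2; i + 1 < n; i += 2) {
        int lo = a[i], hi = a[i + 1];
        cmp++;                               /* order the pair */
        if (lo > hi) { lo = a[i + 1]; hi = a[i]; }
        cmp++; if (lo < *mn) *mn = lo;       /* smaller vs current min */
        cmp++; if (hi > *mx) *mx = hi;       /* larger vs current max  */
    }
    return cmp;
}
```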
Given a set of $n$ distinct numbers, we would like to determine both the smallest and the largest number. Which of the following statements is TRUE?
 These two elements can be determined using $O\left(\log^{100}n\right)$ comparisons.
 $O\left(\log^{100}n\right)$ comparisons do not suffice, however these two elements can be determined using $n + O(\log n)$ comparisons.
 $n+O(\log n)$ comparisons do not suffice, however these two elements can be determined using $3\lceil n/2 \rceil$ comparisons.
 $3\lceil n/2 \rceil$ comparisons do not suffice, however these two elements can be determined using $2(n-1)$ comparisons.
 None of the above.
The answer is C.
To be accurate, it needs $\lceil 3n/2 \rceil - 2$ comparisons.
Consider the problem of computing the minimum of a set of $n$ distinct numbers. We choose a permutation uniformly at random (i.e., each of the n! permutations of $\left \langle 1,....,n \right \rangle$ is chosen with probability $(1/n!)$ and we inspect the numbers in the order given by this permutation. We maintain a variable MIN that holds the minimum value seen so far. MIN is initialized to $\infty$ and if we see a value smaller than MIN during our inspection, then MIN is updated. For example, in the inspection given by the following sequence, MIN is updated four times.
$5$ $9$ $4$ $2$ $6$ $8$ $0$ $3$ $1$ $7$
What is the expected number of times MIN is updated?
 $O (1)$
 $H_{n}=\sum ^{n}_{i=1} 1/i$
 $\sqrt{n}$
 $n/2$
 $n$
Let us consider $3$ numbers $\{1,2,3\}$
We will consider the permutation along with min no of times MIN is updated .
Permutation : Number of times MIN is updated
$1,2,3$ : $1$
$1,3,2$ : $1$
$2,1,3$ : $2$
$2,3,1$ : $2$
$3,1,2$ : $2$
$3,2,1$ : $3$
Total number of times MIN is updated: $11$.
Average number of times MIN is updated: $11/6$.
Now, going by the options: $H_3 = 1 + 1/2 + 1/3 = 11/6$.
So $H_n$ is the answer, which is option B.
Given a set of $n$ distinct numbers, we would like to determine the smallest three numbers in this set using comparisons. Which of the following statements is TRUE?
 These three elements can be determined using $O\left(\log^{2}n\right)$ comparisons.
 $O\left(\log^{2}n\right)$ comparisons do not suffice, however these three elements can be determined using $n+O(1)$ comparisons.
 $n + O(1)$ comparisons do not suffice, however these three elements can be determined using $n +O(\log n)$ comparisons.
 $n+O(\log n)$ comparisons do not suffice, however these three elements can be determined using $O(n)$ comparisons.
 None of the above.
Option (C) is correct. The reason is as follows.
At the first level we are given $n$ elements, out of which we have to find the smallest $3$ numbers.
We compare the elements two at a time, as in a knockout tournament, and carry the $n/2$ winners (the smaller of each pair) to the second level. This costs $n/2$ comparisons.
Note: the minimum element is among these $n/2$ winners.
Similarly, the next level costs $n/4$ comparisons and leaves $n/4$ elements, and so on.
Total comparisons so far: $n/2 + n/4 + n/8 + \dots + 4 + 2 + 1 = 2^{\log n} - 1 = n - 1$ (use the G.P. sum).
Now we have to get the smallest $3$.
The 1st smallest is already at the last level: $0$ further comparisons for it.
The 2nd and 3rd smallest can then be found in $O(\log n)$ comparisons each:
The minimum element descended down some path from top to bottom. Every element that was compared directly against this first minimum along that path is a candidate for the second minimum, and there are only $O(\log n)$ of them.
So, $O(\log n)$ comparisons find the second smallest.
Similarly for the 3rd minimum: every element that lost directly to the 1st or the 2nd minimum is a candidate, again giving $O(\log n)$ comparisons.
All elements are distinct.
1. Using a heap:
Build a min-heap = $O(n)$ comparisons.
Delete three elements from the top = $3\log n$ comparisons.
Total: $n + 3\log n = n + O(\log n)$ comparisons.
2. Using selection sort:
$3$ passes to get the three minimum elements.
Comparisons $= (n-1) + (n-2) + (n-3) = 3n - 6$, plus $1 + 1 + 1 = 3$ swaps.
3. A single scan keeping the three smallest seen so far (Min1 ≤ Min2 ≤ Min3, initialized from the first three elements):

for (i = 3; i < n; i++) {
    x = a[i];
    if (x < Min1)      { Min3 = Min2; Min2 = Min1; Min1 = x; }
    else if (x < Min2) { Min3 = Min2; Min2 = x; }
    else if (x < Min3) { Min3 = x; }
    // else skip x
}

Worst case $3n$ comparisons.
4. Using divide and conquer (the tournament method):
minimum element $= n - 1$ comparisons
second minimum $= \log_2 n - 1$ comparisons
third minimum $= \log\log_2 n - 1$ comparisons
total $= n + \log_2 n + \log\log_2 n - 3 = n + O(\log_2 n)$
So the three elements can be retrieved with $n + O(\log n)$ comparisons, but $n + O(1)$ comparisons cannot suffice: the three smallest elements can sit in any of the $n(n-1)(n-2)$ position triples, and we have to scan the whole list at least once while comparing against all three.
Option C.
Ram and Shyam have been asked to show that a certain problem $\Pi$ is NP-complete. Ram shows a polynomial time reduction from the $3$-SAT problem to $\Pi$, and Shyam shows a polynomial time reduction from $\Pi$ to $3$-SAT. Which of the following can be inferred from these reductions?

$\Pi$ is NP-hard but not NP-complete

$\Pi$ is in NP, but is not NP-complete

$\Pi$ is NP-complete

$\Pi$ is neither NP-hard, nor in NP
Ram's reduction shows that $\Pi$ is NP-hard, because it must be at least as hard as $3$-SAT, which is a known NP-complete problem. From this alone, $\Pi$ need not be NP-complete.
Now, Shyam's reduction shows that the $3$-SAT problem is at least as hard as $\Pi$, or equivalently, that $\Pi$ is no harder than $3$-SAT. Since $3$-SAT is in NP, $\Pi$ must also be in NP.
Thus, $\Pi$ is NP-hard and also in NP $\implies \Pi$ is NP-complete.
Let S be an NP-complete problem and Q and R be two other problems not known to be in NP. Q is polynomial-time reducible to S and S is polynomial-time reducible to R. Which one of the following statements is true?
 R is NP-complete
 R is NP-hard
 Q is NP-complete
 Q is NP-hard
Answer B.
Since S is NP-complete, it is both NP-hard and in NP.
If an NP-hard problem is polynomial-time reducible to another problem, then that problem is also NP-hard, which means every NP problem can be reduced to it in polynomial time (going through S). Therefore R is NP-hard.
Now Q is reduced to S in polynomial time. This only tells us that Q is no harder than S: Q could be in NP (every NP problem reduces to S), it could be in P (since P is a subset of NP), or it could even be NP-complete (every NP-complete problem reduces to every other NP-complete problem in polynomial time).
So nothing can be concluded about Q.
The subset-sum problem is defined as follows: Given a set $S$ of $n$ positive integers and a positive integer $W$, determine whether there is a subset of $S$ whose elements sum to $W$. An algorithm $Q$ solves this problem in $O(nW)$ time. Which of the following statements is false?
 $Q$ solves the subset-sum problem in polynomial time when the input is encoded in unary
 $Q$ solves the subset-sum problem in polynomial time when the input is encoded in binary
 The subset-sum problem belongs to the class NP
 The subset-sum problem is NP-hard
 Input is encoded in unary. So, length of input is equal to the value of the input. So, complexity $= O(nW)$ where both $n$ and $W$ are linear multiples of the length of the inputs. So, the complexity is polynomial in terms of the input length. So, (A) is true.
 Input is encoded in binary. So, the length of the encoding of $W$ is $\lg W$ (for $W = 1024$, the input length is just $10$ bits). So, now $W$ is exponential in the length of its encoding, and $O(nW)$ also becomes exponential in the input length. So, $Q$ is not a polynomial-time algorithm. So, (B) is false.
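The $O(nW)$ algorithm referred to is the standard dynamic program over achievable sums; a sketch (ours, using a C99 variable-length array, so only suitable for small $W$):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* dp[w] = true iff some subset of the items processed so far sums to w.
 * Iterating w downward ensures each item is used at most once.
 * n items, W+1 sums -> O(nW) time: the pseudo-polynomial bound. */
bool subset_sum(const int *s, int n, int W) {
    bool dp[W + 1];
    memset(dp, 0, sizeof dp);
    dp[0] = true;
    for (int i = 0; i < n; i++)
        for (int w = W; w >= s[i]; w--)
            if (dp[w - s[i]])
                dp[w] = true;
    return dp[W];
}
```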
Let $X$ be the number of $11$ gm coins picked and $Y$ the number of $10$ gm coins (one coin from bag $1$, two from bag $2$, $\dots$, $2^{i-1}$ coins from bag $i$, i.e., $31$ coins in total). So, according to the question:
$11X + 10Y = 323$ ...(1)
$X + Y = 31$ .....(2)
Solving (1) and (2), we get $X = 13$ and $Y = 18$. So $13$ of the picked coins weigh $11$ gm, and the one and only possible combination for $13$ is: $1$ coin from bag $1$ + $4$ coins from bag $3$ + $8$ coins from bag $4$.
So the product of the labels of the bags is $1 \times 3 \times 4 = 12$.
Alternatively: if all of the $1, 2, 4, 8, 16$ picked coins weighed $10$ gm, the total would come to
$10 + 20 + 40 + 80 + 160 = 310$, but the total weight is $323$, which is $13$ more. Splitting $13$ as $1 + 4 + 8$ gives
$11 + 20 + 44 + 88 + 160 = 323$, meaning the 1st, 3rd and 4th bags have the $11$ gm coins. So the product of the labels is $1 \times 3 \times 4 = 12$.
Consider the following game. There is a list of distinct numbers. At any round, a player arbitrarily chooses two numbers $a, b$ from the list and generates a new number $c$ by subtracting the smaller number from the larger one. The numbers $a$ and $b$ are put back in the list. If the number $c$ is nonzero and is not yet in the list, $c$ is added to the list. The player is allowed to play as many rounds as the player wants. The score of a player at the end is the size of the final list.
Suppose at the beginning of the game the list contains the following numbers: $48, 99, 120, 165$ and $273$. What is the score of the best player for this game?
 $40$
 $16$
 $33$
 $91$
 $123$
Option D is correct.
Here the list is $(48, 99, 120, 165, 273)$.
Taking the difference of two numbers preserves divisibility by their gcd, and repeated differences can eventually generate every multiple of the gcd up to the maximum. For example, $\gcd(48, 99) = 3$, and the difference $99 - 48 = 51$ is also divisible by $3$; from $48$ and $99$ alone, all the numbers $(3, 6, \dots, 99)$ get added, a total of $99/3 = 33$ numbers.
Similarly, $\gcd(48, 120) = 24$, so the multiples of $24$ like $(24, 48, \dots, 120)$ get added: $120/24 = 5$ numbers in total.
Similarly, $\gcd(48, 165) = 3$, so the multiples of $3$ up to $165$ get added: $165/3 = 55$ in total.
In the end, the gcd of the whole list is $3$ (note $\gcd(48, 273) = 3$) and the maximum is $273$, so the final list consists of all multiples of $3$ up to $273$ (which covers all the numbers above).
So the total count of numbers in the list is $273/3 = 91$.
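The closed-form answer (maximum element divided by the gcd of the list) can be checked directly:

```c
#include <assert.h>

int gcd(int a, int b) { return b == 0 ? a : gcd(b, a % b); }

/* The final list generated by repeated differences is exactly the set of
 * multiples of g = gcd(all numbers) up to the maximum element, so the
 * best score is max / g. */
int best_score(const int *a, int n) {
    int g = a[0], mx = a[0];
    for (int i = 1; i < n; i++) {
        g = gcd(g, a[i]);
        if (a[i] > mx)
            mx = a[i];
    }
    return mx / g;
}
```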
Solve the recurrence equations:
 $T(n) = T(n  1)+ n$
 $T(1) = 1$
$T(n) = T(n-1) + n$
$= T(n-2) + (n-1) + n$
$= T(n-3) + (n-2) + (n-1) + n$
$= T(n-3) + 3n - (1+2)$
$\vdots$
$= T(n-k) + kn - (1+2+\dots+(k-1))$
Setting $n - k = 1$, i.e., $k = n-1$:
$T(n) = T(1) + (n-1)n - (1+2+3+\dots+(n-2)) = 1 + (n^2 - n) - \frac{(n-1)(n-2)}{2} = \frac{n(n+1)}{2}$
$\approx O(n^2)$
$T(n)=\frac{n+1}{n} T(n-1)+1$, for $n \geq 2$ and $T(1) = 1$
What is the asymptotic behaviour of $T(n)$ as a function of $n$ ?
$T(n) = \frac{n+1}{n}T(n-1) + 1 \qquad (1)$
$T(n-1) = \frac{n}{n-1}T(n-2) + 1 \qquad (2)$
$T(n-2) = \frac{n-1}{n-2}T(n-3) + 1 \qquad (3)$
Substituting the value of $T(n-1)$ from eqn (2) in eqn (1):
$T(n) = \frac{n+1}{n} \cdot \frac{n}{n-1}T(n-2) + \frac{n+1}{n} + 1$
$T(n) = \frac{n+1}{n-1}T(n-2) + \frac{n+1}{n} + 1$
Now substituting the value of $T(n-2)$ in the above eqn:
$T(n) = \frac{n+1}{n-1} \cdot \frac{n-1}{n-2}T(n-3) + \frac{n+1}{n-1} + \frac{n+1}{n} + 1$
$T(n) = \frac{n+1}{n-2}T(n-3) + \frac{n+1}{n-1} + \frac{n+1}{n} + 1$
$\vdots$
and so on:
$T(n) = \frac{n+1}{n-k+1}T(n-k) + \frac{n+1}{n} + \frac{n+1}{n-1} + \dots + \frac{n+1}{n-k+2} + 1$
$T(n) = \frac{n+1}{n-k+1}T(n-k) + (n+1)\left( \frac{1}{n} + \frac{1}{n-1} + \dots + \frac{1}{n-k+2}\right) + 1$
Now let $n - k = 1$, so $k = n-1$; substitute this value of $k$ in the above eqn:
$T(n) = \frac{n+1}{2}T(1) + (n+1)\left( \frac{1}{n} + \frac{1}{n-1} + \dots + \frac{1}{3}\right) + 1$
$T(n) = \frac{n+1}{2} + (n+1)\left(H_n - \frac{1}{2} - 1\right) + 1$
$T(n) = \frac{n+1}{2} + (n+1)H_n - \frac{n+1}{2} - (n+1) + 1$
$\mathbf{T(n) = (n+1)H_n - n}$
Now $H_n \approx \log n+γ$
where $γ$ is the Euler–Mascheroni constant.
$\mathbf{T(n) = O(n \log n)}$
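The closed form $(n+1)H_n - n$ can be checked numerically against the recurrence (a floating-point sketch; names are ours):

```c
#include <assert.h>
#include <math.h>

/* Iterate T(n) = ((n+1)/n) T(n-1) + 1 from T(1) = 1. */
double T_iter(int n) {
    double t = 1.0;
    for (int i = 2; i <= n; i++)
        t = ((double)(i + 1) / i) * t + 1.0;
    return t;
}

/* Closed form (n+1) H_n - n, with H_n computed directly. */
double T_closed(int n) {
    double h = 0.0;
    for (int i = 1; i <= n; i++)
        h += 1.0 / i;
    return (n + 1) * h - n;
}
```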
Consider the function $F(n)$ for which the pseudocode is given below :
Function F(n)
begin
    F1 ← 1
    if (n = 1) then F ← 3
    else
        For i = 1 to n do
        begin
            C ← 0
            For j = 1 to n − 1 do
            begin
                C ← C + 1
            end
            F1 = F1 * C
        end
        F = F1
end
[$n$ is a positive integer greater than zero]
Solve the recurrence relation for a closed form solution of $F(n)$.
Function F(n)
begin
    F1 ← 1
    if (n = 1) then F ← 3       // if (n == 1) then return 3
    else
        For i = 1 to n do        // outer loop runs n times
        begin
            C ← 0
            For j = 1 to n − 1 do   // inner loop runs n−1 times
            begin
                C ← C + 1
            end                  // so C = n−1
            F1 = F1 * C          // n−1 is multiplied in n times,
                                 // so the answer is (n−1)^n for n ≥ 2
        end
        F = F1
end
Consider the recursive algorithm given below:
procedure bubblesort (n);
var i, j : index;
    temp : item;
begin
    for i := 1 to n-1 do
        if A[i] > A[i+1] then
        begin
            temp := A[i];
            A[i] := A[i+1];
            A[i+1] := temp;
        end;
    bubblesort (n-1)
end
Let $a_n$ be the number of times the ‘if…then…’ statement gets executed when the algorithm is run with value $n$. Set up the recurrence relation by defining $a_n$ in terms of $a_{n1}$. Solve for $a_n$.
$a_n = a_{n-1} + (n-1)$
$a_n = a_{n-2} + (n-2) + (n-1)$
$a_n = a_{n-3} + (n-3) + (n-2) + (n-1)$
$\vdots$
$a_n = a_{n-n} + (n-n) + (n-(n-1)) + \dots + (n-3) + (n-2) + (n-1)$
$a_n = 0 + 1 + 2 + \dots + (n-3) + (n-2) + (n-1)$
which gives $a_n = \frac{n(n-1)}{2}$
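The count can be confirmed by instrumenting a C version of the procedure (0-based indexing here; the explicit $n \leq 1$ stopping guard is our addition, since the pseudocode leaves termination implicit):

```c
#include <assert.h>

static int if_count;   /* number of times the 'if...then' is executed */

void bubblesort(int *A, int n) {
    if (n <= 1)
        return;
    for (int i = 0; i < n - 1; i++) {
        if_count++;                    /* the counted comparison */
        if (A[i] > A[i + 1]) {
            int t = A[i];
            A[i] = A[i + 1];
            A[i + 1] = t;
        }
    }
    bubblesort(A, n - 1);              /* a_n = a_{n-1} + (n-1) */
}
```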
The recurrence relation that arises in relation with the complexity of binary search is:

$T(n) = 2T\left(\frac{n}{2}\right)+k, \text{ k is a constant }$

$T(n) = T\left(\frac{n}{2}\right)+k, \text{ k is a constant }$

$T(n) = T\left(\frac{n}{2}\right)+\log n$

$T(n) = T\left(\frac{n}{2}\right)+n$
It is B. Binary search continues the search on only one half of the list, leading to $T(n/2)$, plus constant time for comparing against the mid element.
The recurrence relation
 $T(1) = 2$
 $T(n) = 3T (\frac{n}{4}) +n$
has the solution $T(n)$ equal to

$O(n)$

$O (\log n)$

$O\left(n^\frac{3}{4}\right)$

None of the above
Answer: A
According to Master theorem,
$T(n) = aT(\frac{n}{b}) + f(n)$ can be expressed as:
$T(n) = [n^{\log_ba}][T(1) + u(n)]$
where $u(n) = \Theta(h(n))$, with $h(n) = \frac{f(n)}{n^{\log_ba}} = \frac{n}{n^{\log_43}} = n^{1-\log_43}$, which is of the form $h(n) = n^r$ with $r > 0$.
So, $T(n) = [n^{\log_ba}][T(1) + u(n)] = [n^{\log_43}]\left[T(1) + \Theta\left(n^{1-\log_43}\right)\right] = \Theta(n)$.
Let $T(n)$ be the function defined by $T(1) =1, \: T(n) = 2T (\lfloor \frac{n}{2} \rfloor ) + \sqrt{n}$ for $n \geq 2$.
Which of the following statements is true?

$T(n) = O(\sqrt{n})$

$T(n)=O(n)$

$T(n) = O (\log n)$

None of the above
answer B
Using the master method (case 1), with $a = 2$, $b = 2$:
$f(n) = \sqrt n = O\left(n^{1/2}\right)$, and $n^{1/2} = O\left(n^{\log_b a - \epsilon}\right) = O\left(n^{\log_2 2 - \epsilon}\right)$ for some $\epsilon > 0$.
So $T(n) = \Theta\left(n^{\log_2 2}\right) = \Theta(n) = O(n)$.
$x_n = 2x_{n-1} - 1, \; n > 1$
$x_1 = 2$
$x_n = 2(2x_{n-2} - 1) - 1$
$= 2^2 x_{n-2} - 2 - 1$
$= 2^2(2x_{n-3} - 1) - 2 - 1$
$= 2^3 x_{n-3} - 2^2 - 2 - 1$
$\dots$
$= 2^{n-1} x_{n-(n-1)} - \left(2^{n-2} + 2^{n-3} + \dots + 2^2 + 2 + 1\right)$
$= 2^{n-1} \times 2 - \frac{2^{n-1} - 1}{2-1} \ \ \ \because x_1 = 2, \; S_n = \frac{a\left(r^n - 1\right)}{r-1}$
$= 2^n - \left(2^{n-1} - 1\right)$
$= 2^{n-1} + 1$
The solution to the recurrence equation $T(2^k) = 3T(2^{k1})+1, T(1) =1$ is
 $2^k$
 $\frac{3^{k+1}-1}{2}$
 $3^{\log_2 k}$
 $2^{\log_3 k}$
Let $x = 2^k$
$T(x) = 3T\left(\frac{x}{2}\right) + 1$
We can apply Master's theorem case 1 with $a = 3$ and $b = 2$, as $f(x) = 1 = O\left(x^{\log_2 3 - \epsilon} \right), \epsilon > 0 $
So, $T(x) = \Theta \left(x^{\log_2 3}\right) \\= \Theta \left( {2^k}^{\log_2 3} \right)\\= \Theta \left({2^{\log_2 3}}^k \right)\\ = \Theta \left(3^k \right)$
So, only option possible is B.
We can also directly solve as follows:
$T(x) = 3T\left(\frac{x}{2}\right) + 1\\= 9T \left (\frac{x}{4}\right) + 3 + 1 \\ \dots \\= 3^{\log_2 x}\, T(1) + \left( 1 + 3 + 9 + \dots + 3^{\log_2 x - 1}\right)\\ \left(\text{recursion depth is }\log_2 x \text{ and } x = 2^k\right) \\= 3^{k} + \frac{3^{\log_2 x} - 1}{3-1} \\ \left(\text{Sum to n terms of GP with } a = 1 \text{ and } r = 3 \right)\\=3^k + \frac{3^k - 1}{2} \\=\frac{3 \cdot 3^k - 1}{2} \\=\frac{3^{k+1} - 1}{2} $
OR
$T\left(2^k\right) = 3T\left(2^{k-1}\right) + 1 \\= 3^2T\left(2^{k-2}\right) + 3 + 1 \\ \dots \\= 3^k T\left(2^{k-k}\right) + \left( 1 + 3 + 9 + \dots + 3^{k-1}\right) \\ \left(\text{recursion depth is }k\right)\\= 3^k + \frac{3^{k} - 1} {3-1}\\\left(\text{Sum to n terms of GP with } a = 1 \text{ and } r = 3 \right) \\=3^k + \frac{3^k - 1}{2}\\=\frac{3 \cdot 3^k - 1}{2} \\=\frac{3^{k+1} - 1}{2} $
The running time of the following algorithm
Procedure $A(n)$
If $n \leqslant 2$ return ($1$) else return $(A( \lceil \sqrt{n} \rceil))$;
is best described by
 $O(n)$
 $O(\log n)$
 $O(\log \log n)$
 $O(1)$
$T(n) = T\left(\lceil \sqrt n \rceil \right) + 1$
$T(2) = 1$
$T\left(2^2\right) = T(2) + 1 = 2$
$T\left(2^{2^2}\right) = T(4) + 1 = 3$
$T\left(2^{2^3}\right) = T(16) + 1 = 4$
So, $T(n) = \lg\lg n + 1 = O\left(\log \log n\right)$
Alternatively, substitute $\sqrt{n} = m$ and then proceed.
Consider the following recurrence relation
$T(1)=1$
$T(n+1) = T(n)+\lfloor \sqrt{n+1} \rfloor$ for all $n \geq 1$
The value of $T(m^2)$ for $m \geq 1$ is
 $\frac{m}{6}\left(21m-39\right)+4$
 $\frac{m}{6}\left(4m^2-3m+5\right)$
 $\frac{m}{2}\left(3m^{2.5}-11m+20\right)-5$
 $\frac{m}{6}\left(5m^3-34m^2+137m-104\right)+\frac{5}{6}$
$T(m^2) = T(m^2-1) + \left\lfloor\sqrt{m^2} \right\rfloor$
$= T(m^2 - 2) + \left\lfloor\sqrt{m^2 - 1} \right\rfloor +\left\lfloor\sqrt{m^2} \right\rfloor$
$= T(m^2 - 3) + \left\lfloor\sqrt{m^2 - 2} \right\rfloor + \left\lfloor\sqrt{m^2 - 1} \right\rfloor +\left\lfloor\sqrt{m^2} \right\rfloor$
$\vdots$
$= T(1) + \left\lfloor\sqrt{2} \right\rfloor + \left\lfloor\sqrt{3} \right\rfloor + \ldots + \left\lfloor\sqrt{m^2} \right\rfloor$
$= 3 \times 1 + 5 \times 2 + \ldots + \left(2m - 1\right) \times (m-1) + m $
(We are taking the floor of the square root, and between successive perfect squares the counts form the series $3, 5, 7, \dots$: $3$ numbers in $[1, 3]$ have floor $1$, $5$ numbers in $[4, 8]$ have floor $2$, and so on.)
We can try out options here or solve as shown at end:
Put $m = 5$, $T(25) = 3 \times 1 + 5 \times 2 + 7 \times 3 + 9 \times 4 + 5 = 75$
 59
 75
 noninteger
 297.5
So, answer must be B.
$T(m^2) = 3 \times 1 + 5 \times 2 + \dots + \left(2m - 1\right) \times (m-1) + m$
$= m + \sum_{i=1}^{m-1} (2i+1) \cdot i $
$= m + \sum_{i=1}^{m-1} \left(2i^2 + i\right)$
$= m + \frac{(m-1)\,m\,(2m-1)}{3} + \frac{(m-1)m}{2}$
$= \frac{m}{6} \left(6 + 4m^2 - 2m - 4m + 2 + 3m - 3\right)$
$= \frac{m}{6} \left(4m^2 - 3m + 5\right) $
[Sum of the first $n$ natural numbers $=\frac{n. (n+1)}{2}.$
Sum of the squares of first $n$ natural numbers $ = \frac{n. (n+1). (2n+1)}{6}.$]
We can write this recurrence as $T(n) = T(n-1) + \sqrt{n}$.
Now we can apply the master theorem for subtract and conquer:
here $a = 1$, $b = 1$, $f(n) = O(\sqrt{n})$.
By the corresponding case of subtract and conquer, $T(n) = O(n \cdot n^{0.5}) = O(n^{1.5})$.
If we take $n = m^2$, then $T(m^2) = O(m^3)$, which is a tight upper bound matching option B.
See the following notes for the subtract-and-conquer master theorem:
https://www.eecis.udel.edu/~saunders/notes/recurrencerelations.pdf
The time complexity of the following C function is (assume $n > 0$)
int recursive (int n) {
    if (n == 1)
        return (1);
    else
        return (recursive (n-1) + recursive (n-1));
}
 $O(n)$
 $O(n \log n)$
 $O(n^2)$
 $O(2^n)$
Option D
int recursive (int n) {
    if (n == 1)       // takes constant time, say 'a' time
        return (1);   // takes constant time, say 'a' time
    else
        return (recursive(n-1) + recursive(n-1)); // takes T(n-1) + T(n-1) time
}
$T(n) = 2T(n-1) + a$ is the recurrence equation obtained from the code.
Solving the Recurrence Equation By Back Substitution Method
$T(n) = 2T(n-1) + a$ (equation 1)
$T(n-1) = 2T(n-2) + a$
$T(n-2) = 2T(n-3) + a$
We can rewrite equation 1 as
$\begin{align*}T(n) &= 2 [2T(n-2)+a] + a = 4T(n-2) + 3a \\ &= 4[2T(n-3)+a] + 3a = 8T(n-3) + 7a \\&\dots \\&= 2^k T(n-k) + (2^{k}-1)a \end{align*}$  (equation 2)
On substituting the limiting condition,
$T(1) = 1$ implies $n-k = 1 \implies k = n-1$.
Therefore equation 2 becomes
$T(n) = 2^{n-1} + (2^{n-1}-1)a = O(2^n)$
$$T(n)=2T(n-1) + 1$$
$T(1) = 1$
$T(2) = 2 \cdot 1 + 1 = 3$
$T(3) = 2 \cdot 3 + 1 = 7$
$T(4) = 2 \cdot 7 + 1 = 15$
$\dots$
We can see a pattern forming: $T(n) = 2^n - 1$, so it is (D), $O(2^n)$.
The recurrence equation
$ T(1) = 1$
$T(n) = 2T(n-1) + n, n \geq 2$
evaluates to
 $2^{n+1} - n - 2$
 $2^n - n$
 $2^{n+1} - 2n - 2$
 $2^n + n$
$T(n) = n + 2(n-1) + 2^2 (n-2) + \dots + 2^{n-1}\left(n - (n-1)\right)$
$= n\left(1 + 2 + \dots + 2^{n-1}\right) - \left(1\cdot 2 + 2\cdot 2^{2} + 3\cdot 2^{3} + \dots + (n-1)\cdot 2^{n-1}\right)$
$= n(2^n - 1) - (n\cdot 2^n - 2^{n+1} + 2)$
$= 2^{n+1} - n - 2$
T(2) = 4
T(3) = 11
T(4) = 26
T(5) = 57
T(6) = 120
T(7) = 247
So, $$T(n) = 2^{n+1} - n - 2$$
Consider a list of recursive algorithms and a list of recurrence relations as shown below. Each recurrence relation corresponds to exactly one algorithm and is used to derive the time complexity of the algorithm.
Recursive Algorithm  Recurrence Relation  

P.  Binary search  I.  $T(n) = T(n-k) + T(k) + cn$ 
Q.  Merge sort  II.  $T(n) = 2T(n-1) + 1$ 
R.  Quick sort  III.  $T(n) = 2T(n/2) + cn$ 
S.  Tower of Hanoi  IV.  $T(n) = T(n/2) + 1$ 
Which of the following is the correct match between the algorithms and their recurrence relations?
 P-II, Q-III, R-IV, S-I
 P-IV, Q-III, R-I, S-II
 P-III, Q-II, R-IV, S-I
 P-IV, Q-II, R-I, S-III
Answer is B: binary search halves the input with constant extra work (IV), merge sort makes two half-size calls plus a linear merge (III), quicksort's partition costs $cn$ and splits the array into sizes $k$ and $n-k$ (I), and Tower of Hanoi makes two calls on $n-1$ discs plus one move (II).
Let $T(n)$ be a function defined by the recurrence
$T(n) = 2T(n/2) + \sqrt n$ for $n \geq 2$ and
$T(1) = 1$
Which of the following statements is TRUE?
 $T(n) = \Theta(\log n)$
 $T(n) = \Theta(\sqrt n)$
 $T(n) = \Theta(n)$
 $T(n) = \Theta(n \log n)$
Option C it can be done by Master theorem.
$n^{\log_b a} = n^{\log_2 2} = n$.
$f(n) = \sqrt n = n^{\frac{1}{2}}$.
So, $f(n) = O\left(n^{\log_b a - \epsilon}\right)$ is true for any real $\epsilon$ with $0 < \epsilon < \frac{1}{2}$. Hence Master theorem Case 1 is satisfied,
$$T(n) = \Theta\left(n^{\log_b a}\right) = \Theta (n).$$
Consider the following recurrence:
$ T(n)=2T\left ( \sqrt{n}\right )+1,$ $T(1)=1$
Which one of the following is true?
 $ T(n)=\Theta (\log\log n)$
 $ T(n)=\Theta (\log n)$
 $ T(n)=\Theta (\sqrt{n})$
 $ T(n)=\Theta (n)$
$T(n) = 2T\left(n^{\frac{1}{2}}\right)+1$
$= 2\left(2T\left(n^{\frac{1}{2^2}}\right) + 1\right) + 1 = 2^2\, T\left(n^{\frac{1}{2^2}}\right) + 3$
$= 2^3\, T\left(n^{\frac{1}{2^3}}\right) + 7 \\ \cdots$
$= 2^k\, T\left(n^{\frac{1}{2^k}}\right) + 2^k - 1$
$= \lg n \cdot T(2) + \lg n - 1 \text{ for } k = \lg \lg n \text{ (proved below)}$
$= \Theta(\lg n)$
$n^{\frac{1}{2^k}} = 2 \\ \text{(Putting 2 so that we can take log.}\\\text{One more step of recurrence can't change the complexity.)} \\\implies \frac{1}{{2^k}} \lg n = 1 \text{(Taking log both sides)}\\\implies \lg n = 2^k \\\implies k = \lg \lg n$
So, answer is B, $T(n) = \Theta(\log n)$
Let $x_n$ denote the number of binary strings of length $n$ that contain no consecutive 0s.
Which of the following recurrences does $x_n$ satisfy?
 $x_n = 2x_{n-1}$
 $x_n = x_{\lfloor n/2 \rfloor} + 1$
 $x_n = x_{\lfloor n/2 \rfloor} + n$
 $x_n = x_{n-1} + x_{n-2}$
$01 \ 10 \ 11 - 3$
$010 \ 011 \ 101 \ 110 \ 111 - 5$
$0101 \ 0110 \ 0111 \ 1010 \ 1011 \ 1101 \ 1110 \ 1111 - 8$
So, $x_n = x_{n-1} + x_{n-2}$. (A valid string of length $n$ either ends in $1$, preceded by any valid string of length $n-1$, or ends in $10$, preceded by any valid string of length $n-2$; a $0$ cannot follow a $0$.)
$x_{5}= 8+5 = 13$
Let $x_n$ denote the number of binary strings of length $n$ that contain no consecutive 0s.
The value of $x_5$ is
 5
 7
 8
 16
The number of binary strings of length $n$ that contain no consecutive $0$s satisfies the recurrence relation
$T(n) = T(n-1) + T(n-2)$ for $n > 2$,
with base conditions $T(1) = 2$ and $T(2) = 3$.
$T(1) =2$ There will be $ 2$ strings of length $1$, i.e $0\ \& \ 1$
$T(2) =3 $ There will be $3$ strings of length $2$, i.e. $01,10,11$
$T(3) = T(1) + T(2) = 2+3 = 5$
$T(4) = T(3) + T(2) = 5 + 3 = 8$
$T(5) = T(4) +T(3) = 8 + 5 = 13$
Hence, answer is 13, but no option matches!
When $n = 2^{2k}$ for some $k \geqslant 0$, the recurrence relation
$T(n) = \sqrt{2}\, T(n/2) + \sqrt{n}$, $T(1) = 1$
evaluates to :
 $\sqrt{n} (\log n + 1)$
 $\sqrt{n} \log n$
 $\sqrt{n} \log \sqrt{n}$
 $n \log \sqrt{n}$
$T(n) = \sqrt{2}\, T\left(\frac{n}{2}\right) + \sqrt{n}$
$= {\sqrt 2}^2\, T\left(\frac{n}{2^2} \right) + \sqrt{2}\,\sqrt{\frac{n}{2}} + \sqrt{n}$
$\dots$
$= {\sqrt 2}^{\lg n}\, T(1) + \lg n \cdot \sqrt{n}$ (each of the $\lg n$ added terms simplifies to $\sqrt{n}$, and ${\sqrt 2}^{\lg n} = \sqrt{n}$)
$= \sqrt{n} + \lg n \cdot \sqrt{n}$
$= \sqrt{n} \left( \lg n + 1 \right)$
If we use Master theorem we get option B. But one must know that Master theorem is used to find the asymptotic bound and not an EXACT value. And in the question here it explicitly says "evaluates to".
T(1) = 1 is given. Put n = 1 in all options. Only option A gives T(1) = 1.
The running time of an algorithm is represented by the following recurrence relation:
$T(n) = \begin{cases}
n & n \leq 3 \\
T(\frac{n}{3})+cn & \text{ otherwise }
\end{cases}$
Which one of the following represents the time complexity of the algorithm?

$\Theta(n)$

$\Theta(n \log n)$

$\Theta(n^2)$

$\Theta(n^2 \log n)$
$a = 1, b = 3, \log_b a = 0$
So $n^{\log_b a} = n^0 = 1$
$f(n) = cn$
So, $f(n) = \Omega\left(n^{0+\epsilon}\right)$ with $\epsilon = 1$.
To apply Master theorem Case 3 we also need the regularity condition: $a\,f(n/b) \leq k\,f(n)$ for some constant $k < 1$.
Here $a\,f(n/3) = \frac{cn}{3} = \frac{1}{3}f(n)$, so it holds with $k = \frac{1}{3}$.
So, using Case 3 of the Master theorem,
$T(n) = \Theta(f(n)) = \Theta(n)$
Answer is A.
The recurrence relation capturing the optimal execution time of the $Towers \ of \ Hanoi$ problem with $n$ discs is
 $T(n) = 2T(n − 2) + 2$
 $T (n) = 2T(n − 1) + n$
 $T (n) = 2T(n/2) + 1$
 $T (n) = 2T(n − 1) + 1$
The recurrence relation for Towers of Hanoi is
$T(1) = 1$
$T(n) = 2T(n-1) + 1$
(move $n-1$ discs to the auxiliary peg, move the largest disc, then move the $n-1$ discs back on top of it), so the answer should be (D).
Which one of the following correctly determines the solution of the recurrence relation with $T(1) = 1$?
$$T(n)= 2T\left(\frac {n} {2}\right) + \log n$$
 $\Theta(n)$
 $\Theta(n\log n)$
 $\Theta(n^2)$
 $\Theta(\log n)$
$f(n) = \log n$
$a = 2, b = 2 \implies n^{\log_b a} = n$
So, $f(n) = \log n = O\left(n^{1-\epsilon}\right)$; we can take any $\epsilon$ in $(0, 1)$, for example $0.5$, which gives $\log n = O(\sqrt{n})$, whose proof is given here: http://math.stackexchange.com/questions/145739/provethatlognosqrtn
So, by Master theorem Case 1, the answer will be $\Theta\left(n^{\log_2 2}\right) = \Theta(n)$
Alternate way:
$T(1) = 1 \\ T(2) = 2T(1) + \log 2 = 3 = 3n - \log n - 2\\T(4) = 2T(2) + \log 4 = 8 = 3n - \log n - 2 \\T(8) = 2T(4) + \log 8 = 19 = 3n - \log n - 2\\ T(16) = 2T(8) + \log 16 = 42 = 3n - \log n - 2$
The terms being subtracted grow at a lower rate than the first term, so we can say $T(n) = O(n)$.
Let a$_{n}$ represent the number of bit strings of length n containing two consecutive $1$s. What is the recurrence relation for $a_{n}$?
 $a_{n-2} + a_{n-1} + 2^{n-2}$
 $a_{n-2} + 2a_{n-1} + 2^{n-2}$
 $2a_{n-2} + a_{n-1} + 2^{n-2}$
 $2a_{n-2} + 2a_{n-1} + 2^{n-2}$
Counting the number of bit strings NOT containing two consecutive $1$'s. (It is easy to derive a recurrence relation for the NOT case as shown below.)
$0 \ 1 - 2$
$00 \ 01 \ 10 - 3$ (append both $0$ and $1$ to any string ending in $0$, and append $0$ to any string ending in $1$)
$000 \ 001 \ 010 \ 100 \ 101 - 5$ (all strings ending in $0$ give two strings and those ending in $1$ give one string)
$0000 \ 0001 \ 0010 \ 0100 \ 0101 \ 1000 \ 1001 \ 1010 - 8$
$\dots$
$a_n' = a_{n-1}' + a_{n-2}'$ (where $a_n'$ denotes the number of bit strings of length $n$ NOT containing two consecutive $1$s, so $a_n' = 2^n - a_n$)
$2^n - a_n = \left(2^{n-1} - a_{n-1}\right) + \left(2^{n-2} - a_{n-2}\right)$
$a_n = 2^{n-2}(4 - 2 - 1) + a_{n-1} + a_{n-2}$
$a_n = a_{n-1} + a_{n-2} + 2^{n-2}$
Option A.
Alternatively, compute small values of $a_n$ by hand and substitute them into the options; only option A holds for all of them.
Consider the following recursive C function.
void get(int n) {
    if (n < 1) return;
    get(n - 1);
    get(n - 3);
    printf("%d", n);
}
If $get(6)$ function is being called in $main()$ then how many times will the $get()$ function be invoked before returning to the $main()$?
 $15$
 $25$
 $35$
 $45$
Let $T(n)$ be the number of calls made from within $get(n)$; then $T(n) = T(n-1) + T(n-3) + 2$, with $T(n) = 0$ for $n \leq 0$:
$T(1) = 2$
$T(2) = 4$
$T(3) = 6$
$T(4) = 10$
$T(5) = 16$
$T(6) = 24$
So, the answer is $24 + 1$ call from main $= 25$.
The given diagram shows the flowchart for a recursive function $A(n)$. Assume that all statements, except for the recursive calls, have $O(1)$ time complexity. If the worst case time complexity of this function is $O(n^{\alpha})$, then the least possible value (accurate up to two decimal positions) of $\alpha$ is ________.
Flow chart for Recursive Function $A(n)$.
Calling $A(n)$ results in $5$ calls to $A(n/2)$:
$A(n) = 5A(n/2) + O(1)$
Hence, applying the Master theorem,
Case 1: $a > b^k$ (here $a = 5$, $b = 2$, $k = 0$),
$T(n) = \Theta\left(n^{\log_2 5}\right)$
Thus the value of $\alpha$ will be $\log_2 5 \approx 2.32$.
Consider the recurrence function
$$T(n) = \begin{cases} 2T(\sqrt{n})+1, & n>2 \\ 2, & 0 < n \leq 2 \end{cases}$$
Then $T(n)$ in terms of $\theta$ notation is
 $\theta(\log \log n)$
 $\theta( \log n)$
 $\theta (\sqrt{n})$
 $\theta(n)$
$T(n) = 2T(\sqrt{n}) + 1$
Put $n = 2^m$:
$T(2^m) = 2T(2^{m/2}) + 1$
Put $T(2^m) = S(m)$:
$S(m) = 2S(m/2) + 1$
Using Case 1 of the Master method, $S(m) = \Theta(m)$, so $T(n) = \Theta(m) = \Theta(\log n)$.
https://gateoverflow.in/1829/gate200651isro201634?show=37791#c37791
Hence, the answer is B.
Consider the following recurrence relation:
$T\left(n\right)=
\begin{cases}
T\left(\frac{n}{k}\right)+ T\left(\frac{3n}{4}\right)+ n & \text{if } n \geq 2 \\
1& \text{if } n=1
\end{cases}$
Which of the following statements is FALSE?
 $T(n)$ is $O(n^{3/2})$ when $k=3$.
 $T(n)$ is $O(n \log n)$ when $k=3$.
 $T(n)$ is $O(n \log n)$ when $k=4$.
 $T(n)$ is $O(n \log n)$ when $k=5$.
 $T(n)$ is $O(n)$ when $k=5$.
Option B is false.
c. Apply the Akra-Bazzi method. For example, for $k = 4$ the equation is $T(n) = T(n/4) + T(3n/4) + n$.
Let $g(n) = n$ and $a_1 = 1$, $b_1 = 1/4$, $a_2 = 1$, $b_2 = 3/4$.
Find $p$ such that $\sum_i a_i b_i^p = 1$: here $\left(\frac{1}{4}\right)^p + \left(\frac{3}{4}\right)^p = 1 \implies p = 1$.
Then $T(n) = \Theta\left(n^p\left(1 + \int_1^n \frac{g(u)}{u^{1+p}}\,du\right)\right)$
$= \Theta\left(n\left(1 + \int_1^n \frac{u}{u^2}\,du\right)\right)$
$= \Theta\left(n(1 + \ln n)\right)$
$= \Theta(n \log n)$, so option C is correct.
d. Apply back substitution.
If $k = 5$: $T(n) = T(n/5) + T(3n/4) + n$ ... (equation 1)
$T(n/5) = T(n/5^2) + T\left(\frac{1}{5}\cdot\frac{3}{4}n\right) + \frac{n}{5}$
$T(3n/4) = T\left(\frac{3}{4}\cdot\frac{1}{5}n\right) + T\left(\left(\frac{3}{4}\right)^2 n\right) + \frac{3n}{4}$; substituting into equation 1,
we get $T(n) = T(n/5^2) + 2\,T\left(\frac{1}{5}\cdot\frac{3}{4}n\right) + T\left(\left(\frac{3}{4}\right)^2 n\right) + n\left(\frac{1}{5} + \frac{3}{4}\right) + n$
In the next step we get $T(n) = T(n/5^3) + \dots + n\left(\frac{1}{5}+\frac{3}{4}\right)^2 + n\left(\frac{1}{5}+\frac{3}{4}\right) + n$
$T(n) = T(n/5^3) + \dots + n\left(\frac{19}{20}\right)^2 + n\left(\frac{19}{20}\right) + n$
$T(n) = T(n/5^k) + \dots + n\left(1 + \frac{19}{20} + \left(\frac{19}{20}\right)^2 + \dots + \left(\frac{19}{20}\right)^{k-1}\right)$
$T(n) \leq T(n/5^k) + \dots + n\cdot\frac{1 - (19/20)^k}{1/20}$
$n/5^k = 1 \implies k = \log_5 n$
$\therefore T(n) \leq 1 + 20n - 20n\left(\frac{19}{20}\right)^{\log_5 n}$
$\therefore T(n) = O(n)$, which is in turn $O(n \log n)$; both D and E are correct.
a. Observing options A and B: $T(n) = O(n\sqrt{n})$, $T(n) = O(n \log n)$, and $\log n = O(\sqrt{n})$,
so if option B were correct then option A would also be correct; since exactly one option is false, the false one must be B (we can eliminate A).
Consider the following recurrence relation:
$T(n)
= \begin{cases}
2T (\lfloor\sqrt{n}\rfloor)+ \log n & \text{if }n \geq 2 \\
1& \text{if }n = 1
\end{cases}$
Which of the following statements is TRUE?
 $T(n)$ is $O(\log n)$.
 $T(n)$ is $O(\log n. \log \log n)$ but not $O(\log n)$.
 $T(n)$ is $O(\log^{3/2} n)$ but not $O(\log n. \log \log n)$.
 $T(n)$ is $O(\log^{2} n)$ but not $O(\log^{3/2} n)$.
 $T(n)$ is $O(\log^{2} n. \log \log n)$ but not $O(\log^{2} n)$.
Let $n=2^k$
$T\left(2^k\right)= 2T\left(2^{k/2}\right) + k $
Let $T\left(2^k\right)=S(k) \implies T\left(2^{k/2}\right) = S(k/2)$
$S(k)=2S(k/2)+k$
This gives $S(k) = \Theta(k \log k)$ (Master theorem case 2)
$ = \Theta(\log n \log \log n)$
So, the answer is B.
Consider the following program that attempts to locate an element $x$ in an array $a[ ]$ using binary search. Assume $N > 1$. The program is erroneous. Under what conditions does the program fail?
var i, j, k: integer; x: integer;
    a: array [1..N] of integer;
begin
    i := 1; j := n;
    repeat
        k := (i + j) div 2;
        if a[k] < x then i := k else j := k
    until (a[k] = x) or (i >= j);
    if (a[k] = x) then
        writeln ('x is in the array')
    else
        writeln ('x is not in the array')
end;
The erroneous lines are:
if $a[k] < x$ then $i := k$
else $j := k$;
The correct code should be:
$k := (i+j) \text{ div } 2$;
if $a[k] < x$ then $i := k + 1$
else $j := k - 1$;
Try an example with the code given in the question.
Let the array be $a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]$ (index numbers $1..10$) and $x = 10$; now run the code.
Initially $i = 1$, $j = 10$.
First iteration: $k = (i+j) \text{ div } 2 = 11 \text{ div } 2 = 5$ (integer division), and since $a[5] < x$, $i = 5$.
Second iteration: $k = 15 \text{ div } 2 = 7$, so $i = 7$.
Third iteration: $k = 17 \text{ div } 2 = 8$, so $i = 8$.
Fourth iteration: $k = 18 \text{ div } 2 = 9$, so $i = 9$.
Fifth iteration: $k = 19 \text{ div } 2 = 9$, so $i = 9$.
Sixth, seventh, ... iterations: $k = 9$ and $i = 9$ again and again,
so the program goes into an infinite loop.
For the loop to terminate, it should be $i := k + 1$ instead of $i := k$, and $j := k - 1$ instead of $j := k$.
The average number of key comparisons required for a successful search for sequential search on $n$ items is
 $\frac{n}{2}$
 $\frac{n-1}{2}$
 $\frac{n+1}{2}$
 None of the above
$= 1 \times$ Probability of first element be $x + 2 \times$ Probability of second element be $x + .... +n \times$ Probability of last element be $x$.
$= \frac{1}{n} + \frac{2}{n} + \frac{3}{n} +.... +\frac{n}{n} $
$ = \frac{\left ( \frac{n\times (n+1)}{2} \right ) } {n} $
$ = \frac{n+1}{2}$
Say the list given is $\{3, 1, 2\}$ and you want to search for the element $2$ sequentially. First you visit the first element and compare it with $2$; if it were $2$, the search would end at the first element with only $1$ comparison. Since it is not $2$, you compare the second element, $1$, with $2$: the search is again unsuccessful, making $2$ comparisons so far (between $2$ and $3$, and between $2$ and $1$), and so on.
So if the required element is found at the first position, the number of comparisons is $1$;
if it is found at the second position, the number of comparisons is $2$, and so on.
Since our list is not sorted, it can be in any order, e.g. $\{1,2,3\}$, $\{3,2,1\}$, $\{2,3,1\}$, etc., so the element we are looking for may be present at any of the three positions with equal probability $1/3$.
Now consider a list containing $n$ elements. The element to be searched for can be present at any of the $n$ positions with equal probability $1/n$.
Total comparisons = (comparisons if the element is at the $1$st position) + (comparisons if at the $2$nd position) + $\dots$ + (comparisons if at the $n$th position)
$= 1 + 2 + 3 + \dots + n = n(n+1)/2$.
Since there are $n$ equally likely positions,
average number of comparisons = total comparisons / number of positions $= \frac{n(n+1)/2}{n} = \frac{n+1}{2}$.
Consider the following algorithm for searching for a given number $x$ in an unsorted array $A[1..n]$ having $n$ distinct values:
 Choose an $i$ at random from $1..n$
 If $A[i] = x$, then Stop else Goto 1;
Assuming that $x$ is present in $A$, what is the expected number of comparisons made by the algorithm before it terminates?
 $n$
 $n-1$
 $2n$
 $\frac{n}{2}$
Expected number of comparisons $(E) = 1 \times$ probability of finding on the first comparison $+ 2 \times$ probability of finding on the second comparison $+ \dots + i \times$ probability of finding on the $i^{th}$ comparison $+ \dots$
$= 1 \times \frac{1}{n} + 2 \times \frac{n-1}{n^2} + 3 \times \frac{(n-1)^2}{n^3} + \dots$
$= \frac{1/n}{1 - \frac{n-1}{n}} + \frac{(n-1)/n^2}{\left(1 - \frac{n-1}{n}\right)^2} \\ \left( \text{Sum to infinity of arithmetico-geometric series with }\\a = \frac{1}{n}, r = \frac{n-1}{n} \text{ and } d = \frac{1}{n} \right) \\= 1 + n - 1 = n$
Ref: https://en.wikipedia.org/wiki/Arithmetico-geometric_sequence
Or we can also do,
$E = 1 \times \frac{1}{n} + 2 \times \frac{n-1}{n^2} + 3 \times \frac{(n-1)^2}{n^3} + \dots$
$E\,\frac{n-1}{n} = \frac{n-1}{n^2} + 2 \times \frac{(n-1)^2}{n^3} + 3 \times \frac{(n-1)^3}{n^4} + \dots$
$E - E\,\frac{n-1}{n} = \frac{1}{n} + \frac{n-1}{n^2} + \frac{(n-1)^2}{n^3} + \dots$
$E \cdot \frac{1}{n} = \frac{1/n}{1 - \frac{n-1}{n}} = 1 \left( \text{Sum to infinity of GP with } a = \frac{1}{n} \text{ and } r = \frac{n-1}{n} \right) \\ \implies E = n$
Why is the third term $3 \times \frac{(n-1)^2}{n^3}$?
The way I see it, the probability of succeeding at the $k$th trial is the probability of failing at each of the first $k-1$ trials times the probability of succeeding at the $k$th trial.
It's not like we are removing one element from the list after each trial.
Consider the following C program that attempts to locate an element $x$ in an array $Y[ \ ]$ using binary search. The program is erroneous.
f (int Y[10], int x) {
    int i, j, k;
    i = 0; j = 9;
    do {
        k = (i + j) / 2;
        if (Y[k] < x) i = k; else j = k;
    } while ((Y[k] != x) && (i < j));
    if (Y[k] == x)
        printf("x is in the array");
    else
        printf("x is not in the array");
}
On which of the following contents of $Y$ and $x$ does the program fail?
 $Y$ is $[1 \ 2 \ 3 \ 4 \ 5 \ 6 \ 7 \ 8 \ 9 \ 10]$ and $x < 10 $
 $Y$ is $[1 \ 3 \ 5 \ 7 \ 9 \ 11 \ 13 \ 15 \ 17 \ 19]$ and $x < 1 $
 $Y$ is $[2 \ 2 \ 2 \ 2 \ 2 \ 2 \ 2 \ 2 \ 2 \ 2]$ and $x > 2$
 $Y$ is $[2 \ 4 \ 6 \ 8 \ 10 \ 12 \ 14 \ 16 \ 18 \ 20]$ and $ 2 < x < 20$ and $x$ is even
For option C the control iterates forever with $i = 8$ and $j = 9$:
$k = \frac{8+9}{2}$, stored in an integer variable, evaluates to $8$, and since $Y[8] = 2 < x$, $i$ is assigned $k = 8$ again and again.
For option A, with $x=9$, $k$ will take the following values:
 $4$
 $6$
 $7$
 $8$, where $Y[8] = 9$: $x$ found
For option D, with $x=10$, $k$ will take the following values:
 $4$, where $Y[4] = 10$: $x$ found
Consider the following C program that attempts to locate an element $x$ in an array $Y[ \ ]$ using binary search. The program is erroneous.
f (int Y[10], int x) {
    int i, j, k;
    i = 0; j = 9;
    do {
        k = (i + j) / 2;
        if (Y[k] < x) i = k; else j = k;
    } while ((Y[k] != x) && (i < j));
    if (Y[k] == x)
        printf("x is in the array");
    else
        printf("x is not in the array");
}
The correction needed in the program to make it work properly is
 Change line 6 to: if $(Y[k] < x) i = k + 1$; else $j = k - 1$;
 Change line 6 to: if $(Y[k] < x) i = k - 1$; else $j = k + 1$;
 Change line 6 to: if $(Y[k] < x) i = k$; else $j = k$;
 Change line 7 to: } while $((Y[k] == x) \&\& (i < j))$ ;
The answer should be A.
if $(Y[k] < x)$ then $i = k + 1$;
If the element we are searching for is greater than $Y[k]$, the search continues in the upper half of the array;
otherwise $j = k - 1$;
and it continues in the lower half.
Take a few cases into consideration, i.e.
 all elements are the same
 increasing order with no repetition
 increasing order with repetition.
Let $A$ be an array of $31$ numbers consisting of a sequence of $0$'s followed by a sequence of $1$'s. The problem is to find the smallest index $i$ such that $A\left [i \right ]$ is $1$ by probing the minimum number of locations in $A$. The worst case number of probes performed by an optimal algorithm is ____________.
At each stage we probe the middle location, index $(low + high)/2$: if it holds $1$ we continue in the left half, and if it holds $0$ we continue in the right half (a binary search for the first $1$).
The total number of probes in the worst case is $\lceil \log_{2} 31 \rceil = 5$,
so the answer is $5$;
it should not take more than $5$ probes.
#include <bits/stdc++.h>
using namespace std;
#define N 100

void display(bool a[]) {
    int i = 0;
    while (i < 31) printf("%2d ", a[i++]);
}

void assign(bool a[], int zeros) {
    for (int i = 0; i < 31; i++)
        (i < zeros) ? a[i] = 0 : a[i] = 1;
}

int main() {
    srand(time(NULL));
    bool a[31];
    // header
    for (int i = 0; i < 31; i++) printf("%2d ", i);
    printf("\n\n");
    int max_probes = 0;
    for (int iteration = 1; iteration <= N; iteration++) {
        int zeros = rand() % 32;
        assign(a, zeros);
        sort(a, a + 31);
        int low, high, mid, ans;
        vector<int> seq;
        low = 0; high = 31;
        while (low < high) {
            mid = low + (high - low) / 2;
            seq.push_back(mid);
            if (a[mid] == 0) {
                low = mid + 1;
                ans = low;
                if (mid + 1 < 31 && a[mid + 1] == 1) break; // bounds check added
            } else {
                high = mid;
                ans = high;
                if (mid > 0 && a[mid - 1] == 0) break;
            }
        }
        display(a);
        printf(" probes=%d ", (int)seq.size());
        for (auto e : seq) printf("%d ", e);
        printf(" at = %dth\n", ans);
        max_probes = max(max_probes, (int)seq.size());
        seq.clear();
    }
    printf("%d\n", max_probes);
}
Suppose you are given an array $A$ with $2n$ numbers.
The numbers in odd positions are sorted in ascending order, that is, $A[1] \leq A[3] \leq \ldots \leq A[2n-1]$.
The numbers in even positions are sorted in descending order, that is, $A[2] \geq A[4] \geq \ldots \geq A[2n]$.
What is the method you would recommend for determining if a given number is in the array?
 Sort the array using quicksort and then use binary search.
 Merge the sorted lists and perform binary search.
 Perform a single binary search on the entire array.
 Perform separate binary searches on the odd positions and the even positions.
 Search sequentially from the end of the array.
Option D is the correct answer.
We can simply use clever indexing to binary search the element in the odd positions, and in the even positions separately.
This will take $O(\log n)$ time and $O(1)$ space in the worst case.
A: Sorting using Quicksort will take $O(n^2)$ time.
B: Merging will take $O(n)$ time and $O(n)$ space.
C: Binary search only works on a sorted array.
E: Sequential search will take $O(n)$ time.
Consider the following three versions of the binary search program. Assume that the elements of type $T$ can be compared with each other; also assume that the array is sorted.
i, j, k : integer; a : array [1..N] of T; x : T;

Program 1 :
i := 1; j := N;
repeat
    k := (i + j) div 2;
    if a[k] < x then i := k else j := k
until (a[k] = x) or (i > j)

Program 2 :
i := 1; j := N;
repeat
    k := (i + j) div 2;
    if x < a[k] then j := k - 1;
    if a[k] < x then i := k + 1;
until i > j

Program 3 :
i := 1; j := N;
repeat
    k := (i + j) div 2;
    if x < a[k] then j := k else i := k + 1
until i > j
A binary search program is called correct provided it terminates with $a[k] = x$ whenever such an element exists, or it terminates with $a\left [ k \right ]\neq x$ if there exists no array element with value $x$. Which of the following statements is correct?
 Only Program $1$ is correct
 Only Program $2$ is correct
 Only Program $1$ and $2$ are correct.
 Both Program $2$ and $3$ are correct
 All the three programs are wrong
The first program won't work if the array has repeated elements; it may go into an infinite loop. To make it work properly we have to make the changes $j := k - 1$ and $i := k + 1$.
For the second program, the $a[k] = x$ exit condition is missing (when $a[k] = x$, neither branch changes $i$ or $j$, so the loop never ends), so it is wrong.
The third program is also wrong, as it needs $j := k - 1$ and the $a[k] = x$ condition is missing.
So the answer is E.
You have $n$ lists, each consisting of $m$ integers sorted in ascending order. Merging these lists into a single sorted list will take time:
 $O(nm \log m)$
 $O(mn \log n)$
 $O(m + n)$
 $O(mn)$
We have $n$ lists, each of size $m$.
Since each list is sorted in ascending order, we can directly use the merge procedure of the merge sort algorithm.
Take two lists and merge them: one pair takes $2m$ time.
There are $n/2$ pairs at the first level, so the total cost of that level is $(n/2) \times 2m = nm$.
At the next level the cost of one pair is $4m$ and the number of pairs is $n/4$, so that level also costs $nm$.
Thus each level costs $nm$, and merging continues until we have one complete list:
$n/2^k = 1$
$\implies k = \log_2 n$ levels.
So the total cost is $\log n \times nm$, i.e. $O(mn \log n)$.
The worst case running times of Insertion sort , Merge sort and Quick sort, respectively are:
 $\Theta (n \log n)$, $\Theta (n \log n)$ and $\Theta(n^2)$
 $\Theta (n^2)$, $\Theta (n^2)$ and $\Theta(n \log n)$
 $\Theta (n^2)$, $\Theta (n \log n)$ and $\Theta (n \log n)$
 $\Theta (n^2)$, $\Theta (n \log n)$ and $\Theta (n^2)$
Answer is D..
Insertion sort $= \Theta(n^2)$
Merge sort $= \Theta(n\log n)$
Quick sort $= \Theta (n^2)$
Note: here $\Theta$ does not denote the average case; since the question asks for the worst case, $\Theta$ describes the worst case only.
Let $P$ be a quicksort program to sort numbers in ascending order. Let $t_{1}$ and $t_{2}$ be the time taken by the program for the inputs $[1 \ 2 \ 3 \ 4]$ and $[5 \ 4 \ 3 \ 2 \ 1]$, respectively. Which of the following holds?
 $t_{1} = t_{2}$
 $t_{1} > t_{2}$
 $t_{1} < t_{2}$
 $t_{1}=t_{2}+5 \log 5$
Answer: 7
Minimum number of comparisons = $\lceil \log(n!) \rceil$ = $\lceil \log(5!) \rceil$ = $\lceil \log(120) \rceil$ = 7.
Ref: http://en.wikipedia.org/wiki/Comparison_sort#Number_of_comparisons_required_to_sort_a_list
ans should be counting sort which will take $O(n+k)$ time
Choose the correct alternatives (more than one may be correct) and write the corresponding letters only:
The following algorithm(s) can be used to sort $n$ numbers in the range $[1 \dots n^3]$ in $O(n)$ time
 Heap sort
 Quick sort
 Merge sort
 Radix sort
Answer is (D) Part.
Although people have provided correct answers but it seems some more explanation is required.
Let there be $d$ digits in the maximum input integer, let $b$ be the base used for representing the input numbers, and let $n$ be the total count of numbers; then Radix Sort takes $O(d \times (n+b))$ time. Sorting is performed from the least significant digit to the most significant digit.
For example, for decimal system, $b$ is $10$. What is the value of $d$? If $k$ is the maximum possible value, then $d$ would be $O(\log_b (k))$. So overall time complexity is $O((n+b) * \log_b(k))$. Which looks more than the time complexity of comparison based sorting algorithms for a large $k$. Let us first limit $k$. Let $k \leqslant n^{c}$ where $c$ is a constant. In that case, the complexity becomes $O(n \log_b(n))$. But it still does not beat comparison based sorting algorithms.
What if we make value of $b$ larger?. What should be the value of $b$ to make the time complexity linear? If we set $\mathbf{b}$ as $\mathbf{n}$ then we will get the time complexity as $O(n)$.
In other words, we can sort an array of integers with range from $1$ to $n^{c}$, If the numbers are represented in base $n$ (or every digit takes $\log_2(n)$ bits).
Reference > http://www.geeksforgeeks.org/radixsort/
As no comparison-based sort can ever do better than $n \log n$ (except in special cases), options A, B, and C are eliminated: $n \log n$ is the lower bound for comparison-based sorting.
As Radix sort is not a comparison-based sort (each digit pass is a counting sort), D is correct!
The algorithm will take maximum time when:
 The array is already sorted in same order.
 The array is already sorted in reverse order.
 All elements are same in the array.
Algorithm design technique used in quicksort algorithm is?

Dynamic programming

Backtracking

Divide and conquer

Greedy method
C. Quicksort is one of the efficient algorithms using the Divide and Conquer strategy.
Merge sort uses

Divide and conquer strategy

Backtracking approach

Heuristic search

Greedy approach
It is A.
One of the best examples of Divide and conquer strategy.
For merging two sorted lists of sizes $m$ and $n$ into a sorted list of size $m+n$, we require comparisons of

$O(m)$

$O(n)$

$O(m+n)$

$O(\log m + \log n)$
It is C.
The number of moves is however always $m+n$, so we can term it $\Theta(m+n)$. But the number of comparisons varies with the input: in the best case it is $\min(m,n)$ and in the worst case it is $m+n-1$.
$$92, 37, 52, 12, 11, 25$$
Use bubblesort to arrange the sequence in ascending order. Give the sequence at the end of each of the first five passes.
$1^{\text{st}}$ Pass: $37 \ 52 \ 12 \ 11 \ 25 \ 92$
$2^{\text{nd}}$ Pass: $37 \ 12 \ 11 \ 25 \ 52 \ 92$
$3^{\text{rd}}$ Pass: $12 \ 11 \ 25 \ 37 \ 52 \ 92$
$4^{\text{th}}$ Pass: $11 \ 12 \ 25 \ 37 \ 52 \ 92$
$5^{\text{th}}$ Pass: $11 \ 12 \ 25 \ 37 \ 52 \ 92$
A two dimensional array $A[1..n][1..n]$ of integers is partially sorted if $\forall i, j\in [1..n1], A[i][j] < A[i][j+1] \text{ and } A[i][j] < A[i+1][j]$
Fill in the blanks:

The smallest item in the array is at $A[i][j]$ where $i=__$ and $j=__$.
 The smallest item is deleted. Complete the following $O(n)$ procedure to insert item $x$ (which is guaranteed to be smaller than any item in the last row or column) still keeping $A$ partially sorted.
procedure insert (x: integer);
var i, j: integer;
begin
    i := 1; j := 1; A[i][j] := x;
    while (x > __ or x > __) do
        if A[i+1][j] < A[i][j] ___ then begin
            A[i][j] := A[i+1][j];
            i := i+1;
        end
        else begin
            _____
        end
    A[i][j] := ____
end
 smallest element is at index 1,1.
 So we have to give an array which is partially sorted. Definition of partially sorted is given in the question.
We will give the value of $x$ which is less than last row & column value.
At last, $1$,$1$ should be deleted & $x$ should be at its correct place.
i = 1; j = 1;
a[i][j] = x;
while ((x > a[i+1][j]) || (x > a[i][j+1])) {
    if ((a[i+1][j] < x) && (a[i+1][j] < a[i][j+1])) {
        a[i][j] = a[i+1][j];
        i = i + 1;
    } else {
        a[i][j] = a[i][j+1];
        j = j + 1;
    }
}
a[i][j] = x;
Enter the dimension of the $n \times n$ array. Give the value of $n$:
$3$
Enter the array elements in partially sorted order
$2$ $3$ $9$
$5$ $6$ $10$
$8$ $11$ $15$
Enter the value of $x$
$7$
The final output.
$3$ $6$ $9$
$5$ $7$ $10$
$8$ $11$ $15$
Quicksort is run on two inputs shown below to sort in ascending order taking first element as pivot
 $1, 2, 3, \dots n$
 $n, n-1, n-2, \dots, 2, 1$
Let $C_1$ and $C_2$ be the number of comparisons made for the inputs (i) and (ii) respectively. Then,
 $C_1 < C_2$
 $C_1 > C_2$
 $C_1 = C_2$
 we cannot say anything for arbitrary $n$
C.
Both inputs are worst cases of quicksort (assuming the pivot is either the first or the last element):
 (i) is sorted in ascending order.
 (ii) is sorted in descending order.
Give the correct matching for the following pairs:
(A) $O (\log n)$  (P) Selection sort 
(B) $O (n)$  (Q) Insertion sort 
(C) $O (n \log n)$  (R) Binary search 
(D) $O (n^2)$  (S) Merge sort 

A-R, B-P, C-Q, D-S

A-R, B-P, C-S, D-Q

A-P, B-R, C-S, D-Q

A-P, B-S, C-R, D-Q
Merge sort: $O(n \log n)$.
Binary search: $O(\log n)$.
Insertion sort: $O(n)$ in the best case (note: if you assign $O(n^2)$ to insertion sort here, you are not left with any choice for selection sort).
A) $O(\log n) \to$ binary search; no other option matches $\to$ R. Options C and D are eliminated.
C) $O(n \log n) \to$ merge sort, in best, worst, or any case $\to$ S. Option A is eliminated.
It seems to me that all the options are imperfect here, but I would go with
Option B) A-R, B-P, C-S, D-Q.
D $\to$ Q: this is okay (insertion sort is $O(n^2)$ in the worst case).
B $\to$ P: this is okay only if we accept $O(n)$ loosely for selection sort, which is really $O(n^2)$.
If one uses straight twoway merge sort algorithm to sort the following elements in ascending order:
$20, \ 47, \ 15, \ 8, \ 9, \ 4, \ 40, \ 30, \ 12, \ 17$
then the order of these elements after second pass of the algorithm is:

$8, \ 9, \ 15, \ 20, \ 47, \ 4, \ 12, \ 17, \ 30, \ 40$

$8, \ 15, \ 20, \ 47, \ 4, \ 9, \ 30, \ 40, \ 12, \ 17$

$15, \ 20, \ 47, \ 4, \ 8, \ 9, \ 12, \ 30, \ 40, \ 17$

$4, \ 8, \ 9, \ 15, \ 20, \ 47, \ 12, \ 17, \ 30, \ 40$
$20 \ 47 \mid 15 \ 8 \mid 9 \ 4 \mid 40 \ 30 \mid 12 \ 17$
After the $1$st pass (merge adjacent pairs): $20 \ 47 \mid 8 \ 15 \mid 4 \ 9 \mid 30 \ 40 \mid 12 \ 17$
After the $2$nd pass (merge adjacent runs of two): $8 \ 15 \ 20 \ 47 \mid 4 \ 9 \ 30 \ 40 \mid 12 \ 17$
Ans. B
Let $A$ be an $n \times n$ matrix such that the elements in each row and each column are arranged in ascending order. Draw a decision tree, which finds $1$^{st}, $2$^{nd} and $3$^{rd} smallest elements in minimum number of comparisons.
This is a two dimensional array of size $4\times4$.
$\begin{matrix} 2 & 5 & 8 & 9 \\ 4 & 11 & 19 & 21 \\ 6 & 13 & 24 & 27 \\ 8 & 15 & 25 & 29 \end{matrix}$
You can see the elements in each row and each column are arranged in ascending order.
Smallest element : $A[0][0] = 2$
$2$nd Smallest element : $min(A[0][1],A[1][0])= min(5,4)=4$
$3$rd smallest element: just exclude the element you got as the $2$nd smallest $(4)$. Here we can compare $A[2][0]$ and $A[0][1]$; there is no need to compare with $A[0][2]$, since $A[0][1] < A[0][2]$. So it depends on where the $2$nd smallest came from: you can draw a decision tree describing what to do if the $2$nd smallest was $A[0][1]$ and what to do if it was $A[1][0]$.
Either way, the number of comparisons is constant, so the time complexity is simply $O(1)$.
The elements are in ascending order. Not in non decreasing order. Clearly they are all distinct in a particular row or column.
A sorting technique is called stable if
 it takes $O (n \log n)$ time

it maintains the relative order of occurrence of non-distinct elements

it uses divide and conquer paradigm

it takes $O(n)$ space
(B) It maintains the relative order of occurrence of non-distinct elements
(from the definition of stable sorting).
An array contains four occurrences of $0$, five occurrences of $1$, and three occurrences of $2$ in any order. The array is to be sorted using swap operations (elements that are swapped need to be adjacent).
 What is the minimum number of swaps needed to sort such an array in the worst case?
 Give an ordering of elements in the above array so that the minimum number of swaps needed to sort the array is maximum.
In bubble sort, each element must be swapped past every smaller element to its right, so the number of adjacent swaps equals the number of inversions. So if we have the ordering
$[2,2,2,1,1,1,1,1,0,0,0,0]$, then we need a total of $3 \times 9 + 5 \times 4 = 47$ swaps, and this will be the worst case.
so it answers actually both parts.
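The claim that sorting with adjacent swaps needs exactly as many swaps as there are inversions can be checked directly; a brute-force sketch (the function name is mine):

```c
#include <stddef.h>

/* Number of adjacent swaps needed to sort = number of inversions:
   pairs (i, j) with i < j and a[i] > a[j]. O(n^2) brute force. */
size_t count_inversions(const int *a, size_t n) {
    size_t inv = 0;
    for (size_t i = 0; i + 1 < n; i++)
        for (size_t j = i + 1; j < n; j++)
            if (a[i] > a[j])
                inv++;
    return inv;
}
```

For the ordering $[2,2,2,1,1,1,1,1,0,0,0,0]$ this returns $47$, matching the count above.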
Randomized quicksort is an extension of quicksort where the pivot is chosen randomly. What is the worst case complexity of sorting n numbers using Randomized quicksort?
 $O(n)$
 $O(n \log n)$
 $O(n^2)$
 $O(n!)$
Answer will be (C)
There are the following two cases in which Randomized Quick Sort results in the worst-case time complexity of $O(n^{2})$:
 When all elements in the input array are the same, the partition algorithm divides the input array into one subarray with $n-1$ elements and one with $0$ elements. (The assumption here is that we use the same partition algorithm without any modification.)
 If the randomized pivot selector happens to select, e.g., the smallest element $n$ times in a row, we get the worst possible performance, though the probability of this particular case is about $\frac{1}{n!}$.
PS: if the partitioning is unbalanced, the Quick Sort algorithm runs asymptotically as slowly as Insertion Sort, i.e. $O(n^{2})$.
Both the deterministic and randomized quicksort algorithms have the same best-case running time of $O(n \log n)$ and the same worst-case running time of $O(n^{2})$. The difference is that with the deterministic algorithm a particular input can elicit that worst-case behavior, while with the randomized algorithm no input can always elicit it. This matters because, depending on how partitioning is implemented, an input that is already sorted (or almost sorted) can elicit the worst-case behavior in deterministic quicksort.
source: Thomas Cormen (CLRS)
Ans. C
The unusual $\Theta(n^2)$ implementation of Insertion Sort to sort an array uses linear search to identify the position where an element is to be inserted into the already sorted part of the array. If, instead, we use binary search to identify the position, the worst case running time will
 remain $\Theta(n^2)$
 become $\Theta(n (\log n)^2)$
 become $\Theta(n \log n)$
 become $\Theta(n)$
In insertion sort, with linear search, it takes
(worst case) $n$ comparisons for searching the right position, and $n$ swaps to make room to place the element.
Hence for n elements, a total of $n\times(n+n)$; $n$ for search and $n$ for swaps.
$= \Theta (2n^2) = \Theta (n^2)$
If we replace it with binary search, it takes
(worst case) $\log n$ comparisons for searching the right position, and $n$ swaps to make room to place the element.
Hence for $n$ elements, a total of $n\times(\log n+n)$; $\log n$ for search and $n$ for swaps.
$= \Theta (n \times \log n + n^2) = \Theta (n^2)$
Hence answer is A
A. The complexity remains the same: $\Theta(n^{2})$.
To place an element $x$ in its correct position, we first find that position in the sorted part using binary search, but we still have to make space for it by shifting all later elements to the right, which in the worst case may equal the size of the sorted part.
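A sketch of insertion sort with binary search for the position (names are mine), showing why the shifts keep it at $\Theta(n^2)$:

```c
#include <stddef.h>

/* Insertion sort using binary search to locate the insert position.
   Comparisons drop to O(n log n), but the element shifts remain
   O(n^2) in the worst case, so overall it is still Theta(n^2). */
void binary_insertion_sort(int *a, size_t n) {
    for (size_t i = 1; i < n; i++) {
        int x = a[i];
        size_t lo = 0, hi = i;           /* search within a[0..i-1] */
        while (lo < hi) {                /* O(log i) comparisons */
            size_t mid = lo + (hi - lo) / 2;
            if (a[mid] <= x) lo = mid + 1;
            else             hi = mid;
        }
        for (size_t j = i; j > lo; j--)  /* up to i shifts: the O(n^2) part */
            a[j] = a[j - 1];
        a[lo] = x;
    }
}
```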
In a permutation \(a_1 ... a_n\), of n distinct integers, an inversion is a pair \((a_i, a_j)\) such that \(i < j\) and \(a_i > a_j\).
If all permutations are equally likely, what is the expected number of inversions in a randomly chosen permutation of \(1. . . n\)?
 \(\frac{n(n-1)}{2}\)
 \(\frac{n(n-1)}{4}\)
 \(\frac{n(n+1)}{4}\)
 \(2n[\log_2n]\)
They are asking for the expected (average) number of inversions.
Apart from the standard definition, the average here can be calculated as $(best \ case + worst \ case)/2$, since the number of inversions is distributed symmetrically over all permutations (each pair is inverted in exactly half of them).
An inversion is a pair out of order, like $(9, 5)$.
The best case is a sorted array $1,2,3,4,5$: no inversions $= 0$.
The worst case is a reverse-sorted array, e.g. $5,4,3,2,1$: here $5$ forms an inversion pair with each of the $4$ elements after it $(4,3,2,1)$, similarly $4$ with the $3$ elements after it, and so on.
So, if we have $n$ elements, the total is $(n-1)+(n-2)+(n-3)+\dots+2+1$, the sum of the first $n-1$ natural numbers, which is $n(n-1)/2$.
So the expected (average) number of inversions $= (n(n-1)/2 + 0)/2 = n(n-1)/4$.
So option B.
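The $n(n-1)/4$ average can be confirmed exactly for small $n$ by enumerating all permutations; a brute-force sketch (names mine) using Heap's algorithm:

```c
#include <stddef.h>

/* Inversions of one permutation, O(n^2) brute force. */
static size_t inv_count(const int *a, size_t n) {
    size_t c = 0;
    for (size_t i = 0; i < n; i++)
        for (size_t j = i + 1; j < n; j++)
            if (a[i] > a[j]) c++;
    return c;
}

/* Heap's algorithm: visit every permutation of a[0..k-1] once. */
static void permute(int *a, size_t k, size_t n, size_t *total) {
    if (k == 1) { *total += inv_count(a, n); return; }
    for (size_t i = 0; i + 1 < k; i++) {
        permute(a, k - 1, n, total);
        size_t s = (k % 2 == 0) ? i : 0;
        int t = a[s]; a[s] = a[k - 1]; a[k - 1] = t;
    }
    permute(a, k - 1, n, total);
}

/* Sum of inversion counts over all n! permutations of 1..n (n <= 16). */
size_t total_inversions(size_t n) {
    int a[16];
    for (size_t i = 0; i < n; i++) a[i] = (int)(i + 1);
    size_t total = 0;
    permute(a, n, n, &total);
    return total;
}
```

For $n = 4$ the sum over all $4! = 24$ permutations is $72$, i.e. an average of $3 = 4 \cdot 3 / 4$, as the formula predicts.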
Second question:
Insertion sort's running time is governed by comparisons and element movements, and the number of movements equals the number of inversions. If the input has at most $n$ inversion pairs, the movements are bounded by $n$. For example, with exactly one inversion the array must be sorted except for one adjacent pair, like $1,2,3,5,4$; otherwise more than one inversion pair would form. So to sort this kind of array, insertion sort needs at most $n$ extra steps beyond its $\Theta(n)$ scan, so the answer is D.
In a permutation \(a_1 ... a_n\), of $n$ distinct integers, an inversion is a pair \((a_i, a_j)\) such that \(i < j\) and \(a_i > a_j\).
What would be the worst case time complexity of the Insertion Sort algorithm, if the inputs are restricted to permutations of \(1. . . n\) with at most $n$ inversions?
 \(\Theta(n^2)\)
 \(\Theta(n\log n)\)
 \(\Theta(n^{1.5})\)
 \(\Theta(n)\)
ANSWER: D. $\Theta(n)$
REASON:
The number of times the inner loop of insertion sort executes equals the number of inversions in the input permutation $a_1,a_2,\dots a_n$: for each value of $i = 1 \dots n$, $j$ takes the values $1 \dots i-1$, which means for every $j<i$ it checks whether $a[j] > a[i]$.
In any permutation, the maximum possible number of inversions is $n(n-1)/2$, which is $O(n^2)$; this is the case where the array is sorted in reverse order. Resolving all of them gives insertion sort its usual worst-case time complexity of $\Theta(n^2)$.
However, since the question restricts the number of inversions in the input array to at most $n$, the worst-case time complexity of insertion sort reduces to $\Theta(n)$.
INSERTION SORT ALGORITHM (for reference)
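The referenced algorithm is not reproduced above, so here is a standard insertion sort sketch (names mine); the inner loop body runs exactly once per inversion it resolves:

```c
#include <stddef.h>

/* Standard insertion sort. Each iteration of the inner while loop
   resolves exactly one inversion, so the total running time is
   Theta(n + number of inversions). */
void insertion_sort(int *a, size_t n) {
    for (size_t i = 1; i < n; i++) {
        int key = a[i];
        size_t j = i;
        while (j > 0 && a[j - 1] > key) {  /* one iteration per inversion */
            a[j] = a[j - 1];
            j--;
        }
        a[j] = key;
    }
}
```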
As the question says, how the Inversion is defined .
In a permutation \(a_1 ... a_n\), of n distinct integers, an inversion is a pair \((a_i, a_j)\) such that \(i < j\) and \(a_i > a_j\).
 One important thing to see is the difference between swaps and inversions.
 Swapping is done explicitly by the programmer, hence an explicit operation, whereas an inversion is an implicit feature defined by the input.
Ex : Take the input => $\left \{ 0,1,9,8,10,7,6,11 \right \}$
How many Inversions here : $\left \{ 9,8 \right \}$ , $\left \{ 9,7 \right \}$ , $\left \{ 9,6 \right \}$ ,$\left \{ 8,7 \right \}$ , $\left \{ 8,6 \right \}$ , $\left \{ 10,7 \right \}$ and $\left \{ 10,6 \right \}$. Hence, it is an implicit feature of the input and not any explicit operation done (Swap) .
Actual Time complexity of Insertion Sort is defined as $\Theta \left ( N + f(N) \right )$, where $f\left ( N \right )$ is the total number of Inversions .
Ex : Again take the input => $\left \{ 0,6,7,8,9,10,1 \right \}$
Here, how many comparisons will be there to place $1$ in the right place ?
 First of all, $1$ is compared with $10$: returns True, as it is smaller than $10$.
 Then with $9$: again True.
 The same happens with $8, 7$ and $6$: all return True.
Hence, those $5$ comparisons were the inversions, and $1$ more comparison is made, on which the outer while loop fails.
So, placing $1$ in the right place takes $6$ comparisons.
Hence, Total Comparisons in the Insertion sort will be : Total number of elements for which our while loop fails + Total number of inversions in the input
 Total number of elements for which our while loop fails: suppose the input is $\left \{ 0,6,7,8,9,10,1 \right \}$. Here $0$ is kept as it is, and there is one while-loop-fail comparison for each remaining element, hence $\left ( N-1 \right )=O\left ( N \right )$ such comparisons in total.
 Total number of inversions in the input: best case $0$, and worst case $\frac{N\left ( N-1 \right )}{2} = O\left ( N^{2} \right )$.
Total Time complexity of insertion sort : $\Theta (N+f(N))$
It is given here that at most $N$ inversions are present, so we get the time complexity: $\Theta (N+f(N)) = \Theta (N + N) = \Theta (N)$.
The tightest lower bound on the number of comparisons, in the worst case, for comparisonbased sorting is of the order of
 $n$
 $n^2$
 $n \log n$
 $n \log^2n$
The tightest lower bound for comparison-based sorting (say $S(n)$) being $n \log n$ means: every correct algorithm makes $\Omega(n \log n)$ comparisons in the worst case, and no function $f$ with an order of growth larger than $n \log n$ satisfies $f(n) = \Omega(S(n))$ as a valid lower bound, since $O(n \log n)$ algorithms such as merge sort exist.
A usual mistake is to think the worst case changes with lower and upper bounds, but that is not the case. The worst case is defined for the algorithm, and it is always the input which causes the algorithm the maximum complexity.
Suppose there are $\lceil \log n \rceil$ sorted lists of $\lfloor n /\log n \rfloor$ elements each. The time complexity of producing a sorted list of all these elements is: (Hint:Use a heap data structure)
 $O(n \log \log n)$
 $\Theta(n \log n)$
 $\Omega(n \log n)$
 $\Omega\left(n^{3/2}\right)$
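No worked answer accompanies this question here, but the hinted approach can be sketched as a $k$-way merge driven by a min-heap of list heads (all names are mine; the fixed heap capacity is a simplifying assumption). With $k = \lceil \log n \rceil$ lists and $n$ total elements, each of the $n$ pop/push steps costs $O(\log k) = O(\log \log n)$, for $O(n \log \log n)$ overall:

```c
#include <stddef.h>

typedef struct { int value; size_t list, pos; } HeapNode;

static void sift_down(HeapNode *h, size_t size, size_t i) {
    for (;;) {
        size_t smallest = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < size && h[l].value < h[smallest].value) smallest = l;
        if (r < size && h[r].value < h[smallest].value) smallest = r;
        if (smallest == i) return;
        HeapNode t = h[i]; h[i] = h[smallest]; h[smallest] = t;
        i = smallest;
    }
}

/* Merge k sorted lists (lists[i] has len[i] elements) into out.
   The heap never holds more than k nodes, so each pop/push costs
   O(log k); with k = ceil(log n) this gives O(n log log n) total. */
void kway_merge(const int **lists, const size_t *len, size_t k, int *out) {
    HeapNode heap[64];                      /* assumes k <= 64 in this sketch */
    size_t size = 0, n_out = 0;
    for (size_t i = 0; i < k; i++)          /* seed heap with list heads */
        if (len[i] > 0) {
            heap[size].value = lists[i][0];
            heap[size].list = i;
            heap[size].pos = 0;
            size++;
        }
    for (size_t i = size; i-- > 0; )        /* bottom-up heapify */
        sift_down(heap, size, i);
    while (size > 0) {
        HeapNode min = heap[0];
        out[n_out++] = min.value;           /* emit current minimum */
        if (min.pos + 1 < len[min.list]) {  /* replace with next from list */
            heap[0].value = lists[min.list][min.pos + 1];
            heap[0].list = min.list;
            heap[0].pos = min.pos + 1;
        } else {
            heap[0] = heap[--size];         /* list exhausted: shrink heap */
        }
        sift_down(heap, size, 0);
    }
}
```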
Let $a$ and $b$ be two sorted arrays containing $n$ integers each, in nondecreasing order. Let $c$ be a sorted array containing $2n$ integers obtained by merging the two arrays $a$ and $b$. Assuming the arrays are indexed starting from $0$, consider the following four statements
 $a[i] \geq b [i] \Rightarrow c[2i] \geq a [i]$
 $a[i] \geq b [i] \Rightarrow c[2i] \geq b [i]$
 $a[i] \geq b [i] \Rightarrow c[2i] \leq a [i]$
 $a[i] \geq b [i] \Rightarrow c[2i] \leq b [i]$
Which of the following is TRUE?
 only I and II
 only I and IV
 only II and III
 only III and IV
$a[i] \geq b[i]$
Since both $a$ and $b$ are sorted in the beginning, there are $i$ elements smaller than or equal to $a[i] (i$ starts from $0)$, and similarly $i$ elements smaller than or equal to $b[i]$. So, $a[i] \geq b[i]$ means there are $2i$ elements smaller than or equal to $a[i]$, and hence in the merged array $a[i]$ will come after these $2i$ elements (its index will be $> 2i$). So, $c[2i] \leq a[i]$ (equality takes care of the "equal to" case which comes when array contains repeated elements).
Similarly, $a[i] \geq b[i]$ says for $b$ that, there are not more than $2i$ elements smaller than $b[i]$ in the sorted array ($i$ elements from $b$, and maximum another $i$ elements from $a$). So, $b[i] \leq c[2i]$
So, II and III are correct > option (C)
$A = \{40, 50, 60\}$
$B = \{10, 20, 30\}$
Both arrays have $n$ elements in non-decreasing order (they may contain equal numbers, in ascending order).
$C = \{10, 20, 30, 40, 50, 60\}$
Array indices start at $0$. Now take $i = 2$:
$A[2] \geq B[2]$, so check all the options:
I. $C[4] \geq A[2]$: $50 \geq 60$. False.
II. $C[4] \geq B[2]$: $50 \geq 30$. True.
III. $C[4] \leq A[2]$: $50 \leq 60$. True.
IV. $C[4] \leq B[2]$: $50 \leq 30$. False.
So option C is the correct answer.
Which one of the following in place sorting algorithms needs the minimum number of swaps?
 Quick sort
 Insertion sort
 Selection sort
 Heap sort
Selection sort, because in selection sort the maximum number of swaps that can take place is $O(n)$:
we pick up an element, find the minimum (in case of forward sorting) from the next index to the end of the array, and then perform a single swap,
hence $O(n)$ swaps in total, whereas in all the other algorithms the number of swaps is greater (considering the worst-case scenario).
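This is easy to instrument; a sketch (the function name is mine) that counts swaps while sorting, showing at most one swap per outer pass and hence at most $n-1$ in total:

```c
#include <stddef.h>

/* Selection sort instrumented to count swaps: at most one swap per
   outer iteration, hence at most n-1 swaps total. Returns the count. */
size_t selection_sort(int *a, size_t n) {
    size_t swaps = 0;
    for (size_t i = 0; i + 1 < n; i++) {
        size_t min = i;
        for (size_t j = i + 1; j < n; j++)   /* find minimum of a[i..n-1] */
            if (a[j] < a[min]) min = j;
        if (min != i) {                      /* at most one swap per pass */
            int t = a[i]; a[i] = a[min]; a[min] = t;
            swaps++;
        }
    }
    return swaps;
}
```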
The median of $n$ elements can be found in $O(n)$ time. Which one of the following is correct about the complexity of quick sort, in which median is selected as pivot?
 $\Theta (n)$
 $\Theta (n \log n)$
 $\Theta (n^{2})$
 $\Theta (n^{3})$
To people having doubts about the time complexity when all input values are equal: I had the same doubt for a few days, and after spending some time on this question here is what I found.
 There are two standard partition procedures for quicksort. One is the Lomuto partition, given in the quicksort chapter of Cormen, in which the last element is chosen as the pivot and both $i$ and $j$ start from the starting index. The other is Hoare's partition, the original procedure given by Hoare, which appears in the problems section of Cormen; in it the first element is chosen as the pivot, but $i$ and $j$ start from opposite ends.
 With the Lomuto partition, no matter what pivot-choosing strategy you follow (median, middle, or anything else), the worst case is still $O(n^2)$, because an input of all-equal elements defeats it. But with Hoare's partition and a good pivot like the median, all-equal elements yield balanced partitions, so the worst case can be guaranteed to be $O(n \log n)$.
Here is a reference comparing Hoare's vs Lomuto partition: http://www.geeksforgeeks.org/hoaresvslomutopartitionschemequicksort/
In the two GATE questions asking for the time complexity when the pivot is the median (and, in the other, the $(n/4)$th smallest element), I think only Hoare's partition is being considered.
Which of the following sorting algorithms has the lowest worst-case complexity?

Merge sort

Bubble sort

Quick sort

Selection sort
A.
Irrespective of the input, merge sort always has a time complexity of $\Theta(n \log n)$.
Consider the Quicksort algorithm. Suppose there is a procedure for finding a pivot element which splits the list into two sublists each of which contains at least onefifth of the elements. Let $T(n)$ be the number of comparisons required to sort $n$ elements. Then

$T(n) \leq 2T(n/5) + n$

$T(n) \leq T(n/5) + T(4n/5) + n$

$T(n) \leq 2T(4n/5) + n$

$T(n) \leq 2T(n/2) + n$
$T(n) \leq T(n/5) + T(4n/5) + n$
One part contains $n/5$ elements
and the other part contains $4n/5$ elements
$+n$ is common to all options, so we need not to worry about it.
Hence, answer = option B
With $n/5$ elements on one side giving $T(n/5)$, and $4n/5$ on the other side giving $T(4n/5)$;
the $n$ term accounts for the pivot selection and partitioning.
If we use Radix Sort to sort $n$ integers in the range $\left (n^{k/2}, n^k \right ]$, for some $k > 0$ which is independent of $n$, the time taken would be?
 $\Theta(n)$
 $\Theta(kn)$
 $\Theta(n \log n)$
 $\Theta(n^2)$
Answer: C
The complexity of Radix Sort is $O(wn)$, for $n$ keys which are integers of word size $w$.
Here, $w = \log_2(n^k) = k \times \log_2(n)$
So, the complexity is $O(wn) = O(k \times \log_2(n) \times n)$, which leads to option C.
The time complexity of radix sort is $\Theta((n+b)\,d)$, where
$n$ = number of numbers,
$b$ = base of the number system,
$d$ = number of digits.
Here, let the base be $b$. The numbers are in the range $(n^{k/2}, n^k]$, so the number of digits required is $d = \log_b (n^k) = k \log_b n$.
Thus the time complexity is $(n+b) \cdot k \log_b n$. Since $b$ and $k$ are constants with respect to $n$, the time complexity is $\Theta(n \log n)$.
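A concrete LSD radix sort sketch with base $b = 256$ (one byte per digit; names mine), matching the $\Theta((n+b)\,d)$ analysis: each pass is a counting sort over one byte, and the number of passes grows with the number of digits of the largest key.

```c
#include <stdlib.h>
#include <string.h>

#define BASE 256u   /* digit = one byte */

/* LSD radix sort for non-negative 32-bit values: one stable
   counting-sort pass per byte. With d digits of base b the cost is
   Theta((n + b) * d); keys up to n^k need d = Theta(k log n / log b)
   digits, so for fixed b and k this is Theta(n log n). */
void radix_sort(unsigned *a, size_t n) {
    unsigned *buf = malloc(n * sizeof *buf);
    if (!buf) return;
    for (unsigned shift = 0; shift < 32; shift += 8) {
        size_t count[BASE + 1] = {0};
        for (size_t i = 0; i < n; i++)            /* histogram of this digit */
            count[((a[i] >> shift) & 0xFFu) + 1]++;
        for (size_t d = 0; d < BASE; d++)         /* prefix sums -> offsets */
            count[d + 1] += count[d];
        for (size_t i = 0; i < n; i++)            /* stable scatter */
            buf[count[(a[i] >> shift) & 0xFFu]++] = a[i];
        memcpy(a, buf, n * sizeof *a);
    }
    free(buf);
}
```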
What is the number of swaps required to sort $n$ elements using selection sort, in the worst case?

$\Theta(n)$

$\Theta(n \log n)$

$\Theta(n^2)$

$\Theta(n^2 \log n)$
The answer is A.
We have at most $1$ swap in each iteration of the outer loop, and hence at most $n-1$ swaps in total. Therefore the worst-case number of swaps is $\Theta(n)$.
In quicksort, for sorting $n$ elements, the $\left(n/4\right)^{th}$ smallest element is selected as pivot using an $O(n)$ time algorithm. What is the worst case time complexity of the quick sort?

$\Theta(n)$

$\Theta(n \log n)$

$\Theta(n^2)$

$\Theta(n^2 \log n)$
B.
$T(n)= O(n) \text{ pivot selection time} + T(n/4 - 1) + T(3n/4)$
which gives $\Theta\left(n \log n\right)$.
The pivot selection complexity is given in the question. The pivot being the $(n/4)$th smallest element, once it is found we have two subarrays, one of size $(n/4 - 1)$ and the other of size $(3n/4)$, and we solve both recursively.
A list of $n$ strings, each of length $n$, is sorted into lexicographic order using the mergesort algorithm. The worst case running time of this computation is
 $O (n \log n) $
 $ O(n^{2} \log n) $
 $ O(n^{2} + \log n) $
 $ O(n^{2}) $
Sorting the $n$ strings using only the first character of each takes $O(n \log n)$ time; in the worst case a comparison cannot be decided by one character, and we may have to repeat the process for $2, 3, \dots, n$ characters, so the total is $n \cdot O(n \log n)=O(n^2 \log n)$. Please correct me if my approach is wrong.
Answer is $O(n^2 \log n)$.
Alternatively, recall that merge sort has the recurrence form
$T(n) = aT(n/b) + O(n)$;
in the case of merge sort it is
$T(n) = 2T(n/2) + O(n)$ when there are $n$ elements.
But here the total input size is not $n$: it is $n$ strings of length $n$, i.e. $n^2$ characters. So in every recursion we break the $n \times n$ characters in half, as specified by the merge sort algorithm:
    MERGESORT(A, P, R)   // A is the array, P = first index = 1, R = last index; here R = n^2
      if P < R
        then Q = floor((P + R) / 2)
          MERGESORT(A, P, Q)
          MERGESORT(A, Q+1, R)
          MERGE(A, P, Q, R)
The MERGE(A, P, Q, R) procedure on an $N$-element subarray takes $O(N)$ time, but in our case the size is $N^2$. So the merge sort recurrence for this problem becomes
$T(N^2) = 2T(N^2/2) + O(N^2)$
We can substitute $K = N^2$ to solve the recurrence:
$T(K) = 2T(K/2) + O(K)$
By the master method, for $T(N) = AT(N/B) + O(N^d)$ with $A = B^d$ we get $T(N) = O(N^d \log N)$, so
$T(K) = O(K \log K)$
Substituting back $K = N^2$:
$T(N^2) = O(N^2 \log N^2) = O(2 N^2 \log N) = O(N^2 \log N)$
The number of elements that can be sorted in $\Theta(\log n)$ time using heap sort is
 $\Theta(1)$
 $\Theta(\sqrt{\log n})$
 $\Theta(\frac{\log n}{\log \log n})$
 $\Theta(\log n)$
To sort $k$ elements in a heap, complexity is $\Theta (k \log k)$. Lets assume there are $\frac{\log n}{\log \log n}$ elements in the heap.
Complexity $= \Theta\left(\frac{\log n}{\log \log n} \log\left(\frac{\log n}{\log \log n}\right)\right) $
$=\Theta \left( \frac{\log n}{\log \log n} \left(\log \log n - \log \log \log n\right) \right )$
$= \Theta \left ( \log n - \frac{ \log n \log \log \log n}{\log \log n} \right )$
$=\Theta (\log n)$ (as shown below)
So, (C) is the answer.
$\log \log n > \log \log \log n$
$\implies \frac{\log \log \log n}{\log \log n} < 1$
$\implies \frac{ \log n \log \log \log n}{\log \log n} < \log n$
$\implies \Theta \left ( \log n - \frac{ \log n \log \log \log n}{\log \log n} \right ) =\Theta (\log n)$
answer = option C
Which one of the following is the tightest upper bound that represents the number of swaps required to sort $n$ numbers using selection sort?
 $O(\log n)$
 $O(n)$
 $O(n \log n)$
 $O(n^{2})$
Let $P$ be quicksort program to sort numbers in ascending order using the first element as the pivot. Let $t_1$ and $t_2$ be the number of comparisons made by P for the inputs $[1 \ 2 \ 3 \ 4 \ 5]$ and $[4 \ 1 \ 5 \ 3 \ 2]$ respectively. Which one of the following holds?
 $t_1 = 5$
 $t_1 < t_2$
 $t_1>t_2$
 $t_1 = t_2$
The splitting occurs as
$[1] \ [2 \ 3 \ 4 \ 5]$
$[2] \ [3 \ 4 \ 5]$
$[3] \ [4 \ 5]$
$[4] \ [5]$
and
$[1 \ 3 \ 2] \ [4] \ [5]$
$[1] \ [3 \ 2]$
$[2] \ [3]$
The number of recursive calls remains the same, but in the second case fewer elements are passed to the recursive calls, and hence the number of comparisons is also smaller.
Question is asking about number of comparisons.
First case $[1 \ 2 \ 3 \ 4 \ 5]$:
$1 \ [2 \ 3 \ 4 \ 5] \to 4$ comparisons
$2 \ [3 \ 4 \ 5] \to 3$ comparisons
$3 \ [4 \ 5] \to 2$ comparisons
$4 \ [5] \to 1$ comparison
Second case $[4 \ 1 \ 5 \ 3 \ 2]$:
$4 \ [1 \ 3 \ 2] \ [5] \to 4$ comparisons
$1 \ [3 \ 2] \to 2$ comparisons
$3 \ [2] \to 1$ comparison
Hence, in the second case the number of comparisons is less, so $t_1 > t_2$.
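The comparison counts can be reproduced with an instrumented quicksort (names mine). Note that the exact value of $t_2$ depends on the partition scheme: with this Lomuto-style sketch $t_1 = 10$ and $t_2 = 6$ (the trace above gets $7$ with its scheme), and either way $t_1 > t_2$:

```c
#include <stddef.h>

/* Quicksort with the first element as pivot, counting key comparisons.
   Each partition of m elements compares the pivot with the other m-1. */
static size_t comparisons;

static void qs(int *a, size_t lo, size_t hi) {  /* sorts a[lo..hi-1] */
    if (hi - lo < 2) return;
    int pivot = a[lo];
    size_t store = lo;
    for (size_t i = lo + 1; i < hi; i++) {
        comparisons++;                          /* one comparison per element */
        if (a[i] < pivot) {
            store++;
            int t = a[store]; a[store] = a[i]; a[i] = t;
        }
    }
    int t = a[lo]; a[lo] = a[store]; a[store] = t;  /* pivot into place */
    qs(a, lo, store);
    qs(a, store + 1, hi);
}

size_t quicksort_comparisons(int *a, size_t n) {
    comparisons = 0;
    qs(a, 0, n);
    return comparisons;
}
```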
Merging two sorted lists of sizes $20$ and $24$ gives $44$ elements: $43$ comparisons.
Merging $30$ and $35$ gives $65$: $64$ comparisons.
Merging $44$ and $50$ gives $94$: $93$ comparisons.
Merging $65$ and $94$ gives $159$: $158$ comparisons.
So, a total of $43 + 64 + 93 + 158 = 358$ comparisons.
You have an array of $n$ elements. Suppose you implement quicksort by always choosing the central element of the array as the pivot. Then the tightest upper bound for the worst case performance is
 $O(n^2)$
 $O(n \log n)$
 $\Theta(n\log n)$
 $O(n^3)$
(A) $O(n^2)$ is the answer. When we choose the first element as the pivot, the worst case of quick sort comes if the input is sorted either in ascending or descending order. Now, when we choose the middle element as pivot, sorted input no longer gives worst case behavior. But, there will be some permutation of the input numbers which will be giving the same worst case behavior. For example,
$1 \ 2 \ 3 \ 4 \ 5 \ 6 \ 7$
This array gives worst case behavior for quick sort when the first element is pivot.
$6 \ 4 \ 2 \ 1 \ 3 \ 5 \ 7$
This array gives the worst case behavior of $O(n^2)$ if we take the middle element as the pivot: each split separates just the pivot from the remaining $n-1$ elements. Similarly, for any input we can find a permutation that behaves like this. So whichever element we take as pivot, it gives a worst case complexity of $O(n^2)$, as long as the pivot comes from a fixed position (not a random position as in randomized quick sort).
Which one of the following is the recurrence equation for the worst case time complexity of the quick sort algorithm for sorting $n$ ( $\geq $ 2) numbers? In the recurrence equations given in the options below, $c$ is a constant.
 $T(n) = 2 T (n/2) + cn$
 $T(n) = T ( n  1) + T(1) + cn$
 $T(n) = 2T ( n  1) + cn$
 $T(n) = T (n/2) + cn$
B.
Worst case for quick sort happens when $1$ element is on one list and $n1$ elements on another list.
Suppose you are provided with the following function declaration in the C programming language.
int partition(int a[], int n);
The function treats the first element of $a[\:]$ as a pivot and rearranges the array so that all elements less than or equal to the pivot are in the left part of the array, and all elements greater than the pivot are in the right part. In addition, it moves the pivot so that it is the last element of the left part. The return value is the number of elements in the left part.
The following partially given function in the C programming language is used to find the $k^{th}$ smallest element in an array $a[\:]$ of size $n$ using the partition function. We assume $k \leq n$.
int kth_smallest (int a[], int n, int k)
{
    int left_end = partition (a, n);
    if (left_end+1 == k) {
        return a[left_end];
    }
    if (left_end+1 > k) {
        return kth_smallest (___________);
    } else {
        return kth_smallest (___________);
    }
}
The missing arguments lists are respectively
 $(a,$ left_end$, k)$ and $(a+$left_end$+1, n-$left_end$-1, k-$left_end$-1)$
 $(a,$ left_end$, k)$ and $(a, n-$left_end$-1, k-$left_end$-1)$
 $(a,$ left_end$+1, n-$left_end$-1, k-$left_end$-1)$ and $(a,$ left_end$, k)$
 $(a, n-$left_end$-1, k-$left_end$-1)$ and $(a,$ left_end$, k)$
First of all, note that here the return value of partition effectively counts the elements smaller than the pivot: after the call, the pivot sits at index left_end, so it is the (left_end + 1)-th smallest element. Pivoting is just to narrow the search.
If left_end + 1 > k, the k-th smallest lies in the left part, which has left_end elements, and it is still the k-th smallest there, so we recurse with (a, left_end, k).
Otherwise it lies in the right part, which starts at a + left_end + 1 and has n - left_end - 1 elements; since left_end + 1 elements are already accounted for, we now want its (k - left_end - 1)-th smallest, so we recurse with (a + left_end + 1, n - left_end - 1, k - left_end - 1).
Ans will be (A)
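A completed sketch with option (A)'s arguments filled in (the partition body is my assumption, written to match the contract described above; it assumes distinct elements):

```c
/* Sketch of a partition matching the behaviour described above: the
   first element is the pivot; afterwards the pivot sits at index
   `store` with everything smaller to its left, and the return value
   counts the elements smaller than the pivot. (This body is an
   assumption; the question only gives the declaration.) */
int partition(int a[], int n) {
    int pivot = a[0], store = 0;
    for (int i = 1; i < n; i++)
        if (a[i] < pivot) {              /* move smaller elements left */
            store++;
            int t = a[store]; a[store] = a[i]; a[i] = t;
        }
    int t = a[0]; a[0] = a[store]; a[store] = t;  /* place the pivot */
    return store;
}

/* Option (A): same k in the left part; shifted k in the right part. */
int kth_smallest(int a[], int n, int k) {
    int left_end = partition(a, n);
    if (left_end + 1 == k)
        return a[left_end];              /* pivot is the k-th smallest */
    if (left_end + 1 > k)
        return kth_smallest(a, left_end, k);
    return kth_smallest(a + left_end + 1, n - left_end - 1, k - left_end - 1);
}
```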
Assume that a mergesort algorithm in the worst case takes $30$ seconds for an input of size $64$. Which of the following most closely approximates the maximum input size of a problem that can be solved in $6$ minutes?
 $256$
 $512$
 $1024$
 $2018$
For an input of size $64$, the algorithm takes $30s$. Therefore,
$$\begin{align}
k \times 64 \log_2{64} &= 30s\\[1em]
k \times 384 &= 30s\\[1em]
\implies k &= 0.078125s
\end{align}$$
Let the size of the problem that can be solved in $6$ minutes be $x$. Then,
$$k \times x \log_2 x = 360s$$
From this, we get:
$$\begin{align}
x \log_2 x &= \frac{360s}{0.078125s}\\[1em]
\implies x &= 512
\end{align}$$
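The arithmetic can be double-checked with the scaled model $T(n) = c \, n \log_2 n$ (the function name is mine; all sizes here are powers of two, so an integer $\log_2$ suffices):

```c
/* Model T(n) = c * n * log2(n), with c fixed by T(64) = 30 s.
   Returns the predicted time in seconds for input size n (a power
   of two in this sketch). */
double predicted_time(unsigned n) {
    unsigned lg = 0;
    for (unsigned m = n; m > 1; m >>= 1) lg++;   /* integer log2(n) */
    const double c = 30.0 / (64.0 * 6.0);        /* 30 s / (64 * log2 64) */
    return c * n * lg;
}
```

With this model, predicted_time(512) comes out to exactly $360$ seconds, i.e. $6$ minutes, confirming the choice of $512$.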
Assume that the algorithms considered here sort the input sequences in ascending order. If the input is already in the ascending order, which of the following are TRUE?
 Quicksort runs in $\Theta (n^2)$ time
 Bubblesort runs in $\Theta (n^2)$ time
 Mergesort runs in $\Theta (n)$ time
 Insertion sort runs in $\Theta (n)$ time
 I and II only
 I and III only
 II and IV only
 I and IV only
 Quicksort takes $\Theta(n^2)$ on an already sorted input (with the usual deterministic pivot). This is true.
 This is false. If no swap happens in a pass, bubble sort can stop after a single pass; $\Theta(n)$ is its best case.
 Merge sort never takes more than $\Theta(n \log n)$. This is false.
 This is true. Insertion sort finishes in $\Theta(n)$ time on sorted input.
Answer D. I and IV
Proof that bubble sort has best case $O(N)$:
Ref: https://en.wikipedia.org/wiki/Bubble_sort, Aduni lecture; see also The Art of Computer Programming, Sorting and Searching (Volume 3).
Quicksort running in $\Theta(N^2)$ on sorted input holds for the standard fixed-position pivot choices, which is why I) should be considered true, whereas bubble sort's best-case time is always $O(N)$ with the early-exit check. So D is a stronger answer than C.
Consider the Insertion Sort procedure given below, which sorts an array $L$ of size $n\left ( \geq 2 \right )$ in ascending order:
begin
    for xindex := 2 to n do
        x := L[xindex];
        j := xindex - 1;
        while j > 0 and L[j] > x do
            L[j + 1] := L[j];
            j := j - 1;
        end {while}
        L[j + 1] := x;
    end {for}
end
It is known that insertion sort makes at most $n (n  1) / 2$ comparisons. Which of the following is true?
 There is no input on which insertion Sort makes $n (n  1) / 2 $ comparisons.
 Insertion Sort makes $n (n  1) / 2$ comparisons when the input is already sorted in ascending order.
 Insertion Sort makes $n (n  1) / 2$ comparisons only when the input is sorted in descending order.
 There are more than one input orderings where insertion sort makes $n (n  1) / 2 $ comparisons.
 Insertion Sort makes $n (n  1) / 2$ comparisons whenever all the elements of $L$ are not distinct.
Take the reverse-sorted input $50 \ 40 \ 30 \ 20 \ 10$:
pass $1$: $40 \ 50 \ 30 \ 20 \ 10 \quad (1$ comparison$)$
pass $2$: $30 \ 40 \ 50 \ 20 \ 10 \quad (2$ comparisons$)$
$\vdots$
pass $n-1$: $10 \ 20 \ 30 \ 40 \ 50 \quad (n-1$ comparisons$)$
$1+2+3+\dots+(n-1) = n(n-1)/2$ comparisons
Let $S=\left \{ x_{1},....,x_{n} \right \}$ be a set of $n$ numbers. Consider the problem of storing the elements of $S$ in an array $A\left [ 1...n \right ]$ such that the following minheap property is maintained for all $2\leq i\leq n: A [\lfloor i/2\rfloor]\leq A[i]$. (Note that $\lfloor x\rfloor$ is the largest integer that is at most $x$). Which of the following statements is TRUE?
 This problem can be solved in $O(\log n)$ time.
 This problem can be solved in $O (n)$ time but not in $O(\log n)$ time.
 This problem can be solved in $O(n\log n)$ time but not in $O (n)$ time.
 This problem can be solved in $O \left ( n^{2} \right )$ time but not in $O(n\log n)$ time.
 None of the above.
Store the elements in an array and then call build_heap(A); build_heap takes $O(n)$ time. It cannot be done in $O(\log n)$ time, since just placing all $n$ elements takes $\Omega(n)$.
So, option B is correct.
If instead we build the heap by inserting the elements one by one, the total complexity becomes $O(n \log n)$, because each insertion takes $O(\log n)$ and inserting $n$ elements takes $O(n \log n)$.
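A bottom-up build_heap sketch (names mine), using the $1$-based indexing of the question so that the maintained invariant is exactly $A[\lfloor i/2 \rfloor] \leq A[i]$:

```c
#include <stddef.h>

/* Bottom-up heap construction: sift each internal node down, starting
   from the last one. Total cost is O(n), unlike n successive inserts,
   which cost O(n log n). Uses 1-based indexing; a[0] is unused. */
void build_min_heap(int *a, size_t n) {
    for (size_t i = n / 2; i >= 1; i--) {
        size_t j = i;
        for (;;) {                       /* sift a[j] down */
            size_t smallest = j, l = 2 * j, r = 2 * j + 1;
            if (l <= n && a[l] < a[smallest]) smallest = l;
            if (r <= n && a[r] < a[smallest]) smallest = r;
            if (smallest == j) break;
            int t = a[j]; a[j] = a[smallest]; a[smallest] = t;
            j = smallest;
        }
    }
}
```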
Given a set of $n=2^{k}$ distinct numbers, we would like to determine the smallest and the second smallest using comparisons. Which of the following statements is TRUE?
 Both these elements can be determined using $2k$ comparisons.
 Both these elements can be determined using $n - 2$ comparisons.
 Both these elements can be determined using $n + k - 2$ comparisons.
 $2n - 3$ comparisons are necessary to determine these two elements.
 $nk$ comparisons are necessary to determine these two elements.
Option (c) $n + k - 2$
Here is a nice explanation of the algorithm: http://www.codinghelmet.com/?path=exercises/twosmallest
This is a solution to a problem known for ages, to do with tennis tournaments. The question was: knowing the outcome of a tennis tournament, how can we tell which player was second best? The defeated finalist is a good candidate, but the other players defeated directly by the tournament winner are also candidates. So the solution is quite simple: once the tournament finishes, pick the $\log N$ competitors that were beaten by the tournament winner and hold a mini-tournament to find the best among them. If we imagine that better players correspond to smaller numbers, the algorithm goes like this: hold a tournament to find the smallest number (requires $N-1$ comparisons), and during this step, for each number, construct the list of numbers it was smaller than. Finally, take the list associated with the smallest number and find its minimum in $\log N - 1$ steps. This algorithm requires $N + \log N - 2$ comparisons to complete, but unfortunately it requires additional space proportional to $N$ (each element except the winner is ultimately added to some list); it also requires more time per step because of the relatively complex enlisting logic involved in each comparison. When this optimized algorithm is applied to the example array, we get the following figure.
The tournament held among the numbers promotes the value $1$ as the smallest. That operation, performed on an array of nine numbers, requires exactly eight comparisons. While promoting the smallest number, it has also flagged the four numbers that were removed from competition by direct comparison with the eventual winner: $6, 2, 3$ and $8$, in that order. A further sequence of three comparisons promotes $2$ as the second-smallest number in the array. This totals $11$ comparisons, while the naive algorithm requires $17$ comparisons to come up with the same result.
All in all, the algorithm that minimizes the number of comparisons looks to be good only for real tournaments, while number-crunching code should keep to the simple logic explained above. An implementation of the simple algorithm may look like this:
a – array containing n elements
min1 = a[0]   (candidate for the smallest value)
min2 = a[1]   (candidate for the second smallest value)
if min2 < min1
    min1 = a[1]
    min2 = a[0]
for i = 2 to n - 1
    if a[i] < min1
        min2 = min1
        min1 = a[i]
    else if a[i] < min2
        min2 = a[i]
(by Zoran Horvat @zoranh75)
The first $n$ cells of an array $L$ contain positive integers sorted in decreasing order, and the remaining $m  n$ cells all contain 0. Then, given an integer $x$, in how many comparisons can one find the position of $x$ in $L$?
 At least $n$ comparisons are necessary in the worst case.
 At least $\log m$ comparisons are necessary in the worst case.
 $O (\log (m  n))$ comparisons suffice.
 $O (\log n)$ comparisons suffice.
 $O (\log (m / n))$ comparisons suffice.
(d) $\mathbf{O(\log n)}$ comparisons suffice.
Since it is possible that $m \ggg n$, we need to restrict ourselves to the first $O(n)$ elements to perform the binary search.
We start with the first element (index $i=1$), and check if it is equal to $0$. If not, we double the value of $i$, and check again. We repeat this process until we hit a $0$.
i = 1; while(arr[i] != 0) i *= 2;
Once we hit a $0$, the largest possible value (worst case) of $i$ is $2n-2$. This happens if $n = 2^k + 1$ for some $k$: the second-to-last value of $i$ is $2^k$, and then we get $2^{k+1}$, which equals $2n - 2$.
Now that we've hit a $0$, and the array contains positive numbers in decreasing order, if $x$ is present in $L$, it must be in the first $i$ elements.
We can binary search the first $i$ elements in $O(\log i)$ comparisons.
Since the largest possible value of $i$ is $2n-2$, our algorithm takes $O \Bigl (\log (2n-2) \Bigr ) = O(\log n)$ comparisons.
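The doubling-then-binary-search idea can be sketched as follows (names mine; assumes $1$-based indexing and positive $x$, as in the question):

```c
#include <stddef.h>

/* Find x in L[1..m], where L[1..n] holds positive integers in strictly
   decreasing order and L[n+1..m] is all zeros (n unknown). Doubling
   locates a zero (or the end) within O(log n) probes; binary search
   over the decreasing prefix takes O(log n) more. Returns the 1-based
   index of x, or 0 if absent. */
size_t find_position(const int *L, size_t m, int x) {
    size_t i = 1;
    while (i <= m && L[i] != 0)        /* gallop until we hit a zero */
        i *= 2;
    size_t hi = (i <= m) ? i : m;      /* x, if present, lies in L[1..hi] */
    size_t lo = 1;
    while (lo <= hi) {                 /* binary search, decreasing order */
        size_t mid = lo + (hi - lo) / 2;
        if (L[mid] == x) return mid;
        if (L[mid] > x) lo = mid + 1;  /* larger values lie to the left */
        else hi = mid - 1;
    }
    return 0;
}
```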
An array $A$ contains $n$ integers. We wish to sort $A$ in ascending order. We are told that initially no element of $A$ is more than a distance $k$ away from its final position in the sorted list. Assume that $n$ and $k$ are large and $k$ is much smaller than $n$. Which of the following is true for the worst case complexity of sorting $A$?
 $A$ can be sorted with constant $\cdot \: kn$ comparisons but not with fewer comparisons.
 $A$ cannot be sorted with less than constant $\cdot \: n \log n$ comparisons.
 $A$ can be sorted with constant $\cdot \: n$ comparisons.
 $A$ can be sorted with constant $\cdot \: n \log k$ comparisons but not with fewer comparisons.
 $A$ can be sorted with constant $\cdot \: k^{2}n$ comparisons but not fewer.
Let the array be $\{4,3,2,1,7,5,6,9,10,8\}$ and $K$ be $3$; here no element is more than $3$ positions away from its final position.
So if we take $arr(1 \ to \ 6)$ and sort it, then surely the first three elements will be in their final positions: $\{1,2,3,4,5,7,6,9,10,8\}$, costing $O(6 \log 6)$.
Then sort $arr(3 \ to \ 9)$; now elements $3$ to $6$ are in their final positions: $\{1,2,3,4,5,6,7,9,10,8\}$, costing $O(6 \log 6)$.
Then, at last, sort $arr(6 \ to \ 9)$, costing less than $O(6 \log 6)$: $\{1,2,3,4,5,6,7,8,9,10\}$.
in general
Sort $arr(0 \ to \ 2k)$
Now we know that $arr[0 \ to \ k)$ are in their final sorted positions
and $arr(k \ to \ 2k)$ may not yet be sorted.
Sort $arr(k \ to \ 3k)$
Now we know that $arr[k \ to \ 2k)$ are in their final sorted positions
and $arr(2k \ to \ 3k)$ may not yet be sorted.
$\vdots$
sort till $arr(ik..N)$; in the final sort there will be fewer than $2k$ elements.
Each step takes $O(2k \log 2k)$,
and there will be $\frac{n}{k}$ steps, so the total is $O\left(\frac{n}{k} \cdot k \log k\right) = O(n \log k)$.
option D.
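The same $O(n \log k)$ bound can also be obtained with a min-heap of $k+1$ elements instead of overlapping sorts; a sketch (hypothetical helper `sort_k_sorted`):

```python
import heapq

def sort_k_sorted(a, k):
    """Each element of a is at most k positions from its sorted place.
    A min-heap of k + 1 elements always contains the next smallest value,
    so every element costs O(log k), giving O(n log k) overall."""
    heap = a[:k + 1]
    heapq.heapify(heap)
    out = []
    for x in a[k + 1:]:
        out.append(heapq.heappop(heap))  # smallest of the window is final
        heapq.heappush(heap, x)
    while heap:                          # drain the remaining window
        out.append(heapq.heappop(heap))
    return out
```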
Consider the quick sort algorithm on a set of $n$ numbers, where in every recursive subroutine of the algorithm, the algorithm chooses the median of that set as the pivot. Then which of the following statements is TRUE?
 The running time of the algorithm is $\Theta (n).$
 The running time of the algorithm is $\Theta (n\log n)$.
 The running time of the algorithm is $\Theta (n^{1.5})$.
 The running time of the algorithm is $\Theta (n^{2})$.
 None of the above.
The algorithm chooses the median, i.e., the $n/2$-th smallest element, as the pivot.
Hence, the array is divided as:
$$\large
\begin{array}{ccc}
\hline\\
\substack{\LARGE \left (\frac n 2 - 1 \right )\\[0.5em] \text{ elements}}
\quad&\quad
\substack{\text{Median at}\\[0.5em]\LARGE \frac n 2 \text{th}\\[0.5em]\text{location}}
\quad&\quad
\substack{\LARGE \left (n - \frac n 2 \right )\\[0.5em] \text{ elements}}
\quad
\\\\
\hline
\end{array}$$
Therefore quick sort recurrence relation is given by:
$$\begin{align}
T(n) &= T \left ( \frac n 2 - 1 \right ) + T \left ( n - \frac n 2 \right ) + \Theta(n)\\[1em]
&= \Theta ( n \log n)
\end{align}$$
Hence, Option B is the correct answer.
Suppose $n$ processors are connected in a linear array as shown below. Each processor has a number. The processors need to exchange numbers so that the numbers eventually appear in ascending order (the processor $\rm P1$ should have the minimum value and the processor $\rm Pn$ should have the maximum value).
The algorithm to be employed is the following. Odd numbered processors and even numbered processors are activated in alternate steps; assume that in the first step all the even numbered processors are activated. When a processor is activated, the number it holds is compared with the number held by its right-hand neighbour (if one exists); the smaller of the two numbers is retained by the activated processor and the bigger is stored in its right-hand neighbour.
How long does it take for the processors to sort the values?
 $n \log n$ steps
 $n^2$ steps
 $n$ steps
 $n^{1.5}$ steps
 The algorithm is not guaranteed to sort
Answer: C
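This odd-even transposition scheme can be simulated directly (a Python sketch with 0-based indexing; step $1$ activates the even-numbered processors $P_2, P_4, \dots$), and it finishes within $n$ steps:

```python
def odd_even_transposition(vals):
    """Simulate the linear-array algorithm: in odd-numbered steps processors
    P2, P4, ... compare with their right neighbour; in even-numbered steps
    P1, P3, ... do. Returns the number of steps until sorted (at most n)."""
    a = list(vals)
    n = len(a)
    for step in range(1, n + 1):
        # step 1 activates even-numbered processors (1-based), i.e. index 1
        start = 1 if step % 2 == 1 else 0
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]   # smaller stays left
        if a == sorted(vals):
            return step
    return n
```

Running it on a reversed array of length $n$ takes exactly $n$ steps, matching option C.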
Let $G$ be a weighted connected undirected graph with distinct positive edge weights. If every edge weight is increased by the same value, then which of the following statements is/are TRUE?
 $P$: Minimum spanning tree of $G$ does not change.
 $Q$: Shortest path between any pair of vertices does not change.
 $P$ only
 $Q$ only
 Neither $P$ nor $Q$
 Both $P$ and $Q$
Statement $P$ is true.
For statement $Q$ consider a simple graph with $3$ nodes.
|   | A | B | C |
|---|---|---|---|
| A | $0$ | $1$ | $100$ |
| B | $1$ | $0$ | $2$ |
| C | $100$ | $2$ | $0$ |

Shortest path from A to C is A-B-C $= 1 + 2 = 3$.
Now if the value of each edge is increased by $100$,
|   | A | B | C |
|---|---|---|---|
| A | $0$ | $101$ | $200$ |
| B | $101$ | $0$ | $102$ |
| C | $200$ | $102$ | $0$ |

The shortest path from A to C is now the direct edge A-C $= 200$ (A-B-C $= 101 + 102 = 203$).
Hence option A is correct.
P is true: since the edge weights are distinct and every edge cost increases by the same value, the relative order of the edge weights does not change, so the set of edges forming the MST remains intact.
Q is false; see the counterexample.
There, the shortest path from S to U was initially S-T-U, but after increasing every edge weight by $2$ it becomes the direct edge S-U.
Note that the minimum spanning tree edges remain the same: S-T and T-U.
So P is true and Q is false, and option A is the answer.
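The $3$-node counterexample can also be checked mechanically; a minimal Python sketch (helper names are illustrative, brute-forcing the tiny graph):

```python
from itertools import combinations

def mst(w):
    """Brute-force MST for a 3-vertex complete graph: any two distinct
    edges form a spanning tree, so just pick the cheapest pair."""
    return min(combinations(w, 2), key=lambda t: w[t[0]] + w[t[1]])

def shortest_a_to_c(w):
    """Only two simple A-to-C paths exist: direct, or via B."""
    return min(w[("A", "C")], w[("A", "B")] + w[("B", "C")])

w1 = {("A", "B"): 1, ("B", "C"): 2, ("A", "C"): 100}
w2 = {e: c + 100 for e, c in w1.items()}   # every weight increased by 100
```

The MST is the same edge set for `w1` and `w2`, while the A-to-C shortest path changes from $3$ to $200$.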
Graph $G$ can be like this:
Many people here haven't understood the question itself. Consider a complete graph of $4$ vertices. We have a total of $6$ edges of given weights, but we don't have the exact graph: many different graphs are possible, each with a different structure. Consider these two graphs; both of them are different. Since we don't know the exact structure, what the question wants is to find the MST of all such structures and, out of these, report the weight of the MST having maximum weight. The point that the MST of a graph with unique edge weights is unique holds for a given structure of the graph; with the same set of edge weights more than one graph is possible, and all of them can have different MSTs.
My solution: draw a complete graph of $4$ vertices and, just like in Kruskal's algorithm, sort the given edges by weight in increasing order. An MST of a graph with $4$ vertices and $6$ edges will have $3$ edges. In any case we will have to include the edges with weights $1$ and $2$, since they are the minimum and Kruskal's algorithm includes a minimum-weight edge whenever it does not form a cycle (and two edges alone cannot form a cycle). In Kruskal's algorithm, an edge is rejected if it forms a cycle with the edges already selected. To increase the weight of our MST we try to reject the edge with weight $3$; this can be done by making it form a cycle, as the graph in pic 1 shows. This implies the total weight of this MST will be $1+2+4 = 7$.
$G=(V, E)$ is an undirected simple graph in which each edge has a distinct weight, and $e$ is a particular edge of $G$. Which of the following statements about the minimum spanning trees $(MSTs)$ of $G$ is/are TRUE?
 If $e$ is the lightest edge of some cycle in $G$, then every MST of $G$ includes $e$.
 If $e$ is the heaviest edge of some cycle in $G$, then every MST of $G$ excludes $e$.
 I only.
 II only.
 Both I and II.
 Neither I nor II.
Statement 1: False, by the cut property of MSTs.
Counterexample: in the graph $G$ below, in the cycle $(3,4,5)$, $3$ is the lightest edge and still it is not included in the MST.
Statement 2: True, by the cycle property of MSTs. (In the graph $G$ above, $1$-$2$-$3$ is a cycle and $3$ is the heaviest edge.) If the heaviest edge lies on a cycle, we will always exclude it, because the cycle gives us an alternative choice of lower-cost edges.
So option B is the answer.
Must-visit links:
http://www.cs.princeton.edu/courses/archive/spr07/cos226/lectures/mst.pdf
http://www.cs.toronto.edu/~vassos/teaching/c73/handouts/cutproperty.pdf
Statement 2 is absolutely correct: if $e$ is the heaviest edge in a cycle, every MST excludes it.
Regarding statement 1, it is not fully right, I think. Consider a complete graph with $4$ vertices with edge weights $1, 2, 5, 6$ on the non-diagonal edges and $3$ and $4$ on the diagonals. The edges $4, 5, 6$ will create a cycle, and we can still exclude the lightest edge of that cycle ($4$) from an MST.
So I think the answer should be B.
Choose the correct alternatives (more than one may be correct) and write the corresponding letters only:
Kruskal’s algorithm for finding a minimum spanning tree of a weighted graph $G$ with $n$ vertices and $m$ edges has the time complexity of:
 $O(n^{2})$
 $O(mn)$
 $O(m+n)$
 $O(m \log n)$
 $O(m^2)$
Answer: D, B, E
When the union-find algorithm is used to detect cycles while constructing the MST, the time complexity is $O(m \log n)$, where $m$ is the number of edges and $n$ is the number of vertices. Since $\log n = O(n)$ and, for a connected graph, $n = O(m)$, options B and E are also correct, as big-O specifies only an asymptotic upper bound.
Ref: http://www.geeksforgeeks.org/greedyalgorithmsset2kruskalsminimumspanningtreemst/
If all edges are already sorted, then this problem reduces to the union-find problem on a graph with $E$ edges and $V$ vertices:

for each edge (u,v) in E
    if (FIND-SET(u) != FIND-SET(v))
        UNION(u,v)

FIND-SET$(v)$ and UNION$(u,v)$ run in $O(\alpha(V))$ amortized time,
where $\alpha(n)$ is the inverse Ackermann function, which grows even more slowly than $\log^*(n)$.
So the overall complexity becomes $O(E \cdot \alpha(V))$.
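A runnable sketch of this union-find Kruskal in Python (path compression plus union by rank; the names are illustrative):

```python
def kruskal(n, edges):
    """Kruskal's MST: sort edges by weight (O(m log m) = O(m log n)),
    then union-find, ~O(m alpha(n)) after sorting.
    edges: list of (weight, u, v) with vertices 0..n-1."""
    parent = list(range(n))
    rank = [0] * n

    def find(x):                       # FIND-SET with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(x, y):                   # UNION by rank
        rx, ry = find(x), find(y)
        if rx == ry:
            return False               # edge would form a cycle
        if rank[rx] < rank[ry]:
            rx, ry = ry, rx
        parent[ry] = rx
        if rank[rx] == rank[ry]:
            rank[rx] += 1
        return True

    total, chosen = 0, []
    for w, u, v in sorted(edges):
        if union(u, v):                # accept edge unless it forms a cycle
            total += w
            chosen.append((u, v))
    return total, chosen
```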
A complete, undirected, weighted graph $G$ is given on the vertex set $\{0, 1,\dots, n-1\}$ for any fixed $n$. Draw the minimum spanning tree of $G$ if

 the weight of the edge $(u, v)$ is $\mid u-v\mid$
 the weight of the edge $(u, v)$ is $u + v$
Consider a graph whose vertices are points in the plane with integer coordinates $(x,y)$ such that $1 \leq x \leq n$ and $1 \leq y \leq n$, where $n \geq 2$ is an integer. Two vertices $(x_1, y_1)$ and $(x_2, y_2)$ are adjacent iff $\mid x_1-x_2 \mid \leq 1 \text { and } \mid y_1 - y_2 \mid \leq 1$. The weight of an edge $\{(x_1, y_1), (x_2, y_2)\} \text{ is } \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$

What is the weight of a minimum weight spanning tree in this graph? Write only the answer without any explanations.

What is the weight of a maximum weight spanning tree in this graph? Write only the answer without any explanations.
In the first image , the cost of the edge between the vertices $(2,3)$ and $(2,2)=1$ and the cost of the edge between the vertices $(2,2)$ and $(2,1)=1$.
Let $G$ be an undirected connected graph with distinct edge weights. Let $e_{max}$ be the edge with maximum weight and $e_{min}$ the edge with minimum weight. Which of the following statements is false?
 Every minimum spanning tree of $G$ must contain $e_{min}$
 If $e_{max}$ is in a minimum spanning tree, then its removal must disconnect $G$
 No minimum spanning tree contains $e_{max}$
 $G$ has a unique minimum spanning tree
C is the false statement: to be true, it should say that $e_{max}$ "may or may not" be contained in a minimum spanning tree (for example, if $e_{max}$ is a bridge, every MST must contain it).
D will always be true, since the question says the graph has distinct weights.
Consider a weighted undirected graph with vertex set $V = \{ n1, n2, n3, n4, n5, n6 \} $ and edge set $E = \{(n1,n2,2), (n1,n3,8), (n1,n6,3), (n2,n4,4), (n2,n5,12), (n3,n4,7), (n4,n5,9), (n4,n6,4)\}$. The third value in each tuple represents the weight of the edge specified in the tuple.
 List the edges of a minimum spanning tree of the graph.
 How many distinct minimum spanning trees does this graph have?
 Is the minimum among the edge weights of a minimum spanning tree unique over all possible minimum spanning trees of a graph?
 Is the maximum among the edge weights of a minimum spanning tree unique over all possible minimum spanning tree of a graph?
a) edges with weights: $2, 3, 4, 7, 9$
b) number of distinct minimum spanning trees: $2$ (the second uses the other edge of weight $4$)
c) yes
d) yes
An undirected graph $G$ has $n$ nodes. Its adjacency matrix is given by an $n \times n$ square matrix whose (i) diagonal elements are $0$'s and (ii) non-diagonal elements are $1$'s. Which one of the following is TRUE?

Graph $G$ has no minimum spanning tree (MST)

Graph $G$ has unique MST of cost $n-1$

Graph $G$ has multiple distinct MSTs, each of cost $n-1$

Graph $G$ has multiple spanning trees of different costs
From the given data the graph is a complete graph with all edge weights $1$. An MST will contain $n-1$ edges; hence the weight of the MST is $n-1$.
The graph will have multiple MSTs. In fact, all spanning trees of the given graph will be MSTs, since all edge weights are equal.
Let $G$ be a weighted undirected graph and e be an edge with maximum weight in $G$. Suppose there is a minimum weight spanning tree in $G$ containing the edge $e$. Which of the following statements is always TRUE?
 There exists a cutset in $G$ having all edges of maximum weight.
 There exists a cycle in $G$ having all edges of maximum weight.
 Edge $e$ cannot be contained in a cycle.
 All edges in $G$ have the same weight.
Option $A$ is correct.
Questions says the MST of graph $G$ contain an edge $e$ which is a maximum weight edge in $G$. Need to choose the answer which is always true to follow the above constraint.
Case 1:
Option B says that if edge $e$ is in the MST, then there is surely a cycle having all edges of maximum weight. This is not always true: when the graph has only $n-1$ edges (and hence no cycle), the maximum-weight edge still has to be taken into the MST.
Case 2:
Option C says the opposite: that if $e$ is in the MST it cannot lie on any cycle. That is wrong, because if there is a cycle with all edges of maximum weight, $e$ can still be in the MST.
Option D says all edges must have the same weight. The same explanation applies: if there are $n-1$ edges of distinct weights (and no cycle) in $G$, we have to take all edges, including the maximum-weight one.
And at last, option A says that if $e$ is in the MST, then there is surely a cut-set (a minimal edge set whose removal disconnects the graph) in $G$ having all edges of maximum weight. This is true:
only then is a maximum-weight edge forced into the MST.
For example, if there are $n-1$ edges (and no cycle), then leaving edge $e$ out of the MST would leave it disconnected.
Option A is correct.
Here, in this example, we can easily see that B, C, and D are false, so B, C, D are not always true.
That is why A is always true. A (Ans)
Consider a weighted complete graph $G$ on the vertex set $\{v_1,v_2,\dots,v_n\}$ such that the weight of the edge $(v_i, v_j)$ is $2\mid i-j \mid$. The weight of a minimum spanning tree of $G$ is:
 $n-1$
 $2n-2$
 $\begin{pmatrix} n \\ 2 \end{pmatrix}$
 $n^2$
Connecting every pair of consecutive nodes $i$ and $i+1$ uses an edge of weight $2$, and there are $n-1$ such edges, therefore we get $2(n-1)$.
Consider the following graph:
Which one of the following cannot be the sequence of edges added, in that order, to a minimum spanning tree using Kruskal’s algorithm?
 $(ab),(df),(bf),(dc),(de)$
 $(ab),(df),(dc),(bf),(de)$
 $(df),(ab),(dc),(bf),(de)$
 $(df),(ab),(bf),(de),(dc)$
In Kruskal's algorithm the edges are added in non-decreasing order of their weight. But in option D, edge $(de)$ with weight $3$ is added before edge $(dc)$ with weight $2$. Hence option D is the answer.
Let $w$ be the minimum weight among all edge weights in an undirected connected graph. Let $e$ be a specific edge of weight $w$. Which of the following is FALSE?

There is a minimum spanning tree containing $e$

If $e$ is not in a minimum spanning tree $T$, then in the cycle formed by adding $e$ to $T$, all edges have the same weight.

Every minimum spanning tree has an edge of weight $w$

$e$ is present in every minimum spanning tree
D is the false statement.
A minimum spanning tree must have the edge with the smallest weight (In Kruskal's algorithm we start from the smallest weight edge). So, C is TRUE.
If $e$ is not part of a minimum spanning tree, then all edges which are part of a cycle with $e$ must have weight $\leq w(e)$, as otherwise we could exchange that edge with $e$ and get another spanning tree of lower weight. So, B and A are also TRUE.
For the undirected, weighted graph given below, which of the following sequences of edges represents a correct execution of Prim's algorithm to construct a Minimum Spanning Tree?
 (a, b), (d, f), (f, c), (g, i), (d, a), (g, h), (c, e), (f, h)
 (c, e), (c, f), (f, d), (d, a), (a, b), (g, h), (h, f), (g, i)
 (d, f), (f, c), (d, a), (a, b), (c, e), (f, h), (g, h), (g, i)
 (h, g), (g, i), (h, f), (f, c), (f, d), (d, a), (a, b), (c, e)
Prim's algorithm starts from any vertex and expands the MST by adding, in each step, the vertex closest to the intermediate MST (the tree built up to the previous step).
Therefore the correct answer is (C).
(A): $(d,f)$ is chosen, but neither $d$ nor $f$ is part of the MST built so far.
(B): $(g,h)$ is chosen, but neither $g$ nor $h$ is part of the MST built so far.
(D): $(f,c)$ is chosen, but at that point $(f,d)$ is closer to the intermediate MST.
Consider the following graph:
Which one of the following is NOT the sequence of edges added to the minimum spanning tree using Kruskal’s algorithm?

(b, e) (e, f) (a, c) (b, c) (f, g) (c, d)

(b, e) (e, f) (a, c) (f, g) (b, c) (c, d)

(b, e) (a, c) (e, f) (b, c) (f, g) (c, d)

(b, e) (e, f) (b, c) (a, c) (f, g) (c, d)
In option D, $(bc)$ with weight $4$ is added before $(ac)$ with weight $3$. In Kruskal's algorithm edges must be added in non-decreasing order of weight.
So option D is the answer.
Consider a complete undirected graph with vertex set $\{0, 1, 2, 3, 4\}$. Entry $W_{ij}$ in the matrix $W$ below is the weight of the edge $\{i, j\}$
$$W=\begin{pmatrix} 0 & 1 & 8 & 1 & 4 \\ 1 & 0 & 12 & 4 & 9 \\ 8 & 12 & 0 & 7 & 3 \\ 1 & 4 & 7 & 0 & 2 \\ 4 & 9 & 3 & 2 & 0 \end{pmatrix}$$
What is the minimum possible weight of a spanning tree $T$ in this graph such that vertex 0 is a leaf node in the tree $T$?
 $7$
 $8$
 $9$
 $10$
Answer is (D) $10$. The edges of the spanning tree are: $0-1$, $1-3$, $3-4$, $4-2$. Total weight $= 10$
Consider a complete undirected graph with vertex set $\{0, 1, 2, 3, 4\}$. Entry $W_{ij}$ in the matrix $W$ below is the weight of the edge $\{i, j\}$
$$W=\begin{pmatrix} 0 & 1 & 8 & 1 & 4 \\ 1 & 0 & 12 & 4 & 9 \\ 8 & 12 & 0 & 7 & 3 \\ 1 & 4 & 7 & 0 & 2 \\ 4 & 9 & 3 & 2 & 0 \end{pmatrix}$$
What is the minimum possible weight of a path $P$ from vertex $1$ to vertex $2$ in this graph such that $P$ contains at most $3$ edges?
 $7$
 $8$
 $9$
 $10$
Answer is (B) $8$. The possible path is: $1-0$, $0-4$, $4-2$.
For a quick answer, just draw a tree of depth $3$ from vertex $1$ and compare the various paths. Although this is brute force, it will not take much time.
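That brute-force check can be scripted directly from the matrix $W$ given in the question; a minimal sketch (the helper name is illustrative):

```python
# Weight matrix from the question (W[i][j] = weight of edge {i, j}).
W = [[0, 1, 8, 1, 4],
     [1, 0, 12, 4, 9],
     [8, 12, 0, 7, 3],
     [1, 4, 7, 0, 2],
     [4, 9, 3, 2, 0]]

def min_path_upto_3_edges(W, s, t):
    """Brute force: enumerate every simple path s -> t with 1, 2 or 3 edges."""
    n = len(W)
    best = W[s][t]                                  # 1-edge path
    others = [v for v in range(n) if v not in (s, t)]
    for a in others:
        best = min(best, W[s][a] + W[a][t])         # 2-edge paths
        for b in others:
            if b != a:
                best = min(best, W[s][a] + W[a][b] + W[b][t])  # 3-edge paths
    return best
```

For vertex $1$ to vertex $2$ this finds the path $1-0-4-2$ of weight $1+4+3=8$.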
An undirected graph $G(V,E)$ contains $n \: (n>2)$ nodes named $v_1,v_2, \dots, v_n$. Two nodes $v_i, v_j$ are connected if and only if $ 0 < \mid i-j\mid \leq 2$. Each edge $(v_i,v_j)$ is assigned a weight $i+j$. A sample graph with $n=4$ is shown below.
What will be the cost of the minimum spanning tree (MST) of such a graph with $n$ nodes?
 $\frac{1}{12} (11n^2 - 5 n)$
 $n^2-n+1$
 $6n-11$
 $2n+1$
Answer is B
$\text{ We observe a pattern in the weight of MST being formed }$
$\text{ For n=3 } (1+2+3)+(1)$
$\text{ For n=4 } (1+2+3+4)+(1+2)$
$\text{ For n=5 } (1+2+3+4+5)+(1+2+3)$
$\text{ These can be obtained by drawing graphs for these graphs. }$
$\therefore \text{ Total weight of MST is } \sum_{i=1}^{n}i+\sum_{i=1}^{n-2}i=n^2-n+1\\$
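As a sanity check, a small Kruskal run over the question's graph (edge $(v_i, v_j)$ of weight $i+j$ whenever $0 < |i-j| \leq 2$) reproduces $n^2-n+1$; a Python sketch:

```python
def mst_cost(n):
    """Build the graph on v1..vn with an edge (vi, vj) of weight i + j
    whenever 0 < |i - j| <= 2, then run Kruskal's algorithm."""
    edges = sorted((i + j, i, j)
                   for i in range(1, n + 1)
                   for j in range(i + 1, min(i + 2, n) + 1))
    parent = list(range(n + 1))          # union-find over vertices 1..n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    total = 0
    for w, u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:                     # accept edge unless it forms a cycle
            parent[ru] = rv
            total += w
    return total
```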
An undirected graph $G(V,E)$ contains $n \: (n>2)$ nodes named $v_1,v_2, \dots, v_n$. Two nodes $v_i, v_j$ are connected if and only if $ 0 < \mid i-j\mid \leq 2$. Each edge $(v_i,v_j)$ is assigned a weight $i+j$. A sample graph with $n=4$ is shown below.
The length of the path from $v_5$ to $v_6$ in the MST of previous question with $n=10$ is
 $11$
 $25$
 $31$
 $41$
Above is the graph....
Below is the MST.
Length of the path from $v_5$ to $v_6= 8+4+3+6+10= 31$ (Ans)
There is no direct edge between $v_i$ and $v_{i+1}$ in the MST except between $v_1$ and $v_2$, so the path from $v_i$ to $v_{i+1}$ includes all the MST edges from $v_1$ up to $v_{i+1}$.
Edge weights in the MST:
$= 3 + 4 + 6 + 8 + 10 + 12 + \dots + 2(n-1)$
$= 1 + (2 + 4 + 6 + \dots \text{ till } n-1 \text{ terms})$
$= 1 + 2(1 + 2 + 3 + 4 + \dots + (n-1))$
$= 1 + (n-1) \cdot n = n^2 - n + 1$
In this case, for $v_6$: $6^2 - 6 + 1 = 31$
Let $G$ be a weighted graph with edge weights greater than one and $G'$ be the graph constructed by squaring the weights of edges in $G$. Let $T$ and $T'$ be the minimum spanning trees of $G$ and $G'$, respectively, with total weights $t$ and $t'$. Which of the following statements is TRUE?
 $T' = T$ with total weight $t' = t^2 $
 $T' = T$ with total weight $t' < t^2$
 $T' \neq T$ but total weight $t' = t^2$
 None of the above
$t' < t^2$, because the sum of the squares is always less than the square of the sum, except in the single-element case.
Hence, B is the answer in general, and A is also true for a single-edge graph. Hence, in GATE 2012, marks were given to all.
The number of distinct minimum spanning trees for the weighted graph below is _____
$6$ is the answer.
The graph shown below has 8 edges with distinct integer edge weights. The minimum spanning tree (MST) is of weight $36$ and contains the edges: $\{(A, C), (B, C), (B, E), (E, F), (D, F)\}$. The edge weights of only those edges which are in the MST are given in the figure shown below. The minimum possible sum of weights of all $8$ edges of this graph is_______________.
Similarly, for the cycle $DEF, ED > 6$.
And for the cycle $BCDE, CD > 15$.
So, minimum possible sum of these will be $10 + 7 + 16 = 33$. Adding the weight of spanning tree, we get the total sum of edge weights
$= 33 + 36 = 69$
The minimum possible sum of weights of all 8 edges of this graph is 69..
First compare $(A,C)$ and $(A,B)$: we find $9$ on $(A,C)$, which means $(A,B)$ must be greater than $(A,C)$, and the minimum possible integer value greater than $9$ is $10$. So first we conclude $10$.
After that we again have a conflicting pair, $(B,E)$ and $(C,D)$: since $(B,E) = 15$ was selected, the minimum possible weight for $(C,D)$ is $16$.
Now for $(E,D)$ and $(F,D)$: since $(F,D) = 6$ was selected, $(E,D)$ must be greater than $6$, so its minimum possible value is $7$.
So the missing weights are:
$(A,B) = 10$
$(C,D) = 16$
$(E,D) = 7$
An MST has $n-1$ edges, where $n$ is the number of vertices: $100-1 = 99$ edges.
Each of the $99$ MST edges increases by $5$, so the MST weight increases by $99 \times 5 = 495$.
Now the total weight of the MST $= 500 + 495 = 995$.
In a connected weighted graph with $n$ vertices, all the edges have distinct positive integer weights. Then, the maximum number of minimum weight spanning trees in the graph is
 $1$
 $n$
 equal to number of edges in the graph.
 equal to maximum weight of an edge of the graph.
 $n^{n2}$
There will be a unique minimum weight spanning tree, since all weights are distinct.
Option A.
Consider the following undirected graph with some edge costs missing.
Suppose the wavy edges form a Minimum Cost Spanning Tree for $G$. Then, which of the following inequalities NEED NOT hold?
 cost$(a, b) \geq 6$.
 cost$(b, e) \geq 5$.
 cost$(e, f) \geq 5$.
 cost$(a, d) \geq 4$.
 cost$(b, c) \geq 4$.
Now check this diagram: it is the forest obtained from the given graph using Kruskal's algorithm for the MST.
According to the question, edge $(d,e)$ has weight $5$ and it is included in the MST. Now if edges $(b,e)$ and $(e,f)$ have weights greater than $5$, that is no problem for our MST: we still get the given tree, because Kruskal's algorithm takes the smallest-weight edge that does not form a cycle.
Cost of edge $(b,c) \geq 4$ may also lead us to the same tree as above, though Kruskal's algorithm will then have a choice between $(c,f)$ and $(b,c)$.
Now, if the edge weight of $(a,d)$ became $4$, it is guaranteed that Kruskal's algorithm would not select edge $(d,e)$, because its cost is $5$, and hence the tree structure would change. But there can be a case where the edge weight is greater than $4$ and we still get the same tree (this happens when $(a,d) \geq 5$). Because the question asks us to point out an unnecessary condition, this case is not the answer: we actually need $(a,d) \geq 5$, which implies $(a,d) \geq 4$.
Now notice option A. Put $(a,b) = 5$: the given MST would not change. So this condition is not always necessary, and hence it is the answer.
Therefore option A is the answer.
Let $G = (V,E)$ be an undirected connected simple (i.e., no parallel edges or selfloops) graph with the weight function $w: E \rightarrow \mathbb{R}$ on its edge set. Let $w(e_{1}) < w(e_{2}) < · · · < w(e_{m})$, where $E = \left\{e_{1}, e_{2}, . . . , e_{m}\right\}$. Suppose $T$ is a minimum spanning tree of $G$. Which of the following statements is FALSE?
 The tree $T$ has to contain the edge $e_{1}$.
 The tree $T$ has to contain the edge $e_{2}$.
 The minimum weight edge incident on each vertex has to be present in $T$.
 $T$ is the unique minimum spanning tree in $G$.
 If we replace each edge weight $w_{i} = w(e_{i})$ by its square $w^{2}_{i}$ , then $T$ must still be a minimum spanning tree of this new instance.
Answer is E. The catch here is that the edge weights belong to the real numbers, therefore an edge weight can be negative. In that case squaring can reorder the edges, and the minimum spanning tree may be different.
EDIT
(Here every edge weight is distinct, therefore MST is unique. You do it using any algo.)
Option A is true: if we apply Kruskal's algorithm it will choose $e_1$.
Option B is true: Kruskal's algorithm will also choose $e_2$, since two edges cannot form a cycle. ($e_3$ is not guaranteed to be in the MST, as it may form a cycle.)
Option C is also true: if we run Prim's algorithm starting from any vertex $u$, it chooses the minimum-weight edge incident on $u$.
Option D is true, because every edge weight is distinct.
Consider the following undirected connected graph $G$ with weights on its edges as given in the figure below. A minimum spanning tree is a spanning tree of least weight and a maximum spanning tree is one with largest weight. A second best minimum spanning tree whose weight is the smallest among all spanning trees that are not minimum spanning trees in $G$.
Which of the following statements is TRUE in the above graph? (Note that all the edge weights are distinct in the above graph)
 There is more than one minimum spanning tree and similarly, there is more than one maximum spanning tree here.
 There is a unique minimum spanning tree, however there is more than one maximum spanning tree here.
 There is more than one minimum spanning tree, however there is a unique maximum spanning tree here.
 There is more than one minimum spanning tree and similarly, there is more than one secondbest minimum spanning tree here.
 There is unique minimum spanning tree, however there is more than one secondbest minimum spanning tree here.
In the graph all edge weights are distinct, so we get a unique minimum and a unique maximum spanning tree.
Every cycle must exclude its maximum-weight edge in the minimum spanning tree.
Here we have two cycles of $3$ edges: $(a,d,e)$ and $(c,g,k)$.
For a second-best minimum spanning tree: exclude edge $(ae)$ and include edge $(de)$.
The other way: exclude edge $(cg)$ and include edge $(gk)$.
So E should be the answer.
How many times is the comparison $i >= n$ performed in the following program?
int i = 85, n = 5;
main()
{
    while (i >= n) {
        i = i - 1;
        n = n + 1;
    }
}
 $40$
 $41$
 $42$
 $43$
Correct answer is C) $42$
It will start comparisons from $(85, 5), (84, 6), (83, 7), \dots, (45, 45), (44, 46)$
Hence the total number of comparisons will be $\mathbf{(85-44) + 1 = 41 + 1 = 42}$
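A direct simulation of the loop (counting every evaluation of the guard, including the final failing one) confirms $42$; a Python sketch:

```python
def count_comparisons(i=85, n=5):
    """Count how often the loop guard i >= n is evaluated,
    including the final test that fails and exits the loop."""
    count = 0
    while True:
        count += 1            # one evaluation of the guard
        if not (i >= n):
            break
        i -= 1
        n += 1
    return count
```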
Match the pairs in the following questions:
|   |   |
|---|---|
| (A) $O (\log n)$ | (p) Heapsort |
| (B) $O(n)$ | (q) Depth-first search |
| (C) $O (n \log n)$ | (r) Binary search |
| (D) $O (n^{2})$ | (s) Selection of the $k^{th}$ smallest element in a set of $n$ elements |
|   |   |
|---|---|
| (A) $O(\log n)$ | (r) Binary search |
| (B) $O(n)$ | (s) Selection of the $k^{th}$ smallest element in a set of $n$ elements (worst case) |
| (C) $O(n \log n)$ | (p) Heapsort |
| (D) $O(n^2)$ | (q) Depth-first search (it is $O(n+m)$ if the graph is given as an adjacency list, but with an adjacency matrix the complexity is $O(n \cdot n)$, as we have to traverse a whole row to find the edges) |
PS: the $k^{th}$ smallest element can be found in $O(n)$ time using a partitioning algorithm:
$T(n) = T(n/2) + n$ (for finding the $k^{th}$ smallest element)
By the Master theorem this is $O(n)$.
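A sketch of that partitioning (quickselect) approach in Python; with a random pivot the expected time is $O(n)$, matching the recurrence above (the worst case is $O(n^2)$; a median-of-medians pivot would make the worst case linear):

```python
import random

def quickselect(a, k):
    """k-th smallest (1-based) via repeated partitioning."""
    a = list(a)
    lo, hi = 0, len(a) - 1
    k -= 1                                  # 0-based target index
    while lo < hi:
        pivot = a[random.randint(lo, hi)]
        i, j = lo, hi
        while i <= j:                       # Hoare-style partition
            while a[i] < pivot: i += 1
            while a[j] > pivot: j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i += 1
                j -= 1
        if k <= j:                          # recurse only into one side
            hi = j
        elif k >= i:
            lo = i
        else:
            return a[k]                     # k landed between the partitions
    return a[lo]
```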
$\displaystyle \sum_{1\leq k\leq n} O(n)$, where $O(n)$ stands for order $n$ is:
 $O(n)$
 $O(n^2)$
 $O(n^3)$
 $O(3n^2)$
 $O(1.5n^2)$
This is $N$ added to itself $N$ times, so it is $N^2$. Even if you consider it as the sum $O(1) + O(2) + \dots + O(N-1) + O(N)$, it adds up to $N^2$.
So the answer is:
A) $O(N)$ — this is false.
B, C, D, E) All of these are true. We have $N^2$ here, so all options apart from A are correct.
In fact B = D = E; these three options are the same. And $N^3$ is always an upper bound of $N^2$, so $O(N^3)$ is also true.
PS: $\sum_{k=1}^{n} O(k)$ is never equal to $n$ different functions; it always denotes one single function.
It can be written as $c \cdot 1 + c \cdot 2 + c \cdot 3$ and so on, which results in $O(N^2)$ (source: Cormen)
Suppose we want to arrange the $n$ numbers stored in any array such that all negative values occur before all positive ones. Minimum number of exchanges required in the worst case is
 $n1$
 $n$
 $n+1$
 None of the above
Answer is (D) None of these
We require only $n/2$ swaps in the worst case. The algorithm is as given below:
find a positive number from the left side and a negative number from the right side, and exchange them. Since at least one of the two counts (of positives or of negatives) must be at most $n/2$, there cannot be more than $n/2$ exchanges. An implementation is given below:
http://gatecse.in/wiki/Moving_Negative_Numbers_to_the_Beginning_of_Array
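A sketch of that two-pointer exchange in Python (a hypothetical helper `partition_negatives`; it also counts the exchanges to show the $\leq n/2$ bound):

```python
def partition_negatives(a):
    """Two-pointer exchange: scan for a positive on the left and a negative
    on the right and swap them; at most floor(n/2) exchanges occur."""
    a = list(a)
    i, j, swaps = 0, len(a) - 1, 0
    while i < j:
        while i < j and a[i] < 0:          # already in place on the left
            i += 1
        while i < j and a[j] >= 0:         # already in place on the right
            j -= 1
        if i < j:
            a[i], a[j] = a[j], a[i]
            swaps += 1
    return a, swaps
```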
If $n$ is a power of 2, then the minimum number of multiplications needed to compute $a^n$ is
 $\log_2 n$
 $\sqrt n$
 $n1$
 $n$
$a^n = (a^2)^{\frac{n}{2}}$.
One multiplication and recurrence on $\frac{n}{2}$. So, we get the recurrence relation for the number of multiplications as
$T(n) = T(n/2) + 1$.
This gives $T(n) = \log_2 n$
For n = 8, we can do
$b = a \times a$
$b = b\times b$
$b = b\times b$ and we get $b = a^8$
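The repeated-squaring idea can be written out as a Python sketch that also counts multiplications (assumes $n$ is a power of two):

```python
def power_by_squaring(a, n):
    """For n a power of two, square log2(n) times: a^n = ((a^2)^2)...^2.
    Returns (a**n, number of multiplications performed)."""
    result, mults = a, 0
    while n > 1:
        result *= result     # one multiplication per halving of n
        mults += 1
        n //= 2
    return result, mults
```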
Consider the following algorithms. Assume, procedure $A$ and procedure $B$ take $O (1)$ and $O(1/n)$ unit of time respectively. Derive the time complexity of the algorithm in $O$notation.
algorithm what(n)
begin
    if n = 1 then
        call A
    else begin
        what(n - 1);
        call B(n)
    end
end
$T(n) = T(n-1) + \frac{1}{n} + c$ ($O(1/n)$ replaced with $1/n$, and so our answer will also be in $O$ only and not $\Theta$).
$T(n) = T(n-2) + \frac{1}{n-1} + \frac{1}{n} + 2c \\= T(1) + \frac{1}{2} + \frac{1}{3}+ \dots + \frac{1}{n} + (n-1) c \\= 1 +\frac{1}{2} + \frac{1}{3} +\dots + \frac{1}{n} + (n-1)c \qquad (\text{taking } T(1), \text{ the cost of calling } A, \text{ as } 1)\\= \Theta(\log n) + (n-1)c$
(Sum of the first $n$ terms in harmonic series is $\Theta(\log n)$)
So, our time complexity will be $O(n)$.
Let $S$ be a sorted array of $n$ integers. Let $T(n)$ denote the time taken for the most efficient algorithm to determined if there are two elements with sum less than $1000$ in $S$. Which of the following statement is true?
 $T (n)$ is $O(1)$
 $n \leq T(n) \leq n \log_2 n$
 $n \log_2 n \leq T(n) < \binom{n}{2}$
 $T(n) = \binom{n}{2}$
Option A. Because the array is always sorted, just check the sum of the first two elements: they are the two smallest, so if any pair has sum less than $1000$, this pair does.
The cube root of a natural number $n$ is defined as the largest natural number $m$ such that $(m^3 \leq n)$ . The complexity of computing the cube root of $n$ ($n$ is represented by binary notation) is
 $O(n)$ but not $O(n^{0.5})$
 $O(n^{0.5})$ but not $O((\log n)^k)$ for any constant $k>0$
 $O((\log n)^k)$ for some constant $k>0$, but not $O( (\log \log n)^m)$ for any constant $m>0$
 $O( (\log \log n)^k )$ for some constant $k > 0.5$, but not $O( (\log \log n)^{0.5} )$
Now, a number is represented in binary using $\log n$ bits. Since each bit is important in finding the cube root, any cube root finding algorithm must examine each bit at least once. This ensures that the complexity of a cube root finding algorithm cannot be lower than $\log n$. (It must be $\Omega \left( \log n \right)$). So, (D) is also false and (C) is the correct answer.
Two matrices $M_1$ and $M_2$ are to be stored in arrays $A$ and $B$ respectively. Each array can be stored either in rowmajor or columnmajor order in contiguous memory locations. The time complexity of an algorithm to compute $M_1 \times M_2$ will be

best if $A$ is in rowmajor, and $B$ is in columnmajor order

best if both are in rowmajor order

best if both are in columnmajor order

independent of the storage scheme
D is correct.
Here time complexity is asked, and each access of an array element takes constant time,
so the time complexity does not depend upon the storage scheme. If instead the actual program execution time were asked,
(a) would be true.
Let $A[1, …n]$ be an array storing a bit ($1$ or $0$) at each location, and $f(m)$ is a function whose time complexity is $\Theta(m)$. Consider the following program fragment written in a C like language:
counter = 0;
for (i = 1; i <= n; i++) {
    if (a[i] == 1)
        counter++;
    else {
        f(counter);
        counter = 0;
    }
}
The complexity of this program fragment is

$\Omega(n^2)$

$\Omega (n\log n) \text{ and } O(n^2)$

$\Theta(n)$

$o(n)$
The key part in the code is "$counter = 0$" in the else part as we can see below.
Let's take the best case. This happens when $a[i] = 1$ for all $i$: the loop then takes $\Theta(1)$ per iteration, giving an overall time complexity of $\Theta(n)$. So the time complexity of the code fragment is $\Omega(n)$, and options A and B are false.
Now, consider the worst case. This happens when $a[i] = 0$ and the else part is executed. Here, the time complexity of such an iteration is $\Theta(counter)$, and after each else, counter is reset to $0$. Let $k$ iterations go to the else part in the worst case. Then the worst case time complexity will be $\Theta(x_1) + \Theta(x_2) + \dots +\Theta(x_k) + \Theta(n-k)$, where $x_i$ is the value of the counter when $a[i] = 0$ and $f(counter)$ is called. But due to $counter = 0$ after each call to $f()$, we have $x_1 + x_2 + \dots + x_k \leq n$. So, $\Theta(x_1) + \Theta(x_2) + \dots +\Theta(x_k) + \Theta(n-k) = \Theta(n) + \Theta(n-k) = \Theta(n)$.
Since the time complexity is both $\Omega(n)$ and $O(n)$, it is $\Theta(n)$, i.e., option (C). (Option D is false because little $o$ requires the growth rate to be STRICTLY lower, not lower-or-equal as is the case for big $O$.)
If $counter = 0$ were not there in the else part, then the time complexity would be $\Omega(n)$ and $O(n^2)$: in the worst case we can have an equal number of $0$'s and $1$'s alternating in array $a$, so the counter keeps growing and the calls cost $\Theta(1) + \Theta(2) + \dots + \Theta(n/2)$, which is $O(n^2)$.
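The amortized argument can be made concrete with a small instrumented sketch (the cost accounting and names are ours): we charge $1$ unit per loop iteration plus $m$ units for each call $f(m)$, and check that the total stays linear in $n$.

```c
#include <assert.h>

/* Our own sketch: charge 1 unit per iteration plus m units per call f(m).
 * Each f(counter) cost is "paid for" by the 1's counted since the last
 * reset, so the total work is at most 2n. */
static long total_work = 0;

static void f(int m) { total_work += m; }  /* stands in for the Theta(m) function */

long run_fragment(const int a[], int n) {
    int counter = 0;
    total_work = 0;
    for (int i = 0; i < n; i++) {
        total_work += 1;                   /* constant cost of this iteration */
        if (a[i] == 1) counter++;
        else { f(counter); counter = 0; }
    }
    return total_work;
}
```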
Consider the following Cprogram fragment in which $i$, $j$ and $n$ are integer variables.
for( i = n, j = 0; i > 0; i /= 2, j +=i );
Let $val(j)$ denote the value stored in the variable $j$ after termination of the for loop. Which one of the following is true?
 $val(j)=\Theta(\log n)$
 $val(j)=\Theta (\sqrt{n})$
 $val(j)=\Theta( n)$
 $val(j)=\Theta (n\log n)$
$j = n/2+n/4+n/8+\ldots +1$
The number of iterations $k$ satisfies $2^k = n$, i.e., $k = \log n$.
This is a GP; summing its $\log n$ terms gives $\Theta(n)$.
Option C is correct because $i$ gets halved $\log_2 (n)$ times. For example, for $i = 16$:
i = 8, j = 8
i = 4, j = 12
i = 2, j = 14
i = 1, j = 15
i = 0, loop terminates with j = 15
Hence $val(j) = \Theta(n)$.
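A quick check of the loop (our own helper, `val_j`) confirms that $val(j)$ tracks $n$ closely:

```c
#include <assert.h>

/* Our own helper: runs the loop from the question and returns val(j). */
int val_j(int n) {
    int i, j;
    for (i = n, j = 0; i > 0; i /= 2, j += i)
        ;                  /* empty body, exactly as in the question */
    return j;
}
```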
Consider the following segment of Ccode:
int j, n;
j = 1;
while (j <= n)
    j = j * 2;
The number of comparisons made in the execution of the loop for any $n > 0$ is:

$\lceil \log_2n \rceil +1$

$n$

$\lceil \log_2n \rceil$

$\lfloor \log_2n \rfloor +1$
| $n$ | no. of comparisons | $\lceil \log_2 n \rceil + 1$ |
|:-:|:-:|:-:|
| $1$ | $2 \; (j = 1, 2)$ | $1$ |
| $2$ | $3 \; (j = 1, 2, 4)$ | $2$ |
| $3$ | $3 \; (j = 1, 2, 4)$ | $3$ |
| $4$ | $4 \; (j = 1, 2, 4, 8)$ | $3$ |
| $5$ | $4 \; (j = 1, 2, 4, 8)$ | $4$ |
Maybe we have to count only those comparisons which result in the execution of the loop body.
Answer should be $\lceil \log_2 n \rceil + 1$.
EDIT: but the total number of comparisons, including the final failing one, is $\lfloor \log_2 n \rfloor + 2$.
Say we have $n = 7$.
Then the successful comparisons are for $j = 1, 2, 4$. (We get out on the next comparison.) So $3$ comparisons.
Then:
A) Incorrect, as $\lceil \log_2 7 \rceil + 1 = 4 \neq 3$.
B) Incorrect, this is $7$.
C) Correct: $\lceil \log_2 7 \rceil = 3$.
D) Correct: $\lfloor \log_2 7 \rfloor + 1 = 3$.
C & D both gave $3$.
For tie-breaking between C & D, take $n = 8$. The successful comparisons are for $j = 1, 2, 4, 8$ (we get out of the loop at $16$), so $4$.
C) $\lceil \log_2 8 \rceil = 3 \neq 4$. Incorrect.
D) $\lfloor \log_2 8 \rfloor + 1 = 4$. Correct.
Answer > D
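The counts above can be verified with a short instrumented version (our own sketch) that tallies every evaluation of `j <= n`:

```c
#include <assert.h>

/* Our own sketch: count every evaluation of (j <= n) for the loop in the
 * question. The total comes out to floor(log2 n) + 2, of which
 * floor(log2 n) + 1 succeed. */
int count_comparisons(int n) {
    int j = 1, comparisons = 0;
    for (;;) {
        comparisons++;            /* one evaluation of (j <= n) */
        if (!(j <= n)) break;
        j = j * 2;
    }
    return comparisons;
}
```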
In the following C function, let $n \geq m$.
int gcd(int n, int m) {
    if (n % m == 0) return m;
    n = n % m;
    return gcd(m, n);
}
How many recursive calls are made by this function?

$\Theta(\log_2n)$

$\Omega(n)$

$\Theta(\log_2\log_2n)$

$\Theta(\sqrt{n})$
Worst case will arise when both $n$ and $m$ are consecutive Fibonacci numbers.
$\text{gcd}(F_n, F_{n-1}) = \text{gcd}(F_{n-1},F_{n-2}) = \dots = \text{gcd}(F_2,F_1) = 1$
and the $n^{th}$ Fibonacci number is approximately $1.618^n$, where $1.618$ is the Golden ratio.
So, to find $\text{gcd} (n,m)$, number of recursive calls will be $\Theta (\log n) $.
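The Fibonacci worst case can be observed with a small instrumented version of the function (the call counter is our addition):

```c
#include <assert.h>

static int calls = 0;  /* our instrumentation: number of recursive calls */

/* The gcd from the question (n >= m > 0), with a call counter added.
 * Consecutive Fibonacci arguments, e.g. (13, 8), force the longest chain. */
int gcd(int n, int m) {
    if (n % m == 0) return m;
    calls++;
    return gcd(m, n % m);
}
```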
What is the $\text{time complexity}$ of the following recursive function?
int DoSomething (int n) {
    if (n <= 2)
        return 1;
    else
        return (DoSomething (floor (sqrt(n))) + n);
}

$\Theta(n^2)$

$\Theta(n \log_2n)$

$\Theta(\log_2n)$

$\Theta(\log_2\log_2n)$
We are asked the time complexity which will be the number of recursive calls in the function as in each call we perform a constant no. of operations and a recursive call. The recurrence relation for this is (considering constant time "$c$" as $1$)
$T(n) = T(\sqrt n) + 1$
$= T\left(n^{1/4}\right) + 2 \\=T\left(n^{1/8}\right) + 3 $
Going like this we will eventually reach $T(3)$ or $T(2)$. For asymptotic case this doesn't matter and we can assume we reach $T(2)$ and in next step reach $T(1)$. So, all we want to know is how many steps it takes to reach $T(1)$ which will be $1+ $no. of steps to reach $T(2)$.
From the recurrence relation we know that $T(2)$ happens when $n^{\left(\frac{1}{2^k}\right)} = 2$.
Taking $\log $ and equating,
$\frac{1}{2^k} \log n = 1 \\2^k = \log n \\k = \log \log n$.
So, $T(1)$ happens in $\log \log n + 1$ calls, but for asymptotic complexity we can write as $\Theta \left( \log \log n\right)$
Alternatively,
Substituting values
$T(1) = 1$
$T(2) = 1$
$T(3) = T(1) + 1 = 2$
$\dots$
$T(8) = T(2) + 1 = 2$
$T(9) = T(3) + 1 = 3$
$\dots$
$T\left(\left({\left({2^2}\right)^2}\right)^2\right) = T\left(\left({2^2}\right)^2\right) + 1 \\= T(2^2)+ 2 \\= T(2) + 3 = 1 + 3 = 4, \\ \log \log n = 3 \text{ as } n = 256$.
$T\left(\left({\left({\left({\left({2^2}\right)^2}\right)^2}\right)^2}\right)^2\right) = 6,\\ \log \log n = 5 \text{ as } n = 65536 \times 65536 = 2^{32}$
$T\left(2^{(2^{10})}\right) = T\left(2^{512}\right) + 1 \\= T(2^{256}) + 2 \\= T(2^{128}) + 3\\ = T(2^{64}) + 4 \\= T(2^{32}) + 5 \\= T(2^{16}) + 6 \\= T(2^8)+7 \\= T(2^4) + 8 \\= T(2^2) + 9 \\= T(2) + 10 = 11,\\ \log \log n = 10$
So, answer is D
The recurrence relation for DoSomething() is
$T(n) = T(\sqrt n) + C_1$, for $n > 2$.
We have ignored the floor() part, as it doesn't matter here whether it's a floor or a ceiling.
Let $n = 2^m$ and let $S(m) = T(2^m)$. From these two,
$S(m) = S(m/2) + C_1$ (this is simply the binary search recurrence)
$S(m) = O(\log m) = O(\log \log n)$ (since $n = 2^m$).
Going back to the original function, $T(n) = S(m) = O(\log \log n)$.
An array of $n$ numbers is given, where $n$ is an even number. The maximum as well as the minimum of these $n$ numbers needs to be determined. Which of the following is TRUE about the number of comparisons needed?

At least $2n - c$ comparisons, for some constant $c$, are needed.

At most $1.5n - 2$ comparisons are needed.

At least $n\log_2 n$ comparisons are needed

None of the above
An easier way to find it is by using the Tournament Method technique:
 To find the largest element in the array takes $\mathbf{n-1}$ comparisons (a knockout tournament).
 To find the smallest element:
a. After the first round of the tournament, there will be exactly $\mathbf{n/2}$ numbers that lose their match.
b. The smallest number must be among these $n/2$ losers, so finding it takes another $\mathbf{n/2 - 1}$ comparisons.
Total comparisons $\mathbf{= (n-1) + (n/2 - 1) = 1.5n - 2}$.
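The same $1.5n - 2$ bound can be reached by processing the elements in pairs; here is a sketch with a comparison counter (both the pairing scheme as code and the names are our additions, assuming even $n$):

```c
#include <assert.h>

static int comps = 0;  /* our instrumentation: number of element comparisons */

/* Sketch: pairwise max-min for even n. The first pair costs 1 comparison;
 * each later pair costs 3 (pair members, winner vs max, loser vs min),
 * giving 1 + 3(n/2 - 1) = 1.5n - 2 comparisons in total. */
void max_min(const int a[], int n, int *max, int *min) {
    comps = 0;
    comps++;
    if (a[0] > a[1]) { *max = a[0]; *min = a[1]; }
    else             { *max = a[1]; *min = a[0]; }
    for (int i = 2; i + 1 < n; i += 2) {
        int big, small;
        comps++;
        if (a[i] > a[i+1]) { big = a[i];   small = a[i+1]; }
        else               { big = a[i+1]; small = a[i];   }
        comps++; if (big > *max)   *max = big;
        comps++; if (small < *min) *min = small;
    }
}
```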
We are asked to find the MAX and MIN of an array of $n$ elements. This can be done as follows.
Non Divide And Conquer
Max_Min(A, n, max, min)
    max = min = A[1]
    for i = 2 to n {
        if A[i] > max          // first comparison
            max = A[i]
        else if A[i] < min     // second comparison
            min = A[i]
    }
Analysis
Best Case
When input is in increasing order
 First comparison: $n-1$ times
 Second comparison: $0$ times (the else branch is never taken)
Worst Case
When input is in Decreasing order
 First comparison: $n-1$ times
 Second comparison: $n-1$ times
Average Case
When the first comparison fails for half of the input
 First comparison: $n-1$ times
 Second comparison: $\frac{n}{2}$ times
Divide And Conquer
Given a function to compute on $n$ inputs, the divide-and-conquer strategy suggests splitting the inputs into $k$ distinct subsets, $1 < k \leq n$, yielding $k$ subproblems. These subproblems must be solved, and then a method must be found to combine the sub-solutions into a solution of the whole.
Defining the termination condition 'Small(P)' of the problem: when $n \leq 2$. If $n = 1$, the maximum and minimum are both $a[1]$. If $n = 2$, the problem can be solved by making one comparison.
How can we combine the subproblems? If $\text{MAX(A)}$ and $\text{MIN(A)}$ are the maximum and minimum of the elements of $A$, then $\text{MAX(A)}$ is the larger of $\text{MAX(A1)}$ and $\text{MAX(A2)}$; similarly for $\text{MIN(A)}$.
MaxMin(i, j, max, min)
// a[1:n] is a global array. Parameters i and j are integers,
// 1 <= i <= j <= n. The effect is to set max and min to the largest
// and smallest values in a[i:j].
{
    if (i = j) then max := min := a[i];        // Small(P)
    else if (i = j - 1) then {                 // another case of Small(P)
        if (a[i] < a[j]) then {
            max := a[j]; min := a[i];
        } else {
            max := a[i]; min := a[j];
        }
    } else {
        // If P is not small, divide P into subproblems.
        // Find where to split the set.
        mid := (i + j) / 2;
        // Solve the subproblems.
        MaxMin(i, mid, max, min);
        MaxMin(mid + 1, j, max1, min1);
        // Combine the solutions.
        if (max < max1) then max := max1;
        if (min > min1) then min := min1;
    }
}
Analysis
Time Complexity
$T(n) =\begin{Bmatrix} 0 & n=1\\ 1 & n=2\\ 2T(\frac{n}{2})+2& n>2 \end{Bmatrix}$
By the Master Theorem, Case 1 applies. Hence $T(n)$ is $\Theta(n)$.
Solution
$\begin{align*} \\ T(n)&=2T\left(\frac{n}{2}\right)+2\\ &=4T\left(\frac{n}{4}\right)+2^{2}+2 \\ & \dots \\ & = 2^{k}T\left(\frac{n}{2^{k}}\right)+\sum_{i=1}^{k}2^{i} & \left(\text{where } \sum_{i=1}^{k}2^{i}=2^{k+1}-2\right)\\ &=2^{k}T\left(\frac{n}{2^{k}}\right)+2^{k+1}-2 & ( \text{taking } k=(\log n) - 1) \\ &=2^{(\log n) - 1}T(2)+2^{\log n} - 2 \\ &=\frac{n}{2}+n-2 \\ &=\frac{3n}{2}-2 \end{align*}$
For the input class of numbers in increasing order, the non-divide-and-conquer strategy performs better than divide and conquer.
Space Complexity
$S(n) = n + \log n + c$, where $n$ is the input size and $\log n$ is the depth of the recursion stack. Hence the auxiliary space complexity is $O(\log n)$.
Example
Consider the array
$A[] = \begin{Bmatrix}
1 &2 &3 &4 & 5 &6 &7 &8 &9 \\
22 & 13 & 5 &8 &15 &60 &17 &31 &47
\end{Bmatrix}$
Video Example : https://www.youtube.com/watch?v=EHRL2LbS5LU
Consider the following C program segment:
int IsPrime (int n) {
    int i;
    for (i = 2; i <= sqrt(n); i++)
        if (n % i == 0) {
            printf("Not Prime \n");
            return 0;
        }
    return 1;
}
Let $T(n)$ denote number of times the $for$ loop is executed by the program on input $n$. Which of the following is TRUE?

$T(n) = O(\sqrt{n}) \: \text{ and } T(n) = \Omega(\sqrt{n})$

$T(n) = O(\sqrt{n}) \: \text{ and } T(n) = \Omega(1)$

$T(n) = O(n) \: \text{ and } T(n) = \Omega(\sqrt{n})$

None of the above
Answer = option B
Worst Case: when $n$ is prime, the loop runs about $\sqrt{n}$ times, so $T(n) = \mathcal{O}(\sqrt{n})$.
Best Case: when $n$ is an even number, the body of the $for$ loop is executed only $1$ time (due to the "$return \ 0$" inside the if), irrespective of $n$. $\therefore T(n) = \Omega(1)$
Exponentiation is a heavily used operation in public key cryptography. Which of the following options is the tightest upper bound on the number of multiplications required to compute $b^n \bmod{m}, 0 \leq b, n \leq m$ ?
 $O(\log n)$
 $O(\sqrt n)$
 $O\Biggl (\frac{n}{\log n} \Biggr )$
 $O(n)$
Answer is (A)
We need to divide $n$ recursively and compute like following:
$C_1 = b^{\frac{n}{2}} \times b^{\frac{n}{2}}$. In this, we need to calculate $b^{\frac{n}{2}}$ only once.
$C_2 = b^{\frac{n}{4}} \times b^{\frac{n}{4}}$
$\vdots$
$C_k = b^2 \times b^2 \qquad \Bigg \{k = \log n$
Recurrence relation: $T(n) =T \Bigl (\frac{n}{2} \Bigr ) + O(1)$
$T(n) = O(\log n)$
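The repeated-squaring idea can be sketched iteratively (the function name and counter are ours); it performs at most $2\lfloor \log_2 n\rfloor + 2$ modular multiplications:

```c
#include <assert.h>

static int mults = 0;  /* our instrumentation: modular multiplications done */

/* Sketch: compute b^n mod m by repeated squaring, O(log n) multiplications. */
unsigned long long power_mod(unsigned long long b, unsigned long long n,
                             unsigned long long m) {
    unsigned long long result = 1 % m;
    b %= m;
    mults = 0;
    while (n > 0) {
        if (n & 1) {                  /* this bit of n contributes b^(2^k) */
            result = result * b % m;
            mults++;
        }
        b = b * b % m;                /* square for the next bit */
        mults++;
        n >>= 1;
    }
    return result;
}
```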
Let $P_1, P_2,\dots , P_n$ be $n$ points in the $xy$-plane such that no three of them are collinear. For every pair of points $P_i$ and $P_j$, let $L_{ij}$ be the line passing through them. Let $L_{ab}$ be the line with the steepest gradient among all $n(n-1)/2$ lines.
The time complexity of the best algorithm for finding $P_a$ and $P_b$ is
 $\Theta\left(n\right)$
 $\Theta\left(n\log n\right)$
 $\Theta\left(n\log^2 n\right)$
 $\Theta\left(n^2\right)$
Answer: B
Gradient $= \dfrac{y_2 - y_1}{x_2 - x_1}$
For the gradient to be steepest, $x_2 - x_1$ should be small. In fact, the steepest line must pass through two points that are adjacent in sorted $x$ order: for any $x_1 < x_2 < x_3$, the slope of the line through the outer points is a weighted average of the slopes of the two adjacent lines, so some adjacent pair is always at least as steep. So, sort the points (in $\Theta(n\log n)$ time) by $x$ coordinate and scan the adjacent pairs for the steepest slope (in $\Theta(n)$ time).
Best complexity: $\Theta(n\log n + n) = \Theta(n\log n)$, which leads to B.
https://www.careercup.com/question?id=4787065684230144
https://stackoverflow.com/questions/8222108/maxslopefromsetofpoints
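The whole algorithm can be sketched as follows (all names are ours; we assume distinct $x$ coordinates, and take "steepest" as largest absolute slope):

```c
#include <assert.h>
#include <math.h>
#include <stdlib.h>

typedef struct { double x, y; } Point;

static int cmp_x(const void *a, const void *b) {
    double d = ((const Point *)a)->x - ((const Point *)b)->x;
    return (d > 0) - (d < 0);
}

/* Sketch: sort by x (Theta(n log n)), then scan adjacent pairs (Theta(n)).
 * Returns the index i (into the sorted array) such that the line through
 * p[i] and p[i+1] is the steepest. Assumes n >= 2, distinct x values. */
int steepest_pair(Point p[], int n) {
    qsort(p, n, sizeof(Point), cmp_x);
    int best = 0;
    double best_slope = -1.0;
    for (int i = 0; i + 1 < n; i++) {
        double slope = fabs((p[i+1].y - p[i].y) / (p[i+1].x - p[i].x));
        if (slope > best_slope) { best_slope = slope; best = i; }
    }
    return best;
}
```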
The minimum number of comparisons required to determine if an integer appears more than $\frac{n}{2}$ times in a sorted array of $n$ integers is

$\Theta(n)$

$\Theta(\log n)$

$\Theta(\log^*n)$

$\Theta(1)$
Answer = option B
Whenever there exists an element present in the array more than $\frac{n}{2}$ times, it will definitely be present at the middle index position; in addition, it will also be present at at least one of the neighbouring indices $i-1$ and $i+1$.
No matter how we shift that run of more than $\frac{n}{2}$ equal elements around the sorted array, it is bound to cover the middle index plus at least one of its neighbours.
Once we have the element that could occur more than $n/2$ times, we count its total occurrences in $\mathcal{O}(\log n)$ time.
Checking whether a given number $x$ is repeated more than $n/2$ times in the sorted array can be done in $O(\log n)$ time:
1. Find the first occurrence (index $i$) of $x$ in the array, which can be done in $O(\log n)$ time (a variant of binary search).
2. If $A[i] == A[i + n/2]$, return true.
3. Else, return false.
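The steps above can be sketched as follows (helper names are ours; the array is 0-indexed here):

```c
#include <assert.h>
#include <stdbool.h>

/* First index i with a[i] >= x (standard lower-bound binary search). */
static int lower_bound(const int a[], int n, int x) {
    int lo = 0, hi = n;
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] < x) lo = mid + 1;
        else            hi = mid;
    }
    return lo;
}

/* Sketch: true iff x occurs more than n/2 times in sorted a[0..n-1].
 * If the first occurrence is at i, more than n/2 copies force
 * a[i + n/2] == x as well. O(log n) comparisons in total. */
bool more_than_half(const int a[], int n, int x) {
    int i = lower_bound(a, n, x);
    return i + n / 2 < n && a[i + n / 2] == x;
}
```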
We have a binary heap on $n$ elements and wish to insert $n$ more elements (not necessarily one after another) into this heap. The total time required for this is

$\Theta(\log n)$

$\Theta(n)$

$\Theta(n\log n)$

$\Theta(n^2)$
A single insert operation on a binary heap takes $\mathcal{O}(\log n)$ time, but there is an alternative approach: place all $n$ new elements at the end of the heap's array without any computation, i.e., in constant time each, and then apply the heapify (build-heap) operation, which creates a heap over the whole array in linear time. Hence we obtain the new heap in $\mathcal{O}(n)$ time.
Here "not necessarily one after another" should mean that we can insert the $n$ elements at once and need not wait for the first insert to complete before doing the second.
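The linear-time build-heap step can be sketched as follows (a max-heap; function names are ours):

```c
#include <assert.h>

/* Restore the max-heap property at index i in a[0..n-1]. */
static void sift_down(int a[], int n, int i) {
    for (;;) {
        int largest = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < n && a[l] > a[largest]) largest = l;
        if (r < n && a[r] > a[largest]) largest = r;
        if (largest == i) break;
        int t = a[i]; a[i] = a[largest]; a[largest] = t;
        i = largest;
    }
}

/* Sketch: after appending the n new elements to the heap's array in O(n),
 * one bottom-up pass rebuilds the heap over all elements in O(n) total. */
void build_heap(int a[], int n) {
    for (int i = n / 2 - 1; i >= 0; i--)
        sift_down(a, n, i);
}
```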
Consider the following C functions:
int f1 (int n) {
    if (n == 0 || n == 1)
        return n;
    else
        return (2 * f1(n-1) + 3 * f1(n-2));
}

int f2 (int n) {
    int i;
    int X[N], Y[N], Z[N];
    X[0] = Y[0] = Z[0] = 0;
    X[1] = 1; Y[1] = 2; Z[1] = 3;
    for (i = 2; i <= n; i++) {
        X[i] = Y[i-1] + Z[i-2];
        Y[i] = 2 * X[i];
        Z[i] = 3 * X[i];
    }
    return X[n];
}
The running time of $f1(n)$ and $f2(n)$ are
 $\Theta(n)$ and $\Theta(n)$
 $\Theta(2^n)$ and $\Theta(n)$
 $\Theta(n)$ and $\Theta(2^n)$
 $\Theta(2^n)$ and $\Theta(2^n)$
Q.74 = option B
Q.75 = option C
Time complexity of $f1$ is given by
$T(n) = T(n-1) + T(n-2) + c$ (the multiplications by $2$ and $3$ won't affect the complexity, as they are constant time operations)
$T(0) = T(1) = 1$
The solution to this (Fibonacci-style) recurrence is given by the Golden ratio (https://en.wikipedia.org/wiki/Golden_ratio): it grows as $1.618^n$, which is $O(2^n)$. (Using theta in the question must be a mistake.)
Time complexity of $f2$ is $\Theta(n)$ as here all recursive calls are avoided by saving the results in an array (dynamic programming).
So, answer to 74 is (B).
75. Both $f1$ and $f2$ are calculating the same function. So,
$f1(2) = 2f1(1) + 3f1(0) = 2$
$f1(3) = 2f1(2) + 3f1(1) = 7$
$f1(4) = 20$
$f1(5) = 61$
$f1(6) = 182$
$f1(7) = 547$
$f1(8) = 1640 = f2(8)$
Consider the following C functions:
int f1 (int n) {
    if (n == 0 || n == 1)
        return n;
    else
        return (2 * f1(n-1) + 3 * f1(n-2));
}

int f2 (int n) {
    int i;
    int X[N], Y[N], Z[N];
    X[0] = Y[0] = Z[0] = 0;
    X[1] = 1; Y[1] = 2; Z[1] = 3;
    for (i = 2; i <= n; i++) {
        X[i] = Y[i-1] + Z[i-2];
        Y[i] = 2 * X[i];
        Z[i] = 3 * X[i];
    }
    return X[n];
}
$f1(8)$ and $f2(8)$ return the values
 $1661$ and $1640$
 $59$ and $59$
 $1640$ and $1640$
 $1640$ and $1661$
Here the answer is C: $1640$ and $1640$.
Both $f1$ and $f2$ compute the same recurrence $f(n) = 2f(n-1) + 3f(n-2)$ with $f(0) = 0$ and $f(1) = 1$, so $f1(8) = f2(8) = 1640$.
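The claimed values can be verified with a compact iterative version of the common recurrence $f(n) = 2f(n-1) + 3f(n-2)$ (our own sketch, equivalent to $f2$ without the arrays):

```c
#include <assert.h>

/* Sketch: iterative evaluation of f(n) = 2 f(n-1) + 3 f(n-2),
 * f(0) = 0, f(1) = 1 -- the function both f1 and f2 compute. */
int f(int n) {
    if (n <= 1) return n;
    int prev2 = 0, prev1 = 1;
    for (int i = 2; i <= n; i++) {
        int cur = 2 * prev1 + 3 * prev2;
        prev2 = prev1;
        prev1 = cur;
    }
    return prev1;
}
```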
Two alternative packages $A$ and $B$ are available for processing a database having $10^k$ records. Package $A$ requires $0.0001 n^2$ time units and package $B$ requires $10n\log_{10} n$ time units to process $n$ records. What is the smallest value of $k$ for which package $B$ will be preferred over $A$?
 $12$
 $10$
 $6$
 $5$
$10 \times 10^k \log_{10} 10^k \leq 0.0001 (10^k)^2$
$\implies 10^{k+1} \cdot k \leq 0.0001 \times 10^{2k}$
$\implies k \leq 10^{2k - k - 1 - 4}$
$\implies k \leq 10^{k-5}$
Trying the values: $k = 5$ doesn't satisfy this, but $k = 6$ does.
Consider the following pseudo code. What is the total number of multiplications to be performed?
D = 2
for i = 1 to n do
    for j = i to n do
        for k = j + 1 to n do
            D = D * 3
 Half of