Assuming P ≠ NP, which of the following is TRUE?
(A) NP-complete = NP
(B) NP-complete $\cap$ P = $\emptyset$
(C) NP-hard = NP
(D) P = NP-complete
Consider two strings $A$="qpqrr" and $B$="pqprqrp". Let $x$ be the length of the longest common subsequence (not necessarily contiguous) between $A$ and $B$ and let $y$ be the number of such longest common subsequences between $A$ and $B$. Then $x +10y=$ ___.
Which of the following statements is true?

I. As the number of entries in a hash table increases, the number of collisions increases.
II. Recursive programs are efficient.
III. The worst case complexity for Quicksort is $O(n^2)$.
IV. Binary search using a linear linked list is efficient.

 I and II
 II and III
 I and IV
 I and III
A language with string manipulation facilities uses the following operations
head(s): the first character of string s
tail(s): all but the first character of string s
concat(s1, s2): s1s2
For the string s = "acbc", what will be the output of
concat(head(s), head(tail(tail(s))))
 ac
 bc
 ab
 cc
Which of the following statements is false?

(a) Optimal binary search tree construction can be performed efficiently using dynamic programming.
(b) Breadth-first search cannot be used to find connected components of a graph.
(c) Given the prefix and postfix walks over a binary tree, the binary tree cannot be uniquely constructed.
(d) Depth-first search can be used to find connected components of a graph.
Let $A=(a_{ij})$ be an $n$-rowed square matrix and $I_{12}$ be the matrix obtained by interchanging the first and second rows of the $n$-rowed identity matrix. Then $AI_{12}$ is such that its first
 row is the same as its second row
 row is the same as the second row of $A$
 column is the same as the second column of $A$
 row is all zero
(A) $\Theta(n\log n)$ (B) $\Theta(n2^n)$ (C) $\Theta(n)$ (D) $\Theta(\log n)$
The weight of a sequence $a_0,a_1, \dots, a_{n-1}$ of real numbers is defined as $a_0+a_1/2+ \dots + a_{n-1}/2^{n-1}$. A subsequence of a sequence is obtained by deleting some elements from the sequence, keeping the order of the remaining elements the same. Let $X$ denote the maximum possible weight of a subsequence of $a_0,a_1, \dots, a_{n-1}$ and $Y$ the maximum possible weight of a subsequence of $a_1,a_2, \dots, a_{n-1}$. Then $X$ is equal to
 $max(Y, a_0+Y)$
 $max(Y, a_0+Y/2)$
 $max(Y, a_0 +2Y)$
 $a_0+Y/2$
An algorithm to find the length of the longest monotonically increasing sequence of numbers in an array $A[0:n-1]$ is given below.
Let $L_i$ denote the length of the longest monotonically increasing sequence starting at index $i$ in the array.
Initialize $L_{n-1} = 1$.
For all $i$ such that $0 \leq i \leq n-2$:
$ L_i = \begin{cases} 1+ L_{i+1} & \quad\text{if } A[i] < A[i+1] \\ 1 & \quad\text{otherwise}\end{cases} $
Finally, the length of the longest monotonically increasing sequence is $\text{Max} \:(L_0, L_1, \dots , L_{n-1}).$
Which of the following statements is TRUE?
(A) The algorithm uses dynamic programming paradigm
(B) The algorithm has a linear complexity and uses branch and bound paradigm
(C) The algorithm has a nonlinear polynomial complexity and uses branch and bound paradigm
(D) The algorithm uses divide and conquer paradigm
Suppose you want to move from $0$ to $100$ on the number line. In each step, you either move right by a unit distance or you take a shortcut. A shortcut is simply a prespecified pair of integers $i,\:j \:\text{with}\: i <j$. Given a shortcut $i,j$ if you are at position $i$ on the number line, you may directly move to $j$. Suppose $T(k)$ denotes the smallest number of steps needed to move from $k$ to 100. Suppose further that there is at most 1 shortcut involving any number, and in particular from 9 there is a shortcut to 15. Let $y$ and $z$ be such that $T(9) = 1 + \min(T(y),T(z))$. Then the value of the product $yz$ is _____.
The minimum number of comparisons required to find the minimum and the maximum of 100 numbers is ________
What is the size of the smallest MIS (Maximal Independent Set) of a chain of nine nodes?
A)  5
B)  4
C)  3
D)  2
There are 5 bags labeled 1 to 5. All the coins in a given bag have the same weight. Some bags have coins of weight 10 gm, others have coins of weight 11 gm. I pick 1, 2, 4, 8, 16 coins respectively from bags 1 to 5. Their total weight comes out to 323 gm. Then the product of the labels of the bags having 11 gm coins is ___.
Given two arrays of numbers $a_{1},...,a_{n}$ and $b_{1},...,b_{n}$ where each number is $0$ or $1$, the fastest algorithm to find the largest span $(i, j)$ such that $ a_{i}+a_{i+1}+\dots+a_{j}=b_{i}+b_{i+1}+\dots+b_{j}$, or report that there is no such span,
 Takes $O(3^{n})$ and $\Omega(2^{n})$ time if hashing is permitted
 Takes $O(n^{3})$ and $\Omega(n^{2.5})$ time in the key comparison model
 Takes $\Theta (n)$ time and space
 Takes $O(\sqrt n)$ time only if the sum of the $2n$ elements is an even number
A set $X$ can be represented by an array x[n] as follows:
x$\left [ i \right ]=\begin {cases} 1 & \text{if } i \in X \\ 0 & otherwise \end{cases}$
Consider the following algorithm in which x, y, and z are Boolean arrays of size n:
algorithm zzz(x[], y[], z[]) {
    int i;
    for (i = 0; i < n; ++i)
        z[i] = (x[i] ∧ ~y[i]) ∨ (~x[i] ∧ y[i]);
}
The set Z computed by the algorithm is:
 $(X\cup Y)$
 $(X\cap Y)$
 $(XY)\cap (YX)$
 $(XY)\cup (YX)$
Let $A$ be an $n \times n$ matrix such that the elements in each row and each column are arranged in ascending order. Draw a decision tree which finds the $1^{st}$, $2^{nd}$, and $3^{rd}$ smallest elements in the minimum number of comparisons.
(a) $\log_2 n$
(b) $\sqrt n$
(c) $n1$
(d) $n$
Suppose we want to arrange the $n$ numbers stored in any array such that all negative values occur before all positive ones. Minimum number of exchanges required in the worst case is
 $n1$
 $n$
 $n+1$
 None of the above
Consider the following C program
main() {
    int x, y, m, n;
    scanf("%d %d", &x, &y); /* Assume x>0 and y>0 */
    m = x;
    n = y;
    while (m != n) {
        if (m > n) m = m - n;
        else n = n - m;
    }
    printf("%d", n);
}
The program computes
 $x+y$ using repeated subtraction
 $x \mod y$ using repeated subtraction
 the greatest common divisor of $x$ and $y$
 the least common multiple of $x$ and $y$
An element in an array X is called a leader if it is greater than all elements to the right of it in X. The best algorithm to find all leaders in an array
 Solves it in linear time using a left to right pass of the array
 Solves it in linear time using a right to left pass of the array
 Solves it using divide and conquer in time $\Theta (n\log n)$
 Solves it in time $\Theta( n^2)$
The average number of key comparisons required for a successful search for sequential search on $n$ items is
 $\frac{n}{2}$
 $\frac{n-1}{2}$
 $\frac{n+1}{2}$
 None of the above
A program attempts to generate as many permutations as possible of the string, 'abcd' by pushing the characters a, b, c, d in the same order onto a stack, but it may pop off the top character at any time. Which one of the following strings CANNOT be generated using this program?
 abcd
 dcba
 cbad
 cabd
In the worst case, the number of comparisons needed to search a single linked list of length n for a given element is
 $\log n$
 $\frac{n}{2}$
 $\log_2 {n} - 1$
 $n$
Let $n$ be a large integer. Which of the following statements is TRUE?
 $2^{\sqrt{2\log n}}< \frac{n}{\log n}< n^{1/3}$
 $\frac{n}{\log n}< n^{1/3}< 2^{\sqrt{2\log n}}$
 $2^{\sqrt{2\log n}}< n^{1/3}< \frac{n}{\log n}$
 $n^{1/3}< 2^{\sqrt{2\log n}}<\frac{n}{\log n}$
 $\frac{n}{\log n}< 2^{\sqrt{2\log n}}<n^{1/3}$
The subsetsum problem is defined as follows. Given a set of n positive integers, $S = \{ a_1, a_2, a_3, \dots , a_n \}$, and positive integer $W$, is there a subset of $S$ whose elements sum to $W$? A dynamic program for solving this problem uses a 2dimensional Boolean array, $X$, with n rows and $W+1$ columns. $X[i, j], 1 \leq i \leq n, 0 \leq j \leq W$, is TRUE, if and only if there is a subset of $\{a_1, a_2, \dots, a_i\}$ whose elements sum to $j$.
Which entry of the array $X$, if TRUE, implies that there is a subset whose elements sum to $W$?
 $X[1, W]$
 $X[n, 0]$
 $X[n, W]$
 $X[n-1, n]$
Consider the following game. There is a list of distinct numbers. At any round, a player arbitrarily chooses two numbers $a, b$ from the list and generates a new number $c$ by subtracting the smaller number from the larger one. The numbers $a$ and $b$ are put back in the list. If the number $c$ is nonzero and is not yet in the list, $c$ is added to the list. The player is allowed to play as many rounds as the player wants. The score of a player at the end is the size of the final list.
Suppose at the beginning of the game the list contains the following numbers: $48, 99, 120, 165$ and $273$. What is the score of the best player for this game?
 $40$
 $16$
 $33$
 $91$
 $123$
Given a set of $n$ distinct numbers, we would like to determine both the smallest and the largest number. Which of the following statements is TRUE?
 These two elements can be determined using $O\left(\log^{100}n\right)$ comparisons.
 $O\left(\log^{100}n\right)$ comparisons do not suffice, however these two elements can be determined using $n + O(\log n)$ comparisons.
 $n+O(\log n)$ comparisons do not suffice, however these two elements can be determined using $3\lceil n/2 \rceil$ comparisons.
 $3\lceil n/2 \rceil$ comparisons do not suffice, however these two elements can be determined using $2(n  1)$ comparisons.
 None of the above.
Given a set of $n$ distinct numbers, we would like to determine the smallest three numbers in this set using comparisons. Which of the following statements is TRUE?
 These three elements can be determined using $O\left(\log^{2}n\right)$ comparisons.
 $O\left(\log^{2}n\right)$ comparisons do not suffice, however these three elements can be determined using $n+O(1)$ comparisons.
 $n + O(1)$ comparisons do not suffice, however these three elements can be determined using $n +O(\log n)$ comparisons.
 $n+O(\log n)$ comparisons do not suffice, however these three elements can be determined using $O(n)$ comparisons.
 None of the above.
Consider the problem of computing the minimum of a set of $n$ distinct numbers. We choose a permutation uniformly at random (i.e., each of the $n!$ permutations of $\left \langle 1,....,n \right \rangle$ is chosen with probability $1/n!$) and we inspect the numbers in the order given by this permutation. We maintain a variable MIN that holds the minimum value seen so far. MIN is initialized to $\infty$ and if we see a value smaller than MIN during our inspection, then MIN is updated. For example, in the inspection given by the following sequence, MIN is updated four times.
5 9 4 2 6 8 0 3 1 7
What is the expected number of times MIN is updated?
 $O (1)$
 $H_{n}=\sum ^{n}_{i=1} 1/i$
 $\sqrt{n}$
 $n/2$
 $n$
Suppose $n$ processors are connected in a linear array as shown below. Each processor has a number. The processors need to exchange numbers so that the numbers eventually appear in ascending order (processor $\rm P1$ should have the minimum value and processor $\rm Pn$ the maximum value).
The algorithm to be employed is the following. Odd-numbered processors and even-numbered processors are activated in alternate steps; assume that in the first step all the even-numbered processors are activated. When a processor is activated, the number it holds is compared with the number held by its right-hand neighbour (if one exists); the smaller of the two numbers is retained by the activated processor and the bigger is stored in its right-hand neighbour.
How long does it take for the processors to sort the values?
 $n \log n$ steps
 $n^2$ steps
 $n$ steps
 $n^{1.5}$ steps
 The algorithm is not guaranteed to sort
Given a weighted directed graph with $n$ vertices where edge weights are integers (positive, zero, or negative), determining whether there are paths of arbitrarily large weight can be performed in time
 $O (n)$
 $O(n . \log(n))$ but not $O (n)$
 $O(n^{1.5})$ but not $O (n \log n)$
 $O(n^{3})$ but not $O(n^{1.5})$
 $O(2^{n})$ but not $O(n^{3})$
Let $T$ be a tree of $n$ nodes. Consider the following algorithm that constructs a sequence of leaves $u_{1}, u_{2}, \ldots$. Let $u_{1}$ be some leaf of the tree. Let $u_{2}$ be a leaf that is farthest from $u_{1}$. Let $u_{3}$ be the leaf that is farthest from $u_{2}$, and, in general, let $u_{i+1}$ be a leaf of $T$ that is farthest from $u_{i}$ (if there are many choices for $u_{i+1}$, pick one arbitrarily). The algorithm stops when some $u_{i}$ is visited again. What can you say about the distance between $u_{i}$ and $u_{i+1}$ as $i = 1, 2, \ldots?$
 For some trees, the distance strictly reduces in each step.
 For some trees, the distance increases initially and then decreases.
 For all trees, the path connecting $u_{2}$ and $u_{3}$ is a longest path in the tree.
 For some trees, the distance reduces initially, but then stays constant.
 For the same tree, the distance between the last two vertices visited can be different, based on the choice of the first leaf $u_{1}$.
Consider the class of recursive and iterative programs. Which of the following is false?
 Recursive programs are more powerful than iterative programs.
 For every iterative program there is an equivalent recursive program.
 Recursive programs require dynamic memory management.
 Recursive programs do not terminate sometimes.
 Iterative programs and recursive programs are equally expressive.
Consider the following C program which is supposed to compute the transpose of a given 4 x 4 matrix M. Note that, there is an X in the program which indicates some missing statements. Choose the correct option to replace X in the program.
#include <stdio.h>
#define ROW 4
#define COL 4

int M[ROW][COL] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16};

main() {
    int i, j, t;
    for (i = 0; i < 4; ++i) {
        X
    }
    for (i = 0; i < 4; ++i)
        for (j = 0; j < 4; ++j)
            printf("%d", M[i][j]);
}

 for (j = 0; j < 4; ++j) { t = M[i][j]; M[i][j] = M[j][i]; M[j][i] = t; }
 for (j = 0; j < 4; ++j) { M[i][j] = t; t = M[j][i]; M[j][i] = M[i][j]; }
 for (j = i; j < 4; ++j) { t = M[i][j]; M[i][j] = M[j][i]; M[j][i] = t; }
 for (j = i; j < 4; ++j) { M[i][j] = t; t = M[j][i]; M[j][i] = M[i][j]; }
You are given ten rings numbered from 1 to 10, and three pegs labeled A, B, and C. Initially all the rings are on peg A, arranged from top to bottom in ascending order of their numbers. The goal is to move all the rings to peg B in the minimum number of moves obeying the following constraints:
(i) In one move, only one ring can be moved.
(ii) A ring can only be moved from the top of its peg to the top of a new peg.
(iii) At no point can a ring be placed on top of another ring with a lower number.
How many moves are required?
 501
 1023
 2011
 10079
 None of the above.
Consider the following program operating on four variables $u, v, x, y$, and two constants $X$ and $Y$.
x, y, u, v := X, Y, Y, X;
while (x ≠ y) do
    if (x > y) then x, v := x - y, v + u;
    else if (y > x) then y, u := y - x, u + v;
od;
print((x + y) / 2);
print((u + v) / 2);
Given $X > 0 \land Y > 0$, pick the true statement out of the following:
 The program prints $\text{gcd}(X, Y)$ and the first prime larger than both $X$ and $Y$.
 The program prints $\text{gcd}(X, Y)$ followed by $\text{lcm}(X, Y)$.
 The program prints $\text{gcd}(X, Y)$ followed by $\frac1 2 \times \text{lcm}(X, Y)$.
 The program prints $\frac1 2 \times \text{gcd}(X, Y)$ followed by $\frac1 2 \times \text{lcm}(X, Y)$.
 The program does none of the above.
Consider the following C function.
int fun1(int n) {
    int i, j, k, p, q = 0;
    for (i = 1; i < n; ++i) {
        p = 0;
        for (j = n; j > 1; j = j / 2)
            ++p;
        for (k = 1; k < p; k = k * 2)
            ++q;
    }
    return q;
}
Which one of the following most closely approximates the return value of the function fun1?
 $n^{3}$
 $n(\log n)^{2}$
 $n \log n$
 $n \log(\log n)$
Suppose you are provided with the following function declaration in the C programming language.
int partition(int a[], int n);
The function treats the first element of $a[\:]$ as a pivot and rearranges the array so that all elements less than or equal to the pivot are in the left part of the array, and all elements greater than the pivot are in the right part. In addition, it moves the pivot so that it is the last element of the left part. The return value is the number of elements in the left part.
The following partially given function in the C programming language is used to find the $k^{th}$ smallest element in an array $a[\:]$ of size $n$ using the partition function. We assume $k \leq n$.
int kth_smallest(int a[], int n, int k)
{
    int left_end = partition(a, n);
    if (left_end + 1 == k) {
        return a[left_end];
    }
    if (left_end + 1 > k) {
        return kth_smallest(___________);
    } else {
        return kth_smallest(___________);
    }
}
The missing arguments lists are respectively
 (a, left_end, k) and (a+left_end+1, n-left_end-1, k-left_end-1)
 (a, left_end, k) and (a, n-left_end-1, k-left_end-1)
 (a, left_end+1, n-left_end-1, k-left_end-1) and (a, left_end, k)
 (a, n-left_end-1, k-left_end-1) and (a, left_end, k)
Given below are some algorithms, and some algorithm design paradigms.
1. Dijkstra's Shortest Path                                       i. Divide and Conquer
2. Floyd-Warshall algorithm to compute all pairs shortest path    ii. Dynamic Programming
3. Binary search on a sorted array                                iii. Greedy design
4. Backtracking search on a graph                                 iv. Depth-first search
                                                                  v. Breadth-first search
Match the above algorithms on the left to the corresponding design paradigm they follow.
 1-i, 2-iii, 3-i, 4-v
 1-iii, 2-iii, 3-i, 4-v
 1-iii, 2-ii, 3-i, 4-iv
 1-iii, 2-ii, 3-i, 4-v
Match the following:
(P) Prim's algorithm for minimum spanning tree               (i) Backtracking
(Q) Floyd-Warshall algorithm for all pairs shortest paths    (ii) Greedy method
(R) Mergesort                                                (iii) Dynamic programming
(S) Hamiltonian circuit                                      (iv) Divide and conquer
 P-iii, Q-ii, R-iv, S-i
 P-i, Q-ii, R-iv, S-iii
 P-ii, Q-iii, R-iv, S-i
 P-ii, Q-i, R-iii, S-iv
Let a be an array containing n integers in increasing order. The following algorithm determines whether there are two distinct numbers in the array whose difference is a specified number S > 0.
i = 0;
j = 1;
while (j < n) {
    if (E) j++;
    else if (a[j] - a[i] == S) break;
    else i++;
}
if (j < n) printf("yes");
else printf("no");
Choose the correct expression for E.
 a[j] - a[i] > S
 a[j] - a[i] < S
 a[i] - a[j] < S
 a[i] - a[j] > S
The following C function takes two ASCII strings and determines whether one is an anagram of the other. An anagram of a string s is a string obtained by permuting the letters in s.
int anagram(char *a, char *b) {
    int count[128], j;
    for (j = 0; j < 128; j++)
        count[j] = 0;
    j = 0;
    while (a[j] && b[j]) {
        A;
        B;
    }
    for (j = 0; j < 128; j++)
        if (count[j]) return 0;
    return 1;
}
Choose the correct alternative for statements A and B.
 A : count[a[j]]++ and B : count[b[j]]--
 A : count[a[j]]++ and B : count[b[j]]++
 A : count[a[j++]]++ and B : count[b[j]]--
 A : count[a[j]]++ and B : count[b[j++]]--
In the following table, the left column contains the names of standard graph algorithms and the right column contains the time complexities of the algorithms. Match each algorithm with its time complexity.
1. Bellman-Ford algorithm
2. Kruskal's algorithm
3. Floyd-Warshall algorithm
4. Topological sorting

A : $O(m \log n)$
B : $O(n^{3})$
C : $O(nm)$
D : $O(n + m)$
 1→ C, 2 → A, 3 → B, 4 → D
 1→ B, 2 → D, 3 → C, 4 → A
 1→ C, 2 → D, 3 → A, 4 → B
 1→ B, 2 → A, 3 → C, 4 → D
The following are the starting and ending times of activities A, B, C, D, E, F, G and H respectively in chronological order: $“a_s \: b_s \: c_s \: a_e \: d_s \: c_e \: e_s \: f_s \: b_e \: d_e \: g_s \: e_e \: f_e \: h_s \: g_e \: h_e”$. Here, $x_s$ denotes the starting time and $x_e$ denotes the ending time of activity X. We need to schedule the activities in a set of rooms available to us. An activity can be scheduled in a room only if the room is reserved for the activity for its entire duration. What is the minimum number of rooms required?
 3
 4
 5
 6
The correct matching for the following pairs is
(A) All pairs shortest path          (1) Greedy
(B) Quick Sort                       (2) Depth-first search
(C) Minimum weight spanning tree     (3) Dynamic Programming
(D) Connected Components             (4) Divide and Conquer

 A-2 B-4 C-1 D-3
 A-3 B-4 C-1 D-2
 A-3 B-4 C-2 D-1
 A-4 B-1 C-2 D-3
A recursive program to compute Fibonacci numbers is shown below. Assume you are also given an array $f[0 \ldots M]$ with all elements initialized to 0.
fib(n) {
    if (n > M) error();
    if (n == 0) return 1;
    if (n == 1) return 1;
    if (▭) ________________ (1)
        return ▭ __________ (2)
    t = fib(n - 1) + fib(n - 2);
    ▭ _____________________ (3)
    return t;
}
 Fill in the boxes with expressions/statement to make fib() store and reuse computed Fibonacci values. Write the box number and the corresponding contents in your answer book.
 What is the time complexity of the resulting program when computing fib(n)?
The most appropriate matching for the following pairs
X: depth-first search     1: heap
Y: breadth-first search   2: queue
Z: sorting                3: stack
is:
 X-1 Y-2 Z-3
 X-3 Y-1 Z-2
 X-3 Y-2 Z-1
 X-2 Y-3 Z-1
Let $W(n) $ and $A(n)$ denote respectively, the worst case and average case running time of an algorithm executed on an input of size $n$. Which of the following is ALWAYS TRUE?
(A) $A(n) = \Omega (W(n))$
(B) $A(n) = \Theta (W(n))$
(C) $A(n) = \text{O} (W(n))$
(D) $A(n) = \text{o} (W(n))$
The minimum number of comparisons required to determine if an integer appears more than $\frac{n}{2}$ times in a sorted array of $n$ integers is
 $\Theta(n)$
 $\Theta(\log n)$
 $\Theta(\log^*n)$
 $\Theta(1)$
(A) $T(n) = 2T(n − 2) + 2$
(B) $T (n) = 2T(n − 1) + n$
(C) $T (n) = 2T(n/2) + 1$
(D) $T (n) = 2T(n − 1) + 1$
(a) 165 (b) 90 (c) 75 (d) 65
The subsetsum problem is defined as follows. Given a set of n positive integers, $S = \{ a_1, a_2, a_3, \dots , a_n \}$, and positive integer $W$, is there a subset of $S$ whose elements sum to $W$? A dynamic program for solving this problem uses a 2dimensional Boolean array, $X$, with n rows and $W+1$ columns. $X[i, j], 1 \leq i \leq n, 0 \leq j \leq W$, is TRUE, if and only if there is a subset of $\{a_1, a_2, \dots, a_i\}$ whose elements sum to $j$.
Which of the following is valid for $2 \leq i \leq n$, and $a_i \leq j \leq W$?
 $X[i, j] = X[i-1, j] \vee X[i, j-a_i]$
 $X[i, j] = X[i-1, j] \vee X[i-1, j-a_i]$
 $X[i, j] = X[i-1, j] \wedge X[i, j-a_i]$
 $X[i, j] = X[i-1, j] \wedge X[i-1, j-a_i]$
Let $S$ be a sorted array of $n$ integers. Let $t(n)$ denote the time taken for the most efficient algorithm to determine if there are two elements with sum less than $1000$ in $S$. Which of the following statements is true?
 $t(n)$ is $O(1)$
 $n \leq t(n) \leq n \log_{2} n$
 $n \log_{2} n \leq t(n) < \binom{n}{2}$
 $t(n) = \binom{n}{2}$
Suppose you are given an array s[1....n] and a procedure reverse (s, i, j) which reverses the order of elements in s between positions i and j (both inclusive). What does the following sequence do, where 1 ≤ k ≤ n:
reverse (s, 1, k); reverse (s, k+1, n); reverse (s, 1, n);
 Rotates s left by k positions
 Leaves s unchanged
 Reverses all elements of s
 None of the above
Give a method for finding and printing the cycle formed if the edge $(u,v)$ of $G$ not in $T$ (i.e., $e \in G - T$) is now added to $T$.
Time taken by your method must be proportional to the length of the cycle.
Describe the algorithm in a PASCAL – like language. Assume that the variables have been suitably declared.
Choose the correct alternatives (more than one may be correct) and write the corresponding letters only:
Consider the following Pascal function:
function X(M: integer): integer;
var i: integer;
begin
    i := 0;
    while i*i < M do
        i := i + 1;
    X := i
end
The function call $X(N)$, if $N$ is positive, will return
(a). $\lfloor\sqrt N \rfloor$
(b). $\lfloor\sqrt N \rfloor +1$
(c). $\lceil \sqrt N \rceil$
(d). $\lceil \sqrt N \rceil +1$
(e). None of the above
Answers:
Since P $\neq$ NP, there is at least one problem in NP that is harder than all P problems. Let's take the hardest such problem, say $X$. Since P $\neq$ NP, $X \notin$ P.
Now, by definition, NP-complete problems are the hardest problems in NP, so $X$ is in NP-complete. And being in NP, $X$ can be reduced to every NP-complete problem, making every other NP-complete problem as hard as $X$. So, since $X \notin$ P, no other NP-complete problem can be in P either. Hence NP-complete $\cap$ P $= \emptyset$.
In the first string, if the LCS is to achieve the maximum length 4, it must end with either "rr" or "qr". Only 4 combinations of length 4 are possible:
qpqr
qqrr
pqrr
qprr
Now check for matching subsequences in the second string: all are possible except "qqrr". So $x = 4$, $y = 3$, and $x + 10y = 4 + 30 = 34$.
Binary search using a linked list is not efficient, as it will not give $O(\log n)$: we cannot find the middle element in constant time. Finding the middle of a linked list takes $O(n)$ time.
Recursive programs are not necessarily efficient: they consume extra stack space, and recursive methods will often overflow the stack while processing large inputs. (Recursion has its own advantages, but efficiency is not guaranteed.)
concat(a,head(tail(tail(acbc))))
concat(a,head(tail(cbc)))
concat(a,head(bc))
concat(a,b)
ab.
(a) True.
(b) False.
(c) True.
(d) True.
Take a $3 \times 3$ matrix $A$:
a b c
d e f
g h i
The matrix $I_{12}$:
0 1 0
1 0 0
0 0 1
The resulting product $AI_{12}$:
b a c
e d f
h g i
So the first column of $AI_{12}$ is the same as the second column of $A$.
$\Theta(\log (n2^n)) $
$= \Theta(\log n + \log 2^n) $
$= \Theta(\log n + n) $
$=\Theta(n)$
$$\begin{align}
S &= \langle a_0, S_1 \rangle\\
S_1 &= \langle a_1, a_2, a_3 \ldots a_{n-1} \rangle
\end{align}$$
Two possible cases arise:
 $a_0$ is included in the max weight subsequence of $S$:
In this case, $X = \text{weight} \Bigl( \langle a_0, S_1 \rangle \Bigr) = a_0 + \dfrac Y 2$
 $a_0$ is not included in the max weight subsequence of $S$:
In this case, $X = \text{weight}(S_1) = Y$
Since the value of $a_0$ can be anything (negative or $<\frac Y 2$ in general) $\{ \because a_i \in \mathbb{R} \}$, it is possible that $Y > a_0 + \frac Y 2$.
The maximum possible weight of a subsequence of $S$ is given by: $$X = \max \left ( Y, a_0 + \frac Y 2 \right )$$
Thus, option B is correct.
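As a sanity check, the recurrence in option (B) can be evaluated right to left. The sketch below is my own illustration (function name and the assumption that the empty subsequence, with weight 0, is allowed are mine, not from the question):

```c
/* Illustrative sketch (not from the question): scan right to left, with W
 * holding the maximum possible weight of a subsequence of a_i ... a_{n-1}.
 * Each step applies W = max(W, a_i + W/2), mirroring X = max(Y, a_0 + Y/2). */
double max_subsequence_weight(const double a[], int n) {
    double W = 0.0;                       /* weight of the empty subsequence */
    int i;
    for (i = n - 1; i >= 0; --i) {
        double with_ai = a[i] + W / 2.0;  /* include a_i: later terms halve */
        if (with_ai > W)
            W = with_ai;                  /* otherwise exclude a_i */
    }
    return W;
}
```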
(A) is the answer.
The algorithm stores the optimal solution to each subproblem (for each $i$) and then uses it to derive the optimal solution of a bigger problem, which is the dynamic programming approach. The program also has linear time complexity.
http://stackoverflow.com/questions/1065433/what-is-dynamic-programming
Branch and bound arises when we explore all possible solutions (branching) and retreat as soon as we know the current branch cannot lead to an optimal solution (in classical backtracking we retreat only when no solution can be found). So, backtracking gives all possible solutions, while branch and bound gives only the optimal one. http://www.cs.cornell.edu/~wdtseng/icpc/notes/bt2.pdf
The given algorithm is neither backtracking nor branch and bound, because we never branch in the solution space.
And the algorithm is not divide and conquer, as we are not dividing the problem and then merging the solutions, as in merge sort (where merge is the conquer step).
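The linear-time DP described in the question can be sketched directly (the array bound and function name below are illustrative, not from the question):

```c
/* Sketch of the question's algorithm: L[i] = length of the longest
 * monotonically increasing run starting at index i, filled right to left. */
#define MAXN 64   /* illustrative bound; assume n <= MAXN */

int longest_increasing_run(const int A[], int n) {
    int L[MAXN];
    int i, best;
    L[n - 1] = 1;                                    /* base case */
    for (i = n - 2; i >= 0; --i)                     /* reuses L[i+1]: DP */
        L[i] = (A[i] < A[i + 1]) ? 1 + L[i + 1] : 1;
    best = L[0];
    for (i = 1; i < n; ++i)                          /* Max(L_0, ..., L_{n-1}) */
        if (L[i] > best)
            best = L[i];
    return best;
}
```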
$T(9) = 1 + \min(T(y), T(z)) = 1 + \min(\text{steps from } y \text{ to } 100, \text{steps from } z \text{ to } 100)$.
From 9 there are only two possible moves: a unit step to the right on the number line, i.e., to 10, or the given shortcut to 15.
Hence $y = 10$, $z = 15$, and $yz = 10 \times 15 = 150$.
Ans: The minimum number of comparisons required to find the minimum and the maximum. The approach is divide and conquer:
T(n) = T(floor(n/2)) + T(ceil(n/2)) + 2
T(2) = 1   // two elements: one comparison yields both the max and the min
T(1) = 0   // one element is both the max and the min
If n is a power of 2, then we can write T(n) as:
T(n) = 2T(n/2) + 2
Solving the above recurrence, we get
T(n) = 3n/2 - 2
Thus the approach does $3n/2 - 2$ comparisons for even $n$ (more generally, $3\lceil n/2 \rceil - 2$).
So, putting $n = 100$, we get $(3/2)(100) - 2 = 148$ comparisons.
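The pairwise scheme behind that count can be sketched as follows; the struct and the explicit comparison counter are my own additions for illustration (assumes $n \geq 2$ and $n$ even):

```c
/* Pairwise min-max sketch: process elements two at a time. The first pair
 * costs 1 comparison; every later pair costs 3 (pair, vs min, vs max),
 * giving 3n/2 - 2 comparisons for even n (148 for n = 100). */
typedef struct { int min, max, comparisons; } MinMax;

MinMax min_max(const int a[], int n) {   /* assumes n >= 2 and n even */
    MinMax r;
    int i;
    if (a[0] < a[1]) { r.min = a[0]; r.max = a[1]; }
    else             { r.min = a[1]; r.max = a[0]; }
    r.comparisons = 1;
    for (i = 2; i + 1 < n; i += 2) {
        int lo = a[i], hi = a[i + 1];
        if (lo > hi) { lo = a[i + 1]; hi = a[i]; }   /* 1 comparison */
        if (lo < r.min) r.min = lo;                  /* +1 */
        if (hi > r.max) r.max = hi;                  /* +1 */
        r.comparisons += 3;
    }
    return r;
}
```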
Label the chain's nodes $1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9$.
$\{2, 5, 8\}$ is a maximal independent set for a chain of 9 nodes: no two chosen nodes are adjacent, and if we add any other node to the set it will no longer be independent. So the smallest MIS has size 3.
If all bags had 10 gm coins, picking 1, 2, 4, 8, 16 coins would give a total weight of
10 + 20 + 40 + 80 + 160 = 310 gm. The actual total is 323 gm, which is 13 gm more, and 13 splits as 1 + 4 + 8.
So 11 + 20 + 44 + 88 + 160 = 323, meaning the 1st, 3rd, and 4th bags have 11 gm coins. The product of the labels is 1 × 3 × 4 = 12.
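A quick check of that arithmetic as a sketch (the function and the 0/1 encoding of "heavy" bags are mine, for illustration only):

```c
/* Illustrative check: bag k (1-based) contributes 2^(k-1) coins; heavy[k-1]
 * says whether that bag holds 11 gm coins instead of 10 gm ones. */
int total_weight(const int heavy[5]) {
    int w = 0, coins = 1, k;
    for (k = 0; k < 5; ++k) {
        w += coins * (heavy[k] ? 11 : 10);  /* 10 gm or 11 gm per coin */
        coins *= 2;                         /* 1, 2, 4, 8, 16 coins */
    }
    return w;
}
```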
Answer is (C). The following algorithm would do.
Since the arrays are binary, each prefix sum is at most $n$, so the difference of the two prefix sums can vary between $-n$ and $n$. We use an array start to keep the starting index of each possible difference value (hence of size $2n+1$) and an array end to keep the ending index (these two arrays work like hash tables, and since we have only $2n+1$ possible keys, we can do perfect hashing). So, our required solution is the maximum of end[i] - start[i], provided both are assigned values.
The algorithm works as follows:
 Initialize the diff array to contain the running difference of the prefix sums of a and b, i.e., $diff[i] = \sum_{k=0}^{i} (a[k] - b[k])$.
 Now diff[i] can take values from $-n$ to $n$, which gives $2n+1$ possible values; the first occurrence of a diff value marks the beginning of a span and the last occurrence marks the end. We use the start and end arrays to store these two positions for the $2n+1$ possible values.
 Now, the largest value of end[i] - start[i] for any i is the largest span; it starts at start[i] + 1 and ends at end[i]. If the span starts from the first position itself (the two prefix sums are already equal there), then it starts from start[i] itself.
#include <stdio.h>
#define size 100 // assume n is less than 100

int main() {
    int n, a[size], b[size];
    int start[2*size+1], end[2*size+1];
    int sum1 = 0, sum2 = 0, i;
    int diff[size];
    printf("Enter n: ");
    scanf("%d", &n);
    for (i = 0; i < n; i++) {
        printf("Enter a[%d]: ", i);
        scanf("%d", &a[i]);
    }
    for (i = 0; i < n; i++) {
        printf("Enter b[%d]: ", i);
        scanf("%d", &b[i]);
    }
    for (i = 0; i < n; i++) {
        if (a[i]) sum1++;
        if (b[i]) sum2++;
        diff[i] = sum1 - sum2;
    }
    for (i = 0; i < 2*n; i++)
        start[i] = -1, end[i] = -1;
    start[n] = end[n] = 0;
    // initially sum is 0 at the beginning of the array, and the first n-1
    // elements of start and end are used if the sum of A till the ith element
    // is less than the sum of B till the ith element
    for (i = 0; i < n; i++) {
        if (start[diff[i] + n] == -1) // interested only in the first occurrence of diff[i]
            start[diff[i] + n] = i;
        end[diff[i] + n] = i;         // interested in the last occurrence of diff[i]
    }
    int max = -1;
    int savei = -1; // savei stores the sum having the largest span
    for (i = 0; i < 2*n; i++) {
        if (start[i] > -1 && (end[i] - start[i] > max)) {
            max = end[i] - start[i];
            savei = i;
        }
    }
    if (savei >= 0) {
        printf("The largest span is from %d to %d\n",
               start[savei] + (savei != n), end[savei]);
        // when sum zero has the largest span, the span starts from the first
        // element itself; else it starts from the next element, from which
        // the span does not change
    } else {
        printf("No span\n");
    }
}
Option (d)
In the given algorithm, the for loop contains the logical expression
z[i] = (x[i] ∧ ~y[i]) ∨ (~x[i] ∧ y[i]);
The equivalent set representation of the expression, if we write z[i], x[i], y[i] as the sets $Z$, $X$, $Y$, is
$Z = (X \cap Y') \cup (X' \cap Y)$
$Z = (X - Y) \cup (Y - X) \quad [\because A \cap B' = A - B]$
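A runnable sketch of that loop, with $n$ passed as a parameter instead of the global used in the question:

```c
/* Sketch of algorithm zzz: z becomes the characteristic vector of the
 * symmetric difference (X - Y) ∪ (Y - X): exactly one of x[i], y[i] is 1. */
void zzz(const int x[], const int y[], int z[], int n) {
    int i;
    for (i = 0; i < n; ++i)
        z[i] = (x[i] && !y[i]) || (!x[i] && y[i]);  /* in exactly one set */
}
```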
Since each row and each column is sorted in ascending order, $a[0][0]$ is always the minimum, so the smallest element needs no comparison at all.
For the second smallest we only have to compare $a[0][1]$ with $a[1][0]$, so one comparison suffices.
If the elements are not all distinct, we may have to examine all elements of the array, so the complexity becomes $O(n^2)$ in that case.
So, the minimum number of comparisons is $O(1)$.
For n = 8, we can do
$b = a \times a$
$b = b\times b$
$b = b\times b$ and we get $b = a^8$
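In general, repeated squaring computes $a^n$ in $\Theta(\log n)$ multiplications; a sketch (function name mine):

```c
/* Exponentiation by squaring: walk over the bits of n, squaring the base at
 * each step, so only Θ(log n) multiplications are needed (3 squarings for
 * the a^8 example above). */
long power(long a, unsigned n) {
    long result = 1;
    while (n > 0) {
        if (n & 1)         /* current bit set: multiply into the result */
            result *= a;
        a *= a;            /* square the base for the next bit */
        n >>= 1;
    }
    return result;
}
```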
We require at most $n/2$ exchanges in the worst case. The algorithm is as given below:
Find a positive number from the left side and a negative number from the right side and exchange them. Each exchange puts one negative and one positive value into its correct region, and at least one of the two counts (negatives or positives) must be less than or equal to $n/2$, so there cannot be more than $n/2$ exchanges. An implementation is given below:
http://gatecse.in/wiki/Moving_Negative_Numbers_to_the_Beginning_of_Array
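The exchange procedure can be sketched as follows; the function name and the swap counter are my own additions, and zeros are treated as positive here (an assumption the question leaves open):

```c
/* Two-pointer sketch: scan a non-negative value from the left and a negative
 * value from the right and swap them. Each swap fixes two elements, so at
 * most n/2 exchanges are performed. */
int separate_signs(int a[], int n) {
    int i = 0, j = n - 1, swaps = 0;
    while (i < j) {
        while (i < j && a[i] < 0) ++i;    /* a[i] already in place */
        while (i < j && a[j] >= 0) --j;   /* a[j] already in place */
        if (i < j) {
            int t = a[i]; a[i] = a[j]; a[j] = t;
            ++swaps;
        }
    }
    return swaps;
}
```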
Here the while loop executes until m == n.
Take any two numbers as m and n and compute it to get the answer.
The answer will be (C).
Ans B should be correct.
We can move from the right, keeping note of the maximum element seen so far (say current_max). At the start, the rightmost element is always a leader. If an element is greater than current_max, it will be a leader: add it to the leaders, set current_max to this element and carry on leftward. Time complexity would be O(n).
= 1 $\times$ (probability that the first element is x) + 2 $\times$ (probability that the second element is x) + .... + n $\times$ (probability that the last element is x)
$= \frac{1}{n} + \frac{2}{n} + \frac{3}{n} +.... +\frac{n}{n} $
$ = \frac{\left ( \frac{n\times (n+1)}{2} \right ) } {n} $
$ = \frac{n+1}{2}$
A. push each element and pop it immediately; the sequence of popped elements will come to abcd
B. first push a, b, c, d, and after that pop one by one; the sequence of popped elements will come to dcba
C. push a, b, c, and after that pop one by one; the sequence of popped elements will come to cba; now push d and pop d, and the final sequence comes to cbad
D. this sequence is not possible because 'a' cannot be popped before 'b' in any case
B seems like the average case, but we are asked for the worst case.
The worst case is when we do not find the element in the list: we might end up searching the entire list, comparing with each element. So answer -> D. n
Take $n=2^{1024}$
Now, $2^{\sqrt{(2 \log n)}} \approx 2^{45}$
$n^{\frac{1}{3}} \approx 2^{341}$
$n / \log n = 2^{1024}/1024 \approx 2^{1014}$
Just one value is not enough to confirm growth rate. So, take $n = 1024$.
Now, $2^{\sqrt{(2 \log n)}} \approx 2^{4}$
$n^{\frac{1}{3}}\approx 2^{3}$
$n / \log n = 2^{10}/10 \approx 2^{7}$
So, as $n$ increases the gap between second and third function increases and also the second function overtakes the first. So, $f1 < f2 < f3$.
Here the list is (48, 99, 120, 165, 273).
gcd(48, 99) = 3; subtracting (99 - 48 = 51) gives a number that is also divisible by 3,
so the numbers (3, 6, ..., 99) are added: total = 99/3 = 33.
gcd(48, 120) = 24, so the multiples of 24 like (24, 48, ..., 120) are added: total = 120/24 = 5.
gcd(48, 165) = 3, so the numbers (3, 6, 9, ..., 165) are added: total = 165/3 = 55.
At the end, gcd(48, 273) = 3, so the numbers (3, 6, 9, ..., 273) are added (which covers all the above numbers).
So the total count of numbers added to this list = 273/3 = 91.
To be accurate, it will need 3n/2 - 2 comparisons.
Option (C) is correct .. Reason is as follows :
Here , At first level we are Given n elements , out of which we have to find smallest 3 numbers.
We compare the elements in pairs (2 at a time), as shown in the figure, and get n/2 elements at the second level.
Note: Minimum element is in these n/2 elements.
So, comparisons for this is n/2..
Similarly for next level we have n/4 Comparisons & n/2 elements..and so on..
Total comparisons till now: n/2 + n/4 + n/8 + .... + 4 + 2 + 1 = 2^{\log n} - 1 = n - 1 {use G.P. sum}
Now we have to get smallest 3..
We have 1st smallest at last level already => 0 Comparison for this..
=> the 2nd & 3rd smallest can be found in O(log n) time as shown below:
Minimum Element must have descended down from some path from top to Bottom..
=> SQUARES represent Candidates for 2nd minimum..
every element that is just below m1(first minimum) is a candidate for second minimum..
So, O(log n ) Comparisons for finding second smallest..
=> similarly for 3rd minimum we get O(log n) time.. As, every element that is just below 1st & 2nd minimum is a candidate for 3rd minimum ..
We will consider each permutation along with the number of times MIN is updated (counting the initial assignment).

$$\begin{array}{cc} \text{Permutation} & \text{Times MIN updated}\\ \hline 1,2,3 & 1\\ 1,3,2 & 1\\ 2,1,3 & 2\\ 2,3,1 & 2\\ 3,1,2 & 2\\ 3,2,1 & 3 \end{array}$$

Total number of times MIN is updated: 11.
Average number of times MIN is updated: 11/6.
Now going by the options, I am getting B.
H$_3$ = 1 + 1/2 + 1/3 = 11/6 .
H$_3$ is the answer and that is option B .
This post explains it nicely
https://www.quora.com/Howdoesfollowingalgorithmforfindinglongestpathintreework
answer = option E
Computable function: one which can be computed by a program using for/while loops.
Total function: defined for all possible inputs.
Well defined: its definition assigns it a unique value.
It was believed in the early 1900s that every computable function was also primitive recursive, but the discovery of the Ackermann function provided a counterexample.
The class of primitive recursive functions is a small subclass of the class of recursive functions. This means there are functions which are well-defined total functions and are computable but not primitive recursive; e.g., the Ackermann function.
This makes all options from A to D true,
but option E $\text{FALSE}$, as iterative (for-loop) programs are equivalent to only the primitive recursive class.
option C:
Look at the initial value of j: if j starts at 0, the double for loop will swap M[i][j] with M[j][i] and later also swap M[j][i] with M[i][j], so the matrix M will remain unchanged. To avoid this double swapping we need to initialize j = i and swap only the upper triangular part with the lower triangular part:

for (j = i; j < 4; ++j) {
    // code for swapping M[i][j] with M[j][i]
    t = M[i][j];
    M[i][j] = M[j][i];
    M[j][i] = t;
}
I think its Tower of Hanoi problem.
Therefore the total number of function calls is 2^n - 1 = 1023, giving n = 10: option B
Consider x, y, u, v = 17, 3, 3, 17.
x=14, v=20
x=11, v=23
x=8, v=26
x=5, v=29
x=2, v=32
y=1, u=35
x=1, v=67
This is the value obtained;
lastly, printing (x+y)/2 and (v+u)/2 gives 1 and 51.
The i loop executes n times. The j loop executes log n times for each i, so the value of p becomes log n. The k loop executes log p times, which is log log n times, for each iteration of i, and in each of these q is incremented. So, over all iterations of i, q will be incremented n log log n times. So, D choice.
Answer is (B)
For some 'i', if we find that the difference A[j] - A[i] < S, we increment 'j' to make the difference wider so that it can become equal to S.
If at some point the difference becomes greater than S, we know that it won't reduce further for the same 'i', and so we increment 'i'.
We do this for each 'i', if not found in a previous iteration, until i = n.
The answer is D
#include <stdio.h>

int main(void) { return 0; }

int anagram(char *a, char *b) {
    /* ASCII characters are 7 bits, so a count array of 128 entries
       covers all the ASCII characters (0-127) */
    int count[128], j;
    /* initialize the count of every ASCII character to 0 (zero) */
    for (j = 0; j < 128; j++)
        count[j] = 0;
    j = 0;
    /* "a[j] && b[j]" ensures that anagram returns 0 (false) when the two
       strings have different lengths, because strings of different
       lengths cannot be anagrams of each other */
    /* Logic: the loop below increments the ASCII-equivalent position in
       the count array for each occurrence in 'a', and decrements it for
       each occurrence in 'b'.
       Example: a = "ISS", b = "SIS" (ASCII: I = 73, S = 83)
       j = 0: Statement A increments count[73]: 0 -> 1
              Statement B decrements count[83]: 0 -> -1, and j -> 1
       j = 1: Statement A increments count[83]: -1 -> 0
              Statement B decrements count[73]: 1 -> 0, and j -> 2
       j = 2: Statement A increments count[83]: 0 -> 1
              Statement B decrements count[83]: 1 -> 0, and j -> 3
       *** END OF LOOP *** */
    while (a[j] && b[j]) {
        A;  // count[a[j]]++
        /* Note: j is incremented only after count[] is accessed.
           Resource: http://www.c4learn.com/cprogramming/incrementoperatorinsideprintf */
        B;  // count[b[j++]]--
    }
    /* This loop checks whether the number of occurrences of each ASCII
       character matches:
       count[j] == 0 -> same number of occurrences for character j
       count[j] != 0 -> different number of occurrences -> return 0 (false) */
    for (j = 0; j < 128; j++)
        if (count[j])
            return 0;
    return 1;
}
1. Bellman-Ford algorithm => O(nm). Taking n as edges and m as vertices: for every vertex we relax all edges, giving m*n, i.e. O(mn).
2. Kruskal's algorithm => remaining option, A: O(m log n)
3. Floyd-Warshall algorithm => dynamic programming algorithm, O(N^3)
4. Topological sorting => boils down to DFS, O(V+E) -> D
Answer A)
Solution: B
The problem can be modeled as a graph coloring problem. Construct a graph with one node corresponding to each activity $A, B, C, D, E, F, G$ and $H$. Connect the activities that occur between the start and end time of an activity. Now the chromatic number of the graph is the number of rooms required.
Array f is used to store the fib() values calculated in order to save repeated calls. Since n = 0 and n = 1 are special cases we can store fib(2) to f[0], fib(3) to f[1] and so on. The missing code completed would be:
if (f[n-2] != 0) {
    return f[n-2];
}
t = fib(n-1) + fib(n-2);
f[n-2] = t;
return t;
In this code, fib(i) will do a recursion only once as once fib(i) is calculated it is stored in array. So, the time complexity for fib(n) would be $\Theta(n)$.
PS: We can also store fib(n) in f[n]; the above code just saves 2 elements' space in the array.
$$A(n) = O(W(n)) $$
answer = option B
Whenever there exists an element which is present in the array more than $\frac{n}{2}$ times, it will definitely be present at the middle index position; in addition, it will also be present at at least one of the neighbouring indices $i-1$ and $i+1$.
No matter how we position that run of more than $\frac{n}{2}$ equal elements within the sorted array, it is bound to cover the middle index and at least one of its neighbours.
Once we have the element which should occur more than $n/2$ times, we count its total occurrences in $\mathcal{O}(\log n)$ time.
Recurrence relation for Towers of Hanoi is
T(1) = 1
T(n) = 2T(n-1) + 1
So Answer should be (D)
Arrange the files in increasing order of records:

D(5), A(10), C(15), B(20), E(25)

Merge the two smallest files at each step:
5(D) + 10(A) = 15
15 + 15(C) = 30
20(B) + 25(E) = 45
30 + 45 = 75

Number of record movements = 15 + 30 + 45 + 75 = 165.
I think answers are
80. (B)
$X[i][j] = X[i-1][j] \cup X[i-1][j-a_i]$
$\text{We calculate the value of } X[i][j] \text{ as follows: either we include the value }$
$a_i, \text{ taking the subset from } X[i-1][j-a_i], \text{ or we do not include the value, taking } X[i-1][j].$
81. (C)
#include <stdio.h>
#include <stdbool.h>

// Returns true if there is a subset of set[] with sum equal to the given sum
bool isSubsetSum(int set[], int n, int sum) {
    // The value of subset[i][j] will be true if there is a subset of
    // set[0..j-1] with sum equal to i
    bool subset[sum + 1][n + 1];

    // If sum is 0, then the answer is true
    for (int i = 0; i <= n; i++)
        subset[0][i] = true;

    // If sum is not 0 and the set is empty, then the answer is false
    for (int i = 1; i <= sum; i++)
        subset[i][0] = false;

    // Fill the subset table in bottom-up manner
    for (int i = 1; i <= sum; i++) {
        for (int j = 1; j <= n; j++) {
            subset[i][j] = subset[i][j - 1];
            if (i >= set[j - 1])
                subset[i][j] = subset[i][j] || subset[i - set[j - 1]][j - 1];
        }
    }

    /* // uncomment this code to print the table
    for (int i = 0; i <= sum; i++) {
        for (int j = 0; j <= n; j++)
            printf("%4d", subset[i][j]);
        printf("\n");
    }
    */
    return subset[sum][n];
}
Answer is (a)
The effect of the above 3 reversals, for any k, is equivalent to a left rotation of the array of size n by k.
Let S[1..7] = [1, 2, 3, 4, 5, 6, 7], so n = 7, k = 2.
reverse(S,1,2) we get [2,1,3,4,5,6,7]
reverse(S,3,7) we get [2,1,7,6,5,4,3]
reverse(S,1,7) we get [3,4,5,6,7,1,2]
Hence option (a), rotates s left by k positions, is correct.
Ref: http://www.geeksforgeeks.org/unionfind/
For N=9, it returns 3.
For N=10 it returns 4.
For N=16 it returns 4.
For N=17 it returns 5.
So answer should be C.
Consider an array $A[1...n]$. It consists of a permutation of numbers $1....n$. Now compute another array $B[1...n]$ as follows: $B[A[i]]:= i$ for all $i$. Which of the following is true?
 $B$ will be a sorted array.
 $B$ is a permutation of array $A$.
 Doing the same transformation twice will not give the same array.
 $B$ is not a permutation of array $A$.
 None of the above.
An array A contains $n \geq 1$ positive integers in the locations $A[1], A[2], \dots A[n]$. The following program fragment prints the length of a shortest sequence of consecutive elements of $A$, $A[i], A[i+1], \dots,A[j]$ such that the sum of their values is $\geq M$, a given positive number. It prints ‘n+1’ if no such sequence exists. Complete the program by filling in the boxes. In each case use the simplest possible expression. Write only the line number and the contents of the box.
begin
    i:=1; j:=1;
    sum := ◻
    min:=n; finish:=false;
    while not finish do
        if ◻ then
            if j=n then finish:=true
            else begin j:=j+1; sum:= ◻ end
        else begin
            if (j-i) < min then min:=j-i;
            sum:=sum - A[i];
            i:=i+1;
        end;
    writeln(min + 1);
end.
The following Pascal program segments finds the largest number in a twodimensional integer array $A[0\dots n1, 0\dots n1]$ using a single loop. Fill up the boxes to complete the program and write against $\fbox{A}, \fbox{B}, \fbox{C} \text{ and } \fbox{D}$ in your answer book Assume that max is a variable to store the largest value and $i, j$ are the indices to the array.
begin
    max:=A; i:=0; j:=0;
    while B do begin
        if A[i, j] > max then max:=A[i, j];
        if C then j:=j+1
        else begin j:=0; i:=D end
    end
end
Answers: Arrays
Option b) $B$ is a permutation of array $A$.
In fact, $B$ gives the reverse index of all the elements of array $A$. Since the array $A$ contains the numbers $[1 .. n]$ mapped to the locations $[1 .. n]$ and $A$ is a permutation of the numbers $[1 .. n]$, the array $B$ will also be a permutation of the numbers $[1 .. n]$.
For example: $$\begin{array}{lcccccccc} \text{index} & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8\\[1em] \hline A & 5 & 1 & 3 & 7 & 6 & 2 & 8 & 4\\ B & 2 & 6 & 3 & 8 & 1 & 5 & 4 & 7 \end{array}$$
To see that option c is incorrect, let array $C$ be the array attained from doing the same transformation twice, that is, $C[B[i]] = i , \forall i \in [1 .. n]$. We get, $$\begin{array}{lcccccccc} \text{index} & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8\\[1em] \hline A & 5 & 1 & 3 & 7 & 6 & 2 & 8 & 4\\ B & 2 & 6 & 3 & 8 & 1 & 5 & 4 & 7\\ C & 5 & 1 & 3 & 7 & 6 & 2 & 8 & 4 \end{array}$$
We can see that $C = A$, which makes option c incorrect.
i = 1;
j = n;
while (i != j) {
    if (A[i] + A[j] == M) break;
    else if (A[i] + A[j] < M) i++;
    else j--;
}
begin
    i:=1; j:=1;
    sum := A[1];
    min:=n; finish:=false;
    while not finish do
        if sum < M then
            if j=n then finish:=true
            else begin j:=j+1; sum:= sum + A[j] end
        else begin
            if (j-i) < min then min:=j-i;
            sum:=sum - A[i];
            i:=i+1;
        end;
    writeln(min + 1);
end.
Algorithm
'i' indicates the starting marker and 'j' acts as ending marker for the sum sequence. 'sum' is initialised as the first element in the array because the algorithm proceeds by taking the sum of remaining elements. 'finish' is a boolean variable that indicates exit from the loop.
After entering the loop for the first time with 'finish' as false, the sum is checked if it's strictly less than "M". If that's the case j is incremented and the sum is modified to sum + A[j]. When 'sum' becomes greater than or equal to 'M', 'min' is modified to the latest number of elements that make the sum greater than or equal to 'M' and then, the first element is stripped off from the sum and 'i' is incremented by one to move the initial marker of the sum sequence. The loop runs till 'j' reaches the end of the array.
The algorithm keeps track of 'min' i.e. the number of elements in the minimum sum sequence. This is very similar to the way we find the minimum value in an array by modifying the min value whenever a lesser value is encountered.
We have to traverse all elements in array. The code is doing this row wise.
begin
    max:=A[0, 0]; i:=0; j:=0;
    while (i < n) do begin
        if A[i, j] > max then max:=A[i, j];
        if (j < n-1) then j:=j+1
        else begin j:=0; i:=i+1 end
    end
end
Let $f(n) = n^2 \log n$ and $g(n) = n(\log n)^{10}$ be two positive functions of $n$. Which of the following statements is correct?
 $f(n) = O(g(n)) \text{ and } g(n) \neq O(f(n))$
 $g(n) = O(f(n)) \text{ and } f(n) \neq O(g(n))$
 $f(n) \neq O(g(n)) \text{ and } g(n) \neq O(f(n))$
 $f(n) =O(g(n)) \text{ and } g(n) = O(f(n))$
Let $f(n) = n$ and $g(n) = n^{(1 + sin \: n)}$ where $n$ is a positive integer. Which of the following statements is/are correct?
 $f(n) = O(g(n))$
 $f(n) = \Omega(g(n))$
 Only I
 Only II
 Both I and II
 Neither I nor II
Let $n$ be a large integer. Which of the following statements is TRUE?
 $n^{1 / \sqrt{\log_2 n}} < \sqrt{\log_2 n} < n^{1/100}$
 $n^{1/100} < n^{1 / \sqrt{\log_2 n}} < \sqrt{\log_2 n}$
 $n^{1 / \sqrt{\log_2 n}} < n^{1/100} < \sqrt{\log_2 n}$
 $\sqrt{\log_2 n} < n^{1 / \sqrt{\log_2 n}} < n^{1/100}$
 $\sqrt{\log_2 n} < n^{1/100} < n^{1 / \sqrt{\log_2 n}}$
Consider the following two functions:
$g_1(n) = \begin{cases} n^3 \text{ for } 0 \leq n \leq 10,000 \\ n^2 \text{ for } n \geq 10,000 \end{cases}$
$g_2(n) = \begin{cases} n \text{ for } 0 \leq n \leq 100 \\ n^3 \text{ for } n > 100 \end{cases}$
Which of the following is true?
 $g_1(n) \text{ is } O(g_2(n))$
 $g_1(n) \text{ is } O(n^3)$
 $g_2(n) \text{ is } O(g_1(n))$
 $g_2(n) \text{ is } O(n)$
Consider the equality $\sum_{(i=0)}^n i^3 = X$ and the following choices for $X$
 $\Theta(n^4)$
 $\Theta(n^5)$
 $O(n^5)$
 $\Omega(n^3)$
The equality above remains correct if $X$ is replaced by
 Only I
 Only II
 I or III or IV but not II
 II or III or IV but not I
Which of the following is false?
 $100n \log n=(\frac{n\log n}{100})$
 $\sqrt{\log n} = O(\log\log n)$
 If $0 < x < y \text{ then } n^x = O\left(n^y\right)$
 $2^n \neq O\left(n^k\right)$
Arrange the following functions in increasing asymptotic order:
 $n^{1/3}$
 $e^n$
 $n^{7/4}$
 $n \log^9n$
 $1.0000001^n$
A)  A, D, C, E, B
B)  D, A, C, E, B
C)  A, C, D, E, B
D)  A, C, D, B, E
Suppose $T(n) =2T (\frac{n}{2}) + n, T(0) = T(1) =1$
Which one of the following is FALSE?
 $T(n)=O(n^2)$
 $T(n)=\Theta(n \log n)$
 $T(n)=\Omega(n^2)$
 $T(n)=O(n \log n)$
Let f(n), g(n) and h(n) be functions defined for positive integers such that
f(n) = O(g(n)), g(n) ≠ O(f(n)), g(n) = O(h(n)), and h(n) = O(g(n)).
Which one of the following statements is FALSE?
 f(n) + g(n) = O(h(n) + h(n))
 f(n) = O(h(n))
 h(n) ≠ O(f(n))
 f(n)h(n) ≠ O(g(n)h(n))
Consider the following functions
 $f(n) = 3n^{\sqrt{n}}$
 $g(n) = 2^{\sqrt{n}{\log_{2}n}}$
 $h(n) = n!$
Which of the following is true?
 $h(n)$ is $O(f(n))$
 $h(n)$ is $O(g(n))$
 $g(n)$ is not $O(f(n))$
 $f(n)$ is $O(g(n))$
If $T_1 = O(1)$, give the correct matching for the following pairs:
(M) $T_n = T_{n1} + n$  (U) $T_n = O(n)$ 
(N) $T_n = T_{n/2} + n$  (V) $T_n = O(n \log n)$ 
(O) $T_n = T_{n/2} + n \log n$  (W) $T_n = O(n^2)$ 
(P) $T_n = T_{n1} + \log n$  (X) $T_n = O(\log^2 n)$ 
(A) MW NV OU PX
(B) MW NU OX PV
(C) MV NW OX PU
(D) MW NU OV PX
Consider the following functions:
$f(n) = 2^n$
$g(n) = n!$
$h(n) = n^{\log n}$
Which of the following statements about the asymptotic behavior of $f(n), g(n)$ and $h(n)$ is true?
 $f\left(n\right)=O\left(g\left(n\right)\right); g\left(n\right)=O\left(h\left(n\right)\right)$
 $f\left(n\right) = \Omega\left(g\left(n\right)\right); g(n) = O\left(h\left(n\right)\right)$
 $g\left(n\right) = O\left(f\left(n\right)\right); h\left(n\right)=O\left(f\left(n\right)\right)$
 $h\left(n\right)=O\left(f\left(n\right)\right); g\left(n\right) = \Omega\left(f\left(n\right)\right)$
The tightest lower bound on the number of comparisons, in the worst case, for comparisonbased sorting is of the order of
 $n$
 $n^2$
 $n \log n$
 $n \log^2n$
Which of these functions grows fastest with $n$?
 $e^{n}/n$.
 $e^{n-0.9 \log n}$.
 $2^{n}$.
 $\left(\log n\right)^{n1}$.
 None of the above.
Which of the given options provides the increasing order of asymptotic complexity of functions $f_1, f_2, f_3$ and $f_4$?
 $f_1(n) = 2^n$
 $f_2(n) = n^{3/2}$
 $f_3(n) = n \log_2 n$
 $f_4(n) = n^{\log_2 n}$
(A) $f_3, f_2, f_4, f_1$
(B) $f_3, f_2, f_1, f_4$
(C) $f_2, f_3, f_1, f_4$
(D) $f_2, f_3, f_4, f_1$
Consider the following three claims
 $(n+k)^m = \Theta(n^m)$ where k and m are constants
 $2^{n+1} = O(2^n)$
 $2^{2n+1} = O(2^n)$
Which of the following claims are correct
 I and II
 I and III
 II and III
 I, II, and III
Answers: Asymptotic Notations
$$\begin{array}{c|c|c} & f(n) = n^2\log n & g(n) = n(\log n)^{10}\\ \hline n = 2^{10} & 10 \times 2^{10} \times 2^{10} & 2^{10} \times 10^{10}\\ n = 2^{256} & 256 \times 2^{256} \times 2^{256} & 2^{256} \times 256^{10} \end{array}$$

So, as n grows larger, f(n) overtakes g(n) and the growth rate of f is faster than that of g. So, g(n) = O(f(n)) and f(n) ≠ O(g(n)).
B choice.
The answer is option D.
Since the value of sin(n) will always range from -1 to +1, g(n) can take values from $n^0 = 1$ up to $n^2$.
Hence, if g(n) = 1, Statement I is incorrect.
And, if g(n) = n^2, then Statement II is incorrect.
Let $n = 2^x$. Then, $\log_2 n = x$
$$\begin{array}{rlllcll} f(n) &=& n^{1/\sqrt{\log_2 n}} &=& \left (2^x \right )^{1/\sqrt{x}} = 2^{x/\sqrt{x}} &=& 2^{\sqrt{x}}\\[1em] g(n) &=& \sqrt{\log_2 n} &=& \sqrt{\log_2 {\left ( 2^x \right )}}&=& \sqrt{x}\\[1em] h(n) &=& n^{1/100} &=& \left ( 2^x \right )^{1/100} &=& 2^{x/100} \end{array}$$
Since exponentials grow faster than polynomials, $h(n) > g(n)$ for large $n$.
Since linear functions grow faster than square roots, $\frac{x}{100} > \sqrt{x}$ for large $x$. Thus, $h(n) > f(n)$ for large $n$.
Since exponentials grow faster than polynomials, $2^{\sqrt{x}} > \sqrt{x}$ for large $\sqrt{x}$. Thus, $f(n) > g(n)$ for large $n$.
Hence, the relation is,
$g(n) < f(n) < h(n)$
Thus, option D is correct.
Options A and B are true here.
Sum of the cubes of the first n natural numbers is given by (n(n+1)/2 )^{2} which is ϴ(n^{4}). So, I, III and IV are correct. II is wrong. C choice.
 $100n \log n=(\frac{n\log n}{100})$ : can't comment on it; it is not given properly in the paper.
 $\sqrt{\log n} = O(\log\log n)$ : false. Take $n = 2^{256}$: the LHS gives $\sqrt{256} = 16$ but the RHS gives only $\log 256 = 8$. (Taking log of both sides to compare growth rates is not valid in general.)
 If $0 < x < y \text{ then } n^x = O\left(n^y\right)$ : true, since $y$ is always greater than $x$, the RHS always dominates the LHS.
 $2^n \neq O\left(n^k\right)$ : true. Since $k$ is a constant, for large values of $n$ the LHS is far larger than the RHS (an exponential function eventually exceeds any polynomial).
Only B is false
E < B
and
C, D < E as E is exponential function.
Now, we just need to see if C or D is larger.
In C we have a term $n^{3/4}$ and correspondingly in D we have $\log^9n$ (after taking $n$ out).
$n^{3/4}$ is asymptotically larger than $\log^9n$ as when $n = 10^{100}, \log^9 n$ gives $100^9$, while $n^{3/4}$ gives $10^{75} > 100^{37}$ a much higher value and this is true for all higher values of $n$. So, D < C.
Thus A is correct.
Applying the Master theorem, T(n) = $\Theta(n \log n)$. So, it can't be $\Omega(n^2)$.
Hence answer is C.
Answer is (D)
We can verify: f = O(g) but g ≠ O(f), therefore f < g.
Also, g and h have the same order of growth, as g = O(h) and h = O(g).
$$\begin{array}{l|l|l} & n = 256 & n = 65536\\ \hline f(n) = 3n^{\sqrt{n}} & 3 \times 256^{16} = 3 \times 2^{128} & 3 \times 65536^{256} = 3 \times 2^{16 \times 256} = 3 \times 2^{4096}\\ g(n) = 2^{\sqrt{n}\log_{2}n} & 2^{16 \times 8} = 2^{128} & 2^{256 \times 16} = 2^{4096}\\ h(n) = n! & 256! = O\left(\left(2^8\right)^{256}\right) = O\left(2^{2048}\right) & 65536! = O\left(\left(2^{16}\right)^{65536}\right) = O\left(2^{1M}\right) \end{array}$$
The case of h(n) is given only as an upper bound, but the factorial has a higher growth rate than the exponential.
f(n) and g(n) have the same order of growth, as f(n) is simply 3g(n) (we can prove this by taking logs also). So, (d) is correct and all other choices are false.
(N) $T(n) = \Theta(n) =O(n)$, third case of Master theorem
($f(n) = n = \Omega \left(n^{\log_b a+ \epsilon}\right) = \Omega \left(n^{\log_2 1 + \epsilon}\right) = \Omega\left(n^{0 + \epsilon}\right) $, satisfied for any positive $\epsilon \leq 1$. Also, $af\left(\frac{n}{b}\right) < cf(n) \implies f\left(\frac{n}{2}\right) < cf(n) \implies \frac{n}{2} < cn$, satisfied for any $c$ between $0$ and $0.5$)
(O) $T(n) = \Theta(n \log n) = O (n \log n)$, third case of Master theorem
($f(n) = n \log n =\Omega\left(n^{\log_b a+ \epsilon}\right) = \Omega \left(n^{\log_2 1 + \epsilon}\right) = \Omega\left(n^{0 + \epsilon}\right) $, satisfied for positive $\epsilon = 0.5$. Also, $af\left(\frac{n}{b}\right) < cf(n) \implies f\left(\frac{n}{2}\right) < cf(n) \implies \frac{n}{2} \log \frac{n}{2} < cn \log n$, satisfied for $c = 0.5$)
(P) Like in (M), here we are adding the log of the first n natural numbers. So,
$T_n = \log 1 + \log 2 + \log 3 + \dots + \log n$
$=\log (1\times 2\times \dots \times n)$
$=\log(n!)$
$=\Theta(n \log n)$ (Stirling's Approximation)
$g(n) = n!$ on expanding factorial we get $g(n) = \mathcal{O}(n^n)$ :
$$\begin{align*} n^n &> n^{\log n} \\ n^n &> 2^n \end{align*}$$
This condition is violated by options A, B and C (by the first statement of each); hence they cannot be true.
The second statement of option D says that $g(n)$ is asymptotically the biggest of all.
answer = option D
The tightest lower bound of sorting, say S$(n)$, is $n \log n$: every comparison-based sort needs $\Omega(n \log n)$ comparisons in the worst case, and there is no function $f$ with an order of growth larger than $n \log n$ for which $S(n) = \Omega(f(n))$ holds.
A usual mistake is to think the worst case changes with lower and upper bounds, but that is not the case. The worst case is defined for the algorithm, and it is always the input which causes the algorithm the maximum complexity.
Assuming that the base of the $\log$ in the question is $e$.
Let us try to rewrite each of these functions in the form $e^{\,\small\rm something}$, to make the comparison easier.
$$\def\d{\displaystyle\,}
\begin{array}{lll}
a. & e^{n} / n & = \dfrac{e^{\d n}}{e^{\d(\ln n)}} & = e^{\d (n  \ln n)}\\[1em]
b. & e^{n  0.9 \ln n} &&= e^{\d (n  0.9 \ln n)}\\[1em]
c. & 2^n &= \left (e^{\d\ln 2} \right )^{\d n} &= e^{\d (n \ln 2)}\\[1em]
d. & (\ln n)^{n1} &= \left (e^{\d\ln \ln n} \right )^{\d n1} &= e^{\d (n \ln \ln n  \ln\ln n)}
\end{array}$$
Now, if we just compare the exponents of all, we can clearly see that $(n \ln \ln n  \ln \ln n)$ grows faster than the rest. Note that in option c. the multiplicative $\ln 2$ is a constant, and hence grows slower than the multiplicative $\ln \ln n$ from option d.
This implies that $e^{\d (n \ln \ln n  \ln\ln n)}$ grows the fastest, and hence, $(\ln n)^{n1}$ grows the fastest.
Thus, option d. is the correct answer.
answer A
$n\log_2 n < n^{3/2}$ is quite straightforward;
also $n^{3/2} < n^{\log_2 n}$ and $n^{3/2} < 2^n$.
Now only $n^{\log_2 n}$ and $2^n$ need to be compared.
Taking logs of both gives $(\log_2 n)^2$ and $n$, and
$n > (\log_2 n)^2$,
hence $2^n > n^{\log_2 n}$.
1) $(n+k)^m = \Theta(n^m)$: with $k$ and $m$ constants, expanding $(n+k)^m$ gives $n^m$ plus lower-order terms, so TRUE
2) $2^{n+1} = 2*(2^n) = \Theta\left(2^n\right)$ as 2 is a constant here
As $2^{n+1}$ is both upper and lower bounded by $2^n$ we can say $2^{n+1} = O\left(2^n\right)$
so TRUE
3) $2^{2n+1}$ has same rate of growth as $2^{2n}$
$2^{2n} = \left(2^n\right)^2$
$2^n$ is upper bounded by $(2^n)^2$, not the other way round
so FALSE
In a binary max heap containing $n$ numbers, the smallest element can be found in time
 $O(n)$
 $O(\log n)$
 $O(\log \log n)$
 $O(1)$
Answers: Binary Heap
In a max heap, the smallest element is always present at a leaf node. Heap being a complete binary tree, there can be up to $\frac{n}{2}$ leaf nodes and to examine all of them we would need $O(n)$ time.
Suppose you are given an array $A$ with $2n$ numbers.
The numbers in odd positions are sorted in ascending order, that is, $A[1] \leq A[3] \leq \ldots \leq A[2n  1]$.
The numbers in even positions are sorted in descending order, that is, $A[2] \geq A[4] \geq \ldots \geq A[2n]$.
What is the method you would recommend for determining if a given number is in the array?
 Sort the array using quicksort and then use binary search.
 Merge the sorted lists and perform binary search.
 Perform a single binary search on the entire array.
 Perform separate binary searches on the odd positions and the even positions.
 Search sequentially from the end of the array.
Consider the following C program that attempts to locate an element x in an array Y[ ] using binary search. The program is erroneous.
f (int Y[10], int x) {
    int i, j, k;
    i = 0; j = 9;
    do {
        k = (i + j) / 2;
        if (Y[k] < x) i = k; else j = k;
    } while ((Y[k] != x) && (i < j));
    if (Y[k] == x) printf("x is in the array");
    else printf("x is not in the array");
}
The correction needed in the program to make it work properly is
 Change line 6 to: if (Y[k] < x) i = k + 1; else j = k1;
 Change line 6 to: if (Y[k] < x) i = k  1; else j = k +1;
 Change line 6 to: if (Y[k] < x) i = k; else j = k;
 Change line 7 to: } while ((Y[k] == x) && (i < j)) ;
Consider the following program that attempts to locate an element $x$ in an array $a[ ]$ using binary search. Assume $N > 1$. The program is erroneous. Under what conditions does the program fail?
var i, j, k: integer; x: integer;
    a: array [1..N] of integer;
begin
    i := 1; j := N;
    repeat
        k := (i + j) div 2;
        if a[k] < x then i := k else j := k
    until (a[k] = x) or (i >= j);
    if (a[k] = x) then
        writeln ('x is in the array')
    else
        writeln ('x is not in the array')
end;
Consider the following three version of the binary search program. Assume that the elements of type $T$ can be compared with each other; also assume that the array is sorted.
i, j, k : integer; a : array [1..N] of T; x : T;

Program 1:
i := 1; j := N;
repeat
    k := (i + j) div 2;
    if a[k] < x then i := k else j := k
until (a[k] = x) or (i > j)

Program 2:
i := 1; j := N;
repeat
    k := (i + j) div 2;
    if x < a[k] then j := k - 1;
    if a[k] < x then i := k + 1;
until i > j

Program 3:
i := 1; j := N;
repeat
    k := (i + j) div 2;
    if x < a[k] then j := k else i := k + 1
until i > j
A binary search program is called correct provided it terminates with $a[k] = x$ whenever such an element exists, or it terminates with $a\left [ k \right ]\neq x$ if there exists no array element with value $x$. Which of the following statements is correct?
 Only Program 1 is correct
 Only Program 2 is correct
 Only Program 1 and 2 are correct.
 Both Program 2 and 3 are correct
 All the three programs are wrong
Consider the following C program that attempts to locate an element x in an array Y[ ] using binary search. The program is erroneous.
f (int Y[10], int x) {
    int i, j, k;
    i = 0; j = 9;
    do {
        k = (i + j) / 2;
        if (Y[k] < x) i = k; else j = k;
    } while ((Y[k] != x) && (i < j));
    if (Y[k] == x) printf("x is in the array");
    else printf("x is not in the array");
}
On which of the following contents of Y and x does the program fail?
 Y is [1 2 3 4 5 6 7 8 9 10] and x < 10
 Y is [1 3 5 7 9 11 13 15 17 19] and x < 1
 Y is [2 2 2 2 2 2 2 2 2 2] and x > 2
 Y is [2 4 6 8 10 12 14 16 18 20] and 2 < x < 20 and x is even
Answers: Binary Search
Option D is the correct answer.
We can simply use clever indexing to binary search the element in the odd positions, and in the even positions separately.
This will take $O(\log n)$ time and $O(1)$ space in the worst case.
A: Sorting using Quicksort will take $O(n^2)$ time.
B: Merging will take $O(n)$ time and $O(n)$ space.
C: Binary search only works on a sorted array.
E: Sequential search will take $O(n)$ time.
The answer should be A:
if (Y[k] < x) then i = k + 1;
If the element we are searching for is greater, the search continues in the upper half of the array;
otherwise, j = k - 1;
searches the lower half.
Take a few cases into consideration, i.e.
1. all elements are the same
2. increasing order with no repetition
3. increasing order with repetition.
The given code has:
if (a[k] < x) then i = k;
else j = k;
The correct code should be:
k = (i + j) / 2;
if (a[k] < x) then i = k + 1;
else j = k - 1;
Try an example with the given code in the question.
Let the array be a[] = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
with index numbers 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
and x = 10; now run the code.
Initially i = 1, j = 10.
First time: k = (i+j)/2 = 11/2 = 5.5 = 5 (because of integer type) = i
Second time: k = (i+j)/2 = 15/2 = 7.5 = 7 = i
Third time: k = (i+j)/2 = 17/2 = 8.5 = 8 = i
Fourth time: k = (i+j)/2 = 18/2 = 9 = i
Fifth time: k = (i+j)/2 = 19/2 = 9.5 = 9 = i
Sixth time: k = (i+j)/2 = 19/2 = 9.5 = 9 = i
Seventh time: k = (i+j)/2 = 19/2 = 9.5 = 9 = i
.................................................................
The program goes into an infinite loop.
For the loop to terminate, it should be i = k + 1 instead of i = k, and j = k - 1 instead of j = k.
correct me ....???
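The corrected version can be written out in full; this is a sketch (the function name `search` and the explicit length parameter are mine, not from the question):

```c
/* Corrected binary search over a sorted array Y[0..n-1].
   i and j now move past k, so the search interval shrinks on
   every iteration and the loop always terminates. */
int search(int Y[], int n, int x) {
    int i = 0, j = n - 1;
    while (i <= j) {
        int k = (i + j) / 2;
        if (Y[k] == x) return k;   /* found at index k */
        if (Y[k] < x) i = k + 1;   /* continue in upper half */
        else          j = k - 1;   /* continue in lower half */
    }
    return -1;                     /* x not present */
}
```

Unlike the code in the question, this version also terminates when x is absent, because the interval [i, j] eventually becomes empty.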
In the second program the a[k] == x condition is missing, so it is wrong.
The third program is also wrong, as j != k - 1 and the condition a[k] == x is missing.
So the answer is E.
for Q.84
For option C, the control will continue to iterate with $i=8$ and $j=9$;
again and again $i$ will be assigned $k$ which itself equals $8$ as $\frac{8+9}{2}$ being stored in an integer type variable, will evaluate to $8$.
For option A, with $x=9$, $k$ will take the following values:
 4
 6
 7
 8  y[8] = 9, x found
For option D, with $x=10$, $k$ will take the following values:
 4, y[4] = 10, x found
(A) 10, 20, 15, 23, 25, 35, 42, 39, 30
(B) 15, 10, 25, 23, 20, 42, 35, 39, 30
(C) 15, 20, 10, 23, 25, 42, 35, 39, 30
(D) 15, 10, 23, 25, 20, 35, 42, 39, 30
Consider the following C program segment where CellNode represents a node in a binary tree:
struct CellNode {
    struct CellNode *leftChild;
    int element;
    struct CellNode *rightChild;
};

int GetValue(struct CellNode *ptr) {
    int value = 0;
    if (ptr != NULL) {
        if ((ptr->leftChild == NULL) && (ptr->rightChild == NULL))
            value = 1;
        else
            value = value + GetValue(ptr->leftChild) + GetValue(ptr->rightChild);
    }
    return value;
}
The value returned by GetValue when a pointer to the root of a binary tree is passed as its argument is:
 the number of nodes in the tree
 the number of internal nodes in the tree
 the number of leaf nodes in the tree
 the height of the tree
Let LASTPOST, LASTIN and LASTPRE denote the last vertex visited in a postorder, inorder and preorder traversal respectively, of a complete binary tree. Which of the following is always true?
 LASTIN = LASTPOST
 LASTIN = LASTPRE
 LASTPRE = LASTPOST
 None of the above
The inorder and preorder traversal of a binary tree are
$\text{d b e a f c g}$ and $\text{a b d e c f g}$, respectively
The postorder traversal of the binary tree is:
 $\text{d e b f g c a}$
 $\text{e d b g f c a}$
 $\text{e d b f g c a}$
 $\text{d e f g b c a}$
Suppose we have a balanced binary search tree $T$ holding $n$ numbers. We are given two numbers $L$ and $H$ and wish to sum up all the numbers in $T$ that lie between $L$ and $H$. Suppose there are $m$ such numbers in $T$. If the tightest upper bound on the time to compute the sum is $O(n^a\log^bn+m^c\log^dn)$, the value of $a+10b+100c+1000d$ is ______.
Consider the following rooted tree with the vertex labeled P as the root:
The order in which the nodes are visited during an inorder traversal of the tree is
(A) SQPTRWUV
(B) SQPTUWRV
(C) SQPTWUVR
(D) SQPTRUWV
A program takes as input a balanced binary search tree with $n$ leaf nodes and computes the value of a function $g(x)$ for each node $x$. If the cost of computing $g(x)$ is: $$\Large \min \left ( \substack{\text{number of leafnodes}\\\text{in leftsubtree of $x$}}\;,\; \substack{\text{number of leafnodes}\\\text{in rightsubtree of $x$}}\right )$$
Then the worst-case time complexity of the program is?
 $\Theta (n)$
 $\Theta (n \log n)$
 $\Theta(n^2)$
 $\Theta (n^2\log n)$
You are given the postorder traversal, $P$, of a binary search tree on the $n$ elements $1, 2, \dots, n$. You have to determine the unique binary search tree that has $P$ as its postorder traversal. What is the time complexity of the most efficient algorithm for doing this?
 $\Theta(\log n)$
 $\Theta(n)$
 $\Theta(n\log n)$
 None of the above, as the tree cannot be uniquely determined
A binary search tree contains the values 1, 2, 3, 4, 5, 6, 7, 8. The tree is traversed in preorder and the values are printed out. Which of the following sequences is a valid output?
 5 3 1 2 4 7 8 6
 5 3 1 2 6 4 8 7
 5 3 2 4 1 6 7 8
 5 3 1 2 4 7 6 8
The height of a tree is defined as the number of edges on the longest path in the tree. The function shown in the pseudocode below is invoked as height(root) to compute the height of a binary tree rooted at the tree pointer root.
(A)
B1: $(1+\text{height}(n \to \text{ right}))$
B2: $(1+\max(h1, h2))$
(B)
B1: $(\text{height}(n \to \text{ right}))$
B2: $(1+\max(h1,h2))$
(C)
B1: $\text{height}(n \to \text{ right})$
B2: $\max(h1, h2) $
(D)
B1: $(1+ \text{height}(n \to \text{ right}))$
B2: $\max(h1, h2)$
Answers: Binary Tree
Since it is a binary search tree, its inorder traversal produces a sorted sequence i.e. 10, 15, 20, 23, 25, 30, 35, 39, 42.
Now, given the inorder and preorder traversals, we get the following tree:
From this, the postorder traversal sequence is 15, 10, 23, 25, 20, 35, 42, 39, 30, i.e. option (D).
Answer: C
The function returns 1 exactly when a node has both left and right children as NULL (i.e., the node is a leaf), and otherwise sums the values returned for the two subtrees. Hence the count is incremented once per leaf, and GetValue returns the number of leaf nodes in the tree.
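This reading is easy to verify with a hand-built tree (the helper `mk` is mine; the struct and function are as in the question, with the spelling of the function name made consistent):

```c
#include <stdlib.h>

struct CellNode {
    struct CellNode *leftChild;
    int element;
    struct CellNode *rightChild;
};

/* Returns 1 for a leaf, otherwise the sum over both subtrees:
   i.e. the number of leaf nodes under ptr. */
int GetValue(struct CellNode *ptr) {
    int value = 0;
    if (ptr != NULL) {
        if ((ptr->leftChild == NULL) && (ptr->rightChild == NULL))
            value = 1;
        else
            value = value + GetValue(ptr->leftChild) + GetValue(ptr->rightChild);
    }
    return value;
}

/* Convenience constructor for test trees. */
struct CellNode *mk(int e, struct CellNode *l, struct CellNode *r) {
    struct CellNode *p = malloc(sizeof *p);
    p->element = e;
    p->leftChild = l;
    p->rightChild = r;
    return p;
}
```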
Take any random complete binary tree and check which node is visited last in the inorder, postorder and preorder traversals.
The answer is A.
Take the first node in the preorder traversal: 'a' will be the root of the tree.
All nodes to the left of 'a' in the inorder traversal will be in the left subtree of 'a', and all elements to its right will be in the right subtree of 'a'.
Take the second element from the preorder traversal, 'b': it goes to the left subtree of 'a', as it is to the left of 'a' in the inorder list.
Proceeding likewise, we can construct the binary tree as:
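The same construction can be done mechanically; a small sketch (the helper name `postFromPreIn` is mine) that rebuilds the tree implicitly and emits its postorder, assuming all node labels are distinct characters:

```c
#include <string.h>

/* Appends to out the postorder of the binary tree whose preorder is
   pre[0..n-1] and inorder is in[0..n-1].  pre[0] is the root; the
   labels left of it in the inorder string form the left subtree. */
void postFromPreIn(const char *pre, const char *in, int n, char *out) {
    if (n == 0) return;
    char root = pre[0];
    int k = (int)(strchr(in, root) - in);                   /* split point */
    postFromPreIn(pre + 1,     in,         k,         out); /* left subtree  */
    postFromPreIn(pre + 1 + k, in + k + 1, n - k - 1, out); /* right subtree */
    strncat(out, &root, 1);                                 /* root comes last */
}
```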
In the worst case, finding $L$ and $H$ takes $O(\log n)$ time, as the given tree is a balanced binary search tree.
Now there are $m$ elements between $L$ and $H$, so traversing those $m$ elements takes $O(m)$ time (traversal algorithm given below). So, in total:
$O(m+\log n) \implies a=0, b=1,c=1,d=0$
$\therefore 0+(10 \times 1)+(100 \times 1)+(1000 \times 0)=110$.
To find all the numbers from $L$ to $H$ we can do an inorder traversal from root and discard all elements before $L$ and after $H$. But this has $O(n)$ time complexity. So, we can do a modification to inorder traversal and combine with binary search as follows:
 Find $L$ using binary search and keep all nodes encountered in the search in a stack.
 After finding $L$ add it to stack as well and initialize sum = 0.
 Now, for all nodes in stack, do an inorder traversal starting from their right node and adding the node value to sum. If $H$ is found, stop the algorithm.
the inorder traversal order of a ternary tree is left → root → middle → right.
It can be found with the formula $\frac{1}{n+1}\binom{2n}{n}$ (the Catalan number), $n$ being the number of nodes. For the given question, where $n=3$, the answer is 5. Let me also specify here that the number of binary search trees with $n$ nodes equals the number of unlabeled binary trees with $n$ nodes.
http://gatecse.in/wiki/Number_of_Binary_trees_possible_with_n_nodes
B. At the root node (first level) the cost would be $\Theta\left(\frac{n}{2}\right)$ as the tree is balanced.
At next level, we have 2 nodes and for each of them cost of computing $g(x)$ will be $\Theta\left(\frac{n}{4}\right)$. So, total cost at second level $= \Theta\left(\frac{n}{2}\right).$ Similarly at each level (total cost per level and not the cost per node in a level) the cost would be $\Theta\left(\frac{n}{2}\right)$ and so for $\log n$ levels it would be $\Theta({n} \log n).$
PS: Even if we change $\min$ to $\max$ in the definition of $g(x)$ we get the same answer.
The last element in postorder is the root of the tree; find this element in inorder in $\log n$ time (the inorder of a BST is sorted, so binary search works).
Now, as in quicksort, treat this element as a pivot and split the postorder array into two. This is possible because all elements smaller than the pivot go to the left subtree and all elements larger go to the right; if $x$ elements are smaller than the pivot, those same elements appear in both the inorder and the postorder prefixes (their order may change). We already have the root; the left child is the root of the left split and the right child is the root of the right split.
So, doing this recursively gives time complexity of this approach as
$$T(n) = T(k) + T(n-k-1) + \log n$$
Solving this gives $T(n) = O(n \log n)$ in the worst case, obtained by putting $k=0$, as shown at the bottom.
But searching for an element in the inorder traversal of the given BST can be done in $O(1)$, because the $n$ elements are exactly $1..n$, so there is no need to search at all: if the last element in postorder is, say, $5$, we take it as root, and since 4 elements (1..4) are smaller, we split the postorder array into two parts (the first 4 elements, and from the 6th element onward) and solve recursively. The time complexity for this is
$$T(n) = T(k) + T(n-k-1) + O(1)$$
which gives $T(n) = O(n).$
Since we know that all elements must be traversed at least once, $T(n) = \Omega(n)$ also and so $$T(n) = \Theta(n).$$ The following code is doing this.
//Install graphviz (sudo apt-get install graphviz on Ubuntu) to view output tree
#include <stdio.h>
#include <stdlib.h>

struct tree {
    struct tree *left;
    struct tree *right;
    int x;
};

struct tree *makenode(int x) {
    struct tree *root = malloc(sizeof(struct tree));
    root->x = x;
    root->left = root->right = NULL;
    return root;
}

struct tree *makeBST(int *post, int start, int n, int inorder) {
    if (n <= 0)
        return NULL;
    int pivot = post[start + n - 1];
    struct tree *root = makenode(pivot);
    root->left = makeBST(post, start, pivot - 1 - inorder, inorder);
    root->right = makeBST(post, pivot - inorder - 1, n - (pivot - inorder), pivot);
    return root;
}

void preorder(struct tree *node) {
    if (node == NULL)
        return;
    printf("%d ", node->x);
    preorder(node->left);
    preorder(node->right);
}

void printdot(struct tree *node, FILE *f) {
    if (node == NULL)
        return;
    if (node->left != NULL) {
        fprintf(f, "%d -- %d;\n", node->x, node->left->x);
    }
    if (node->right != NULL) {
        fprintf(f, "%d -- %d;\n", node->x, node->right->x);
    }
    printdot(node->left, f);
    printdot(node->right, f);
}

int main() {
    int i, n, *a;
    printf("Enter n: ");
    scanf("%d", &n);
    a = malloc(n * sizeof(int));
    printf("Enter post order traversal: ");
    for (i = 0; i < n; i++) {
        scanf("%d", &a[i]);
    }
    struct tree *tree = makeBST(a, 0, n, 0);
    printf("Pre order traversal is: ");
    preorder(tree);
    printf("\n");
    FILE *f = fopen("tree.dot", "w");
    fprintf(f, "graph tree {\n");
    printdot(tree, f);
    fprintf(f, "}\n");
    fclose(f);
#if defined(__linux__) || (defined(__APPLE__) && defined(__MACH__)) || defined(__gnu_linux__)
    system("dot -Tpng tree.dot -o output.png; eog output.png");
#endif
}
$$T(n) = T(k) + T(n-k-1) + \log n$$
Solving would give $T(n) = O(n \log n)$, by putting $k=0$,
$T(n) = T(0) + T(n-1) + \log n \\ \implies T(n) = O(1) \cdot n + \log n + \log (n-1) + \log (n-2) + \dots + \log 1 \\ \implies T(n) = n + \log n! \\ \implies T(n) = O(n \log n).$
Answer: D
The tree is:
answer = option A
From the diagram below we are able to see how this works :
In a depthfirst traversal of a graph $G$ with $n$ vertices, $k$ edges are marked as tree edges. The number of connected components in $G$ is
 $k$
 $k+1$
 $nk1$
 $nk$
Which of the following is the correct decomposition of the directed graph given below into its strongly connected components?
 {P, Q, R, S}, {T}, {U}, {V}
 {P,Q, R, S, T, V}, {U}
 {P, Q, S, T, V}, {R}, {U}
 {P, Q, R, S, T, U, V}
Consider the depthfirstsearch of an undirected graph with 3 vertices P, Q, and R. Let discovery time d(u) represent the time instant when the vertex u is first visited, and finish time f(u) represent the time instant when the vertex u is last visited. Given that
d(P) = 5 units, f(P) = 12 units
d(Q) = 6 units, f(Q) = 10 units
d(R) = 14 units, f(R) = 18 units
Which one of the following statements is TRUE about the graph
 There is only one connected component
 There are two connected components, and P and R are connected
 There are two connected components, and Q and R are connected
 There are two connected components, and P and Q are connected
Let $G=(V,E)$ be a directed graph where $V$ is the set of vertices and $E$ the set of edges. Then which one of the following graphs has the same strongly connected components as $G$ ?
(A) $G_1$ = $(V,E_1)$ where $E_1 = \left\{(u,v) \mid (u,v) \notin E\right\}$
(B) $G_2$ = $(V,E_2)$ where $E_2 = \left\{(u,v) \mid (v,u) \in E \right\}$
(C) $G_3$ = $(V,E_3)$ where $E_3 = \{(u,v) \mid$ there is a path of length $\leq2$ from $u$ to $v$ in $E\}$
(D) $G_4$ = $(V_4,E)$ where $V_4$ is the set of vertices in $G$ which are not isolated
Let G be the graph with 100 vertices numbered 1 to 100. Two vertices i and j are adjacent if $\vert i-j \vert =8$ or $\vert i-j \vert=12$. The number of connected components in G is
 8
 4
 12
 25
Answers: Connected Components
Why?
If the graph is connected, then while doing DFS we visit some spanning tree of the graph, so the number of tree edges is n - 1, and
the number of components is n - (n - 1) = 1.
If the graph is not connected, then when we run DFS on the disconnected graph,
every connected component with, say, x vertices contributes x - 1 tree edges. Summing over all components, the total number of tree edges is (total number of vertices) - (number of connected components), since each component contributes one tree edge fewer than its vertex count.
So k = n - (number of components), and the total number of connected components is (D) n - k.
Here the answer is B.
A graph is said to be strongly connected if every vertex is reachable from every other vertex.
A strongly connected component is always maximal: if X is a strongly connected component, there must not exist another strongly connected component that properly contains X.
If we take {R} as a strongly connected component, it is part of {P, Q, R, S}, and {P, Q, R, S} is part of {P, Q, R, S, V, T}, so only the maximal set counts.
As seen in the question, after f(Q) = 10 control returns to P, and R starts (d(R) = 14) only after P finishes (f(P) = 12). So R must be disconnected from P and Q: if there were an edge from P or Q to R, then R would have been visited before P and Q finished.
D is the answer.
Corollary: If G is a forest with n vertices and p components, then G has n - p edges.
As there are 0 edges for p - 1 vertices (p - 1 components) and n - p edges for n - p + 1 vertices (1 component), the total is n - p edges for p components.
B is true. In a directed graph a SCC will have a path from each vertex to every other vertex. So, changing the direction of all the edges, won't change the SCC.
D is false. Consider any graph with isolated vertices: we lose those components.
C is a bit tricky. Any edge is a path of length 1. So, the new graph will have all the edges from old one. Also, we are adding new edges $(u,v)$. So, does this modify any SCC? No, because we add an edge $(u,v)$, only if there is already a path of length <= 2 from $u$ to $v$ so we do not create a new path. So, both B and C must be answers though GATE key says only B.
1 - 9 - 17 - ... - 97
2 - 10 - 18 - ... - 98
3 - 11 - 19 - ... - 99
4 - 12 - 20 - ... - 100
5 - 13 - 21 - ... - 93
6 - 14 - 22 - ... - 94
7 - 15 - 23 - ... - 95
8 - 16 - 24 - ... - 96
We have covered all vertices using 8 vertex sets considering only $\mid i - j \mid = 8$. Using $\mid i - j \mid = 12$ we can see that vertex 1 is connected to 13, 2 to 14, 3 to 15, and 4 to 16, so the top 4 vertex sets are in fact connected to the bottom 4 sets, reducing the number of connected components to 4.
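The count is small enough to confirm by brute force; a union-find sketch (function names are mine) over the 100 vertices:

```c
/* Path-compressing find for a parent array p. */
int find(int *p, int x) {
    return p[x] == x ? x : (p[x] = find(p, p[x]));
}

/* Number of connected components of the graph on vertices 1..100
   where i and j are adjacent iff |i - j| = 8 or |i - j| = 12. */
int components(void) {
    int p[101], i, j, c = 0;
    for (i = 1; i <= 100; i++) p[i] = i;
    for (i = 1; i <= 100; i++)
        for (j = i + 1; j <= 100; j++)
            if (j - i == 8 || j - i == 12) {
                int a = find(p, i), b = find(p, j);
                if (a != b) p[a] = b;   /* union the two components */
            }
    for (i = 1; i <= 100; i++)
        if (find(p, i) == i) c++;       /* one root per component */
    return c;
}
```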
 Consider three decision problems $P_1$, $P_2$ and $P_3$. It is known that $P_1$ is decidable and $P_2$ is undecidable. Which one of the following is TRUE?
 $P_3$ is decidable if $P_1$ is reducible to $P_3$
 $P_3$ is undecidable if $P_3$ is reducible to $P_2$
 $P_3$ is undecidable if $P_2$ is reducible to $P_3$
 $P_3$ is decidable if $P_3$ is reducible to $P_2$'s complement
Answers: Decidability
(A) If $P_1$ is reducible to $P_3$, then $P_3$ is at least as hard as $P_1$. So, there is no guarantee that $P_3$ is decidable.
(B) If $P_3$ is reducible to $P_2$, then $P_3$ cannot be harder than $P_2$. But $P_2$ being undecidable does not let us conclude that $P_3$ is undecidable.
(C) If $P_2$ is reducible to $P_3$, then $P_3$ is at least as hard as $P_2$. Since $P_2$ is undecidable, $P_3$ is also undecidable; hence this is the answer.
(D) The complement of an undecidable problem is undecidable, and reducing to an undecidable problem cannot prove that $P_3$ is decidable.
Consider the following directed graph.
Suppose a depthfirst traversal of this graph is performed, assuming that whenever there is a choice, the vertex earlier in the alphabetical order is to be chosen. Suppose the number of tree edges is $T$, the number of back edges is $B$ and the number of cross edges is $C$. Then
 $B = 1$, $C = 1$, and $T = 4$.
 $B = 0$, $C = 2$, and $T = 4$.
 $B = 2$, $C = 1$, and $T = 3$.
 $B = 1$, $C = 2$, and $T = 3$.
 $B = 2$, $C = 2$, and $T = 1$.
Answers: Dfs
The DFS tree then becomes A-B-E-C, so the number of tree edges is 3 (T = 3).
There is a cycle B-E-C, so we get one back edge from C to B while performing DFS. Hence B = 1.
Now D is a node not reached from A, so its edges can only contribute cross edges. There are 2 cross edges, D-A and D-B. Therefore C = 2.
The answer is option D.
Correct me if I am going wrong.
Consider the following recursive C function that takes two arguments.
unsigned int foo(unsigned int n, unsigned int r) {
    if (n > 0)
        return (n % r) + foo(n / r, r);
    else
        return 0;
}
What is the return value of the function $\text{foo}$ when it is called as $\text{foo(513, 2)}$?
 9
 8
 5
 2
Answers: Distance Vector Routing
The function returns the sum of the digits in the binary representation of the given number.
513 in binary is 1000000001, so 1+0+0+0+0+0+0+0+0+1 = 2.
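More generally, foo(n, r) returns the sum of the digits of n written in base r; repeating the function here for a self-contained check:

```c
/* Sum of the base-r digits of n: the last digit is n % r, and the
   recursive call handles the remaining digits in n / r. */
unsigned int foo(unsigned int n, unsigned int r) {
    if (n > 0)
        return (n % r) + foo(n / r, r);
    else
        return 0;
}
```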
A subsequence of a given sequence is just the given sequence with some elements (possibly none or all) left out. We are given two sequences $X[m]$ and $Y[n]$ of lengths m and n, respectively with indexes of $X$ and $Y$ starting from $0$.
We wish to find the length of the longest common subsequence (LCS) of $X[m]$ and $Y[n]$ as $l(m, n)$, where an incomplete recursive definition for the function $l(i, j)$ to compute the length of the LCS of $X[m]$ and $Y[n]$ is given below:
l(i,j) = 0, if either i = 0 or j = 0
       = expr1, if i,j > 0 and X[i-1] = Y[j-1]
       = expr2, if i,j > 0 and X[i-1] ≠ Y[j-1]
The value of $l(i,j)$ could be obtained by dynamic programming based on the correct recursive definition of $l(i,j)$ of the form given above, using an array $L[M,N]$, where $M = m+1$ and $N = n+1$, such that $L[i, j] = l(i, j)$.
Which one of the following statements would be TRUE regarding the dynamic programming solution for the recursive definition of $l(i, j)$?
 All elements of $L$ should be initialized to 0 for the values of $l(i, j)$ to be properly computed.
 The values of $l(i, j)$ may be computed in a row major order or column major order of $L[M, N]$.
 The values of $l(i, j)$ cannot be computed in either row major order or column major order of $L[M, N]$.
 $L[p, q]$ needs to be computed before $L[r, s]$ if either $p<r$ or $q < s$.
A subsequence of a given sequence is just the given sequence with some elements (possibly none or all) left out. We are given two sequences $X[m]$ and $Y[n]$ of lengths m and n, respectively with indexes of $X$ and $Y$ starting from $0$.
We wish to find the length of the longest common subsequence (LCS) of $X[m]$ and $Y[n]$ as $l(m, n)$, where an incomplete recursive definition for the function $l(i, j)$ to compute the length of the LCS of $X[m]$ and $Y[n]$ is given below:
l(i,j) = 0, if either i = 0 or j = 0
       = expr1, if i,j > 0 and X[i-1] = Y[j-1]
       = expr2, if i,j > 0 and X[i-1] ≠ Y[j-1]
Which one of the following options is correct?
 $\text{expr1} = l\left(i-1, j\right) +1$
 $\text{expr1} = l\left(i, j-1\right)$
 $\text{expr2} = \max\left(l\left(i-1, j\right), l\left(i,j-1\right)\right)$
 $\text{expr2} = \max\left(l\left(i-1, j-1\right), l\left(i,j\right)\right)$
The FloydWarshall algorithm for allpair shortest paths computation is based on
 Greedy paradigm.
 Divideandconquer paradigm.
 Dynamic Programming paradigm.
 Neither Greedy nor DivideandConquer nor Dynamic Programming paradigm.
Answers: Dynamic Programming
$\text{expr2} = \max\left(l\left(i-1, j\right), l\left(i,j-1\right)\right)$
When the currently compared elements don't match, we have two possibilities for the LCS: one including X[i] but not Y[j], and the other including Y[j] but not X[i].
Answer is B. We can use either row-major or column-major order.
The issue with option D: read option D carefully.
"L[p, q] needs to be computed before L[r, s] if either p < r or q < s."
Suppose we want to compute L(3,3). If we are using row-major order, we need not compute L(4,2) first! Here L(4,2) = L[p,q] and L(3,3) = L[r,s]; then q < s, yet we need not compute L[p,q] before L[r,s]. So D is FALSE.
Answer is C. When the currently compared elements don't match, we have two possibilities for the LCS: one including X[i] but not Y[j], and the other including Y[j] but not X[i].
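Putting both blanks together (expr1 = l(i-1, j-1) + 1 for the matching case, expr2 as in option C) gives the usual LCS table; a sketch (the fixed-size table is an assumption of this sketch):

```c
#include <string.h>

/* LCS length via the recurrence in the question:
     l(i,j) = 0                          if i == 0 or j == 0
            = l(i-1,j-1) + 1             if X[i-1] == Y[j-1]   (expr1)
            = max(l(i-1,j), l(i,j-1))    otherwise             (expr2) */
int lcs(const char *X, const char *Y) {
    int m = (int)strlen(X), n = (int)strlen(Y);
    int L[64][64];   /* assumes m, n < 64; enough for this sketch */
    for (int i = 0; i <= m; i++)
        for (int j = 0; j <= n; j++) {
            if (i == 0 || j == 0)
                L[i][j] = 0;
            else if (X[i-1] == Y[j-1])
                L[i][j] = L[i-1][j-1] + 1;
            else
                L[i][j] = L[i-1][j] > L[i][j-1] ? L[i-1][j] : L[i][j-1];
        }
    return L[m][n];
}
```

Any fill order that computes L[i-1][j-1], L[i-1][j] and L[i][j-1] before L[i][j] works, which is why both row-major and column-major filling are valid.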
In Floyd-Warshall we calculate all possibilities and select the best one, so it is neither divide-and-conquer nor greedy; it is based on the Dynamic Programming paradigm.
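A minimal Floyd-Warshall sketch (the fixed size N and the INF sentinel are assumptions of the sketch; INF is kept small enough that INF + INF does not overflow):

```c
#define N 4
#define INF 1000000   /* "no edge"; small enough that INF + INF fits in int */

/* dist[i][j] starts as the edge weight (0 on the diagonal, INF if no
   edge); after the loops it holds all-pairs shortest path lengths.
   Each vertex k is tried as an intermediate vertex for every pair. */
void floydWarshall(int dist[N][N]) {
    for (int k = 0; k < N; k++)
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                if (dist[i][k] + dist[k][j] < dist[i][j])
                    dist[i][j] = dist[i][k] + dist[k][j];
}
```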
In a permutation \(a_1 ... a_n\), of n distinct integers, an inversion is a pair \((a_i, a_j)\) such that \(i < j\) and \(a_i > a_j\).
What would be the worst case time complexity of the Insertion Sort algorithm, if the inputs are restricted to permutations of \(1. . . n\) with at most $n$ inversions?
 \(\Theta(n^2)\)
 \(\Theta(n\log n)\)
 \(\Theta(n^{1.5})\)
 \(\Theta(n)\)
In a permutation \(a_1 ... a_n\), of n distinct integers, an inversion is a pair \((a_i, a_j)\) such that \(i < j\) and \(a_i > a_j\).
If all permutations are equally likely, what is the expected number of inversions in a randomly chosen permutation of \(1. . . n\)?
 \(\frac{n(n-1)}{2}\)
 \(\frac{n(n-1)}{4}\)
 \(\frac{n(n+1)}{4}\)
 \(2n[\log_2n]\)
Answers: Expectation
Ans is D: $\Theta(n)$.
Insertion sort runs in $\Theta(n + f(n))$ time, where $f(n)$ denotes the number of inversions initially present in the array being sorted.
Read more at http://geeksquiz.com/gategatecs2003question62/
ANSWER: D. $\Theta(n)$
REASON:
The number of times the inner loop of insertion sort executes equals the number of inversions in the input permutation $a_1,a_2,\dots a_n$: for each value of $i = 1..n$, $j$ takes the values $1..i-1$, i.e., for every $j<i$ it checks whether $a[j] > a[i]$.
In any permutation, the maximum number of inversions possible is $n(n-1)/2$, which is $O(n^2)$; this is the case where the array is sorted in reverse order. Therefore, resolving all inversions, i.e., the worst-case time complexity of insertion sort, is $\Theta(n^2)$.
However, as per the question, the number of inversions in the input array is restricted to at most $n$, so the worst-case time complexity of insertion sort reduces to $\Theta(n)$.
INSERTION SORT ALGORITHM (for reference)
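A version instrumented to count the work of the inner loop makes the argument concrete: the number of shifts performed equals the number of inversions in the input (the function name is mine):

```c
/* Insertion sort on a[0..n-1].  Returns the number of element shifts,
   which equals the number of inversions initially in the array: each
   shift moves one larger element past the key, removing one inversion. */
long insertionSort(int *a, int n) {
    long shifts = 0;
    for (int i = 1; i < n; i++) {
        int key = a[i];
        int j = i - 1;
        while (j >= 0 && a[j] > key) {
            a[j + 1] = a[j];
            j--;
            shifts++;
        }
        a[j + 1] = key;
    }
    return shifts;
}
```

So with at most n inversions the total work is the linear scan plus at most n shifts, i.e. Θ(n).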
The average, apart from the standard definition, can be calculated as (best case + worst case)/2,
and an inversion is a pair like (9, 5).
The best case is a sorted array, e.g. 1, 2, 3, 4, 5: zero inversions.
The worst case is reverse order, e.g. 5, 4, 3, 2, 1. Here the total number of inversions is n(n-1)/2, since the first element forms an inversion pair with every element after it, the second with every element after it, and so on.
So for n elements it is (n-1)+(n-2)+(n-3)+...+2+1, the sum of the first n-1 natural numbers, which is n(n-1)/2.
So the expected average number of inversions = (n(n-1)/2 + 0)/2 = n(n-1)/4.
So option B.
For the second question:
Insertion sort's cost comes from comparisons and element movements. If there are at most n inversion pairs, the movements and comparisons are restricted to n beyond the linear scan. For example, with one inversion the array must be sorted except for a single out-of-place pair, like 1, 2, 3, 5, 4 (otherwise more than one inversion pair would form); with two inversions, something like 1, 2, 3, 7, 5, 4. To sort such an array, insertion sort needs at most n shifts in total, so the answer is D.
Consider the tree arcs of a BFS traversal from a source node $W$ in an unweighted, connected, undirected graph. The tree $T$ formed by the tree arcs is a data structure for computing
(A) the shortest path between every pair of vertices.
(B) the shortest path from $W$ to every vertex in the graph.
(C) the shortest paths from $W$ to only those nodes that are leaves of $T$.
(D) the longest path in the graph.
A depthfirst search is performed on a directed acyclic graph. Let d[u] denote the time at which vertex u is visited for the first time and f[u] the time at which the dfs call to the vertex u terminates. Which of the following statements is always true for all edges (u, v) in the graph ?
 d[u] < d[v]
 d[u] < f[v]
 f[u] < f[v]
 f[u] > f[v]
Let T be a depth first search tree in an undirected graph G. Vertices u and ν are leaves of this tree T. The degrees of both u and ν in G are at least 2. which one of the following statements is true?
 There must exist a vertex w adjacent to both u and ν in G
 There must exist a vertex w whose removal disconnects u and ν in G
 There must exist a cycle in G containing u and ν
 There must exist a cycle in G containing u and all its neighbours in G
Consider a weighted undirected graph with positive edge weights and let uv be an edge in the graph. It is known that the shortest path from the source vertex s to u has weight 53 and the shortest path from s to v has weight 65. Which one of the following statements is always true?
 Weight (u,v) $\leq$ 12
 Weight (u,v) = 12
 Weight (u,v) $\geq$ 12
 Weight (u,v) > 12
An independent set in a graph is a subset of vertices such that no two vertices in the subset are connected by an edge. An incomplete scheme for a greedy algorithm to find a maximum independent set in a tree is given below:
V: Set of all vertices in the tree;
I := ϕ;
while V ≠ ϕ do
begin
    select a vertex u ∊ V such that _______;
    V := V - {u};
    if u is such that ________ then I := I ∪ {u}
end;
Output(I);

Complete the algorithm by specifying the property of vertex $u$ in each case.

What is the time complexity of the algorithm
Let $G$ be a graph with $n$ vertices and $m$ edges.What is the tightest upper bound on the running time of Depth First Search on $G$, when $G$ is represented as an adjacency matrix?
(A) $\Theta(n)$
(B) $\Theta(n+m)$
(C) $\Theta(n^2)$
(D) $\Theta(m^2)$
Consider the following sequence of nodes for the undirected graph given below.
 a b e f d g c
 a b e f c g d
 a d g e b c f
 a d b c g e f
A Depth First Search (DFS) is started at node a. The nodes are listed in the order they are first visited. Which all of the above is (are) possible output(s)?
A)  1 and 3 only
B)  2 and 3 only
C)  2, 3 and 4 only
D)  1, 2 and 3 only
Which one of the following algorithm design techniques is used in finding all pairs of shortest distances in a graph?
 Dynamic programming
 Backtracking
 Greedy
 Divide and Conquer
Suppose depth first search is executed on the graph below starting at some unknown vertex. Assume that a recursive call to visit a vertex is made only after first checking that the vertex has not been visited earlier. Then the maximum possible recursion depth (including the initial call) is _________.
A sink in a directed graph is a vertex i such that there is an edge from every vertex j ≠ i to i and there is no edge from i to any other vertex. A directed graph G with n vertices is represented by its adjacency matrix A, where A[i] [j] = 1 if there is an edge directed from vertex i to j and 0 otherwise. The following algorithm determines whether there is a sink in the graph G.
i = 0;
do {
    j = i + 1;
    while ((j < n) && E1) j++;
    if (j < n) E2;
} while (j < n);
flag = 1;
for (j = 0; j < n; j++)
    if ((j != i) && E3) flag = 0;
if (flag) printf("Sink exists");
else printf("Sink does not exist");
Choose the correct expression for E3
 (A[i][j] && !A[j][i])
 (!A[i][j] && A[j][i])
 (!A[i][j] || A[j][i])
 (A[i][j] || !A[j][i])
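One consistent completion of the algorithm above is E1 = !A[i][j], E2 = i = j, and E3 = (A[i][j] || !A[j][i]); the sketch below (the fixed matrix width 4 and the function name are assumptions of the sketch) substitutes these so the behaviour can be tested directly:

```c
/* Candidate-elimination sink check: the do-while narrows down a single
   candidate i (any edge i -> j eliminates i in favour of j), then the
   for loop verifies that i really is a sink. */
int hasSink(int n, int A[][4]) {
    int i = 0, j, flag;
    do {
        j = i + 1;
        while ((j < n) && !A[i][j]) j++;   /* E1: i is still a candidate */
        if (j < n) i = j;                  /* E2: edge i -> j eliminates i */
    } while (j < n);
    flag = 1;
    for (j = 0; j < n; j++)
        if ((j != i) && (A[i][j] || !A[j][i]))  /* E3: i fails the sink test */
            flag = 0;
    return flag;
}
```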
Consider the following graph
Among the following sequences
 abeghf
 abfehg
 abfhge
 afghbe
which are depth first traversals of the above graph?
 I, II and IV only
 I and IV only
 II, III and IV only
 I, III and IV only
Let G be an undirected graph. Consider a depthfirst traversal of G, and let T be the resulting depthfirst search tree. Let u be a vertex in G and let v be the first new (unvisited) vertex visited after visiting u in the traversal. Which of the following statement is always true?
 {u, v} must be an edge in G, and u is a descendant of v in T
 {u, v} must be an edge in G, and v is a descendant of u in T
 If {u, v} is not an edge in G then u is a leaf in T
 If {u, v} is not an edge in G then u and v must have the same parent in T
The most efficient algorithm for finding the number of connected components in an undirected graph on $n$ vertices and $m$ edges has time complexity
 $\Theta(n)$
 $\Theta(m)$
 $\Theta(m+n)$
 $\Theta(mn)$
Let $G= (V,E)$ be a directed graph with $n$ vertices. A path from $v_i$ to $v_j$ in $G$ is a sequence of vertices ($v_{i},v_{i+1}, \dots , v_j$) such that $(v_k, v_{k+1}) \in E$ for all $k$ in $i$ through $j-1$. A simple path is a path in which no vertex appears more than once.
Let $A$ be an $n \times n$ array initialized as follows.
$$A[j,k] = \begin{cases} 1 \text { if } (j,k) \in E \\ 0 \text{ otherwise} \end{cases}$$
Consider the following algorithm.
for i = 1 to n
    for j = 1 to n
        for k = 1 to n
            A[j,k] = max(A[j,k], A[j,i] + A[i,k]);
Which of the following statements is necessarily true for all j and k after termination of the above algorithm?
 $A[j,k] \leq n$
 If $A[j,j] \geq n1$ then G has a Hamiltonian cycle
 If there exists a path from j to k, A[j,k] contains the longest path length from j to k
 If there exists a path from j to k, every simple path from j to k contains at most A[j,k] edges
In an adjacency list representation of an undirected simple graph $G=(V, E)$, each edge $(u, v)$ has two adjacency list entries: $[v]$ in the adjacency list of $u$, and $[u]$ in the adjacency list of $v$. These are called twins of each other. A twin pointer is a pointer from an adjacency list entry to its twin. If $|E|=m$ and $|V|=n$, and the memory size is not a constraint, what is the time complexity of the most efficient algorithm to set the twin pointer in each entry in each adjacency list?
 $\Theta\left(n^{2}\right)$
 $\Theta\left(n+m\right)$
 $\Theta\left(m^{2}\right)$
 $\Theta\left(n^{4}\right)$
Let G = (V,E) be an undirected graph with a subgraph $G_1 = (V_1, E_1)$. Weights are assigned to edges of G as follows.
$$w(e) = \begin{cases} 0 \text{ if } e \in E_1 \\1 \text{ otherwise} \end{cases}$$
A single-source shortest path algorithm is executed on the weighted graph (V,E,w) with an arbitrary vertex $v_1$ of $V_1$ as the source. Which of the following can always be inferred from the path costs computed?
 The number of edges in the shortest paths from $v_1$ to all vertices of G
 $G_1$ is connected
 $V_1$ forms a clique in G
 $G_1$ is a tree
Which of the following statement(s) is/are correct regarding BellmanFord shortest path algorithm?
P: Always finds a negative weighted cycle, if one exists.
Q: Finds whether any negative weighted cycle is reachable from the source.
 P only
 Q only
 Both P and Q
 Neither P nor Q
Suppose we run Dijkstra’s single source shortestpath algorithm on the following edgeweighted directed graph with vertex P as the source.
In what order do the nodes get included into the set of vertices for which the shortest path distances are finalized?
 P,Q,R,S,T,U
 P,Q,R,U,S,T
 P,Q,R,U,T,S
 P,Q,T,R,U,S
Let $G_1=(V,E_1)$ and $G_2 =(V,E_2)$ be connected graphs on the same vertex set $V$ with more than two vertices. If $G_1 \cap G_2= (V,E_1\cap E_2)$ is not a connected graph, then the graph $G_1\cup G_2=(V,E_1\cup E_2)$
 cannot have a cut vertex
 must have a cycle
 must have a cutedge (bridge)
 Has chromatic number strictly greater than those of $G_1$ and $G_2$
A sink in a directed graph is a vertex i such that there is an edge from every vertex j ≠ i to i and there is no edge from i to any other vertex. A directed graph G with n vertices is represented by its adjacency matrix A, where A[i] [j] = 1 if there is an edge directed from vertex i to j and 0 otherwise. The following algorithm determines whether there is a sink in the graph G.
i = 0;
do {
    j = i + 1;
    while ((j < n) && E1) j++;
    if (j < n) E2;
} while (j < n);
flag = 1;
for (j = 0; j < n; j++)
    if ((j != i) && E3) flag = 0;
if (flag) printf("Sink exists");
else printf("Sink does not exist");
Choose the correct expressions for $E_1$ and $E_2$
 $E_1$: A[i][j] and $E_2$: i = j;
 $E_1$: !A[i][j] and $E_2$: i = j + 1;
 $E_1$: !A[i][j] and $E_2$: i = j;
 $E_1$: A[i][j] and $E_2$: i = j + 1;
Let $G$ be an undirected graph with $n$ vertices. For any subset $S$ of vertices, the set of neighbours of $S$ consists of the union of $S$ and the set of vertices $S'$ that are connected to some vertex in $S$ by an edge of $G$. The graph $G$ has the nice property that every subset of vertices $S$ of size at most $n/2$ has at least $1.5|S|$ many neighbours. What is the length of a longest path in $G$?
 $O (1)$
 $O (\log \log n)$ but not $O (1)$
 $O (\log n)$ but not $O (\log \log n)$
 $O \left(\sqrt{n}\right)$ but not $O (\log n)$
 $O (n)$ but not $O \left(\sqrt{n}\right)$
To implement Dijkstra’s shortest path algorithm on unweighted graphs so that it runs in linear time, the data structure to be used is:
 Queue
 Stack
 Heap
 B-Tree
Let $G(V,E)$ be an undirected graph with positive edge weights. Dijkstra’s single-source shortest path algorithm can be implemented using the binary heap data structure with time complexity:
 $O\left(V^2\right)$
 $O\left(E+V\log V\right)$
 $O\left(V\log V\right)$
 $O\left(\left(E+V\right)\log V\right)$
Let $s$ and $t$ be two vertices in a undirected graph $G=(V,E)$ having distinct positive edge weights. Let $[X,Y]$ be a partition of $V$ such that $s \in X$ and $t \in Y$. Consider the edge $e$ having the minimum weight amongst all those edges that have one vertex in $X$ and one vertex in $Y$.
The edge $e$ must definitely belong to:
 the minimum weighted spanning tree of G
 the weighted shortest path from s to t
 each path from s to t
 the weighted longest path from s to t
Fill in the blanks in the following template of an algorithm to compute all pairs shortest path lengths in a directed graph G with n*n adjacency matrix A. A[i,j] equals 1 if there is an edge in G from i to j, and 0 otherwise. Your aim in filling in the blanks is to ensure that the algorithm is correct.
INITIALIZATION:
For i = 1 ... n {
    For j = 1 ... n {
        if a[i,j] = 0 then P[i,j] =_______ else P[i,j] =_______;
    }
}
ALGORITHM:
For i = 1 ... n {
    For j = 1 ... n {
        For k = 1 ... n {
            P[__,__] = min{_______,______};
        }
    }
}
 Copy the complete line containing the blanks in the Initialization step and fill in the blanks.
 Copy the complete line containing the blanks in the Algorithm step and fill in the blanks.
 Fill in the blank: The running time of the Algorithm is O(___).
In an unweighted, undirected connected graph, the shortest path from a node S to every other node is computed most efficiently, in terms of time complexity, by
 Dijkstra’s algorithm starting from S.
 Warshall’s algorithm.
 Performing a DFS starting from S.
 Performing a BFS starting from S.
Consider the undirected graph below:
Using Prim's algorithm to construct a minimum spanning tree starting with node A, which one of the following sequences of edges represents a possible order in which the edges would be added to construct the minimum spanning tree?
 (E, G), (C, F), (F, G), (A, D), (A, B), (A, C)
 (A, D), (A, B), (A, C), (C, F), (G, E), (F, G)
 (A, B), (A, D), (D, F), (F, G), (G, E), (F, C)
 (A, D), (A, B), (D, F), (F, C), (F, G), (G, E)
(A) $\theta(n^2)$ (B) $\theta(n^2\log n)$ (C) $\theta(n^3)$ (D) $\theta(n^3\log n)$
Let $s$ and $t$ be two vertices in a undirected graph $G=(V,E)$ having distinct positive edge weights. Let $[X,Y]$ be a partition of $V$ such that $s \in X$ and $t \in Y$. Consider the edge $e$ having the minimum weight amongst all those edges that have one vertex in $X$ and one vertex in $Y$.
Let the weight of an edge $e$ denote the congestion on that edge. The congestion on a path is defined to be the maximum of the congestions on the edges of the path. We wish to find the path from $s$ to $t$ having minimum congestion. Which of the following paths is always such a path of minimum congestion?
 a path from $s$ to $t$ in the minimum weighted spanning tree
 a weighted shortest path from $s$ to $t$
 an Euler walk from $s$ to $t$
 a Hamiltonian path from $s$ to $t$
Consider an undirected unweighted graph G. Let a breadth-first traversal of G be done starting from a node r. Let d(r,u) and d(r,v) be the lengths of the shortest paths from r to u and v respectively in G. If u is visited before v during the breadth-first traversal, which of the following statements is correct?
 $d(r,u) < d(r,v)$
 $d(r,u) > d(r,v)$
 $d(r,u) \leq d(r,v)$
 None of the above
Answers: Graph Algorithms
Let me disprove all the wrong options.
A) d[u] < d[v]. Counterexample: start DFS on V first and finish it, then call DFS on X, which visits U.
B) d[u] < f[v]. Counterexample: same as A).
C) f[u] < f[v]. Counterexample: same as A) again.
So the answer is D).
Consider the following graph (figure not shown); its DFS order provides the required counterexample.
So D is the answer.
If weight(u, v) < 12, then the minimum weight of (s, v) = weight of (s, u) + weight of (u, v) = 53 + (a value < 12), which is less than 65.
(a) While adding vertex u to I it should not have an edge with any node in I.
(b) The algorithm runs till V is empty (in $O(n)$ time), checking u against each vertex v in set I (in $O(n)$ time). So, the overall complexity is $O(n^2)$.
http://web.eecs.utk.edu/~huangj/CS302S04/notes/graphsearching.html
1. After f is visited, c or g should be visited next. So, the traversal is incorrect.
4. After c is visited, e or f should be visited next. So, the traversal is incorrect.
2 and 3 are correct.
The algorithm is an example of dynamic programming.
http://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm
If there is a sink in the graph, the adjacency matrix will contain all 1s (except the diagonal) in one column and all 0s (except the diagonal) in the corresponding row of that vertex. The given algorithm is a smart way of checking this, as it finds the sink in $O(n)$ time.
The first part of the code finds a vertex that has no outgoing edge to any vertex coming after it in the adjacency matrix. The smart part of the code is E2, which skips rows: once an edge from i to some j is found, neither i nor any vertex before j can be a sink, so the search jumps ahead. This is done through
E1: !A[i][j]
and
E2: i = j;
E1 ensures that as long as there is no edge from i to j, i remains a potential sink. If A[i][j] becomes 1, i can no longer be a sink; similarly, none of the previous j can be a sink (as there was no edge from i to them, and a sink requires an edge from all other vertices). The next potential candidate for sink is j, so in E2 we must make i = j.
The loop breaks when we have found a potential sink: a vertex with no outgoing edge to any vertex coming after it in the adjacency matrix. If the column of this vertex is all 1s and its row is all 0s (except the diagonal), it is the sink; otherwise there is no sink in the graph. E3 checks this condition.
In the code, $\text{flag}$ stores whether a sink is present, and by default a sink is considered present. So the condition in E3 must make flag = 0 if the found $i$ is not a sink. The condition should be:
A[i][j] || !A[j][i]
So, (D) is the answer.
Let this be the DFS order of the tree; then u = D and v = F.
What we can conclude:
1. It is not necessary that there is an edge between them.
2. If there is no edge, then u must be a leaf, i.e., D is a leaf here.
3. It is not always the case that u and v have the same parent, but they do have a common ancestor.
Run DFS to find connected components. Its time complexity is $\Theta(m+n)$, hence (C) is the answer.
D is correct.
Consider a graph with 2 nodes and one edge from $V_1$ to $V_2$.
Running the above algorithm will result in A being
| A | 1 | 2 |
| 1 | 1 | 2 |
| 2 | 1 | 2 |
Clearly options B and C are wrong. Since
 A[1][1] and A[2][2] > n-1 and there exists no Hamiltonian cycle. Hence invalid
 The longest path between $V_1$ and $V_2$ is 1, but A[1][2] is 2, which is invalid. And no path between $V_2$ and $V_1$ yet A[2][1] = 1 // it should be max cost path between j and k not path length.
Hence A or D could be valid.
Now consider a graph with 2 nodes and two edges, one from $V_1$ to $V_2$ and the other from $V_2$ to $V_1$. Running the above algorithm will result in A being
| A | 1 | 2 |
| 1 | 2 | 3 |
| 2 | 3 | 4 |
Hence option A is invalid, as A[i][j] can be >n
D is correct
Apply BFS on the undirected graph: visit every vertex level-wise, and for every vertex fill its adjacent vertices into the adjacency list. BFS takes O(m+n) time.
Take an extra field to store the number of list nodes for a particular vertex; this takes an extra O(m+n) time.
So B is the answer
Ans is b
As we can see, the last step is the verification step; in that step the values remain unchanged. If there were a negative-weight cycle reachable from the source, the values in the verification step would differ from the values above.
If the cycle is not reachable from the source, its vertices stay at distance (or cost) $\infty$ from the source from the beginning to the last step, since subtracting anything from $\infty$ still leaves it infinite.
But there can also be vertices that do not form a cycle and are still unreachable from the source; those too stay at distance $\infty$ from the beginning to the end.
Hence we cannot distinguish such vertices from a cycle, so we say that this algorithm can detect negative-weight cycles only if they are reachable from the source.
answer = option B
Take a tree for example
(A) False. Every vertex of a tree (other than the leaves) is a cut vertex.
(B)True
(C)False. Without E in G1 and G2, G1 U G2 has no bridge.
(D)False. G1 U G2, G1, G2 three graphs have same chromatic number of 2.
If there is a sink in the graph, the adjacency matrix will contain all 1's (except the diagonal) in one column and all 0's (except the diagonal) in the corresponding row of that vertex. The given algorithm is a smart way of checking this, as it finds the sink in $O(n)$ time.
The first part of the code finds a vertex that has no outgoing edge to any vertex coming after it in the adjacency matrix. The smart part of the code is E2, which skips rows: once an edge from i to some j is found, neither i nor any vertex before j can be a sink, so the search jumps ahead. This is done through
E1: !A[i][j]
and
E2: i = j;
E1 ensures that as long as there is no edge from i to j, i remains a potential sink. If A[i][j] becomes 1, i can no longer be a sink; similarly, none of the previous j can be a sink (as there was no edge from i to them, and a sink requires an edge from all other vertices). The next potential candidate for sink is j, so in E2 we must make i = j.
So, answer is (C)
For E3,
http://gateoverflow.in/3857/gate2005it_84b
Here we have to find the longest path in the graph.
Let's start with one node (choose any) and find the number of hops in which the whole graph is reachable, i.e., all $n$ vertices.
From the first node (say A):
(1) within 1 hop (1 edge traversal) we can reach at least $3/2$ nodes
{as the neighbourhood of A has at least $3/2$ nodes}
(2) within 2 hops we can reach at least $\left (\frac{3}{2} \right )^2$ nodes
(3) within 3 hops we can reach at least $\left (\frac{3}{2} \right )^3$ nodes
and so on. When the reached set has $\left ( \frac{n}{2} \right )$ nodes, within 1 more hop we can reach up to $\left ( \frac{n}{2} \right ) * \frac{3}{2} = \frac{3n}{4}$ nodes.
Up to here we take $O(\log n)$ hops {to reach $\frac{3n}{4}$ nodes}.
But this expansion property is guaranteed only for subsets of size at most $n/2$, so here I am stuck on how to proceed.
Hope it helps. Any suggestions on how to proceed further are welcome.
We can find single-source shortest paths in an unweighted graph using the breadth-first search (BFS) algorithm, which uses a "Queue" data structure and takes O(m+n) time (i.e., linear in the number of vertices and edges).
B. Fibonacci heap: $E$ decrease-key operations each taking $O(1)$ time, plus $V$ extract-min operations each taking $O(\log V)$.
A. Array: finding the min vertex in each iteration takes $O(V)$, and this needs to be done $V$ times.
A binomial heap behaves the same as a binary heap here, as the critical operations are decrease-key and extract-min.
For 82a The answer should be Option A because edge 'e' is the lightest safe edge connecting X and Y so the minimum spanning tree of G must contain 'e' (Greedy and optimal choice).
While B might seem correct, it is not always true. One such case is when G is not connected, so there might not be any path between 's' and 't'.
Since the question asks what must definitely be true, B is incorrect and A is the only correct option.
Lets say AC =1 CD = 2 BD = 3 and AB=4
Then if s= A and t= B then AC is the lightest edge crossing X and Y where X = { A } and Y = { C, B, D}
But clearly AC is not on the shortest path from A to B. The shortest path is AB = 4.
Answer for 82 b) is option a) A path from s to t in MST.
$T$ is the spanning tree and $E_T$ is the edge set of the tree.
Let $L$ denote the set of all paths $P_e$ in $T$ where $e$ is any edge in $E_G$. For $g \in E_T$, we define the congestion of $g$ as the number of paths in $L$ in which $g$ is an element:
$c(g, T) = |\{P_e \in L : g \in P_e\}|$.
We then define the congestion of $G$ embedded onto $T$ as the maximum $c(g, T)$ over all edges $g$ in $E_T$:
$c(G : T) = \max\{c(g, T) : g \in E_T\}$.
The tree congestion of $G$, denoted $t(G)$, is the minimum value of $c(G : T)$ over all trees $T$ in $T_G$. Similarly, the spanning tree congestion of $G$, denoted $s(G)$, is the minimum value of $c(G : T)$ over all spanning trees $S$ in $S_G$:
$t(G) = \min\{c(G : T) : T \in T_G\}$,
$s(G) = \min\{c(G : S) : S \in S_G\}$
Please refer to the link
http://www.math.csusb.edu/reu/dt07.pdf
INITIALIZATION:
For i = 1 ... n {
    For j = 1 ... n {
        if a[i,j] = 0 then P[i,j] = infinity   // if there is no direct edge, put infinity
        else P[i,j] = a[i,j];
    }
}
ALGORITHM:
For i = 1 ... n {
    For j = 1 ... n {
        For k = 1 ... n {
            P[i,j] = min(P[i,j], P[i,k] + P[k,j]);
        }
    }
}
Time complexity: O($n^3$).
This algorithm is for weighted graphs, but it works for unweighted graphs too, because if p[i,j]=1, p[i,k]=1 and p[k,j]=1, then according to the algorithm p[i,j] = min(p[i,j], p[i,k] + p[k,j]) = min(1, 2) = 1.
All the other cases are also satisfied (e.g., if p[i,j] was 0 in the last iteration and there exists a path via k).
Answer is (D)
(A) and (B) produce disconnected components with the GIVEN order in the options, which is NEVER allowed by Prim's algorithm.
(C) produces a connected component every time a new edge is added, BUT since the first vertex chosen is A, the first edge must be the minimum-weight edge incident on A. Therefore (A, D) MUST be chosen BEFORE (A, B). Therefore (C) is FALSE.
Here the answer should be A.
Here the shortest path gives 6.
The spanning tree contains edges of weights 2, 3, 4, so the congestion in this case is max(2, 3, 4) = 4. For the direct path from s to t the overall congestion is max(3, 4) = 4, but its total weight is 7.
Options C and D are, I think, not related to this question.
C.
BFS computes shortest paths from the source (if all edge costs are 1!).
Now if u is visited before v, it means one of two things:
1. Either u is closer to r than v is, or
2. u and v are at the same distance from r, and our BFS algorithm chose to visit u before v.
Either way, $d(r,u) \leq d(r,v)$.
Let $G = (V, E)$ be a simple undirected graph, and $s$ be a particular vertex in it called the source. For $x \in V$, let $d(x)$ denote the shortest distance in $G$ from $s$ to $x$. A breadth first search (BFS) is performed starting at $s$. Let $T$ be the resultant BFS tree. If $(u, v)$ is an edge of $G$ that is not in $T$, then which one of the following CANNOT be the value of $d(u)  d(v)$?
 $-1$
 $0$
 $1$
 $2$
Answers: Graph Search
$2$ is the answer.
$d(u) - d(v) = 0$ is possible when both $u$ and $v$ have an edge from $t$, and $t$ is on the shortest path from $s$ to $u$ or $v$.
$d(u) - d(v) = 1$ is possible when $v$ and $t$ are on the shortest path from $s$ to $u$, and both $t$ and $v$ are siblings at the same distance from $s$, causing the $t-u$ edge to be in the BFS tree and not the $v-u$ edge.
$d(u) - d(v) = -1$ is possible as explained above by interchanging $u$ and $v$.
$d(u) - d(v) = 2$ is not possible: during BFS traversal we visit either $u$ or $v$ first. Say $u$ is first. We put all neighbours of $u$ on the queue; since $v$ is a neighbour and not yet visited, $d(v)$ becomes $d(u) + 1$. Similarly if $v$ is visited first.
We are given 9 tasks $T_1, T_2, \dots, T_9$. The execution of each task requires one unit of time. We can execute one task at a time. Each task $T_i$ has a profit $P_i$ and a deadline $d_i$. Profit $P_i$ is earned if the task is completed before the end of the $d_i^{th}$ unit of time.
| Task | $T_1$ | $T_2$ | $T_3$ | $T_4$ | $T_5$ | $T_6$ | $T_7$ | $T_8$ | $T_9$ |
| Profit | 15 | 20 | 30 | 18 | 18 | 10 | 23 | 16 | 25 |
| Deadline | 7 | 2 | 5 | 3 | 4 | 5 | 2 | 7 | 3 |
What is the maximum profit earned?
 147
 165
 167
 175
We are given 9 tasks $T_1, T_2, \dots, T_9$. The execution of each task requires one unit of time. We can execute one task at a time. Each task $T_i$ has a profit $P_i$ and a deadline $d_i$. Profit $P_i$ is earned if the task is completed before the end of the $d_i^{th}$ unit of time.
| Task | $T_1$ | $T_2$ | $T_3$ | $T_4$ | $T_5$ | $T_6$ | $T_7$ | $T_8$ | $T_9$ |
| Profit | 15 | 20 | 30 | 18 | 18 | 10 | 23 | 16 | 25 |
| Deadline | 7 | 2 | 5 | 3 | 4 | 5 | 2 | 7 | 3 |
Are all tasks completed in the schedule that gives maximum profit?
 All tasks are completed
 $T_1$ and $T_6$ are left out
 $T_1$ and $T_8$ are left out
 $T_4$ and $T_6$ are left out
Answers: Greedy Algorithm
The most important statement in question is
each task requires one unit of time
This shows that we can greedily choose the better task and that should give us the optimal solution. The best task would be the one with maximum profit. Thus we can sort the tasks based on deadline and then profit as follows:
| Task | T7 | T2 | T9 | T4 | T5 | T3 | T6 | T8 | T1 |
| Deadline | 2 | 2 | 3 | 3 | 4 | 5 | 5 | 7 | 7 |
The schedule, slot by slot (slot $t$ runs from time $t-1$ to $t$):
| Slot | 0-1 | 1-2 | 2-3 | 3-4 | 4-5 | 5-6 | 6-7 |
| Task | T7 | T2 | T9 | T5 | T3 | T8 | T1 |
So we know that T4 and T6 are left out.
The profit therefore excludes T4 and T6: 15 + 20 + 30 + 18 + 16 + 23 + 25 = 147.
A is the answer.
The most important statement in question is
each task requires one unit of time
This shows that we can greedily choose the better task and that should give us the optimal solution. The best task would be the one with maximum profit. Thus we can sort the tasks based on deadline and then profit as follows:
| Task | T7 | T2 | T9 | T4 | T5 | T3 | T6 | T8 | T1 |
| Deadline | 2 | 2 | 3 | 3 | 4 | 5 | 5 | 7 | 7 |
| Slot | 0-1 | 1-2 | 2-3 | 3-4 | 4-5 | 5-6 | 6-7 |
| Task | T7 | T2 | T9 | T5 | T3 | T8 | T1 |
T4 and T6 are left out.
D is the answer.
Which one of the following hash functions on integers will distribute keys most uniformly over 10 buckets numbered 0 to 9 for $i$ ranging from 0 to 2020?
 $h(i) = i^2 \text{mod } 10$
 $h(i) = i^3 \text{mod } 10$
 $h(i) = (11 \ast i^2) \text{mod } 10$
 $h(i) = (12 \ast i^2) \text{mod } 10$
Consider a hash table of size seven, with starting index zero, and a hash function $(3x + 4)\mod 7$. Assuming the hash table is initially empty, which of the following is the contents of the table when the sequence 1, 3, 8, 10 is inserted into the table using closed hashing? Note that − denotes an empty location in the table.
 8, −, −, −, −, −, 10
 1, 8, 10, −, −, −, 3
 1, −, −, −, −, −, 3
 1, 10, 8, −, −, −, 3
Answers: Hashing
Since mod 10 is used, the last digit matters.
If you cube all numbers from 0 to 9, you get the following:
| Number | Cube | Last digit of cube |
| 0 | 0 | 0 |
| 1 | 1 | 1 |
| 2 | 8 | 8 |
| 3 | 27 | 7 |
| 4 | 64 | 4 |
| 5 | 125 | 5 |
| 6 | 216 | 6 |
| 7 | 343 | 3 |
| 8 | 512 | 2 |
| 9 | 729 | 9 |
Therefore all numbers from 0 to 2020 are equally divided among the 10 buckets. If we make a table for squares, we don't get an equal distribution: in the following table 1, 4, 6 and 9 are repeated, so those buckets get more entries, while buckets 2, 3, 7 and 8 remain empty.
| Number | Square | Last digit of square |
| 0 | 0 | 0 |
| 1 | 1 | 1 |
| 2 | 4 | 4 |
| 3 | 9 | 9 |
| 4 | 16 | 6 |
| 5 | 25 | 5 |
| 6 | 36 | 6 |
| 7 | 49 | 9 |
| 8 | 64 | 4 |
| 9 | 81 | 1 |
http://geeksquiz.com/gategatecs2015set2question43/
The answer is B.
1 occupies location 0 (since $(3 \cdot 1 + 4) \bmod 7 = 0$), and 3 occupies location 6. 8 hashes to location 0, which is already occupied, so it is placed in the next free location, location 1.
10 hashes to location 6, which also clashes; probing wraps past the occupied locations 0 and 1, so it lands in location 2.
Consider the process of inserting an element into a $Max \: Heap$, where the $Max \: Heap$ is represented by an $array$. Suppose we perform a binary search on the path from the new leaf to the root to find the position for the newly inserted element, the number of $comparisons$ performed is:
 $\Theta(\log_2n)$
 $\Theta(\log_2\log_2n)$
 $\Theta(n)$
 $\Theta(n\log_2n)$
In a min-heap with $n$ elements with the smallest element at the root, the $7^{th}$ smallest element can be found in time
 $\Theta(n \log n)$
 $\Theta(n)$
 $\Theta(\log n)$
 $\Theta(1)$
Let $S=\left \{ x_{1},....,x_{n} \right \}$ be a set of $n$ numbers. Consider the problem of storing the elements of $S$ in an array $A\left [ 1...n \right ]$ such that the following minheap property is maintained for all $2\leq i\leq n: A [\lfloor i/2\rfloor]\leq A[i]$. (Note that $\lfloor x\rfloor$ is the largest integer that is at most $x$). Which of the following statements is TRUE?
 This problem can be solved in $O(\log n)$ time.
 This problem can be solved in $O (n)$ time but not in $O(\log n)$ time.
 This problem can be solved in $O(n\log n)$ time but not in $O (n)$ time.
 This problem can be solved in $O \left ( n^{2} \right )$ time but not in $O(n\log n)$ time.
 None of the above.
The number of elements that can be sorted in $Θ(\log n)$ time using heap sort is
 $\Theta(1)$
 $\Theta(\sqrt{\log n})$
 $\Theta(\frac{\log n}{\log \log n})$
 $\Theta(\log n)$
Answers: Heap
Since a heap is a complete binary tree, in an array implementation we can compute each node's depth from its index. The path from the new leaf to the root has $\Theta(\log_2 n)$ nodes, and this path length is the $n$ for the binary search, giving $\Theta(\log_2 \log_2 n)$ comparisons.
Time to find the smallest element in a min-heap: one retrieve operation - $\Theta(1)$.
Time to find the second smallest element in a min-heap: requires $2^2 - 1 = 3$ check operations to find the second smallest out of 3 elements - $\Theta(1)$.
Time to find the $7^{th}$ smallest element: requires $O(2^7 - 1) = O(127)$ check operations to find the seventh smallest out of 127 possible ones - $\Theta(1)$.
In short, if the number of required operations is independent of the input size $n$, it is always $\Theta(1)$.
(Here we are doing a level-order traversal of the heap and checking the elements.)
If we are not allowed to traverse the heap and may use only the default heap operations, we are restricted to doing Extract-min 7 times, which would be $O(\log n)$.
Store the elements in an array and then call build_heap(A); build_heap takes O(n) time.
So, option (b) is correct.
But if we try building the heap by inserting elements one by one, the total complexity becomes O(n log n), because each insertion takes O(log n) and there are n insertions.
To sort $k$ elements in a heap, complexity is $\Theta (k \log k)$. Lets assume there are $\frac{\log n}{\log \log n}$ elements in the heap.
Complexity = $\Theta\left(\frac{\log n}{\log \log n} \log\left(\frac{\log n}{\log \log n}\right)\right) $
$=\Theta \left( \frac{\log n}{\log \log n} \left(\log \log n - \log \log \log n\right) \right )$
$= \Theta \left( \log n - \frac{\log n \log \log \log n}{\log \log n} \right )$
$=\Theta (\log n)$ (as shown below)
So, (c) is the answer.
$\log \log n > \log \log \log n$
$\implies \frac{\log \log \log n}{\log \log n} < 1$
$\implies \frac{\log n \log \log \log n}{\log \log n} < \log n$
$\implies \Theta \left( \log n - \frac{\log n \log \log \log n}{\log \log n} \right ) = \Theta (\log n)$
Suppose the letters $a, \,b, \,c, \,d, \,e, \,f$ have probabilities $\frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \frac{1}{16}, \frac{1}{32}, \frac{1}{32}$, respectively.
Which of the following is the Huffman code for the letters $a, \,b, \,c, \,d, \,e, \,f$?
 0, 10, 110, 1110, 11110, 11111
 11, 10, 011, 010, 001, 000
 11, 10, 01, 001, 0001, 0000
 110, 100, 010, 000, 001, 111
Suppose the letters $a, \,b, \,c, \,d, \,e, \,f$ have probabilities $\frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \frac{1}{16}, \frac{1}{32}, \frac{1}{32}$, respectively.
What is the average length of the Huffman code for the letters $a, \,b, \,c, \,d, \,e, \,f$?
 3
 2.1875
 2.25
 1.9375
The characters a to h have the set of frequencies based on the first 8 Fibonacci numbers as follows
a : 1, b : 1, c : 2, d : 3, e : 5, f : 8, g : 13, h : 21
A Huffman code is used to represent the characters. What is the sequence of characters corresponding to the following code?
110111100111010
 fdheg
 ecgdf
 dchfg
 fehdg
Answers: Huffman Code
Based on the probabilities, we can say the probable frequency of the letters will be
16, 8, 4, 2, 1, 1
Now, the Huffman tree can be constructed as follows:
So, A is the answer for 76.
https://www.siggraph.org/education/materials/HyperGraph/video/mpeg/mpegfaq/huffman_tutorial.html
Ans should be D)
Answer is A. Huffman's tree is as follows. The two least frequent characters are taken as the children of a newly made node and the frequency of the newly made node is made equal to the sum of those two child nodes. Then the same procedure is repeated till all nodes are finished.
110111100111010 = 110 11110 0 1110 10 = fdheg
What does the following code do?
var a, b: integer;
begin
    a := a + b;
    b := a - b;
    a := a - b
end;
 exchanges a and b
 doubles a and stores in b
 doubles b and stores in a
 leaves a and b unchanged
 none of the above
Suppose $c = \langle c[0], \dots, c[k1]\rangle$ is an array of length $k$, where all the entries are from the set {0, 1}. For any positive integers $a \text{ and } n$, consider the following pseudocode.
DOSOMETHING (c, a, n)
$z \leftarrow 1$
for $i \leftarrow 0 \text{ to } k1$
do $z \leftarrow z^2 \text{ mod } n$
if c[i]=1
then $z \leftarrow (z \times a) \text{ mod } n$
return z
If $k=4, c = \langle 1, 0, 1, 1 \rangle , a = 2, \text{ and } n=8$, then the output of DOSOMETHING(c, a, n) is _______.
Consider the following code.
def brian(n):
    count = 0
    while n != 0:
        n = n & (n - 1)
        count = count + 1
    return count
Here $n$ is meant to be an unsigned integer. The operator & considers its arguments in binary and computes their bit wise $AND$. For example, $22$ & $15$ gives $6$, because the binary (say 8bit) representation of $22$ is $00010110$ and the binary representation of $15$ is $00001111$, and the bitwise $AND$ of these binary strings is $00000110$, which is the binary representation of $6$. What does the function $\text{brian}$ return?
 The highest power of $2$ dividing $n$, but zero if $n$ is zero.
 The number obtained by complementing the binary representation of $n$.
 The number of ones in the binary representation of $n$.
 The code might go into an infinite loop for some $n$.
 The result depends on the number of bits used to store unsigned integers.
In the following Pascal program segment, what is the value of X after the execution of the program segment?
X := 10; Y := 20;
If X > Y then
    if X < 0 then
        X := abs(X)
    else
        X := 2*X;
 -10
 20
 10
 None
Assume that X and Y are nonzero positive integers. What does the following Pascal program segment do?
while X <> Y do
    if X > Y then X := X - Y
    else Y := Y - X;
write(X);
 Computes the LCM of two numbers
 Divides the larger number by the smaller number
 Computes the GCD of two numbers
 None of the above
 Consider the following Pascal function where $A$ and $B$ are nonzero positive integers. What is the value of $GET(3, 2)$?
function GET(A, B: integer): integer;
begin
    if B = 0 then GET := 1
    else if A < B then GET := 0
    else GET := GET(A-1, B) + GET(A-1, B-1)
end;
 The Pascal procedure given for computing the transpose of an $N \times N, (N>1)$ matrix $A$ of integers has an error. Find the error and correct it. Assume that the following declaration are made in the main program
const MAXSIZE = 20;
type INTARR = array [1..MAXSIZE, 1..MAXSIZE] of integer;
Procedure TRANSPOSE (var A: INTARR; N: integer);
var I, J, TMP: integer;
begin
    for I := 1 to N - 1 do
        for J := 1 to N do
        begin
            TMP := A[I, J];
            A[I, J] := A[J, I];
            A[J, I] := TMP
        end
end;
Consider the following C function.
int fun(int n) {
    int x = 1, k;
    if (n == 1) return x;
    for (k = 1; k < n; ++k)
        x = x + fun(k) * fun(n-k);
    return x;
}
The return value of fun(5) is ______.
What is the output printed by the following program?
#include <stdio.h>
int f(int n, int k) {
    if (n == 0) return 0;
    else if (n % 2) return f(n/2, 2*k) + k;
    else return f(n/2, 2*k) - k;
}
int main () {
    printf("%d", f(20, 1));
    return 0;
}
A) 5
B) 8
C) 9
D) 20
The following function computes the value of $\binom{m}{n}$ correctly for all legal values $m$ and $n$ ($m ≥1, n ≥ 0$ and $m > n$)
int func(int m, int n) {
    if (E) return 1;
    else return (func(m-1, n) + func(m-1, n-1));
}
In the above function, which of the following is the correct expression for E?
 (n == 0) || (m == 1)
 (n == 0) && (m == 1)
 (n == 0) || (m == n)
 (n == 0) && (m == n)
What function of x, n is computed by this program?
Function what(x, n: integer): integer;
Var value: integer;
begin
    value := 1;
    if n > 0 then
    begin
        if n mod 2 = 1 then
            value := value * x;
        value := value * what(x*x, n div 2);
    end;
    what := value;
end;
Consider the following C function definition
int Trial (int a, int b, int c) {
    if ((a >= b) && (c < b)) return b;
    else if (a >= b) return Trial(a, c, b);
    else return Trial(b, a, c);
}
The function Trial:
 Finds the maximum of a, b, and c
 Finds the minimum of a, b, and c
 Finds the middle number of a, b, c
 None of the above
In the following C program fragment, j, k, n and TwoLog_n are integer variables, and A is an array of integers. The variable n is initialized to an integer ≥ 3, and TwoLog_n is initialized to the value of $2 \ast \lceil \log_2(n) \rceil$
for (k = 3; k <= n; k++)
    A[k] = 0;
for (k = 2; k <= TwoLog_n; k++)
    for (j = k+1; j <= n; j++)
        A[j] = A[j] || (j%k);
for (j = 3; j <= n; j++)
    if (!A[j]) printf("%d", j);
The set of numbers printed by this program fragment is
 $\left\{m \mid m \leq n, (\exists i)\left[m=i!\right]\right\}$
 $\left\{m \mid m \leq n, (\exists i) \left[m=i^2\right]\right\}$
 $\left\{m \mid m \leq n, \text{m is prime} \right\}$
 { }
Consider the following function:
int unknown(int n) {
    int i, j, k = 0;
    for (i = n/2; i <= n; i++)
        for (j = 2; j <= n; j = j*2)
            k = k + n/2;
    return k;
}
The return value of the function is
(A) $\Theta(n^2)$ (B) $\Theta(n^2\log n)$ (C) $\Theta(n^3)$ (D) $\Theta(n^3\log n)$
Consider the following Cfunction in which a[n] and b[m] are two sorted integer arrays and c[n+m] be another integer array,
void xyz(int a[], int b[], int c[]) {
    int i, j, k;
    i = j = k = 0;
    while ((i < n) && (j < m))
        if (a[i] < b[j]) c[k++] = a[i++];
        else c[k++] = b[j++];
}
Which of the following condition(s) hold(s) after the termination of the while loop?
 $j<m, k=n+j-1$ and $a[n-1]< b[j]$ if $i=n$
 $i<n, k=m+i-1$ and $b[m-1]\leq a[i]$ if $j=m$
(A) only (i)
(B) only (ii)
(C) either (i) or (ii) but not both
(D) neither (i) nor (ii)
Let $A$ be the square matrix of size $n \times n$. Consider the following pseudocode. What is the expected output?
C = 100;
for i = 1 to n do
    for j = 1 to n do
    {
        Temp = A[i][j] + C;
        A[i][j] = A[j][i];
        A[j][i] = Temp - C;
    }
for i = 1 to n do
    for j = 1 to n do
        output(A[i][j]);
(A) The matrix A itself
(B) Transpose of the matrix A
(C) Adding 100 to the upper diagonal elements and subtracting 100 from lower diagonal elements of A
(D) None of the above
Consider the following C function in which size is the number of elements in the array E:
int MyX(int *E, unsigned int size) {
    int Y = 0;
    int Z;
    int i, j, k;
    for (i = 0; i < size; i++)
        Y = Y + E[i];
    for (i = 0; i < size; i++)
        for (j = i; j < size; j++) {
            Z = 0;
            for (k = i; k <= j; k++)
                Z = Z + E[k];
            if (Z > Y)
                Y = Z;
        }
    return Y;
}
The value returned by the function MyX is the
(A) maximum possible sum of elements in any subarray of array E.
(B) maximum element in any subarray of array E.
(C) sum of the maximum elements in all possible subarrays of array E.
(D) the sum of all the elements in the array E.
Consider the following C function.
For large values of y, the return value of the function f best approximates
float f(float x, int y) {
    float p, s;
    int i;
    for (s = 1, p = 1, i = 1; i < y; i++) {
        p *= x/i;
        s += p;
    }
    return s;
}
 $x^y$
 $e^x$
 $\text{ln} (1+x)$
 $x^x$
Consider the function func shown below:
int func(int num) {
    int count = 0;
    while (num) {
        count++;
        num >>= 1;
    }
    return count;
}
The value returned by func(435) is ________
What does the following algorithm approximate? (Assume $m > 1, \epsilon >0$).
x = m; y = 1;
while (x - y > ϵ) {
    x = (x + y)/2;
    y = m/x;
}
print(x);
 $\log \, m$
 $m^2$
 $m^{\frac{1}{2}}$
 $m^{\frac{1}{3}}$
Answers: Identify Function
Initially $k = 4$, $c = \langle 1, 0, 1, 1 \rangle$, $a = 2$, $n = 8$.
Now let's iterate through the function step by step:
z = 1 (at the start of DOSOMETHING)
i = 0: z = 1*1 % 8 = 1; c[0] = 1, so in the if clause z = 1*2 % 8 = 2
i = 1: z = 2*2 % 8 = 4; c[1] = 0, so z is unchanged
i = 2: z = 4*4 % 8 = 0; c[2] = 1, so z = 0*2 % 8 = 0
No need to check further: every operation that updates z is multiplicative, so once z reaches 0 its value never changes. The output is 0.
Option C. It returns the number of 1's in the binary representation of n.
Here n & (n-1) resets the rightmost set bit of n in each iteration.
e.g.
Suppose n = 15 = 00001111 (binary)
n-1 = 14 (00001110)
  00001111
& 00001110
----------
  00001110
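The counting trick above can be sketched as a small C routine (the function name is illustrative):

```c
#include <assert.h>

/* Counts set bits by clearing the rightmost 1 on each iteration:
   n & (n - 1) drops the lowest set bit of n. */
int count_ones(unsigned int n)
{
    int count = 0;
    while (n) {
        n &= (n - 1);
        count++;
    }
    return count;
}
```

The loop runs once per set bit, so it is faster than shifting through every bit position when the word is sparse.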
The value of X remains unchanged, as the if condition becomes false.
X := 10
Answer is C. This is the classic dangling-else issue: in both C and Pascal, an else always matches the closest unmatched if.
https://en.wikipedia.org/wiki/Dangling_else
```c
if (x > y) {
    if (x < 0)
        x = abs(x);
    else
        x = 2 * x;
}
```
Let X = 3 and Y = 7.
1st pass: X=3, Y=4
2nd pass: X=3, Y=1
3rd pass: X=2, Y=1
4th pass: X=1, Y=1
write (X), which writes 1.
Ref: http://www.naturalnumbers.org/EuclidSubtract.html
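The subtraction-based Euclid algorithm traced above can be sketched in C (the function name is illustrative):

```c
#include <assert.h>

/* GCD by repeated subtraction, as in the trace above:
   starting from X = 3, Y = 7, the values converge to 1. */
int gcd_by_subtraction(int x, int y)
{
    while (x != y) {
        if (x > y)
            x = x - y;
        else
            y = y - x;
    }
    return x;
}
```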
For B.
begin
for I:=2 to N do
for J:=1 to (I-1) do
begin
TMP:= A[I, J];
A[I, J]:= A[J, I];
A[J, I]:= TMP
end
Should be the condition.
Answer for B:
The outer for loop should run from 1 to n/2; if it ran n-1 times, it would put the transposed elements back in their original places.
fun(1) = 1;
fun(2) = 1 + fun(1) * fun(1) = 1 + 1 = 2;
fun(3) = 1 + fun(1) * fun(2) + fun(2) * fun(1) = 5;
fun(4) = 1 + fun(1) * fun(3) + fun(2) * fun(2) + fun(3) * fun(1) = 1 + 5 + 4 + 5 = 15;
fun(5) = 1 + fun(1) * fun(4) + fun(2) * fun(3) + fun(3) * fun(2) + fun(4) * fun(1) = 1 + 15 + 10 + 10 + 15 = 51;
More formal way:
The recurrence relation is
$f(n) = \begin{cases} 1, & n = 1 \\ 1 + \sum_{i=1}^{n-1} f(i) \times f(n-i), & n > 1 \end{cases}$
$f(1) = 1$
$f(2) = 1+ f(1).f(1) \\= 1 + 1.1 = 2$
$f(3) = 1 + f(1). f(2) + f(2).f(1) \\= 1+ 1.2 + 2.1 = 5$
$f(4) = 1 + f(1).f(3) + f(2).f(2) + f(3).f(1) \\= 1+ 1.5 + 2.2 + 5.1 = 15$
$f(5) = 1 + f(1).f(4) + f(2).f(3) + f(3). f(2)+ f(4).f(1) \\= 1 + 1.15 + 2.5 + 5.2 + 15.1 = 51$
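A direct C translation of this recurrence (illustrative and exponential-time, fine for small n) reproduces the values above:

```c
#include <assert.h>

/* f(1) = 1; f(n) = 1 + sum_{i=1}^{n-1} f(i) * f(n-i) for n > 1 */
int f(int n)
{
    if (n == 1)
        return 1;
    int s = 1;
    for (int i = 1; i < n; i++)
        s += f(i) * f(n - i);
    return s;
}
```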
6.) f(20,1) = 9.
5.) f(10,2) - 1 = 9
4.) f(5,4) - 2 = 10
3.) f(2,8) + 4 = 12
2.) f(1,16) - 8 = 8
1.) f(0,32) + 16 = 16
Because $\binom{m}{0}$ = 1 and $\binom{n}{n}$ = 1.
Answer: $x^{n}$
| a | b | c | Return |
|---|---|---|--------|
| 1 | 1 | 1 | The final return statement is c < b, so this never returns. Answer D. |
The nested loop takes each integer from 2 to $2\log_2 n$ and, for each of them, marks every non-multiple of it below n by setting the corresponding entry of A to 1. For example, for 2 and n = 10, A[3], A[5], A[7] and A[9] are set to 1. Similarly for 3, 4, ..., up to $2\log_2 n$. So if any entry A[p] were still 0, p would have to be a multiple of every one of 2, 3, ..., $2\log_2 n$, hence at least their lcm, which is greater than n. So, for no index p will A[p] be 0, and the answer is D.
Suppose the line
A[j] = A[j]  (j%k);
is replaced with
A[j] = A[j]  !(j%k);
Now the nested loop takes each integer from 2 to $2\log_2 n$ and, for each, marks every multiple of it below n by setting the corresponding entry of A to 1. For example, for 2 and n = 10, A[4], A[6], A[8] and A[10] are set to 1. Similarly for 3, 4, ..., up to $2\log_2 n$. So every non-prime index of A holds a 1 and every prime index holds a 0, and we print j if A[j] is 0, meaning j is prime.
The outer loop runs n/2 times and the inner loop runs $\log_2 n$ times (each iteration doubles j, and j stops at n, so the j loop iterates $\log_2 n$ times).
Now, in each iteration k is incremented by n/2. So, overall, k is increased by $\frac{n}{2} \times \log_2 n \times \frac{n}{2}$ starting from an initial value of 0. So, the final value of k is $\Theta(n^2 \log n)$.
The while loop adds elements from a and b (whichever is smaller) to c and terminates when either of them is exhausted. So, when the loop terminates, either i = n or j = m.
Suppose i = n. This means all elements from array a were added to c, so k must have been incremented n times for them. c also contains j elements from array b. So, the number of elements in c is n+j, and hence k = n + j.
Similarly, when j = m, k = m + i.
Hence, option (D) is correct. (Had k started from 1 instead of 0 and we used ++k inside the loop, the answer would have been option (C).)
In the computation of the given pseudocode, for each row and column of matrix A, each upper-triangular element is interchanged with its mirror image in the lower triangle, and then that same lower-triangular element is interchanged back with its mirror image in the upper triangle, so the final computed matrix A is the same as the input matrix A.
answer is (A) maximum possible sum of elements in any subarray of array E.
```c
int MyX(int *E, unsigned int size)
{
    int Y = 0;
    int Z;
    int i, j, k;
    /* compute the sum of all elements of E and store it in Y */
    for (i = 0; i < size; i++)
        Y = Y + E[i];
    /* compute the sum of every subarray E[i..j] */
    for (i = 0; i < size; i++)
        for (j = i; j < size; j++) {
            Z = 0;
            for (k = i; k <= j; k++)
                Z = Z + E[k];
            /* if this subarray's sum beats the current maximum, keep it */
            if (Z > Y)
                Y = Z;
        }
    /* Y is now the maximum possible sum of elements in any subarray of E */
    return Y;
}
```
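For contrast, the same quantity can be computed in O(n) with Kadane's algorithm; this sketch is not part of the original question and assumes size >= 1:

```c
#include <assert.h>

/* O(n) alternative to MyX's cubic scan (Kadane's algorithm).
   cur is the best sum of a subarray ending at index i. */
int max_subarray(const int *E, unsigned int size)
{
    int best = E[0], cur = E[0];
    for (unsigned int i = 1; i < size; i++) {
        /* extend the running subarray only if it helps */
        cur = (cur > 0) ? cur + E[i] : E[i];
        if (cur > best)
            best = cur;
    }
    return best;
}
```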
$$\begin{array}{lllll}
i = 1 & \text{then} & p = x & \text{and} & s = 1 + x\\[1em]
i = 2 & \text{then} & p = \frac{x^2}{2} & \text{and} & s = 1+x+\frac{x^2}{2}
\end{array}$$
As $y$ tends to infinity, $s$ tends to $e^x$.
Hence, the correct answer is option B.
Ans  9
435 = (110110011) in binary.
num >>= 1; shifts num one bit to the right on every iteration of the while loop. The loop body executes 9 times; on the 10th test num is zero.
So count is incremented 9 times.
Note:
Shifting a number 1 bit position to the right has the effect of dividing it by 2:
8 >> 1 = 4 // In binary: (00001000) >> 1 = (00000100)
By putting y = m/x into x = (x + y)/2:
x = (x + m/x)/2
=> 2x^{2} = x^{2} + m
=> x = m^{1/2}
Or we can check by plugging in 2-3 different values.
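The loop is exactly the Babylonian/Newton iteration for the square root; a C sketch (function and parameter names are illustrative):

```c
#include <assert.h>
#include <math.h>

/* Newton's (Babylonian) iteration for sqrt(m), m > 1:
   x and y close in on m^(1/2) from above and below.
   eps is the stopping tolerance from the question. */
double approx_sqrt(double m, double eps)
{
    double x = m, y = 1.0;
    while (x - y > eps) {
        x = (x + y) / 2.0;
        y = m / x;
    }
    return x;
}
```

Note x >= sqrt(m) >= y holds throughout (AM-GM), so x - y shrinks monotonically and the loop terminates.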
Consider the Insertion Sort procedure given below, which sorts an array L of size $n\left ( \geq 2 \right )$ in ascending order:
```
begin
    for xindex := 2 to n do
        x := L[xindex];
        j := xindex - 1;
        while j > 0 and L[j] > x do
            L[j + 1] := L[j];
            j := j - 1;
        end {while}
        L[j + 1] := x;
    end {for}
end
```
It is known that insertion sort makes at most n(n-1)/2 comparisons. Which of the following is true?
 There is no input on which Insertion Sort makes n(n-1)/2 comparisons.
 Insertion Sort makes n(n-1)/2 comparisons when the input is already sorted in ascending order.
 Insertion Sort makes n(n-1)/2 comparisons only when the input is sorted in descending order.
 There is more than one input ordering where insertion sort makes n(n-1)/2 comparisons.
 Insertion Sort makes n(n-1)/2 comparisons whenever all the elements of L are not distinct.
Answers: Insertion Sort
In the worst case Insertion Sort makes N(N-1)/2 comparisons, i.e., when the input is sorted in descending order.
50 40 30 20 10
pass 1: 50 40 30 20 10 .............. 0 comparisons
pass 2: 40 50 30 20 10 .............. 1 comparison
.
.
.
pass n: 10 20 30 40 50 .............. n-1 comparisons
1 + 2 + 3 + ... + (n-1) = N(N-1)/2 comparisons
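An instrumented insertion sort (a sketch; the counter name is illustrative) confirms the n(n-1)/2 count for a descending input:

```c
#include <assert.h>

/* Insertion sort that counts key comparisons.
   A strictly descending input of n elements yields n(n-1)/2;
   an already-ascending input yields only n-1. */
int insertion_sort_comparisons(int *L, int n)
{
    int comparisons = 0;
    for (int i = 1; i < n; i++) {
        int x = L[i], j = i - 1;
        while (j >= 0) {
            comparisons++;            /* one comparison of L[j] with x */
            if (L[j] > x) {
                L[j + 1] = L[j];
                j--;
            } else {
                break;
            }
        }
        L[j + 1] = x;
    }
    return comparisons;
}
```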
Let $P_1, P_2,\dots , P_n$ be $n$ points in the $xy$-plane such that no three of them are collinear. For every pair of points $P_i$ and $P_j$, let $L_{ij}$ be the line passing through them. Let $L_{ab}$ be the line with the steepest gradient among all $n(n-1)/2$ lines.
The time complexity of the best algorithm for finding $P_a$ and $P_b$ is
 $\Theta\left(n\right)$
 $\Theta\left(n\log n\right)$
 $\Theta\left(n\log^2 n\right)$
 $\Theta\left(n^2\right)$
Answers: Lines Curves
Linked lists are not suitable data structures for which one of the following problems?

Insertion sort

Binary search

Radix sort

Polynomial manipulation
The following C function takes a singly-linked list as input argument. It modifies the list by moving the last element to the front of the list and returns the modified list. Some part of the code is left blank.
```c
typedef struct node {
    int value;
    struct node *next;
} Node;

Node *move_tofront(Node *head)
{
    Node *p, *q;
    if ((head == NULL) || (head->next == NULL))
        return head;
    q = NULL;
    p = head;
    while (p->next != NULL) {
        q = p;
        p = p->next;
    }
    _______________
    return head;
}
```
Choose the correct alternative to replace the blank line.
 q = NULL; p->next = head; head = p;
 q->next = NULL; head = p; p->next = head;
 head = p; p->next = q; q->next = NULL;
 q->next = NULL; p->next = head; head = p;
Let P be a singly linked list. Let Q be the pointer to an intermediate node x in the list. What is the worst-case time complexity of the best-known algorithm to delete the node x from the list?
 O(n)
 O(log^{2} n)
 O(log n)
 O(1)
Consider the following piece of ‘C’ code fragment that removes duplicates from an ordered list of integers.
```c
Node *removeduplicates(Node *head, int *j)
{
    Node *t1, *t2;
    *j = 0;
    t1 = head;
    if (t1 != NULL)
        t2 = t1->next;
    else
        return head;
    *j = 1;
    if (t2 == NULL)
        return head;
    while (t2 != NULL) {
        if (t1->val != t2->val) {   /* <- S1 */
            (*j)++;
            t1->next = t2;
            t1 = t2;                /* <- S2 */
        }
        t2 = t2->next;
    }
    t1->next = NULL;
    return head;
}
```
Assume the list contains $n$ elements ($n\geq 2$) in the following questions.

How many times is the comparison in statement S1 made?

What is the minimum and the maximum number of times statements marked S2 get executed?

What is the significance of the value in the integer pointed to by $j$ when the function completes?
Answers: Linked Lists
q is the node just before the tail p; after the move, q becomes the new tail, so q->next must be set to NULL.
Answer D
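With option D filled into the blank, the routine looks like this (the push helper is added here just for illustration):

```c
#include <assert.h>
#include <stdlib.h>

typedef struct Node {
    int value;
    struct Node *next;
} Node;

/* Illustrative helper: prepend v to the list. */
Node *push(Node *head, int v)
{
    Node *n = malloc(sizeof *n);
    n->value = v;
    n->next = head;
    return n;
}

Node *move_tofront(Node *head)
{
    Node *p, *q;
    if (head == NULL || head->next == NULL)
        return head;
    q = NULL;
    p = head;
    while (p->next != NULL) {   /* p ends at the last node, q just before it */
        q = p;
        p = p->next;
    }
    q->next = NULL;   /* option D: q becomes the new tail */
    p->next = head;   /* old last node now precedes the old head */
    head = p;
    return head;
}
```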
these steps are mandatory for the algorithm :
$\begin{align*} temp &= X \rightarrow next\\ X \rightarrow next &= Y \\ Y \rightarrow next &= temp \end{align*}$
a. As we are comparing pairwise, for n elements we necessarily make n-1 comparisons.
b. S2 is executed only for distinct elements, so a maximum of n-1 times and a minimum of 0 times (when all elements are duplicates, or the list has 0 or 1 elements).
c. j holds the count of distinct elements in the ordered list.
Consider the following program for summing the entries of the array $b$: array $[0 .. N1]$ of integers, where $N$ is a positive integer. (The symbol '<>' denotes 'not equal to').
```
var i, s: integer;
Program
    i := 0;
    s := 0;
[*] while i <> N do
        s := s + b[i];
        i := i + 1;
    od
```
Which of the following gives the invariant that holds at the beginning of each loop, that is, each time the program arrives at point [*] ?
 $s = \sum\limits^{N}_{j=0}b[j] \;\&\; 0 \leq i \leq N$
 $s = \sum\limits^{i-1}_{j=0}b[j] \;\&\; 0 \leq i < N$
 $s = \sum\limits^{i}_{j=0}b[j] \;\&\; 0 < i \leq N$
 $s = \sum\limits^{N}_{j=1}b[j] \;\&\; 0 \leq i < N$
 $s = \sum\limits^{i-1}_{j=0}b[j] \;\&\; 0 \leq i \leq N$
The following function computes $X^{Y}$ for positive integers $X$ and $Y$.
```c
int exp(int X, int Y)
{
    int res = 1, a = X, b = Y;
    while (b != 0) {
        if (b % 2 == 0) {
            a = a * a;
            b = b / 2;
        } else {
            res = res * a;
            b = b - 1;
        }
    }
    return res;
}
```
Which one of the following conditions is TRUE before every iteration of the loop?
 $X^{Y} = a^{b}$
 $(res * a)^{Y} = (res * X)^{b}$
 $X^{Y} = res * a^{b}$
 $X^{Y} = (res * a)^{b}$
Answers: Loop Invariants
Whenever we encounter the [*], the variable $s$ holds the sum of all elements $b[0]$ to $b[i-1]$.
When we first enter the loop, $i=0$, and $s$ doesn't have any elements summed up.
When we last enter the loop, $i = N-1$ and $s$ contains the sum of elements $b[0]$ through $b[N-2]$.
We leave the loop when $i=N$, and $s$ holds the sum of elements $b[0]$ to $b[N-1]$.
The only option that matches this behavior is option E:
$$s = \sum\limits^{i-1}_{j=0}b[j] \;\&\; 0 \leq i \leq N$$
Answer ==> C
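The option (C) invariant X^Y = res * a^b can be checked mechanically; this harness is illustrative (not from the question) and recomputes a^b naively at the top of each iteration:

```c
#include <assert.h>

/* Runs the fast-exponentiation loop and asserts the loop
   invariant  expected == res * a^b  before every iteration,
   where expected = X^Y is supplied by the caller. */
int exp_checked(int X, int Y, long long expected)
{
    long long res = 1, a = X, b = Y;
    while (b != 0) {
        long long p = 1;
        for (long long i = 0; i < b; i++)   /* naive a^b for the check */
            p *= a;
        assert(res * p == expected);        /* option (C) invariant */
        if (b % 2 == 0) {
            a = a * a;
            b = b / 2;
        } else {
            res = res * a;
            b = b - 1;
        }
    }
    return (int)res;
}
```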
If $p=10, q=100, r=20, s=5$ and $t=80$, then the minimum number of scalar multiplications needed is
(A) 248000
(B) 44000
(C) 19000
(D) 25000
Answers: Matrix Chain Ordering
Answer is C.
Ordering:
First multiply $M_2 \times M_3$. This requires $100 \times 20 \times 5 = 10000$ multiplications.
Then multiply $M_1 \times (M_2 \times M_3)$. This requires $10 \times 100 \times 5 = 5000$ multiplications.
Then multiply $(M_1 \times (M_2 \times M_3)) \times M_4$. This requires $10 \times 5 \times 80 = 4000$ multiplications.
Total: 19000 multiplications.
Brute-force approach, which anyone can do:
The number of possible orderings for 4 matrices is $C_3$, where $C_3$ is the $3^{rd}$ Catalan number, given by $n=3$ in $\frac{1}{n+1} {}^{2n}C_n = 5.$
So, here we have
 $(M_1 \times M_2) \times (M_3 \times M_4)$
 $(M_1 \times (M_2 \times M_3)) \times M_4$
 $((M_1 \times M_2) \times M_3) \times M_4$
 $M_1 \times (M_2 \times (M_3 \times M_4))$
 $M_1 \times ((M_2 \times M_3) \times M_4))$
Each of these would give no. of multiplications required as
 $pqr + rst + prt $
 $qrs + pqs + pst$
 $pqr + prs + pst$
 $rst + qrt + pqt$
 $qrs + qst + pqt$
The last two contain $qt$ terms, which are by far the largest ($qt = 8000$; multiplied by any other dimension this exceeds every value in the choices), so we can drop them from consideration and evaluate only the first three.
 $pqr + rst + prt = 20000 + 8000 + 16000 = 44000$
 $qrs + pqs + pst = 10000 + 5000 + 4000 = 19000$, the smallest value among the choices; we can stop here.
 $pqr + prs + pst$
Dynamic Programming Solution (should know Matrix Chain Ordering algorithm)
Here we have a chain of length 4.
Dynamic programming solution of Matrix chain ordering has the solution
$m[i, j] = \begin{cases} 0 &\text{ if } i=j\\ \min_{i\leq k < j } m[i][k] + m[k+1][j] + p_{i-1}p_{k}p_{j} &\text{ if } i < j\end{cases}$
So, we can fill the following table starting with the diagonals and moving upward diagonally. Here $k < j$ but $\geq i.$
|     | j=1 | j=2 | j=3 | j=4 |
|-----|-----|-----|-----|-----|
| i=1 | 0 | $p_0p_1p_2 = 20000$ | $\min(10000+p_0p_1p_3,\ 20000+p_0p_2p_3) = 15000$ | $\min(50000+p_0p_1p_4,\ 28000+p_0p_2p_4,\ 15000+p_0p_3p_4) = 19000$ |
| i=2 |   | 0 | $p_1p_2p_3 = 10000$ | $\min(8000+p_1p_2p_4,\ 10000+p_1p_3p_4) = 50000$ |
| i=3 |   |   | 0 | $p_2p_3p_4 = 8000$ |
| i=4 |   |   |   | 0 |
Our required answer is given by $m[1,4] = 19000.$
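The table above corresponds to the standard bottom-up matrix-chain algorithm; a C sketch (MAXN and the names are illustrative):

```c
#include <assert.h>
#include <limits.h>

#define MAXN 16

/* p holds the n+1 dimensions of an n-matrix chain
   (matrix i is p[i-1] x p[i]); m[i][j] is the minimum number of
   scalar multiplications needed for the chain i..j. */
int matrix_chain(const int *p, int n)
{
    static int m[MAXN][MAXN];
    for (int i = 1; i <= n; i++)
        m[i][i] = 0;
    for (int len = 2; len <= n; len++) {
        for (int i = 1; i + len - 1 <= n; i++) {
            int j = i + len - 1;
            m[i][j] = INT_MAX;
            for (int k = i; k < j; k++) {
                int cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j];
                if (cost < m[i][j])
                    m[i][j] = cost;
            }
        }
    }
    return m[1][n];
}
```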
Answer is 1500!
Matrix parenthesizing: $A_1((A_2 A_3) A_4)$
Check my solution below, using dynamic programming (there was a small mistake in the parentheses written in the image; ignore it and use the parenthesization above).
It takes $O(n)$ time to find the median in a list of $n$ elements, which are not necessarily in sorted order while it takes only $O(1)$ time to find the median in a list of $n$ sorted elements. How much time does it take to find the median of $2n$ elements which are given as two lists of $n$ sorted elements each?
 $O (1)$
 $O \left(\log n\right)$ but not $O (1)$
 $O (\sqrt{n})$ but not $O \left(\log n\right)$
 $O (n)$ but not $O (\sqrt{n})$
 $O \left(n \log n\right)$ but not $O (n)$
Answers: Median
1) Calculate the medians m1 and m2 of the input arrays ar1[] and ar2[] respectively.
2) If m1 and m2 are equal, return m1 (or m2).
3) If m1 is greater than m2, then the median is present in one of the two subarrays below:
    a) From the first element of ar1 to m1 (ar1[0 to n/2])
    b) From m2 to the last element of ar2 (ar2[n/2 to n-1])
4) If m2 is greater than m1, then the median is present in one of the two subarrays below:
    a) From m1 to the last element of ar1 (ar1[n/2 to n-1])
    b) From the first element of ar2 to m2 (ar2[0 to n/2])
5) Repeat the above process until the size of both subarrays becomes 2.
6) If the size of both arrays is 2, then compute the median as:
Median = (max(ar1[0], ar2[0]) + min(ar1[1], ar2[1]))/2
Time complexity: $O(\log n)$
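A C sketch of the divide-and-conquer procedure described in the steps above (helper names are illustrative):

```c
#include <assert.h>

static int imax(int a, int b) { return a > b ? a : b; }
static int imin(int a, int b) { return a < b ? a : b; }

/* Median of one sorted array of n elements (average of the two
   middle values when n is even). */
static double median_of(const int *a, int n)
{
    if (n % 2 == 0)
        return (a[n / 2] + a[n / 2 - 1]) / 2.0;
    return a[n / 2];
}

/* Median of two equal-size sorted arrays in O(log n):
   compare the two medians and recurse into the halves that
   can still contain the combined median. */
double get_median(const int *a, const int *b, int n)
{
    if (n <= 0)
        return 0;
    if (n == 1)
        return (a[0] + b[0]) / 2.0;
    if (n == 2)
        return (imax(a[0], b[0]) + imin(a[1], b[1])) / 2.0;
    double m1 = median_of(a, n), m2 = median_of(b, n);
    if (m1 == m2)
        return m1;
    if (m1 < m2) {  /* median is in the upper half of a, lower half of b */
        if (n % 2 == 0)
            return get_median(a + n / 2 - 1, b, n - n / 2 + 1);
        return get_median(a + n / 2, b, n - n / 2);
    }
    /* symmetric case: upper half of b, lower half of a */
    if (n % 2 == 0)
        return get_median(b + n / 2 - 1, a, n - n / 2 + 1);
    return get_median(b + n / 2, a, n - n / 2);
}
```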
Let a and b be two sorted arrays containing n integers each, in nondecreasing order. Let c be a sorted array containing 2n integers obtained by merging the two arrays a and b. Assuming the arrays are indexed starting from 0, consider the following four statements
 a[i] ≥ b [i] => c[2i] ≥ a [i]
 a[i] ≥ b [i] => c[2i] ≥ b [i]
 a[i] ≥ b [i] => c[2i] ≤ a [i]
 a[i] ≥ b [i] => c[2i] ≤ b [i]
Which of the following is TRUE?
 only I and II
 only I and IV
 only II and III
 only III and IV
For merging two sorted lists of sizes $m$ and $n$ into a sorted list of size $m+n$, we require comparisons of

$O(m)$

$O(n)$

$O(m+n)$

$O(\log m + \log n)$
Assume that a mergesort algorithm in the worst case takes 30 seconds for an input of size 64. Which of the following most closely approximates the maximum input size of a problem that can be solved in 6 minutes?
 256
 512
 1024
 2048
Answers: Merge Sort
Since both a and b are sorted to begin with, there are i elements smaller than or equal to a[i] in a (i starts from 0), and similarly i elements smaller than or equal to b[i] in b. So, a[i] ≥ b[i] means there are 2i elements smaller than or equal to a[i], and hence in the merged array a[i] comes after these 2i elements (its index is ≥ 2i). So, c[2i] ≤ a[i] (equality takes care of the "equal to" case, which arises when the arrays contain repeated elements).
Similarly, a[i] ≥ b[i] says for b that, there are not more than 2i elements smaller than b[i] in the sorted array (i elements from b, and maximum another i elements from a). So, b[i] ≤ c[2i]
So, II and III are correct → option (C)
The number of moves, however, is always m+n, so we can term it $\Theta(m+n)$. But the number of comparisons varies with the input: in the best case it is min(m, n) and in the worst case it is m+n-1.
For an input of size $64$, the algorithm takes $30s$. Therefore,
$$\begin{align}
k \times 64 \log_2{64} &= 30s\\[1em]
k \times 384 &= 30s\\[1em]
\implies k &= 0.078125s
\end{align}$$
Let the size of the problem that can be solved in $6$ minutes be $x$. Then,
$$k \times x \log_2 x = 360s$$
From this, we get:
$$\begin{align}
x \log_2 x &= \frac{360s}{0.078125s}\\[1em]
\implies x &= 512
\end{align}$$
$G$ is a graph on $n$ vertices and $2n-2$ edges. The edges of $G$ can be partitioned into two edge-disjoint spanning trees. Which of the following is NOT true for $G$?
 For every subset of $k$ vertices, the induced subgraph has at most $2k-2$ edges.
 The minimum cut in $G$ has at least 2 edges.
 There are at least 2 edgedisjoint paths between every pair of vertices.
 There are at least 2 vertexdisjoint paths between every pair of vertices.
Answers: Minimum
Let's take any subgraph of $G$ with $k$ vertices. The remaining subgraph has $n-k$ vertices. Between these two subgraphs (provided both have at least one vertex) there must be at least 2 edges, as we have 2 edge-disjoint spanning trees in $G$ and each must cross the cut. So, (b) is TRUE. Also, the subgraph with $k$ vertices cannot have more than $2k-2$ edges, since each of the two edge-disjoint spanning trees can contribute at most $k-1$ edges to it. So, (a) is also TRUE.
A spanning tree covers all the vertices. So, 2 edge-disjoint spanning trees in $G$ means that between every pair of vertices in $G$ we have two edge-disjoint paths (the lengths of the paths may vary). So, (c) is also TRUE.
That leaves option (d) as the answer. It is not hard to give a counterexample for (d).
Let $G$ be a weighted connected undirected graph with distinct positive edge weights. If every edge weight is increased by the same value, then which of the following statements is/are TRUE?
 P: Minimum spanning tree of $G$ does not change.
 Q: Shortest path between any pair of vertices does not change.
 $P$ only
 $Q$ only
 Neither $P$ nor $Q$
 Both $P$ and $Q$
Consider the following undirected connected graph $G$ with weights on its edges as given in the figure below. A minimum spanning tree is a spanning tree of least weight and a maximum spanning tree is one with largest weight. A second-best minimum spanning tree is one whose weight is the smallest among all spanning trees that are not minimum spanning trees of $G$.
Which of the following statements is TRUE in the above graph? (Note that all the edge weights are distinct in the above graph)
 There is more than one minimum spanning tree and similarly, there is more than one maximum spanning tree here.
 There is a unique minimum spanning tree, however there is more than one maximum spanning tree here.
 There is more than one minimum spanning tree, however there is a unique maximum spanning tree here.
 There is more than one minimum spanning tree and similarly, there is more than one secondbest minimum spanning tree here.
 There is unique minimum spanning tree, however there is more than one secondbest minimum spanning tree here.
$G=(V, E)$ is an undirected simple graph in which each edge has a distinct weight, and $e$ is a particular edge of $G$. Which of the following statements about the minimum spanning trees $(MSTs)$ of $G$ is/are TRUE?
I. If $e$ is the lightest edge of some cycle in $G$, then every MST of $G$ includes $e$.
II. If $e$ is the heaviest edge of some cycle in $G$, then every MST of $G$ excludes $e$.
 I only.
 II only.
 Both I and II.
 Neither I nor II.
Answers: Minimum Spanning Trees
Statement P is true.
For statement Q consider a simple graph with 3 nodes.
|   | A | B | C |
|---|---|---|---|
| A | 0 | 1 | 100 |
| B | 1 | 0 | 2 |
| C | 100 | 2 | 0 |

Shortest path from A to C is A-B-C = 1 + 2 = 3.
Now if the value of each edge is increased by 100,
|   | A | B | C |
|---|---|---|---|
| A | 0 | 101 | 200 |
| B | 101 | 0 | 102 |
| C | 200 | 102 | 0 |

The shortest path from A to C is now the direct edge A-C = 200 (A-B-C = 101 + 102 = 203).
Hence option A is correct.
Every cycle must exclude its maximum-weight edge in a minimum spanning tree.
Here we have two cycles of 3 edges each: a-d-e and c-g-k.
For a second-best minimum spanning tree: exclude edge ae and include edge de.
The other way: exclude edge cg and include edge gk.
So E should be the answer.
Statement 2 is absolutely correct: if e is the heaviest edge in a cycle, every MST excludes it.
Regarding statement 1, it is not fully right, I think. Consider a complete graph with 4 vertices, with edge weights 1, 2, 5, 6 on the non-diagonal edges and 3 and 4 on the diagonal edges. The edges 4, 5, 6 form a cycle, and we can exclude its lightest edge e (4) from an MST.
So I think the answer could be B.
Which of the following statements are TRUE?
 The problem of determining whether there exists a cycle in an undirected graph is in P.
 The problem of determining whether there exists a cycle in an undirected graph is in NP.
 If a problem A is NPComplete, there exists a nondeterministic polynomial time algorithm to solve A.
(A) 1, 2 and 3 (B) 1 and 2 only (C) 2 and 3 only (D) 1 and 3 only
Consider the following two problems on undirected graphs:
 $\alpha$: Given $G(V, E)$, does $G$ have an independent set of size $|V| - 4$?
 $\beta$: Given G(V, E), does G have an independent set of size 5?
Which one of the following is TRUE?

$\alpha$ is in P and $\beta$ is NPcomplete

$\alpha$ is NPcomplete and $\beta$ is in P

Both $\alpha$ and $\beta$ are NPcomplete

Both $\alpha$ and $\beta$ are in P
Let $\pi_A$ be a problem that belongs to the class NP. Then which one of the following is TRUE?

There is no polynomial time algorithm for $\pi_A$.

If $\pi_A$ can be solved deterministically in polynomial time, then P = NP.

If $\pi_A$ is NPhard, then it is NPcomplete.

$\pi_A$ may be undecidable.
Ram and Shyam have been asked to show that a certain problem $\Pi$ is NPcomplete. Ram shows a polynomial time reduction from the 3SAT problem to $\Pi$, and Shyam shows a polynomial time reduction from $\Pi$ to 3SAT. Which of the following can be inferred from these reductions?

$\Pi$ is NPhard but not NPcomplete

$\Pi$ is in NP, but is not NPcomplete

$\Pi$ is NPcomplete

$\Pi$ is neither NPhard, nor in NP
Suppose a language $L$ is $\mathbf{NP}$ complete. Then which of the following is FALSE?
 $L \in \mathbf{NP}$
 Every problem in $\mathbf{P}$ is polynomial time reducible to $L$.
 Every problem in $\mathbf{NP}$ is polynomial time reducible to $L$.
 The Hamilton cycle problem is polynomial time reducible to $L$.
 $\mathbf{P} \neq \mathbf{NP}$ and $L \in \mathbf{P}$.
Let $A_{TM}$ be defined as follows:
$A_{TM}=\left \{ \left \langle M, w \right \rangle \mid \text{ The Turing machine $M$ accepts the word } w \right \}$
And let $L$ be some $\mathbf{NP}$ complete language. Which of the following statements is FALSE?
 $L\in \mathbf{NP}$
 Every problem in $\mathbf{NP}$ is polynomial time reducible to $L$.
 Every problem in $\mathbf{NP}$ is polynomial time reducible to $A_{TM}$.
 Since $L$ is $\mathbf{NP}$ complete, $A_{TM}$ is polynomial time reducible to $L$.
 $A_{TM} \notin \mathbf{NP}$.
02. Choose the correct alternatives (more than one may be correct) and write the corresponding letters only:
(vi) Which of the following problems is not NPhard?
 Hamiltonian circuit problem
 The 0/1 Knapsack problem
 Finding biconnected components of a graph
 The graph coloring problem
Given an integer $n\geq 3$, consider the problem of determining if there exist integers $a,b\geq 2$ such that $n=a^{b}$. Call this the forward problem. The reverse problem is: given $a$ and $b$, compute $a^{b}$ (mod b). Note that the input length for the forward problem is $\lfloor\log n\rfloor + 1$, while the input length for the reverse problem is $\lfloor\log a\rfloor + \lfloor\log b\rfloor + 2$. Which of the following statements is TRUE?
 Both the forward and reverse problems can be solved in time polynomial in the lengths of their respective inputs.
 The forward problem can be solved in polynomial time, however the reverse problem is $NP$hard.
 The reverse problem can be solved in polynomial time, however the forward problem is $NP$hard.
 Both the forward and reverse problem are $NP$hard.
 None of the above.
Let $L$ be a language over $\Sigma$ i.e., $L\subseteq \Sigma^*$. Suppose $L$ satisfies the two conditions given below.

$L$ is in NP and

For every $n$, there is exactly one string of length $n$ that belongs to $L$.
Let $L^c$ be the complement of $L$ over $\Sigma^*$. Show that $L^c$ is also in NP.
The subsetsum problem is defined as follows: Given a set S of n positive integers and a positive integer W, determine whether there is a subset of S whose elements sum to W. An algorithm Q solves this problem in O(nW) time. Which of the following statements is false?
 Q solves the subsetsum problem in polynomial time when the input is encoded in unary
 Q solves the subsetsum problem in polynomial time when the input is encoded in binary
 The subset sum problem belongs to the class NP
 The subset sum problem is NPhard
Which of the following is not implied by $P=NP$?
 3SAT can be solved in polynomial time.
 Halting problem can be solved in polynomial time.
 Factoring can be solved in polynomial time.
 Graph isomorphism can be solved in polynomial time.
 Travelling salesman problem can be solved in polynomial time.
Let S be an NPcomplete problem and Q and R be two other problems not known to be in NP. Q is polynomial time reducible to S and S is polynomialtime reducible to R. Which one of the following statements is true?
 R is NPcomplete
 R is NPhard
 Q is NPcomplete
 Q is NPhard
The problem 3SAT and 2SAT are

both in P

both NP complete

NPcomplete and in P respectively

undecidable and NP complete respectively
Let SHAM_{3} be the problem of finding a Hamiltonian cycle in a graph $G=(V,E)$ with $|V|$ divisible by 3, and DHAM_{3} be the problem of determining if a Hamiltonian cycle exists in such graphs. Which one of the following is true?
 Both DHAM_{3} and SHAM_{3} are NPhard
 SHAM_{3} is NPhard, but DHAM_{3} is not
 DHAM_{3} is NPhard, but SHAM_{3} is not
 Neither DHAM_{3} nor SHAM_{3} is NPhard
Answers: P Np Npc Nph
Cycle detection in a graph is in P, as it can be done using a graph traversal ($O(|V|+|E|)$).
Ref: http://www.geeksforgeeks.org/detect-cycle-undirected-graph/
If a problem is in P then it is also in NP as P is a subset of NP. So, both 1 and 2 are TRUE.
Statement 3 is also true as NPComplete requires a problem to be in NP and for any problem in NP, we have a nondeterministic polynomial time algorithm.
So, answer is A 1, 2 and 3 are TRUE.
Independent Set: a set of vertices in a graph, no two of which are adjacent.
Maximum independent set problem: given a graph $G$, find the size of the maximum independent set in it. This problem is NP-hard.
Independent set decision problem: given a graph $G$ and a number $k$, does $G$ have an independent set of size $k$? This problem is NP-complete (NP-hard but in NP).
Now, in the given problem $\beta$ corresponds to the Independent set decision problem. But there is a difference there. We have $5$ instead of $k$. And this drastically changes the problem statement. We can now give a polynomial time deterministic algorithm for $\beta$.
 Find all vertex sets of size $5$. We get ${}^{|V|}C_5$ such vertex sets.
 For each of them, check whether it contains any pair of adjacent vertices. This check takes constant time per pair if we use an adjacency-matrix representation.
Thus the whole time complexity reduces to ${}^{|V|}C_5$, which is $O\left(|V|^5\right)$ and hence polynomial. $({}^{|V|}C_k$ is not polynomial, but ${}^{|V|}C_5$ is$)$.
Problem $\alpha$ is asking for an independent set of size $|V| - 4$. This is equivalent to asking if $G$ has a vertex cover* of size $4$. Following a similar approach as done for $\beta$, this problem can also be solved in polynomial time.
So, both $\alpha$ and $\beta$ are in $P$.
D choice.
Vertex cover of a graph $G$ is a set of vertices such that each edge of the graph is incident on at least one of those vertices.
Independent Set and Vertex cover Reduction: https://www.cs.cmu.edu/~ckingsf/bioinfolectures/npcomplete.pdf
D is false because all NP problems are not only decidable but decidable in polynomial time using a nondeterministic Turing machine.
C
Ram's reduction shows ∏ is NP hard.
Shyam's reduction shows ∏ is in NP.
So NPC.
Option E leads to a contradiction, hence is false.
We know that $L$ is $\mathbf{NPC}$, hence $\in \mathbf{NP}$. If $\mathbf{P} \neq \mathbf{NP}$, then $L$ can't be in $\mathbf{P}$
$\newcommand{NP}{\mathbf{NP}} \newcommand{NPC}{\mathbf{NPC}}$
$A_{TM}$ is the language of the Halting Problem. It is undecidable, but Recursively Enumerable.
$L$ is $\NPC$
 True. Any language in $\NPC$ is also in $\NP$ by definition.
 True. By definition, any problem in $\NP$ is polynomial time reducible to any $\NPC$ problem.
 True. $A_{TM}$ is undecidable. Any language that is decidable is polynomial time reducible to $A_{TM}$!
 False. $A_{TM}$ is undecidable. No Turing Machine can guarantee an answer in a finite time, let alone a polynomial time.
 True. $A_{TM}$ is undecidable. It is certainly not in $\NP$.
Hence, the correct answer is option d.
a. Is NPC and hence NP-hard.
b. Is again NP-hard (the optimization version is NP-hard and the decision version is NPC). Ref: http://stackoverflow.com/questions/3907545/how-to-understand-the-knapsack-problem-is-np-complete
c. Is in P. See the algorithm here, based on DFS: http://en.wikipedia.org/wiki/Biconnected_component
d. NPC and hence NP-hard.
The reverse problem can be solved in polynomial time, as computing $a^b$ requires at most $\log_2 b$ recursive calls using the approach given below (a base case for $b = 0$ is needed for it to terminate):

```c
long long power(long long a, long long b)
{
    if (b == 0)
        return 1;
    if (b % 2)
        return a * power(a * a, b / 2);
    else
        return power(a * a, b / 2);
}
```
Now, the forward problem is also solvable in polynomial time. We need to check, for each exponent $b$ from $2$ to $\log_2 n$, whether $n$ has an integer $b$-th root (the candidate bases range from $\sqrt n$ down to $n^{\frac{1}{\log n}} = 2$). Each of these checks can be done with a binary search on the integers from $2$ to $n$, costing $\log n$ steps, so the overall complexity is $(\log n)^2$, which is polynomial in $\log n$ ($\log n$ being the size of the input). So, (a) must be the answer.
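A sketch of the forward-problem check just described, binary-searching the base for each exponent (names illustrative; the inner product loop guards against overflow):

```c
#include <assert.h>

/* Does n = a^b hold for some integers a, b >= 2?
   For each exponent b with 2^b <= n, binary-search the base a. */
int is_perfect_power(long long n)
{
    for (int b = 2; (1LL << b) <= n; b++) {
        long long lo = 2, hi = n;
        while (lo <= hi) {
            long long a = lo + (hi - lo) / 2;
            long long v = 1;
            int overflow = 0;
            for (int i = 0; i < b; i++) {   /* compute a^b, stopping early if it exceeds n */
                if (v > n / a) { overflow = 1; break; }
                v *= a;
            }
            if (!overflow && v == n)
                return 1;
            if (overflow || v > n)
                hi = a - 1;
            else
                lo = a + 1;
        }
    }
    return 0;
}
```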
Since $L$ is in NP, it is decidable (recursive) and so is its complement $L^c$. Now, $L^c$ may or may not be in NP. But we are given that for any string length $n$, exactly one string belongs to $L$, which means that for any string length, all but one string belong to $L^c$.
Now, the definition of NP says that all "yes" instances of the problem can be solved in polynomial time using a nondeterministic TM. So, given an instance $\langle L^c, x\rangle$, we nondeterministically take all words of length $n$, where $n$ is the length of $x$, and test each for membership in $L$. As soon as we get that word (such a word is sure to exist, as exactly one word of length $n$ belongs to $L$), we check whether it is the same as $x$. If and only if it is not the same, $x \in L^c$, and we get this answer in polynomial time, making $L^c$ an NP problem.
I believe Umang is right, Option B is the correct answer.
Intractability : We are looking for EFFICIENT algorithms.
Intractable Problems are problems that are decidable, although the algorithm to decide that problem might be efficient (P) or inefficient (NP), but at least an algorithm exists to solve these problems.
Here we talk about efficient vs inefficient computations.
Thus the language of problems in P and NP classes is the language of Decidable Problems i.e. Recursive Language.
Undecidability: We are looking for algorithms.
Undecidable Problems are problems for which there is no algorithm to solve these problems.
Here we talk about what can or can not be computed.
The language of Undecidable Problems are "Recursively Enumerable but not recursive languages" & "Not Recursively Enumerable Languages".
Clearly we can talk about the intractability of any problem if we know at least one algorithm to solve the problem, if there is no algorithm to solve a problem how can we talk about efficiency?
Halting Problem is undecidable.
I guess, all other problems mentioned here are decidable.
I don't know the most efficient algorithms to solve these problems but at least I can say that Brute force approach will work on all the other options except the Halting Problem.
What P = NP implies?
"Any problem that is solved by a non deterministic Turing machine in polynomial time also be solved by some deterministic Turing machine in polynomial time, even if the degree of the polynomial is higher."
and
There is neither a Non Deterministic Turing Machine nor Deterministic Turing Machine that can solve the Halting Problem.
So any inference about P & NP is not going to affect the solvability of Halting Problem, since it is undecidable.
So, option (A)
The following function computes the maximum value contained in an integer array $P[ ]$ of size $n$ $(n>=1)$.
int max (int *p, int n) {
    int a = 0, b = n-1;
    while (__________) {
        if (p[a] <= p[b]) { a = a+1; }
        else { b = b-1; }
    }
    return p[a];
}
The missing loop condition is
 $a != n$
 $b != 0$
 $b>(a+1)$
 $b != a$
Consider the following two C code segments. $Y$ and $X$ are one and two dimensional arrays of size $n$ and $ n \times n$ respectively, where $2 \leq n \leq 10$. Assume that in both code segments, elements of $Y$ are initialized to 0 and each element $X[i][j]$ of array $X$ is initialized to $i+j$. Further assume that when stored in main memory all elements of $X$ are in same main memory page frame.
Code segment 1:
// initialize elements of Y to 0
// initialize elements X[i][j] of X to i+j
for (i = 0; i < n; i++)
    Y[i] += X[0][i];
Code segment 2:
// initialize elements of Y to 0
// initialize elements X[i][j] of X to i+j
for (i = 0; i < n; i++)
    Y[i] += X[i][0];
Which of the following statements is/are correct?
S1: Final contents of array $Y$ will be same in both code segments
S2: Elements of array $X$ accessed inside the for loop shown in code segment 1 are contiguous in main memory
S3: Elements of array $X$ accessed inside the for loop shown in code segment 2 are contiguous in main memory
 Only S2 is correct
 Only S3 is correct
 Only S1 and S2 are correct
 Only S1 and S3 are correct
Answers: Programming In C
Option C fails for $n=2, p = [1, 2].$
Option C: only S1 and S2 are correct. Y ends up with the same contents in both code segments, because each X[i][j] is initialized to i+j, so X[0][i] = X[i][0] = i. In code segment 1,
Y[i] += X[0][i];
accesses row 0 of X. Since C stores arrays in row-major order, these elements have consecutive addresses (we move past a single element each time), so the accesses are contiguous in main memory. In code segment 2,
Y[i] += X[i][0];
accesses the first element of each row, so successive accesses are n elements apart (a whole row is crossed each time) and hence not contiguous.
Consider the Quicksort algorithm. Suppose there is a procedure for finding a pivot element which splits the list into two sublists each of which contains at least one-fifth of the elements. Let T(n) be the number of comparisons required to sort n elements. Then
 $T(n) \leq 2T(n/5) + n$
 $T(n) \leq T(n/5) + T(4n/5) + n$
 $T(n) \leq 2T(4n/5) + n$
 $T(n) \leq 2T(n/2) + n$
Let P be a quicksort program to sort numbers in ascending order. Let $t_{1}$ and $t_{2}$ be the time taken by the program for the inputs [1 2 3 4] and [5 4 3 2 1], respectively. Which of the following holds?
 $t_{1} = t_{2}$
 $t_{1} > t_{2}$
 $t_{1} < t_{2}$
 $t_{1}=t_{2}+5 \log 5$
Quicksort is run on two inputs shown below to sort in ascending order taking first element as pivot
 $1, 2, 3, \dots n$
 $n, n-1, n-2, \dots, 2, 1$
Let $C_1$ and $C_2$ be the number of comparisons made for the inputs (i) and (ii) respectively. Then,
 $C_1 < C_2$
 $C_1 > C_2$
 $C_1 = C_2$
 we cannot say anything for arbitrary $n$
Consider the quick sort algorithm on a set of $n$ numbers, where in every recursive subroutine of the algorithm, the algorithm chooses the median of that set as the pivot. Then which of the following statements is TRUE?
 The running time of the algorithm is $\Theta (n).$
 The running time of the algorithm is $\Theta (n\log n)$.
 The running time of the algorithm is $\Theta (n^{1.5})$.
 The running time of the algorithm is $\Theta (n^{2})$.
 None of the above.
Answers: Quicksort
$T(n) \leq T(n/5) + T(4n/5) + n$
One part contains $n/5$ elements
and the other part contains $4n/5$ elements
$+n$ is common to all options, so we need not to worry about it.
Hence, answer = option B
1) The array is already sorted in the same (ascending) order.
2) The array is already sorted in reverse order.
3) All elements are same in the array.
both are the worst cases of quick sort. (assuming pivot is either first or last element)
i) is sorted in ascending order.
ii) is sorted in descending order.
Hence, the array is divided as:
$$\large
\begin{array}{cc}
\hline\\
\substack{\LARGE \left (\frac n 2 - 1 \right )\\[0.5em] \text{ elements}}
\quad&\quad
\substack{\text{Median at}\\[0.5em]\LARGE \frac n 2 \text{th}\\[0.5em]\text{location}}
\quad&\quad
\substack{\LARGE \left (n - \frac n 2 \right )\\[0.5em] \text{ elements}}
\quad
\\\\
\hline
\end{array}$$
Therefore quick sort recurrence relation is given by:
$$\begin{align}
T(n) &= T \left ( \frac n 2 - 1 \right ) + T \left ( n - \frac n 2 \right ) + \Theta(n)\\[1em]
&= \Theta ( n \log n)
\end{align}$$
Hence, Option B is the correct answer.
If we use Radix Sort to sort $n$ integers in the range $\left (n^{k/2}, n^k \right ]$, for some $k > 0$ which is independent of $n$, the time taken would be?
 A) $\Theta(n)$
 B) $\Theta(kn)$
 C) $\Theta(n \log n)$
 D) $\Theta(n^2)$
Answers: Radix Sort
Answer: C
The complexity of Radix Sort is $O(wn)$, for $n$ keys which are integers of word size $w$.
Here, $w = \log_2(n^k) = k \times \log_2(n)$
So, the complexity is $O(wn) = O(k \times \log_2(n) \times n)$, which leads to option C.
Consider the following recurrence relation:
$T(n)
= \begin{cases}
2T (\lfloor\sqrt{n}\rfloor)+ \log n & \text{if }n \geq 2 \\
1& \text{if }n = 1
\end{cases}$
Which of the following statements is TRUE?
 $T(n)$ is $O(\log n)$.
 $T(n)$ is $O(\log n. \log \log n)$ but not $O(\log n)$.
 $T(n)$ is $O(\log^{3/2} n)$ but not $O(\log n. \log \log n)$.
 $T(n)$ is $O(\log^{2} n)$ but not $O(\log^{3/2} n)$.
 $T(n)$ is $O(\log^{2} n. \log \log n)$ but not $O(\log^{2} n)$.
Consider a list of recursive algorithms and a list of recurrence relations as shown below. Each recurrence relation corresponds to exactly one algorithm and is used to derive the time complexity of the algorithm.
Recursive Algorithm          Recurrence Relation
P. Binary search             I.   T(n) = T(n-k) + T(k) + cn
Q. Merge sort                II.  T(n) = 2T(n-1) + 1
R. Quick sort                III. T(n) = 2T(n/2) + cn
S. Tower of Hanoi            IV.  T(n) = T(n/2) + 1
Which of the following is the correct match between the algorithms and their recurrence relations?
 P-II, Q-III, R-IV, S-I
 P-IV, Q-III, R-I, S-II
 P-III, Q-II, R-IV, S-I
 P-IV, Q-II, R-I, S-III
Let $T(n)$ be a function defined by the recurrence
$T(n) = 2T(n/2) + \sqrt n$ for $n \geq 2$ and
$T(1) = 1$
Which of the following statements is TRUE?
 $T(n) = \Theta(\log n)$
 $T(n) = \Theta(\sqrt n)$
 $T(n) = \Theta(n)$
 $T(n) = \Theta(n \log n)$
Consider the following recurrence relation:
$T\left(n\right)=
\begin{cases}
T\left(\frac{n}{k}\right)+ T\left(\frac{3n}{4}\right)+ n & \text{if } n \geq 2 \\
1& \text{if } n=1
\end{cases}$
Which of the following statements is FALSE?
 $T(n)$ is $O(n^{3/2})$ when $k=3$.
 $T(n)$ is $O(n \log n)$ when $k=3$.
 $T(n)$ is $O(n \log n)$ when $k=4$.
 $T(n)$ is $O(n \log n)$ when $k=5$.
 $T(n)$ is $O(n)$ when $k=5$.
The recurrence equation
$ T(1) = 1$
$T(n) = 2T(n-1) + n, n \geq 2$
evaluates to
 $2^{n+1} - n - 2$
 $2^n - n$
 $2^{n+1} - 2n - 2$
 $2^n + n $
Consider the function F(n) for which the pseudocode is given below :
Function F(n)
begin
    F1 ← 1
    if (n = 1) then
        F ← 3
    else
        For i = 1 to n do
        begin
            C ← 0
            For j = 1 to n - 1 do
            begin
                C ← C + 1
            end
            F1 = F1 * C
        end
    F = F1
end
[n is a positive integer greater than zero]
Solve the recurrence relation for a closed form solution of F(n).
The running time of an algorithm is represented by the following recurrence relation:
$T(n) = \begin{cases}
n & n \leq 3 \\
T(\frac{n}{3})+cn & \text{ otherwise }
\end{cases}$
Which one of the following represents the time complexity of the algorithm?
 $\Theta(n)$
 $\Theta(n \log n)$
 $\Theta(n^2)$
 $\Theta(n^2 \log n)$
The given diagram shows the flowchart for a recursive function $A(n)$. Assume that all statements, except for the recursive calls, have $O(1)$ time complexity. If the worst case time complexity of this function is $O(n^{\alpha})$, then the least possible value (accurate up to two decimal positions) of $\alpha$ is ________.
Flow chart for Recursive Function $A(n)$.
Let $x_n$ denote the number of binary strings of length $n$ that contain no consecutive 0s.
Which of the following recurrences does $x_n$ satisfy?
 $x_n = 2x_{n-1}$
 $x_n = x_{\lfloor n/2 \rfloor} + 1$
 $x_n = x_{\lfloor n/2 \rfloor} + n$
 $x_n = x_{n-1} + x_{n-2}$
Let $a_{n}$ represent the number of bit strings of length n containing two consecutive 1s. What is the recurrence relation for $a_{n}$?
 $a_{n-2} + a_{n-1} + 2^{n-2}$
 $a_{n-2} + 2a_{n-1} + 2^{n-2}$
 $2a_{n-2} + a_{n-1} + 2^{n-2}$
 $2a_{n-2} + 2a_{n-1} + 2^{n-2}$
Consider the following recursive C function.
void get(int n) {
    if (n < 1) return;
    get(n-1);
    get(n-3);
    printf("%d", n);
}
If get(6) function is being called in main() then how many times will the get() function be invoked before returning to the main()?
 15
 25
 35
 45
Let $x_n$ denote the number of binary strings of length $n$ that contain no consecutive 0s.
The value of $x_5$ is
 5
 7
 8
 16
The time complexity of the following C function is (assume n > 0)
int recursive (int n) {
    if (n == 1)
        return (1);
    else
        return (recursive(n-1) + recursive(n-1));
}
 $O(n)$
 $O(n \log n)$
 $O(n^2)$
 $O(2^n)$
Consider the following recurrence relation
$T(1)=1$
$T(n+1) = T(n)+\lfloor \sqrt{n+1} \rfloor$ for all $n \geq 1$
The value of $T(m^2)$ for $m \geq 1$ is
 $\frac{m}{6}\left(21m-39\right)+4$
 $\frac{m}{6}\left(4m^2-3m+5\right)$
 $\frac{m}{2}\left(3m^{2.5}-11m+20\right)-5$
 $\frac{m}{6}\left(5m^3-34m^2+137m-104\right)+\frac{5}{6}$
The solution to the recurrence equation $T(2^k) = 3T(2^{k-1})+1, T(1) =1$ is
 $2^k$
 $\frac{(3^{k+1}-1)}{2}$
 $3^{\log_2 k}$
 $2^{\log_3 k}$
Which one of the following is the recurrence equation for the worst case time complexity of the quick sort algorithm for sorting n ( $\geq $ 2) numbers? In the recurrence equations given in the options below, c is a constant.
 T(n) = 2 T (n/2) + cn
 T(n) = T(n - 1) + T(1) + cn
 T(n) = 2T(n - 1) + cn
 T(n) = T (n/2) + cn
The recurrence relation
 $T(1) = 2$
 $T(n) = 3T (\frac{n}{4}) +n$
has the solution $T(n)$ equal to
 $O(n)$
 $O(\log n)$
 $O\left(n^\frac{3}{4}\right)$
 None of the above
Which one of the following correctly determines the solution of the recurrence relation with $T(1) = 1$?
$$T(n)= 2T\left(\frac {n} {2}\right) + \log n$$
(A) $\Theta(n)$
(B) $\Theta(n\log n)$
(C) $\Theta(n^2)$
(D) $\Theta(\log n)$
Consider the following recurrence:
$ T(n)=2T\left ( \sqrt{n}\right )+1,$ $T(1)=1$
Which one of the following is true?
 $ T(n)=\Theta (\log\log n)$
 $ T(n)=\Theta (\log n)$
 $ T(n)=\Theta (\sqrt{n})$
 $ T(n)=\Theta (n)$
The running time of the following algorithm
Procedure A(n)
If n<=2 return (1) else return $(A( \lceil \sqrt{n} \rceil))$;
is best described by
 O(n)
 O( log n)
 O(log log n)
 O(1)
When $n = 2^{2k}$ for some $k \geq 0$, the recurrence relation
$T(n) = \sqrt{2}\,T(n/2) + \sqrt{n}$, $T(1) = 1$
evaluates to:
 A) $\sqrt{n}(\log n + 1)$
 B) $\sqrt{n}\log n$
 C) $\sqrt{n}\log \sqrt{n}$
 D) $n \log \sqrt{n}$
Consider the recursive algorithm given below:
procedure bubblesort (n);
var i, j : index; temp : item;
begin
    for i := 1 to n-1 do
        if A[i] > A[i+1] then
        begin
            temp := A[i];
            A[i] := A[i+1];
            A[i+1] := temp;
        end;
    bubblesort (n-1)
end
Let $a_n$ be the number of times the ‘if…then…’ statement gets executed when the algorithm is run with value $n$. Set up the recurrence relation by defining $a_n$ in terms of $a_{n-1}$. Solve for $a_n$.
Consider the following function.
Function F(n, m : integer) : integer;
begin
    If (n <= 0) or (m <= 0) then
        F := 1
    else
        F := F(n-1, m) + F(n, m-1);
end;
Use the recurrence relation $\begin{pmatrix} n \\ k \end{pmatrix} = \begin{pmatrix} n-1 \\ k \end{pmatrix} + \begin{pmatrix} n-1 \\ k-1 \end{pmatrix}$ to answer the following questions. Assume that $n, m$ are positive integers. Write only the answers without any explanation.
 What is the value of $F(n, 2)$?
 What is the value of $F(n, m)$?
 How many recursive calls are made to the function $F$, including the original call, when evaluating $F(n, m)$?
Let $T(n)$ be the function defined by $T(1) =1, \: T(n) = 2T (\lfloor \frac{n}{2} \rfloor ) + \sqrt{n}$ for $n \geq 2$.
Which of the following statements is true?
 $T(n) = O(\sqrt{n})$
 $T(n) = O(n)$
 $T(n) = O(\log n)$
 None of the above
Consider the function F(n) for which the pseudocode is given below :
Function F(n)
begin
    F1 ← 1
    if (n = 1) then
        F ← 3
    else
        For i = 1 to n do
        begin
            C ← 0
            For j = 1 to n - 1 do
            begin
                C ← C + 1
            end
            F1 = F1 * C
        end
    F = F1
end
[n is a positive integer greater than zero]
(a) Derive a recurrence relation for F(n)
The recurrence relation that arises in relation with the complexity of binary search is:
 $T(n) = T\left(\frac{n}{2}\right)+k, \text{ k is a constant }$
 $T(n) = 2T\left(\frac{n}{2}\right)+k, \text{ k is a constant }$
 $T(n) = T\left(\frac{n}{2}\right)+\log n$
 $T(n) = T\left(\frac{n}{2}\right)+n$
Answers: Recurrence
$T(n) = 2T(n-1) - 1\\
=2(2T(n-2)-1) - 1\\
=2^2T(n-2) - 2 - 1\\
=2^2(2T(n-3)-1) - 2 - 1\\
=2^3T(n-3) - 2^2 - 2 - 1\\
\dots \\
=2^{n-1}T(n-(n-1)) - \left(2^{n-2} +2^{n-3}+\dots+2^2 + 2 +1\right) \\
=2^{n-1}\times 2 - \frac{2^{n-1}-1}{2-1}\\\because T(1) = 2, S_n = \frac{a\left(r^n - 1\right)}{r-1}\\
=2^n - \left(2^{n-1} - 1\right)\\
=2^{n-1} + 1$
Let $n=2^k$
$T\left(2^k\right)= 2T\left(2^{k/2}\right) + k $
Let $T\left(2^k\right)=S(k) \implies T\left(2^{k/2}\right) = S(k/2)$
$S(k)=2S(k/2)+k$
This gives $S(k) = \Theta(k \log k)$ (Master theorem case 2)
$ = \Theta(\log n \log \log n)$
So, the answer is B.
$n^{\log_b a} = n^{\log_2 2} = n$.
$f(n) = \sqrt n = n^{\frac{1}{2}}$.
So, $f(n) = O\left(n^{\log_b a - \epsilon}\right)$ is true for any real $\epsilon$, $0 < \epsilon < \frac{1}{2}$. Hence Master theorem Case 1 satisfied,
$$T(n) = \Theta\left(n^{\log_b a}\right) = \Theta (n).$$
c) Apply the Akra-Bazzi method.
For example, for $k=4$ the equation is $T(n)=T(n/4)+T(3n/4)+n$.
Let $g(n)=n$ and $a_1=1, b_1=1/4, a_2=1, b_2=3/4$.
Then $\sum a_i b_i^p=1$: $(1/4)^p + (3/4)^p = 1 \implies p=1$.
Then $T(n)=O\left(n^p\left(1+\int_1^n \frac{g(u)}{u^{1+p}}\,du\right)\right)$
$=n\left(1+\int_1^n \frac{u}{u^2}\,du\right)$
$=n(1+\log n)$
$=O(n\log n)$. Therefore option C is correct.
d) Apply back substitution.
If $k=5$, $T(n)=T(n/5) + T(3n/4)+n$ (eq. 1)
$T(n/5)=T(n/5^2) + T((1/5)(3/4)n)+n/5$
$T(3n/4)=T((3/4)(1/5)n)+T((3/4)^2n)+3n/4$. Substituting in eq. 1,
$T(n)=T(n/5^2) + 2T((1/5)(3/4)n)+T((3/4)^2n)+n(1/5+3/4)+n$
In the next step we get $T(n)=T(n/5^3)+\dots+n(1/5+3/4)^2+n(1/5+3/4)+n$
$T(n)=T(n/5^3)+\dots+n(19/20)^2+n(19/20)+n$
$T(n)=T(n/5^k)+\dots+n\left(1+(19/20)+(19/20)^2+\dots+(19/20)^{k-1}\right)$
$T(n)=T(n/5^k)+\dots+n\cdot\frac{1-(19/20)^k}{1/20}$
$n/5^k=1 \implies k=\log_5 n$
$\therefore T(n)=1+20n-20n(19/20)^{\log_5 n}$
$\therefore T(n)=O(n)$, which in turn is $O(n \log n)$; both options D and E are correct.
a) By observing options A and B: $T(n)=O(n\sqrt n)$ would follow from $T(n)=O(n\log n)$, since $\log n=O(\sqrt n)$. So if option B is correct then option A is also correct; hence option B is the false one (we can eliminate this way).
$ T(n) = n + 2(n-1) + 2^2 (n-2) + \dots + 2^{n-1}(n - (n-1))$
$ =n( 1 + 2 + \dots + 2^{n-1}) - (1\cdot 2 + 2\cdot 2^{2} +3\cdot 2^{3}+\dots+ (n-1)\cdot 2^{n-1})$
$=n(2^n-1) - (n\cdot 2^n-2^{n+1}+2)$
$=2^{n+1}-n-2$
F(n) = $(n-1)^{n}$ when n >= 2
F(n) = 3 when n == 1
$a=1, b=3, \log_b a=0$
So $n^{\log_b a} = n^0=1$
$f(n)=n$
So, $f(n)=\Omega\left(n^{0+\epsilon}\right)$ with $\epsilon = 1$.
To check Master theorem case 3, we also need the regularity condition: $f(n/3)\leq c\,f(n)$ for some constant $c<1$.
Here $f(n/3) = n/3 = \frac{1}{3}f(n)$, so $c=\frac{1}{3}$ works.
So, using case three of the Master theorem,
$T(n)= \Theta(f(n)) = \Theta(n)$
answer is a
By calling A(n) we get A(n/2) 5 times,
$A(n)=5A (n /2) + O(1)$
Hence, by applying the Master theorem,
Case 1: $a>b^k$ (here $a=5$, $b=2$, $k=0$), giving $\Theta\left(n^{\log_2 5}\right)$.
Thus the value of $\alpha$ will be $\log_2 5 \approx 2.32$.
Length 1: 0, 1 (2 strings)
Length 2: 01, 10, 11 (3 strings)
Length 3: 010, 011, 101, 110, 111 (5 strings)
Length 4: 0101, 0110, 0111, 1010, 1011, 1101, 1110, 1111 (8 strings)
So, $x_{n} = x_{n-1} + x_{n-2}$ (for all the strings ending in 1 we get two new strings, and for all the strings ending in 0 we get one new string; so the set of strings of length $n+1$ has exactly $x_n$ strings ending in 1)
$x_{5} = 8+5 = 13$
Counting the number of bit strings NOT containing two consecutive 1s (it is easy to derive a recurrence relation for the NOT case, as shown below):
Length 1: 0, 1 (2 strings)
Length 2: 00, 01, 10 (3 strings: append both 0 and 1 to any string ending in 0, and append 0 to any string ending in 1)
Length 3: 000, 001, 010, 100, 101 (5 strings: all strings ending in 0 give two strings and those ending in 1 give one string)
Length 4: 0000, 0001, 0010, 0100, 0101, 1000, 1001, 1010 (8 strings)
....
$a_{n}' = a_{n-1}' + a_{n-2}'$ (where $a_{n}$ denotes the number of bit strings of length $n$ containing two consecutive 1s, so $a_{n}' = 2^{n} - a_{n}$)
$2^{n} - a_{n} = (2^{n-1} - a_{n-1}) + (2^{n-2} - a_{n-2})$
$a_{n} = 2^{n-2}(4 - 2 - 1) + a_{n-1} + a_{n-2}$
$a_{n} = a_{n-1} + a_{n-2} + 2^{n-2}$
Option A.
T(n<=0) = 0
T(1) = 2
T(2) = 4
T(3) = 6
T(4) = 10
T(5) = 16
T(6) = 24
So, answer is 24 + 1 call from main = 25.
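The call count can be checked directly by instrumenting the function; `calls` is a helper counter not in the original code, and the printf is dropped since only the count matters here.

```c
#include <stdio.h>

static int calls = 0;   /* counts every invocation of get(), incl. the first */

void get(int n) {
    calls++;
    if (n < 1) return;
    get(n - 1);
    get(n - 3);
}
```

Running `get(6)` leaves `calls == 25`, matching the 24 internal invocations plus the one call from main.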
$T(n) = T(n-1) + T(n-2)$ for $n>2$
base condition T(1) = 2 and T(2) = 3
T(1) =2 There will be 2 strings of length 1, i.e 0 & 1
T(2) =3 There will be 3 strings of length 2, i.e. 01,10,11
T(3) = T(1) + T(2) = 2+3 = 5
T(4) = T(3) + T(2) = 5 + 3 = 8
T(5) = T(4) +T(3) = 8 + 5 = 13
Hence, answer is 13, but no option matches!
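The recurrence can be cross-checked against brute-force enumeration of all length-n bit strings; `x_rec` and `x_brute` are illustrative names for this sketch.

```c
#include <stdio.h>

/* Recurrence from the answer: x(1) = 2, x(2) = 3, x(n) = x(n-1) + x(n-2). */
int x_rec(int n) {
    if (n == 1) return 2;
    if (n == 2) return 3;
    return x_rec(n - 1) + x_rec(n - 2);
}

/* Brute force: count n-bit strings with no two adjacent 0 bits. */
int x_brute(int n) {
    int count = 0;
    for (int s = 0; s < (1 << n); s++) {
        int ok = 1;
        for (int i = 0; i + 1 < n; i++)
            if (((s >> i) & 1) == 0 && ((s >> (i + 1)) & 1) == 0)
                ok = 0;
        if (ok) count++;
    }
    return count;
}
```

Both agree, and both give 13 for n = 5, confirming that no listed option matches.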
option D
int recursive (int n) {
    if (n == 1)        // takes constant time, say 'a' time
        return (1);    // takes constant time, say 'a' time
    else
        // takes T(n-1) + T(n-1) time
        return (recursive(n-1) + recursive(n-1));
}
$T(n) = 2T(n-1) + a$ is the recurrence equation found from the pseudo code.
Solving the Recurrence Equation By Back Substitution Method
$T(n) = 2T(n-1) + a$ (equation 1)
$T(n-1) = 2T(n-2) + a$
$T(n-2) = 2T(n-3) + a$
We can rewrite Equation 1 as
$\begin{align*}T(n) &= 2[2T(n-2)+a] + a = 4T(n-2) + 3a \\ &= 4[2T(n-3)+a] + 3a = 8T(n-3) + 7a \\&\dots \\&= 2^kT(n-k) + \left(2^k - 1\right)a \end{align*}$ (Equation 2)
On Substituting Limiting Condition
$T(1) = 1$ implies $n-k = 1 \implies k = n-1$
Therefore Equation 2 becomes
$2^{n-1} + \left(2^{n-1} - 1\right)a = O(2^n)$
$T(m^2) = T(m^2 - 1) + \left\lfloor\sqrt{m^2} \right\rfloor$
$= T(m^2 - 2) + \left\lfloor\sqrt{m^2 - 1} \right\rfloor +\left\lfloor\sqrt{m^2} \right\rfloor$
$= T(m^2 - 3) + \left\lfloor\sqrt{m^2 - 2} \right\rfloor + \left\lfloor\sqrt{m^2 - 1} \right\rfloor +\left\lfloor\sqrt{m^2} \right\rfloor$
$= \dots$
$= T(1) + \left\lfloor\sqrt{2} \right\rfloor + \left\lfloor\sqrt{3} \right\rfloor + \dots + \left\lfloor\sqrt{m^2} \right\rfloor$
$= 3 \times 1 + 5 \times 2 + \dots + \left(2m - 1\right) \times (m-1) + m$ (We are taking floors of square roots, and between successive squares the counts follow the series $3, 5, 7, \dots$: $\lfloor\sqrt{i}\rfloor = 1$ for the $3$ numbers $i = 1..3$, $\lfloor\sqrt{i}\rfloor = 2$ for the $5$ numbers $i = 4..8$, and so on.)
We can try out options here or solve as shown at end:
Put $m = 5, T(25) = 3 \times 1 + 5 \times 2 + 7 \times 3 + 9 \times 4 + 5 = 75$
Option A: 59
Option B: 75
Option C: noninteger
Option D: 297.5
So, answer must be B.
$T(m^2) = 3 \times 1 + 5 \times 2 + \dots + \left(2m - 1\right) \times (m-1) + m
\\= m + \sum_{i=1}^{m-1} (2i+1)i
\\= m + \sum_{i=1}^{m-1} \left(2i^2 + i\right)
\\= m + \frac{(m-1)m(2m-1)}{3} + \frac{(m-1)m}{2}
\\= \frac{m}{6} \left(6 + 4m^2 - 2m - 4m + 2 + 3m - 3\right)
\\= \frac{m}{6} \left(4m^2 - 3m + 5\right) $
[Sum of the first $n$ natural numbers $=\frac{n. (n+1)}{2}.$
Sum of the squares of first $n$ natural numbers $ = \frac{n. (n+1). (2n+1)}{6}.$]
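The closed form of option B can also be checked numerically against the recurrence; `T_direct`, `closed_form`, and `isqrt` are illustrative names for this sketch.

```c
#include <stdio.h>

/* Integer square root by counting up (fine for the small n used here). */
long isqrt(long n) {
    long r = 0;
    while ((r + 1) * (r + 1) <= n) r++;
    return r;
}

/* T(1) = 1, T(n+1) = T(n) + floor(sqrt(n+1)), evaluated directly. */
long T_direct(long n) {
    long t = 1;
    for (long i = 2; i <= n; i++) t += isqrt(i);
    return t;
}

/* Option B: T(m^2) = (m/6)(4m^2 - 3m + 5).
   The product m(4m^2 - 3m + 5) is always divisible by 6. */
long closed_form(long m) {
    return m * (4 * m * m - 3 * m + 5) / 6;
}
```

For example, both give 75 at m = 5, matching the option check above.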
Let $x = 2^k$
$T(x) = 3T\left(\frac{x}{2}\right) + 1$
We can apply Master's theorem case 1 with $a = 3$ and $b = 2$ as $f(x) = 1 = O\left(x^{\log_2 3 - \epsilon} \right) $
So, $T(x) = \Theta \left(x^{\log_2 3}\right) \\= \Theta \left( {2^k}^{\log_2 3} \right)\\= \Theta \left({2^{\log_2 3}}^k \right)\\ = \Theta \left(3^k \right)$
So, only option possible is B.
We can also directly solve as follows:
$T(x) = 3T\left(\frac{x}{2}\right) + 1\\= 9T \left (\frac{x}{4}\right) + 1 + 3 \\ \dots \\= 3^{\log_2 2^k} + \left( 1 + 3 + 9 + \dots + 3^{\log_2 2^k - 1}\right)\\ \left(\text{recursion depth is }\log_2 x \text{ and } x = 2^k\right) \\= 3^{k} + \frac{3^{\log_2 2^k} - 1}{3-1} \\ \left(\text{Sum to n terms of GP with } a = 1 \text{ and } r = 3 \right)\\=3^k + \frac{3^k - 1}{2} \\=\frac{3\cdot 3^k - 1}{2} \\=\frac{3^{k+1} - 1}{2} $
OR
$T\left(2^k\right) = 3T\left(2^{k-1}\right) + 1 \\= 3^2T\left(2^{k-2}\right) + 1 + 3 \\ \dots \\= 3^k T\left(2^{k-k}\right) + \left( 1 + 3 + 9 + \dots + 3^{k-1}\right) \\ \left(\text{recursion depth is }k\right)\\= 3^k + \frac{3^{k} - 1} {3-1}\\\left(\text{Sum to n terms of GP with } a = 1 \text{ and } r = 3 \right) \\=3^k + \frac{3^k - 1}{2}\\=\frac{3\cdot 3^k - 1}{2} \\=\frac{3^{k+1} - 1}{2} $
Worst case for quick sort happens when 1 element is on one list and $n-1$ elements are on the other list.
According to Master theorem,
$T(n) = aT(\frac{n}{b}) + f(n)$ can be expressed as:
$T(n) = [n^{\log_ba}][T(1) + u(n)]$
where $u(n) = \Theta(h(n))$ with $h(n) = \frac{f(n)}{n^{\log_ba}} = \frac{n}{n^{\log_43}} = n^{1-\log_43}$, as $h(n) = n^r$ with $r>0$.
So, $T(n) = [n^{\log_ba}][T(1) + u(n)] = [n^{\log_43}][T(1) + \Theta(n^{1-\log_43})] = \Theta(n)$.
$f(n) = \log n$
$a = 2, b = 2 \implies n^{\log_b a} = n$
So, $f(n) = \log n = O\left(n^{1-\epsilon} \right)$; we can take any $\epsilon$ from 0 to 1, for example 0.5, which gives $\log n = O(\sqrt{n})$, whose proof is given here: http://math.stackexchange.com/questions/145739/prove-that-log-n-o-sqrt-n
So, Master theorem Case 1, and answer will be $O\left(n^{\log_2 2}\right) = O(n)$
Alternate way:
$T(1) = 1 \\ T(2) = 2T(1) + \log 2 = 3 = 3n - 3\\T(4) = 2T(2) + \log 4 = 8 = 3n - 4 \\T(8) = 2T(4) + \log 8 = 19 = 3n - 5\\ T(16) = 2T(8) + \log 16 = 42 = 3n - 6$
The second term being subtracted is growing at a lower rate than the first term. So, we can say $T(n) = O(n)$.
$T(n) = 2T\left(n^{\frac{1}{2}}\right)+1 \\=2\left(2T\left(n^{\frac{1}{2^2}}\right) + 1\right) + 1 \\=4\times T\left(n^{\frac{1}{2^2}}\right) + 3\\= 8\times T\left(n^{\frac{1}{2^3}}\right) + 7 \\ \cdots \\=2^{\lg \lg n}\times T(2) + 2^{\lg \lg n} - 1\text{ (Proved below)} \\= \Theta(\lg n)$
$n^{\frac{1}{2^k}} = 2 \\ \text{(Putting 2 so that we can take log.}\\\text{One more step of recurrence can't change the complexity.)} \\\implies \frac{1}{{2^k}} \lg n = 1 \text{(Taking log both sides)}\\\implies \lg n = 2^k \\\implies k = \lg \lg n$
So, answer is B, $T(n) = \Theta(\log n)$
$T(n) = T\left(\lceil \sqrt n \rceil \right) + 1$
$T(2) = 1$
$T\left(2^2\right) = T(2) + 1 = 2$
$T\left(2^{2^2}\right) = T(4) + 1 = 3$
$T\left(2^{2^3}\right) = T(16) + 1 = 4$
So, $T(n) = \lg\lg n + 1 = O\left(\log \log n\right)$
$T(n) = \sqrt{2}\, T\left(\frac{n}{2}\right) + \sqrt n$
$= {\sqrt 2}^2 T\left(\frac{n}{2^2} \right) +\sqrt {2} \sqrt {\frac{n}{2}} + \sqrt n$
$\dots$
$= {\sqrt 2}^ {\lg n} T(1) +\lg n \sqrt n$
$=\sqrt n + \lg n \sqrt n$
$= \sqrt n \left( \lg n + 1\right)$
If we use the Master theorem we get option B. But one must know that the Master theorem is used to find an asymptotic bound and not an EXACT value. And the question here explicitly says "evaluates to".
$a_{n} = a_{n-1} + (n-1)$ ($n-1$ comparisons for $n$ numbers)
$a_{n} = a_{n-2} + (n-2) + (n-1)$
$a_{n} = a_{n-3} + (n-3) + (n-2) + (n-1)$
.
.
$a_{n} = a_{n-n} + (n-n) + (n-(n-1)) + \dots + (n-3) + (n-2) + (n-1)$
$a_{n} = 0 + 1 + 2 + \dots + (n-3) + (n-2) + (n-1)$
which gives $a_{n} = \frac{(n-1)\times (n)}{2}$
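The count $n(n-1)/2$ can be confirmed by instrumenting a C rendering of the pseudocode; `comparisons` is a helper counter, and an explicit base case is added so the recursion stops (the pseudocode leaves it implicit).

```c
#include <stdio.h>

static int A[100];
static int comparisons = 0;   /* counts executions of the 'if...then' test */

void bubblesort(int n) {
    if (n <= 1) return;        /* base case, implicit in the pseudocode */
    for (int i = 1; i <= n - 1; i++) {
        comparisons++;
        if (A[i - 1] > A[i]) {             /* 1-based pseudocode, 0-based C */
            int temp = A[i - 1];
            A[i - 1] = A[i];
            A[i] = temp;
        }
    }
    bubblesort(n - 1);
}
```

Sorting a reversed 10-element array executes the test 9 + 8 + ... + 1 = 45 = 10*9/2 times.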
b) $\frac{n(nm+1)}{m!}$
Answer B
Using the Master method (case 1),
where $a = 2$, $b = 2$:
$O(n^{1/2}) < O(n^{\log_b a})$
$O(n^{1/2}) < O(n^{\log_2 2})$
1. The function $F(n)$ is NOT a recursive function. You can't have a recurrence relation for it in the first place!
2. $F(n)$ calculates $(n-1)^n$.
The equivalent C++ code is as follows: (You can try it out here: http://ideone.com/w0u4lk)
long F(long n) {
    long F1 = 1;
    if (n == 1) {
        return 3;
    } else {
        for (long i = 1; i <= n; i++) {
            long C = 0;
            // Note: the below For loop only has one line
            for (long j = 1; j <= n-1; j++) {
                C = C + 1;
            }
            // At the end of this for loop, C will be = (n-1)
            F1 = F1 * C;
        }
    }
    return F1;
}
It is clear that the inner for loop can be replaced by a single statement as follows:
long F(long n) {
    long F1 = 1;
    if (n == 1) {
        return 3;
    } else {
        for (long i = 1; i <= n; i++)
            F1 = F1 * (n-1);
    }
    return F1;
}
And this calculates $(n-1)^n$
The function f is defined as follows:
int f (int n) {
    if (n <= 1)
        return 1;
    else if (n % 2 == 0)
        return f(n/2);
    else
        return f(3n - 1);
}
Assuming that arbitrarily large integers can be passed as a parameter to the function, consider the following statements.
 The function f terminates for finitely many different values of n ≥ 1.
 The function f terminates for infinitely many different values of n ≥ 1.
 The function f does not terminate for finitely many different values of n ≥ 1.
 The function f does not terminate for infinitely many different values of n ≥ 1.
Which one of the following options is true of the above?
 i and iii
 i and iv
 ii and iii
 ii and iv
Consider the program below:
#include <stdio.h>
int fun(int n, int *f_p) {
    int t, f;
    if (n <= 1) {
        *f_p = 1;
        return 1;
    }
    t = fun(n-1, f_p);
    f = t + *f_p;
    *f_p = t;
    return f;
}
int main() {
    int x = 15;
    printf("%d\n", fun(5, &x));
    return 0;
}
The value printed is
 6
 8
 14
 15
Consider the following C function:
int f(int n) {
    static int r = 0;
    if (n <= 0) return 1;
    if (n > 3) {
        r = n;
        return f(n-2) + 2;
    }
    return f(n-1) + r;
}
What is the value of $f(5)$?
 5
 7
 9
 18
Consider the code fragment written in C below :
void f (int n) {
    if (n <= 1) {
        printf("%d", n);
    } else {
        f(n/2);
        printf("%d", n%2);
    }
}
Which of the following implementations will produce the same output for f(173) as the above code?
P1:
void f (int n) {
    if (n/2) {
        f(n/2);
    }
    printf("%d", n%2);
}
P2:
void f (int n) {
    if (n <= 1) {
        printf("%d", n);
    } else {
        printf("%d", n%2);
        f(n/2);
    }
}
What is the $\text{time complexity}$ of the following recursive function?
int DoSomething (int n) {
    if (n <= 2)
        return 1;
    else
        return (DoSomething(floor(sqrt(n))) + n);
}
 $\Theta(n^2)$
 $\Theta(n\log_2n)$
 $\Theta(\log_2n)$
 $\Theta(\log_2\log_2n)$
Consider the following recursive definition of $fib$:
fib(n) := if n = 0 then 1
          else if n = 1 then 1
          else fib(n-1) + fib(n-2)
The number of times $fib$ is called (including the first call) for evaluation of $fib(7)$ is___________.
Consider the following Cprogram:
void foo (int n, int sum) {
    int k = 0, j = 0;
    if (n == 0) return;
    k = n % 10;
    j = n / 10;
    sum = sum + k;
    foo(j, sum);
    printf("%d,", k);
}
int main() {
    int a = 2048, sum = 0;
    foo(a, sum);
    printf("%d\n", sum);
}
What does the above program print?
 8, 4, 0, 2, 14
 8, 4, 0, 2, 0
 2, 0, 4, 8, 14
 2, 0, 4, 8, 0
The following recursive function in C is a solution to the Towers of Hanoi problem.
void move(int n, char A, char B, char C) {
    if (......................) {
        move(.............................);
        printf("Move disk %d from pole %c to pole %c\n", n, A, C);
        move(.....................);
    }
}
Fill in the dotted parts of the solution.
Consider the following recursive C function that takes two arguments.
unsigned int foo(unsigned int n, unsigned int r) {
    if (n > 0)
        return ((n % r) + foo(n/r, r));
    else
        return 0;
}
What is the return value of the function $\text{foo}$ when it is called as $\text{foo(345, 10)}$?
 345
 12
 5
 3
What is the value printed by the following C program?
#include<stdio.h>
int f(int *a, int n) {
    if (n <= 0) return 0;
    else if (*a % 2 == 0) return *a + f(a+1, n-1);
    else return *a - f(a+1, n-1);
}
int main() {
    int a[] = {12, 7, 13, 4, 11, 6};
    printf("%d", f(a, 6));
    return 0;
}
 9
 5
 15
 19
What value would the following function return for the input $x=95$?
Function fun (x: integer): integer;
Begin
    If x > 100 then
        fun = x - 10
    Else
        fun = fun(fun(x + 11))
End;
 89
 90
 91
 92
In the following C function, let $n \geq m$.
int gcd(n, m) {
    if (n % m == 0) return m;
    n = n % m;
    return gcd(m, n);
}
How many recursive calls are made by this function?
 $\Theta(\log_2n)$
 $\Omega(n)$
 $\Theta(\log_2\log_2n)$
 $\Theta(\sqrt{n})$
Consider the code fragment written in C below :
void f (int n) {
    if (n <= 1) {
        printf("%d", n);
    } else {
        f(n/2);
        printf("%d", n%2);
    }
}
What does f(173) print?
 1) 010110101
 2) 010101101
 3) 10110101
 4) 10101101
Consider the following function
double f(double x) {
    if (abs(x*x - 3) < 0.01)
        return x;
    else
        return f(x/2 + 1.5/x);
}
Give a value $q$ (to 2 decimals) such that $f(q)$ will return $q$:_____.
Answers: Recursion
The function terminates for all powers of 2 (which is infinite), hence (i) is false and (ii) is TRUE.
Let n = 5.
Now, recursive calls will go like 5 → 14 → 7 → 20 → 10 → 5 → ...
And this goes into infinite recursion. And if we multiply 5 with any power of 2, also result will be infinite recursion. Since, there are infinite powers of 2 possible, there are infinite recursions possible (even considering this case only). So, (iv) is TRUE and (iii) is false.
So, correct answer is (D)
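The cycle starting at 5 can be seen by iterating the same step the recursion takes; `step` is an illustrative name for this sketch.

```c
#include <stdio.h>

/* One step of the recursion in f(): halve even n, map odd n to 3n - 1. */
int step(int n) {
    if (n % 2 == 0) return n / 2;
    return 3 * n - 1;
}
```

Starting from 5 the iterates are 14, 7, 20, 10 and then 5 again, so f(5) recurses forever.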
Let the address of x be 1000.
1.f(5,1000) = 8
2.f(4,1000) = 5
3.f(3,1000) = 3
4.f(2,1000) = 2
5.f(1,1000) = 1.
The evaluation is done from 5 to 1. Since recursion is used.
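The trace can be verified by running the function from the question directly:

```c
#include <stdio.h>

/* fun(n, f_p) returns the (n+1)-th Fibonacci number and leaves the n-th
   in *f_p (with F(1) = F(2) = 1), which is exactly what the trace shows:
   fun(5, ...) returns 8 and stores 5. */
int fun(int n, int *f_p) {
    int t, f;
    if (n <= 1) {
        *f_p = 1;
        return 1;
    }
    t = fun(n - 1, f_p);
    f = t + *f_p;
    *f_p = t;
    return f;
}
```

The initial value 15 of x in main is simply overwritten on the way down the recursion.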
f(5) = 18.
f(3) + 2 = 16 + 2 = 18
f(2) + 5 = 11 + 5 = 16
f(1) + 5 = 6 + 5 = 11
f(0) + 5 = 1+5 = 6
Consider from last to first. Since it is recursive function.
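Running the function confirms the trace:

```c
#include <stdio.h>

/* f from the question; r is static, so the value 5 set at the top-level
   call f(5) is the r added in every later return f(n-1) + r. */
int f(int n) {
    static int r = 0;
    if (n <= 0) return 1;
    if (n > 3) {
        r = n;
        return f(n - 2) + 2;
    }
    return f(n - 1) + r;
}
```

Note that because r is static, a second top-level call would start from the leftover r; the value 18 is for the first call of f(5).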
Here P1 and P2 print in opposite directions, and the given code fragment prints like P1 and not like P2.
Hence, the answer is (C).
We are asked the time complexity which will be the number of recursive calls in the function as in each call we perform a constant no. of operations and a recursive call. The recurrence relation for this is (considering constant time "c" as 1)
$T(n) = T(\sqrt n) + 1 \\= T\left(n^{1/4}\right) + 2 \\=T\left(n^{1/8}\right) + 3 $
Going like this we will eventually reach T(3) or T(2). For asymptotic case this doesn't matter and we can assume we reach T(2) and in next step reach T(1). So, all we want to know is how many steps it takes to reach T(1) which will be 1+ no. of steps to reach T(2).
From the recurrence relation we know that T(2) happens when $n^{\left(\frac{1}{2^k}\right)} = 2$.
Taking $\log $ and equating,
$\frac{1}{2^k} \log n = 1 \\2^k = \log n \\k = \log \log n$.
So, T(1) happens in $\log \log n + 1$ calls, but for asymptotic complexity we can write as $\Theta \left( \log \log n\right)$
Alternatively,
Substituting values
$T(1) = 1$
$T(2) = 1$
$T(3) = T(1) + 1 = 2$
$\dots$
$T(8) = T(2) + 1 = 2$
$T(9) = T(3) + 1 = 3$
$\dots$
$T\left(\left({\left({2^2}\right)^2}\right)^2\right) = T\left(\left({2^2}\right)^2\right) + 1 \\= T(2^2)+ 2 \\= T(2) + 3 = 1 + 3 = 4, \\ \log \log n = 3 \text{ as } n = 256$.
$T\left(\left({\left({\left({\left({2^2}\right)^2}\right)^2}\right)^2}\right)^2\right) = 6,\\ \log \log n = 5 \text{ as } n = 65536 \times 65536 = 2^{32}$
$T\left(2^{(2^{10})}\right) = T\left(2^{512}\right) + 1 \\= T(2^{256}) + 2 \\= T(2^{128}) + 3\\ = T(2^{64}) + 4 \\= T(2^{32}) + 5 \\= T(2^{16}) + 6 \\= T(2^8)+7 \\= T(2^4) + 8 \\= T(2^2) + 9 \\= T(2) + 10 = 11,\\ \log \log n = 10$
So, answer is D
The recurrence relation for the no. of calls is $$T(n) = T(n1) + T(n2) + 2$$
$T(0) =T(1) = 0$ (for fib(0) and fib(1), there are no recursive calls).
$T(2) = 2$
$T(3) = 4$
$T(4) = 8$
$T(5) = 14$
$T(6) = 24$
$T(7) = 40$.
Counting the initial call we get 40 + 1 = 41.
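The count 41 can be reproduced by instrumenting fib; `fib_calls` is a helper counter not in the original definition.

```c
static int fib_calls = 0;   /* counts every invocation, including the first */

int fib(int n) {
    fib_calls++;
    if (n == 0 || n == 1) return 1;
    return fib(n - 1) + fib(n - 2);
}
```

Evaluating fib(7) returns 21 and leaves fib_calls at 41, matching the recurrence above.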
foo is printing the lowest digit. But the printf inside it is after the recursive call. This forces the output to be in reverse order
2, 0, 4, 8
The final value of "sum" printed will be 0, as C uses pass by value and hence the value modified inside foo won't be visible in main.
void move(int n, char A, char B, char C) {
    if (n > 0) {
        move(n - 1, A, C, B);
        printf("Move disk %d from pole %c to pole %c\n", n, A, C);
        move(n - 1, B, A, C);
    }
}
(In the accompanying figure, red denotes the return values.)
Ans is 15.
12 + (7 - (13 - (4 + (11 - (6 + 0)))))
= 12 + (7 - (13 - (4 + (11 - 6))))
= 12 + (7 - (13 - 9))
= 12 + 7 - 13 + 9
= 15
Worst case will arise when both n and m are consecutive Fibonacci numbers.
$\text{gcd}(F_n, F_{n1}) = \text{gcd}(F_{n1},F_{n2}) = \dots = \text{gcd}(F_1,F_0) = 1$
and the $n^{th}$ Fibonacci number grows as $\varphi^n$, where $\varphi \approx 1.618$ is the golden ratio.
So, to find $\text{gcd} (n,m)$, number of recursive calls will be $\Theta (\log n) $.
The function prints the binary equivalent of the number n.
Binary equivalent of 173 is 10101101.
(We can directly go to the "if" part to get one answer, but to get all possible answers we also need to solve the "else" part, although the question does not ask for them.)
Solving the else part:
$\frac{x}{2} + \frac{3}{2x} = \frac{x^2+3}{2x}$
So, the new value of $x$ will be $\frac{x^2+3}{2x}$ and we need it equal to $x$.
$\frac{x^2+3}{2x} = x \\ \implies x^2 + 3 = 2x^2 \\ \implies x^2 = 3 \\ \implies x = \sqrt{3} \approx 1.732$ (taking the positive root).
Now solving the if part.
abs(x*x - 3) < 0.01
So, $x^2 - 3 < 0.01 \text { and } -\left(x^2 - 3\right) < 0.01\\ \implies x^2 < 3.01 \text{ and } x^2 > 2.99\\ \implies x < 1.735 \text { and } x > 1.729$
Rounded to two decimal places, the answer is 1.73 or 1.74.
Language $L_1$ is polynomial time reducible to language $L_2$. Language $L_3$ is polynomial time reducible to language $L_2$, which in turn is polynomial time reducible to language $L_4$. Which of the following is/are true?
 $\text{ if } L_4 \in P, \text{ then } L_2 \in P$
 $\text{ if } L_1 \in P\text{ or } L_3 \in P, \text{ then } L_2 \in P$
 $L_1 \in P, \text{ if and only if } L_3 \in P$
 $\text{ if } L_4 \in P, \text{ then } L_3 \in P$
 II only
 III only
 I and IV only
 I only
Answers: Reduction
 L1 is polynomial time reducible to L2. So, L2 is at least as hard as L1.
 L3 is polynomial time reducible to L2. So, L2 is at least as hard as L3.
 L2 is polynomial time reducible to L4. So, L4 is at least as hard as L2.
If L4 is in P, L3, L2 and L1 must also be in P. So, I and IV are true.
We can have L1 in P and L2 not in P, and none of the given conditions are violated. So, II is false.
Assume L3 is not in P. Since L2 is at least as hard as L3, L2 is also not in P. But L1 is no harder than L2 (condition 1), so L1 can still be in P without violating any given condition. So, III is false.
Hence, the correct choice is C.
More Info: http://gatecse.in/wiki/Some_Reduction_Inferences
Consider the following recursive function:
function fib(n: integer): integer;
begin
    if (n = 0) or (n = 1) then
        fib := 1
    else
        fib := fib(n - 1) + fib(n - 2)
end;
The above function is run on a computer with a stack of $64$ bytes. Assuming that only return address and parameter are passed on the stack, and that an integer value and an address takes $2$ bytes each, estimate the maximum value of $n$ for which the stack will not overflow. Give reasons for your answer.
Answers: Runtime Environments
Size of an activation record $= 2 + 2 = 4$ bytes.
So, no. of possible activation records which can be live at a time $= 64/4 = 16$.
So, we can have 16 function calls live at a time (recursion depth $= 16$). The deepest call chain is $fib(n) \to fib(n-1) \to \dots \to fib(1)$, which keeps $n$ frames live, so the maximum value of $n$ for which the stack will not overflow is $16$. For $n = 17$, the stack will overflow.
This is different from the total no. of recursive calls which will be as follows:
| $n$ | No. of calls |
|---|---|
| 1 | 1 |
| 2 | 3 |
| 3 |