This book was created programmatically by GATE Overflow on Jul 6, 2020. If you have any doubt regarding an answer, click on the question link and comment on the site. Studying all these questions might get you 60 marks in GATE, but that might not be enough for an IIT. So, read standard books, solve their exercise questions, and use these questions for cementing the concepts, aiming for 85+. At the least, if you are not getting the solution to a given problem, first refer to a standard book. If any error is found in any question, it will be corrected at https://gateoverflow.in/corrections.

PDFs for the remaining subjects will be made available at http://classroom.gateoverflow.in, and you can enroll in this course to get notified about them. Enrollment is free; the account details are those of GATE Overflow, with a new password that has been sent to all registered emails on GATE Overflow. New users will receive this email within a few minutes of confirming their email address.

You can now join our Facebook group for GATE CSE discussions.

You can visit http://playlists.gatecse.in for high-quality videos for GATE CSE and for guidance on how to use the GO site/ebook.

This book consists only of previous year GATE and TIFR questions (CS from 1987 onward and all 5 years of IT), all of which are relevant for GATE. Subjects that are out of syllabus as of GATE 2019 have been removed from this book except in rare cases.

Since GATE Overflow started in August 2014, a lot of people have dedicated their time and effort to bringing this book to its present form. Initiated by Omesh Pandita and Arjun Suresh as a Q/A platform for CSE students, GATE Overflow had Kathleen Bankson instrumental in getting all previous year GATE questions here. Then experts like Praven Saini, Happy Mittal, Sankaranarayanan P.N., Suraj Kumar etc. contributed a lot to the answers here. Pragy Agarwal, even after topping GATE, has continuously contributed his knowledge here, as well as made the contents beautiful with fine LaTeX skills. We also have to thank Jothee, Misbah, Ishrat and Nataliyah, who are continuously adding to the contents and keeping them neat and clean. There are also many toppers of GATE 2015, 2016, 2017 and probably 2018 who are contributing a lot here. The list of all the contributors can be found here, but even that does not include the contributions of some, like Arif Ali Anapparakkal in helping design this book, and Arvind Devaraj and others who have provided guidance and help. Last but not least, we thank all the users of GATE Overflow.

We thank Silpa V.S., Rahul Kumar Yadav and others for keeping the GATECSE Lastrank page maintained. Bikram Ballav is behind most of the exams on GO (http://mockgate.com), and Arindam Sarkar made the interface for it. Pragy Agarwal is also behind the rank and score predictor tool (http://mymarks.gatecse.in) used by GO, which has had 99-100% accuracy over the last 2 years.

Special thanks to Sachin Mittal for making the How to Use GO videos, Silpa V.S. for classifying the questions topic-wise for the book, Pooja Palod for making the GATE 2018 schedule, and Debashish Deka for GO classroom contributions.

Also thanks to all toppers who took time to write a review for GO.

Contributors

Table of Contents

Let $T$ be a Depth First Tree of an undirected graph $G$. An array $P$ indexed by the vertices of $G$ is given: $P[V]$ is the parent of vertex $V$ in $T$. The parent of the root is the root itself.

Give a method for finding and printing the cycle formed if the edge $(u,v)$ of $G$ not in $T$ (i.e., $e \in G-T$) is now added to $T$.

Time taken by your method must be proportional to the length of the cycle.

Describe the algorithm in a PASCAL (C)-like language. Assume that the variables have been suitably declared.

An array $A$ contains $n$ integers in locations $A[0], A[1], \dots A[n-1]$. It is required to shift the elements of the array cyclically to the left by $K$ places, where $1\leq K \leq n-1$. An incomplete algorithm for doing this in linear time, without using another array is given below. Complete the algorithm by filling in the blanks. Assume all variables are suitably declared.

min:=n;
i=0;
while _____ do
begin
    temp:=A[i];
    j:=i;
    while ____ do
    begin
        A[j]:=____;
        j:=(j+K) mod n;
        if j<min then
            min:=j;
    end;
    A[(n+i-K)mod n]:=____;
    i:=______;
end;

An element in an array $X$ is called a leader if it is greater than all elements to the right of it in $X$. The best algorithm to find all leaders in an array 

  1. solves it in linear time using a left to right pass of the array
  2. solves it in linear time using a right to left pass of the array
  3. solves it using divide and conquer in time $\Theta (n\log n)$
  4. solves it in time $\Theta( n^2)$

Given two arrays of numbers $a_{1},...,a_{n}$ and $b_{1},...,b_{n}$ where each number is $0$ or $1$, the fastest algorithm to find the largest span $(i, j)$ such that $ a_{i}+a_{i+1}+\dots+a_{j}=b_{i}+b_{i+1}+\dots+b_{j}$, or report that there is no such span,

  1. Takes $ O(3^{n})$ and $\Omega(2^{n})$ time if hashing is permitted
  2. Takes $ O(n^{3})$ and $\Omega(n^{2.5})$ time in the key comparison mode
  3. Takes $\Theta (n)$ time and space
  4. Takes $O(\sqrt n)$ time only if the sum of the $2n$ elements is an even number
There are $5$ bags labeled $1$ to $5$. All the coins in a given bag have the same weight. Some bags have coins of weight $10$ gm, others have coins of weight $11$ gm. I pick $1, 2, 4, 8, 16$ coins respectively from bags $1$ to $5$. Their total weight comes out to $323$ gm. Then the product of the labels of the bags having $11$ gm coins is ___.
Consider a sequence of $14$ elements: $A=[-5, -10, 6, 3, -1, -2, 13, 4, -9, -1, 4, 12, -3, 0]$. The sequence sum $S(i,j) = \Sigma_{k=i}^j A[k]$. Determine the maximum of $S(i,j)$, where $0 \leq i \leq j <14$. (Divide and conquer approach may be used.)

Answer: ___________

You are given ten rings numbered from $1$ to $10$, and three pegs labeled $A$, $B$, and $C$. Initially all the rings are on peg $A$, arranged from top to bottom in ascending order of their numbers. The goal is to move all the rings to peg $B$ in the minimum number of moves obeying the following constraints:

  1. In one move, only one ring can be moved.
  2. A ring can only be moved from the top of its peg to the top of a new peg.
  3. At no point can a ring be placed on top of another ring with a lower number.

How many moves are required?

  1. $501$
  2. $1023$
  3. $2011$
  4. $10079$
  5. None of the above.

Asha and Lata play a game in which Lata first thinks of a natural number between $1$ and $1000$. Asha must find out that number by asking Lata questions, but Lata can only reply by saying “Yes” or “No”. Assume that Lata always tells the truth. What is the least number of questions that Asha needs to ask within which she can always find out the number Lata has thought of?

  1. $10$
  2. $32$
  3. $100$
  4. $999$
  5. $\text{None of the above}$

Answers: Algorithm Design

Selected Answer

Union-Find Algorithm can be used to find the cycle.

Ref: http://www.geeksforgeeks.org/union-find/

12 votes -- Rajarshi Sarkar ( 27.6k points)
Selected Answer

The inner loop left-rotates the elements that are $K$ positions apart, i.e., A[i+K] goes to A[i], A[i+2K] goes to A[i+K] and so on.

This loop stops when A[i] gets assigned to its final place, which is A[n-K+i] (we do not need mod n here because i < K).

If $n$ is a multiple of $K,$ the inner loop iterates $n/K$ times and outer loop iterates $K$ times.

Otherwise inner loop iterates more than $n/K$ times and correspondingly the outer loop gets adjusted using the min variable.  

min:=n;
i=0;
while i < min do
begin
    temp:=A[i];
    j:=i;
    while (j != n+i-K)  do //the cycle is complete when j reaches n+i-K
    begin
        A[j]:= A[(j+K) mod n];
        j:=(j+K) mod n;
        if j<min then //updates the iteration count for i loop
            min:=j;
    end;
    A[(n+i-K)mod n]:=temp; //the mod n is redundant since i < K implies n+i-K < n
    i:= i+1;
end;

C code for the problem is as follows:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int main()
{
    int *A;
    int n, i, j, K, min;
    time_t t;
    time(&t); //to get current time
    srand(t); //initialize random seed using current time
    printf("Please enter n and K: ");
    scanf("%d%d", &n, &K);
    A = malloc(n * sizeof(int));
    printf("Generating n random elements: \n");
    for(i = 0; i < n; i++)
        A[i] = rand()%1000;
    printf("n elements are: \n");
    for(i = 0; i < n; i++)
        printf("%d ", A[i]);
    i = 0, min = n;
    while(i < min)
    {
        int temp = A[i];
        j = i;
        while(j != n+i-K)//the cycle is complete when j reaches n+i-K
        {
            A[j] = A[(j+K)%n];
            j = (j+K) % n;
            if(j < min) min = j;
        }
        A[n+i-K] = temp;
        i++;
    }
    printf("\nThe numbers left rotated by %d places are: \n", K);
    for(i = 0; i < n; i++)
        printf("%d ", A[i]);
    free(A);
}
5 votes -- Arjun Suresh ( 328k points)
Selected Answer

Option B) We can move from right to left, while keeping a note of the maximum element so far (let's call it current_max).

  •  Starting from the rightmost element, we initialize our current_max with it, since the rightmost element will always be a leader.
  •  Moving from right to left, if an element $x$ is greater than our current_max, then $x$ is also a leader. Add this element to the list of leaders (or simply print it). Set current_max to $x$ and carry on leftward.

Time Complexity would be $\Theta(n)$.

46 votes -- Madhur Rawat ( 2k points)
Selected Answer

Answer is (C). Following algorithm would do.

Since array is binary, the max sum will go until $n$ and so the sum difference of the two arrays can vary between $-n$ and $n$. We use array $start$ to keep the starting index of each possible sum (hence of size $2n+1$) and array $end$ to keep the ending index (these two arrays work like hash tables and since we have only $2n+1$ possible keys, we can do a perfect hashing). So, our required solution will be $max(end[i]-start[i])$ provided both are assigned values. 

The algorithm works as follows:

  1. Initialize the $diff$ array to contain the running difference of the sums of elements of arrays a and b, i.e., $diff[i] = \sum_{k=0}^{i} \left(a[k] - b[k]\right)$.
  2. Now $diff[i]$ can have values from $-n$ to $n$ which gives $2n+1$ possible values and the first occurrence of a $diff$ value marks the beginning of a span and the last occurrence marks the end. We use $start$ and $end$ array for storing these two positions for the $2n+1$ possible values.
  3. Now, the largest value of $end[i] - start[i]$ for any $i$, will be the largest span and the start of it will be $start[i] + 1$, and end will be $end[i]$. If the span is starting from first position itself (arrays $a$ and $b$ have same first elements), then it will start from $start[i]$ itself. 
#include <stdio.h>

#define size 100 //assume n is less than 100
int main()
{
    int n, a[size], b[size]; 
    int start[2*size+1], end[2*size+1];
    int sum1 = 0, sum2 = 0, i;
    int diff[size];
    printf("Enter n: ");
    scanf("%d", &n);
    for(i = 0; i < n; i++)
    {
        printf("Enter a[%d]: ", i);
        scanf("%d", &a[i]);
    }
    for(i = 0; i < n; i++)
    {
        printf("Enter b[%d]: ", i);
        scanf("%d", &b[i]);
    }
        
    for(i = 0; i < n; i++)
    {
        if(a[i]) sum1++;
        if(b[i]) sum2++;
        diff[i] = sum1 - sum2;
    }
    for(i = 0; i <= 2*n; i++) //2n+1 possible diff values, from -n to n
        start[i] = -1,end[i] = -1;
    start[n] = end[n] = 0;  
    //initially sum is 0 at the beginning of array and 
    //the first n-1 elements of start and end are used 
    //if sum of A till ith element is less than sum of B till ith element
    for(i=0; i < n; i++)
    {
        if(start[diff[i] + n] == -1)//interested only in the first occurrence of diff[i]
            start[diff[i] + n] = i;
        end[diff[i] + n] = i;//interested in the last occurrence of diff[i]
    }
    int max = -1;
    int savei = -1; //savei is for storing the sum having the largest span
    
    for(i = 0; i <= 2*n; i++)
    {
        if(start[i] > -1 && (end[i] - start[i] > max))
        {
            max = end[i] - start[i];
            savei = i;
        }
        
    }
    if(savei >= 0)
    {
        printf("The largest span is from %d to %d\n", start[savei]+(savei != n), end[savei]);
    //when sum zero is having the largest span, span starts from first element itself. 
    //Else, the span starts from the next element from which the span does not change
    }
    else
    {
        printf("No span\n");
    }
}
31 votes -- Arjun Suresh ( 328k points)
Selected Answer

Suppose $X$ is the number of coins of $11$ gm and $Y$ is the number of $10$ gm coins.
According to question,

$\qquad 11X+10Y= 323 \qquad \to (1)$
$ \qquad X+Y=31\qquad \qquad \to (2)$

Solving $(1),(2),$ we get $X=13$ and $Y=18.$ So, here number of coins of $11$ gm is $13$ and the only possible combination for $13$ coins is

  • $1$ coin from bag$1$
  • $4$ coins from bag$3$ 
  • $8$ coins from bag$4$

So, product of label of bags will be $=1\times 3\times 4=12.$

77 votes -- minal ( 12.9k points)
Selected Answer

Subsequence :  A subsequence is a sequence that can be derived from another sequence by deleting some or no elements without changing the order of the remaining elements.

Subarray : A subarray of n-element array is an array composed from a contiguous block of the original array's elements.

Note that the question here is actually about subarrays, not subsequences:

$S(i,j) = \sum_{k=i}^j A[k]$
 

What does $\sum_{k=i}^j $ mean? It means you start from index $i$ and go till index $j$ with unit increment each time.

Ultimately they are asking 'Maximum subarray sum'.

int maxsum = A[0], sum = A[0], n = 14;
for(int i = 1; i < n; i++){
    if (sum > 0) sum += A[i]; //extend the current run
    else sum = A[i];          //start a fresh run at A[i]
    if (maxsum < sum) maxsum = sum;
}

 

maxsum $=6 + 3 - 1 - 2 + 13 + 4 - 9 - 1 + 4 + 12 = 29$

18 votes -- Digvijay ( 44.8k points)
Selected Answer
I think it's the Tower of Hanoi problem.
Therefore, the total number of moves is $2^{n} -1 = 2^{10} - 1 = 1023$.

 $\therefore$ Option $B$.
16 votes -- Umang Raman ( 12k points)

By using binary search, Asha needs only $  \left \lceil \log_2 1000   \right \rceil = 10$ questions.

4 votes -- Mk Utkarsh ( 25.5k points)
Consider the following problem. Given $n$ positive integers $a_{1}, a_{2}, \dots, a_n,$ it is required to partition them into two parts $A$ and $B$ such that

$|\sum_{i \in A} a_{i} - \sum_{i \in B} a_{i}|$ is minimised

Consider a greedy algorithm for solving this problem. The numbers are ordered so that $a_{1} \geq a_{2} \geq \dots \geq a_{n},$ and at the $i^{th}$ step, $a_i$ is placed in the part whose sum is smaller at that step. Give an example with $n=5$ for which the solution produced by the greedy algorithm is not optimal.
Match the pairs in the following questions:$$\begin{array}{|ll|ll|}\hline (a) & \text{Strassen's matrix multiplication algorithm} & (p) & \text{Greedy method} \\\hline  (b) & \text{Kruskal's minimum spanning tree algorithm} & (q) & \text{Dynamic programming} \\\hline   (c) & \text{Biconnected components algorithm} & (r)  & \text{Divide and Conquer} \\\hline (d) & \text{Floyd's shortest path algorithm} & (s)  &\text{Depth-first search}  \\\hline \end{array}$$

The correct matching for the following pairs is$$\begin{array}{ll|ll}\hline \text{A.} & \text{All pairs shortest path} & \text{1.} & \text{Greedy} \\\hline  \text{B.} & \text{Quick Sort} & \text{2.}& \text{Depth-First Search} \\\hline   \text{C.}& \text{Minimum weight spanning tree} & \text{3.}  & \text{Dynamic Programming} \\\hline  \text{D.} & \text{Connected Components} &\text{4.}  & \text{Divide and Conquer}  \\\hline \end{array}$$

  1. $\text{A-2 B-4 C-1 D-3}$

  2. $\text{A-3 B-4 C-1 D-2}$

  3. $\text{A-3 B-4 C-2 D-1}$

  4. $\text{A-4 B-1 C-2 D-3}$

Match the following:
$$\begin{array}{|ll|ll|}\hline \text{P.} & \text{Prim's algorithm for minimum spanning tree}  &  \text{i.}   & \text{Backtracking} \\\hline  \text{Q.} & \text{Floyd-Warshall algorithm for all pairs shortest path} & \text{ii.}& \text{ Greedy method} \\\hline  \text{R.} & \text{ Merge sort  } & \text{iii.} & \text{ Dynamic programming} \\\hline  \text{S.}&  \text{Hamiltonian circuit  } & \text{iv.} & \text{  Divide and conquer} \\\hline \end{array}$$

  1. $\text{P-iii, Q-ii, R-iv, S-i}$
  2. $\text{P-i, Q-ii, R-iv, S-iii}$
  3. $\text{P-ii, Q-iii, R-iv, S-i}$
  4. $\text{P-ii, Q-i, R-iii, S-iv}$

Given below are some algorithms, and some algorithm design paradigms.$$\begin{array}{ll|ll}\hline \text{1.} & \text{Dijkstra's Shortest Path} & \text{i.} & \text{Divide and Conquer} \\\hline  \text{2.} & \text{Floyd-Warshall algorithm to compute} & \text{ii.}& \text{Dynamic Programming} \\& \text{ all pairs shortest path} \\\hline   \text{3.}& \text{Binary search on a sorted array} & \text{iii.}  & \text{Greedy design} \\\hline  \text{4.} & \text{Backtracking search on a graph} &\text{iv.}  & \text{Depth-first search} \\\hline  &  &\text{v.}  & \text{Breadth-first search}  \\\hline \end{array}$$
Match the above algorithms on the left to the corresponding design paradigm they follow.

  1. $\text{1-i, 2-iii, 3-i, 4-v}$
  2. $\text{1-iii, 2-iii, 3-i, 4-v}$
  3. $\text{1-iii, 2-ii, 3-i, 4-iv}$
  4. $\text{1-iii, 2-ii, 3-i, 4-v}$

Consider the following table:$$\begin{array}{|ll|ll|}\hline \rlap{\textbf {Algorithms}} &  & \rlap{\textbf{Design Paradigms }} & \\\hline  \text{P.} & \text{Kruskal} & \text{i.}& \text{Divide and Conquer}   \\\hline  \text{Q.} & \text{Quicksort} & \text{ii.}& \text{Greedy}  \\\hline  \text{R.} & \text{Floyd-Warshall} & \text{iii.}& \text{Dynamic Programming}\\\hline  \end{array}$$

Match the algorithms to the design paradigms they are based on.

  1. $(P) \leftrightarrow (ii), (Q) \leftrightarrow (iii), (R) \leftrightarrow (i)$
  2. $(P) \leftrightarrow (iii), (Q) \leftrightarrow (i), (R) \leftrightarrow (ii)$
  3. $(P) \leftrightarrow (ii), (Q) \leftrightarrow (i), (R) \leftrightarrow (iii)$
  4. $(P) \leftrightarrow (i), (Q) \leftrightarrow (ii), (R) \leftrightarrow (iii)$

Answers: Algorithm Design Techniques

Let $S = \{12,11,8,8,8\}$

As per the given greedy algorithm we get $A = \{12,8 \}$ and $B = \{11,8,8\}$ and $|\sum_{i \in A} a_{i} - \sum_{i \in B} a_{i}| = |20 - 27| = 7.$ 

This is not minimal because if we partition $S$ as $A = \{12,11\}$ and $B = \{8,8,8\}$ we get $|\sum_{i \in A} a_{i} - \sum_{i \in B} a_{i}| = |23 - 24| = 1.$ 

Thus greedy algorithm fails for this example.

Working algorithm: https://www.geeksforgeeks.org/partition-a-set-into-two-subsets-such-that-the-difference-of-subset-sums-is-minimum/ 

4 votes -- Arjun Suresh ( 328k points)
Selected Answer
$(a)$ Strassen's matrix multiplication algorithm -  $(r)$ Divide and Conquer

$(b)$ Kruskal's minimum spanning tree algorithm - $(p)$ Greedy method

$(c)$ Biconnected components algorithm - $(s)$ Depth first search

$(d)$ Floyd's shortest path algorithm - $(q)$ Dynamic programming

$$\begin{array}{|ll|ll|}\hline (a) & \text{Strassen's matrix multiplication algorithm} & (r) & \text{Divide and Conquer} \\\hline  (b) & \text{Kruskal's minimum spanning tree algorithm} & (p) & \text{Greedy method} \\\hline   (c) & \text{ Biconnected components algorithm} & (s)  & \text{Depth-first search} \\\hline (d) & \text{Floyd's shortest path algorithm} & (q)  & \text{Dynamic programming}  \\\hline \end{array}$$
15 votes -- vishwa ratna ( 1.8k points)
Selected Answer
Answer : (B) A-3 B-4 C-1 D-2$$\begin{array}{|ll|ll|}\hline \text{A.} & \text{All pairs shortest path} & \text{3.} & \text{Dynamic Programming} \\\hline  \text{B.} & \text{Quick Sort} & \text{4.}& \text{Divide and Conquer} \\\hline   \text{C.}& \text{Minimum weight spanning tree} & \text{1.}  & \text{Greedy} \\\hline  \text{D.} & \text{Connected Components} &\text{2.}  & \text{Depth-First Search}  \\\hline \end{array}$$

Reference: Read the Intro/Algo Sub-Heading.

18 votes -- Siddharth Mahapatra ( 1.2k points)
Selected Answer
Option C.

Dijkstra's shortest path algorithm uses greedy design (always choosing the nearest unvisited vertex next), while the Floyd-Warshall algorithm to compute all pairs shortest paths uses dynamic programming.
Binary search uses divide and conquer (though we eliminate one part each time), and backtracking search on a graph uses depth-first search (in DFS we backtrack from a node after searching through all its descendants if the searched node is not found).
18 votes -- Raghuveer Dhakad ( 1.5k points)
Selected Answer

In Kruskal's algorithm, every iteration selects a minimum-weight edge among the remaining ones (the greediest choice) and adds it to the MST under construction. Hence, greedy.

In Quick Sort, we partition the problem into subproblems, solve them and then combine. Hence, it is Divide & Conquer.

Floyd-Warshall uses Dynamic programming.

Hence, correct answer is : OPTION (C).

23 votes -- sriv_shubham ( 2.8k points)

Consider the following two functions:

$g_1(n) = \begin{cases} n^3 \text{ for } 0 \leq n \leq 10,000 \\ n^2 \text{ for } n \geq 10,000 \end{cases}$

$g_2(n) = \begin{cases} n \text{ for } 0 \leq n \leq 100 \\ n^3 \text{ for } n > 100 \end{cases}$

Which of the following is true?

  1. $g_1(n) \text{ is } O(g_2(n))$

  2. $g_1(n) \text{ is } O(n^3)$

  3. $g_2(n) \text{ is } O(g_1(n))$

  4. $g_2(n) \text{ is } O(n)$

Which of the following is false?

  1. $100n \log n=O(\frac{n\log n}{100})$

  2. $\sqrt{\log n} = O(\log\log n)$

  3. If $0 < x < y \text{ then } n^x = O\left(n^y\right)$

  4. $2^n \neq O\left(n^k\right)$

Consider the following functions

  • $f(n) = 3n^{\sqrt{n}}$
  • $g(n) = 2^{\sqrt{n}{\log_{2}n}}$
  • $h(n) = n!$

Which of the following is true?

  1. $h(n)$ is $O(f(n))$
  2. $h(n)$ is $O(g(n))$
  3. $g(n)$ is not $O(f(n))$
  4. $f(n)$ is $O(g(n))$

Let $f(n) = n^2 \log n$ and $g(n) = n(\log n)^{10}$ be two positive functions of $n$. Which of the following statements is correct?

  1. $f(n) = O(g(n)) \text{ and } g(n) \neq O(f(n))$
  2. $g(n) = O(f(n)) \text{ and } f(n) \neq O(g(n))$
  3. $f(n) \neq O(g(n)) \text{ and } g(n) \neq O(f(n))$
  4. $f(n) =O(g(n)) \text{ and } g(n) = O(f(n))$

Consider the following three claims:

  1. $(n+k)^m = \Theta(n^m)$ where $k$ and $m$ are constants
  2. $2^{n+1} = O(2^n)$
  3. $2^{2n+1} = O(2^n)$

Which of the following claims are correct?

  1. I and II
  2. I and III
  3. II and III
  4. I, II, and III

Let $f(n)$, $g(n)$ and $h(n)$ be functions defined for positive integers such that 
$f(n) = O(g(n))$, $g(n) \neq O(f(n))$, $g(n) = O(h(n))$, and $h(n) = O(g(n))$.

Which one of the following statements is FALSE?

  1. $f(n) + g(n) = O(h(n) + h(n))$
  2. $f(n) = O(h(n))$
  3. $h(n) \neq O(f(n))$
  4. $f(n)h(n) \neq O(g(n)h(n))$

Consider the following functions:

  • $f(n) = 2^n$
  • $g(n) = n!$
  • $h(n) = n^{\log n}$

Which of the following statements about the asymptotic behavior of $f(n)$, $g(n)$ and $h(n)$ is true?

  1. $f\left(n\right)=O\left(g\left(n\right)\right); g\left(n\right)=O\left(h\left(n\right)\right)$
  2. $f\left(n\right) = \Omega\left(g\left(n\right)\right); g(n) = O\left(h\left(n\right)\right)$
  3. $g\left(n\right) = O\left(f\left(n\right)\right); h\left(n\right)=O\left(f\left(n\right)\right)$
  4. $h\left(n\right)=O\left(f\left(n\right)\right); g\left(n\right) = \Omega\left(f\left(n\right)\right)$

Arrange the following functions in increasing asymptotic order:

  1. $n^{1/3}$
  2. $e^n$
  3. $n^{7/4}$                                                                              
  4. $n \log^9n$
  5. $1.0000001^n$
  1. a, d, c, e, b
  2. d, a, c, e, b
  3. a, c, d, e, b
  4. a, c, d, b, e

Which of the given options provides the increasing order of asymptotic complexity of functions $f_1, f_2, f_3$ and $f_4$?

  • $f_1(n) = 2^n$
  • $f_2(n) = n^{3/2}$
  • $f_3(n) = n \log_2 n$
  • $f_4(n) = n^{\log_2 n}$
  1. $f_3,  f_2, f_4, f_1$ 
  2. $f_3, f_2, f_1, f_4$
  3. $f_2, f_3, f_1, f_4$
  4. $f_2, f_3, f_4, f_1$

Let $W(n) $ and $A(n)$ denote respectively, the worst case and average case running time of an algorithm executed on an input of size $n$.  Which of the following is ALWAYS TRUE?

  1. $A(n) = \Omega (W(n))$
  2. $A(n) = \Theta (W(n))$
  3. $A(n) = \text{O} (W(n))$
  4. $A(n) = \text{o} (W(n))$

Consider the equality $\displaystyle{\sum_{i=0}^n} i^3 = X$ and the following choices for $X$:

  1. $\Theta(n^4)$
  2. $\Theta(n^5)$
  3. $O(n^5)$
  4. $\Omega(n^3)$

The equality above remains correct if $X$ is replaced by

  1. Only I
  2. Only II
  3. I or III or IV but not II
  4. II or III or IV but not I

Let $f(n) = n$ and $g(n) = n^{(1 + \sin \: n)}$, where $n$ is a positive integer. Which of the following statements is/are correct?

  1. $f(n) = O(g(n))$
  2. $f(n) = \Omega(g(n))$
  1. Only I
  2. Only II
  3. Both I and II
  4. Neither I nor II

Consider the following functions from positive integers to real numbers:

$10$, $\sqrt{n}$, $n$, $\log_{2}n$, $\frac{100}{n}$.
The CORRECT arrangement of the above functions in increasing order of asymptotic complexity is:

  1. $\log_{2}n$, $\frac{100}{n}$, $10$, $\sqrt{n}$, $n$
  2. $\frac{100}{n}$, $10$, $\log_{2}n$, $\sqrt{n}$, $n$
  3. $10$, $\frac{100}{n}$, $\sqrt{n}$, $\log_{2}n$, $n$
  4. $\frac{100}{n}$, $\log_{2}n$, $10$, $\sqrt{n}$, $n$

Let $n$ be a large integer. Which of the following statements is TRUE?

  1. $n^{1 / \sqrt{\log_2 n}} < \sqrt{\log_2 n} < n^{1/100}$
     
  2. $n^{1/100} < n^{1 / \sqrt{\log_2 n}} < \sqrt{\log_2 n}$
     
  3. $n^{1 / \sqrt{\log_2 n}} < n^{1/100} < \sqrt{\log_2 n}$
     
  4. $\sqrt{\log_2 n} < n^{1 / \sqrt{\log_2 n}} < n^{1/100}$
     
  5. $\sqrt{\log_2 n} < n^{1/100} < n^{1 / \sqrt{\log_2 n}}$

Let $n$ be a large integer. Which of the following statements is TRUE?

  1. $2^{\sqrt{2\log n}}< \frac{n}{\log n}< n^{1/3}$
  2. $\frac{n}{\log n}< n^{1/3}< 2^{\sqrt{2\log n}}$
  3. $2^{\sqrt{2\log n}}< n^{1/3}< \frac{n}{\log n}$
  4. $n^{1/3}< 2^{\sqrt{2\log n}}<\frac{n}{\log n}$
  5. $\frac{n}{\log n}< 2^{\sqrt{2\log n}}<n^{1/3}$

Which of these functions grows fastest with $n$?

  1. $e^{n}/n$.
  2. $e^{n-0.9 \log n}$.
  3. $2^{n}$.
  4. $\left(\log n\right)^{n-1}$.
  5. None of the above.

Let $n = m!$. Which of the following is TRUE?

  1. $m = \Theta (\log n / \log \log n)$
  2. $m = \Omega (\log n / \log \log n)$ but not $m = O(\log n / \log \log n)$
  3. $m = \Theta (\log^2 n)$
  4. $m = \Omega (\log^2 n)$ but not $m = O(\log^2 n)$
  5. $m = \Theta (\log^{1.5} n)$

Which of the following functions asymptotically grows the fastest as $n$ goes to infinity?

  1. $(\log \: \log \: n)!$
  2. $(\log \: \log \: n)^ {\log \: n}$
  3. $(\log \: \log \: n)^{\log \: \log \: \log \: n}$
  4. $(\log \: n)^{\log \: \log \: n}$
  5. $2^{\sqrt{\log \: \log \: n}}$

Which of the following statements is TRUE for all sufficiently large integers n ?

  1. $2^{2^{\sqrt{\log \log n}}} <2^{\sqrt{\log n}} < n$
  2. $2^{\sqrt{\log n}}<n<2^{2^{\sqrt{\log \log n}}}$
  3. $n<2^{\sqrt{\log n}}<2^{2^{\sqrt{\log \log n}}}$
  4. $n<2^{2^{\sqrt{\log \log n}}} <2^{\sqrt{\log n}}$
  5. $2^{\sqrt{\log n}}<2^{2^{\sqrt{\log \log n}}} < n$

Stirling’s approximation for $n!$ states that for some constants $c_1,c_2$

$$c_1 n^{n+\frac{1}{2}}e^{-n} \leq n! \leq c_2 n^{n+\frac{1}{2}}e^{-n}.$$

What are the tightest asymptotic bounds that can be placed on $n!$ $?$

  1. $n! = \Omega(n^n) \text{ and } n! = \mathcal{O}(n^{n+\frac{1}{2}})$
  2. $n! = \Theta(n^{n+\frac{1}{2}})$
  3. $n! =\Theta((\frac{n}{e})^n)$
  4. $n! =\Theta((\frac{n}{e})^{n+\frac{1}{2}})$
  5. $n! =\Theta(n^{n+\frac{1}{2}}2^{-n})$

Among the following asymptotic expressions, which of these functions grows the slowest (as a function of $n$) asymptotically?

  1. $2^{\log n}$
  2. $n^{10}$
  3. $(\sqrt{\log n})^{\log ^{2} n}$
  4. $(\log n)^{\sqrt{\log n}}$
  5. $2^{2^{\sqrt{\log\log n}}}$

Answers: Asymptotic Notations

Selected Answer
For asymptotic complexity, we assume sufficiently large $n$. So, $g_1(n) = n^2$ and $g_2(n) = n^3$. Growth rate of $g_1$ is less than that of $g_2$, i.e., $g_1(n) = O(g_2(n)).$

Options $A$ and $B$ are TRUE here.
46 votes -- Arjun Suresh ( 328k points)
Selected Answer
  1. $100n \log n=O(\frac{n\log n}{100})$  : Big-$O$ denotes the growth rate of functions and multiplication or division by a constant does not change the growth rate. So, this is TRUE and here $O$ can even be replaced by $\Theta$ or $\Omega$.
  2. $\sqrt{\log n} = O(\log\log n)$  : FALSE. $\sqrt{\log n} = ({\log n}) ^{0.5}$ grows faster than $\log \log n$, as any positive polynomial function $($including powers between $0$ and $1)$ grows faster than any polylogarithmic function. (Ref: Section 3.2- Logarithms, Cormen). We can also substitute values here, but we must do so for at least $2$ large values and ensure the trend holds for any larger $n$ as well. 
  3. $0 < x < y \text{ then } n^x = O\left(n^y\right)$ : TRUE since $y$ is always greater than $x$. So, RHS is always greater than LHS.
  4. $2^n \neq O\left(n^k\right)$ : TRUE since $k$ is a constant. So, for large values of $n$, the LHS is much larger than the RHS (an exponential function always outgrows a polynomial).

Only B is FALSE.

38 votes -- Prashant Singh ( 46.5k points)
Selected Answer

$$\begin{array}{|l|l|l|}\hline \text{} &  \textbf{n= 256} & \textbf{n = 65536} \\ \hline \text{$f(n) = 3n^{\sqrt{n}}$} &  \text{$3 \times 256^{16}$} & \text{$3 \times 65536^{256}$}\\ & \text{$=3 \times {2^{128}}$} & \text{$ = 3 \times 2^{16 \times 256}$} \\ & & \text{$ = 3 \times 2^{4096}$}\\\hline \text{$g(n) = 2^{\sqrt{n}{\log_{2}n}}$} &  \text{$2^{16 \times 8}$}  & \text{$2^{256 \times 16}$} \\ & \text{$=2^{128}$} & \text{$ = 2^{4096}$}\\ \hline \text{$h(n) = n!$} & \text{$256!$} & \text{$65536!$}\\ & \text{$= O\left(\left(2^8\right)^{256}\right)$} & \text{$= O\left(\left(2^{16}\right)^{65536}\right)$} \\ & \text{$=O\left(2^{2048}\right)$} & \text{$=O\left(2^{1M}\right)$} \\ \hline\end{array}$$

For $h(n)$ the table gives only an upper bound, but factorial has a higher growth rate than exponential. 

http://math.stackexchange.com/questions/351815/do-factorials-really-grow-faster-than-exponential-functions

$f(n)$ and $g(n)$ have the same order of growth, as $f(n)$ is simply $3\times g(n)$ (we can prove this by taking $\log$ also). So, (d) is correct and all other choices are false. 

56 votes -- Arjun Suresh ( 328k points)
Selected Answer

A more formal approach:

$f(n) = n^2 \log n$

$g(n) = n (\log n)^{10}$

We can use the limit definition of $O$-notation

http://cse.unl.edu/~choueiry/S06-235/files/Asymptotics-HandoutNoNotes.pdf

$\displaystyle{\lim_{n \to \infty }} \frac{f(n)}{g(n)} = 0, \implies  f(n) = o(g(n))$
small $o$ implying $f$ is strictly asymptotically lower than $g$. Also by definition, $o \implies O \text{ but } O \not\implies o$.

$\displaystyle{\lim_{n\to \infty}} \frac{f(n)}{g(n)} = c,  c > 0 \implies f(n) = \Theta(g(n))$

$\displaystyle{\lim_{n \to \infty}} \frac{f(n)}{g(n)} = \infty, \implies  f(n) = \omega(g(n))$
small $\omega$ implying $f$ is strictly asymptotically higher than $g$. Also by definition, $\omega \implies \Omega, \text{ but } \Omega \not \implies \omega.$

We can use this to prove the above question

For any $k > 0$

 $\displaystyle{\lim_{n \to \infty}} \frac{(\log n)^{k}}{ n}$

Applying L'Hôpital's rule repeatedly,

$\implies \displaystyle{ \lim_{n \to \infty}} \frac{ k(\log n)^{k-1}}{ n}$

$= \dots = \displaystyle{\lim_{n \to \infty}}  \frac{ k!}{n} = 0$

So, $(\log n)^k = o(n)$

Now for large $n$, $n >> (\log n)^9$

i.e., $n^{2}\log n >> n(\log n)^{10}$ (Multiplying both sides by $n \log n)$

So, $n(\log n )^{10} = o(n^{2}\log n)$ and  $n(\log n )^{10} \neq \Theta(n^{2}\log n)$ (making LHS strictly asymptotically lower than RHS)

Or

$n(\log n )^{10} = O(n^{2}\log n)$ and $n^{2}\log n \neq O(n(\log n )^{10})$

Option $B$.

26 votes -- janakyMurthy ( 841 points)

$$\begin{array}{|l|c|c|} \hline \text {} & \textit{f(n)} & \textit{g(n)} \\\hline \text{$n = 2^{10}$} &  \text{$10 \times 2^{10} \times 2^{10}$} & \text{$2^{10} \times 10^{10}$}\\\hline \text{$n = 2^{256}$} & \text{$256 \times 2^{256} \times 2^{256}$} &\text{$2^{256} \times 256^{10}$}  \\\hline \end{array}$$
As $n$ grows larger, $f(n)$ overtakes $g(n)$ and the growth rate of $f$ is faster than that of $g$. So, $g(n) = O(f(n))$ and $f(n) \neq O(g(n))$.

B choice. 

65 votes -- Arjun Suresh ( 328k points)
Selected Answer

I. The rate of growth of $(n+k)^m$ is the same as that of $n^m$, since $k$ and $m$ are constants (if either $k$ or $m$ were a variable, the equality would not hold), i.e., for sufficiently large values of $n,$

$$(n+k)^m \leq a n^m \text{ and}$$

$$n^m \leq b (n+k)^m$$

where $a$ and $b$ are positive constants. Here, $a$ can be $2^m$ (since $n+k \leq 2n$ whenever $n \geq k$) and $b$ can be $1.$

So, TRUE.

II. $2^{n+1} = 2\times (2^n) = \Theta\left(2^n\right)$ as $2$ is a constant here.

As $2^{n+1}$ is both upper and lower bounded by $2^n$ we can say $2^{n+1} = O\left(2^n\right).$ $(\Theta$ implies both $O$ as well as $\Omega)$

So, TRUE.

III. $2^{2n+1}$ has same rate of growth as $2^{2n}$.

$2^{2n} = \left(2^n\right)^2 = 2^n \times 2^n$

$2^n$ is upper bounded by $\left(2^n\right)^2$, not the other way round, as $2^{2n}$ exceeds $2^n$ by a factor of $2^n$, which is not a constant.

So, FALSE.

Correct Answer: $A$

42 votes -- Danish ( 3.3k points)
Selected Answer

Answer is (D).

We can verify that $f \leqslant g$ but $g \nleqslant f$. Therefore, $f < g$.

Also, $g=h$, as $g=O(h)$ and $h=O(g)$.

26 votes -- Sandeep_Uniyal ( 6.4k points)
Selected Answer

$g(n) = n!$.

On expanding the factorial we get $g(n) = O(n^n)$ :$$\begin{align*} n^n &> n^{\log n} \\ n^n &> 2^n \end{align*}$$This condition is violated by the first statement of each of options $A$, $B$ and $C$. Hence, those options cannot be TRUE.

Second statement of option $D$ says that $g(n)$ is asymptotically biggest of all.

Answer is option (D).

29 votes -- Amar Vashishth ( 24.9k points)
Selected Answer
$A < C$ and $A < D$ are straightforward.

$E < B$, and $C, D < E$, as $E$ and $B$ are exponential functions with $B$ having a larger base.

Now, we just need to see if $C$ or $D$ is larger.

In $C$ we have a term $n^{3/4}$ and correspondingly in D we have $\log^9n$ (after taking $n$ out).

$n^{3/4}$ is asymptotically larger than $\log^9n$: when $n = 10^{100}$, $\log^9 n$ gives $100^9$, while $n^{3/4}$ gives $10^{75} > 100^{37}$, a much higher value, and this holds for all larger values of $n$. So, $D < C$.

Thus, $A$ is correct.
44 votes -- Arjun Suresh ( 328k points)
Selected Answer
Answer is $(A)$.

$n \log_2 n < n^{3/2}$ is quite straightforward as $n^{{3}/{2}} = n \times n^{1/2}$ and $\log n < n^{1/2}$, since logarithmic growth is slower than polynomial growth however small the exponent.

Also, $n^{3/2} < n^{\log_2 n} $ and $n^{3/2} < 2^n$.

Now only $n^{\log_2 n}$ and $ 2^n$ need to be compared.

Taking $\log$ of both $(\log_2 n)^2$  and $n$,

$n > (\log_2 n)^2$

Hence, $2^n >  n^{\log_2 n} $.

NOTE: We cannot compare two functions for asymptotic growth by taking $\log$ when the resulting functions differ only by a constant factor, because $\log f = \Theta(\log g)$ does not imply $f = \Theta(g)$.
43 votes -- Ankit Rokde ( 6.9k points)
Selected Answer
Worst case complexity can never be lower than the average case complexity, but it can be higher. So, $(C)$ is the answer.
$A(n) = O(W(n))$.
42 votes -- Arjun Suresh ( 328k points)
Selected Answer

Sum of the cubes of the first $n$ natural numbers is given by $(n(n+1) / 2) ^ 2$  which is $\Theta(n^4)$. So, I, III and IV are correct. II is wrong.

$\therefore$ (C) is correct.

52 votes -- Arjun Suresh ( 328k points)
Selected Answer

The answer is option (D).

Since the value of $\sin(n)$ will always range from $-1$ to $+1$, hence $g(n)$ can take values $1$, $n$, $n^2$.

Hence, if $g(n) = 1$, then statement I is incorrect.

And, if $g(n) = n^2$, then statement II is incorrect.

61 votes -- saurabhrk ( 1.1k points)
Selected Answer

$10$ is a constant.  $\therefore$ Growth rate is $0$.

$\sqrt{n}$  grows slower than linear but faster than $\log$. (Consider $\frac {\sqrt {n^2}}{\sqrt n} =\sqrt n$, whereas $\frac {\log \left(n^2\right)}{\log n} = 2$)

 $n$  : Growth rate is linear.

$\log_{2}n$  : Growth rate is logarithmic. For asymptotic growth, the base does not matter.

$\frac{100}{n}$  : Growth rate decreases with $n$.

So, correct answer is (B).

NOTE: Please never substitute large values of $n$ in such questions. If you ever do, at least do it for $2$ such values and take the ratio to get the growth rate, or plot a graph. Remember ${1.01}^{n} \neq O\left(n^{100}\right)$.

46 votes -- Arjun Suresh ( 328k points)
  • $10$: Constant
  • $\sqrt{n}$: Square root
  • $n$: Polynomial (linear)
  • $\log_{2}n$: Logarithmic
  • $\frac{100}{n}$: Constant divided by polynomial (clearly less than any constant for every $n > 100$)

Now we know: constant divided by polynomial $<$ constant $<$ logarithmic $<$ square root $<$ polynomial.

So, the correct order is $\frac{100}{n}$, $10$, $\log_{2}n$, $\sqrt{n}$, $n$

PS: When these functions are plotted, for asymptotic notations any graph can be multiplied by a constant, so the $\textbf{SLOPE}$ is the important factor. If one plot has a higher slope, it will eventually overtake the lower-sloped one at some value of $x$ and hence is asymptotically larger than the other.

31 votes -- Prashant Singh ( 46.5k points)
Selected Answer

Let $n = 2^x$. Then, $\log_2 n = x$.

$$\begin{array}{rlllcll} f(n) &=& n^{1/\sqrt{\log_2 n}} &=& \left (2^x \right )^{1/\sqrt{x}} = 2^{x/\sqrt{x}} &=& 2^{\sqrt{x}}\\[1em] g(n) &=& \sqrt{\log_2 n} &=& \sqrt{\log_2 {\left ( 2^x \right )}}&=& \sqrt{x}\\[1em] h(n) &=& n^{1/100} &=& \left ( 2^x \right )^{1/100} &=& 2^{x/100} \end{array}$$Since exponentials grow faster than polynomials, $h(n) > g(n)$ for large $n$.

Since linear functions grow faster than square roots, $\frac{x}{100} > \sqrt{x}$ for large $x$. Thus, $h(n) > f(n)$ for large $n$.

Since exponentials grow faster than polynomials, $2^{\sqrt{x}} > \sqrt{x}$ for large $\sqrt{x}$. Thus, $f(n) > g(n)$ for large $n$.

Hence, the relation is,

$g(n) < f(n) < h(n)$

Thus, option (D) is correct.

39 votes -- Pragy Agarwal ( 18.1k points)
Selected Answer
Answer will be $(C)$.

Take $n=2^{1024}$

Now,   $2^{\sqrt{(2 \log n)}} \approx 2^{45}$

          $n^{\frac{1}{3}} \approx 2^{341}$

          $n / \log n = 2^{1024}/1024 \approx 2^{1014}$

Just one value is not enough to confirm growth rate. So, take $n = 1024$.

Now,  $2^{\sqrt{(2 \log n)}} \approx 2^{4}$

          $n^{\frac{1}{3}}\approx 2^{3}$

          $n / \log n = 2^{10}/10 \approx 2^{7}$

So, as $n$ increases, the gap between second and third function increases and also the second function overtakes the first. So, $f1 < f2 < f3$.
18 votes -- srestha ( 83.4k points)
Selected Answer

Assuming that the base of the $\log$ in the question is $e$.

Let us try to rewrite each of these functions in the form $e^{\,\small\rm something}$, to make the comparison easier.

$$\def\d{\displaystyle\,}
\begin{array}{l|ll}
a. & e^{n} / n & = \dfrac{e^{\d n}}{e^{\d(\ln n)}} & = e^{\d (n - \ln n)}\\[1em]
b. & e^{n - 0.9 \ln n} &&= e^{\d (n - 0.9 \ln n)}\\[1em]
c. & 2^n &= \left (e^{\d\ln 2} \right )^{\d n} &= e^{\d (n \ln 2)}\\[1em]
d. & (\ln n)^{n-1} &= \left (e^{\d\ln \ln n} \right )^{\d n-1} &= e^{\d (n \ln \ln n - \ln\ln n)}
\end{array}$$

Now, if we just compare the exponents of all, we can clearly see that $(n \ln \ln n - \ln \ln n)$ grows faster than the rest. Note that in option $(C)$, the multiplicative $\ln 2$ is a constant, and hence grows slower than the multiplicative $\ln \ln n$ from option $(D)$.

This implies that $e^{\d (n \ln \ln n - \ln\ln n)}$ grows the fastest, and hence, $(\ln n)^{n-1}$ grows the fastest.

Thus, option (D) is the correct answer.

47 votes -- Pragy Agarwal ( 18.1k points)
Selected Answer
$$\begin{array}{|c|c|c|} \hline m & n=m! & \log n / \log \log n \\\hline 4 &  24 & 4/2 = 2 \\\hline  6 &  720 & 9/3 = 3 \\\hline  8 &  720\times 56 & 15/3 = 5 \\\hline  10 &  720\times 56\times 90 & 21/4 \approx 5 \\\hline \end{array}$$If we observe, $m$ grows at the same rate as $ \log n / \log \log n$.
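This tabulated pattern also follows from Stirling's approximation (a standard derivation, added here for completeness):

$$\log n = \log m! = \Theta(m \log m)$$

$$\log \log n = \Theta\left(\log (m \log m)\right) = \Theta(\log m)$$

$$\therefore \frac{\log n}{\log \log n} = \Theta\left(\frac{m \log m}{\log m}\right) = \Theta(m)$$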
Correct Answer: $A$
19 votes -- Arjun Suresh ( 328k points)
Selected Answer

Let $N = 2^{256}$

  1. $(\log \log N)! = 8!$
  2. $(\log \log N)^{\log N} = (8)^{256} = 2^{768}$
  3. $(\log \log N)^{\log\log \log N} = 8^{3}=512$
  4. $(\log N)^{\log\log N} = (256)^{8}=2^{64}$
  5. $2^{\sqrt{\log \log N}} = 2^{2\sqrt 2}$

Let $N = 2^{16}$

  1. $(\log \log N)! = 4!$
  2. $(\log \log N)^{\log N} = (4)^{16} = 2^{32}$
  3. $(\log \log N)^{\log\log \log N} = 4^{2}=16$
  4. $(\log N)^{\log\log N} = (16)^{4}=2^{16}$
  5. $2^{\sqrt{\log \log N}} = 2^{2}=4$

Taking ratio for both $N$ values,

  1. $(\log \log N)! \rightarrow 8!/4!$
  2. $(\log \log N)^{\log N} \rightarrow  2^{736}$
  3. $(\log \log N)^{\log\log \log N} \rightarrow 32$
  4. $(\log N)^{\log\log N} \rightarrow 2^{48}$
  5. $2^{\sqrt{\log \log N}} \rightarrow \approx 2$

Option $B$ = $(\log \log N)^{\log N}$ asymptotically grows the fastest: for the same change in $N$, its value increased the most, and this growth is monotonic.

29 votes -- Kapil Phulwani ( 35.1k points)
Selected Answer
Take $n = 2^{2^{k}}$

Now, option $(A)$ gives  $n = 2^{2^{k}}$

Option $(B)$ gives  $2^{\sqrt{\log2^{2^{k}}}}= 2^{\sqrt{2^{k}}}$

Option $(C)$ gives  $2^{2^{\sqrt{\log(\log2^{2^{k}})}}}= 2^{2^{\sqrt{k}}}$

Now, check power only for $(B)$ and $(C)$.

Take $\log$ of power of both functions, i.e.,

$\log\left(2^{\sqrt{k}}\right)= O\left(\log \sqrt{2^{k}}\right)$

$\sqrt{k} = O\left(\frac{k}{2}\right)$

Clearly, $(C)<(B)<(A)$.

Option $(A)$ is answer.
14 votes -- Anu007 ( 13.9k points)
Given that

$c_1n^{n+\frac{1}{2}}e^{-n}\leq n!\leq c_2n^{n+\frac{1}{2}}e^{-n}$.

$n!=\Theta\left(\frac{n^{n+1/2}}{e^n}\right)$

We can divide it by $e^{1/2}$  (multiplying or dividing by a positive constant does not affect the asymptotic nature of a function)

$\implies n!=\Theta\left(\frac{n^{n+1/2}}{e^{n+1/2}}\right)=\Theta\left(\left(\frac{n}{e}\right)^{n+\frac{1}{2}}\right)$

Option (D) is the answer.
5 votes -- Ashish verma ( 6.3k points)

For larger values of $n$, option (E) grows slowest.

Put $n=2^{64}$, so that $\log n = 64$ and $\log \log n = 6$.

$A)$ $2^{\log n}= 2^{64}$

$B)$ $n^{10}= 2^{640}$

$C)$ $(\sqrt{\log n})^{\log^2 n}= 8^{64 \times 64}= 2^{3\times 64\times 64}=2^{12288}$

$D)$ $(\log n)^{\sqrt{\log n}}= 64^{\sqrt{64}} = 64^{8}= 2^{48}$

$E)$ $2^{2^{\sqrt{\log \log n}}}= 2^{2^{\sqrt{6}}}\approx 2^{5.46}$

Since $2^{5.46}$ is the smallest of these values and remains so as $n$ increases, option (E) grows the slowest.

3 votes -- Ashwani Kumar ( 12.2k points)

The subset-sum problem is defined as follows. Given a set of $n$ positive integers, $S = \{ a_1, a_2, a_3, \dots , a_n \}$, and positive integer $W$, is there a subset of $S$ whose elements sum to $W$? A dynamic program for solving this problem uses a $\text{2-dimensional}$ Boolean array, $X$, with $n$ rows and $W+1$ columns. $X[i, j], 1 \leq i \leq n, 0 \leq j \leq W$, is TRUE, if and only if there is a subset of $\{a_1, a_2, \dots, a_i\}$ whose elements sum to $j$.

Which of the following is valid for $2 \leq i \leq n$, and $a_i \leq j \leq W$?

  1. $X[i, j] = X[i-1, j] \vee X[i, j-a_i]$
  2. $X[i, j] = X[i-1, j] \vee X[i-1, j-a_i]$
  3. $X[i, j] = X[i-1, j] \wedge X[i, j-a_i]$
  4. $X[i, j] = X[i-1, j] \wedge X[i-1, j-a_i]$

The subset-sum problem is defined as follows. Given a set of $n$ positive integers, $S = \{ a_1, a_2, a_3, \dots , a_n \}$, and positive integer $W$, is there a subset of $S$ whose elements sum to $W$? A dynamic program for solving this problem uses a $\text{2-dimensional}$ Boolean array, $X$, with $n$ rows and $W+1$ columns. $X[i, j], 1 \leq i \leq n, 0 \leq j \leq W$, is TRUE, if and only if there is a subset of $\{a_1, a_2, \dots, a_i\}$ whose elements sum to $j$.

Which entry of the array $X$, if TRUE, implies that there is a subset whose elements sum to $W$?

  1. $X[1, W]$
  2. $X[n, 0]$
  3. $X[n, W]$
  4. $X[n-1, n]$

A sub-sequence of a given sequence is just the given sequence with some elements (possibly none or all) left out. We are given two sequences $X[m]$ and $Y[n]$ of lengths $m$ and $n$, respectively with indexes of $X$ and $Y$ starting from $0$.

We wish to find the length of the longest common sub-sequence (LCS) of $X[m]$ and $Y[n]$ as $l(m, n)$, where an incomplete recursive definition for the function $I(i, j)$ to compute the length of the LCS of $X[m]$ and $Y[n]$ is given below:

l(i,j)  = 0, if either i = 0 or j = 0
        = expr1, if i,j > 0 and X[i-1] = Y[j-1]
        = expr2, if i,j > 0 and X[i-1] ≠ Y[j-1]

Which one of the following options is correct?

  1. $\text{expr1} = l\left(i-1, j\right) +1$

  2. $\text{expr1} = l\left(i, j-1\right)$

  3. $\text{expr2} = \max\left(l\left(i-1, j\right), l\left(i,j-1\right)\right)$

  4. $\text{expr2} = \max\left(l\left(i-1, j-1\right), l\left(i,j\right)\right)$

A sub-sequence of a given sequence is just the given sequence with some elements (possibly none or all) left out. We are given two sequences $X[m]$ and $Y[n]$ of lengths $m$ and $n$, respectively with indexes of $X$ and $Y$ starting from $0$.

We wish to find the length of the longest common sub-sequence (LCS) of $X[m]$ and $Y[n]$ as $l(m, n)$, where an incomplete recursive definition for the function $I(i, j)$ to compute the length of the LCS of $X[m]$ and $Y[n]$ is given below:

$l(i,j)  = 0$, if either $i = 0$ or $j = 0$
        $= expr1,$  if  $i,j > 0$  and  $X[i-1] = Y[j-1]$
        $= expr2,$  if  $i,j > 0$  and  $X[i-1] ≠ Y[j-1]$
The value of $l(i,j)$ could be obtained by dynamic programming based on the correct recursive definition of $l(i,j)$ of the form given above, using an array $L[M,N]$, where $M = m+1$ and $N = n+1$, such that $L[i, j] = l(i, j)$.

Which one of the following statements would be TRUE regarding the dynamic programming solution for the recursive definition of $l(i, j)$?

  1. All elements of $L$ should be initialized to 0 for the values of $l(i, j)$ to be properly computed.
  2. The values of $l(i, j)$ may be computed in a row major order or column major order of $L[M, N]$.
  3. The values of $l(i, j)$ cannot be computed in either row major order or column major order of $L[M, N]$.
  4. $L[p, q]$ needs to be computed before $L[r, s]$ if either $p<r$ or $q < s$.

The weight of a sequence $a_0,a_1, \dots, a_{n-1}$ of real numbers is defined as $a_0+a_1/2+ \dots + a_{n-1}/2^{n-1}$. A subsequence of a sequence is obtained by deleting some elements from the sequence, keeping the order of the remaining elements the same. Let $X$ denote the maximum possible weight of a subsequence of $a_o,a_1, \dots, a_{n-1}$ and $Y$ the maximum possible weight of a subsequence of $a_1,a_2, \dots, a_{n-1}$. Then $X$ is equal to

  1. $max(Y, a_0+Y)$
  2. $max(Y, a_0+Y/2)$
  3. $max(Y, a_0 +2Y)$
  4. $a_0+Y/2$

An algorithm to find the length of the longest monotonically increasing sequence of numbers in an array $A[0:n-1]$ is given below.

Let $L_i$, denote the length of the longest monotonically increasing sequence starting at index $i$ in the array.

Initialize $L_{n-1} = 1$.

For all $i$ such that $0 \leq i \leq n-2$

$ L_i = \begin{cases} 1+ L_{i+1} &  \quad\text{if A[i] < A[i+1]} \\ 1 & \quad\text{Otherwise}\end{cases} $

Finally, the length of the longest monotonically increasing sequence is $\text{max} \:(L_0, L_1, \dots , L_{n-1}).$

Which of the following statements is TRUE?

  1. The algorithm uses dynamic programming paradigm
  2. The algorithm has a linear complexity and uses branch and bound paradigm
  3. The algorithm has a non-linear polynomial complexity and uses branch and bound paradigm
  4. The algorithm uses divide and conquer paradigm

Four Matrices $M_1, M_2, M_3$ and $M_4$ of dimensions $ p \times q, \:\:q \times r, \:\:r \times s$ and $s \times t$ respectively can be multiplied in several ways with different number of total scalar multiplications. For example when multiplied as $((M_1 \times M_2) \times (M_3 \times M_4))$, the total number of scalar multiplications is $pqr + rst + prt$. When multiplied as $(((M_1 \times M_2) \times M_3) \times M_4)$, the total number of scalar multiplications is $pqr + prs + pst$.

If $p=10, q=100, r=20, s=5$ and $t=80$, then the minimum number of scalar multiplications needed is

  1.  $248000$
  2.  $44000$
  3.  $19000$
  4.  $25000$

Consider two strings $A$="qpqrr" and $B$="pqprqrp". Let $x$ be the length of the longest common subsequence (not necessarily contiguous) between $A$ and $B$ and let $y$ be the number of such longest common subsequences between $A$ and $B$. Then $x +10y=$ ___.

Suppose you want to move from $0$ to $100$ on the number line. In each step, you either move right by a unit distance or you take a shortcut. A shortcut is simply a pre-specified pair of integers $i,\:j \:\text{with}\: i <j$. Given a shortcut $(i,j)$, if you are at position $i$ on the number line, you may directly move to $j$. Suppose $T(k)$ denotes the smallest number of steps needed to move from $k$ to $100$. Suppose further that there is at most $1$ shortcut involving any number, and in particular, from $9$ there is a shortcut to $15$. Let $y$ and $z$ be such that $T(9) = 1 + \min(T(y),T(z))$. Then the value of the product $yz$ is _____.

The Floyd-Warshall algorithm for all-pair shortest paths computation is based on

  1. Greedy paradigm.
  2. Divide-and-conquer paradigm.
  3. Dynamic Programming paradigm.
  4. Neither Greedy nor Divide-and-Conquer nor Dynamic Programming paradigm.
Let $A_{1}, A_{2}, A_{3}$ and $A_{4}$ be four matrices of dimensions $10 \times 5, 5 \times 20, 20 \times 10$ and $10 \times 5$, respectively. The minimum number of scalar multiplications required to find the product $A_{1}A_{2}A_{3}A_{4}$ using the basic matrix multiplication method is _________.

Assume that multiplying a matrix $G_1$ of dimension $ p \times q$ with another matrix $G_2$ of dimension $q \times r$ requires $pqr$ scalar multiplications. Computing the product of $n$ matrices $G_1G_2G_3 \dots G_n$ can be done by parenthesizing in different ways. Define $G_i G_{i+1}$ as an explicitly computed pair for a given parenthesization if they are directly multiplied. For example, in the matrix multiplication chain $G_1G_2G_3G_4G_5G_6$ using parenthesization $(G_1(G_2G_3))(G_4(G_5G_6))$, $G_2G_3$ and $G_5G_6$ are the only explicitly computed pairs.

Consider a matrix multiplication chain $F_1F_2F_3F_4F_5$, where matrices $F_1, F_2, F_3, F_4$ and $F_5$ are of dimensions $ 2 \times 25, 25 \times 3, 3 \times 16, 16 \times 1 $ and $ 1 \times 1000$, respectively. In the parenthesization of $F_1F_2F_3F_4F_5$ that minimizes the total number of scalar multiplications, the explicitly computed pairs is/are

  1. $F_1F_2$ and $F_3F_4$ only
  2. $F_2F_3$ only
  3. $F_3F_4$ only
  4. $F_1F_2$ and $F_4F_5$ only

Answers: Dynamic Programming

Selected Answer

This is analogous to the dynamic programming solution to $0/1$ knapsack problem.

Consider the capacity of the knapsack, i.e., $W$, to be analogous to $j$ (the target sum here).

The solution exploits the optimal substructure of the problem.

At each stage we can have $2$ options:

Case $(1)$: Either we take an item (in this question, include the element $a_i$) along with a solution to the previous subproblem (the remaining sum being formed from the first $i-1$ elements),

in which case we choose $X[i-1][j-a_{i}]$:

$i-1$ indicates that we are considering a solution to the previous subproblem, and

$j-a_i$ means that, having included the element $a_i$, the remaining sum $j-a_i$ still has to be formed.

Case $(2)$: Or we do not take the item (in this case, the element $a_{i}$), in which case we only consider the solution to the previous subproblem, which is $X[i-1][j]$.

Since the whole solution to this subset-sum problem is the logical OR of cases 1 and 2, we eliminate options $C$ and $D$ because both take the logical AND of the two parts of the solution.

Now, since in the given question we are given a boolean array $X[n][W+1]$, an entry $X[i][j]$ is TRUE only if the sum $j$ is possible using elements $a_1$ to $a_i$.

So, for Each element of array, $a_{i}$, we consider two possibilities:

(1) Either we can ignore it and still get our possible sum, which is, $X[i-1][j]$

OR

(2) We could include element $a_i$ and then get our required sum, which is, $X[i-1][j-a_i]$

And finally, to get $X[i][j]$, we take logical or of the above two cases.

Hence, answer is option B.

19 votes -- Ayush Upadhyaya ( 25.8k points)
Selected Answer

Answer is C.

If the last-row, last-column entry, $X[n, W]$, is TRUE, then there exists a subset whose elements sum to $W$.

19 votes -- Shivam Bhardwaj ( 265 points)
Selected Answer

Answer is C. When the currently compared elements don't match, we have two possibilities for the LCS: one including X[i] but not Y[j], and the other including Y[j] but not X[i].

/* Returns length of LCS for X[0..m-1], Y[0..n-1] */
#define max(a, b) ((a) > (b) ? (a) : (b))
int lcs( char *X, char *Y, int m, int n )
{
   if (m == 0 || n == 0)
     return 0;
   if (X[m-1] == Y[n-1])
     return 1 + lcs(X, Y, m-1, n-1);
   else
     return max(lcs(X, Y, m, n-1), lcs(X, Y, m-1, n));
}
27 votes -- Sona Praneeth Akula ( 3.4k points)
Selected Answer

$\text{expr2} = \max\left(l\left(i-1, j\right), l\left(i,j-1\right)\right)$

When the currently compared elements don't match, we have two possibilities for the LCS: one including $X[i]$ but not $Y[j]$, and the other including $Y[j]$ but not $X[i]$.

/* Returns length of LCS for X[0..m-1], Y[0..n-1] */
#define max(a, b) ((a) > (b) ? (a) : (b))
int lcs( char *X, char *Y, int m, int n )
{
   if (m == 0 || n == 0)
     return 0;
   if (X[m-1] == Y[n-1])
     return 1 + lcs(X, Y, m-1, n-1);
   else
     return max(lcs(X, Y, m, n-1), lcs(X, Y, m-1, n));
}

Answer is B. Dynamic programming is used to save the previously found LCS values. So, for any index $[p,q]$, all smaller indices should have been computed earlier, which both row-major and column-major order satisfy. Option D is not correct, as the condition given requires even $L[3,2]$ to be computed before $L[2,4]$, which is not a necessity if we follow row-major order. 

#define max(a, b) ((a) > (b) ? (a) : (b))
int lcs( char *X, char *Y, int m, int n )
{
   int L[m+1][n+1];
   int i, j;
  
   /* Following steps build L[m+1][n+1] in bottom up fashion. Note 
      that L[i][j] contains length of LCS of X[0..i-1] and Y[0..j-1] */
   for (i=0; i<=m; i++)
   {
     for (j=0; j<=n; j++)
     {
       if (i == 0 || j == 0)
         L[i][j] = 0;
  
       else if (X[i-1] == Y[j-1])
         L[i][j] = L[i-1][j-1] + 1;
  
       else
         L[i][j] = max(L[i-1][j], L[i][j-1]);
     }
   }
    
   /* L[m][n] contains length of LCS for X[0..n-1] and Y[0..m-1] */
   return L[m][n];
}
35 votes -- Arjun Suresh ( 328k points)
Selected Answer

$$\begin{align}
&S = \langle a_0, S_1 \rangle\\
&S_1 = \langle a_1, a_2, a_3 \ldots a_{n-1} \rangle
\end{align}$$

Two possible cases arise:

  1. $\bf{a_0}$ is included in the max weight subsequence of $\textbf{S:}$

            In this case, $X = \text{weight} \Bigl( \langle a_0, S_1 \rangle \Bigr) = a_0 + \dfrac Y 2$
     
  2. $\bf{a_0}$ is not included in the max weight subsequence of $\textbf{S:}$

            In this case, $X = \text{weight}(S_1) = Y$

Since the value of $a_0$ can be anything (negative or $<\frac Y 2$ in general) $\{ \because a_i \in \mathbb{R} \}$, it is possible that $Y > a_0 + \frac Y 2$.

The maximum possible weight of a subsequence of $S$ is given by: $$X = \max \left ( Y, a_0 + \frac Y 2 \right )$$

Thus, option B is correct.

80 votes -- Pragy Agarwal ( 18.1k points)
Selected Answer

$(A)$ is the answer. 

The algorithm is storing the optimal solutions to subproblems at each point (for each $i$), and then using it to derive the optimal solution of a bigger problem. And that is dynamic programming approach. And the program has linear time complexity. 

http://stackoverflow.com/questions/1065433/what-is-dynamic-programming

Now, branch and bound comes in when we explore all possible solutions (branch) and prune a branch as soon as we realise it cannot yield a better solution than the best found so far (in classical backtracking we retreat only when the current partial solution cannot be extended to a valid solution). In backtracking: in each step, you check if this step satisfies all the conditions.
If it does: you continue generating subsequent solutions.
If not: you go one step backward to check for another path.

So, backtracking gives all possible solutions while branch and bound will give only the optimal one.

The given algorithm here is neither backtracking nor branch and bound. Because we are not branching anywhere in the solution space. 

And the algorithm is not divide and conquer as we are not dividing the problem and then merging the solution as in the case of merge sort (where merge is the conquer step). 

https://en.wikipedia.org/wiki/Divide_and_conquer_algorithms

55 votes -- Arjun Suresh ( 328k points)
Selected Answer

Answer is C.

Ordering: 

  • First Multiply  $M_2 \times M_3.$
    This requires $100*20*5$ multiplications. 
  • Then Multiply  $M_1 \times (M_2 \times M_3).$
    This requires $10*100*5$ multiplications.
  • Then Multiply $(M_1 \times (M_2 \times M_3)) \times M_4.$
    This requires $10*5*80$ multiplications.

Total $19000$ Multiplications.


Brute force approach (anyone can do this):

No. of possible orderings for $4$ matrices is $C_3$, where $C_3$ is the $3^{rd}$ Catalan number, given by $n=3$ in $\frac{1}{n+1} {}^{2n}C_n = 5.$

So, here we have 

  1. $(M_1 \times M_2) \times (M_3 \times M_4)$
  2. $(M_1 \times (M_2 \times M_3)) \times M_4$
  3. $((M_1 \times M_2) \times M_3) \times M_4$
  4. $M_1 \times (M_2 \times (M_3 \times M_4))$
  5. $M_1 \times ((M_2 \times M_3) \times M_4))$

Each of these would give no. of multiplications required as

  1. $pqr + rst + prt $
  2. $qrs + pqs + pst$
  3. $pqr + prs + pst$
  4. $rst + qrt + pqt$
  5. $qrs + qst + pst$

The last two contain $qt$ terms, which are by far the largest, and hence we can drop them from consideration: $qt = 8000$ multiplied by any other term would be larger than any value in the choices. So, just find the value of the first three. 

  1. $pqr + rst + prt  = 20000 + 8000 + 16000 = 44000$
  2. $qrs + pqs + pst = 10000 + 5000 + 4000 = 19000$ - smallest value in choice, we can stop here. 
  3. $pqr + prs + pst$

Dynamic programming solution (requires knowing the matrix chain ordering algorithm)

Here we have a chain of length 4.

The dynamic programming solution of matrix chain ordering uses the recurrence

$m[i, j] = \begin{cases} 0 &\text{ if } i=j\\ \min_{i\leq k < j } \left( m[i][k] + m[k+1][j] + p_{i-1}p_{k}p_{j} \right) &\text{ if } i < j\end{cases}$

So, we can fill the following table starting with the diagonals and moving upward diagonally. Here $k < j$ but $\geq i.$
$$\begin{array}{|l|l|l|l|l|} \hline \text{} & \textbf{j=1} & \textbf{j=2} & \textbf{j=3} & \textbf{j=4}\\\hline 
\textbf{i=1} &  0 & p_0p_1p_2 & \min(10000+p_0p_1p_3, & \min(50000+ p_0p_1p_4, \\
&&=20000&20000+p_0p_2p_3)&20000+8000+p_0p_2p_4, \\
&&&=15000&15000+p_0p_3p_4) = 19000
\\\hline 
\textbf{i=2} & \text{} & 0 & p_1p_2p_3 = 10000 & \min(8000 + p_1p_2p_4, \\
&&&&10000+p_1p_3p_4)=50000
\\\hline 
\textbf{i=3} &\text{} & \text{} & 0 & p_2p_3p_4 = 8000 \\\hline \textbf{i=4} &\text{} &\text{} & \text{} & 0 \\\hline \end{array}$$

Our required answer is given by $m[1,4] = 19000.$

33 votes -- Sona Praneeth Akula ( 3.4k points)
Selected Answer
Answer is $34$.

In the first string, if we want to get $4$ as the maximum length, then the LCS must end with either "$rr$" or "$qr$".
Only $4$ combinations are possible for an LCS of length $4$:

"$qpqr$", "$qqrr$","$pqrr$","$qprr$"

Now, check for matching subsequences in the second string; except for "$qqrr$", all are possible.
32 votes -- Anurag Semwal ( 6.7k points)
Selected Answer
$T(k)$ is the smallest number of steps needed to move from $k$ to $100$.

Now, it is given that $y$ and $z$ are two numbers such that,

$T(9) = 1 + \min (T(y), T(z))$ , i.e.,

$T(9) = 1 + \min ($Steps from $y$ to $100$, Steps from $z$ to $100)$, where $y$ and $z$ are two possible values that can be reached from $9$.

One number that can be reached from $9$ is $10$, which is the number obtained if we simply move one position right on the number line. Another number is $15$, the shortcut path from $9$, as given in the question. So, we have two paths from $9$, one is $10$ and the other is $15$.

Therefore, the value of $y$ and $z$ is $10$ and $15$ (either variable may take either of the values).

Thus, $yz = 150$.
41 votes -- Divya Bharti ( 8.4k points)
$T(9) =$ Distance from $9$ to $100$
$T(9)=1+ \min(T(y),T(z))=1+$min(Distance from $y$ to $100$ , Distance from $z$ to $100$)

There are only two positions reachable from $9$: one is a simple step to the right on the number line, i.e., $10$, and the other is $15$ (via the given shortcut).

Hence , $y=10$ , $z=15$
$yz=10 \times 15 = 150$
36 votes -- Srinath Jayachandran ( 2.9k points)
Selected Answer
In Floyd-Warshall's algorithm, we calculate all possibilities and select the best one, so it is neither divide-and-conquer nor greedy but is based on the dynamic programming paradigm.

Correct Answer: $C$
30 votes -- Anurag Semwal ( 6.7k points)
Selected Answer

The answer is $1500$.

Matrix Parenthesizing :  $A_1 ((A_2 A_3) A_4)$

Check my solution below, using dynamic programming: $$\begin{array}{|c|c|c|c|}\hline A_{1} &A_{2} & A_{3} &A_{4}\\\hline 10 \times 5 & 5 \times 20 & 20 \times 10 & 10\times 5  \\\hline \end{array}$$

  • $A_{12} = 10 \times 5 \times 20 =1000$
  • $A_{23} = 5 \times 20 \times 10 =1000$
  • $A_{34} = 20 \times 10 \times 5 =1000$

$$A_{13}=\min\left\{\begin{matrix} A_{12} + A_{33}+ 10 \times 20 \times 10 = 3000\\ A_{11} + A_{23} + 10 \times 5 \times 10 = 1500 \end{matrix}\right.$$

$$A_{24}=\min\left\{\begin{matrix} A_{23} + A_{44}+ 5 \times 10 \times 5 = 1250\\ A_{22} + A_{34} + 5 \times 20 \times 5 = 1500 \end{matrix}\right.$$

$$A_{14}=\min\left\{\begin{matrix} A_{11} + A_{24}+ 10 \times 5 \times 5 = 1500\\ A_{12} + A_{34} + 10 \times 20\times 5 = 3000\\ A_{13}+A_{44} + 10 \times 10\times 5 = 2000 \end{matrix}\right.$$


41 votes -- Akash Kanase ( 35.2k points)
Selected Answer
If we multiply anything by $F_{5}$ we get a much greater multiplication cost, because $F_{5}$ is a $1 \times 1000$ matrix and the factor $1000$ dominates the cost. So, we multiply by $F_{5}$ at the very last step.

So, here is the sequence giving minimal cost:

$\left(\left(F_{1}\left(F_{2}\left(F_{3}F_{4}\right)\right)\right)F_{5}\right) =  48 + 75 + 50 + 2000 =  2173$

The only explicitly computed pair is $F_{3}F_{4}$.

Correct Answer: $C$
27 votes -- Digvijay ( 44.8k points)

Which of the following statements is false?

  1. Optimal binary search tree construction can be performed efficiently using dynamic programming

  2. Breadth-first search cannot be used to find connected components of a graph

  3. Given the prefix and postfix walks over a binary tree, the binary tree cannot be uniquely constructed.

  4. Depth-first search can be used to find connected components of a graph

An independent set in a graph is a subset of vertices such that no two vertices in the subset are connected by an edge. An incomplete scheme for a greedy algorithm to find a maximum independent set in a tree is given below:

V: Set of all vertices in the tree;	
I := ϕ
while	V ≠ ϕ do	
begin	
    select a vertex u ∊ V such that	
    _______;
    V := V - {u};	
    if u is such that	
    ________then I := I ∪ {u}	
end;	
Output(I);	
  1. Complete the algorithm by specifying the property of vertex $u$ in each case.

  2. What is the time complexity of the algorithm?

Let $G$ be the directed, weighted graph shown in the figure below.

We are interested in the shortest paths from $A$.

  1. Output the sequence of vertices identified by the Dijkstra’s algorithm for single source shortest path when the algorithm is started at node $A$

  2. Write down sequence of vertices in the shortest path from $A$ to $E$

  3. What is the cost of the shortest path from $A$ to $E$?

Which one of the following algorithm design techniques is used in finding all pairs of shortest distances in a graph?

  1. Dynamic programming

  2. Backtracking

  3. Greedy

  4. Divide and Conquer

The most appropriate matching for the following pairs

$$\begin{array}{|l|l|}\hline \text{X: depth first search}  &  \text{1: heap } \\\hline  \text{Y: breadth first search} & \text{2: queue} \\\hline  \text{Z: sorting} & \text{3: stack} \\\hline \end{array}$$

is:

  1. $\text{X - 1,  Y - 2,  Z - 3}$
  2. $\text{X - 3,  Y - 1,  Z - 2}$
  3. $\text{X - 3,  Y - 2,  Z - 1}$
  4. $\text{X - 2,  Y - 3,  Z - 1}$

Consider an undirected, unweighted graph $G$. Let a breadth-first traversal of $G$ be done starting from a node $r$. Let $d(r,u)$ and $d(r,v)$ be the lengths of the shortest paths from $r$ to $u$ and $v$ respectively in $G$. If $u$ is visited before $v$ during the breadth-first traversal, which of the following statements is correct?

  1. $d(r,u) < d(r,v)$
  2. $d(r,u) > d(r,v)$
  3. $d(r,u) \leq d(r,v)$
  4. None of the above

Fill in the blanks in the following template of an algorithm to compute all pairs shortest path lengths in a directed graph $G$ with $n*n$ adjacency matrix $A$. $A[i,j]$ equals $1$ if there is an edge in $G$ from $i$ to $j$, and $0$ otherwise. Your aim in filling in the blanks is to ensure that the algorithm is correct.

INITIALIZATION: For i = 1 ... n
    {For j = 1 ... n
        { if a[i,j] = 0 then P[i,j] =_______ else P[i,j] =_______;}
    }
    
ALGORITHM: For i = 1 ... n
    {For j = 1 ... n
        {For k = 1 ... n
            {P[__,__] = min{_______,______}; }
        }
    }    
  1. Copy the complete line containing the blanks in the Initialization step and fill in the blanks.
  2. Copy the complete line containing the blanks in the Algorithm step and fill in the blanks.
  3. Fill in the blank: The running time of the Algorithm is $O$(___).

Consider the following graph: 

Among the following sequences:

  1. abeghf
  2. abfehg
  3. abfhge
  4. afghbe

Which are the depth-first traversals of the above graph?

  1. I, II and IV only
  2. I and IV only
  3. II, III and IV only
  4. I, III and IV only

Let $G =(V,E)$ be an undirected graph with a subgraph $G_1 = (V_1, E_1)$. Weights are assigned to edges of $G$ as follows.

$$w(e) = \begin{cases} 0 \text{,   if } e \in E_1 \\1 \text{,   otherwise} \end{cases}$$

A single-source shortest path algorithm is executed on the weighted graph $(V,E,w)$ with an arbitrary vertex $v_1$ of $V_1$ as the source. Which of the following can always be inferred from the path costs computed?

  1. The number of edges in the shortest paths from $v_1$ to all vertices of $G$
  2. $G_1$ is connected
  3. $V_1$ forms a clique in $G$

  4. $G_1$ is a tree

Let $G= (V,E)$ be a directed graph with $n$ vertices. A path from $v_i$ to $v_j$ in $G$ is a sequence of vertices ($v_{i},v_{i+1}, \dots , v_j$) such that $(v_k, v_{k+1}) \in E$ for all $k$ in $i$ through $j-1$. A simple path is a path in which no vertex appears more than once.

Let $A$ be an $n \times n$ array initialized as follows:

$$A[j,k] = \begin{cases} 1 \text {,   if } (j,k) \in E \\ 0 \text{,   otherwise} \end{cases}$$

Consider the following algorithm:

for i=1 to n
    for j=1 to n
        for k=1 to n
            A[j,k] = max(A[j,k], A[j,i] + A[i,k]);  

Which of the following statements is necessarily true for all $j$ and $k$ after termination of the above algorithm?

  1. $A[j,k] \leq n$
  2. If $A[j,j] \geq n-1$ then $G$ has a Hamiltonian cycle
  3. If there exists a path from $j$ to $k$, $A[j,k]$ contains  the longest path length from $j$ to $k$
  4. If there exists a path from $j$ to $k$, every simple path from $j$ to $k$ contains at most $A[j,k]$ edges

Suppose we run Dijkstra’s single source shortest path algorithm on the following edge-weighted directed graph with vertex $P$ as the source.

In what order do the nodes get included into the set of vertices for which the shortest path distances are finalized?

  1. $P,Q,R,S,T,U$
  2. $P,Q,R,U,S,T$
  3. $P,Q,R,U,T,S$
  4. $P,Q,T,R,U,S$

Let $G_1=(V,E_1)$ and $G_2 =(V,E_2)$ be connected graphs on the same vertex set $V$ with more than two vertices. If $G_1 \cap G_2= (V,E_1\cap E_2)$  is not a connected graph, then the graph $G_1\cup G_2=(V,E_1\cup E_2)$

  1. cannot have a cut vertex
  2. must have a cycle
  3. must have a cut-edge (bridge)
  4. has chromatic number strictly greater than those of $G_1$ and $G_2$

Consider the undirected graph below:

Using Prim's algorithm to construct a minimum spanning tree starting with node A, which one of the following sequences of edges represents a possible order in which the edges would be added to construct the minimum spanning tree?

  1. $\text{(E, G), (C, F), (F, G), (A, D), (A, B), (A, C)}$
  2. $\text{(A, D), (A, B), (A, C), (C, F), (G, E), (F, G)}$
  3. $\text{(A, B), (A, D), (D, F), (F, G), (G, E), (F, C)}$
  4. $\text{(A, D), (A, B), (D, F), (F, C), (F, G), (G, E)}$

Let $G(V,E)$ be an undirected graph with positive edge weights. Dijkstra’s single source shortest path algorithm can be implemented using the binary heap data structure with time complexity:

  1. $O\left(|V|^2\right)$

  2. $O\left(|E|+|V|\log |V|\right)$

  3. $O\left(|V|\log|V|\right)$

  4. $O\left(\left(|E|+|V|\right)\log|V|\right)$

Let $s$ and $t$ be two vertices in a undirected graph $G=(V,E)$ having distinct positive edge weights. Let $[X,Y]$ be a partition of $V$ such that $s \in X$ and $t \in Y$. Consider the edge $e$ having the minimum  weight amongst all those edges that have one vertex in $X$ and one vertex in $Y$.

The edge $e$ must definitely belong to:

  1. the minimum weighted spanning tree of $G$

  2. the weighted shortest path from $s$ to $t$

  3. each path from $s$ to $t$

  4. the weighted longest path from $s$ to $t$

Let $s$ and $t$ be two vertices in a undirected graph $G=(V,E)$ having distinct positive edge weights. Let $[X,Y]$ be a partition of $V$ such that $s \in X$ and $t \in Y$. Consider the edge $e$ having the minimum  weight amongst all those edges that have one vertex in $X$ and one vertex in $Y$.

Let the weight of an edge $e$ denote the congestion on that edge. The congestion on a path is defined to be the maximum of the congestions on the edges of the path. We wish to find the path from $s$ to $t$ having minimum congestion. Which of the following paths is always such a path of minimum congestion?

  1. a path from $s$ to $t$ in the minimum weighted spanning tree

  2. a weighted shortest path from $s$ to $t$

  3. an Euler walk from $s$ to $t$

  4. a Hamiltonian path from $s$ to $t$

In a depth-first traversal of a graph $G$ with $n$ vertices, $k$ edges are marked as tree edges. The number of connected components in $G$ is

  1. $k$
  2. $k+1$
  3. $n-k-1$
  4. $n-k$

In the following table, the left column contains the names of standard graph algorithms and the right column contains the time complexities of the algorithms. Match each algorithm with its time complexity.$$\begin{array}{|ll|ll|}\hline \text{1.} & \text{Bellman-Ford algorithm} & \text{A:} & \text{$O(m\log n)$} \\\hline  \text{2.} & \text{Kruskal’s algorithm} & \text{B:}& \text{$O(n^3)$} \\\hline   \text{3.}& \text{Floyd-Warshall algorithm} & \text{C:}  & \text{$O(nm)$} \\\hline  \text{4.} & \text{Topological sorting} &\text{D:}  & \text{$O(n+m)$}  \\\hline \end{array}$$

  1. $\text{1→ C, 2 → A, 3 → B, 4 → D}$
  2. $\text{1→ B, 2 → D, 3 → C, 4 → A}$
  3. $\text{1→ C, 2 → D, 3 → A, 4 → B}$
  4. $\text{1→ B, 2 → A, 3 → C, 4 → D}$

A sink in a directed graph is a vertex i such that there is an edge from every vertex $j \neq i$ to $i$ and there is no edge from $i$ to any other vertex. A directed graph $G$ with $n$ vertices is represented by its adjacency matrix $A$, where $A[i] [j] = 1$ if there is an edge directed from vertex $i$ to $j$ and $0$ otherwise. The following algorithm determines whether there is a sink in the graph $G$.

i = 0;
do {
    j = i + 1;
    while ((j < n) && E1) j++;
    if (j < n) E2;
} while (j < n);
flag = 1;
for (j = 0; j < n; j++)
    if ((j! = i) && E3) flag = 0;
if (flag) printf("Sink exists"); 
else printf ("Sink does not exist");

Choose the correct expressions for $E_1$ and $E_2$

  1. $E_1 : A[i][j]$ and $E_2 : i = j$;
  2. $E_1 :\ !A[i][j]$ and $E_2  : i = j + 1$;
  3. $E_1:\ !A[i][j]$ and $E_2 : i = j$;
  4. $E_1 : A[i][j]$ and $E_2 : i = j + 1$;

A sink in a directed graph is a vertex i such that there is an edge from every vertex $j \neq i$ to $i$ and there is no edge from $i$ to any other vertex. A directed graph $G$ with $n$ vertices is represented by its adjacency matrix $A$, where $A[i] [j] = 1$ if there is an edge directed from vertex $i$ to $j$ and $0$ otherwise. The following algorithm determines whether there is a sink in the graph $G$.

i = 0;
do {
    j = i + 1;
    while ((j < n) && E1) j++;
    if (j < n) E2;
} while (j < n);
flag = 1;
for (j = 0; j < n; j++)
    if ((j! = i) && E3) flag = 0;
if (flag) printf("Sink exists") ;
else printf ("Sink does not exist");

Choose the correct expression for $E_3$

  1. $(A[i][j] \ \&\& \ !A[j][i])$
  2. $(!A[i][j] \ \&\& \ A[j][i])$
  3. $(!A[i][j] \ \left | \right | A[j][i])$
  4. $(A[i][j] \ \left | \right | \ !A[j][i])$

To implement Dijkstra’s shortest path algorithm on unweighted graphs so that it runs in linear time, the data structure to be used is:

  1. Queue
  2. Stack
  3. Heap
  4. B-Tree

Let $T$ be a depth first search tree in an undirected graph $G$. Vertices $u$ and $ν$ are leaves of this tree $T$. The degrees of both $u$ and $ν$ in $G$ are at least $2$. which one of the following statements is true?

  1. There must exist a vertex $w$ adjacent to both $u$ and $ν$ in $G$
  2. There must exist a vertex $w$ whose removal disconnects $u$ and $ν$ in $G$
  3. There must exist a cycle in $G$ containing $u$ and $ν$
  4. There must exist a cycle in $G$ containing $u$ and all its neighbours in $G$

Which of the following is the correct decomposition of the directed graph given below into its strongly connected components?

  1. $\left \{ P, Q, R, S \right \}, \left \{  T \right \},\left \{  U \right \}, \left \{  V \right \}$
  2. $\left \{ P,Q, R, S, T, V \right \}, \left \{  U  \right \}$ 
  3. $\left \{ P, Q, S, T, V \right \}, \left \{  R \right \},\left \{  U \right \}$ 
  4. $\left \{ P, Q, R, S, T, U, V \right \}$ 

Consider the depth-first-search of an undirected graph with $3$ vertices $P$, $Q$, and $R$. Let discovery time $d(u)$ represent the time instant when the vertex $u$ is first visited, and finish time $f(u)$ represent the time instant when the vertex $u$ is last visited. Given that

$$\begin{array}{l|l}\hline \text{$d(P) = 5$ units }  &  \text{ $f(P) = 12$ units  } \\\hline  \text{$d(Q) = 6$ units} & \text{ $f(Q) = 10$ units} \\\hline  \text{$d(R) = 14$ unit} & \text{$f(R) = 18$ units}  \\\hline \end{array}$$
Which one of the following statements is TRUE about the graph?

  1. There is only one connected component
  2. There are two connected components, and $P$ and $R$ are connected
  3. There are two connected components, and $Q$ and $R$ are connected
  4. There are two connected components, and $P$ and $Q$ are connected

In an unweighted, undirected connected graph, the shortest path from a node $S$ to every other node is computed most efficiently, in terms of time complexity, by

  1. Dijkstra’s algorithm starting from $S$.

  2. Warshall’s algorithm.

  3. Performing a DFS starting from $S$.

  4. Performing a BFS starting from $S$.

Consider the DAG with $V = \{1,2,3,4,5,6\}$ shown below.

Which of the following is not a topological ordering?

  1. $1$ $2$ $3$ $4$ $5$ $6$
  2. $1$ $3$ $2$ $4$ $5$ $6$
  3. $1$ $3$ $2$ $4$ $6$ $5$
  4. $3$ $2$ $4$ $1$ $6$ $5$

A depth-first search is performed on a directed acyclic graph. Let $d[u]$ denote the time at which vertex $u$ is visited for the first time and $f[u]$ the time at which the DFS call to the vertex $u$ terminates. Which of the following statements is always TRUE for all edges $(u, v)$ in the graph ?

  1. $d[u] < d[v]$
  2. $d[u] < f[v]$
  3. $f[u] < f[v]$
  4. $f[u] > f[v]$

Consider a weighted, undirected graph with positive edge weights and let $uv$ be an edge in the graph. It is known that the shortest path from the source vertex $s$ to $u$ has weight 53 and the shortest path from $s$ to $v$ has weight 65. Which one of the following statements is always TRUE?

  1. Weight $(u,v) \leq 12$
  2. Weight $(u,v) = 12$
  3. Weight $(u,v) \geq 12$
  4. Weight $(u,v) > 12$

The Breadth First Search algorithm has been implemented using the queue data structure. One possible order of visiting the nodes of the following graph is:

  1. $MNOPQR$
  2. $NQMPOR$
  3. $QMNPRO$
  4. $QMNPOR$

Dijkstra's single source shortest path algorithm when run from vertex $a$ in the above graph, computes the correct shortest path distance to

  1. only vertex $a$

  2. only vertices $a, e, f, g, h$

  3. only vertices $a, b, c, d$

  4. all the vertices

The most efficient algorithm for finding the number of connected components in an undirected graph on $n$ vertices and $m$ edges has time complexity

  1. $\Theta(n)$
  2. $\Theta(m)$
  3. $\Theta(m+n)$
  4. $\Theta(mn)$

Consider the following sequence of nodes for the undirected graph given below:

  1. $a$ $b$ $e$ $f$ $d$ $g$ $c$
  2. $a$ $b$ $e$ $f$ $c$ $g$ $d$
  3. $a$ $d$ $g$ $e$ $b$ $c$ $f$
  4. $a$ $d$ $b$ $c$ $g$ $e$ $f$

A Depth First Search (DFS) is started at node $a$. The nodes are listed in the order they are first visited. Which of the above is/are possible output(s)?

  1. $1$ and $3$ only
  2. $2$ and $3$ only
  3. $2, 3$ and $4$ only
  4. $1, 2$ and $3$ only

Which of the following statement(s) is/are correct regarding Bellman-Ford shortest path algorithm?

P: Always finds a negative weighted cycle, if one exists.

Q: Finds whether any negative weighted cycle is reachable from the source.

  1. $P$ only
  2. $Q$ only
  3. Both $P$ and $Q$
  4. Neither $P$ nor $Q$

Consider the directed graph shown in the figure below. There are multiple shortest paths between vertices $S$ and $T$. Which one will be reported by Dijkstra’s shortest path algorithm? Assume that, in any iteration, the shortest path to a vertex $v$ is updated only when a strictly shorter path to $v$ is discovered.

  1.  $\text{SDT}$
  2.  $\text{SBDT}$
  3.  $\text{SACDT}$
  4.  $\text{SACET}$

What is the time complexity of Bellman-Ford single-source shortest path algorithm on a complete graph of n vertices?

  1. $\theta(n^2)$     
  2. $\theta(n^2\log n)$     
  3. $\theta(n^3)$     
  4. $\theta(n^3\log n)$

Let $G$ be a graph with $n$ vertices and $m$ edges.What is the tightest upper bound on the running time of Depth First Search on $G$, when $G$ is represented as an adjacency matrix?

  1. $\Theta(n)$
  2. $\Theta(n+m)$
  3. $\Theta(n^2)$
  4. $\Theta(m^2)$

Consider the directed graph below given. 

Which one of the following is TRUE?

  1. The graph does not have any topological ordering.
  2. Both PQRS and SRQP are topological orderings.
  3. Both PSRQ and SPRQ are topological orderings.
  4. PSRQ is the only topological ordering.

Consider the tree arcs of a BFS traversal from a source node $W$ in an unweighted, connected, undirected graph. The tree $T$ formed by the tree arcs is a data structure for computing

  1. the shortest path between every pair of vertices.
  2. the shortest path from $W$ to every vertex in the graph.
  3. the shortest paths from $W$ to only those nodes that are leaves of $T$.
  4. the longest path in the graph.

Suppose depth first search is executed on the graph below starting at some unknown vertex. Assume that a recursive call to visit a vertex is made only after first checking that the vertex has not been visited earlier. Then the maximum possible recursion depth (including the initial call) is _________.

Let $G = (V, E)$ be a simple undirected graph, and $s$ be a particular vertex in it called the source. For $x \in V$, let $d(x)$ denote the shortest distance in $G$ from $s$ to $x$. A breadth first search (BFS) is performed starting at $s$. Let $T$ be the resultant BFS tree. If $(u, v)$ is an edge of $G$ that is not in $T$, then which one of the following CANNOT be the value of $d(u) - d(v)$?

  1. $-1$
  2. $0$
  3. $1$
  4. $2$

Consider the following directed graph:

The number of different topological orderings of the vertices of the graph is _____________.

Breadth First Search (BFS) is started on a binary tree beginning from the root vertex. There is a vertex $t$ at a distance four from the root. If $t$ is the $n^{th}$ vertex in this BFS traversal, then the maximum possible value of $n$ is __________

In an adjacency list representation of an undirected simple graph $G=(V, E)$, each edge $(u, v)$ has two adjacency list entries: $[v]$ in the adjacency list of $u$, and $[u]$ in the adjacency list of $v$. These are called twins of each other. A twin pointer is a pointer from an adjacency list entry to its twin. If $|E|=m$ and $|V|=n$, and the memory size is not a constraint, what is the time complexity of the most efficient algorithm to set the twin pointer in each entry in each adjacency list?

  1. $\Theta\left(n^{2}\right)$
  2. $\Theta\left(n+m\right)$
  3. $\Theta\left(m^{2}\right)$
  4. $\Theta\left(n^{4}\right)$

Let $G=\left ( V,E \right )$ be $any$ connected, undirected, edge-weighted graph. The weights of the edges in $E$ are positive and distinct. Consider the following statements:

  1. Minimum Spanning Tree of $G$ is always unique.
  2. Shortest path between any two vertices of $G$ is always unique.

Which of the above statements is/are necessarily true?

  1. I only
  2. II only
  3. both I and II
  4. neither I nor II

The Breadth First Search (BFS) algorithm has been implemented using the queue data structure. Which one of the following is a possible order of visiting the nodes in the graph below?

  1. $\text{MNOPQR}$
  2. $\text{NQMPOR}$
  3. $\text{QMNROP}$
  4. $\text{POQNMR}$

Given a weighted directed graph with $n$ vertices where edge weights are integers (positive, zero, or negative), determining whether there are paths of arbitrarily large weight can be performed in time

  1. $O(n)$
  2. $O(n . \log(n))$ but not $O (n)$
  3. $O(n^{1.5})$ but not $O (n \log n)$
  4. $O(n^{3})$ but not $O(n^{1.5})$
  5. $O(2^{n})$ but not $O(n^{3})$

Consider the following directed graph.

Suppose a depth-first traversal of this graph is performed, assuming that whenever there is a choice, the vertex earlier in the alphabetical order is to be chosen. Suppose the number of tree edges is $T$, the number of back edges is $B$ and the number of cross edges is $C$. Then

  1. $B = 1$, $C = 1$, and $T = 4$.
  2. $B = 0$, $C = 2$, and $T = 4$.
  3. $B = 2$, $C = 1$, and $T = 3$.
  4. $B = 1$, $C = 2$, and $T = 3$.
  5. $B = 2$, $C = 2$, and $T = 1$.

Answers: Graph Algorithms

Selected Answer

Answer is $B$.

  1. True.
  2. False.
  3. True.
  4. True.
18 votes -- Rajarshi Sarkar ( 27.6k points)
Selected Answer
  1. While adding vertex $u$ to $I$, it should not have an edge with any node already in $I$.
  2. The outer loop runs until $V$ is empty ($O(n)$ iterations), and each iteration checks $u$ against every vertex $v$ in the set $I$ ($O(n)$ time). So, the overall complexity is $O(n^2)$.
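A minimal Python sketch of the completed algorithm (selecting a leaf of the remaining tree for the first blank is an assumption; the second blank is the condition stated above):

```python
def greedy_max_independent_set(n, edges):
    # adjacency sets for the tree on vertices 0..n-1
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    V = set(range(n))
    I = set()
    while V:
        # blank 1 (assumed): select u with minimum degree in the remaining tree (a leaf)
        u = min(V, key=lambda v: len(adj[v] & V))
        V.remove(u)
        # blank 2: add u only if it has no edge to a vertex already in I
        if not (adj[u] & I):
            I.add(u)
    return I

# path 0-1-2-3: a maximum independent set has size 2
assert len(greedy_max_independent_set(4, [(0, 1), (1, 2), (2, 3)])) == 2
```

Both the selection and the membership check scan $O(n)$ vertices per iteration, matching the $O(n^2)$ bound in part (b).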
20 votes -- Rajarshi Sarkar ( 27.6k points)
Selected Answer

$\quad\textbf{DIJKSTRA}{(G,w,s)}$
$\quad 1 \quad \textbf{INITIALIZE-SINGLE-SOURCE}(G,s)$
$\quad 2 \quad S = \emptyset$
$\quad 3\quad Q = G.V$
$\quad 4\quad \textbf{while } Q \neq \emptyset$
$\quad 5\quad \quad u = \textbf{EXTRACT-MIN}(Q)$
$\quad 6\quad \quad S = S \cup \{u\}$
$\quad 7\quad \quad \textbf{for each vertex } v \in G.Adj[u]$
$\quad 8\quad \quad\quad \textbf{RELAX}(u,v,w)$

Correct Solutions:
$(A).$

$Q=\left\{\overset{\boxed{0}}{\text{A}},\overset{\boxed{\infty}}{\text{B}}, \overset{\boxed{\infty}}{\text{C}}, \overset{\boxed{\infty}}{\text{D}}, \overset{\boxed{\infty}}{\text{E}}, \overset{\boxed{\infty}}{\text{F}}   \right\}$

  • First we visit $\text{A}$ as it is the one with smallest distance $0$
  • Relax operation updates distances to $\text{B,C,F}$ as $6,90,70$ respectively
    $Q=\left\{\overset{\boxed{6}}{\text{B}}, \overset{\boxed{90}}{\text{C}}, \overset{\boxed{\infty}}{\text{D}}, \overset{\boxed{\infty}}{\text{E}}, \overset{\boxed{70}}{\text{F}}   \right\}$
  • $\text{B}$ is next visited as its distance is now $6$
  • Relax operation updates distance to $\text{D}$ as $41+6=47$
    $Q=\left\{ \overset{\boxed{90}}{\text{C}}, \overset{\boxed{47}}{\text{D}}, \overset{\boxed{\infty}}{\text{E}}, \overset{\boxed{70}}{\text{F}}   \right\}$
  • $\text{D}$ is next visited as its distance is now $47$
  • Relax operation updates distance to $\text{C}$ as $47+12=59$
    $Q=\left\{ \overset{\boxed{59}}{\text{C}},  \overset{\boxed{\infty}}{\text{E}}, \overset{\boxed{70}}{\text{F}}   \right\}$
  • $\text{C}$ is next visited as its distance is now $59$
  • Relax operation updates distance to $\text{F}$ as $59+10=69$
    $Q=\left\{ \overset{\boxed{\infty}}{\text{E}}, \overset{\boxed{69}}{\text{F}}   \right\}$
  • $\text{F}$ is next visited as its distance is now $69$
  • Relax operation updates distance to $\text{E}$ as $69+15=84$
    $Q=\left\{\overset{\boxed{84}}{\text{E}}\right\}$
  • Finally $\text{E}$ is visited.

So, the sequence of node visits is $\text{A}, \text{B}, \text{D}, \text{C}, \text{F}, \text{E}$

$(B).$ Sequence of vertices in the shortest path from $\text{A}$ to $\text{E}$: $\text{A}- \text{B}- \text{D}-\text{C}- \text{F}- \text{E}$

$(C).$ Cost of the shortest path from $\text{A}$ to $\text{E} = 84.$
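The trace can be reproduced with a short Python Dijkstra; the edge list below is reconstructed from the relaxation steps above, so treat it as an assumption about the figure:

```python
import heapq

# directed edges reconstructed from the relaxations in the trace (assumed figure)
graph = {'A': [('B', 6), ('C', 90), ('F', 70)],
         'B': [('D', 41)],
         'C': [('F', 10)],
         'D': [('C', 12)],
         'E': [],
         'F': [('E', 15)]}

def dijkstra(adj, src):
    dist = {src: 0}
    order = []                          # vertices in the order they are finalized
    pq = [(0, src)]
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:                   # stale entry left behind by a later relaxation
            continue
        done.add(u)
        order.append(u)
        for v, w in adj[u]:
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return order, dist

order, dist = dijkstra(graph, 'A')
# order == ['A', 'B', 'D', 'C', 'F', 'E'] and dist['E'] == 84
```

This reproduces parts $(A)$ and $(C)$ of the answer.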

18 votes -- Manu Thakur ( 32.9k points)
Selected Answer
Answer is $(A)$: the Floyd-Warshall algorithm, the standard way to find all pairs of shortest distances, is a dynamic programming algorithm.
21 votes -- shashi shekhar ( 439 points)
Selected Answer

Answer is $C$.

X - 3: DFS uses a stack (implicitly, via recursion)
Y - 2: BFS uses a queue explicitly in the algorithm
Z - 1: sorting — heapsort uses a heap
 

19 votes -- Akash Kanase ( 35.2k points)
Selected Answer

Answer is $(C)$.

BFS computes shortest paths from the source (when all edge costs are $1$).

Now, if $u$ is visited before $v$, it means one of two things:

  1. Either $u$ is closer to $r$ than $v$ is, or,
  2. $u$ and $v$ are at the same distance from $r$, and the BFS algorithm happened to visit $u$ before $v$.

In either case, $d(r,u) \leq d(r,v)$.
40 votes -- Akash Kanase ( 35.2k points)
INITIALIZATION: For i = 1 ... n
    {For j = 1 ... n
        { if a[i,j] = 0 then P[i,j] = infinity
        // i.e. if there is no direct edge then put infinity
          else P[i,j] = a[i,j];
         }
    }
ALGORITHM: 
For i = 1 ... n
    {For j = 1 ... n
        {For k = 1 ... n
            {
               P[j,k] = min( P[j,k], P[j,i] + P[i,k] )
            };
        }
    }              

Note that the outermost index $i$ plays the role of the intermediate vertex here, so with the loop variables renamed to fit the given template this is exactly the Floyd-Warshall recurrence. (Filling the blanks as $P[i,j] = \min(P[i,j], P[i,k]+P[k,j])$ would be wrong, since the intermediate vertex $k$ must not vary in the innermost loop.)

Time complexity $O(n^3)$.

This algorithm is stated for weighted graphs, but it works for the given $0/1$ matrix too: if $P[j,k]=1$, $P[j,i]=1$ and $P[i,k]=1$, then $P[j,k] = \min(P[j,k], P[j,i] + P[i,k]) = \min(1,2) = 1$.

All the other cases are also handled correctly $($e.g., if $P[j,k]$ was $\infty$ in the last iteration and there exists a path via $i$, it gets updated$)$.
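A direct Python transcription of the filled template, assuming (as in the standard Floyd-Warshall ordering) that the outermost loop index acts as the intermediate vertex:

```python
INF = float('inf')

def all_pairs_shortest_lengths(a):
    n = len(a)
    # INITIALIZATION: no edge -> infinity, edge -> its weight (1 in a 0/1 matrix)
    P = [[a[i][j] if a[i][j] != 0 else INF for j in range(n)] for i in range(n)]
    # ALGORITHM: the outer index i is the intermediate vertex
    for i in range(n):
        for j in range(n):
            for k in range(n):
                P[j][k] = min(P[j][k], P[j][i] + P[i][k])
    return P

# tiny check on the path 0 -> 1 -> 2
P = all_pairs_shortest_lengths([[0, 1, 0],
                                [0, 0, 1],
                                [0, 0, 0]])
# P[0][2] == 2, P[1][0] == INF
```

The three nested loops give the $O(n^3)$ running time.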

17 votes -- Saurav Kumar Gupta ( 1.6k points)
Selected Answer
For GATE purposes, we can answer by inspecting the options, without fully tracing the DFS. In DFS we go depth-first, i.e., from each node we move to one of its adjacent nodes.

Here, $abfehg$ (sequence II) is not possible, as we cannot go from $f$ to $e$ directly; in each of the other sequences, every node is adjacent to a previously visited node at the time it is visited.

Thus, option $(D)$, "I, III and IV only", is correct.
19 votes -- Monanshi Jain ( 6.9k points)
Selected Answer

After running the shortest path algorithm, check the computed cost from the source to every vertex of $G_1$. If $G_1$ is connected, all these costs must be $0$, since the edges of $G_1$ have weight $0$ and a $0$-cost path is necessarily shortest. If the cost to at least one vertex of $V_1$ is not $0$, then $G_1$ is disconnected.

Answer is B.
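A small sketch of this check (the graph below is illustrative): assign the $0/1$ weights, run single-source shortest paths from a vertex of $V_1$, and test whether every vertex of $V_1$ has cost $0$.

```python
import heapq

def costs_from(adj, src):
    # plain Dijkstra; with 0/1 weights a 0-1 BFS would also do
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def g1_connected(n, E, E1, V1):
    adj = {v: [] for v in range(n)}
    for u, v in E:
        w = 0 if (u, v) in E1 or (v, u) in E1 else 1   # the weight rule w(e)
        adj[u].append((v, w))
        adj[v].append((u, w))
    dist = costs_from(adj, next(iter(V1)))
    return all(dist.get(v, float('inf')) == 0 for v in V1)

# V1 = {0, 1, 2}: with E1 = {(0,1), (1,2)} every V1 vertex costs 0
assert g1_connected(4, [(0, 1), (1, 2), (2, 3)], {(0, 1), (1, 2)}, {0, 1, 2})
# drop (1,2) from E1: vertex 2 now costs 1, so G1 is disconnected
assert not g1_connected(4, [(0, 1), (1, 2), (2, 3)], {(0, 1)}, {0, 1, 2})
```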

56 votes -- Anurag Semwal ( 6.7k points)
Selected Answer

D is correct.

Consider a graph with $2$ nodes and one edge from $V_1$ to $V_2$,

Running the above algorithm will result in $A$ being 

$$\begin{array}{|l|l|l|} \hline \textbf{A} & \textbf{1} & \textbf{2} \\\hline \textbf{1} & \text{1} & \text{2} \\\hline \textbf{2} & \text{1} & \text{2} \\\hline \end{array}$$

Clearly options B and C are wrong. Since

  1. $A[1][1]$ and $A[2][2]$ are $\geq n-1$, yet there exists no Hamiltonian cycle. Hence B is invalid.
  2. The longest path from $V_1$ to $V_2$ has length $1$, but $A[1][2]$ is $2$; and there is no path from $V_2$ to $V_1$, yet $A[2][1] = 1$. (The algorithm computes a maximum of sums, not path lengths.) Hence C is invalid.

Hence A or D could be valid.

Now consider a graph with $2$ nodes and two edges, one from $V_1$ to $V_2$ and the other from $V_2$ to $V_1$. Running the above algorithm will result in $A$ being
$$\begin{array}{|l|l|l|} \hline \textbf{A} & \textbf{1} & \textbf{2} \\\hline \textbf{1} & \text{2} & \text{3} \\\hline \textbf{2} & \text{3} & \text{4} \\\hline \end{array}$$

Hence option $A$ is invalid, as $A[i][j]$ can be $>n.$

D is correct
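Both runs above can be verified mechanically with a direct transcription of the algorithm in the question:

```python
def run(A):
    # in-place updates, loop order exactly as given: i, j, k
    A = [row[:] for row in A]
    n = len(A)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                A[j][k] = max(A[j][k], A[j][i] + A[i][k])
    return A

# single edge V1 -> V2
assert run([[0, 1], [0, 0]]) == [[1, 2], [1, 2]]
# edges in both directions
assert run([[0, 1], [1, 0]]) == [[2, 3], [3, 4]]
```

In the second run $A[j][k]$ exceeds $n$, which rules out option A.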

45 votes -- ryan sequeira ( 3k points)
Selected Answer

Answer is (B). In Dijkstra's algorithm, at each step we pick the unvisited vertex with the smallest tentative distance from the source, finalize its shortest path, and relax the edges leaving it.

26 votes -- gate_asp ( 619 points)

Take a tree for example 

 

  1. False. Every internal (non-leaf) vertex of a tree is a cut vertex.
  2. True.
  3. False. Without the edge $E$ in $G_1$ and $G_2$, $G_1 \cup G_2$ has no bridge.
  4. False. $G_1 \cup G_2$, $G_1$ and $G_2$ can all have the same chromatic number of $2$.
41 votes -- srestha ( 83.4k points)
Selected Answer

Answer is D.

$A$ and $B$ produce disconnected components at some intermediate step in the given order, which is NEVER allowed by Prim's algorithm.

$C$ produces a connected component every time a new edge is added, BUT once the first vertex is chosen (the first vertex is chosen arbitrarily), the first edge added must be the minimum weight edge incident on it. Therefore, $(A,D)$ MUST be chosen BEFORE $(A,B)$, and $C$ is FALSE.

24 votes -- Sandeep_Uniyal ( 6.4k points)
Selected Answer
Option $(D)$: Binary heap. $|E|$ decrease-key operations, each taking $O\left(\log|V|\right)$ time, plus $|V|$ extract-min operations, each taking $O\left(\log|V|\right)$ time, for $O\left(\left(|E|+|V|\right)\log|V|\right)$ overall.

Option $(B)$: Fibonacci heap. $|E|$ decrease-key operations, each taking $O(1)$ amortized time, plus $|V|$ extract-min operations, each taking $O\left(\log|V|\right)$ time.

Option $(A)$: Array. Finding the minimum vertex in each iteration takes $O(|V|)$, and this needs to be done $|V|$ times, for $O\left(|V|^2\right)$.

Binomial Heap is same as Binary heap here, as the critical operations are decrease key and extract-min.

Correct Answer: $D$
57 votes -- Gate Keeda ( 15.8k points)
Selected Answer

The answer should be Option A, because edge $e$ is the lightest edge crossing the cut $[X, Y]$, so by the cut property (the greedy choice is safe) every minimum spanning tree of $G$ must contain $e$.
While option $(B)$ might seem correct, it is not always true. One such case is when $G$ is not connected, so there may be no path between $s$ and $t$ at all; the example below shows that $e$ can also miss the shortest path in a connected graph.
Since the question asks what is definitely TRUE, $(B)$ is incorrect and $(A)$ is the only correct option.

Let's say $AC = 1$, $CD = 2$, $BD = 3$ and $AB = 4$.

Then if $s = A$ and $t = B$, $AC$ is the lightest edge crossing $[X, Y]$ where $X = \{A\}$ and $Y = \{C, B, D\}$.
But clearly $AC$ is not on the shortest path from $A$ to $B$: the shortest path is the direct edge $AB$, of weight $4$.
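The counterexample can be checked in code, using the vertex and edge names from the text above:

```python
import heapq

EDGES = {('A', 'C'): 1, ('C', 'D'): 2, ('B', 'D'): 3, ('A', 'B'): 4}

def kruskal(edges):
    # Kruskal's MST with union-find (path compression)
    parent = {}
    def find(x):
        while parent.setdefault(x, x) != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    mst = set()
    for (u, v), w in sorted(edges.items(), key=lambda e: e[1]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.add((u, v))
    return mst

def shortest(edges, s, t):
    # Dijkstra over the undirected weighted graph
    adj = {}
    for (u, v), w in edges.items():
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist[t]

assert ('A', 'C') in kruskal(EDGES)     # the lightest cut edge is in the MST
assert shortest(EDGES, 'A', 'B') == 4   # but the shortest A-B path is the direct edge
```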

27 votes -- chandan1223 ( 169 points)
Selected Answer

Here the answer should be A.

In the figure, the weighted shortest path from $s$ to $t$ has total weight $6$. The minimum spanning tree contains edges of weights $2$, $3$ and $4$, so its congestion is $\max(2,3,4) = 4$; the tree path from $s$ to $t$ has congestion $\max(3,4) = 4$ but total weight $7$. So the MST path minimizes congestion even though it is not the shortest path.

Options $C$ and $D$ are not relevant to this question.
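A graph consistent with these numbers can be brute-forced; this reconstruction is an assumption, since the original figure is not reproduced here: edges $s{-}a = 3$, $a{-}t = 4$, $a{-}b = 2$ form the spanning tree and $s{-}t = 6$ is the direct edge.

```python
from itertools import permutations

# reconstructed example (an assumption): MST edges weigh 2, 3, 4; direct s-t weighs 6
W = {('s', 'a'): 3, ('a', 't'): 4, ('a', 'b'): 2, ('s', 't'): 6}

def w(u, v):
    return W.get((u, v), W.get((v, u)))

def simple_paths(src, dst, inner):
    # enumerate every simple path in this tiny graph
    for k in range(len(inner) + 1):
        for mid in permutations(inner, k):
            p = (src,) + mid + (dst,)
            if all(w(p[i], p[i + 1]) is not None for i in range(len(p) - 1)):
                yield p

def congestion(p):
    return max(w(p[i], p[i + 1]) for i in range(len(p) - 1))

def total(p):
    return sum(w(p[i], p[i + 1]) for i in range(len(p) - 1))

ps = list(simple_paths('s', 't', ['a', 'b']))
assert congestion(min(ps, key=congestion)) == 4   # the MST path s-a-t wins on congestion
assert total(min(ps, key=total)) == 6             # but the shortest path is s-t directly
assert congestion(min(ps, key=total)) == 6
```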

25 votes -- papesh ( 17.9k points)
Selected Answer

Tree edges are those edges which appear in the final DFS forest. For example in case of connected graph (i.e. resulting DFS forest containing only one tree), if we run DFS over it, edges belonging to the resulting DFS tree are called tree edges.

Let us assume the graph has $x$ connected components, and that the $1$st component has $K_1$ tree edges, the $2$nd component has $K_2$ tree edges, $\ldots$, and the $x$th component has $K_x$ tree edges.

Such that  $K_1 + K_2 + K_3 + \ldots + K_x = K $ ( = total)

Or in other way we can imagine like, the final DFS forest has $x$ trees and those trees are having $K_1$ , $K_2$ , $K_3, \ldots, K_x$ edges respectively.

Now we know that a tree having $K_x$ edges contains $K_x + 1$ nodes; similarly, a tree having $K_1$ edges contains $K_1 + 1$ nodes, and so on.

So, summation of nodes in each tree $= n$

$(K_1 + 1) + (K_2 + 1) + (K_3 + 1) + \ldots+ (K_x + 1) = n \\ \ \ \ \ \implies (K_1+K_2+K_3+\ldots+K_x) + x = n \\ \ \ \ \ \implies x = n - K$ 

Correct Answer: $D$
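The identity $x = n - K$ can be checked with a quick DFS that counts tree edges (the example graph is illustrative):

```python
def components_and_tree_edges(n, edges):
    adj = [[] for _ in range(n)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    seen = [False] * n
    components = tree_edges = 0
    for s in range(n):
        if seen[s]:
            continue
        components += 1           # each unvisited start vertex roots a new DFS tree
        seen[s] = True
        stack = [s]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if not seen[v]:
                    seen[v] = True
                    tree_edges += 1    # (u, v) is marked as a tree edge
                    stack.append(v)
    return components, tree_edges

# 6 vertices: a triangle, one isolated edge, one isolated vertex
x, k = components_and_tree_edges(6, [(0, 1), (1, 2), (2, 0), (3, 4)])
assert (x, k) == (3, 3) and x == 6 - k
```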

72 votes -- Debashish Deka ( 40.4k points)
Selected Answer
  1. Bellman-Ford algorithm $\implies \text {option} (C)$, $O(nm)$. With $n$ vertices and $m$ edges, we relax all $m$ edges $n-1$ times, giving $O(nm)$.
  2. Kruskal’s algorithm $\implies$ Remaining Option $(A)$ :  $O ( m \log n)$.
  3. Floyd-Warshall algorithm $\implies$ option $(B)$, Dynamic Programming Algo, $O(N^3)$.
  4. Topological sorting $\implies \text {option} (D)$: it boils down to DFS, $O(n+m)$.

Answer (A).
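A sketch of Bellman-Ford showing where the $O(nm)$ bound comes from, with the extra pass that detects a negative weight cycle reachable from the source:

```python
def bellman_ford(n, edges, src):
    INF = float('inf')
    dist = [INF] * n
    dist[src] = 0
    for _ in range(n - 1):            # n - 1 rounds ...
        for u, v, w in edges:         # ... each relaxing all m edges: O(nm)
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:             # one extra round: any further improvement
        if dist[u] + w < dist[v]:     # means a negative cycle reachable from src
            return None
    return dist

assert bellman_ford(3, [(0, 1, 4), (1, 2, -2)], 0) == [0, 4, 2]
assert bellman_ford(3, [(0, 1, 1), (1, 2, -1), (2, 1, -1)], 0) is None
```

On a complete graph $m = \Theta(n^2)$, so the same bound also explains the $\Theta(n^3)$ cost asked about in the earlier Bellman-Ford question.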

22 votes -- Akash Kanase ( 35.2k points)
Selected Answer

If there is a sink in the graph, the adjacency matrix will contain all 1's (except diagonal) in one column and all 0's (except diagonal) in the corresponding row of that vertex. The given algorithm is a smart way of doing this as it finds the sink in $O(n)$ time complexity. 

The first part of the code finds whether there is any vertex which doesn't have any outgoing edge to any vertex coming after it in the adjacency matrix. The smart part of the code is $E_2$, which makes rows skip when there is no edge from $i$ to them, making it impossible for them to form a sink. This is done through

  • $E_1:\ !A[i][j]$ and $E_2: i = j$; 

$E_1$ makes sure that there is no edge from $i$ to $j$ and $i$ is a potential sink till $A[i][j]$ becomes $1$. If $A[i][j]$ becomes $1$, $i$ can no longer be a sink, similarly all previous j can also not be a sink (as there was no edge from $i$ to them and a sink requires an edge from all other vertices). Now, the next potential candidate for sink is $j$. So, in $E_2$, we must make $i = j$. 

So, answer is (C)
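With $E_1$ and $E_2$ filled in as above (and $E_3$ from the linked part), the whole algorithm translates to Python as:

```python
def find_sink(A):
    n = len(A)
    i = 0
    while True:                        # the do-while loop
        j = i + 1
        while j < n and not A[i][j]:   # E1: !A[i][j] -- i still looks like a sink
            j += 1
        if j < n:
            i = j                      # E2: i = j -- j becomes the next candidate
        if j >= n:                     # do-while condition: while (j < n)
            break
    # candidate check: every j != i needs an edge j -> i and no edge i -> j
    for j in range(n):
        if j != i and (A[i][j] or not A[j][i]):   # E3: A[i][j] || !A[j][i]
            return None
    return i

# vertex 2 is a sink: edges 0->1, 0->2, 1->2 and nothing out of 2
assert find_sink([[0, 1, 1], [0, 0, 1], [0, 0, 0]]) == 2
# a 2-cycle has no sink
assert find_sink([[0, 1], [1, 0]]) is None
```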

For $E_3$,  https://gateoverflow.in/3857/gate2005-it_84b

References
38 votes -- Arjun Suresh ( 328k points)
Selected Answer

If there is a sink in the graph, the adjacency matrix will contain all 1s (except diagonal) in one column and all 0s (except diagonal) in the corresponding row of that vertex. The given algorithm is a smart way of doing this as it finds the sink in $O(n)$ time complexity.

The first part of the code finds whether there is any vertex which does not have any outgoing edge to any vertex coming after it in the adjacency matrix. The smart part of the code is $E_2$, which makes rows skip when there is no edge from $i$ to them, making it impossible for them to form a sink. This is done through

$E_1:  \ !A[i][j]$
and
$E_2: i = j$;

$E_1$ makes sure that there is no edge from $i$ to $j$ and $i$ is a potential sink till $A[i][j]$ becomes $1$. If $A[i][j]$ becomes $1$, $i$ can no longer be a sink, similarly all previous $j$ can also not be a sink (as there was no edge from $i$ to them and a sink requires an edge from all other vertices). Now, the next potential candidate for sink is $j$. So, in $E_2$, we must make $i = j$.

Now, the loop breaks when we found a potential sink- that is a vertex which does not have any outgoing edge to any coming after it in adjacency matrix. So, if the column in which this vertex comes is all 1s and the row is all 0s (except diagonal), this is the sink. Otherwise there is no sink in the graph. So, $E_3$ is checking this condition. 

But in the code, $\text{flag}$ is used for storing the state of whether a sink is present or not, and as per the usage of $\text{flag}$ in the code, by default a sink is considered present. So, the condition in $E_3$ must make $\text{flag} = 0$ if the found $i$ is not a sink. So, the condition should be: 

$A[i][j] \  \left | \right |\ !A[j][i]$

So, (D) is the answer.
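The completed algorithm can be sketched as follows (a minimal Python sketch, assuming the usual two-phase structure of the original code: a candidate-elimination loop using $E_1$ and $E_2$, followed by a verification loop using $E_3$; `find_sink` is a hypothetical helper name):

```python
def find_sink(A):
    """Return the universal sink of 0/1 adjacency matrix A, or -1 if none.

    Sketch of the completed code: E1 = !A[i][j], E2 = i = j,
    E3 = A[i][j] || !A[j][i].  Assumes a zero diagonal.
    """
    n = len(A)
    i = j = 0
    while j < n:
        if not A[i][j]:      # E1: no edge i -> j, so i stays a candidate
            j += 1
        else:
            i = j            # E2: i is ruled out; j is the new candidate
    flag = 1
    for j in range(n):       # verification phase
        if j != i and (A[i][j] or not A[j][i]):   # E3: i is not a sink
            flag = 0
    return i if flag else -1
```

The first loop inspects each column at most once, so the whole check runs in $O(n)$ despite the matrix having $n^2$ entries.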

46 votes -- Arjun Suresh ( 328k points)
Selected Answer

Answer is A: Queue

We can find single source shortest path in unweighted graph by using Breadth First Search (BFS) algorithm by using "Queue" data structure , in time $O(m+n)$ (i.e. linear with respect to the number of vertices and edges. )
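A minimal sketch of that BFS (the adjacency-list input format is an assumption):

```python
from collections import deque

def bfs_shortest_paths(adj, s):
    """Single-source shortest paths in an unweighted graph in O(m + n).

    adj: dict mapping each vertex to a list of neighbours; s: source.
    """
    dist = {s: 0}
    q = deque([s])                 # the "Queue" data structure of option A
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:      # first visit gives the shortest distance
                dist[v] = dist[u] + 1
                q.append(v)
    return dist
```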

55 votes -- Mithlesh Upadhyay ( 4.3k points)
Selected Answer

One diagram eliminates options A, B and C. 
Hence, D is the answer.

52 votes -- Ahwan Mishra ( 10k points)
Selected Answer

Here the answer is B.

A graph is said to be strongly connected if every vertex is reachable from every other vertex.

A strongly connected component is always maximal, that is, if $x$ is a strongly connected component, there should not exist another strongly connected component which contains $x$.

For example, $R$ alone is strongly connected, but it is part of $PQRS$, and $PQRS$ in turn is part of $PQRSVT$; so neither $R$ nor $PQRS$ is a strongly connected component.

36 votes -- papesh ( 17.9k points)
Selected Answer

As seen in the question, after $10$ we return to $p$; since $p$ finishes before $r$ is started, $r$ must be disconnected. If there were an edge from $q$ to $r$, then $r$ would have to be visited before $q$ and $p$ finish. 

D is answer.

23 votes -- Prashant Singh ( 46.5k points)
Selected Answer

Dijkstra's and Warshall's algorithms are used only for weighted graphs.

Both DFS and BFS can be used for finding a path between $2$ vertices in an undirected and unweighted graph, but only BFS gives the shortest path, as required in the given question. So, BFS is the answer.

Note: the distinction between finding any path (DFS) and finding the shortest path (BFS) matters a lot.
 
Must Read:

https://www.quora.com/What-are-the-advantages-of-using-BFS-over-DFS-or-using-DFS-over-BFS-What-are-the-applications-and-downsides-of-each

Correct Answer: $D$

References
32 votes -- Rajesh Pradhan ( 18.8k points)
Selected Answer

Pick a vertex with indegree $0$. Remove that vertex along with all edges going out of it. Repeat the procedure.

We see that $3$ cannot come first because its indegree is not $0$. So, D is the answer here.

All other options are in topological order.

Only the relative order of $1$ and $4$ matters for this question.
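The indegree-0 elimination procedure above is essentially Kahn's algorithm; any candidate sequence can also be tested against the dependency edges directly (a sketch; the edge set below is hypothetical, since the question's figure is not reproduced here):

```python
def is_topological_order(edges, order):
    """True iff every edge (u, v) has u placed before v in `order`."""
    pos = {v: i for i, v in enumerate(order)}
    return all(pos[u] < pos[v] for u, v in edges)

# Hypothetical dependency edges for illustration only.
edges = [(1, 3), (4, 3), (3, 2)]
```

Any ordering that starts with a vertex of nonzero indegree (such as $3$ here) fails the check immediately.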

25 votes -- Prashant Singh ( 46.5k points)
Selected Answer

 

I'm gonna disprove all wrong options here:

  1. $d[u] < d[v] $, Counterexample $\implies$ start DFS on $v$ first, then later call DFS on $x$, which visits $u$.
  2. $d[u] < f[v]$, Counterexample $\implies$ same as A
  3. $f[u] < f[v]$, Counterexample $\implies$ same as A again

So, answer is D.

35 votes -- Akash Kanase ( 35.2k points)
Selected Answer
C. Weight $(u,v) \geq 12$

If weight $(u, v) < 12$, then the weight of the path $s \to u \to v = $ weight of $(s, u) + $ weight of $(u, v) = 53 + (\text{something} < 12)$, which is less than $65$, contradicting the given shortest distance to $v$.
32 votes -- Arjun Suresh ( 328k points)
Selected Answer
  1. MNOPQR: If you try to run BFS, after M, you must traverse NQR (In some order). Here, P is traversed before Q, which is wrong.
  2. NQMPOR: This is also not BFS. P is traversed before O.
  3. QMNPRO: Correct.
  4. QMNPOR: Incorrect, because R needs to be traversed before O (because M is ahead of N in the queue).

Answer :-  C

24 votes -- Akash Kanase ( 35.2k points)
Selected Answer
D.  all the vertices. Just simulate the Dijkstra's algorithm on it. Dijkstra's algorithm is not meant for graphs with negative-edge-weight-cycle, but here it does give the correct shortest path.
36 votes -- Arjun Suresh ( 328k points)
Selected Answer

Run DFS to find connected components. Its time complexity is $\Theta(m+n)$, hence (C) is the answer.

25 votes -- Happy Mittal ( 8.1k points)
Selected Answer

Answer: B
1. After $f$ is visited, $c$ or $g$ should be visited next. So, the traversal is incorrect.
4. After $c$ is visited, $e$ or $f$ should be visited next. So, the traversal is incorrect.
$2$ and $3$ are correct.

17 votes -- Rajarshi Sarkar ( 27.6k points)
Selected Answer

$\underline{\text{Bellman-ford Algorithm}}$

Single source shortest Path $O(VE)$

Relax every edge once in each iteration

$E \times \underbrace{(V-1)}_{\text{a shortest path has at most } (V-1) \text{ edges}} = E \cdot V - E = O(V \cdot E)$

$$\begin{array}{|l|l|l|l|l|}\hline  & \textbf{A}  &  \textbf{B}  &  \textbf{C} & \textbf{D}  \\\hline & \text{0} & \text{$\infty$} &\text{$\infty$} & \text{$\infty$}\\ &\text{null} & \text{null} & \text{null} & \text{null}\\\hline \text{i=0}&\text{0} & \text{50} & \text{70} & \text{$\infty$}\\ & \text{X} & \text{A} & \text{A} & \text{X}\\\hline \text{i=1} & \text{0} & \text{2} & \text{70} & \text{53} \\  & \text{X} & \text{C} & \text{A} & \text{B}\\\hline \text{i=2} & \text{0} & \text{2} & \text{70} & \text{5} \\  & \text{X} & \text{C} & \text{A} & \text{B}\\\hline \text{i=3} & \text{0} & \text{2} & \text{70} & \text{5} \\  & \text{X} & \text{C} & \text{A} & \text{B}\\\hline \end{array}$$ 

As we can see that the last step is the verification step. In that step, values remained unchanged. If there was a negative edge weight cycle reachable from source, then at verification step also, those values will be different from the values above.

In case the cycle is not reachable from source then we can see that they will be at $\infty$ distance(or cost) from the source from the beginning till the last step. As take anything away from the $\infty$ it will still be infinite.

But it can also be the case that there are some points which are not forming a cycle and are still unreachable from source, those also will be at $\infty$ distance from the source from the beginning till end.

Hence, we won't be able to make a distinction among the cycle and such vertices. Thus, we say that this algorithm can detect negative edge weight cycles only if they are reachable from the source.

Answer is option B
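The relax-$(V-1)$-times procedure with the extra verification pass can be sketched as follows (a minimal sketch; the edge-list representation with `(u, v, w)` triples is an assumption):

```python
def bellman_ford(n, edges, src):
    """Bellman-Ford sketch: vertices 0..n-1, edges as (u, v, w) triples.

    Returns (dist, has_negative_cycle).  The final pass is the
    verification step: any further improvement means a negative-weight
    cycle reachable from src.  Vertices unreachable from src stay at
    infinity, so unreachable cycles are not detected.
    """
    INF = float('inf')
    dist = [INF] * n
    dist[src] = 0
    for _ in range(n - 1):               # relax every edge (V-1) times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    neg = any(dist[u] + w < dist[v] for u, v, w in edges)  # verification
    return dist, neg
```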

34 votes -- Amar Vashishth ( 24.9k points)
Selected Answer

Relaxation at every vertex is as follows:

Note that the next picked vertex corresponds to the next row in Table
$$\scriptsize{\begin{array}{|c|c|} \hline \text{} & \textbf{A} & \textbf{B} & \textbf{C} & \textbf{D} & \textbf{E} & \textbf{F} & \textbf{G} & \textbf{T} \\\hline 
\textbf{S} & \text{4} &  \boxed{\text{$3$}} & \text{$\infty$} & \text{7} & \text{$\infty$} & \text{$\infty$} & \text{$\infty$} & \text{$\infty$}\\
&S\to A&\boxed{\bf{S\to B}}&&S \to D\\
\\\hline  
\textbf{B} &  \boxed{\text{$4$}} &  & \text{$\infty$} & 7 & \text{$\infty$} & \text{$\infty$} & \text{$\infty$} & \text{$\infty$}\\
&\boxed{\bf{S \to A}} &&& S \to D  \\\hline 
\textbf{A} & \text{} &  \text{} & \boxed{5} & \text{7} & \text{$\infty$} & \text{$\infty$} & \text{$\infty$} & \text{$\infty$}\\
&&&\boxed{\bf{S\to A \to C}}&S \to D
\\\hline 
\textbf{C} & \text{} &  \text{} & \text{} & \text{7} & \boxed{6} & \text{$\infty$} & \text{$\infty$} & \text{$\infty$}\\
&&&&S \to D&\boxed{\bf{S\to A\\\to C\to E}}\\
\hline 
\textbf{E} & \text{} &  \text{} & \text{} & \boxed{\text{$7$}} & \text{} & \text{$\infty$} & \text{$8$} & \text{$10$} \\
&&&&\boxed{\bf{S \to D}}&&&S\to A \to C &S\to A \to C
\\&&&&&&&\to E \to G &  \to E \to T
\\ \hline  
\textbf{D} & \text{} &  \text{} & \text{} & \text{} & \text{} & \text{$12$} & \boxed{8} & \text{$10$}\\
&&&&&&{S \to D \to F}&\boxed{\bf{S\to A \to C\\ \to E \to G}}&S\to A \to C\\
&&&&&&&& \to E \to T
\\\hline  
\textbf{G} & \text{} &  \text{} & \text{} & \text{} & \text{} & \text{12} & \text{} & \boxed{10}
\\
&&&&&&{S \to D \to F}&&\boxed{\bf{S\to A \to C \\\to E \to T}}
\\\hline  
\textbf{T} & \text{} &  \text{} & \text{} & & \text{} & \boxed{\text{12}} & \text{} & \text{}\\
&&&&&&\boxed{\bf{S \to D \to F}}&\\
\hline \end{array}}$$

For $S$ to $T$ shortest path is $\boxed{S \to A \to C \to E \to T}$

Option : D

41 votes -- Kalpish Singhal ( 1.6k points)
Selected Answer
Time complexity of Bellman-Ford algorithm is $\Theta(|V||E|)$ where $|V|$ is number of vertices and $|E|$ is number of edges. If the graph is complete, the value of $|E|$ becomes $\Theta\left(|V|^2\right)$. So overall time complexity becomes $\Theta\left(|V|^3\right)$. And given here is $n$ vertices. So, the answer ends up to be $\Theta\left(n^3\right)$.

Correct Answer: $C$
47 votes -- Gate Keeda ( 15.8k points)
Selected Answer

Depth First Search of a graph takes $O(m+n)$ time when the graph is represented using an adjacency list. In the adjacency matrix representation, the graph is represented as an $n \times n$ matrix. To do DFS, for every vertex, we traverse the row corresponding to that vertex to find all adjacent vertices (in the adjacency list representation we traverse only the adjacent vertices of the vertex). Therefore, the time complexity becomes $O(n^2)$.

Correct Answer: $C$

References
18 votes -- Divya Bharti ( 8.4k points)
Selected Answer

C. Both PSRQ and SPRQ are topological orderings

  1. Apply DFS by choosing P or S as starting vertices
  2. As the vertex gets a finishing time assign it to the head of a linked list
  3. The linked list is your required topological ordering
27 votes -- Akshay Jindal ( 309 points)
Selected Answer
BFS always has a starting node. It does not calculate shortest path between every pair but it computes shortest path between $W$ and any other vertex.

Correct Answer: $B$
46 votes -- Digvijay ( 44.8k points)
Selected Answer

Total $21$ nodes are there. $2$ nodes require back track here in this question.

So, max recursion depth is $21-2= 19$

(Do DFS from an extreme end so that the maximum recursion depth occurs, i.e., take the leftmost top node as the initial node for DFS, as shown in the image below.)

 

Note: backtracking reduces the recursion depth in the stack.

47 votes -- Rajesh Pradhan ( 18.8k points)
Selected Answer
$2$ is the answer.

$d(u) - d(v) = 0$ is possible when both $u$ and $v$ have an edge from $t$ and $t$ is in the shortest path from $s$ to $u$ or $v$.

$d(u) - d(v) = 1$ is possible when $v$ and $t$ are in the shortest path from $s$ to $u$ and both $t$ and $v$ are siblings- same distance from $s$ to both $t$ and $v$ causing $t-u$ edge to be in BFS tree and not $v-u$.

$d(u) - d(v) = -1$ is possible as explained above by interchanging $u$ and $v$.

$d(u) - d(v) = 2$ is not possible. This is because on BFS traversal we either visit $u$ first or $v$. Let's take $u$ first. Now, we put all neighbors of $u$ on queue. Since $v$ is a neighbour and $v$ is not visited before as assumed, $d(v)$ will become $d(u) + 1$. Similarly, for $v$ being visited first.
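The claim that $|d(u) - d(v)| \le 1$ for every edge $(u, v)$ can be checked on any example graph (a sketch; the $5$-cycle used in the test is an arbitrary example):

```python
from collections import deque

def bfs_dist(adj, s):
    """BFS distances from s in an unweighted graph given as a dict."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def max_edge_gap(adj, s):
    """Largest |d(u) - d(v)| over all edges (u, v) of a connected graph."""
    d = bfs_dist(adj, s)
    return max(abs(d[u] - d[v]) for u in adj for v in adj[u])
```

For any connected graph this maximum is never $2$: when the earlier-visited endpoint of an edge is dequeued, the other endpoint either is already at the same or the previous level, or gets enqueued at the next level.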
46 votes -- Arjun Suresh ( 328k points)
Selected Answer
Here, start with $a$ and end with $f$.
$$a \;\_ \; \_ \; \_ \; \_ \; f$$
Blank spaces are to be filled with $b$, $c$, $d$, $e$ such that $b$ comes before $c$, and $d$ comes before $e$.
Number of ways to arrange $b$, $c$, $d$, $e$ such that $b$ comes before $c$ and $d$ comes before $e$ will be $= \dfrac{4!}{2! \times 2!} = 6$
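The count can be confirmed by brute force over the $4! = 24$ arrangements of the middle positions:

```python
from itertools import permutations

# Count arrangements of b, c, d, e between the fixed endpoints a and f
# such that b precedes c and d precedes e.
count = sum(
    1
    for p in permutations('bcde')
    if p.index('b') < p.index('c') and p.index('d') < p.index('e')
)
```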
86 votes -- Abhilash Panicker ( 7.4k points)

In topological sorting, all nodes are like tasks and edges show the dependencies among the tasks.

An edge from node $i$ to node $j$ means task $i$ must complete before task $j$ (in the meantime some other task may complete after task $i$ and before task $j$, but the relative sequence of tasks $i$ and $j$ must be maintained).

Here, all $6$ tasks can be completed in the following $6$ ways.

 

58 votes -- Rajesh Pradhan ( 18.8k points)
Selected Answer
No of nodes at level $0$(root)  of tree $\Rightarrow  1$

No of nodes at level $1$ of tree $\Rightarrow  2$

No of nodes at level $2$ of tree $\Rightarrow 4$

No of nodes at level $3$ of tree $\Rightarrow 8$

No of nodes at level $4$ of tree $\Rightarrow 16$

The last node in level $4$ is the node we are looking for $\Rightarrow  1+2+4+8+16 = 31$
38 votes -- Akash Kanase ( 35.2k points)
Selected Answer

We can take extra array of size $V^2$ as memory is not a constraint.

Do the BFS traversal, and for each edge $u-v$ in the adjacency list of the graph, store the address of node $v$ in $a[i][j].$

For example, if we find $1\to 2$ as an edge, then store the address of node $2$ in location $a[i][j].$

Now once we have stored all the edge $(u-v)$ addresses, then do BFS again.

Now when we encounter $1\to 2,$ then goto $a[2][1]$ which is having twin address and set this twin pointer for node $2$ in list $1.$

Do for all edges similarly.


Remember, we have not initialized the memory, as we will use only the slots we need; if we initialized the memory, it would take $O(V^2)$ time. We can also use one hash table per node to store the node addresses while doing the first traversal. The key into the hash table for node $u$ is the vertex $v$ (an edge $u-v$ means that in $u$'s table there is an entry for $v$, and we store $v$'s address), and the value is $v$'s address.

Correct Answer: $B$

35 votes -- rahul sharma ( 19.8k points)
Selected Answer

Answer is A.

  • MST is not unique only when edges are not distinct. Here the edges are distinct. Be careful about the keyword DISTINCT.
     
  • Shortest Path can be different even if the edges are distinct. Example is shown below. Shortest path from A to C is not unique here.

pic

44 votes -- Ahwan Mishra ( 10k points)
Selected Answer

In BFS, starting from a node, we first traverse all nodes adjacent to it, then repeat the same for the next nodes.

Here, we can see that only option (D) is following BFS sequence properly.

  • As per BFS, if we start from $M$ then $RQN$ $($immediate neighbors of $M)$ have to come after it in any order but in $A$ here, $O$ comes in between. So, it is not BFS.
  • As per BFS, if we start from $N$ then $QMO$ has to come after it in any order but in $B$ here, $P$ comes. So, it is not BFS.
  • As per BFS, if we start from $Q$ then $MNOP$ has to come after it in any order but in $C$ here, $R$ comes. So, it is not BFS.

But $D$ is following the sequences.

So, D is the correct answer.

18 votes -- Aboveallplayer ( 13.1k points)
Selected Answer
Paths of arbitrarily large weight mean the graph has a positive-weight cycle.

So, after changing the sign of the weights of all edges, the Bellman-Ford algorithm can be used to detect such (now negative-weight) cycles in $O(VE)$ time.

Correct Answer: $D$
11 votes -- papesh pathare ( 561 points)
Selected Answer

Since they said that whenever there is a choice we will have to select the node which is alphabetically earlier, therefore we choose the starting node as $A$.

The tree then becomes $A-B-E-C$. Therefore the number of tree edges is $3$, that is, $T=3$.

Now, there is one cycle $B-E-C$, so, we will get a back edge from $C$ to $B$ while performing DFS. Hence $B=1$.

Now, $D$ becomes a disconnected node and it can only contribute cross edges. There are $2$ cross edges: $D-A$ and $D-B$. Therefore $C=2$.

Answer is Option D.

Correct me if I am going wrong.

32 votes -- Riya Roy(Arayana) ( 5.3k points)
Let $G$ be a graph with 100! vertices, with each vertex labelled by a distinct permutation of the numbers $1, 2,\ldots, 100.$ There is an edge between vertices $u$ and $v$ if and only if the label of $u$ can be obtained by swapping two adjacent numbers in the label of $v$. Let $y$ denote the degree of a vertex in $G$, and $z$ denote the number of connected components in $G$. Then, $y+10z$ = ____

Answers: Graph Connectivity

Selected Answer

Answer: 109


Explanation:

We have to find $2$ things here, the degree of every vertex(which will be same for all vertices) and number of connected components.

Instead of $100$, let's solve this by taking lesser value, say $4$.

With $4!$ vertices, each vertex is a permutation of $\left \{ 1,2,3,4 \right \}$. So, we have vertices like $\left \{ 1,2,3,4 \right \}$, $\left \{ 1,3,2,4 \right \}$, $\left \{ 4,1,3,2 \right \}$, ... etc.

Here $\left \{ 1,2,3,4 \right \}$ will be connected with

$\left \{ 2,1,3,4 \right \}$

$\left \{ 1,3,2,4 \right \}$

$\left \{ 1,2,4,3 \right \}$

To get this list, just take $2$ adjacent numbers and swap them. eg. $\left \{ 1,2,3,4 \right \}$ swap $1$ and $2$ to get $\left \{ 2,1,3,4 \right \}$. 

The given $3$ are the only permutations we can get by swapping only $2$ adjacent numbers from $\left \{ 1,2,3,4 \right \}$. So, the degree of vertex $\left \{ 1,2,3,4 \right \}$ will be $3$. Similarly for any vertex it's degree will be $3$.

Here we got $3$ because we can choose any of $3$ pairs of adjacent numbers. So, with $n$ numbers, we have $n-1$ adjacent pairs to swap. So, the degree will be $n-1$.

In our question, the degree will be $100 - 1 = 99$.

Now let's see how many connected components we have. 

It will be $1$. Why?

If one can reach from one vertex to any other vertex, then that means that the graph is connected.

Now if we start with a vertex say $\left \{ 1,2,3,4 \right \}$ we can reach to other vertex, say $\left \{ 4,3,2,1 \right \}$ by the following path:

$\left \{ 1234 \right \} \to \left \{ 1243 \right \} \to \left \{ 1423 \right \} \to \left \{ 4123 \right \} \to \left \{ 4132 \right \} \to \left \{ 4312 \right \} \to \left \{ 4321 \right \}$

Just take two adjacent numbers and swap them. With this operation you can create any permutation, from any given initial permutation.

This way you can show that from any given vertex we can reach any other vertex. This shows that the graph is connected and the number of connected components is 1.

$y = 99$ and $z = 1$

$y + 10z = 99 + 10 \times 1 = 109$
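The same small-case reasoning can be verified by brute force for $n = 4$ (a sketch; for $n = 100$ the same argument gives degree $99$ and one component, since enumerating $100!$ vertices is of course infeasible):

```python
from itertools import permutations

def adjacent_swaps(p):
    """All permutations reachable from p by swapping two adjacent entries."""
    out = []
    for i in range(len(p) - 1):
        q = list(p)
        q[i], q[i + 1] = q[i + 1], q[i]
        out.append(tuple(q))
    return out

def degree_and_components(n):
    """Degree of each vertex and component count of the swap graph on n! vertices."""
    verts = list(permutations(range(1, n + 1)))
    deg = len(adjacent_swaps(verts[0]))       # same for every vertex: n - 1
    seen, comps = set(), 0
    for v in verts:                           # DFS over the components
        if v in seen:
            continue
        comps += 1
        stack = [v]
        seen.add(v)
        while stack:
            u = stack.pop()
            for w in adjacent_swaps(u):
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
    return deg, comps
```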

137 votes -- Rishabh Gupta ( 12.3k points)

The minimum number of record movements required to merge five files A (with $10$ records), B (with $20$ records), C (with $15$ records), D (with $5$ records) and E (with $25$ records) is:

  1. $165$
  2. $90$
  3. $75$
  4. $65$

The following are the starting and ending times of activities $A, B, C, D, E, F, G$ and $H$ respectively in chronological order: $“a_s \: b_s \: c_s \: a_e \: d_s \: c_e \: e_s \: f_s \: b_e \: d_e \: g_s \: e_e \: f_e \: h_s \: g_e \: h_e”$. Here, $x_s$ denotes the starting time and $x_e$ denotes the ending time of activity X. We need to schedule the activities in a set of rooms available to us. An activity can be scheduled in a room only if the room is reserved for the activity for its entire duration. What is the minimum number of rooms required?

  1. $3$
  2. $4$
  3. $5$
  4. $6$

We are given $9$ tasks $T_1, T_2, \dots, T_9$. The execution of each task requires one unit of time. We can execute one task at a time. Each task $T_i$ has a profit $P_i$ and a deadline $d_i$. Profit $P_i$ is earned if the task is completed before the end of the $d_i^{th}$ unit of time.

$$\begin{array}{|l|l|l|l|l|l|l|l|l|}\hline \textbf{Task}  &  \text{$T_1$ } & \text{$T_2$}& \text{$T_3$} & \text{$T_4$} & \text{$T_5$} & \text{$T_6$} & \text{$T_7$} & \text{$T_8$} & \text{$T_9$}   \\\hline \textbf{Profit}  &  \text{$15$ } & \text{$20$}& \text{$30$} & \text{$18$} & \text{$18$} & \text{$10$} & \text{$23$} & \text{$16$} & \text{$25$}   \\\hline  \textbf{Deadline} & \text{$7$} & \text{$2$} & \text{$5$} & \text{$3$} & \text{$4$}  & \text{$5$}  & \text{$2$}  & \text{$7$}  & \text{$3$} \\\hline \end{array}$$

Are all tasks completed in the schedule that gives maximum profit?

  1. All tasks are completed

  2. $T_1$ and $T_6$ are left out

  3. $T_1$ and $T_8$ are left out

  4. $T_4$ and $T_6$ are left out

We are given $9$ tasks $T_1, T_2, \dots, T_9$. The execution of each task requires one unit of time. We can execute one task at a time. Each task $T_i$ has a profit $P_i$ and a deadline $d_i$. Profit $P_i$ is earned if the task is completed before the end of the $d_i^{th}$ unit of time.

$$\begin{array}{|l|l|l|l|l|l|l|l|l|}\hline \textbf{Task}  &  \text{$T_1$ } & \text{$T_2$}& \text{$T_3$} & \text{$T_4$} & \text{$T_5$} & \text{$T_6$} & \text{$T_7$} & \text{$T_8$} & \text{$T_9$}   \\\hline \textbf{Profit}  &  \text{$15$ } & \text{$20$}& \text{$30$} & \text{$18$} & \text{$18$} & \text{$10$} & \text{$23$} & \text{$16$} & \text{$25$}   \\\hline  \textbf{Deadline} & \text{$7$} & \text{$2$} & \text{$5$} & \text{$3$} & \text{$4$}  & \text{$5$}  & \text{$2$}  & \text{$7$}  & \text{$3$} \\\hline \end{array}$$

What is the maximum profit earned?

  1. $147$
  2. $165$
  3. $167$
  4. $175$

The characters $a$ to $h$ have the set of frequencies based on the first $8$ Fibonacci numbers as follows

$a : 1$, $b : 1$, $c : 2$, $d : 3$, $e : 5$, $f : 8$, $g : 13$, $h : 21$

A Huffman code is used to represent the characters. What is the sequence of characters corresponding to the following code?

$110111100111010$

  1. $fdheg$
  2. $ecgdf$
  3. $dchfg$
  4. $fehdg$

Suppose the letters $a, \,b, \,c, \,d, \,e, \,f$ have probabilities $\frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \frac{1}{16}, \frac{1}{32}, \frac{1}{32}$, respectively. 

Which of the following is the Huffman code for the letter $a, \,b, \,c, \,d, \,e, \,f$?

  1. $0$, $10$, $110$, $1110$, $11110$, $11111$

  2. $11$, $10$, $011$, $010$, $001$, $000$

  3. $11$, $10$, $01$, $001$, $0001$, $0000$

  4. $110$, $100$, $010$, $000$, $001$, $111$

Consider the weights and values of items listed below. Note that there is only one unit of each item.
$$\begin{array}{|c|c|c|}\hline \textbf{Item number}  &  \textbf{Weight (in Kgs) }& \textbf{Value (in rupees)} \\\hline  \text{$1$} & \text{$10$} & \text{$60$} \\\hline  \text{$2$} & \text{$7$} & \text{$28$} \\\hline \textbf{$3$} & \text{$4$} & \text{$20$} \\\hline  \text{$4$} & \text{$2$} & \text{$24$}  \\\hline \end{array}$$
The task is to pick a subset of these items such that their total weight is no more than $11$ Kgs and their total value is maximized. Moreover, no item may be split. The total value of items picked by an optimal algorithm is denoted by $V_{opt}$. A greedy algorithm sorts the items by their value-to-weight ratios in descending order and packs them greedily, starting from the first item in the ordered list. The total value of items picked by the greedy algorithm is denoted by $V_{greedy}$.

The value of $V_{opt}-V_{greedy}$ is ____

Answers: Greedy Algorithm

Selected Answer
Arrange files in increasing order of records:

$\overset{\boxed5}{\text{D}}\quad \overset{\boxed{10}}{\text{A}}\quad\overset{\boxed{15}}{\text{C}}\quad\overset{\boxed{20}}{\text{B}}\quad\overset{\boxed{25}}{\text{E}}$


$\qquad\qquad\qquad\qquad\color{blue}{75}$

$\qquad\qquad\quad\color{blue}{30}\qquad\qquad\qquad\color{blue}{45}$

$\qquad\quad\color{blue}{15}\qquad\overset{\boxed{\text{C}}}{15}\qquad\qquad\overset{\boxed{\text{B}}}{20}\qquad\overset{\boxed{\text{E}}}{25}$

$\quad\overset{\boxed{\text{D}}}{5}\qquad\overset{\boxed{\text{A}}} {10}$

No. of movements $=15+30+45+75=165.$

Correct Answer: $A$
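The two-smallest-first merging shown above can be computed with a heap (a sketch using Python's `heapq`; `min_merge_cost` is a hypothetical helper name):

```python
import heapq

def min_merge_cost(sizes):
    """Minimum record movements to merge files (optimal merge pattern).

    Each two-way merge of files with x and y records moves x + y records;
    repeatedly merging the two smallest files is optimal (Huffman-style).
    """
    heap = list(sizes)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        x = heapq.heappop(heap)
        y = heapq.heappop(heap)
        total += x + y
        heapq.heappush(heap, x + y)
    return total
```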
26 votes -- Pooja Palod ( 23.7k points)
Selected Answer

Solution: B

The problem can be modeled as a graph coloring problem. Construct a graph with one node corresponding to each activity $A,B,C,D,E,F,G$ and $H$. Connect two activities with an edge if their durations overlap. Now, the chromatic number of the graph is the number of rooms required.

43 votes -- Gowthaman Arumugam ( 1.2k points)
Selected Answer

$ \begin{array}{|c|c|c|c|}\hline
\textbf{Task}&\textbf{Profit}&\textbf{Deadline}&\textbf{Scheduled?}\\ \hline
T_3&30&5&\checkmark \\ \hline
T_9&25&3&\checkmark \\ \hline
T_7&23&2&\checkmark \\ \hline
T_2&20&2&\checkmark \\ \hline
T_5&18&4&\checkmark \\ \hline
T_4&18&3& ✗ \\ \hline
T_8&16&7&\checkmark \\ \hline
T_1&15&7&\checkmark \\ \hline
T_6&10&5& ✗ \\ \hline
\end{array}$

Step 1: Sort the tasks in decreasing order of profit; if a conflict arises between two or more tasks, resolve it by placing the task with the greater deadline first (because we have more time to complete a task with a greater deadline and the same profit).

Step 2: Since the maximum deadline given is $7$, we consider $7$ time slots ranging from $0-7$, where a task $T_i$ having deadline, say, $2$ can be filled in slot $0-1$ or $1-2$ and not beyond $2$, because a task with a deadline of $2$ time units has to be completed by time $T=2$.

Now according to the question, since each task completes in unit time, a single task takes only one slot as shown.

Now Take the first task in the list i.e. $T_3$ which has a deadline of $5$, so it can be completed in maximum $5$ time units, so place it in slot $4-5$ which is the maximum deadline by which this task can be completed.

Task $T_9$ with deadline $3$ is similarly placed in  slot $2-3$.

Task $T_7$ with deadline $2$ is placed in slot $1-2$.

Now for task $T_2$ having deadline $2$ can be placed in either $0-1$ or $1-2$ (Occupied by $T_7$). So $T_2$ will occupy slot $0-1$.

Task $T_5$ with deadline $4$ is placed in slot $3-4$.

Now comes task $T_4$, which has deadline $3$ and can be put in slot $0-1$, $1-2$ or $2-3$ and not beyond that. Unfortunately, all such slots are occupied, so $\bf{T_4}$ will be left out.

Task $T_8$ with deadline $7$ goes in slot $6-7$.

Task $T_1$ with deadline $7$ can be placed in  slot $5-6$.

Now all time slots are full.

So, Task $\bf{T_6}$ will be left out.

So, option (d) is the answer.
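The greedy schedule described above can be sketched in Python (task data taken from the question; the slot array indexed so that `slot[k]` is the interval $(k-1, k]$ is a hypothetical helper):

```python
tasks = [("T1", 15, 7), ("T2", 20, 2), ("T3", 30, 5),
         ("T4", 18, 3), ("T5", 18, 4), ("T6", 10, 5),
         ("T7", 23, 2), ("T8", 16, 7), ("T9", 25, 3)]

def job_sequencing(tasks):
    """Greedy job sequencing with deadlines for unit-time tasks.

    Sort by decreasing profit (ties broken by the later deadline first,
    as in Step 1); place each accepted task in the latest free slot at
    or before its deadline.  Returns (total profit, left-out tasks).
    """
    order = sorted(tasks, key=lambda t: (-t[1], -t[2]))
    max_d = max(t[2] for t in tasks)
    slot = [None] * (max_d + 1)          # slot[k] is the interval (k-1, k]
    for name, profit, d in order:
        for k in range(d, 0, -1):        # latest free slot not after d
            if slot[k] is None:
                slot[k] = name
                break
    scheduled = {s for s in slot if s is not None}
    total = sum(p for n_, p, _ in tasks if n_ in scheduled)
    left_out = sorted(n_ for n_, _, _ in tasks if n_ not in scheduled)
    return total, left_out
```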

32 votes -- Ayush Upadhyaya ( 25.8k points)
Selected Answer

The most important statement in question is 

each task requires one unit of time

This shows that we can greedily choose the better task and that should give us the optimal solution. The best task would be the one with maximum profit. Thus we can sort the tasks based on deadline and then profit as follows:

$$\begin{array}{|l|l|l|l|l|l|l|l|l|}\hline \textbf{Task}  &  \text{$T_7$ } & \text{$T_2$}& \text{$T_9$} & \text{$T_4$} & \text{$T_5$} & \text{$T_3$} & \text{$T_6$} & \text{$T_8$} & \text{$T_1$}   \\\hline \textbf{Deadline}  &  \text{2} & \text{2}& \text{3} & \text{3} & \text{4} & \text{5} & \text{5} & \text{7} & \text{7} \\\hline \end{array}$$

$$0 \overset{T_7}{-} 1    \overset{T_2}{-} 2  \overset{T_9}{-}  3  \overset{T_5}{-}  4 \overset{T_3}{-}  5 \overset{T_8}{-}  6 \overset{T_1}{-}  7$$

Thus, $T_4$ and $T_6$ are left.

So, the maximum profit will not include those of $T_4$ and $T_6$ and will be $= 15 + 20 + 30 + 18 + 16 + 23 + 25 = 147$

A is the answer.

26 votes -- Prashant Singh ( 46.5k points)
Selected Answer

Answer is A. Huffman's tree is as follows. The two least frequent characters are taken as the children of a newly made node and the frequency of the newly made node is made equal to the sum of those two child nodes. Then the same procedure is repeated till all nodes are finished. 

$110111100111010 = 110\;11110\;0\;1110\;10 = fdheg$

41 votes -- Arjun Suresh ( 328k points)
Selected Answer

Based on the probabilities, we can say the probable frequency of the letters will be 

$16$, $8$, $4$, $2$, $1$, $1$

Now, the Huffman tree can be constructed as follows:

So, A is the answer for $76$.

https://en.wikipedia.org/wiki/Huffman_coding

References
31 votes -- Arjun Suresh ( 328k points)
Selected Answer

$V_{opt}$ is clearly $60$. You can get it by brute force or by simple intuition.

Now solving for $V_{greedy}$.

$$\begin{array}{|c|c|c|c|}\hline \textbf{Item name}  &  \textbf{Weight (in Kgs) }& \textbf{Value (in Rupees)} & \textbf{Value/Weight} \\\hline  \text{1} & \text{10} & \text{60} & \text{6} \\\hline  \text{2} & \text{7} & \text{28} & \text{4} \\\hline \text{3} & \text{4} & \text{20} & \text{5} \\\hline  \text{4} & \text{2} & \text{24} & \text{12} \\\hline \end{array}$$

Sort them in descending order of Value/Weight as per the question.
$$\begin{array}{|c|c|c|c|}\hline \textbf{Item name}  &  \textbf{Weight (in Kgs) }& \textbf{Value (in Rupees)} & \textbf{Value/Weight} \\\hline  \text{4} & \text{2} & \text{24} & \text{12} \\\hline  \text{1} & \text{10} & \text{60} & \text{6} \\\hline \text{3} & \text{4} & \text{20} & \text{5} \\\hline  \text{2} & \text{7} & \text{28} & \text{4} \\\hline \end{array}$$
Now start picking items. (Note: you cannot take a fraction of the given weight as per the question.) The maximum weight is given as $11$ (inclusive).

  • Item $4$ is picked. Weight remaining = $11-2 = 9$kg.
  • Item $1$ cannot be picked as $10$kg $>9$kg.
  • Item $3$ can be picked as $4$kg $<$ $9$kg. Weight Remaining = $9-4 = 5$kg
  • Item $2$ cannot be picked as $7$kg $>5$kg.

So, item $4$ and Item $3$ are picked. Their values are $24$ and $20$ respectively.

$\implies V_{greedy} = 24+20 = 44.$

$V_{optimal} - V_{greedy}= 60 - 44 = 16.$
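Both quantities can be checked with a short brute-force/greedy sketch (item data taken from the question; variable names are illustrative):

```python
from itertools import combinations

items = [(10, 60), (7, 28), (4, 20), (2, 24)]   # (weight, value)
CAP = 11                                         # capacity in kg

# Optimal: brute force over all subsets (0/1 knapsack, only 4 items).
v_opt = max(
    sum(v for w, v in sub)
    for r in range(len(items) + 1)
    for sub in combinations(items, r)
    if sum(w for w, v in sub) <= CAP
)

# Greedy: descending value/weight ratio, no splitting allowed.
v_greedy, room = 0, CAP
for w, v in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
    if w <= room:
        v_greedy += v
        room -= w
```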

33 votes -- Ruturaj Mohanty ( 3k points)

Consider a hash table with chaining scheme for overflow handling:

  1. What is the worst-case timing complexity of inserting $n$ elements into such a table?
  2. For what type of instance does this hashing scheme take the worst-case time for insertion?

Answers: Hashing

1.) The worst case is $O(n^2)$: if all $n$ elements hash to the same key and each insertion traverses the chain (for example, to check for duplicates), the $i$-th insertion takes $O(i)$ time, for a total of $O(n^2)$. (If each insertion is done at the head of the chain without any search, inserting $n$ elements takes only $O(n)$.)

2.) This happens when we are not using a uniformly distributed hash function, or when the hash table effectively has a single key onto which all the elements are chained.
2 votes -- Aakash Das ( 489 points)
A language uses an alphabet of six letters, $\left\{a, b, c, d, e, f\right\}$. The relative frequency of use of each letter of the alphabet in the language is as given below: $$\begin{array}{|c|c|}\hline \text{LETTER} & \text{RELATIVE FREQUENCY OF USE}\\\hline a & 0.19  \\\hline b & 0.05  \\\hline c & 0.17  \\\hline d & 0.08  \\\hline e & 0.40 \\\hline f & 0.11  \\\hline \end{array}$$Design a prefix binary code for the language which would minimize the average length of the encoded words of the language.

Suppose the letters $a, \,b, \,c, \,d, \,e, \,f$ have probabilities $\dfrac{1}{2}, \dfrac{1}{4}, \dfrac{1}{8}, \dfrac{1}{16}, \dfrac{1}{32}, \dfrac{1}{32}$, respectively. 

What is the average length of the Huffman code for the letters $a, \,b, \,c, \,d, \,e, \,f$?

  1. $3$
  2. $2.1875$
  3. $2.25$
  4. $1.9375$
A message is made up entirely of characters from the set $X=\{P, Q, R, S, T\}$. The table of probabilities for each of the characters is shown below:$$\begin{array}{|c|c|}\hline \textbf{Character}  &  \textbf{Probability } \\\hline  \text{$P$} & \text{$0.22$} \\\hline  \text{$Q$} & \text{$0.34$} \\\hline  \text{$R$} & \text{$0.17$} \\\hline \text{$S$} & \text{$0.19$} \\\hline  \text{$T$} & \text{$0.08$} \\\hline \text{Total} & \text{$1.00$} \\\hline  \end{array}$$If a message of $100$ characters over $X$ is encoded using Huffman coding, then the expected length of the encoded message in bits is ______.

Answers: Huffman Code

Selected Answer

$a-0.19,b-0.05, c-0.17,d-0.08,e-0.40,f-0.11$

Since it is relative frequency we can multiply each with $100$ and result remains the same.

$a-19,b-5, c-17,d-8,e-40,f-11$

For Assigning prefix binary code lets first create the Huffman tree

$(1) \ a-19,{\color{Red} {b-5} },c-17,{\color{Red} {d-8} },e-40,f-11$

$(2) \ a-19,{\color{Red} {(b,d)-13} },c-17,e-40,{\color{Red} {f-11} }$

 

$(3) \ {\color{Red} {a-19} },(b,d,f)-24,{\color{Red} {c-17} },e-40$

 

$(4) \ {\color{Red} {(a,c)-36,(b,d,f)-24}},e-40$

$(5) \ {\color{Red} {(a,b,c,d,f)-60,e-40}}$

Now put $0$ on each of the left edges and $1$ on each of the right edges as shown below

$\text{Prefix code}$ = Traverse from the root node to the leaf node and write the symbol $(1 \  or \ 0)$ present on each edge.

  • For $a$ the prefix code is $111$  i.e. $3$ bits
  • For $b$ the prefix code is $1010$ i.e. $4$ bits
  • For $c$ the prefix code is $110$ i.e. $3$ bits
  • For $d$ the prefix code is $1011$ i.e. $4$ bits
  • For $e$ the prefix code is $0$ i.e. $1$ bit
  • For $f$ the prefix code is $100$ i.e. $3$ bits

And the average length of encoded words $=\dfrac{(19\times 3)+(5\times 4)+(17\times 3)+(8\times 4)+(40\times 1)+(11\times 3)}{100}$

$\quad =\dfrac{57+20+51+32+40+33}{100}$

$\quad =\dfrac{233}{100}$

$\quad =2.33$
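The average length can be verified without drawing the tree: the total weighted code length of a Huffman code equals the sum of all merge costs (equivalently, the sum of the internal node weights). A sketch using Python's `heapq` (`huffman_avg_length` is a hypothetical helper name):

```python
import heapq

def huffman_avg_length(freqs):
    """Average Huffman code length per symbol, weighted by frequency.

    The total weighted code length equals the sum of the costs of the
    successive two-smallest merges, i.e. the internal node weights.
    """
    heap = list(freqs)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        x = heapq.heappop(heap)
        y = heapq.heappop(heap)
        total += x + y
        heapq.heappush(heap, x + y)
    return total / sum(freqs)
```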

6 votes -- Satbir Singh ( 19k points)
Selected Answer

$$\begin{array}{|c|c|} \hline \textbf{Letter} & \textbf{Probability} \\\hline  a  & 1/2  \\\hline  b & 1/4 \\\hline c & 1/8  \\\hline  d & 1/16 \\\hline e & 1/32 \\\hline f & 1/32 \\\hline \end{array}$$

$\begin{array}{cc}  \textbf{Prefix Code} & \textbf{Code Length} \\ a = 0 & 1 \\  b =10 & 2 \\  c = 110 & 3  \\  d = 1110 & 4 \\  e = 11110 & 5 \\  f = 11111 & 5 \end{array}$

Avg length $= \dfrac{1}{2} \times 1 + \dfrac{1}{4} \times 2+ \dfrac{1}{8}  \times 3+  \dfrac{1}{16} \times 4+ \dfrac{1}{32} \times 5+ \dfrac{1}{32}  \times 5 \\ = \dfrac{16 + 16 + 12 + 8 + 5 + 5}{32} \\ = 1.9375 $ 

Correct Answer: $D$

29 votes -- minal ( 12.9k points)
Selected Answer

$X = \{ P, Q, R, S, T\}$

$∴ \text{Expected length of an encoded character}$
$\qquad \qquad= (0.22 \times 2) + (0.34 \times 2) + (0.17 \times 3) + (0.19 \times 2) + (0.08 \times 3) \hspace{0.1cm} \text{ bits }$
$\qquad \qquad= 0.44 + 0.68 + 0.51+ 0.38 + 0.24 \hspace{0.1cm} \text{ bits}$
$ \qquad \qquad= 2.25 \hspace{0.1cm} \text{ bits } $

$\therefore \text{Expected length of an encoded message of $100$ characters in bits} = 100 \times 2.25 = 225$

29 votes -- Akash Dinkar ( 27.9k points)

What is the output produced by the following program, when the input is "HTGATE"

Function what (s:string): string;
var n:integer;
begin
     n = s.length
     if n <= 1
     then what := s
     else what :=contact (what (substring (s, 2, n)), s.C [1])
end;

Note

  1. type string=record
                          length:integer;
                          C:array[1..100] of char
                          end
  2. Substring (s, i, j): this yields the string made up of the $i^{th}$ through $j^{th}$ characters in s, for appropriately defined $i$ and $j$.
  3. Contact $(s_{1}, s_{2})$: this function yields a string of length $s_{1}.\text{length} + s_{2}.\text{length}$ obtained by concatenating $s_{1}$ with $s_{2}$ such that $s_{1}$ precedes $s_{2}$.

The following program computes values of a mathematical function $f(x)$. Determine the form of $f(x)$.

main ()
{
    int m, n; float x, y, t;
    scanf ("%f%d", &x, &n);
    t = 1; y = 0; m = 1;
    do
    {
        t *= (-x/m);
        y += t;
    } while (m++ < n);
    printf ("The value of y is %f", y);
}

Choose the correct alternatives (more than one may be correct) and write the corresponding letters only:

Consider the following Pascal function:

Function X(M:integer):integer;
Var i:integer;
Begin
    i := 0;
    while i*i < M
    do i:= i+1
    X := i
 end

The function call $X(N)$, if $N$ is positive, will return

  1. $\lfloor\sqrt N \rfloor$
  2. $\lfloor\sqrt N \rfloor +1$
  3. $\lceil \sqrt N \rceil$
  4. $\lceil \sqrt N \rceil +1$
  5. None of the above

What does the following code do?

var a, b: integer;
begin
    a:=a+b;
    b:=a-b;
    a:=a-b;
end;
  1. exchanges $a$ and $b$
  2. doubles $a$ and stores in $b$
  3. doubles $b$ and stores in $a$
  4. leaves $a$ and $b$ unchanged
  5. none of the above
     

What function of $x$, $n$ is computed by this program?

Function what(x, n:integer): integer:
Var
    value : integer
begin
    value := 1
    if n > 0 then
    begin
        if n mod 2 =1 then
            value := value * x;
        value := value * what(x*x, n div 2);
    end;
    what := value;
end;

In the following Pascal program segment, what is the value of X after the execution of the program segment?

X := -10; Y := 20;
If X > Y then if X < 0 then X := abs(X) else X := 2*X;
  1. $10$
  2. $-20$
  3. $-10$
  4. None

Assume that $X$ and $Y$ are non-zero positive integers. What does the following Pascal program segment do?

while X <> Y do
if  X > Y then
    X := X - Y
else
    Y := Y - X;
write(X);
  1. Computes the LCM of two numbers

  2. Divides the larger number by the smaller number

  3. Computes the GCD of two numbers

  4. None of the above

  1. Consider the following Pascal function where $A$ and $B$ are non-zero positive integers. What is the value of $GET(3, 2)$?
    function GET(A,B:integer): integer;
    begin
        if B=0 then
            GET:= 1
        else if A < B then
            GET:= 0
        else
            GET:= GET(A-1, B) + GET(A-1, B-1)
    end;
  2. The Pascal procedure given for computing the transpose of an $N \times N, (N>1)$ matrix $A$ of integers has an error. Find the error and correct it. Assume that the following declaration are made in the main program
    const
        MAXSIZE=20;
    type
        INTARR=array [1..MAXSIZE,1..MAXSIZE] of integer;
    Procedure TRANSPOSE (var A: INTARR; N : integer);
    var
        I, J, TMP: integer;
    begin
        for I:=1 to N – 1 do
        for J:=1 to N do
        begin
            TMP:= A[I, J];
            A[I, J]:= A[J, I];
            A[J, I]:= TMP
        end
    end;
    

What value would the following function return for the input $x=95$?

Function fun (x:integer):integer;
Begin
    If x > 100 then fun = x – 10
    Else fun = fun(fun (x+11))
End;
  1. $89$
  2. $90$
  3. $91$
  4. $92$

Consider the following $C$ function definition

int Trial (int a, int b, int c)
{
    if ((a>=b) && (c<b)) return b;
    else if (a>=b) return Trial(a, c, b);
    else return Trial(b, a, c);
}

The functional Trial:

  1. Finds the maximum of $a$, $b$, and $c$

  2. Finds the minimum of $a$, $b$, and $c$

  3. Finds the middle number of $a$, $b$, $c$

  4. None of the above

Suppose you are given an array $s[1....n]$ and a procedure reverse $(s, i, j)$ which reverses the order of elements in $s$ between positions $i$ and $j$ (both inclusive). What does the following sequence do, where $1 \leqslant k \leqslant n$:

           reverse (s, 1, k);
           reverse (s, k+1, n);
           reverse (s, 1, n);
  1. Rotates $s$ left by $k$ positions
  2. Leaves $s$ unchanged 
  3. Reverses all elements of $s$
  4. None of the above

Consider the following $C$ function.

For large values of $y$, the return value of the function $f$ best approximates

float f(float x, int y) {
    float p, s; int i;
    for (s=1,p=1,i=1; i<y; i++) {
        p *= x/i;
        s += p;
    }
    return s;
}

  1. $x^y$
  2. $e^x$
  3. $\text{ln} (1+x)$
  4. $x^x$

In the following $C$ program fragment, $j$, $k$, $n$ and TwoLog_n are integer variables, and $A$ is an array of integers. The variable $n$ is initialized to an integer $\geqslant 3$, and TwoLog_n is initialized to the value of $2^*\lceil \log_2(n) \rceil$

for (k = 3;  k <= n; k++)
        A[k] = 0;
for (k = 2; k <= TwoLog_n; k++)
    for (j = k+1; j <= n; j++)
        A[j] = A[j] || (j%k);
for (j = 3; j <= n; j++)
    if (!A[j]) printf("%d", j);

The set of numbers printed by this program fragment is

  1. $\left\{m \mid m \leq n, (\exists i)\left[m=i!\right]\right\}$

  2. $\left\{m \mid m \leq n, (\exists i) \left[m=i^2\right]\right\}$

  3. $\left\{m \mid m \leq n, \text{m is prime} \right\}$

  4. { }

Consider the following C program

main()
{ 
    int x, y, m, n;
    scanf("%d %d", &x, &y);
    /* Assume x>0 and y>0*/
    m = x; n = y;
    while(m != n)
        { 
            if (m > n)
                m = m-n;
            else
                n = n-m;
        }
    printf("%d", n);
}

The program computes

  1. $x+y$ using repeated subtraction

  2. $x \mod y$ using repeated subtraction

  3. the greatest common divisor of $x$ and $y$

  4. the least common multiple of $x$ and $y$

What does the following algorithm approximate? (Assume $m > 1, \epsilon >0$).

x = m;
y = 1;
While (x-y > ϵ)
{
    x = (x+y)/2;
    y = m/x;
}
print(x);
  1. $\log \, m$
  2. $m^2$
  3. $m^{\frac{1}{2}}$
  4. $m^{\frac{1}{3}}$

Consider the following C-program:

void foo (int n, int sum) {
    int k = 0, j = 0;
    if (n == 0) return;
    k = n % 10; j = n/10;
    sum = sum + k;
    foo (j, sum);
    printf ("%d,",k);
}

int main() {
    int a = 2048, sum = 0;
    foo(a, sum);
    printf("%d\n", sum);
}

What does the above program print?

  1. $\text{8, 4, 0, 2, 14}$

  2. $\text{8, 4, 0, 2, 0}$

  3. $\text{2, 0, 4, 8, 14}$

  4. $\text{2, 0, 4, 8, 0}$

What is the output printed by the following program?

#include <stdio.h>

int f(int n, int k) {
    if (n == 0) return 0;
    else if (n % 2) return f(n/2, 2*k) + k;
    else return f(n/2, 2*k) - k;
}

int main () {
    printf("%d", f(20, 1));
    return 0;
}
  1. $5$
  2. $8$
  3. $9$
  4. $20$

A set $X$ can be represented by an array $x[n]$ as follows: 

 $x\left [ i \right ]=\begin {cases} 1 & \text{if } i \in X \\ 0 & \text{otherwise}  \end{cases}$

Consider the following algorithm in which $x$, $y$, and $z$ are Boolean arrays of size $n$: 

algorithm zzz(x[], y[], z[]) {
    int i;
    
    for(i=0; i<n; ++i)
        z[i] = (x[i] ∧ ~y[i]) ∨ (~x[i] ∧ y[i]);
}

The set $Z$ computed by the algorithm is: 

  1. $(X\cup Y)$
  2. $(X\cap Y)$
  3. $(X-Y)\cap (Y-X)$
  4. $(X-Y)\cup (Y-X)$

Consider the following C-function in which $a[n]$ and $b[m]$ are two sorted integer arrays and $c[n+m]$ be another integer array,

void xyz(int a[], int b [], int c []){ 
    int i,j,k; 
    i=j=k=0; 
    while ((i<n) && (j<m)) 
        if (a[i] < b[j]) c[k++] = a[i++]; 
        else c[k++] = b[j++]; 
}

Which of the following condition(s) hold(s) after the termination of the while loop?

  1. $j< m,k=n+j-1$ and $a[n-1]< b[j]$ if $i=n$
  2. $i<n,k=m+i-1$ and $b[m-1]\leq a[i]$ if $j=m$
  1. only (i) 
  2. only (ii) 
  3. either (i) or (ii) but not both 
  4. neither (i) nor (ii) 

The following function computes the value of  $\binom{m}{n}$ correctly for all legal values $m$ and $n$ ($m ≥1, n ≥ 0$ and $m > n$)

int func(int m, int n)
{
    if (E) return 1;
    else return(func(m -1, n) + func(m - 1, n - 1));
}


In the above function, which of the following is the correct expression for E?

  1. $(n = = 0) || (m = = 1)$
  2. $(n = = 0)$ && $(m = = 1)$
  3. $(n = = 0) || (m = = n)$
  4. $(n = = 0)$ && $(m = = n)$

Consider the code fragment written in C below :

void f (int n)
{ 
  if (n <=1)  {
   printf ("%d", n);
  }
  else {
   f (n/2);
   printf ("%d", n%2);
  }
}

What does f(173) print?

  1. $010110101$
  2. $010101101$
  3. $10110101$
  4. $10101101$

Consider the code fragment written in C below :
        

void f (int n)
{ 
    if (n <= 1)  {
        printf ("%d", n);
    }
    else {
        f (n/2);
        printf ("%d", n%2);
    }
}

Which of the following implementations will produce the same output for $f(173)$ as the above code?

P1:

void f (int n)
{ 
    if (n/2)  {
        f(n/2);
    }
    printf ("%d", n%2);
}

P2:

void f (int n)
{ 
    if (n <=1)  {
        printf ("%d", n);
    }
    else {
        printf ("%d", n%2);
        f (n/2);
    }
}
  1. Both $P1$ and $P2$
  2. $P2$ only
  3. $P1$ only
  4. Neither $P1$ nor $P2$

Consider the program below:

#include <stdio.h>
int fun(int n, int *f_p) {
    int t, f;
    if (n <= 1) {
        *f_p = 1;
        return 1;
    }
    t = fun(n-1, f_p);
    f = t + *f_p;
    *f_p = t;
    return f;
}

int main() {
    int x = 15;
    printf("%d/n", fun(5, &x));
    return 0;
}

The value printed is:

  1. $6$
  2. $8$
  3. $14$
  4. $15$

What is the value printed by the following C program?

#include<stdio.h>

int f(int *a, int n)
{
    if (n <= 0) return 0;
    else if (*a % 2 == 0) return *a+f(a+1, n-1);
    else return *a - f(a+1, n-1);
}

int main()
{
    int a[] = {12, 7, 13, 4, 11, 6};
    printf("%d", f(a, 6));
    return 0;
}
  1. $-9$
  2. $5$
  3. $15$
  4. $19$

Consider the following recursive C function that takes two arguments.

unsigned int foo(unsigned int n, unsigned int r) {
    if (n>0) return ((n%r) + foo(n/r, r));
    else return 0;
}

What is the return value of the function $\text{foo}$ when it is called as $\text{foo(345, 10)}$?

  1. $345$
  2. $12$
  3. $5$
  4. $3$

Consider the following recursive C function that takes two arguments.

unsigned int foo(unsigned int n, unsigned int r) {
    if (n>0) return ((n%r) + foo(n/r, r));
    else return 0;
}

What is the return value of the function $\text{foo}$ when it is called as $\text{foo(513, 2)}$?

  1. $9$
  2. $8$
  3. $5$
  4. $2$

Consider the following function:

int unknown(int n){ 

int i, j, k=0; 
for (i=n/2; i<=n; i++) 
    for (j=2; j<=n; j=j*2) 
        k = k + n/2; 
 return (k); 

} 

The return value of the function is

  1. $\Theta(n^2)$      
  2. $\Theta(n^2\log n)$      
  3. $\Theta(n^3)$      
  4. $\Theta(n^3\log n)$

Consider the following C function in which size is the number of elements in the array E

int MyX(int *E, unsigned int size) 
{ 
   int Y = 0; 
   int Z; 
   int i, j, k; 

   for(i = 0; i< size; i++) 
          Y = Y + E[i]; 
          
    for(i=0; i < size; i++) 
        for(j = i; j < size; j++)
        {
            Z = 0; 
            for(k = i; k <= j; k++) 
               Z = Z + E[k];
            if(Z > Y) 
               Y = Z; 
        } 
   return Y; 
} 

The value returned by the function MyX is the

  1. maximum possible sum of elements in any sub-array of array E.
  2. maximum element in any sub-array of array E.
  3. sum of the maximum elements in all possible sub-arrays of array E.
  4. the sum of all the elements in the array E.

Consider the function func shown below: 

int func(int num) { 
   int count = 0; 
   while (num) { 
     count++; 
     num>>= 1; 
   } 
   return (count); 
} 

The value returned by func($435$) is ________

Let $A$ be the square matrix of size $n \times n$. Consider the following pseudocode. What is the expected output?

C=100;
for i=1 to n do
    for j=1 to n do
    {
        Temp = A[i][j]+C;
        A[i][j] = A[j][i];
        A[j][i] = Temp -C;
    }
for i=1 to n do
    for j=1 to n do 
        output (A[i][j]);
  1. The matrix $A$ itself
  2. Transpose of the matrix $A$
  3. Adding $100$ to the upper diagonal elements and subtracting $100$ from lower diagonal elements of $A$
  4. None of the above

Consider the following C function.

int fun1 (int n) { 
     int i, j, k, p, q = 0; 
     for (i = 1; i < n; ++i) 
     {
        p = 0; 
       for (j = n; j > 1; j = j/2) 
           ++p;  
       for (k = 1; k < p; k = k * 2) 
           ++q;
     } 
     return q;
}

Which one of the following most closely approximates the return value of the function fun1?

  1. $n^3$
  2. $n(\log n)^2$
  3. $n \log n$
  4. $n \log(\log n)$

Consider the following C function.

int fun(int n) {
    int x=1, k;
    if (n==1) return x;
    for (k=1; k<n; ++k)
        x = x + fun(k) * fun (n-k);
    return x;
}

The return value of $fun(5)$ is ______.

Suppose $c = \langle c[0], \dots, c[k-1]\rangle$ is an array of length $k$, where all the entries are from the set $\{0, 1\}$. For any positive integers $a \text{ and } n$, consider the following pseudocode.

DOSOMETHING (c, a, n)

$z \leftarrow 1$
for $i \leftarrow 0 \text{ to } k-1$
     do $z \leftarrow z^2 \text{ mod } n$
     if $c[i]=1$
           then $z \leftarrow (z \times a) \text{ mod } n$
return z

If $k=4, c = \langle 1, 0, 1, 1 \rangle , a = 2, \text{ and } n=8$, then the output of DOSOMETHING(c, a, n) is _______.

Consider the following C function.

void convert (int n ) {
        if (n<0)
            printf("%d", n);
        else {
            convert(n/2);
            printf("%d", n%2);
        }
}

Which one of the following will happen when the function convert is called with any positive integer $n$ as argument?

  1. It will print the binary representation of $n$ and terminate
  2. It will print the binary representation of $n$ in the reverse order and terminate
  3. It will print the binary representation of $n$ but will not terminate
  4. It will not print anything and will not terminate

Consider the following program operating on four variables $u, v, x, y$, and two constants $X$ and $Y$.

x, y, u, v:= X, Y, Y, X;    
While (x ≠ y)  
do
    if (x > y) then x, v := x - y, v + u;
    else if (y > x) then y, u:= y - x, u + v;  
od;
print ((x + y) / 2);  print ((u + v) / 2);  

Given $X > 0 \land Y > 0$, pick the true statement out of the following:

  1. The program prints $\text{gcd}(X, Y)$ and the first prime larger than both $X$ and $Y$.
  2. The program prints $\text{gcd}(X, Y)$ followed by $\text{lcm}(X, Y)$.
  3. The program prints $\text{gcd}(X, Y)$ followed by $\frac1 2 \times \text{lcm}(X, Y)$.
  4. The program prints $\frac1 2 \times \text{gcd}(X, Y)$ followed by $\frac1 2 \times \text{lcm}(X, Y)$.
  5. The program does none of the above.

Consider the following code.

def brian(n):
    count = 0
    while (n != 0):
        n = n & (n - 1)
        count = count + 1
    return count

Here $n$ is meant to be an unsigned integer. The operator & considers its arguments in binary and computes their bit wise $AND$. For example, $22$ & $15$ gives $6$, because the binary (say 8-bit) representation of $22$ is $00010110$ and the binary representation of $15$ is $00001111$, and the bit-wise $AND$ of these binary strings is $00000110$, which is the binary representation of $6$. What does the function $\text{brian}$ return?

  1. The highest power of $2$ dividing $n$, but zero if $n$ is zero.
  2. The number obtained by complementing the binary representation of $n$.
  3. The number of ones in the binary representation of $n$.
  4. The code might go into an infinite loop for some $n$.
  5. The result depends on the number of bits used to store unsigned integers.

Consider the following game. There is a list of distinct numbers. At any round, a player arbitrarily chooses two numbers $a, b$ from the list and generates a new number $c$ by subtracting the smaller number from the larger one. The numbers $a$ and $b$ are put back in the list. If the number $c$ is non-zero and is not yet in the list, $c$ is added to the list. The player is allowed to play as many rounds as the player wants. The score of a player at the end is the size of the final list.

Suppose at the beginning of the game the list contains the following numbers: $48, 99, 120, 165$ and $273$. What is the score of the best player for this game?

  1. $40$
  2. $16$
  3. $33$
  4. $91$
  5. $123$

Consider the following program modifying an $n \times n$ square matrix $A$:

for i=1 to n:
    for j=1 to n:
        temp=A[i][j]+10
        A[i][j]=A[j][i]
        A[j][i]=temp-10
    end for
end for

Which of the following statements about the contents of matrix $A$ at the end of this program must be TRUE?

  1. the new $A$ is the transpose of the old $A$
  2. all elements above the diagonal have their values increased by $10$ and all the values below have their values decreased by $10$
  3. all elements above the diagonal have their values decreased by $10$ and all the values below have their values increased by $10$
  4. the new matrix $A$ is symmetric, that is, $A[i][j]=A[j][i]$ for all $1 \leq i, j \leq n$
  5. $A$ remains unchanged

Answers: Identify Function

Selected Answer

This function is reversing the string "HTGATE". 

Here, function substring() gives the substring from the 2nd character of the original string till the end and contact(p,q) is concatenating the strings p and q...

n = s.length means n is the length of the string, and C[1] means the 1st character of the passed string.

So, it will work like this:

what(HTGATE)

contact(what(TGATE),H)

contact(contact(what(GATE),T),H)

contact(contact(contact(what(ATE),G),T),H)

contact(contact(contact(contact(what(TE),A),G),T),H)

contact(contact(contact(contact(contact(what(E),T),A),G),T),H)

contact(contact(contact(contact(contact(E,T),A),G),T),H)

contact(contact(contact(contact(ET,A),G),T),H)

contact(contact(contact(ETA,G),T),H)

contact(contact(ETAG,T),H)

contact(ETAGT,H)

ETAGTH

5 votes -- Ankit Gupta ( 13.2k points)
Selected Answer
while(m++ < n)

Here, the post-increment operator is used on variable $m$: first the condition $m<n$ is checked, and then $m$ is incremented as $m=m+1$.

So, for $n=2$ , first we will go in the loop.

So, $t = 1*(\frac{-x}{1})$ and $y = 0 + (1*(\frac{-x}{1})) = \frac{-x}{1}$

Now, we check the condition $(1<2)$. Since it is true, we go into the loop again and also increment the value of $m$. So, now $m=2$.

Now again, in the body of the loop,

$t = (1*(\frac{-x}{1}))* (\frac{-x}{2})$ and $y = (1*(\frac{-x}{1})) + (1*(\frac{-x}{1})*(\frac{-x}{2})) = (-1)^{1}*\frac{x^{1}}{1} + (-1)^{2}*\frac{x^{2}}{1*2}$

Continuing this way, at some point we check whether $((n-1) < n)$ for the value $m=n-1$. Since it is true, we go into the loop again and increment $m$. So, now $m=n$.

So, Now , $y = 0 + \frac{(-1)^{1}*(x^{1})}{1} + \frac{(-1)^{2}*x^{2}}{1*2} + \frac{(-1)^{3}*x^{3}}{1*2*3}+....+\frac{(-1)^{n}*x^{n}}{1*2*3*...*n}$ = $\sum_{i=1}^{n} \frac{ (-1)^{i}*x^{i}}{i!}$

Now again, we will check the condition. Since $(n<n)$ is false. So, we will not go in loop again and print the value of $y$ which is $\sum_{i=1}^{n} \frac{ (-1)^{i}*x^{i}}{i!}$
7 votes -- Ankit Gupta ( 13.2k points)
Selected Answer

For $N=9$, it returns $3$.

For $N=10$ it returns $4$.

For $N=16$ it returns $4$.

For $N=17$ it returns $5$.

So answer should be C.

14 votes -- Taymiyyah Bhat ( 2.3k points)
Selected Answer
The answer is simply $A$, i.e., it exchanges the values of the two. Take any two values for $A$ and $B$ and perform the given operations on them.
22 votes -- Gate Keeda ( 15.8k points)
Selected Answer
Answer: $x^n$, computed by exponentiation by squaring.
12 votes -- Ankit Rokde ( 6.9k points)
Selected Answer

The value of $X$ remains unchanged ($X = -10$), as the outer if condition $X > Y$ is false.

Answer is C. This is a classic example of the dangling-else issue: in both C and Pascal, an $else$ always matches the closest unmatched $if$.
https://en.wikipedia.org/wiki/Dangling_else

if (x>y)
{
   if (x<0)
       x=abs(x)
    else
       x=2*x
}
References
26 votes -- Akash Kanase ( 35.2k points)
Selected Answer

Answer: C

Let $X = 3$ and $Y = 7$.
1st pass: $X = 3$, $Y = 4$
2nd pass: $X = 3$, $Y = 1$
3rd pass: $X = 2$, $Y = 1$
4th pass: $X = 1$, $Y = 1$
write $(X)$, which writes $1$.

Ref: https://en.wikipedia.org/wiki/Euclidean_algorithm

References
15 votes -- Rajarshi Sarkar ( 27.6k points)
Selected Answer
  1. $GET(3, 2) = 3$
  2. The loop bounds should be:
begin
    for I:=2 to N do
        for J:=1 to (I-1) do
        begin
            TMP:= A[I, J];
            A[I, J]:= A[J, I];
            A[J, I]:= TMP
        end
end

With these bounds each pair $(I, J)$, $J < I$, is swapped exactly once; the original bounds swapped most pairs twice, undoing the transpose.

10 votes -- papesh ( 17.9k points)
Selected Answer
Value returned by $\text{fun}(95) = \text{fun}(\text{fun}(106))$
$\qquad\qquad= \text{fun}(96)$
$\qquad\qquad= \text{fun}(\text{fun}(107))$
$\qquad\qquad= \text{fun}(97)$
$\qquad\qquad= \text{fun}(\text{fun}(108))$
$\qquad\qquad= \text{fun}(98)$
$\qquad\qquad = \text{fun}(\text{fun}(109))$
$\qquad\qquad= \text{fun}(99)$
$\qquad\qquad= \text{fun}(\text{fun}(110))$
$\qquad\qquad= \text{fun}(100) $
$\qquad\qquad= \text{fun}(\text{fun}(111)) $
$\qquad\qquad= \text{fun}(101) = 91.$

Correct Answer: $C$
32 votes -- Digvijay ( 44.8k points)
Selected Answer
Consider $a = b = c$ (e.g., $a = b = c = 1$): the only terminating return requires $(a \geq b) \wedge (c < b)$, and $c < b$ never holds, so the function recurses forever and never returns for such inputs.

Answer D.
59 votes -- Arjun Suresh ( 328k points)
Selected Answer

Answer is A.

The effect of the above $3$ reversals, for any $k$, is equivalent to a left rotation of the array of size $n$ by $k$ positions.

Let , $S[1\ldots7] = \begin{array}{|c|c|c|c|c|c|c|}\hline1&2&3&4&5&6&7\\ \hline\end{array}$
So, $n=7,k = 2$

reverse $(S,1,2)$ we get $[2,1,3,4,5,6,7]$

reverse $(S,3,7)$ we get $[2,1,7,6,5,4,3]$

reverse $(S,1,7)$ we get $[3,4,5,6,7,1,2]$

Hence, option $(A)$ rotates $s$ left by $k$ positions and is correct.

38 votes -- Kalpna Bhargav ( 2.5k points)
Selected Answer

A simplified version of the given program can be written as:

float f(float x, int y) {
    float p=1, s=1; 
    int i;
    for (i=1; i<y; i++) {
        p = p * (x/i);
        s = s + p;
    }
    return s;
}

We can move the $p=1, s=1$ initialization out of the for loop header because the loop condition does not involve $p$ or $s$.
$$\begin{array}{|l|l|l|} \hline \text{$i$} & \text{$p= p $*$ (x/i)$} & \text{$s=s+p$}\\\hline \text{1} & \text{$x$} & \text{$1+x$} \\\hline \text{2} & \text{$\frac{x^{2}}{2}$} & \text{$1+x+\frac{x^{2}}{2}$}\\\hline \text{3} & \text{$\frac{x^{3}}{6}$} & \text{$1+x+\frac{x^{2}}{2}+\frac{x^{3}}{6}$} \\\hline  \text{4} & \text{$\frac{x^{4}}{24}$} & \text{$1+x+\frac{x^{2}}{2}+\frac{x^{3}}{6}+\frac{x^{4}}{24}$}\\\hline \text{n} & \text{$\frac{x^{n}}{n!}$} & \text{$e^x$} \\\hline \end{array}$$

$$\begin{array}{|l|} \hline  \text{$e^x = \displaystyle{\sum_{n=0}^{\infty} \frac{x^n}{n!}}  = 1+x+\frac{x^{2}}{2}+\frac{x^{3}}{6} +\frac{x^{4}}{24}+ \ldots+\frac{x^{n}}{n!}$} \\\hline \end{array}$$

Hence, option B is answer.

35 votes -- Rajesh Pradhan ( 18.8k points)
Selected Answer

For each $k$ from $2$ to $2\lceil \log_2 n\rceil$, the nested loop takes every $j \leq n$ that is not a multiple of $k$ and makes the corresponding entry in $A$ equal to $1$. For example, for $k = 2$ and $n = 10$: $A[3], A[5], A[7]$, and $A[9]$ are made $1$. Similarly for $3$, $4$, $\ldots$ up to $2\lceil \log_2 n\rceil$. So, an entry $A[p]$ can stay $0$ only if $p$ is a multiple of every $k$ in $\{2, 3, \ldots, 2\lceil \log_2 n\rceil\}$, i.e., a multiple of their least common multiple, which is greater than $n$. So, for no index $p$ is $A[p]$ equal to $0$, and nothing is printed. So, answer is D

Suppose the line

A[j] = A[j] || (j%k);

is replaced with

A[j] = A[j] || !(j%k);

Now, the nested loop takes all integers $k$ from $2$ to $2\lceil \log_2 n \rceil$, takes all their multiples up to $n$, and makes the corresponding entries in $A$ equal to $1$. For example, for $2$, and $n = 10$: $A[4], A[6], A[8]$ and $A[10]$ are made $1$. Similarly for $3, 4, \ldots$ up to $2\lceil \log_2 n \rceil$. So, for all non-prime indices of $A$, we will have a $1$, and for prime indices we have a $0$. And we print $j$ if $A[j]$ is $0$, meaning $j$ is prime. 

29 votes -- Arjun Suresh ( 328k points)
Selected Answer

It is an algorithm for $\gcd$ computation.

Here, while loop executes until $m=n$.
We can test by taking any two numbers as $m,n$.

Answer will be (C).

Ref: https://www.khanacademy.org/computing/computer-science/cryptography/modarithmetic/a/the-euclidean-algorithm

References
11 votes -- srestha ( 83.4k points)
Selected Answer
By putting $y = m/x$ into $x = ( x + y )/2$
$\quad x= ( x + m/x ) /2$

$\implies 2x^2= x^2 + m$
$\implies x = m^{1/2}$

We can also verify by substituting $2$ or $3$ different values.

Correct Answer: $C$
46 votes -- gate_asp ( 619 points)
Selected Answer

Option is D.

$foo$ is printing the lowest digit. But the $printf$ inside it is after the recursive call. This forces the output to be in reverse order

$2, 0, 4, 8$

The final value $sum$ printed will be $0$ as $C$ uses pass by value and hence the modified value inside $foo$ won't be visible inside $main$.

22 votes -- anshu ( 2.6k points)
Selected Answer

See the following calling sequence:

$f(20,1) = f(10,2) - 1$
$f(10,2) = f(5,4) - 2$
$f(5,4) = f(2,8) + 4$
$f(2,8) = f(1,16) - 8$
$f(1,16) = f(0,32) + 16 = 16$

Unwinding: $f(2,8) = 8$, $f(5,4) = 12$, $f(10,2) = 10$, $f(20,1) = 9$.

Hence, answer is option C.

23 votes -- Rajesh Pradhan ( 18.8k points)
Selected Answer

Option (D)

In the given algorithm the for loop contains a logical expression

 z[i] = (x[i] ∧ ~y[i]) ∨ (~x[i] ∧ y[i]);

The equivalent set representation of a given logical expression if we assume $z[i] = Z, x[i] = X, y[i] = Y$ then 

$Z = ( X \wedge \neg Y ) \vee ( \neg X \wedge  Y)$

$\implies Z = ( X - Y ) \cup ( Y - X )  [\because A \wedge \neg B = A - B]$

22 votes -- Prasanna Ranganathan ( 3.9k points)
Selected Answer

The while loop adds elements from $a$ and $b$ (whichever is smaller) to $c$ and terminates when either of them exhausts. So, when loop terminates either $i = n$ or $j = m$. 

Suppose $i = n$. This would mean all elements from array $a$ are added to $c => k$ must be incremented by $n$. $c$ would also contain $j$ elements from array $b$. So, number of elements in $c$ would be $n+j$ and hence $k = n + j$. 

Similarly, when $j = m$, $k = m + i$. 

Hence, option (D) is correct. (Had $k$ started from $-1$ and not $0$ and we used $++k$ inside loop, answer would have been option (C))

64 votes -- Arjun Suresh ( 328k points)
Selected Answer

Answer: C

Because $\binom{m}{0}$ = $1$ and $\binom{n}{n}$ = $1$.

25 votes -- Rajarshi Sarkar ( 27.6k points)
Selected Answer

Answer: D

The function prints the binary equivalent of the number $n$.

Binary equivalent of $173$ is $10101101$.

18 votes -- Rajarshi Sarkar ( 27.6k points)
Selected Answer
Answer: C

Both the given code fragment and P1 print the binary equivalent of the number n.

P2 prints the binary equivalent of the number n in reverse.
20 votes -- Rajarshi Sarkar ( 27.6k points)

Here, $P1$ and $P2$ print in opposite order.

The given code fragment prints like $P1$ and not like $P2$.

Hence, answer will be (C).

26 votes -- srestha ( 83.4k points)
Selected Answer

The answer is $B$.

Let the address of $x$ be $1000.$

  1. $f(5,1000)$ $=$ $8$
  2. $f(4,1000)$ $=$ $5$
  3. $f(3,1000)$ $=$ $3$
  4. $f(2,1000)$ $=$ $2$
  5. $f(1,1000)$ $=$ $1$.


The evaluation proceeds from $5$ down to $1$, since recursion is used.

18 votes -- Gate Keeda ( 15.8k points)
Selected Answer
Suppose $int$ array takes $4$ bytes for each element and stored at base address $100$.

Tracing the recursive calls and adding the return values level by level gives $15$.

So, $15$ is the answer.

Correct Answer: $C$
14 votes -- Rajesh Pradhan ( 18.8k points)
It will print
$12 + ( 7 - (13 - (4 + (11 - ( 6 + 0)))))$
$\quad = 12 + (7 - (13 - ( 4 + ( 11 -6)))))$
$\quad= 12 + 7 - 13 + 9$
$\quad= 15$
24 votes -- gatecse ( 18.8k points)
Selected Answer

$\text{foo}(345, 10)$ returns the sum of the decimal digits of $345$: $3 + 4 + 5 = 12$.

Answer is 12.

19 votes -- Rajesh Pradhan ( 18.8k points)
Selected Answer
The function returns the sum of the digits in the binary representation of the given number,

so $1+0+0+0+0+0+0+0+0+1 = 2$

Correct Answer: $D$
21 votes -- Sandeep_Uniyal ( 6.4k points)
Selected Answer
The outer loop is running for $n/2$ times and inner loop is running for $\log_2 n$ times (each iteration doubles $j$ and $j$ stops at $n$ means $\log_2 n$ times $j$ loop will iterate).

Now, in each iteration $k$ is incremented by $n/2$. So, overall $k$ gets $(n/2) \cdot \log n$ increments of $n/2$ each, starting from an initial value of $0$. So, the final value of $k$ is $\Theta(n^2 \log n)$

Correct Answer: $B$
58 votes -- Arjun Suresh ( 328k points)
Selected Answer

Answer is (A) maximum possible sum of elements in any sub-array of array E.

int MyX ( int * E, unsigned int size )
{
    int Y= 0;
    int z;
    int i, j,k;
   
//calculate the sum of all the elements of array E and store it in Y
    for (i = 0; i < size; i++)   
      Y = Y + E[i]; 
      
//calculate the sum of all possible subarrays 
//(starting from positions 0..n-1)  
    for (i=0;i<size;i++)        
        for(j=i;j<size ;j++)       
        {                                    
            z = 0;                    
            for(k=i; k<=j;k++)  
                z=z+E[k]; 
                
//check whether the sum of elements of each subarray is greater
//than the current max; if so, assign it to the current max
            if(z>Y)           
                Y = z;
        }
//ultimately returns the maximum possible sum of elements 
//in any sub array of given array E    
    return Y;  
}

 

39 votes -- Kalpna Bhargav ( 2.5k points)
Selected Answer

Answer is $9$.

$435 = (110110011)_2$

num $>>=$ $1$; shifts num one bit to the right in every iteration of the while loop. The loop body executes $9$ times; on the $10$th check, num is zero.

So count is incremented $9$ times.

Note:

Shifting a number one bit position to the right has the effect of dividing it by $2$:

8 >> 1 = 4    // In binary: (00001000) >> 1 = (00000100)
32 votes -- Prasanna Ranganathan ( 3.9k points)
Selected Answer

Answer: A.

In the given pseudocode, each upper-triangular element of matrix $A$ is first interchanged with its mirror image in the lower triangle; when the loops later reach the mirrored index pair, the same pair is interchanged back. As a result, the final computed matrix is the same as the input matrix $A$.

30 votes -- Gate Keeda ( 15.8k points)
Selected Answer

The $i$ loop executes $n$ times. The $j$ loop executes $\log n$ times for each $i$, so the value of $p$ is $\log n$. The $k$ loop executes $\log p$ times, which is $\log \log n$ times, for each iteration of $i$. In each of these, $q$ is incremented. So, over all iterations of $i$, $q$ is incremented $n \log \log n$ times. So, D choice. 

70 votes -- Arjun Suresh ( 328k points)
Selected Answer

$\text{fun}(1) = 1$;

$\text{fun}(2) = 1 + \text{fun}(1) * \text{fun}(1) = 1 + 1 = 2$;

$\text{fun}(3) = 1 + \text{fun}(1) * \text{fun}(2) + \text{fun}(2) * \text{fun}(1) = 5$;

$\text{fun}(4) = 1 + \text{fun}(1) * \text{fun}(3) + \text{fun}(2) * \text{fun}(2) + \text{fun}(3) * \text{fun}(1)$
$\qquad = 1 + 5 + 4 + 5 = 15$;

$\text{fun}(5) = 1 + \text{fun}(1) * \text{fun}(4) + \text{fun}(2) * \text{fun}(3) + \text{fun}(3) * \text{fun}(2) + \text{fun}(4) * \text{fun}(1)$
$\qquad = 1 + 15 + 10 + 10 + 15  = 51$;


More formal way:

The recurrence relation is

$f(n) = \begin{cases} 1, n = 1 \\ 1 + \sum_{i=1}^{n-1} f(i) \times f(n-i), n > 1 \end{cases}$

$f(1) = 1$

$f(2) = 1+ f(1).f(1)$
$\qquad = 1 + 1.1 = 2$

$f(3) = 1 + f(1). f(2) + f(2).f(1)$
$\qquad= 1+ 1.2 + 2.1 = 5$

$f(4) = 1 + f(1).f(3) + f(2).f(2) + f(3).f(1)$
$ \qquad= 1+ 1.5 + 2.2 + 5.1 = 15$

$f(5) = 1 + f(1).f(4) + f(2).f(3) + f(3). f(2)+ f(4).f(1)$
$\qquad = 1 + 1.15 + 2.5 + 5.2 + 15.1 = 51$
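The recurrence can be checked directly with a small recursive function (a sketch of the recurrence itself, not necessarily the exact code from the question):

```c
/* Direct implementation of the recurrence:
   f(1) = 1, f(n) = 1 + sum_{i=1}^{n-1} f(i) * f(n-i). */
int fun(int n)
{
    if (n == 1)
        return 1;
    int sum = 1;
    for (int i = 1; i < n; i++)
        sum += fun(i) * fun(n - i);
    return sum;
}
```

Evaluating it reproduces the hand computation above, e.g. $f(5) = 51$.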

91 votes -- Arjun Suresh ( 328k points)
Selected Answer
Initially $k = 4$, $c = [1, 0, 1, 1]$, $a = 2$, $n = 8$.

Now let's iterate through the function step by step :

$z = 1$ (at the start of do-something)

$i = 0$ (start of external for loop)

In the do loop

$z = 1*1 % 8 = 1$ (non zero value so considered as true and continue)

$c[0] = 1$, so in the if clause, $z = 1*2 \% 8 = 2$

In the do loop

$z = 2*2 \% 8 = 4$ (since now $z = 2$) (non zero value so considered as true and continue)

$c[0] = 1$, so in the if clause, $z = 4*2 \% 8 = 0$

Now no need to check further :

Reason: all the operations that update $z$ are multiplicative, so once $z$ becomes $0$ its value will never change from $0$.
33 votes -- Tamojit Chatterjee ( 1.9k points)
Selected Answer

As per the question, the function convert needs to be called with a positive integer n as argument.

So, when a positive int is passed as argument, the function will compute $(n/2)$, the quotient of integer division.

Again, this result of integer division will be passed through the next recursive call, and so on. Each time the number is divided by $2$ and gradually becomes smaller, eventually reaching $0$, but it never becomes less than $0$.

Hence, the program will not print anything (as after every function call, the value passed is never less than $0$, so the terminating condition is never met) and will not terminate.

PS: This being a C function, when run on a computer each recursive call requires the creation of an activation record, consuming memory space. So, eventually the stack memory will run out and the program terminates. But such an option is not there. D is the best option here.

Answer (D)

16 votes -- Sayan Bose ( 4.9k points)
It prints $\gcd(x,y)$ and $\text{lcm}(x,y)$.

Consider $x,y,u,v=17,3,3,17$.

$x=14$, $v=20$

$x=11$, $v=23$

$x=8$, $v=26$

$x=5$, $v=29$

$x=2$, $v=32$

$y=1$, $u=35$

$x=1$, $v=67$

These are the final values obtained.

Lastly, printing $(x+y)/2$ and $(v+u)/2$ gives $1$ and $51$.
9 votes -- zambus ( 291 points)
Selected Answer

Option C. It returns the number of $1$'s in the binary representation of $n$.
Here, $n$&$(n-1)$ resets the rightmost set bit of $n$ in each iteration.

For example,

Suppose $n=15=00001111$ (binary)

$n-1=14\;(00001110)$

  $00001111$
& $00001110$
---------------------
   $00001110$
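The loop described above can be sketched as follows (Kernighan's method; `count_ones` is an illustrative name):

```c
/* Kernighan's bit-counting: n & (n-1) clears the lowest set bit,
   so the loop body runs exactly once per 1-bit in n. */
int count_ones(unsigned int n)
{
    int count = 0;
    while (n) {
        n = n & (n - 1);  /* drop the rightmost set bit */
        count++;
    }
    return count;
}
```

For $n = 15$ the loop runs $4$ times, once per set bit.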

8 votes -- Avdhesh Singh Rana ( 2.3k points)
Selected Answer

Option D is correct.

Here the list is $(48, 99, 120, 165, 273)$.

$\gcd(48,99)=3$, which means that if we subtract ($99-48=51$), the result is also divisible by $3$.

So the numbers $(3,6,9,\ldots,99)$ are added. Total numbers $=99/3=33$.

Similarly, $\gcd(48,120)=24$, so the numbers divisible by $24$ are added, i.e., $(24,48,\ldots,120)$. Total numbers $=120/24=5$.

Similarly, $\gcd(48,165)=3$, so the numbers $(3,6,9,\ldots,24,\ldots,48,\ldots,99,\ldots,120,\ldots,165)$ are added. Total: $165/3=55$.

At the end, $\gcd(48,273)=3$, so the numbers $(3,6,9,\ldots,24,\ldots,48,\ldots,99,\ldots,120,\ldots,165,\ldots,273)$ are added (which covers all the above numbers).

So the total count of numbers added to this list $=273/3=91$.

19 votes -- venky.victory35 ( 719 points)
Selected Answer
1.for i=1 to n:
2.    for j=1 to n:
3.        temp=A[i][j]+10
4.        A[i][j]=A[j][i]
5.        A[j][i]=temp-10
6.    end for
7.end for

Lines $3$, $4$ and $5$ swap $A[j][i]$ and $A[i][j]$.

The same pair of variables is swapped twice. For example, when $i = 5$, $j = 10$, $A[10][5]$ and $A[5][10]$ are swapped; they are swapped again when $i = 10$, $j = 5$.

Swapping the same elements twice leaves $A$ unchanged.

Hence, E is correct.

9 votes -- tarun_svbk ( 1.3k points)
The minimum number of comparisons required to find the minimum and the maximum of $100$ numbers is ________

Given a set of $n$ distinct numbers, we would like to determine both the smallest and the largest number. Which of the following statements is TRUE?

  1. These two elements can be determined using $O\left(\log^{100}n\right)$ comparisons.
  2. $O\left(\log^{100}n\right)$ comparisons do not suffice, however these two elements can be determined using $n + O(\log n)$ comparisons.
  3. $n+O(\log n)$ comparisons do not suffice, however these two elements can be determined using $3\lceil n/2 \rceil$ comparisons.
  4. $3\lceil n/2 \rceil$ comparisons do not suffice, however these two elements can be determined using $2(n - 1)$ comparisons.
  5. None of the above.

Consider the problem of computing the minimum of a set of $n$ distinct numbers. We choose a permutation uniformly at random (i.e., each of the n! permutations of $\left \langle 1,....,n \right \rangle$ is chosen with probability $(1/n!)$ and we inspect the numbers in the order given by this permutation. We maintain a variable MIN that holds the minimum value seen so far. MIN is initialized to $\infty$ and if we see a value smaller than MIN during our inspection, then MIN is updated. For example, in the inspection given by the following sequence, MIN is updated four times.

    $5$ $9$ $4$ $2$ $6$ $8$ $0$ $3$ $1$ $7$

What is the expected number of times MIN is updated?

  1. $O (1)$
  2. $H_{n}=\sum ^{n}_{i=1} 1/i$
  3. $\sqrt{n}$
  4. $n/2$
  5. $n$

Given a set of $n$ distinct numbers, we would like to determine the smallest three numbers in this set using comparisons. Which of the following statements is TRUE?

  1. These three elements can be determined using $O\left(\log^{2}n\right)$ comparisons.
  2. $O\left(\log^{2}n\right)$ comparisons do not suffice, however these three elements can be determined using $n+O(1)$ comparisons.
  3. $n + O(1)$ comparisons do not suffice, however these three elements can be determined using $n +O(\log n)$ comparisons.
  4. $n+O(\log n)$ comparisons do not suffice, however these three elements can be determined using $O(n)$ comparisons.
  5. None of the above.

Answers: Minimum Maximum

Selected Answer

We can solve this question by using Tournament Method Technique -

1. To find the smallest element in the array will take $n-1$ comparisons $= 99.$

2. To find the largest element - 

  1. After the first round of the tournament, there will be exactly $n/2$ numbers $= 50$ that lose their round.
  2. So, the largest number must be among these $50$ losers. Finding the largest among them will take $n/2 - 1$ comparisons $= 49.$

Total $99+49 = 148.$
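The same $3\lceil n/2 \rceil - 2$ comparison count can be reached with the pairwise variant of the tournament method (a sketch, not the exact procedure in the answer): compare the two elements of each pair with each other, then the smaller with the running minimum and the larger with the running maximum, i.e., $3$ comparisons per pair after the first.

```c
static int comparisons;   /* global comparison counter for verification */

static int less_than(int a, int b) { comparisons++; return a < b; }

/* Pairwise min-max: 3 comparisons per pair after the first,
   giving 3n/2 - 2 comparisons when n is even. */
void min_max(const int *a, int n, int *mn, int *mx)
{
    int i;
    if (n % 2) {                          /* odd: seed with first element */
        *mn = *mx = a[0];
        i = 1;
    } else {                              /* even: seed with first pair */
        if (less_than(a[0], a[1])) { *mn = a[0]; *mx = a[1]; }
        else                       { *mn = a[1]; *mx = a[0]; }
        i = 2;
    }
    for (; i + 1 < n; i += 2) {
        int lo = a[i], hi = a[i + 1];
        if (less_than(hi, lo)) { lo = a[i + 1]; hi = a[i]; }
        if (less_than(lo, *mn)) *mn = lo;   /* smaller vs running min */
        if (less_than(*mx, hi)) *mx = hi;   /* larger vs running max */
    }
}
```

For $n = 100$ this performs $1 + 49 \times 3 = 148$ comparisons, matching the answer.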

73 votes -- Harsh181996 ( 3.3k points)
Selected Answer

I think answer will be C.

To be accurate, it will need $\lceil 3n/2 \rceil - 2$ comparisons.

10 votes -- Pranay Datta ( 7.7k points)
Selected Answer

Let us consider $3$ numbers $\{1,2,3\}$

We will consider each permutation along with the number of times MIN is updated.
$$\begin{array}{c|c} \hline \textbf{Permutation} & \textbf{Number of times} \\
& \textbf{ MIN is updated} \\\hline
 1,2,3 & 1 \\ 1,3,2 & 1 \\ 2,1,3 & 2\\ 2,3,1 & 2 \\ 3,1,2 & 2 \\ 3,2,1 & 3   \end{array}$$
Total number of times MIN is updated: $11$.

Average number of times MIN is updated: $11/6$.

Going by the options, I get B.

$H_3 = 1 + 1/2 + 1/3 = 11/6$ .

$H_3$ is the answer and that is option B .
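The table above can be verified exhaustively. The sketch below (illustrative, not from the answer) enumerates all $n!$ permutations by recursive swapping and sums the MIN updates; for $n = 3$ the total over the $3! = 6$ permutations should be $11$.

```c
#include <limits.h>

static long total_updates;   /* MIN updates summed over all permutations */

/* Count MIN updates for one permutation: MIN starts at "infinity"
   and is updated whenever a smaller value is seen. */
static void count_updates(const int *a, int n)
{
    int min = INT_MAX;
    for (int i = 0; i < n; i++)
        if (a[i] < min) { min = a[i]; total_updates++; }
}

/* Enumerate all n! permutations of a[0..n-1] by recursive swapping. */
static void permute(int *a, int k, int n)
{
    if (k == n) { count_updates(a, n); return; }
    for (int i = k; i < n; i++) {
        int t = a[k]; a[k] = a[i]; a[i] = t;   /* choose a[i] for slot k */
        permute(a, k + 1, n);
        t = a[k]; a[k] = a[i]; a[i] = t;       /* undo the swap */
    }
}
```

Dividing the total by $n!$ gives the average, $11/6 = H_3$ for $n = 3$.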

28 votes -- Riya Roy(Arayana) ( 5.3k points)
Selected Answer

Option (C) is correct. Reason is as follows : 


Here, at first level we are Given $n$ elements, out of which we have to find smallest $3$ numbers.

We compare $2 -2$ elements as shown in figure & get $n/2$ elements at Second level.

Note: Minimum element is in these $n/2$ elements.

So, comparisons for this is $n/2$.

Similarly for next level we have $n/4$ Comparisons & $n/2$ elements and so on.

Total Comparisons till now is $n/2 + n/4 + n/8 + \ldots + 4 + 2 +1  = (2^{\log n}-1) = n-1$ {Use G.P. sum}


Now we have to get smallest $3$.

We already have the $1^{\text{st}}$ smallest at the last level => $0$ comparisons for this.

=> The $2^{\text{nd}}$ and $3^{\text{rd}}$ smallest can be found in $O(\log n)$ time as shown below:


Minimum Element must have descended down from some path from top to Bottom.


=> The SQUARES represent candidates for the $2^{\text{nd}}$ minimum.

Every element that is just below $m_1$ (the first minimum) is a candidate for the second minimum.

So, $O(\log n)$ comparisons for finding the second smallest.


=> Similarly, for the $3^{\text{rd}}$ minimum we get $O(\log n)$ time, as every element that is just below the $1^{\text{st}}$ and $2^{\text{nd}}$ minimum is a candidate for the $3^{\text{rd}}$ minimum.

31 votes -- Himanshu Agarwal ( 12.2k points)

Consider the following undirected graph $G$:

Choose a value for $x$ that will maximize the number of minimum weight spanning trees (MWSTs) of $G$. The number of MWSTs of $G$ for this value of $x$ is ____.

Let $n\geq 3,$ and let $G$ be a simple, connected, undirected graph with the same number $n$ of vertices and edges. Each edge of  $G$ has a distinct real weight associated with it. Let $T$ be the minimum weight spanning tree of $G.$ Which of the following statements is NOT ALWAYS TRUE ?

  1. The minimum weight edge of $G$ is in $T.$
  2. The maximum weight edge of $G$ is not in $T.$
  3. $G$ has a unique cycle $C$ and the minimum weight edge of $C$ is also in $T.$
  4. $G$ has a unique cycle $C$ and the maximum weight edge of $C$ is not in $T.$
  5. $T$ can be found in $O(n)$ time from the adjacency list representation of $G.$

How many distinct minimum weight spanning trees does the following undirected, weighted graph have ?

  1. $8$
  2. $16$
  3. $32$
  4. $64$
  5. None of the above

Answers: Minimum Spanning Trees

Selected Answer

The number of possible MSTs increases when we have multiple edges with the same weight.

To maximize the number of MSTs, $x$ should be $5.$
In the question, the number of MSTs is asked for this value of $x$.

So, the number of MSTs $= 2\times 2 = 4$  (Answer)
$($Because one edge of weight $4$ forms a cycle and can't be included in any way. From the two remaining edges of weight $4$ and the two of weight $5$, we can select one of each in $2\times 2 = 4$ ways.$)$
30 votes -- Ahwan Mishra ( 10k points)
Selected Answer

Take two random graphs with the same number of vertices and edges.

Now check the options one by one:

  1. The minimum weight edge of $G$ is in $T.$ Always true.
  2. The maximum weight edge of $G$ is not in $T.$ True in the 1st graph but false in the 2nd graph: the maximum weight edge will be in $T$ if it is not part of the cycle.
  3. $G$ has a unique cycle $C$ and the minimum weight edge of $C$ is also in $T.$ Always true.
  4. $G$ has a unique cycle $C$ and the maximum weight edge of $C$ is not in $T.$ Always true.
  5. $T$ can be found in $O(n)$ time from the adjacency list representation of $G.$ Always true. The given graph is actually a tree with one extra edge. So, using BFS we can find the cycle in the graph and just eliminate the maximum weight edge in it. Time complexity will be $O(n).$

Hence, B is the answer.

5 votes -- Anu007 ( 13.9k points)
Selected Answer
There are $64$ minimum spanning trees possible.

There are two choices for the edge of weight $4$ in each of the four outer triangles, i.e., $\binom{2}{1}\cdot\binom{2}{1}\cdot\binom{2}{1}\cdot\binom{2}{1}=2^4$, and two choices for the edge of weight $2$ in each of the two inner triangles, i.e., $\binom{2}{1}\cdot\binom{2}{1}=2^2$.

$2^4\cdot 2^2=64$
4 votes -- Manoja Rajalakshmi Aravindakshan ( 7.7k points)

Choose the correct alternatives (more than one may be correct) and write the corresponding letters only:

Which of the following problems is not $NP$-hard?

  1. Hamiltonian circuit problem
  2. The $0/1$ Knapsack problem
  3. Finding bi-connected components of a graph
  4. The graph coloring problem

Ram and Shyam have been asked to show that a certain problem $\Pi$ is NP-complete. Ram shows a polynomial time reduction from the $3$-SAT problem to $\Pi$, and Shyam shows a polynomial time reduction from $\Pi$ to $3$-SAT. Which of the following can be inferred from these reductions?

  1. $\Pi$ is NP-hard but not NP-complete

  2. $\Pi$ is in NP, but is not NP-complete

  3. $\Pi$ is NP-complete

  4. $\Pi$ is neither NP-hard, nor in NP

The problem $3$-SAT and $2$-SAT are 

  1. both in $P$

  2. both $NP$ complete

  3. $NP$-complete and in $P$ respectively

  4. undecidable and $NP$ complete respectively

Let S be an NP-complete problem and Q and R be two other problems not known to be in NP. Q is polynomial time reducible to S and S is polynomial-time reducible to R. Which one of the following statements is true?

  1. R is NP-complete
  2. R is NP-hard
  3. Q is NP-complete
  4. Q is NP-hard

The subset-sum problem is defined as follows: Given a set $S$ of $n$ positive integers and a positive integer $W$, determine whether there is a subset of $S$ whose elements sum to $W$. An algorithm $Q$ solves this problem in $O(nW)$ time. Which of the following statements is false?

  1. $Q$ solves the subset-sum problem in polynomial time when the input is encoded in unary
  2. $Q$ solves the subset-sum problem in polynomial time when the input is encoded in binary
  3. The subset sum problem belongs to the class NP
  4. The subset sum problem is NP-hard

Suppose a language $L$ is $\mathbf{NP}$ complete. Then which of the following is FALSE?

  1. $L \in \mathbf{NP}$
  2. Every problem in $\mathbf{P}$ is polynomial time reducible to $L$.
  3. Every problem in $\mathbf{NP}$ is polynomial time reducible to $L$.
  4. The Hamilton cycle problem is polynomial time reducible to $L$.
  5. $\mathbf{P} \neq \mathbf{NP}$ and $L \in \mathbf{P}$.

Given an integer $n\geq 3$, consider the problem of determining if there exist integers $a,b\geq 2$ such that $n=a^{b}$. Call this the forward problem. The reverse problem is: given $a$ and $b$, compute $a^{b}$ (mod b). Note that the input length for the forward problem is $\lfloor\log n\rfloor + 1$, while the input length for the reverse problem is $\lfloor\log a\rfloor + \lfloor\log b\rfloor + 2$. Which of the following statements is TRUE?

  1. Both the forward and reverse problems can be solved in time polynomial in the lengths of their respective inputs.
  2. The forward problem can be solved in polynomial time, however the reverse problem is $NP$-hard.
  3. The reverse problem can be solved in polynomial time, however the forward problem is $NP$-hard.
  4. Both the forward and reverse problem are $NP$-hard.
  5. None of the above.

This question concerns the classes $P$ and $NP.$ If you are familiar with them, you may skip the definitions and go directly to the question.
Let $L$ be a set. We say that $L$ is in $P$ if there is some algorithm which given input $x$ decides if $x$ is in $L$ or not in time bounded by a polynomial in the length of $x.$ For example, the set of all connected graphs is in $P,$ because there is an algorithm which, given a graph, can decide if it is connected or not in time roughly proportional to the number of edges of the graph.

The class $NP$ is a superset of class $P.$ It contains those sets that have membership witnesses that can be verified in polynomial time. For example, the set of composite numbers is in $NP.$ To see this, take the witness for a composite number to be one of its divisors. Then the verification process consists of performing just one division using two reasonable size numbers. Similarly, the set of those graphs that have a Hamilton cycle, i.e. a cycle containing all the vertices of the graph, is in $NP.$ To verify that the graph has a Hamilton cycle we just check if the witnessing sequence of vertices is indeed a cycle of the graph that passes through all the vertices of the graph. This can be done in time that is polynomial in the size of the graph.

More precisely, if $L$ is a set in $P$ consisting of elements of the form $(x, w),$ then the set $$M = \{x : \exists w, |w| \leq |x|^k\text{ and }(x, w) \in L\}$$
is in $NP.$
Let $G = (V, E)$ be a graph. $G$ is said to have perfect matching if there is a subset $M$ of the edges of $G$ so that

  1. No two edges in $M$ intersect (have a vertex in common); and
  2. Every vertex of $G$ has an edge in $M.$

Let $\text{MATCH}$ be the set of all graphs that have a perfect matching. Let $\overline{\text{MATCH}}$ be the set of graphs that do not have a perfect matching. Let $o(G)$ be the number of components of $G$ that have an odd number of vertices.

Tutte’s Theorem: $G \in \text{MATCH}$ if and only if for all subsets $S$ of $V,$ the number of components in $G − S$ (the graph formed by deleting the vertices in $S)$ with an odd number of vertices is at most $|S|.$ That is, $$G \in \text{MATCH} \leftrightarrow \forall S \subseteq V,\; o(G − S) \leq |S|.$$

Which of the following is true?

  1. $\text{MATCH} ∈ NP\text{ and }\overline{\text{MATCH}} \notin NP$
  2. $\overline{\text{MATCH}} ∈ NP\text{ and }\text{MATCH} \notin NP$
  3. $\text{MATCH} ∈ NP\text{ and }\overline{\text{MATCH}} ∈ NP$
  4. $\text{MATCH} \notin P\text{ and } \overline{\text{MATCH}} \notin P$
  5. none of the above

Which of the following is not implied by $P=NP$?

  1. $3$SAT can be solved in polynomial time.
  2. Halting problem can be solved in polynomial time.
  3. Factoring can be solved in polynomial time.
  4. Graph isomorphism can be solved in polynomial time.
  5. Travelling salesman problem can be solved in polynomial time.

A multivariate polynomial in $n$ variables with integer coefficients has a binary root if it is possible to assign each variable either 0 or 1, so that the polynomial evaluates to 0. For example, the multivariate polynomial $-2x_1^3 -x_1x_2+2$ has the binary root $(x_1=1, x_2=0)$. Then determining whether a multivariate polynomial, given as the sum of monomials, has a binary root:

  1. is trivial: every polynomial has a binary root
  2. can be done in polynomial time
  3. is NP-hard, but not in NP
  4. is in NP, but not in P and not NP-hard
  5. is both in NP and NP-hard

Consider the following statements:

  1. Checking if a given $undirected$ graph has a cycle is in $\mathsf{P}$
  2. Checking if a given $undirected$ graph has a cycle is in $\mathsf{NP}$
  3. Checking if a given $directed$ graph has a cycle is in $\mathsf{P}$
  4. Checking if a given $directed$ graph has a cycle is in $\mathsf{NP}$

Which of the above statements is/are TRUE? Choose from the following options.

  1. Only i and ii
  2. Only ii and iv
  3. Only ii, iii, and iv
  4. Only i, ii and iv
  5. All of them

A formula is said to be a $3$-CF-formula if it is a conjunction (i.e., an AND) of clauses, and each clause has at most $3$ literals. Analogously, a formula is said to be a $3$-DF-formula if it is a disjunction (i.e., an OR) of clauses of at most $3$ literals each.

Define the languages $3$-CF-SAT and $3$-DF-SAT as follows:

$$3 \text{-CF-SAT}=\{ \Phi \mid \Phi \text{ is a } \textit{ satisfiable } 3\text{-CF formula} \}$$

$$3\text{-DF-SAT}=\{ \Phi \mid  \Phi \text{ is a } \textit{satisfiable } 3\text{-DF formula} \}$$

Which of the following best represents our current knowledge of these languages ?

  1. Both $\text{3-CF-SAT}$ and $\text{3-DF-SAT}$ are in NP but only $\text{3-CF-SAT}$ is NP-complete
  2. Both $\text{3-CF-SAT}$ and $\text{3-DF-SAT}$ are in NP-complete
  3. Both $\text{3-CF-SAT}$ and $\text{3-DF-SAT}$ are in P
  4. Both $\text{3-CF-SAT}$ and $\text{3-DF-SAT}$ are in NP but only $\text{3-DF-SAT}$ is NP-complete
  5. Neither $\text{3-CF-SAT}$ nor $\text{3-DF-SAT}$ are in P

Answers: P Np Npc Nph

Selected Answer
  1. Is NPC and hence NP hard.
  2. Is again NP hard (optimization version is NP hard and decision version is NPC). Ref: http://stackoverflow.com/questions/3907545/how-to-understand-the-knapsack-problem-is-np-complete
  3. Is in P. See the algorithm here based on DFS: http://en.wikipedia.org/wiki/Biconnected_component
  4. NPC and hence NP hard. 

Correct Answer: $C$

11 votes -- Arjun Suresh ( 328k points)
Selected Answer
C. For a problem to be NP-Complete, it must be NP-hard and it must also be in NP.

Ram's reduction shows that $\Pi$ is NP hard because it must be at least as hard as $3$-SAT which is a known NP-Complete problem. Here, $\Pi$ need not be NP-Complete.

Now, Shyam's reduction shows that $3$-SAT problem is at least as hard as $\Pi$ or equivalently $\Pi$ is not harder than $3$-SAT. Since $3$-SAT is in NP, $\Pi$ must also be in NP.

Thus, $\Pi$ is NP-hard and also in NP $\implies$ $\Pi$ is NP-Complete.
13 votes -- Anurag Semwal ( 6.7k points)
Selected Answer

Answer B.

As $S$ is $NPC$ i.e $NP$-$Hard$ and $NP$.

  • We know that, if $NP$-$Hard$ problem is reducible to another problem in Polynomial Time, then that problem is also $NP$-$Hard$ which means every $NP$ problem can be reduced to this problem in Polynomial Time.

Therefore, $R$ is $NP$-$Hard$.

Now $Q$ is reduced to $S$ in polynomial time. 

  • If $Q$ is reducible to $S$ in polynomial time, $Q$ could be $NP$ because all $NP$ problems can be reduced to $S.$ Since $Q$ could be $NP$ therefore $Q$ could be $P$ also as $P$ is subset of $NP$. Also $Q$ could be $NPC$ because every $NPC$ problem can be reduced to another $NPC$ problem in polynomial time. 

So, nothing can be concluded about $Q$.

16 votes -- Mehak Sharma ( 1.3k points)
Selected Answer
The subset-sum problem is NP-Complete; there is a reduction proof (see the link below). So, (C) and (D) are true, as an NPC problem is in NP as well as NPH.
 
Now, the complexity of $Q$ is $O(nW)$, where $W$ is an integer.
  1. Input is encoded in unary. So, the length of the input equals the value of the input, and the complexity $= O(nW)$ where both $n$ and $W$ are linear in the length of the input. So, the complexity is polynomial in terms of the input length, and (A) is true.
  2. Input is encoded in binary. So, the length of $W$ will be $\lg W$ (for $W=1024$, the input length will be just $10$). Now $W$ is exponential in terms of its input length, and $O(nW)$ also becomes exponential in terms of the input length. So, $Q$ is not a polynomial time algorithm, and (B) is false.

Correct Answer: $B$

23 votes -- Arjun Suresh ( 328k points)
Selected Answer

Option E leads to a contradiction, hence is false.

We know that $L$ is $\mathbf{NPC}$, hence  $\in \mathbf{NP}$. If $\mathbf{P} \neq \mathbf{NP}$, then $L$ can't be in $\mathbf{P}$

8 votes -- Pragy Agarwal ( 18.1k points)
Selected Answer

The reverse problem can be solved in polynomial time, as computing $a^b$ requires at most $\log b$ recursive calls using the approach given below (a base case for $b = 0$ is added for correctness):

int pow(int a, int b)
{
    if (b == 0)
        return 1;
    if (b % 2)
        return a * pow(a * a, b / 2);
    else
        return pow(a * a, b / 2);
}

Now, the forward problem is also solvable in polynomial time. For every exponent $b$ from $2$ up to $\log n$, we check whether $n$ has an integer $b^{\text{th}}$ root (equivalently, checking the roots from $\sqrt n$ down to $n^{\frac{1}{\log n}}$). Each check can be done in $O(\log n)$ steps using a binary search on the integers from $2..n$, so the overall complexity is $O(\log^2 n)$, which is polynomial in $\log n$ ($\log n$ is the size of the input). So, (A) must be the answer.
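A sketch of the forward-problem check just described (function names are illustrative): for each exponent $b$ up to $\log_2 n$, binary-search for an integer $a$ with $a^b = n$.

```c
#include <limits.h>

/* Saturating integer power: returns LLONG_MAX on overflow.
   Only called with a >= 2. */
static long long ipow(long long a, int b)
{
    long long r = 1;
    while (b--) {
        if (r > LLONG_MAX / a) return LLONG_MAX;
        r *= a;
    }
    return r;
}

/* Forward problem: does there exist a, b >= 2 with a^b = n? */
int is_perfect_power(long long n)
{
    for (int b = 2; b < 63 && (1LL << b) <= n; b++) {  /* b <= log2(n) */
        long long lo = 2, hi = n;
        while (lo <= hi) {                  /* binary search on a */
            long long mid = lo + (hi - lo) / 2;
            long long p = ipow(mid, b);
            if (p == n) return 1;
            if (p < n) lo = mid + 1; else hi = mid - 1;
        }
    }
    return 0;
}
```

The outer loop runs $O(\log n)$ times and each binary search takes $O(\log n)$ iterations, matching the $O(\log^2 n)$ bound above (ignoring the cost of the multiplications themselves).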
 

8 votes -- gatecse ( 18.8k points)
Selected Answer

First of all let us see some things asked in question:

  1. We say $\text{MATCH} \in P$ if given a graph $G$ we can say in polynomial time using a deterministic Turing machine if $G$ has a perfect matching.
  2. We say $\text{MATCH} \in NP$ if given a graph $G$ we can say in polynomial time using a non-deterministic Turing machine if $G$ has a perfect matching.
    1. Equivalently given a subset $M$ of edges of $G$ we can say in polynomial time using a deterministic Turing machine, if it is a perfect matching for $G.$
    2. Similarly, we say $\overline{\text{MATCH}} \in NP$ if given a subset $M$ of edges of $G$ we can say in polynomial time using a deterministic Turing machine, if it is NOT a perfect matching for $G.$

So, we can see if point $2.1$ given above is possible. To do this for the given subset of edges $M$ we have to see

  1. No two edges in $M$ intersect (have a vertex in common); 
    Can be done just by traversing the edges and seeing for common vertex. Naive implementation can be done in $O(V|M|)$ time by marking the two vertices of each edge and see if any vertex gets marked twice which is polynomial.
  2. Every vertex of $G$ has an edge in $M.$ 
    We can do the same procedure done for above and finally see if any vertex is left unmarked. So, polynomial time possible.

If the answer to steps $1$ and $2$ are "Yes", then $M$ is a perfect matching for $G$ and we proved that $\text{MATCH} \in NP.$

If the answer to any one of steps $1$ or $2$ above is "No", then $M$ is not a perfect matching for $G$ and we proved that $\overline{\text{MATCH}} \in NP.$

Thus, both the problems are in $NP.$

Option C is correct.

Further Read: http://mathworld.wolfram.com/PerfectMatching.html 

0 votes -- Arjun Suresh ( 328k points)
Selected Answer

I believe Umang is right, option B is the correct answer.


Intractability : We are looking for EFFICIENT algorithms.

Intractable Problems are problems that are decidable, although the algorithm to decide that problem might be efficient (P) or inefficient (NP), but at least an algorithm exists to solve these problems.

Here we talk about efficient vs inefficient computations.

Thus the language of problems in P and NP classes is the language of Decidable Problems i.e. Recursive Language.


Undecidability: We are looking for algorithms.

Undecidable Problems are problems for which there is no algorithm to solve these problems.

Here we talk about what can or can not be computed.

The language of Undecidable Problems are "Recursively Enumerable but not recursive languages" & "Not Recursively Enumerable Languages".


Clearly we can talk about the intractability of any problem if we know at least one algorithm to solve the problem, if there is no algorithm to solve a problem how can we talk about efficiency?


Halting Problem is undecidable.

I guess, all other problems mentioned here are decidable.

I don't know the most efficient algorithms to solve these problems but at least I can say that Brute force approach will work on all the other options except the Halting Problem.


What P = NP implies?

"Any problem that is solved by a non deterministic Turing machine in polynomial time also be solved by some deterministic Turing machine in polynomial time, even if the degree of the polynomial is higher."

and

There is neither a Non Deterministic Turing Machine nor Deterministic Turing Machine that can solve the Halting Problem.


So any inference about P & NP is not going to affect the solvability of Halting Problem, since it is undecidable.

10 votes -- Anurag Pandey ( 10.4k points)

This problem is similar to the Boolean satisfiability problem (SAT), which was the first problem proved NP-complete.

A problem being NP-complete means it is in NP as well as NP-hard, i.e., the intersection of NP and NP-hard is NP-complete.

Hence, option (E) is right.

2 votes -- SHIVESH KUMAR ROY ( 863 points)
Selected Answer

E. All of them. Because all of them can be solved by Depth first traversal.

Every P problem is a subset of NP.

11 votes -- Motamarri Anusha ( 8.6k points)

3-SAT, where all the clauses are in conjunctive normal form, is the well known NP-complete problem.

3-SAT with clauses in disjunctive normal form lies in the class P. A doubt you might get while solving this question: if 3-SAT with CNF is NP-complete and 3-SAT with DNF belongs to P, why not convert the CNF formula to an equivalent DNF formula? If we could do this conversion efficiently, then we could solve 3-SAT CNF in polynomial time using the algorithm for checking 3-SAT DNF.

But please note that this conversion process is not polynomial. When you convert a CNF formula to an equivalent DNF formula, it can have exponentially many more terms than the original CNF formula, so the conversion is exponential, not polynomial. Hence the idea of converting 3-CNF to 3-DNF won't work.

Option A is correct.

Nice read: https://math.stackexchange.com/questions/159591/solving-sat-by-converting-to-disjunctive-normal-form

 

 

1 votes -- Utkarsh Joshi ( 5.4k points)
An array of $25$ distinct elements is to be sorted using quicksort. Assume that the pivot element is chosen uniformly at random. The probability that the pivot element gets placed in the worst possible location in the first round of partitioning (rounded off to $2$ decimal places) is ________

Consider the recursive quicksort algorithm with "random pivoting". That is, in each recursive call, a pivot is chosen uniformly at random from the sub-array being sorted.When this randomized algorithm is applied to an array of size $n$ all whose elements are distinct, what is the probability that the smallest and the largest elements in the array are compared during a run of the algorithm ?

  1. $\left(\dfrac{1}{n}\right)$
  2. $\left(\dfrac{2}{n}\right)$
  3. $\Theta \left(\dfrac{1}{n\log n}\right)$
  4. ${O} \left(\dfrac{1}{n^{2}}\right)$
  5. $\Theta\left(\dfrac{1}{n \log^{2} n}\right)$

Answers: Quicksort

Selected Answer
Worst case of quicksort, if pivot element is Minimum or Maximum.

Total elements $= 25$
For worst case number of candidates $= 2$

$P = \frac{2}{25} = 0.08 $
30 votes -- Digvijay ( 44.8k points)
Selected Answer

 Probability that the smallest and the largest elements in the array are compared during a run of the randomized quicksort algorithm.

After the first split, first subpart of array will contain the minimum element and second subpart contains the maximum element. (Why?)

Choose any pivot element $x$ other than the smallest or the largest. After partitioning, the smallest element will be present in the left subarray and the largest element in the right subarray.

After the first split, there will be no more comparison between the largest and smallest element.

To have a comparison between the largest and the smallest element, we must choose one of them as the pivot element in the first split itself.

Among the $n$ elements we have to choose either the smallest or the largest element. The selection is uniform.

Probability $= 2 / n.$

16 votes -- Hemant Parihar ( 12k points)

Solve the recurrence equations:

  • $T(n) = T(n - 1)+ n$
  • $T(1) = 1$
Find a solution to the following recurrence equation:

$T(n)=\sqrt{n}+T(\frac{n}{2})$

$T(1)=1$
Express $T(n)$ in terms of the harmonic number $H_{n}= \sum_{i=1}^{n} 1/i, n \geq 1$, where $T(n)$ satisfies the recurrence relation,

$T(n)=\frac{n+1}{n} T(n - 1)+1$, for $n \geq 2$ and $T(1) = 1$

What is the asymptotic behaviour of $T(n)$ as a function of $n$ ?

Consider the function $F(n)$ for which the pseudocode is given below :

Function F(n)
begin
F1 ← 1
if(n=1) then F ← 3
 else
   For i = 1 to n do
        begin
            C ← 0
           For j = 1 to n - 1 do
           begin C ← C + 1 end
           F1 = F1 * C
       end
 F = F1
end


[$n$ is a positive integer greater than zero]

(a) Derive a recurrence relation for $F(n)$

Consider the function $F(n)$ for which the pseudocode is given below :

Function F(n)
begin
F1 ← 1
if(n=1) then F ← 3
 else
   For i = 1 to n do
        begin
            C ← 0
           For j = 1 to n - 1 do
           begin C ← C + 1 end
           F1 = F1 * C
       end
 F = F1
end

[$n$ is a positive integer greater than zero]

Solve the recurrence relation for a closed form solution of $F(n)$.

Consider the recursive algorithm given below:

procedure bubblesort (n);
var i,j: index; temp : item;
begin
    for i:=1 to n-1 do
        if A[i] > A[i+1] then
        begin
            temp := A[i];
            A[i] := A[i+1];
            A[i+1] := temp;
        end;
    bubblesort (n-1)
end

Let $a_n$ be the number of times the ‘if…then…’ statement gets executed when the algorithm is run with value $n$. Set up the recurrence relation by defining $a_n$ in terms of $a_{n-1}$. Solve for $a_n$.

The recurrence relation that arises in relation with the complexity of binary search is:

  1. $T(n) = 2T\left(\frac{n}{2}\right)+k, \text{ k is a constant }$

  2. $T(n) = T\left(\frac{n}{2}\right)+k, \text{ k is a constant }$

  3. $T(n) = T\left(\frac{n}{2}\right)+\log n$

  4. $T(n) = T\left(\frac{n}{2}\right)+n$

The recurrence relation

  • $T(1) = 2$
  • $T(n) = 3T (\frac{n}{4}) +n$

has the solution $T(n)$ equal to

  1. $O(n)$

  2. $O (\log n)$

  3. $O\left(n^\frac{3}{4}\right)$ 

  4. None of the above

Consider the following function.

Function F(n, m:integer):integer;
begin
    If (n<=0) or (m<=0) then F:=1
    else
      F := F(n-1, m) + F(n, m-1);
    end;

Use the recurrence relation  $\begin{pmatrix} n \\ k \end{pmatrix} = \begin{pmatrix} n-1 \\ k \end{pmatrix} + \begin{pmatrix} n-1 \\ k-1 \end{pmatrix}$  to answer the following questions. Assume that $n, m$ are positive integers. Write only the answers without any explanation.

  1. What is the value of $F(n, 2)$?

  2. What is the value of $F(n, m)$?

  3. How many recursive calls are made to the function $F$, including the original call, when evaluating $F(n, m)$.

Let $T(n)$ be the function defined by $T(1) =1, \: T(n) = 2T (\lfloor \frac{n}{2} \rfloor ) + \sqrt{n}$ for $n \geq 2$.

Which of the following statements is true?

  1. $T(n) = O(\sqrt{n})$

  2. $T(n)=O(n)$

  3. $T(n) = O (\log n)$

  4. None of the above

Solve the following recurrence relation

$x_n = 2x_{n-1}-1, n>1$

$x_1=2$

If $T_1 = O(1)$, give the correct matching for the following pairs:
$$\begin{array}{l|l}\hline \text{(M) $T_n = T_{n-1} + n$}  &  \text{(U) $T_n = O(n)$} \\\hline  \text{(N) $T_n = T_{n/2} + n$} & \text{(V) $T_n = O(n \log n)$} \\\hline  \text{(O) $T_n = T_{n/2} + n \log n$} & \text{(W) $T_n = O(n^2)$} \\\hline  \text{(P) $T_n = T_{n-1} + \log n$} & \text{(X) $T_n = O(\log^2 n)$} \\\hline \end{array}$$

  1. $\text{M-W, N-V, O-U, P-X}$
  2. $\text{M-W, N-U, O-X, P-V}$
  3. $\text{M-V, N-W, O-X, P-U}$
  4. $\text{M-W, N-U, O-V, P-X}$

The solution to the recurrence equation $T(2^k) = 3T(2^{k-1})+1, T(1) =1$ is

  1. $2^k$
  2. $\frac{(3^{k+1}-1)}{2}$
  3. $3^{\log_2 k}$
  4. $2^{\log_3 k}$

The running time of the following algorithm

Procedure $A(n)$

If $n \leqslant 2$ return ($1$) else return $(A( \lceil  \sqrt{n}  \rceil))$;

is best described by

  1. $O(n)$
  2. $O(\log n)$
  3. $O(\log \log n)$
  4. $O(1)$

Consider the following recurrence relation

$T(1)=1$

$T(n+1) = T(n)+\lfloor \sqrt{n+1} \rfloor$ for all $n \geq 1$

The value of $T(m^2)$ for $m \geq 1$ is

  1. $\frac{m}{6}\left(21m-39\right)+4$
  2. $\frac{m}{6}\left(4m^2-3m+5\right)$
  3. $\frac{m}{2}\left(3m^{2.5}-11m+20\right)-5$
  4. $\frac{m}{6}\left(5m^3-34m^2+137m-104\right)+\frac{5}{6}$

The time complexity of the following C function is (assume $n > 0$)

int recursive (int n) {
    if(n == 1)
        return (1);
    else
        return (recursive (n-1) + recursive (n-1));
}
  1. $O(n)$
  2. $O(n \log n)$
  3. $O(n^2)$
  4. $O(2^n)$

The recurrence equation
$ T(1) = 1$
$T(n) = 2T(n-1) + n, n \geq 2$

evaluates to

  1. $2^{n+1} - n - 2$
  2. $2^n - n$
  3. $2^{n+1} - 2n - 2$
  4. $2^n + n $

Consider a list of recursive algorithms and a list of recurrence relations as shown below. Each recurrence relation corresponds to exactly one algorithm and is used to derive the time complexity of the algorithm.

$$\begin{array}{|l|l|l|l|}\hline \text{}  &  \textbf{Recursive Algorithm } & \text{} & \textbf{Recurrence Relation} \\\hline  \text{P} & \text{Binary search} & \text{l.} & \text{$T(n) = T(n-k) +T(k) +cn$} \\\hline  \text{Q.} & \text{Merge sort} &\text{ll.}  &  \text{$T(n) = 2T(n-1) + 1$ }\\\hline\text{R.}  &  \text{Quick sort}& \text{lll.}  &  \text{$T(n) = 2T(n/2) + cn$}\\\hline \text{S.}  &  \text{Tower of Hanoi} & \text{lV.}  &  \text{$T(n) = T(n/2) + 1$} \\\hline \end{array}$$

Which of the following is the correct match between the algorithms and their recurrence relations?

  1. $\text{P-II, Q-III, R-IV, S-I}$
  2. $\text{P-IV, Q-III, R-I, S-II}$
  3. $\text{P-III, Q-II, R-IV, S-I}$
  4. $\text{P-IV, Q-II, R-I, S-III}$

Suppose $T(n) =2T (\frac{n}{2}) + n$, $T(0) = T(1) =1$

Which one of the following is FALSE?

  1. $T(n)=O(n^2)$

  2. $T(n)=\Theta(n \log n)$

  3. $T(n)=\Omega(n^2)$

  4. $T(n)=O(n \log n)$

Let $T(n)$ be a function defined by the recurrence

$T(n) = 2T(n/2) + \sqrt n$ for $n \geq 2$ and
$T(1) = 1$

Which of the following statements is TRUE?

  1. $T(n) = \Theta(\log n)$
  2. $T(n) = \Theta(\sqrt n)$
  3. $T(n) = \Theta(n)$
  4. $T(n) = \Theta(n \log n)$

Consider the following recurrence:

$ T(n)=2T\left ( \sqrt{n}\right )+1,$ $T(1)=1$

Which one of the following is true?

  1. $ T(n)=\Theta (\log\log n)$
  2. $ T(n)=\Theta (\log n)$
  3. $ T(n)=\Theta (\sqrt{n})$
  4. $ T(n)=\Theta (n)$

Let $x_n$ denote the number of binary strings of length $n$ that contain no consecutive 0s.

Which of the following recurrences does $x_n$ satisfy?

  1. $x_n = 2x_{n-1}$
  2. $x_n = x_{\lfloor n/2 \rfloor} + 1$
  3. $x_n = x_{\lfloor n/2 \rfloor} + n$
  4. $x_n = x_{n-1} + x_{n-2}$

Let $x_n$ denote the number of binary strings of length $n$ that contain no consecutive 0s.

The value of $x_5$ is 

  1. $5$
  2. $7$
  3. $8$
  4. $16$

When $n = 2^{2k}$ for some $k \geqslant 0$, the recurrence relation

$T(n) = \sqrt{2}\; T(n/2) + \sqrt{n}$, $T(1) = 1$

evaluates to :

  1. $\sqrt{n} \left(\log n + 1\right)$
  2. $\sqrt{n} \log n$
  3. $\sqrt{n} \log \sqrt{n}$
  4. $n \log \sqrt{n}$

The running time of an algorithm is represented by the following recurrence relation:

$T(n) =  \begin{cases}
  n & n \leq 3 \\
  T(\frac{n}{3})+cn & \text{ otherwise }
 \end{cases}$

Which one of the following represents the time complexity of the algorithm?

  1. $\Theta(n)$

  2. $\Theta(n \log  n)$

  3. $\Theta(n^2)$

  4. $\Theta(n^2 \log  n)$

The recurrence relation capturing the optimal execution time of the $Towers \ of \ Hanoi$ problem with $n$ discs is

  1. $T(n) =  2T(n − 2) + 2$
  2. $T (n) =  2T(n − 1) + n$
  3. $T (n) =  2T(n/2) + 1$
  4. $T (n) =  2T(n − 1) + 1$

Which one of the following correctly determines the solution of the recurrence relation with $T(1) = 1$?

$$T(n)= 2T\left(\frac {n} {2}\right) + \log n$$

  1. $\Theta(n)$
  2. $\Theta(n\log n)$
  3. $\Theta(n^2)$
  4. $\Theta(\log n)$

Which one of the following is the recurrence equation for the worst case time complexity of the quick sort algorithm for sorting $n$ ( $\geq $ 2) numbers? In the recurrence equations given in the options below, $c$ is a constant.

  1. $T(n) = 2 T (n/2) + cn$
  2. $T(n) = T ( n - 1) + T(1) + cn$
  3. $T(n) = 2T ( n - 1) + cn$
  4. $T(n) = T (n/2) + cn$

Let a$_{n}$ represent the number of bit strings of length n containing two consecutive $1$s. What is the recurrence relation for $a_{n}$?

  1. $a_{n - 2} + a_{n - 1} + 2^{n - 2}$
  2. $a_{n - 2} + 2a_{n - 1} + 2^{n - 2}$
  3. $2a_{n - 2} + a_{n - 1} + 2^{n - 2}$
  4. $2a_{n - 2} + 2a_{n - 1} + 2^{n - 2}$

Consider the following recursive C function.

void get(int n)
{
    if (n<1) return;
    get (n-1);
    get (n-3);
    printf("%d", n);
}

If $get(6)$ function is being called in $main()$ then how many times will the $get()$ function be invoked before returning to the $main()$?

  1. $15$
  2. $25$
  3. $35$
  4. $45$

The given diagram shows the flowchart for a recursive function $A(n)$. Assume that all statements, except for the recursive calls, have $O(1)$ time complexity. If the worst case time complexity of this function is $O(n^{\alpha})$, then the least possible value (accurate up to two decimal positions) of $\alpha$ is ________.

Flow chart for Recursive Function $A(n)$.

Consider the recurrence function

$$T(n) = \begin{cases} 2T(\sqrt{n})+1, & n>2 \\ 2, & 0 < n \leq 2 \end{cases}$$

Then $T(n)$ in terms of $\Theta$ notation is

  1. $\Theta(\log \log n)$
  2. $\Theta( \log n)$
  3. $\Theta (\sqrt{n})$
  4. $\Theta(n)$

Consider the following recurrence relation:

$T\left(n\right)=
\begin{cases}
T\left(\frac{n}{k}\right)+ T\left(\frac{3n}{4}\right)+ n & \text{if } n \geq 2   \\
 1& \text{if } n=1   
\end{cases}$

Which of the following statements is FALSE?

  1. $T(n)$ is $O(n^{3/2})$ when $k=3$.
  2. $T(n)$ is $O(n \log n)$ when $k=3$.
  3. $T(n)$ is $O(n \log n)$ when $k=4$.
  4. $T(n)$ is $O(n \log n)$ when $k=5$.
  5. $T(n)$ is $O(n)$ when $k=5$.

Consider the following recurrence relation:

$T(n)
= \begin{cases}
2T (\lfloor\sqrt{n}\rfloor)+ \log n & \text{if }n \geq 2   \\
 1& \text{if }n = 1   
\end{cases}$

Which of the following statements is TRUE?

  1. $T(n)$ is $O(\log n)$.
  2. $T(n)$ is $O(\log n. \log \log n)$ but not $O(\log n)$.
  3. $T(n)$ is $O(\log^{3/2} n)$ but not $O(\log n. \log \log n)$.
  4. $T(n)$ is $O(\log^{2} n)$ but not $O(\log^{3/2} n)$.
  5. $T(n)$ is $O(\log^{2} n. \log \log n)$ but not $O(\log^{2} n)$.

Let $T(a, b)$ be the function with two arguments (both nonnegative integral powers of 2) defined by the following recurrence:

  • $ T(a, b) = T \left( \frac{a}{2}, b \right) +T\left( a, \frac{b}{2} \right) \quad \quad \text{ if } a, b \geq 2$;
  • $ T(a, 1) = T \left( \frac{a}{2}, 1 \right) \quad \quad \quad \text{ if } a \geq 2$;
  • $ T(1, b) = T \left( 1, \frac{b}{2} \right) \quad \quad \quad \text{ if } b \geq 2$;
  • $T(1, 1)=1$.

What is $T(2^r, 2^s)$?

  1. $rs$
  2. $r+s$
  3. $\begin{pmatrix} 2^r+2^s \\ 2^r \end{pmatrix}$
  4. $\begin{pmatrix} r+s \\ r \end{pmatrix}$
  5. $2^{r-s}$ if $r \geq s$, otherwise $2^{s-r}$

Which of the following functions, given by their recurrences, grows the fastest asymptotically?

  1. $T(n) = 4T\left ( \frac{n}{2} \right ) + 10n$
  2. $T(n) = 8T\left ( \frac{n}{3} \right ) + 24n^{2}$
  3. $T(n) = 16T\left ( \frac{n}{4} \right ) + 10n^{2}$
  4. $T(n) = 25T\left ( \frac{n}{5} \right ) + 20\left ( n \log n \right )^{1.99}$
  5. They all are asymptotically the same

Answers: Recurrence

Selected Answer
$T(n) = T(n-1)+n$
           $=T(n-2)+(n-1)+n$
           $=T(n-3)+(n-2)+(n-1)+n$
           .
           .
           .
           $=T(n-k)+\left[(n-k+1)+ (n-k+2)+\ldots+(n-1) + n\right]$   

Recurrence stops at,

$n-k=1$
$k=n-1$

$\therefore T(n) = T(1)+\left[2+3+\ldots+n\right]
\\= 1+ 2+3+\ldots + n
\\= \frac{n(n+1)}{2}$

PS: Unless explicitly asked for asymptotic bound, we should always try to get the exact answer.
23 votes -- kirti singh ( 2.6k points)
Selected Answer
$T(n) = T(n/2) + 1$

$=T(n/4) + 2$

$= T(n/8) + 3$

$\vdots$

$=T(n/{2^k}) + k.$

Recurrence stops when $2^k \geq n$.

When $2^k = n$, $k = \lg n$.

So, $T(n) = T(1) + \lg n \\= 1 + \lg n$

PS: Unless explicitly asked for asymptotic bound, we should give exact answers for solutions of recurrence equations.
11 votes -- Arjun Suresh ( 328k points)
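A quick check of the exact solution $1 + \lg n$ for powers of two (verification code added here; not part of the original answer):

```python
def T(n):
    """T(n) = T(n/2) + 1 with T(1) = 1, for n a power of two."""
    return 1 if n == 1 else T(n // 2) + 1
```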
Selected Answer
$T(n) = T(\frac{n}{2})+ \sqrt n$

$\quad = T(\frac{n}{4}) + \sqrt n + \sqrt {(n/2)}$

$\ldots = \sqrt n + \sqrt {(n/2)}+\sqrt {(n/4)}+\sqrt {(n/8)}+\ldots + \sqrt{(n/2^{\lg n-1})} + T(1)$

$ \quad = \sqrt n \left[ 1+\frac{1}{\sqrt 2} + \frac{1}{{\sqrt 2}^2} + \ldots + \frac{1}{\sqrt 2^{{\lg n}}}\right]$

$\quad = \sqrt n \left[ \frac{1 - (\frac{1}{\sqrt 2})^{\lg n+1}}{1-\frac{1}{\sqrt 2}}\right]$ (Sum of $\lg n +1$ terms of GP with $a = 1$ and $r = \frac{1}{\sqrt 2})$

$\quad = \sqrt n \left[ \frac{1 - \frac{1}{\sqrt 2\sqrt n}}{1 - \frac{1}{\sqrt 2}}\right]$

$\quad =  \frac{\sqrt {2n} -1}{\sqrt 2 - 1}$

$\quad =  \left({\sqrt {2n} -1}\right)\left({\sqrt 2 + 1}\right)$

$\quad = \sqrt n \left(2+\sqrt 2\right)-\sqrt 2-1$
10 votes -- Arjun Suresh ( 328k points)
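The closed form above can be verified numerically for powers of two (an added check; function names are arbitrary):

```python
from math import sqrt

def T(n):
    """T(n) = sqrt(n) + T(n/2) with T(1) = 1, for n a power of two."""
    return 1.0 if n == 1 else sqrt(n) + T(n // 2)

def closed(n):
    # the closed form derived above
    return sqrt(n) * (2 + sqrt(2)) - sqrt(2) - 1
```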
Selected Answer
$T(n) = \frac{n+1}{n}T(n-1) + 1\qquad\qquad \to(1)$

$T(n-1) = \frac{n}{n-1}T(n-2) + 1\qquad \to(2)$

$T(n-2) = \frac{n-1}{n-2}T(n-3) + 1\qquad \to(3)$

Substituting value of $T(n-1)$ from $(2)$ in $(1)$

$\implies T(n) = \frac{n+1}{n}*\frac{n}{n-1}T(n-2) + \frac{n+1}{n} + 1$

$\implies T(n) = \frac{n+1}{n-1}T(n-2) + \frac{n+1}{n} + 1$

Now substituting value of $T(n-2)$ in above equation

$\implies T(n) =  \frac{n+1}{n-1}* \frac{n-1}{n-2}T(n-3) + \frac{n+1}{n-1} + \frac{n+1}{n} + 1$

$\implies T(n) =  \frac{n+1}{n-2}T(n-3) + \frac{n+1}{n-1} + \frac{n+1}{n} + 1$

$\displaystyle{\vdots}$

so on

$T(n) = \frac{n+1}{n-k+1}T(n-k) + \frac{n+1}{n} + \frac{n+1}{n-1} + \ldots + \frac{n+1}{n-k+2} + 1$

$T(n) = \frac{n+1}{n-k+1}T(n-k) + (n+1)*( \frac{1}{n} + \frac{1}{n-1} +\ldots+ \frac{1}{n-k+2}) + 1$

Now let $n-k=1$ so $k = n-1$, substitute value of $k$ in above equation

$\implies T(n) = \frac{n+1}{n-(n-1)+1}T(1) + (n+1)*( \frac{1}{n} + \frac{1}{n-1} +\ldots+ \frac{1}{n-(n-1)+2}) + 1$

$\implies T(n) = \frac{n+1}{2} + (n+1)\times ( \frac{1}{n} + \frac{1}{n-1} +\ldots+ \frac{1}{3}) + 1$

$\implies T(n) = \frac{n+1}{2}+ (n+1)\times (H_n - \frac{1}{2} - 1) + 1$

$\implies T(n) = \frac{n+1}{2} + (n+1)\times H_n - \frac{n+1}{2} - (n+1) + 1$

$\implies \mathbf{T(n) = (n+1)\times H_n - n}$

Now, $H_n \approx \log n+γ$

where $γ$ is the Euler-Mascheroni constant.

$\mathbf{T(n) = O(n \log n)}$
28 votes -- Digvijaysingh Gautam ( 6.4k points)
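The identity $T(n) = (n+1)\times H_n - n$ can be verified in exact rational arithmetic (verification code added here, not part of the original answer):

```python
from fractions import Fraction

def T(n):
    """T(n) = ((n+1)/n) * T(n-1) + 1 with T(1) = 1, in exact arithmetic."""
    t = Fraction(1)
    for k in range(2, n + 1):
        t = Fraction(k + 1, k) * t + 1
    return t

def H(n):
    """n-th harmonic number."""
    return sum(Fraction(1, i) for i in range(1, n + 1))
```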
Selected Answer
  1. The function $F(n)$ is NOT a recursive function. You can't have a recurrence relation for it in the first place!
  2. $F(n)$ calculates $(n-1)^n$.

The equivalent C++ code is as follows: (You can try it out here: http://ideone.com/w0u4lk)

long F(long n) {
	long F1 = 1;
 
	if(n==1) { return 3; }
	else {
		for(long i = 1; i <= n; i++) {
			long C = 0;
			// Note: the below For loop only has one line
			for(long j = 1; j <= n-1; j++) { C = C+1; }
			// At the end of this for loop, C will be = (n-1)
			F1 = F1 * C;
		}
	}
	return F1;
}

It is clear that the inner for loop can be replaced by a single statement as follows:

long F(long n) {
	long F1 = 1;
 
	if(n==1) { return 3; }
	else {
		for(long i = 1; i <= n; i++)
			F1 = F1 * (n-1);
	}
	return F1;
}

And this calculates $(n-1)^n$

27 votes -- Pragy Agarwal ( 18.1k points)
Selected Answer
Function F(n)
begin
F1 ← 1
if(n=1) then F ← 3  //if (n==1) then return 3 
 else
   For i = 1 to n do
        begin
            C ← 0
           For j = 1 to n – 1 do //inner loop runs n-1 times outer loop runs for n times 
           begin C ← C + 1 end   //means C=n-1
           F1 = F1 * C  //means n-1 is getting multiplied n times so ans is (n-1)^n for n>=2 
       end 
    F = F1 
end
12 votes -- Rajesh Pradhan ( 18.8k points)
Selected Answer
$a_n= a_{n-1} + n-1$ ($n-1$ comparisons for $n$ numbers)

$a_n= a_{n-2} + (n-2)+(n-1)$  

$a_n= a_{n-3} + (n-3)+(n-2)+(n-1)$  
.
.
.

$a_n= a_{n-n} + (n-n) + (n-(n-1)) + .... + (n-3)+(n-2)+(n-1)$  

$a_n= 0 + 1 + 2 + ....+ (n-3) + (n-2) + (n-1)$

which gives $a_n= \frac{(n-1) \times (n)}{2}$
21 votes -- Rajarshi Sarkar ( 27.6k points)
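A quick check of $a_n = \frac{n(n-1)}{2}$, assuming the recursion stops at $n=1$ since the pseudocode leaves the base case implicit (this snippet is an addition, not from the answer):

```python
def a(n):
    """Number of executions of the 'if A[i] > A[i+1]' test for input size n,
    assuming the recursion bottoms out at n = 1."""
    return 0 if n <= 1 else (n - 1) + a(n - 1)
```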
Selected Answer

It is B. We search only one half of the list, leading to $T(n/2)$ plus constant time for comparing and finding the mid element.

34 votes -- Gate Keeda ( 15.8k points)
Selected Answer

Answer: A

According to Master theorem,
$T(n) = aT(\frac{n}{b}) + f(n)$ can be expressed as:
$T(n) = [n^{\log_ba}][T(1) + u(n)]$
where $u(n) = \Theta(h(n))$ where $h(n) = \frac{f(n)}{n^{\log_ba}} = \frac{n}{n^{\log_43}} = n^{1-\log_43}$ as $h(n) = n^r$ where $r>0$.
So, $T(n) = [n^{\log_ba}][T(1) + u(n)] = T(n) = [n^{\log_43}][T(1) + \Theta(n^{1-\log_43})] = \Theta(n^{1})$.

19 votes -- Rajarshi Sarkar ( 27.6k points)
a) $F(n, 2) = \binom{n+2}{2} = \frac{(n+1)(n+2)}{2}$

b) $F(n, m) = \binom{n+m}{n}$

c) Number of calls, including the original, $= 2\binom{n+m}{n} - 1$ (the recursion tree is a full binary tree whose $\binom{n+m}{n}$ leaves each return $1$)
1 votes -- Prashant Singh ( 46.5k points)
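Directly evaluating the recursive definition indicates $F(n,m) = \binom{n+m}{n}$ (a lattice-path count), so $F(n,2) = \frac{(n+1)(n+2)}{2}$, and the total number of calls is $2\binom{n+m}{n}-1$. A quick check (code added here for verification; names are mine):

```python
from math import comb

def F(n, m):
    """The recursive function from the question."""
    if n <= 0 or m <= 0:
        return 1
    return F(n - 1, m) + F(n, m - 1)

def calls(n, m):
    """Total invocations of F, including the original call."""
    if n <= 0 or m <= 0:
        return 1
    return 1 + calls(n - 1, m) + calls(n, m - 1)
```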
Selected Answer

Answer is $B$.

using Master theorem (case 1)

where $a = 2, b = 2$

$O(n^{1/2}) < O(n^{\log_b a})$

$O(n^{1/2}) < O(n^{\log_2 2})$

25 votes -- Ankit Rokde ( 6.9k points)
Selected Answer
$T(n) = 2T(n-1) -1$

$\qquad =2(2T(n-2)-1) -1$

$\qquad =2^2T(n-2) -2 -1$

$\qquad =2^2(2T(n-3)-1) -2 -1$

$\qquad =2^3T(n-3) - 2^2 -2 -1$

$\qquad \dots $

$\qquad =2^{n-1}T(n-(n-1)) - \left(2^{n-2} +2^{n-3}+\dots+2^2 + 2 +1\right)  $
$\qquad =2^{n-1}\times 2 - \frac{2^{n-1}-1}{2-1}  \because T(1) = 2, S_n = \frac{a.\left(r^n -1\right)}{r-1}$

$\qquad =2^n - \left(2^{n-1} -1\right)$

$\qquad =2^{n-1} + 1$
30 votes -- srestha ( 83.4k points)
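The solution $x_n = 2^{n-1}+1$ can be confirmed by iterating the recurrence (added check, names arbitrary):

```python
def x(n):
    """x_n = 2*x_{n-1} - 1 with x_1 = 2."""
    v = 2
    for _ in range(n - 1):
        v = 2 * v - 1
    return v
```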
Selected Answer
(M) $T(n)$ = Sum of first n natural numbers = $\frac{n(n+1)}{2} = O(n^2)$

(N) $T(n) = \Theta(n)  =O(n)$, third case of Master theorem

($f(n) = n = \Omega \left(n^{\log_b a+ \epsilon}\right) = \Omega \left(n^{\log_2 1 + \epsilon}\right) = \Omega\left(n^{0 + \epsilon}\right) $, satisfied for any positive $\epsilon \leq 1$. Also, $af\left(\frac{n}{b}\right) < cf(n) \implies f\left(\frac{n}{2}\right) < cf(n) \implies \frac{n}{2} < cn$, satisfied for any $c$ between $0$ and $0.5$)

(O) $T(n) = \Theta(n \log n) = O (n \log n)$, third case of Master theorem

($f(n) = n \log n =\Omega\left(n^{\log_b a+ \epsilon}\right) = \Omega \left(n^{\log_2 1 + \epsilon}\right) = \Omega\left(n^{0.5 + \epsilon}\right) $, satisfied for positive $\epsilon = 0.5$. Also, $af\left(\frac{n}{b}\right) < cf(n) \implies f\left(\frac{n}{2}\right) < cf(n) \implies \frac{n}{2} \log \frac{n}{2} < cn \log n$, satisfied for $c = 0.5$)

(P) Like in (M), here we are adding the log of the first n natural numbers. So,

$T_n = \log 1 + \log 2 + \log 3 + \dots + \log n$

$=\log (1\times 2\times \dots \times n)$

$=\log(n!)$

$=\Theta(n \log n)$  (Stirling's Approximation)
43 votes -- Arjun Suresh ( 328k points)
Selected Answer

Let $x = 2^k$
$T(x) = 3T\left(\frac{x}{2}\right) + 1$
We can apply Master's theorem case 1 with $a = 3$ and $b = 2$ as $f(x) = 1 = O\left(x^{\log_2 3 - \epsilon} \right), \epsilon > 0 $

So, $T(x) = \Theta \left(x^{\log_2 3}\right)  \\= \Theta \left( {2^k}^{\log_2 3} \right)\\= \Theta \left({2^{\log_2 3}}^k \right)\\ = \Theta \left(3^k \right)$

So, only option possible is B.



We can also directly solve as follows:

$T(x) = 3T\left(\frac{x}{2}\right) + 1$
$\\\quad= 9T \left (\frac{x}{4}\right) + 1 + 3$
$ \\\quad \vdots$
$ \\\quad= 3^{\log_2 2^k} + \left( 1 + 3 + 9 + \dots + 3^{\log_2 2^k-1}\right)$
$\\\quad \left(\text{recursion depth is }\log_2 x \text{ and } x = 2^k\right)$
$ \\\quad= 3^{k} + \frac{3^{\log_2 {2^k}} - 1}{3-1}$
$ \\\quad \left(\text{Sum to n terms of GP with } a = 1 \text{ and } r = 3 \right)$
$\\\quad =3^k + \frac{3^k - 1}{2}$
$\\\quad=\frac{3. 3^k - 1}{2}$
$\\\quad=\frac{3^{k+1} -1}{2} $



OR
 
$T\left(2^k\right) = 3T\left(2^{k-1}\right) + 1$
$ \\\quad= 3^2T\left(2^{k-2}\right) + 1 +3$
$ \quad\vdots$
$ \\\quad= 3^k T\left(2^{k-k}\right) + \left( 1 + 3 + 9 + \dots + 3^{k-1}\right)$
$ \\\quad \left(\text{recursion depth is }k\right)$
$\\\quad= 3^k + \frac{3^{k} -1} {3-1}$
$\\\quad\left(\text{Sum to n terms of GP with } a = 1 \text{ and } r = 3 \right)$
$\\\quad=3^k + \frac{3^k -1}{2}$
$\\\quad=\frac{3. 3^k - 1}{2}$
$\\=\frac{3^{k+1} -1}{2}  $

53 votes -- Arjun Suresh ( 328k points)
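A quick check of $\frac{3^{k+1}-1}{2}$ by iterating the recurrence (code added for verification, not part of the answer):

```python
def T(k):
    """T(2^k) computed from T(2^k) = 3*T(2^(k-1)) + 1, T(2^0) = 1."""
    v = 1
    for _ in range(k):
        v = 3 * v + 1
    return v
```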
Selected Answer

The complexity will be the number of times the recursion happens which is equal to the number of times we can take square root of n recursively, till n becomes $2$.

$T(n) = T\left(\lceil \sqrt n \rceil \right) + 1$

$T(2) = 1$
$T\left(2^2\right) =  T(2) + 1 = 2$
$T\left(2^{2^2}\right) =  T(4) + 1 = 3$
$T\left(2^{2^3}\right) =  T(16) + 1 = 4$

So, $T(n) = \lg\lg n + 1 = O\left(\log \log n\right)$

Answer Option C

47 votes -- Arjun Suresh ( 328k points)
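The pattern $T\left(2^{2^k}\right) = k+1$ shown above can be checked directly (an added sketch; the ceiling is computed from the integer square root):

```python
from math import isqrt

def T(n):
    """T(n) = T(ceil(sqrt(n))) + 1 for n > 2, T(n) = 1 for n <= 2."""
    if n <= 2:
        return 1
    r = isqrt(n)
    if r * r < n:  # round the integer square root up
        r += 1
    return T(r) + 1
```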
Selected Answer

$T(m^2) = T(m^2-1) + \left\lfloor\sqrt{(m^2)} \right\rfloor$

               $= T(m^2 - 2) + \left\lfloor\sqrt{(m^2 - 1)} \right\rfloor +\left\lfloor\sqrt{(m^2)} \right\rfloor$

               $= T(m^2 - 3) + \left\lfloor\sqrt{(m^2 - 2)} \right\rfloor + \left\lfloor\sqrt{(m^2 - 1)} \right\rfloor +\left\lfloor\sqrt{(m^2)} \right\rfloor$

               $\vdots$

               $= T(1) + \left\lfloor\sqrt{(2)} \right\rfloor + \left\lfloor\sqrt{(3)} \right\rfloor + \ldots + \left\lfloor\sqrt{(m^2)} \right\rfloor$

               $= 3 \times 1 + 5 \times 2 +  \ldots + \left(2m - 1\right) \times (m-1) +m $

(We are taking the floor of the square root of each number; the counts of numbers sharing the same floor value form the series $3,5,7 \dots$: $3$ numbers from $1$ to $3$, $5$ numbers from $4$ to $8$, and so on.)

We can try out options here or solve as shown at end:

Put $m = 5$, $T(25)  = 3 \times 1 + 5 \times 2 + 7 \times 3 + 9 \times 4 + 5 = 75$

  1. $59$
  2. $75$
  3. non-integer
  4. $297.5$


So, answer must be B.


$T(m^2) = 3 \times 1 + 5 \times 2 +  \dots + \left(2m - 1\right) \times (m-1) +m$
              $= m + \displaystyle \sum_{i=1}^{m-1} \left[ (2i+1). (i) \right] $
              $= m + \displaystyle \sum_{i=1}^{m-1} \left[2i^2 + i\right]$
              $= m + \frac{(m-1) .m .(2m-1)}{3} + \frac{(m-1)m}{2}$
              $= \frac{m}{6} \left(6 + 4m^2 -2m -4m + 2 + 3m - 3\right)$
              $= \frac{m}{6} \left(4m^2 -3m + 5\right) $

  • Sum of the first $n$ natural numbers $=\frac{n. (n+1)}{2}.$ 
  • Sum of the squares of first $n$ natural numbers $ = \frac{n. (n+1). (2n+1)}{6}.$
41 votes -- Arjun Suresh ( 328k points)
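A quick check of the closed form against the recurrence itself (verification code added here; names are mine):

```python
from math import isqrt

def T(n):
    """T(1) = 1, T(n+1) = T(n) + floor(sqrt(n+1))."""
    t = 1
    for j in range(2, n + 1):
        t += isqrt(j)
    return t
```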
Selected Answer

Answer is D) $O(2^n)$

 

int recursive (int n) {
    if(n == 1)      // takes constant time say 'A' time
        return (1); // takes constant time say 'A' time
    else
        // takes T(n-1) + T(n-1) time
        return (recursive (n-1) + recursive (n-1));
}

 

$T(n) = 2T(n - 1) + a$ is the recurrence equation found from the pseudo code. Note: $a$ is a constant $O(1)$ cost that the non-recursive part of the function takes.

Solving the recurrence by Back Substitution:

$$\begin{align}
T(n) &=  2T(n - 1) + a \\[1em]
T(n - 1) &= 2T(n - 2) + a \\[1em]
T(n - 2) &= 2T(n - 3) + a \\[1em]
&\vdots
\end{align}$$

Thus, we can re-write the equation for $T(n)$ as follows

 $$\begin{align*}
T(n)
&= 2 \Bigl [ 2T(n - 2) + a \Bigr ] + a &= 4T(n - 2) + 2a + a \\[1em]
&= 4 \Bigl [ 2T(n - 3) + a \Bigr ] + 3a &= 8T(n - 3) + 4a + 2a + a \\[1em]
&\vdots \\[1em]
&= 2^k T(n - k) + (2^k - 1) a
\end{align*}$$

On Substituting Limiting Condition

$$
T(1) = 1 \\
\implies n - k = 1 \\
\implies k = n - 1
$$

Therefore, our solution becomes

$$2^{n - 1} + \Bigl ( 2^{n - 1} - 1 \Bigr ) a \\ = O(2^n)$$

32 votes -- Akhil Nadh PC ( 16.4k points)
Selected Answer
$T(n) = 2T(n-1) +n, n \geqslant 2 , T(1) = 1$

$  T(n)    = n + 2(n-1) + 2^2 (n-2) + \dots + 2^{(n-1)}(n -(n-1))$

           $=n( 1 + 2 + \dots + 2^{n-1}) - (1.2 + 2.2^{2} +3.2^{3}+\dots+ (n-1).2^{n-1})$

           $=n(2^n-1) -(n.2^n-2^{n+1}+2)$

           $=2^{n+1}-n -2$

Correct Answer: $A$
44 votes -- suraj ( 4.8k points)
T(1) = 1

T(2) = 4

T(3) = 11

T(4) = 26

T(5) = 57

T(6) = 120

T(7) = 247

So, $$T(n) = 2^{n+1} - n - 2$$
73 votes -- Arjun Suresh ( 328k points)
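The values above, and the closed form $2^{n+1}-n-2$, can be checked by iterating the recurrence (added code, not from the answer):

```python
def T(n):
    """T(1) = 1, T(n) = 2*T(n-1) + n."""
    v = 1
    for k in range(2, n + 1):
        v = 2 * v + k
    return v
```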
Selected Answer

Answer is B.

9 votes -- Sankaranarayanan P.N ( 8.6k points)
Selected Answer

Applying Masters theorem,

$T(n) = \Theta (n \log n)$

So, it cannot be  $\Omega(n^2)$.

Hence, answer is (C).

30 votes -- shreya ghosh ( 2.8k points)
Selected Answer

Option $C$ is the answer. It can be done by Master's theorem.

$n^{\log_b a} = n^{\log_2 2} = n$.

$f(n) = \sqrt n = n^{\frac{1}{2}}$.

So, $f(n) = O\left(n^{\log_b a -\epsilon}\right)$ is true for any real $\epsilon$, $0 < \epsilon < \frac{1}{2}$. Hence Master theorem Case 1 satisfied,
$$T(n) = \Theta\left(n^{\log_b a}\right) = \Theta (n).$$

29 votes -- Bhagirathi Nayak ( 11.6k points)
Selected Answer

$T(n) = 2T\left(n^{\frac{1}{2}}\right)+1$

           $ = 2\left(2T\left(n^{\frac{1}{2^2}}\right) + 1\right) + 1 $

           $ = 2^2\times T\left(n^{\frac{1}{2^2}}\right) + 3 $

           $ = 2^3\times T\left(n^{\frac{1}{2^3}}\right) + 7 \\ \cdots$

           $ = 2^k\times T\left(n^{\frac{1}{2^k}}\right) + 2^k - 1$

           $= \lg n \times T(2) + \lg n - 1 \text{ (putting } k = \lg \lg n \text{, derived below)} $

           $= \Theta(\lg n)$


$n^{\frac{1}{2^k}} = 2 \\ \text{(Putting 2 so that we can take log.}\\\text{One more step of recurrence can't change the complexity.)} \\\implies \frac{1}{{2^k}} \lg n = 1 \text{(Taking log both sides)}\\\implies \lg n = 2^k \\\implies k = \lg \lg n$

So, answer is B, $T(n) = \Theta(\log n)$

46 votes -- Arjun Suresh ( 328k points)
Selected Answer
$0 \ 1 -2$
$01 \ 10 \ 11 -3$
$010 \ 011 \ 101 \ 110 \ 111 -5$
$0101 \  0110 \  0111 \  1010 \  1011 \ 1101 \  1110 \ 1111 -8$

So, $x_n = x_{n-1} + x_{n-2}$ (For all the strings ending in $1$, we get two new strings ending in either $0$ or $1$ and for all strings ending in $0$, we get a new string ending in $1$. So, the new set of strings for $x_{n}$, will have exactly $x_{n-1}$ strings ending in $1$ and $x_{n-2}$ strings ending in $0)$

$x_{5}= 8+5 = 13$

Correct Answer: $D$
27 votes -- Arjun Suresh ( 328k points)
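Brute-force enumeration confirms the sequence $2, 3, 5, 8, 13$ (a check added here; not part of either answer):

```python
from itertools import product

def x(n):
    """Count binary strings of length n with no two consecutive 0s."""
    return sum('00' not in ''.join(s) for s in product('01', repeat=n))
```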
Here $x_1 = 0, 1$ (total $2$)

$x_2 = 01, 10, 11$ (total $3$)

$x_3 = 010, 011, 101, 110, 111$ (total $5$)

Here $x_3 = x_2 + x_1$

$x_4 = 3 + 5 = 8$

$x_5 = 5 + 8 = 13$

No option matching here.
16 votes -- Manoj Kumar ( 26.6k points)
Selected Answer

$T(n) = \sqrt{2}\; T\left(\frac{n}{2}\right) + \sqrt n$
           $= {\sqrt 2}^2 T\left(\frac{n}{2^2} \right) +\sqrt {2} \sqrt {\frac{n}{2}} + \sqrt n$
           $\vdots$
           $= {\sqrt 2}^ {\lg n} T(1) +\lg n \sqrt n$
           $=\sqrt n + \lg n \sqrt n$
           $= \sqrt n \left( \lg n + 1\right)$

If we use Master theorem we get option B. But one must know that Master theorem is used to find the asymptotic bound and not an EXACT value. And in the question here it explicitly says "evaluates to".

62 votes -- Arjun Suresh ( 328k points)
Selected Answer

$a=1, b=3,  \log_b a=0$

So $n^{\log_b a} = n^0=1$

$f(n)=n$

So, $f(n)=\Omega\left(n^{0+\epsilon}\right)$ for any $0 < \epsilon \leq 1$.

To check the regularity condition of Master theorem case 3, we need some $0 < c < 1$ with

$f(n/3)\leq c f(n)$

which holds for $c=\frac{1}{3}$.

So using case  three of master theorem

$T(n)= \Theta(f(n)) = \Theta(n)$

answer is A.

22 votes -- Pooja Palod ( 23.7k points)
Selected Answer

Recurrence relation for Towers of Hanoi is

$T(1) = 1$

$T(n) = 2 T( n-1 ) +1$

So Answer should be (D)

31 votes -- Narayan Kunal ( 309 points)
Selected Answer

$f(n) =  \log n$

$a = 2, b = 2 \implies n^{\log_b a} = n$

So, $f(n) = \log n = O\left(n^{1-\epsilon} \right)$; we can take any $\epsilon$ between $0$ and $1$, for example $0.5$, which gives $\log n = O(\sqrt{n})$, whose proof is given here: http://math.stackexchange.com/questions/145739/prove-that-logn-o-sqrtn

So, Master theorem Case 1, and answer will be $O\left(n^{\log_2 2}\right) = O(n)$


Alternate way:

$T(1) = 1 \\ T(2) = 2T(1) + \log 2 = 3 = 3n - 3\\T(4) = 2T(2) + \log 4 = 8 = 3n - 4 \\T(8) = 2T(4) + \log 8 = 19 = 3n - 5\\ T(16) = 2T(8) + \log 16 = 42 = 3n - 6$

The second term being subtracted is growing at a lower rate than the first term. So, we can say $T(n) = O(n)$.

Correct Answer: $A$

33 votes -- Arjun Suresh ( 328k points)
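For powers of two the recurrence solves exactly to $T(2^k) = 3\cdot 2^k - (k+2)$, i.e. $T(n) = 3n - \lg n - 2 = O(n)$, which a quick check confirms (snippet added for verification):

```python
from math import log2

def T(n):
    """T(1) = 1, T(n) = 2*T(n/2) + log2(n), for n a power of two."""
    return 1 if n == 1 else 2 * T(n // 2) + log2(n)
```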
Selected Answer

B.
Worst case for quick sort happens when $1$ element is on one list and $n-1$ elements on another list.

32 votes -- Arjun Suresh ( 328k points)
Selected Answer

Counting the number of bit strings NOT containing two consecutive $1$'s. $($It is easy to derive a recurrence relation for the NOT case as shown below$)$

$0 \quad 1$
$00 \quad 01 \quad 10 - 3$ $($append both $0$ and $1$ to any string ending in $0$, and append $0$ to any string ending in $1$$)$
$000 \quad 001 \quad 010 \quad 100 \quad 101 - 5$ $($all strings ending in $0$ give two strings and those ending in $1$ give $1$ string$)$
$0000 \quad 0001 \quad 0010 \quad 0100 \quad 0101 \quad 1000 \quad 1001 \quad 1010 - 8$
$\vdots$

$a_n' = a_{n-1}' + a_{n-2}' $ $($where $a_n$ denotes the number of bit strings of length $n$ containing two consecutive $1$s, so $a_n' = 2^n - a_n$$)$

$2^n - a_n = (2^{n-1}  - a_{n-1}) + (2^{n-2} - a_{n-2})$

$a_n= 2^{n-2}(4 - 2 - 1) + a_{n-1} +a_{n-2}$

$a_n= a_{n-1} + a_{n-2} + 2^{n-2}$

A is choice.

62 votes -- Arjun Suresh ( 328k points)
Selected Answer

Answer : Option B

$\text{T(n) = T(n-1) + T(n-3) + 2}$, here $\text{T(n)}$ denotes the number of times a recursive call is made for input $n$. $2$ denotes the two direct recursive calls.

$\text{T(n $\leq0$) = 0}$
$\text{T(1) = 2}$
$\text{T(2) = 4}$
$\text{T(3) = 6}$
$\text{T(4) = 10}$
$\text{T(5) = 16}$
$\text{T(6) = 24}$

So, answer is $\text{24 + 1}$ call from main = $25$.

58 votes -- Arjun Suresh ( 328k points)
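Counting invocations directly, including those that hit the $n<1$ base case, confirms the total of $25$ (code added here; not part of the original answer):

```python
def calls(n):
    """Invocations of get() caused by calling get(n), counting get(n) itself
    and the calls that return immediately because n < 1."""
    if n < 1:
        return 1
    return 1 + calls(n - 1) + calls(n - 3)
```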
Selected Answer
Worst case time complexity is asked. Calling $A(n)$ results in $5$ calls of $A(n/2)$,

$A(n)=5A\left(\frac{n}{2}\right) + O(1)$

Hence, by applying Master theorem, Case 1 ($a>b^k$, with $a=5$, $b=2$, $k=0$),

$T(n) = \Theta\left(n^{\log_25}\right)$

Thus value of alpha will be $2.32$
75 votes -- Shashank Chavan ( 2.4k points)
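The exponent is simply $\log_2 5 \approx 2.32$ (one-line check, added here):

```python
from math import log2

# exponent given by the Master theorem for A(n) = 5*A(n/2) + O(1)
alpha = log2(5)
```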
Selected Answer

$T(n)=2T({\sqrt{n}})+1$

Put, $n=2^m$

$T(2^m)=2T(2^{m/2})+1$

put, $T(2^m) =s(m)$

$s(m)=2s(m/2)+1$

Using case 1 of master method ,

$=\Theta(m) = \Theta(\log n)$

 https://gateoverflow.in/1829/gate2006-51-isro2016-34?show=37791#c37791

Correct Answer: $B$

46 votes -- jatin saini ( 4.1k points)
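Evaluating the recurrence at $n = 2^{2^k}$ gives the pattern $T\left(2^{2^k}\right) = 3\cdot 2^k - 1 = 3\lg n - 1$, consistent with $\Theta(\log n)$ (verification sketch added here; names are mine):

```python
from math import isqrt

def T(n):
    """T(n) = 2*T(sqrt(n)) + 1 for n > 2, T(n) = 2 for 0 < n <= 2;
    evaluated at n = 2^(2^k) so the square root stays an integer."""
    return 2 if n <= 2 else 2 * T(isqrt(n)) + 1
```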
Selected Answer

Considering when $k=3$

$T(n)=T(\frac{n}{3})+T(\frac{3n}{4})+n$

Below is the recursion tree

Since, the combine work according to recurrence is linear in term of the size of the sub-problem

so at Level 1, total combine work is $n$

Combined work at level 2$=\frac{n}{3}+\frac{3n}{4}=\frac{13n}{12}$

At level 3 combine work $=\frac{n}{9}+\frac{3n}{12}+\frac{3n}{12}+\frac{9n}{16}=(\frac{13}{12})^2n$

So, at level $k$, when the recursion bottoms out at $T(1),$ the combine work will be $(\frac{13}{12})^{k-1}.n$

The left height of the tree is when the subproblem $\frac{n}{3}$ would bottom out and that is when $\frac{n}{3^k}=1$

gives $k=\log_3n$

The right height of the tree is $\log_{\frac{4}{3}}n$

for the purpose of upper bound consider right height and then sum up total combine work from root till the leaf level

$n+\frac{13}{12}n+(\frac{13}{12})^2.n+\ldots+(\frac{13}{12})^{\log_{\frac{4}{3}}n-1}n$

$=n\left [ 1+\frac{13}{12}+\ldots +(\frac{13}{12})^{\log_{\frac{4}{3}}n-1} \right ]$

$=n \left[ 1.\frac{(\frac{13}{12})^{\log_{\frac{4}{3}}n}-1}{\frac{13}{12}-1} \right]$

Using property 

$a^{\log_bc}=c^{\log_ba}$

and ignoring denominator as constant term for upper bound

$n.\left [ n^{\log_{\frac{4}{3}}\frac{13}{12}} -1\right ]=n^{1.27}$ (approximately)

Option (A) : $O(n^{1.5})$ is surely an upper bound when k=3. Correct option.

Option (B): $O(n\log n)$

this means $n.n^{0.27} \leq c.n.\log n$

$n^{0.27}\leq c.\log n$

This is clearly false as polynomials grow faster than poly-logarithms.

Option (B) can never be an upper bound when $k=3$, though it is a valid lower bound.

When $k=4.$

$T(n)=T(\frac{n}{4})+T(\frac{3n}{4})+n$

If we draw recursion tree for this, at every level cost comes to be n and there would be maximum $\log_{\frac{4}{3}}n$ levels

So, in this case $T(n)=O(n\log n)$. Option (C) correct.

When $k=5,$

$T(n)=T(\frac{n}{5})+T(\frac{3n}{4})+n$

Its recursion tree would look like below:

Level 1 Total cost $=n$

Level 2 total cost $=\frac{19n}{20}$

Level 3 total cost $=(\frac{19}{20})^2.n$

Level k cost $=(\frac{19}{20})^{k-1}.n$

Number of levels on right $=\log_{\frac{4}{3}}n$

Number of levels on left $=\log_5n$

For the purpose of upper bound I'll consider right height.

Computing total cost from root till leaf

$n.\left[1+\frac{19}{20}+\ldots +(\frac{19}{20})^{\log_{\frac{4}{3}}n-1} \right]$

$=n.\left[ \frac{1-(\frac{19}{20})^{\log_{\frac{4}{3}}n}}{1-\frac{19}{20}}\right]$

Ignoring the denominator, as it is a constant factor and we only need an upper bound.

$=n.\left[ 1-(\frac{19}{20})^{\log_{\frac{4}{3}}n} \right]$

$=n\left[1-n^{\log_{\frac{4}{3}}\frac{19}{20}} \right]$

$=n[1-n^{-0.17}]$

$=n-n^{0.82}$

So, option(E) is correct.

Option(D) which says $O(n\log n)$ might not look convincing but

$n-n^{0.82} \leq c.n\log n$

$1-\frac{1}{n^{0.17}} \leq c\log n$

for $c=1,n_o=2$ this satisfies big-oh.

Option (D) is also correct.

Answer: option(B)

28 votes -- Ayush Upadhyaya ( 25.8k points)
Selected Answer

Let $n=2^k$

$T\left(2^k\right)= 2T\left(2^{k/2}\right) + k $           

Let $T\left(2^k\right)=S(k) \implies T\left(2^{k/2}\right) = S(k/2)$

$S(k)=2S(k/2)+k$

This gives $S(k) = \Theta(k \log k)$ (Master theorem case 2)

$ = \Theta(\log n \log \log n)$

So, answer is B.

19 votes -- Pooja Palod ( 23.7k points)

Value of $T(2^r,2^s)$ is nothing but the number of leaf nodes in its recursion tree. The reason for this is that there is only one base condition $T(1,1)$ which returns $1$ which are all ultimately added. Here is an example recursion tree for $T(2^2,2^3)$.


We will try to use some combinatorial technique to count the number of leaf nodes. One way of counting the number of leaf nodes in the recurrence tree is by counting all possible paths that one can take from the root to the leaf node. The reason for doing this would be clear in a moment. This idea would be used to establish a link with a very common combinatorial problem.

Claim: In a grid of size $r \times s$, the paths from $(r,s)$ to $(0,0)$ in which we can only move backwards or downwards are in one-to-one correspondence with the root-to-leaf paths of the recursion tree.

This happens to be true because the arguments of the recurrence relation are powers of two, and at each level of the recursion tree the power of one of the arguments gets reduced by one.

$$\begin{align} T(2^r, 2^s) &= T(2^{r-1},2^s) + T(2^r, 2^{s-1}) \\ T(1,2^s) &= T(1,2^{s-1}) \\ T(2^r,1) &= T(2^{r-1},1) \\ T(1,1) &= 1 \\ \end{align}$$

Continuing from our previous example here is a grid of size $2 \times 3$ with two valid paths highlighted.


The number of valid paths in this grid gives us the count of leaf nodes in the recursion tree. Interestingly this problem, when interpreted in exactly the opposite direction, is a very well known problem of combinatorics.

In a grid of size $r \times s$, count the number of possible paths from $(0,0)$ to $(r,s)$ such that we can either move forward(F) or upward(U).


So there are $\binom{r+s}{r}$ many different valid paths on this grid to reach $(r,s)$ from $(0,0)$ or vice versa. And in turn, there are same numbers of leaf nodes in the recursion tree. Therefore, we can conclude with - $$T(2^r,2^s) = \binom{r+s}{r}$$

10 votes -- Prateek Dwivedi ( 3.4k points)
Selected Answer
By applying master theorem,

(a) $T(n) = \Theta(n^2)$

(b) $T(n) = \Theta(n^2)$

(c) $T(n) = \Theta(n^2 \log n)$

(d) $T(n) = \Theta(n^2)$

C is growing fastest.
13 votes -- Hemant Parihar ( 12k points)

Consider the following program written in pseudo-code. Assume that $x$ and $y$ are integers.

Count (x, y) {
    if (y !=1 ) {
        if (x !=1) {
            print("*");
            Count (x/2, y);
        }
        else {
            y=y-1;
            Count (1024, y);
        }
    }
}

The number of times that the $print$ statement is executed by the call $Count(1024, 1024)$ is _____

Answers: Recursion

Selected Answer
Count (x, y) {
    if (y !=1 ) {
        if (x !=1) {
            print("*");
            Count (x/2, y);
        }
        else {
            y=y-1;
            Count (1024, y);
        }
    }
}

Here, for each $y$ value, print("$*$") runs $10$ times ($x$ goes $1024, 512, \ldots, 2$). Once $x$ reaches $1$, $count(1024, y-1)$ is called. Variable $y$ takes the values $1024$ down to $2$, i.e., $1023$ values in total. So,

The total number of times '$*$' is printed $=\big($number of times '$*$' printed per $y$ value$\big)\times\big($number of values $y$ takes$\big)$

$= 10 \times 1023 = 10230$

19 votes -- Digvijay ( 44.8k points)

Consider the following program that attempts to locate an element $x$ in an array $a[ ]$ using binary search. Assume $N > 1$. The program is erroneous. Under what conditions does the program fail?

var i,j,k: integer; x: integer;
    a: array; [1..N] of integer;
begin	i:= 1; j:= n;
repeat	
    k := (i+j) div 2;
    if a[k] < x then i:= k
    else j:= k
until (a[k] = x) or (i >= j);
    
if (a[k] = x) then
    writeln ('x is in the array')
else
    writeln ('x is not in the array')
end;

The average number of key comparisons required for a successful search for sequential search on $n$ items is

  1. $\frac{n}{2}$
  2. $\frac{n-1}{2}$
  3. $\frac{n+1}{2}$
  4. None of the above

Consider the following algorithm for searching for a given number $x$ in an unsorted array $A[1..n]$ having $n$ distinct values:

  1. Choose an $i$ at random from $1..n$
  2. If $A[i] = x$, then Stop else Goto 1;

Assuming that $x$ is present in $A$, what is the expected number of comparisons made by the algorithm before it terminates?

  1. $n$
  2. $n-1$
  3. $2n$
  4. $\frac{n}{2}$

Consider the following C program that attempts to locate an element $x$ in an array $Y[ \ ]$ using binary search. The program is erroneous. 

 f (int Y[10] , int x) {
    int i, j, k;
    i= 0; j = 9;
    do {
        k = (i+ j) / 2;
        if( Y[k] < x) i = k;else j = k;
        } while ((Y[k] != x) && (i < j));
    if(Y[k] == x) printf(" x is in the array ") ;
    else printf(" x is not in the array ") ;
 }

On which of the following contents of $Y$ and $x$ does the program fail? 

  1. $Y$ is $[1 \ 2 \ 3 \ 4 \ 5 \ 6 \ 7 \ 8 \ 9 \ 10]$ and $x < 10 $  
  2. $Y$ is $[1 \ 3 \ 5 \ 7 \ 9 \ 11   \ 13 \ 15 \ 17 \ 19]$ and $x < 1 $ 
  3. $Y$ is $[2 \  2 \ 2 \ 2 \  2 \  2 \  2 \ 2 \ 2 \ 2]$ and $x > 2$ 
  4. $Y$ is $[2 \ 4 \ 6 \ 8 \ 10 \ 12 \ 14 \ 16 \ 18 \ 20]$ and $ 2 < x < 20$ and $x$ is even

Consider the following C program that attempts to locate an element $x$ in an array $Y[ \ ]$ using binary search. The program is erroneous. 

 f (int Y[10] , int x) {
    int i, j, k;
    i= 0; j = 9;
    do {
        k = (i+ j) / 2;
        if( Y[k] < x) i = k;else j = k;
        } while ((Y[k] != x) && (i < j));
    if(Y[k] == x) printf(" x is in the array ") ;
    else printf(" x is not in the array ") ;
 }

The correction needed in the program to make it work properly is 

  1. Change line 6 to: if $(Y[k] < x) i = k + 1$; else $j = k-1$; 
  2. Change line 6 to: if $(Y[k] < x) i = k - 1$; else $ j = k +1$; 
  3. Change line 6 to: if $(Y[k] < x) i = k$; else $j = k$;
  4. Change line 7 to: } while $((Y[k] == x) \&\& (i < j))$ ;

Let $A$ be an array of $31$ numbers consisting of a sequence of $0$'s followed by a sequence of $1$'s. The problem is to find the smallest index $i$ such that $A\left [i \right ]$ is $1$ by probing the minimum number of locations in $A$. The worst case number of probes performed by an optimal algorithm is ____________.

Suppose you are given an array $A$ with $2n$ numbers.

The numbers in odd positions are sorted in ascending order, that is, $A[1] \leq A[3] \leq \ldots \leq A[2n - 1]$.

The numbers in even positions are sorted in descending order, that is, $A[2] \geq A[4] \geq \ldots \geq A[2n]$.

What is the method you would recommend for determining if a given number is in the array?

  1. Sort the array using quick-sort and then use binary search.
  2. Merge the sorted lists and perform binary search.
  3. Perform a single binary search on the entire array.
  4. Perform separate binary searches on the odd positions and the even positions.
  5. Search sequentially from the end of the array.

Consider the following three version of the binary search program. Assume that the elements of type $T$ can be compared with each other; also assume that the array is sorted.

i, j, k : integer;  
a : array [1....N] of T;   
x : T;                       

Program 1 :   i := 1; j := N;   
             repeat   
                   k := (i + j) div 2; 
                   if a[k] < x then i := k else j := k  
             until (a[k] = x) or (i > j)                   
Program 2 :   i := 1; j := N;  
              repeat  
                   k := (i + j) div 2;   
                  if x < a[k] then j := k - 1; 
                  if a[k] < x then i := k + 1;
              until i > j                              
Program 3 :   i := 1; j := N;  
              repeat
                   k := (i + j) div 2; 
                  if x < a[k] then j := k else i := k + 1 
              until i > j

A binary search program is called correct provided it terminates with $a[k] = x$ whenever such an element exists, or it terminates with $a\left [ k \right ]\neq  x$ if there exists no array element with value $x$. Which of the following statements is correct?

  1. Only Program $1$ is correct
  2. Only Program $2$ is correct
  3. Only Program $1$ and $2$ are correct.
  4. Both Program $2$ and $3$ are correct
  5. All the three programs are wrong

Answers: Searching

Selected Answer

The code is wrong here

k=(i+j) / 2;
if (a[k] < x) then i = k;
else j = k;

The (correct) code should be:

k=(i+j) / 2;
if (a[k] < x) then i = k + 1;
else j = k - 1;


We can try an example with the given code in question

Let the array be $\qquad a[1,2,3,4,5,6,7,8,9,10]$
Index numbers$\qquad \quad1,2,3,4,5,6,7,8,9,10$
Let $x=10$; now run the code;

Initially $i = 1, j=10$;

First iteration: $k = (i+j)/2 = 11/2 = 5$ (integer division); $a[5] = 5 < 10$, so $i = 5$

Second: $k = (5+10)/2 = 7$; $a[7] < 10$, so $i = 7$

Third: $k = (7+10)/2 = 8$; so $i = 8$

Fourth: $k = (8+10)/2 = 9$; so $i = 9$

Fifth: $k = (9+10)/2 = 9$; so $i = 9$

Every further iteration repeats $k = 9$, $i = 9$: since $a[9] = 9 \neq 10$ and $i < j$, the loop never terminates (infinite loop).

For terminating the loop, it should be $i = k + 1$  instead of $i =k$ and $j = k - 1$ instead of $j = k$;

23 votes -- Mithlesh Upadhyay ( 4.3k points)
Selected Answer
Expected number of comparisons
$= 1 \times$ Probability of first element being $x + 2 \times$ Probability of second element being $x + \ldots  +n  \times$ Probability of last element being $x$.

$= \frac{1}{n} + \frac{2}{n} + \frac{3}{n} +\ldots +\frac{n}{n} $

$ = \frac{\left ( \frac{n\times (n+1)}{2} \right ) } {n} $

$ = \frac{n+1}{2}$

Correct Answer: $C$
78 votes -- Arjun Suresh ( 328k points)
Selected Answer

Expected number of comparisons $(E) = 1 \times $ Probability of find on first comparison $+ 2 \times $Probability of find on second comparison $+ ... + i \times $ Probability of find on ith comparison $+ ...$

$= 1 \times \frac{1}{n} + 2 \times \frac{n-1}{n^2}  + 3 \times \frac{ (n-1)^2}{n^3} + \dots $

$ =  \frac{1/n} {1 - \frac{n-1}{n} } + \frac{(n-1)/n^2}{\left( {1- \frac{n-1}{n} }\right)^2} \\ \left( \text{Sum to infinity of arithmetico-geometric series with }\\a = \frac{1}{n}, r = \frac{n-1}{n} \text{ and } d = \frac{1}{n} \right) \\= 1 + n - 1 = n$

Reference: https://en.wikipedia.org/wiki/Arithmetico-geometric_sequence

Or we can also do,

$E= 1 \times \frac{1}{n} + 2 \times \frac{n-1}{n^2}  + 3 \times \frac{ (n-1)^2}{n^3} + \dots $

$E \frac{n-1}{n}  = \frac{n-1}{n^2} + 2 \times \frac{(n-1)^2}{n^3} + 3 \times \frac{(n-1)^3}{n^4}  + \dots $

$  E  -E\frac{n-1}{n} =  \frac{1}{n} +   \frac{n-1}{n^2}  +   \frac{ (n-1)^2}{n^3} + \dots $

$E. \frac{1}{n} = \frac{(1/n)}{1 - \frac{n-1}{n}} = 1 \left( \text{Sum to infinity of GP with } a = \frac{1}{n} \text{ and } r = \frac{n-1}{n} \right) \\ \implies E = n$

Correct Answer: $A$

74 votes -- Arjun Suresh ( 328k points)
Selected Answer

For Q.84:

For option C, the program iterates forever once $i=8$ and $j=9$: $k=\frac{8+9}{2}$, being stored in an integer type variable, evaluates to $8$, so $i$ is assigned $k=8$ again and again.


For option A, with $x=9$, $k$ will take the following values:

  • $4$
  • $6$
  • $7$
  • $8$: $y[8] = 9$, $x$ found

For option D, with $x=10$, $k$ will take the following values:

  • $4$: $y[4] = 10$, $x$ found
34 votes -- Amar Vashishth ( 24.9k points)
Selected Answer

Answer should be A.

if( Y[k] < x) then i = k + 1;

If the element we are searching for is greater than $Y[k]$, the search continues in the upper half of the array;

otherwise $\mathbf{j = k - 1}$;

in the lower half.

Consider a few cases:

  1. All elements are same
  2. Increasing order with no repetition
  3. Increasing order with  repetition.
24 votes -- Manoj Kumar ( 26.6k points)
Selected Answer
Here, since $0$s are followed by $1$s so we have a sorted sequence and we can apply binary search.

At each stage we compare with the element at index $\left\lfloor \frac{low + high}{2} \right\rfloor$; if it is $1$ we search the left half, and if it is $0$ we search the right half.

Total worst case no. of probes is $\lceil \log_{2}{31}\rceil=5.$

So, answer is $5.$
53 votes -- sriv_shubham ( 2.8k points)


It should not take more than $5$ probes; the following simulation checks this:

#include <bits/stdc++.h>
using namespace std;
#define N 100
void display(bool a[]) {
  int i=0;
  while(i<31) printf("%2d ",a[i++]);
}
void assign(bool a[],int zeros) {
  for(int i=0;i<31;i++)
    (i<zeros)?a[i] = 0:a[i] = 1;
}
int main() {
  srand(time(NULL));
  bool a[31];
  // header
  for(int i=0;i<31;i++) printf("%2d ",i);
  printf("\n\n");
  int max_probes = 0;
  for(int iteration = 1;iteration <= N;iteration++) {
    int zeros = rand()%32;
    assign(a,zeros);
    sort(a,a+31);
    int low,high,mid,ans;
    std::vector<int> seq;
    
    low = 0;
    high = 31;
    while(low < high) {
      mid = low + floor((high - low) / 2);
      seq.push_back(mid);
      if(a[mid] == 0) {
      low = mid + 1;
      ans = low;
        if(a[mid + 1] == 1) break; 
      } else {
      high = mid;
      ans = high;
      if(mid > 0)
        if(a[mid - 1] == 0) break;

      }
    }

    display(a);
    printf(" | probes=%d  ",(int)seq.size());
    for(auto e:seq) printf("%d ",e);
    printf(" | at = %dth\n",ans);
    max_probes = max(max_probes,(int)(seq.size()));
    seq.clear();	
  }
  printf("%d\n",max_probes);
}

22 votes -- Debashish Deka ( 40.4k points)
Selected Answer

Option D is the correct answer.

We can simply use clever indexing to binary search the element in the odd positions, and in the even positions separately.

This will take $O(\log n)$ time and $O(1)$ space in the worst case.

A: Sorting using Quicksort will take $O(n^2)$ time.
B: Merging will take $O(n)$ time and $O(n)$ space.
C: Binary search only works on a sorted array.
E: Sequential search will take $O(n)$ time.

30 votes -- Pragy Agarwal ( 18.1k points)
Selected Answer

The first program won't work if the array has repeated elements; it may go into an infinite loop. To make it work properly we have to change the updates to $j=k-1$ and $i=k+1.$

In the second program the $a[k]=x$ check is missing, so it is wrong.

The third program is also wrong, as $j$ is set to $k$ instead of $k-1$ and the $a[k]=x$ check is missing.

So, the answer is E.

16 votes -- Pooja Palod ( 23.7k points)

Let $G=(V,E)$ be a DIRECTED graph, where each edge $\large e$ has a positive weight $\large\omega(e),$ and all vertices can be reached from vertex $\large s.$ For each vertex $\large v,$ let $\large \phi(v)$ be the length of the shortest path from $\large s$ to $\large v.$ Let $G'=(V,E)$ be a new weighted graph with the same vertices and edges, but with the edge weight of every edge $\large e=(u\to v)$ changed to $\large \omega'(e)=\omega(e)+\phi(v)-\phi(u).$ Let $P$ be a path from $\large s$ to a vertex $\large v,$ and let $\large \omega(P)=\sum_{e\in P} \omega_{e},$ and $\large \omega'(P)=\sum_{e\in P} \omega'_{e}.$

Which of the following options is NOT NECESSARILY TRUE ?

  1. If $P$ is a shortest path in $G,$ then $P$ is a shortest path in $G'.$
  2. If $P$ is a shortest path in $G',$ then $P$ is a shortest path in $G.$
  3. If $P$ is a shortest path in $G,$ then $\omega'(P)=2\times \omega(P).$
  4. If $P$ is NOT a shortest path in $G,$ then $\omega'(P)<2\times \omega(P).$
  5. All of the above options are necessarily TRUE.

Answers: Shortest Path

Selected Answer

Answer should be $E,$ i.e., all statements $A,B,C,D$ are correct.

Explanation:

$\omega\left ( P \right ) =\sum_{e} \omega_e$

$\omega'\left ( P \right ) =\sum_{e} \omega_e'$

$\qquad =\sum_{e} (\omega (e) + \phi(v) - \phi(u))$

$\qquad =\sum \omega(e) + \sum ( \phi(v) - \phi(u) )$

Given that $P$ is a path from $s$ to $v.$ Let $P$ be comprised of  $S \rightarrow V_1 \rightarrow V_2 \rightarrow V_3 \cdots \rightarrow V_n \rightarrow V $ 

$\require{cancel} \omega'\left ( P \right ) =\sum_{e} \omega(e) + \phi(v) -\cancel {\phi(v_n)}$

$\qquad  +\cancel {\phi(v_n)} - \cancel{\phi(v_{n-1})}$

$\qquad + $

$\qquad \vdots$

$\qquad  +\cancel {\phi(v_1)} - \phi(s) $

$\omega'(P)=\sum \omega(e) +  \phi(v) - \phi(s)$

Since $s$ is the source (in the context of the shortest path), $\phi (s) = 0.$

$ \therefore \omega'(P) = \sum \omega(e) +  \phi(v) = \omega(P) + \phi(v)$

If $P$ is the shortest path in $G,$ $\omega'(P) = \sum \omega(e) + \phi(v)$ 

$\qquad = \phi(v) + \phi(v)$

$\qquad = 2. \phi(v)$

$\qquad = 2. \sum_{e} \omega_e$

$\qquad =2. \omega(P)$

(C is correct)

$\omega'(P) = \omega(P) + \phi(v)$

$\qquad \le \omega(P) + \omega(P)$

$\qquad  = 2. \omega(P)$

or $\omega'(P) \le 2\omega(P)$                 

(D is correct)

If $P$ is the shortest path in $G,$ (Say from $s$ to $v)$

  • $\omega(P) = \phi(v)$
  • $\omega'(P) = \omega(P) + \phi(v)$

Suppose $P$ is not a shortest path in $G';$ then some path $P'$ from $s$ to $v$ is strictly shorter in $G':$

$\omega'(P') < \omega'(P)$

$\Rightarrow \omega(P') + \phi(v) < \omega(P) + \phi(v)$

$\Rightarrow \omega(P') < \omega(P) \Rightarrow$ Contradiction, as we assumed $P$ to be a shortest path in $G$

$\therefore$ $P$ is the shortest path in $G'$ as well.               

(A is correct)

Similarly say $P$ is the shortest path in $G'$ $(s$ to $v)$

$\omega'(P) = \omega(P) + \phi(v)$

We claim $P$ is a shortest path in $G$ as well. For contradiction, assume some path $P'$ is strictly shorter in $G:$

$\omega(P') < \omega(P)$

$\omega(P') + \phi(v) < \omega(P) + \phi(v)$

$\omega'(P') < \omega'(P) \Rightarrow$ Contradiction

Here, what we assumed was wrong and $P$ is the shortest path in $G$     

(B is correct)

$\therefore$ E is the answer

8 votes -- Yash Khanna ( 355 points)

Let $P$ be a quicksort program to sort numbers in ascending order. Let $t_{1}$ and $t_{2}$ be the time taken by the program for the inputs $\left[1 \ 2 \ 3 \ 4\right]$ and $\left[5 \ 4 \ 3 \ 2 \ 1\right]$, respectively. Which of the following holds?

  1. $t_{1} = t_{2}$
  2. $t_{1} > t_{2}$
  3. $t_{1} < t_{2}$
  4. $t_{1}=t_{2}+5 \log 5$ 
Quicksort is ________ efficient than heapsort in the worst case.

An input files has $10$ records with keys as given below:

$25\quad 7\quad 34\quad  2\quad  70\quad  9\quad  61\quad  16\quad  49\quad  19$

This is to be sorted in non-decreasing order.

  1. Sort the input file using QUICKSORT by correctly positioning the first element of the file/subfile. Show the subfiles obtained at all intermediate steps. Use square brackets to demarcate subfiles.
  2. Sort the input file using 2-way- MERGESORT showing all major intermediate steps. Use square brackets to demarcate subfiles.

Choose the correct alternatives (More than one may be correct).

The complexity of comparison based sorting algorithms is:

  1. $\Theta (n \log n)$
  2. $\Theta (n)$
  3. $\Theta \left(n^2\right)$
  4. $\Theta (n\sqrt n)$
The minimum number of comparisons required to sort $5$ elements is ____
Give an optimal algorithm in pseudo-code for sorting a sequence of $n$ numbers which has only $k$ distinct numbers ($k$ is not known a Priori). Give a brief analysis for the time-complexity of your algorithm.

Choose the correct alternatives (more than one may be correct) and write the corresponding letters only:

Following algorithm(s) can be used to sort $n$ numbers in the range $[1\ldots n^3]$ in $O(n)$ time

  1. Heap sort
  2. Quick sort
  3. Merge sort
  4. Radix sort
Assume that the last element of the set is used as partition element in Quicksort. If $n$ distinct elements from the set $\left[1\dots n\right]$ are to be sorted, give an input for which Quicksort takes maximum time.

Algorithm design technique used in quicksort algorithm is?

  1. Dynamic programming

  2. Backtracking

  3. Divide and conquer

  4. Greedy method

For merging two sorted lists of sizes $m$ and $n$ into a sorted list of size $m+n$, we require comparisons of

  1. $O(m)$

  2. $O(n)$

  3. $O(m+n)$

  4. $O(\log m + \log n)$

Merge sort uses:

  1. Divide and conquer strategy

  2. Backtracking approach

  3. Heuristic search

  4. Greedy approach

Consider the following sequence of numbers:$$92, 37, 52, 12, 11, 25$$ Use Bubble sort to arrange the sequence in ascending order. Give the sequence at the end of each of the first five passes.

A two dimensional array $A[1..n][1..n]$ of integers is partially sorted if $\forall i, j\in [1..n-1], A[i][j] < A[i][j+1] \text{ and } A[i][j] < A[i+1][j]$

Fill in the blanks:

  1. The smallest item in the array is at $A[i][j]$ where $i=__$ and $j=__$.

  2. The smallest item is deleted. Complete the following $O(n)$ procedure to insert item $x$ (which is guaranteed to be smaller than any item in the last row or column) still keeping $A$ partially sorted.
procedure insert (x: integer);
var i,j: integer;
begin
    i:=1; j:=1; A[i][j]:=x;
    while (x > __ or x > __) do
        if A[i+1][j] < A[i][j] ___ then begin
            A[i][j]:=A[i+1][j]; i:=i+1;
        end
        else begin
            _____
        end
    A[i][j]:= ____
end

Quick-sort is run on two inputs shown below to sort in ascending order taking first element as pivot

  1. $1, 2, 3, \dots n$
  2. $n, n-1, n-2, \dots, 2, 1$

Let $C_1$ and $C_2$ be the number of comparisons made for the inputs (i) and (ii) respectively. Then, 

  1. $C_1 < C_2$
  2. $C_1 > C_2$
  3. $C_1 = C_2$
  4. we cannot say anything for arbitrary $n$

Give the correct matching for the following pairs: $$\begin{array}{ll|ll}\hline \text{(A)} & \text{$O (\log n)$} & \text{(P)} & \text{Selection sort} \\\hline  \text{(B)} & \text{$O (n)$} & \text{(Q)}& \text{Insertion sort} \\\hline   \text{(C)}& \text{$O (n \log n)$} & \text{(R)}  & \text{Binary search} \\\hline  \text{(D)} & \text{$O (n^2)$} &\text{(S)}  & \text{Merge sort}  \\\hline \end{array}$$

  1. $\text{A-R  B-P  C-Q  D-S}$

  2. $\text{A-R  B-P  C-S  D-Q}$

  3. $\text{A-P  B-R  C-S  D-Q}$

  4. $\text{A-P  B-S  C-R  D-Q}$

A sorting technique is called stable if

  1. it takes $O (n \log n)$ time
  2. it maintains the relative order of occurrence of non-distinct elements

  3. it uses divide and conquer paradigm

  4. it takes $O(n)$ space

If one uses straight two-way merge sort algorithm to sort the following elements in ascending order:

     $20, \ 47, \ 15, \ 8, \ 9, \ 4, \ 40, \ 30, \ 12, \ 17$

then the order of these elements after second pass of the algorithm is:

  1. $8, \ 9, \ 15, \ 20, \ 47, \ 4, \ 12, \ 17, \ 30, \ 40$

  2. $8, \ 15, \ 20, \ 47, \ 4, \ 9, \ 30, \ 40, \ 12, \ 17$

  3. $15, \ 20, \ 47, \ 4, \ 8, \ 9, \ 12, \ 30, \ 40, \ 17$

  4. $4, \ 8, \ 9, \ 15, \ 20, \ 47, \ 12, \ 17, \ 30, \ 40$

Let $A$ be an $n \times n$ matrix such that the elements in each row and each column are arranged in ascending order. Draw a decision tree, which finds $1$st, $2$nd and $3$rd smallest elements in minimum number of comparisons.

An array contains four occurrences of $0$, five occurrences of $1$, and three occurrences of $2$ in any order. The array is to be sorted using swap operations (elements that are swapped need to be adjacent).

  1.  What is the minimum number of swaps needed to sort such an array in the worst case?
  2.  Give an ordering of  elements in the above array so that the minimum number of swaps needed to sort the array is maximum.

Randomized quicksort is an extension of quicksort where the pivot is chosen randomly. What is the worst case complexity of sorting n numbers using Randomized quicksort?

  1. $O(n)$
  2. $O(n \log n)$
  3. $O(n^2)$
  4. $O(n!)$

The usual $\Theta(n^2)$ implementation of Insertion Sort to sort an array uses linear search to identify the position where an element is to be inserted into the already sorted part of the array. If, instead, we use binary search to identify the position, the worst case running time will

  1. remain $\Theta(n^2)$
  2. become $\Theta(n  (\log n)^2)$
  3. become $\Theta(n \log n)$
  4. become $\Theta(n)$

In a permutation \(a_1 ... a_n\), of n distinct integers, an inversion is a pair \((a_i, a_j)\) such that \(i < j\) and \(a_i > a_j\).

If all permutations are equally likely, what is the expected number of inversions in a randomly chosen permutation of \(1. . . n\)?

  1. \(\frac{n(n-1)}{2}\)
  2. \(\frac{n(n-1)}{4}\)
  3. \(\frac{n(n+1)}{4}\)
  4. \(2n[\log_2n]\)

In a permutation \(a_1 ... a_n\), of $n$ distinct integers, an inversion is a pair \((a_i, a_j)\) such that \(i < j\) and \(a_i > a_j\).

What would be the worst case time complexity of the Insertion Sort algorithm, if the inputs are restricted to permutations of \(1. . . n\) with at most $n$ inversions?

  1. \(\Theta(n^2)\)
  2. \(\Theta(n\log n)\)
  3. \(\Theta(n^{1.5})\)
  4. \(\Theta(n)\)

The tightest lower bound on the number of comparisons, in the worst case, for comparison-based sorting is of the order of

  1. $n$
  2. $n^2$
  3. $n \log n$
  4. $n \log^2n$

Suppose there are $\lceil \log n \rceil$ sorted lists of $\lfloor n /\log n \rfloor$ elements each. The time complexity of producing a sorted list of all these elements is: (Hint:Use a heap data structure)

  1. $O(n \log \log n)$
  2. $\Theta(n \log n)$
  3. $\Omega(n \log n)$
  4. $\Omega\left(n^{3/2}\right)$

Let $a$ and $b$ be two sorted arrays containing $n$ integers each, in non-decreasing order. Let $c$ be a sorted array containing $2n$ integers obtained by merging the two arrays $a$ and $b$. Assuming the arrays are indexed starting from $0$, consider the following four statements

  1. $a[i] \geq b [i] \Rightarrow c[2i] \geq a [i]$
  2. $a[i] \geq b [i] \Rightarrow c[2i] \geq b [i]$
  3. $a[i] \geq b [i] \Rightarrow c[2i] \leq a [i]$
  4. $a[i] \geq b [i] \Rightarrow c[2i] \leq b [i]$

Which of the following is TRUE?

  1. only I and II
  2. only I and IV
  3. only II and III
  4. only III and IV

Which one of the following in place sorting algorithms needs the minimum number of swaps?

  1. Quick sort
  2. Insertion sort
  3. Selection sort
  4. Heap sort

The median of $n$ elements can be found in $O(n)$ time. Which one of the following is correct about the complexity of quick sort, in which median is selected as pivot?

  1. $\Theta (n)$
  2. $\Theta (n \log n)$
  3. $\Theta (n^{2})$
  4. $\Theta (n^{3})$

Which of the following sorting algorithms has the lowest worst-case complexity?

  1. Merge sort

  2. Bubble sort

  3. Quick sort

  4. Selection sort

Consider the Quicksort algorithm. Suppose there is a procedure for finding a pivot element which splits the list into two sub-lists each of which contains at least one-fifth of the elements. Let $T(n)$ be the number of comparisons required to sort $n$ elements. Then

  1. $T(n) \leq 2T(n/5) + n$

  2. $T(n) \leq T(n/5) + T(4n/5) + n$

  3. $T(n) \leq 2T(4n/5) + n$

  4. $T(n) \leq 2T(n/2) + n$

If we use Radix Sort to sort $n$ integers in the range $\left (n^{k/2}, n^k \right ]$, for some $k > 0$ which is independent of $n$, the time taken would be?

  1. $\Theta(n)$
  2. $\Theta(kn)$
  3. $\Theta(n \log n)$
  4. $\Theta(n^2)$

What is the number of swaps required to sort $n$ elements using selection sort, in the worst case?

  1. $\Theta(n)$

  2. $\Theta(n \log  n)$

  3. $\Theta(n^2)$

  4. $\Theta(n^2 \log  n)$

In quick-sort, for sorting $n$ elements, the $\left(n/4\right)^{th}$ smallest element is selected as pivot using an $O(n)$ time algorithm. What is the worst case time complexity of the quick sort?

  1. $\Theta(n)$

  2. $\Theta(n \log  n)$

  3. $\Theta(n^2)$

  4. $\Theta(n^2 \log  n)$

A list of $n$ strings, each of length $n$, is sorted into lexicographic order using the merge-sort algorithm. The worst case running time of this computation is

  1. $O (n \log n) $
  2. $ O(n^{2} \log n) $
  3. $ O(n^{2} + \log n) $
  4. $ O(n^{2}) $

The number of elements that can be sorted in $\Theta(\log n)$ time using heap sort is

  1. $\Theta(1)$
  2. $\Theta(\sqrt{\log n})$
  3. $\Theta(\frac{\log n}{\log \log n})$
  4. $\Theta(\log n)$

Which one of the following is the tightest upper bound that represents the number of swaps required to sort $n$ numbers using selection sort?

  1. $O(\log n$)
  2. $O(n$)
  3. $O(n \log n$)
  4. $O(n^{2}$)

Let $P$ be quicksort program to sort numbers in ascending order using the first element as the pivot. Let $t_1$ and $t_2$ be the number of comparisons made by P for the inputs  $[1 \ 2 \ 3 \ 4 \ 5]$ and $[4 \ 1 \ 5 \ 3 \ 2]$ respectively. Which one of the following holds?

  1. $t_1 = 5$
  2. $t_1 < t_2$
  3. $t_1>t_2$
  4. $t_1 = t_2$
Suppose $P, Q, R, S, T$ are sorted sequences having lengths $20, 24, 30, 35, 50$ respectively. They are to be merged into a single sequence by merging together two sequences at a time. The number of comparisons that will be needed in the worst case by the optimal algorithm for doing this is ____.

You have an array of $n$ elements. Suppose you implement quicksort by always choosing the central element of the array as the pivot. Then the tightest upper bound for the worst case performance is

  1. $O(n^2)$
  2. $O(n \log n)$
  3. $\Theta(n\log n)$
  4. $O(n^3)$

Suppose you are provided with the following function declaration in the C programming language.

int partition(int a[], int n);

The function treats the first element of $a[\:]$ as a pivot and rearranges the array so that all elements less than or equal to the pivot are in the left part of the array, and all elements greater than the pivot are in the right part. In addition, it moves the pivot so that the pivot is the last element of the left part. The return value is the number of elements in the left part.

The following partially given function in the C programming language is used to find the $k^{th}$ smallest element in an array $a[\:]$ of size $n$ using the partition function. We assume $k \leq n$.

int kth_smallest (int a[], int n, int k)
{
    int left_end = partition (a, n);
    if (left_end+1==k) {
        return a[left_end];
    }
    if (left_end+1 > k) {
        return kth_smallest (___________);
    } else {
        return kth_smallest (___________);   
    }
}

The missing arguments lists are respectively

  1. $(a,$ left_end$, k) $ and $ (a+$left_end$+1, n-$left_end$-1, k-$left_end$-1)$
  2. $(a,$ left_end$, k) $ and $ (a, n-$left_end$-1, k-$left_end$-1)$
  3. $(a+$ left_end$+1, n-$left_end$-1, k-$left_end$-1) $ and $(a, $ left_end$, k)$
  4. $(a, n-$left_end$-1, k-$left_end$-1)$ and $(a, $left_end$, k)$

Assume that a mergesort algorithm in the worst case takes $30$ seconds for an input of size $64$. Which of the following most closely approximates the maximum input size of a problem that can be solved in $6$ minutes?

  1. $256$
  2. $512$
  3. $1024$
  4. $2018$

The worst case running times of Insertion sort , Merge sort and Quick sort, respectively are:

  1. $\Theta (n \log n)$, $\Theta (n \log n)$ and $\Theta(n^2)$
  2. $\Theta (n^2)$, $\Theta (n^2)$ and $\Theta(n \log n)$
  3. $\Theta (n^2)$, $\Theta (n \log n)$ and $\Theta (n \log n)$
  4. $\Theta (n^2)$, $\Theta (n \log n)$ and $\Theta (n^2)$

Assume that the algorithms considered here sort the input sequences in ascending order. If the input is already in the ascending order, which of the following are TRUE?

  1. Quicksort runs in $\Theta (n^2)$ time
  2. Bubblesort runs in $\Theta (n^2)$ time
  3. Mergesort runs in $\Theta (n)$ time
  4. Insertion sort runs in $\Theta (n)$ time
  1. I and II only
  2. I and III only
  3. II and IV only
  4. I and IV only

Suppose you are given $n$ numbers and you sort them in descending order as follows:

First find the maximum. Remove this element from the list and find the maximum of the remaining elements, remove this element, and so on, until all elements are exhausted. How many comparisons does this method require in the worst case?

  1. Linear in $n$.
  2. $O\left ( n^2 \right )$ but not better.
  3. $O\left (n \log n \right )$
  4. Same as heap sort.
  5. $O\left ( n^{1.5} \right )$ but not better.

Consider the Insertion Sort procedure given below, which sorts an array $L$ of size $n\left ( \geq 2 \right )$ in ascending order:

begin
    for xindex := 2 to n do
        x := L[xindex];
        j := xindex - 1;
        while j > 0 and L[j] > x do
            L[j + 1] := L[j];
            j := j - 1;
        end {while}
        L[j + 1] := x;
    end {for}
end

It is known that insertion sort makes at most $n (n - 1) / 2$ comparisons. Which of the following is true?

  1. There is no input on which insertion Sort makes $n (n - 1) / 2 $ comparisons.
  2. Insertion Sort makes $n (n - 1) / 2$ comparisons when the input is already sorted in ascending order.
  3. Insertion Sort makes $n (n - 1) / 2$ comparisons only when the input is sorted in descending order.
  4. There are more than one input orderings where insertion sort makes $n (n - 1) / 2 $ comparisons.
  5. Insertion Sort makes $n (n - 1) / 2$ comparisons whenever all the elements of $L$ are not distinct.
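A direct Python transcription of the procedure above (0-indexed, a sketch), instrumented to count key comparisons; with distinct elements it reaches $n(n-1)/2$ exactly when the input is in descending order, since only then does every insertion compare against all earlier elements:

```python
def insertion_sort_comparisons(L):
    """Count the comparisons `L[j] > x` made by the insertion sort procedure."""
    L = list(L)
    comparisons = 0
    for xindex in range(1, len(L)):
        x = L[xindex]
        j = xindex - 1
        while j >= 0:
            comparisons += 1            # the test L[j] > x
            if L[j] > x:
                L[j + 1] = L[j]         # shift the larger element right
                j -= 1
            else:
                break
        L[j + 1] = x
    return comparisons

print(insertion_sort_comparisons(range(10)))        # 9  (already sorted)
print(insertion_sort_comparisons(range(10, 0, -1))) # 45 = 10*9/2 (descending)
```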

Let $S=\left \{ x_{1},....,x_{n} \right \}$ be a set of $n$ numbers. Consider the problem of storing the elements of $S$ in an array $A\left [ 1...n \right ]$ such that the following min-heap property is maintained for all $2\leq i\leq n: A [\lfloor i/2\rfloor]\leq A[i]$. (Note that $\lfloor x\rfloor$ is the largest integer that is at most $x$). Which of the following statements is TRUE?

  1. This problem can be solved in $O(\log n)$ time.
  2. This problem can be solved in $O (n)$ time but not in $O(\log n)$ time.
  3. This problem can be solved in $O(n\log n)$ time but not in $O (n)$ time.
  4. This problem can be solved in $O \left ( n^{2} \right )$ time but not in $O(n\log n)$ time.
  5. None of the above.
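This is exactly the BUILD-MIN-HEAP problem, solvable in $O(n)$ by Floyd's bottom-up heapify (and no faster, since every element must be examined at least once). A sketch using Python's heapq, whose heapify implements this $O(n)$ construction:

```python
import heapq

def build_min_heap(s):
    """Arrange elements so that A[i//2] <= A[i] (1-indexed), in O(n) time."""
    a = list(s)
    heapq.heapify(a)    # Floyd's bottom-up heap construction, O(n)
    return a

def is_min_heap(a):
    # 0-indexed check: parent of position i is (i - 1) // 2.
    return all(a[(i - 1) // 2] <= a[i] for i in range(1, len(a)))
```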

Given a set of $n=2^{k}$ distinct numbers, we would like to determine the smallest and the second smallest using comparisons. Which of the following statements is TRUE?

  1. Both these elements can be determined using $2k$ comparisons.
  2. Both these elements can be determined using $n - 2$ comparisons.
  3. Both these elements can be determined using $n + k - 2$ comparisons.
  4. $2n - 3$ comparisons are necessary to determine these two elements.
  5. $nk$ comparisons are necessary to determine these two elements.
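Option C comes from the tournament method: $n-1$ comparisons find the minimum, after which the second smallest must be among the $k = \log_2 n$ elements that lost directly to the minimum, costing $k-1$ more comparisons. A hedged Python sketch (assumes $n$ is a power of two, as given):

```python
def two_smallest(nums):
    """Tournament: find the min in n-1 comparisons, then the runner-up
    among the k = log2(n) elements that lost directly to the min."""
    comparisons = 0
    # Each entry: (value, list of values it has beaten directly).
    round_ = [(x, []) for x in nums]
    while len(round_) > 1:
        nxt = []
        for i in range(0, len(round_), 2):
            (a, la), (b, lb) = round_[i], round_[i + 1]
            comparisons += 1
            if a < b:
                nxt.append((a, la + [b]))
            else:
                nxt.append((b, lb + [a]))
        round_ = nxt
    winner, losers = round_[0]
    second = losers[0]
    for v in losers[1:]:                 # k - 1 more comparisons
        comparisons += 1
        if v < second:
            second = v
    return winner, second, comparisons

print(two_smallest([7, 3, 9, 1, 12, 5, 8, 2]))  # (1, 2, 9): n + k - 2 = 8 + 3 - 2
```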

The first $n$ cells of an array $L$ contain positive integers sorted in decreasing order, and the remaining $m - n$ cells all contain 0. Then, given an integer $x$, in how many comparisons can one find the position of $x$ in $L$?

  1. At least $n$ comparisons are necessary in the worst case.
  2. At least $\log m$ comparisons are necessary in the worst case.
  3. $O (\log (m - n))$ comparisons suffice.
  4. $O (\log n)$ comparisons suffice.
  5. $O (\log (m / n))$ comparisons suffice.
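Since the array is non-increasing overall (positives, then zeros), exponential (galloping) search brackets $x$ using $O(\log p)$ comparisons, where $p \leq n$ is its position, so $O(\log n)$ comparisons suffice even though $n$ is unknown. A sketch (the function name is illustrative):

```python
def find_position(L, x):
    """Exponential search in a non-increasing array (positives then zeros).
    Uses O(log p) comparisons where p is the position of x, hence O(log n)."""
    m = len(L)
    # Gallop: double the probe index until the value drops below x.
    hi = 1
    while hi < m and L[hi] >= x:
        hi *= 2
    lo, hi = hi // 2, min(hi, m - 1)
    # Binary search in the bracketed (descending) range.
    while lo <= hi:
        mid = (lo + hi) // 2
        if L[mid] == x:
            return mid
        elif L[mid] > x:     # descending order: x lies to the right
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```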

An array $A$ contains $n$ integers. We wish to sort $A$ in ascending order. We are told that initially no element of $A$ is more than a distance $k$ away from its final position in the sorted list. Assume that $n$ and $k$ are large and $k$ is much smaller than $n$. Which of the following is true for the worst case complexity of sorting $A$?

  1. $A$ can be sorted with constant $\cdot \; kn$ comparisons but not with fewer comparisons.
  2. $A$ cannot be sorted with less than constant $\cdot \; n \log n$ comparisons.
  3. $A$ can be sorted with constant $\cdot \; n$ comparisons.
  4. $A$ can be sorted with constant $\cdot \; n \log k$ comparisons but not with fewer comparisons.
  5. $A$ can be sorted with constant $\cdot \; k^{2}n$ comparisons but not fewer.
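The standard technique for such a $k$-sorted array is a sliding min-heap of size $k+1$: the smallest remaining element is always within the next $k+1$ positions, and each of the $n$ extractions costs $O(\log k)$, giving the constant $\cdot \; n \log k$ bound. A sketch using heapq:

```python
import heapq

def sort_k_sorted(a, k):
    """Sort an array in which each element is at most k positions away from
    its sorted place, using a sliding min-heap of size k + 1: O(n log k)."""
    heap = list(a[:k + 1])
    heapq.heapify(heap)
    out = []
    for x in a[k + 1:]:
        out.append(heapq.heappop(heap))  # current minimum is in the window
        heapq.heappush(heap, x)
    while heap:                          # drain the final window
        out.append(heapq.heappop(heap))
    return out
```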

Consider the quick sort algorithm on a set of $n$ numbers, where in every recursive subroutine of the algorithm, the algorithm chooses the median of that set as the pivot. Then which of the following statements is TRUE?

  1. The running time of the algorithm is $\Theta (n).$
  2. The running time of the algorithm is $\Theta (n\log n)$.
  3. The running time of the algorithm is $\Theta (n^{1.5})$.
  4. The running time of the algorithm is $\Theta (n^{2})$.
  5. None of the above.
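If the exact median can be found in $O(n)$ time (e.g., by the median-of-medians selection algorithm, an assumption about how the pivot is obtained), the recurrence is the perfectly balanced one:

```latex
T(n) = 2\,T\!\left(\frac{n}{2}\right) + \Theta(n)
\;\Longrightarrow\; T(n) = \Theta(n \log n)
```

by Case 2 of the Master theorem, giving option B.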

Suppose $n$ processors are connected in a linear array as shown below. Each processor has a number. The processors need to exchange numbers so that the numbers eventually appear in ascending order (the processor $\rm P1$ should have the minimum value and the processor $\rm Pn$ should have the maximum value).

The algorithm to be employed is the following. Odd numbered processors and even numbered processors are activated in alternate steps; assume that in the first step all the even numbered processors are activated. When a processor is activated, the number it holds is compared with the number held by its right-hand neighbour (if one exists); the smaller of the two numbers is retained by the activated processor and the bigger is stored in its right-hand neighbour.
How long does it take for the processors to sort the values?

  1. $n \log n$ steps
  2. $n^2$ steps
  3. $n$ steps
  4. $n^{1.5}$ steps
  5. The algorithm is not guaranteed to sort
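This is odd-even transposition sort, which is guaranteed to sort on a linear array in $n$ steps. A sequential Python simulation of the parallel rounds (a sketch; each round performs all comparisons of one activation step at once):

```python
def odd_even_transposition_sort(a):
    """Simulate n rounds of the linear-array algorithm: even-numbered
    processors (P2, P4, ...) act first, alternating with odd-numbered ones."""
    a = list(a)
    n = len(a)
    for step in range(n):
        # P2 is 0-indexed position 1, so even processors start at index 1.
        start = 1 if step % 2 == 0 else 0
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:          # keep smaller, pass bigger right
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_transposition_sort([5, 1, 4, 2, 8, 3, 7, 6]))
```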

An array of $n$ distinct elements is said to be un-sorted if for every index $i$ such that $ 2 \leq i \leq n-1$, either $A[i] > \max \{A[i-1], A[i+1]\}$, or $A[i] < \min \{A[i-1], A[i+1]\}$. What is the time-complexity of the fastest algorithm that takes as input a sorted array $A$ with $n$ distinct elements, and un-sorts $A$?

  1. $O(n \: \log \: n)$ but not $O(n)$
  2. $O(n)$ but not $O(\sqrt{n})$
  3. $O(\sqrt{n})$ but not $O(\log n)$
  4. $O(\log n)$ but not $O(1)$
  5. $O(1)$
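Swapping each adjacent pair of the sorted array already un-sorts it: every interior element becomes a strict local minimum or maximum. This takes $O(n)$ time, and since $\Omega(n)$ elements must move, the answer is $O(n)$ but not $O(\sqrt{n})$. A sketch:

```python
def unsort(a):
    """Swap each adjacent pair of a sorted array so that every interior
    element becomes a strict local extremum, in O(n) time."""
    a = list(a)
    for i in range(0, len(a) - 1, 2):
        a[i], a[i + 1] = a[i + 1], a[i]
    return a

def is_unsorted(a):
    # Every interior element is a strict local max or strict local min.
    return all(a[i] > max(a[i - 1], a[i + 1]) or a[i] < min(a[i - 1], a[i + 1])
               for i in range(1, len(a) - 1))

print(unsort(list(range(1, 9))))  # [2, 1, 4, 3, 6, 5, 8, 7]
```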

Answers: Sorting

Selected Answer
Actually, in both cases the partition algorithm takes $O(n^{2})$ time overall and $T(n-1)$ time for the subproblem. Since $n$ is the number of inputs and the $2^{\text{nd}}$ case has $5$ inputs (greater than the $1^{\text{st}}$ case's $4$), $t_{1}<t_{2}.$

Correct Answer: $C$
35 votes -- Rohan Ghosh ( 1.6k points)
Selected Answer

Answer is LESS.

As worst case time for Quicksort is $O(n^2)$ and worst case for heap sort is $O(n \log n)$.

12 votes -- Pavan Kumar Munnam ( 7.4k points)
Selected Answer

Quick Sort

$\textbf{Pseudocode for quicksort}$

$\text{QUICKSORT(A,p,r)}$
$\quad \textbf{if } p < r$
$\quad\quad \textbf{then } q \leftarrow \text{PARTITION}(A,p,r)$
$\quad\quad\quad \text{QUICKSORT}(A,p,q-1)$
$\quad\quad\quad\text{QUICKSORT}(A,q+1,r)$

${\color{Red}{\text{Initial call: }} }\text{QUICKSORT}(A,1,n)$

  1. ${\color{Red} {25}}\ 7\ 34\ 2\ 70\ 9\ 61\ 16\ 49\ 19$
  2. ${\color{Red} {25}}\ 7\  \color{blue}{2\ 34}\ 70\ 9\ 61\ 16\ 49\ 19$
  3. ${\color{Red} {25}}\ 7\ 2\  \color{blue}{9}\   70\;  \color{blue}{34}\;  61\ 16\ 49\ 19$
  4. ${\color{Red} {25}}\ 7\ 2\ 9 \ \color{blue}{16}\   34\ 61\ \color{blue}{70}\  49\ 19$
  5. ${\color{Red} {25}}\ 7\ 2\ 9\ 16\ \color{blue}{19} \; 61\ 70\ 49 \ \color{blue}{34}$
  6. $[7\ 2\ 9\ 16$ $19]$ ${\color{Red} {25}}\ [61\ 70\ 49$ $34]$
  7. $[{\color{Red} {7}}\ 2\ 9\ 16\ 19]\ {\color{Red} {25}}\ [61\ 70\ 49\ 34]$
  8. $[[2]\ {\color{Red} {7}}\ [9\ 16\ 19]]\ {\color{Red} {25}}\ [61\ 70\ 49\ 34]$
  9. $[{\color{Red} {2\ 7}}\ [{\color{Red} {9}}\ [16\ 19]]]\ {\color{Red} {25}}\ [61\ 70\ 49\ 34]$
  10. $[{\color{Red} {2\ 7}}\ [{\color{Red} {9}}\ [{\color{Red} {16}} \ 19]]]\ {\color{Red} {25}}\ [61\ 70\ 49\ 34]$
  11. $[{\color{Red} {2}}\ {\color{Red} {7}}\ [{\color{Red} {9}}\ [{\color{Red} {16}} \ {\color{Red} {19}}]]]\ {\color{Red} {25}}\ [61\ 70\ 49\ 34]$
  12. ${\color{Red} {[2}}\ {\color{Red} {7[}}\ {\color{Red} {9}}\ {\color{Red} {16}} \ {\color{Red} {19]]}}\ {\color{Red} {25}}\ [61\ 70\ 49\ 34]$
  13. ${\color{Red} {[2}}\ {\color{Red} {7}}\ {\color{Red} {9}}\ {\color{Red} {16}} \ {\color{Red} {19]}}\ {\color{Red} {25}}\ [61\ 70\ 49\ 34]$
  14. ${\color{Red} {[2\ 7\ 9\ 16\ 19]\ 25\ }}[{\color{Red} {61}}\ \color{blue}{49\ 70}\  34]$
  15. ${\color{Red} {[2\ 7\ 9\ 16\ 19]\ 25\ }}[{\color{Red} {61}}\ 49\; \color{blue}{34\ 70}]$
  16. ${\color{Red} {[2\ 7\ 9\ 16\ 19]\ 25\ }} [[49\ 34]\ {\color{Red} {61}}\  [70] ]$
  17. ${\color{Red} {[2\ 7\ 9\ 16\ 19]\ 25\ }}[[ {\color{Red} {49}}\ 34]\ {\color{Red} {61}}\  [70] ]$
  18. ${\color{Red} {[2\ 7\ 9\ 16\ 19]\ 25\ }}[[ [34]\ {\color{Red} {49}}]\ {\color{Red} {61}}\  [70] ]$
  19. ${\color{Red} {[2\ 7\ 9\ 16\ 19]\ 25\ }}[ {\color{Red} {[ [34]\ 49]}}\ {\color{Red} {61}}\  [70] ]$
  20. ${\color{Red} {[2\ 7\ 9\ 16\ 19]\ 25\ }}[ {\color{Red} {[34\ 49]}}\ {\color{Red} {61}}\  [70] ]$
  21. ${\color{Red} {[2\ 7\ 9\ 16\ 19]\ 25\ }}  [{\color{Red} {34\ 49}}\ {\color{Red} {61\ [70 ]}}]  $
  22. ${\color{Red} {[2\ 7\ 9\ 16\ 19]\ 25\ }}  {\color{Red} {[34\ 49}}\ {\color{Red} {61\ 70 ]}}\  $
  23. ${\color{Red} {[2\ 7\ 9\ 16\ 19\ 25\ }}  {\color{Red} {34\ 49}}\ {\color{Red} {61\ 70] }}\  $

2-way MergeSort [Iterative version]

  1. $[25\ 7]\ [34\ 2]\ [70\ 9]\ [61\ 16]\ [49\ 19]$
  2. $[{\color{Red} {[7\ 25]}}\ {\color{Red} {[2\ 34]}}]\ [{\color{Red} {[9\ 70]}}\ {\color{Red} {[16\ 61]}}]\ [{\color{Red} {[19\ 49]}}]$
  3. $[{\color{Green} {[2\ 7\ 25\ 34 ]}}\  {\color{Green} {[9\ 16\ 61\ 70]}}]\ {\color{Green} {[19\ 49]}}$
  4. $[{\color{Blue} {[2\ 7\ 9\ 16\  25\ 34\  61\ 70]}}\ {\color{Green} {[19\ 49]}}]$
  5. ${\color{Blue} {[2\ 7\ 9\ 16\ 19\ 25\ 34\ 49\ 61\ 70]}}$

$\textbf{MergeSort [Recursive version]}$

$\textbf{MERGE - SORT A[1...n]}$

  1. $\text{If } n = 1,$ done.
  2. Recursively sort $A[1...\left \lceil n/2 \right \rceil]$ and $A[\left \lceil n/2 \right \rceil + 1 ... n]. $
  3. ${\color{Red}{\text{“Merge"}}}$ the $2$ sorted lists.

$\qquad{\color{Red}{\text{Key subroutine: }}}  \textbf{MERGE}$

  1. $[25\ 7\ 34\ 2\ 70]\ [9\ 61\ 16\ 49\ 19]$
  2. $[[25\ 7\ 34]\ 2\ 70]\ [9\ 61\ 16\ 49\ 19]$
  3. $[[{\color{Red} {[7\ 25]}}\ [34]]\ [2\ 70]]\ [9\ 61\ 16\ 49\ 19]$
  4. $[[{\color{Red} {[7\ 25]}}\ {\color{Red} {[34]}}]\ [2\ 70]]\ [9\ 61\ 16\ 49\ 19]$
  5. $[[{\color{Red} {[7\ 25\ 34]}}]\ [2\ 70]]\ [9\ 61\ 16\ 49\ 19]$
  6. $[{\color{Red} {[7\ 25\ 34]}}\ {\color{Red} {[2\ 70]}}]\ [9\ 61\ 16\ 49\ 19]$
  7. ${\color{Red} {[2\ 7\ 25\ 34\ 70]}}\ [9\ 61\ 16\ 49\ 19]$
  8. ${\color{Red} {[2\ 7\ 25\ 34\ 70]}}\ [[9\ 61\ 16]\ [49\ 19]]$
  9. ${\color{Red} {[2\ 7\ 25\ 34\ 70]}}\ [[[9\ 61]\ [16]]\ [49\ 19]]$
  10. ${\color{Red} {[2\ 7\ 25\ 34\ 70]}}\ [[{\color{Red} {[9\ 61]}}\ [16]]\ [49\ 19]]$
  11. ${\color{Red} {[2\ 7\ 25\ 34\ 70]}}\ [[{\color{Red} {[9\ 61]\ [16]}}\ ]\ [49\ 19]]$
  12. ${\color{Red} {[2\ 7\ 25\ 34\ 70]}}\ [{\color{Red} {[9\ 16\ 61]}}\ [49\ 19]]$
  13. ${\color{Red} {[2\ 7\ 25\ 34\ 70]}}\ [{\color{Red} {[9\ 16\ 61]}}\ {\color{Red} {[19\ 49]}}]$
  14. ${\color{Red} {[2\ 7\ 25\ 34\ 70]}}\ {\color{Red} {[9\ 16\ 19\ 49\ 61]}}$
  15. ${\color{Red} {[2\ 7\ 9\ 16\ 19\ 25\ 34\ 49\ 61\ 70]}}$
4 votes -- Satbir Singh ( 19k points)
Selected Answer

First of all, which sorting algorithm is being asked about? It is not just the standard ones we see in algorithm textbooks but can be any sorting algorithm which uses comparisons. So, in essence, we cannot rely on any specific algorithm for this question.

Now, coming to the options, we can see that $\Theta$ notation is used. We use $\Theta$ notation for a problem when

  1. we have a lower bound for the solution -- that is any algorithm which solves the problem must have minimum this much complexity (time complexity being implied)
  2. there exists an algorithm which solves the problem with the above minimum time complexity (worst case input being implied)

For comparison based sorting we already have known algorithms like heap sort and merge sort, which solve the problem in $O(n \log n)$, and now if we show that this is the best possible by any algorithm, our answer becomes $\Theta (n \log n).$

And it is indeed proved that for comparison based sorting minimum $\Omega(n \log n)$ comparisons are required and considering time taken being proportional to the number of comparisons, the time complexity is also $\Omega(n \log n).$ Proof of this is given here but in essence it says

  • The output of sorting $n$ elements can be any of the $n!$ permutations.
  • Each comparison reduces the number of possibilities by at most a factor of $2$
  • So, to distinguish among all $n!$ permutations we need at least $\lg n!$ comparisons, which is $\Omega (n \lg n)$

Now, if someone asks what is the minimum number of comparisons to sort $5$ elements, the answer is definitely $\geq \lceil \lg 5! \rceil = \lceil \lg 120 \rceil = 7.$ We can only use $\geq$ here and not $=$ unless we prove that we can do it in $7$ comparisons. But interestingly, this $\geq$ becomes $=$ till $n = 11$ as given in the Wikipedia link below.

Ref: https://en.wikipedia.org/wiki/Comparison_sort

Answer A. $\Theta (n \log n)$

3 votes -- Arjun Suresh ( 328k points)
Selected Answer

Answer is $7$.

Minimum number of comparisons $= \lceil \log_2 (n!) \rceil = \lceil \log_2 (5!) \rceil = \lceil \log_2 (120) \rceil = 7.$

Reference: http://en.wikipedia.org/wiki/Comparison_sort#Number_of_comparisons_required_to_sort_a_list

34 votes