GATE Overflow Book

This book was generated programmatically by GATE Overflow on Nov 25, 2016. If you have any doubt regarding an answer, click on the question link and leave a comment on the site. Studying all these questions might get you 60 marks in GATE, but that might not be enough for an IIT. So read standard books, solve their exercise questions, and use these questions to cement the concepts and aim for 75+. At the very least, if you cannot solve a given problem, first refer to a standard book. Any error found in a question will be corrected at http://gateoverflow.in/corrections.
An updated Aptitude Overflow book, a book of previous year questions including ISRO and UGC NET, and a book of non-previous-year questions are expected in 1-2 months. An email subscription option will be provided on GATE Overflow by November 15 so that subscribers are notified when each book is released.

This book consists only of previous year GATE and TIFR questions (CS from 1991 onward and all 5 years of IT), both of which are relevant for GATE. Out-of-syllabus subjects have been removed from this book, but a few questions on out-of-syllabus topics might still be present. The book is mostly stable and is expected to be updated only yearly, when new papers are added to the site. For GATE 2017, however, a special update will be made within 2 months, which will include all GO Test Series questions and previous GATE questions preceding 1991. A GATECSE book is also coming this month, describing the topics to be covered and from where, and including discussions of confusing topics.

Since GATE Overflow started in August 2014, a lot of people have dedicated their time and effort to bringing out this book. Initiated by Omesh Pandita and Arjun Suresh as a Q/A platform for CSE students, Kathleen was instrumental in getting all previous year GATE questions here. Then experts like Praven Saini, Happy Mittal, Sankaranarayanan, Suraj, etc. contributed a lot to the answers here. Pragy Agarwal, even after topping GATE, has continuously contributed his knowledge here, as well as making the contents beautiful with fine LaTeX skills. We also have to thank Jothee, Misbah, Ishrat and Nataliyah, who are continuously adding content and keeping it neat and clean. Many toppers of GATE 2015, GATE 2016, and probable ones of GATE 2017, are also contributing a lot here. The list of all contributors can be found here, but even that does not include the contributions of some, like Arif Ali in helping design this book, and Arvind Devaraj and others who have provided guidance and help. Last but not least, we thank all the users of GATE Overflow.

Table of Contents
  1. Algorithms (342)
    1. Arrays
    2. Asymptotic Notations
    3. Binary Heap
    4. Binary Search
    5. Binary Tree
    6. Connected Components
    7. Decidability
    8. Dfs
    9. Distance Vector Routing
    10. Dynamic Programming
    11. Expectation
    12. Graph Algorithms
    13. Graph Search
    14. Greedy Algorithm
    15. Hashing
    16. Heap
    17. Huffman Code
    18. Identify Function
    19. Insertion Sort
    20. Lines Curves
    21. Linked Lists
    22. Loop Invariants
    23. Matrix Chain Ordering
    24. Median
    25. Merge Sort
    26. Minimum
    27. Minimum Spanning Trees
    28. P Np Npc Nph
    29. Programming In C
    30. Quicksort
    31. Radix Sort
    32. Recurrence
    33. Recursion
    34. Reduction
    35. Runtime Environments
    36. Shortest Path
    37. Sorting
    38. Space Complexity
    39. Spanning Tree
    40. Test Cases
    41. Time Complexity
    42. Topological Sort
    43. Transactions
    44. Trees
  2. CO & Architecture (163)
    1. Addressing Modes
    2. Cache
    3. Cache Memory
    4. Computer Organization
    5. Data Dependencies
    6. Data Path
    7. Dma
    8. Instruction Format
    9. Interrupts
    10. Io Handling
    11. Machine Instructions
    12. Microprogramming
    13. Page Fault
    14. Pipeline
    15. True False
    16. Virtual Memory
  3. Compiler Design (138)
    1. Assembler
    2. Code Optimization
    3. Context Free
    4. Dag Representation
    5. Grammar
    6. Intermediate Code
    7. Lexical Analysis
    8. Linking
    9. Live Variable
    10. Macros
    11. Parameter Passing
    12. Parsing
    13. Postfix
    14. Programming In C
    15. Recursion
    16. Register Allocation
    17. Runtime Environments
    18. Static Single Assignment
    19. Syntax Directed Translation
    20. Target Code Generation
  4. Computer Networks (177)
    1. Application Layer Protocols
    2. Bit Stuffing
    3. Bridges
    4. Communication
    5. Congestion Control
    6. Crc Polynomial
    7. Cryptography
    8. Csma Cd
    9. Distance Vector Routing
    10. Encoding
    11. Error Detection
    12. Ethernet
    13. Firewall
    14. Hamming Code
    15. Icmp
    16. Ip Packet
    17. Ipv4
    18. Lan Technologies
    19. Link State Routing
    20. Mac Protocol
    21. Manchester Encoding
    22. Network Flow
    23. Network Layering
    24. Network Protocols
    25. Network Security
    26. Network Switching
    27. Osi Protocol
    28. Routers Bridge Hubs Switches
    29. Routing
    30. Selective Repeat
    31. Serial Communication
    32. Sliding Window
    33. Sockets
    34. Stop And Wait
    35. Subnetting
    36. Tcp
    37. Token Bucket
    38. Udp
    39. Wifi
  5. Databases (190)
    1. B Tree
    2. Concurrency
    3. Database Normalization
    4. Er Diagram
    5. Er Model
    6. Functional Dependencies
    7. Indexing
    8. Multivalued Dependency 4nf
    9. Referential Integrity
    10. Relational Algebra
    11. Relational Calculus
    12. Sql
    13. Transactions
    14. Uniform Distribution
  6. Digital Logic (218)
    1. Adder
    2. Boolean Algebra
    3. Boolean Expressions
    4. Booth Recoding
    5. Booths Algorithm
    6. Booths Coding
    7. Canonical Normal Form
    8. Circuit Output
    9. Counter
    10. Flip Flop
    11. Floating Point Representation
    12. Functional Completeness
    13. Gray Code
    14. Half Adder
    15. Ieee Representation
    16. K Map
    17. Memory Interfacing
    18. Min No Gates
    19. Min Sum Of Products Form
    20. Minimal State Automata
    21. Modular Arithmetic
    22. Multiplexer
    23. Multiplexor
    24. Number Representation
    25. Pla
    26. Priority Encoder
    27. Ram
    28. Static Hazard
  7. Discrete Mathematics Combinatory (37)
    1. Binary Tree
    2. Combinations
    3. Combinatorics
    4. Counting
    5. Permutation
    6. Pigeonhole
    7. Recurrence
  8. Discrete Mathematics Graph Theory (54)
    1. Chromatic Number
    2. Combinatorics
    3. Connected Components
    4. Cutvertices&edges
    5. Graph Coloring
    6. Graph Connectivity
    7. Graph Isomorphism
    8. Graph Matching
    9. Graph Planarity
    10. Groups
    11. Havel Hakimi Theorem
    12. Permutation
    13. Spanning Tree
  9. Discrete Mathematics Mathematical Logic (67)
    1. Boolean Expressions
    2. Canonical Normal Form
    3. First Order Logic
    4. Logical Reasoning
    5. Predicate Logic
  10. Discrete Mathematics Set Theory & Algebra (159)
    1. Boolean Algebra
    2. Boolean Expressions
    3. Convergence
    4. Countable Uncountable Set
    5. Counting
    6. Functions
    7. Generating Functions
    8. Groups
    9. Injective Functions
    10. Lattice
    11. Lines Curves
    12. Partial Order
    13. Pigeonhole
    14. Polynomials
    15. Recurrence
    16. Relations
    17. Ring
    18. Sequence
    19. Sets
    20. Time Complexity
  11. Engineering Mathematics (2)
    1. Functions
  12. Engineering Mathematics Calculus (49)
    1. Continuity
    2. Counting
    3. Differentiability
    4. Integration
    5. Limits
    6. Maxima Minima
    7. Mean Value Theorem
  13. Engineering Mathematics Discrete Mathematics (1)
  14. Engineering Mathematics Linear Algebra (66)
    1. Cayley Hamilton Theorem
    2. Eigen Value
    3. Matrices
    4. System Of Equations
    5. Vector Space
  15. Engineering Mathematics Probability (94)
    1. Bayes Theorem
    2. Binomial Distribution
    3. Birthday Paradox
    4. Conditional Probability
    5. Expectation
    6. Exponential Distribution
    7. Huffman Code
    8. Normal Distribution
    9. Poisson Distribution
    10. Probability
    11. Random Variable
    12. Uniform Distribution
  16. General Aptitude Numerical Ability (195)
    1. Bar Charts
    2. Bayes Theorem
    3. Circle
    4. Clock Time
    5. Complex Number
    6. Compound Interest
    7. Conditional Probability
    8. Cost Market Price
    9. Counting
    10. Data Interpretation
    11. Directions
    12. Distance Time
    13. First Order Logic
    14. Fraction
    15. Functions
    16. Geometry
    17. Inference
    18. Logical Reasoning
    19. Mean
    20. Modular Arithmetic
    21. Most Appropriate Word
    22. Number Representation
    23. Number Series
    24. Numerical Methods
    25. Odd One
    26. Passage Reading
    27. Percent
    28. Pie Chart
    29. Polynomials
    30. Probability
    31. Proportions
    32. Quadratic Equations
    33. Ratios
    34. Sequence
    35. Sets
    36. Speed
    37. Statistics
    38. System Of Equations
    39. Variance
  17. General Aptitude Verbal Ability (155)
    1. Closest Word
    2. English Grammar
    3. Grammatically Incorrect Sentence
    4. Inference
    5. Logical Reasoning
    6. Meaning
    7. Most Appropriate Alternative
    8. Most Appropriate Word
    9. Odd One
    10. Opposite
    11. Passage Reading
    12. Tense
    13. Verbal Reasoning
    14. Word Pairs
  18. Operating System (268)
    1. Cache Memory
    2. Computer Peripherals
    3. Concurrency
    4. Context Switch
    5. Critical Section
    6. Dining Philosopher
    7. Disk
    8. Disk Scheduling
    9. Dma
    10. File
    11. File System
    12. Fork
    13. Interrupts
    14. Io Handling
    15. Linking
    16. Memory Allocation
    17. Memory Management
    18. Page Replacement
    19. Process Schedule
    20. Process Synchronization
    21. Resource Allocation
    22. Semaphore
    23. Threads
    24. User Modes
    25. Virtual Memory
    26. Working Set
  19. Programming & DS (1)
    1. Programming In C
  20. Programming & DS DS (146)
    1. Arrays
    2. Avl Tree
    3. B Tree
    4. Bfs
    5. Binary Heap
    6. Binary Search
    7. Binary Tree
    8. Bst
    9. Counting
    10. Graph Algorithms
    11. Hashing
    12. Heap
    13. Infix Postfix
    14. Linked Lists
    15. Queues
    16. Spanning Tree
    17. Stack
    18. Tree Traversal
    19. Trees
  21. Programming & DS Programming (89)
    1. Activation Records
    2. Arrays
    3. Concurrency
    4. Loop Invariants
    5. Parameter Passing
    6. Pointers
    7. Post Condition
    8. Programming In C
    9. Programming Paradigms
    10. Recursion
    11. Runtime Environments
    12. Stack
    13. Variable Binding
  22. Theory of Computation (229)
    1. Chomsky Normal Form
    2. Closure Property
    3. Compound Automata
    4. Context Free
    5. Decidability
    6. Dfa
    7. Finite Automata
    8. Identify Class Language
    9. Minimal State Automata
    10. Myhill Nerode
    11. Pda
    12. Pumping Lemma
    13. Recursive Languages
    14. Recursive Recursively Enumerable
    15. Regular Expressions
    16. Regular Set
    17. True False
    18. Turing Machine

    Assuming P ≠ NP, which of the following is TRUE?

    (A) NP-complete = NP
    (B) NP-complete $\cap$ P = $\emptyset$
    (C) NP-hard = NP
    (D) P = NP-complete

    Consider two strings $A$="qpqrr" and $B$="pqprqrp". Let $x$ be the length of the longest common subsequence (not necessarily contiguous) between $A$ and $B$ and let $y$ be the number of such longest common subsequences between $A$ and $B$. Then $x +10y=$ ___.

    Which of the following statements is true?

    I. As the number of entries in a hash table increases, the number of collisions increases.

    II. Recursive programs are efficient.

    III. The worst case complexity for Quicksort is $O(n^2)$.

    IV. Binary search using a linear linked list is efficient.

     

    1. I and II
    2. II and III
    3. I and IV
    4. I and III

     

    A language with string manipulation facilities uses the following operations:

    head(s): the first character of string s
    tail(s): all but the first character of string s
    concat(s1, s2): s1s2

    For the string s = "acbc", what will be the output of

    concat(head(s), head(tail(tail(s))))
    1. ac
    2. bc
    3. ab
    4. cc

     

    Which of the following statements is false?

    1. Optimal binary search tree construction can be performed efficiently using dynamic programming

    2. Breadth-first search cannot be used to find connected components of a graph

    3. Given the prefix and postfix walks over a binary tree, the binary tree cannot be uniquely constructed.

    4. Depth-first search can be used to find connected components of a graph

     

    Let $A=(a_{ij})$ be an $n$-rowed square matrix and $I_{12}$ be the matrix obtained by interchanging the first and second rows of the $n$-rowed identity matrix. Then $AI_{12}$ is such that its first

    1. row is the same as its second row

    2. row is the same as the second row of $A$

    3. column is the same as the second column of $A$

    4. row is all zero

    The worst case running time to search for an element in a balanced binary search tree with $n2^{n}$ elements is

    (A) $\Theta(n\log n)$  (B) $\Theta(n2^n)$  (C)  $\Theta(n)$ (D) $\Theta(\log n)$

    The weight of a sequence $a_0,a_1, \dots, a_{n-1}$ of real numbers is defined as $a_0+a_1/2+ \dots + a_{n-1}/2^{n-1}$. A subsequence of a sequence is obtained by deleting some elements from the sequence, keeping the order of the remaining elements the same. Let $X$ denote the maximum possible weight of a subsequence of $a_0,a_1, \dots, a_{n-1}$ and $Y$ the maximum possible weight of a subsequence of $a_1,a_2, \dots, a_{n-1}$. Then $X$ is equal to

    1. $max(Y, a_0+Y)$
    2. $max(Y, a_0+Y/2)$
    3. $max(Y, a_0 +2Y)$
    4. $a_0+Y/2$

    An algorithm to find the length of the longest monotonically increasing sequence of numbers in an array $A[0:n-1]$ is given below.

    Let $L_i$, denote the length of the longest monotonically increasing sequence starting at index $i$ in the array.

    Initialize $L_{n-1} = 1$.

    For all $i$ such that $0 \leq i \leq n-2$

    $ L_i = \begin{cases} 1+ L_{i+1} &  \quad\text{if A[i] < A[i+1]} \\ 1 & \quad\text{Otherwise}\end{cases} $

    Finally, the length of the longest monotonically increasing sequence is $\text{Max} \:(L_0, L_1, \dots , L_{n-1}).$

    Which of the following statements is TRUE?

    (A) The algorithm uses dynamic programming paradigm

    (B) The algorithm has a linear complexity and uses branch and bound paradigm

    (C) The algorithm has a non-linear polynomial complexity and uses branch and bound paradigm

    (D) The algorithm uses divide and conquer paradigm

    Suppose you want to move from $0$ to $100$ on the number line. In each step, you either move right by a unit distance or you take a shortcut. A shortcut is simply a pre-specified pair of integers $i,\:j \:\text{with}\: i <j$. Given a shortcut $i,j$ if you are at position $i$ on the number line, you may directly move to $j$. Suppose $T(k)$ denotes the smallest number of steps needed to move from $k$ to 100. Suppose further that there is at most 1 shortcut involving any number, and in particular from 9 there is a shortcut to 15. Let $y$ and $z$ be such that $T(9) = 1 + \min(T(y),T(z))$. Then the value of the product $yz$ is _____.

    The minimum number of comparisons required to find the minimum and the maximum of 100 numbers is ________

    What is the size of the smallest MIS(Maximal Independent Set) of a chain of nine nodes?

     

    A) 5
    B) 4
    C) 3
    D) 2

    There are 5 bags labeled 1 to 5. All the coins in a given bag have the same weight. Some bags have coins of weight 10 gm, others have coins of weight 11 gm. I pick 1, 2, 4, 8, 16 coins respectively from bags 1 to 5. Their total weight comes out to 323 gm. Then the product of the labels of the bags having 11 gm coins is ___.

    Given two arrays of numbers $a_{1},...,a_{n}$ and $b_{1},...,b_{n}$ where each number is $0$ or $1$, the fastest algorithm to find the largest span $(i, j)$ such that $ a_{i}+a_{i+1}+\dots+a_{j}=b_{i}+b_{i+1}+\dots+b_{j}$, or report that there is no such span,

    1. Takes $O(3^{n})$ and $\Omega(2^{n})$ time if hashing is permitted
    2. Takes $O(n^{3})$ and $\Omega(n^{2.5})$ time in the key comparison mode
    3. Takes $\Theta (n)$ time and space
    4. Takes $O(\sqrt n)$ time only if the sum of the $2n$ elements is an even number

    A set $X$ can be represented by an array x[n] as follows: 

     x$\left [ i \right ]=\begin{cases} 1 & \text{if } i \in X \\ 0 & \text{otherwise}  \end{cases}$

    Consider the following algorithm in which x, y, and z are Boolean arrays of size n: 

    algorithm zzz(x[], y[], z[]) {
        int i;
        
        for(i=0; i<n; ++i)
            z[i] = (x[i] ∧ ~y[i]) ∨ (~x[i] ∧ y[i]);
    }

    The set Z computed by the algorithm is: 

    1. $(X\cup Y)$
    2. $(X\cap Y)$
    3. $(X-Y)\cap (Y-X)$
    4. $(X-Y)\cup (Y-X)$

    Let $A$ be an $n \times n$ matrix such that the elements in each row and each column are arranged in ascending order. Draw a decision tree which finds the 1st, 2nd and 3rd smallest elements in the minimum number of comparisons.

    If $n$ is a power of 2, then the minimum number of multiplications needed to compute $a^n$ is

    (a) $\log_2 n$

    (b) $\sqrt n$

    (c) $n-1$

    (d) $n$

    Suppose we want to arrange the $n$ numbers stored in an array such that all negative values occur before all positive ones. The minimum number of exchanges required in the worst case is

    1. $n-1$
    2. $n$
    3. $n+1$
    4. None of the above

     

    Consider the following C program

    main()
    { 
        int x, y, m, n;
        scanf("%d %d", &x, &y);
        /* Assume x>0 and y>0*/
        m = x; n = y;
        while(m != n)
            { 
                if (m > n)
                    m = m-n;
                else
                    n = n-m;
            }
        printf("%d", n);
    }

    The program computes

    1. $x+y$ using repeated subtraction

    2. $x \mod y$ using repeated subtraction

    3. the greatest common divisor of $x$ and $y$

    4. the least common multiple of $x$ and $y$

    An element in an array X is called a leader if it is greater than all elements to the  right of it in X. The best algorithm to find all leaders in an array 

    1. Solves it in linear time using a left to right pass of the array
    2. Solves it in linear time using a right to left pass of the array
    3. Solves it using divide and conquer in time $\Theta (n\log n)$
    4. Solves it in time $\Theta( n^2)$

    The average number of key comparisons required for a successful search for sequential search on $n$ items is

    1. $\frac{n}{2}$
    2. $\frac{n-1}{2}$
    3. $\frac{n+1}{2}$
    4. None of the above

    A program attempts to generate as many permutations as possible of the string, 'abcd' by pushing the characters a, b, c, d in the same order onto a stack, but it may pop off the top character at any time. Which one of the following strings CANNOT be generated using this program?

    1. abcd
    2. dcba
    3. cbad
    4. cabd

    In the worst case, the number of comparisons needed to search a single linked list of length n for a given element is

    1. $\log n$
    2. $\frac{n}{2}$
    3. $\log_2 {n} - 1$
    4. $n$

    Let $n$ be a large integer. Which of the following statements is TRUE?

    1. $2^{\sqrt{2\log n}}< \frac{n}{\log n}< n^{1/3}$
    2. $\frac{n}{\log n}< n^{1/3}< 2^{\sqrt{2\log n}}$
    3. $2^{\sqrt{2\log n}}< n^{1/3}< \frac{n}{\log n}$
    4. $n^{1/3}< 2^{\sqrt{2\log n}}<\frac{n}{\log n}$
    5. $\frac{n}{\log n}< 2^{\sqrt{2\log n}}<n^{1/3}$

    The subset-sum problem is defined as follows. Given a set of n positive integers, $S = \{ a_1, a_2, a_3, \dots , a_n \}$, and positive integer $W$, is there a subset of $S$ whose elements sum to $W$? A dynamic program for solving this problem uses a 2-dimensional Boolean array, $X$, with n rows and $W+1$ columns. $X[i, j], 1 \leq i \leq n, 0 \leq j \leq W$, is TRUE, if and only if there is a subset of $\{a_1, a_2, \dots, a_i\}$ whose elements sum to $j$.

    Which entry of the array $X$, if TRUE, implies that there is a subset whose elements sum to $W$?

    1. $X[1, W]$
    2. $X[n, 0]$
    3. $X[n, W]$
    4. $X[n-1, n]$

    Consider the following game. There is a list of distinct numbers. At any round, a player arbitrarily chooses two numbers $a, b$ from the list and generates a new number $c$ by subtracting the smaller number from the larger one. The numbers $a$ and $b$ are put back in the list. If the number $c$ is non-zero and is not yet in the list, $c$ is added to the list. The player is allowed to play as many rounds as the player wants. The score of a player at the end is the size of the final list.

    Suppose at the beginning of the game the list contains the following numbers: $48, 99, 120, 165$ and $273$. What is the score of the best player for this game?

    1. $40$
    2. $16$
    3. $33$
    4. $91$
    5. $123$

    Given a set of $n$ distinct numbers, we would like to determine both the smallest and the largest number. Which of the following statements is TRUE?

    1. These two elements can be determined using $O\left(\log^{100}n\right)$ comparisons.
    2. $O\left(\log^{100}n\right)$ comparisons do not suffice, however these two elements can be determined using $n + O(\log n)$ comparisons.
    3. $n+O(\log n)$ comparisons do not suffice, however these two elements can be determined using $3\lceil n/2 \rceil$ comparisons.
    4. $3\lceil n/2 \rceil$ comparisons do not suffice, however these two elements can be determined using $2(n - 1)$ comparisons.
    5. None of the above.

    Given a set of $n$ distinct numbers, we would like to determine the smallest three numbers in this set using comparisons. Which of the following statements is TRUE?

    1. These three elements can be determined using $O\left(\log^{2}n\right)$ comparisons.
    2. $O\left(\log^{2}n\right)$ comparisons do not suffice, however these three elements can be determined using $n+O(1)$ comparisons.
    3. $n + O(1)$ comparisons do not suffice, however these three elements can be determined using $n +O(\log n)$ comparisons.
    4. $n+O(\log n)$ comparisons do not suffice, however these three elements can be determined using $O(n)$ comparisons.
    5. None of the above.

    Consider the problem of computing the minimum of a set of $n$ distinct numbers. We choose a permutation uniformly at random (i.e., each of the $n!$ permutations of $\left \langle 1,....,n \right \rangle$ is chosen with probability $1/n!$) and we inspect the numbers in the order given by this permutation. We maintain a variable MIN that holds the minimum value seen so far. MIN is initialized to $\infty$ and if we see a value smaller than MIN during our inspection, then MIN is updated. For example, in the inspection given by the following sequence, MIN is updated four times.

        5 9 4 2 6 8 0 3 1 7

    What is the expected number of times MIN is updated?

    1. $O (1)$
    2. $H_{n}=\sum ^{n}_{i=1} 1/i$
    3. $\sqrt{n}$
    4. $n/2$
    5. $n$
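
    For intuition behind option 2: by linearity of expectation, the $i$-th inspected value triggers an update exactly when it is the smallest among the first $i$ values inspected, which for a uniformly random permutation happens with probability $1/i$. Summing over $i$,

    $E[\text{number of updates}] = \sum_{i=1}^{n} \frac{1}{i} = H_{n}$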

    Suppose $n$ processors are connected in a linear array as shown below. Each processor has a number. The processors need to exchange numbers so that the numbers eventually appear in ascending order (the processor $\rm P1$ should have the minimum value and the processor $\rm Pn$ should have the maximum value).

    The algorithm to be employed is the following. Odd numbered processors and even numbered processors are activated in alternate steps; assume that in the first step all the even numbered processors are activated. When a processor is activated, the number it holds is compared with the number held by its right-hand neighbour (if one exists); the smaller of the two numbers is retained by the activated processor and the bigger is stored in its right-hand neighbour.
    How long does it take for the processors to sort the values?

    1. $n \log n$ steps
    2. $n^2$ steps
    3. $n$ steps
    4. $n^{1.5}$ steps
    5. The algorithm is not guaranteed to sort

    Given a weighted directed graph with $n$ vertices where edge weights are integers (positive, zero, or negative), determining whether there are paths of arbitrarily large weight can be performed in time

    1. $O (n)$
    2. $O(n . \log(n))$ but not $O (n)$
    3. $O(n^{1.5})$ but not $O (n \log n)$
    4. $O(n^{3})$ but not $O(n^{1.5})$
    5. $O(2^{n})$ but not $O(n^{3})$

    Let $T$ be a tree of $n$ nodes. Consider the following algorithm, which constructs a sequence of leaves $u_{1}, u_{2}, \ldots$. Let $u_{1}$ be some leaf of the tree. Let $u_{2}$ be a leaf that is farthest from $u_{1}$. Let $u_{3}$ be the leaf that is farthest from $u_{2}$, and, in general, let $u_{i+1}$ be a leaf of $T$ that is farthest from $u_{i}$ (if there are many choices for $u_{i+1}$, pick one arbitrarily). The algorithm stops when some $u_{i}$ is visited again. What can you say about the distance between $u_{i}$ and $u_{i+1}$, as $i = 1, 2, \ldots?$

    1. For some trees, the distance strictly reduces in each step.
    2. For some trees, the distance increases initially and then decreases.
    3. For all trees, the path connecting $u_{2}$ and $u_{3}$ is a longest path in the tree.
    4. For some trees, the distance reduces initially, but then stays constant.
    5. For the same tree, the distance between the last two vertices visited can be different, based on the choice of the first leaf $u_{1}$.

    Consider the class of recursive and iterative programs. Which of the following is false?

    1. Recursive programs are more powerful than iterative programs.
    2. For every iterative program there is an equivalent recursive program.
    3. Recursive programs require dynamic memory management.
    4. Recursive programs do not terminate sometimes.
    5. Iterative programs and recursive programs are equally expressive.

    Consider the following C program which is supposed to compute the transpose of a given $4 \times 4$ matrix M. Note that there is an X in the program which indicates some missing statements. Choose the correct option to replace X in the program.

    #include<stdio.h>
    #define ROW 4
    #define COL 4
    int M[ROW][COL] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16};
    main()
    {
        int i, j, t;
        for (i = 0; i < 4; ++i)
        {
            X
        }
    for (i = 0; i < 4; ++i)
            for (j = 0; j < 4; ++j)
                printf ("%d", M[i][j]);
    }
    1. for(j = 0; j < 4; ++j){
           t = M[i][j];
           M[i][j] = M[j][i];
           M[j][i] = t;
      }
      
    2. for(j = 0; j < 4; ++j){
           M[i][j] = t;
           t = M[j][i];
           M[j][i] = M[i][j];
      }
      
    3. for(j = i; j < 4; ++j){
           t = M[i][j];
           M[i][j] = M[j][i];
           M[j][i] = t;
      }
      
    4. for(j = i; j < 4; ++j){
           M[i][j] = t;
           t = M[j][i];
           M[j][i] = M[i][j];
      }

    You are given ten rings numbered from 1 to 10, and three pegs labeled A, B, and C. Initially all the rings are on peg A, arranged from top to bottom in ascending order of their numbers. The goal is to move all the rings to peg B in the minimum number of moves obeying the following constraints:

    (i) In one move, only one ring can be moved.

    (ii) A ring can only be moved from the top of its peg to the top of a new peg.

    (iii) At no point can a ring be placed on top of another ring with a lower number.

    How many moves are required?

    1. 501
    2. 1023
    3. 2011
    4. 10079
    5. None of the above.

    Consider the following program operating on four variables $u, v, x, y$, and two constants $X$ and $Y$.

    x, y, u, v:= X, Y, Y, X;    
    While (x ≠ y)  
    do
        if (x > y) then x, v := x - y, v + u;
        else if (y > x) then y, u:= y - x, u + v;  
    od;
    print ((x + y) / 2);  print ((u + v) / 2);  

     

    Given $X > 0 \land Y > 0$, pick the true statement out of the following:

    1. The program prints $\text{gcd}(X, Y)$ and the first prime larger than both $X$ and $Y$.
    2. The program prints $\text{gcd}(X, Y)$ followed by $\text{lcm}(X, Y)$.
    3. The program prints $\text{gcd}(X, Y)$ followed by $\frac1 2 \times \text{lcm}(X, Y)$.
    4. The program prints $\frac1 2 \times \text{gcd}(X, Y)$ followed by $\frac1 2 \times \text{lcm}(X, Y)$.
    5. The program does none of the above.

    Consider the following C function.

    int fun1 (int n) { 
         int i, j, k, p, q = 0; 
         for (i = 1; i < n; ++i) {
            p = 0; 
           for (j = n; j > 1; j = j/2) 
               ++p;  
          for (k = 1; k < p; k = k * 2) 
               ++q;
         } 
         return q;
    }

     

    Which one of the following most closely approximates the return value of the function fun1?

    1. $n^{3}$
    2. $n(\log n)^{2}$
    3. $n\log n$
    4. $n\log(\log n)$

    Suppose you are provided with the following function declaration in the C programming language.

    int partition(int a[], int n);

    The function treats the first element of $a[\:]$ as a pivot and rearranges the array so that all elements less than or equal to the pivot are in the left part of the array, and all elements greater than the pivot are in the right part. In addition, it moves the pivot so that the pivot is the last element of the left part. The return value is the number of elements in the left part.

    The following partially given function in the C programming language is used to find the $k^{th}$ smallest element in an array $a[\:]$ of size $n$ using the partition function. We assume $k \leq n$.

    int kth_smallest (int a[], int n, int k)
    {
        int left_end = partition (a, n);
        if (left_end+1==k) {
            return a[left_end];
        }
        if (left_end+1 > k) {
            return kth_smallest (___________);
        } else {
            return kth_smallest (___________);   
        }
    }

    The missing arguments lists are respectively

     

    1. (a, left_end, k) and (a+left_end+1, n-left_end-1, k-left_end-1)
    2. (a, left_end, k) and (a, n-left_end-1, k-left_end-1)
    3. (a, left_end+1, n-left_end-1, k-left_end-1) and (a, left_end, k)
    4. (a, n-left_end-1, k-left_end-1) and (a, left_end, k)

    Given below are some algorithms, and some algorithm design paradigms.

    Algorithms:
    1. Dijkstra's Shortest Path
    2. Floyd-Warshall algorithm to compute all pairs shortest path
    3. Binary search on a sorted array
    4. Backtracking search on a graph

    Design paradigms:
    i. Divide and Conquer
    ii. Dynamic Programming
    iii. Greedy design
    iv. Depth-first search
    v. Breadth-first search

    Match the above algorithms on the left to the corresponding design paradigm they follow.

     

    1. 1-i, 2-iii, 3-i, 4-v
    2. 1-iii, 2-iii, 3-i, 4-v
    3. 1-iii, 2-ii, 3-i, 4-iv
    4. 1-iii, 2-ii, 3-i, 4-v

    Match the following:

    (P) Prim's algorithm for minimum spanning tree              (i) Backtracking
    (Q) Floyd-Warshall algorithm for all pairs shortest paths   (ii) Greedy method
    (R) Mergesort                                               (iii) Dynamic programming
    (S) Hamiltonian circuit                                     (iv) Divide and conquer

    1. P-iii, Q-ii, R-iv, S-i
    2. P-i, Q-ii, R-iv, S-iii
    3. P-ii, Q-iii, R-iv, S-i
    4. P-ii, Q-i, R-iii, S-iv

    Let a be an array containing n integers in increasing order. The following algorithm determines whether there are two distinct numbers in the array whose difference is a specified number S > 0.

    i = 0; j = 1;
    while (j < n ){
             if (E) j++;
             else if (a[j] - a[i] == S) break;
             else i++;
    }
    if (j < n) printf("yes") else printf ("no");


    Choose the correct expression for E.

    1. a[j] - a[i] > S
    2. a[j] - a[i] < S
    3. a[i] - a[j] < S
    4. a[i] - a[j] > S

    The following C function takes two ASCII strings and determines whether one is an anagram of the other. An anagram of a string s is a string obtained by permuting the letters in s.

    int anagram (char *a, char *b) {
        int count [128], j;
        for (j = 0;  j < 128; j++) count[j] = 0;
        j = 0;
        while (a[j] && b[j]) {
            A;
            B;
        }
        for (j = 0; j < 128; j++) if (count [j]) return 0;
        return 1;
    }


    Choose the correct alternative for statements A and B.

    1. A : count [a[j]]++ and B : count[b[j]]--
    2. A : count [a[j]]++ and B : count[b[j]]++
    3. A : count [a[j++]]++ and B : count[b[j]]--
    4. A : count [a[j]]++ and B : count[b[j++]]--

    In the following table, the left column contains the names of standard graph algorithms and the right column contains the time complexities of the algorithms. Match each algorithm with its time complexity.

     1. Bellman-Ford algorithm
     2. Kruskal’s algorithm
     3. Floyd-Warshall algorithm
     4. Topological sorting
     A : $O(m \log n)$
     B : $O(n^3)$
     C : $O(nm)$
     D : $O(n + m)$
    1. 1→ C, 2 → A, 3 → B, 4 → D
    2. 1→ B, 2 → D, 3 → C, 4 → A
    3. 1→ C, 2 → D, 3 → A, 4 → B
    4. 1→ B, 2 → A, 3 → C, 4 → D

    The following are the starting and ending times of activities A, B, C, D, E, F, G and H respectively in chronological order: $“a_s \: b_s \: c_s \: a_e \: d_s \: c_e \: e_s \: f_s \: b_e \: d_e \: g_s \: e_e \: f_e \: h_s \: g_e \: h_e”$. Here, $x_s$ denotes the starting time and $x_e$ denotes the ending time of activity X. We need to schedule the activities in a set of rooms available to us. An activity can be scheduled in a room only if the room is reserved for the activity for its entire duration. What is the minimum number of rooms required?

    1. 3
    2. 4
    3. 5
    4. 6

    The correct matching for the following pairs is

    (A) All pairs shortest path          (1) Greedy
    (B) Quick Sort                       (2) Depth-First search
    (C) Minimum weight spanning tree     (3) Dynamic Programming
    (D) Connected Components             (4) Divide and Conquer
    1. A-2 B-4 C-1 D-3

    2. A-3 B-4 C-1 D-2

    3. A-3 B-4 C-2 D-1

    4. A-4 B-1 C-2 D-3

    A recursive program to compute Fibonacci numbers is shown below. Assume you are also given an array f[0...M] with all elements initialized to 0.


    fib(n) {
        if (n > M) error ();
        if (n == 0) return 1;
        if (n == 1)return 1;
        if (▭)________________________(1)
            return ▭__________________(2)
        t = fib(n - 1) + fib(n - 2);
        ▭_____________________________(3)
        return t;
    }

    1. Fill in the boxes with expressions/statement to make fib() store and reuse computed Fibonacci values. Write the box number and the corresponding contents in your answer book.
    2. What is the time complexity of the resulting program when computing fib(n)?

     

    The most appropriate matching for the following pairs

        X: depth first search         1: heap 
        Y: breadth-first search       2: queue 
        Z: sorting                    3: stack

    is:

    1. X - 1  Y - 2  Z - 3
    2. X - 3  Y - 1  Z - 2
    3. X - 3  Y - 2  Z - 1
    4. X - 2  Y - 3  Z - 1

    Let $W(n) $ and $A(n)$ denote respectively, the worst  case and average case running time of an algorithm executed on an input of size $n$.  Which of the following is ALWAYS TRUE?

    (A) $A(n) = \Omega (W(n))$
    (B) $A(n) = \Theta (W(n))$
     (C) $A(n) = \text{O} (W(n))$
    (D) $A(n) = \text{o} (W(n))$

    The minimum number of comparisons required to determine if an integer appears more than $\frac{n}{2}$ times in a sorted array of $n$ integers is

    1. $\Theta(n)$

    2. $\Theta(\log n)$

    3. $\Theta(\log^*n)$

    4. $\Theta(1)$

    The recurrence relation capturing the optimal execution time of the Towers of Hanoi problem with $n$ discs is

    (A) $T(n) =  2T(n − 2) + 2$
    (B) $T (n) =  2T(n − 1) + n$
    (C) $T (n) =  2T(n/2) + 1$
    (D) $T (n) =  2T(n − 1) + 1$
    The minimum number of record movements required to merge five files A (with 10 records), B (with 20 records), C (with 15 records), D (with 5 records) and E (with 25 records) is:
    1. 165
    2. 90
    3. 75
    4. 65

    The subset-sum problem is defined as follows. Given a set of n positive integers, $S = \{ a_1, a_2, a_3, \dots , a_n \}$, and positive integer $W$, is there a subset of $S$ whose elements sum to $W$? A dynamic program for solving this problem uses a 2-dimensional Boolean array, $X$, with n rows and $W+1$ columns. $X[i, j], 1 \leq i \leq n, 0 \leq j \leq W$, is TRUE, if and only if there is a subset of $\{a_1, a_2, \dots, a_i\}$ whose elements sum to $j$.

    Which of the following is valid for $2 \leq i \leq n$, and $a_i \leq j \leq W$?

    1. $X[i, j] = X[i-1, j] \vee X[i, j-a_i]$
    2. $X[i, j] = X[i-1, j] \vee X[i-1, j-a_i]$
    3. $X[i, j] = X[i-1, j] \wedge X[i, j-a_i]$
    4. $X[i, j] = X[i-1, j] \wedge X[i-1, j-a_i]$
    Suppose there are $\lceil \log n \rceil$ sorted lists of $\lfloor n /\log n \rfloor$ elements each. The time complexity of producing a sorted list of all these elements is: (Hint: Use a heap data structure)

    Let S be a sorted array of n integers. Let t(n) denote the time taken for the most efficient algorithm to determine if there are two elements with sum less than 1000 in S. Which of the following statements is true?

    1. $t(n)$ is $O(1)$
    2. $n \leq t(n) \leq n \log_2 n$
    3. $n \log_2 n \leq t(n) < \binom{n}{2}$
    4. $t(n) = \binom{n}{2}$

    Suppose you are given an array s[1....n] and a procedure reverse (s, i, j) which reverses the order of elements in s between positions i and j (both inclusive). What does the following sequence do, where 1 ≤ k ≤ n:

               reverse (s, 1, k);
               reverse (s, k+1, n);
               reverse (s, 1, n);
    1. Rotates s left by k positions
    2. Leaves s unchanged 
    3. Reverses all elements of s
    4. None of the above
    Let $T$ be a Depth First Tree of an undirected graph $G$. An array $P$ indexed by the vertices of $G$ is given. $P[V]$ is the parent of vertex $V$, in $T$. Parent of the root is the root itself.

    Give a method for finding and printing the cycle formed if the edge $(u,v)$ of $G$ not in $T$ (i.e., $e \in G-T$) is now added to $T$.

    Time taken by your method must be proportional to the length of the cycle.

    Describe the algorithm in a PASCAL – like language. Assume that the variables have been suitably declared.

    Choose the correct alternatives (more than one may be correct) and write the corresponding letters only:

    Consider the following Pascal function:

    Function X(M:integer):integer;
    Var i:integer;
    Begin
        i := 0;
        while i*i < M
        do i := i + 1;
        X := i
     end
    
    

    The function call $X(N)$, if $N$ is positive, will return

    (a). $\lfloor\sqrt N \rfloor$

    (b). $\lfloor\sqrt N \rfloor +1$

    (c). $\lceil \sqrt N \rceil$

    (d). $\lceil \sqrt N \rceil +1$

    (e). None of the above

    Answers:

    Selected Answer
    Answer is (B) NP-complete $\cap$ P $= \phi$

    Since P $\neq$ NP, there is at least one problem in NP which is not in P. Let us take the hardest such problem, say $X$. Since P $\neq$ NP, $X \notin$ P.

    Now, by definition, NP-complete problems are the hardest problems in NP, and so $X$ is NP-complete. And, being in NP, $X$ can be reduced to every NP-complete problem, making every NP-complete problem at least as hard as $X$. So, since $X \notin$ P, no other NP-complete problem can be in P either.
    7 votes -- Arjun Suresh ( 156k points)
    34

    qprr

    pqrr

    qpqr

    In the first string, an LCS of length 4 must end with either rr or qr. Only 4 subsequences of length 4 are possible as candidates:

    qpqr

    qqrr

    pqrr

    qprr

    Now check for matching subsequences in the second string: except qqrr, all of them occur. So the LCS length is 4 and there are 3 such LCSs.
    6 votes -- Anurag Semwal ( 5.8k points)
    Selected Answer
    It is D according to me.

    Binary search using a linked list is not efficient, as it will not give O(log n): we cannot find the middle element in constant time, and finding the middle of a linked list takes O(n) time.

    Recursive programs are not always efficient because they take a lot of stack space, and recursive methods will often overflow the stack while processing big inputs; still, recursion has its own advantages too.
    11 votes -- Gate Keeda ( 17.7k points)
    Selected Answer
    C.

    concat(a,head(tail(tail(acbc))))

    concat(a,head(tail(cbc)))

    concat(a,head(bc))

    concat(a,b)

    ab.
    8 votes -- Gate Keeda ( 17.7k points)
    Answer: B

    (a) True.
    (b) False.
    (c) True.
    (d) True.
    5 votes -- Rajarshi Sarkar ( 30.1k points)
    Selected Answer
    C is the answer: the product $AI_{12}$ results in the same matrix as A but with the first column exchanged with the second column.
    A matrix:
    a  b  c
    d  e  f
    g  h  i

    I12 matrix
    0 1 0
    1 0 0
    0 0 1

    resulted matrix
    b a c
    e d f
    h g i
    5 votes -- Manu Thakur ( 5.6k points)
    Selected Answer
    Binary search takes $\Theta(\log n)$ for $n$ elements in the worst case. So, with $(n2^n)$ elements, the worst case time will be

    $\Theta(\log (n2^n)) $

    $= \Theta(\log n + \log 2^n) $

    $= \Theta(\log n + n) $

    $=\Theta(n)$
    8 votes -- Arjun Suresh ( 156k points)
    Selected Answer

    $$\begin{align}
    S &= \langle a_0, S_1 \rangle\\
    S_1 &= \langle a_1, a_2, a_3 \ldots a_{n-1} \rangle
    \end{align}$$

    Two possible cases arise:

    1. $a_0$ is included in the max weight subsequence of $S$:

              In this case, $X = \text{weight} \Bigl( \langle a_0, S_1 \rangle \Bigr) = a_0 + \dfrac Y 2$
       
    2. $a_0$ is not included in the max weight subsequence of $S$:

              In this case, $X = \text{weight}(S_1) = Y$

     

    Since the value of $a_0$ can be anything (negative or $<\frac Y 2$ in general) $\{ \because a_i \in \mathbb{R} \}$, it is possible that $Y > a_0 + \frac Y 2$.

    The maximum possible weight of a subsequence of $S$ is given by: $$X = \max \left ( Y, a_0 + \frac Y 2 \right )$$

    Thus, option B is correct.

    12 votes -- Pragy Agarwal ( 14.5k points)
    Selected Answer

    (A) is the answer. 

    The algorithm stores the optimal solutions to subproblems at each point (for each i) and then uses them to derive the optimal solution of a bigger problem; that is the dynamic programming approach. The program also has linear time complexity.

    http://stackoverflow.com/questions/1065433/what-is-dynamic-programming

    Now, branch and bound comes in when we explore all possible solutions (branch) and backtrack as soon as we know the current branch cannot lead to a better solution than the current bound (in classical backtracking we retreat only when no solution can be found along the branch). So, backtracking gives all possible solutions while branch and bound gives only the optimal one. http://www.cs.cornell.edu/~wdtseng/icpc/notes/bt2.pdf

    The given algorithm here is neither backtracking nor branch and bound. Because we are not branching anywhere in the solution space. 

    And the algorithm is not divide and conquer as we are not dividing the problem and then merging the solution as in the case of merge sort (where merge is the conquer step). 

    https://en.wikipedia.org/wiki/Divide_and_conquer_algorithms

    10 votes -- Arjun Suresh ( 156k points)
    Selected Answer
    T(9) = distance from 9 to 100

    T(9) = 1 + min(T(y), T(z)) = 1 + min(distance from y to 100, distance from z to 100)

    There are only two positions we can reach from 9: one simple step to the right on the number line, i.e. 10, and the given shortcut, 15.

    Hence y = 10, z = 15, and y × z = 10 × 15 = 150.
    9 votes -- Srinath Sri ( 2.9k points)
    Selected Answer

    Ans: The minimum number of comparisons required to find both the minimum and the maximum follows a divide and conquer approach:

      T(n) = T(⌊n/2⌋) + T(⌈n/2⌉) + 2
      T(2) = 1 // two elements: one comparison gives both max and min
      T(1) = 0 // a single element is both max and min

      If n is a power of 2, then we can write T(n) as:

       T(n) = 2T(n/2) + 2

      After solving the above recurrence, we get

      T(n) = (3n/2) - 2

     Thus, the approach does (3n/2) - 2 comparisons if n is a power of 2, and more than (3n/2) - 2 comparisons if n is not a power of 2.

    So, here put n = 100 and we get (3/2)(100) - 2 = 148 comparisons.

    16 votes -- Jay ( 1.1k points)
    Selected Answer
    Answer: C

    1--2--3--4--5--6--7--8--9

    (2,5,8) is the maximal independent set for a chain of 9 nodes. If we add any other node to the set then it will not be MIS.
    7 votes -- Rajarshi Sarkar ( 30.1k points)
    Selected Answer
    There are 5 bags. Assume initially that all bags have 10 gm coins, and pick 1, 2, 4, 8, 16 coins from them as per the given condition.

    If all bags had 10 gm coins, the total weight would be 10 + 20 + 40 + 80 + 160 = 310. But the total weight should be 323, so the computed total is 13 less. Writing 13 = 1 + 4 + 8 shows the extra gram per coin comes from the bags contributing 1, 4 and 8 coins:
    11 + 20 + 44 + 88 + 160 = 323, which means the 1st, 3rd and 4th bags have 11 gm coins. So the product of the labels is 1 × 3 × 4 = 12.
    10 votes -- Manu Thakur ( 5.6k points)
    Selected Answer

    Answer is (C). Following algorithm would do.

    Since array is binary, the max sum will go until $n$ and so the sum difference of the two arrays can vary between $-n$ and $n$. We use array start to keep the starting index of each possible sum (hence of size $2n+1$) and array end to keep the ending index (these two arrays work like hash tables and since we have only $2n+1$ possible keys, we can do a perfect hashing). So, our required solution will be max(end[i]-start[i]) provided both are assigned values. 

    The algorithm works as follows:

    1. Initialize diff array to contain the running difference of the sums of elements of arrays a and b, i.e., $diff[i] = \sum_{k=0}^{i} \left( a[k] - b[k] \right)$.
    2. Now diff[i] can have values from $-n$ to $n$ which gives $2n+1$ possible values and the first occurrence of a diff value marks the beginning of a span and the last occurrence marks the end. We use start and end array for storing these two positions for the $2n+1$ possible values.
    3. Now, the largest value of end[i] - start[i] for any i, will be the largest span and the start of it will be start[i] + 1, and end will be end[i]. If the span is starting from first position itself (arrays a and b have same first elements), then it will start from start[i] itself. 
    #include <stdio.h>
    
    #define size 100 //assume n is less than 100
    int main()
    {
        int n, a[size], b[size]; 
        int start[2*size+1], end[2*size+1];
        int sum1 = 0, sum2 = 0, i;
        int diff[size];
        printf("Enter n: ");
        scanf("%d", &n);
        for(i = 0; i < n; i++)
        {
            printf("Enter a[%d]: ", i);
            scanf("%d", &a[i]);
        }
        for(i = 0; i < n; i++)
        {
            printf("Enter b[%d]: ", i);
            scanf("%d", &b[i]);
        }
            
        for(i = 0; i < n; i++)
        {
            if(a[i]) sum1++;
            if(b[i]) sum2++;
            diff[i] = sum1 - sum2;
        }
        for(i = 0; i < 2*n+1; i++)
            start[i] = -1,end[i] = -1;
        start[n] = end[n] = 0;  
        //initially sum is 0 at the beginning of array and 
        //the first n-1 elements of start and end are used 
        //if sum of A till ith element is less than sum of B till ith element
        for(i=0; i < n; i++)
        {
            if(start[diff[i] + n] == -1)//interested only in the first occurrence of diff[i]
                start[diff[i] + n] = i;
            end[diff[i] + n] = i;//interested in the last occurrence of diff[i]
        }
        int max = -1;
        int savei = -1; //savei is for storing the sum having the largest span
        
        for(i = 0; i < 2*n+1; i++)
        {
            if(start[i] > -1 && (end[i] - start[i] > max))
            {
                max = end[i] - start[i];
                savei = i;
            }
            
        }
        if(savei >= 0)
        {
            printf("The largest span is from %d to %d\n", start[savei]+(savei != n), end[savei]);
        //when sum zero is having the largest span, span starts from first element itself. 
        //Else, the span starts from the next element from which the span does not change
        }
        else
        {
            printf("No span\n");
        }
    }
    
    3 votes -- Arjun Suresh ( 156k points)
    Selected Answer

    Option (d)

    In the given algorithm the for loop contains a logical expression

     z[i] = (x[i] ∧ ~y[i]) ∨ (~x[i] ∧ y[i]);

    The equivalent set representation of the given logical expression, if we take z[i] = Z, x[i] = X, y[i] = Y, is

    $Z = (X \cap Y') \cup (X' \cap Y)$

    $Z = (X - Y) \cup (Y - X) \qquad [\because A \cap B' = A - B]$

    4 votes -- Prasanna Ranganathan ( 2.5k points)

    If all elements are distinct, we have to check the 1st row and 1st column; here the minimum number of comparisons could be 1, because a[0][0] will always be the minimum and we then only need to compare a[0][1] with a[1][0].

    If all elements are not distinct, we have to search all the elements of the array, so the complexity will be O(n²).

    So, the minimum number of comparisons could be O(1)

    1 votes -- srestha ( 29.8k points)
    Selected Answer
    a. $\log n$

    For n = 8, we can do

    $b = a \times a$

    $b = b\times b$

    $b = b\times b$ and we get $b = a^8$
    3 votes -- Arjun Suresh ( 156k points)
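    The repeated-squaring trick above generalizes to any exponent. A minimal C sketch (the function name fast_pow and the driver are ours, not part of the question):

```c
#include <assert.h>

/* Computes a^n with Theta(log n) multiplications: square the base
 * and halve the exponent at each step, multiplying into the result
 * whenever the current low bit of n is 1. */
long long fast_pow(long long a, unsigned int n)
{
    long long result = 1;
    while (n > 0) {
        if (n & 1)
            result *= a;   /* this bit of the exponent contributes */
        a *= a;            /* square: a, a^2, a^4, a^8, ... */
        n >>= 1;
    }
    return result;
}
```

    For n = 8 this performs exactly the three squarings described in the answer.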
    Selected Answer
    Answer is (d) None of these

    We just require n/2 swaps in the worst case. The algorithm is as given below:

    Find positive number from left side and negative number from right side and do exchange. Since, at least one of them must be less than or equal to n/2, there cannot be more than n/2 exchanges. An implementation is given below:

    http://gatecse.in/wiki/Moving_Negative_Numbers_to_the_Beginning_of_Array
    9 votes -- Arjun Suresh ( 156k points)
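    A sketch of the exchange idea described above (the function name is ours). Each swap puts one element in place on each side, so at most n/2 swaps occur:

```c
#include <assert.h>

/* Moves all negative numbers to the front of a[0..n-1] using at most
 * n/2 exchanges: find a non-negative number from the left and a
 * negative number from the right, and exchange them.
 * Returns the number of swaps performed. */
int move_negatives(int a[], int n)
{
    int i = 0, j = n - 1, swaps = 0;
    while (i < j) {
        while (i < j && a[i] < 0) i++;    /* already in place on the left  */
        while (i < j && a[j] >= 0) j--;   /* already in place on the right */
        if (i < j) {
            int t = a[i]; a[i] = a[j]; a[j] = t;
            swaps++;
        }
    }
    return swaps;
}
```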
    Selected Answer
    It is a simple algorithm of gcd

    here while loop executes until m==n

    take any two number as m,n and compute it , get the answer

    Ans will be (C)
    3 votes -- srestha ( 29.8k points)
    Selected Answer

    Ans B should be correct. 

    We can move from the right, keeping note of the maximum element seen so far (say current_max). At the start, the rightmost element is always a leader. If an element is greater than current_max, it will be a leader: add it to the leaders, set current_max to this element and carry on leftward. Time complexity would be O(n).

    6 votes -- Madhur Rawat ( 2.4k points)
    Selected Answer
    Expected number of comparisons
    = 1 $\times$ Probability of first element be x + 2 $\times$ Probability of second element be x + ....  +n $\times$ Probability of last element be x.

    $= \frac{1}{n} + \frac{2}{n} + \frac{3}{n} +.... +\frac{n}{n} $

    $ = \frac{\left ( \frac{n\times (n+1)}{2} \right ) } {n} $

    $ = \frac{n+1}{2}$
    12 votes -- Arjun Suresh ( 156k points)
    Selected Answer
    A. push a & pop a, push b & pop b, push c & pop c, and finally push d and pop d
       sequence of popped elements will come to abcd
    B. first push abcd, and after that pop one by one sequence of popped elements will come to dcba

    C. push abc, and after that pop one by one sequence of popped elements will come to cba, now push d and pop d, final sequence comes to cbad

    D. this sequence is not possible because 'a' cannot be popped before 'b' in any way
    5 votes -- Manu Thakur ( 5.6k points)
    Selected Answer
    A & C are not correct as we cannot do binary search on a linked list.

    B seems like the average case, but we are asked for the worst case.

    The worst case is when we do not find the element in the list: we might end up searching the entire list, comparing with each element. So the answer is D. n
    5 votes -- Akash ( 32.4k points)
    Selected Answer
    Ans will be C

    Take $n=2^{1024}$

    Now,   $2^{\sqrt{(2 \log n)}} \approx 2^{45}$

              $n^{\frac{1}{3}} \approx 2^{341}$

              $n / \log n = 2^{1024}/1024 \approx 2^{1014}$

    Just one value is not enough to confirm growth rate. So, take $n = 1024$.

    Now,  $2^{\sqrt{(2 \log n)}} \approx 2^{4}$

              $n^{\frac{1}{3}}\approx 2^{3}$

              $n / \log n = 2^{10}/10 \approx 2^{7}$

    So, as $n$ increases the gap between second and third function increases and also the second function overtakes the first. So, $f1 < f2 < f3$.
    2 votes -- srestha ( 29.8k points)
    Answer: C. If the last-row, last-column entry is 1 (TRUE), then there exists a subset whose elements sum to W.
    2 votes -- Shivam Bhardwaj ( 31 points)
    Selected Answer
    OPTION d is correct

    Here the list is (48, 99, 120, 165, 273).

    gcd(48, 99) = 3; note that the difference (99 - 48 = 51) is also divisible by 3, so all multiples of 3 up to 99, i.e. (3, 6, 9, ..., 99), get added: 99/3 = 33 numbers.

    Similarly gcd(48, 120) = 24, so the multiples of 24, (24, 48, ..., 120), get added: 120/24 = 5 numbers.

    Similarly gcd(48, 165) = 3, so the multiples (3, 6, 9, ..., 165) get added: 165/3 = 55 numbers in all.

    Finally gcd(48, 273) = 3, so the multiples (3, 6, 9, ..., 273) get added, which covers all the numbers above.

    So the total count of numbers added to the list is 273/3 = 91.
    4 votes -- venky.victory35 ( 581 points)
    I think the answer will be (C).

    To be accurate, it will need (3n/2) - 2 comparisons.
    3 votes -- Pranay Datta ( 6.8k points)
    Selected Answer

    Option (C) is correct .. Reason is as follows : 

    (figure: tournament tree of pairwise comparisons)

    Here, at the first level we are given n elements, out of which we have to find the smallest 3.

    We compare elements in pairs as shown in the figure and get n/2 elements at the second level.

    Note: the minimum element is among these n/2 elements.

    So, this level takes n/2 comparisons.

    Similarly the next level takes n/4 comparisons on the n/2 elements, and so on.

    Total comparisons so far: n/2 + n/4 + n/8 + .... + 4 + 2 + 1 = 2^(log n) - 1 = n - 1 {use the G.P. sum}


    Now we have to get smallest 3..

    We have 1st smallest at last level already  => 0 Comparison for this..

    => 2nd & 3rd smallest can be found in O(log n) time as shown below:

    (figure: path of the minimum element down the tournament tree)

     

    The minimum element must have descended along some path from top to bottom.

    => The SQUARES in the figure represent candidates for the 2nd minimum:

    every element that was compared directly with m1 (the first minimum) is a candidate for the second minimum.

    So, O(log n) comparisons suffice for finding the second smallest.

    => Similarly, the 3rd minimum takes O(log n) time, as every element compared directly with the 1st or 2nd minimum is a candidate for the 3rd minimum.

     

    3 votes -- Himanshu Agarwal ( 9.9k points)
    Selected Answer
    Let us consider 3 numbers {1,2,3}

    We will consider the permutation along with min no of times MIN is updated .

    Permutation        Number of times MIN is updated
    1,2,3              1
    1,3,2              1
    2,1,3              2
    2,3,1              2
    3,1,2              2
    3,2,1              3

    Total number of times MIN is updated: 11.

    Average number of times MIN is updated: 11/6.

    Now going by the options, I get (B).

    H$_3$ = 1 + 1/2 + 1/3 = 11/6.
    6 votes -- Riya Roy(Arayana) ( 5.6k points)
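    The counting above can be verified by brute force. A small C sketch enumerating all 6 permutations of {1,2,3} (helper names are ours):

```c
#include <assert.h>
#include <limits.h>

/* Number of times MIN is updated while scanning a[] left to right,
 * with MIN starting above every element. */
int min_updates(const int *a, int n)
{
    int min = INT_MAX, count = 0;
    for (int i = 0; i < n; i++)
        if (a[i] < min) { min = a[i]; count++; }
    return count;
}

/* Sum of MIN updates over all permutations of {1,2,3}. */
int total_updates_3(void)
{
    int perms[6][3] = {
        {1,2,3}, {1,3,2}, {2,1,3}, {2,3,1}, {3,1,2}, {3,2,1}
    };
    int total = 0;
    for (int p = 0; p < 6; p++)
        total += min_updates(perms[p], 3);
    return total;   /* 1 + 1 + 2 + 2 + 2 + 3 = 11 */
}
```

    The total of 11 over 6 permutations gives the average 11/6 = H_3.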
    Selected Answer
    OPTION C is correct.
    5 votes -- venky.victory35 ( 581 points)
    I think arbitrarily large weights means the graph may have a positive weight cycle, so the Bellman-Ford algorithm can be used, O(VE), after changing the sign of the edge weights.
    2 votes -- papesh pathare ( 359 points)
    Selected Answer

    answer = option E

    Computable function: those which can be incorporated in a program using for/while loops.

    Total Function: Defined for all possible inputs

    Well Defined: if its definition assigns it a unique value.

    It was a belief in early 1900s that every Computable function was also Primitively Recursive. But the discovery of Ackermann function provided a counter to it.

    The class of primitive recursive functions is a small subclass of the class of recursive functions. This means that there are some functions which are Well-Defined Total Functions and are Computable BUT Not primitively recursive; eg. Ackermann function.

    This makes all options from option A to option D as True.

    But option E as $\text{FALSE}$. As iterative programs are equivalent to only Primitively Recursive class.

     

    2 votes -- Amar Vashishth ( 20.9k points)
    Selected Answer

    option C:

    Look at the initial value of j: if j starts at 0, the double for loop swaps M[i][j] with M[j][i] and later swaps M[j][i] back with M[i][j], so the matrix M remains unchanged. To avoid this double swapping we need to initialize j = i and swap only the upper triangular part with the lower triangular part.

     

    for(j = i; j < 4; ++j){
    
         // code for swapping M[i][j] with M[j][i]
         t = M[i][j];
         M[i][j] = M[j][i];
         M[j][i] = t;
    }
    7 votes -- Vikrant Singh ( 11.1k points)

    I think it is the Tower of Hanoi problem.
    Therefore the total number of function calls is $2^n - 1 = 1023$ (for $n = 10$), option B.

    2 votes -- Umang Raman ( 11.5k points)
    It prints gcd(x, y) and lcm(x, y).

    Consider x, y, u, v = 17, 3, 3, 17:

    x=14, v=20

    x=11, v=23

    x=8, v=26

    x=5, v=29

    x=2, v=32

    y=1, u=35

    x=1, v=67

    These are the values obtained; lastly, printing (x+y)/2 and (v+u)/2 gives 1 and 51.
    1 votes -- zambus ( 189 points)
    Selected Answer

    i loop is executing n times. j loop is executing log n times for each i, and so value of p is log n. k loop is executing log p times, which is log log n times for each iteration of i. In each of these q is incremented. So, over all iterations of i, q will be incremented n log log n times. So, D choice. 

    17 votes -- Arjun Suresh ( 156k points)
    Selected Answer
    option c is correct ..
    8 votes -- Anoop Sonkar ( 4.4k points)
    Selected Answer

    Answer is (B)

    For some 'i', if we find that the difference (A[j] - A[i]) < S, we increment 'j' to make the difference wider, so that it can become equal to S.

    If at some point the difference becomes greater than S, we know it won't reduce further for the same 'i', and so we increment 'i'.

    We repeat this for each 'i', until a pair is found or i = n.

    6 votes -- Sandeep_Uniyal ( 5.5k points)
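    The completed algorithm, with E = a[j] - a[i] < S, can be written as a standalone C function (a sketch; the function name is ours):

```c
#include <assert.h>

/* Returns 1 iff the sorted array a[0..n-1] contains two distinct
 * positions with a[j] - a[i] == S (S > 0), using the two-pointer
 * scan from the question. */
int has_pair_with_diff(const int a[], int n, int S)
{
    int i = 0, j = 1;
    while (j < n) {
        if (a[j] - a[i] < S)        /* expression E: widen the gap */
            j++;
        else if (a[j] - a[i] == S)
            return 1;
        else                        /* gap too wide: advance left pointer */
            i++;
    }
    return 0;
}
```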
    Selected Answer

    The answer is D

    #include <stdio.h>
    
    int main(void) {
    
    return 0;
    }
    
    int anagram (char *a, char *b) {
    /*
    ASCII characters are of 7-bits 
    so we use count array to represent all the ASCII characters 
    (ranging 0-127)
    */
    int count [128], j;
    
    /*
    so this loop will initialize count of all the ASCII characters to be 
    0 (zero)
    */
    for (j = 0;  j < 128; j++) count[j] = 0;	
    
    j = 0;
    /*
    "a[j] && b[j]"  ensures that anagram returns 0 (false) in case both 
    strings have different length. Because different length strings cannot
    be anagram of each other
    */
    
    /*
    Logic:
    
    Below while loop increments ASCII equivalent position for its occurence
    in array 'a'in count array; and decrements ASCII equivalent position 
    for its occurence in array 'b'in count array.
    
    Example: a = "ISS" and b = "SIS"
    ASCII equivalent of:
    I - 73
    S - 83
    
    j = 0:	Statement A will increment count[ASCII of 'I']==>count[73] 
    count[73] = 0 --> 1
    Statement B will decrement count[ASCII of 'S'] ==> count[83]	
    count[83] = 0 --> -1 and will increment j	j = 0 --> 1
    
    j = 1:	Statement A will increment count[ASCII of 'S'] ==> count[83]	
    count[83] = -1 --> 0 
    Statement B will decrement count[ASCII of 'I'] ==> count[73]	
    count[73] = 1 --> 0 and will increment j	j = 1 --> 2
    
    j = 2:  Statement A will increment count[ASCII of 'S'] ==> count[83]	
    count[83] = 0 --> 1 
    Statement B will decrement count[ASCII of 'S'] ==> count[83]	
    count[83] = 1 --> 0 and will increment j	j = 2 --> 3
    
    *** END OF LOOP ***	
    
    */
    while (a[j] && b[j]) {
    
    
    A;	//count [a[j]]++
    
    /*
    Note: j will be increment after count[]-- will execute
    Resource: http://www.c4learn.com/c-programming/increment-operator-inside-printf
    */
    B; 	//count[b[j++]]--
    }
    
    /*
    This loop checks that the number of occurences of the individual ASCII
    characters is same or not.
    If count[i] = 0 ---> same number of occurences for ASCII chracter i 
    ---> return 1 (true)
    
    if count[i]!= 0 ---> different number of occurences for ASCII chracter i 
    ---> return 0 (false)
    */
    
    for (j = 0; j < 128; j++) if (count [j]) return 0;
    return 1;
    }

     

    6 votes -- Sohil Ladhani ( 153 points)
    Selected Answer

     1. Bellman-Ford algorithm => C : O(nm). Taking n as the number of vertices and m as the number of edges, we relax all m edges once per vertex, giving O(nm).
     2. Kruskal’s algorithm => the remaining option, A : O(m log n)
     3. Floyd-Warshall algorithm => dynamic programming algorithm, B : O(n³)
     4. Topological sorting => boils down to DFS, D : O(n + m)

    Answer A)

    2 votes -- Akash ( 32.4k points)
    Selected Answer

    Solution: B

    The problem can be modeled as a graph coloring problem. Construct a graph with one node corresponding to each of the activities A, B, C, D, E, F, G and H, and connect two activities whose time intervals overlap. The chromatic number of this graph is the number of rooms required.

    8 votes -- Gowthaman Arumugam ( 1.2k points)
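    Equivalently, the minimum number of rooms is the maximum number of activities running at the same time. A C sketch that scans the event sequence (encoding is ours: a lowercase letter marks a start, the matching uppercase letter marks the end):

```c
#include <assert.h>
#include <ctype.h>

/* Maximum number of simultaneously running activities, i.e. the
 * minimum number of rooms needed. */
int rooms_needed(const char *events)
{
    int running = 0, max = 0;
    for (; *events; events++) {
        if (islower((unsigned char)*events)) {
            if (++running > max) max = running;   /* an activity starts */
        } else {
            running--;                            /* an activity ends */
        }
    }
    return max;
}
```

    For the sequence in the question, "abcAdCefBDgEFhGH", the peak is 4 (activities b, d, e and f overlap), matching answer B.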
    Selected Answer
    answer B
    6 votes -- ankitrokdeonsns ( 8.4k points)
    Selected Answer

    Array f is used to store the fib() values calculated in order to save repeated calls. Since n = 0 and n = 1 are special cases we can store fib(2) to f[0], fib(3) to f[1] and so on. The missing code completed would be:

    
    if (f[n - 2] != 0){
        return f[n-2];
    }
    t = fib(n-1) + fib(n-2);
    f[n-2] = t;
    return t;

    In this code, fib(i) will do a recursion only once as once fib(i) is calculated it is stored in array. So, the time complexity for fib(n) would be $\Theta(n)$. 

    PS: We can also store fib(n) in f(n), the above code just saves 2 elements' space in the array. 

    7 votes -- Arjun Suresh ( 156k points)
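    Putting the filled boxes into a complete, runnable C version (M is fixed at 50 here for concreteness; the question's convention fib(0) = fib(1) = 1 is kept):

```c
#include <assert.h>

#define M 50
int f[M + 1];            /* all elements implicitly initialized to 0 */

int fib(int n)
{
    if (n > M) return -1;            /* stands in for error()        */
    if (n == 0) return 1;
    if (n == 1) return 1;
    if (f[n - 2] != 0)               /* box (1): already computed?   */
        return f[n - 2];             /* box (2): reuse stored value  */
    int t = fib(n - 1) + fib(n - 2);
    f[n - 2] = t;                    /* box (3): memoize             */
    return t;
}
```

    Each fib(i) recurses at most once, so fib(n) runs in Θ(n).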
    Selected Answer
    Worst case complexity can never be lower than the average case complexity, but it can be higher. So, (C) is the answer.
    $$A(n) = O(W(n)) $$
    8 votes -- Arjun Suresh ( 156k points)
    Selected Answer

    answer = option B

    Whenever there exists an element which is present in the array more than $\frac{n}{2}$ times, it will definitely be present at the middle index position; in addition, it will also be present at at least one of the neighbouring indices, $i-1$ and $i+1$.
    No matter how we shift that run of more than $\frac{n}{2}$ equal elements around the sorted array, it is bound to cover the middle index and at least one of its neighbours.

    once we got the element which should have occurred more that $n/2$ times we count its total occurrences in $\mathcal{O}(\log n)$ time.
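    A sketch of this check, assuming a 0-indexed array (the helper names are mine); the candidate is read from the middle, and its occurrence count is found with two binary searches:

```c
/* First index with a[i] >= x in sorted a[0..n-1] */
static int lower_bound(const int *a, int n, int x) {
    int lo = 0, hi = n;
    while (lo < hi) { int mid = (lo + hi) / 2; if (a[mid] < x) lo = mid + 1; else hi = mid; }
    return lo;
}

/* First index with a[i] > x in sorted a[0..n-1] */
static int upper_bound(const int *a, int n, int x) {
    int lo = 0, hi = n;
    while (lo < hi) { int mid = (lo + hi) / 2; if (a[mid] <= x) lo = mid + 1; else hi = mid; }
    return lo;
}

/* Returns 1 iff some value occurs more than n/2 times in sorted a[0..n-1].
   The candidate must occupy the middle index; counting it takes O(log n). */
int has_majority(const int *a, int n) {
    int cand = a[n / 2];
    return upper_bound(a, n, cand) - lower_bound(a, n, cand) > n / 2;
}
```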

    13 votes -- Amar Vashishth ( 20.9k points)
    Selected Answer

    Recurrence relation for Towers of Hanoi is

    T(1) = 1

    T(n) = 2 T( n-1 ) +1

    So Answer should be (D)
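    Unrolling the recurrence gives $T(n) = 2^n - 1$. A quick check (the function name is hypothetical):

```c
/* Number of moves for Towers of Hanoi: T(1) = 1, T(n) = 2T(n-1) + 1.
   The recurrence solves to T(n) = 2^n - 1. */
unsigned long hanoi_moves(int n) {
    return n == 1 ? 1UL : 2UL * hanoi_moves(n - 1) + 1UL;
}
```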

    8 votes -- Narayan Kunal ( 399 points)
    Selected Answer

    Arrange the files in increasing order of records:

    D(5)  A(10)  C(15)  B(20)  E(25)

    Always merge the two smallest files:

    • D(5) + A(10) = 15
    • 15 + C(15) = 30
    • B(20) + E(25) = 45
    • 30 + 45 = 75

    No. of movements = 15 + 30 + 45 + 75 = 165
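    The merge pattern above (repeatedly merge the two smallest files) can be checked with a short sketch (function names are mine; re-sorting each round is fine for such a small input):

```c
#include <stdlib.h>

static int cmp_int(const void *p, const void *q) {
    return *(const int *)p - *(const int *)q;
}

/* Total record movements when repeatedly merging the two smallest files
   (optimal 2-way merge pattern). sizes[] is modified in place. */
long merge_cost(int *sizes, int n) {
    long total = 0;
    while (n > 1) {
        qsort(sizes, n, sizeof *sizes, cmp_int); /* smallest two at the front */
        int merged = sizes[0] + sizes[1];
        total += merged;   /* merging moves every record of the pair once */
        sizes[1] = merged; /* replace the pair with the merged file */
        sizes++; n--;      /* drop sizes[0] */
    }
    return total;
}
```

    For the sizes {5, 10, 15, 20, 25} this reproduces the total of 165 movements.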

    5 votes -- Pooja ( 26.4k points)

    I think answers are

    80. (B)

    $X[i][j] = X[i-1][j] \cup X[i-1][j-a_i]$

    We calculate the value of $X[i][j]$ by either including the value $a_i$ in the subset (term $X[i-1][j-a_i]$) or not including it (term $X[i-1][j]$).

    81. (C)

    // Returns true if there is a subset of set[] with sum equal to given sum
    bool isSubsetSum(int set[], int n, int sum)
    {
        // The value of subset[i][j] will be true if there is a subset of set[0..j-1]
        //  with sum equal to i
        bool subset[sum+1][n+1];
     
        // If sum is 0, then answer is true
        for (int i = 0; i <= n; i++)
          subset[0][i] = true;
     
        // If sum is not 0 and set is empty, then answer is false
        for (int i = 1; i <= sum; i++)
          subset[i][0] = false;
     
     // Fill the subset table in bottom-up manner
         for (int i = 1; i <= sum; i++)
         {
           for (int j = 1; j <= n; j++)
           {
             subset[i][j] = subset[i][j-1];
             if (i >= set[j-1])
               subset[i][j] = subset[i][j] || subset[i - set[j-1]][j-1];
           }
         }
     
        /* // uncomment this code to print table
         for (int i = 0; i <= sum; i++)
         {
           for (int j = 0; j <= n; j++)
              printf ("%4d", subset[i][j]);
           printf("\n");
         } */
     
         return subset[sum][n];
    }

     

    1 votes -- Sona Praneeth Akula ( 3.8k points)
    Selected Answer
    Since we have $\log n$ lists we can make a min-heap of $\log n$ elements by taking the first element from each of the $\log n$ sorted lists. Now, we start deleting the min-element from the heap and put the next element from the sorted list from which that element was added to the heap. (This identity can be done by making a structure of two values, one for the number and one for identifying the origin sorted list of that number and storing this structure in the heap). In this way each delete and the corresponding insert will take $O(\log\log n)$ time as delete in heap of size $n$ is $O(\log n)$ and inserting an element on a heap of size $n$ is also $O(\log n)$. (here, heap size is $\log n$). Now, we have a total of $\log n \times \frac{n}{\log n} = n$ elements. So, total time will be $O(n \log\log n)$.
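    The procedure described above can be sketched as follows (the struct and helper names are assumptions; a fixed heap capacity is used for simplicity):

```c
/* k-way merge of k sorted lists using a binary min-heap of size k.
   Each heap node remembers which list it came from, as described above. */
typedef struct { int val; int list; int idx; } Item;

static void sift_down(Item *h, int n, int i) {
    for (;;) {
        int s = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < n && h[l].val < h[s].val) s = l;
        if (r < n && h[r].val < h[s].val) s = r;
        if (s == i) return;
        Item t = h[i]; h[i] = h[s]; h[s] = t;
        i = s;
    }
}

/* lists[i] has len[i] elements; writes all elements in sorted order to out */
void kway_merge(const int **lists, const int *len, int k, int *out) {
    Item heap[64];           /* assumes k <= 64 for this sketch */
    int n = 0, o = 0;
    for (int i = 0; i < k; i++)
        if (len[i] > 0) { heap[n].val = lists[i][0]; heap[n].list = i; heap[n].idx = 0; n++; }
    for (int i = n / 2 - 1; i >= 0; i--) sift_down(heap, n, i);  /* build heap */
    while (n > 0) {
        Item m = heap[0];
        out[o++] = m.val;              /* delete-min: O(log k) */
        if (m.idx + 1 < len[m.list]) { /* insert successor from the same list */
            heap[0].val = lists[m.list][m.idx + 1];
            heap[0].list = m.list;
            heap[0].idx = m.idx + 1;
        } else {
            heap[0] = heap[--n];       /* that list is exhausted */
        }
        sift_down(heap, n, 0);
    }
}
```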
    13 votes -- gatecse ( 10.8k points)
    Selected Answer
    Since the array is always sorted, just check the first two elements. Option A.
    9 votes -- anshu ( 2.4k points)
    Selected Answer

    Answer is (A)

    The effect of the above 3 reversals for any $k$ is equivalent to a left rotation of the array of size $n$ by $k$.

    Let $S[1 \dots 7] = [1, 2, 3, 4, 5, 6, 7]$, so $n = 7$, $k = 2$.

    reverse(S,1,2) gives [2,1,3,4,5,6,7]

    reverse(S,3,7) gives [2,1,7,6,5,4,3]

    reverse(S,1,7) gives [3,4,5,6,7,1,2]

    Hence option (A), rotates s left by k positions, is correct
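    The three reversals can be verified with a short sketch (1-based indexing to match the answer; function names are mine):

```c
/* Reverses s[lo..hi] in place (1-based indices, as in the answer) */
static void reverse(int *s, int lo, int hi) {
    while (lo < hi) {
        int t = s[lo]; s[lo] = s[hi]; s[hi] = t;
        lo++; hi--;
    }
}

/* Left-rotates s[1..n] by k positions via the three reversals */
void rotate_left(int *s, int n, int k) {
    reverse(s, 1, k);      /* reverse the first k elements */
    reverse(s, k + 1, n);  /* reverse the remaining n-k elements */
    reverse(s, 1, n);      /* reverse the whole array */
}
```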

     

    11 votes -- Kalpna Bhargav ( 3.1k points)
    Union-Find Algorithm can be used to find the cycle.

    Ref: http://www.geeksforgeeks.org/union-find/
    3 votes -- Rajarshi Sarkar ( 30.1k points)
    Selected Answer

    For N=9, it returns 3.

    For N=10 it returns 4.

    For N=16 it returns 4.

    For N=17 it returns 5.

    So answer should be C.

    3 votes -- Pepper ( 2k points)

    Consider an array $A[1...n]$. It consists of a permutation of numbers $1....n$. Now compute another array $B[1...n]$ as follows: $B[A[i]]:= i$ for all $i$. Which of the following is true?

    1. $B$ will be a sorted array.
    2. $B$ is a permutation of array $A$.
    3. Doing the same transformation twice will not give the same array.
    4. $B$ is not a permutation of array $A$.
    5. None of the above.
    An array $A$ contains $n$ integers in non-decreasing order, $A[1] \leq A[2] \leq \cdots \leq A[n]$. Describe, using Pascal like pseudo code, a linear time algorithm to find $i, j$ such that $A[i]+A[j]=M$ for a given integer $M$, if such $i, j$ exist.

    An array A contains $n \geq 1$ positive integers in the locations $A[1], A[2], \dots A[n]$. The following program fragment prints the length of a shortest sequence of consecutive elements of $A$, $A[i], A[i+1], \dots,A[j]$ such that the sum of their values is $\geq M$, a given positive number. It prints ‘n+1’ if no such sequence exists. Complete the program by filling in the boxes. In each case use the simplest possible expression. Write only the line number and the contents of the box. 

    begin
    i:=1;j:=1;
    sum := ◻
    min:=n; finish:=false;
    while not finish do
        if ◻ then
            if j=n then finish:=true
            else
            begin
                j:=j+1;
                sum:= ◻
            end
        else
        begin
            if(j-i) < min then min:=j-i;
            sum:=sum –A[i];
            i:=i+1;
        end
        writeln (min +1);
    end.

     

    The following Pascal program segment finds the largest number in a two-dimensional integer array $A[0\dots n-1, 0\dots n-1]$ using a single loop. Fill up the boxes to complete the program and write against $\fbox{A}, \fbox{B}, \fbox{C} \text{ and } \fbox{D}$ in your answer book. Assume that max is a variable to store the largest value and $i, j$ are the indices to the array.

    begin
        max:=|A|, i:=0, j:=0;
        while |B| do
        begin
            if A[i, j]>max then max:=A[i, j];
            if |C| then j:=j+1;
            else begin
                j:=0;
                i:=|D|
            end
        end
    end

     

    Answers: Arrays

    Selected Answer

    Option b) $B$ is a permutation of array $A$.

    In fact, $B$ gives the reverse index of all the elements of array $A$. Since the array $A$ contains numbers $[1 .. n]$ mapped to the locations $[1 .. n]$ and $A$ is a permutation of the numbers $[1 .. n]$, the array $B$ will also be a permutation of the numbers $[1 .. n]$.

    For example: $$\begin{array}{l|cccccccc} \text{index} & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8\\[1em] \hline A & 5 & 1 & 3 & 7 & 6 & 2 & 8 & 4\\ B & 2 & 6 & 3 & 8 & 1 & 5 & 4 & 7 \end{array}$$


    To see that option c is incorrect, let array $C$ be the array attained from doing the same transformation twice, that is, $C[B[i]] = i , \forall i \in [1 .. n]$. We get, $$\begin{array}{l|cccccccc} \text{index} & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8\\[1em] \hline A & 5 & 1 & 3 & 7 & 6 & 2 & 8 & 4\\ B & 2 & 6 & 3 & 8 & 1 & 5 & 4 & 7\\ C & 5 & 1 & 3 & 7 & 6 & 2 & 8 & 4 \end{array}$$

    We can see that $C = A$, which makes option c incorrect.

    4 votes -- Pragy Agarwal ( 14.5k points)
    Selected Answer
    i = 1;
    j = n;
    while(i != j) {
       if(A[i] + A[j] == M) break;
       else if(A[i] + A[j] < M) i++;
       else j--;
    }

     

    10 votes -- ankitrokdeonsns ( 8.4k points)
    Selected Answer
    begin
    i:=1;j:=1;
    sum := A[1]
    min:=n; finish:=false;
    while not finish do
        if sum < M then
            if j=n then finish:=true
            else
            begin
                j:=j+1;
                sum:= sum + A[j]
            end
        else
        begin
            if(j-i) < min then min:=j-i;
            sum:=sum - A[i];
            i:=i+1;
        end
        writeln (min +1);
    end.

    Algorithm

    'i' indicates the starting marker and 'j' acts as ending marker for the sum sequence. 'sum' is initialised as the first element in the array because the algorithm proceeds by taking the sum of remaining elements. 'finish' is a boolean variable that indicates exit from the loop.

    After entering the loop for the first time with 'finish' as false, the sum is checked if it's strictly less than "M". If that's the case j is incremented and the sum is modified to sum + A[j]. When 'sum' becomes greater than or equal to 'M', 'min' is modified to the latest number of elements that make the sum greater than or equal to 'M' and then, the first element is stripped off from the sum and 'i' is incremented by one to move the initial marker of the sum sequence. The loop runs till 'j' reaches the end of the array.

    The algorithm keeps track of 'min' i.e. the number of elements in the minimum sum sequence. This is very similar to the way we find the minimum value in an array by modifying the min value whenever a lesser value is encountered.
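    The same sliding-window idea as a C sketch (0-indexed; names are mine). It returns the length of the shortest consecutive run with sum at least M, or n + 1 if none exists:

```c
/* Shortest run of consecutive elements of a[0..n-1] (all positive)
   whose sum is >= M; returns n + 1 if no such run exists. O(n) time. */
int shortest_run(const int *a, int n, int M) {
    int i = 0, best = n + 1;
    long sum = 0;
    for (int j = 0; j < n; j++) {
        sum += a[j];               /* extend the window to the right */
        while (sum >= M) {         /* shrink from the left while still valid */
            if (j - i + 1 < best) best = j - i + 1;
            sum -= a[i++];
        }
    }
    return best;
}
```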

    5 votes -- krish__ ( 339 points)

    We have to traverse all elements in array. The code is doing this row wise. 

    
    begin
        max:=A[0,0], i:=0, j:=0;
        while (i < n) do
        begin
            if A[i, j]>max then max:=A[i, j];
            if (j < n-1) then j:=j+1;
            else begin
                j:=0;
                i:=i+1;
            end
        end
    end

     

    3 votes -- Arjun Suresh ( 156k points)

    Let $f(n) = n^2 \log n$ and $g(n) = n(\log n)^{10}$ be two positive functions of $n$. Which of the following statements is correct?

    1. $f(n) = O(g(n)) \text{ and } g(n) \neq O(f(n))$
    2. $g(n) = O(f(n)) \text{ and } f(n) \neq O(g(n))$
    3. $f(n) \neq O(g(n)) \text{ and } g(n) \neq O(f(n))$
    4. $f(n) =O(g(n)) \text{ and } g(n) = O(f(n))$

    Let $f(n) = n$ and $g(n) = n^{(1 + \sin n)}$ where $n$ is a positive integer. Which of the following statements is/are correct?

    1. $f(n) = O(g(n))$
    2. $f(n) = \Omega(g(n))$

     

    1. Only I
    2. Only II
    3. Both I and II
    4. Neither I nor II

    Let $n$ be a large integer. Which of the following statements is TRUE?

    1. $n^{1 / \sqrt{\log_2 n}} < \sqrt{\log_2 n} < n^{1/100}$
       
    2. $n^{1/100} < n^{1 / \sqrt{\log_2 n}} < \sqrt{\log_2 n}$
       
    3. $n^{1 / \sqrt{\log_2 n}} < n^{1/100} < \sqrt{\log_2 n}$
       
    4. $\sqrt{\log_2 n} < n^{1 / \sqrt{\log_2 n}} < n^{1/100}$
       
    5. $\sqrt{\log_2 n} < n^{1/100} < n^{1 / \sqrt{\log_2 n}}$

    Consider the following two functions:

    $g_1(n) = \begin{cases} n^3 \text{ for } 0 \leq n \leq 10,000 \\ n^2 \text{ for } n \geq 10,000 \end{cases}$

    $g_2(n) = \begin{cases} n \text{ for } 0 \leq n \leq 100 \\ n^3 \text{ for } n > 100 \end{cases}$

    Which of the following is true?

    1. $g_1(n) \text{ is } O(g_2(n))$

    2. $g_1(n) \text{ is } O(n^3)$

    3. $g_2(n) \text{ is } O(g_1(n))$

    4. $g_2(n) \text{ is } O(n)$

     

    Consider the equality $\sum_{i=0}^n i^3 = X$ and the following choices for $X$

    1. $\Theta(n^4)$
    2. $\Theta(n^5)$
    3. $O(n^5)$
    4. $\Omega(n^3)$

    The equality above remains correct if $X$ is replaced by

     

    1. Only I
    2. Only II
    3. I or III or IV but not II
    4. II or III or IV but not I

    Which of the following is false?

    1. $100n \log n=O(\frac{n\log n}{100})$

    2. $\sqrt{\log n} = O(\log\log n)$

    3. If $0 < x < y \text{ then } n^x = O\left(n^y\right)$

    4. $2^n \neq O\left(n^k\right)$

     

    Arrange the following functions in increasing asymptotic order:

    1. $n^{1/3}$
    2. $e^n$
    3. $n^{7/4}$                                                                              
    4. $n \log^9n$
    5. $1.0000001^n$

     

    A) A, D, C, E, B
    B) D, A, C, E, B
    C) A, C, D, E, B
    D) A, C, D, B, E

    Suppose $T(n) =2T (\frac{n}{2}) + n, T(0) = T(1) =1$

    Which one of the following is FALSE?

    1. $T(n)=O(n^2)$

    2. $T(n)=\Theta(n \log n)$

    3. $T(n)=\Omega(n^2)$

    4. $T(n)=O(n \log n)$

    Let f(n), g(n) and h(n) be functions defined for positive integers such that 
    f(n) = O(g(n)), g(n) ≠ O(f(n)), g(n) = O(h(n)), and h(n) = O(g(n)).

    Which one of the following statements is FALSE?

    1. f(n) + g(n) = O(h(n) + h(n))
    2. f(n) = O(h(n))
    3. h(n) ≠ O(f(n))
    4. f(n)h(n) ≠ O(g(n)h(n))

    Consider the following functions

    • $f(n) = 3n^{\sqrt{n}}$
    • $g(n) = 2^{\sqrt{n}{\log_{2}n}}$
    • $h(n) = n!$

    Which of the following is true?

    1. $h(n)$ is $O(f(n))$
    2. $h(n)$ is $O(g(n))$
    3. $g(n)$ is not $O(f(n))$
    4. $f(n)$ is $O(g(n))$

     

     

    If $T_1 = O(1)$, give the correct matching for the following pairs:
     

    (M) $T_n = T_{n-1} + n$ (U) $T_n = O(n)$
    (N) $T_n = T_{n/2} + n$ (V) $T_n = O(n \log n)$
    (O) $T_n = T_{n/2} + n \log n$ (W) $T_n = O(n^2)$
    (P) $T_n = T_{n-1} + \log n$ (X) $T_n = O(\log^2 n)$

     

    (A) M-W N-V O-U P-X

    (B) M-W N-U O-X P-V

    (C) M-V N-W O-X P-U

    (D) M-W N-U O-V P-X

    Consider the following functions:

    $f(n) = 2^n$

    $g(n) = n!$

    $h(n) = n^{\log n}$

    Which of the following statements about the asymptotic behavior of $f(n), g(n)$ and $h(n)$ is true?

    1. $f\left(n\right)=O\left(g\left(n\right)\right); g\left(n\right)=O\left(h\left(n\right)\right)$
    2. $f\left(n\right) = \Omega\left(g\left(n\right)\right); g(n) = O\left(h\left(n\right)\right)$

    3. $g\left(n\right) = O\left(f\left(n\right)\right); h\left(n\right)=O\left(f\left(n\right)\right)$

    4. $h\left(n\right)=O\left(f\left(n\right)\right); g\left(n\right) = \Omega\left(f\left(n\right)\right)$

    The tightest lower bound on the number of comparisons, in the worst case, for comparison-based sorting is of the order of

    1. $n$
    2. $n^2$
    3. $n \log n$
    4. $n \log^2n$

    Which of these functions grows fastest with $n$?

    1. $e^{n}/n$.
    2. $e^{n-0.9 \log n}$.
    3. $2^{n}$.
    4. $\left(\log n\right)^{n-1}$.
    5. None of the above.

    Which of the given options provides the increasing order of asymptotic complexity of functions $f_1, f_2, f_3$ and $f_4$?

    • $f_1(n) = 2^n$
    • $f_2(n) = n^{3/2}$
    • $f_3(n) = n \log_2 n$
    • $f_4(n) = n^{\log_2 n}$

    (A) $f_3,  f_2, f_4, f_1$ 

    (B) $f_3, f_2, f_1, f_4$

    (C) $f_2, f_3, f_1, f_4$

    (D) $f_2, f_3, f_4, f_1$

    Consider the following three claims

    1. $(n+k)^m = \Theta(n^m)$ where k and m are constants
    2. $2^{n+1} = O(2^n)$
    3. $2^{2n+1} = O(2^n)$

    Which of the following claims are correct

    1. I and II
    2. I and III
    3. II and III
    4. I, II, and III

    Answers: Asymptotic Notations

    Selected Answer
    $$\begin{array}{l|l|l} & f(n) = n^2 \log n & g(n) = n(\log n)^{10}\\ \hline n = 2^{10} & 10 \times 2^{10} \times 2^{10} & 2^{10} \times 10^{10}\\ n = 2^{256} & 256 \times 2^{256} \times 2^{256} & 2^{256} \times 256^{10} \end{array}$$

    So, as n is going larger, f(n) is overtaking g(n) and the growth rate of f is faster than that of g. So, g(n) = O(f(n)) and f(n) ≠ O(g(n)).

    B choice. 

    12 votes -- Arjun Suresh ( 156k points)
    Selected Answer

    The answer is option D.

    Since the value of $\sin n$ always ranges from $-1$ to $+1$, $g(n)$ can take values ranging from $n^0 = 1$ to $n^2$.

    Hence, if $g(n) = 1$, Statement I is incorrect.

    And, if $g(n) = n^2$, then Statement II is incorrect.

    14 votes -- saurabhrk ( 1.4k points)

    Let $n = 2^x$. Then, $\log_2 n = x$

    $$\begin{array}{rlllcll} f(n) &=& n^{1/\sqrt{\log_2 n}} &=& \left (2^x \right )^{1/\sqrt{x}} = 2^{x/\sqrt{x}} &=& 2^{\sqrt{x}}\\[1em] g(n) &=& \sqrt{\log_2 n} &=& \sqrt{\log_2 {\left ( 2^x \right )}}&=& \sqrt{x}\\[1em] h(n) &=& n^{1/100} &=& \left ( 2^x \right )^{1/100} &=& 2^{x/100} \end{array}$$


    Since exponentials grow faster than polynomials, $h(n) > g(n)$ for large $n$.


    Since linear functions grow faster than square roots, $\frac{x}{100} > \sqrt{x}$ for large $x$. Thus, $h(n) > f(n)$ for large $n$.


    Since exponentials grow faster than polynomials, $2^{\sqrt{x}} > \sqrt{x}$ for large $\sqrt{x}$. Thus, $f(n) > g(n)$ for large $n$.


    Hence, the relation is,

    $g(n) < f(n) < h(n)$

    Thus, option D is correct.

    3 votes -- Pragy Agarwal ( 14.5k points)
    Selected Answer
    For asymptotic complexity, we assume sufficiently large $n$. So, $g_1(n) = n^2$ and $g_2(n) = n^3$. Growth rate of $g_1$ is less than that of $g_2.$ i.e., $g_1(n) = O(g_2(n)).$

    Options A and B are true here.
    8 votes -- Arjun Suresh ( 156k points)
    Selected Answer

    Sum of the cubes of the first $n$ natural numbers is given by $\left(\frac{n(n+1)}{2}\right)^2$, which is $\Theta(n^4)$. So, I, III and IV are correct. II is wrong. C choice.
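    The closed form is easy to check numerically (a quick sketch):

```c
/* Sum of the first n cubes; equals (n(n+1)/2)^2, which is Theta(n^4) */
long sum_cubes(int n) {
    long s = 0;
    for (long i = 1; i <= n; i++) s += i * i * i;
    return s;
}

long closed_form(int n) {
    long t = (long)n * (n + 1) / 2; /* sum of the first n naturals */
    return t * t;
}
```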

    17 votes -- Arjun Suresh ( 156k points)
    Selected Answer
    • $100n \log n=O(\frac{n\log n}{100})$ : the statement is not printed properly in the paper, but read this way it is true, since $\frac{1}{100}$ is just a constant factor and both sides are $\Theta(n \log n)$.
    • $\sqrt{\log n} = O(\log\log n)$ : false. Take $n = 2^{256}$: LHS is $\sqrt{256} = 16$ but RHS is only $\log_2 256 = 8$, and the gap keeps widening for larger $n$.
    • If $0 < x < y \text{ then } n^x = O\left(n^y\right)$ : true, since $y > x$ means $n^y$ grows at least as fast as $n^x$.
    • $2^n \neq O\left(n^k\right)$ : true, since $k$ is a constant; for large $n$ an exponential function exceeds every polynomial.

    Only B is false

     

    3 votes -- Anirudh Pratap Singh ( 27k points)
    Selected Answer
    A < C and A < D

    E < B

    and

    C, D < E as E is exponential function.

    Now, we just need to see if C or D is larger.

    In C we have a term $n^{3/4}$ and correspondingly in D we have $\log^9n$ (after taking $n$ out).

    $n^{3/4}$ is asymptotically larger than $\log^9n$ as when $n = 10^{100}, \log^9 n$ gives $100^9$, while $n^{3/4}$ gives $10^{75} > 100^{37}$ a much higher value and this is true for all higher values of $n$. So, D < C.

    Thus A is correct.
    12 votes -- Arjun Suresh ( 156k points)
    Selected Answer

    Applying Master theorem, $T(n) = \Theta(n \log n)$. So, it can't be $\Omega(n^2)$.

    Hence answer is C.

    11 votes -- shreya ghosh ( 2.9k points)

    Answer is (D)

    We can verify as: $f = O(g)$ but $g \neq O(f)$, therefore $f$ grows strictly slower than $g$.

    Also, $g$ and $h$ have the same order of growth, as $g=O(h)$ and $h=O(g)$. So $f(n)h(n) = O(g(n)h(n))$, which makes statement 4 false.

    5 votes -- Sandeep_Uniyal ( 5.5k points)
    Selected Answer
    $$\begin{array}{l|l|l} & n = 256 & n=65536\\ \hline f(n) = 3n^{\sqrt{n}} & 3 \times 256^{16} = 3 \times 2^{128} & 3 \times 65536^{256} = 3 \times 2^{16 \times 256} = 3 \times 2^{4096}\\ g(n) = 2^{\sqrt{n}{\log_{2}n}} & 2^{16 \times 8} = 2^{128} & 2^{256 \times 16} = 2^{4096}\\ h(n) = n! & 256! = O\left(\left(2^8\right)^{256}\right) = O\left(2^{2048}\right) & 65536! = O\left(\left(2^{16}\right)^{65536}\right) = O\left(2^{2^{20}}\right) \end{array}$$

     

    The case of h(n) is given only as an upper bound here, but factorial has a higher growth rate than exponential. 

    http://math.stackexchange.com/questions/351815/do-factorials-really-grow-faster-than-exponential-functions

    f(n) and g(n) have the same order of growth, as f(n) is simply 3g(n) (note $n^{\sqrt{n}} = 2^{\sqrt{n}\log_2 n}$; we can also prove this by taking logs). So, (D) is correct and all other choices are false. 

    11 votes -- Arjun Suresh ( 156k points)
    Selected Answer
    (M) $T(n)$ = Sum of first n natural numbers = $\frac{n(n+1)}{2} = O(n^2)$

    (N) $T(n) = \Theta(n)  =O(n)$, third case of Master theorem

    ($f(n) = n = \Omega \left(n^{\log_b a+ \epsilon}\right) = \Omega \left(n^{\log_2 1 + \epsilon}\right) = \Omega\left(n^{0 + \epsilon}\right) $, satisfied for any positive $\epsilon \leq 1$. Also, $af\left(\frac{n}{b}\right) < cf(n) \implies f\left(\frac{n}{2}\right) < cf(n) \implies \frac{n}{2} < cn$, satisfied for any $c$ between $0$ and $0.5$)

    (O) $T(n) = \Theta(n \log n) = O (n \log n)$, third case of Master theorem

    ($f(n) = n \log n =\Omega\left(n^{\log_b a+ \epsilon}\right) = \Omega \left(n^{\log_2 1 + \epsilon}\right) = \Omega\left(n^{0 + \epsilon}\right) $, satisfied for positive $\epsilon = 0.5$. Also, $af\left(\frac{n}{b}\right) < cf(n) \implies f\left(\frac{n}{2}\right) < cf(n) \implies \frac{n}{2} \log \frac{n}{2} < cn \log n$, satisfied for $c = 0.5$)

    (P) Like in (M), here we are adding the log of the first n natural numbers. So,

    $T_n = \log 1 + \log 2 + \log 3 + \dots + \log n$

    $=\log (1\times 2\times \dots \times n)$

    $=\log(n!)$

    $=\Theta(n \log n)$  (Stirling's Approximation)
    10 votes -- Arjun Suresh ( 156k points)
    Selected Answer

    $g(n) = n!$ on expanding factorial we get $g(n) = \mathcal{O}(n^n)$ :
    $$\begin{align*} n^n &> n^{\log n} \\ n^n &> 2^n \end{align*}$$
    This condition is violated by the first statement of each of options A, B and C, hence they cannot be true.

    second statement of option D says that $g(n)$ is asymptotically biggest of all.

    answer = option D

    4 votes -- Amar Vashishth ( 20.9k points)
    Selected Answer
    For comparison-based sorting the asymptotically tight bound for worst case is given by $\Theta (n \log n)$, which means it is the tightest upper bound (big O) as well as the tightest lower bound (big omega). So, answer is $n \log n$.

    Tightest lower bound of sorting (say S$(n)$) is $n \log n$ means there is no function $f$ which has an order of growth larger than $n \log n$ and $f(n) = \Omega (S(n))$ holds.

    A usual mistake is to think worst case changes with lower and upper bounds, but that is not the case. Worst case is defined for the algorithm and it is always the input which causes the algorithm the maximum complexity.
    10 votes -- Arjun Suresh ( 156k points)
    Selected Answer

    Assuming that the base of the $\log$ in the question is $e$.

    Let us try to rewrite each of these functions in the form $e^{\,\small\rm something}$, to make the comparison easier.

    $$\def\d{\displaystyle\,}
    \begin{array}{l|ll}
    a. & e^{n} / n & = \dfrac{e^{\d n}}{e^{\d(\ln n)}} & = e^{\d (n - \ln n)}\\[1em]
    b. & e^{n - 0.9 \ln n} &&= e^{\d (n - 0.9 \ln n)}\\[1em]
    c. & 2^n &= \left (e^{\d\ln 2} \right )^{\d n} &= e^{\d (n \ln 2)}\\[1em]
    d. & (\ln n)^{n-1} &= \left (e^{\d\ln \ln n} \right )^{\d n-1} &= e^{\d (n \ln \ln n - \ln\ln n)}
    \end{array}$$

    Now, if we just compare the exponents of all, we can clearly see that $(n \ln \ln n - \ln \ln n)$ grows faster than the rest. Note that in option c. the multiplicative $\ln 2$ is a constant, and hence grows slower than the multiplicative $\ln \ln n$ from option d.

    This implies that $e^{\d (n \ln \ln n - \ln\ln n)}$ grows the fastest, and hence, $(\ln n)^{n-1}$ grows the fastest.

    Thus, option d. is the correct answer.

    7 votes -- Pragy Agarwal ( 14.5k points)
    Selected Answer

    answer A

    $n\log_2 n < n^{3/2}$ is quite straightforward

    also $n^{3/2} < n^{\log_2 n}$ and $n^{3/2} < 2^n$

    now only $n^{\log_2 n}$ and $2^n$ need to be compared

    taking log of both: $(\log_2 n)^2$ and $n$

    $n > (\log_2 n)^2$ for large $n$

    hence $2^n > n^{\log_2 n}$

    6 votes -- ankitrokdeonsns ( 8.4k points)
    Selected Answer
    1) Clearly rate of growth of $(n+k)^m = n^m$ as $k$ and $m$ are constants

    so TRUE

    2) $2^{n+1} = 2 \cdot 2^n = \Theta\left(2^n\right)$ as 2 is a constant here

    As $2^{n+1}$ is both upper and lower bounded by $2^n$ we can say $2^{n+1} = O\left(2^n\right)$

    so TRUE

    3) $2^{2n+1}$ has same rate of growth as $2^{2n}$

    $2^{2n} = \left(2^n\right)^2$

    $2^n$ is upper bounded by $(2^n)^2$, not the other way round

    so FALSE
    9 votes -- Danish ( 2.4k points)

    In a binary max heap containing $n$ numbers, the smallest element can be found in time 

    1.  $O(n)$
    2.  $O(\log n)$
    3.  $O(\log \log n)$
    4.  $O(1)$

     

    Answers: Binary Heap

    Selected Answer
    (A) $O(n)$

    In a max heap, the smallest element is always present at a leaf node. Heap being a complete binary tree, there can be up to $\frac{n}{2}$ leaf nodes and to examine all of them we would need $O(n)$ time.
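    A sketch of the leaf scan, assuming the usual 0-indexed array representation of the heap (the function name is mine):

```c
/* The minimum of a max-heap a[0..n-1] must be a leaf; in the array
   representation the leaves start at index n/2, so scanning them is O(n). */
int heap_min(const int *a, int n) {
    int min = a[n / 2];
    for (int i = n / 2 + 1; i < n; i++)
        if (a[i] < min) min = a[i];
    return min;
}
```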
    6 votes -- Keith Kr ( 6k points)

    Suppose you are given an array $A$ with $2n$ numbers.

    The numbers in odd positions are sorted in ascending order, that is, $A[1] \leq A[3] \leq \ldots \leq A[2n - 1]$.

    The numbers in even positions are sorted in descending order, that is, $A[2] \geq A[4] \geq \ldots \geq A[2n]$.

    What is the method you would recommend for determining if a given number is in the array?

    1. Sort the array using quick-sort and then use binary search.
    2. Merge the sorted lists and perform binary search.
    3. Perform a single binary search on the entire array.
    4. Perform separate binary searches on the odd positions and the even positions.
    5. Search sequentially from the end of the array.

     

    Consider the following C program that attempts to locate an element x in an array Y[ ] using binary search. The program is erroneous. 

     f (int Y[10] , int x) {
        int u, j, k;
        i= 0; j = 9;
        do {
            k = (i+ j) / 2;
            if( Y[k] < x) i = k;else j = k;
            } while (Y[k] != x) && (i < j)) ;
        if(Y[k] == x) printf(" x is in the array ") ;
        else printf(" x is not in the array ") ;
     }

    The correction needed in the program to make it work properly is 

    1. Change line 6 to: if (Y[k] < x) i = k + 1; else j = k-1; 
    2. Change line 6 to: if (Y[k] < x) i = k - 1; else j = k +1; 
    3. Change line 6 to: if (Y[k] < x) i = k; else j = k;
    4. Change line 7 to: } while ((Y[k] == x) && (i < j)) ;

    Consider the following program that attempts to locate an element $x$ in an array $a[ ]$ using binary search. Assume $N > 1$. The program is erroneous. Under what conditions does the program fail?

    var i,j,k: integer; x: integer;
    a: array [1..N] of integer;
    begin	i:= 1; j:= n;
    repeat	
    k := (i+j) div 2;
        if a[k] < x then i:= k
        else j:= k
    until (a[k] = x) or (i >= j);
        
    if (a[k] = x) then
        writeln ('x is in the array')
    else
        writeln ('x is not in the array')
    end;

     

     

    Consider the following three version of the binary search program. Assume that the elements of type $T$ can be compared with each other; also assume that the array is sorted.

    i, j, k : integer;  
    a : array [1....N] of T;   
    x : T;                       
    
    Program 1 :   i := 1; j := N;   
                 repeat   
                       k := (i + j) div 2; 
                       if a[k] < x then i := k else j := k  
                 until (a[k] = x) or (i > j)                   
    Program 2 :   i := 1; j := N;  
                  repeat  
                       k := (i + j) div 2;   
                      if x < a[k] then j := k - 1; 
                      if a[k] < x then i := k + 1;
                  until i > j                              
    Program 3 :=  i := 1; j := N  
                  repeat
                       k := (i + j) div 2; 
                      if x < a[k] then j := k else i := k + 1 
                  until i > j

    A binary search program is called correct provided it terminates with $a[k] = x$ whenever such an element exists, or it terminates with $a\left [ k \right ]\neq  x$ if there exists no array element with value $x$. Which of the following statements is correct?

    1. Only Program 1 is correct
    2. Only Program 2 is correct
    3. Only Program 1 and 2 are correct.
    4. Both Program 2 and 3 are correct
    5. All the three programs are wrong

    Consider the following C program that attempts to locate an element x in an array Y[ ] using binary search. The program is erroneous. 

     f (int Y[10] , int x) {
        int u, j, k;
        i= 0; j = 9;
        do {
            k = (i+ j) / 2;
            if( Y[k] < x) i = k;else j = k;
            } while (Y[k] != x) && (i < j)) ;
        if(Y[k] == x) printf(" x is in the array ") ;
        else printf(" x is not in the array ") ;
     }

    On which of the following contents of Y and x does the program fail? 

    1.  Y is [1 2 3 4 5 6 7 8 9 10] and x < 10   
    2. Y is [1 3 5 7 9 11 13 15 17 19] and x < 1  
    3. Y is [2 2 2 2 2 2 2 2 2 2] and x > 2 
    4. Y is [2 4 6 8 10 12 14 16 18 20] and 2 < x < 20 and x is even

    Answers: Binary Search

    Selected Answer

    Option D is the correct answer.

    We can simply use clever indexing to binary search the element in the odd positions, and in the even positions separately.

    This will take $O(\log n)$ time and $O(1)$ space in the worst case.

     

    A: Sorting using Quicksort will take $O(n^2)$ time.
    B: Merging will take $O(n)$ time and $O(n)$ space.
    C: Binary search only works on a sorted array.
    E: Sequential search will take $O(n)$ time.

    5 votes -- Pragy Agarwal ( 14.5k points)
    Selected Answer

    Answer should be (A).

    if (Y[k] < x) then i = k + 1;

    If the element we are searching for is greater than Y[k], the search continues in the upper half of the array;

    otherwise j = k - 1;

    and the search continues in the lower half.

    Take a few cases into consideration, e.g.:

    1. all elements are the same

    2. increasing order with no repetition

    3. increasing order with repetition
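    With the option-A correction applied, the search terminates in every case. A sketch of the fixed search (iterative form; names are mine):

```c
/* Binary search with the option-A fix: i = k + 1 / j = k - 1.
   Returns an index of x in sorted y[0..n-1], or -1 if x is absent. */
int bsearch_fixed(const int *y, int n, int x) {
    int i = 0, j = n - 1;
    while (i <= j) {
        int k = (i + j) / 2;
        if (y[k] == x) return k;
        if (y[k] < x) i = k + 1; /* continue in the upper half */
        else j = k - 1;          /* continue in the lower half */
    }
    return -1; /* the interval became empty, so the loop terminates */
}
```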

    1 votes -- Manoj Kumar ( 23.4k points)
    The code is wrong here:

    k = (i+j) / 2;
    if (a[k] < x) then i = k;
    else j = k;

    The correct code should be:

    k = (i+j) / 2;
    if (a[k] < x) then i = k + 1;
    else j = k - 1;

    Try an example with the code given in the question. Let the array be a[1..10] = [1,2,3,4,5,6,7,8,9,10] and x = 10. Now run the code:

    initially i = 1, j = 10
    first time: k = (i+j)/2 = 11/2 = 5 (integer division) = i
    second time: k = (5+10)/2 = 15/2 = 7 = i
    third time: k = (7+10)/2 = 17/2 = 8 = i
    fourth time: k = (8+10)/2 = 18/2 = 9 = i
    fifth time: k = (9+10)/2 = 19/2 = 9 = i
    sixth time: k = (9+10)/2 = 19/2 = 9 = i
    ...

    The loop never terminates. For the loop to terminate, it should be i = k + 1 instead of i = k, and j = k - 1 instead of j = k.
    6 votes -- Mithlesh Upadhyay ( 3.8k points)
    The first program won't work if the array has equal elements; it may go into an infinite loop. To make it work properly we have to make the changes j = k - 1 and i = k + 1.

    For the second program, the a[k] = x termination condition is missing, so it is wrong.

    The third program is also wrong, as j is not set to k - 1 and the a[k] = x condition is missing.

    So answer is (E).
    1 votes -- Pooja ( 26.4k points)
    Selected Answer

    for Q.84 

    when it is option C the control will continue to iterate as $i=8$ and $j=9$;
    again and again $i$ will be assigned $k$ which itself equals $8$ as $\frac{8+9}{2}$ being stored in an integer type variable, will evaluate to $8$.


    For option A, with $x=9$, $k$ will take the following values:

    • 4
    • 6
    • 7
    • 8 - y[8] = 9, x found

    For option D, with $x=10$, $k$ will take the following values:

    • 4, y[4] = 10, x found

     

    9 votes -- Amar Vashishth ( 20.9k points)
    The preorder traversal sequence of a binary search tree is 30, 20, 10, 15, 25, 23, 39, 35, 42. Which one of the following is the postorder traversal sequence of the same tree?

    (A) 10, 20, 15, 23, 25, 35, 42, 39, 30

    (B) 15, 10, 25, 23, 20, 42, 35, 39, 30

    (C) 15, 20, 10, 23, 25, 42, 35, 39, 30

    (D) 15, 10, 23, 25, 20, 35, 42, 39, 30

    Consider the following C program segment where CellNode represents a node in a binary tree:

    struct CellNode {
        struct CellNode *leftChild;
        int element;
        struct CellNode *rightChild;
    };
    
    int Getvalue (struct CellNode *ptr) {
        int value = 0;
        if (ptr != NULL) {
            if ((ptr->leftChild == NULL) &&
                (ptr->rightChild == NULL))
                value = 1;
            else
                value = value + GetValue(ptr->leftChild)
                          + GetValue(ptr->rightChild);
        }
        return(value);
    }
    

    The value returned by GetValue when a pointer to the root of a binary tree is passed as its argument is:

    1. the number of nodes in the tree

    2. the number of internal nodes in the tree

    3. the number of leaf nodes in the tree

    4. the height of the tree

    Let LASTPOST, LASTIN and LASTPRE denote the last vertex visited in a postorder, inorder and preorder traversal respectively, of a complete binary tree. Which of the following is always true?

    1. LASTIN = LASTPOST
    2. LASTIN = LASTPRE
    3. LASTPRE = LASTPOST
    4. None of the above

     

    The inorder and preorder traversal of a binary tree are

    $\text{d b e a f c g}$ and $\text{a b d e c f g}$, respectively

    The postorder traversal of the binary tree is:

     

    1. $\text{d e b f g c a}$

    2. $\text{e d b g f c a}$

    3. $\text{e d b f g c a}$

    4. $\text{d e f g b c a}$

     

    Suppose we have a balanced binary search tree $T$ holding $n$ numbers. We are given two numbers $L$ and $H$ and wish to sum up all the numbers in $T$ that lie between $L$ and $H$. Suppose there are $m$ such numbers in $T$. If the tightest upper bound on the time to compute the sum is $O(n^a\log^bn+m^c\log^dn)$, the value of $a+10b+100c+1000d$ is ______.

    Consider the following rooted tree with the vertex labeled P as the root:

    The order in which the nodes are visited during an in-order traversal of the tree is

    (A) SQPTRWUV

    (B) SQPTUWRV

    (C) SQPTWUVR

    (D) SQPTRUWV

     

    The maximum number of binary trees that can be formed with three unlabeled nodes is:

    1. 1
    2. 5
    3. 4
    4. 3

    A program takes as input a balanced binary search tree with $n$ leaf nodes and computes the value of a function $g(x)$ for each node $x$. If the cost of computing $g(x)$ is: $$\Large \min \left ( \substack{\text{number of leaf-nodes}\\\text{in left-subtree of $x$}}\;,\; \substack{\text{number of leaf-nodes}\\\text{in right-subtree of $x$}}\right )$$

    Then the worst-case time complexity of the program is?

    1. $\Theta (n)$

    2. $\Theta (n \log n)$

    3. $\Theta(n^2)$

    4. $\Theta (n^2\log n)$

    You are given the postorder traversal, $P$,  of a binary search tree on the $n$ elements $1, 2, \dots, n$. You have to determine the unique binary search tree that has $P$ as its postorder traversal. What is the time complexity of the most efficient algorithm for doing this?

    1. $\Theta(\log n)$

    2. $\Theta(n)$

    3. $\Theta(n\log n)$

    4. None of the above, as the tree cannot be uniquely determined

    A binary search tree contains the values 1, 2, 3, 4, 5, 6, 7, 8. The tree is traversed in pre-order and the values are printed out. Which of the following sequences is a valid output?

    1. 5 3 1 2 4 7 8 6
    2. 5 3 1 2 6 4 8 7
    3. 5 3 2 4 1 6 7 8
    4. 5 3 1 2 4 7 6 8

     

    The height of a tree is defined as the number of edges on the longest path in the tree. The function shown in the pseudo-code below is invoked as height(root) to compute the height of a binary tree rooted at the tree pointer root.


    (A)

    B1: $(1+\text{height}(n \to \text{ right}))$

    B2: $(1+\max(h1, h2))$

    (B)

    B1: $(\text{height}(n \to \text{ right}))$ 

    B2: $(1+\max(h1,h2))$

    (C)

    B1: $\text{height}(n \to \text{ right})$ 

    B2: $\max(h1, h2) $

    (D)

    B1: $(1+ \text{height}(n \to \text{ right}))$

    B2: $\max(h1, h2)$

    Answers: Binary Tree

    Selected Answer

    Since it is a binary search tree, its inorder traversal produces a sorted sequence i.e. 10, 15, 20, 23, 25, 30, 35, 39, 42.

    Now given inorder and preorder traversals, we get following tree :

    From this, we can give the postorder traversal sequence as 15, 10, 23, 25, 20, 35, 42, 39, 30, i.e. option (D).

    9 votes -- Happy Mittal ( 9.6k points)
    Selected Answer

    Answer: C

    The function returns 1 if and only if a node has both left and right children NULL, i.e. the node is a leaf. These 1s are summed up through the recursion, so the returned value is the number of leaf nodes in the tree.

    5 votes -- Rajarshi Sarkar ( 30.1k points)
    Selected Answer
    The answer is D.

    Take any random sequence and check for the inorder, postorder and preorder Last Node.
    8 votes -- Gate Keeda ( 17.7k points)
    Selected Answer

    The answer is A.

    Take the first node in preorder traversal - a will be the root of the tree
    All nodes to the left of 'a' in inorder traversal will be in the left subtree of 'a' and all elements on the right will be in the right subtree of 'a'. 
    Take the second element from preorder traversal - 'b' - goes to left subtree of 'a' as it is in the left of 'a' in inorder list. 
    Proceeding likewise we can construct the binary tree as:

    3 votes -- Gate Keeda ( 17.7k points)
    Selected Answer

    In worst case for finding $L$ and $H$ it will take $O(\log n)$ time as the given tree is balanced binary search tree.
    Now there are $m$ elements between $L$ and $H$. So to traverse those $m$ elements it will take $O(m)$ time (traversal algorithm given below). So, in total:

    $O(m+\log n)  \implies a=0, b=1,c=1,d=0$
    $\therefore 0+(10 \times 1)+(100 \times 1)+(1000 \times 0)=110$. 


    To find all the numbers from $L$ to $H$ we can do an inorder traversal from root and discard all elements before $L$ and after $H$. But this has $O(n)$ time complexity. So, we can do a modification to inorder traversal and combine with binary search as follows:

    1. Find $L$ using binary search and keep all nodes encountered in the search in a stack. 
    2. After finding $L$ add it to stack as well and initialize sum = 0.
    3. Now, for all nodes in stack, do an inorder traversal starting from their right node and adding the node value to sum. If $H$ is found, stop the algorithm. 
    18 votes -- Kalpish Singhal ( 1.7k points)
    Selected Answer
    A.

    The inorder traversal of a ternary tree goes left subtree --> root --> middle subtree --> right subtree.
    12 votes -- Gate Keeda ( 17.7k points)
    Selected Answer

    The count is given by the Catalan number $\frac{1}{n+1}\binom{2n}{n}$, where $n$ is the number of nodes. For the given question, with $n=3$, the answer is 5. Note also that the number of binary search trees with $n$ nodes equals the number of unlabeled binary trees with $n$ nodes.

    http://gatecse.in/wiki/Number_of_Binary_trees_possible_with_n_nodes

     

    7 votes -- Gate Keeda ( 17.7k points)
    Selected Answer

    B. At the root node (first level) the cost would be $\Theta\left(\frac{n}{2}\right)$ as the tree is balanced.

    At next level, we have 2 nodes and for each of them cost of computing $g(x)$ will be $\Theta\left(\frac{n}{4}\right)$. So, total cost at second level $= \Theta\left(\frac{n}{2}\right).$ Similarly at each level (total cost per level and not the cost per node in a level) the cost would be $\Theta\left(\frac{n}{2}\right)$ and so for $\log n$ levels it would be $\Theta({n} \log n).$

    PS: Even if we change $\min$ to $\max$ in the definition of $g(x)$ we get the same answer.

    14 votes -- Shaun Patel ( 5.9k points)
    Selected Answer

    Last element in post order is the root of tree- find this element in inorder- $\log n$ time. 
    Now as in quick sort consider this as pivot and  split the post order array into 2- possible because all elements smaller than pivot goes to left and all elements larger than pivot goes to right and suppose we have x elements smaller than pivot, these elements will be same in both inorder as well as postorder (order may change). We already got the root, now left child is the left split and right child is the right split. 

    So, doing this recursively gives time complexity of this approach as 
    $$T(n) = T(k) + T(n-k-1) + \log n$$

    Solving gives $T(n) = O(n \log n)$ in the worst case, obtained by putting $k=0$ (the derivation is shown at the bottom).

    But searching for an element in the inorder traversal of given BST can be done in $O(1)$ because we have $n$ elements from $1..n$ so there is no need to search for an element- if last element in post order is say $5$ we take it as root and since 4 elements (1..4) are smaller we split the post order array in to two- (first 4 elements), (6th element onward) and solve recursively. Time complexity for this would be 

    $$T(n) = T(k) + T(n-k-1) + O(1)$$

    which gives $T(n) = O(n).$

    Since we know that all elements must be traversed at least  once, $T(n) = \Omega(n)$ also and so $$T(n) = \Theta(n).$$ The following code is doing this. 

    //Install graphviz (sudo apt-get install graphviz on Ubuntu) to view output tree
    #include<stdio.h>
    #include<stdlib.h>
    struct tree
    {
        struct tree* left;
        struct tree* right;
        int x;
    };
    struct tree* makenode(int x) 
    {
        struct tree * root =  malloc(sizeof(struct tree));
        root -> x = x;
        root -> left = root -> right = NULL;
        return root;
    }
    
    struct tree* makeBST(int *post, int start, int n, int inorder){
        if(n <= 0)
                return NULL;
        int pivot = post[start + n -1];
        struct tree * root = makenode(pivot);
        root -> left = makeBST(post, start, pivot-1 - inorder, inorder );
    root -> right = makeBST(post, start + pivot - inorder - 1, n - (pivot - inorder), pivot); /* right subtree starts just after the left subtree in the postorder array */
        return root;
    }
    void preorder(struct tree* node)
    {
        if(node == NULL)
            return;
        printf("%d ", node->x);
        preorder(node->left);
        preorder(node->right);
    }
    void printdot(struct tree* node, FILE * f)
    {
        if(node == NULL)
            return;
        if(node-> left != NULL)
        {
            fprintf(f, "%d -- %d;\n", node->x, node->left->x);
        }
        if(node-> right != NULL)
        {
            fprintf(f, "%d -- %d;\n", node->x, node->right->x);
        }		
        printdot(node->left, f);
        printdot(node->right, f);
    }
    
    int main(){
        int i, n, *a;
        printf("Enter n: ");
        scanf("%d", &n);
        a = malloc(n * sizeof(int));
        printf ("Enter post order traversal: ");
        for(i = 0; i < n; i++)
        {
            scanf("%d", &a[i]);
        }
        struct tree * tree = makeBST(a, 0, n, 0);
        printf("Pre order traversal is : ");
        preorder(tree);
        printf("\n");
        FILE * f = fopen("tree.dot", "w");
        fprintf(f, "graph tree { \n");
        printdot(tree, f);
        fprintf(f, "}\n");
        fclose(f);
        
        #if defined(__linux__) || (defined(__APPLE__) && defined(__MACH__) || (defined (__gnu_linux__)) )
            system("dot -Tpng tree.dot -o output.png; eog output.png");
        #endif
    
    }

    $$T(n) = T(k) + T(n-k-1) + \log n$$

    Solving would give $T(n) = O(n \log n)$, by putting $k=0$,
    $T(n) = T(0) +T(n-1) + \log n, \\\implies T(n) = O(1)+ \log n + \log (n-1) + \log (n-2) + \dots + \log 1\\\implies T(n) = n + \log n! \\\implies T(n) = O(n \log n).$

    (Stirling's Approximation)

     

    14 votes -- Arjun Suresh ( 156k points)
    Selected Answer

    Answer: D

    The tree is:

    5 votes -- Rajarshi Sarkar ( 30.1k points)
    Selected Answer

    answer = option A
    From the diagram below we are able to see how this works :

    7 votes -- Amar Vashishth ( 20.9k points)

    In a depth-first traversal of a graph $G$ with $n$ vertices, $k$ edges are marked as tree edges. The number of connected components in $G$ is

     

    1. $k$
    2. $k+1$
    3. $n-k-1$
    4. $n-k$

     

    Which of the following is the correct decomposition of the directed graph given below into its strongly connected components?

    GATE2006-IT_46

    1. {P, Q, R, S}, {T}, {U}, {V}
    2. {P,Q, R, S, T, V}, {U}
    3. {P, Q, S, T, V}, {R}, {U}
    4. {P, Q, R, S, T, U, V}

    Consider the depth-first-search of an undirected graph with 3 vertices P, Q, and R. Let discovery time d(u) represent the time instant when the vertex u is first visited, and finish time f(u) represent the time instant when the vertex u is last visited. Given that

     d(P) = 5 units  f(P) = 12 units
     d(Q) = 6 units  f(Q) = 10 units
     d(R) = 14 units f(R) = 18 units

    Which one of the following statements is TRUE about the graph

    1. There is only one connected component
    2. There are two connected components, and P and R are connected
    3. There are two connected components, and Q and R are connected
    4. There are two connected components, and P and Q are connected
    How many edges can there be in a forest with $p$ components having $n$ vertices in all?

    Let $G=(V,E)$ be a directed graph where $V$ is the set of vertices and $E$ the set of edges. Then which one of the following graphs has the same strongly connected components as $G$ ?

    (A) $G_1$ = $(V,E_1)$ where $E_1 = \left\{(u,v) \mid (u,v) \notin E\right\}$

    (B) $G_2$ = $(V,E_2)$ where $E_2 = \left\{(u,v) \mid (v,u) \in E \right\}$

    (C) $G_3$ = $(V,E_3)$ where $E_3 = \{(u,v) \mid$ there is a path of length $\leq2$ from $u$ to $v$ in $E\}$

    (D) $G_4$ = $(V_4,E)$ where $V_4$ is the set of vertices in $G$ which are not isolated

    Let G be the graph with 100 vertices numbered 1 to 100.  Two vertices i and j are adjacent  if $\vert i-j \vert =8$ or $\vert i-j \vert=12$. The number of connected components in G is

    1. 8
    2. 4
    3. 12
    4. 25

     

    Answers: Connected Components

    Selected Answer
    Answer: D ($n-k$).

    Why? If the graph is connected, the DFS visits a spanning tree of the graph, so the number of tree edges is $n-1$, and the number of components is $n - (n-1) = 1$.

    If the graph is not connected, DFS on each component with $x$ vertices produces a tree with $x-1$ tree edges. Summing over all components, the total number of tree edges is the total number of vertices minus the number of components, i.e. $k = n - c$ where $c$ is the number of connected components.

    So the number of connected components is $c = n - k$, option D.
    8 votes -- Akash ( 32.4k points)
    Selected Answer

    Here the answer is B.

    A graph is said to be strongly connected if every vertex is reachable from every other vertex.

    A strongly connected component is always maximal: if X is a strongly connected component, there cannot exist another strongly connected component that properly contains X.

    If we took {R} as a strongly connected component, it would be part of {P, Q, R, S}, which in turn is part of {P, Q, R, S, T, V}; only the maximal set counts.
     

    3 votes -- Gabbar ( 12.3k points)
    Selected Answer

     

    As seen in the question, after time 10 the traversal returns to P, and P finishes (at 12) before R is discovered (at 14). So R must be in a separate component: if there were an edge between Q (or P) and R, then R would have been visited before Q and P finished. Since the discovery and finish times of Q lie inside P's interval, P and Q are in the same component.

    D is the answer.

    3 votes -- Anirudh Pratap Singh ( 27k points)
    Selected Answer
    Answer: n-p

    Corollary: If G is a forest with n vertices and p components, then G has n-p edges.

    Each component with $n_i$ vertices is a tree and hence has $n_i - 1$ edges; summing over all $p$ components gives $n - p$ edges in total.
    3 votes -- Rajarshi Sarkar ( 30.1k points)
    Selected Answer
    A is false. Consider just two vertices connected to each other. So, we have one SCC. The new graph won't have any edges and so 2 SCC.

    B is true. In a directed graph a SCC will have a path from each vertex to every other vertex. So, changing the direction of all the edges, won't change the SCC.

    D is false. Consider any graph with isolated vertices- we lose those components.

    C is a bit tricky. Any edge is a path of length 1. So, the new graph will have all the edges from the old one. Also, we are adding new edges $(u,v)$. So, does this modify any SCC? No, because we add an edge $(u,v)$ only if there is already a path of length $\leq 2$ from $u$ to $v$- so we do not create a new path.  So, both B and C must be answers though GATE key says only B.
    12 votes -- Arjun Suresh ( 156k points)
    Selected Answer
    From the description it is clear that vertices are connected as follows:

    1-9-17-...-97
    2-10-18-...-98
    3-11-19-...-99
    4-12-20-...-100
    5-13-21-...-93
    6-14-22-...-94
    7-15-23-...-95
    8-16-24-...-96

    We have covered all vertices using 8 vertex sets considering only $\mid i - j \mid = 8$. Using $\mid i - j \mid = 12$ we can see the vertex 1 is connected to 13, 2-14, 3-15 and 4-16, so the top 4 vertex sets are in fact connected to the bottom 4 sets, thus reducing the connected components to 4.
    8 votes -- Arjun Suresh ( 156k points)
    Consider three decision problems P1, P2 and P3. It is known that P1 is decidable and P2 is undecidable. Which one of the following is TRUE?
    1. P3 is decidable if P1 is reducible to P3
    2. P3 is undecidable if P3 is reducible to P2
    3. P3 is undecidable if P2 is reducible to P3
    4. P3 is decidable if P3 is reducible to P2's complement

    Answers: Decidability

    Selected Answer

    (A) If P1 is reducible to P3, then P3 is at least as hard as P1. So, no guarantee if P3 is decidable.

    (B) If P3 is reducible to P2, then P3 cannot be harder than P2. But P2 being undecidable, this can't say P3 is undecidable.

    (C) If P2 is reducible to P3, then P3 is at least as hard as P2. Since P2 is undecidable, P3 is also undecidable, hence the answer.

    (D) Complement of an undecidable problem is undecidable. Now, reducing to an undecidable problem can't prove P3 is decidable. 

    http://gatecse.in/wiki/Some_Reduction_Inferences

    13 votes -- Arjun Suresh ( 156k points)

    Consider the following directed graph.

    Suppose a depth-first traversal of this graph is performed, assuming that whenever there is a choice, the vertex earlier in the alphabetical order is to be chosen. Suppose the number of tree edges is $T$, the number of back edges is $B$ and the number of cross edges is $C$. Then

    1. $B = 1$, $C = 1$, and $T = 4$.
    2. $B = 0$, $C = 2$, and $T = 4$.
    3. $B = 2$, $C = 1$, and $T = 3$.
    4. $B = 1$, $C = 2$, and $T = 3$.
    5. $B = 2$, $C = 2$, and $T = 1$.

    Answers: Dfs

    Selected Answer
    Since whenever there is a choice we must select the vertex that comes earlier alphabetically, we choose the starting node as A.

    The tree then becomes A-B-E-C, so the number of tree edges is 3 (T = 3).

    There is a cycle B-E-C, so we get one back edge from C to B while performing DFS. Hence B = 1.

    D is then a disconnected starting vertex, and its edges can only become cross edges. There are 2 cross edges, D-A and D-B. Therefore C = 2.

    The answer is option D.
    5 votes -- Riya Roy(Arayana) ( 5.6k points)

    Consider the following recursive C function that takes two arguments.

    unsigned int foo(unsigned int n, unsigned int r) {
        if (n>0) return ((n%r) + foo(n/r, r));
        else return 0;
    }
    

    What is the return value of the function $\text{foo}$ when it is called as $\text{foo(513, 2)}$?

    1. 9
    2. 8
    3. 5
    4. 2

    Answers: Distance Vector Routing

    Selected Answer

    The function returns the sum of the digits in the binary representation of the given number.

    $513 = 1000000001_2$, so the sum is $1+0+0+0+0+0+0+0+0+1 = 2$.

    4 votes -- Sandeep_Uniyal ( 5.5k points)

    A sub-sequence of a given sequence is just the given sequence with some elements (possibly none or all) left out. We are given two sequences $X[m]$ and $Y[n]$ of lengths m and n, respectively with indexes of $X$ and $Y$ starting from $0$.

    We wish to find the length of the longest common sub-sequence (LCS) of $X[m]$ and $Y[n]$ as $l(m, n)$, where an incomplete recursive definition for the function $l(i, j)$ to compute the length of the LCS of $X[m]$ and $Y[n]$ is given below:

    l(i,j)  = 0, if either i = 0 or j = 0
            = expr1, if i,j > 0 and X[i-1] = Y[j-1]
            = expr2, if i,j > 0 and X[i-1] ≠ Y[j-1]
    The value of $l(i,j)$ could be obtained by dynamic programming based on the correct recursive definition of $l(i,j)$ of the form given above, using an array $L[M,N]$, where $M = m+1$ and $N = n+1$, such that $L[i, j] = l(i, j)$.

    Which one of the following statements would be TRUE regarding the dynamic programming solution for the recursive definition of $l(i, j)$?

    1. All elements of $L$ should be initialized to 0 for the values of $l(i, j)$ to be properly computed.
    2. The values of $l(i, j)$ may be computed in a row major order or column major order of $L[M, N]$.
    3. The values of $l(i, j)$ cannot be computed in either row major order or column major order of $L[M, N]$.
    4. $L[p, q]$ needs to be computed before $L[r, s]$ if either $p<r$ or $q < s$.

    A sub-sequence of a given sequence is just the given sequence with some elements (possibly none or all) left out. We are given two sequences $X[m]$ and $Y[n]$ of lengths m and n, respectively with indexes of $X$ and $Y$ starting from $0$.

    We wish to find the length of the longest common sub-sequence (LCS) of $X[m]$ and $Y[n]$ as $l(m, n)$, where an incomplete recursive definition for the function $l(i, j)$ to compute the length of the LCS of $X[m]$ and $Y[n]$ is given below:

    l(i,j)  = 0, if either i = 0 or j = 0
            = expr1, if i,j > 0 and X[i-1] = Y[j-1]
            = expr2, if i,j > 0 and X[i-1] ≠ Y[j-1]
    

    Which one of the following options is correct?

    1. $\text{expr1} = l\left(i-1, j\right) +1$

    2. $\text{expr1} = l\left(i, j-1\right)$

    3. $\text{expr2} = \max\left(l\left(i-1, j\right), l\left(i,j-1\right)\right)$

    4. $\text{expr2} = \max\left(l\left(i-1, j-1\right), l\left(i,j\right)\right)$

    The Floyd-Warshall algorithm for all-pair shortest paths computation is based on

    1. Greedy paradigm.
    2. Divide-and-conquer paradigm.
    3. Dynamic Programming paradigm.
    4. Neither Greedy nor Divide-and-Conquer nor Dynamic Programming paradigm.

     

     

    Answers: Dynamic Programming

    $\text{expr2} = \max\left(l\left(i-1, j\right), l\left(i,j-1\right)\right)$

    When the currently compared elements don't match, we have two possibilities for the LCS: one including X[i] but not Y[j], and the other including Y[j] but not X[i].

    The answer is B. We can fill $L$ in either row-major or column-major order, since $L[i,j]$ depends only on entries with a smaller row or column index.

    Issue with option D: read it carefully. It says $L[p,q]$ needs to be computed before $L[r,s]$ if either $p < r$ or $q < s$. Suppose we want to compute L(3,3) in row-major order: we need not compute L(4,2) first, even though with $L[p,q] = L(4,2)$ and $L[r,s] = L(3,3)$ we have $q < s$. So D is FALSE.

    6 votes -- Akash ( 32.4k points)
    Selected Answer

    Answer is C. When the currently compared elements don't match, we have two possibilities for the LCS, one including X[i] but not Y[j] and the other including Y[j] but not X[i].

    10 votes -- Akash ( 32.4k points)
    Selected Answer

    In Floyd-Warshall we calculate all possibilities (as overlapping subproblems) and select the best one, so it is neither divide-and-conquer nor greedy, but based on the Dynamic Programming paradigm.

    12 votes -- Anurag Semwal ( 5.8k points)

    In a permutation \(a_1 ... a_n\), of n distinct integers, an inversion is a pair \((a_i, a_j)\) such that \(i < j\) and \(a_i > a_j\).

    What would be the worst case time complexity of the Insertion Sort algorithm, if the inputs are restricted to permutations of \(1. . . n\) with at most $n$ inversions?

    1. \(\Theta(n^2)\)
    2. \(\Theta(n\log n)\)
    3. \(\Theta(n^{1.5})\)
    4. \(\Theta(n)\)

    In a permutation \(a_1 ... a_n\), of n distinct integers, an inversion is a pair \((a_i, a_j)\) such that \(i < j\) and \(a_i > a_j\).

    If all permutations are equally likely, what is the expected number of inversions in a randomly chosen permutation of \(1. . . n\)?

    1. \(\frac{n(n-1)}{2}\)
    2. \(\frac{n(n-1)}{4}\)
    3. \(\frac{n(n+1)}{4}\)
    4. \(2n[\log_2n]\)

    Answers: Expectation

    The answer is D: $\Theta(n)$.

    Insertion sort runs in $\Theta(n + f(n))$ time, where $f(n)$ denotes the number of inversions initially present in the array being sorted.

    Read more at http://geeksquiz.com/gate-gate-cs-2003-question-62/

    6 votes -- Pepper ( 2k points)

    ANSWER: D. $\Theta(n)$

    REASON:

    The number of times the inner loop of insertion sort executes equals the number of inversions in the input permutation $a_1,a_2,\dots a_n$: for each value of $i = 1..n$, $j$ takes the values $1..i-1$, so for every $j<i$ it checks whether $a[j] > a[i]$.

    The maximum number of inversions possible in any permutation is $n(n-1)/2$, which is $O(n^2)$; this is the case where the array is sorted in reverse order. Therefore, resolving all inversions, i.e. the worst-case time complexity of insertion sort, is $\Theta(n^2)$.

    However, as per the question the number of inversions in the input array is restricted to $n$, so the worst-case time complexity of insertion sort reduces to $\Theta(n)$.


    INSERTION SORT ALGORITHM (for reference)

    6 votes -- Prateek Dwivedi ( 1.1k points)
    They are asking for the expected number of inversions. By symmetry, each of the $\binom{n}{2}$ pairs of elements is inverted in exactly half of all permutations, so the expected count is $\frac{1}{2}\cdot\frac{n(n-1)}{2} = \frac{n(n-1)}{4}$.

    The same value comes out as the average of the two extremes. The best case is a sorted array, e.g. 1, 2, 3, 4, 5, with zero inversions. The worst case is a reverse-sorted array, e.g. 6, 5, 4, 3, 2, 1: the first element forms an inversion pair with each of the remaining $n-1$ elements, the second with the remaining $n-2$, and so on, giving $(n-1)+(n-2)+\dots+2+1 = \frac{n(n-1)}{2}$ inversions. The average of $0$ and $\frac{n(n-1)}{2}$ is $\frac{n(n-1)}{4}$.

    So, option B.

    For the second question: the cost of insertion sort is dominated by comparisons and element movements, and the number of movements equals the number of inversions. If the input has at most $n$ inversions, the extra work beyond the linear scan is at most $n$. For instance, with one inversion the array looks like 1, 2, 3, 5, 4, and with two inversions like 1, 2, 4, 3, 6, 5: the array is nearly sorted. So insertion sort needs at most $n$ movements in addition to its scan, giving $\Theta(n)$, option D.
    6 votes -- Ravi Singh ( 8.4k points)

    Consider the tree arcs of a BFS traversal from a source node $W$ in an unweighted, connected, undirected graph. The tree $T$ formed by the tree arcs is a data structure for computing

    (A) the shortest path between every pair of vertices.

    (B) the shortest path from $W$ to every vertex in the graph.

    (C) the shortest paths from $W$ to only those nodes that are leaves of $T$.

    (D) the longest path in the graph.

    A depth-first search is performed on a directed acyclic graph. Let d[u] denote the time at which vertex u is visited for the first time and f[u] the time at which the dfs call to the vertex u terminates. Which of the following statements is always true for all edges (u, v) in the graph ?

    1. d[u] < d[v]
    2. d[u] < f[v]
    3. f[u] < f[v]
    4. f[u] > f[v]

    Let T be a depth first search tree in an undirected graph G. Vertices u and v are leaves of this tree T. The degrees of both u and v in G are at least 2. Which one of the following statements is true?

    1. There must exist a vertex w adjacent to both u and v in G
    2. There must exist a vertex w whose removal disconnects u and v in G
    3. There must exist a cycle in G containing u and v
    4. There must exist a cycle in G containing u and all its neighbours in G

    Consider a weighted undirected graph with positive edge weights and let uv be an edge in the graph. It is known that the shortest path from the source vertex s to u has weight 53 and the shortest path from s to v has weight 65. Which one of the following statements is always true?

    1. Weight (u,v) $\leq$ 12
    2. Weight (u,v) = 12
    3. Weight (u,v) $\geq$ 12
    4. Weight (u,v) > 12

    An independent set in a graph is a subset of vertices such that no two vertices in the subset are connected by an edge. An incomplete scheme for a greedy algorithm to find a maximum independent set in a tree is given below:

    V: Set of all vertices in the tree;	
    I := ϕ
    while	V ≠ ϕ do	
    begin	
        select a vertex u ∊ V such that	
        _______;
        V := V - {u};	
        if u is such that	
        ________then I := I ∪ {u}	
    end;	
    Output(I);	
    1. Complete the algorithm by specifying the property of vertex $u$ in each case.

    2. What is the time complexity of the algorithm

     

    Let $G$ be a graph with $n$ vertices and $m$ edges.What is the tightest upper bound on the running time of Depth First Search on $G$, when $G$ is represented as an adjacency matrix?

     

    (A) $\Theta(n)$

    (B) $\Theta(n+m)$

    (C) $\Theta(n^2)$

    (D) $\Theta(m^2)$

    Consider the following sequence of nodes for the undirected graph given below.

    1. a b e f d g c
    2. a b e f c g d
    3. a d g e b c f
    4. a d b c g e f

    A Depth First Search (DFS) is started at node a. The nodes are listed in the order they are first visited. Which all of the above is (are) possible output(s)?

    GATE2008-IT_47

     

    A) 1 and 3 only
    B) 2 and 3 only
    C) 2, 3 and 4 only
    D) 1, 2 and 3 only

    Which one of the following algorithm design techniques is used in finding all pairs of shortest distances in a graph?

    1. Dynamic programming

    2. Backtracking

    3. Greedy

    4. Divide and Conquer

     

    Suppose depth first search is executed on the graph below starting at some unknown vertex. Assume that a recursive call to visit a vertex is made only after first checking that the vertex has not been visited earlier. Then the maximum possible recursion depth (including the initial call) is _________.

    A sink in a directed graph is a vertex i such that there is an edge from every vertex j ≠ i to i and there is no edge from i to any other vertex. A directed graph G with n vertices is represented by its adjacency matrix A, where A[i][j] = 1 if there is an edge directed from vertex i to j and 0 otherwise. The following algorithm determines whether there is a sink in the graph G.

     

    i = 0;
    do {
        j = i + 1;
        while ((j < n) && E1) j++;
        if (j < n) E2;
    } while (j < n);
    flag = 1;
    for (j = 0; j < n; j++)
    if ((j != i) && E3) flag = 0;
    if (flag) printf("Sink exists") ;
    else printf ("Sink does not exist");

    Choose the correct expression for E3

    1. (A[i][j] && !A[j][i])
    2. (!A[i][j] && A[j][i])
    3. (!A[i][j] || A[j][i])
    4. (A[i][j] || !A[j][i])

    Consider the following graph 

    Among the following sequences

    1. abeghf
    2. abfehg
    3. abfhge
    4. afghbe

    which are depth first traversals of the above graph?

    1. I, II and IV only
    2. I and IV only
    3. II, III and IV only
    4. I, III and IV only

    Let G be an undirected graph. Consider a depth-first traversal of G, and let T be the resulting depth-first search tree. Let u be a vertex in G and let v be the first new (unvisited) vertex visited after visiting u in the traversal. Which of the following statement is always true?

    1. {u, v} must be an edge in G, and u is a descendant of v in T
    2. {u, v} must be an edge in G, and v is a descendant of u in T
    3. If {u, v} is not an edge in G then u is a leaf in T
    4. If {u, v} is not an edge in G then u and v must have the same parent in T

     

    The most efficient algorithm for finding the number of connected components in an undirected graph on $n$ vertices and $m$ edges has time complexity

    1. $\Theta(n)$
    2. $\Theta(m)$
    3. $\Theta(m+n)$
    4. $\Theta(mn)$

    Let $G= (V,E)$ be a directed graph with $n$ vertices. A path from $v_i$ to $v_j$ in $G$  is a sequence of vertices ($v_{i},v_{i+1}, \dots , v_j$) such that $(v_k, v_{k+1}) \in E$ for all $k$ in $i$ through $j-1$. A simple path is a path in which no vertex appears more than once.

    Let $A$ be an $n \times n$ array initialized as follows.

    $$A[j,k] = \begin{cases} 1 \text { if } (j,k) \in E \\ 0 \text{ otherwise} \end{cases}$$

    Consider the following algorithm.

    for i=1 to n
        for j=1 to n
            for k=1 to n
                A[j,k] = max(A[j,k], A[j,i] + A[i,k]);  

    Which of the following statements is necessarily true for all j and k after termination of the above algorithm?

    1. $A[j,k] \leq n$
    2. If $A[j,j] \geq n-1$ then $G$ has a Hamiltonian cycle
    3. If there exists a path from $j$ to $k$, $A[j,k]$ contains the longest path length from $j$ to $k$
    4. If there exists a path from $j$ to $k$, every simple path from $j$ to $k$ contains at most $A[j,k]$ edges

    In an adjacency list representation of an undirected simple graph $G=(V, E)$, each edge $(u, v)$ has two adjacency list entries: [v] in the adjacency list of $u$, and $[u]$ in the adjacency list of $v$. These are called twins of each other. A twin pointer is a pointer from an adjacency list entry to its twin. If $|E|=m$ and $|V|=n$, and the memory size is not a constraint, what is the time complexity of the most efficient algorithm to set the twin pointer in each entry in each adjacency list?

    1. $\Theta\left(n^{2}\right)$
    2. $\Theta\left(n+m\right)$
    3. $\Theta\left(m^{2}\right)$
    4. $\Theta\left(n^{4}\right)$

     

    Let G = (V,E) be an undirected graph with a subgraph $G_1 = (V_1, E_1)$. Weights are assigned to edges of $G$ as follows.

    $$w(e) = \begin{cases} 0 \text{ if } e \in E_1 \\1 \text{ otherwise} \end{cases}$$

    A single-source shortest path algorithm is executed on the weighted graph (V,E,w) with an arbitrary vertex $v_1$ of $V_1$ as the source. Which of the following can always be inferred from the path costs computed?
     

    1. The number of edges in the shortest paths from $v_1$ to all vertices of G
    2. $G_1$ is connected
    3. $V_1$ forms a clique in G

    4. $G_1$ is a tree

    Which of the following statement(s) is/are correct regarding Bellman-Ford shortest path algorithm?

    P: Always finds a negative weighted cycle, if one exists.

    Q: Finds whether any negative weighted cycle is reachable from the source.

    1. P only
    2. Q only
    3. Both P and Q
    4. Neither P nor Q

     

    Suppose we run Dijkstra’s single source shortest-path algorithm on the following edge-weighted directed graph with vertex P as the source.

         

    In what order do the nodes get included into the set of vertices for which the shortest path distances are finalized?

    1. P,Q,R,S,T,U
    2. P,Q,R,U,S,T
    3. P,Q,R,U,T,S
    4. P,Q,T,R,U,S

    Let $G_1=(V,E_1)$ and $G_2 =(V,E_2)$ be connected graphs on the same vertex set $V$ with more than two vertices. If $G_1 \cap G_2= (V,E_1\cap E_2)$  is not a connected graph, then the graph $G_1\cup G_2=(V,E_1\cup E_2)$

    1. cannot have a cut vertex
    2. must have a cycle
    3. must have a cut-edge (bridge)
    4. Has chromatic number strictly greater than those of $G_1$ and $G_2$

    A sink in a directed graph is a vertex i such that there is an edge from every vertex j ≠ i to i and there is no edge from i to any other vertex. A directed graph G with n vertices is represented by its adjacency matrix A, where A[i] [j] = 1 if there is an edge directed from vertex i to j and 0 otherwise. The following algorithm determines whether there is a sink in the graph G.

    i = 0;
    do {
        j = i + 1;
        while ((j < n) && E1) j++;
        if (j < n) E2;
    } while (j < n);
    flag = 1;
    for (j = 0; j < n; j++)
        if ((j != i) && E3) flag = 0;
    if (flag) printf("Sink exists"); 
    else printf ("Sink does not exist");

    Choose the correct expressions for E1 and E2

    1. E1 : A[i][j] and E2 : i = j;
    2. E1 : !A[i][j] and E2 : i = j + 1;
    3. E1: !A[i][j] and E2 : i = j;
    4. E1 : A[i][j] and E2 : i = j + 1;

    Let $G$ be an undirected graph with $n$ vertices. For any subset $S$ of vertices, the set of neighbours of $S$ consists of the union of $S$ and the set of vertices $S'$ that are connected to some vertex in $S$ by an edge of $G$. The graph $G$ has the nice property that every subset of vertices $S$ of size at most $n/2$ has at least $1.5 |S|$-many neighbours. What is the length of a longest path in $G$?

    1. $O (1)$
    2. $O (\log \log n)$ but not $O (1)$
    3. $O (\log n)$ but not $O (\log \log n)$
    4. $O \left(\sqrt{n}\right)$ but not $O (\log n)$
    5. $O (n)$ but not $O \left(\sqrt{n}\right)$

    To implement Dijkstra’s shortest path algorithm on unweighted graphs so that it runs in linear time, the data structure to be used is:

    1. Queue
    2. Stack
    3. Heap
    4. B-Tree

    Let $G(V,E)$ be an undirected graph with positive edge weights. Dijkstra’s single source shortest path algorithm can be implemented using the binary heap data structure with time complexity:

    1. $O\left(|V|^2\right)$

    2. $O\left(|E|+|V|\log |V|\right)$

    3. $O\left(|V|\log|V|\right)$

    4. $O\left(\left(|E|+|V|\right)\log|V|\right)$

    Let $s$ and $t$ be two vertices in a undirected graph $G=(V,E)$ having distinct positive edge weights. Let $[X,Y]$ be a partition of $V$ such that $s \in X$ and $t \in Y$. Consider the edge $e$ having the minimum  weight amongst all those edges that have one vertex in $X$ and one vertex in $Y$.

    The edge $e$ must definitely belong to:

    1. the minimum weighted spanning tree of G

    2. the weighted shortest path from s to t

    3. each path from s to t

    4. the weighted longest path from s to t

    Fill in the blanks in the following template of an algorithm to compute all pairs shortest path lengths in a directed graph G with n*n adjacency matrix A. A[i,j] equals 1 if there is an edge in G from i to j, and 0 otherwise. Your aim in filling in the blanks is to ensure that the algorithm is correct.

    INITIALIZATION: For i = 1 ... n
        {For j = 1 ... n
            { if a[i,j] = 0 then P[i,j] =_______ else P[i,j] =_______;}
        }
        
    ALGORITHM: For i = 1 ... n
        {For j = 1 ... n
            {For k = 1 ... n
                {P[__,__] = min{_______,______}; }
            }
        }    
    1. Copy the complete line containing the blanks in the Initialization step and fill in the blanks.
    2. Copy the complete line containing the blanks in the Algorithm step and fill in the blanks.
    3. Fill in the blank: The running time of the Algorithm is O(___).

    In an unweighted, undirected connected graph, the shortest path from a node S to every other node is computed most efficiently, in terms of time complexity, by

    1. Dijkstra’s algorithm starting from S.

    2. Warshall’s algorithm.

    3. Performing a DFS starting from S.

    4. Performing a BFS starting from S.

    Consider the undirected graph below:

    GATE2004-IT_56

    Using Prim's algorithm to construct a minimum spanning tree starting with node A, which one of the following sequences of edges represents a possible order in which the edges would be added to construct the minimum spanning tree?

    1. (E, G), (C, F), (F, G), (A, D), (A, B), (A, C)
    2. (A, D), (A, B), (A, C), (C, F), (G, E), (F, G)
    3. (A, B), (A, D), (D, F), (F, G), (G, E), (F, C)
    4. (A, D), (A, B), (D, F), (F, C), (F, G), (G, E)

    What is the time complexity of Bellman-Ford single-source shortest path algorithm on a complete graph of n vertices?

    1. $\Theta(n^2)$
    2. $\Theta(n^2\log n)$
    3. $\Theta(n^3)$
    4. $\Theta(n^3\log n)$

    Let $s$ and $t$ be two vertices in a undirected graph $G=(V,E)$ having distinct positive edge weights. Let $[X,Y]$ be a partition of $V$ such that $s \in X$ and $t \in Y$. Consider the edge $e$ having the minimum  weight amongst all those edges that have one vertex in $X$ and one vertex in $Y$.

    Let the weight of an edge $e$ denote the congestion on that edge. The congestion on a path is defined to be the maximum of the congestions on the edges of the path. We wish to find the path from $s$ to $t$ having minimum congestion. Which of the following paths is always such a path of minimum congestion?

    1. a path from $s$ to $t$ in the minimum weighted spanning tree

    2. a weighted shortest path from $s$ to $t$

    3. an Euler walk from $s$ to $t$

    4. a Hamiltonian path from $s$ to $t$

    Consider an undirected unweighted graph G. Let a breadth-first traversal of G be done starting from a node r. Let d(r,u) and d(r,v) be the lengths of the shortest paths from r to u and v respectively in G. If u is visited before v during the breadth-first traversal, which of the following statements is correct?

    1. $d(r,u) < d(r,v)$
    2. $d(r,u) > d(r,v)$
    3. $d(r,u) \leq d(r,v)$
    4. None of the above

    Answers: Graph Algorithms

    Selected Answer
    BFS always has a starting node. It does not compute the shortest path between every pair of vertices; it computes the shortest path from W to every other vertex.
    7 votes -- Digvijay ( 36.9k points)
    Selected Answer

    I am going to disprove all the wrong options here.

    A) d[u] < d[v]. Counter example: start DFS on v first, then call DFS on x, which visits u.

    B) d[u] < f[v]. Counter example: same as A).

    C) f[u] < f[v]. Counter example: same as A) again.

    So the answer is D).

    6 votes -- Akash ( 32.4k points)

    Consider the following graph.

    Applying DFS on it gives the stated traversal.

    So D is the answer.
     

    4 votes -- Nikunj Vinchhi ( 51 points)
    Selected Answer
    D. weight(u,v) ≥ 12

    If weight(u, v) < 12, then the minimum weight of (s, v) = weight of (s, u) + weight of (u, v) = 53 + (a value < 12), which would be less than 65.
    13 votes -- Arjun Suresh ( 156k points)

    (a) While adding vertex u to I, it should not have an edge with any node already in I.

    (b) The algorithm runs till V is empty (O(n) iterations) and checks u against each vertex v in set I (O(n) time). So, the overall complexity is $O(n^2)$.

    3 votes -- Rajarshi Sarkar ( 30.1k points)
    Selected Answer
    Ans (C)

    http://web.eecs.utk.edu/~huangj/CS302S04/notes/graph-searching.html
    10 votes -- Keith Kr ( 6k points)
    Selected Answer
    Answer: B

    1. After f is visited, c or g should be visited next. So, the traversal is incorrect.
    4. After c is visited, e or f should be visited next. So, the traversal is incorrect.
    2 and 3 are correct.
    4 votes -- Rajarshi Sarkar ( 30.1k points)
    Selected Answer
    Answer is A, because the Floyd-Warshall algorithm, which finds all-pairs shortest paths, is a dynamic programming approach.
    6 votes -- shashi shekhar ( 397 points)

    The algorithm is an example of dynamic programming.
    http://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm

    6 votes -- Mithlesh Upadhyay ( 3.8k points)
    Selected Answer
    The answer is 19. Apply DFS.
    10 votes -- Gate Keeda ( 17.7k points)
    Selected Answer

    If there is a sink in the graph, the adjacency matrix will contain all 1s (except diagonal) in one column and all 0s (except diagonal) in the corresponding row of that vertex. The given algorithm is a smart way of doing this as it finds the sink in $O(n)$ time complexity.

    The first part of the code finds whether there is any vertex which does not have an outgoing edge to any vertex coming after it in the adjacency matrix. The smart part of the code is E2, which skips the rows of vertices that can no longer be a sink, making the whole scan linear. This is done through

    E1: !A[i][j]
    and
    E2: i = j;

    E1 makes sure that there is no edge from i to j and i is a potential sink till A[i][j] becomes 1. If A[i][j] becomes 1, i can no longer be a sink, similarly all previous j can also not be a sink (as there was no edge from i to them and a sink requires an edge from all other vertices). Now, the next potential candidate for sink is j. So, in E2, we must make i = j.

    Now, the loop breaks when we have found a potential sink, that is, a vertex which does not have any outgoing edge to any vertex coming after it in the adjacency matrix. So, if the column of this vertex is all 1s and its row is all 0s (except the diagonal), it is the sink. Otherwise there is no sink in the graph. So, E3 is checking this condition.

    But in the code $\text{flag}$ is used for storing the state that sink is present or not. And as per the usage of flag in code, by default sink is considered present.  So, the condition in E3  must make flag = 0, if the found $i$ is not a sink. So, the condition should be: 

    A[i][j] || !A[j][i]

    So, (D) is the answer.

    8 votes -- Arjun Suresh ( 156k points)
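    The filled-in algorithm can be written out as a short runnable sketch (Python, added for illustration; the function name `has_sink` is not from the original answer):

```python
def has_sink(A):
    """O(n) sink check on adjacency matrix A.
    E1 = !A[i][j], E2 = i = j, E3 = A[i][j] || !A[j][i]."""
    n = len(A)
    i = 0
    # First pass: find the only possible sink candidate i.
    for j in range(1, n):
        if A[i][j]:      # i has an outgoing edge, so i is not a sink;
            i = j        # vertices between i and j are ruled out too (E2)
    # Second pass (E3): verify column i is all 1s and row i all 0s.
    for j in range(n):
        if j != i and (A[i][j] or not A[j][i]):
            return False
    return True
```

    For a 3-vertex graph where vertex 2 receives edges from both others and has none outgoing, this reports a sink; for a 2-cycle it does not.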
    In DFS, think of a stack: every adjacent node is pushed onto it and the next node is popped LIFO. In BFS, think of a queue, i.e., FIFO. Here the answer is option D.
    2 votes -- anshu ( 2.4k points)
    Selected Answer

    Let this be the DFS order of the tree. Then

    u = D and v = F.

    So we conclude:

    1. It is not necessary that there is an edge between them.

    2. If there is no edge, then u must be a leaf; i.e., D is a leaf here.

    3. u and v do not always have the same parent, but they do have a common ancestor.

     

    7 votes -- Anirudh Pratap Singh ( 27k points)
    Selected Answer

    Run DFS to find connected components. Its time complexity is $\Theta(m+n)$, hence (C) is the answer.

    11 votes -- Happy Mittal ( 9.6k points)
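    As a runnable illustration of this answer (a Python sketch; the function name and edge-list input format are editor's assumptions):

```python
def count_components(n, edges):
    """Count connected components of an undirected graph with
    iterative DFS; runs in Theta(n + m) time, matching option (C)."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    visited = [False] * n
    components = 0
    for s in range(n):
        if not visited[s]:
            components += 1        # each new DFS tree is a new component
            stack = [s]
            visited[s] = True
            while stack:
                u = stack.pop()
                for w in adj[u]:
                    if not visited[w]:
                        visited[w] = True
                        stack.append(w)
    return components
```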
    Selected Answer

    D is correct.

    Consider a graph with 2 nodes and one edge from $V_1$ to $V_2$.

    Running the above algorithm will result in A being 

    A   1   2
    1   1   2
    2   1   2

    Clearly options B and C are wrong. Since

    1. A[1][1] and A[2][2] are $\geq n-1$, yet there exists no Hamiltonian cycle. Hence option B is invalid.
    2. The longest path between $V_1$ and $V_2$ has length 1, but A[1][2] is 2; also there is no path from $V_2$ to $V_1$, yet A[2][1] = 1. So A[j][k] does not contain the longest path length, and option C is invalid.

    Hence A or D could be valid.

    Now consider a graph with 2 nodes and two edges, one from $V_1$ to $V_2$ and the other from $V_2$ to $V_1$. Running the above algorithm will result in A being

    A   1   2
    1   2   3
    2   3   4

    Hence option A is invalid, as A[j][k] can be > n.

    D is correct

    6 votes -- ryan sequeira ( 1.6k points)
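    The two matrices in this answer can be reproduced by simulating the question's triple loop directly (a Python sketch with 0-based indices, added for verification):

```python
def run(A):
    """In-place simulation of the question's algorithm:
    A[j,k] = max(A[j,k], A[j,i] + A[i,k]) for all i, j, k."""
    n = len(A)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                A[j][k] = max(A[j][k], A[j][i] + A[i][k])
    return A
```

    A single edge $V_1 \to V_2$ yields [[1, 2], [1, 2]]; edges in both directions yield [[2, 3], [3, 4]].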

    Applying BFS on the undirected graph gives the twin pointers: visit every vertex level-wise, and for every vertex fill in the adjacent vertices in the adjacency list. BFS takes O(m+n) time.

    Take an extra field for storing the list entry of each particular vertex; this takes an extra O(m+n) time (n vertices and m edges).

    So B is the answer.

    7 votes -- Anirudh Pratap Singh ( 27k points)
    Selected Answer
    After applying the shortest path algorithm, check cost of vertex from source to every vertex in $G_1$. If $G_1$ is connected all these costs must be $0$ as edge weights of subgraph $G_1$ is $0$ and that should be the shortest path. If cost is not $0$, to at least one vertex in $G_1$ (not necessarily $G$), then $G_1$ is disconnected.

    Ans is b
    11 votes -- Anurag Semwal ( 5.8k points)
    Selected Answer

    As we can see, the last step is the verification step: in that step the values remain unchanged. If a negative edge weight cycle were reachable from the source, then at the verification step those values would differ from the values computed above.

    In case the cycle is not reachable from the source, its vertices stay at distance (cost) $\infty$ from the source from the beginning till the last step, since taking anything away from $\infty$ still leaves it infinite.

    But there can also be vertices which do not form a cycle and are still unreachable from the source; those too stay at distance $\infty$ from the beginning till the end.

    Hence, we cannot distinguish between such a cycle and such vertices. Thus, this algorithm can detect negative edge weight cycles only if they are reachable from the source.

    answer = option B

    8 votes -- Amar Vashishth ( 20.9k points)
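    The verification-pass idea above can be sketched as follows (Python, editor's illustration; the (u, v, weight) edge-list format is an assumption):

```python
import math

def bellman_ford(n, edges, src):
    """Returns (dist, flag) where flag is True iff a negative-weight
    cycle is reachable from src (option Q). Unreachable cycles leave
    all involved distances at infinity, so they go undetected."""
    dist = [math.inf] * n
    dist[src] = 0
    for _ in range(n - 1):                 # n-1 relaxation passes
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # Verification pass: any further improvement means a negative
    # cycle reachable from src.
    flag = any(dist[u] + w < dist[v] for u, v, w in edges)
    return dist, flag
```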
    Selected Answer
    Ans is (B). In Dijkstra's algorithm, at each step we pick the unfinalized vertex with the smallest tentative distance from the source, finalize its shortest path, and relax its outgoing edges.
    7 votes -- gate_asp ( 583 points)
    Selected Answer

    Take a tree for example 

    (A) False. Every vertex of tree(other than leaves) is a cut vertex

    (B)True

    (C)False. Without E in G1 and G2, G1 U G2  has no bridge.

    (D)False. G1 U G2, G1, G2 three graphs have same chromatic number of 2. 

    4 votes -- srestha ( 29.8k points)
    Selected Answer

    If there is a sink in the graph, the adjacency matrix will contain all 1's (except diagonal) in one column and all 0's (except diagonal) in the corresponding row of that vertex. The given algorithm is a smart way of doing this as it finds the sink in $O(n)$ time complexity. 

    The first part of the code finds whether there is any vertex which does not have an outgoing edge to any vertex coming after it in the adjacency matrix. The smart part of the code is E2, which skips the rows of vertices that can no longer be a sink, making the whole scan linear. This is done through

    E1: !A[i][j] 
    and
    E2: i = j; 

    E1 makes sure that there is no edge from i to j and i is a potential sink till A[i][j] becomes 1. If A[i][j] becomes 1, i can no longer be a sink, similarly all previous j can also not be a sink (as there was no edge from i to them and a sink requires an edge from all other vertices). Now, the next potential candidate for sink is j. So, in E2, we must make i = j. 

    So, answer is (C)

    For E3, 
    http://gateoverflow.in/3857/gate2005-it_84b

    6 votes -- Arjun Suresh ( 156k points)

    Here we have to find the length of the longest path in the graph.

    Let's start with one node (choose any one) and find the number of hops within which the whole graph is reachable, i.e., all $n$ vertices are reachable.

    From the first node (say A):

    (1) within 1 hop (1 edge traversal) we can reach at least $\frac{3}{2}$ nodes (as the neighbourhood of A has at least $\frac{3}{2}$ nodes)

    (2) within 2 hops we can reach at least $\left (\frac{3}{2} \right )^2$ nodes

    (3) within 3 hops we can reach at least $\left (\frac{3}{2} \right )^3$ nodes

    and so on. Like this, once the reachable set has $\frac{n}{2}$ nodes, within 1 more hop we can reach up to $\frac{n}{2} \cdot \frac{3}{2} = \frac{3n}{4}$ nodes.

    Up to here we take $O(\log n)$ hops (to reach $\frac{3n}{4}$ nodes).

    But this property is only valid for subsets of size at most $n/2$, so I am stuck on how to proceed from here.

    Hope it helps. Any suggestions on how to proceed further are welcome.

    0 votes -- Himanshu Agarwal ( 9.9k points)
    Answer is A) Queue.

    We can find single-source shortest paths in an unweighted graph using breadth first search (BFS), which uses a queue data structure and takes O(m+n) time, i.e., linear in the number of vertices and edges.
    12 votes -- Mithlesh Upadhyay ( 3.8k points)
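    A minimal sketch of this queue-based computation (Python; the adjacency-dict input format is an editor's assumption):

```python
from collections import deque

def bfs_distances(adj, s):
    """Shortest distances from s in an unweighted graph using a
    plain FIFO queue: O(n + m), i.e., linear time."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:          # first visit gives the shortest distance
                dist[v] = dist[u] + 1
                q.append(v)
    return dist
```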
    Selected Answer
    D. Binary heap: |E| decrease-key operations, each taking O(log |V|), plus |V| extract-min operations, each taking O(log |V|).

    B. Fibonacci heap: |E| decrease-key operations, each taking O(1) amortized, plus |V| extract-min operations, each taking O(log |V|).

    A. Array: finding the minimum vertex in each iteration takes O(|V|), and this needs to be done |V| times.

    Binomial heap: same as the binary heap here, as the critical operations are decrease-key and extract-min.
    10 votes -- Gate Keeda ( 17.7k points)
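    A binary-heap implementation along these lines can be sketched in Python (an editor's illustration using lazy deletion in place of an explicit decrease-key, which keeps the same $O((|E|+|V|)\log|V|)$ bound):

```python
import heapq
import math

def dijkstra(adj, s):
    """Dijkstra with a binary heap; adj maps u -> list of (v, w)."""
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)        # extract-min: O(log |V|)
        if d > dist.get(u, math.inf):
            continue                    # stale entry (lazy deletion)
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd            # "decrease key" = push again
                heapq.heappush(pq, (nd, v))
    return dist
```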
    Selected Answer

    For 82a, the answer should be option A, because edge 'e' is the lightest safe edge connecting X and Y, so the minimum spanning tree of G must contain 'e' (greedy and optimal choice).
    Option B might seem correct, but it is not always true. One such case is when G is not connected, so there might not be any path between 's' and 't'.
    Since the question asks what is definitely true, B is incorrect and A is the only correct option.

    Let's say AC = 1, CD = 2, BD = 3 and AB = 4.

    Then if s = A and t = B, AC is the lightest edge crossing X and Y, where X = {A} and Y = {C, B, D}.
    But clearly AC is not on the shortest path from A to B. The shortest path is AB = 4.

    3 votes -- chandan1223 ( 147 points)

    The answer for 82 b) is option a): a path from s to t in the MST.

    Let $T$ be the spanning tree and $E_T$ the edge set of the tree.

    Let $L$ denote the set of all paths $P_e$ in $T$, where $e$ is any edge in $E_G$. For $g \in E_T$, we define the congestion of $g$ as the number of paths in $L$ in which $g$ is an element:

    $c(g, T) = |\{P_e \in L : g \in P_e\}|$

    We then define the congestion of $G$ embedded onto $T$ as the maximum $c(g, T)$ over all edges $g$ in $E_T$:

    $c(G : T) = \max\{c(g, T) : g \in E_T\}$

    The tree congestion of $G$, denoted $t(G)$, is the minimum value of $c(G : T)$ over all trees $T$ in $T_G$. Similarly, the spanning tree congestion of $G$, denoted $s(G)$, is the minimum value of $c(G : S)$ over all spanning trees $S$ in $S_G$:

    $t(G) = \min\{c(G : T) : T \in T_G\}$

    $s(G) = \min\{c(G : S) : S \in S_G\}$

    Please refer to the link 

    http://www.math.csusb.edu/reu/dt07.pdf

    3 votes -- Shounak Kundu ( 4.1k points)
    Selected Answer
    INITIALIZATION: For i = 1 ... n
        {For j = 1 ... n
            { if a[i,j] = 0 then P[i,j] = infinity  // i.e. if there is no direct edge, put infinity
              else P[i,j] = a[i,j];
             }
        }
    ALGORITHM: 
    For i = 1 ... n
        {For j = 1 ... n
            {For k = 1 ... n
                {
                   P[j,k] = min( P[j,k], P[j,i] + P[i,k] );  // i is the intermediate vertex
                };
            }
        }              

    The running time is O($n^3$).

    Note that the intermediate vertex must be the outermost loop variable (here i), exactly as in the Floyd-Warshall algorithm. The algorithm is for weighted graphs, but it also works for unweighted graphs, because if P[j,k] = 1, P[j,i] = 1 and P[i,k] = 1, then P[j,k] = min(P[j,k], P[j,i] + P[i,k]) = min(1, 2) = 1.

    All the other cases are also satisfied (e.g., if P[j,k] was infinity in the last iteration and there exists a path via i, it gets updated).

    3 votes -- Saurav Kumar Gupta ( 1.7k points)
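    The filled-in template can be written out as runnable code (a Python sketch; the intermediate vertex is the outermost loop variable, as in Floyd-Warshall):

```python
import math

def all_pairs_shortest(A):
    """All-pairs shortest path lengths from a 0/1 adjacency matrix;
    no-edge entries are initialized to infinity. O(n^3) time."""
    n = len(A)
    P = [[A[i][j] if A[i][j] else math.inf for j in range(n)]
         for i in range(n)]
    for i in range(n):                  # i = intermediate vertex
        for j in range(n):
            for k in range(n):
                P[j][k] = min(P[j][k], P[j][i] + P[i][k])
    return P
```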
    Selected Answer
    In a BFS traversal, we first note the vertices that can be reached directly from the starting vertex, then the vertices that can be reached in 2 steps from the starting vertex, and so on. So BFS is the best choice.
    8 votes -- Bhagirathi Nayak ( 11.4k points)
    Selected Answer

    Answer is (D)

    (A) and (B) produce disconnected components with the given order in the options, which is never allowed by Prim's algorithm.

    (C) produces a connected component every time a new edge is added, BUT once the starting vertex is chosen, the first edge picked must be the minimum-weight edge incident on it. Therefore (A, D) MUST be chosen BEFORE (A, B). Therefore (C) is FALSE.

     

    6 votes -- Sandeep_Uniyal ( 5.5k points)
    Selected Answer
    Time complexity of Bellman-Ford algorithm is $\Theta(|V||E|)$ where $|V|$ is number of vertices and $|E|$ is number of edges. If the graph is complete, the value of $|E|$ becomes $\Theta\left(|V|^2\right)$. So overall time complexity becomes $\Theta\left(|V|^3\right)$. And given here is $n$ vertices. So answers ends up to $\Theta\left(n^3\right)$.
    16 votes -- Gate Keeda ( 17.7k points)

    Here the answer should be A.

    Here the shortest path gives 6.

    The spanning tree contains edges of weights 2, 3, 4, so the congestion in this case is max(2, 3, 4), that is, 4. For the path from s to t the overall congestion is max(3, 4) = 4, but the total weight is 7.

    Options C and D are, I think, not related to this question.

    0 votes -- Gabbar ( 12.3k points)
    Selected Answer
    Ans: C.

    BFS is used to compute shortest paths from the source (when all edge costs are 1).

    Now if u is visited before v, it means one of two things:

    1. Either u is closer to r than v is, or

    2. u and v are at the same distance from r, and our BFS algorithm chose to visit u before v.
    5 votes -- Akash ( 32.4k points)

    Let $G = (V, E)$ be a simple undirected graph, and $s$ be a particular vertex in it called the source. For $x \in V$, let $d(x)$ denote the shortest distance in $G$ from $s$ to $x$. A breadth first search (BFS) is performed starting at $s$. Let $T$ be the resultant BFS tree. If $(u, v)$ is an edge of $G$ that is not in $T$, then which one of the following CANNOT be the value of $d(u) - d(v)$?

    1. $-1$
    2. $0$
    3. $1$
    4. $2$

    Answers: Graph Search

    Selected Answer

    $2$ is the answer. 

    $d(u) - d(v) = 0$ is possible when both $u$ and $v$ have an edge from $t$ and $t$ is in the shortest path from $s$ to $u$ or $v$.

    $d(u) - d(v) = 1$ is possible when $v$ and $t$ are in the shortest path from $s$ to $u$ and both $t$ and $v$ are siblings- same distance from $s$ to both $t$ and $v$ causing $t-u$ edge to be in BFS tree and not $v-u$. 

    $d(u) - d(v) = -1$ is possible as explained above by interchanging $u$ and $v$.

    $d(u) - d(v) = 2$ is not possible. This is because on BFS traversal we either visit $u$ first or $v$. Let's take $u$ first. Now, we put all neighbors of $u$ on queue. Since $v$ is a neighbour and $v$ is not visited before as assumed, $d(v)$ will become $d(u) + 1$. Similarly, for $v$ being visited first. 

    15 votes -- Arjun Suresh ( 156k points)

    We are given 9 tasks $T_1, T_2, \dots, T_9$. The execution of each task requires one unit of time. We can execute one task at a time. Each task $T_i$ has a profit $P_i$ and a deadline $d_i$. Profit $P_i$ is earned if the task is completed before the end of the $d_i^{th}$ unit of time.

    Task     $T_1$ $T_2$ $T_3$ $T_4$ $T_5$ $T_6$ $T_7$ $T_8$ $T_9$
    Profit     15    20    30    18    18    10    23    16    25
    Deadline    7     2     5     3     4     5     2     7     3

    What is the maximum profit earned?

    1. 147
    2. 165
    3. 167
    4. 175

    We are given 9 tasks $T_1, T_2, \dots, T_9$. The execution of each task requires one unit of time. We can execute one task at a time. Each task $T_i$ has a profit $P_i$ and a deadline $d_i$. Profit $P_i$ is earned if the task is completed before the end of the $d_i^{th}$ unit of time.

    Task     $T_1$ $T_2$ $T_3$ $T_4$ $T_5$ $T_6$ $T_7$ $T_8$ $T_9$
    Profit     15    20    30    18    18    10    23    16    25
    Deadline    7     2     5     3     4     5     2     7     3

    Are all tasks completed in the schedule that gives maximum profit?

    1. All tasks are completed

    2. $T_1$ and $T_6$ are left out

    3. $T_1$ and $T_8$ are left out

    4. $T_4$ and $T_6$ are left out

    Answers: Greedy Algorithm

    Selected Answer

    The most important statement in question is 

    each task requires one unit of time

    This shows that we can greedily choose the better task and that should give us the optimal solution. The best task would be the one with maximum profit. Thus we can sort the tasks based on deadline and then profit as follows:

    Task T7 T2 T9 T4 T5 T3 T6 T8 T1
    Deadline 2 2 3 3 4 5 5 7 7

           0 --T7 -- 1  --  T2  --  2  --  T9  --  3  --  T5  --  4  --  T3 --  5 --  T8 --  6  --  T1 --  7 

    So we know that T4 and T6 are left out.

    The profit, which excludes T4 and T6, is 15 + 20 + 30 + 18 + 16 + 23 + 25 = 147.

    A is the answer.

     

    4 votes -- Anirudh Pratap Singh ( 27k points)
    Selected Answer

    The most important statement in question is 

    each task requires one unit of time

    This shows that we can greedily choose the better task and that should give us the optimal solution. The best task would be the one with maximum profit. Thus we can sort the tasks based on deadline and then profit as follows:

    Task T7 T2 T9 T4 T5 T3 T6 T8 T1
    Deadline 2 2 3 3 4 5 5 7 7

       0 ----T7 ----- 1----T2-----2-----T9-----3-----T5-----4-----T3-----5-----T8-----6-----T1------7

    T4 and T6 left out
     

    D is answer.

    7 votes -- Pooja ( 26.4k points)
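    The greedy schedule from both answers can be reproduced mechanically (a Python sketch: take tasks in decreasing profit order and place each in the latest free slot before its deadline; the function name and input format are editor's assumptions):

```python
def schedule(tasks):
    """tasks: list of (name, profit, deadline); each task takes one
    unit of time. Returns slot -> task name for the chosen schedule."""
    slots = {}
    for name, profit, deadline in sorted(tasks, key=lambda t: -t[1]):
        for t in range(deadline, 0, -1):   # latest free slot first
            if t not in slots:
                slots[t] = name
                break
    return slots

tasks = [("T1", 15, 7), ("T2", 20, 2), ("T3", 30, 5), ("T4", 18, 3),
         ("T5", 18, 4), ("T6", 10, 5), ("T7", 23, 2), ("T8", 16, 7),
         ("T9", 25, 3)]
slots = schedule(tasks)
```

    This yields a total profit of 147 with T4 and T6 left out, agreeing with both answers.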

    Which one of the following hash functions  on integers will distribute keys most uniformly over 10 buckets numbered 0 to 9 for $i$ ranging from 0 to 2020?

     

    1. $h(i) = i^2 \text{mod } 10$
    2. $h(i) = i^3 \text{mod } 10$
    3. $h(i) = (11 \ast i^2) \text{mod } 10$
    4. $h(i) = (12 \ast i^2) \text{mod } 10$

    Consider a hash table of size seven, with starting index zero, and a hash function $(3x + 4)\mod 7$. Assuming the hash table is initially empty, which of the following is the contents of the table when the sequence 1, 3, 8, 10 is inserted into the table using closed hashing? Note that − denotes an empty location in the table.

    1. 8, −, −, −, −, −, 10

    2. 1, 8, 10, −, −, −, 3

    3. 1, −, −, −, −, −, 3

    4. 1, 10, 8, −, −, −, 3

    Answers: Hashing

    Selected Answer

    Since mod 10 is used, the last digit matters.

    If you cube all the numbers from 0 to 9, you get the following:

    Number    Cube    Last Digit in Cube
      0        0              0 
      1        1              1 
      2        8              8 
      3        27             7 
      4        64             4 
      5        125            5 
      6        216            6
      7        343            3
      8        512            2
      9        729            9  

    Therefore all numbers from 0 to 2020 are (nearly) equally divided among the 10 buckets. If we make a table for squares, we do not get an equal distribution: in the following table, 1, 4, 6 and 9 are repeated, so those buckets get more entries, while buckets 2, 3, 7 and 8 stay empty.

    Number   Square     Last Digit in Square
      0        0              0 
      1        1              1 
      2        4              4 
      3        9              9 
      4        16             6
      5        25             5 
      6        36             6
      7        49             9
      8        64             4
      9        81             1  

    http://geeksquiz.com/gate-gate-cs-2015-set-2-question-43/

    12 votes -- Anu ( 9k points)
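    The bucket counts can be checked directly (a small Python sketch added for verification):

```python
from collections import Counter

# h(i) = i^3 mod 10 over i = 0..2020 (option B): nearly uniform.
cube_buckets = Counter(pow(i, 3, 10) for i in range(2021))

# h(i) = i^2 mod 10: buckets 2, 3, 7 and 8 stay empty.
square_buckets = Counter(pow(i, 2, 10) for i in range(2021))
```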
    Selected Answer

    The answer is B.

    1 will occupy location 0, 3 will occupy location 6, and 8 hashes to location 0, which is already occupied, so it is placed in the next free location, i.e., location 1.

    Since 10 also clashes, so it will be hashed to location 2.

    4 votes -- Gate Keeda ( 17.7k points)
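    The probing sequence can be replayed with a short sketch (Python; `closed_hash` is an editor's name):

```python
def closed_hash(keys, size=7):
    """Closed hashing (linear probing) with h(x) = (3x + 4) mod 7."""
    table = [None] * size
    for x in keys:
        idx = (3 * x + 4) % size
        while table[idx] is not None:      # collision: probe next slot
            idx = (idx + 1) % size
        table[idx] = x
    return table
```

    Inserting 1, 3, 8, 10 gives 1, 8, 10, −, −, −, 3, i.e., option B.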

    Consider the process of inserting an element into a $Max \: Heap$, where the $Max \: Heap$ is represented by an $array$. Suppose we perform a binary search on the path from the new leaf to the root to find the position for the newly inserted element, the number of $comparisons$ performed is:

    1. $\Theta(\log_2n)$

    2. $\Theta(\log_2\log_2n)$

    3. $\Theta(n)$

    4. $\Theta(n\log_2n)$

    In a min-heap with $n$ elements with the smallest element at the root, the 7th smallest element can be found in time

    1. $\Theta(n \log n)$
    2. $\Theta(n)$
    3. $\Theta(\log n)$
    4. $\Theta(1)$

    Let $S=\left \{ x_{1},....,x_{n} \right \}$ be a set of $n$ numbers. Consider the problem of storing the elements of $S$ in an array $A\left [ 1...n \right ]$ such that the following min-heap property is maintained for all $2\leq i\leq n: A [\lfloor i/2\rfloor]\leq A[i]$. (Note that $\lfloor x\rfloor$ is the largest integer that is at most $x$). Which of the following statements is TRUE?

    1. This problem can be solved in $O(\log n)$ time.
    2. This problem can be solved in $O (n)$ time but not in $O(\log n)$ time.
    3. This problem can be solved in $O(n\log n)$ time but not in $O (n)$ time.
    4. This problem can be solved in $O \left ( n^{2} \right )$ time but not in $O(n\log n)$ time.
    5. None of the above.

    The number of elements that can be sorted in $Θ(\log n)$ time using heap sort is

    1. $\Theta(1)$
    2. $\Theta(\sqrt{\log n})$
    3. $\Theta(\frac{\log n}{\log \log n})$
    4. $\Theta(\log n)$

    Answers: Heap

    Selected Answer
    The number of elements on the path from the new leaf to the root is $\log n$, and these elements are sorted from leaf to root, so we can do a binary search on this path, which needs $\Theta(\log \log n)$ comparisons.

    Since a heap is a complete binary tree, in an array implementation the depth of every node is known from its index, and this path length $\log n$ is the input size for the binary search.
    13 votes -- Vikrant Singh ( 11.1k points)
    Selected Answer

    Time to find the smallest element in a min-heap: one retrieve operation, $\Theta(1)$.
    Time to find the second smallest element: it lies among the top $2^2 - 1 = 3$ elements, so at most 3 check operations, $\Theta(1)$.

    Time to find the 7th smallest element: it lies among the top $2^7 - 1 = 127$ elements (the first 7 levels), so $O(127)$ check operations, $\Theta(1)$.

    In short if the number of required operations is independent of the input size n, then it is always $\Theta(1)$. 

    (Here, we are doing a level order traversal of the heap and checking the elements)

    If we are not allowed to traverse the heap and allowed only default heap-operations, we will be restricted with doing Extract-min 7 times which would be $O(\log n)$. 

    22 votes -- gatecse ( 10.8k points)
    Selected Answer

    Store the elements in an array and then call build_heap(A); build_heap takes $O(n)$ time.

    So, option B is correct.

    But if we build the heap by inserting the elements one by one, the total complexity becomes $O(n \log n)$, because each insertion takes $O(\log n)$ and there are $n$ insertions.

    7 votes -- Sujit Kumar Muduli ( 189 points)
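The linear-time build argument above can be sketched in C. This is a minimal illustration, not code from the question; `build_min_heap` and `sift_down` are hypothetical names. Only the first half of the array is sifted down, since the rest of the nodes are leaves:

```c
#include <stddef.h>

/* Sift A[i] down until the min-heap property holds below it
   (0-based array: children of i are 2i+1 and 2i+2). */
static void sift_down(int A[], size_t n, size_t i) {
    for (;;) {
        size_t smallest = i;
        size_t l = 2 * i + 1, r = 2 * i + 2;
        if (l < n && A[l] < A[smallest]) smallest = l;
        if (r < n && A[r] < A[smallest]) smallest = r;
        if (smallest == i) break;
        int tmp = A[i]; A[i] = A[smallest]; A[smallest] = tmp;
        i = smallest;
    }
}

/* Heapify bottom-up: nodes n/2 .. n-1 are leaves and already heaps,
   so only the first n/2 nodes are sifted down -- O(n) in total. */
void build_min_heap(int A[], size_t n) {
    for (size_t i = n / 2; i-- > 0; )
        sift_down(A, n, i);
}
```

After the call, A[0] is the minimum and every parent is at most its children, matching the $A[\lfloor i/2\rfloor]\leq A[i]$ property of the question (in 1-based terms).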
    Selected Answer

    To sort $k$ elements in a heap, complexity is $\Theta (k \log k)$. Lets assume there are $\frac{\log n}{\log \log n}$ elements in the heap.

    Complexity = $\Theta\left(\frac{\log n}{\log \log n} \log\left(\frac{\log n}{\log \log n}\right)\right) $

    $=\Theta \left( \frac{\log n}{\log \log n} \left({\log \log n}- { \log \log \log n}\right) \right )$

    $= \Theta \left ({ \log n} - \frac{ \log n \log \log \log n}{\log \log n} \right )$

    $=\Theta (\log n)$ (as shown below)

    So, (c) is the answer.



    $\log \log n > \log \log \log n$

    $\implies \frac{\log \log \log n}{\log \log n} < 1$

    $\implies \frac{ \log n \log \log \log n}{\log \log n} < \log n$

    $\implies \Theta \left ( { \log n} - \frac{ \log n \log \log \log n}{\log \log n} \right ) =\Theta (\log n)$

    37 votes -- Arjun Suresh ( 156k points)

    Suppose the letters $a, \,b, \,c, \,d, \,e, \,f$ have probabilities $\frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \frac{1}{16}, \frac{1}{32}, \frac{1}{32}$, respectively. 

    Which of the following is the Huffman code for the letter $a, \,b, \,c, \,d, \,e, \,f$?

    1. 0, 10, 110, 1110, 11110, 11111

    2. 11, 10, 011, 010, 001, 000

    3. 11, 10, 01, 001, 0001, 0000

    4. 110, 100, 010, 000, 001, 111

    Suppose the letters $a, \,b, \,c, \,d, \,e, \,f$ have probabilities $\frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \frac{1}{16}, \frac{1}{32}, \frac{1}{32}$, respectively. 

    What is the average length of the Huffman code for the letters $a, \,b, \,c, \,d, \,e, \,f$?

    1. 3
    2. 2.1875
    3. 2.25
    4. 1.9375

    The characters a to h have the set of frequencies based on the first 8 Fibonacci numbers as follows

    a : 1, b : 1, c : 2, d : 3, e : 5, f : 8, g : 13, h : 21

    A Huffman code is used to represent the characters. What is the sequence of characters corresponding to the following code?

    110111100111010

    1. fdheg
    2. ecgdf
    3. dchfg
    4. fehdg

    Answers: Huffman Code

    Selected Answer

    Based on the probabilities, we can say the probable frequency of the letters will be 

    16, 8, 4, 2, 1, 1

    Now, the Huffman tree is constructed by repeatedly merging the two least frequent nodes: e and f merge into a node of frequency 2, which merges with d (4), then c (8), then b (16), then a (32), giving the codes 0, 10, 110, 1110, 11110, 11111.

    So, A is the answer.

    https://www.siggraph.org/education/materials/HyperGraph/video/mpeg/mpegfaq/huffman_tutorial.html

    5 votes -- Arjun Suresh ( 156k points)
    Selected Answer

    Answer should be D.

    The optimal code lengths are 1, 2, 3, 4, 5, 5, so the average length is $\frac{1}{2}(1)+\frac{1}{4}(2)+\frac{1}{8}(3)+\frac{1}{16}(4)+\frac{1}{32}(5)+\frac{1}{32}(5) = 1.9375$.

    3 votes -- sonam vyas ( 8.3k points)
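The value 1.9375 can be checked mechanically. The sketch below (hypothetical helper name `huffman_avg_len`) hard-codes the code lengths 1, 2, 3, 4, 5, 5 from option A of the previous question; any optimal code for these probabilities has the same lengths:

```c
/* Expected Huffman code length for letters a..f with
   probabilities 1/2, 1/4, 1/8, 1/16, 1/32, 1/32 and
   optimal code lengths 1, 2, 3, 4, 5, 5. */
double huffman_avg_len(void) {
    double p[]   = {1.0/2, 1.0/4, 1.0/8, 1.0/16, 1.0/32, 1.0/32};
    int    len[] = {1, 2, 3, 4, 5, 5};
    double s = 0.0;
    for (int i = 0; i < 6; i++)
        s += p[i] * len[i];   /* sum of p_i * l_i */
    return s;
}
```

All the terms are dyadic rationals, so the double-precision sum is exactly 1.9375.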
    Selected Answer

    Answer is A. The Huffman tree is built as follows: the two least frequent characters are taken as the children of a newly made node, and the frequency of the new node is the sum of those two child frequencies. The same procedure is repeated until a single node remains.

    110111100111010 = 110 11110 0 1110 10 = fdheg

    7 votes -- Arjun Suresh ( 156k points)

    What does the following code do?

    var a, b: integer;
    begin
        a:=a+b;
        b:=a-b;
        a:=a-b;
    end;

     

    1. exchanges a and b
    2. doubles a and stores in b
    3. doubles b and stores in a
    4. leaves a and b unchanged
    5. none of the above
       

     

    Suppose $c = \langle c[0], \dots, c[k-1]\rangle$ is an array of length $k$, where all the entries are from the set {0, 1}. For any positive integers $a \text{ and } n$, consider the following pseudocode.

    DOSOMETHING (c, a, n)

    $z \leftarrow 1$
    for $i \leftarrow 0 \text{ to } k-1$

         do $z \leftarrow z^2 \text{ mod } n$
         if c[i]=1
               then $z \leftarrow (z \times a) \text{ mod } n$
    return z

    If $k=4, c = \langle 1, 0, 1, 1 \rangle , a = 2, \text{ and } n=8$, then the output of DOSOMETHING(c, a, n) is _______.

     

     

    Consider the following code.

    def brian(n):
        count = 0
        while n != 0:
            n = n & (n - 1)
            count = count + 1
        return count

    Here $n$ is meant to be an unsigned integer. The operator & considers its arguments in binary and computes their bit wise $AND$. For example, $22$ & $15$ gives $6$, because the binary (say 8-bit) representation of $22$ is $00010110$ and the binary representation of $15$ is $00001111$, and the bit-wise $AND$ of these binary strings is $00000110$, which is the binary representation of $6$. What does the function $\text{brian}$ return?

    1. The highest power of $2$ dividing $n$, but zero if $n$ is zero.
    2. The number obtained by complementing the binary representation of $n$.
    3. The number of ones in the binary representation of $n$.
    4. The code might go into an infinite loop for some $n$.
    5. The result depends on the number of bits used to store unsigned integers.

    In the following Pascal program segment, what is the value of X after the execution of the program segment?

    X := -10; Y := 20;
    If X > Y then if X < 0 then X := abs(X) else X := 2*X;
    1. 10
    2. -20
    3. -10
    4. None

     

    Assume that X and Y are non-zero positive integers. What does the following Pascal program segment do?

    while X <> Y do
    if  X > Y then
        X := X - Y
    else
        Y := Y - X;
    write(X);
    
    1. Computes the LCM of two numbers

    2. Divides the larger number by the smaller number

    3. Computes the GCD of two numbers

    4. None of the above

     

    1. Consider the following Pascal function where $A$ and $B$ are non-zero positive integers. What is the value of $GET(3, 2)$?
      function GET(A,B:integer): integer;
      begin
          if B=0 then
              GET:= 1
          else if A < B then
              GET:= 0
          else
              GET:= GET(A-1, B) + GET(A-1, B-1)
      end;

       

    2. The Pascal procedure given for computing the transpose of an $N \times N, (N>1)$ matrix $A$ of integers has an error. Find the error and correct it. Assume that the following declaration are made in the main program
      const
          MAXSIZE=20;
      type
          INTARR=array [1..MAXSIZE,1..MAXSIZE] of integer;
      Procedure TRANSPOSE (var A: INTARR; N : integer);
      var
          I, J, TMP: integer;
      begin
          for I:=1 to N – 1 do
          for J:=1 to N do
          begin
              TMP:= A[I, J];
              A[I, J]:= A[J, I];
              A[J, I]:= TMP
          end
      end;
      

       

    Consider the following C function.

    int fun(int n) {
        int x=1, k;
        if (n==1) return x;
        for (k=1; k<n; ++k)
            x = x + fun(k) * fun (n-k);
        return x;
    }

    The return value of fun(5) is ______.

    What is the output printed by the following program?

    #include <stdio.h>
    
    int f(int n, int k) {
        if (n == 0) return 0;
        else if (n % 2) return f(n/2, 2*k) + k;
        else return f(n/2, 2*k) - k;
    }
    
    int main () {
        printf("%d", f(20, 1));
        return 0;
    }
    
    
    A) 5
    B) 8
    C) 9
    D) 20

    The following function computes the value of  $\binom{m}{n}$ correctly for all legal values $m$ and $n$ ($m ≥1, n ≥ 0$ and $m > n$)

    int func(int m, int n)
    {
        if (E) return 1;
        else return(func(m -1, n) + func(m - 1, n - 1));
    }


    In the above function, which of the following is the correct expression for E?

    1. (n = = 0) || (m = = 1)
    2. (n = = 0) && (m = = 1)
    3. (n = = 0) || (m = = n)
    4. (n = = 0) && (m = = n)

    What function of x, n is computed by this program?

    Function what(x, n:integer): integer:
    Var
        value : integer
    begin
        value := 1
        if n > 0 then
        begin
            if n mod 2 =1 then
                value := value * x;
            value := value * what(x*x, n div 2);
        end;
        what := value;
    end;
    

     

    Consider the following C function definition

    int Trial (int a, int b, int c)
    {
        if ((a>=b) && (c<b)) return b;
        else if (a>=b) return Trial(a, c, b);
        else return Trial(b, a, c);
    }
    

    The functional Trial:

     

    1. Finds the maximum of a, b, and c

    2. Finds the minimum of a, b, and c

    3. Finds the middle number of a, b, c

    4. None of the above

     

    In the following C program fragment, j, k, n and TwoLog_n are integer variables, and A is an array of integers. The variable n is initialized to an integer ≥ 3, and TwoLog_n is initialized to the value of $2 * \lceil \log_2(n) \rceil$

    for (k = 3;  k <= n; k++)
            A[k] = 0;
    for (k = 2; k <= TwoLog_n; k++)
        for (j = k+1; j <= n; j++)
            A[j] = A[j] || (j%k);
    for (j = 3; j <= n; j++)
        if (!A[j]) printf("%d", j);
    

    The set of numbers printed by this program fragment is

    1. $\left\{m \mid m \leq n, (\exists i)\left[m=i!\right]\right\}$

    2. $\left\{m \mid m \leq n, (\exists i) \left[m=i^2\right]\right\}$

    3. $\left\{m \mid m \leq n, \text{m is prime} \right\}$

    4. { }

    Consider the following function:

    int unknown(int n) { 
        int i, j, k = 0; 
        for (i = n/2; i <= n; i++) 
            for (j = 2; j <= n; j = j*2) 
                k = k + n/2; 
        return (k); 
    } 

    The return value of the function is

    (A) $\Theta(n^2)$      (B) $\Theta(n^2\log n)$      (C) $\Theta(n^3)$      (D) $\Theta(n^3\log n)$

    Consider the following C-function in which a[n] and b[m] are two sorted integer arrays and c[n+m] be another integer array,

    void xyz(int a[], int b [], int c []){ 
        int i,j,k; 
        i=j=k=0; 
        while ((i<n) && (j<m)) 
            if (a[i] < b[j]) c[k++] = a[i++]; 
            else c[k++] = b[j++]; 
    }

    Which of the following condition(s) hold(s) after the termination of the while loop?

    1. $j< m,k=n+j-1$ and $a[n-1]< b[j]$ if $i=n$
    2. $i<n,k=m+i-1$ and $b[m-1]\leq a[i]$ if $j=m$

    (A) only (i) 
    (B) only (ii) 
    (C) either (i) or (ii) but not both 
    (D) neither (i) nor (ii) 

     

    Let $A$ be the square matrix of size $n \times n$. Consider the following pseudocode. What is the expected output?

    C=100;
    for i=1 to n do
        for j=1 to n do
        {
            Temp = A[i][j]+C;
            A[i][j] = A[j][i];
            A[j][i] = Temp -C;
        }
    for i=1 to n do
        for j=1 to n do 
            output (A[i][j]);

    (A) The matrix A itself

    (B) Transpose of the matrix A

    (C) Adding 100 to the upper diagonal elements and subtracting 100 from lower diagonal elements of A

    (D) None of the above

    Consider the following C function in which size is the number of elements in the array E

    int MyX(int *E, unsigned int size) 
    { 
       int Y = 0; 
       int Z; 
       int i, j, k; 
    
       for(i = 0; i< size; i++) 
              Y = Y + E[i]; 
              
        for(i=0; i < size; i++) 
            for(j = i; j < size; j++)
            {
                Z = 0; 
                for(k = i; k <= j; k++) 
                   Z = Z + E[k];
                if(Z > Y) 
                   Y = Z; 
            } 
       return Y; 
    } 

    The value returned by the function MyX is the

    (A) maximum possible sum of elements in any sub-array of array E.

    (B) maximum element in any sub-array of array E.

    (C) sum of the maximum elements in all possible sub-arrays of array E.

    (D) the sum of all the elements in the array E.

    Consider the following C function.

    For large values of y, the return value of the function f best approximates

    float f(float x, int y) {
        float p, s; int i;
        for (s=1,p=1,i=1; i<y; i++) {
            p *= x/i;
            s += p;
        }
        return s;
    }
    
    
    1. $x^y$
    2. $e^x$
    3. $\text{ln} (1+x)$
    4. $x^x$

    Consider the function func shown below: 

    int func(int num) { 
       int count = 0; 
       while (num) { 
         count++; 
         num>>= 1; 
       } 
       return (count); 
    } 

    The value returned by func(435) is ________

    What does the following algorithm approximate? (Assume $m > 1, \epsilon >0$).

    x = m;
    y = 1;
    While (x-y > ϵ)
    {
        x = (x+y)/2;
        y = m/x;
    }
    print(x);
    1. $\log \, m$
    2. $m^2$
    3. $m^{\frac{1}{2}}$
    4. $m^{\frac{1}{3}}$

    Answers: Identify Function

    Selected Answer
    Answer is simply A, i.e., it swaps the values of the two variables. Take any two values for a and b and perform the given operations on them.
    8 votes -- Gate Keeda ( 17.7k points)
    Selected Answer

    Initially k = 4, c = [1, 0, 1, 1], a = 2, n = 8.

    Now let's iterate through the function step by step:

    z = 1 (at the start of DOSOMETHING)

    i = 0: z = 1*1 % 8 = 1; c[0] = 1, so z = 1*2 % 8 = 2.

    i = 1: z = 2*2 % 8 = 4; c[1] = 0, so z is unchanged.

    i = 2: z = 4*4 % 8 = 0; c[2] = 1, so z = 0*2 % 8 = 0.

    No need to check further: every operation that updates z is multiplicative, so once z becomes 0 it stays 0.

    The output is 0.

    12 votes -- Tamojit Chatterjee ( 1.9k points)
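The trace above is left-to-right square-and-multiply: DOSOMETHING computes $a^{e} \bmod n$, where $e$ is the number whose binary digits (most significant first) are $c[0..k-1]$. A direct C transcription (function name in lowercase is mine):

```c
/* Left-to-right square-and-multiply: returns a^e mod n, where e is the
   number whose binary digits (MSB first) are c[0..k-1]. */
int dosomething(const int c[], int k, int a, int n) {
    int z = 1;
    for (int i = 0; i < k; i++) {
        z = (z * z) % n;        /* square: shifts the exponent left  */
        if (c[i] == 1)
            z = (z * a) % n;    /* multiply: brings in the next 1-bit */
    }
    return z;
}
```

With c = <1, 0, 1, 1> the exponent is $1011_2 = 11$, and $2^{11} \bmod 8 = 0$, matching the trace.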
    Selected Answer

     

    Option C. It returns the number of 1's in the binary representation of n:
    n & (n-1) clears the rightmost set bit of n in each iteration.

    e.g.,

    Suppose n = 15 = 00001111 (binary)

    n-1 = 14 (00001110)

       00001111 
    &  00001110

    ---------------------
       00001110
     

    4 votes -- Avdhesh Singh Rana ( 1.7k points)
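A runnable C version of the question's function (the classic Kernighan bit count), transcribed directly from the pseudocode:

```c
/* Counts set bits: each n & (n-1) clears the lowest set bit of n,
   so the loop runs once per 1 in the binary representation. */
int brian(unsigned int n) {
    int count = 0;
    while (n != 0) {
        n = n & (n - 1);
        count = count + 1;
    }
    return count;
}
```

For example, brian(22) is 3, since 22 = 10110 in binary.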
    Selected Answer

    X remains unchanged, as the outer if condition (X > Y) is false; so X stays -10 and the answer is C.

    This is the classic if-else issue: in both C and Pascal, an else always binds to the nearest unmatched if.

    https://en.wikipedia.org/wiki/Dangling_else

     

    if (x>y)
    {
       if (x<0)
           x=abs(x)
        else
           x=2*x
    }

     

    5 votes -- Akash ( 32.4k points)
    Selected Answer
    Answer: C

    Let X = 3 and Y = 7.
    1st pass: X=3, Y=4
    2nd pass: X=3, Y=1
    3rd pass: X=2, Y=1
    4th pass: X=1, Y=1
    write (X), which writes 1.

    Ref: http://www.naturalnumbers.org/EuclidSubtract.html
    8 votes -- Rajarshi Sarkar ( 30.1k points)
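The trace above is Euclid's original subtraction form of the GCD. A C rendering of the Pascal fragment (the name `gcd_subtract` is mine), sketched under the question's assumption that both inputs are positive:

```c
/* Subtractive Euclidean algorithm: repeatedly subtract the smaller
   value from the larger until the two meet at gcd(x, y).
   Both x and y must be positive. */
int gcd_subtract(int x, int y) {
    while (x != y) {
        if (x > y)
            x = x - y;
        else
            y = y - x;
    }
    return x;
}
```

Running it on the traced example gives gcd_subtract(3, 7) = 1, as in the pass-by-pass table.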
    Selected Answer
    A. GET(3, 2) = 3.

    For B, the loop bounds should be:

    for I:=2 to N do
    for J:=1 to (I-1) do
    begin
        TMP:= A[I, J];
        A[I, J]:= A[J, I];
        A[J, I]:= TMP
    end

    so that each off-diagonal pair is swapped exactly once.
    2 votes -- Gabbar ( 12.3k points)
    ans for A is 3.

    ans for B is:

    the loops must visit each index pair only once, e.g. I from 2 to N with J from 1 to I-1; if the loops visit every pair twice, the transposed elements are put back in their original places.
    2 votes -- jayendra ( 6.8k points)
    Selected Answer

    fun(1) = 1;

    fun(2) = 1 + fun(1) * fun(1) = 1 + 1 = 2;

    fun(3) = 1 + fun(1) * fun(2) + fun(2) * fun(1) = 5;

    fun(4) = 1 + fun(1) * fun(3) + fun(2) * fun(2) + fun(3) * fun(1) = 1 + 5 + 4 + 5 = 15;

    fun(5) = 1 + fun(1) * fun(4) + fun(2) * fun(3) + fun(3) * fun(2) + fun(4) * fun(1) = 1 + 15 + 10 + 10 + 15  = 51;


     

    More formal way:

     

    The recurrence relation is

    $f(n) = \begin{cases} 1, n = 1 \\ 1 + \sum_{i=1}^{n-1} f(i) \times f(n-i), n > 1 \end{cases}$

    $f(1) = 1$

     

    $f(2) = 1+ f(1).f(1) \\= 1 + 1.1 = 2$

     

    $f(3) = 1 + f(1). f(2) + f(2).f(1) \\= 1+ 1.2 + 2.1 = 5$

     

    $f(4) = 1 + f(1).f(3) + f(2).f(2) + f(3).f(1) \\= 1+ 1.5 + 2.2 + 5.1 = 15$

     

    $f(5) = 1 + f(1).f(4) + f(2).f(3) + f(3). f(2)+ f(4).f(1) \\= 1 + 1.15 + 2.5 + 5.2 + 15.1 = 51$

    16 votes -- Arjun Suresh ( 156k points)
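The recurrence can also be checked by running the question's function as-is:

```c
/* The question's function, verbatim: for n > 1,
   fun(n) = 1 + sum over k of fun(k) * fun(n-k). */
int fun(int n) {
    int x = 1, k;
    if (n == 1) return x;
    for (k = 1; k < n; ++k)
        x = x + fun(k) * fun(n - k);
    return x;
}
```

The values 1, 2, 5, 15, 51 match the hand computation above.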
    Selected Answer
    The calls unwind as follows:

    f(0, 32) = 0
    f(1, 16) = f(0, 32) + 16 = 16
    f(2, 8) = f(1, 16) - 8 = 8
    f(5, 4) = f(2, 8) + 4 = 12
    f(10, 2) = f(5, 4) - 2 = 10
    f(20, 1) = f(10, 2) - 1 = 9

    So the answer is C.
    7 votes -- Gate Keeda ( 17.7k points)
    Selected Answer
    Answer: C

    Because $\binom{m}{0}$ = 1 and $\binom{m}{m}$ = 1.
    8 votes -- Rajarshi Sarkar ( 30.1k points)
    Selected Answer

    answer: $x^n$ (the function is binary exponentiation).

    5 votes -- ankitrokdeonsns ( 8.4k points)
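The Pascal function translates directly to C (same recursion, same name), which makes the $x^n$ claim easy to check: an odd n contributes one factor of x, and the problem then halves via (x*x, n div 2):

```c
/* Binary (fast) exponentiation, mirroring the Pascal function `what`:
   if n is odd, multiply in one factor of x; then recurse on (x*x, n/2). */
int what(int x, int n) {
    int value = 1;
    if (n > 0) {
        if (n % 2 == 1)
            value = value * x;
        value = value * what(x * x, n / 2);
    }
    return value;
}
```

For example, what(2, 10) is 1024 after only $O(\log n)$ recursive calls.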
    Selected Answer
    Take a = b = c = 1 (the columns a, b, c in the intended trace). The only statement that returns a value requires c < b, and with a = b = c the condition (a >= b) && (c < b) never holds, so Trial recurses forever and never returns. Answer D. 

    14 votes -- Arjun Suresh ( 156k points)
    Selected Answer

    The nested loop takes each integer k from 2 to $2\lceil\log_2 n\rceil$ and, for every j up to n that is not a multiple of k, sets A[j] to 1. For example, for k = 2 and n = 10, A[3], A[5], A[7] and A[9] are made 1; similarly for 3, 4, ..., up to $2\lceil\log_2 n\rceil$. So an entry A[p] can stay 0 only if p is a multiple of every k in 2, 3, ..., $2\lceil\log_2 n\rceil$, and the least such common multiple already exceeds n. So, for no index p is A[p] 0, nothing is printed, and the answer is D. 

    Suppose the line

    A[j] = A[j] || (j%k);

    is replaced with

    A[j] = A[j] || !(j%k);

    Now, the nested loop takes each integer k from 2 to $2\lceil\log_2 n\rceil$ and sets A[j] to 1 for every multiple j of k up to n. For example, for k = 2 and n = 10, A[4], A[6], A[8] and A[10] are made 1; similarly for 3, 4, ..., up to $2\lceil\log_2 n\rceil$. So, composite indices of A get a 1 while prime indices keep a 0, and we print j whenever A[j] is 0, meaning j is prime. 

    6 votes -- Arjun Suresh ( 156k points)
    Selected Answer

    The outer loop runs about $n/2$ times and the inner loop runs $\log_2 n$ times (each iteration doubles j, and j stops at n, so the j loop iterates $\log_2 n$ times). 

    In each innermost iteration, k is incremented by $n/2$. So, overall, $n/2$ is added $\frac{n}{2}\log_2 n$ times to k, whose initial value is 0. So, the final value of k is $\Theta(n^2 \log n)$.

    15 votes -- Arjun Suresh ( 156k points)
    Selected Answer

    The while loop add elements from a and b (whichever is smaller) to c and terminates when either of them exhausts. So, when loop terminates either i = n or j = m. 

    Suppose i = n. This would mean all elements from array a are added to c => k must be incremented by n. c would also contain j elements from array b. So, number of elements in c would be n+j and hence k = n + j. 

    Similarly, when j = m, k = m + i. 

    Hence, option (D) is correct. (Had k started from -1 and not 0 and we used ++k inside loop, answer would have been option (C))

     

    17 votes -- Arjun Suresh ( 156k points)
    Selected Answer
    A.

    For each pair (i, j) the code simply swaps A[i][j] and A[j][i] (the +C and -C cancel). Each upper-triangular element is interchanged with its mirror image in the lower triangle, and the same pair is interchanged again when the mirrored index pair (j, i) comes up, so the final computed matrix A is the same as the input matrix A.
    11 votes -- Gate Keeda ( 17.7k points)
    Selected Answer

    answer is (A) maximum possible sum of elements in any sub-array of array E.

    int MyX (int *E, unsigned int size)
    {
        int Y = 0;
        int Z;
        int i, j, k;
        // calculate the sum of the elements of the array E and store it in Y
        for (i = 0; i < size; i++)   
            Y = Y + E[i]; 
        // calculate the sum of every possible subarray E[i..j]
        for (i = 0; i < size; i++)        
            for (j = i; j < size; j++)       
            {                                    
                Z = 0;                    
                for (k = i; k <= j; k++)  
                    Z = Z + E[k]; 
                // if this subarray's sum beats the current maximum, keep it
                if (Z > Y)           
                    Y = Z;
            }
        // ultimately returns the maximum possible sum of elements in any subarray of E    
        return Y;  
    }
    
    

     

    12 votes -- Kalpna Bhargav ( 3.1k points)
    Selected Answer

    $$\begin{array}{l}
    i = 1 &\rm then& p = x &\&& s = 1 + x\\[1em]
    i = 2 &\rm then& p = \frac{x^2}{2} &\&& s = 1+x+\frac{x^2}{2}
    \end{array}$$

    As $y$ tends to infinity, $s$ tends to  $e^x$.

    Hence, the correct answer is option B.

    12 votes -- Pooja ( 26.4k points)
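The loop accumulates the Taylor series $1 + x + \frac{x^2}{2!} + \dots$, since p always holds $\frac{x^i}{i!}$. The question's function, transcribed with the stray comma in its header removed, converges to $e^x$ quickly:

```c
#include <math.h>

/* Partial sum of the Taylor series of e^x: after iteration i,
   p = x^i / i! and s = 1 + x + x^2/2! + ... + x^i/i!. */
float f(float x, int y) {
    float p, s; int i;
    for (s = 1, p = 1, i = 1; i < y; i++) {
        p *= x / i;
        s += p;
    }
    return s;
}
```

Already for y around 30 the result agrees with exp(x) to float precision for small x.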
    Selected Answer

    Ans - 9

    $435 = (110110011)_2$ 

    num >>= 1 shifts num one bit to the right in every execution of the while loop. The loop body executes 9 times; on the 10th test num is zero.

    So count is incremented 9 times.

    Note:

    Shifting a number one bit position to the right has the effect of dividing it by 2:

    8 >> 1 = 4    // In binary: (00001000) >> 1 = (00000100)
     
    6 votes -- Prasanna Ranganathan ( 2.5k points)
    Selected Answer

    Putting y = m/x into x = (x + y)/2:

    $x = (x + m/x)/2$

    $\Rightarrow 2x^2 = x^2 + m$

    $\Rightarrow x = m^{1/2}$

    or we can check by putting in 2-3 different values.

    11 votes -- gate_asp ( 583 points)
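The loop is the Babylonian (Newton) iteration for $\sqrt m$. A C sketch with the same stopping rule (the name `approx_sqrt` is mine; m > 1 and eps > 0 as in the question):

```c
#include <math.h>

/* Babylonian method: x and y = m/x always straddle sqrt(m), and
   averaging x with m/x narrows the gap until x - y <= eps,
   so x converges to sqrt(m) from above. */
double approx_sqrt(double m, double eps) {
    double x = m, y = 1.0;
    while (x - y > eps) {
        x = (x + y) / 2.0;
        y = m / x;
    }
    return x;
}
```

Since $x \geq \sqrt m \geq y$ throughout (by AM-GM), the returned x is within eps of $\sqrt m$.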

    Consider the Insertion Sort procedure given below, which sorts an array L of size $n\left ( \geq 2 \right )$ in ascending order:

    begin     
        for xindex:= 2 to n do        
            x := L [xindex];
            j:= xindex - 1;
            while j > 0 and L[j] > x do
                L[j + 1]:= L[j];
                j:= j - 1;
            end {while}
            L[j + 1]:= x;
        end{for}   
    end

    It is known that insertion sort makes at most n (n - 1) / 2 comparisons. Which of the following is true?

    1. There is no input on which insertion Sort makes n (n - 1) / 2 comparisons.
    2. Insertion Sort makes n (n - 1) / 2 comparisons when the input is already sorted in ascending order.
    3. Insertion Sort makes n (n - 1) / 2 comparisons only when the input is sorted in descending order.
    4. There are more than one input orderings where insertion sort makes n (n - 1) / 2 comparisons.
    5. Insertion Sort makes n (n - 1) / 2 comparisons whenever all the elements of L are not distinct.

    Answers: Insertion Sort

    Selected Answer

    In the worst case insertion sort makes n(n-1)/2 comparisons, which happens when the input is sorted in descending order, e.g. for n = 5:

    Pass 1: 50 40 30 20 10    (0 comparisons)
    Pass 2: 40 50 30 20 10    (1 comparison)
    Pass 3: 30 40 50 20 10    (2 comparisons)
    Pass 4: 20 30 40 50 10    (3 comparisons)
    Pass 5: 10 20 30 40 50    (4 comparisons)

    In general, 1 + 2 + ... + (n-1) = n(n-1)/2 comparisons, so option C is true.

    2 votes -- Umang Raman ( 11.5k points)
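The count can be checked with an instrumented C transcription of the Pascal procedure; the `comparisons` counter and the function name are mine, not part of the question:

```c
/* Insertion sort that counts the `L[j] > x` comparisons.
   On a strictly descending input every pass shifts all the way
   to the front, giving 1 + 2 + ... + (n-1) = n(n-1)/2 comparisons. */
int insertion_sort_comparisons(int L[], int n) {
    int comparisons = 0;
    for (int xindex = 1; xindex < n; xindex++) {
        int x = L[xindex];
        int j = xindex - 1;
        while (j >= 0) {
            comparisons++;            /* the L[j] > x test */
            if (L[j] <= x) break;
            L[j + 1] = L[j];          /* shift the larger element right */
            j--;
        }
        L[j + 1] = x;
    }
    return comparisons;
}
```

A descending array of 5 elements yields 10 = 5*4/2 comparisons; an already-sorted one yields only n-1.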

    Let $P_1, P_2,\dots , P_n $be $n$ points in the $xy$-plane such that no three of them are collinear. For every pair of points $P_i$ and $P_j$, let $L_{ij}$ be the line passing through them. Let $L_{ab}$ be the line with the steepest gradient among all $n(n -1)/2$ lines.

    The time complexity of the best algorithm for finding $P_a$ and $P_b$ is

    1. $\Theta\left(n\right)$
    2. $\Theta\left(n\log n\right)$
    3. $\Theta\left(n\log^2 n\right)$
    4. $\Theta\left(n^2\right)$

    Answers: Lines Curves

    Answer B. Sort the points by x-coordinate in $\Theta(n \log n)$ time; the steepest line must pass through two points adjacent in this order, because if a point lay between $P_a$ and $P_b$ in x-order, the line from it to one of $P_a$, $P_b$ would be at least as steep. A linear scan over the n-1 adjacent pairs then finds $P_a$ and $P_b$, for $\Theta(n \log n)$ overall.
    1 votes -- Shreya Roy ( 1.4k points)

    Linked lists are not suitable data structures for which one of the following problems?

    1. Insertion sort

    2. Binary search

    3. Radix sort

    4. Polynomial manipulation

     

    The following C function takes a singly-linked list as input argument. It modifies the list by moving the last element to the front of the list and returns the modified list. Some part of the code is left blank.

    typedef struct Node 
    {
        int value;
        struct Node *next;
    } Node;    
    Node *move_to_front(Node *head) 
    {
        Node *p, *q;
        if ((head == NULL) || (head -> next == NULL))
            return head;
        q = NULL; 
        p = head;
        while (p->next != NULL)
        {
            q=p;
            p=p->next;
        }
        _______________ 
        
        return head;  
        
    }    

    Choose the correct alternative to replace the blank line.

    1. q=NULL; p->next = head; head = p;
    2. q->next = NULL; head = p; p->next = head;
    3. head = p; p->next =q; q->next = NULL;
    4. q->next = NULL; p->next = head; head = p;
    Consider a singly linked list having $n$ nodes. The data items $d_1, d_2, \dots d_n$ are stored in these $n$ nodes. Let $X$ be a pointer to the $j^{th}$ node $(1 \leq j \leq n)$ in which $d_j$ is stored. A new data item $d$ stored in node with address $Y$ is to be inserted. Give an algorithm to insert $d$ into the list to obtain a list having items $d_1, d_2, \dots, d_{j}, d,\dots, d_n$ in order without using the header.

    Let P be a singly linked list. Let Q be the pointer to an intermediate node x in the list. What is the worst-case time complexity of the best-known algorithm to delete the node x from the list ?

    1. O(n)
    2. O(log2 n)
    3. O(log n)
    4. O(1)

    Consider the following piece of ‘C’ code fragment that removes duplicates from an ordered list of integers.

    Node    *remove_duplicates (Node* head, int *j)
    {
        Node *t1, *t2;
        *j=0;
        t1 = head;
        if (t1 != NULL) t2 = t1->next;
        else return head;
        *j = 1;
        if (t2 == NULL)
            return head;
        while (t2 != NULL)
        {
            if (t1->val != t2->val) ----------------> (S1)
            {
                (*j)++; t1->next = t2; t1 = t2; -----> (S2)
            }
            t2 = t2->next;
        }
        t1->next = NULL;
        return head;
    }
    

    Assume the list contains $n$ elements ($n\geq 2$) in the following questions.

    1. How many times is the comparison in statement S1 made?

    2. What is the minimum and the maximum number of times statements marked S2 get executed?

    3. What is the significance of the value in the integer pointed to by $j$ when the function completes?

    Answers: Linked Lists

    Selected Answer
    B. Because in binary search we need to have access to the mid of the list in constant time. and finding the mid itself in a linked list takes $O(n)$ time which makes no sense to Binary search which otherwise takes $O(\log n)$.
    11 votes -- Gate Keeda ( 17.7k points)
    Selected Answer
    As per the given code, p points to the last node, which should become the head of the modified list; q points to the node just before the last, which should become the new tail. So the blank should be: q->next = NULL; p->next = head; head = p;

    Answer D
    10 votes -- Sankaranarayanan P.N ( 9.9k points)
    Selected Answer

    These steps insert the node at address $Y$ after the node pointed to by $X$:
    $\begin{align*} temp &= X \rightarrow next\\ X \rightarrow next &= Y \\ Y \rightarrow next &= temp \end{align*}$

    3 votes -- Amar Vashishth ( 20.9k points)
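With the usual C node type, the three assignments above splice Y in after X. A minimal sketch (`Node` and `insert_after` are illustrative names, not from the question):

```c
#include <stddef.h>

typedef struct Node {
    int value;
    struct Node *next;
} Node;

/* Splice node Y in immediately after node X:
   temp = X->next; X->next = Y; Y->next = temp. */
void insert_after(Node *X, Node *Y) {
    Node *temp = X->next;
    X->next = Y;
    Y->next = temp;
}
```

Note that no header pointer is needed: only X and Y are touched, matching the question's requirement.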
    In the worst case x could be the last node; the usual trick of copying the next node's value into x and deleting the next node fails there, so a full traversal of the list is required to find the predecessor of x. Therefore the answer is (A).
    14 votes -- suraj ( 3.7k points)
    Selected Answer

    a. As we compare the elements pairwise, n elements require exactly n-1 comparisons.

    b. S2 executes only when adjacent values are distinct, so a maximum of n-1 times (all elements distinct) and a minimum of 0 times (all elements equal).

    c. When the function completes, the integer pointed to by j holds the number of distinct elements in the ordered list.

    1 votes -- Rajesh Pradhan ( 7.6k points)

    Consider the following program for summing the entries of the array $b$: array $[0 .. N-1]$ of integers, where $N$ is a positive integer. (The symbol '<>' denotes 'not equal to').

    var      
        i, s: integer;
    Program
        i:= 0;
        s:= 0;
    [*] while i <> N do
            s := s + b[i];
            i := i + 1;
        od

    Which of the following gives the invariant that holds at the beginning of each loop, that is, each time the program arrives at point [*] ?

    1. $s = \sum\limits^{N}_{j=0}b[j] \;\&\; 0 \leq i \leq N$
    2. $s = \sum\limits^{i-1}_{j=0}b[j] \;\&\; 0 \leq i < N$
    3. $s = \sum\limits^{i}_{j=0}b[j] \;\&\; 0 < i \leq N$
    4. $s = \sum\limits^{N}_{j=1}b[j] \;\&\; 0 \leq  i < N$
    5. $s = \sum\limits^{i-1}_{j=0}b[j] \;\&\; 0 \leq  i \leq N$

     

    The following function computes $X^{Y}$ for positive integers $X$ and $Y$.

    int exp (int X, int Y) { 
         int res =1, a = X, b = Y;
       
         while (b != 0) { 
             if (b % 2 == 0) {a = a * a; b = b/2; } 
             else         {res = res * a; b = b - 1; } 
         } 
         return res; 
    }

    Which one of the following conditions is TRUE before every iteration of the loop?

    1. $X^{Y} = a^{b}$
    2. $(res * a)^{Y} = (res * X)^{b}$
    3. $X^{Y} = res * a^{b}$
    4. $X^{Y} = (res * a)^{b}$

     

     

    Answers: Loop Invariants

    Selected Answer

    Whenever we encounter the [*], the variable $s$ holds the sum of all elements $b[0]$ to $b[i-1]$.

    When we first enter the loop, $i=0$, and $s$ doesn't have any elements summed up.

    When we last enter the loop, $i = (N-1)$ and $s$ contains the sum of elements $b[0]$ through $b[N-2]$.

    We leave the loop when $i=N$, and $s$ gets the sum of elements $b[0]$ to $b[N-1]$

    The only option that matches this behavior is option E

    $$s = \sum\limits^{i-1}_{j=0}b[j] \;\&\; 0 \leq  i \leq N$$

    3 votes -- Pragy Agarwal ( 14.5k points)
    Selected Answer

    Answer: C.

    Initially res = 1, a = X, b = Y, so $res \cdot a^b = X^Y$. Each branch preserves this invariant: when b is even, a becomes $a^2$ and b becomes $b/2$, leaving $a^b$ unchanged; when b is odd, res becomes $res \cdot a$ and b becomes $b-1$, leaving $res \cdot a^b$ unchanged. Hence $X^Y = res \cdot a^b$ holds before every iteration.

    13 votes -- Akash ( 32.4k points)
    Four Matrices $M_1, M_2, M_3,$ and $M_4$ of dimensions $ p \times q, \:\:q \times r, \:\:r \times s$ and $s \times t$ respectively can be multiplied in several ways with different number of total scalar multiplications. For example when multiplied as $((M_1 \times M_2) \times (M_3 \times M_4))$, the total number of scalar multiplications is $pqr + rst + prt$. When multiplied as $(((M_1 \times M_2) \times M_3) \times M_4)$, the total number of scalar multiplications is $pqr + prs + pst$.

    If $p=10, q=100, r=20, s=5$ and $t=80$, then the minimum number of scalar multiplications needed is

    (A) 248000

    (B) 44000

    (C) 19000

    (D) 25000
    Let $A_{1}, A_{2}, A_{3}$ and $A_{4}$ be four matrices of dimensions $10 \times 5, 5 \times 20, 20 \times 10,$ and $10 \times 5$, respectively. The minimum number of scalar multiplications required to find the product $A_{1}A_{2}A_{3}A_{4}$ using the basic matrix multiplication method is _________.

    Answers: Matrix Chain Ordering

    Selected Answer

    Answer is C.

    $\text{ Ordering: }\\ \text{ First Multiply } M_2 \times M_3. \text{ This requires 100*20*5 multiplications.}$ $\text{ Then Multiply } M_1 \times (M_2 \times M_3). \text{ This requires 10*100*5 multiplications. }$ $\text{ Then Multiply } (M_1 \times (M_2 \times M_3)) \times M_4. \text{ This requires 10*5*80 multiplications. }$ $\text{ Total 19000 Multiplications.}$

     


     

    Brute Force approach - anyone can do. 

    No. of possible orderings for 4 matrices is $C_3$, where $C_3$ is the $3^{rd}$ Catalan number, given by $n=3$ in $\frac{1}{n+1} {}^{2n}C_n = 5.$

    So, here we have 

    1. $(M_1 \times M_2) \times (M_3 \times M_4)$
    2. $(M_1 \times (M_2 \times M_3)) \times M_4$
    3. $((M_1 \times M_2) \times M_3) \times M_4$
    4. $M_1 \times (M_2 \times (M_3 \times M_4))$
    5. $M_1 \times ((M_2 \times M_3) \times M_4)$

    Each of these would give no. of multiplications required as

    1. $pqr + rst + prt $
    2. $qrs + pqs + pst$
    3. $pqr + prs + pst$
    4. $rst + qrt + pqt$
    5. $qrs + qst + pst$

    The last two contain a $qt$ term, which is by far the largest: $qt = 8000$, and multiplied by any other dimension it exceeds every value in the choices. So we can drop those two from consideration and evaluate only the first three.

    1. $pqr + rst + prt  = 20000 + 8000 + 16000 = 44000$
    2. $qrs + pqs + pst = 10000 + 5000 + 4000 = 19000$ - smallest value in choice, we can stop here. 
    3. $pqr + prs + pst$

     

    Dynamic Programming Solution (should know Matrix Chain Ordering algorithm)

    Here we have a chain of length 4.

    Dynamic programming solution of Matrix chain ordering has the solution

    $m[i, j] = \begin{cases} 0 &\text{ if } i=j\\ \min_{i\leq k < j } m[i][k] + m[k+1][j] + p_{i-1}p_{k}p_{j} &\text{ if } i < j\end{cases}$

    So, we can fill the following table starting with the diagonals and moving upward diagonally. Here $k < j$ but $\geq i.$

     

      j=1 j=2 j=3 j=4
    i=1 0 $p_0p_1p_2 = 20000$ $\min(10000+p_0p_1p_3, \\20000+p_0p_2p_3) = 15000$ $\min(50000+p_0p_1p_4, \\20000+8000+p_0p_2p_4, \\15000+p_0p_3p_4) = 19000$
    i=2   0 $p_1p_2p_3 = 10000$ $\min(8000+p_1p_2p_4, \\10000+p_1p_3p_4) = 50000$
    i=3     0 $p_2p_3p_4 = 8000$
    i=4       0

     

    Our required answer is given by $m[1,4] = 19000.$
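    The table above can be cross-checked by implementing the recurrence directly; this is a minimal sketch (array size fixed at 8 for brevity), not part of the original answer:

```c
#include <limits.h>

/* Matrix-chain order by dynamic programming. p[] holds the dimensions:
   matrix M_i is p[i-1] x p[i], for i = 1..n. m[i][j] is the minimum
   number of scalar multiplications needed to compute M_i ... M_j. */
int matrix_chain(const int p[], int n) {
    int m[8][8] = {{0}};                       /* sketch: supports n <= 7 */
    for (int len = 2; len <= n; len++) {       /* chain length */
        for (int i = 1; i + len - 1 <= n; i++) {
            int j = i + len - 1;
            m[i][j] = INT_MAX;
            for (int k = i; k < j; k++) {      /* split point */
                int cost = m[i][k] + m[k+1][j] + p[i-1] * p[k] * p[j];
                if (cost < m[i][j])
                    m[i][j] = cost;
            }
        }
    }
    return m[1][n];
}
```

    For `p = {10, 100, 20, 5, 80}` this returns 19000, matching $m[1,4]$ above.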

    8 votes -- Sona Praneeth Akula ( 3.8k points)
    Selected Answer

    Answer is 1500.

    Matrix parenthesization: $A_1((A_2A_3)A_4)$

    $A_2A_3$ costs $5 \times 20 \times 10 = 1000$, $(A_2A_3)A_4$ costs $5 \times 10 \times 5 = 250$, and $A_1((A_2A_3)A_4)$ costs $10 \times 5 \times 5 = 250$, giving $1000 + 250 + 250 = 1500$.

    9 votes -- Akash ( 32.4k points)

    It takes $O(n)$ time to find the median in a list of $n$ elements, which are not necessarily in sorted order while it takes only $O(1)$ time to find the median in a list of $n$ sorted elements. How much time does it take to find the median of $2n$ elements which are given as two lists of $n$ sorted elements each?

    1. $O (1)$
    2. $O \left(\log n\right)$ but not $O (1)$
    3. $O (\sqrt{n})$ but not $O \left(\log n\right)$
    4. $O (n)$ but not $O (\sqrt{n})$
    5. $O \left(n \log n\right)$ but not $O (n)$

    Answers: Median

    Selected Answer

    1) Calculate the medians m1 and m2 of the input arrays ar1[]
       and ar2[] respectively.
    2) If m1 and m2 are equal, return m1 (or m2).
    3) If m1 is greater than m2, then the median is present in one
       of the two subarrays below:
        a)  From the first element of ar1 to m1 (ar1[0 to n/2])
        b)  From m2 to the last element of ar2  (ar2[n/2 to n-1])
    4) If m2 is greater than m1, then the median is present in one
       of the two subarrays below:
       a)  From m1 to the last element of ar1  (ar1[n/2 to n-1])
       b)  From the first element of ar2 to m2 (ar2[0 to n/2])
    5) Repeat the above process until the size of both subarrays
       becomes 2.
    6) If the size of both arrays is 2, compute the median as:
        Median = (max(ar1[0], ar2[0]) + min(ar1[1], ar2[1]))/2

    Time complexity: $O(\log n)$

    http://www.geeksforgeeks.org/median-of-two-sorted-arrays/
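    A sketch of the divide-and-conquer above in C (not from the original answer; it follows the linked approach and assumes both arrays are sorted and of equal length $n$):

```c
static int max2(int x, int y) { return x > y ? x : y; }
static int min2(int x, int y) { return x < y ? x : y; }

/* Median of one sorted array of length n. */
static double median1(const int a[], int n) {
    return (n % 2) ? a[n/2] : (a[n/2 - 1] + a[n/2]) / 2.0;
}

/* Median of the 2n elements given as two sorted arrays of n elements
   each. Every step discards about half of each array, so O(log n). */
double median2(const int a[], const int b[], int n) {
    if (n == 1) return (a[0] + b[0]) / 2.0;
    if (n == 2) return (max2(a[0], b[0]) + min2(a[1], b[1])) / 2.0;

    double m1 = median1(a, n), m2 = median1(b, n);
    if (m1 == m2) return m1;

    /* The overall median lies between m1 and m2; keep the half of each
       array on the median's side (one extra element when n is even). */
    if (m1 < m2) {
        if (n % 2 == 0) return median2(a + n/2 - 1, b, n - n/2 + 1);
        return median2(a + n/2, b, n - n/2);
    }
    if (n % 2 == 0) return median2(b + n/2 - 1, a, n - n/2 + 1);
    return median2(b + n/2, a, n - n/2);
}
```

    Each recursive call keeps the two subarrays the same length, so the argument about discarding equally many elements below and above the median goes through at every level.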

    5 votes -- Umang Raman ( 11.5k points)

    Let a and b be two sorted arrays containing n integers each, in non-decreasing order. Let c be a sorted array containing 2n integers obtained by merging the two arrays a and b. Assuming the arrays are indexed starting from 0, consider the following four statements

    1. a[i] ≥ b[i] => c[2i] ≥ a[i]
    2. a[i] ≥ b[i] => c[2i] ≥ b[i]
    3. a[i] ≥ b[i] => c[2i] ≤ a[i]
    4. a[i] ≥ b[i] => c[2i] ≤ b[i]

    Which of the following is TRUE?

    1. only I and II
    2. only I and IV
    3. only II and III
    4. only III and IV

    For merging two sorted lists of sizes $m$ and $n$ into a sorted list of size $m+n$, we require comparisons of

    1. $O(m)$

    2. $O(n)$

    3. $O(m+n)$

    4. $O(\log m + \log n)$

     

    Assume that a mergesort algorithm in the worst case takes 30 seconds for an input of size 64. Which of the following most closely approximates the maximum input size of a problem that can be solved in 6 minutes?

     

    1. 256
    2. 512
    3. 1024
    4. 2048

    Answers: Merge Sort

    Selected Answer
    a[i] ≥ b[i]

    Since both a and b are sorted in the beginning, there are i elements smaller than or equal to a[i] (i starts from 0), and similarly i elements smaller than or equal to b[i]. So, a[i]  ≥ b[i] means there are 2i elements smaller than or equal to a[i], and hence in the merged array a[i] will come after these 2i elements (its index will be > 2i). So, c[2i] ≤ a[i] (equality takes care of the "equal to" case which comes when array contains repeated elements).

    Similarly, a[i] ≥ b[i] says for b that, there are not more than 2i elements smaller than b[i] in the sorted array (i elements from b, and maximum another i elements from a). So, b[i] ≤ c[2i]

    So, II and III are correct -> option (C)
    7 votes -- Arjun Suresh ( 156k points)
    Selected Answer
    It is C.

    The number of moves, however, is always m+n, so we can term it $\Theta(m+n)$. But the number of comparisons varies with the input: in the best case there are min(m, n) comparisons, and in the worst case m+n-1.
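    The two comparison counts can be seen from a standard merge; a sketch (not from the original answer) that counts them explicitly:

```c
/* Merge two sorted arrays a[0..m-1] and b[0..n-1] into out[0..m+n-1],
   returning the number of comparisons made. Moves are always m+n, but
   comparisons range from min(m,n) (one array exhausted immediately,
   e.g. every element of a smaller than b[0]) to m+n-1 (interleaved). */
int merge_count(const int a[], int m, const int b[], int n, int out[]) {
    int i = 0, j = 0, k = 0, comparisons = 0;
    while (i < m && j < n) {
        comparisons++;
        out[k++] = (a[i] <= b[j]) ? a[i++] : b[j++];
    }
    while (i < m) out[k++] = a[i++];   /* no comparisons for the tail */
    while (j < n) out[k++] = b[j++];
    return comparisons;
}
```

    With `a = {1,2,3}` and `b = {4,5,6}` the first loop exhausts `a` after min(m,n) = 3 comparisons; with interleaved input like `{1,3,5}` and `{2,4,6}` it makes m+n-1 = 5.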
    10 votes -- Gate Keeda ( 17.7k points)
    Selected Answer
    The worst case time complexity of Mergesort is $k \times n \log n$ for an input of size $n$.

    For an input of size $64$, the algorithm takes $30s$. Therefore,
    $$\begin{align}
    k \times 64 \log_2{64} &= 30s\\[1em]
    k \times 384 &= 30s\\[1em]
    \implies k &= 0.078125s
    \end{align}$$
    Let the size of the problem that can be solved in $6$ minutes be $x$. Then,
    $$k \times x \log_2 x = 360s$$
    From this, we get:
    $$\begin{align}
    x \log_2 x &= \frac{360s}{0.078125s}\\[1em]
    \implies x &= 512
    \end{align}$$
    24 votes -- Pragy Agarwal ( 14.5k points)

    $G$ is a graph on $n$ vertices and $2n-2$ edges. The edges of $G$ can be partitioned into two edge-disjoint spanning trees. Which of the following is NOT true for $G$?

    1. For every subset of $k$ vertices, the induced subgraph has at most $2k-2$ edges.
    2. The minimum cut in $G$ has at least 2 edges.
    3. There are at least 2 edge-disjoint paths between every pair of vertices.
    4. There are at least 2 vertex-disjoint paths between every pair of vertices.

     

    Answers: Minimum

    Selected Answer
    There are 2 spanning trees (a spanning tree connects all $n$ vertices) for $G$ which are edge-disjoint. A spanning tree on $n$ nodes requires $n-1$ edges, so 2 edge-disjoint spanning trees require $2n-2$ edges. As $G$ has only $2n-2$ edges, it is clear that it doesn't have any edge outside the two spanning trees. Now let's see the cases:

    Let's take any subgraph of $G$ with $k$ vertices. The remaining subgraph will have $n-k$ vertices. Between these two subgraphs (provided both have at least one vertex) there must be at least 2 edges, as we have 2 spanning trees in $G$. So, (b) is TRUE. Also, the subgraph with $k$ vertices cannot have more than $2k-2$ edges, as this would leave the other subgraph with $n-k$ vertices fewer than $2n-2k$ edges, making 2 spanning trees impossible in it. So, (a) is also TRUE.

    A spanning tree covers all the vertices. So, 2 edge-disjoint spanning trees in $G$ means, between every pair of vertices in $G$ we have two edge-disjoint paths (length of paths may vary).  So, (c) is also TRUE.

    So, that leaves option (d) as the answer. It is not hard to give a counterexample for (d).
    10 votes -- Arjun Suresh ( 156k points)

    Let $G$ be a weighted connected undirected graph with distinct positive edge weights. If every edge weight is increased by the same value, then which of the following statements is/are TRUE?

    • P: Minimum spanning tree of $G$ does not change.
    • Q: Shortest path between any pair of vertices does not change.
    1. $P$ only
    2. $Q$ only
    3. Neither $P$ nor $Q$
    4. Both $P$ and $Q$

     

    Consider the following undirected connected graph $G$ with weights on its edges as given in the figure below. A minimum spanning tree is a spanning tree of least weight and a maximum spanning tree is one with largest weight. A second best minimum spanning tree is one whose weight is the smallest among all spanning trees that are not minimum spanning trees in $G$.

    Which of the following statements is TRUE in the above graph? (Note that all the edge weights are distinct in the above graph)

    1. There is more than one minimum spanning tree and similarly, there is more than one maximum spanning tree here.
    2. There is a unique minimum spanning tree, however there is more than one maximum spanning tree here.
    3. There is more than one minimum spanning tree, however there is a unique maximum spanning tree here.
    4. There is more than one minimum spanning tree and similarly, there is more than one second-best minimum spanning tree here.
    5. There is unique minimum spanning tree, however there is more than one second-best minimum spanning tree here.
    Let G be a complete undirected graph on 4 vertices, having 6 edges with weights being 1, 2, 3,  4, 5, and 6. The maximum possible weight that a minimum weight spanning tree of G can have is __________

    $G=(V, E)$ is an undirected simple graph in which each edge has a distinct weight, and $e$ is a particular edge of $G$. Which of the following statements about the minimum spanning trees $(MSTs)$ of $G$ is/are TRUE?

    I. If $e$ is the lightest edge of some cycle in $G$, then every MST of $G$ includes $e$.

    II. If $e$ is the heaviest edge of some cycle in $G$, then every MST of $G$ excludes $e$.

    1. I only.
    2. II only.
    3. Both I and II.
    4. Neither I nor II.

     

    Answers: Minimum Spanning Trees

    Selected Answer

    Statement P is true.

    For statement Q consider a simple graph with 3 nodes.

        A    B    C
    A   0    1    100
    B   1    0    2
    C   100  2    0

    Shortest path from A to C is A-B-C = 1 + 2 = 3 

     Now if the value of each edge is increased by 100,

        A    B    C
    A   0    101  200
    B   101  0    102
    C   200  102  0

    The shortest path from A to C is A-C = 200, (A-B-C = 101 + 102 = 203)

    Hence option A is correct.

    21 votes -- ryan sequeira ( 1.6k points)
    Selected Answer
    In the graph all edge weights are distinct, so we get a unique minimum and a unique maximum spanning tree.

    In the minimum spanning tree, every cycle must exclude its maximum-weight edge.

    Here we have two cycles of 3 edges each, ade and cgk.

    For the second best minimum spanning tree: exclude edge ae and include edge de.

    The other way: exclude edge cg and include edge gk.

    So, (e) should be the answer.
    1 votes -- Gabbar ( 12.3k points)
    Selected Answer

    Graph G can be like this:

    18 votes -- shaiklam09 ( 199 points)
    Selected Answer
    I think the answer is option B.

    Statement 2 is absolutely correct: if e is the heaviest edge in a cycle, every MST excludes it.

    Statement 1 is not fully right. Consider a complete graph with 4 vertices, edge weights 1, 2, 5, 6 on the non-diagonal edges and 3 and 4 on the diagonals. Then 4, 5, 6 form a cycle, and we can exclude its lightest edge e (4) from an MST.

    So I think the answer is B.
    20 votes -- Sreyas S ( 1.6k points)

    Which of the following statements are TRUE?

    1. The problem of determining whether there exists a cycle in an undirected graph is in P.
    2. The problem of determining whether there exists a cycle in an undirected graph is in NP.
    3. If a problem A is NP-Complete, there exists a non-deterministic polynomial time algorithm to solve A.

    (A) 1, 2 and 3     (B) 1 and 2 only     (C) 2 and 3 only     (D) 1 and 3 only

    Consider the following two problems on undirected graphs:

    • $\alpha$: Given G(V, E), does G have an independent set of size |V| - 4?
    • $\beta$: Given G(V, E), does G have an independent set of size 5?

    Which one of the following is TRUE?

    1. $\alpha$ is in P and $\beta$ is NP-complete

    2. $\alpha$ is NP-complete and $\beta$ is in P

    3. Both $\alpha$ and $\beta$ are NP-complete

    4. Both $\alpha$ and $\beta$ are in P

    Let $\pi_A$ be a problem that belongs to the class NP. Then which one of the following is TRUE?

    1. There is no polynomial time algorithm for $\pi_A$.

    2. If $\pi_A$ can be solved deterministically in polynomial time, then P = NP.

    3. If $\pi_A$ is NP-hard, then it is NP-complete.

    4. $\pi_A$ may be undecidable.

     

    Ram and Shyam have been asked to show that a certain problem $\Pi$ is NP-complete. Ram shows a polynomial time reduction from the 3-SAT problem to $\Pi$, and Shyam shows a polynomial time reduction from $\Pi$ to 3-SAT. Which of the following can be inferred from these reductions?

    1. $\Pi$ is NP-hard but not NP-complete

    2. $\Pi$ is in NP, but is not NP-complete

    3. $\Pi$ is NP-complete

    4. $\Pi$ is neither NP-hard, nor in NP

    Suppose a language $L$ is $\mathbf{NP}$ complete. Then which of the following is FALSE?

    1. $L \in \mathbf{NP}$
    2. Every problem in $\mathbf{P}$ is polynomial time reducible to $L$.
    3. Every problem in $\mathbf{NP}$ is polynomial time reducible to $L$.
    4. The Hamilton cycle problem is polynomial time reducible to $L$.
    5. $\mathbf{P} \neq \mathbf{NP}$ and $L \in \mathbf{P}$.

    Let $A_{TM}$ be defined as follows:

    $A_{TM}=\left \{ \left \langle M, w \right \rangle \mid \text{ The Turing machine $M$ accepts the word } w \right \}$

    And let $L$ be some $\mathbf{NP}-$ complete language. Which of the following statements is FALSE?

    1. $L\in \mathbf{NP}$
    2. Every problem in $\mathbf{NP}$ is polynomial time reducible to $L$.
    3. Every problem in $\mathbf{NP}$ is polynomial time reducible to $A_{TM}$.
    4. Since $L$ is $\mathbf{NP}-$ complete, $A_{TM}$ is polynomial time reducible to $L$.
    5. $A_{TM} \notin \mathbf{NP}$.

    02. Choose the correct alternatives (more than one may be correct) and write the corresponding letters only:

    (vi) Which of the following problems is not NP-hard?

    1. Hamiltonian circuit problem
    2. The 0/1 Knapsack problem
    3. Finding bi-connected components of a graph
    4. The graph coloring problem

     

    Given an integer $n\geq 3$, consider the problem of determining if there exist integers $a,b\geq 2$ such that $n=a^{b}$. Call this the forward problem. The reverse problem is: given $a$ and $b$, compute $a^{b}$ (mod b). Note that the input length for the forward problem is $\lfloor\log n\rfloor + 1$, while the input length for the reverse problem is $\lfloor\log a\rfloor + \lfloor\log b\rfloor + 2$. Which of the following statements is TRUE?

    1. Both the forward and reverse problems can be solved in time polynomial in the lengths of their respective inputs.
    2. The forward problem can be solved in polynomial time, however the reverse problem is $NP$-hard.
    3. The reverse problem can be solved in polynomial time, however the forward problem is $NP$-hard.
    4. Both the forward and reverse problem are $NP$-hard.
    5. None of the above.

    Let $L$ be a language over $\Sigma$ i.e., $L\subseteq \Sigma^*$. Suppose $L$ satisfies the two conditions given below.

    1. $L$ is in NP and

    2. For every $n$, there is exactly one string of length $n$ that belongs to $L$.

          Let $L^c$ be the complement of $L$ over $\Sigma^*$. Show that $L^c$ is also in NP.

     

    The subset-sum problem is defined as follows: Given a set S of n positive integers and a positive integer W, determine whether there is a subset of S whose elements sum to W. An algorithm Q solves this problem in O(nW) time. Which of the following statements is false?

    1. Q solves the subset-sum problem in polynomial time when the input is encoded in unary
    2. Q solves the subset-sum problem in polynomial time when the input is encoded in binary
    3. The subset sum problem belongs to the class NP
    4. The subset sum problem is NP-hard

    Which of the following is not implied by $P=NP$?

    1. 3SAT can be solved in polynomial time.
    2. Halting problem can be solved in polynomial time.
    3. Factoring can be solved in polynomial time.
    4. Graph isomorphism can be solved in polynomial time.
    5. Travelling salesman problem can be solved in polynomial time.

     

    Let S be an NP-complete problem and Q and R be two other problems not known to be in NP. Q is polynomial time reducible to S and S is polynomial-time reducible to R. Which one of the following statements is true?

    1. R is NP-complete
    2. R is NP-hard
    3. Q is NP-complete
    4. Q is NP-hard

    The problem 3-SAT and 2-SAT are 

    1. both in P

    2. both NP complete

    3. NP-complete and in P respectively

    4. undecidable and NP complete respectively

    Let SHAM3 be the problem of finding a Hamiltonian cycle in a graph $G=(V,E)$  with $|V|$  divisible by 3 and DHAM3  be the problem of determining if a Hamiltonian cycle exists in such graphs. Which one of the following is true? 

    1. Both DHAM3 and SHAM3 are NP-hard 
    2. SHAM3 is NP-hard, but DHAM3 is not 
    3. DHAM3 is NP-hard, but SHAM3 is not 
    4. Neither DHAM3 nor SHAM3 is NP-hard

    Answers: P Np Npc Nph

    Selected Answer

    Cycle detection in a graph is in P, as it can be done using a graph traversal in $O(V+E)$ time.

    Ref: http://www.geeksforgeeks.org/detect-cycle-undirected-graph/

    If a problem is in P then it is also in NP as P is a subset of NP. So, both 1 and 2 are TRUE. 

    Statement 3 is also true as NP-Complete requires a problem to be in NP and for any problem in NP, we have a non-deterministic polynomial time algorithm. 

    So, answer is A- 1, 2 and 3 are TRUE.  
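    A minimal DFS sketch of the $O(V+E)$ cycle check (the adjacency-matrix representation and graph size are assumptions of this sketch, not from the original answer):

```c
#include <stdbool.h>

#define N 4   /* number of vertices (assumption for this sketch) */

/* DFS over an undirected simple graph given as adjacency matrix g.
   A visited neighbour that is not the DFS parent is a back edge,
   i.e. a cycle. */
static bool dfs(bool g[N][N], bool visited[N], int u, int parent) {
    visited[u] = true;
    for (int v = 0; v < N; v++) {
        if (!g[u][v]) continue;
        if (!visited[v]) {
            if (dfs(g, visited, v, u)) return true;
        } else if (v != parent) {
            return true;            /* back edge found */
        }
    }
    return false;
}

bool has_cycle(bool g[N][N]) {
    bool visited[N] = {false};
    for (int u = 0; u < N; u++)     /* handle disconnected graphs */
        if (!visited[u] && dfs(g, visited, u, -1))
            return true;
    return false;
}
```

    Each vertex and edge is processed once, giving the $O(V+E)$ bound mentioned above (with an adjacency list; the matrix form shown here is $O(V^2)$, still polynomial).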

    7 votes -- Arjun Suresh ( 156k points)
    Selected Answer

    Independent Set- a set of vertices in a graph no two of which are adjacent. 

    Maximum independent set problem - Given a graph $G$, find the size of the maximum independent set in it. This problem is NP-hard.

    Independent set decision problem - Given a graph $G$ and a number $k$, does $G$ have an independent set of size $k$. This problem is NP-complete (NP-hard but in NP).

    Now, in the given problem  $\beta$ corresponds to the Independent set decision problem. But there is a difference there. We have $5$ instead of $k$. And this drastically changes the problem statement. We can now give a polynomial time deterministic algorithm for $\beta$. 

    • Find all vertex sets of size $5$. We get ${}^{|V|}C_5$ such vertex sets
    • For each of them, check whether any two of the five vertices are adjacent. Each such check can be done in constant time if we use an adjacency matrix representation

    Thus the whole time complexity reduces to ${}^{|V|}C_5$ which is $O\left({|V|}^5\right)$ and hence polynomial. $({}^{|V|}C_k$ is not polynomial but ${}^{|V|}C_5$ is $)$. 

    Problem $\alpha$ is asking for an independent set of size $|V| - 4$. This is equivalent to asking if $G$ has a vertex cover* of size $4$. Following a similar approach as done for $\beta$ this problem also can be done in polynomial time. 

    So, both $\alpha$ and $\beta$ are in $P$.

    D choice. 


    *Vertex cover of a graph $G$ is a set of vertices such that each edge of the graph is incident on at least one of those vertices.

    Independent Set and Vertex cover Reduction: https://www.cs.cmu.edu/~ckingsf/bioinfo-lectures/npcomplete.pdf
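    The constant-exponent check described above can be sketched directly; the graph size and adjacency-matrix form are assumptions of this sketch:

```c
#include <stdbool.h>

#define V 6   /* number of vertices (assumption for this sketch) */

/* Decide beta directly: does the graph (adjacency matrix g) contain an
   independent set of size 5? The five nested loops enumerate all
   C(V,5) subsets, so the running time is O(V^5) - polynomial, because
   5 is a constant rather than part of the input. */
bool has_independent_set_5(bool g[V][V]) {
    int s[5];
    for (s[0] = 0; s[0] < V; s[0]++)
     for (s[1] = s[0]+1; s[1] < V; s[1]++)
      for (s[2] = s[1]+1; s[2] < V; s[2]++)
       for (s[3] = s[2]+1; s[3] < V; s[3]++)
        for (s[4] = s[3]+1; s[4] < V; s[4]++) {
            bool independent = true;        /* check all 10 pairs */
            for (int i = 0; i < 5 && independent; i++)
                for (int j = i + 1; j < 5; j++)
                    if (g[s[i]][s[j]]) { independent = false; break; }
            if (independent) return true;
        }
    return false;
}
```

    If 5 were replaced by an input parameter $k$, the loop nest depth would depend on the input and the enumeration would no longer be polynomial, which is exactly the distinction the answer draws.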

    4 votes -- Arjun Suresh ( 156k points)
    Selected Answer
    A problem which is in P is also in NP, so A is false. If the problem can be solved deterministically in polynomial time, we still cannot conclude that P = NP; we just put this problem in P. So, B is also false. C is TRUE because that is the definition of NP-complete.

    D is false because all NP problems are not only decidable but decidable in polynomial time using a non-deterministic Turing machine.
    6 votes -- shreya ghosh ( 2.9k points)
    Selected Answer

    C

    Ram's reduction shows ∏ is NP hard.
    Shyam's reduction shows ∏ is in NP.

    So NPC. 

    8 votes -- Anurag Semwal ( 5.8k points)
    Selected Answer

    Option E leads to a contradiction, hence is false.

    We know that $L$ is $\mathbf{NPC}$, hence  $\in \mathbf{NP}$. If $\mathbf{P} \neq \mathbf{NP}$, then $L$ can't be in $\mathbf{P}$

    4 votes -- Pragy Agarwal ( 14.5k points)
    Selected Answer

    $\newcommand{NP}{\mathbf{NP}} \newcommand{NPC}{\mathbf{NPC}}$
    $A_{TM}$ is the language of the Halting Problem. It is undecidable, but Recursively Enumerable.

    $L$ is $\NPC$

    1. True. Any language in $\NPC$ is also in $\NP$ by definition.
    2. True. By definition, any problem in $\NP$ is polynomial time reducible to any $\NPC$ problem.
    3. True. $A_{TM}$ is undecidable. Any language that is decidable is polynomial time reducible to $A_{TM}$!
       
    4. False. $A_{TM}$ is undecidable. No Turing Machine can guarantee an answer in a finite time, let alone a polynomial time.
       
    5. True. $A_{TM}$ is undecidable. It is certainly not in $\NP$.

    Hence, the correct answer is option d.

    5 votes -- Pragy Agarwal ( 14.5k points)
    Selected Answer

    a. Is NPC and hence NP hard.

    b. Is again NP hard (optimization version is NP hard and decision version is NPC). Ref: http://stackoverflow.com/questions/3907545/how-to-understand-the-knapsack-problem-is-np-complete

    c. Is in P. See the algorithm here based on DFS: http://en.wikipedia.org/wiki/Biconnected_component

    d. NPC and hence NP hard. 

    6 votes -- Arjun Suresh ( 156k points)
    Selected Answer

    The reverse problem can be solved in polynomial time as $a^b$ requires at most $\log b$ recursive calls using the approach given below:

    int pow(int a, int b)
    {
        if (b == 0)        /* base case: a^0 = 1 (missing in the original) */
            return 1;
        if (b % 2)
            return a * pow(a*a, b/2);
        else
            return pow(a*a, b/2);
    }

    Now, the forward problem is also solvable in polynomial time. We need to check, for every possible exponent, whether $n$ has an integer root (the candidate roots range from $\sqrt n$ down to $n^{\frac{1}{\log n}}$). Each of these checks can be done using a binary search on the integers from $2$ to $n$, taking $\log n$ steps, so the overall complexity is $(\log n)^2$, which is polynomial in $\log n$ ($\log n$ is the size of the input). So, (a) must be the answer.
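    A sketch of the forward-problem check (fixed-width integers are used here for simplicity; the actual problem works on $\log n$-bit numbers): for each exponent $b \leq \log_2 n$, binary-search for an integer $b$-th root of $n$.

```c
#include <stdbool.h>

/* Raise base to exp, clamping at limit+1 on overflow so that
   comparisons against n stay safe. */
static long long ipow_capped(long long base, int exp, long long limit) {
    long long r = 1;
    while (exp-- > 0) {
        if (r > limit / base) return limit + 1;   /* would exceed limit */
        r *= base;
    }
    return r;
}

/* Forward problem: do integers a,b >= 2 with n = a^b exist?
   About log n candidate exponents, each with a log n-step binary
   search: roughly (log n)^2 steps, polynomial in the input length. */
bool is_perfect_power(long long n) {
    for (int b = 2; (1LL << b) <= n; b++) {   /* 2^b <= n, so b <= log2 n */
        long long lo = 2, hi = n;
        while (lo <= hi) {                    /* binary search for a */
            long long a = lo + (hi - lo) / 2;
            long long v = ipow_capped(a, b, n);
            if (v == n) return true;
            if (v < n) lo = a + 1; else hi = a - 1;
        }
    }
    return false;
}
```

    Note that each binary-search step also performs an exponentiation; with $\log n$-bit arithmetic that adds more polynomial factors, which does not change the conclusion.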
     

    3 votes -- gatecse ( 10.8k points)
    Selected Answer

    Since $L$ is in NP it is decidable (recursive), and so is its complement $L^c$. Now, $L^c$ may or may not be in NP. But we are given that for any string length $n$, exactly one string belongs to $L$, which means that for any string length, all strings but one belong to $L^c$.

    Now, the definition of NP says that all "yes" instances of the problem can be solved in polynomial time using a nondeterministic TM. So, given an instance $\langle L^c, x\rangle$, we non-deterministically examine all words of length $n$, where $n$ is the length of $x$, and check membership in $L$. As soon as we get the word (such a word is sure to exist, as exactly one word of length $n$ belongs to $L$), we see if this word is the same as $x$. If and only if it is not the same, $x \in L^c$, and we get this answer in polynomial time, making $L^c$ an NP problem.

    5 votes -- Arjun Suresh ( 156k points)
    Selected Answer
    The subset-sum problem is NP-complete - there is a reduction proof (see the link below). So, (C) and (D) are true, as an NPC problem is in NP as well as in NPH.

    https://en.wikipedia.org/wiki/Subset_sum_problem

    Now, the complexity of Q is O(nW), where W is an integer.

    (a) Input is encoded in unary. So, the length of the input is equal to the value of the input. So, the complexity is O(nW) where both n and W are linear multiples of the lengths of the inputs, and hence polynomial in terms of the input length. So, (a) is true.

    (b) Input is encoded in binary. So, the length of W will be lg W (for W = 1024, the input length is just 10). So, now W is exponential in terms of the input length of W, and O(nW) also becomes exponential in terms of the input lengths. So, Q is not a polynomial time algorithm and (B) is false.
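    A sketch of an O(nW) algorithm like Q (standard dynamic programming over sums; an illustration, not necessarily the Q of the question). The table has W+1 entries, so the running time is polynomial in the value of W but exponential in its binary length - pseudo-polynomial:

```c
#include <stdbool.h>

/* Subset-sum by dynamic programming in O(nW) time and O(W) space.
   reach[w] becomes true when some subset of the scanned items sums
   to w. The loop bound W is the VALUE of the target, which is why
   this is only pseudo-polynomial for binary-encoded input. */
bool subset_sum(const int s[], int n, int W) {
    bool reach[W + 1];                     /* C99 variable-length array */
    for (int w = 0; w <= W; w++) reach[w] = (w == 0);
    for (int i = 0; i < n; i++)
        for (int w = W; w >= s[i]; w--)    /* descend: use each item once */
            if (reach[w - s[i]]) reach[w] = true;
    return reach[W];
}
```

    The inner loop runs downward over w so that each item contributes to a sum at most once per pass.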
    7 votes -- Arjun Suresh ( 156k points)
    Selected Answer

    I believe Umang is right, Option B is the correct answer.


    Intractability : We are looking for EFFICIENT algorithms.

    Intractable Problems are problems that are decidable, although the algorithm to decide that problem might be efficient (P) or inefficient (NP), but at least an algorithm exists to solve these problems.

    Here we talk about efficient vs inefficient computations.

    Thus the language of problems in P and NP classes is the language of Decidable Problems i.e. Recursive Language.


    Undecidability: We are looking for algorithms.

    Undecidable Problems are problems for which there is no algorithm to solve these problems.

    Here we talk about what can or can not be computed.

    The language of Undecidable Problems are "Recursively Enumerable but not recursive languages" & "Not Recursively Enumerable Languages".


    Clearly we can talk about the intractability of any problem if we know at least one algorithm to solve the problem, if there is no algorithm to solve a problem how can we talk about efficiency?


    Halting Problem is undecidable.

    I guess, all other problems mentioned here are decidable.

    I don't know the most efficient algorithms to solve these problems but at least I can say that Brute force approach will work on all the other options except the Halting Problem.


    What P = NP implies?

    "Any problem that is solved by a non deterministic Turing machine in polynomial time also be solved by some deterministic Turing machine in polynomial time, even if the degree of the polynomial is higher."

    and

    There is neither a Non Deterministic Turing Machine nor Deterministic Turing Machine that can solve the Halting Problem.


    So any inference about P & NP is not going to affect the solvability of Halting Problem, since it is undecidable.

    5 votes -- Anurag Pandey ( 10k points)
    Selected Answer
    Q cannot be claimed to be NP-hard: no NP-hard problem (unless it is also in NP) can be polynomial-time reducible to an NP-complete problem. The answer is B, as an NPC problem is reducible to R, making R NP-hard. But if Q is not NP-hard, there is still confusion about which complexity class it is in!
    7 votes -- Shaun Patel ( 5.9k points)
    Selected Answer
    The only difference between SHAM3 and DHAM3 is that SHAM3 asks to find a Hamiltonian cycle while DHAM3 asks to decide whether one exists; the restriction that $|V|$ is divisible by 3 can be checked in constant time and does not make the problem easier. So the hardness of the two problems is the same. The Hamiltonian cycle problem is NPC, and NPC is a subset of NPH, so both are NP-hard.

    So, option (A)
    6 votes -- Vicky Bajoria ( 3.5k points)

     

     The following function computes the maximum value contained in an integer array $P[  ]$ of size $n$ $(n>=1)$.

                         

    int max (int *p,int n) {
        int a = 0, b=n-1;
        
        while (__________) {
            if (p[a]<= p[b]) {a = a+1;}
            else             {b = b-1;}
        }
        return p[a];
    }

    The missing loop condition is 

    1.  $a != n$ 
    2.  $b != 0$ 
    3.  $b>(a+1)$ 
    4.  $b != a$ 

    Consider the following two C code segments. $Y$ and $X$ are one and two dimensional arrays of size $n$ and $ n \times n$ respectively, where $2 \leq n \leq 10$. Assume that in both code segments, elements of $Y$ are initialized to 0 and each element $X[i][j]$ of array $X$ is initialized to $i+j$. Further assume that when stored in main memory all elements of $X$ are in same main memory page frame.

    Code segment 1:

    // initialize elements of Y to 0
    // initialize elements of X[i][j] of X to i+j
    for (i=0; i<n; i++)
        Y[i] += X[0][i];

    Code segment 2:

    // initialize elements of Y to 0
    // initialize elements of X[i][j] of X to i+j
    for (i=0; i<n; i++)
        Y[i] += X[i][0];

    Which of the following statements is/are correct?

    S1: Final contents of array $Y$ will be same in both code segments

    S2: Elements of array $X$ accessed inside the for loop shown in code segment 1 are contiguous in main memory

    S3: Elements of array $X$ accessed inside the for loop shown in code segment 2 are contiguous in main memory

     

    1. Only S2 is correct
    2. Only S3 is correct
    3. Only S1 and S2 are correct
    4. Only S1 and S3 are correct

    Answers: Programming In C

    Selected Answer
    (d), solved through a basic C fundamentals approach.

    Option (c) fails for $n=2, p = [1, 2]$: with $a=0, b=1$ the condition $b > (a+1)$ is false, the loop never runs, and $p[0] = 1$ is returned instead of the maximum 2.
    15 votes -- sukanyac ( 171 points)
    Selected Answer

    Option C. Only S1 and S2 are correct, because $Y$ ends up with the same elements in both code segments. In code segment 1,

     Y[i] += X[0][i];

    accesses row 0 of $X$. In C, arrays are stored in row-major order, so the addresses of these elements are sequential: we cross a single element each time to move to the next, i.e., they are contiguous in main memory. But in code segment 2,

    Y[i] += X[i][0];

    accesses column 0, so we cross $n$ elements (a whole row) to move to the next accessed element; these are not contiguous.

    12 votes -- Anoop Sonkar ( 4.4k points)

    Consider the Quicksort algorithm. Suppose there is a procedure for finding a pivot element which splits the list into two sub-lists each of which contains at least one-fifth of the elements. Let T(n) be the number of comparisons required to sort n elements. Then

    1. $T(n) \leq 2T(n/5) + n$

    2. $T(n) \leq T(n/5) + T(4n/5) + n$

    3. $T(n) \leq 2T(4n/5) + n$

    4. $T(n) \leq 2T(n/2) + n$

    Let P be a quicksort program to sort numbers in ascending order. Let $t_{1}$ and $t_{2}$ be the time taken by the program for the inputs [1 2 3 4] and [5 4 3 2 1], respectively. Which of the following holds?

    1. $t_{1} = t_{2}$
    2. $t_{1} > t_{2}$
    3. $t_{1} < t_{2}$
    4. $t_{1}=t_{2}+5 \log 5$
    Assume that the last element of the set is used as partition element in Quicksort. If $n$ distinct elements from the set $\left[1\dots n\right]$ are to be sorted, give an input for which Quicksort takes maximum time.

    Quick-sort is run on two inputs shown below to sort in ascending order taking first element as pivot

    1. $1, 2, 3, \dots n$
    2. $n, n-1, n-2, \dots, 2, 1$

    Let $C_1$ and $C_2$ be the number of comparisons made for the inputs (i) and (ii) respectively. Then, 

    1. $C_1 < C_2$
    2. $C_1 > C_2$
    3. $C_1 = C_2$
    4. we cannot say anything for arbitrary $n$

     

    Consider the quick sort algorithm on a set of $n$ numbers, where in every recursive subroutine of the algorithm, the algorithm chooses the median of that set as the pivot. Then which of the following statements is TRUE?

    1. The running time of the algorithm is $\Theta (n).$
    2. The running time of the algorithm is $\Theta (n\log n)$.
    3. The running time of the algorithm is $\Theta (n^{1.5})$.
    4. The running time of the algorithm is $\Theta (n^{2})$.
    5. None of the above.

    Answers: Quicksort

    Selected Answer

    $T(n) \leq T(n/5) + T(4n/5) + n$

    One part contains $n/5$ elements
    and the other part contains $4n/5$ elements
    $+n$ is common to all options, so we need not worry about it.

    Hence, answer = option B

    8 votes -- Amar Vashishth ( 20.9k points)
    Actually, in both cases it will take $O(n^2)$ time: $O(n)$ for the partition algorithm and $T(n-1)$ for the subproblem. As $n$ is the number of inputs, and in the second case there are 5 inputs (greater than the first case's 4), I think $t_1 < t_2$.
    0 votes -- Rohan Ghosh ( 1.6k points)
    The question seems to be wrong. There should be 5 inputs in the first case as well; otherwise $t_1 < t_2$.
    0 votes -- Rohan Ghosh ( 1.6k points)
    Selected Answer
    The algorithm will take maximum time when:
    1) The array is already sorted in ascending order.
    2) The array is already sorted in descending order.
    3) All elements in the array are equal.
    10 votes -- Rajarshi Sarkar ( 30.1k points)
    Selected Answer
    C.

    both are the worst cases of quick sort. (assuming pivot is either first or last element)

    i) is sorted in ascending order.

    ii) is sorted in descending order.
    10 votes -- Gate Keeda ( 17.7k points)
    Selected Answer
    Algorithm is choosing $\text{median} = n/2$ smallest element as pivot.

    Hence, the array is divided as:
    $$\large
    \begin{array}{|c|c|}
    \hline\\
    \substack{\LARGE \left (\frac n 2 -1 \right )\\[0.5em] \text{ elements}}
    \quad&\quad
    \substack{\text{Median at}\\[0.5em]\LARGE \frac n 2 \text{th}\\[0.5em]\text{location}}
    \quad&\quad
    \substack{\LARGE \left (n - \frac n 2 \right )\\[0.5em] \text{ elements}}
    \quad
    \\\\
    \hline
    \end{array}$$

    Therefore quick sort recurrence relation is given by:
    $$\begin{align}
    T(n) &= T \left ( \frac n 2 -1 \right )  + T \left ( n- \frac n 2 \right ) + \Theta(n)\\[1em]
    &= \Theta ( n \log n)
    \end{align}$$

    Hence, Option B is the correct answer.
    6 votes -- Umang Raman ( 11.5k points)

    If we use Radix Sort to sort $n$ integers in the range $\left (n^{k/2}, n^k \right ]$, for some $k > 0$ which is independent of $n$, the time taken would be?

    A) $\Theta(n)$
    B) $\Theta(kn)$
    C) $\Theta(n \log n)$
    D) $\Theta(n^2)$

    Answers: Radix Sort

    Selected Answer

    Answer: C

    The complexity of Radix Sort is $O(wn)$, for $n$ keys which are integers of word size $w$.

    Here, $w = \log_2(n^k) = k \times \log_2(n)$

    So, the complexity is $O(wn) = O(k \times \log_2(n) \times n)$, which leads to option C.

    9 votes -- Rajarshi Sarkar ( 30.1k points)

    Solve the following recurrence relation

    $x_n = 2x_{n-1}-1, n>1$

    $x_1=2$

    Consider the following recurrence relation:

    $T(n)
    = \begin{cases}
    2T (\lfloor\sqrt{n}\rfloor)+ \log n & \text{if }n \geq 2   \\
     1& \text{if }n = 1   
    \end{cases}$

    Which of the following statements is TRUE?

    1. $T(n)$ is $O(\log n)$.
    2. $T(n)$ is $O(\log n. \log \log n)$ but not $O(\log n)$.
    3. $T(n)$ is $O(\log^{3/2} n)$ but not $O(\log n. \log \log n)$.
    4. $T(n)$ is $O(\log^{2} n)$ but not $O(\log^{3/2} n)$.
    5. $T(n)$ is $O(\log^{2} n. \log \log n)$ but not $O(\log^{2} n)$.

    Consider a list of recursive algorithms and a list of recurrence relations as shown below. Each recurrence relation corresponds to exactly one algorithm and is used to derive the time complexity of the algorithm.

      Recursive Algorithm   Recurrence Relation
    P. Binary search I. T(n) = T(n-k) + T(k) + cn
    Q. Merge sort II. T(n) = 2T(n-1) + 1
    R. Quick sort III. T(n) = 2T(n/2) + cn
    S. Tower of Hanoi IV. T(n) = T(n/2) + 1

    Which of the following is the correct match between the algorithms and their recurrence relations?

    1. P-II, Q-III, R-IV, S-I
    2. P-IV, Q-III, R-I, S-II
    3. P-III, Q-II, R-IV, S-I
    4. P-IV, Q-II, R-I, S-III

    Let $T(n)$ be a function defined by the recurrence

    $T(n) = 2T(n/2) + \sqrt n$ for $n \geq 2$ and
    $T(1) = 1$

    Which of the following statements is TRUE?

    1. $T(n) = \Theta(\log n)$
    2. $T(n) = \Theta(\sqrt n)$
    3. $T(n) = \Theta(n)$
    4. $T(n) = \Theta(n \log n)$

    Consider the following recurrence relation:

    $T\left(n\right)=
    \begin{cases}
    T\left(\frac{n}{k}\right)+ T\left(\frac{3n}{4}\right)+ n & \text{if } n \geq 2   \\
     1& \text{if } n=1   
    \end{cases}$

    Which of the following statements is FALSE?

    1. $T(n)$ is $O(n^{3/2})$ when $k=3$.
    2. $T(n)$ is $O(n \log n)$ when $k=3$.
    3. $T(n)$ is $O(n \log n)$ when $k=4$.
    4. $T(n)$ is $O(n \log n)$ when $k=5$.
    5. $T(n)$ is $O(n)$ when $k=5$.

    The recurrence equation
    $ T(1) = 1$
    $T(n) = 2T(n-1) + n, n \geq 2$
    evaluates to

    1. $2^{n+1} - n - 2$
    2. $2^n - n$
    3. $2^{n+1} - 2n - 2$
    4. $2^n + n $

    Consider the function F(n) for which the pseudocode is given below :

    Function F(n)
    begin
    F1 ← 1
    if(n=1) then F ← 3
     else
       For i = 1 to n do
            begin
                C ← 0
               For j = 1 to n – 1 do
               begin C ← C + 1 end
               F1 = F1 * C
           end
     F = F1
    end
    

    [n is a positive integer greater than zero]

    Solve the recurrence relation for a closed form solution of F(n).

    The running time of an algorithm is represented by the following recurrence relation:

    $T(n) =  \begin{cases}
      n & n \leq 3 \\
      T(\frac{n}{3})+cn & \text{ otherwise }
     \end{cases}$

    Which one of the following represents the time complexity of the algorithm?

    1. $\Theta(n)$

    2. $\Theta(n \log  n)$

    3. $\Theta(n^2)$

    4. $\Theta(n^2 \log  n)$

     

    The given diagram shows the flowchart for a recursive function $A(n)$. Assume that all statements, except for the recursive calls, have $O(1)$ time complexity. If the worst case time complexity of this function is $O(n^{\alpha})$, then the least possible value (accurate up to two decimal positions) of $\alpha$ is ________.

    Flow chart for Recursive Function $A(n)$.

     

     

    Let $x_n$ denote the number of binary strings of length $n$ that contain no consecutive 0s.

    Which of the following recurrences does $x_n$ satisfy?

    1. $x_n = 2x_{n-1}$
    2. $x_n = x_{\lfloor n/2 \rfloor} + 1$
    3. $x_n = x_{\lfloor n/2 \rfloor} + n$
    4. $x_n = x_{n-1} + x_{n-2}$

    Let a$_{n}$ represent the number of bit strings of length n containing two consecutive 1s. What is the recurrence relation for $a_{n}$?

    1. $a_{n - 2} + a_{n - 1} + 2^{n - 2}$
    2. $a_{n - 2} + 2a_{n - 1} + 2^{n - 2}$
    3. $2a_{n - 2} + a_{n - 1} + 2^{n - 2}$
    4. $2a_{n - 2} + 2a_{n - 1} + 2^{n - 2}$

    Consider the following recursive C function.

    void get(int n)
    {
        if (n<1) return;
        get (n-1);
        get (n-3);
        printf("%d", n);
    }

    If get(6) function is being called in main() then how many times will the get() function be invoked before returning to the main()?

     

    1. 15
    2. 25
    3. 35
    4. 45

    Let $x_n$ denote the number of binary strings of length $n$ that contain no consecutive 0s.

    The value of $x_5$ is 

    1. 5
    2. 7
    3. 8
    4. 16

    The time complexity of the following C function is (assume n > 0)

    int recursive (int n) {
        if(n == 1)
            return (1);
        else
            return (recursive (n-1) + recursive (n-1));
    }
    1. $O(n)$
    2. $O(n \log n)$
    3. $O(n^2)$
    4. $O(2^n)$

    Consider the following recurrence relation

    $T(1)=1$

    $T(n+1) = T(n)+\lfloor \sqrt{n+1} \rfloor$ for all $n \geq 1$

    The value of $T(m^2)$ for $m \geq 1$ is

    1. $\frac{m}{6}\left(21m-39\right)+4$
    2. $\frac{m}{6}\left(4m^2-3m+5\right)$
    3. $\frac{m}{2}\left(3m^{2.5}-11m+20\right)-5$
    4. $\frac{m}{6}\left(5m^3-34m^2+137m-104\right)+\frac{5}{6}$

    The solution to the recurrence equation $T(2^k) = 3T(2^{k-1})+1, T(1) =1$ is

    1. $2^k$
    2. $\frac{(3^{k+1}-1)}{2}$
    3. $3^{\log_2 k}$
    4. $2^{\log_3 k}$

    Which one of the following is the recurrence equation for the worst case time complexity of the quick sort algorithm for sorting n ( $\geq $ 2) numbers? In the recurrence equations given in the options below, c is a constant.

    1. T(n) = 2 T (n/2) + cn
    2. T(n) = T ( n - 1) + T(1) + cn
    3. T(n) = 2T ( n - 1) + cn
    4. T(n) = T (n/2) + cn

    The recurrence relation

    • $T(1) = 2$
    • $T(n) = 3T (\frac{n}{4}) +n$

    has the solution $T(n)$ equal to

    1. $O(n)$

    2. $O (\log n)$

    3. $O\left(n^\frac{3}{4}\right)$ 

    4. None of the above

     

    Which one of the following correctly determines the solution of the recurrence relation with $T(1) = 1$?

    $$T(n)= 2T\left(\frac {n} {2}\right) + \log n$$

    (A) $\Theta(n)$

    (B) $\Theta(n\log n)$

    (C) $\Theta(n^2)$

    (D) $\Theta(\log n)$

    Consider the following recurrence:

    $ T(n)=2T\left ( \sqrt{n}\right )+1,$ $T(1)=1$

    Which one of the following is true?

    1. $ T(n)=\Theta (\log\log n)$
    2. $ T(n)=\Theta (\log n)$
    3. $ T(n)=\Theta (\sqrt{n})$
    4. $ T(n)=\Theta (n)$

    The running time of the following algorithm

    Procedure A(n)

    If n<=2 return (1) else return $(A( \lceil  \sqrt{n}  \rceil))$;

    is best described by

    1. O(n)
    2. O( log n)
    3. O(log log n)
    4. O(1)

    When $n = 2^{2k}$ for some $k \geq 0$, the recurrence relation

    $T(n) = \sqrt{2}\,T(n/2) + \sqrt{n}, \quad T(1) = 1$

    evaluates to:

    A) $\sqrt{n}(\log n + 1)$
    B) $\sqrt{n}\log n$
    C) $\sqrt{n}\log \sqrt{n}$
    D) $n \log \sqrt{n}$

    Consider the recursive algorithm given below:

    procedure bubblesort (n);
    var i,j: index; temp : item;
    begin
        for i:=1 to n-1 do
            if A[i] > A[i+1] then
            begin
                temp := A[i];
                A[i] := A[i+1];
                A[i+1] := temp;
            end;
        bubblesort (n-1)
    end
    

    Let $a_n$ be the number of times the ‘if…then…’ statement gets executed when the algorithm is run with value $n$. Set up the recurrence relation by defining $a_n$ in terms of $a_{n-1}$. Solve for $a_n$.

    Consider the following function.

    Function F(n, m:integer):integer;
    begin
        If (n<=0) or (m<=0) then F:=1
        else
          F := F(n-1, m) + F(n, m-1);
        end;
    

    Use the recurrence relation  $\begin{pmatrix} n \\ k \end{pmatrix} = \begin{pmatrix} n-1 \\ k \end{pmatrix} + \begin{pmatrix} n-1 \\ k-1 \end{pmatrix}$  to answer the following questions. Assume that $n, m$ are positive integers. Write only the answers without any explanation.

    1. What is the value of $F(n, 2)$?

    2. What is the value of $F(n, m)$?

    3. How many recursive calls are made to the function $F$, including the original call, when evaluating $F(n, m)$.

    Let $T(n)$ be the function defined by $T(1) =1, \: T(n) = 2T (\lfloor \frac{n}{2} \rfloor ) + \sqrt{n}$ for $n \geq 2$.

    Which of the following statements is true?

    1. $T(n) = O(\sqrt{n})$

    2. $T(n)=O(n)$

    3. $T(n) = O (\log n)$

    4. None of the above

    Consider the function F(n) for which the pseudocode is given below :
     

    Function F(n)
    begin
    F1 ← 1
    if(n=1) then F ← 3
     else
       For i = 1 to n do
            begin
                C ← 0
               For j = 1 to n – 1 do
               begin C ← C + 1 end
               F1 = F1 * C
           end
     F = F1
    end
    


    [n is a positive integer greater than zero]

    (a) Derive a recurrence relation for F(n)

    The recurrence relation that arises in relation with the complexity of binary search is:

    1. $T(n) = T\left(\frac{n}{2}\right)+k, \text{ k is a constant }$

    2. $T(n) = 2T\left(\frac{n}{2}\right)+k, \text{ k is a constant }$

    3. $T(n) = T\left(\frac{n}{2}\right)+\log n$

    4. $T(n) = T\left(\frac{n}{2}\right)+n$

     

    Answers: Recurrence

    Selected Answer
    $T(n) = 2T(n-1) -1$
    $=2(2T(n-2)-1) -1 = 2^2T(n-2) -2 -1$
    $=2^2(2T(n-3)-1) -2 -1 = 2^3T(n-3) - 2^2 -2 -1$
    $\dots$
    $=2^{n-1}T(n-(n-1)) - \left(2^{n-2} +2^{n-3}+\dots+2^2 + 2 +1\right)$
    $=2^{n-1}\times 2 - \frac{2^{n-1}-1}{2-1} \qquad \left(\because T(1) = 2,\ S_n = \frac{a\left(r^n -1\right)}{r-1}\right)$
    $=2^n - \left(2^{n-1} -1\right)$
    $=2^{n-1} + 1$
    5 votes -- srestha ( 29.8k points)
    Selected Answer

    Let $n=2^k$

    $T\left(2^k\right)= 2T\left(2^{k/2}\right) + k $           

    Let $T\left(2^k\right)=S(k) \implies T\left(2^{k/2}\right) = S(k/2)$

    $S(k)=2S(k/2)+k$

    This gives $S(k) = \Theta(k \log k)$ (Master theorem case 2)

    $ = \Theta(\log n \log \log n)$

    So ans is b

    4 votes -- Pooja ( 26.4k points)
    Selected Answer
    answer B
    5 votes -- Sankaranarayanan P.N ( 9.9k points)
    Selected Answer
    Option C. It can be done by the Master theorem.

    $n^{\log_b a} = n^{\log_2 2} = n$.

    $f(n) = \sqrt n = n^{\frac{1}{2}}$.

    So, $f(n) = O\left(n^{\log_b a -\epsilon}\right)$ is true for any real $\epsilon$, $0 < \epsilon < \frac{1}{2}$. Hence Master theorem Case 1 satisfied,
    $$T(n) = \Theta\left(n^{\log_b a}\right) = \Theta (n).$$
    7 votes -- Bhagirathi Nayak ( 11.4k points)
    Option B is false.

    c) Apply the Akra-Bazzi method.

    For example, for $k=4$ the equation is $T(n)=T(n/4)+T(3n/4)+n$.

    Let $g(n)=n$ and $a_1=1, b_1=1/4, a_2=1, b_2=3/4$.

    Then $\sum a_i (b_i)^p=1$: $1\cdot(1/4)^p + 1\cdot(3/4)^p = 1 \Rightarrow p=1$.

    Then $T(n)=O\left(n^p\left(1+\int_1^n \frac{g(u)}{u^{1+p}}\,du\right)\right)$

    $=n\left(1+\int_1^n \frac{u}{u^2}\,du\right)$

    $=n(1+\log n)$

    $=O(n\log n)$, so option c is correct.

    d) Apply back substitution.

    If $k=5$: $T(n)=T(n/5) + T(3n/4)+n$  --- (eq. 1)

    $T(n/5)=T(n/5^2) + T\left(\tfrac{1}{5}\cdot\tfrac{3}{4}n\right)+n/5$

    $T(3n/4)=T\left(\tfrac{3}{4}\cdot\tfrac{1}{5}n\right)+T\left(\left(\tfrac{3}{4}\right)^2 n\right)+3n/4$; substituting in eq. 1,

    we get $T(n)=T(n/5^2) + 2T\left(\tfrac{1}{5}\cdot\tfrac{3}{4}n\right)+T\left(\left(\tfrac{3}{4}\right)^2 n\right)+n\left(\tfrac{1}{5}+\tfrac{3}{4}\right)+n$

    In the next step we get $T(n)=T(n/5^3)+\dots+n\left(\tfrac{1}{5}+\tfrac{3}{4}\right)^2+n\left(\tfrac{1}{5}+\tfrac{3}{4}\right)+n$

    $T(n)=T(n/5^3)+\dots+n\left(\tfrac{19}{20}\right)^2+n\left(\tfrac{19}{20}\right)+n$

    $T(n)=T(n/5^k)+\dots+n\left(1+\tfrac{19}{20}+\left(\tfrac{19}{20}\right)^2+\dots+\left(\tfrac{19}{20}\right)^{k-1}\right)$

    $T(n)=T(n/5^k)+\dots+n\cdot\dfrac{1-\left(\tfrac{19}{20}\right)^k}{1/20}$

    $n/5^k=1 \Rightarrow k=\log n$

    $\therefore T(n)=1+20n-20n\left(\tfrac{19}{20}\right)^{\log n}$

    $\therefore T(n)=O(n)$, which in turn is $O(n\log n)$; both d and e are correct.

    a) Observing options a and b: $T(n)=O(n\sqrt n)$ vs. $T(n)=O(n\log n)$, and $\log n=O(\sqrt n)$,

    so if option b were correct then option a would also be correct; hence option b is the false one (we can eliminate it).
    3 votes -- venky.victory35 ( 581 points)
    Selected Answer
    $T(n) = 2T(n-1) +n, n >=2, T(1) = 1$

    $  T(n)    = n + 2(n-1) + 2^2 (n-2) + \dots + 2^{(n-1)}(n -(n-1))$

    $ =n( 1 + 2 + \dots + 2^{n-1}) - (1.2 + 2.2^{2} +3.2^{3}+\dots+ (n-1).2^{n-1})$

    $=n(2^n-1) -(n.2^n-2^{n+1}+2)$

    $=2^{n+1}-n -2$
    18 votes -- suraj ( 3.7k points)

    $F(n) = (n-1)^n$ when $n \geq 2$

    $F(n) = 3$ when $n = 1$

    1 votes -- Rdr Deva ( 1.5k points)
    Selected Answer

    $a=1, b=3, \log_b a=0$

    So $n^{\log_b a} = n^0=1$

    $f(n)=cn$

    So, $f(n)=\Omega\left(n^{\log_b a + \epsilon}\right)$ for $\epsilon = 1$

    To apply Master theorem case 3, we also need the regularity condition $a\,f(n/b) \leq c'\,f(n)$ for some constant $c' < 1$:

    $f(n/3) = \frac{cn}{3} \leq \frac{1}{3}f(n)$, so $c' = \frac{1}{3}$ works.

    So, using case three of the Master theorem,

    $T(n)= \Theta(f(n)) = \Theta(n)$

    answer is a

    7 votes -- Pooja ( 26.4k points)
    Selected Answer
    The flowchart shows that, in the worst case, a call $A(n)$ makes 5 recursive calls to $A(n/2)$:

    $A(n)=5A(n/2) + O(1)$

    Applying the Master theorem, case 1 ($a > b^k$), gives $O\left(n^{\log_2 5}\right)$.

    Thus the value of $\alpha$ is $\log_2 5 \approx 2.32$.
    18 votes -- Shashank Chavan ( 2.6k points)
    Selected Answer

    n=1: 0, 1 (2 strings)
    n=2: 01, 10, 11 (3)
    n=3: 010, 011, 101, 110, 111 (5)
    n=4: 0101, 0110, 0111, 1010, 1011, 1101, 1110, 1111 (8)

    So, $x_n = x_{n-1} + x_{n-2}$ (every string ending in 1 yields two new strings and every string ending in 0 yields one new string; the set of strings of length $n+1$ has exactly $x_n$ strings ending in 1)

    $x_5 = 8 + 5 = 13$

    9 votes -- Arjun Suresh ( 156k points)
    Selected Answer

    Counting the number of bit strings NOT containing two consecutive 1s (it is easy to derive a recurrence relation for the NOT case, as shown below):

    n=1: 0, 1 (2 strings)
    n=2: 00, 01, 10 (3) (append both 0 and 1 to any string ending in 0, and append 0 to any string ending in 1)
    n=3: 000, 001, 010, 100, 101 (5) (all strings ending in 0 give two strings and those ending in 1 give one string)
    n=4: 0000, 0001, 0010, 0100, 0101, 1000, 1001, 1010 (8)
    ....

    $a_n' = a_{n-1}' + a_{n-2}'$ (where $a_n$ denotes the number of bit strings of length $n$ containing two consecutive 1s, so $a_n' = 2^n - a_n$ counts those not containing them)

    $2^n - a_n = \left(2^{n-1} - a_{n-1}\right) + \left(2^{n-2} - a_{n-2}\right)$

    $a_n = 2^{n-2}(4 - 2 - 1) + a_{n-1} + a_{n-2}$

    $a_n = a_{n-1} + a_{n-2} + 2^{n-2}$

    A choice. 

     

    14 votes -- Arjun Suresh ( 156k points)
    Selected Answer
    T(n) = T(n-1) + T(n-3) + 2, here T(n) denotes the number of times a recursive call is made for input n. 2 denotes the two direct recursive calls.

    T(n<=0) = 0
    T(1) = 2
    T(2) = 4
    T(3) = 6
    T(4) = 10
    T(5) = 16
    T(6) = 24

    So, answer is 24 + 1 call from main = 25.
    18 votes -- Arjun Suresh ( 156k points)
    Selected Answer
    Number of binary strings of length n that contain no consecutive 0s, following will be the required recurrence relation:

    T(n) = T(n-1) + T(n-2)    n>2
    base condition T(1) = 2   and T(2) = 3

    T(1) =2                       There will be 2 strings of length 1, i.e 0 & 1
    T(2) =3                       There will be 3 strings of length 2, i.e. 01,10,11
    T(3) = T(1) + T(2) = 2+3 = 5
    T(4) = T(3) + T(2) = 5 + 3 = 8
    T(5) = T(4) +T(3) = 8 + 5 = 13

    Hence, answer is 13, but no option matches!
    1 votes -- Vijay Thakur ( 2.1k points)
    Selected Answer

    option D

     

    int recursive (int n) {
        if(n == 1)                      // takes constant time say 'A' time
            return (1);                 // takes constant time say 'A' time
        else
            return (recursive (n-1) + recursive (n-1)); // takes T(n-1) + T(n-1)  time
    }



    $T(n) = 2T(n-1) +a$   is the recurrence equation found from the pseudo code .
     

    Solving the Recurrence Equation  By Back Substitution Method

    $T (n) = 2 T ( n-1 ) +a$ ---------( equation 1 )

    $T(n-1) = 2T ( n-2)+a$

    $T(n-2) = 2T ( n-3)+a$

    We can re write Equation 1 as

      $\begin{align*}T(n) &= 2 [2T ( n-2)+a ] +a  =  4T(n-2)+ 3a \\ &= 2[2T ( n-3)+a]  + 3a = 8T (n-3) + 7a \\&\dots \\&= 2^kT(n-k) + \left(2^{k}-1\right) a \end{align*}$     ------ (Equation 2)

    On Substituting Limiting Condition

     $T(1) = 1$  implies  $n-k = 1 \\ \implies  k= n-1$

    Therefore Equation 2 becomes

       $2^{n-1} +\left(2^{n-1}-1\right)a  = O(2^n)$
     

    8 votes -- pC ( 8.4k points)
    Selected Answer

    $T(m^2) = T(m^2-1) + \left\lfloor\sqrt{(m^2)} \right\rfloor$

    $= T(m^2 - 2) + \left\lfloor\sqrt{(m^2 - 1)} \right\rfloor +\left\lfloor\sqrt{(m^2)} \right\rfloor$

    $= T(m^2 - 3) + \left\lfloor\sqrt{(m^2 - 2)} \right\rfloor + \left\lfloor\sqrt{(m^2 - 1)} \right\rfloor +\left\lfloor\sqrt{(m^2)} \right\rfloor$

    $= \dots$

    $= T(1) + \left\lfloor\sqrt{(2)} \right\rfloor + \left\lfloor\sqrt{(3)} \right\rfloor + \dots + \left\lfloor\sqrt{(m^2)} \right\rfloor$

    $= 3 \times 1 + 5 \times 2 +  \dots + \left(2m - 1\right) \times (m-1) +m $ (We are taking floor of square root of numbers, and between successive square roots number of numbers are in the series $3,5,7 \dots$ like 3 numbers from $1..4$, $5$ numbers from $5-9$ and so on).

    We can try out options here or solve as shown at end:

    Put $m = 5, T(25)  = 3 \times 1 + 5 \times 2 + 7 \times 3 + 9 \times 4 + 5 = 75$

    Option A: 59
    Option B: 75
    Option C: non-integer
    Option D: 297.5

    So, answer must be B.


    $T(m^2) = 3 \times 1 + 5 \times 2 +  \dots + \left(2m - 1\right) \times (m-1) +m
    \\= m + \sum_{i=1}^{m-1} (2i+1) \cdot i
    \\= m + \sum_{i=1}^{m-1} \left(2i^2 + i\right)
    \\= m + \frac{(m-1) \cdot m \cdot (2m-1)}{3} + \frac{(m-1)m}{2}
    \\= \frac{m}{6} \left(6 + 4m^2 - 6m + 2 + 3m - 3\right)
    \\= \frac{m}{6} \left(4m^2 -3m + 5\right) $

    [Sum of the first $n$ natural numbers $=\frac{n. (n+1)}{2}.$ 

    Sum of the squares of first $n$ natural numbers $ = \frac{n. (n+1). (2n+1)}{6}.$]

    4 votes -- Arjun Suresh ( 156k points)
    Selected Answer

    Let $x = 2^k$

    $T(x) = 3T\left(\frac{x}{2}\right) + 1$

    We can apply Master's theorem case 1 with $a = 3$ and $b = 2$ as $f(x) = 1 = O\left(x^{\log_2 3 - \epsilon} \right) $

    So, $T(x) = \Theta \left(x^{\log_2 3}\right)  \\= \Theta \left( {2^k}^{\log_2 3} \right)\\= \Theta \left({2^{\log_2 3}}^k \right)\\ = \Theta \left(3^k \right)$

    So, only option possible is B.



    We can also directly solve as follows:

    $T(x) = 3T\left(\frac{x}{2}\right) + 1\\= 9T \left (\frac{x}{4}\right) + 1 + 3 \\ \dots \\= 3^{\log_2 2^k} + \left( 1 + 3 + 9 + \dots + 3^{\log_2 {2^k}-1}\right)\\ \left(\text{recursion depth is }\log_2 x \text{ and } x = 2^k\right) \\= 3^{k} + \frac{3^{\log_2 {2^k}} - 1}{3-1} \\ \left(\text{Sum to n terms of GP with } a = 1 \text{ and } r = 3 \right)\\=3^k + \frac{3^k - 1}{2}  \\=\frac{3 \cdot 3^k - 1}{2} \\=\frac{3^{k+1} -1}{2} $



    OR
     
    $T\left(2^k\right) = 3T\left(2^{k-1}\right) + 1 \\= 3^2T\left(2^{k-2}\right) + 1 +3 \\ \dots \\= 3^k T\left(2^{k-k}\right) + \left( 1 + 3 + 9 + \dots + 3^{k-1}\right) \\ \left(\text{recursion depth is }k\right)\\= 3^k + \frac{3^{k}-1} {3-1}\\\left(\text{Sum to n terms of GP with } a = 1 \text{ and } r = 3 \right)  \\=3^k + \frac{3^k -1}{2}\\=\frac{3 \cdot 3^k - 1}{2} \\=\frac{3^{k+1} -1}{2}  $

    12 votes -- Arjun Suresh ( 156k points)
    Selected Answer
    B.

    Worst case for quick sort happens when 1 element is on one list and n-1 elements on another list.
    13 votes -- Arjun Suresh ( 156k points)
    Selected Answer
    Answer: A

    According to Master theorem,
    $T(n) = aT(\frac{n}{b}) + f(n)$ can be expressed as:
    $T(n) = [n^{\log_ba}][T(1) + u(n)]$
    where $u(n) = \Theta(h(n))$ where $h(n) = \frac{f(n)}{n^{\log_ba}} = \frac{n}{n^{\log_43}} = n^{1-\log_43}$ as $h(n) = n^r$ where $r>0$.
    So, $T(n) = [n^{\log_ba}][T(1) + u(n)] = T(n) = [n^{\log_43}][T(1) + \Theta(n^{1-\log_43})] = \Theta(n^{1})$.
    6 votes -- Rajarshi Sarkar ( 30.1k points)
    Selected Answer

    $f(n) =  \log n$

    $a = 2, b = 2 \implies n^{\log_b a} = n$

    So, $f(n) = \log n = O\left(n^{1-\epsilon} \right)$; we can take any $\epsilon$ between 0 and 1, for example 0.5, which gives $\log n = O(\sqrt{n})$, whose proof is given here: http://math.stackexchange.com/questions/145739/prove-that-logn-o-sqrtn

    So, Master theorem Case 1, and answer will be $O\left(n^{\log_2 2}\right) = O(n)$


     

    Alternate way:

    $T(1) = 1 \\ T(2) = 2T(1) + \log 2 = 3 = 3n - 2\\T(4) = 2T(2) + \log 4 = 8 = 3n - 4 \\T(8) = 2T(4) + \log 8 = 19 = 3n - 5\\ T(16) = 2T(8) + \log 16 = 42 = 3n - 6$

    The second term being subtracted is growing at a lower rate than the first term. So, we can say $T(n) = O(n)$.

     

    12 votes -- Arjun Suresh ( 156k points)
    Selected Answer

    $T(n) = 2T\left(n^{\frac{1}{2}}\right)+1 \\=2\left(T\left(n^{\frac{1}{2^2}}\right) + 1\right) + 1 \\=2\times T\left(n^{\frac{1}{2^2}}\right) + 3\\= 4\times T\left(n^{\frac{1}{2^3}}\right) + 5 \\ \cdots \\=2^{(\lg \lg n)} + 2 \times \lg \lg n  + 1\text{ (Proved below)} \\= \Theta(\lg n)$




    $n^{\frac{1}{2^k}} = 2 \\ \text{(Putting 2 so that we can take log.}\\\text{One more step of recurrence can't change the complexity.)} \\\implies \frac{1}{{2^k}} \lg n = 1 \text{(Taking log both sides)}\\\implies \lg n = 2^k \\\implies k = \lg \lg n$

    So, answer is B, $T(n) = \Theta(\log n)$

    12 votes -- Arjun Suresh ( 156k points)
    Selected Answer
    The complexity will be the number of times the recursion happens which is equal to the number of times we can take square root of n recursively, till n becomes 2.

    $T(n) = T\left(\lceil \sqrt n \rceil \right) + 1$

    $T(2) = 1$
    $T\left(2^2\right) =  T(2) + 1 = 2$
    $T\left(2^{2^2}\right) =  T(4) + 1 = 3$
    $T\left(2^{2^3}\right) =  T(16) + 1 = 4$

    So, $T(n) = \lg\lg n + 1 = O\left(\log \log n\right)$
    14 votes -- Arjun Suresh ( 156k points)
    Selected Answer

    $T(n) = \sqrt{2}\, T\left(\frac{n}{2}\right) + \sqrt n$

    $= {\sqrt 2}^2 T\left(\frac{n}{2^2} \right) +\sqrt {2} \sqrt {\frac{n}{2}} + \sqrt n$

    $\dots$

    $= {\sqrt 2}^ {\lg n} T(1) +\lg n \sqrt n$

    $=\sqrt n + \lg n \sqrt n$

    $= \sqrt n \left( \lg n + 1\right)$

    If we use the Master theorem we get option B. But one must know that the Master theorem gives an asymptotic bound, not an EXACT value, and the question here explicitly says "evaluates to".

    15 votes -- Arjun Suresh ( 156k points)
    Selected Answer

    $a_n = a_{n-1} + (n-1)$ ($n-1$ comparisons for $n$ numbers)

    $a_n = a_{n-2} + (n-2) + (n-1)$

    $a_n = a_{n-3} + (n-3) + (n-2) + (n-1)$

    $\vdots$

    $a_n = a_{n-n} + (n-n) + (n-(n-1)) + \dots + (n-3) + (n-2) + (n-1)$

    $a_n = 0 + 1 + 2 + \dots + (n-3) + (n-2) + (n-1)$

    which gives $a_n = \frac{(n-1)\times n}{2}$

    6 votes -- Rajarshi Sarkar ( 30.1k points)
    a) $\frac{n(n-1)}{2}$

    b) $\frac{n(n-m+1)}{m!}$
    1 votes -- Anirudh Pratap Singh ( 27k points)
    Selected Answer

    answer B

    using master method (case 1)

    where $a = 2, b = 2$

    $O(n^{1/2}) < O(n^{\log_b a})$

    $O(n^{1/2}) < O(n^{\log_2 2})$

     

    6 votes -- ankitrokdeonsns ( 8.4k points)
    Selected Answer

    1 - The function $F(n)$ is NOT a recursive function. You can't have a recurrence relation for it in the first place!

    2 - $F(n)$ calculates $(n-1)^n$.

     

    The equivalent C++ code is as follows: (You can try it out here: http://ideone.com/w0u4lk)

    long F(long n) {
    	long F1 = 1;
     
    	if(n==1) { return 3; }
    	else {
    		for(long i = 1; i <= n; i++) {
    			long C = 0;
    		// Note: the below For loop only has one line
    			for(long j = 1; j <= n-1; j++) { C = C+1; }
    			// At the end of this for loop, C will be = (n-1)
    			F1 = F1 * C;
    		}
    	}
    	return F1;
    }

     

    It is clear that the inner for loop can be replaced by a single statement as follows:

     

    long F(long n) {
    	long F1 = 1;
     
    	if(n==1) { return 3; }
    	else {
    		for(long i = 1; i <= n; i++)
    			F1 = F1 * (n-1);
    	}
    	return F1;
    }

     

    And this calculates $(n-1)^n$

    5 votes -- Pragy Agarwal ( 14.5k points)
    Selected Answer
    It is A. Binary search continues in only one half of the list, leading to $T(n/2)$ plus constant time for comparing and finding the mid element.
    9 votes -- Gate Keeda ( 17.7k points)

    The function f is defined as follows:
     

    int f (int n) {
        if (n <= 1) return 1;
        else if (n % 2  ==  0) return f(n/2);
        else return f(3*n - 1);
    }

    Assuming that arbitrarily large integers can be passed as a parameter to the function, consider the following statements.

    1. The function f terminates for finitely many different values of n ≥ 1.
    2. The function f terminates for infinitely many different values of n ≥ 1.
    3. The function f does not terminate for finitely many different values of n ≥ 1.
    4. The function f does not terminate for infinitely many different values of n ≥ 1.

    Which one of the following options is true of the above?

    1. i and iii
    2. i and iv
    3. ii and iii
    4. ii and iv

    Consider the program below:

    #include <stdio.h>
    int fun(int n, int *f_p) {
        int t, f;
        if (n <= 1) {
            *f_p = 1;
            return 1;
        }
        t = fun(n-1, f_p);
        f = t + *f_p;
        *f_p = t;
        return f;
    }
    
    int main() {
        int x = 15;
        printf("%d/n", fun(5, &x));
        return 0;
    }
    

    The value printed is

    1. 6
    2. 8
    3. 14
    4. 15

     

    Consider the following C function:

    int f(int n)
    { 
        static int r = 0;
        if (n <= 0) return 1;
        if (n > 3)
        {   r = n; 
            return f(n-2) + 2;
        }
        return f(n-1) + r;
    }
    

    What is the value of $f(5)$?

    1. 5
    2. 7
    3. 9
    4. 18

    Consider the code fragment written in C below :
            

    void f (int n)
    { 
        if (n <= 1)  {
            printf ("%d", n);
        }
        else {
            f (n/2);
            printf ("%d", n%2);
        }
    }

    Which of the following implementations will produce the same output for f(173) as the above code?

    P1:
    void f (int n)
    { 
        if (n/2)  {
            f(n/2);
        }
        printf ("%d", n%2);
    }

     

    P2:

    void f (int n)
    { 
        if (n <=1)  {
            printf ("%d", n);
        }
        else {
            printf ("%d", n%2);
            f (n/2);
        }
    }

     

     

    A) Both P1 and P2
    B) P2 only
    C) P1 only
    D) Neither P1 nor P2

    What is the $\text{time complexity}$ of the following recursive function?

    int  DoSomething (int n) {
        if (n <= 2)
            return 1;
        else 
            return (DoSomething (floor (sqrt(n))) + n);
    }
    
    1. $\Theta(n^2)$

    2. $\Theta(n\log_2n)$

    3. $\Theta(\log_2n)$

    4. $\Theta(\log_2\log_2n)$

    Consider the following recursive definition of $fib$:

     fib(n) :=  if n = 0 then 1
                else if n = 1 then 1
                else fib(n-1) + fib(n-2)

    The number of times $fib$  is called (including the first call) for evaluation of $fib(7)$ is___________.

    Consider the following C-program:

    void foo (int n, int sum) {
        int k = 0, j = 0;
        if (n == 0) return;
        k = n % 10; j = n/10;
        sum = sum + k;
        foo (j, sum);
        printf ("%d,",k);
    }
    
    int main() {
        int a = 2048, sum = 0;
        foo(a, sum);
        printf("%d\n", sum);
    }
    

    What does the above program print?

    1. 8, 4, 0, 2, 14

    2. 8, 4, 0, 2, 0

    3. 2, 0, 4, 8, 14

    4. 2, 0, 4, 8, 0

    The following recursive function in C is a solution to the Towers of Hanoi problem.

    void move(int n, char A, char B, char C)  {
        if (......................) {
            move (.............................);
            printf("Move disk %d from pole %c to pole %c\n", n, A,C);
            move (.....................);
        }
    }

    Fill in the dotted parts of the solution.

    Consider the following recursive C function that takes two arguments.

    unsigned int foo(unsigned int n, unsigned int r) {
        if (n>0) return ((n%r) + foo(n/r, r));
        else return 0;
    }
    
    

    What is the return value of the function $\text{foo}$ when it is called as $\text{foo(345, 10)}$?

    1. 345
    2. 12
    3. 5
    4. 3

    What is the value printed by the following C program?

    #include<stdio.h>
    
    int f(int *a, int n)
    {
        if (n <= 0) return 0;
        else if (*a % 2 == 0) return *a+f(a+1, n-1);
        else return *a - f(a+1, n-1);
    }
    
    int main()
    {
        int a[] = {12, 7, 13, 4, 11, 6};
        printf("%d", f(a, 6));
        return 0;
    }
    1. -9
    2. 5
    3. 15
    4. 19

    What value would the following function return for the input $x=95$?

    Function fun (x:integer):integer;
    Begin
        If x > 100 then fun = x – 10
        Else fun = fun(fun (x+11))
    End;
    
    1. 89
    2. 90
    3. 91
    4. 92

     

     

    In the following C function, let $n \geq m$.

    int gcd(n,m)  {
        if (n%m == 0) return m;
        n = n%m;
        return gcd(m,n);
    }  

    How many recursive calls are made by this function?

    1. $\Theta(\log_2n)$

    2. $\Omega(n)$

    3. $\Theta(\log_2\log_2n)$

    4. $\Theta(\sqrt{n})$

    Consider the code fragment written in C below :

    void f (int n)
    { 
      if (n <=1)  {
       printf ("%d", n);
      }
      else {
       f (n/2);
       printf ("%d", n%2);
      }
    }

    What does f(173) print?

     

    1) 010110101
    2) 010101101
    3) 10110101
    4) 10101101

    Consider the following function 

    double f(double x){
        if( abs(x*x - 3) < 0.01) 
            return x;
        else 
            return f(x/2 + 1.5/x);
    } 

    Give a value $q$ (to 2 decimals) such that $f(q)$ will return $q$:_____. 

    Answers: Recursion

    Selected Answer

    The function terminates for all powers of 2, of which there are infinitely many. Hence, (i) is false and (ii) is TRUE.

    Let n = 5. 
    Now, the recursive calls go like 5 - 14 - 7 - 20 - 10 - 5 -

    This goes into infinite recursion, and multiplying 5 by any power of 2 also leads to infinite recursion. Since there are infinitely many powers of 2, there are infinitely many values causing infinite recursion (even considering this case alone). So, (iv) is TRUE and (iii) is false.

    So, correct answer is (D)

     

    13 votes -- Arjun Suresh ( 156k points)
    Selected Answer
    The answer is B.

    Let the address of x be 1000.

    1.f(5,1000) = 8

    2.f(4,1000) = 5

    3.f(3,1000) = 3

    4.f(2,1000) = 2

    5.f(1,1000) = 1.

    The evaluation proceeds from call 5 back to call 1, since the values are computed as the recursion unwinds.
    6 votes -- Gate Keeda ( 17.7k points)
    Selected Answer
    The answer is D.

    f(5) = 18.

    f(3) + 2 = 16 + 2 = 18

    f(2) + 5 = 11 + 5 = 16

    f(1) + 5 = 6 + 5 = 11

    f(0) + 5 = 1+5 = 6

    Evaluate from the last call to the first, since the values are computed as the recursion unwinds.
    6 votes -- Gate Keeda ( 17.7k points)
    Selected Answer

    Here P1 and P2 print in opposite directions, as shown in the diagram on the site, and the given code fragment prints like P1 and not like P2.

    Hence, the answer is (C).

    4 votes -- srestha ( 29.8k points)
    Selected Answer

    We are asked for the time complexity, which equals the number of recursive calls, since each call performs only a constant number of operations besides the recursive call. The recurrence relation (taking the constant time $c$ as 1) is

    $T(n) = T(\sqrt n) + 1 \\= T\left(n^{1/4}\right) + 2 \\=T\left(n^{1/8}\right) + 3 $

    Continuing this way, we eventually reach T(3) or T(2). For the asymptotic case this does not matter, and we can assume we reach T(2) and in the next step T(1). So, all we want to know is how many steps it takes to reach T(1), which is 1 + the number of steps to reach T(2). 

    From the recurrence relation we know that T(2) happens when $n^{\left(\frac{1}{2^k}\right)} = 2$.

    Taking $\log $ and equating,

    $\frac{1}{2^k} \log n = 1 \\2^k = \log n \\k = \log \log n$. 

    So, T(1) happens in $\log \log n + 1$ calls, which for asymptotic complexity we write as $\Theta \left( \log \log n\right)$.


    Alternatively,

    Substituting values

    $T(1) = 1$
    $T(2) = 1$
    $T(3) = T(1) + 1 = 2$
    $\dots$
    $T(8) = T(2) + 1 = 2$
    $T(9) = T(3) + 1 = 3$
    $\dots$


    $T\left(\left({\left({2^2}\right)^2}\right)^2\right) = T\left(\left({2^2}\right)^2\right) + 1 \\= T(2^2)+ 2 \\= T(2) + 3 = 1 + 3 = 4, \\ \log \log n = 3 \text{ as } n = 256$.


    $T\left(\left({\left({\left({\left({2^2}\right)^2}\right)^2}\right)^2}\right)^2\right) = 6,\\ \log \log n = 5 \text{ as } n = 65536 \times 65536 = 2^{32}$

    $T\left(2^{(2^{10})}\right) = T\left(2^{512}\right) + 1 \\= T(2^{256}) + 2 \\= T(2^{128}) + 3\\ = T(2^{64}) + 4 \\= T(2^{32}) + 5 \\= T(2^{16}) + 6 \\= T(2^8)+7 \\= T(2^4) + 8 \\= T(2^2) + 9 \\= T(2) + 10 = 11,\\ \log \log n = 10$

    So, answer is D

    http://stackoverflow.com/questions/16472012/what-would-cause-an-algorithm-to-have-olog-log-n-complexity

    11 votes -- Arjun Suresh ( 156k points)
    Selected Answer

    The recurrence relation for the no. of calls is $$T(n) = T(n-1) + T(n-2) + 2$$

    $T(0) =T(1) = 0$ (for fib(0) and fib(1), there are no recursive calls). 

    $T(2) = 2$

    $T(3) = 4$

    $T(4) = 8$

    $T(5) = 14$

    $T(6) = 24$

    $T(7) = 40$. 

    Counting the initial call we get 40 + 1 = 41. 

    6 votes -- Arjun Suresh ( 156k points)
    Selected Answer
    Option D.

    foo extracts the lowest digit of n, but the printf comes after the recursive call, which forces the digits to be printed in reverse order:

    2, 0, 4, 8

    The final value of sum printed will be 0, because C uses pass by value and the value of sum modified inside foo is not visible in main.
    9 votes -- anshu ( 2.4k points)
    Selected Answer
    void move(int n, char A, char B, char C) {
        if (n > 0) {
            move (n-1, A, C, B);
            printf("Move disk %d from pole %c to pole %c\n", n, A, C);
            move (n-1, B, A, C);
        } 
    }

     

    8 votes -- sonam vyas ( 8.3k points)
    Selected Answer

    $\text{foo}(345, 10) = 5 + \text{foo}(34, 10) \\= 5 + 4 + \text{foo}(3, 10) \\= 5 + 4 + 3 + \text{foo}(0, 10) \\= 5 + 4 + 3 + 0 = 12$

    i.e., foo sums the decimal digits of 345.

    Ans is 12

    3 votes -- Rajesh Pradhan ( 7.6k points)
    Selected Answer
    It will print
    12 + (7 - (13 - (4 + (11 - (6 + 0)))))
    = 12 + (7 - (13 - (4 + 5)))
    = 12 + (7 - (13 - 9))
    = 12 + (7 - 4)
    = 15
    9 votes -- gatecse ( 10.8k points)
    Selected Answer
    Value returned by $fun(95) = fun(fun(106)) \\= fun(96) = fun(fun(107)) \\= fun(97) = fun(fun(108)) \\= fun(98) = fun(fun(109)) \\= fun(99) = fun(fun(110)) \\= fun(100) = fun(fun(111)) \\= fun(101) = 91.$
    10 votes -- Digvijay ( 36.9k points)
    Selected Answer

    The worst case arises when n and m are consecutive Fibonacci numbers.

    $\text{gcd}(F_n, F_{n-1}) = \text{gcd}(F_{n-1},F_{n-2}) = \dots =  \text{gcd}(F_1,F_0) = 1$

    and the $n^{th}$ Fibonacci number grows as $1.618^n$, where $1.618$ is the Golden ratio.

    So, to find $\text{gcd} (n,m)$, number of recursive calls will be  $\Theta (\log n) $.

     

    11 votes -- Vikrant Singh ( 11.1k points)
    Selected Answer
    Answer: D

    The function prints the binary equivalent of the number n.

    Binary equivalent of 173 is 10101101.
    9 votes -- Rajarshi Sarkar ( 30.1k points)
    Selected Answer

    (We can directly go to the "if" part to get one answer, but we need to solve the "else" part too to get all possible answers, though that is not asked in the question.)

    Solving the else part:

    $\frac{x}{2} + \frac{3}{2x} = \frac{x^2+3}{2x}$

    So, the new value of $x$ will be  $\frac{x^2+3}{2x}$ and we need it equal to $x$.

    $\frac{x^2+3}{2x} = x \\ \implies x^2 + 3 = 2x^2 \\ \implies x^2 = 3  \\ \implies x = 1.732 $

     


    Now solving the if part.   

      abs(x*x - 3) < 0.01

    So, $x^2 - 3 < 0.01  \text { and } -\left(x^2 - 3\right) < 0.01\\ \implies x^2 < 3.01  \text{ and } x^2 > 2.99\\ \implies x < 1.7349 \text { and }x > 1.7292$

    Corrected to 2 decimal places, the answer is 1.73, the only 2-decimal value lying in this interval ($1.74^2 = 3.0276$ fails the test, so $f(1.74)$ recurses instead of returning 1.74).

     

    8 votes -- Arjun Suresh ( 156k points)

    Language $L_1$ is polynomial time reducible to language $L_2$. Language $L_3$ is polynomial time reducible to language $L_2$, which in turn is polynomial time reducible to language $L_4$. Which of the following is/are true?

    1. $\text{ if } L_4 \in P, \text{ then } L_2 \in P$
    2. $\text{ if } L_1 \in P\text{ or } L_3 \in P, \text{ then } L_2 \in P$
    3. $L_1 \in P, \text{ if and only if } L_3 \in P$
    4. $\text{ if } L_4 \in P, \text{ then } L_3 \in P$

     

    1. II only
    2. III only
    3. I and IV only
    4. I only

    Answers: Reduction

    Selected Answer
    1. L1 is polynomial time reducible to L2. So, L2 is at least as hard as L1. 
    2. L3 is polynomial time reducible to L2. So, L2 is at least as hard as L3. 
    3. L2 is polynomial time reducible to L4. So, L4 is at least as hard as L2. 

     

    If L4 is in P, L3, L2 and L1 must also be in P. So, I and IV are true. 

    We can have L1 in P and L2 not in P without violating any of the given conditions. So, II is false. 

    Assume L3 is not in P. Since L2 is at least as hard as L3, L2 is then also not in P. But L1, which is no harder than L2 as per condition 1, can still be in P without violating any of the given conditions. So, III is false.

    Hence, C is the correct choice.

    More Info: http://gatecse.in/wiki/Some_Reduction_Inferences

     

    10 votes -- Arjun Suresh ( 156k points)

    Consider the following recursive function:

    function fib (n:integer);integer;
    begin
    if (n