
Consider the following C functions:

int f1(int n)
{
    if (n == 0 || n == 1)
        return n;
    else
        return (2 * f1(n-1) + 3 * f1(n-2));
}

int f2(int n)
{
    int i;
    int X[N], Y[N], Z[N];
    X[0] = Y[0] = Z[0] = 0;
    X[1] = 1; Y[1] = 2; Z[1] = 3;
    for (i = 2; i <= n; i++) {
        X[i] = Y[i-1] + Z[i-2];
        Y[i] = 2 * X[i];
        Z[i] = 3 * X[i];
    }
    return X[n];
}

The running times of $f1(n)$ and $f2(n)$ are

1. $\Theta(n)$ and $\Theta(n)$
2. $\Theta(2^n)$ and $\Theta(n)$
3. $\Theta(n)$ and $\Theta(2^n)$
4. $\Theta(2^n)$ and $\Theta(2^n)$

The calls of $f1(n)$ form approximately a complete binary tree, and the cost of each node is constant, hence the T.C. of f1(n) will be $\Theta(2^n)$.

In $f2(n)$, ignore everything except the for loop; the loop runs $(n-1)$ times, hence the T.C. is $\Theta(n)$.

Correct option is (B).

I am getting the characteristic polynomial as $r^2 - 2r - 3 = 0$, with roots $3$ and $-1$?
And what are the base conditions?
$f1(n) = c_1 \cdot r_1^n + c_2 \cdot r_2^n$

The base conditions will only help to evaluate the constants, which will not change the answer ($3^n$).
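For completeness, with the base conditions from the code, $f(0)=0$ and $f(1)=1$, the constants work out explicitly (they change only the coefficients, not the $3^n$ growth):

$f(n) = c_1 \cdot 3^n + c_2 \cdot (-1)^n$

$f(0) = 0 \Rightarrow c_1 + c_2 = 0$

$f(1) = 1 \Rightarrow 3c_1 - c_2 = 1$

so $c_1 = \frac{1}{4}$, $c_2 = -\frac{1}{4}$, giving $f(n) = \frac{3^n - (-1)^n}{4}$, i.e. the value grows as $\Theta(3^n)$.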
and what is the correct answer according to you?
O(3^n)

the solution of the recurrence relation provided by you, T(n) = 2T(n-1) + 3T(n-2), would contain a term in $3^n$, but the time complexity would be $O(2^n)$. (Check the comments by bhuv below Arjun sir's answer.)

This is a classic example of how dynamic programming reduces the time complexity of an algorithm.

For a similar example, take the case of finding the nth Fibonacci number.

Using recursion, we'll write the code as:

int fib(int n)
{
    if (n <= 1)
        return n;
    return fib(n-1) + fib(n-2);
}

This will give an exponential time complexity.

However, we can use an array to find the result in a much more efficient way:

f[0] = 0;
f[1] = 1;

for (i = 2; i <= n; i++)
{
    f[i] = f[i-1] + f[i-2];
}

This will only take O(n) time.

Time complexity of $f1$ is given by
$T(n) = T(n-1) + T(n-2)$, (multiplication by $2$ and $3$ won't affect complexity as it is a constant time operation)
$T(0) = T(1) = 1$

The solution to this (the Fibonacci recurrence) is governed by the golden ratio https://en.wikipedia.org/wiki/Golden_ratio, which is $O(2^n)$. (Using theta in the question must be a mistake.)

Time complexity of $f2$ is $\Theta(n)$ as here all recursive calls are avoided by saving the results in an array (dynamic programming).


@arjun sir,

how can we ignore the constants 2 and 3?

The subtract-and-conquer theorem has the form $T(n) = aT(n-b) + f(n)$.... so can we ignore the constant 'a' here?
Very good explanation.

$return ( f(n-1) + f(n-1) + f(n-2) +f(n-2)+f(n-2)) \neq return( 2*f(n-1) + 3*f(n-2))$

If it was
return ( f(n-1) + f(n-1) + f(n-2) + f(n-2) + f(n-2) )
then while calculating the time complexity we would take $f(n) = 2f(n-1) + 3f(n-2)$,
because there are two $f(n-1)$ calls and three $f(n-2)$ calls, each contributing to the time complexity since each executes separately when called.
Also note that the recurrence relation for the VALUE will likewise be $f(n) = 2f(n-1) + 3f(n-2)$.

NOW SEE THE CHANGE

---------------------------------------------------------------------------------------
But for return( 2*f(n-1) + 3*f(n-2) ), multiplying by 2 and 3 are just constant-time operations,
so the recurrence relation for the T.C. will be $f(n) = f(n-1) + f(n-2)$.
If you calculate the recurrence relation for the VALUE, however, it will be
$f(n) = 2f(n-1) + 3f(n-2)$, as the value is affected by the multiplication.

@bhuv

your point is correct. But don't you think we will get the same upper bound on time in both the cases that you mentioned?

No,

For return ( f(n-1) + f(n-1) + f(n-2) + f(n-2) + f(n-2) ),
while calculating the time complexity we take $f(n) = 2f(n-1) + 3f(n-2)$,

T.C. $= O(3^n)$

For return( 2*f(n-1) + 3*f(n-2) ), multiplying by 2 and 3 are just constant-time operations,
so the recurrence relation for the T.C. will be $f(n) = f(n-1) + f(n-2)$,

T.C. $= O(2^n)$

You can solve both equations by the method of Linear Homogeneous Recurrence relation with constant coefficients.

@bhuv

$f(n) = 2f(n-1) + 3f(n-2)$

let $g(n) = 5g(n-1)$

Solving $g(n)$: T.C. $= O(5^n)$

now $f(n) \leq g(n)$

so can't we say that the upper bound for $f(n)$ is $O(5^n)$?

I thought you were talking more precisely about "what is the upper bound on the T.C. of each of those separately?"

Yes, both $O(2^n)$ and $O(3^n)$ can be written as $O(5^n)$, even $O(n^n)$; you can go as high above as you want. You can write these kinds of upper bounds while solving simpler or standalone time-complexity comparison questions, like which one is big-O of which.
But I think that is not good practice while giving an answer for RRs or the T.C. of given algorithms, because there the upper bound generally means "the time taken by an algorithm in the worst-case scenario".

What I meant was: would $O(5^n)$ be the tightest upper bound?

Anyway, how did you get $3^n$?

@Srestha, isn't $3^n$ of greater significance than $2^n$? Please tell?

yes brother, $3^n$ should be greater than $2^n$, but I have 2 doubts about your comment:

1) your question doesn't have any relation to the original question

2) Srestha ma'am didn't comment on this answer or even on this question, so how did you ping her here?

@Shaik Masthan, brother, if you look at it, that is how the time complexity should be derived for f1(). Please correct me if I am wrong. :)

yes, it should be $O(2^n)$ only.

T(n) = T(n-1)+T(n-2) ===> (i)

note that T(n-1) > T(n-2)

T(n-1) + T(n-2) < T(n-1) +T(n-1)  ===> (ii)

substitute (ii) in (i)

T(n) < 2.T(n-1) ===> T(n) = O( 2.T(n-1) )

@Shaik Masthan, brother, I got your point here. But what I am pointing out is that if we take a large value of n as input, then in that case how would the answer change? You see, for a larger value of n, it can be easily seen in the expression that $3^{n-2}$ is smaller than $2^{n-1}$.

https://gateoverflow.in/495/gate2008-74?show=231375#c231375

Is it fine?

I mean, the options are "theta( )"

So instead of "exact" we go for an upper bound; will it be fine?

Actually, in the options, it should be $O(2^n)$ instead of $\Theta(2^n)$

But if somewhere "theta" is the condition, then can we exaggerate the same way?

if there is no other option for you, then you can do it

The traditional method is tedious; can you solve it by substitution?

T(n) = T(n-1)+T(n-2) + c ===> (i)

note that T(n-1) > T(n-2)

T(n-1) + T(n-2) < T(n-1) +T(n-1)  ===> (ii)

substitute (ii) in (i)

T(n) ≤ 2T(n-1) + c

T(n) ≤ 2(2T(n-2) + c) + c = $2^2$T(n-2) + $2^1$c + $2^0$c

after k iterations,

T(n) ≤ $2^k$T(n-k) + $2^{k-1}$c + ..... + $2^0$c = $2^k$T(n-k) + $(2^k - 1)$c ----> (iii)

let n-k = 1 ===> k = n-1 ---> (iv)

substitute (iv) in (iii); it then looks like

T(n) ≤ $2^{n-1}$T(1) + $(2^{n-1} - 1)$c

===> T(n) = O($2^n$)

Thanks for the derivation, but actually I was asking whether we can use substitution on its original homogeneous form.

did you mean, T(n) = T(n-1)+T(n-2) + c ?

if yes, it is very tough :)

Okay. So the traditional way will be better then. But I guess they won't ask the non-homogeneous one as it's time consuming.

Thanks for the help :)

for f1, can we say here that the time complexities are $O(2^n)$ and $\Omega(2^{n/2})$?

So, yes, using theta here is a mistake.

$\Omega \left ( 2^{n-1} \right )$ or $\Omega \left ( 2^{\frac{n}{2}}\right )?$

@Ayush Upadhyaya Why Theta is wrong?

@Arjun-Sir because  we have the following recurrence

$T(n)=T(n-1)+T(n-2)+c$

Recurrence tree

Cost at level 0: $2^0 \cdot c$

Cost at level 1: $2^1 \cdot c$

Cost at level h: $2^h \cdot c$

Best-case height: the $T(n-2)$ branch bottoms out when $n-2k=0 \rightarrow k=\frac{n}{2}$

So, best case cost = $c \bigg \{2^0+2^1+....2^{\frac{n}{2}} \bigg \}=c.\frac{2^{\frac{n}{2}+1}-1}{2-1}=\Omega(2^{n/2})$

Saying $\Theta(2^n)$ implies both $\Omega(2^n)$ and $O(2^n)$; since the best case is only $\Omega(2^{n/2})$, saying $\Theta(2^n)$ for f1 would be incorrect.

@ayush sir, can you please explain the $n=2k$ step?

Since the right subtree bottoms out early, the tree height will be decided by the left subtree. The left subtree bottoms out when $n-k=0$, which means $k=n$.

Yes I also feel n is height of tree and not n/2.

f1(n) is a recursive function. The recurrence equation for f1(n) is

T(0) = 0

T(1) = 1

T(n) = 2T(n-1) + 3T(n-2)

The solution of this equation contains a term in $2^n$, so on average the running time of f1(n) is $\Theta(2^n)$.

f2(n) is the non-recursive version of f1(n), so the output of both functions is the same.

Can anyone please suggest how to count the complexity from this equation? I didn't get it.
Could you please explain how you are getting $O(2^n)$ in the solution of this polynomial? Because I am getting $O(3^n)$ solving it, using the equation $x^2 - 2x - 3 = 0$.

Please explain.
T(n) = 2T(n-1) + 3 T(n-2)

T(n) <= 5T(n-1)  (replacing 3T(n-2) by 3T(n-1) gives this inequality)

$T(n) = O(2^{n})$ using recursion tree method

The second one is clearly O(n), as there is a single for loop up to n.

So OPTION B is the right answer.

The complexity is related to the input size, where each call produces a binary tree of calls,

and T(n) makes about $2^n$ calls in total.

T(n) = T(n-1) + T(n-2) + C

T(n) = O($2^{n-1}$) + O($2^{n-2}$) + O(1)

= O($2^n$)

In the same fashion, you can generalize your recursive function, as a Fibonacci number

T(n) = F(n) + (C * $2^n$)

Next, you can use a direct formula instead of the recursive way,

using a closed-form expression known as Binet's formula.

FROM STACK OVERFLOW.