The Gateway to Computer Science Excellence


+37 votes

The cube root of a natural number $n$ is defined as the largest natural number $m$ such that $m^3 \leq n$. The complexity of computing the cube root of $n$ ($n$ is represented in binary notation) is

- $O(n)$ but not $O(n^{0.5})$
- $O(n^{0.5})$ but not $O((\log n)^k)$ for any constant $k>0$
- $O((\log n)^k)$ for some constant $k>0$, but not $O( (\log \log n)^m)$ for any constant $m>0$
- $O( (\log \log n)^k )$ for some constant $k > 0.5$, but not $O( (\log \log n)^{0.5} )$

+52 votes

Best answer

We can simply do a binary search over the natural numbers $1 \dots n$ and check how the cube of the midpoint compares with $n$ (i.e., check whether $a[i] \cdot a[i] \cdot a[i] \leq n$). Each such check takes $O(\log n)$ time, and in the worst case the search does $O(\log n)$ iterations. So, in this way we can find the cube root in $O(\log^2 n)$. So, options (A) and (B) are wrong.
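A minimal sketch of this binary search in C, assuming $n$ fits in a 64-bit integer (for truly arbitrary-precision $n$, each cube-and-compare itself costs time polynomial in the bit length, which is where the extra $\log$ factors come from):

```c
#include <stdint.h>

/* Largest m with m*m*m <= n, by binary search over 0..n.
   O(log n) iterations; the upper bound is capped so that
   mid*mid*mid never overflows 64 bits. */
uint64_t cube_root(uint64_t n) {
    uint64_t lo = 0;
    uint64_t hi = n < 2642245 ? n : 2642245;  /* floor(cbrt(2^64 - 1)) */
    while (lo < hi) {
        uint64_t mid = lo + (hi - lo + 1) / 2;
        if (mid * mid * mid <= n)
            lo = mid;        /* mid is still feasible */
        else
            hi = mid - 1;    /* mid cubes past n */
    }
    return lo;
}
```

For n = 8192 this returns 20, since $20^3 = 8000 \leq 8192 < 9261 = 21^3$.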

Now, a number is represented in binary using $\log n$ bit. Since each bit is important in finding the cube root, any cube root finding algorithm must examine each bit at least once. This ensures that complexity of cube root finding algorithm cannot be lower than $\log n$. (It must be $\Omega \left( \log n \right)$). So, (D) is also false and (C) is the correct answer.
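The input size here is the bit length of $n$, i.e. $\lfloor \log_2 n \rfloor + 1$, not $n$ itself; a quick sketch:

```c
#include <stdint.h>

/* Number of bits in the binary representation of n:
   floor(log2 n) + 1 for n > 0. */
int bit_length(uint64_t n) {
    int bits = 0;
    while (n > 0) {
        bits++;
        n >>= 1;
    }
    return bits;
}
```

So an algorithm that reads every bit of the input already does $\Omega(\log n)$ work; e.g. 8192 occupies 14 bits.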


0

@Arjun sir, how is it $\log^2 n$? Shouldn't it be $\log n$ only? On $1 \dots n$ we do binary search, and each time we check whether $a(i) \cdot a(i) \cdot a(i) = n$ or not; that is constant time and it is done at most $\log n$ times. Can you please tell how it is $\log^2 n$?

0

@air1

I think the final complexity should be $O(\log^{3} n)$.

EDIT: If we convert the binary to decimal at the beginning, then it will be $O(\log n)$.

Please explain how it becomes $O(\log n)$?


+5

a == b? Usually we consider this as an $O(1)$ operation. But when $a$ is allowed to be arbitrarily large, this can take $O(\log a)$ as each bit needs to be checked. That is why the check is taking $O(\log n)$ in the given question.
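To see why, consider a hypothetical big-number representation where the value is stored as an array of 64-bit words (the layout and the function name here are assumptions for illustration, not any specific library's API):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Equality test on big numbers stored as word arrays.
   In the worst case (equal inputs) every word, hence every bit,
   must be inspected, so the cost grows with the bit length. */
bool big_equal(const uint64_t *a, const uint64_t *b, size_t words) {
    for (size_t i = 0; i < words; i++)
        if (a[i] != b[i])
            return false;  /* one mismatching word decides early */
    return true;           /* all words checked: O(words) work */
}
```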

0

Sir, in asymptotic analysis we don't bother about how long arithmetic operations take; that depends on the hardware.

Even when we check a == b in constant time as you mentioned, a can be an arbitrarily large or small number and we would need to check all the bits, but we don't bother about that. I still did not get it :(


0

@Arjun sir, why do the binary search on the whole of $n$ elements? Isn't it sufficient to do it only for the first $n^{1/3}$ elements? If so, the time taken is also reduced to $O(\log n^{1/3})$.

0

Can't we do it like this?

1) Convert the given binary representation of $n$ into decimal: $O(\log n)$

2) Then find $\left \lfloor n^{0.333} \right \rfloor$: $O(1)$

Total complexity: $O(\log n)$

@Arjun sir, I have the same doubt as @rahul raised above.

Do we consider the underlying hardware complexity in asymptotic analysis of our algorithm?

+1 vote

Consider the function below:

```c
int cuberoot(int n)
{
    int crt = 0;                    /* largest i seen with i*i*i <= n */
    for (int i = 1; i <= n; i++)
    {
        if (i * i * i <= n) crt = i;
        else return crt;
    }
    return crt;                     /* reached only when n <= 1 */
}
```

The function above returns the cube root of $n$ as defined in the question.

Now if n = 64, the for loop runs 5 times (i = 1, 2, 3, 4, 5), roughly $\log n$.

But if we take a larger value, say n = 8192 (i.e., $2^{13}$), the for loop runs 21 times, which is $(\log n)^k$ with $k \approx 1.1869$.

Therefore we can say that the complexity of computing the cube root is **O((log n)^k)** but not O(log log n).

Hence Answer is **(C)**
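The iteration counts quoted above (5 passes for n = 64, 21 for n = 8192) can be checked with a small counter (a hypothetical helper mirroring the loop, not part of the original function):

```c
/* Counts the loop passes of the linear scan above: the body runs
   for i = 1, 2, ... and stops on the first i with i*i*i > n. */
int cuberoot_passes(long long n) {
    int passes = 0;
    for (long long i = 1; i <= n; i++) {
        passes++;
        if (i * i * i > n)
            break;  /* this is the pass that returns */
    }
    return passes;
}
```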

0 votes

BY USING LINEAR SEARCH:

```c
/* Scan upward until i^3 exceeds n; the previous i is the answer. */
int algo(int n)
{
    for (int i = 1; ; i++)
        if (i * i * i > n) return i - 1;
}
```

it takes **n^(1/3)** time

BY USING BINARY SEARCH:

(**1**)^3 (**2**)^3 3 (**4**)^3 5 6 7 (**8**)^3 9 10 11 12 13 14 15 (**16**)^3 ..........

it takes **log n** time

so **(C)** is the answer

–1 vote

Consider the array of natural numbers:

00 | 01 | 10 | 11 | 100 | 101 | 110 |

Suppose we want to find the cube root of $6$ ($n=6$); storing it in binary takes 3 bits ($\left \lceil \log_2 n \right \rceil$).

Take $m$ as the middle element of the array (here $m = 3$).

step 1 : Find middle element $m$ [ will take $O(1)$ time ]

step 2 : Find cube of m [ will take $O(1)$ time ]

step 3 : Compare $m^{3} < n$ [ will take $\left \lceil \log_2 n \right \rceil$ time, since we need to compare $\left \lceil \log_2 n \right \rceil$ bits of the binary representation ]

step 4 : If there is a match, or nothing in the array is left, we return $m$; else if the compared value $m^3$ is greater than $n$ we go LEFT of $m$, else we go RIGHT of $m$, and loop from step 1

Running Time Complexity Of the Algorithm

- $\log n$ -> implementing the binary search
- $( \log n )^{k}$ -> comparing each bit of the number represented in binary notation, where $k$ is the number of loop iterations
- So in the best case it will take $O(\log n)$ and in the worst case $O(\log^{k+1} n)$

**Example**

Iteration 1

- $m=3$
- $3^{3}=9$
- $9 \leq 6$ False we go LEFT of the array [this is compared as $1001\leq 110 $]

Iteration 2

- $m=1$
- $1^{3}=1$
- $1\leq 6$ True we go RIGHT of the array [this is compared as $01\leq 110 $]

Iteration 3

- $m=2$
- $2^{3}=8$
- $8\leq 6$ False we go LEFT of the array [this is compared as $1000\leq 110 $]
- Since nothing is left to the LEFT of the array, we return $1$ (the last $m$ whose cube was $\leq 6$) as the cube root

Here we took $O(\log ^{4} n)$

[ @arjun sir, can you please check this one ]

0

@arjun sir ,

Actually, what I thought about the upper-bound complexity $O(\log^{k+1} n)$ is that

- Outer Loop for implementing binary search will take $O( \log n)$ time
- inner loop for comparing each bit of $n$ for each iteration takes $O( \log n)$ each

Isn't the example correct ?

- $O(\log n) $ for implementing binary search
- $O(\log n) $ for comparing bits of 9 and 6
- $O(\log n) $ for comparing bits of 1 and 6
- $O(\log n) $ for comparing bits of 8 and 6

Here $k$ = number of iterations. The loop executes $\log n$ times, hence $k = \log n$.

Correct me if wrong...

+1

Got my mistake :)

This is wrong.

@arjun sir, will the complexity always be $O(\log^{2} n)$?

How is $k$ counted? Will it affect the complexity?

