
The best known algorithm for binary to decimal conversion runs in ____ ($n$ is the number of bits in the input number; assume MUL and ADD operations to take constant time)

- $\Theta(n)$
- $\Theta(\log n)$
- $\Theta(1)$
- $\Theta\left(n^2\right)$

@Arjun I think the answer to this question should be $O(\log n)$, as a decimal number needs at most $\log n$ bits to be represented in binary, and we need only one scan of those $\log n$ bits to convert a binary number to decimal.


Best answer


Converting a number from base $b^l$ to base $b^k$ takes $O(M(n)\log n)$ time, where $M(n)$ is the time it takes to multiply two $n$-bit numbers and $n$ is the length of the number.

For example, to convert binary to octal (base $2^1$ to base $2^3$), the algorithm is easy: group the bits in threes and find each group's equivalent value, even in parallel.

But converting between two bases that are not both powers of some common constant is trickier, for example binary to decimal.

For that, we need to multiply the bit at the $i$th position by $2^{i-1}$ and sum the products: $\text{value} = \sum_{i=1}^{n} b_i \, 2^{i-1}$.

It'll take $O(M(n) \cdot n)$ time.

Since $M(n) \equiv O(1)$ (given),

it takes $O(n)$ time.

It can't be done better than $O(n)$, since every input bit must be read at least once. So, **Option A.**
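
To make the linear scan concrete, here is a minimal Python sketch (my own illustration, not part of the original answer), assuming the input arrives as a bit string:

```python
# Sketch: binary string -> decimal value in one left-to-right scan
# (Horner's rule). Each of the n bits costs one MUL and one ADD,
# so with constant-time MUL and ADD the total work is Theta(n).
def binary_to_decimal(bits: str) -> int:
    value = 0
    for b in bits:                    # n iterations, one per bit
        value = value * 2 + int(b)    # one MUL + one ADD per bit
    return value

print(binary_to_decimal("1010011101"))   # 669
```

Horner's rule performs exactly one MUL and one ADD per bit, which matches the $O(M(n) \cdot n)$ bound above with $M(n) = O(1)$.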

1010011101 to Octal

=> __001__ __010__ __011__ __101__

=> 1235

We grouped the bits together, so $O(\log n)$
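
For completeness, a small Python sketch of the grouping step (my own illustration, not from the original answer):

```python
# Sketch: binary string -> octal string by grouping bits in threes
# from the right; each group maps to one octal digit independently,
# so the groups could even be translated in parallel.
def binary_to_octal(bits: str) -> str:
    padded = bits.zfill(-(-len(bits) // 3) * 3)   # left-pad to a multiple of 3
    groups = [padded[i:i + 3] for i in range(0, len(padded), 3)]
    return "".join(str(int(g, 2)) for g in groups)

print(binary_to_octal("1010011101"))   # 1235
```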

1010011101 to decimal requires **each** digit to be multiplied by some power of 2 and the products summed. So, $O(n)$