
What is the following function doing?

unsigned int foo(unsigned int x)
{
    unsigned int c = sizeof x;
    c <<= 3;
    if (x == 0) return c;
    c--;
    while (x = x & (x - 1)) c--;
    return c;
}
1. Counting the number of bits in the binary representation of x
2. Counting the number of set bits in the binary representation of x
3. Counting the number of unset bits in the binary representation of x
4. None of the above

### 1 comment

1. unsigned int foo(unsigned int x){
2.    unsigned int c = sizeof x;
3.    c <<= 3;
4.    if(x == 0) return c;
5.    c--;
6.    while(x = x & x-1) c--;
7.    return c;
8. }

line no 2 : c = the number of bytes in x.
line no 3 : c <<= 3 is c * 8, which converts bytes into bits.
line no 6 : counting the number of 1s and decrementing c by 1 each time; combined with the c-- on line 5,
c will be decremented as many times as there are 1s in x.
line no 7 : return c, i.e., the number of 0s (unset bits) in x.

 c<<=3 is c*8 which is converting Byte into bits.

I did not understand the above line.

According to me, it should be:

line 3 => c = 4     // by default, sizeof(int) = 4

line 4 => c = c << 3 = c * 8 = 4 * 8 = 32

The size of int depends on the machine. Is it 4 by default? No.
Okay sir, but please explain line no 4?
Please explain line no 7.

just to clear self doubt-

bool isPowerOfTwo(int x)
{
    // x checks that x is non-zero, and !(x & (x - 1)) checks whether x is a power of 2
    return (x && !(x & (x - 1)));
}

In this code, will the time complexity be O(1) or O(log x)? There are log x bits to evaluate, and it's bitwise.

Logic behind the number of set bits and the decrement of c

edited
line no 4 : c <<= 3 is c * 8, which converts bytes into bits.
line no 6 : c is decremented once here because the while loop will run only k-1 times, where k is the number of 1s in x.
line no 7 : counting the number of 1s, decrementing c by 1 on each iteration.
c will be decremented k-1 times, where k is the number of 1s in x.
So overall, after the while loop, the total number of 1s in x has been subtracted from c.
line no 8 : return c, i.e., the number of 0s (unset bits) in x.

### Isn't it?

@Sushmita yes, it should be $O(\log n)$ in worst case where all bits are 1. But we can more precisely say it as $O(k)$, where $k$ is the number of 1's in the input.
Thanks a lot.
Can you please explain line no. 7, how it's counting the number of 1's?

What is the while loop actually doing?

@Digvijay Pandey

sizeof (unsigned int) = 2 B or 4 B

then which one should we take?

Let's take a number that is not a power of 2. Let's take 53.

In binary, 53 = 110101. (4 ones, 2 zeroes)

    c <<= 3;


will give us the total number of bits x is stored in (for illustration, we work with the 6-bit pattern 110101 here). See Digvijay Pandey's answer.

Now,

    while(x = x & x-1) c--;


When x is bitwise-ANDed with (x - 1), the lowest set bit of x is cleared, so the result contains one fewer 1 than x.

53 AND 52 = 110101 AND 110100 = 110100 (4 1's reduced to 3 1's)

110100 is 52. So,

52 AND 51 = 110100 AND 110011 = 110000 (3 1's reduced to 2 1's)

110000 is 48, So

48 AND 47 = 110000 AND 101111 = 100000 (2 1's reduced to 1 1)

100000 is 32, So

32 AND 31 = 100000 AND 011111 = 0 (1 1 reduced to 0 1's)

It took 4 AND steps, but the result of the last step is 0, so the loop body decremented c only 3 times; the c-- before the loop supplies the fourth decrement.

=> We decremented c four times in total.

=> From the bits used to represent 53, we subtracted 4, the number of 1's.

=> Number of bits remaining is the count of 0's in 53.

Option C