Each time we `&` a number $n$ with $(n-1)$, the least significant set bit (the rightmost 1) in the binary representation of $n$ changes to 0.

This is because subtracting 1 always changes the least significant bit: if it was 1, it becomes 0, and if it was 0, it becomes 1 and a borrow propagates leftward.

In the first case (last bit 1 -> 0), no other bits are affected, so `n & (n-1)` is simply $n$ with its last bit cleared. In the second case (last bit 0 -> 1), the borrow complements every bit from the least significant bit up to and including the lowest set bit of $n$ (consider 8 and 7, whose binary representations are 1000 and 0111). ANDing the two then zeroes the lowest set bit along with all the bits to its right, which were already 0 in $n$. So in both cases, `n & (n-1)` turns the least significant 1 of $n$ into a 0.

Thus, the above code is simply counting the number of 1's in the binary representation of the given number. This is efficient because the loop runs in $O(k)$, where $k$ is the number of set bits, compared to $O(d)$, where $d$ is the total number of bits, for a naive loop that examines every bit.