A char takes 1 byte. But when passed to a variadic function like printf (or used in arithmetic), it is promoted to int. For a signed char, this promotion sign-extends the value: non-negative values are padded with 0 bits on the left, negative values with 1 bits.
The maximum value that can be stored in a signed char is 127. Whether a plain char is signed or unsigned by default is implementation-defined; let's assume signed here. Now take a char holding 125 and add 10: the addition itself is done in int (because of integer promotion) and yields 135, but 135 does not fit in a signed char, so converting it back wraps it to 135 - 256 = -121 (in the usual two's-complement representation; strictly, this out-of-range conversion is implementation-defined). Printed with %d, this gives -121. Printed with %u, the sign-extended negative int is reinterpreted as unsigned, giving UINT_MAX - 120 (that is, 2^32 - 121 = 4294967175 with a 32-bit int), not UINT_MAX - 135.