For the statement `printf("%f,%f\n", a, 4/2);` the observed output would be `2.000000,0.000000` (assuming `a` holds `2.0`). Note that `4/2` is integer division, so an `int` is passed where `%f` expects a `double` — the behavior is undefined, and `0.000000` is just what this particular platform happens to print.
Refer to https://stackoverflow.com/questions/11673503/using-f-to-print-an-integer-variable where you can find this:

> `printf`'s `"%f"` format expects an argument of type `double`, and prints it in decimal form with no exponent. Very small values will be printed as `0.000000`.
When you do this:

```c
int x = 10;
printf("%f", x);
```
we can explain the visible behavior given a few assumptions about the platform you're on:

- `int` is 4 bytes
- `double` is 8 bytes
- `int` and `double` arguments are passed to `printf` using the same mechanism, probably on the stack
So the call will (plausibly) push the `int` value `10` onto the stack as a 4-byte quantity, and `printf` will grab 8 bytes of data off the stack and treat it as the representation of a `double`. 4 bytes will be the representation of `10` (in hex, `0x0000000a`); the other 4 bytes will be garbage, quite likely zero. The garbage could be either the high-order or low-order 4 bytes of the 8-byte quantity. (Or anything else; remember that the behavior is undefined.)