97 views
/* sizeof(int) = 4; sizeof(float) = 8; sizeof(unsigned char) = 1 */

What is the output of the following program?

#include <iostream>
#include <stdio.h>
using namespace std;

int main() {
    union Data {
        int i;
        float f;
        unsigned char str[20];
    } data;

    printf("size = %d\n", sizeof(data));
    data.i = 10;
    data.f = 220.5;
    printf("data.i: %d\n", data.i);
    return 0;
}
0

Prerequisites for solving this problem:

1) Storage of floating-point numbers in memory

2) Unions

If you take sizeof(float) = 8 bytes, then the answer depends on whether the machine is little-endian or big-endian.

If it is little-endian ===> 0 is printed.

If it is big-endian ===> $2^{30} + 2^{22} + 2^{21} + 2^{19} + 2^{17} + 2^{16} + 2^{15} + 2^{12}$ is printed.

If you take sizeof(float) = 4 bytes, then $2^{30} + 2^{25} + 2^{24} + 2^{22} + 2^{20} + 2^{19} + 2^{18} + 2^{15} = 1130135552$ is printed.

0
Should be 20 and 10,

right?
0
Ma'am, it is a union,

and the member updated last is the float.
0
What is the union size, 20 or 24?
0
The size of a union is the maximum of its members' sizes ===> str has size 20 ===> union size = 20.
0

"If it is big-endian ===> $2^{30} + 2^{22} + 2^{21} + 2^{19} + 2^{17} + 2^{16} + 2^{15} + 2^{12}$ is printed."

How? We need to know the floating-point representation, right?

0
But why do we need to convert it to floating point?

Why does it not print as an int?
0
Yes sir, I calculated it with that floating-point representation.

And yes sir, I made a mistake: I didn't consider the 20 × 8 bits... it is wrong.
0
@Shaik

Can you tell me why endianness is needed here?

Why can't sizeof(int) directly give the answer?
+2

@srestha

Ma'am,

let us assume sizeof(int) = sizeof(float) = 32 bits = 4 bytes:

union {
    int i;
    float f;
};

The memory allocated to this union is 4 × 8 = 32 bits.

That memory can be referred to as an int or as a float; whichever member was updated last gives its currently correct interpretation.

data.i = 10 ===> the 32-bit memory location is referred to as an int (if you print it with %f, it shows something else).

data.f = 220.5 ===> the 32-bit memory location is referred to as a float (if you print it with %d, it shows something else).

Why does this happen?

220.5 ===> floating point ===> stored in the IEEE 754 single-precision format: 32 bits ==> 1 sign bit, 8 exponent bits, 23 mantissa bits.

How is it stored?

$(-1)^S \cdot (1.M) \cdot 2^{\text{true exponent}}$ ===> $220.5 = (1101\,1100.1)_2 = (1.1011\,1001)_2 \times 2^7$

===> true exponent = 7 ===> stored exponent = 7 + 127 = 134 = 128 + 6 = $(1000\,0110)_2$

What do these 32 bits look like?

$\underbrace{0}_{\text{sign}}\underbrace{10000110}_{\text{exponent}}\underbrace{10111001000000000000000}_{\text{mantissa}}$

If you access it with %f, the value is correct.

If you access it with %d, these 32 bits are interpreted as an integer:

===> $0\cdot 2^{31} + 1\cdot 2^{30} + \dots + 0\cdot 2^{1} + 0\cdot 2^{0} = 2^{30} + 2^{25} + 2^{24} + 2^{22} + 2^{20} + 2^{19} + 2^{18} + 2^{15} = 1130135552$

Now let us assume sizeof(int) = 4 bytes and sizeof(float) = 8 bytes:

The memory allocated to this union is 8 × 8 = 64 bits.

That memory can be referred to as an int or as a float; whichever member was updated last gives its currently correct interpretation.

data.i = 10 ===> the int refers to 32 of those bits (the LSB 32 bits or the MSB 32 bits, whichever the implementation uses) (if you print it with %f, it shows something else).

data.f = 220.5 ===> the 64-bit memory location is referred to as a float (if you print it with %d, the result depends on which 32 bits are accessed ===> little-endian or big-endian).

How does the output change?

220.5 ===> floating point ===> stored in the IEEE 754 double-precision format: 64 bits ==> 1 sign bit, 11 exponent bits, 52 mantissa bits.

How is it stored?

$(-1)^S \cdot (1.M) \cdot 2^{\text{true exponent}}$ ===> $220.5 = (1101\,1100.1)_2 = (1.1011\,1001)_2 \times 2^7$

===> true exponent = 7 ===> stored exponent = 7 + 1023 = 1030 = 1024 + 6 = $(100\,0000\,0110)_2$

What do these 64 bits look like?

$\underbrace{0}_{\text{sign}}\underbrace{10000000110}_{\text{exponent}}\underbrace{10111001\,0000\ldots0}_{\text{mantissa (52 bits)}}$

If you access it with %f, the value is correct.

If you access it with %d on a big-endian machine, the most significant 32 bits are interpreted as an integer:

===> $0\cdot 2^{31} + 1\cdot 2^{30} + \dots = 2^{30} + 2^{22} + 2^{21} + 2^{19} + 2^{17} + 2^{16} + 2^{15} + 2^{12} = 1080791040$

If it is little-endian ===> the least significant 32 bits are accessed ===> $000\ldots0 = 0$.

+1
Thanks :)