Sir, why haven't you considered the leading $0$ of $(\textbf{0}11100010)_2$ as the sign bit? I calculated the answer as $226$.

The Gateway to Computer Science Excellence


+16 votes

The octal representation of an integer is $(342)_8$. If this were to be treated as an eight-bit integer in an $8085$ based computer, its decimal equivalent is

- $226$
- $-98$
- $76$
- $-30$

+24 votes

Best answer

0

@arjun_sin, it is not mentioned anywhere in the question that the first bit is a sign bit. I am confused about when we consider the first bit a sign bit and when we ignore it.

+1

In this question they have mentioned an 8-bit integer in an 8085, so we consider 8 bits from LSB to MSB. If nothing is mentioned, we take it as a positive number.

+1

8-bit integer ⟹ all integers are 8 bits.

If it weren't signed, we couldn't represent negative integers; therefore it should be a signed number.
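To illustrate the comment above (a sketch with values I chose, not from the thread): an 8-bit two's-complement integer spans $-128$ to $127$, so the unsigned value $226$ cannot be a positive signed 8-bit value.

```python
# 8-bit two's-complement range check (illustrative sketch).
lo, hi = -2**7, 2**7 - 1   # signed 8-bit range
print(lo, hi)              # -128 127
print(226 <= hi)           # False: 226 does not fit as a positive signed 8-bit value
```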

+4 votes

First, write each octal digit as its binary code.

Since $8 = 2^{3}$, write each digit in 3-bit binary:

$(342)_{8} = (011\,100\,010)_{2}$, then ignore the initial zero:

$(342)_{8} = (226)_{10} = (11100010)_{2}$
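The conversion above can be checked with a quick sketch (Python is my choice here, not from the thread):

```python
# Quick check of the octal-to-binary step (illustrative).
n = int("342", 8)           # parse "342" as octal -> 226
print(n)                    # 226
print(format(n, "08b"))     # zero-padded 8-bit binary: 11100010
```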

Since all processors use the 2's-complement number system (a weighted system in which the MSB has weight $-2^{7}$),

$11100010$, whose MSB is $1$, is a negative number:

$11100010_2 = -2^{7} + 2^{6} + 2^{5} + 2^{1} = -128 + 64 + 32 + 2 = -30$

Correct answer: D ($-30$).
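A small sketch (Python, with a helper name I made up) of the two's-complement reinterpretation performed in this answer:

```python
def as_signed_8bit(bits: str) -> int:
    """Interpret an 8-bit binary string as a signed two's-complement value."""
    n = int(bits, 2)                   # unsigned value, 0..255
    return n - 256 if n >= 128 else n  # MSB set -> subtract 2^8

print(as_signed_8bit("11100010"))      # -30
```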
