# GATE2008-12


Some code optimizations are carried out on the intermediate code because

1. They enhance the portability of the compiler to the target processor

2. Program analysis is more accurate on intermediate code than on machine code

3. The information from dataflow analysis cannot otherwise be used for optimization

4. The information from the front end cannot otherwise be used for optimization

0
Need some reference here.
10
must be tagged as out-of-syllabus.
1
Don't skip this topic if you are preparing for other exams like ISRO, BARC.

Option (B) is also true, but the main purpose of doing code optimization on the intermediate code is to enhance the portability of the compiler to target processors, so option (A) is more suitable here. Intermediate code is machine/architecture-independent, so a compiler can optimize it without worrying about the architecture on which the code is going to execute (it may be the same or a different one). Such a compiler can therefore be used for multiple different architectures.

In contrast, suppose code optimization is done on the target code, which is machine/architecture-dependent. Then the compiler has to be specific about the optimizations for that kind of code. In this case the compiler can't be used across multiple architectures, because the target code produced for different architectures is different. Hence portability is reduced.
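The machine-independence point above can be illustrated with a toy pass. The sketch below (invented for illustration; not from any real compiler) does constant folding on three-address code. Because the pass never mentions registers or target instructions, the same implementation can feed any back end:

```python
# Toy machine-independent optimization: constant folding over
# three-address code, represented as (dest, op, arg1, arg2) tuples.
def fold_constants(ir):
    """Evaluate ops whose operands are literal ints; drop those instructions."""
    consts = {}  # variables known to hold a constant value
    out = []
    for dest, op, a, b in ir:
        a = consts.get(a, a)  # substitute known constants
        b = consts.get(b, b)
        if isinstance(a, int) and isinstance(b, int):
            consts[dest] = {"+": a + b, "-": a - b, "*": a * b}[op]
        else:
            out.append((dest, op, a, b))
    return out, consts

# t1 = 2 * 3; t2 = t1 + x   ->   only t2 = 6 + x survives
ir = [("t1", "*", 2, 3), ("t2", "+", "t1", "x")]
remaining, known = fold_constants(ir)
# remaining == [("t2", "+", 6, "x")], known == {"t1": 6}
```

Nothing here depends on the eventual target, which is exactly why the pass can be shared by every back end.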

0

Can anyone explain what target processors are?

4

@Parth Patel 1 Target processor means the processor of the machine on which the code will run.

0

@toxicdesire thank you

0

Option (B) is also true

@vnc

@Parth Patel 1

Why is option (B) also true in another context?

I mean, why is program analysis more accurate on intermediate code than on machine code?

Ans is (A)

Intermediate code is machine-independent. So optimizations done on the intermediate code carry over to whatever target machine code the source program is eventually translated into.

0
Yes. And that is the most appropriate choice. There are a lot of architectures present and no one would want to do optimization for each of them.
6
What does optimization have to do with the PORTABILITY of code? Why would you want to optimize the code just to make it portable?!

GENERATING the intermediate code itself ENHANCES the portability of the code. Optimization does not matter where portability is concerned.

So even if you DON'T optimize your intermediate code further, that is in no way going to harm your portability. According to me, optimizations on intermediate code are easy, and a wide range of optimizations are available for INTERMEDIATE code precisely because it is EASIER to optimize intermediate code than machine code, where you need to consider the machine architecture as well.

So the answer to this question is (B).
0
It's the portability of the compiler, not of the code.
1
But even if we talk of the portability of the COMPILER, doesn't generating intermediate code already enhance its portability?! Because we can club any back end with the generic front end that generates only intermediate code.

Could you please explain HOW THE PORTABILITY OF THE COMPILER WOULD BE AFFECTED IF YOU DON'T OPTIMIZE THE CODE. Would not optimizing the intermediate code make it LESS PORTABLE??
8
It is not that "WE MUST optimize intermediate code". Rather, if possible, all optimizations should be done on the intermediate code so that the same implementation of the optimizations can be used with different front and back ends.

Say you have 5 front ends and 6 back ends. If we optimize at the front end, we need 5 implementations, and if we optimize at the back end, we need 6 implementations. But if we optimize at the intermediate level, we need JUST 1 implementation.

(There will always be back-end-dependent optimizations, though, which can never be avoided.)
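The counting argument in the comment above can be written out explicitly. The numbers 5 and 6 are the commenter's example; the variable names are illustrative only:

```python
# Counting optimizer implementations for m front ends and n back ends.
front_ends, back_ends = 5, 6

impls_at_front = front_ends          # optimizer duplicated per front end: 5
impls_at_back = back_ends            # optimizer duplicated per back end: 6
impls_at_ir = 1                      # one shared IR-level optimizer: 1

# The same structure also saves work for the compiler as a whole:
monolithic = front_ends * back_ends  # 30 source-to-target compilers without a shared IR
with_ir = front_ends + back_ends     # 11 components (5 front ends + 6 back ends) with one
```

This m + n versus m * n saving is the standard argument for a shared intermediate representation.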
1
But what's wrong with option (B)? It also looks correct.
0

Code optimization is any method of code modification that improves code quality and efficiency. A program may be optimized so that it becomes smaller, consumes less memory, executes more rapidly, or performs fewer input/output operations.

So option B must be correct

0
You should give better reasoning.
0
I thought the same thing: why (A)? According to me, the main subject here is code optimization and the answer should relate to that, not to intermediate code.

http://stackoverflow.com/questions/33184269/what-is-the-purpose-of-code-optimization-at-intermediate-phase-in-compiler

Program analysis is faster, not more accurate, as explained in the link above. IR code is cleaner and easier to analyze.

First, it has a sequential representation (similar to binary code) which can be easily modified. Second, the IR preserves most of the information available in the abstract syntax tree, including global, local and temporary variable definitions and types. This expressiveness enables the compiler to optimize the code much more effectively. Third, it is low-level: its instructions are primitive, and each IR instruction maps to only one or a few target ISA instructions. This helps the code generator do its job quickly.
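The third point, primitive IR instructions mapping almost one-to-one onto target instructions, can be sketched with a toy lowering pass. The three-address tuples and mnemonics below are invented for illustration and do not correspond to any real ISA:

```python
# Toy lowering: each three-address IR instruction becomes one
# assembly-like line for a made-up three-operand target.
def lower(ir):
    asm = []
    for dest, op, a, b in ir:
        mnem = {"+": "ADD", "*": "MUL"}[op]  # op -> invented mnemonic
        asm.append(f"{mnem} {dest}, {a}, {b}")
    return asm

# a = b + c * d  as three-address code:
ir = [("t1", "*", "c", "d"), ("a", "+", "b", "t1")]
for line in lower(ir):
    print(line)
# MUL t1, c, d
# ADD a, b, t1
```

Because each IR instruction is this primitive, the code generator's job is essentially a table lookup per instruction.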

0
(B) seems correct... Do you know what answer was given by IIT on the 2008 answer sheet?

As this is not a multiple-select question, answer (A) is more accurate than (B).

A compiler has a front end and a back end.

The front end has 4 parts (lexical analysis, syntax analysis, semantic analysis, and intermediate code generation).

This front end is the same across target machines because it does not generate target code for any particular machine.

Whereas the back end has only 2 parts (code optimization and target code generation).

This back-end part is different for different target machines.

That's why option (A) is correct.

