The document discusses code optimization, highlighting the importance of improving code efficiency through various transformations applied by optimizing compilers. It categorizes optimizations into machine-independent and machine-dependent types, detailing techniques such as common sub-expression elimination, copy propagation, and dead-code elimination. Additionally, it covers loop optimizations and peephole optimization, emphasizing the significance of reducing execution time and memory usage in compiled programs.
Code optimization

Code Optimization
• The code produced by straightforward compiling algorithms can often be made to run faster or take less space, or both.
• This improvement is achieved by program transformations that are traditionally called optimizations.
• Compilers that apply code-improving transformations are called optimizing compilers.
• Optimizations are classified into two categories:
▪ Machine-independent optimizations
▪ Machine-dependent optimizations

Code Optimization
Machine-independent optimizations:
•Machine-independent optimizations are program transformations that improve the target code without taking into consideration any properties of the target machine.
Machine-dependent optimizations:
•Machine-dependent optimizations are based on register allocation and utilization of special machine-instruction sequences.

The optimized code has the following advantages:

• Optimized code has faster execution speed.
• Optimized code utilizes memory efficiently.
• Optimized code gives better performance.

Code Optimization
The criteria for code improvement transformations:
Simply stated, the best program transformations are those that yield the
most benefit for the least effort. The transformations provided by an
optimizing compiler should have several properties. They are:
1. The transformation must preserve the meaning of programs. That is, the optimization must not change the output produced by a program for a given input, or cause an error, such as a division by zero, that was not present in the original source program.

Code Optimization
2. A transformation must, on the average, speed up programs by a measurable amount. We are also interested in reducing the size of the compiled code, although the size of the code has less importance than it once had. Not every transformation succeeds in improving every program; occasionally an “optimization” may slow down a program slightly.
3. The transformation must be worth the effort. It does not make sense
for a compiler writer to expend the intellectual effort to implement a
code improving transformation and have the compiler expend the
additional time compiling source programs if this effort is not repaid
when the target programs are executed. “Peephole” transformations
of this kind are simple enough and beneficial enough to be included
in any compiler.

Code Optimization
• Flow analysis is a fundamental prerequisite for many important types
of code improvement.
• Generally, control flow analysis precedes data flow analysis. Control flow analysis (CFA) represents the flow of control, usually in the form of graphs; CFA builds representations such as the control flow graph and the call graph.
• Data flow analysis (DFA) is the process of asserting and collecting
information prior to program execution about the possible
modification, preservation, and use of certain entities (such as
values or attributes of variables) in a computer program.

PRINCIPAL SOURCES OF OPTIMIZATION
• A transformation of a program is called local if it can be performed by
looking only at the statements in a basic block; otherwise, it is called
global.
• Many transformations can be performed at both the local and global
levels. Local transformations are usually performed first.

Function-Preserving Transformations (Local Optimization)

There are a number of ways in which a compiler can improve a program without changing the function it computes.

PRINCIPAL SOURCES OF OPTIMIZATION


The transformations
•Common sub-expression elimination,
•Copy propagation,
•Dead-code elimination, and
•Constant folding
are common examples of such function-preserving transformations.
•The other transformations come up primarily when global optimizations are performed.
•Frequently, a program will include several calculations of the same
value, such as an offset in an array. Some of the duplicate calculations
cannot be avoided by the programmer because they lie below the level
of detail accessible within the source language.
PRINCIPAL SOURCES OF OPTIMIZATION
Common Sub-expression Elimination:
•An occurrence of an expression E is called a common sub-expression if E
was previously computed, and the values of variables in E have not
changed since the previous computation.
•We can avoid recomputing the expression if we can use the previously
computed value.
For example
t1 := 4*i
t2 := a[t1]
t3 := 4*j
t4 := 4*i
t5 := n
t6 := b[t4] + t5

PRINCIPAL SOURCES OF OPTIMIZATION


The above code can be optimized using common sub-expression elimination as

t1 := 4*i
t2 := a[t1]
t3 := 4*j
t5 := n
t6 := b[t1] + t5

The common sub-expression t4 := 4*i is eliminated, since its value has already been computed into t1 and the value of i has not changed between that definition and this use.

PRINCIPAL SOURCES OF OPTIMIZATION
Copy Propagation:
•Assignments of the form f := g are called copy statements, or copies for short.
•The idea behind the copy-propagation transformation is to use g for f wherever possible after the copy statement f := g.
•Copy propagation means the use of one variable instead of another.
•This may not appear to be an improvement, but as we shall see, it gives us an opportunity to eliminate x in the example below.
For example:
x = Pi;
……
A = x*r*r;
The optimization using copy propagation can be done as follows:
A = Pi*r*r;
Here the variable x is eliminated.

PRINCIPAL SOURCES OF OPTIMIZATION


Dead-Code Eliminations:
•A variable is live at a point in a program if its value can be used
subsequently; otherwise, it is dead at that point.
•A related idea is dead or useless code, statements that compute values
that never get used.
•While the programmer is unlikely to introduce any dead code intentionally, it may appear as the result of previous transformations.
An optimization can be done by eliminating dead code.
Example:
i=0;
if(i==1)
{
a=b+5;
}

PRINCIPAL SOURCES OF OPTIMIZATION


• Here, the ‘if’ statement is dead code because its condition will never be satisfied.
• We can eliminate both the test and the body from the object code.
• More generally, deducing at compile time that the value of an expression is a constant and using the constant instead is known as constant folding.
• One advantage of copy propagation is that it often turns the copy statement into dead code.

For example,
a=3.14157/2 can be replaced by
a=1.570785, thereby eliminating a division operation.

Loop Optimizations
• We now give a brief introduction to a very important place for
optimizations, namely loops, especially the inner loops where programs
tend to spend the bulk of their time.
• The running time of a program may be improved if we decrease the
number of instructions in an inner loop, even if we increase the amount
of code outside that loop.
• Three techniques are important for loop optimization:
• Code motion, which moves code outside a loop.
• Induction-variable elimination, which can eliminate redundant induction variables from inner loops.
• Reduction in strength, which replaces an expensive operation by a cheaper one, such as a multiplication by an addition.

Loop Optimizations
Code Motion:
•An important modification that decreases the amount of code in a loop is
code motion.
•This transformation takes an expression that yields the same result independent of the number of times a loop is executed (a loop-invariant computation) and places the expression before the loop.
•Note that the notion “before the loop” assumes the existence of an entry
for the loop.
For example, evaluation of limit-2 is a loop-invariant computation in the following
while-statement:
while (i <= limit-2) /* statement does not change limit*/
Code motion will result in the equivalent of
t= limit-2;
while (i<=t) /* statement does not change limit or t */
Loop Optimizations
Induction Variables :
•Loops are usually processed inside out. For example, consider the loop around B3 in the flow graph of the quicksort fragment (the figure is not reproduced here).
•Note that the values of j and t4 remain in lock-step: every time the value of j decreases by 1, that of t4 decreases by 4, because 4*j is assigned to t4. Such identifiers are called induction variables.
•When there are two or more induction variables in a loop, it may be possible to get rid of all but one by the process of induction-variable elimination. For the inner loop around B3, we cannot get rid of either j or t4 completely: t4 is used in B3 and j in B4.
•However, we can illustrate reduction in strength and illustrate a part of the process
of induction-variable elimination.
•Eventually j will be eliminated when the outer loop of B2-B5 is considered.


Loop Optimizations
void quicksort(int m, int n)
/* recursively sorts a[m] through a[n] */
{
    int i, j, v, x;
    if (n <= m) return;
    /* fragment begins here */
    i = m - 1; j = n; v = a[n];
    while (1) {
        do i = i + 1; while (a[i] < v);
        do j = j - 1; while (a[j] > v);
        if (i >= j) break;
        x = a[i]; a[i] = a[j]; a[j] = x;   /* swap a[i], a[j] */
    }
    x = a[i]; a[i] = a[n]; a[n] = x;       /* swap a[i], a[n] */
    /* fragment ends here */
    quicksort(m, j); quicksort(i + 1, n);
}
Loop Optimizations

[The flow-graph figures for the quicksort fragment, showing the basic blocks (B2–B5) before and after optimization, are not reproduced in this text version.]
Machine-Dependent Optimization
• This optimization can be applied on target machine instructions.
• This includes register allocation, use of addressing modes and
peephole optimization.
• Instructions involving register operands are faster and shorter
(instruction length is small); if we make use of more registers during
target code generation, efficient code will be generated.
• Hence, register allocation and use of addressing modes also contribute
to optimization.
• The most popular optimization that can be applied on target machine is
peephole optimization.


Peephole Optimization

• Generally, code generation algorithms produce code statement by statement.
• This may contain redundant instructions and suboptimal constructs. The efficiency of such code can be improved by applying peephole optimization, a simple but effective optimization on the target code.
• The peephole is considered a small moving window on the target code.
• The code in the peephole need not be contiguous. Peephole optimization improves the performance of the target program by examining and transforming a short sequence of target instructions.

Peephole Optimization
• The advantage of peephole optimization is that each improvement applied may open up opportunities for additional improvements.
• It may need repeated passes over the target code to get the maximum benefit.
• It can also be applied directly after intermediate code generation.
• The following are examples of program transformations that are characteristic of peephole optimizations:
• Redundant Loads and Stores
• Algebraic Simplification
• Dead Code Elimination
• Flow-of-Control Optimization
• Reduction in Strength
• Use of Machine Idioms

Peephole Optimization
Redundant Loads and Stores
•The code generation algorithm produces target code in which each instruction has one, two, or three operands.
•Let us assume the instructions have two operands.
•The following is an example that gives the assembly code for the statement x = y + z.
1. MOV y, R0
2. ADD z, R0
3. MOV R0, x
Instruction 1 moves the value of y into register R0; instruction 2 adds the value of z to the register contents, leaving the result in the register; instruction 3 copies the register contents to location x. At this point the value of x is available both in location x and in register R0.

Peephole Optimization
• If the above algorithm is applied on the code a = b + c, d = a + e then it
generates the code given below:
1. MOV b, R0
2. ADD c, R0
3. MOV R0, a
4. MOV a, R0
5. ADD e, R0
6. MOV R0, d

• Here instruction 4 is a redundant load: it reloads into R0 the value that instruction 3 has just stored. It can always be eliminated, and the store in instruction 3 can also be removed when a is not used afterwards.
• Eliminating such redundant statements gives the following code:
1. MOV b, R0
2. ADD c, R0
3. ADD e, R0
4. MOV R0, d

Peephole Optimization
Algebraic Simplification
•There are a few algebraic identities that occur frequently enough to be worth considering.
•Look at the following statements:
x := x + 0
x := x * 1
•They do not alter the value of x. If we keep them as they are, the code generation algorithm applied later may produce several target instructions that are of no use.
•Hence, such statements, whether they appear in three-address code or in target code, can be removed.

Peephole Optimization
Dead Code Elimination
•Removal of unreachable code is an opportunity for peephole optimization.
•A statement immediately after an unconditional jump, or a statement that never gets a chance to be executed, can be identified and eliminated. Such code is called dead code.
•For example, consider the following statement in high-level language code:
#define x 0
…….
if (x)
{
    print value
    -----
}
If this is translated to target code as
if x = 1 goto L1
goto L2
L1: print value
L2: ……..
then value will never be printed, so whatever code is inside the body of "if (x)" is dead code; hence, it can be removed.

Peephole Optimization
Flow-of-Control Optimization
•Sometimes when we apply code generation algorithms mechanically we
may get jump on jumps as follows:
goto L1
L1: goto L2
…….
L2: goto L3
….
L3: if a < b goto L4
L4:
This can be optimized as
goto L3
……….
L3: if a < b goto L4
L4:
Peephole Optimization

Reduction in Strength
•This optimization mainly deals with replacing expensive operations by
cheaper ones.
For example:
o x^2 => x * x
o fixed-point multiplication or division by a power of 2 => shift
o floating-point division by a constant => floating-point multiplication by a constant


Peephole Optimization
Use of Machine Idioms
•While generating the target code, it is better to make use of the rich instruction set supported by the target machine instead of blindly applying the available code-generation algorithms. This may produce more efficient code.
•Features provided by the machine architecture may be identified and used wherever applicable to reduce the overall execution time significantly.
•For example, consider the statement x = x + 1. The code-generation algorithm mentioned under redundant loads and stores produces a load, an add, and a store for it, but many machines provide auto-increment and auto-decrement addressing modes, or a specific instruction such as INR x, which accomplishes the same in a single instruction.

