
Short Notes of Compiler Design

a. What is the difference between parse tree and abstract syntax tree ?
Ans.
S. No. | Parse tree | Abstract syntax tree
1. | An ordered, rooted tree that, in some context-free grammar, depicts the syntactic structure of a string. | The abstract syntactic structure of source code written in a programming language, represented as a tree.
2. | It contains records of the rules (tokens) used to match the input text. | It contains records of the syntax of the programming language.

b. Explain the problems associated with top-down Parser.


Ans. Problems associated with top-down parsing are:
 1. Backtracking
 2. Left recursion
 3. Left factoring
 4. Ambiguity

c. What are the various errors that may appear in compilation process?
Ans. Types of errors that may appear in compilation process are:
 a. Lexical phase error
 b. Syntactic phase error
 c. Semantic phase error

 Numerical: Given the grammar
 S → aAd | bBd | aBe | bAe
 A → f
 B → f
 construct the LR(1) parsing table. Also derive the LALR table from the LR(1) parsing table.

d. What are the two types of attributes that are associated with a grammar symbol?
Ans. Two types of attributes that are associated with a grammar symbol are:
 a. Synthesized attributes
 b. Inherited attributes

e. Define the terms Language Translator and compiler.


Ans. Language translator: A language translator is a program that converts instructions written in a high-level language or assembly language (the source code) into machine language.
Compiler: A compiler is the translator employed when translating from a high-level programming language to a low-level programming language.
f. What is hashing? Explain.
Ans. Hashing is the process of transforming any given key or a string of characters into another
value.
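
As an illustration (not from the text), here is a minimal sketch of how a compiler might hash identifier names into symbol-table buckets; the multiplier 31 and table size 101 are arbitrary illustrative choices.

```python
def hash_name(name, table_size=101):
    """Polynomial rolling hash: maps an identifier to a bucket index."""
    h = 0
    for ch in name:
        h = (h * 31 + ord(ch)) % table_size   # 31 is a small prime chosen arbitrarily
    return h

# Identifiers that hash to the same bucket are chained together in a list.
table = [[] for _ in range(101)]
for ident in ("count", "total", "i"):
    table[hash_name(ident)].append(ident)
```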

g. What do you mean by left factoring a grammar ? Explain.


Ans. A grammar G needs left factoring when two or more alternatives of a production share a common prefix, i.e. it has productions of the form
A → αβ1 | αβ2
Left factoring rewrites these as A → αA′ and A′ → β1 | β2, so that a predictive parser can choose between the alternatives with a single symbol of lookahead.

h. Define left recursion. Is the following grammar left recursive ?

Ans. Left recursion: A grammar G(V, T, P, S) is said to be left recursive if it has a production of the form A → Aα, i.e. the nonterminal being defined appears as the leftmost symbol of its own right-hand side.

Numerical:

If we compare the above grammar with the form A → Aα, it matches.

Yes, the given grammar is left recursive.
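
The grammar itself is not reproduced above, but the check is mechanical: a production whose right-hand side starts with its own left-hand side is left recursive. A minimal sketch (assuming productions are given as lists of symbols, which is an assumption made here) of removing immediate left recursion A → Aα | β by rewriting it as A → βA′, A′ → αA′ | ε:

```python
def remove_immediate_left_recursion(nonterminal, productions):
    """Split A -> A alpha | beta into A -> beta A' and A' -> alpha A' | eps."""
    recursive = [p[1:] for p in productions if p and p[0] == nonterminal]
    others = [p for p in productions if not p or p[0] != nonterminal]
    if not recursive:                          # nothing to do: no left recursion
        return {nonterminal: productions}
    new_nt = nonterminal + "'"
    return {
        nonterminal: [beta + [new_nt] for beta in others],
        new_nt: [alpha + [new_nt] for alpha in recursive] + [["eps"]],
    }

# Example: E -> E + T | T   becomes   E -> T E',  E' -> + T E' | eps
print(remove_immediate_left_recursion("E", [["E", "+", "T"], ["T"]]))
```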

Q. Write down the quadruple, triple, indirect triple for the following expression
(x + y)*(y + z).
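
The answer tables are not reproduced above. As a hedged worked sketch, the three-address code for (x + y)*(y + z) is t1 = x + y, t2 = y + z, t3 = t1 * t2, and the three representations can be written down as plain data (the field names op/arg1/arg2/result follow the usual convention):

```python
quadruples = [              # (op, arg1, arg2, result)
    ("+", "x", "y", "t1"),
    ("+", "y", "z", "t2"),
    ("*", "t1", "t2", "t3"),
]
triples = [                 # (op, arg1, arg2); a result is named by its index
    ("+", "x", "y"),        # (0)
    ("+", "y", "z"),        # (1)
    ("*", "(0)", "(1)"),    # (2)
]
# Indirect triples keep a separate list of pointers into the triple table,
# so statements can be reordered without renumbering the triples themselves.
statement_list = [0, 1, 2]
```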

i. What is an ambiguous grammar ? Give example.


Ans. Ambiguous grammar: A context-free grammar G is ambiguous if there is at least one string in L(G) having two or more distinct derivation (parse) trees.
Example: Let the production rules be given as:
j. List down the conflicts during shift-reduce parsing.
Ans. Conflicts during shift-reduce parsing are of two kinds: shift-reduce conflicts and reduce-reduce conflicts.
1. Shift-reduce conflict:
 a. The most frequent kind of conflict in grammars is the shift-reduce conflict.
 b. This conflict arises when, for a particular lookahead token, the parser can either shift the token or reduce by a production rule, and the parsing table cannot hold both actions.
 c. Recursive grammar definitions frequently lead to this issue, since the parser cannot distinguish between when one rule is finished and when another is just getting started.
2. Reduce-reduce conflict: This arises when two or more different productions can be reduced on the same lookahead token.

Section B: Important Notes of Compiler Design


a. Construct the LALR parsing table for the given grammar

Ans. The given grammar is:

The augmented grammar will be:

The LR (1) items will be:


Table. 1
Since the table does not contain any conflict, the grammar is LR(1).
To construct the LALR table, the LR(1) states with identical cores are merged: I3 and I6 are unioned, I4 and I7 are unioned, and I8 and I9 are unioned.

Table. 2
Since the LALR table also does not contain any conflict, the grammar is LALR(1).
DFA:

b. What is an activation record ? Explain how it is related to run-time storage organization ?
Ans. Activation record:
 1. An activation record is used to organize the data required for a particular procedure run.
 2. When a procedure is called, an activation record is pushed onto the stack, and it is popped when
control is transferred back to the calling function.
Format of activation records in stack allocation:
Sub-division of run-time memory into code and data areas is shown in Fig.

1. Code: It stores the executable target code, which is of fixed size and does not change during execution.
2. Static allocation:
 a. The static allocation is for all the data objects at compile time.
 b. The size of the data objects is known at compile time.
 c. This allocation of data objects is done via static allocation, and the names of these objects are only
bound to storage at compile time.
 d. For static allocation, the compiler can determine how much storage each data item needs. As a result, the compiler can easily determine the location of these data in the activation record.
 e. The compiler can fill in, at compile time, the addresses at which the target code will find the data it operates on.
3. Heap allocation: There are two methods used for heap management:
 a. Garbage collection method:
 i. An object is considered to be garbage when all access paths to it have been eliminated but the data object itself remains.
 ii. Garbage collection is one method for reusing that object's space.
 iii. During garbage collection, any components whose garbage-collection bit is set to "on" are reclaimed and returned to the free-space list.
 b. Reference counter:
 i. Reference counters make an effort to reclaim each heap storage item as soon as it becomes
inaccessible.
 ii. Each memory cell on the heap is connected to a reference counter that keeps track of the values
that point to it.
 iii. The count is raised whenever a new value points to the cell and lowered whenever a value stops
doing so.
4. Stack allocation:
 a. Activation record data structures are stored using stack allocation.
 b. As activations start and terminate, the activation records are pushed and popped, respectively.
 c. The activation record for each call contains storage for the locals of that call. Because each call pushes a new activation record onto the stack, the locals of each activation are guaranteed fresh storage.
 d. When the activation finishes, these locals are deallocated.
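
As an illustrative sketch (the field names below are a common textbook layout, not prescribed by the text), stack allocation can be pictured as pushing an activation record on every call and popping it on return:

```python
class ActivationRecord:
    """One frame: return value, parameters and fresh storage for locals."""
    def __init__(self, proc_name, local_names):
        self.proc_name = proc_name
        self.return_value = None
        self.parameters = {}
        self.locals = dict.fromkeys(local_names)   # new storage on every call

control_stack = []                                 # the run-time control stack

def call(proc_name, local_names):
    control_stack.append(ActivationRecord(proc_name, local_names))  # push on call

def ret():
    return control_stack.pop()                     # pop when control returns

call("main", ["x"])
call("factorial", ["n", "result"])                 # nested call gets its own record
ret()                                              # factorial's locals disappear here
```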

a. Code optimization techniques



c. Write the quadruple, triple, indirect triple for the following expression (s + y)*(y + 2) + (x +
y + z)
Ans. The three address code for given expression:

i. The quadruple representation:

ii. The triple representation:

iii. The indirect triple representation:


d. Discuss the following terms:
i. Basic block
ii. Next use information
iii. Flow graph
Ans. i. Basic block:
 a. A basic block is a sequence of consecutive instructions with exactly one entry point and one exit point.
 b. If the last instruction is a branch, control may leave the block to more than one successor.
ii. Next use information:
 a. Next-use information is needed for dead-code elimination and register allocation.
 b. Next-use is computed by a backward scan of a basic block.
 c. The next-use information will indicate the statement number at which a particular variable that is
defined in the current position will be reused.
iii. Flow graph:
 1. A flow graph is a directed graph in which flow-of-control information is added to the basic blocks.
 2. The nodes of the flow graph are the basic blocks.
 3. The block whose leader is the first statement is called the initial block.
 4. There is a directed edge from block Bi-1 to block Bi if Bi immediately follows Bi-1 in the given sequence; we say that Bi-1 is a predecessor of Bi.
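
A minimal sketch of the usual leader-based partitioning of three-address code into basic blocks (the instruction encoding used here is an assumption, not from the text):

```python
def find_leaders(code):
    """code: list of (label, op, target) tuples standing in for three-address code."""
    leaders = {0}                                       # rule 1: first statement
    label_at = {lab: i for i, (lab, _, _) in enumerate(code) if lab}
    for i, (_, op, target) in enumerate(code):
        if op in ("goto", "if-goto"):
            leaders.add(label_at[target])               # rule 2: target of a jump
            if i + 1 < len(code):
                leaders.add(i + 1)                      # rule 3: statement after a jump
    return sorted(leaders)

def basic_blocks(code):
    cuts = find_leaders(code) + [len(code)]
    return [code[cuts[i]:cuts[i + 1]] for i in range(len(cuts) - 1)]

code = [
    (None, "assign", None),      # i = 0
    ("L1", "assign", None),      # t = 4 * i
    (None, "if-goto", "L1"),     # if i < n goto L1
    (None, "assign", None),      # x = t
]
print(find_leaders(code))        # [0, 1, 3] -> three basic blocks
```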

e. Construct predictive parse table for the following grammar.

Ans.

First we remove left recursion


After removing left recursion, we get

Predictive parsing table:


Section 3: Stack Allocation and Heap Allocation
a. Construct the SLR parse table for the following Grammar

Ans. The augmented grammar is:

The canonical collection of sets of LR(0) items for the grammar is as follows:
Let us number the production rules in the grammar as

SLR Parsing table:

DFA for set of items:


b. Differentiate between stack allocation and heap allocation.
Ans.
No. | Static allocation | Heap allocation
1. | Static allocation allocates memory on the basis of the size of data objects. | Heap allocation makes use of the heap for managing the allocation of memory at run time.
2. | In static allocation, there is no possibility of creating dynamic data structures and objects. | In heap allocation, dynamic data structures and objects are created.
3. | In static allocation, the names of the data objects are bound to fixed storage for addressing. | Heap allocation allocates a contiguous block of memory to data objects.
4. | Static allocation is simple, but not an efficient memory management technique. | Heap allocation does memory management in an efficient way.
5. | Static allocation is faster in accessing data as compared to heap allocation. | Heap allocation is slower in accessing, as holes may be created when free space is reused.
6. | It is easy to implement. | It is comparatively expensive to implement.

a. Construct predictive parse table for the following grammar: E → E + T / T, T → T * F / F, F → F / a / b
Section 4: Peephole Optimization
a. Write syntax directed definition for a given assignment statement:

Ans. The syntax directed definition of given grammar is:

b. What are the advantages of DAG ? Explain the peephole optimization.


Ans. Advantages of DAG:
 1. Using the DAG technique, we automatically identify common sub-expressions.
 2. It is possible to identify which identifiers’ values are utilized in the block.
 3. It is possible to identify which statements compute values that may be applied outside of the
block.
Peephole optimization:
 1. Peephole optimization is a kind of code optimization carried out on a small section of code, i.e. on a very limited window of instructions at a time.
 2. It relies primarily on the principle of replacement, in which a section of code is replaced by a faster or shorter version without affecting the output.
 3. Peephole optimization is a machine-dependent optimization.
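
A minimal sketch of one classic peephole rule, removing a LOAD that immediately re-reads what the previous STORE just wrote (the instruction tuples are an illustrative encoding, not from the text):

```python
def peephole(instructions):
    """Drop a LOAD that immediately re-reads what the previous STORE just wrote."""
    out = []
    i = 0
    while i < len(instructions):
        cur = instructions[i]
        nxt = instructions[i + 1] if i + 1 < len(instructions) else None
        if (nxt is not None and cur[0] == "STORE" and nxt[0] == "LOAD"
                and cur[1:] == nxt[1:]):       # same register, same address
            out.append(cur)                    # keep the store, skip the load
            i += 2
        else:
            out.append(cur)
            i += 1
    return out

code = [("STORE", "R0", "a"), ("LOAD", "R0", "a"), ("ADD", "R0", "b")]
print(peephole(code))   # [('STORE', 'R0', 'a'), ('ADD', 'R0', 'b')]
```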

Section 5: Lexical Phase Error and Syntactic Error


a. What do you understand by lexical phase error and syntactic error ? Also suggest methods
for recovery of errors.
Ans. Lexical and syntactic error:
 a. A lexical phase error is a character sequence that does not match the pattern of a token, meaning
that the compiler might not produce a valid token from the source programme when scanning it.
 b. Reasons due to which errors are found in lexical phase are:
 i. The addition of an extraneous character.
 ii. The removal of a character that should be present.
 iii. The replacement of a character with an incorrect character.
 iv. The transposition of two characters.
For example:
 i. In Fortran, an identifier more than 7 characters long is a lexical error.
 ii. In a Pascal program, the occurrence of the characters -, & and @ is a lexical error.
A syntactic error occurs when the stream of tokens violates the grammar rules of the language, for example a missing semicolon or unbalanced parentheses.
Methods for recovery of errors:
1. Panic mode recovery:
 a. The majority of parser techniques employ this approach because it is the simplest to implement.
 b. If a parser encounters an error, it discards each input symbol individually until one of the
predetermined set of synchronizing tokens is discovered.
 c. Frequently, panic mode correction misses a sizable portion of the input without examining it for
more faults. It ensures that there won’t be an endless loop.
For example:
Let us consider a piece of code:

By using panic mode, the parser skips a = b + c without checking it for further errors.
2. Phrase-level recovery:
 a. When a parser finds an error, it may locally repair the remaining input.
 b. It might substitute a string that enables the parser to proceed for a prefix of the remaining input.
 c. A typical local modification might remove an unnecessary semicolon, insert a needed semicolon,
and replace a comma with a semicolon.
For example:
Let us consider a piece of code:
while (x > 0) y = a + b;
In this code, phrase-level recovery makes a local correction by adding 'do', and parsing continues.
3. Error production: If error production is used by the parser, we can generate appropriate error
message and parsing is continued.
For example:
Let us consider a grammar

When the error production encounters *A, it sends an error message asking the user whether '*' is intended as a unary operator.
4. Global correction:
 a. Global correction is a theoretical concept.
 b. This method increases the time and space requirements during parsing.

b. Discuss how induction variables can be detected and eliminated from the given intermediate
code
Ans. In the given question, there is no induction variable.
If we replace j with i, then the block B2 becomes

In the above block B2, the value of i is incremented by 1 each time around the loop, and the value of t1 is always 4 times the value of i.
So, in the given block, i and t1 are the two induction variables.
In this case, to remove the induction variables we use the strength reduction technique.
In this technique, we move code out of the block or modify the computation in the block so that a costly operation is replaced by a cheaper one.
Modification:
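The modified block itself is not reproduced here. As a hedged illustration of strength reduction on an induction variable, the multiplication 4 * i inside the loop can be replaced by an addition that keeps t1 in step with i; both versions below compute the same sequence of values:

```python
# Before: the loop body recomputes t1 with a multiplication every iteration.
before = []
i = 0
while i < 10:
    t1 = 4 * i            # induction variable t1 stays locked to i
    before.append(t1)
    i = i + 1

# After: t1 is kept in step with i by a cheap addition (strength reduction).
after = []
i, t1 = 0, 0
while i < 10:
    after.append(t1)
    i = i + 1
    t1 = t1 + 4           # the multiplication inside the loop is gone

assert before == after    # the transformation preserves the computed values
```
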

Section 6: Static Scope and Dynamic Scope


a. Test whether the grammar is LL (1) or not, and construct parsing table for it.

Ans. Given grammar:

Parsing table:
Since no cell of the table has more than one entry, the given grammar is an LL(1) grammar.

b. Distinguish between static scope and dynamic scope. Briefly explain access to non local
names in static scope.
Ans. Difference:
No. | Lexical (static) scope | Dynamic scope
1. | The binding of name occurrences to declarations is done statically, at compile time. | Name occurrences are bound to declarations dynamically, at run time.
2. | The structure of the program defines the binding of variables. | The binding of variables is defined by the flow of control at run time.
3. | The environment in which a procedure is defined determines the value of any free variable used within that procedure. | A free variable gets its value from the environment in which the procedure is called.
Access to non-local names in static scope:
 1. The method used to implement access to non-local names (variables) in static scope is called the static chain.
 2. A static chain is a chain of static links that connects selected activation record instances in the stack.
 3. The activation record instance for a subprogram A points to one of the activation record instances of A's static parent via the static link, or static scope pointer.
 4. When a subprogram at nesting level j references an object declared in a static parent at nesting level k, j - k static links must be followed; this static chain is navigated to reach the frame containing the object.
 5. To reach non-local names, the compiler generates code that performs these frame traversals.
For example: Subroutine A is at nesting level 1 and C is at nesting level 3. When C accesses an object of A, 2 static links are traversed to reach A's frame, which contains that object.
Section 7: Code Generator and Code Loop Optimization
a. What are the various issues in design of code generator and code loop optimization ?
Ans. Various issues of code generator:
 1. Input to the code generator:
 a. The source program’s intermediate representation created by the front end, combined with data
from the symbol table, are the inputs to the code generator.
 b. Graphical representations and three address representations are both included in IR.
 2. The target program:
 a. The complexity of building a decent code generator that creates high quality machine code is
significantly influenced by the target machine’s instruction set architecture.
 b. The most popular target machine architectures are stack based, CISC (Complex Instruction Set
Computer), and RISC (Reduced Instruction Set Computer).
 3. Instruction selection:
 a. The target machine must be able to execute the code sequence that the code generator creates from
the IR program.
 b. If the IR is high level, the code generator may use code templates to convert each IR statement
into a series of machine instructions.
 4. Register allocation:
 a. A key problem in code generation is deciding which values to hold in which registers, since the registers of the target machine do not have enough space to hold all values.
 b. Values that are not held in registers need to reside in memory. Instructions involving register
operands are invariably shorter and faster than those involving operands in memory, so efficient
utilization of registers is particularly important.
 c. The use of registers is often subdivided into two subproblems:
 i. Register allocation, during which we select the set of variables that will reside in registers at each
point in the program.
 ii. Register assignment, during which we pick the specific register that a variable will reside in.
 5. Evaluation order:
 a. The order in which computations are performed can affect the efficiency of the target code.
 b. Some computation orders require fewer registers to hold intermediate results than others.
Various issues of code loop optimization:
1. Function preserving transformation: The function preserving transformations are basically
divided into following types:
a. Common sub-expression elimination:
 i. A common sub-expression is just an expression that has previously been computed and is utilized
repeatedly throughout the programme.
 ii. We stop computing the same expression repeatedly if the outcome of the expression is unchanged.
For example:
Before common sub-expression elimination:

After common sub-expression elimination:

 iii. In the given example, the expression a = t * 4 - b + c occurs several times, so the repeated computation is eliminated by storing its value in a temporary variable.
b. Dead code elimination:
 i. Dead code is code that can be omitted from the program without any change in the result.
 ii. A variable is live only while its value may still be used later in the program; otherwise it is declared dead, and code that only computes it is useless.
 iii. Dead code is generally not introduced intentionally by the programmer.
For example:

 iv. If the condition is guaranteed to evaluate to false (zero), then the code inside the 'if' statement will never be executed, so there is no need to generate or write code for this statement: it is dead code.
c. Copy propagation:
 i. Copy propagation is the technique in which, after a copy statement such as x = y, later uses of x are replaced by y.
 ii. In this technique, the value of a variable is substituted into later uses so that the computation of an expression can be done at compile time.
For example:
Here, at compile time, the value of pi is replaced by 3.14 and r by 5.
d. Constant folding (compile time evaluation):
 i. Constant folding is defined as replacement of the value of one constant in an expression by
equivalent constant value at the compile time.
 ii. In constant folding, when all operands of an operation are constants, the original evaluation can be replaced by its result, which is also a constant.
For example: a = 3.14157/2 can be replaced by a = 1.570785 thereby eliminating a division
operation.
2. Algebraic simplification:
a. Peephole optimization is an effective technique for algebraic simplification.
b. Statements such as x = x + 0 or x = x * 1

can be eliminated by peephole optimization.

a. Define a DAG. Construct a DAG for the expression: p+p*(q-r)+(q-r)*s

b. Generate the three address code for the following code fragment.

Ans. Three address code for the given code:


NEW SET OF QUESTIONS
Q1. State any two reasons why the phases of a compiler should be grouped.

Ans. Two reasons for phases of compiler to be grouped are:

 1. It helps in the creation of compilers for various source languages.


 2. Several compilers share similar front end phases.

Q2. Discuss the challenges in compiler design.


Ans. Challenges in compiler design are :

 i. There must be no errors in the compiler.


 ii. It must provide accurate machine code that executes quickly.
 iii. The speed of the compiler itself, i.e., the ratio of the compilation time to the
programme size, must be favourable.
 iv. It has to be transportable (i.e., modular, supporting separate compilation).
 v. It ought to display error and diagnostic messages.

Q3. What is translator ?

Ans. A translator transforms a programme written in a source language into a


programme written in a target language.

Q4. Differentiate between compiler and assembler.

Ans.

S. No. | Compiler | Assembler
1. | It transforms a high-level language into machine language. | It transforms assembly language into machine language.
2. | Debugging is slow. | Debugging is fast.
3. | It is used by C, C++. | It is used by assembly language.

Q5. What do you mean by regular expression ?

Ans. Mathematical symbolisms known as regular expressions are used to


characterize a set of strings in a particular language. It offers simple and practical
notation for expressing tokens.
Q6. Differentiate between compilers and interpreters.

Ans.

S. No. | Compiler | Interpreter
1. | It scans the entire source code and lists every syntax error as it is found. | It scans each line individually, and if there is an error the programme immediately stops running.
2. | The compiler's output is saved in a file, so the programme need not be compiled again each time. | The machine code an interpreter generates is not saved, so the programme must be interpreted again each time it runs.
3. | It takes less time to execute. | It takes more time to execute.

Q7. What is cross compiler ?

Ans. A cross compiler is a compiler that has the ability to generate executable code
for platforms other than the one it is currently executing on.

Q8. How YACC can be used to generate parser ?

Ans. 1. YACC is a tool which will produce a parser for a given grammar.

2. YACC is a compiler-compiler for LALR(1) grammars: from a grammar specification it generates parser source code. As a result, a parser is created using it.

Q9. Write a regular expression to describe the language consisting of strings made of an even number of a's and an even number of b's.

Ans. The language of the DFA that accepts an even number of a's and b's is given by:
Regular expression for the above DFA:

(aa + bb + (ab + ba)(aa + bb)*(ab + ba))*
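
As a quick sanity check (not part of the original answer), the expression can be translated into Python re syntax and compared against a brute-force parity count over all short strings:

```python
import re
from itertools import product

# The textbook '+' (union) written in Python re syntax.
pattern = re.compile(r"^(aa|bb|(ab|ba)(aa|bb)*(ab|ba))*$")

for n in range(7):                     # every string over {a, b} of length <= 6
    for s in map("".join, product("ab", repeat=n)):
        expected = s.count("a") % 2 == 0 and s.count("b") % 2 == 0
        assert bool(pattern.match(s)) == expected
print("matches exactly the strings with an even number of a's and b's")
```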

Q10. Write a CF grammar to represent palindrome.

Ans. S → aSa | bSb | a | b | ε (over the alphabet {a, b})

Q11. List the features of good compiler.

Ans. Features of good compiler are:

 i. It takes less time to compile the vast amount of code.


 ii. The source language can be translated into another language with
decreased memory usage.
 iii. If the source programme needs to be amended frequently, it can only
compile the modified code segment.
 iv. Effective compilers work closely with the operating system while
addressing hardware interrupts.
Q12. What are the two parts of a compilation ? Explain briefly.

Ans. Two parts of a compilation are:

 1. Analysis part: It separates the source programme into its component parts and produces an intermediate representation of the source programme.
 2. Synthesis part: It constructs the desired target program from the intermediate representation.

Q13. What are the classifications of a compiler ?

Ans. Classification of a compiler:

 1. Single pass compiler


 2. Two pass compiler
 3. Multiple pass compiler

Unit-II: Basic Parsing Techniques (Short Question)


Q1. State the problems associated with the top-down parsing.

Ans. Problems associated with top-down parsing are:

 1. Backtracking
 2. Left recursion
 3. Left factoring
 4. Ambiguity

Q2. What is the role of left recursion ?

Ans. Left recursion is a special case of recursion in which a string is recognised as part of a language by the fact that it decomposes into a string from the same language (on the left) followed by a suffix (on the right).
Q3. Name the data structures used by LL(1) parser.

Ans. Data structures used by LL(1) parser are:

 i. Input buffer
 ii. Stack
 iii. Parsing table
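
A minimal sketch of how the three data structures cooperate, using the illustrative LL(1) grammar S → aSb | ε (the grammar and table entries here are assumptions, not from the text):

```python
# Grammar: S -> a S b | eps   (table keyed by (nonterminal, lookahead))
table = {
    ("S", "a"): ["a", "S", "b"],
    ("S", "b"): [],          # eps-production chosen on FOLLOW(S)
    ("S", "$"): [],
}

def ll1_parse(tokens):
    buffer = list(tokens) + ["$"]        # input buffer with end marker
    stack = ["$", "S"]                   # parse stack, start symbol on top
    i = 0
    while stack:
        top = stack.pop()
        if top == buffer[i]:             # terminal (or $): match and advance
            i += 1
        elif (top, buffer[i]) in table:  # nonterminal: expand via the table
            stack.extend(reversed(table[(top, buffer[i])]))
        else:
            return False                 # empty table cell => syntax error
    return i == len(buffer)

print(ll1_parse("aabb"))   # True
print(ll1_parse("aab"))    # False
```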

Q4. Give the data structures for shift reduce parser.

Ans. Data structures for shift reduce parser are:

 i. The input buffer storing the input string.


 ii. A stack for storing and accessing the L.H.S and R.H.S of rules.

Q5. What are various types of LR parser?

Ans. Various types of LR parser are:

 i. SLR parser
 ii. LALR parser
 iii. Canonical LR parser

Q6. Why do we need LR parsing table?

Ans. To parse the input string using the shift-reduce approach, we require LR
parsing tables.

Q7. State limitation of SLR parser.

Ans. Limitations of the SLR parser are:


 i. Since SLR grammars are a limited subset of CFGs, SLR is less powerful than the LR parser.
 ii. Multiple entries (conflicts) may appear in the table when a reduction would not produce the correct rightmost derivation in reverse.

Q8. Define left factoring.

Ans. A grammar G needs left factoring if it has productions of the form A → αβ1 | αβ2, i.e. two alternatives with a common prefix; left factoring rewrites them as A → αA′ and A′ → β1 | β2.

Q9. Define left recursion.

Ans. A grammar G(V, T, P, S) is said to be left recursive if it has a production of the form A → Aα, i.e. the nonterminal appears as the leftmost symbol of its own right-hand side.

Unit-III: Syntax Directed Translation (Short Question)
Q1. What do you mean by Syntax Directed Definition (SDD) ?

Ans. SDD is a generalization of CFG in which each grammar production X → α is associated with a set of semantic rules of the form a = f(b1, b2, ..., bn), where a is an attribute whose value is obtained by applying the function f to the attributes b1, ..., bn.

Q2. Define intermediate code.

Ans. Intermediate code is a type of code produced during compilation that is


extremely similar to machine code.
Q3. What are the various types of intermediate code representation ?

Ans. Different forms of intermediate code are:

 i. Abstract syntax tree


 ii. Polish (prefix/postfix) notation
 iii. Three address code

Q4. Define three address code.

Ans. Three address code is an abstract form of intermediate code that can be
implemented as a record with the address fields. The general form of three address
code representation is :

a = b op c

Q5. What is postfix notations?

Ans. Postfix notation is the type of notation in which the operator is placed at the right end of the expression. For example, to add A and B, we can write AB+ (or, since addition is commutative, BA+).
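
A minimal sketch (not from the text) of why postfix notation is convenient: it can be evaluated with a single stack and no parentheses:

```python
def eval_postfix(tokens):
    """Evaluate a postfix expression: operands are pushed, operators pop two."""
    stack = []
    for tok in tokens:
        if tok in "+-*/":
            b, a = stack.pop(), stack.pop()   # note the order: a op b
            stack.append({"+": a + b, "-": a - b,
                          "*": a * b, "/": a / b}[tok])
        else:
            stack.append(float(tok))
    return stack.pop()

print(eval_postfix("3 4 +".split()))      # 7.0, i.e. AB+ with A = 3, B = 4
print(eval_postfix("3 4 2 * +".split()))  # 11.0, i.e. 3 + 4 * 2 in postfix
```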

Q6. Why are quadruples preferred over triples in an optimizing compiler?

Ans. 1. In an optimizing compiler, quadruples are preferred over triples because it


can move instructions around.

2. The outcome of any given operation in the triples notation is referenced to by its
position, thus if one instruction is relocated, modifications must be made to all
references that lead to that result.

Q7. Give syntax directed translation for case statement.

Ans.
Q8. What is a syntax tree ? Draw the syntax tree for the following statement: c
bcba–*+–*=

Ans. 1. In contrast to a parse tree, which includes unnecessary information, a syntax


tree only displays the syntactic structure of a programme.

2. Syntax tree is condensed form of the parse tree.

Syntax tree of c b c b a – * + – * = :

In the given statement, the number of operands (letters) is less than what the operators require, so the syntax tree drawn will be incomplete.
Q9. Define backpatching.

Ans. Backpatching is a procedure used in the code generation process to fill in


blanks on labels with suitable semantic actions. Backpatching is done for:

 i. Boolean expressions
 ii. Flow of control statements

Q10. Give advantages of SDD.

Ans. SDD's primary benefit is that it aids in selecting the evaluation order. The evaluation of SDD semantic actions may result in the generation of code, the saving of data in a symbol table, or the issuing of error messages.

Q11. What are quadruples ?

Ans. Quadruples are structures with at most four fields, such as op, arg1, arg2 and result.

Q12. Differentiate between quadruples and triples.

Ans.

S. No. | Quadruples | Triples
1. | Quadruples represent three address code by using four fields: OP, ARG1, ARG2, RESULT. | Triples represent three address code by using three fields: OP, ARG1, ARG2.
2. | Quadruples allow code to be moved around during optimization. | Triples face the problem of code immovability during optimization, because results are referred to by position.
Unit-IV: Symbol Tables (Short Question)
Q1. Write down the short note on symbol table.

Ans. A compiler uses a symbol table as a data structure to store details on the
scope, life, and binding of names. The various programme elements, such as
variables, constants, procedures, and the labels of statements, are identified by
these names in the source code.

Q2. Describe data structure for symbol table.

Ans. Data structures for symbol table are:

 1. Unordered list
 2. Ordered list
 3. Search tree
 4. Hash tables and hash functions

Q3. What is meant by an activation record ?

Ans. Activation record is a type of data structure that holds crucial state details for a
certain function call instance.

Q4. What is phrase-level error recovery ?

Ans. Phrase-level error recovery is performed by inserting references to error routines into the empty entries of the predictive parsing table. These routines can modify, insert, or delete symbols from the input while displaying the proper error messages. They can also pop symbols from the stack.

Q5. List the various error recovery strategies for a lexical analysis.
Ans. Error recovery strategies are:

 1. Panic mode recovery


 2. Phrase level recovery
 3. Error production
 4. Global correction

Q6. What are different storage allocation strategies ?

Ans. Different storage allocation strategies are:

 i. Static allocation
 ii. Stack allocation
 iii. Heap allocation

Q7. What do you mean by stack allocation ?

Ans. The storage is arranged as a stack in a stack allocation approach. The control
stack is another name for this stack. This stack uses push and pop operations to
manage the available storage for local variables.

Q8. Discuss heap allocation strategy.

Ans. In order to store data objects during run time, heap allocation allocates and deallocates contiguous blocks of memory as needed.

Q9. Discuss demerit of heap allocation.

Ans. The demerit of heap allocation is that it causes memory gaps during allocation.
Unit-V: Code Generation (Short Question)
Q1. What do you mean by code optimization ?

Ans. The approach employed by the compiler to improve the execution efficiency of
the generated object code is referred to as code optimisation.

Q2. Define code generation.

Ans. Code generation is the process of developing assembly language/machine


language statements that, when executed, accomplish the operations described by
the source programme. The ultimate activity of a compiler is code generation.

Q3. What are the various loops in flow graph ?

Ans. Various loops in flow graph are:

 i. Dominators
 ii. Natural loops
 iii. Inner loops
 iv. Pre-header
 v. Reducible flow graph

Q4. Define DAG.

Ans.

 1. DAG is an abbreviation for Directed Acyclic Graph.


 2. DAGs are a good data structure for performing transformations on basic
blocks.
 3. A DAG depicts how the value generated by each statement in the basic
block is used in the block’s subsequent statement.

Q5. Write applications of DAG.


Ans. The DAG is used in:

 i. Identifying common sub-expressions.


 ii. Deciding which names are used inside and outside the block.
 iii. Identifying which statements of the block compute values that may be used outside the block.
 iv. Simplifying the quadruple list by removing common sub-expressions.

Q6. What is loop optimization ? What are its methods ?

Ans. Loop optimization is a technique in which code optimization is performed on


inner loops. The loop optimization is carried out by following methods:

 i. Code motion
 ii. Induction variable and strength reduction
 iii. Loop invariant method
 iv. Loop unrolling
 v. Loop fusion

Q7. What are various issues in design of code generation ?

Ans. Various issues in design of code generation are:

 i. Input to the code generator


 ii. Target programs
 iii. Memory management
 iv. Instruction selection
 v. Register allocation
 vi. Choice of evaluation order
 vii. Approaches to code generation

Q8. List out the criteria for code improving transformations.

Ans. Criteria for code improving transformations are :


 1. A transformation must preserve meaning of a program.
 2. A transformation must improve program by a measurable amount on
average.
 3. A transformation must be worth the effort.

Q9. What is the use of algebraic identities in optimization of basic blocks ?

Ans. Uses of algebraic identities in optimization of basic blocks are:

 1. Using the strength reduction technique, the algebraic transformation may


be produced.
 2. The algebraic transformations can be achieved using the constant folding
technique.
 3. To perform algebraic transformations on basic blocks, common sub-expression elimination, associativity, and commutativity are employed.

Q10. Discuss the subset construction algorithm.

Ans.

 1. The algorithm partitions the intermediate code into subsets (basic blocks) by locating leaders.
 2. A leader is the first statement, the target of a conditional or unconditional goto, or the statement immediately following a goto.
 3. A basic block begins at a leader and ends just before the next leader statement.
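
In the standard textbook sense, the subset (powerset) construction converts an NFA into an equivalent DFA whose states are sets of NFA states. A minimal sketch, assuming the NFA is given as a dictionary keyed by (state, symbol), with "" standing for an epsilon move (the example NFA is an illustrative one for (a|b)*ab):

```python
def epsilon_closure(states, nfa):
    """All NFA states reachable from 'states' using only epsilon moves."""
    stack, closure = list(states), set(states)
    while stack:
        s = stack.pop()
        for t in nfa.get((s, ""), set()):
            if t not in closure:
                closure.add(t)
                stack.append(t)
    return frozenset(closure)

def subset_construction(nfa, start, alphabet):
    """Build DFA states as sets of NFA states (the powerset construction)."""
    start_state = epsilon_closure({start}, nfa)
    dfa, worklist = {}, [start_state]
    while worklist:
        S = worklist.pop()
        if S in dfa:
            continue
        dfa[S] = {}
        for a in alphabet:
            move = set().union(*(nfa.get((s, a), set()) for s in S))
            T = epsilon_closure(move, nfa)
            dfa[S][a] = T
            worklist.append(T)
    return start_state, dfa

# NFA for (a|b)*ab: state 0 loops on a and b, 0 -a-> 1, 1 -b-> 2 (accepting).
nfa = {(0, "a"): {0, 1}, (0, "b"): {0}, (1, "b"): {2}}
start, dfa = subset_construction(nfa, 0, "ab")
print(len(dfa), "DFA states")   # 3 reachable subset states for this NFA
```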

Q11. How to perform register assignment for outer loops ?

Ans. Global register allocation first allocates registers to variables in inner loops, because that is where a programme spends most of its time, and the same register is used for a variable if it also appears in an outer loop.

Q12. What is meant by viable prefixes ?


Ans. Viable prefixes are right sentential form prefixes that can appear on the stack of
a shift-reduce parser.

Q13. Define peephole optimization.

Ans. Peephole optimisation is a type of code optimisation that only affects a small portion of the code. It is applied to a very small set of instructions in a code segment. The peephole is this limited set of instructions, or small portion of code, on which the optimisation is conducted.

Important Topics:

Phases of compiler simple diagram and technical diagram.

Translator and compiler

DAG.

Quadruple, Triple, Indirect triple and its example

Operator Precedence grammar.

Constant folding.

Assembler

Parsing and its types

Parser with types.

Hashing

Symbol table and error handling table

Left recursion, Right recursion and its example

NFA, REGULAR EXPRESSION, DFA, CFG

Constructing process of NFA for any expression

Annotated parse tree and its example, construction of annotated tree
