SP Question Bank

The document provides information about system programming including definitions of key terms and concepts. It discusses the system hierarchy, machine structure with diagrams, types of computer architectures including von Neumann and Harvard, advantages of high-level languages, the source program lifecycle, levels of system software, and differences between compilers and interpreters.

Uploaded by fgondaliya721

SP Question Bank

SYSTEM PROGRAMMING(IT603)

QUESTION BANK

CHAPTER 1 : Overview Of System Software

Define Terms :

a) System Programming – the process of creating software and utilities that interact with computer hardware, help hardware components run smoothly, and provide a foundation for application software to run efficiently. (Additional: it requires a high level of knowledge about the architecture and functioning of computing machines, and typically uses a low-level programming language.)

b) System Software – software designed to manage and control a computer's hardware and provide essential services to other software applications. It includes the operating system, device drivers, and utility programs.

(c) Machine Language – Machine language is the lowest-level programming language. It consists of binary code that directly communicates with the computer's hardware and tells the computer exactly what to do.

(d) High Level Language – A high-level language is a more user-friendly way to write programs. It uses human-readable syntax and provides a higher level of abstraction, making it easier for programmers to write and understand code. Examples: Python, Java, C++, and JavaScript.

(e) Operating System – An operating system is a specific type of system software that acts as the core software component of a computer and as an intermediary between the computer hardware and the user. It manages hardware resources, provides a user interface, and enables the execution of applications.

1. Explain System Hierarchy.

-> System hierarchy refers to the organization and structure of a computer system from its lowest-level hardware
components to the highest-level software applications. It's often depicted as a pyramid or hierarchy, with hardware
at the base and user applications at the top, illustrating how each layer builds upon the one below it.

APPLICATION SOFTWARE
Designed to perform special functions other than the basic operations carried out by a computer.
Examples : Creating text & image documents, Game playing

UTILITY SOFTWARE
Designed for users to assist in maintenance and monitoring activities. Help maintain and protect system but do not
directly interface with the hardware.
Examples - Anti-virus software, Firewalls
SYSTEM SOFTWARE

Software designed to manage and control a computer's hardware and provide essential services to other software applications.

Examples - Operating system, Device drivers

2. Explain Machine structure with necessary diagrams.

A machine structure diagram typically illustrates the major components of a computer system, including the CPU,
memory, input/output devices, and buses.

Central Processing Unit (CPU):

The CPU is the brain of the computer and performs all calculations and data processing. It consists of the Arithmetic
Logic Unit (ALU) for mathematical operations and the Control Unit (CU) for managing instructions.

Memory:

Memory, often divided into RAM (Random Access Memory) and storage (e.g., hard drive or SSD), stores data and
instructions for the CPU to work with.

RAM is used for temporarily holding data and running programs.

Storage retains data even when the computer is powered off.

Input/Output (I/O) Devices:

These devices allow the computer to interact with the outside world. Common I/O devices include the keyboard,
mouse, monitor, printer, and network interfaces.

Buses:

Buses are data highways that connect various components in the computer. They include:

Data Bus: Transfers data between the CPU, memory, and I/O devices.

Address Bus: Specifies memory locations for reading or writing data.


Control Bus: Manages control signals, indicating data read/write, memory access, and device operations.

Motherboard:

The motherboard is the main circuit board of the computer. It houses the CPU, memory modules, and provides slots
and connectors for expansion cards, such as graphics cards and network cards.

Clock and Timing Circuits:

The clock generates timing signals that synchronize the operation of the CPU and other components. Clock speed is
measured in Hertz (Hz) and determines the computer's processing speed.

Cache Memory:

Cache memory is a small but ultra-fast type of memory located close to the CPU. It stores frequently used data and
instructions to speed up access times.

Control Unit (CU):

The Control Unit interprets and manages instructions from the computer's memory, directing the execution of tasks.

Arithmetic Logic Unit (ALU):

The ALU performs mathematical calculations and logical operations, such as addition, subtraction, and comparisons.

Registers:

Registers are small, high-speed storage locations within the CPU used for holding data temporarily during processing.

System Clock:

The system clock generates timing signals that synchronize the operations of various components within the
computer.

Power Supply Unit (PSU):

The PSU provides electrical power to the computer, converting AC (alternating current) from the wall outlet to DC (direct current) for the computer's components.

3. Types of Computer Architecture:

Ans - Computer architecture refers to the design of a computer system, including its organization, structure, and
functionality. There are primarily two types of computer architectures:

a. Von Neumann Architecture: This is the traditional and most common type of computer architecture. It's
characterized by a single shared memory for both data and instructions. In this architecture, the CPU fetches and
executes instructions sequentially. The von Neumann architecture is used in most general-purpose computers and is
known for its simplicity and ease of programming.

b. Harvard Architecture: In this architecture, there are separate memory units for data and instructions. This
separation allows for parallel access to data and instructions, which can result in faster execution times. Harvard
architecture is commonly used in embedded systems and specialized computing devices where speed and efficiency
are critical.

4. Advantage of High-level Language.

High-level programming languages offer several advantages over low-level languages like assembly or machine code:

• Abstraction: High-level languages provide a higher level of abstraction, making it easier to write and
understand code.

• Portability: Code written in a high-level language is often more portable across different platforms, as the
compiler or interpreter handles platform-specific details.

• Productivity: High-level languages enable faster development and debugging, leading to increased
programmer productivity.

• Readability: High-level code is more human-readable, making it easier to maintain and collaborate on
projects.

• Rich Standard Libraries: High-level languages often come with extensive libraries that simplify common
tasks, reducing the need for reinventing the wheel.

5. Draw and explain the life cycle of the source program.


6. Explain levels of system software.

7. Difference between Von Neumann architecture and Harvard architecture.

Aspect | Von Neumann Architecture | Harvard Architecture
Memory Structure | Single shared memory for data and instructions | Separate memory units for data and instructions
Access Speed | Slower due to sequential instruction and data fetching | Potentially faster due to parallel access to instructions and data
Usage | Commonly used in general-purpose computers | Often used in embedded systems and specialized devices
Complexity | Simpler to implement and program | More complex, especially in hardware design
Instruction/Data Fetching | Instructions and data fetched from the same memory | Instructions and data fetched from separate memory units
Parallelism | Limited parallelism due to shared memory | Supports greater parallelism for improved performance
Examples | Most desktop and laptop computers | Embedded systems, microcontrollers, DSPs
Historical Significance | Pioneered by John von Neumann, widely adopted | Developed at Harvard University, specialized applications
Programming | Typically easier to program | May require more complex programming for memory management
Instruction Set | Single instruction set for both data and instructions | Can have separate instruction sets for data and instructions
Speed and Efficiency Trade-off | May be slower in terms of raw execution speed | Can offer faster execution times for specific applications

In simpler terms, the main difference is that Von Neumann uses one library (a single memory) for everything, while Harvard uses two separate libraries, one for instructions and one for data, which can make some devices, such as the microcontroller in a TV remote, work faster.

8. Difference between compiler and interpreter.

9. What is interface and types of interface?


CHAPTER 2: Overview Of Language Processor

Define Terms:

(a) Specification gap – the gap between the application domain (the way a designer specifies the solution to a problem) and the programming language domain (the way that solution is expressed as a program).

(b) Semantic gap – the gap between the semantics of the application domain and the semantics of the execution domain of the computer.

(c) Execution gap – the gap between the semantics of programs written in the programming language domain and the semantics of programs in the execution domain (machine language).

(d) Language Processor:

• A language processor is a software tool or program that translates high-level programming code (written by humans) into machine code that a computer can execute. It includes both compilers and interpreters.
Or
1. Difference between Language Processor and Programming Language.

Aspect | Language Processor | Programming Language
Definition | Software tool that translates high-level code into machine code or bytecode | Formal language with syntax and rules for writing code
Function | Converts human-readable code into machine-executable code | Provides the syntax and structure for writing code
Types | Includes compilers and interpreters | Includes languages like Python, Java, C++, etc.
Usage | Used in software development and execution | Used by programmers to develop various software applications

2. Explain Programming Language and Language Processor with necessary diagrams.

Programming Language:

• A programming language is a formalized set of rules and syntax that allows humans to communicate
instructions to a computer.

• It provides a way for programmers to write code that specifies how a computer should perform certain
tasks.

• Examples include Python, Java, and C++.

Language Processor:

• A software tool that translates high-level programming code into a form executable by a computer.

• Types include compilers (translate all at once) and interpreters (process line by line).

• Converts human-readable code into machine-readable code for execution.


3. Explain language processing activity with necessary examples.

5. Explain Problem oriented Language and Procedure oriented Language.

Problem-Oriented Language:
A problem-oriented language, also known as a domain-specific language (DSL), is a programming language designed
to solve a specific class of problems or address a particular domain or field. These languages are tailored to simplify
the development of software solutions for specific types of tasks or industries.

Examples include SQL for databases or MATLAB for mathematical computations.


Procedure-Oriented Language:
A procedure-oriented language, also known as a structured programming language, is a type of programming
language that emphasizes the use of procedures or functions to structure a program. These languages focus on
decomposing a program into smaller, manageable procedures or functions that can be called in a structured manner.

Examples include C and Pascal, where you structure programs around functions.

6. List various phases of a language processor. Explain the roles of the first two.

Language processing involves two phases:


1. Analyzing the source program
2. Synthesizing the target program
The components of the language processor involved in analyzing the source program constitute the analysis phase.
The components of the language processor involved in synthesizing a target program constitute the synthesis phase.
Analysis Phase
The analysis phase consists of three components.
1. Lexical Rule – It analyzes the formation of valid tokens in the source program.
2. Syntax Rule – It identifies whether the statements in the source program are in a well-defined format.
3. Semantic Rule – It associates meaning with each valid statement in a source program.
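As a rough illustration of the lexical rule, a tokenizer splits a source statement into valid tokens. The token pattern below is a deliberate simplification for illustration, not a full lexical specification:

```python
import re

# Minimal sketch of lexical analysis: break a statement into tokens
# (identifiers, integer literals, and single-character operators/punctuation).
def tokenize(stmt):
    return re.findall(r"[A-Za-z_]\w*|\d+|[=+\-*/;,()]", stmt)

tokens = tokenize("total = count + 10;")
# tokens: ['total', '=', 'count', '+', '10', ';']
```

A real analysis phase would then apply the syntax rule to this token stream and the semantic rule to the resulting statements.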
Synthesis Phase
The synthesis phase constructs a target language statement for each valid source language statement. However, the
meaning of the target language statement is the same as that of the source language statement.
The synthesis phase involves two crucial tasks.
1. Memory Allocation – Create a data structure in the target program.
2. Code Generation – Generate a target program.
Also explain the symbol table.
Symbol Table:
• A symbol table is a data structure used by the language processor to store information about identifiers
(variables, functions, classes) used in the source code.
• It associates each identifier with relevant attributes, such as its data type, memory location, scope, and other
properties.
CHAPTER 3: Assembler
Define Terms:

(a) Assembler:

• Definition: An assembler translates assembly language code into machine code.

• Example: In x86 assembly language, the instruction MOV AL, 10h moves the hexadecimal value 10h (decimal 16) into the AL (accumulator) register. An assembler translates this into the corresponding machine code.

(b) Label:

• Definition: A label is a named point in code used for control flow.

• Example: In assembly language, loop_start: is a label marking the beginning of a loop. JMP loop_start is used
to jump back to that label.

(c) Opcode:

• Definition: An opcode specifies a CPU operation in machine code.

• Example: In x86 assembly, the opcode ADD is used for addition. ADD AX, BX adds the values in the AX and BX
registers.

(d) Operand:

• Definition: An operand is a value acted upon by an instruction.

• Example: In MOV AX, 5, the value 5 is the operand that is moved into the AX register by the MOV instruction.

1. Explain Symbol table with different types of data structure.

A symbol table is created and maintained by the compiler to keep track of the semantics of variables, i.e., it stores information about the scope and binding of names, and about instances of various entities such as variables, function names, classes, and objects.

Binary search trees, hash tables, and linear lists (searched sequentially) are the data structures commonly used to implement the symbol table.
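As a sketch, a hash-table implementation is what a Python dict gives for free; the attribute names here (type, scope, address) are illustrative:

```python
# A minimal hash-table-backed symbol table: a dict maps each identifier
# to its attributes, giving average O(1) insertion and lookup.
class SymbolTable:
    def __init__(self):
        self._table = {}

    def insert(self, name, **attrs):
        self._table[name] = attrs

    def lookup(self, name):
        return self._table.get(name)   # None if the symbol is undefined

st = SymbolTable()
st.insert("count", type="int", scope="global", address=1000)
st.insert("main", type="function", scope="global", address=2000)
```

A binary search tree would instead give O(log n) lookup with ordered traversal, and a linear list O(n) lookup; the hash table is usually preferred when speed matters most.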
2. Explain different types of Assembly language statements.
4. Write down the function and role of the assembler.

The assembler's function is to translate assembly language code into machine code, resolve symbols, and perform error detection; its role is to bridge the gap between human-readable assembly language and the computer's CPU.

Functions:

1. Translation: Converts human-readable assembly code into machine code.

2. Symbol Resolution: Translates symbolic labels and identifiers into memory addresses.

3. Error Detection: Identifies and reports syntax and semantic errors in the code.

4. Code Generation: Produces binary CPU instructions from assembly code.

5. Optional Optimization: May optimize code for efficiency.

Roles:

1. Bridge: Acts as a bridge between human-readable assembly language and the machine code executed by the CPU.

2. Platform Specific: Tailored to specific CPU architectures for precise control.

3. Low-Level Control: Provides fine-grained control over hardware resources.

4. Debugging and Profiling: Supports debugging and profiling at a low level.

5. Efficiency: Generates code that is often more efficient in execution and memory usage compared to high-
level languages.

5. Write down the features of the assembly language.

Assembly language, as a low-level programming language, possesses several distinctive features that make it unique
and well-suited for certain types of programming tasks. Here are some key features of assembly language:

1. Human-Readable: Assembly language uses mnemonics and symbolic names for instructions and memory
locations, making it more readable and understandable than machine code.

2. Low-Level: It is a low-level language, closely tied to the architecture of the computer's CPU. This enables
precise control over hardware resources.

3. Platform-Specific: Assembly language is specific to a particular CPU architecture, which means code written
for one architecture may not run on another without modification.

4. Direct Hardware Interaction: It allows direct interaction with hardware components, making it suitable for
device drivers, embedded systems, and real-time programming.

5. Efficiency: Assembly language programs are often highly efficient in terms of execution speed and memory
usage, making it ideal for performance-critical applications.

6. No Abstractions: Unlike high-level languages, assembly provides no abstractions or high-level data


structures. Programmers must manage memory and data manually.

7. Fine-Grained Control: Assembly language offers fine-grained control over CPU registers, memory locations,
and instruction execution.

6. Explain and show usage by giving examples of following assembler directives: ORIGIN, EQU, LTORG, START.

1. ORIGIN (ORG):
• Explanation: Specifies the starting memory address for a program.
• Usage Example: ORG 1000 sets the program origin to memory address 1000.
2. EQU:
• Explanation: Defines symbolic constants for use in the code.
• Usage Example: VALUE EQU 10 defines a constant named VALUE with a value of 10.
3. LTORG:
• Explanation: Directs the assembler to allocate memory, at that point in the program, for the literals collected so far (the current literal pool).
• Usage Example: After a stretch of code that uses literals, placing LTORG allocates those literals into memory at that location instead of waiting until the END statement.
4. START:
• Explanation: Indicates the starting address (and entry point) of the program.
• Usage Example: START 100 tells the assembler that the program should begin at memory address 100.

7. Explain & compare various intermediate code forms(representations) for an assembler.

1. Abstract Syntax Tree (AST):


• Explanation: An AST is a tree-like structure that represents the syntactic structure of the assembly
code. It breaks down the code into nodes representing instructions, operands, and their
relationships.
• Usage: ASTs are used during parsing for syntax analysis to ensure the code's correct structure.
2. Symbol Table:
• Explanation: A symbol table is a data structure storing information about symbols like labels,
variables, and constants used in the assembly code. It associates symbols with attributes like data
type, scope, and memory location.
• Usage: Symbol tables enable symbol resolution and ensure correct symbol usage.
3. Intermediate Representation Code (IR):
• Explanation: IR is a high-level, architecture-independent representation of assembly code. It focuses
on the code's semantic meaning, abstracting away low-level details.
• Usage: IR simplifies code optimization and cross-platform code generation.
4. Three-Address Code (TAC):
• Explanation: TAC represents assembly instructions as three-address statements with an operator,
two operands, and a result. It simplifies code optimization and provides a structured representation.
• Usage: TAC streamlines code optimization and facilitates code generation.
5. Control Flow Graph (CFG):
• Explanation: CFG is a directed graph that illustrates control flow in assembly code. Nodes represent
code blocks, and edges represent control flow.
• Usage: CFG aids in analyzing and optimizing control flow and identifying opportunities for code
restructuring.
6. Quadruples:
• Explanation: Quadruples are a compact representation of assembly code. They express instructions
in a simplified form, with operation codes, two operands, and a result.
• Usage: Quadruples ease optimization and code transformation.

8. Explain the role of Mnemonic Opcode Table, Symbol Table, Literal Table and POOL Table in the assembly process of
assembly language program.

1. Mnemonic Opcode Table:

• Role: The Mnemonic Opcode Table contains a mapping of mnemonic instructions used in the
assembly code to their corresponding binary opcodes. It associates each assembly instruction with
its machine-level representation.

• Usage: During the assembly process, the assembler uses this table to look up the binary opcodes for
assembly instructions, allowing it to generate the appropriate machine code.

2. Symbol Table:
• Role: The Symbol Table is a data structure that stores information about symbols in the assembly
code. Symbols include labels, variables, constants, and function names. The table associates each
symbol with attributes such as its name, data type, scope, and memory location.

• Usage: The Symbol Table is crucial for symbol resolution, ensuring that symbols are correctly defined
and referenced in the code. It tracks symbol values, enabling the assembler to generate correct
machine code and perform various checks and optimizations.

3. Literal Table:

• Role: The Literal Table is used to manage string literals and constants encountered in the assembly
code. It stores information about each literal, including its value and memory location.

• Usage: During assembly, when literals are encountered in the code, the Literal Table is updated to
track their values and memory addresses. This ensures efficient memory allocation for literals and
allows the assembler to generate code that correctly references them.

4. POOL Table:

• Role: The POOL Table keeps track of literal pools within the assembly code. A literal pool is a
collection of literals that are placed together in memory to optimize storage efficiency.

• Usage: When a new literal is encountered, it is added to the current literal pool. The POOL Table
helps the assembler keep track of the active literal pool and indicates when a new pool should be
started. This organization improves memory usage and code generation.

In summary, these tables are essential components of the assembly process:

• The Mnemonic Opcode Table aids in translating assembly instructions into machine code.

• The Symbol Table manages symbols, ensuring their correct usage and resolution.

• The Literal Table handles string literals and constants, tracking their values and memory locations.

• The POOL Table organizes literals into pools, optimizing memory allocation.

9. Compare single pass assembler and two pass assembler. Explain two pass assemblers in detail with suitable
examples.

Single pass assembler | Two pass assembler
Scans the source program only once | Scans the source program twice
Resolves forward references by back-patching | Resolves forward references using the symbol table built in pass 1
Generates machine code directly during the single scan | Pass 1 builds tables; pass 2 generates the machine code
Faster, but requires more complex bookkeeping | Slower, but conceptually simpler and more systematic

A two pass assembler is a more sophisticated approach that involves two passes or phases of processing the
assembly code.
Pass 1:
• In the first pass, the assembler reads the entire assembly code and generates a symbol table.
• It identifies labels, their addresses, and attributes.
• It performs error checking and records symbol information.
Pass 2:
• In the second pass, the assembler reads the code again and generates machine code.
• It resolves symbols, including forward references, using the symbol table from pass 1.
• It performs code optimization and produces the final machine code.

10. Consider the following assembly program. Show (i) contents of the Symbol Table, (ii) intermediate codes using Variant representation, and (iii) the corresponding machine codes.

START 100
READ A
READ B
READ C
MOVER AREG, A
ADD AREG, B
ADD AREG, C
MULT AREG, C
MOVEM AREG, RESULT
PRINT RESULT
STOP
A DS 1
B DS 1
C DS 1
RESULT DS 1
END

Instruction opcodes: READ – 09, MOVER – 04, MOVEM – 05, ADD – 01, MULT – 03, PRINT – 10, STOP – 00
Assembler-directive codes: START – 01, END – 02
Register code: AREG – 01
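A minimal pass-1 sketch for part (i): it tracks the location counter and builds the symbol table for the program above. The assumption that every imperative statement occupies one word is made here for illustration:

```python
# (label, opcode, operand) triples for the Q10 program.
source = [
    (None, "START", "100"),
    (None, "READ", "A"), (None, "READ", "B"), (None, "READ", "C"),
    (None, "MOVER", "AREG, A"),
    (None, "ADD", "AREG, B"), (None, "ADD", "AREG, C"),
    (None, "MULT", "AREG, C"),
    (None, "MOVEM", "AREG, RESULT"),
    (None, "PRINT", "RESULT"),
    (None, "STOP", ""),
    ("A", "DS", "1"), ("B", "DS", "1"), ("C", "DS", "1"),
    ("RESULT", "DS", "1"),
    (None, "END", ""),
]

symtab = {}
lc = 0                                  # location counter
for label, opcode, operand in source:
    if opcode == "START":
        lc = int(operand)               # program begins at address 100
        continue
    if opcode == "END":
        break
    if label is not None:
        symtab[label] = lc              # record the label's address
    lc += int(operand) if opcode == "DS" else 1
# symtab: {'A': 110, 'B': 111, 'C': 112, 'RESULT': 113}
```

Under that one-word-per-statement assumption, the symbol table comes out as A = 110, B = 111, C = 112, RESULT = 113; pass 2 would then combine these addresses with the opcode tables above to emit the machine code.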


11. Briefly explain the tasks performed by analysis and synthesis phases of simple assembly schemes.

• Analysis Phase:
• Tasks: This phase involves reading and analyzing the source code to gather information about symbols,
their attributes, and the program's structure.
• Specific Tasks:
• Lexical Analysis: Breaking the source code into tokens (e.g., keywords, identifiers, operators).
• Syntax Analysis: Checking the syntax and structure of the code to ensure it follows the
language's grammar rules.
• Semantic Analysis: Examining the code's meaning and context, including type checking and error
detection.
• Symbol Table Construction: Building a symbol table that stores information about labels,
variables, and constants.
• Purpose: The analysis phase ensures the code is well-formed, resolves symbol references, and identifies
any errors or issues.
• Synthesis Phase:
• Tasks: This phase involves generating the target machine code or executable code based on the analyzed
source code.
• Specific Tasks:
• Code Generation: Translating the assembly instructions into machine code or lower-level
representations.
• Symbol Resolution: Resolving symbolic references using the symbol table constructed during
analysis.
• Error Reporting: Reporting any remaining errors or warnings related to code generation.
• Purpose: The synthesis phase produces the final output, such as object code or an executable program,
ready for execution on the target machine.

12.What are advanced assembler directives? Explain any two with suitable examples.

Advanced assembler directives are specialized instructions or commands that provide additional control and
information to the assembler during the assembly process.

13.Define forward references. How can it be solved using back-patching? Explain with examples.

• Forward References: Forward references occur when a symbol (e.g., a label) is used before it is defined in the
code. For example, a branch instruction may reference a label that appears later in the program.

• Back-Patching: Back-patching is a technique used to resolve forward references. It involves initially assigning
a temporary or placeholder value to the forward reference and later updating it with the correct value once
the target symbol is defined.

Example:
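A sketch of back-patching for a forward branch; the instruction names and the one-address code format are illustrative assumptions:

```python
code = []        # generated code: [opcode, target_address] pairs
symtab = {}      # label -> address
pending = {}     # label -> indices of instructions awaiting that address

def emit_jump(label):
    if label in symtab:                      # backward reference: address known
        code.append(["JMP", symtab[label]])
    else:                                    # forward reference:
        pending.setdefault(label, []).append(len(code))
        code.append(["JMP", None])           # placeholder target

def define_label(label):
    symtab[label] = len(code)                # label = address of next instruction
    for idx in pending.pop(label, []):       # back-patch earlier placeholders
        code[idx][1] = symtab[label]

emit_jump("LOOP_END")      # LOOP_END is used before it is defined
code.append(["NOP", 0])
define_label("LOOP_END")   # placeholder in instruction 0 is patched to 2
code.append(["HLT", 0])
# code: [['JMP', 2], ['NOP', 0], ['HLT', 0]]
```

The key idea is visible in `pending`: each forward reference is remembered, and the moment the target symbol is defined, every waiting instruction is patched with the real address.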
Chapter 4:- Macro and Macro Processors
1. Explain the use and fields of the following macro processor tables: KPDTAB, MDT, EVTAB, SSTAB.

KPDTAB, MDT, EVTAB, and SSTAB are tables used by macro assemblers, tools that expand macro definitions and translate assembly language code into machine code. Each of these tables serves a specific purpose in the macro assembler's operation:

1. KPDTAB (Keyword Parameter Default Table):

• Use: KPDTAB is a table used to store default values for macro parameters. Macros often have
parameters that can be provided by the programmer when the macro is called. If a parameter is not
explicitly provided when the macro is invoked, the assembler looks up the default value for that
parameter in KPDTAB.

• Field: This table typically consists of two columns: one for the parameter names and the other for
their default values.

2. MDT (Macro Definition Table):

• Use: MDT is a critical table in the macro assembly process. It stores the definitions of macros in a
compact format. When a macro is invoked, the assembler looks up its definition in MDT to expand
the macro call into assembly code.

• Field: MDT stores macro definitions, including the macro name, the formal parameters, the body of
the macro, and any labels or code inside the macro definition.

3. EVTAB (Expansion-time Variable Table):

• Use: EVTAB holds the values of expansion-time variables, placeholders that can be set and tested during macro expansion so that a macro can generate different code each time it is called, making macros more versatile.

• Field: EVTAB contains one entry per expansion-time variable, associating the variable with its current value during the expansion of a macro call.

4. SSTAB (Sequencing Symbol Table):

• Use: SSTAB manages sequencing symbols, expansion-time labels (such as .MORE) used inside a macro body for expansion-time control flow, for example as the targets of conditional-expansion jumps.

• Field: SSTAB contains entries for the sequencing symbols used within a macro, associating each one with the location (MDT entry) of the statement it labels, so that expansion-time jumps can be resolved during macro expansion.

2. Explain following facilities for expansion time loop with example.

(1) REPT statement

(2) IRP statement

Facilities for expansion-time loops, such as the REPT and IRP statements, are used in assembly language and macro
assembly to repeat a set of instructions or operations multiple times during macro expansion

REPT Statement (Repeat Block):

The REPT statement is used to repeat a block of assembly instructions a specified number of times. It's particularly
useful for generating repetitive code sequences with slight variations.
E.g., suppose you want to generate a sequence of instructions to clear registers R0 to R3:

REPT 4

CLR R% ; % represents the iteration count (0, 1, 2, 3)

ENDR

IRP Statement (Iterative Replacement of Parameters):

The IRP statement is used to iterate over a list of parameters and generate code for each parameter. It's typically
used when you want to perform the same operation on a list of items.

Suppose you want to generate code to load data from an array of values into registers:

DataArray: .DATA 1, 2, 3, 4

.CODE

IRP Val, DataArray

LDR R0, Val ; Load data into R0

ADD R0, R0, #1 ; Increment the value

ENDIR
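The statement sequences the two examples above expand to can be mimicked in ordinary code; the register and operand syntax here is illustrative, not tied to any one assembler:

```python
# REPT 4 around "CLR R%" emits one CLR per iteration count:
rept_block = [f"CLR R{i}" for i in range(4)]
# rept_block: ['CLR R0', 'CLR R1', 'CLR R2', 'CLR R3']

# IRP generates its body once per parameter value (1, 2, 3, 4):
irp_block = []
for val in (1, 2, 3, 4):
    irp_block.append(f"LDR R0, ={val}")   # load the value into R0
    irp_block.append("ADD R0, R0, #1")    # increment the value
```

Both constructs are loops that run at expansion time: the repetition happens once, when the assembler expands the macro, and the output is straight-line code.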

3. Draw a flowchart and explain simple one pass macro processor.


4. Write and explain the algorithm for macro expansion.

Macro Expansion Algorithm:

1. Start:

• Initialize necessary data structures, such as the Macro Definition Table (MDT) and other relevant
tables.

2. Read the Source Code Line:

• Begin reading the source code line by line, left to right.

3. Check for a Macro Invocation:

• For each line of code, check if it contains a macro invocation (e.g., MCR X, Y). A macro invocation
typically includes the macro name and its arguments.

4. If it's a Macro Invocation:

• Retrieve the macro definition from the MDT based on the invoked macro's name.

• Map the formal parameters in the macro definition to the actual arguments provided in the
invocation.

• Replace the macro invocation with the macro definition, substituting the formal parameters with the
actual arguments.

5. If it's Not a Macro Invocation:

• If the line is not a macro invocation, it's regular assembly code. Include it as is in the expanded code.

6. Repeat for Each Line:

• Continue reading and processing each line of the source code until you reach the end.

7. End of Source Code:

• Once all lines have been processed, you've completed the macro expansion process.

8. Finish:

• End the macro expansion process.

Explanation of the Algorithm:

• The macro expansion algorithm is a one-pass process, meaning it goes through the source code only once.

• It scans each line to identify macro invocations. When it detects one, it performs the following:

• Retrieves the corresponding macro definition from the MDT.

• Replaces formal parameters in the macro definition with the actual arguments provided in the
invocation.

• Expands the macro by substituting the invocation with the modified macro definition.

• Regular assembly code lines that are not macro invocations are included in the expanded code as is.

• The algorithm continues to process each line until the end of the source code is reached.

The main data structure involved is the MDT (Macro Definition Table), which stores the macro definitions for reference during expansion. A Macro Name Table (MNT) is typically used to look up macro names, and additional tables such as EVTAB (Expansion-time Variables Table) and SSTAB (Sequencing Symbols Table) may be used to manage expansion-time variables and sequencing symbols within macros.
5. Explain in brief the design of a macro assembler.

1. Macro Definition and Storage:


• The macro assembler provides a mechanism for defining macros. These macro definitions typically
include the macro name, formal parameters, and the body of the macro.
• The assembler stores these macro definitions in a data structure known as the Macro Definition
Table (MDT). Each entry in the MDT links the macro name to its definition.
2. Macro Invocation Handling:
• When the assembler encounters a macro invocation in the source code, it looks up the invoked
macro's name in the MDT to retrieve its definition.
• The formal parameters in the macro definition are mapped to the actual arguments provided in the
invocation.
• The assembler replaces the macro invocation with the expanded macro code, substituting formal
parameters with actual arguments.
3. Expansion-Time Variables and Sequencing Symbols:
• Macro assemblers often support expansion-time variables and sequencing symbols within macro
definitions. These features allow for dynamic data generation and conditional expansion.
• Expansion-time variables can be defined and substituted within macro bodies.
• These are managed using tables such as EVTAB (Expansion-time Variables Table) and SSTAB
(Sequencing Symbols Table).
4. Looping and Iteration:
• Macro assemblers may provide facilities for looping and iteration during macro expansion. This
allows for the repetition of instructions or operations.
Common looping constructs include the REPT (repeat block) and IRP (indefinite repeat over a
parameter list) statements.
5. Error Handling:
• A well-designed macro assembler includes error handling mechanisms. It can detect and report
errors such as undefined macros, incorrect macro invocations, or syntax issues in macro definitions.
6. Single Pass or Multi-Pass:
• Macro assemblers can be designed for single-pass or multi-pass processing. In a single-pass design,
macros are expanded during the same pass as regular assembly code. Multi-pass assemblers may
involve separate passes for macro expansion and code generation.
7. Output Code Generation:
• As the assembler processes the source code, it generates the final machine code or assembly code.
The expanded code is usually stored in an output file.
8. Optimization:
• Some macro assemblers incorporate optimization techniques to generate efficient code during
macro expansion. For example, they may eliminate redundant code or optimize the placement of
instructions.
9. Documentation and Comments:
• A well-designed macro assembler should also allow for documentation and comments within macro
definitions and invocations, enhancing code readability and maintainability.
10. Testing and Debugging:
• A macro assembler typically includes testing and debugging features to help developers identify and
rectify issues in macros and the expanded code.

Or

• A macro processor is functionally independent of the assembler, and the output of the macro processor will be a part of the input into the assembler.

• A macro processor, similar to any other assembler, scans and processes statements.

• Often, the use of a separate macro processor for handling macro instructions leads to less efficient program translation, because many functions are duplicated by the assembler and the macro processor.

Design of Macro Assembler

• To overcome these efficiency issues and avoid duplicate work by the assembler, the macro processor is generally implemented within pass 1 of the assembler.

• The integration of the macro processor and the assembler is often referred to as a macro assembler.

• Such implementations help eliminate the overhead of creating intermediate files, thus improving performance by combining similar functions.

Advantages of a macro assembler:

• It ensures that many functions need not be implemented twice.

• It results in fewer overheads, because many functions are combined and there is no need to create intermediate (temporary) files.

• It offers more flexibility in programming and allows the use of all assembler features in combination with macros.

Disadvantages of a macro assembler:

• The resulting pass, which combines macro processing with pass 1 of the assembler, may be too large and can sometimes suffer from core-memory problems.

• Combining macro-processor pass 0 with pass 1 may sometimes increase the complexity of program translation, which may not be desired.



• The assembler will be in one of the three modes:
1. In the normal mode, the assembler will read statement lines from
the source file and write them to the new source file. There is no
translation or any change in the statements.
2. In the macro definition mode, the assembler will continuously copy
the source file to the MDT.
3. In the macro expansion mode, the assembler will read statements
from the MDT, substitute parameters, and write them to the new
source file. Nested macros can be implemented using the
Definition and Expansion (DE) mode.

6. Describe the use of stacks in Expansion of Nested macro calls with example.

Stacks play a crucial role in managing the expansion of nested macro calls in a macro assembler. When macros are
nested (i.e., one macro invocation is made within another), a stack data structure is used to keep track of the active
macro calls and their respective parameters.
Use of Stacks in Expansion of Nested Macro Calls:

1. Stack Initialization: At the start of macro expansion, a stack is initialized. This stack is used to keep track of
active macro calls and their state.

2. Push Macro Invocation: When a macro invocation is encountered, the macro processor pushes relevant
information onto the stack. This information typically includes the name of the macro, the current line
number, and the values of formal parameters.

3. Expand the Inner Macro: If the macro being invoked contains another macro call within its definition, the
processor expands the inner macro as a sub-task, pushing the relevant information onto the stack for that
macro.

4. Pop Macro Call: After an inner macro has been fully expanded, the processor pops it from the stack,
returning control to the previous level of macro expansion.

5. Parameter Scoping: The stack helps maintain parameter scoping. When parameters have the same names at
different levels of nesting, the stack ensures that the correct values are used for the corresponding level.

6. Continuation of Expansion: The processor continues the expansion of the outer macro from where it was
interrupted due to the inner macro invocation.
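The stack discipline above can be sketched in Python. This is a hypothetical, simplified expander: a frame of parameter bindings is pushed on each invocation and popped when the inner macro finishes, so inner bindings shadow outer ones (point 5, parameter scoping).

```python
def expand_line(line, mdt, stack):
    """Recursively expand one statement.

    `stack` holds the active parameter bindings, one dict per nested macro
    call, with the innermost call's frame on top of the stack.
    """
    # Substitute parameters, consulting the innermost (top-of-stack) frame first.
    for frame in reversed(stack):
        for formal, actual in frame.items():
            line = line.replace(formal, actual)
    parts = line.split()
    if parts and parts[0] in mdt:                  # a (possibly nested) invocation
        formals, body = mdt[parts[0]]
        args = [a.strip() for a in " ".join(parts[1:]).split(",")]
        stack.append(dict(zip(formals, args)))     # push: enter the inner macro
        out = []
        for body_line in body:
            out.extend(expand_line(body_line, mdt, stack))
        stack.pop()                                # pop: inner macro fully expanded
        return out
    return [line]                                  # ordinary statement

# Hypothetical macros: ADD2 invokes INCR twice, nesting one level deep.
mdt = {
    "INCR": (["&X"], ["ADD &X, 1"]),
    "ADD2": (["&Y"], ["INCR &Y", "INCR &Y"]),
}
print(expand_line("ADD2 R5", mdt, []))   # ['ADD R5, 1', 'ADD R5, 1']
```

Expanding ADD2 pushes its frame, meets the inner INCR calls, pushes and pops a frame for each, and then resumes the outer expansion — mirroring steps 2–6 above.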

7. Give suitable example for macro by using conditional expansion or expansion time loops.

; Example Macro: SumOfIntegers

; This macro calculates the sum of integers from 1 to N, where N is a parameter.

; It uses conditional expansion and an expansion-time loop.

SUM_OF_INTEGERS MACRO N

IF N > 0

SUM = 0

REPT N

SUM = SUM + % ; % represents the iteration count (1 to N)

ENDM
DISP "Sum of integers from 1 to ", N, " is: ", SUM

ELSE

DISP "Invalid input. N must be greater than 0."

ENDIF

ENDM

; Usage of the SUM_OF_INTEGERS macro

.DATA

N_VALUE = 5 ; Change the value of N here

.CODE

SUM_OF_INTEGERS N_VALUE

HALT

8. Write Macro definition with following and explain.

o Macro using expansion time loop

o Macro with REPT statement

Macro Using Expansion-Time Loop:

• A macro is a reusable block of code in assembly language that can be invoked with parameters.

• Expansion-time loops in macros are used to repeat a set of instructions a specified number of times. This is
achieved by using the REPT statement.

Explanation:

1. In a macro definition, you can include a section of code to be repeated.

2. By using the REPT statement followed by a count, you specify how many times the enclosed code should be
repeated during macro expansion.

3. Inside the REPT loop, you can include any assembly instructions or macros.

4. When the macro is invoked, the instructions within the REPT loop are expanded the specified number of
times.

Macro with REPT Statement:

• The REPT statement is a control structure in macro assemblers that allows you to repeat a block of code a
specified number of times.

Explanation:

1. In a macro definition, you can use the REPT statement to create a repetition loop.

2. The REPT statement is followed by a count or an expression that evaluates to a count.

3. The code block following the REPT statement is repeated for the specified number of iterations.
4. You can use this feature to generate repetitive code patterns or to perform the same operation multiple
times.

5. It's useful when you want to avoid copy-pasting code and instead generate it dynamically based on a count or
condition.

9. Define Macro - preprocessor. Explain steps of Macro Preprocessor Design.

A macro preprocessor, often referred to as a macro processor, is a component of an assembly language or high-level
language compiler that handles macros and preprocessing directives. Macros in this context are user-defined symbols
or identifiers that represent a sequence of instructions or text substitutions. The macro preprocessor performs text
substitution and code generation based on the macros and other preprocessing directives defined in the source
code. Here are the steps involved in the design of a macro preprocessor:

Steps of Macro Preprocessor Design:

1. Tokenization and Lexical Analysis:

• The first step in the design of a macro preprocessor is the tokenization and lexical analysis of the
source code. The preprocessor needs to break down the source code into individual tokens, which
include macro names, arguments, preprocessing directives, and assembly code.

2. Macro Definition Handling:

• The preprocessor should identify and process macro definitions in the source code. This involves
recognizing the macro name, formal parameters, and the body of the macro.

• The design should include a mechanism to store macro definitions, often in a data structure like a
Macro Definition Table (MDT).

3. Macro Expansion:

• When a macro invocation is encountered in the source code, the preprocessor should replace it with
the corresponding macro definition.

• The preprocessor must handle parameter substitution by matching formal parameters in the macro
definition with the actual arguments provided in the invocation.

4. Conditional Compilation:

• The macro preprocessor often handles conditional compilation directives, such as #ifdef, #ifndef, #if,
and #endif. These directives are used to include or exclude portions of code based on predefined
macro symbols.

5. Error Handling:

• The design should include error-handling mechanisms to detect and report issues such as undefined
macros, incorrect macro invocations, or syntax errors in macro definitions.

6. Text Replacement and Code Generation:

• After macro expansion, the preprocessor should perform any necessary text replacements based on
macros, conditional compilation directives, and other preprocessing instructions.

• It generates the final preprocessed source code, which is then passed to the main assembler or
compiler for further processing.

7. Nested Macro Handling:

• A well-designed macro preprocessor should handle nested macro calls. When one macro calls
another, the preprocessor should manage the order of expansion correctly.
8. Optimization (Optional):

• Some macro preprocessors include optimization features to reduce redundancy or improve the
efficiency of code generation.

9. Integration with the Main Assembler/Compiler:

• The preprocessor design should ensure seamless integration with the main assembler or compiler, as
the preprocessed code is typically passed to the compiler for the final compilation.

10. Testing and Debugging Support:

• The preprocessor design should incorporate testing and debugging features to help developers
identify and rectify issues in macros and the preprocessed code
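Step 4 (conditional compilation) can be sketched in Python. This is a minimal, hypothetical pass that handles only #ifdef/#endif; real preprocessors support many more directives (#ifndef, #if, #else, macro substitution, and so on).

```python
def preprocess(lines, defined):
    """Minimal conditional-compilation pass.

    Keeps or drops the lines between '#ifdef NAME' and '#endif' depending on
    whether NAME is in the `defined` set.  Nesting is handled with a stack of
    keep/drop decisions.
    """
    output, keep_stack = [], [True]
    for line in lines:
        stripped = line.strip()
        if stripped.startswith("#ifdef"):
            name = stripped.split()[1]
            # A nested region is kept only if every enclosing region is kept.
            keep_stack.append(keep_stack[-1] and name in defined)
        elif stripped == "#endif":
            keep_stack.pop()
        elif keep_stack[-1]:
            output.append(line)
    return output

src = ["int x;", "#ifdef DEBUG", "log(x);", "#endif", "run(x);"]
print(preprocess(src, defined={"DEBUG"}))   # ['int x;', 'log(x);', 'run(x);']
print(preprocess(src, defined=set()))       # ['int x;', 'run(x);']
```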

10. Explain Nested macro call with suitable example.

Nested macro calls occur when one macro is invoked within the body of another macro. This allows for the creation of complex and modular code structures. For example, if a macro INCR is defined to add 1 to its argument, a macro ADD2 may invoke INCR twice in its body; when ADD2 is expanded, each inner INCR call is expanded in turn, with the inner call's parameters bound before control returns to the outer expansion.
Chapter 5:- Linkers and Loaders
1. Explain Absolute loader with example.

• An absolute loader is a program that loads a program into memory at a fixed, predetermined memory
location.

• It assumes that the program's addresses are absolute, meaning they are hardcoded and not subject to
relocation or modification.

• Absolute loaders were used in early computing systems, and they require the program to be loaded into
memory at the exact location specified in the program.

• For example, if a program specifies that it starts at memory address 1000, the absolute loader must load the
program at that specific location.

• The advantage of absolute loaders is simplicity, but they lack the flexibility to handle memory relocation or
linking with other programs.

• Modern computing rarely uses absolute loaders, favoring more advanced loaders and linkers that support
relocatable code and dynamic memory allocation.
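A toy model of the idea in Python, assuming memory is a word array and an object "record" is simply a (start_address, words) pair — an invented format for illustration:

```python
def absolute_load(memory, record):
    """Copy object code to the exact address fixed at assembly time.

    `record` is (start_address, words): a simplified object-program format.
    No relocation is performed -- the addresses in the code are final.
    """
    start, words = record
    for offset, word in enumerate(words):
        memory[start + offset] = word
    return start                      # execution begins at the recorded address

memory = [0] * 16
entry = absolute_load(memory, (10, [0xA1, 0xB2, 0xC3]))
print(entry, memory[10:13])   # 10 [161, 178, 195]
```

If memory location 10 were already occupied, the loader has no way to place the program elsewhere — which is exactly the inflexibility described above.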

2. With example explain how relocation is performed by linker?

Relocation Explanation:

• Relocation is a process performed by a linker to adjust addresses in an executable program to match the
actual memory layout where the program will be loaded.

• Object modules, which are compiled separately, contain addresses relative to their starting points.

• The linker assigns absolute addresses to symbols within modules and adjusts references.

Example:

• Consider two object modules, A and B, with relative addresses within their modules.

• The linker assigns absolute addresses to symbols in each module.

• References within the code are adjusted to use the new absolute addresses.

• The linker combines the code and data, generating an executable program ready to run in memory.

3. In brief explain relocating loader.

Relocating Loader in Brief:

• A relocating loader is a program responsible for loading an executable program into memory.

• It adjusts addresses within the program to match the actual memory location where it's loaded.

• This allows the program to execute correctly, regardless of where it's placed in memory.

• Relocating loaders are essential for running programs on different memory systems or when multiple
programs are loaded simultaneously.

• They ensure that references to symbols (functions or variables) within the program are correctly adjusted,
allowing for flexibility in program loading.

• Modern operating systems and compilers often use relocating loaders to simplify program execution and
enable efficient memory management
4. What is program relocation? How relocation is performed by linker?

Program relocation is the process of adjusting the memory addresses within a compiled program to match the actual
memory location where it is loaded for execution. It is a critical step in the linking process to ensure that the program
can run correctly, regardless of its location in memory.

Relocation is performed by the linker during the final stages of compilation:

1. Symbol Resolution: The linker assigns absolute addresses to symbols (e.g., function or variable names)
within each object module. These addresses are based on the positions of these symbols in memory.

2. Adjusting References: The linker scans through the code and data in each object module and identifies
references to symbols. It adjusts these references to use the new absolute addresses assigned during symbol
resolution.

3. Code Combination: The linker combines all the object modules into a single executable program, ensuring
that there are no conflicts between symbols with the same name.

4. Output Generation: The linker generates an executable program that includes the adjusted code and data.
This program can be loaded into memory at the absolute addresses assigned during symbol resolution.

5. Execution Flexibility: The result is an executable program that can run correctly in memory, regardless of its
location. This allows for the flexibility of loading and running programs in various memory locations, enabling
efficient memory management and execution.

5. What is program relocation? How relocation is performed by linker? Explain with example.

Program relocation is the process of adjusting the memory addresses within a compiled program to match the actual
memory location where it is loaded for execution. It is a critical step in the linking process to ensure that the program
can run correctly, regardless of its location in memory.

Relocation is performed by the linker during the final stages of compilation:

1. Symbol Resolution: The linker assigns absolute addresses to symbols (e.g., function or variable names)
within each object module. These addresses are based on the positions of these symbols in memory.

2. Adjusting References: The linker scans through the code and data in each object module and identifies
references to symbols. It adjusts these references to use the new absolute addresses assigned during symbol
resolution.

3. Code Combination: The linker combines all the object modules into a single executable program, ensuring
that there are no conflicts between symbols with the same name.

4. Output Generation: The linker generates an executable program that includes the adjusted code and data.
This program can be loaded into memory at the absolute addresses assigned during symbol resolution.

Eg

START: 1000

VariableA: 1004

The program assumes it will be loaded into memory starting at address 1000. Now, you want to load this program
into memory at a different location, say, starting at address 2000.

The linker adjusts the addresses within the program as follows:

• START is adjusted to 2000.

• VariableA becomes 2004.

This adjustment ensures that the program runs correctly when loaded at address 2000, even though it was originally
designed for 1000. Relocation by the linker makes this flexibility possible.
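The adjustment can be sketched in Python: relocation adds the factor (load origin − assembled origin) to every word the relocation table marks as an address field. The code words and offsets below are invented for illustration; in a real object format, the relocation table travels alongside the code.

```python
def relocate(code, reloc_offsets, assembled_origin, load_origin):
    """Add the relocation factor to every address-sensitive word.

    `reloc_offsets` plays the role of the relocation table: the offsets of
    the words that hold addresses and therefore need adjustment.
    """
    factor = load_origin - assembled_origin
    out = list(code)
    for off in reloc_offsets:
        out[off] += factor            # 1004 becomes 2004 when factor is 1000
    return out

# Program assembled at origin 1000; the word at offset 2 holds the address
# 1004 (VariableA).  Loading at 2000 must turn that 1004 into 2004.
code = [51, 7, 1004, 99]              # hypothetical machine words
print(relocate(code, reloc_offsets=[2], assembled_origin=1000, load_origin=2000))
# [51, 7, 2004, 99]
```

Only the word flagged in the relocation table changes; the opcode words (51, 7, 99) are left untouched.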
6. Differentiate Linker and Loader.

7. Write a brief note on MS-DOS Linker.

The MS-DOS Linker, commonly known as the "LINK" utility, was a fundamental tool for developers working with the
MS-DOS operating system. MS-DOS (Microsoft Disk Operating System) was a popular operating system used on
personal computers during the 1980s and early 1990s. The MS-DOS Linker played a crucial role in creating executable
programs for the MS-DOS platform.

Key Features and Functions:

• Executable File Creation: Combines object files into a single executable program.

• Symbol Resolution: Connects references to functions and variables across modules.

• Memory Allocation: Allocates memory space for program components.

• Address Calculation: Ensures program addresses match actual memory layout.

• Import and Export Tables: Manages symbols and linkage with external libraries.

• Executable Format: Produces executable files in the MS-DOS .EXE (MZ) format.

• Linking with Libraries: Supports linking with standard libraries.

• Command-Line Interface: Used via a command-line interface with input files and options.
8. Explain the term self-relocating program.

A self-relocating program is a type of computer program that has the ability to adjust its own memory addresses
dynamically, allowing it to execute correctly regardless of where it is loaded into memory. This self-adjustment, or
self-relocation, is a valuable characteristic for programs in certain computing environments.

Key points

• Dynamic Addressing: Self-relocating programs adjust memory addresses at runtime.

• Memory Independence: They can run correctly in different memory locations.

• Relocation Information: They include embedded information for address adjustments.

• Used in Specific Environments: Common in embedded systems and real-time operating systems.

• Versatility: Adaptability to varying memory configurations is a primary advantage.

9. Compare Absolute Loader with Relocating Loader (BSS Loader).


Chapter 6:- Scanning and Parsing
1. Explain types of grammar.

2. Explain recursive descendent parsing algorithm.

Recursive Descent Parsing is a top-down parsing technique where each non-terminal in the grammar corresponds to
a parsing function. The parser starts from the top-level non-terminal and recursively applies parsing functions to
recognize the input. Here's a high-level overview of the algorithm:

a. Create parsing functions for each non-terminal in the grammar.

b. Start parsing from the top-level non-terminal.

c. In each parsing function, try to match the input against the production rules for that non-terminal.

d. If a match is found, apply the corresponding production rule and continue parsing with the next symbol.

e. If no match is found, report an error or backtrack if necessary.

f. Repeat steps b-e until the entire input is parsed successfully or an error is detected.
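Steps a–f can be sketched as a minimal recursive-descent recognizer in Python, for the classic expression grammar E → T { + T }, T → F { * F }, F → id | ( E ) (a common textbook grammar, assumed here for illustration):

```python
def parse(tokens):
    """Recursive-descent recognizer: one parsing function per non-terminal."""
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat(tok):
        nonlocal pos
        if peek() != tok:
            raise SyntaxError(f"expected {tok!r}, got {peek()!r}")
        pos += 1

    def E():                       # E -> T { + T }
        T()
        while peek() == "+":
            eat("+"); T()

    def T():                       # T -> F { * F }
        F()
        while peek() == "*":
            eat("*"); F()

    def F():                       # F -> id | ( E )
        if peek() == "id":
            eat("id")
        elif peek() == "(":
            eat("("); E(); eat(")")
        else:
            raise SyntaxError(f"unexpected {peek()!r}")

    E()                            # step b: start from the top-level non-terminal
    return pos == len(tokens)      # accept only if all input is consumed

print(parse(["id", "+", "id", "*", "id"]))   # True
print(parse(["(", "id", "+", "id", ")"]))    # True
```

Each function tries to match its non-terminal's productions against the input (steps c–d) and raises an error on a mismatch (step e).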

3. Write algorithm for practical approach of top down parsing.

A practical approach to top-down parsing is typically implemented using a predictive parsing table. Here's a simplified
algorithm:

a. Create a parsing table that maps (non-terminal, terminal) pairs to production rules or actions.

b. Initialize a stack with the start symbol.

c. Initialize an input buffer with the input string.

d. Repeat the following until the stack is empty:

i. Pop the top of the stack.

ii. If it's a non-terminal, look up the parsing table to determine the production rule or action based on the current
input symbol.

iii. Push the right-hand side of the selected production rule onto the stack (in reverse order if necessary).

iv. If it's a terminal, compare it with the current input symbol. If they match, consume the input symbol.

v. If there's a mismatch, report an error.

e. If the input is fully consumed and the stack is empty, the input is valid; otherwise, report an error.
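The table-driven loop above can be sketched in Python. The toy grammar S → a S | b is assumed for illustration, and ε-productions are omitted for brevity:

```python
def ll1_parse(table, start, terminals, tokens):
    """Table-driven predictive parser (steps a-e).

    `table` maps (non-terminal, lookahead) -> right-hand side of the
    production to apply.  '$' marks the end of the input.
    """
    stack = [start]                         # step b: stack holds the start symbol
    tokens = tokens + ["$"]                 # step c: input buffer with end marker
    i = 0
    while stack:                            # step d
        top = stack.pop()                   # d.i: pop the top of the stack
        if top in terminals:
            if top == tokens[i]:
                i += 1                      # d.iv: terminal matches, consume it
            else:
                return False                # d.v: mismatch -> error
        else:
            rhs = table.get((top, tokens[i]))   # d.ii: consult the parsing table
            if rhs is None:
                return False                # no entry -> error
            stack.extend(reversed(rhs))     # d.iii: push RHS in reverse order
    return tokens[i] == "$"                 # step e: accept iff input fully consumed

# Toy LL(1) grammar: S -> a S | b  (the lookahead picks the production).
table = {("S", "a"): ["a", "S"], ("S", "b"): ["b"]}
print(ll1_parse(table, "S", {"a", "b"}, ["a", "a", "b"]))   # True
print(ll1_parse(table, "S", {"a", "b"}, ["a", "a"]))        # False
```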

4. Construct an LL(1) parsing table for the following grammar.


S → aBDh

B → cC

C → bC | ε

D → EF

E → g | ε

F → f | ε

5. Answer the following Questions:

(i) Write unambiguous production rules to produce arithmetic expression consisting

of +, *, ( , ), id.

(ii) Remove left recursion from that unambiguous production rules and generate

LL(1) parsing table for that grammar.

6. Answer the Following:-

(i) Define Operator precedence grammar. Convert following production rules of

grammar into suitable Operator precedence grammar.

E → EAE | id

A → - | *

(ii) Generate operator precedence relation matrix for converted Operator

precedence grammar. Show how id - id * id will be parsed using Operator

Precedence Matrix.

7. Given the Grammar, evaluate the string id - id * id using shift reduce parser.

E → E - E

E → E * E

E → id

8. Explain Ambiguity in grammar and how it can be resolved?

Ambiguity in Grammar: A grammar is said to be ambiguous when it can generate a sentence in more than one way,
leading to multiple parse trees or interpretations for the same input string.

Ambiguity can be resolved by:

• Rewriting the grammar so that operator precedence and associativity are built into the productions (e.g., using separate expression/term/factor non-terminals), leaving only one parse tree per sentence.

• Specifying precedence and associativity rules for operators outside the grammar, which tell the parser how to choose between conflicting derivations.

• Left-Factoring: Combine common prefixes of alternative productions into a single production, making it clear which production to choose.

9. Explain right most and left most derivation.

• Right-Most Derivation: In a right-most derivation, the rightmost non-terminal symbol in the current
sentential form is replaced in each step. The goal is to reach the target string from the start symbol by
repeatedly applying production rules, always choosing the rightmost non-terminal for replacement. Right-
most derivations are typically associated with bottom-up parsing techniques like LR parsers.

• Left-Most Derivation: In a left-most derivation, the leftmost non-terminal symbol in the current sentential
form is replaced in each step. Similar to right-most derivations, the goal is to reach the target string from the
start symbol, but in this case, we consistently choose the leftmost non-terminal for replacement. Left-most
derivations are associated with top-down parsing techniques like LL parsers.
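A worked example, assuming the grammar E → E + E | id and the sentence id + id:

```latex
\begin{aligned}
\text{Leftmost:}\quad  & E \Rightarrow E + E \Rightarrow \mathit{id} + E \Rightarrow \mathit{id} + \mathit{id} \\
\text{Rightmost:}\quad & E \Rightarrow E + E \Rightarrow E + \mathit{id} \Rightarrow \mathit{id} + \mathit{id}
\end{aligned}
```

Both derivations produce the same sentence; they differ only in which non-terminal is replaced at each step.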

10. Differentiate Top-down and Bottom-up parsing.

• Starting Point: Top-down parsing begins with the start symbol and attempts to derive the input string; bottom-up parsing starts with the input string and reduces it to the start symbol.

• Parsing Direction: Top-down works from the top (start symbol) to the bottom (terminals); bottom-up works from the bottom (terminals) to the top (start symbol).

• Rule Application Order: Top-down applies production rules in a top-to-bottom order; bottom-up applies them in a bottom-to-top order (reducing substrings).

• Common Methods: Top-down: Recursive Descent Parsing, LL parsing (LL(1), LL(k)). Bottom-up: LR parsing (LR(0), SLR(1), LALR(1), LR(1)).

• Lookahead Symbols: Both typically use a fixed number of lookahead symbols (k).

• Grammar Type: Top-down parsing suits LL(k) grammars; bottom-up parsing suits LR(k) grammars.

• Error Reporting: Top-down parsers detect errors early in the parsing process; bottom-up parsers detect them later, often during reduction.

• Ease of Construction: Top-down parsers are easier to construct for simpler grammars; bottom-up parsers are more complex to construct but can handle a wider range of grammars.

• Examples: Recursive descent parsing for LL(1) grammars; LR(1) parsing for LR(1) grammars.
The main difference between top-down and bottom-up parsing is the direction in which they build parse trees or
derive sentences. Top-down parsers start at the top and work down, while bottom-up parsers start at the leaves
(input string) and work up to the root (start symbol).

Chapter 7:-Compilers
1. What is main task of semantic analysis phase? Explain inherited and synthesized

attributes in detail with example.

The main task of the semantic analysis phase in a compiler is to ensure that the program is semantically correct,
meaning it adheres to the language's rules and constraints beyond just its syntax.

• Inherited Attributes: These attributes are passed from the parent node to its child nodes in the abstract
syntax tree (AST). They are used to provide context or information to child nodes. For example, in a syntax
tree for an arithmetic expression, the operator precedence could be an inherited attribute passed from the
parent node to its children.

• Synthesized Attributes: These attributes are computed or synthesized by child nodes and then passed up to
the parent node. They are used to represent information derived from the child nodes. For example, in a
syntax tree for an arithmetic expression, the type of the expression could be synthesized by evaluating the
types of its operands.

2. What is the use of static pointer and dynamic pointer in compiler? Explain working of Display with suitable
example.

1. Static Pointer and Dynamic Pointer:

• Static Pointer: A static pointer is used for accessing variables in a statically scoped programming
language. It points to the appropriate storage location for variables at compile time. Static scoping
means that variable bindings are determined at compile time and do not change during program
execution.

• Dynamic Pointer: A dynamic pointer is used in dynamically scoped programming languages. It points
to variables based on their current scope during program execution. Dynamic scoping means that
variable bindings are determined at runtime and can change as the program executes.

2. Display:

The display is a data structure used to manage the stack of activation records (also known as function call frames)
during program execution. It's particularly important in languages that support nested functions or procedures. The
display contains dynamic pointers to the activation records associated with each level of nested scope.

3. Explain the front end of toy compiler with suitable example.

The front end of a compiler performs tasks like lexical analysis, syntax analysis, and semantic analysis. Let's illustrate
this with a simple example:

• Example: Suppose you have a toy programming language with the following source code:

x = 5; y = 10; z = x + y;

• Front End Tasks:

1. Lexical Analysis: Tokenizes the source code into meaningful tokens like x, =, 5, ;, y, =, 10, ;, z, =, x, +,
y, and ;.

2. Syntax Analysis (Parsing): Checks the syntax and constructs an abstract syntax tree (AST)
representing the code's structure. For example, it recognizes assignment statements, arithmetic
operations, and variable names.

3. Semantic Analysis: Checks the semantics of the code, such as type checking and symbol resolution. It
ensures that variables are declared before use and that expressions have valid types.
4. Write a code fragment to find out whether number is odd or even. Draw control flow

graph. Perform control flow analysis.

5. What is memory binding? Explain dynamic memory allocation using extended stack

model.

Memory binding refers to the process of associating program variables with specific memory locations (e.g., memory
addresses) at different stages of program execution.


6. Explain Analysis and Synthesis phase of Compiler.

Perform lexical, syntax and semantic analysis on below C statement:

int i;

float a, b;

a = b + i;

Analysis Phase: The Analysis phase of a compiler is the initial stage where the source code is analyzed for various
properties, including its structure and correctness. This phase consists of several sub-phases, including lexical
analysis, syntax analysis, and semantic analysis. The goal of the Analysis phase is to break down the source code into
manageable units and check for any errors or inconsistencies.

Synthesis Phase: The Synthesis phase follows the Analysis phase and involves generating the target code (e.g.,
machine code or intermediate code) from the analyzed and processed source code. This phase typically includes
tasks like code optimization and code generation.
Now, let's perform lexical, syntax, and semantic analysis on the provided C statement:

int i; float a, b; a = b + i;
Lexical Analysis: Lexical analysis involves breaking the source code into tokens or lexemes. In C, common tokens
include keywords, identifiers, operators, and literals.
• Tokens:
1. Keyword: int
2. Identifier: i
3. Semicolon: ;
4. Keyword: float
5. Identifiers: a, b
6. Assignment operator: =
7. Plus operator: +
8. Semicolon: ;
Syntax Analysis: Syntax analysis, also known as parsing, checks the arrangement of tokens to ensure they form valid
statements according to the language's grammar.
• Syntax Analysis Result:
• Variable declarations:
• int i;
• float a, b;
• Assignment statement:
• a = b + i;
Semantic Analysis: Semantic analysis checks the meaning and validity of the code in a deeper sense. This includes
type checking and ensuring that variables are declared before use.
• Semantic Analysis Result:
• Variable declarations:
• int i; (Declares an integer variable i)
• float a, b; (Declares two float variables a and b)
• Assignment statement:
• a = b + i;
• Semantic Check 1: Ensure a, b, and i are declared and have compatible types. (e.g., b
and i should be of numeric types)
• Semantic Check 2: Ensure that the assignment is type-correct. (e.g., adding b and i
should produce a valid result compatible with a)
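The lexical-analysis step can be sketched with a small Python tokenizer; the token names and regular-expression patterns below are assumptions covering just this C fragment:

```python
import re

# Token classes for the fragment above (an illustrative subset of C).
TOKEN_SPEC = [
    ("KEYWORD", r"\b(?:int|float)\b"),
    ("ID",      r"[A-Za-z_]\w*"),
    ("ASSIGN",  r"="),
    ("PLUS",    r"\+"),
    ("SEMI",    r";"),
    ("COMMA",   r","),
    ("SKIP",    r"\s+"),          # whitespace: matched but discarded
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(code):
    """Break source text into (token_class, lexeme) pairs."""
    tokens = []
    for m in MASTER.finditer(code):
        if m.lastgroup != "SKIP":
            tokens.append((m.lastgroup, m.group()))
    return tokens

print(tokenize("a = b + i;"))
# [('ID', 'a'), ('ASSIGN', '='), ('ID', 'b'), ('PLUS', '+'), ('ID', 'i'), ('SEMI', ';')]
```

The KEYWORD pattern is listed before ID so that "int" is classified as a keyword rather than an identifier; the resulting token stream is what the parser then checks for valid structure.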

7. List out various Code Optimization techniques used in Compiler. Explain any three techniques with suitable example.

1. Common Subexpression Elimination (CSE): Identifying and eliminating redundant computations by reusing
the results of expressions already computed.

2. Loop Optimization: Techniques like loop unrolling, loop fusion, and loop interchange are used to improve the
performance of loops.

3. Inlining: Replacing a function call with the actual code of the function to reduce the overhead of the function
call.

4. Strength Reduction: Replacing expensive operations with cheaper ones. For example, replacing
multiplication with addition or shifting.
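As a rough illustration of what common subexpression elimination and strength reduction achieve, here is a hand-applied before/after sketch in Python (the variable names and expressions are assumptions for the example; a real compiler performs these rewrites on intermediate code, not on source):

```python
# Hand-applied optimization example, not a real optimizer.

def unoptimized(a, b, c):
    x = a * b + c          # a * b is computed twice: a common subexpression
    y = a * b - c
    z = x * 2              # multiplication by 2: candidate for strength reduction
    return x, y, z

def optimized(a, b, c):
    t = a * b              # CSE: compute a * b once and reuse the result
    x = t + c
    y = t - c
    z = x + x              # strength reduction: x * 2 rewritten as x + x
    return x, y, z

# Both versions must produce identical results for any inputs.
assert unoptimized(3, 4, 5) == optimized(3, 4, 5)
```

The optimized version performs one multiplication instead of three while computing the same values.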

8. Explain triple, quadruple and indirect triples representation with example.

Three-address code for a statement such as a = b + c * d can be stored in three equivalent forms:

• Quadruples: each instruction has four fields (op, arg1, arg2, result), with compiler-generated temporaries holding intermediate values:
(0) (*, c, d, t1)
(1) (+, b, t1, t2)
(2) (=, t2, , a)

• Triples: the result field is dropped; an operand can refer to an earlier instruction by its position number, so no temporary names are needed:
(0) (*, c, d)
(1) (+, b, (0))
(2) (=, a, (1))

• Indirect triples: the triples are kept in a table, and a separate list of pointers (indices) into that table gives the execution order. An optimizer can reorder instructions by permuting the pointer list without renumbering the triples themselves.

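For a statement such as a = b + c * d, the three representations can be written out concretely. This is a hand-built sketch; the temporaries t1 and t2 and the 0-based instruction numbering are assumptions for illustration:

```python
# Quadruples: (op, arg1, arg2, result) with explicit temporaries.
quadruples = [
    ("*", "c", "d", "t1"),
    ("+", "b", "t1", "t2"),
    ("=", "t2", None, "a"),
]

# Triples: the result field is dropped; an integer operand refers to the
# value produced by an earlier instruction at that index.
triples = [
    ("*", "c", "d"),
    ("+", "b", 0),
    ("=", "a", 1),
]

# Indirect triples: a separate list of indices into the triple table gives
# the execution order, so reordering means permuting this list alone.
instruction_list = [0, 1, 2]

for i in instruction_list:
    print(i, triples[i])
```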
Chapter 8:-Interpreters & Debuggers

1. Interpreter and Pure vs. Impure Interpreters:

• Interpreter: An interpreter is a program that directly executes the source code of a high-level
programming language without the need for a separate compilation step. It reads the source code
line by line, parses it, and executes the corresponding machine-level instructions or performs actions
as specified by the source code.

• Pure Interpreter: A pure interpreter executes the source code instructions one by one without
generating any intermediate code or representation. It interprets each line or statement as it
encounters it, making it relatively slower but more straightforward in terms of execution.

• Impure Interpreter: An impure interpreter, on the other hand, may involve some form of
intermediate code generation or translation. It might compile parts of the source code into an
intermediate representation (e.g., bytecode) before execution. This can improve execution speed but
retains some aspects of interpretation.
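CPython itself is a convenient example of the impure style: it first compiles source code to an intermediate bytecode and then interprets that bytecode. The snippet below (an illustrative sketch using the standard compile and dis facilities) makes the intermediate representation visible:

```python
import dis

# Source is compiled to a bytecode object first (the intermediate
# representation), and only then executed by the interpreter loop.
code = compile("a = b + i", "<example>", "exec")

# Disassemble to see the bytecode, e.g. LOAD_NAME b, LOAD_NAME i,
# an add instruction, STORE_NAME a (exact opcodes vary by version).
dis.dis(code)

namespace = {"b": 2.5, "i": 1}
exec(code, namespace)          # the bytecode is interpreted here
print(namespace["a"])          # -> 3.5
```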

2. Drawbacks and Benefits of Interpretation:

• Benefits:

• Portability: Interpreters can run on multiple platforms without needing to recompile the
code.

• Ease of Debugging: Errors are reported as they occur, making it easier to debug.

• Interactive: Many interpreters offer interactive environments for testing and experimentation.

• Quick Development: Developers can quickly modify and test code without a lengthy
compilation step.

• Drawbacks:

• Slower Execution: Interpretation is generally slower than compiled code because it involves
parsing and execution simultaneously.

• Lack of Optimization: Interpreters may not perform advanced code optimizations, potentially
leading to slower code execution.

• Limited Security: Malicious code can be run directly in an interpreter, posing security risks.

• Less Efficient Use of Resources: The interpreter itself consumes resources while running,
which can be inefficient for computationally intensive tasks.

3. Interpreter and Benefits Compared to Compiler:

• Interpreter:

• Benefits:

• Portability across platforms.

• Immediate feedback during development.

• Easy debugging.

• Interactive use.

• No need for a separate compilation step.


• Disadvantages:

• Slower execution.

• Lack of advanced optimization.

• Security concerns with executing untrusted code.

• Compiler:

• Benefits:

• Faster execution due to optimization.

• Generates standalone executables.

• Offers better code protection and security.

• More efficient resource usage during execution.

• Disadvantages:

• Longer compilation time.

• Less interactive during development.

• Debugging may be more challenging.

4. Pure and Impure Interpreter:

• Pure Interpreter: Executes code line by line without generating intermediate code. It interprets each
statement as it encounters it, making it relatively simple and slow.

• Impure Interpreter: May involve the generation of intermediate code or translation before
execution. It can be faster due to optimizations but retains some aspects of interpretation.

5. Three Components of the Interpreter:

• Parser: This component reads the source code, performs lexical analysis to tokenize it, and then
parses it into a structured representation, such as an abstract syntax tree (AST).

• Executor: The executor interprets or executes the code based on the parsed representation. It
processes each statement or expression and performs the specified actions.

• Symbol Table: The symbol table keeps track of variable names, their types, and their values during
execution. It helps in variable lookup and management.
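The three components can be seen working together in a toy interpreter for assignment statements. This is an illustrative sketch, not a production design: it borrows Python's own ast module as the parser, and the node types the executor handles are deliberately limited:

```python
import ast

symbols = {}                               # symbol table: name -> value

def execute(source):
    tree = ast.parse(source)               # parser: source text -> AST
    for stmt in tree.body:                 # executor: walk and act on each node
        if isinstance(stmt, ast.Assign):
            value = eval_expr(stmt.value)
            for target in stmt.targets:
                symbols[target.id] = value # record result in the symbol table

def eval_expr(node):
    if isinstance(node, ast.Constant):
        return node.value
    if isinstance(node, ast.Name):
        return symbols[node.id]            # symbol-table lookup
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Add):
        return eval_expr(node.left) + eval_expr(node.right)
    raise SyntaxError("unsupported construct")

execute("b = 2.5\ni = 1\na = b + i")
print(symbols["a"])                        # -> 3.5
```

Each statement is parsed, executed, and its effect recorded in the symbol table before the next one is processed, which is exactly the line-by-line behaviour described above.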

6. Difference Between Compiler and Interpreter:

• Compiler:

• Translates the entire source code into machine code or an intermediate representation
before execution.

• Generates standalone executables.

• Generally produces faster code due to advanced optimizations.

• Requires a separate compilation step.

• Slower development feedback.

• Examples include GCC, Clang.

• Interpreter:
• Executes the source code directly, often line by line or statement by statement.

• Does not generate standalone executables.

• Slower code execution compared to compiled code.

• Immediate feedback during development.

• Examples include Python, JavaScript engines, Bash.

Both compilers and interpreters have their own use cases and trade-offs, and they are often used in conjunction in
modern programming environments (e.g., Just-In-Time compilation in JavaScript engines).
