1. OVERVIEW OF LANGUAGE PROCESSING SYSTEM

[Fig 1.1 Language-processing system: skeletal source program -> preprocessor -> source program -> compiler -> target assembly program -> assembler -> relocatable machine code -> loader/link-editor (together with library routines and relocatable object files) -> absolute machine code]

Preprocessor
A preprocessor produces input to compilers. It may perform the following functions:
1. Macro processing: a preprocessor may allow a user to define macros that are shorthands for longer constructs.
2. File inclusion: a preprocessor may include header files into the program text.
3. Rational preprocessing: these preprocessors augment older languages with more modern flow-of-control and data-structuring facilities.
4. Language extensions: these preprocessors attempt to add capabilities to the language by means of built-in macros.
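The first two functions are exactly what the C preprocessor provides. A minimal sketch (the file name macros.h is our invention, not part of the notes):

    /* macros.h -- pulled in below by file inclusion */
    #define PI 3.1416                        /* macro: shorthand for a longer construct */
    #define CIRCLE_AREA(r) (PI * (r) * (r))  /* parameterized macro */

    /* main.c */
    #include <stdio.h>
    #include "macros.h"   /* file inclusion: the preprocessor splices the header in */

    int main(void) {
        /* after macro processing this line reads:
           printf("%f\n", (3.1416 * (2.0) * (2.0))); */
        printf("%f\n", CIRCLE_AREA(2.0));
        return 0;
    }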
Compiler
A compiler is a translator program that translates a program written in a high-level language (the source program) into an equivalent program in machine-level language (the target program). An important part of a compiler is the reporting of errors to the programmer.

[source program -> COMPILER -> target program, with error messages as a side output]

Executing a program written in a HLL is basically a two-part process: the source program must first be compiled (translated) into an object program; then the resulting object program is loaded into memory and executed.

ASSEMBLER
Programmers found it difficult to write or read programs in machine language. They began to use a mnemonic (symbol) for each machine instruction, which they would subsequently translate into machine language. Such a mnemonic machine language is now called an assembly language. Programs known as assemblers were written to automate the translation of assembly language into machine language. The input to an assembler program is called the source program; the output is a machine-language translation (object program).

INTERPRETER
An interpreter is a program that appears to execute a source program as if it were machine language.

[source program + input -> INTERPRETER -> output]

Languages such as BASIC, SNOBOL and LISP can be translated using interpreters. JAVA also uses an interpreter. The process of interpretation can be carried out in the following phases:
1. Lexical analysis
2. Syntax analysis
3. Semantic analysis
4. Direct execution

Advantages:
- Modification of a user program can easily be made and implemented as execution proceeds.
- The type of an object may change dynamically.
- Debugging a program and finding errors is a simplified task for an interpreted program.
- The interpreter for the language makes it machine independent.

Disadvantages:
- The execution of the program is slower.
- Memory consumption is higher.

Loader and Link-editor:
Once the assembler produces an object program, that program must be placed into memory and executed. The assembler could place the object program directly in memory and transfer control to it, thereby causing the machine-language program to be executed. However, this would waste core by leaving the assembler in memory while the user's program was being executed. Also, the programmer would have to retranslate his program with each execution, thus wasting translation time. To overcome these problems of wasted translation time and memory, system programmers developed another component called a loader. "A loader is a program that places programs into memory and prepares them for execution." It would be more efficient if subroutines could be translated into object form that the loader could "relocate" directly behind the user's program. The task of adjusting programs so that they may be placed in arbitrary core locations is called relocation. Relocating loaders perform four functions.

TRANSLATOR
A translator is a program that takes as input a program written in one language and produces as output a program in another language. Besides program translation, the translator performs another very important role: error detection. Any violation of the HLL specification is detected and reported to the programmer. The important roles of a translator are:
1. Translating the HLL program input into an equivalent machine-language program.
2. Providing diagnostic messages wherever the programmer violates the specification of the HLL.

TYPES OF TRANSLATORS:
- Interpreter
- Compiler
- Preprocessor

LIST OF COMPILERS
1. Ada compilers
2. ALGOL compilers
3. BASIC compilers
4. C# compilers
5. C compilers
6. C++ compilers
7. COBOL compilers
8. Java compilers

2. PHASES OF A COMPILER:
A compiler operates in phases. A phase is a logically interrelated operation that takes a source program in one representation and produces output in another representation. There are two parts of compilation:
a. Analysis (machine independent / language dependent)
b. Synthesis (machine dependent / language independent)
The compilation process is partitioned into a number of sub-processes called "phases".

[Fig 1.5 Phases of a compiler: source program -> lexical analyzer -> syntax analyzer -> intermediate code generator -> code optimizer -> code generator -> target program, with the symbol-table manager and the error handler interacting with every phase]

Lexical Analysis:
The lexical analyzer (LA), or scanner, reads the source program one character at a time, carving the source program into a sequence of atomic units called tokens.

Syntax Analysis:
The second stage of translation is called syntax analysis or parsing. In this phase expressions, statements, declarations etc. are identified by using the results of lexical analysis. Syntax analysis is aided by using techniques based on the formal grammar of the programming language.

Intermediate Code Generation:
An intermediate representation of the final machine-language code is produced. This phase bridges the analysis and synthesis phases of translation.

Code Optimization:
This is an optional phase designed to improve the intermediate code so that the output runs faster and takes less space.

Code Generation:
The last phase of translation is code generation. A number of optimizations to reduce the length of the machine-language program are carried out during this phase. The output of the code generator is the machine-language program for the specified computer.

Table Management (or Book-keeping):
This portion keeps the names used by the program and records essential information about each. The data structure used to record this information is called a "symbol table".

Error Handlers:
The error handler is invoked when a flaw (error) in the source program is detected.
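A hedged sketch of this book-keeping in C; the fields and sizes are illustrative assumptions, not something the notes prescribe:

    #include <string.h>

    /* One entry per name used by the program, recording essential
       information about it (here: a type string and a storage offset). */
    struct sym_entry {
        char name[32];      /* the identifier as written in the source */
        char type[8];       /* e.g. "int" or "real"                    */
        int  offset;        /* storage location chosen by the compiler */
    };

    static struct sym_entry table[100];
    static int n_entries = 0;

    /* Return the index of name, inserting a fresh entry if it is new.
       Assumes names are shorter than 32 characters and the table fits. */
    int lookup_or_insert(const char *name, const char *type) {
        for (int i = 0; i < n_entries; i++)
            if (strcmp(table[i].name, name) == 0)
                return i;                        /* already recorded */
        strcpy(table[n_entries].name, name);
        strcpy(table[n_entries].type, type);
        table[n_entries].offset = 4 * n_entries; /* naive allocation */
        return n_entries++;
    }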
The output of the lexical analyzer is a stream of tokens, which is passed to the next phase, the syntax analyzer or parser. The syntax analyzer groups the tokens together into syntactic structures called expressions. Expressions may further be combined to form statements. The syntactic structure can be regarded as a tree whose leaves are the tokens: a parse tree. The parser has two functions. It checks whether the tokens from the lexical analyzer occur in patterns that are permitted by the specification for the source language, and it imposes on the tokens a tree-like structure that is used by the subsequent phases of the compiler.

For example, if a program contains the expression A+/B, then after lexical analysis this expression might appear to the syntax analyzer as the token sequence id+/id. On seeing the /, the syntax analyzer should detect an error, because the presence of these two adjacent binary operators violates the formation rules of an expression.

Syntax analysis also makes explicit the hierarchical structure of the incoming token stream by identifying which parts of the token stream should be grouped. For example, A/B*C has two possible interpretations:
1. divide A by B and then multiply by C; or
2. multiply B by C and then use the result to divide A.
Each of these two interpretations can be represented in terms of a parse tree.

Intermediate Code Generation:
The intermediate code generator uses the structure produced by the syntax analyzer to create a stream of simple instructions. Many styles of intermediate code are possible; one common style uses instructions with one operator and a small number of operands. The output of the syntax analyzer is some representation of a parse tree; the intermediate code generation phase transforms this parse tree into an intermediate-language representation of the source program.

Code Optimization:
This is an optional phase designed to improve the intermediate code so that the output runs faster and takes less space. Its output is another intermediate-code program that does the same job as the original, but in a way that saves time and/or space.

1. Local optimization: there are local transformations that can be applied to a program to make an improvement. For example,

    if A > B goto L2
    goto L3
    L2:

can be replaced by the single statement

    if A <= B goto L3

Another important local optimization is the elimination of common sub-expressions. For example,

    A := B + C + D
    E := B + C + F

might be evaluated as

    T1 := B + C
    A  := T1 + D
    E  := T1 + F

taking advantage of the common sub-expression B + C.

2. Loop optimization: another important source of optimization concerns increasing the speed of loops. A typical loop improvement is to move a computation that produces the same result each time around the loop to a point in the program just before the loop is entered.

Code Generation:
The code generator produces the object code by deciding on the memory locations for data, selecting code to access each datum, and selecting the registers in which each computation is to be done. Many computers have only a few high-speed registers in which computations can be performed quickly. A good code generator attempts to utilize the registers as efficiently as possible.

Error Handling:
One of the most important functions of a compiler is the detection and reporting of errors in the source program. The error messages should allow the programmer to determine exactly where the errors have occurred. Errors may occur in any of the phases of a compiler. Whenever a phase of the compiler discovers an error, it must report the error to the error handler, which issues an appropriate diagnostic message. Both the table-management and the error-handling routines interact with all phases of the compiler.

Example: position := initial + rate * 60

Lexical analyzer:
    id1 := id2 + id3 * 60

Syntax analyzer builds the parse tree:

    :=
    |-- id1
    |-- +
        |-- id2
        |-- *
            |-- id3
            |-- 60

Intermediate code generator:
    temp1 := inttoreal(60)
    temp2 := id3 * temp1
    temp3 := id2 + temp2
    id1   := temp3

Code optimizer:
    temp1 := id3 * 60.0
    id1   := id2 + temp1

Code generator:
    MOVF id3, R2
    MULF #60.0, R2
    MOVF id2, R1
    ADDF R2, R1
    MOVF R1, id1
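One common way to hold such three-address instructions in memory is one quadruple per instruction; a sketch with field names of our choosing:

    /* One quadruple per three-address instruction: op, two arguments, result. */
    enum op { OP_MUL, OP_ADD };

    struct quad {
        enum op     op;
        const char *arg1;    /* first operand (a name or a temporary) */
        const char *arg2;    /* second operand, or NULL               */
        const char *result;  /* where the value is stored             */
    };

    /* The optimized code above, id1 := id2 + id3 * 60.0: */
    static const struct quad code[] = {
        { OP_MUL, "id3", "60.0",  "temp1" },
        { OP_ADD, "id2", "temp1", "id1"   },
    };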
LEXICAL ANALYZER
The lexical analyzer (LA) is the first phase of a compiler. Lexical analysis is also called linear analysis or scanning. In this phase the stream of characters making up the source program is read from left to right and grouped into tokens, which are sequences of characters having a collective meaning.

[source program -> LEXICAL ANALYZER -> tokens -> PARSER, both consulting the SYMBOL TABLE]

Upon receiving a "get next token" command from the parser, the lexical analyzer reads input characters until it can identify the next token. The LA returns to the parser a representation for the token it has found. The representation is an integer code if the token is a simple construct such as a parenthesis, comma or colon. The LA may also perform certain secondary tasks at the user interface. One such task is stripping out of the source program comments and white space in the form of blank, tab and newline characters. Another is correlating error messages from the compiler with the source program.

Lexical Analysis vs. Parsing:

Lexical analysis: a scanner simply turns an input string (say a file) into a list of tokens. These tokens represent things like identifiers, parentheses, operators etc. The lexical analyzer (the "lexer") parses individual symbols from the source-code file into tokens. From there, the "parser" proper turns those whole tokens into sentences of the grammar.

Parsing: a parser converts this list of tokens into a tree-like object that represents how the tokens fit together to form a cohesive whole (sometimes referred to as a sentence). A parser does not give the nodes any meaning beyond structural cohesion. The next thing to do is to extract meaning from this structure (sometimes called contextual analysis).

Token, Lexeme, Pattern:
Token: a token is a sequence of characters that can be treated as a single logical entity. Typical tokens are: 1) identifiers 2) keywords 3) operators 4) special symbols 5) constants.
Pattern: a set of strings in the input for which the same token is produced as output. This set of strings is described by a rule called a pattern associated with the token.
Lexeme: a lexeme is a sequence of characters in the source program that is matched by the pattern for a token.

Example (description of tokens):

    Token      Lexeme                  Pattern
    const      const                   const
    if         if                      if
    relop      <, <=, =, <>, >, >=     < or <= or = or <> or > or >=
    id         pi                      letter followed by letters and digits
    num        3.14                    any numeric constant
    literal    "core"                  any characters between " and " except "

A pattern is a rule describing the set of lexemes that can represent a particular token in the source program.

Lexical Errors:
Lexical errors are the errors thrown by the lexer when it is unable to continue; that is, there is no way to recognize a lexeme as a valid token for the lexer. Syntax errors, on the other side, are thrown by the parser when a given set of already recognized valid tokens does not match any of the right sides of the grammar rules. Simple panic-mode error handling requires that we return to a high-level parsing function when a parsing or lexical error is detected.

Error-recovery actions are:
1. Delete one character from the remaining input.
2. Insert a missing character into the remaining input.
3. Replace a character by another character.
4. Transpose two adjacent characters.
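A sketch tying the token/lexeme representation to the first recovery action (deleting a character from the remaining input); all names and sizes here are illustrative assumptions:

    #include <ctype.h>

    enum tok_name { TOK_ID, TOK_EOF };

    struct token {
        enum tok_name name;     /* the token class                */
        char lexeme[64];        /* the matched character sequence */
    };

    /* Scan the next identifier (pattern: letter (letter|digit)*). Any
       character that cannot start a token -- white space included -- is
       simply deleted from the remaining input (recovery action 1). */
    struct token get_next_token(const char *src, int *pos) {
        struct token t = { TOK_EOF, "" };
        for (;;) {
            int c = (unsigned char)src[*pos];
            if (c == '\0') return t;              /* end of input */
            if (isalpha(c)) break;
            (*pos)++;                             /* delete the offending character */
        }
        int i = 0;
        while (isalnum((unsigned char)src[*pos]) && i < 63)
            t.lexeme[i++] = src[(*pos)++];
        t.lexeme[i] = '\0';
        t.name = TOK_ID;
        return t;
    }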
Difference Between Compiler and Interpreter:
1. A compiler converts the high-level instructions into machine language, while an interpreter converts them into an intermediate form.
2. A compiler translates the entire program before execution, whereas an interpreter translates the first line, then executes it, and so on.
3. A list of errors is created by the compiler after the compilation process, while an interpreter stops translating after the first error.
4. An independent executable file is created by the compiler, whereas an interpreted program requires the interpreter each time it is run.
5. The compiler produces object code, whereas an interpreter does not.
6. In compilation the program is analyzed only once and then the code is generated, whereas a source program is interpreted every time it is to be executed, and every time the source program is analyzed; hence an interpreter is less efficient than a compiler.
7. Example of an interpreter: the UPS debugger is basically a graphical source-level debugger, but it contains a built-in C interpreter which can handle multiple source files. Example of a compiler: the Borland C compiler or Turbo C compiler compiles programs written in C or C++.

4. REGULAR EXPRESSIONS: SPECIFICATION OF TOKENS
There are three specifications of tokens: 1) strings, 2) language, 3) regular expression.

Strings and Languages
An alphabet or character class is a finite set of symbols. A string over an alphabet is a finite sequence of symbols drawn from that alphabet. A language is any countable set of strings over some fixed alphabet. In language theory, the terms "sentence" and "word" are often used as synonyms for "string". The length of a string s, usually written |s|, is the number of occurrences of symbols in s. For example, banana is a string of length six. The empty string, denoted ε, is the string of length zero.

Operations on strings
The following string-related terms are commonly used:
1. A prefix of string s is any string obtained by removing zero or more symbols from the end of s. For example, ban is a prefix of banana.
2. A suffix of string s is any string obtained by removing zero or more symbols from the beginning of s. For example, nana is a suffix of banana.
3. A substring of s is obtained by deleting any prefix and any suffix from s. For example, nan is a substring of banana.
4. The proper prefixes, suffixes and substrings of a string s are those prefixes, suffixes and substrings, respectively, of s that are not ε and not equal to s itself.
5. A subsequence of s is any string formed by deleting zero or more not necessarily consecutive positions of s. For example, baan is a subsequence of banana.

Operations on languages
The following operations can be applied to languages: 1. Union 2. Concatenation 3. Kleene closure 4. Positive closure.
The following example shows the operations on languages. Let L = {0, 1} and S = {a, b, c}:
1. Union: L U S = {0, 1, a, b, c}
2. Concatenation: L.S = {0a, 1a, 0b, 1b, 0c, 1c}
3. Kleene closure: L* = {ε, 0, 1, 00, 01, 10, 11, ...} (zero or more concatenations of L)
4. Positive closure: L+ = {0, 1, 00, 01, 10, 11, ...} (one or more concatenations of L)

Regular Expressions:
Each regular expression r denotes a language L(r). Here are the rules that define the regular expressions over some alphabet Σ and the languages that those expressions denote:
1. ε is a regular expression, and L(ε) is {ε}, that is, the language whose sole member is the empty string.
2. If a is a symbol in Σ, then a is a regular expression, and L(a) = {a}, that is, the language with one string, of length one, with a in its one position.
3. Suppose r and s are regular expressions denoting the languages L(r) and L(s). Then:
   (r)|(s) is a regular expression denoting the language L(r) U L(s);
   (r)(s) is a regular expression denoting the language L(r)L(s);
   (r)* is a regular expression denoting (L(r))*;
   (r) is a regular expression denoting L(r).
4. The unary operator * has the highest precedence and is left associative.
5. Concatenation has the second-highest precedence and is left associative.
6. | has the lowest precedence and is left associative.
REGULAR DEFINITIONS:
For notational convenience, we may wish to give names to regular expressions and to define regular expressions using these names as if they were symbols. Identifiers are the set of strings of letters and digits beginning with a letter. The following regular definition provides a precise specification for this class of strings.

Example: ab*|cd? is equivalent to (a(b*))|(c(d?)).

Pascal identifier:
    letter -> A | B | ... | Z | a | b | ... | z
    digit  -> 0 | 1 | 2 | ... | 9
    id     -> letter (letter | digit)*

Shorthands
Certain constructs occur so frequently in regular expressions that it is convenient to introduce notational shorthands for them.
1. One or more instances (+):
   - The unary postfix operator + means "one or more instances of".
   - If r is a regular expression that denotes the language L(r), then (r)+ is a regular expression that denotes the language (L(r))+.
   - Thus the regular expression a+ denotes the set of all strings of one or more a's.
   - The operator + has the same precedence and associativity as the operator *.
2. Zero or one instance (?):
   - The unary postfix operator ? means "zero or one instance of".
   - The notation r? is a shorthand for r|ε.
   - If r is a regular expression, then (r)? is a regular expression that denotes the language L(r) U {ε}.
3. Character classes:
   - The notation [abc], where a, b and c are alphabet symbols, denotes the regular expression a|b|c.
   - A character class such as [a-z] denotes the regular expression a|b|c|...|z.
   - We can describe identifiers as being strings generated by the regular expression [A-Za-z][A-Za-z0-9]*.
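The identifier class [A-Za-z][A-Za-z0-9]* can be experimented with directly through the POSIX regex API; a minimal sketch:

    #include <regex.h>
    #include <stdio.h>

    int main(void) {
        regex_t re;
        /* ^ and $ anchor the pattern so the whole string must match. */
        if (regcomp(&re, "^[A-Za-z][A-Za-z0-9]*$", REG_EXTENDED | REG_NOSUB) != 0)
            return 1;
        const char *samples[] = { "count1", "x", "9lives", "rate" };
        for (int i = 0; i < 4; i++)
            printf("%-8s %s\n", samples[i],
                   regexec(&re, samples[i], 0, NULL, 0) == 0 ? "identifier"
                                                             : "no match");
        regfree(&re);
        return 0;
    }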
Non-regular Sets
A language which cannot be described by any regular expression is a non-regular set. Example: the set of all strings of balanced parentheses, and repeating strings, cannot be described by a regular expression. Such sets can be specified by a context-free grammar.

RECOGNITION OF TOKENS:
Consider the following grammar fragment:

    stmt -> if expr then stmt
          | if expr then stmt else stmt
          | ε
    expr -> term relop term
          | term
    term -> id
          | num

where the terminals if, then, else, relop, id and num generate sets of strings given by the following regular definitions:

    if    -> if
    then  -> then
    else  -> else
    relop -> < | <= | = | <> | > | >=
    id    -> letter (letter | digit)*
    num   -> digit+ (. digit+)? (E (+|-)? digit+)?

For this language fragment the lexical analyzer will recognize the keywords if, then, else, as well as the lexemes denoted by relop, id and num. To simplify matters, we assume keywords are reserved; that is, they cannot be used as identifiers.

    Lexeme        Token Name    Attribute Value
    any ws        -             -
    if            if            -
    then          then          -
    else          else          -
    any id        id            pointer to table entry
    any number    num           pointer to table entry
    <             relop         LT
    <=            relop         LE
    =             relop         EQ
    <>            relop         NE
    >             relop         GT
    >=            relop         GE

TRANSITION DIAGRAM:
A transition diagram has a collection of nodes or circles, called states. Each state represents a condition that could occur during the process of scanning the input looking for a lexeme that matches one of several patterns. Edges are directed from one state of the transition diagram to another; each edge is labeled by a symbol or set of symbols. If we are in some state s, and the next input symbol is a, we look for an edge out of state s labeled by a. If we find such an edge, we advance the forward pointer and enter the state of the transition diagram to which that edge leads.

Some important conventions about transition diagrams are:
1. Certain states are said to be accepting, or final. These states indicate that a lexeme has been found, although the actual lexeme may not consist of all positions between the lexemeBegin and forward pointers. We always indicate an accepting state by a double circle.
2. In addition, if it is necessary to retract the forward pointer one position, then we additionally place a * near that accepting state.
3. One state is designated the start state, or initial state; it is indicated by an edge labeled "start" entering from nowhere. The transition diagram always begins in the start state before any input symbols have been used.

[Transition diagram for relop: from the start state, < leads to a state from which = accepts relop LE, > accepts relop NE, and any other character accepts relop LT with retraction; = accepts relop EQ; > leads to a state from which = accepts relop GE and any other character accepts relop GT with retraction.]

As an intermediate step in the construction of a lexical analyzer, we first produce a stylized flowchart, called a transition diagram. Positions in a transition diagram are drawn as circles and are called states.

[Transition diagram for identifiers: start --letter--> a state with a self-loop on letter or digit --other--> accepting state with retraction.]

The above transition diagram is for an identifier, defined to be a letter followed by any number of letters or digits. A sequence of transition diagrams can be converted into a program to look for the tokens specified by the diagrams. Each state gets a segment of code.
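A sketch of that conversion for the identifier diagram: each state becomes a labelled code segment, and because each edge is tested before the character is consumed, the "other" edge needs no explicit retraction here:

    #include <ctype.h>

    /* Returns the length of the identifier found at input[*pos], advancing
       *pos past it; returns 0 if no identifier starts here. */
    int scan_identifier(const char *input, int *pos) {
        int start = *pos;
    state0:                                   /* start state                 */
        if (isalpha((unsigned char)input[*pos])) { (*pos)++; goto state1; }
        return 0;                             /* no edge for this character  */
    state1:                                   /* seen letter (letter|digit)* */
        if (isalnum((unsigned char)input[*pos])) { (*pos)++; goto state1; }
        goto state2;
    state2:                                   /* accepting state             */
        /* We peeked before consuming, so the delimiter was never read and
           no retraction is needed; the lexeme is input[start .. *pos-1]. */
        return *pos - start;
    }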
Automata:
An automaton is defined as a system where information is transmitted and used for performing some functions without the direct participation of man.
1. An automaton in which the output depends only on the input is called an automaton without memory.
2. An automaton in which the output depends on the input and on the state is called an automaton with memory.
3. An automaton in which the output depends only on the state of the machine is called a Moore machine.
4. An automaton in which the output depends on the state and the input at any instant of time is called a Mealy machine.

DESCRIPTION OF AUTOMATA
1. An automaton has a mechanism to read input from an input tape.
2. Any language is recognized by some automaton; hence these automata are basically "language acceptors" or "language recognizers".

Types of Finite Automata
- Deterministic automata
- Non-deterministic automata

Deterministic Automata:
A deterministic finite automaton has at most one transition from each state on any input. A DFA is a special case of an NFA in which:
1. it has no transitions on input ε, and
2. each input symbol has at most one transition from any state.

A DFA is formally defined by the 5-tuple notation M = (Q, Σ, δ, q0, F), where:
Q is a finite, non-empty set of states;
Σ is the input alphabet (the input set);
q0 is the initial state, with q0 ∈ Q;
F ⊆ Q is the set of final states; and
δ is the transition (mapping) function; using this function the next state can be determined.

A regular expression is converted into a minimized DFA by the following procedure:

    regular expression -> NFA -> DFA -> minimized DFA

A finite automaton is called a DFA if there is only one path for a specific input from the current state to the next state.

[Example DFA: from state S0 on input a there is only one path, going to S2; similarly, from S0 there is only one path for input b, going to S1.]

Nondeterministic Automata:
An NFA is a mathematical model that consists of:
- a set of states S;
- a set of input symbols Σ;
- a transition function (a transition is a move from one state to another);
- a state s0 distinguished as the start (or initial) state;
- a set of states F distinguished as accepting (or final) states;
- a number of transitions may leave one state on a single symbol.

An NFA can be diagrammatically represented by a labeled directed graph, called a transition graph, in which the nodes are the states and the labeled edges represent the transition function. This graph looks like a transition diagram, but the same character can label two or more transitions out of one state, and edges can be labeled by the special symbol ε as well as by input symbols.

[Transition graph for an NFA that recognizes the language (a|b)*abb: state 0 loops on a and b, and the path 0 --a--> 1 --b--> 2 --b--> 3 reaches the accepting state 3.]
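After subset construction, the NFA above yields a four-state DFA for (a|b)*abb, which can be simulated with a transition table; the state numbering here is our assumption:

    #include <stdio.h>

    /* delta[state][x]: x = 0 for 'a', x = 1 for 'b'. State 3 is final. */
    int accepts(const char *s) {
        static const int delta[4][2] = {
            /* state 0 */ {1, 0},
            /* state 1 */ {1, 2},
            /* state 2 */ {1, 3},
            /* state 3 */ {1, 0},
        };
        int state = 0;
        for (const char *p = s; *p; p++) {
            if (*p != 'a' && *p != 'b') return 0;   /* not in the alphabet */
            state = delta[state][*p - 'a'];
        }
        return state == 3;
    }

    int main(void) {
        printf("%d %d %d\n", accepts("abb"), accepts("aabb"), accepts("ab"));
        return 0;   /* prints: 1 1 0 */
    }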
5. BOOTSTRAPPING:
When a computer is first turned on or restarted, a special type of absolute loader, called a bootstrap loader, is executed. This bootstrap loads the first program to be run by the computer, usually an operating system. The bootstrap itself begins at address 0 in the memory of the machine. It loads the operating system (or some other program) starting at address 80. After all of the object code from the device has been loaded, the bootstrap program jumps to address 80, which begins the execution of the program that was loaded. Such loaders can be used to run stand-alone programs independent of the operating system or the system loader. They can also be used to load the operating system or the loader itself into memory.

Loaders are of two types:
- Linking loader.
- Linkage editor.
Linking loaders perform all linking and relocation at load time. Linkage editors perform linking prior to load time; in dynamic linking, the linking function is performed at execution time. A linkage editor performs linking and some relocation; however, the linked program is written to a file or library instead of being immediately loaded into memory. This approach reduces the overhead when the program is executed: all that is required at load time is a simple form of relocation.

[Figure: object program -> linkage editor -> linked program in a library -> loader -> memory]

6. PASS AND PHASES OF TRANSLATION:
Phases are collected into a front end and a back end.
Front end: the front end consists of those phases that depend primarily on the source language and are largely independent of the target machine. These normally include lexical and syntactic analysis, the creation of the symbol table, semantic analysis, and the generation of intermediate code. A certain amount of code optimization can be done by the front end as well. The front end also includes the error handling that goes along with each of these phases.
Back end: the back end includes those portions of the compiler that depend on the target machine, and generally these portions do not depend on the source language.

7. LEXICAL ANALYZER GENERATOR:
Creating a lexical analyzer with Lex:
- First, a specification of a lexical analyzer is prepared by creating a program lex.l in the Lex language. Then lex.l is run through the Lex compiler to produce a C program lex.yy.c.
- Finally, lex.yy.c is run through the C compiler to produce an object program a.out, which is the lexical analyzer that transforms an input stream into a sequence of tokens.

    lex.l        -> Lex compiler -> lex.yy.c
    lex.yy.c     -> C compiler   -> a.out
    input stream -> a.out        -> sequence of tokens

Lex Specification
A Lex program consists of three parts:

    { definitions }
    %%
    { rules }
    %%
    { user subroutines }

- Definitions include declarations of variables, constants, and regular definitions.
- Rules are statements of the form
      p1 {action1}
      p2 {action2}
      ...
      pn {actionn}
  where pi is a regular expression and actioni describes what action the lexical analyzer should take when pattern pi matches a lexeme. Actions are written in C code.
- User subroutines are auxiliary procedures needed by the actions. These can be compiled separately and loaded with the lexical analyzer.

8. INPUT BUFFERING
The lexical analyzer scans the characters of the source program one at a time to discover tokens. Because a large amount of time can be consumed scanning characters, specialized buffering techniques have been developed to reduce the amount of overhead required to process an input character.
Buffering techniques:
1. Buffer pairs
2. Sentinels

Often, many characters beyond the next token may have to be examined before the next token itself can be determined. For this and other reasons, it is desirable for the lexical analyzer to read its input from an input buffer. Consider a buffer divided into two halves of, say, 100 characters each. One pointer marks the beginning of the token being discovered. A look-ahead pointer scans ahead of the beginning point until the token is discovered. We view the position of each pointer as being between the character last read and the character next to be read. In practice, each buffering scheme adopts one convention: either a pointer is at the symbol last read, or at the symbol it is ready to read.

The distance which the look-ahead pointer may have to travel past the actual token may be large. For example, in a PL/I program we may see

    DECLARE (ARG1, ARG2, ..., ARGn)

without knowing whether DECLARE is a keyword or an array name until we see the character that follows the right parenthesis.
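A minimal sketch of buffer pairs with sentinels, assuming NUL ('\0') as the sentinel and that the source text itself contains no NUL bytes:

    #include <stdio.h>

    #define N 4096          /* size of each buffer half */
    #define SENTINEL '\0'

    static char buf[2 * N + 2];   /* two halves, one sentinel slot per half */
    static char *forward;         /* the look-ahead pointer                 */
    static FILE *src;

    static void load(char *half) {
        size_t n = fread(half, 1, N, src);
        half[n] = SENTINEL;       /* sentinel marks the end of valid data   */
    }

    void buffers_init(FILE *f) { src = f; load(buf); forward = buf; }

    /* Advance the look-ahead pointer. In the common case this is a single
       comparison; only on a sentinel do we ask which half (if any) ended. */
    int next_char(void) {
        char c = *forward++;
        if (c != SENTINEL)
            return (unsigned char)c;
        if (forward == buf + N + 1) {            /* end of first half  */
            load(buf + N + 1);
            forward = buf + N + 1;
            return next_char();
        }
        if (forward == buf + 2 * N + 2) {        /* end of second half */
            load(buf);
            forward = buf;
            return next_char();
        }
        return EOF;                              /* sentinel mid-half: real end of input */
    }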
PART B: TOP-DOWN PARSING

1. Context-free Grammars:
Definition: formally, a context-free grammar G is a 4-tuple G = (V, T, P, S), where:
1. V is a finite set of variables (or non-terminals). These describe sets of "related" strings.
2. T is a finite set of terminals (i.e., tokens).
3. P is a finite set of productions, each of the form A -> α, where A ∈ V is a variable and α ∈ (V U T)* is a sequence of terminals and non-terminals.
4. S ∈ V is the start symbol.

Example of a CFG:

    E -> E A E | (E) | -E | id
    A -> + | - | * | /

where E and A are the non-terminals, while id, +, -, *, /, (, ) are the terminals.

2. Syntax analysis:
In the syntax-analysis phase, the source program is analyzed to check whether it conforms to the source language's syntax, and to determine its phrase structure. This phase is often separated into two phases:
- Lexical analysis, which produces a stream of tokens.
- Parsing, which determines the phrase structure of the program based on the context-free grammar for the language.

PARSING:
Parsing is the activity of checking whether a string of symbols is in the language of some grammar, where this string is usually the stream of tokens produced by the lexical analyzer. If the string is in the grammar, we want a parse tree; if it is not, we hope for some kind of error message explaining why not. There are two main kinds of parsers in use, named for the way they build the parse trees:
- Top-down: a top-down parser attempts to construct a tree from the root, applying productions forward to expand non-terminals into strings of symbols.
- Bottom-up: a bottom-up parser builds the tree starting with the leaves, using productions in reverse to identify strings of symbols that can be grouped together.
In both cases the construction of the derivation is directed by scanning the input sequence from left to right, one symbol at a time.

Parse Tree:
A parse tree is the graphical representation of the structure of a sentence according to its grammar.
Example: let the productions P be:

    E -> T | E + T
    T -> F | T * F
    F -> V | (E)
    V -> a | b | c | d

The parse tree may be viewed as a representation for a derivation that filters out the choice regarding the order of replacement.

[Parse trees for a*b+c, for a+b*c, and for (a*b)*(c+d)]

SYNTAX TREES:
A parse tree can be presented in a simplified form with only the relevant structure information by:
- leaving out chains of derivations (whose sole purpose is to give operators different precedence), and
- labeling the nodes with the operators in question rather than with a non-terminal.
The simplified parse tree is sometimes called a structural tree or syntax tree.

[Syntax trees for a*b+c, a+b*c and (a+b)*(c+d)]

Syntax Error Handling:
If a compiler had to process only correct programs, its design and implementation would be greatly simplified. But programmers frequently write incorrect programs, and a good compiler should assist the programmer in identifying and locating errors. A program can contain errors at many different levels. For example, errors can be:
1) lexical, such as misspelling an identifier, keyword or operator;
2) syntactic, such as an arithmetic expression with unbalanced parentheses;
3) semantic, such as an operator applied to an incompatible operand;
4) logical, such as an infinitely recursive call.
Much of the error detection and recovery in a compiler is centered around the syntax-analysis phase. The goals of the error handler in a parser are:
- It should report the presence of errors clearly and accurately.
- It should recover from each error quickly enough to be able to detect subsequent errors.
- It should not significantly slow down the processing of correct programs.

Ambiguity:
Several derivations may generate the same sentence, perhaps by applying the same productions in a different order. This alone is fine, but a problem arises if the same sentence has two distinct parse trees. A grammar is ambiguous if there is any sentence with more than one parse tree. Any parser for an ambiguous grammar has to choose somehow which tree to return. There are a number of solutions to this: the parser could pick one arbitrarily, or we can provide some hints about which to choose. Best of all is to rewrite the grammar so that it is not ambiguous. There is no general method for removing ambiguity. Ambiguity is acceptable in spoken languages; ambiguous programming languages are useless unless the ambiguity can be resolved.

Fixing some simple ambiguities in a grammar:

    Ambiguous               Language                                    Unambiguous
    (i)   A -> B | AA       lists of one or more B's                    A -> BC,  C -> A | ε
    (ii)  A -> B | A;A      lists of one or more B's with punctuation   A -> BC,  C -> ;A | ε
    (iii) A -> B | AA | ε   lists of zero or more B's                   A -> BA | ε

Any sentence with more than two variables, such as (arg, arg, arg), will have multiple parse trees.
Left Recursion:
If there is any non-terminal A such that there is a derivation A => Aα for some string α, then the grammar is left recursive.

Algorithm for eliminating left recursion:
1. Group all the A-productions together like this:

       A -> Aα1 | Aα2 | ... | Aαn | β1 | β2 | ... | βm

   where A is the left-recursive non-terminal, each αi is any string of terminals and non-terminals, and each βi is any string of terminals and non-terminals that does not begin with A.
2. Replace the above A-productions by the following:

       A  -> β1A' | β2A' | ... | βmA'
       A' -> α1A' | α2A' | ... | αnA' | ε

   where A' is a new non-terminal.

Top-down parsers cannot handle left-recursive grammars. If our expression grammar is left recursive, this can lead to non-termination in a top-down parser:
- For a top-down parser, any recursion must be right recursion.
- We would therefore like to convert the left recursion to right recursion.

Example 1: Remove the left recursion from the production A -> Aα | β.
Applying the transformation yields:

    A  -> βA'
    A' -> αA' | ε

Example 2: Remove the left recursion from the productions:

    E -> E + T | T
    T -> T * F | F

Applying the transformation yields:

    E  -> TE'        E' -> +TE' | ε
    T  -> FT'        T' -> *FT' | ε

Example 3: Remove the left recursion from the productions:

    E -> E + T | E - T | T
    T -> T * F | T / F | F

Applying the transformation yields:

    E  -> TE'        E' -> +TE' | -TE' | ε
    T  -> FT'        T' -> *FT' | /FT' | ε

Example 4: Remove the left recursion from the productions:

    S -> Aa | b
    A -> Ac | Sd | ε

1. The non-terminal S is left recursive because S -> Aa -> Sda, but it is not immediately left recursive.
2. Substitute the S-productions in A -> Sd to obtain:

       A -> Ac | Aad | bd | ε

3. Eliminating the immediate left recursion:

       S  -> Aa | b
       A  -> bdA' | A'
       A' -> cA' | adA' | ε

Example 5: Consider the following grammar and eliminate the left recursion:

    S -> Aa | b
    A -> Sc | d

The non-terminal S is left recursive in two steps: S -> Aa -> Sca -> Aaca -> Scaca -> ...
Left recursion causes the parser to loop like this, so remove it. Replace A -> Sc | d by A -> Aac | bc | d, and then, by using the transformation rules:

    A  -> bcA' | dA'
    A' -> acA' | ε

Left Factoring:
Left factoring is a grammar transformation that is useful for producing a grammar suitable for predictive parsing. When it is not clear which of two alternative productions to use to expand a non-terminal A, we may be able to rewrite the productions to defer the decision until we have seen enough of the input to make the right choice.

Algorithm: for each non-terminal A, find the longest prefix α that occurs in two or more right-hand sides of A. If α ≠ ε, then replace all of the A-productions

    A -> αβ1 | αβ2 | ... | αβn | γ

with

    A  -> αA' | γ
    A' -> β1 | β2 | ... | βn

where A' is a new non-terminal. Repeat until no common prefixes remain.

It is easy to remove common prefixes by left factoring, creating a new non-terminal. For example, consider V -> aB | aT. Change it to:

    V  -> aV'
    V' -> B | T

Example 1: Eliminate left factoring in the grammar:

    S -> V := int
    V -> alpha '[' int ']' | alpha

This becomes:

    S  -> V := int
    V  -> alpha V'
    V' -> '[' int ']' | ε
TOP-DOWN PARSING:
Top-down parsing is the construction of a parse tree by starting at the start symbol and "guessing" each derivation until we reach a string that matches the input; that is, the tree is constructed from the root to the leaves. The advantage of top-down parsing is that a parser can directly be written as a program. Table-driven top-down parsers are of minor practical relevance; since bottom-up parsers are more powerful than top-down parsers, bottom-up parsing is practically relevant.

For example, let us consider the following grammar to see how a top-down parser works:

    S -> if E then S else S | while E do S | print
    E -> true | false | id

The input token string is: if id then while true do print else print.
1. Tree: S. Input: if id then while true do print else print. Action: guess for S (S -> if E then S else S).
2. Tree: S expanded to if E then S else S. Input: if id then while true do print else print. Action: if matches; guess for E (E -> id).
3. Tree: E expanded to id. Input: id then while true do print else print. Action: id matches; then matches; guess for S (S -> while E do S).
4. Tree: inner S expanded to while E do S. Input: while true do print else print. Action: while matches; guess for E (E -> true).
5. Tree: E expanded to true. Input: true do print else print. Action: true matches; do matches; guess for S (S -> print).
6. Tree: loop-body S expanded to print. Input: print else print. Action: print matches; else matches; guess for S (S -> print).
7. Tree: final S expanded to print. Input: print. Action: print matches; input exhausted; done.

Recursive Descent Parsing:
Top-down parsing can be viewed as an attempt to find a leftmost derivation for an input string. Equivalently, it can be viewed as an attempt to construct a parse tree for the input starting from the root and creating the nodes of the parse tree in preorder. The special case of recursive-descent parsing that needs no backtracking is called predictive parsing. The general form of top-down parsing, called recursive descent, may involve backtracking, that is, making repeated scans of the input. Recursive descent or predictive parsing works only on grammars where the first terminal symbol of each sub-expression provides enough information to choose which production to use.

A recursive-descent parser with backtracking makes repeated scans of the input. Backtracking parsers are not seen frequently, as backtracking is rarely needed to parse programming-language constructs.

Example: consider the grammar

    S -> cAd
    A -> ab | a

and the input string w = cad. To construct a parse tree for this string top-down, we initially create a tree consisting of a single node labeled S; the input pointer points to c, the first symbol of w. We then use the first production for S to expand the tree and obtain the tree of Fig (a): S with children c, A, d. The leftmost leaf, labeled c, matches the first symbol of w, so we advance the input pointer to a, the second symbol of w, and consider the next leaf, labeled A. We can then expand A using its first alternative to obtain the tree of Fig (b): A with children a, b. We now have a match for the second input symbol, so we advance the input pointer to d, the third input symbol, and compare d against the next leaf, labeled b. Since b does not match d, we report failure and go back to A to see whether there is any alternative for A that we have not tried but that might produce a match. In going back to A, we must reset the input pointer to position 2. We now try the second alternative for A to obtain the tree of Fig (c): A with the single child a. The leaf a matches the second symbol of w, and the leaf d matches the third symbol.

A left-recursive grammar can cause a recursive-descent parser, even one with backtracking, to go into an infinite loop: when we try to expand A, we may eventually find ourselves again trying to expand A without having consumed any input.
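The transformed expression grammar from the earlier examples (E -> TE', E' -> +TE' | ε, T -> FT', T' -> *FT' | ε, F -> (E) | id) can be coded directly as a recursive-descent parser; in this sketch the character 'i' stands in for the token id:

    #include <stdio.h>

    static const char *ip;            /* input pointer */
    static int E(void);               /* forward declaration: F calls E */

    static int F(void) {              /* F -> ( E ) | id */
        if (*ip == 'i') { ip++; return 1; }
        if (*ip == '(') { ip++; if (!E()) return 0;
                          if (*ip == ')') { ip++; return 1; } }
        return 0;                     /* syntax error */
    }
    static int Tp(void) {             /* T' -> * F T' | eps */
        if (*ip == '*') { ip++; return F() && Tp(); }
        return 1;                     /* T' -> eps */
    }
    static int T(void)  { return F() && Tp(); }
    static int Ep(void) {             /* E' -> + T E' | eps */
        if (*ip == '+') { ip++; return T() && Ep(); }
        return 1;                     /* E' -> eps */
    }
    static int E(void)  { return T() && Ep(); }

    int main(void) {
        ip = "i+i*i";
        printf(E() && *ip == '\0' ? "accepted\n" : "rejected\n");
        return 0;
    }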
Predictive Parsing:
Predictive parsing is top-down parsing without backtracking, using look-ahead. For many languages, we can make perfect guesses (and avoid backtracking) by using one symbol of look-ahead; i.e., if

    A -> α1 | α2 | ... | αn,

choose the correct αi by looking at the first symbol it derives. This approach is also called predictive parsing. There must be at most one applicable production for each look-ahead in order to avoid backtracking; if there is no such production, then no parse tree exists and an error is returned. The crucial property is that the grammar must not be left recursive. Predictive parsing works well on those fragments of programming languages in which keywords occur frequently. For example:

    stmt -> if expr then stmt else stmt
          | while expr do stmt
          | begin stmt-list end

Here the keywords if, while and begin tell us which alternative is the only one that could possibly succeed if we are to find a statement.

The model of a predictive parser is as follows:

[input buffer -> predictive parsing program (with stack) consulting parsing table M -> output]

A predictive parser has:
- Stack
- Input
- Parsing table
- Output
The input buffer contains the string to be parsed, followed by $, a symbol used as a right-end marker to indicate the end of the input string. The stack contains a sequence of grammar symbols with $ at the bottom, indicating the bottom of the stack. Initially the stack contains the start symbol of the grammar on top of $. Recursive-descent and LL parsers are often called predictive parsers, because they operate by predicting the next step in a derivation.

The algorithm for the predictive parser program is as follows:
Input: a string w and a parsing table M for grammar G.
Output: if w is in L(G), a leftmost derivation of w; otherwise, an error indication.
Method: initially the parser has $S on the stack, with S, the start symbol of G, on top, and w$ in the input buffer. The program that utilizes the predictive parsing table M to produce a parse for the input is:

    set ip to point to the first symbol of w$;
    repeat
        let X be the top stack symbol and a the symbol pointed to by ip;
        if X is a terminal or $ then
            if X = a then
                pop X from the stack and advance ip
            else error()
        else  /* X is a non-terminal */
            if M[X, a] = X -> Y1 Y2 ... Yk then begin
                pop X from the stack;
                push Yk, Yk-1, ..., Y1 onto the stack, with Y1 on top;
                output the production X -> Y1 Y2 ... Yk
            end
            else error()
    until X = $  /* stack is empty */
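A sketch of this table-driven algorithm in C for the usual expression grammar, with single characters standing for grammar symbols ('e' for E', 't' for T', 'i' for id are encoding assumptions of ours):

    #include <stdio.h>
    #include <string.h>

    static const char *M(char X, char a) {      /* parsing table M[X, a] */
        switch (X) {
        case 'E': if (a=='i' || a=='(') return "Te";  break;
        case 'e': if (a=='+') return "+Te";
                  if (a==')' || a=='$') return "";    break;  /* E' -> eps */
        case 'T': if (a=='i' || a=='(') return "Ft";  break;
        case 't': if (a=='*') return "*Ft";
                  if (a=='+' || a==')' || a=='$') return ""; break;
        case 'F': if (a=='i') return "i";
                  if (a=='(') return "(E)";           break;
        }
        return NULL;                                  /* error entry */
    }

    int parse(const char *w) {                        /* w must end in '$' */
        char stack[128] = "$E";                       /* $ on bottom, start symbol on top */
        int top = 1;
        const char *ip = w;
        for (;;) {
            char X = stack[top], a = *ip;
            if (X == '$' && a == '$') return 1;       /* accept */
            if (X == a) { top--; ip++; continue; }    /* match a terminal */
            if (strchr("EeTtF", X) == NULL) return 0; /* terminal mismatch */
            const char *rhs = M(X, a);
            if (!rhs) return 0;                       /* no table entry: error */
            printf("%c -> %s\n", X, *rhs ? rhs : "eps");
            top--;                                    /* pop X ...            */
            if (top + (int)strlen(rhs) >= (int)sizeof stack) return 0;
            for (int i = (int)strlen(rhs) - 1; i >= 0; i--)
                stack[++top] = rhs[i];                /* ...push RHS reversed */
        }
    }

    int main(void) {
        printf("%s\n", parse("i+i*i$") ? "accepted" : "error");
        return 0;
    }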
FIRST and FOLLOW:
The construction of a predictive parser is aided by two functions associated with a grammar G. These functions, FIRST and FOLLOW, allow us to fill in the entries of a predictive parsing table for G whenever possible. Sets of tokens yielded by the FOLLOW function can also be used as synchronizing tokens during panic-mode error recovery.

If α is any string of grammar symbols, let FIRST(α) be the set of terminals that begin the strings derived from α. If α =>* ε, then ε is also in FIRST(α).

Define FOLLOW(A), for a non-terminal A, to be the set of terminals a that can appear immediately to the right of A in some sentential form; that is, the set of terminals a such that there exists a derivation of the form S =>* αAaβ for some α and β. If A can be the rightmost symbol in some sentential form, then $ is in FOLLOW(A).

Computation of FIRST():
To compute FIRST(X) for all grammar symbols X, apply the following rules until no more terminals or ε can be added to any FIRST set:
1. If X is a terminal, then FIRST(X) is {X}.
2. If X -> ε is a production, then add ε to FIRST(X).
3. If X is a non-terminal and X -> Y1 Y2 ... Yk is a production, then place a in FIRST(X) if, for some i, a is in FIRST(Yi) and ε is in all of FIRST(Y1), ..., FIRST(Yi-1). If ε is in FIRST(Yj) for all j = 1, 2, ..., k, then add ε to FIRST(X). For example, everything in FIRST(Y1) is surely in FIRST(X); if Y1 does not derive ε, then we add nothing more to FIRST(X), but if Y1 =>* ε, then we add FIRST(Y2), and so on.

For a non-terminal A with productions A -> α1 | α2 | ... | αn:

    FIRST(A) = FIRST(α1) U FIRST(α2) U ... U FIRST(αn)

and, for a string Aα:

    FIRST(Aα) = FIRST(A)                       if ε is not in FIRST(A)
    FIRST(Aα) = (FIRST(A) - {ε}) U FIRST(α)    otherwise

Computation of FOLLOW():
To compute FOLLOW(A) for all non-terminals A, apply the following rules until nothing can be added to any FOLLOW set:
1. Place $ in FOLLOW(S), where S is the start symbol and $ is the input right-end marker.
2. If there is a production A -> αBβ, then everything in FIRST(β) except ε is placed in FOLLOW(B).
3. If there is a production A -> αB, or a production A -> αBβ where FIRST(β) contains ε (i.e., β =>* ε), then everything in FOLLOW(A) is in FOLLOW(B).

Example: construct FIRST and FOLLOW for the grammar:

    A -> BC | EFGH | H
    B -> b
    C -> c | ε
    E -> e | ε
    F -> CE
    G -> g
    H -> h | ε

Solution:
1. Finding the FIRST() sets:
1. FIRST(H) = FIRST(h) U FIRST(ε) = {h, ε}
2. FIRST(G) = FIRST(g) = {g}
3. FIRST(C) = FIRST(c) U FIRST(ε) = {c, ε}
4. FIRST(E) = FIRST(e) U FIRST(ε) = {e, ε}
5. FIRST(F) = FIRST(CE) = (FIRST(C) - {ε}) U FIRST(E) = {c} U {e, ε} = {c, e, ε}
6. FIRST(B) = FIRST(b) = {b}
7. FIRST(A) = FIRST(BC) U FIRST(EFGH) U FIRST(H)
            = FIRST(B) U (FIRST(E) - {ε}) U FIRST(FGH) U {h, ε}
            = {b} U {e} U (FIRST(F) - {ε}) U FIRST(GH) U {h, ε}
            = {b, e, h, ε} U {c, e} U FIRST(G)
            = {b, c, e, h, ε} U {g} = {b, c, e, g, h, ε}

2. Finding the FOLLOW() sets:
1. FOLLOW(A) = {$}
2. FOLLOW(B) = (FIRST(C) - {ε}) U FOLLOW(A) = {c, $}
3. FOLLOW(G) = (FIRST(H) - {ε}) U FOLLOW(A) = {h} U {$} = {h, $}
4. FOLLOW(H) = FOLLOW(A) = {$}
5. FOLLOW(F) = FIRST(GH) - {ε} = {g}
6. FOLLOW(E) = (FIRST(FGH) - {ε}) U FOLLOW(F)
             = (((FIRST(F) - {ε}) U FIRST(GH)) - {ε}) U FOLLOW(F)
             = {c, e} U {g} U {g} = {c, e, g}
7. FOLLOW(C) = FOLLOW(A) U (FIRST(E) - {ε}) U FOLLOW(F) = {$} U {e} U {g} = {e, g, $}
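Both definitions are fixpoint computations and can be coded directly. A sketch computing nullability and FIRST for the expression grammar used in the next example (the single-character symbol encodings are our assumption):

    #include <stdio.h>
    #include <string.h>

    #define NNT 5
    static const char NT[] = "EeTtF";     /* 'e' = E', 't' = T' */
    static const char *prods[] = { "E:Te", "e:+Te", "e:", "T:Ft",
                                   "t:*Ft", "t:", "F:(E)", "F:i" };
    static const int nprods = 8;

    static int idx(char c)   { return (int)(strchr(NT, c) - NT); }
    static int is_nt(char c) { return strchr(NT, c) != NULL; }

    static int nullable[NNT];
    static int first[NNT][128];           /* first[A][c] = 1 iff c in FIRST(A) */

    int main(void) {
        int changed = 1;
        while (changed) {                 /* iterate until nothing is added */
            changed = 0;
            for (int p = 0; p < nprods; p++) {
                char A = prods[p][0];
                const char *rhs = prods[p] + 2;
                int all_nullable = 1;     /* did every symbol so far derive eps? */
                for (const char *s = rhs; *s && all_nullable; s++) {
                    if (!is_nt(*s)) {     /* terminal: add it and stop */
                        if (!first[idx(A)][(int)*s])
                            { first[idx(A)][(int)*s] = 1; changed = 1; }
                        all_nullable = 0;
                    } else {              /* non-terminal: union in its FIRST set */
                        for (int c = 0; c < 128; c++)
                            if (first[idx(*s)][c] && !first[idx(A)][c])
                                { first[idx(A)][c] = 1; changed = 1; }
                        all_nullable = nullable[idx(*s)];
                    }
                }
                if (all_nullable && !nullable[idx(A)])
                    { nullable[idx(A)] = 1; changed = 1; }
            }
        }
        for (int n = 0; n < NNT; n++) {
            printf("FIRST(%c) = {", NT[n]);
            for (int c = 0; c < 128; c++) if (first[n][c]) printf(" %c", c);
            printf(nullable[n] ? " eps }\n" : " }\n");
        }
        return 0;
    }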
Example 1: Construct a predictive parsing table for the given grammar, or check whether the given grammar is LL(1) or not:

    E -> E + T | T
    T -> T * F | F
    F -> (E) | id

Step 1: If the given grammar is left recursive, convert it into a non-left-recursive grammar (otherwise the parser would go into an infinite loop):

    E  -> TE'        E' -> +TE' | ε
    T  -> FT'        T' -> *FT' | ε
    F  -> (E) | id

Step 2: Find FIRST(X) and FOLLOW(X) for all the variables.
The variables are {E, E', T, T', F}; the terminals are {+, *, (, ), id} and $.

Computation of the FIRST sets:

    FIRST(E)  = FIRST(T) = FIRST(F) = FIRST((E)) U FIRST(id) = {(, id}
    FIRST(E') = FIRST(+TE') U FIRST(ε) = {+, ε}
    FIRST(T') = FIRST(*FT') U FIRST(ε) = {*, ε}

Computation of the FOLLOW sets:

    FOLLOW set                                                         Relevant productions
    FOLLOW(E)  = {$} U FIRST()) = {$, )}                               F -> (E)
    FOLLOW(E') = FOLLOW(E) = {$, )}                                    E -> TE'
    FOLLOW(T)  = (FIRST(E') - {ε}) U FOLLOW(E) U FOLLOW(E') = {+,),$}  E -> TE', E' -> +TE'
    FOLLOW(T') = FOLLOW(T) = {+, ), $}                                 T -> FT'
    FOLLOW(F)  = (FIRST(T') - {ε}) U FOLLOW(T) U FOLLOW(T') = {*,+,),$}  T -> FT', T' -> *FT'

Step 3: Construction of the parsing table:

    Variables\Terminals  +           *            (          )          id         $
    E                                             E -> TE'              E -> TE'
    E'                   E' -> +TE'                          E' -> ε               E' -> ε
    T                                             T -> FT'              T -> FT'
    T'                   T' -> ε     T' -> *FT'              T' -> ε               T' -> ε
    F                                             F -> (E)              F -> id

    Table 3.1 Parsing table

Fill the table with the production on the basis of FIRST(α). If ε is in FIRST(α), then go to FOLLOW(A) and fill A -> ε in all those input symbols.
Let us start with the non-terminal E. FIRST(E) = {(, id}, so place the production E -> TE' at ( and id.
For the non-terminal E', FIRST(E') = {+, ε}, so place the production E' -> +TE' at +; and, since there is an ε in FIRST(E'), see FOLLOW(E') = {$, )} and write the production E' -> ε at $ and ).
Similarly, for the non-terminal T, FIRST(T) = {(, id}, so place the production T -> FT' at ( and id.
For the non-terminal T', FIRST(T') = {*, ε}, so place the production T' -> *FT' at *; and, since there is an ε in FIRST(T'), see FOLLOW(T') = {+, $, )} and write the production T' -> ε at +, $ and ).
For the non-terminal F, FIRST(F) = {(, id}, so place the production F -> id at id and F -> (E) at (, as F has two productions.
Finally, make all undefined entries errors. As there were no multiple entries in the table, the given grammar is LL(1).

Step 4: Moves made by the predictive parser on the input id + id * id:

    STACK      INPUT         REMARKS
    $E         id+id*id$     E and id do not match; see M[E, id]: E -> TE'; pop E, push E' and T, i.e. in reverse order.
    $E'T       id+id*id$     T -> FT'; pop T, push T' and F. Proceed until stack top and input symbol are identical.
    $E'T'F     id+id*id$     F -> id
    $E'T'id    id+id*id$     Identical; pop id and remove id from the input.
    $E'T'      +id*id$       See M[T', +]: T' -> ε; pop T'.
    $E'        +id*id$       See M[E', +]: E' -> +TE'; push E', T and +.
    $E'T+      +id*id$       Identical; pop + and remove + from the input.
    $E'T       id*id$        T -> FT'
    $E'T'F     id*id$        F -> id
    $E'T'id    id*id$        Identical; pop and advance.
    $E'T'      *id$          T' -> *FT'
    $E'T'F*    *id$          Identical; pop and advance.
    $E'T'F     id$           F -> id
    $E'T'id    id$           Identical; pop and advance.
    $E'T'      $             T' -> ε
    $E'        $             E' -> ε
    $          $             Accept.

    Table 3.2 Moves made by the parser on input id+id*id

The predictive parser accepts the given input string. We can notice that $ is in both the input and the stack, i.e. both are empty, hence accepted.
2.6.3 LL(1) Grammar:
The first L stands for "left-to-right scan of the input". The second L stands for "leftmost derivation". The "1" stands for one token of look-ahead. No LL(1) grammar can be ambiguous or left recursive. If there are no multiply defined entries in the predictive parsing table, the given grammar is LL(1). If the grammar G is ambiguous or left recursive, then the table will have at least one multiply defined entry. The weakness of LL(1) (top-down, predictive) parsing is that we must predict which production to use.

Error Recovery in Predictive Parser:
Error recovery is based on the idea of skipping symbols on the input until a token in a selected set of synchronizing tokens appears. Its effectiveness depends on the choice of the synchronizing set. Using FOLLOW and FIRST symbols as synchronizing tokens works reasonably well when expressions are parsed. For the constructed table, fill in synch for the remaining input symbols of the FOLLOW set, and then fill the rest of the columns with error entries.

    Variables\Terminals  +            *            (          )          id         $
    E                    error        error        E -> TE'   synch      E -> TE'   synch
    E'                   E' -> +TE'   error        error      E' -> ε    error      E' -> ε
    T                    synch        error        T -> FT'   synch      T -> FT'   synch
    T'                   T' -> ε      T' -> *FT'   error      T' -> ε    error      T' -> ε
    F                    synch        synch        F -> (E)   synch      F -> id    synch

    Table 3.3 Synchronizing tokens added to the parsing table of Table 3.1

If the parser looks up an entry in the table and finds synch, then the non-terminal on top of the stack is popped in an attempt to resume parsing. If the token on top of the stack does not match the input symbol, then pop the token from the stack. The moves of the parser with error recovery on the erroneous input )id*+id are as follows:

    STACK      INPUT        REMARKS
    $E         )id*+id$     Error; skip ).
    $E         id*+id$      E -> TE'
    $E'T       id*+id$      T -> FT'
    $E'T'F     id*+id$      F -> id
    $E'T'id    id*+id$      Match id.
    $E'T'      *+id$        T' -> *FT'
    $E'T'F*    *+id$        Match *.
    $E'T'F     +id$         Error; M[F, +] is synch; F has been popped.
    $E'T'      +id$         T' -> ε
    $E'        +id$         E' -> +TE'
    $E'T+      +id$         Match +.
    $E'T       id$          T -> FT'
    $E'T'F     id$          F -> id
    $E'T'id    id$          Match id.
    $E'T'      $            T' -> ε
    $E'        $            E' -> ε
    $          $            Accept.

    Table 3.4 Parsing and error-recovery moves made by the predictive parser

Example 2: Construct a predictive parsing table for the given grammar, or check whether the given grammar is LL(1) or not:

    S  -> iEtSS' | a
    S' -> eS | ε
    E  -> b

Solution:
1. Computation of the FIRST sets:
   FIRST(E)  = FIRST(b) = {b}
   FIRST(S') = FIRST(eS) U FIRST(ε) = {e, ε}
   FIRST(S)  = FIRST(iEtSS') U FIRST(a) = {i, a}
2. Computation of the FOLLOW sets:
   FOLLOW(S)  = {$} U (FIRST(S') - {ε}) U FOLLOW(S) U FOLLOW(S') = {e, $}
   FOLLOW(S') = FOLLOW(S) = {e, $}
   FOLLOW(E)  = FIRST(tSS') = {t}
3. The parsing table for this grammar is:

         a         b        e           i             t        $
    S    S -> a                         S -> iEtSS'
    S'                      S' -> eS
                            S' -> ε                            S' -> ε
    E              E -> b

As the table has a multiply defined entry (at M[S', e]), the given grammar is not LL(1).
Example 3: Construct the FIRST and FOLLOW sets and the predictive parse table for the grammar:

    S -> ACS | ε
    C -> c | ε
    A -> aBCd | BQ | ε
    B -> bB | d
    Q -> q

Solution:
1. Finding the FIRST sets:
   FIRST(Q) = {q}
   FIRST(B) = {b, d}
   FIRST(C) = {c, ε}
   FIRST(A) = FIRST(aBCd) U FIRST(BQ) U FIRST(ε)
            = {a} U FIRST(B) U {ε}
            = {a} U {b, d} U {ε} = {a, b, d, ε}
   FIRST(S) = FIRST(ACS) U FIRST(ε)
            = (FIRST(A) - {ε}) U (FIRST(C) - {ε}) U {ε}
            = {a, b, d} U {c} U {ε} = {a, b, c, d, ε}

2. Finding the FOLLOW sets:
   FOLLOW(S) = {$}
   FOLLOW(A) = (FIRST(C) - {ε}) U FOLLOW(S) = {c} U {$} = {c, $}
   FOLLOW(B) = (FIRST(C) - {ε}) U FIRST(d) U FIRST(Q) = {c} U {d} U {q} = {c, d, q}
   FOLLOW(C) = FIRST(d) U FOLLOW(S) = {d, $}
   FOLLOW(Q) = FOLLOW(A) = {c, $}

3. The parsing table for this grammar is:

         a           b          c         d          q         $
    S    S -> ACS    S -> ACS   S -> ACS  S -> ACS             S -> ε
    A    A -> aBCd   A -> BQ    A -> ε    A -> BQ              A -> ε
    B                B -> bB              B -> d
    C                           C -> c    C -> ε               C -> ε
    Q                                                Q -> q

Moves made by the predictive parser on the input abdcdc:

    STACK        INPUT       REMARKS
    $S           abdcdc$     S -> ACS
    $SCA         abdcdc$     A -> aBCd
    $SCdCBa      abdcdc$     Match a.
    $SCdCB       bdcdc$      B -> bB
    $SCdCBb      bdcdc$      Match b.
    $SCdCB       dcdc$       B -> d
    $SCdCd       dcdc$       Match d.
    $SCdC        cdc$        C -> c
    $SCdc        cdc$        Match c.
    $SCd         dc$         Match d.
    $SC          c$          C -> c
    $Sc          c$          Match c.
    $S           $           S -> ε
    $            $           Accepted.

BOTTOM-UP PARSING

1. BOTTOM-UP PARSING:
A bottom-up parser builds a derivation by working from the input sentence back towards the start symbol S. Rightmost derivation in reverse order is done in bottom-up parsing. (The point of parsing is to construct a derivation, and a derivation consists of a series of rewrite steps.)

    S => γ0 => γ1 => ... => γn = input sentence

Assuming the production A -> β, to reduce γi to γi-1 we match some right-hand side β against a substring of γi, then replace β with its corresponding left-hand side A. In terms of the parse tree, this is working from the leaves to the root.

Example 1:

    S -> if E then S else S | while E do S | print
    E -> true | false | id

Input: if id then while true do print else print.
Basic idea: given the input string, "reduce" it to the goal (start) symbol by looking for substrings that match production right-hand sides.

Leftmost derivation (top-down):

    S => if E then S else S
      => if id then S else S
      => if id then while E do S else S
      => if id then while true do S else S
      => if id then while true do print else S
      => if id then while true do print else print

Bottom-up reduction (reverse rightmost derivation):

    if id then while true do print else print
      <= if E then while true do print else print
      <= if E then while E do print else print
      <= if E then while E do S else print
      <= if E then S else print
      <= if E then S else S
      <= S

Top-down vs. bottom-up parsing:

Top-down:
1. Constructs the tree from the root to the leaves.
2. "Guesses" which RHS to substitute for a non-terminal.
3. Produces a leftmost derivation.
4. Recursive descent, LL parsers.
5. Easy for humans.

Bottom-up:
1. Constructs the tree from the leaves to the root.
2. "Guesses" which rule to "reduce" by.
3. Produces a reverse rightmost derivation.
4. Shift-reduce, LR, LALR, etc.
5. "Harder" for humans.

Bottom-up parsers can parse a larger set of languages than top-down parsers. Both work for most (but not all) features of most computer languages.

Example: rightmost derivation for the grammar

    S -> aAcBe
    A -> Ab | b
    B -> d

and input abbcde:

    S => aAcBe => aAcde => aAbcde => abbcde

Bottom-up approach:

    Right-sentential form    Reduction
    abbcde                   A -> b
    aAbcde                   A -> Ab
    aAcde                    B -> d
    aAcBe                    S -> aAcBe
    S

The steps correspond to a rightmost derivation in reverse (we must choose the RHS wisely).

Example:

    S -> aABe
    A -> Abc | b
    B -> d

Input: abbcde. Rightmost derivation:

    S => aABe
      => aAde        since B -> d
      => aAbcde      since A -> Abc
      => abbcde      since A -> b

Parsing using the bottom-up approach:

    Input      Production used
    abbcde     A -> b
    aAbcde     A -> Abc
    aAde       B -> d
    aABe       S -> aABe
    S          parsing is completed, as we reach the start symbol

Hence the input string is acceptable.

Example 4:

    E -> E + E
    E -> E * E
    E -> (E)
    E -> id

Input: id1 + id2 * id3. Rightmost derivation:

    E => E + E
      => E + E * E
      => E + E * id3
      => E + id2 * id3
      => id1 + id2 * id3

Parsing using the bottom-up approach (going from left to right):

    id1 + id2 * id3      E -> id
    E + id2 * id3        E -> id
    E + E * id3          E -> id
    E + E * E            E -> E * E
    E + E                E -> E + E
    E

E is the start symbol, hence the input string is acceptable.

2. HANDLES:
Always making progress by replacing a substring with the LHS of a matching production will not necessarily lead to the goal/start symbol. For example:

    abbcde
    aAbcde    A -> b
    aAAcde    A -> b    -- stuck

Informally, a handle of a string is a substring that matches the right side of a production, and whose reduction to the non-terminal on the left side of the production represents one step along the reverse of a rightmost derivation. If the grammar is unambiguous, every right-sentential form has exactly one handle. More formally, a handle is a production A -> β and a position in the current right-sentential form αβw such that:

    S =>*rm αAw =>rm αβw

The string w to the right of the handle contains only terminal symbols. For the example grammar, if the current right-sentential form is aAbcde, then the handle is A -> Ab at the marked position.

HANDLE PRUNING:
Keep removing handles, replacing them with the corresponding LHS of a production, until we reach S.
Example: E -> E + E | E * E | (E) | id

    Right-sentential form    Handle    Reducing production
    a + b * c                a         E -> id
    E + b * c                b         E -> id
    E + E * c                c         E -> id
    E + E * E                E * E     E -> E * E
    E + E                    E + E     E -> E + E
    E

The grammar is ambiguous, so there are actually two handles at the next-to-last step. We can use parser generators that compute the handles for us.
Shift-reduce parsing example (stack implementation):

Grammar: E → E+E | E*E | (E) | id
Input: id1+id2*id3

One scheme to implement a handle-pruning, bottom-up parser is called a shift-reduce parser. Shift-reduce parsers use a stack and an input buffer. The sequence of steps is as follows:

1. Initialize the stack with $.
2. Repeat until the top of the stack is the goal symbol and the input token is the end-of-file marker:
   a. Find the handle. If we don't have a handle on top of the stack, shift an input symbol onto the stack.
   b. Prune the handle. If we have a handle A → β on the stack, reduce: (i) pop |β| symbols off the stack; (ii) push A onto the stack.

   Stack    | Input        | Action
   $        | id1+id2*id3$ | Shift
   $id1     | +id2*id3$    | Reduce by E → id
   $E       | +id2*id3$    | Shift
   $E+      | id2*id3$     | Shift
   $E+id2   | *id3$        | Reduce by E → id
   $E+E     | *id3$        | Shift
   $E+E*    | id3$         | Shift
   $E+E*id3 | $            | Reduce by E → id
   $E+E*E   | $            | Reduce by E → E*E
   $E+E     | $            | Reduce by E → E+E
   $E       | $            | Accept

Example 2:

Goal   → Expr
Expr   → Expr + Term | Expr - Term | Term
Term   → Term * Factor | Term / Factor | Factor
Factor → number | id | (Expr)

The expression x - 2 * y is tokenized as id - num * id:

   Stack                  | Input         | Action
   $                      | id - num * id | Shift
   $ id                   | - num * id    | Reduce Factor → id
   $ Factor               | - num * id    | Reduce Term → Factor
   $ Term                 | - num * id    | Reduce Expr → Term
   $ Expr                 | - num * id    | Shift
   $ Expr -               | num * id      | Shift
   $ Expr - num           | * id          | Reduce Factor → num
   $ Expr - Factor        | * id          | Reduce Term → Factor
   $ Expr - Term          | * id          | Shift
   $ Expr - Term *        | id            | Shift
   $ Expr - Term * id     |               | Reduce Factor → id
   $ Expr - Term * Factor |               | Reduce Term → Term * Factor
   $ Expr - Term          |               | Reduce Expr → Expr - Term
   $ Expr                 |               | Reduce Goal → Expr
   $ Goal                 |               | Accept

Procedure:
1. Shift until the top of the stack is the right end of a handle.
2. Find the left end of the handle and reduce.

Dangling-else problem:

stmt → if expr then stmt
     | if expr then stmt else stmt
     | other

The example string "if E1 then if E2 then S1 else S2" has two parse trees (an ambiguity): the else can attach to the inner if, giving if E1 then (if E2 then S1 else S2), or to the outer one, giving if E1 then (if E2 then S1) else S2. So this grammar is not of LR(k) type for any k.

4. OPERATOR-PRECEDENCE PARSING:

Precedence/operator grammar: a grammar with the properties
1. no production right side is ε (the empty string), and
2. no production right side contains two adjacent non-terminals,
is called an operator grammar.

Operator-precedence parsing uses three disjoint precedence relations, <·, ≐ and ·>, between certain pairs of terminals. These precedence relations guide the selection of handles and have the following meanings:

   Relation | Meaning
   a <· b   | a "yields precedence to" b
   a ≐ b    | a "has the same precedence as" b
   a ·> b   | a "takes precedence over" b

Operator-precedence parsing has a number of disadvantages:
1. It is hard to handle tokens like the minus sign, which has two different precedences (unary and binary).
2. Only a small class of grammars can be parsed, so its usage is limited, and error detection is difficult.
3. The relationship between the grammar for the language being parsed and the operator-precedence parser itself is tenuous: one cannot always be sure the parser accepts exactly the desired language L(G).
On the other hand, such parsers are simple enough to construct and analyse manually.

Example:
Grammar: E → EAE | (E) | -E | id
         A → + | - | * | / | ↑
Input string: id+id*id
The operator-precedence relations are:
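For the usual arithmetic reading, with * taking precedence over +, the relations among id, +, * and $ come out as in the sketch below. Both this table and the driver loop are illustrative assumptions (the standard textbook construction), not a figure from these notes; in the code, '<' stands for <· and '>' for ·>.

```python
# Operator-precedence relations for id, +, * and the end marker $,
# assuming * binds tighter than + and both are left-associative.
PREC = {
    ("$", "id"): "<", ("$", "+"): "<", ("$", "*"): "<",
    ("id", "+"): ">", ("id", "*"): ">", ("id", "$"): ">",
    ("+", "id"): "<", ("+", "+"): ">", ("+", "*"): "<", ("+", "$"): ">",
    ("*", "id"): "<", ("*", "+"): ">", ("*", "*"): ">", ("*", "$"): ">",
}

def op_precedence_parse(tokens):
    """Skeleton of the algorithm: shift on <. or =, pop a handle on .>,
    accept when only $ remains on both sides. Only terminals are kept
    on the stack, which is all the relations need to compare."""
    stack = ["$"]
    tokens = tokens + ["$"]
    i = 0
    while True:
        a, b = stack[-1], tokens[i]       # top terminal vs. lookahead
        if a == "$" and b == "$":
            return True                   # accept
        rel = PREC.get((a, b))
        if rel in ("<", "="):
            stack.append(b)               # shift
            i += 1
        elif rel == ">":
            # reduce: pop until the new top is related by <. to the
            # terminal most recently popped (that span was the handle)
            while True:
                popped = stack.pop()
                if PREC.get((stack[-1], popped)) == "<":
                    break
        else:
            return False                  # error: no relation holds

print(op_precedence_parse(["id", "+", "id", "*", "id"]))   # True
```

On id+id*id the driver shifts id and reduces it ($ <· id ·> +), shifts + and the next id, then, seeing id ·> *, reduces that id before shifting *: exactly the handle choices made in the earlier shift-reduce trace.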