•A Pushdown Automaton is a Finite Automaton with additional auxiliary memory in the form of a stack.
•Elements are removed from the stack on a LIFO (Last In, First Out) basis.
Pushdown Automata
•Has a read-only input tape
•An input alphabet
•Finite state control
•A set of final states
•An initial state
•In addition to this, it has a stack, the "pushdown store".
•The pushdown store is read–write, since elements can be added to or removed from it.
•A PDA in some state, on reading an input symbol and the topmost stack symbol, moves to a new state and writes (pushes) a string of symbols onto the stack.
Pushdown Automata
Pushdown automata are for context-free
languages what finite automata are for regular
languages.
PDAs are recognizing automata that have a
single stack (= memory):
Last-In First-Out pushing and popping.
Difference: PDAs are inherently nondeterministic.
(They are not practical machines.)
Types of PDA
•Deterministic PDA - for each state, input symbol, and stack-top symbol, there exists at most one transition.
•Non-deterministic PDA - there may exist more than one transition for the same state, input symbol, and stack-top symbol.
Construct a PDA for L = { a^n b^n : n ≥ 0 }
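The construction can be sketched as a small simulation in Python (the function name and two-phase encoding are illustrative assumptions, not part of the formal PDA definition): push a stack symbol for every a, pop one for every b, and accept by empty store.

```python
# Sketch of a PDA recognizing L = { a^n b^n : n >= 0 }.
def accepts_anbn(w: str) -> bool:
    stack = []          # the pushdown store
    state = "push"      # phase 1: push an 'A' for every 'a' read
    for ch in w:
        if state == "push" and ch == "a":
            stack.append("A")
        elif ch == "b" and stack:
            state = "pop"   # once a 'b' is seen, only 'b's may follow
            stack.pop()     # match one 'b' against one pushed 'A'
        else:
            return False    # no transition defined: reject
    # accept by null store: input consumed and stack empty
    return not stack
```

Note that the same machine illustrates both acceptance modes mentioned later: checking `not stack` is null-store acceptance, while reaching the "pop" phase with an empty stack could equally be modeled as entering a final state.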
Top Down Parsing
•Start at the root of the derivation tree and fill in.
•Picks a production and tries to match the input.
•May require backtracking.
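These steps can be sketched as a tiny backtracking recursive-descent parser. The grammar S → 'a' S 'b' | 'c' and all names below are hypothetical, chosen only to illustrate trying a production and backtracking on mismatch:

```python
# Backtracking top-down parser for the toy grammar S -> 'a' S 'b' | 'c'.
# parse_S tries each production in turn; on failure it falls through
# (backtracks) to the next alternative. Returns the index reached on
# success, or None on failure.
def parse_S(s: str, i: int):
    # try production S -> 'a' S 'b'
    if i < len(s) and s[i] == "a":
        j = parse_S(s, i + 1)
        if j is not None and j < len(s) and s[j] == "b":
            return j + 1
        # mismatch: backtrack and try the next production
    # try production S -> 'c'
    if i < len(s) and s[i] == "c":
        return i + 1
    return None

def accepts(s: str) -> bool:
    # the whole input must be consumed starting from the root S
    return parse_S(s, 0) == len(s)
```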
Bottom up Parsing
•Start at the leaves and fill in.
•Start in a state valid for legal first tokens.
•As input is consumed, change state to encode possibilities.
•Use a stack to store both state and sentential forms.
Acceptance by PDA
•Null store (empty stack) acceptance
•Final state acceptance
Model of PDA
LL(K) Grammar
•In this section we present certain techniques
for top-down parsing which can be applied to
a certain subclass of context-free languages.
We illustrate them by means of some
examples. We discuss LL(1) parsing, LL(k)
parsing, left factoring and the technique to
remove left recursion.
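As a sketch of these techniques combined: removing left recursion from the grammar S → S + E | E, E → num | (S) used later in these slides yields the LL(1) grammar S → E S', S' → + E S' | ε, E → num | (S). A table-driven predictive parser for it might look like this (token 'n' stands for num, '$' for end of input; the table and names are worked out here for illustration, not taken from the slides):

```python
# Table-driven LL(1) parser for the left-recursion-free grammar
#   S -> E S'    S' -> '+' E S' | eps    E -> 'n' | '(' S ')'
# The table maps (nonterminal, lookahead) to the production's RHS.
TABLE = {
    ("S",  "n"): ["E", "S'"],
    ("S",  "("): ["E", "S'"],
    ("S'", "+"): ["+", "E", "S'"],
    ("S'", ")"): [],            # S' -> eps
    ("S'", "$"): [],            # S' -> eps
    ("E",  "n"): ["n"],
    ("E",  "("): ["(", "S", ")"],
}
NONTERMINALS = {"S", "S'", "E"}

def ll1_parse(tokens: str) -> bool:
    stack = ["$", "S"]                 # predictive parser stack
    toks = list(tokens) + ["$"]
    pos = 0
    while stack:
        top = stack.pop()
        look = toks[pos]
        if top in NONTERMINALS:
            rule = TABLE.get((top, look))
            if rule is None:
                return False           # no table entry: syntax error
            stack.extend(reversed(rule))   # push RHS, leftmost on top
        elif top == look:
            pos += 1                   # terminal matched: consume input
        else:
            return False
    return pos == len(toks)
```

The single symbol of lookahead (`look`) is what makes this LL(1): exactly one table entry applies at each step, so no backtracking is needed.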
LL(1) Grammar
LL(2) Grammar
LR(K) Grammar
Bottom-Up Parsing
•Start at the leaves and grow toward root
•As input is consumed, encode possibilities in
an internal state
•A powerful parsing technology
•LR grammars
–Construct right-most derivation of program
–Left-recursive grammars; virtually all programming
languages are left-recursive
–Easier to express syntax
Bottom-Up Parsing
•Right-most derivation
–Start with the tokens
–End with the start symbol
–Match substring on RHS of production, replace by
LHS
–Shift-reduce parsers
•Parsers for LR grammars
•Automatic parser generators (yacc, bison)
Bottom-Up Parsing
•Example Bottom-Up Parsing
S → S + E | E
E → num | (S)
(1+2+(3+4))+5 → (E+2+(3+4))+5 → (S+2+(3+4))+5
→ (S+E+(3+4))+5 → (S+(3+4))+5 → (S+(E+4))+5
→ (S+(S+4))+5 → (S+(S+E))+5 → (S+(S))+5
→ (S+E)+5 → (S)+5 → E+5
→ S+5 → S+E → S
Bottom-Up Parsing
•Advantage
–Can postpone the selection of productions until
more of the input is scanned
[Figure: parse trees built by Top-Down Parsing vs Bottom-Up Parsing for the grammar S → S + E | E, E → num | (S); bottom-up parsing has more time to decide what rules to apply.]
Properties of LR(K)
•Every LR(k) grammar G is unambiguous.
Computational Complexity:
Measuring Time & Space Complexity
•Space complexity
•The better the time complexity of an algorithm is, the faster the
algorithm will carry out its work in practice. Apart from time
complexity, its space complexity is also important: this is
essentially the number of memory cells which an algorithm needs.
A good algorithm keeps this number as small as possible, too.
•There is often a time-space trade-off involved in a problem, that is,
it cannot be solved with both little computing time and low memory
consumption. One then has to make a compromise and exchange
computing time for memory consumption or vice versa, depending
on which algorithm one chooses and how one parameterizes it.
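A standard illustration of this trade-off (not from the slides) is computing Fibonacci numbers: the naive recursion uses constant extra memory but exponential time, while caching intermediate results (memoization) spends O(n) memory on a table to bring the running time down to O(n):

```python
from functools import lru_cache

def fib_slow(n: int) -> int:
    # O(1) extra space, but exponential time: each call spawns two more
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)      # trades O(n) cache memory for O(n) time
def fib_fast(n: int) -> int:
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)
```

Both functions compute the same values; parameterizing the cache (e.g. bounding `maxsize`) is exactly the kind of compromise between time and memory described above.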
Measuring Time Complexity
•Time complexity
•How long does this sorting program run? It
possibly takes a very long time on large inputs
(that is, many strings) until the program has
completed its work and gives a sign of life
again. Sometimes it makes sense to be able to
estimate the running time before starting a
program.