Meaning Representations for Natural Languages: Design, Models, and Applications


About This Presentation

COLING-LREC'2024 Tutorial "Meaning Representations for Natural Languages: Design, Models and Applications"

Instructors: Julia Bonn, Jeffrey Flanigan, Jan Hajič, Ishan Jindal, Yunyao Li and Nianwen Xue

Abstract: This tutorial introduces a research area that has the potential to cre...


Slide Content

Tutorial
Meaning Representations for Natural Languages:
Design, Models and Applications
Julia Bonn, Jeffrey Flanigan, Jan Hajič, Ishan Jindal, Yunyao Li, Nianwen Xue

What should be in a Meaning
Representation?

Motivation: From Sentences to Propositions
Who did what to whom, when, where and how?
"When Powell met Zhu Rongji on Thursday they discussed the return of the spy plane."
Powell met Zhu Rongji.
Powell and Zhu Rongji met.
Powell met with Zhu Rongji.
Powell and Zhu Rongji had a meeting.
Proposition: MEET(Powell, Zhu Rongji)
MEET(somebody1, somebody2)
MEET(Powell, Zhu)
DISCUSS([Powell, Zhu], RETURN(X, plane))

Capturing Semantic Roles
• [Tim] broke [the window].  (Tim: Breaker, AGENT)
• [The window] was broken [by the hurricane].  (The window: Thing broken, PATIENT)
• [The window] broke [into pieces] [when it slammed shut].  (The window: Thing broken, PATIENT)

A Proposition as a Tree
"Zhu and Powell discussed the return of the spy plane."
DISCUSS([Powell, Zhu], return(X, plane))
discuss
├─ Zhu and Powell
└─ return of the spy plane

Semantic Role Labeling: PropBank Frame Files
VALENCY LEXICON
discuss.01 "talk about"
ALIASES: discuss (VERB), discussion (NOUN), have_discussion (LIGHT VERB CONSTRUCTION)
ROLES:
  ARG0: discussant
  ARG1: topic
  ARG2: conversation partner, if explicit
11,500+ rolesets
See Kingsbury & Palmer (LREC 2002); Pradhan et al. (*SEM 2022)

PropBank Frame Files
"Zhu and Powell discussed the return of the spy plane."
discuss.01
  ARG0: Zhu and Powell
  ARG1: return-02
    ARG1: of the spy plane
DISCUSS([Powell, Zhu], RETURN(X, plane))

A Proposition as a Tree
"Zhu and Powell discussed the return of the spy plane."
DISCUSS([Powell, Zhu], return(X, plane))
discuss
├─ Arg0: Zhu and Powell
└─ Arg1: return
   ├─ Arg1: of the spy plane
   └─ ??? (Zhu)

Proposition Bank
• Hand-annotated predicate-argument structures for the Penn Treebank
• Standoff XML, points directly to syntactic parse tree nodes
• Doubly annotated and adjudicated (Kingsbury & Palmer, 2002; Palmer, Gildea & Xue, 2004; …)
• Based on PropBank Frame Files
  • English valency lexicon: ~4K verb entries (2004) → ~11K verb, noun, adjective, and preposition entries (2022)
  • Core arguments: Arg0–Arg5
  • ArgMs for modifiers and adjuncts
  • Mappings to VerbNet and FrameNet
• Annotated PropBank corpora
  • English 2M+, Chinese 1M+, Arabic .5M, Hindi/Urdu .6K, Korean, …

An Abstract Meaning Representation as a Graph
"Zhu and Powell discussed the return of the spy plane."
DISCUSS([Powell, Zhu], return(X, plane))
discuss.01
├─ Arg0: Zhu and Powell
└─ Arg1: return.02
   └─ Arg1: of the spy plane

An Abstract Meaning Representation as a Graph
"Zhu and Powell discussed the return of the spy plane."
DISCUSS([Powell, Zhu], return(X, plane))
discuss.01
├─ Arg0: and (op1: Zhu, op2: Powell)
└─ Arg1: return.02
   └─ Arg1: spy plane
AMR drops: determiners, function words
AMR adds: NE tags, wiki links

An Abstract Meaning Representation as a Graph
"Zhu and Powell discussed the return of the spy plane."
DISCUSS([Powell, Zhu], return(X, plane))
discuss.01
├─ Arg0: and (op1: Zhu, op2: Powell)
└─ Arg1: return.02
   ├─ Arg1: plane (Arg0-of: spy.01)
   └─ Arg0: Zhu
AMR drops: determiners, function words
AMR adds: NE tags, wiki links, implicit arguments, coreference links

Motivation: From Sentences to Propositions
Who did what to whom, when, where and how?
"When Powell met Zhu Rongji on Thursday they discussed the return of the spy plane."
Powell met Zhu Rongji.
Powell and Zhu Rongji met.
Powell met with Zhu Rongji.
Powell and Zhu Rongji had a meeting.
Proposition: MEET(Powell, Zhu Rongji)
MEET(somebody1, somebody2)
MEET(Powell, Zhu)
DISCUSS([Powell, Zhu], RETURN(X, plane))
But this is English! (related predicates: debate, consult, battle, wrestle, join)

Motivation: From Sentences to Propositions
Who did what to whom, when, where and how?
Spanish: "Powell se reunió con Zhu Rongji el jueves y hablaron sobre el regreso del avión espía."
("Powell met with Zhu Rongji on Thursday and they talked about the return of the spy plane.")
Powell reunió Zhu Rongji.
Powell y Zhu Rongji reunió.
Powell reunió con Zhu Rongji.
Powell y Zhu Rongji tuvo una reunión.
Proposition: REUNIR(Powell, Zhu Rongji)
REUNIR(somebody1, somebody2)
REUNIR(Powell, Zhu)
HABLAR([Powell, Zhu], REGRESAR(X, avión))
Other languages? Ukrainian: зустрів; Arabic: التقى; Chinese: 遇见; Hindi: मुलाकात की; Thai: พบ

How do we cover thousands of languages?
• Several languages already have valency lexicons
  • Chinese, Arabic, Hindi/Urdu, Korean PropBanks, …
  • Czech tectogrammatical SynSemClass: https://ufal.mff.cuni.cz/synsemclass
  • VerbNets, FrameNets: Spanish, Basque, Catalan, Portuguese, Japanese, …
  • Linguistic valency lexicons: Arapaho, Lakota, Turkish, Farsi, Japanese, …
• For those without, follow the EuroWordNet approach: project from English?
  • Universal Proposition Banks for Multilingual Semantic Role Labeling (see Ishan Jindal in Part 3)
• Can AMR be applied universally to build language-specific AMRs?
  • Uniform Meaning Representation (see Nianwen Xue after the AM break)

Tutorial Outline
Morning Session
• Part 1: Introduction – Julia Bonn
• Part 2a: Common Meaning Representations
  • AMR – Julia Bonn
  • Other Meaning Representations – Jan Hajič
• Break
• Part 2b: Common Meaning Representations
  • UMR – Nianwen Xue

Tutorial Outline
Afternoon Session
• Part 3: Modeling Meaning Representations
  • SRL – Ishan Jindal
  • AMR – Jeff Flanigan
• Break
• Part 4: Applying Meaning Representations – Jeff Flanigan
• Part 5: Open Questions and Future Work – Nianwen Xue

Meaning Representations for Natural Languages Tutorial Part 2
Common Meaning Representations
Julia Bonn, Jeffrey Flanigan, Jan Hajič, Ishan Jindal, Nianwen Xue

Meaning Representations for Natural Languages Tutorial Part 2A
Common Meaning Representations: AMR
Julia Bonn, Jeffrey Flanigan, Jan Hajič, Ishan Jindal, Yunyao Li, Nianwen Xue
Representation Roadmap
• AMR Format & Basics
• Some Details & Design Decisions
• Practice: Walking through a few AMRs
• Multi-sentence AMRs
• Relation to Other Formalisms

Abstract Meaning Representation: AMR
• AMR as a format is older (Kasper 1989, Langkilde & Knight 1998), but with no PropBank there was no training data.
• PropBank showed that large-scale training sets could be annotated for SRL.
• Modern AMR's (Banarescu et al., 2013) main innovation was making large-scale sembanking possible:
  • AMR 3.0: more than 60k sentences in English
  • CAMR: more than 20k sentences in Chinese
• and me… AMR/PropBank lexicon unification

AMR Basics – SRL to AMR
• Shift from SRL to AMR: from spans to graphs
• In SRL we separately represent each predicate's arguments with spans (ARG0 rel ARG1):
"[The little cat] [likes] [to eat cheese]."  (tokens 0–2, 3, 4–6)
  like-01: Arg0 = "the little cat", Arg1 = "to eat cheese"
  eat-01:  Arg0 = "the little cat", Arg1 = "cheese"
• AMR instead uses graphs with one node per concept:
  like-01 (Arg0: cat (mod: little), Arg1: eat-01 (Arg0: cat, Arg1: cheese))

AMR Basics – PENMAN
Penman Notation
"The little cat likes to eat cheese"
(l / like-01
   :ARG0 (c / cat
            :mod (l2 / little))
   :ARG1 (e / eat-01
            :ARG0 c
            :ARG1 (c2 / cheese)))
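As a minimal side sketch (not part of the tutorial), the open-source Python `penman` library can decode this notation into a graph of triples; the snippet below assumes `pip install penman`:

```python
import penman  # open-source PENMAN reader/writer (pip install penman)

g = penman.decode("""
(l / like-01
   :ARG0 (c / cat
            :mod (l2 / little))
   :ARG1 (e / eat-01
            :ARG0 c
            :ARG1 (c2 / cheese)))
""")

print(g.top)  # 'l' -- the top (root) variable
for triple in g.triples:
    print(triple)
# ('l', ':instance', 'like-01'), ('l', ':ARG0', 'c'), ('c', ':instance', 'cat'), ...
# Note that the variable 'c' is the target of two edges: a re-entrancy.
```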

AMR Basics – PENMAN
• concepts from the sentence appear as nodes
"The little cat likes to eat cheese"
(l / like-01
   :ARG0 (c / cat
            :mod (l2 / little))
   :ARG1 (e / eat-01
            :ARG0 c
            :ARG1 (c2 / cheese)))
Concepts: like-01, cat, little, eat-01, cheese

AMR Basics – PENMAN
• concepts from the sentence appear as nodes
• unique variables identify each concept
"The little cat likes to eat cheese"
(l / like-01
   :ARG0 (c / cat
            :mod (l2 / little))
   :ARG1 (e / eat-01
            :ARG0 c
            :ARG1 (c2 / cheese)))
Variables: l, c, l2, e, c2

AMR Basics – PENMAN
• Edges are represented by:
  • indentation
  • colons (:EDGE)
"The little cat likes to eat cheese"
(l / like-01
   :ARG0 (c / cat
            :mod (l2 / little))
   :ARG1 (e / eat-01
            :ARG0 c
            :ARG1 (c2 / cheese)))
Edges: :ARG0, :ARG1, :mod

AMR Basics – PENMAN
"The little cat likes to eat cheese"
(l / like-01
   :ARG0 (c / cat
            :mod (l2 / little))
   :ARG1 (e / eat-01
            :ARG0 c
            :ARG1 (c2 / cheese)))
Re-entrancy of variables:
• For concepts that are the target of multiple edges in a graph
• Once a concept has a variable:
  • use that variable to refer to it anywhere else in the graph
  • this applies to any kind of reference to the same entity: paraphrases, pronouns, etc.

AMR Basics – PENMAN
Inverse roles:
• Allow us to encode things like relative clauses
• Any relation of the form ":X-of" is an inverse
• The meaning is interchangeable:
  (predicate, ARG0, entity) = (entity, ARG0-of, predicate)
"He likes cats that eat cheese"
(l / like-01
   :ARG0 (h / he)
   :ARG1 (c / cat
            :ARG0-of (e / eat-01
                        :ARG1 (c2 / cheese))))
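Because inverse roles are a purely notational device, normalizing them is mechanical. A small illustrative helper (our own sketch, not the tutorial's code; real AMR has a few lexicalized exceptions such as :consist-of):

```python
def invert_role(role: str) -> str:
    """':ARG0' <-> ':ARG0-of' (sketch; ignores exceptions like :consist-of)."""
    return role[: -len("-of")] if role.endswith("-of") else role + "-of"

def normalize_triple(triple):
    """Rewrite an inverse-role triple into its canonical direction."""
    src, role, tgt = triple
    return (tgt, invert_role(role), src) if role.endswith("-of") else triple

# (cat :ARG0-of eat-01) carries the same meaning as (eat-01 :ARG0 cat):
assert normalize_triple(("c", ":ARG0-of", "e")) == ("e", ":ARG0", "c")
```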

AMR Basics – PENMAN
Semantically rooted graphs:
• Same graph for "cats eat cheese" and "cats that eat cheese"?
• No! Every graph gets a TOP edge defining the semantic head/root.
"Cats eat cheese."  (TOP = eat-01)
(e / eat-01
   :ARG0 (c / cat)
   :ARG1 (c2 / cheese))
"cats that eat cheese"  (TOP = cat)
(c / cat
   :ARG0-of (e / eat-01
               :ARG1 (c2 / cheese)))

AMR Basics – PENMAN
Named entities:
• The head node is a category; AMR provides 70+ categories
• NE annotations:
  • :name, for name tokens
  • :wiki, for the name of the Wikipedia page (if available)
  • both given as strings; these are constants, not assigned variables
"Grumpy Cat likes to eat cheese"
(l / like-01
   :ARG0 (a / animal
            :name (n / name :op1 "Grumpy" :op2 "Cat")
            :wiki "Grumpy_Cat")
   :ARG1 (e / eat-01
            :ARG0 a
            :ARG1 (c2 / cheese)))

AMR Basics – PENMAN
• That's AMR notation! Let's review:
"The dog ate the four bones it found."
(e / eat-01
   :ARG0 (d / dog)
   :ARG1 (b / bone :quant 4
            :ARG1-of (f / find-01
                        :ARG0 d)))
This one graph illustrates all the pieces:
• concepts: eat-01, dog, bone, find-01
• variables: e, d, b, f
• edges: :ARG0, :ARG1, :quant, :ARG1-of
• constant: 4
• inverse role: :ARG1-of
• re-entrancy: d (the dog is both the eater and the finder)
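The same review can be done programmatically; a sketch using the `penman` library's accessors (assuming it is installed) to pull out each kind of element:

```python
from collections import Counter

import penman

g = penman.decode("""
(e / eat-01
   :ARG0 (d / dog)
   :ARG1 (b / bone :quant 4
            :ARG1-of (f / find-01
                        :ARG0 d)))
""")

print("variables:", g.variables())            # {'e', 'd', 'b', 'f'}
print("concepts: ", [c for _, _, c in g.instances()])
print("edges:    ", g.edges())                 # variable-to-variable relations
print("constants:", g.attributes())            # [('b', ':quant', '4')]

# A variable that is the target of more than one edge is a re-entrancy:
targets = Counter(t for _, _, t in g.edges())
print("re-entrant:", [v for v, n in targets.items() if n > 1])  # ['d']
```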

AMR Basics 2 – Annotation Philosophy
• AMR does limited normalization
  • reduces arbitrary syntactic variation ("syntactic sugar")
  • maximizes cross-linguistic robustness
• All predicative things → PropBank rolesets
  • verbs, adjectives, many nouns
• Some morphological decomposition
• Limited speculation:
  • represent the direct contents of the sentence
  • add pragmatic content only when it can be done consistently
• Canonicalize the rest:
  • removal of semantically light predicates and some features like definiteness (controversial)

AMR Basics 2 – Annotation Philosophy
Normalization of predicates:
• We generalize across parts of speech and etymologically related words:
  My fear of snakes (NOUN) → fear-01
  I am fearful of snakes (ADJECTIVE) → fear-01
  I fear snakes (VERB) → fear-01
  I'm afraid of snakes (ADJECTIVE) → fear-01
• But we don't generalize over synonyms (hard to do consistently):
  My fear of snakes (NOUN) → fear-01
  I'm terrified of snakes (ADJECTIVE) → terrify-01
  Snakes creep me out (VERB+PARTICLE) → creep_out-03

AMR Basics 2 – Annotation Philosophy
Normalization of predicates:
• Predicates use the PropBank inventory.
• Each lemma leads annotators to a list of senses.
• Each sense has its own definitions for its numbered (core) arguments.

AMR Basics 2 – Annotation Philosophy
Roles beyond predicates:
• If a semantic role is not in the core roles for a roleset, AMR provides an inventory of non-core roles
• These express things like :time, :manner, :part, :location, :frequency
• Inventory on the handout, or in the editor (the [roles] button)

AMR Basics 2 – Annotation Philosophy
Semantic-concept-to-node ratio:
• Ideally 1:1
• But multi-word expressions?
  • modeled as a single node
• Morphologically complex words?
  • some are decomposed, but only to a limited extent; e.g. kill does not become "cause to die"
"The thief was lining his pockets with their investments"
(l / line-pocket-02
   :ARG0 (p / person
            :ARG0-of (t / thieve-01))
   :ARG1 (t2 / thing
            :ARG2-of (i2 / invest-01
                        :ARG0 (t3 / they))))

AMR Basics 2 – Annotation Philosophy
Canonical forms:
• All concepts drop plurality, aspect, definiteness, and tense
• Non-predicative terms are simply represented in singular, nominative form
  a cat / the cat / cats / the cats → (c / cat)
  eating / eats / ate / will eat → (e / eat-01)
  they / their / them → (t / they)

AMR Basics 2 – Annotation Philosophy
The man described the mission as a disaster.
The man's description of the mission: disaster.
As the man described it, the mission was a disaster.
The man described the mission as disastrous.
(d / describe-01
   :ARG0 (m / man)
   :ARG1 (m2 / mission)
   :ARG2 (d2 / disaster))

Meaning Representations for Natural Languages Tutorial Part 2A
Common Meaning Representations: AMR
Julia Bonn, Jeffrey Flanigan, Jan Hajič, Ishan Jindal, Yunyao Li, Nianwen Xue
Representation Roadmap
• AMR Format & Basics
• AMR: Some Details & Design Decisions
• Practice: Walking through a few AMRs
• Multi-sentence AMRs
• Relation to Other Formalisms

Details – Specialized Normalizations
• AMR uses special abstract concepts for normalizable entities and quantities.
"Tuesday the 19th of May"
(d / date-entity
   :weekday (t / tuesday)
   :day 19
   :month 5)
date-entity roles: :day, :month, :year, :weekday, :time, :timezone, :quarter, :dayperiod, :season, :decade, :century, :calendar, :era
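A sketch of how such a normalization might be produced programmatically (our own illustration; note that real AMR makes the weekday a concept node, as above, while this simplified version emits flat attribute triples):

```python
from datetime import date

def date_entity_triples(var: str, d: date) -> list[tuple[str, str, str]]:
    """Emit simplified date-entity triples for a calendar date (illustrative)."""
    return [
        (var, ":instance", "date-entity"),
        (var, ":weekday", d.strftime("%A").lower()),  # e.g. 'tuesday'
        (var, ":day", str(d.day)),
        (var, ":month", str(d.month)),
    ]

# "Tuesday the 19th of May" (assuming the year 2020, when May 19 was a Tuesday):
print(date_entity_triples("d", date(2020, 5, 19)))
```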

Details – Specialized Normalizations
• AMR uses special abstract concepts for normalizable entities and quantities.
"five bucks"
(m / monetary-quantity
   :quant 5
   :unit (d / dollar))
"100° Celsius"
(t / temperature-quantity
   :quant 100
   :unit (d / degree)
   :scale (c / celsius))
monetary-quantity: :quant; :unit (dollar, euro, pound, yen, …)
temperature-quantity: :quant; :unit (degrees, kelvins, …); :scale (celsius, fahrenheit)
frequency-quantity: :quant (hertz, …)
etc.

Details – Specialized Normalizations
• And special abstract rolesets we can use for more complex normalizable entities.
"$2/taco Tuesdays"
(r / rate-entity-91
   :ARG1 (m / monetary-quantity
            :unit dollar
            :quant 2)
   :ARG2 (t / taco
            :quant 1)
   :ARG4 (d / date-entity
            :weekday (t2 / tuesday)))
rate-entity-91
  :ARG1 quantity (implied default 1)
  :ARG2 per quantity
  :ARG3 regular interval between events
  :ARG4 entity on which the recurring event happens

Details – Specialized Rolesets
• Other complex relations are also given special abstract rolesets:
  • ex: organizational/employment roles
"The US president"
(p / person
   :ARG0-of (h / have-org-role-91
               :ARG1 (c / country
                        :name (n / name :op1 "US")
                        :wiki "United_States")
               :ARG2 (p2 / president)))
have-org-role-91
  :ARG0 office-holder
  :ARG1 organization
  :ARG2 title of office held
  :ARG3 description of responsibility

Details – Specialized Predicates
• Reification: -91 rolesets:
"I am in Macau."
(b / be-located-at-91
   :ARG1 (i / i)
   :ARG2 (c / city
            :name (n / name :op1 "Macau")))
be-located-at-91 (reification of :location)
  :ARG1 entity
  :ARG2 location

Details – Reduction of Semantically-Light Matrix Verbs
Specific predicates are NOT used in AMR:
● English copula be:
  ● semantically light
  ● many languages don't use a copula
  ● replace with the relevant semantic relation, e.g. :domain = "is an attribute of" = "is a category of"
"The pizza is free."
(f / free-01
   :ARG1 (p / pizza))
"The house is a pit."
(p / pit
   :domain (h / house))

Details – Reduction of Semantically-Light Matrix Verbs
Specific predicates are NOT used in AMR:
● Light verb constructions:
  ● the semantically light verb is dropped
  ● the roleset for the heavy noun is used instead
"I took a walk in the park."
(w / walk-01
   :ARG0 (i2 / i)
   :location (p / park))

Details – Discourse Connectives and Coordination
• For two-place discourse connectives, we define abstract rolesets:
"We walked home even though it was raining."
(h / have-concession-91
   :ARG1 (w / walk-01
            :ARG0 (w2 / we)
            :destination (h2 / home))
   :ARG2 (r / rain-01))
have-concession-91
  :ARG1 main clause
  :ARG2 'although' clause
• For list-like discourse connectives, we use an abstract concept with any number of sequential :op roles:
"apples and bananas"
(a / and
   :op1 (a2 / apple)
   :op2 (b / banana))
and
  :op1 1st thing
  :op2 2nd thing
  :op3 3rd thing
  (etc.)

Meaning Representations for Natural Languages Tutorial Part 2A
Common Meaning Representations: AMR
Julia Bonn, Jeffrey Flanigan, Jan Hajič, Ishan Jindal, Yunyao Li, Nianwen Xue
Representation Roadmap
• AMR Format & Basics
• AMR: Some Details & Design Decisions
• Practice: Walking through a few AMRs
• Multi-sentence AMRs
• Relation to Other Formalisms

Practice – Let's Try some Sentences
• Feel free to annotate by hand (or ponder how you'd want to represent them)
• Edmund Pope tasted freedom today for the first time in more than eight months.

Practice – Let's Try some Sentences
"Edmund Pope tasted freedom today for the first time in more than eight months."

Step 1: the predicate. taste-01 (:ARG0 experiencer, :ARG1 stimulus):
(t / taste-01)

Step 2: "Edmund Pope tasted …" adds the named-entity ARG0:
(t / taste-01
   :ARG0 (p / person :wiki "Edmund_Pope"
            :name (n / name :op1 "Edmund" :op2 "Pope")))

Step 3: "… tasted freedom …". free-04 (:ARG1 free entity, :ARG2 free from what, :ARG3 free to do what); Pope himself is the free entity:
(t / taste-01
   :ARG0 (p / person :wiki "Edmund_Pope"
            :name (n / name :op1 "Edmund" :op2 "Pope"))
   :ARG1 (f / free-04
            :ARG1 p))

Step 4: "… today …" adds a temporal relation:
   :temporal (t2 / today)

Step 5: "… for the first time …" adds an ordinal entity:
   :ord (o / ordinal-entity :value 1)

Step 6: "… in more than eight months." sets the ordinal's range, giving the full AMR:
(t / taste-01
   :ARG0 (p / person :wiki "Edmund_Pope"
            :name (n / name :op1 "Edmund" :op2 "Pope"))
   :ARG1 (f / free-04
            :ARG1 p)
   :temporal (t2 / today)
   :ord (o / ordinal-entity :value 1
           :range (m / more-than
                     :op1 (t3 / temporal-quantity :quant 8
                              :unit (m2 / month)))))

Meaning Representations for Natural Languages Tutorial Part 2A
Common Meaning Representations: AMR
Julia Bonn, Jeffrey Flanigan, Jan Hajič, Ishan Jindal, Yunyao Li, Nianwen Xue
Representation Roadmap
• AMR Format & Basics
• AMR: Some Details & Design Decisions
• Practice: Walking through a few AMRs
• Multi-sentence AMRs
• Relation to Other Formalisms

A final component in AMR: Multi-sentence!
• The AMR 3.0 release contains Multi-sentence AMR annotations
• Document-level coreference:
  • connecting mentions that co-refer
  • connecting some partial coreference (bridging)
  • making cross-sentence implicit semantic roles explicit
• John took his car to the store.
• He bought milk [from the store].
• He put it in the trunk.

A final component in AMR: Multi-sentence!
Coreference annotation:
• Annotations track relations between AMR variables, not raw text
1. "John took his car to the store."
(s1t / take-01
   :ARG0 (s1p / person :name (n / name :op1 "John"))
   :ARG1 (s1c / car :poss s1p)
   :ARG3 (s1s / store))
2. "He bought milk."
(s2b / buy-01
   :ARG0 (s2h / he)
   :ARG1 (s2m / milk))
identity chain 'John': s1p, s2h
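A minimal sketch of what such an identity chain looks like as data (our own illustration, assuming sentence-prefixed variables as in AMR 3.0's multi-sentence annotation):

```python
# Identity chains map a mention label to the set of co-referring AMR
# variables across sentence-prefixed graphs (s1..., s2..., ...).
identity_chains: dict[str, set[str]] = {
    "John": {"s1p", "s2h"},  # 'John' in sentence 1, 'he' in sentence 2
}

def corefers(v1: str, v2: str) -> bool:
    """True if two AMR variables belong to the same identity chain."""
    return any({v1, v2} <= chain for chain in identity_chains.values())

assert corefers("s1p", "s2h")
```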

A final component in AMR: Multi-sentence!
Partial coreference (bridging) annotation:
• Annotations track relations between AMR variables, not raw text
1. "John took his car to the store."
(s1t / take-01
   :ARG0 (s1p / person :name (n / name :op1 "John"))
   :ARG1 (s1c / car :poss s1p)
   :ARG3 (s1s / store))
3. "He put it in the trunk."
(s3p / put-01
   :ARG0 (s3h / he)
   :ARG1 (s3i2 / it)
   :ARG2 (s3t / trunk))
whole entity: s1c "car"; parts: s3t "trunk"

A final component in AMR: Multi-sentence!
Implicit roles:
• After sentence-level annotation, unused numbered arguments are added back into the graphs
• Available for coreference annotation
1. "John took his car to the store [from his house]."
(s1t / take-01
   :ARG0 (s1p / person :name (n / name :op1 "John"))
   :ARG1 (s1c / car :poss s1p)
   :ARG2 [s1x / implicit :op1 "taken from, start point"]
   :ARG3 (s1s / store))
2. "He bought milk [from the store]."
(s2b / buy-01
   :ARG0 (s2h / he)
   :ARG1 (s2m / milk)
   :ARG2 [s2x / implicit :op1 "seller"])
identity chain 'the store': s1s, s2x

A final component in AMR: Multi-sentence!
Implicit roles:
• Worth considering for meaning representation, especially for languages other than English
• Null-subject (and sometimes null-object) constructions are cross-linguistically very common and can carry lots of information
• Arguments of nominalizations can carry a lot of assumed information in scientific domains

Special Note on Special-Domain AMR Extensions
• Spatial AMR (Bonn et al., 2020):
  • fine-grained, multimodal extension of AMR for grounded corpora
  • annotates frame of reference
  • Minecraft Dialogue Corpus
  • used for downstream human-robot interaction applications
• THYME colon cancer medical corpus (Wright-Bettner et al., 2019):
  • fine-grained cross-document temporal relations
  • greatly expanded medical PropBank lexicon
  • handling of complex multi-word expressions
Multi-sentence, implicit annotation is vitally important in these special domains!

Acknowledgements
• We gratefully acknowledge the support of the National Science Foundation grants for VerbNet, Semantic Parsing, Word Sense Disambiguation, Richer Representations for Machine Translation, and Uniform Meaning Representations; the NSA for Proposition Banks (English and Chinese); ARO for Symbolic Resources for MT; Lockheed Martin for Verb Classes; DARPA-GALE via a subcontract from BBN; DARPA-BOLT & DEFT via subcontracts from LDC; DARPA CwC via UIUC; DARPA AIDA; DARPA KAIROS via RPI; and NIH THYME I, II and III.
• Many thanks to the 2014 JHU Summer Workshop in Prague and our CL-AMR colleagues, and to all the students, postdocs and colleagues.
• Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation, the NSA, ARO, DARPA or NIH.

References
• Albright, Daniel, Arrick Lanfranchi, Anwen Fredriksen, William Styler, Collin Warner, Jena Hwang, Jinho Choi, Dmitriy Dligach, Rodney Nielsen, James Martin, Wayne Ward, Martha Palmer, and Guergana Savova. 2013. Towards syntactic and semantic annotations of the clinical narrative. Journal of the American Medical Informatics Association, 0:1-9. doi:10.1136/amiajnl-2012-001317
• Baker, Collin F., Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In Proceedings of COLING/ACL-98, pages 86-90, Montreal.
• Baker, Collin F. and Josef Ruppenhofer. 2002. FrameNet's Frames vs. Levin's Verb Classes. In Proceedings of the 28th Annual Meeting of the Berkeley Linguistics Society.
• Bhatia, Archna, Rajesh Bhatt, Bhuvana Narasimhan, Martha Palmer, Owen Rambow, Dipti Misra Sharma, Michael Tepper, Ashwini Vaidya, and Fei Xia. 2010. Empty Categories in a Hindi Treebank. In Proceedings of the 7th International Conference on Language Resources and Evaluation (LREC'10), Valletta, Malta.
• Bonial, Claire, Susan Windisch Brown, Jena D. Hwang, Christopher Parisien, Martha Palmer, and Suzanne Stevenson. 2011. Incorporating Coercive Constructions into a Verb Lexicon. In the RELMS Workshop, held in conjunction with the Association for Computational Linguistics Meeting, Portland, Oregon.
• Bonn, Julia, Martha Palmer, Jon Cai, and Kristin Wright-Bettner. 2020. Spatial AMR: Expanded spatial annotation in the context of a grounded Minecraft corpus. In Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020).
• Carreras, Xavier and Lluís Màrquez. 2004. Introduction to the CoNLL-2004 Shared Task: Semantic Role Labeling. In Proceedings of the Eighth Conference on Computational Natural Language Learning (CoNLL-04), pages 89-97.
• Carreras, Xavier and Lluís Màrquez. 2005. Introduction to the CoNLL-2005 Shared Task: Semantic Role Labeling. In Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-05).
• Chen, John and Owen Rambow. 2003. Use of Deep Linguistic Features for the Recognition and Labeling of Semantic Arguments. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing (EMNLP-03), pages 41-48.

References (cont.)
• Choi, Jinho D., Claire Bonial, and Martha Palmer. 2010. Multilingual PropBank Annotation Tools: Cornerstone and Jubilee. In Proceedings of NAACL-HLT'10: Demos, pp. 13-16, Los Angeles, CA.
• Cruse, D. A. (Ed.). 1973. Lexical Semantics. Cambridge University Press, Cambridge, England.
• Dang, Hoa Trang, Karin Kipper, Martha Palmer, and Joseph Rosenzweig. 1998. Investigating Regular Sense Extensions Based on Intersective Levin Classes. In Proceedings of the 17th International Conference on Computational Linguistics (COLING/ACL-98), pages 293-299, Montreal. ACL.
• Dowty, David. 2003. The Dual Analysis of Adjuncts and Complements in Categorial Grammar. In Ewald Lang, Claudia Maienborn, and Catherine Fabricius-Hansen (Eds.), Modifying Adjuncts. de Gruyter, Berlin/New York, pages 1-22.
• Dowty, David R. 1991. Thematic Proto-Roles and Argument Selection. Language, 67(3):547-619.
• Ellsworth, Michael, Katrin Erk, Paul Kingsbury, and Sebastian Pado. 2004. PropBank, Salsa, and FrameNet: How Design Determines Product. In LREC 2004 Workshop on Building Lexical Resources from Semantically Annotated Corpora, Lisbon, Portugal.
• Fillmore, Charles J. 1968. The Case for Case. In Emmon W. Bach and Robert T. Harms (Eds.), Universals in Linguistic Theory. Holt, Rinehart & Winston, New York, pages 1-88.
• Fillmore, Charles J. and Collin F. Baker. 2001. Frame semantics for text understanding. In Proceedings of the NAACL WordNet and Other Lexical Resources Workshop, Pittsburgh, June.
• Fillmore, Charles J., Christopher R. Johnson, and Miriam R.L. Petruck. 2002. Background to FrameNet. International Journal of Lexicography, 16(3):235-250.
• Gildea, Daniel and Daniel Jurafsky. 2002. Automatic Labeling for Semantic Roles. Computational Linguistics, 28(3):245-288.
• Giuglea, Ana-Maria and Alessandro Moschitti. 2006. Semantic Role Labeling Via FrameNet, VerbNet and PropBank. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics (COLING/ACL-06), pages 929-936, Sydney, Australia.

References (cont.)
• Gordon, Andrew and Reid Swanson. 2007. Generalizing Semantic Role Annotations Across Syntactically Similar Verbs. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL-07).
• Hwang, Jena D., Rodney D. Nielsen, and Martha Palmer. 2010. Towards a Domain Independent Semantics: Enhancing Semantic Representation with Construction Grammar. In Proceedings of the Extracting and Using Constructions in Computational Linguistics Workshop, held in conjunction with NAACL-HLT 2010, LA, CA.
• Hwang, Jena, Archna Bhatia, Claire Bonial, Aous Mansouri, Ashwini Vaidya, Nianwen Xue, and Martha Palmer. 2010. PropBank Annotation of Multilingual Light Verb Constructions. In Proceedings of the Linguistic Annotation Workshop, held in conjunction with ACL-2010, Uppsala, Sweden.
• Jackendoff, Ray. 1972. Semantic Interpretation in Generative Grammar. MIT Press, Cambridge, Massachusetts.
• Kipper, Karin, Hoa Trang Dang, and Martha Palmer. 2000. Class-Based Construction of a Verb Lexicon. In Proceedings of the Seventeenth National Conference on Artificial Intelligence (AAAI-00), Austin, TX, July-August.
• Kipper, Karin, Anna Korhonen, Neville Ryant, and Martha Palmer. 2008. A Large-Scale Classification of English Verbs. Language Resources and Evaluation Journal, 42(1):21-40.
• Kipper Schuler, Karin. 2005. VerbNet: A Broad-Coverage, Comprehensive Verb Lexicon. Ph.D. thesis, University of Pennsylvania.
• Korhonen, Anna and Ted Briscoe. 2004. Extended Lexical-Semantic Classification of English Verbs. In Proceedings of the HLT/NAACL Workshop on Computational Lexical Semantics, Boston, Mass. ACL.
• Levin, Beth. 1993. English Verb Classes And Alternations: A Preliminary Investigation. University of Chicago Press, Chicago.
• Litkowski, Ken. 2004. Senseval-3 task: Automatic Labeling of Semantic Roles. In Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text (Senseval-3), pages 9-12, Barcelona, Spain, July.

References (cont.)
• Loper, Edward, Szu-ting Yi, and Martha Palmer. 2007. Combining Lexical Resources: Mapping Between PropBank and VerbNet. In Proceedings of the 7th International Workshop on Computational Semantics, Tilburg, the Netherlands.
• Merlo, Paola and Lonneke van der Plas. 2009. Abstraction and Generalisation in Semantic Role Labels: PropBank, VerbNet or both? In Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics (ACL-09).
• Meyers, A., R. Reeves, C. Macleod, R. Szekely, V. Zielinska, B. Young, and R. Grishman. 2004. Annotating Noun Argument Structure for NomBank. In Proceedings of the Language Resources and Evaluation Conference (LREC-04), Lisbon, Portugal.
• Palmer, Martha, Daniel Gildea, and Nianwen Xue. 2010. Semantic Role Labeling. Synthesis Lectures on Human Language Technology Series, ed. Graeme Hirst. Morgan & Claypool. ISBN: 9781598298321.
• Palmer, Martha, Rajesh Bhatt, Bhuvana Narasimhan, Owen Rambow, Dipti Misra Sharma, and Fei Xia. 2009. Hindi Syntax: Annotating Dependency, Lexical Predicate-Argument Structure, and Phrase Structure. In Proceedings of the 7th International Conference on Natural Language Processing (ICON-2009), Hyderabad, India.
• Palmer, Martha, Ann Bies, Olga Babko-Malaya, Mona Diab, Mohamed Maamouri, Aous Mansouri, and Wajdi Zaghouani. 2008. A Pilot Arabic PropBank. In Proceedings of the Language Resources and Evaluation Conference (LREC-08), Marrakech, Morocco.
• Palmer, Martha, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An Annotated Corpus of Semantic Roles. Computational Linguistics, 31(1):71-106.
• Palmer, Martha, Jena D. Hwang, Susan Windisch Brown, Karin Kipper Schuler, and Arrick Lanfranchi. 2009. Leveraging Lexical Resources for the Detection of Event Relations. In AAAI Spring Symposium on Learning by Reading and Learning to Read, Stanford, CA.
• Palmer, Martha, Shijong Ryu, Jinyoung Choi, Sinwon Yoon, and Yeongmi Jeon. 2006. Korean PropBank. OLAC Record oai:www.ldc.upenn.edu:LDC2006T03
• Pradhan, Sameer, Eduard Hovy, Mitch Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2007. OntoNotes: A Unified Relational Semantic Representation. International Journal of Semantic Computing, Vol. 1, No. 4, pp. 405-419.

References (cont.)
• Rambow, Owen, Bonnie Dorr, Karin Kipper, Ivona Kucerova, and Martha Palmer. 2003. Automatically Deriving Tectogrammatical Labels From Other Resources: A Comparison of Semantic Labels From Other Resources. In Prague Bulletin of Mathematical Linguistics, volume 79-80, pages 23-35.
• Shi, L. and R. Mihalcea. 2005. Putting Pieces Together: Combining FrameNet, VerbNet and WordNet for Robust Semantic Parsing. In Proceedings of the 6th International Conference on Intelligent Text Processing and Computational Linguistics (CICLing), pages 100-111, Mexico City, Mexico.
• Surdeanu, Mihai, Richard Johansson, Adam Meyers, Lluís Màrquez, and Joakim Nivre. 2008. The CoNLL 2008 Shared Task on Joint Parsing of Syntactic and Semantic Dependencies. In Proceedings of the Twelfth Conference on Computational Natural Language Learning (CoNLL-08), pages 159-177.
• Taulé, Mariona, M. A. Martí, and Marta Recasens. 2008. AnCora: Multilevel Annotated Corpora for Catalan and Spanish. In Proceedings of the Language Resources and Evaluation Conference (LREC-08), Marrakech, Morocco.
• Vaidya, Ashwini, Jinho D. Choi, Martha Palmer, and Bhuvana Narasimhan. 2012. Empty Argument Insertion in the Hindi PropBank. In Proceedings of LREC-2012, Istanbul, Turkey.
• Wright-Bettner, Kristin, Martha Palmer, Guergana K. Savova, Piet C. de Groen, and Timothy Miller. 2019. Cross-document coreference: An approach to capturing coreference without context. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.
• Xue, Nianwen. 2008. Labeling Chinese Predicates with Semantic Roles. Computational Linguistics, 34(2):225-255.
• Xue, Nianwen and Martha Palmer. 2009. Adding Semantic Roles to the Chinese TreeBank. Natural Language Engineering, 15(1):143-172.
• Yi, Szu-Ting, Edward Loper, and Martha Palmer. 2007. Can Semantic Roles Generalize Across Genres? In Proceedings of HLT/NAACL-07.
• Zapirain, Beñat, Eneko Agirre, Lluís Màrquez, and Mihai Surdeanu. 2013. Selectional preferences for semantic role classification. Computational Linguistics.

Meaning Representations for Natural Languages Tutorial Part 2
Common Meaning Representations
Jan Hajič, Charles Univ., Prague
Representation Roadmap
• Format & Basics
• Some Details & Design Decisions
• Practice: Walking through a few AMRs
• Multi-sentence AMRs
• Relation to Other Formalisms
• UMRs
• Open Questions in Representation

Comparison to Other Frameworks
• Meaning representations vary along many dimensions:
  • How meaning is connected to text: anchoring, alignment, multi-layer vs. text-span only
  • Relationship to logical and/or executable form
  • Mapping to lexicons/ontologies: general vs. task-oriented
  • Relationship to discourse and discourse-like phenomena
• We'll overview these now

Compositionality, Alignment to Text (1)
Oepen & Kuhlmann (2016): "flavors" of meaning representations
• Type 0: Bilexical. Nodes each correspond to one token (dependency parsing). Examples: Universal Dependencies; MRS-connected frameworks (DM, EDS); Prague Dependency Treebanks Analytical (Surface Dependency) Layer.
• Type 1: Anchored. Nodes are aligned to text (can be subtoken or multi-token). Examples: UCCA; DRS-based frameworks (PMB/GMB); Prague Tectogrammatical (Semantic) Layer.
• Type 2: Unanchored. No mapping from graph to surface form. Examples: AMR; some executable/task-specific semantic parsing frameworks.
• Historical approach to meaning representations: represent "context-free semantics", as defined by a particular grammar model
• AMR at the other extreme: an AMR graph is annotated for a single sentence, but with no individual mapping from tokens to nodes

Compositionality, Alignment to Text (2)
• Less thoroughly defined: adherence to grammar/compositionality
  • Emily M. Bender, Dan Flickinger, Stephan Oepen, Woodley Packard, and Ann Copestake. 2015. Layers of Interpretation: On Grammar and Compositionality. In Proceedings of the 11th International Conference on Computational Semantics, pages 239-249, London, UK. Association for Computational Linguistics.
• Some frameworks (MRS/DRS below) have particular assertions about how a given meaning representation was derived, tied to a particular grammar
• AMR encodes many useful things that are often NOT considered compositional: named entity typing, cross-sentence coreference, word senses, etc.
The spectrum runs from "sentence meaning" to extragrammatical inference:
• only encode "compositional" meanings predicted by a particular theory of grammar
• some useful pragmatic inference (e.g. sense distinctions, named entity types)
• any wild inferences needed for the task

Compositionality, Alignment to Text – UCCA (1)
● Universal Conceptual Cognitive Annotation:
  ● Dixon's BLT*-based coarse-grained semantics across languages
● Core notions:
  ● "Scene" ~ BLT's "semantic clause", predicate + arg/adj structure
  ● "Unit" ~ abstract concept ([unlabelled] node in the representation graph)
  ● (Coarse-grained) labelled edges/relations between the Units
  ● Single capitalized letters (a signature property of UCCA)
● Similar to a cross between dependency and constituency parses
  ● sometimes very syntactic
● Introduced in 2013 by:
  ● Omri Abend and Ari Rappoport. 2013. Universal Conceptual Cognitive Annotation (UCCA). In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 228-238, Sofia, Bulgaria. Association for Computational Linguistics.
* http://www.glottopedia.org/index.php/Basic_Linguistic_Theory

Compositionality, Alignment to Text – UCCA (2)
• Coarse-grained roles (only 17 labels), e.g.:
  • P: Process
  • S: State
  • A: Participant
  • C: Center
  • D: Adverbial
  • E: Elaborator
  • F: Function
• "Anchored" graphs (Type 1) in the Oepen & Kuhlmann taxonomy (somewhat compositional, but no formal rules for how a given node is derived)

Compositionality, Alignment to Text – Prague PDT (1)
Prague Dependency Treebanks
• Based on Functional Generative Description (dependency theory)
  • Petr Sgall, Eva Hajičová and Jarmila Panevová. 1986. The Meaning of the Sentence in its Semantic and Pragmatic Aspects. Dordrecht: Reidel. Pp. xi + 353.
• Used in the Prague Dependency Treebank family of corpora
  • Czech, English, Arabic; published/extended 2001-2023
• 3 layers of annotation:
  • Tectogrammatical, or "meaning": syntactic-semantic annotation, somewhat similar to the F-layer of LFG
  • Analytical: surface syntactic (dependency syntax)
  • Morphology, lemmatization, tokenization
  • For spoken corpora, also audio
• Fully aligned between layers (Type 1)
[Figure: tectogrammatical tree for the Czech version of "An earthquake struck Northern California, killing more than 50 people." (Čmejrek et al. 2004)]

Compositionality, Alignment to Text – Prague PDT (2)
The Meaning (Tectogrammatical) layer:
• Nodes: lexically based (and aligned to surface text)
  • Only content nodes (and some structural ones holding the graph together); no function words; null nodes (for ellipsis)
  • Many semantic attributes (tense, number, modalities, …)
  • Information structure by topic/focus labels and node order
• Edges:
  • Primary: dependency, labeled by (mostly) semantic relations and/or valency lexicon arguments
  • Secondary: co-reference (including cross-sentence) and bridging; discourse relations between clauses (incl. cross-sentence)
• Many aspects similar to AMR/UMR
  • AMR annotation for Czech exists, in parallel to PDT style
  • UMR annotation in progress (by conversion + corrections)
[Figure, lit.: "[He] worked as an engineer and [he] liked the work."]

Compositionality, Alignment to Text – Prague PDT (3)
Alignment to surface dependencies and words:
• Aligned to the syntactic dependency graph layer ("Type 1")
  • m:n, incl. m=0 or n=0
• Each node is aligned to the surface syntactic graph nodes corresponding to:
  • the lexical (content) word
  • auxiliary (function) words (if any)
  • graphical symbols (if any and if relevant)
[Figure callouts: underlying verb + tense; deep function; elided Actor in; prepositions out; another ellipsis… (TR: sublayer 1 only shown)]

Compositionality, Alignment to Text – Prague PDT (4)
Information structure (topic-focus annotation):
• Example: "Baker bakes rolls." vs. "Baker_IC bakes rolls."
  • First: context is talking about bakers, adding that it is rolls they bake
  • Second: context is talking about rolls, adding that it is bakers who make them

Compositionality, Alignment to Text – Prague PDT (5)
Multilingual PDT-style annotation:
• Prague Czech-English Dependency Treebank
  • Parallel Czech-English treebank to compare differences (Czech translation of English text, 1 mil. words)
  • Simplified annotation on the tectogrammatical layer
  • Aligned with the (manual) Penn Treebank annotation
[Figure: (almost) 1:1 alignment]
  EN: "According to his opinion UAL's executives were misinformed about the financing of the original transaction."
  CS: "Podle jeho názoru bylo vedení UAL o financování původní transakce nesprávně informováno."

Logical & Executable Forms
• Lots of logical desiderata:
  • Modeling whether events happen and/or are believed (and other modality questions):
    "Sam believes that Bill didn't eat the plums."
  • Understanding quantification: reference to one song or many?
    "Every child has a favorite song."
• PDT (Prague tectogrammatical layer):
  • Scoping of negation within the information-structure annotation (schematically only):
    "We did not visit grandma(topic) [Neg.RHEM] on Friday(focus)" (but on Thursday), vs.
    "We did not [Neg.RHEM] visit grandma(focus) on Friday" (but our aunt on Saturday)
• AMR: (with certain assumptions) PENMAN is a bracketed tree that can be treated like a logical form
  • Default assumption for AMR: ":polarity -" is a feature of a single node; there are no semantics for quantifiers like "every". The assumption is Neo-Davidsonian: a bag of triples like instance-of(b, believe-01), instance-of(h, he), ARG0(b, h)
  • One cannot modify more than one node in the graph
• Competing frameworks like DRS and MRS are more specialized for this
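To make the quantification point concrete, here is a worked rendering (standard first-order readings, not from the slides) of the two scopes of "Every child has a favorite song":

```latex
% Narrow scope for "a favorite song": possibly a different song per child
\forall x\,\bigl(\mathit{child}(x) \rightarrow \exists y\,(\mathit{song}(y) \land \mathit{favorite}(y, x))\bigr)

% Wide scope: a single song shared by every child
\exists y\,\bigl(\mathit{song}(y) \land \forall x\,(\mathit{child}(x) \rightarrow \mathit{favorite}(y, x))\bigr)
```

A plain AMR graph does not distinguish these two readings; scoped frameworks (DRS, MRS) and UMR's scope annotation do.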

Logical & Executable Forms – DRS (1)
• Grounded in a long theoretical DRS tradition (Heim & Kamp) for handling discourse referents, presuppositions, discourse connectives, temporal relations across sentences, etc.
  • Kamp, H. 1981. "A theory of truth and semantic representation", in J.A.G. Groenendijk, T.M.V. Janssen, and M.B.J. Stokhof (eds), Formal Methods in the Study of Language, Mathematical Centre Tracts 135, Amsterdam: Mathematisch Centrum, pp. 277-322.
[Figure: DRS for "everyone was killed" (Liu et al. 2021)]

Logical & Executable Forms – DRS (2)
• DRS frameworks
  • Scoped meaning representation
  • Outputs originally modified from CCG parser LF outputs -> DRS
  • DRS uses "boxes" which can be negated, asserted, believed in, …
  • NOT natively a graph representation!
  • "box variables" (bottom of the figure): one way of thinking about these
    • a triple like agent(e1, x1) is part of b3
    • box b3 is modified (e.g. b2 POS b3)
• Annotations in the Groningen Meaning Bank and the Parallel Meaning Bank
  • Lasha Abzianidze, Johannes Bjerva, Kilian Evang, Hessel Haagsma, Rik van Noord, Pierre Ludmann, Duc-Duy Nguyen, and Johan Bos. 2017. The Parallel Meaning Bank: Towards a Multilingual Corpus of Translations Annotated with Compositional Meaning Representations. In Proceedings of the 15th EACL, pp. 242-247, Valencia, Spain.

Logical & Executable Forms – MRS (1)
• Minimal Recursion Semantics (and related frameworks)
  • Copestake, A., Flickinger, D. P., Sag, I. A., & Pollard, C. 2005. Minimal Recursion Semantics: An Introduction. Research on Language and Computation, 3:281-332.
• Defines a set of constraints over which variables outscope other variables
• Copestake (1997) model proposed for the semantics of HPSG; this is connected to other underspecification solutions (Glue semantics / hole semantics / etc.)
  • Asudeh, Ash & Crouch, Richard. 2002. Glue semantics for HPSG. In Proceedings of the International Conference on Head-Driven Phrase Structure Grammar. doi:10.21248/hpsg.2001.1
• HPSG grammars like the English Resource Grammar
  • Ann Copestake and Dan Flickinger. 2000. An Open Source Grammar Development Environment and Broad-coverage English Grammar Using HPSG. In Proceedings of the Second International Conference on Language Resources and Evaluation (LREC'00), Athens, Greece. European Language Resources Association (ELRA).
  • produce ERS (English Resource Semantics) outputs (which are roughly MRS)
  • also modified into a simplified DM format ("Type 0" bilexical dependency)

Logical & Executable Forms – MRS (2)
• Underspecification in practice:
  • An MRS can be thought of as many fragments with constraints on how they scope together
  • Those define a set of MANY possible combinations into a fully scoped output, e.g.:
"Every dog barks and chases a white cat" (as interpreted in Manshadi et al. 2017)

Logical & Executable Forms – MRS (3)
• Variables starting with h are "handle" variables used to define constraints on scope.
  • h19 = things under the scope of negation
  • h21 = leave_v_1 head
  • h19 =q h21: equality modulo quantifiers (neg outscopes leave)
• A "forest" of possible readings
• Takeaway: constraints on which variables "outscope" others can add flexible amounts of scope information

Lexicon/Ontology Differences
• Predicates can use different ontologies: e.g. more grounded in grammar/valency, more tied to taxonomies like WordNet, or a combination (SynSemClass)
• Semantic roles can be encoded differently, e.g. with non-lexicalized semantic roles (discussed for UMR later)
• Some additional proposals: "BabelNet Meaning Representation" proposes using VerbAtlas (clusters over WordNet senses with VerbNet semantic role templates)
  • R. Navigli, M. Bevilacqua, S. Conia, D. Montagnini and F. Cecconi. 2021. Ten Years of BabelNet: A Survey. In Proc. of IJCAI 2021, pp. 4559-4567.
• SynSemClass: an event-type multilingual ontology
  • Z. Urešová, E. Fučíková, E. Hajičová, J. Hajič. 2020. SynSemClass Linked Lexicon: Mapping Synonymy between Languages. In Proceedings of the 2020 Globalex Workshop on Linked Lexicography (LREC 2020), pp. 10-19, Marseille, France. ISBN 979-10-95546-46-7.
Comparison of inventories:
• Semantic roles: DRS (GMB/PMB): VerbNet (general roles); MRS: general roles; Prague (PDT, PCEDT, PDTSC, …): general roles + valency lexicon [SynSemClass upcoming]; AMR: lexicalized numbered arguments; UCCA: fixed general roles
• Predicates: DRS: WordNet; MRS: grammatical entries; Prague: PDT-Vallex valency lexicon (PropBank-like) + [SynSemClass upcoming]; AMR: PropBank predicates; UCCA: a few types (State vs. Process, …)
• Non-predicates: DRS: WordNet; MRS: lemmas; Prague: lemmas; AMR: named entity types; UCCA: lemmas

Task-specific Representations (1)
• Many use "semantic parsing" to refer to task-specific, executable representations
  • Text-to-SQL (long history, since the 1990s)
  • Air traffic information systems (ATIS; IBM and others')
  • Interaction with robots, text to code/commands (from T. Winograd's blocks system, 1970s)
  • Interaction with deterministic systems like calendars/travel planners
• Similar distinctions to a general-purpose meaning representation, BUT:
  • May need to map into specific task taxonomies and ignore content not relevant to the task (good and bad)
  • Can require more detail or implicit inference (vs. "context-free" representations) (good and bad)
• Often can be thought of as first-order logic forms: simple predicates + scope
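As a tiny illustration of "executable representation" (our own sketch over a hypothetical ATIS-like schema, not from the tutorial), the semantic parse of a question can simply be the query that answers it:

```python
# A task-specific "meaning representation" is just an executable form.
# The flights table and its origin/departure_time columns are hypothetical.
question = "Which flights leave Boston before 10 am?"
semantic_parse = """
    SELECT flight_id
    FROM flights
    WHERE origin = 'BOS' AND departure_time < '10:00'
"""
# Executing the parse against a database yields the answer directly;
# content irrelevant to the task (tense, politeness, ...) is simply ignored.
```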

Task-specific Representations (2)
• Classic datasets (table in Dong & Lapata 2016) concern household commands or querying knowledge bases
• Recent tasks focus on text-to-SQL

Discourse-Level Annotation
• Do you do multi-sentence coreference?
• Partial coreference (set-subset, implicit roles, etc.)?
• Discourse connectives?
• Treatment of multi-sentence tense, modality, etc.?
• The Prague tectogrammatical annotations and AMR are the only general-purpose representations with extensive multi-sentence annotations

Overviewing Frameworks vs. AMR
• DRS (Groningen/Parallel): Alignment: compositional/anchored. Logical scoping & interpretation: scoped representation (boxes). Ontologies and task-specific: rich predicates (WordNet), general roles. Discourse-level: can handle referents, connectives.
• MRS: Alignment: compositional/anchored. Scoping: underspecified scoped representation. Ontologies: simple predicates, general roles. Discourse-level: n/a.
• UCCA: Alignment: anchored. Scoping: not really scoped. Ontologies: simple predicates, general roles. Discourse-level: some implicit roles.
• Prague Tectogrammatical Representation Layer: Alignment: anchored. Scoping: not really scoped, with exceptions (negation). Ontologies: rich predicates, semi-lexicalized roles. Discourse-level: rich multi-sentence coreference, discourse relations.
• AMR: Alignment: unanchored (English); anchored (Chinese). Scoping: not really scoped yet. Ontologies: rich predicates, lexicalized roles. Discourse-level: rich multi-sentence coreference.

End of Meaning Representation Comparison
• What's next: UMR, a substantial evolution of AMR.
• Questions about how AMR is annotated?
• Questions about how it relates to other meaning representation formalisms?

Meaning Representations for Natural Languages Tutorial Part 2
Common Meaning Representations
Julia Bonn, Jeffrey Flanigan, Jan Hajič, Ishan Jindal, and Nianwen Xue

Meaning Representations for Natural Languages Tutorial Part 2
Common Meaning Representations
Representation Roadmap
• AMR Format & Basics
• Some Details & Design Decisions
• Practice: Walking through a few AMRs
• Multi-sentence AMRs
• Relation to Other Formalisms
• UMR
• Open Questions in Representation

Outline
► Background
  ► Do we need a new meaning representation? What's wrong with existing meaning representations?
► Aspects of Uniform Meaning Representation (UMR)
  ► UMR starts with AMR but makes a number of enrichments
  ► UMR is a document-level meaning representation that represents temporal dependencies, modal dependencies, and coreference
  ► UMR is a cross-lingual meaning representation that separates aspects of meaning that are shared across languages (language-independent) from those that are idiosyncratic to individual languages (language-specific)
► UMR-Writer: a tool for annotating UMRs

Why aren't existing meaning representations sufficient?
► Existing meaning representations vary a great deal in their focus and perspective
  ► Formal semantic representations aimed at supporting logical inference focus on the proper representation of quantification, negation, tense, and modality (e.g., Minimal Recursion Semantics (MRS) and Discourse Representation Theory (DRT)).
  ► Lexical semantic representations focus on the proper representation of core predicate-argument structures, word sense, named entities and relations between them, and coreference (e.g., Tectogrammatical Representation (TR), AMR).
► The semantic ontology they use also differs a great deal. For example, MRS doesn't have a classification of named entities at all, while AMR has over 100 types of named entities.

UMR uses AMR as a starting point
► Our starting point is AMR, which has a number of attractive properties:
  ► easy to read,
  ► scalable (can be directly annotated without relying on syntactic structures),
  ► has information that is important to downstream applications (e.g., semantic roles, named entities, coreference),
  ► represented in a well-defined mathematical structure (a single-rooted, directed, acyclic graph)
► Our general strategy is to augment AMR with meaning components that are missing and adapt it to cross-lingual settings

Participants of the UMR project
► UMR stands for Uniform Meaning Representation; it is an NSF-funded collaborative project between Brandeis University, the University of Colorado, and the University of New Mexico, with a number of partners outside these institutions

From AMR to UMR (Gysel et al. 2021)
► At the sentence level, UMR adds:
  ► an aspect attribute to eventive concepts
  ► person and number attributes for pronouns and other nominal expressions
  ► quantification scope between quantified expressions
► At the document level, UMR adds:
  ► temporal dependencies in lieu of tense
  ► modal dependencies in lieu of modality
  ► coreference relations beyond sentence boundaries
► To make UMR cross-linguistically applicable, UMR:
  ► defines a set of language-independent abstract concepts and participant roles,
  ► uses lattices to accommodate linguistic variability,
  ► designs specifications for complicated mappings between words and UMR concepts.

UMR sentence-level additions
► An aspect attribute for event concepts
  ► Aspect refers to the internal constituency of events: their temporal and qualitative boundedness
► Person and number attributes for pronouns and other nominal expressions
► A set of concepts and relations for discourse relations between clauses
► Quantification scope between quantified expressions, to facilitate translation of UMR to logical expressions

UMR attribute: aspect
[Figure: the UMR aspect lattice, from coarse-grained values (Habitual, Imperfective, Perfective, State, Atelic Process, Process, Activity, Endeavor, Performance) down to fine-grained values (Reversible State, Irreversible State, Inherent State, Point State, Undirected Activity, Directed Activity, Semelfactive, Undirected Endeavor, Directed Endeavor, Incremental Accomplishment, Nonincremental Accomplishment, Directed Achievement, Reversible/Irreversible).]

UMR attribute: coarse-grained aspect
►State: unspecified type of state
►Habitual: an event that occurs regularly in the past or
present, including generic statements
►Activity: an event that has not necessarily ended and may be
ongoing at Document Creation Time (DCT).
►Endeavor: a process that ends without reaching completion
(i.e., termination)
►Performance: a process that reaches a completed result
state

Coarse-grained Aspect as a UMR attribute
He wants to travel to Albuquerque.
(w / want
   :aspect State)
She rides her bike to work.
(r / ride
   :aspect Habitual)
He was writing his paper yesterday.
(w / write
   :aspect Activity)
Mary mowed the lawn for thirty minutes.
(m / mow
   :aspect Endeavor)

Fine-grained Aspect as a UMR attribute
My cat is hungry.
(h / have-mod-91
   :aspect Reversible State)
The wine glass is shattered.
(h / have-mod-91
   :aspect Irreversible State)
My cat is black and white.
(h / have-mod-91
   :aspect Inherent State)
It is 2:30pm.
(h / have-mod-91
   :aspect Point State)

AMR vs UMR on how pronouns are represented
►In AMR, pronouns are treated as unanalyzable concepts
►However, pronouns differ from language to language, so UMR
decomposes them into person and number attributes
►These attributes can be applied to nominal expressions too
AMR:
(s / see-01
   :ARG0 (h / he)
   :ARG1 (b / bird
      :mod (r / rare)))
UMR:
(s / see-01
   :ARG0 (p / person
      :ref-person 3rd
      :ref-number Singular)
   :ARG1 (b / bird
      :mod (r / rare)
      :ref-number Plural))
"He saw rare birds today."

UMR attributes: person
[Figure: the UMR person lattice. Person splits into Non-third (First, Second) and Non-first (Second, Third), with First further divided into Inclusive and Exclusive.]

UMR attributes: number
[Figure: the UMR number lattice. Number splits into Singular and Non-singular; Non-singular covers Dual, Trial, Paucal, Plural, and Greater Plural, with intermediate values such as Non-dual and Non-trial Paucal.]

Discourse relations in UMR
►In AMR, there is a minimal system for indicating relationships between clauses, specifically coordination:
►the and concept and :opX relations for addition
►the or/either/neither concepts and :opX relations for disjunction
►contrast-01 and its participant roles for contrast
►Many subordinated relationships are represented through participant roles, e.g.:
►:manner
►:purpose
►:condition
►UMR makes explicit the semantic relations between (more general) "coordination" semantics and (more specific) "subordination" semantics

Discourse relations in UMR
[Figure: the UMR discourse relations lattice. General coordination values (and, additive, consecutive, inclusive-disj, exclusive-disj, or) sit above more specific concepts (and + but: but-91; and + unexpected: unexpected-co-occurrence-91; and + contrast: contrast-91) and participant roles (:apprehensive, :condition, :cause, :purpose, :temporal, :manner, :pure-addition, :substitute, :concession, :concessive-condition, :subtraction).]

Disambiguation of quantification scope in UMR
“Someone didn’t answer all the questions”
(a / answer-01
:ARG0 (p / person)
:ARG1 (q / question :quant All :polarity -)
:pred-of (s / scope :ARG0 p :ARG1 q))
∃p(person(p) ∧ ¬∀q(question(q) →
∃a(answer-01(a) ∧ ARG1(a, q) ∧ ARG0(a, p))))

Quantification scope annotation
►Scope is not annotated for summation readings, nor where a distributive or collective reading can be predictably derived from the lexical semantics.
►The linguistics students ran 5 kilometers to raise money for charity. (distributive)
►The linguistics students carried a piano into the theater. (collective)
►Ten hurricanes hit six states over the weekend. (summative)
►The scope annotation only comes into play when some overt linguistic element forces an interpretation that diverges from the lexical default:
►The linguistics students together ran 200 kilometers to raise money for charity.
►The bodybuilders each carried a piano into the theater.
►Ten hurricanes each hit six states over the weekend.

From AMR to UMR (Van Gysel et al., 2021)
►At the sentence level, UMR adds:
►an aspect attribute to eventive concepts
►person and number attributes for pronouns and other nominal expressions
►quantification scope between quantified expressions
►At the document level, UMR adds:
►temporal dependencies in lieu of tense
►modal dependencies in lieu of modality
►coreference relations beyond sentence boundaries
►To make UMR cross-linguistically applicable, UMR
►defines a set of language-independent abstract concepts and participant roles,
►uses lattices to accommodate linguistic variability,
►designs specifications for complicated mappings between words and UMR concepts.

UMR is a document-level representation
►Temporal relations are added to UMR graphs as temporal
dependencies
►Modal relations are also added to UMR graphs as modal
dependencies
►Coreference is added to UMR graphs as identity or subset
relations between named entities or events
►UMR favors relations over attributes where possible

UMR represents temporal relations in a document
as temporal dependency structures (TDS)
►The temporal dependency structure annotation involves
identifying the most specific reference time for each event
►Time expressions and other events are normally the
most specific reference times
►In some cases, an event may require two reference times
in order to make its temporal location as specific as
possible
Zhang and Xue (2018); Yao et al. (2020)

TDS Annotation
►If an event is not clearly linked temporally to either a
time expression or another event, then it can be linked
to the DCT or tense metanodes
►Tense metanodes capture vague stretches of time that
correspond to grammatical tense
►Past_Ref, Present_Ref, Future_Ref
►DCT is a more specific reference time than a tense
metanode

Temporal Dependency Structure (TDS)
►If we identify a reference time for every event and time expression in a document, the result will be a Temporal Dependency Graph.
[Figure: the temporal dependency structure for the passage below. ROOT dominates the DCT (4/30/2020); "today" attaches to the DCT via a Depends-on edge; the events descended, arrested, and assaulted attach via Contained edges, with Before/After edges ordering the events.]
"700 people descended on the state Capitol today, according to Michigan State Police. State Police made one arrest, where one protester had assaulted another, Lt. Brian Oleksyk said."

Genre in TDS Annotation
►Temporal relations function differently depending on the
genre of the text (e.g., Smith 2003)
►Certain genres proceed in temporal sequence from one
clause to the next
►While other genres involve generally non-sequenced
events
►News stories are a special type:
►many events are temporally sequenced
►but the temporal sequence does not always match the order of mention in the text

Modality in AMR
►Modality characterizes the reality status of events, without which the meaning representation of a text is incomplete
►AMR has six concepts that represent modality:
►possible-01, e.g., "The boy can go."
►obligate-01, e.g., "The boy must go."
►permit-01, e.g., "The boy may go."
►recommend-01, e.g., "The boy should go."
►likely-01, e.g., "The boy is likely to go."
►prefer-01, e.g., "The boy would rather go."
►Modality in AMR is represented as senses of an English verb or adjective.
►However, the same exact concepts for modality may not apply to other languages

Modal dependency structure
►There are two types of nodes in the modal dependency structure: events and conceivers
►Conceivers
►mental-level entities whose perspective is modelled in the text
►Each text has an author node (or nodes)
►All other conceivers are children of the AUTH node
►Conceivers may be nested under other conceivers
►Mary said that Henry wants...
[Figure: nesting chain AUTH → Mary → Henry]

Epistemic strength lattice
[Figure: the epistemic strength lattice. Epistemic Strength splits into Non-neutral and Non-full; these cover Full, Partial, and Neutral, with fine-grained values Strong partial, Weak partial, Strong neutral, and Weak neutral.]
Full: The dog barked.
Partial: The dog probably barked.
Neutral: The dog might have barked.

Modal dependency structure (MDS)
[Figure: the modal dependency structure for the passage below. ROOT links to AUTH (CNN) via a MODAL edge; the conceivers Michigan State Police and Lt. Brian Oleksyk are nested under AUTH; the events descended, arrested, and assaulted attach to their conceivers via FULLAFF (full affirmative) edges.]
"700 people descended on the state Capitol today, according to Michigan State Police. State Police made one arrest, where one protester had assaulted another, Lt. Brian Oleksyk said."
(Vigus et al., 2019; Yao et al., 2021)

Entity Coreference in UMR
►same-entity:
1. Edmund Pope tasted freedom today for the first time in more than eight months.
2. He denied any wrongdoing.
►subset:
1. He is very possessive and controlling but he has no right to be as we are not together.

Event coreference in UMR
►same-event:
1. El-Shater and Malek's property was confiscated and is believed to be worth millions of dollars.
2. Abdel-Maksoud stated the confiscation will affect the Brotherhood's financial bases.
►same-event:
1. The Three Gorges project on the Yangtze River has recently introduced the first foreign capital.
2. The loan, a sum of 12.5 million US dollars, is an export credit provided to the Three Gorges project by the Canadian government, which will be used mainly for the management system of the Three Gorges project.
►subset:
1. 1 arrest took place in the Netherlands and another in Germany.
2. The arrests were ordered by anti-terrorism judge Fragnoli.

A UMR example with coreference
He is controlling but he has no right to be as we are not together.
(s4c / but-91
:ARG1 (s4c3 / control-01
:ARG0 (s4p2 / person
:ref-person 3rd
:ref-number Singular))
:ARG2 (s4r / right-05
:ARG1 s4p2
:ARG1-of (s4c2 / cause-01
:ARG0 (s4h / have-mod-91
:ARG0 (s4p3 / person
:ref-person 1st
:ref-number Plural)
:ARG1 (s4t/ together)
:aspect State
:modstr FullNeg))
:modstr FullNeg))
(s / sentence
:coref ((s4p2 :subset-of s4p3)))
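
The sentence-level portion of a UMR is standard PENMAN notation, so graphs like the one above can be loaded programmatically. A minimal Python sketch, assuming the third-party penman package (pip install penman) and showing only a sentence-level fragment:

import penman

umr = """
(s4c3 / control-01
   :ARG0 (s4p2 / person
      :ref-person 3rd
      :ref-number Singular))
"""

g = penman.decode(umr)            # parse the PENMAN string into a Graph
print(g.top)                      # 's4c3'
for source, role, target in g.triples:
    print(source, role, target)   # e.g. ('s4p2', ':ref-person', '3rd')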

The challenge: Integration of different meaning components
into one graph
►How do we represent all this information in a unified structure that
is still easy to read and scalable?
►UMR pairs a sentence-level representation (a modified form of
AMR) with a document-level representation.
►We assume that a text will still have to be processed sentence by
sentence, so each sentence will have a fragment of the
document-level super-structure.

Integrated UMR representation
1. Edmund Pope tasted freedom today for the first time in more than eight months.
2. Pope is the American businessman who was convicted last week on spying charges and sentenced to 20 years in a Russian prison.
3. He denied any wrongdoing.

Sentence-level representation vs document-level representation
(s1t2 / taste-01
:aspect Performance
:ARG0 (s1p / person
:name (s1n2 / name
:op1 "Edmund" :op2 "Pope"))
:ARG1 (s1f / free-04 :ARG1 s1p)
:time (s1t3 / today)
:ord (s1o3 / ordinal-entity
:value 1
:range (s1m / more-than
:op1 (s1t / temporal-quantity
:quant 8
:unit (s1m2 / month)))))
Edmund Pope tasted freedom today for the first time in more
than eight months.
(s1 / sentence
:temporal ((DCT :before s1t2)
(s1t3 :contained s1t2)
(DCT :depends-on s1t3))
:modal ((ROOT :MODAL AUTH)
(AUTH :FullAff s1t2)))

UMR graph
[Figure: the integrated UMR graph pairing the sentence-level graph with its document-level temporal, modal, and coreference structure.]

From AMR to UMR (Van Gysel et al., 2021)
►At the sentence level, UMR adds:
►an aspect attribute to eventive concepts
►person and number attributes for pronouns and other nominal expressions
►quantification scope between quantified expressions
►At the document level, UMR adds:
►temporal dependencies in lieu of tense
►modal dependencies in lieu of modality
►coreference relations beyond sentence boundaries
►To make UMR cross-linguistically applicable, UMR
►defines a set of language-independent abstract concepts and participant roles,
►uses lattices to accommodate linguistic variability,
►designs specifications for complicated mappings between words and UMR concepts.

Elements of AMR are already cross-linguistically
applicable
►Abstract concepts (e.g., person, thing, have-org-role-91):
►Abstract concepts are concepts that do not have explicit lexical support but can be inferred from context
►Some semantic relations (e.g., :manner, :purpose, :time) are also
cross-linguistically applicable

Language-independent vs language-specific aspects of AMR
[Figure: the Chinese AMR for the sentence below. Language-specific concepts: 加入-01 "join-01", 董事会 "board", 董事 "director", 执行 "executive" (with polarity -). Language-independent material: the abstract concepts person, name ("皮埃尔" "文肯"), date-entity (month 11, day 29), temporal-quantity (61), have-org-role-91, and the relations Arg0, Arg1, time, name, op1, op2, age, quant, unit, Arg1-of, Arg2, month, day, and mod.]
"61 岁的 Pierre Vinken 将于 11 月 29 日加入董事会，担任非执行董事。" (the Chinese rendering of the English sentence on the next slide)

Language-independent vs language-specific aspects of AMR
[Figure: the corresponding English AMR, with the same structure: join-01, person with name ("Pierre" "Vinken") and age (temporal-quantity 61 year), board, have-org-role-91 with director modified by executive (polarity -), and date-entity (month 11, day 29), connected by Arg0, Arg1, time, name, op1, op2, age, quant, unit, Arg1-of, Arg2, month, day, and mod.]
"Pierre Vinken, 61 years old, will join the board as a nonexecutive director Nov. 29."

Abstract concepts in UMR
►Abstract concepts inherited from AMR:
►Standardization of quantities, dates etc.: have-name-91,
have-frequency-91, have-quant-91, temporal-quantity, date-entity...
►New concepts for abstract events: “non-verbal” predication.
►New concepts for abstract entities: entity types are annotated for
named entities and implicit arguments.
►Scope: scope concept to disambiguate scope ambiguity to facilitate
translation of UMR to logical expressions (see sentence-level
structure).
►Discourse relations: concepts to capture sentence-internal discourse
relations (see sentence-level structure).

Sample abstract events

Clause type                        UMR predicate      Arg0        Arg1
Thetic/presentational possession   have-91            possessor   possessum
Predicative possession             belong-91          possessum   possessor
Thetic/presentational location     exist-91           location    theme
Predicative location               have-location-91   theme       location
Property predication               have-mod-91        theme       property
Object predication                 have-role-91       theme       reference point (Arg2: object category)
Equational                         identity-91        theme       equated referent

Language-independent vs language-specific participant roles
►Core participant roles are defined in a set of frame files (a valency lexicon, see Palmer et al. 2005). The semantic roles for each sense of a predicate are defined:
►e.g., boil-01: apply heat to water
   ARG0-PAG: applier of heat
   ARG1-PPT: water
►Most languages do not have frame files
►but see e.g. Hindi (Bhat et al. 2014), Chinese (Xue 2006)
►UMR defines language-independent participant roles
►based on ValPaL data on co-expression patterns of different micro-roles (Hartmann et al., 2013)

Language-independent roles: an incomplete list
UMR annotation   Definition
Actor            animate entity that initiates the action
Undergoer        entity (animate or inanimate) that is affected by the action
Theme            entity (animate or inanimate) that moves from one entity to another entity, either spatially or metaphorically
Recipient        animate entity that gains possession (or at least temporary control) of another entity
Force            inanimate entity that initiates the action
Causer           animate entity that acts on another animate entity to initiate the action
Experiencer      animate entity that cognitively or sensorily experiences a stimulus
Stimulus         entity (animate or inanimate) that is experienced

How UMR accommodates cross-linguistic
variability
►Not all languages grammaticalize/overtly express the same
meaning contrasts:
►English: I (1SG) vs. you (2SG) vs. she/he (3SG)
►Sanapaná: as-(1SG) vs. an-/ap-(2/3SG)
►However, there are typological patterns in how semantic
domains get subdivided:
►A 1/3SG person category would be much more surprising than a
2/3SG one
►UMR uses lattices for abstract concepts, attribute values, and
relations to accommodate variability across languages.
►Languages with overt grammatical distinctions can choose to use
more fine-grained categories

Lattices
►Semantic categories are organized in “lattices”
to achieve cross-lingual compatibility while
accommodating variability.
►We have lattices for abstract concepts,
relations, as well as attributes
[Figure: the person lattice – Non-3rd and Non-1st above 1st, 2nd, and 3rd, with 1st dividing into Exclusive and Inclusive.]

Wordhood vs concepthood across languages
►The mapping between words and concepts in languages is not one-to-one: UMR designs specifications for complicated mappings between words and concepts.
►Multiple words can map to one concept (e.g., multi-word expressions)
►One word can map to multiple concepts (morphological complexity)

Multiple words can map to a single (discontinuous) concept
(x0 / 帮忙-01
   :aspect Performance
   :arg0 (x1 / 地理学)
   :affectee (x2 / 我)
   :degree (x3 / 大))
地理学帮了我很大的忙。
"Geography has helped me a lot."

(w / want-01
   :aspect State
   :ARG0 (p / person
      :ref-person 3rd
      :ref-number Singular)
   :ARG1 (g / give-up-07
      :ARG0 p
      :ARG1 (t / that)
      :aspect Performance
      :modpred w)
   :ARG1-of (c / cause-01
      :ARG0 (a / umr-unknown)))
"Why would he want to give that up?"

One word maps to multiple UMR concepts
►One word containing predicate and arguments
Arapaho:
he'ih'iixooxookbixoh'oekoohuutoono'
he'ih'ii-xoo-xook-bixoh'oekoohuutoo-no'
NARR.PST.IPFV-REDUP-through-make.hand.appear.quickly-PL
"They were sticking their hands right through them [the ghosts] to the other side."
(b/ bixoh'oekoohuutoo `stick hands through'
:actor (p/ person :ref-person 3rd :ref-number Plural)
:theme (h/ hands)
:undergoer (g/ [ghosts])
:aspect Endeavor
:modstr FullAff)
►Noun Incorporation (less grammaticalized): identify predicate and argument
concept

UMR-Writer
►The annotation interface we use for UMR annotation is
called UMR-Writer
►UMR-Writer includes interfaces for project management, sentence-level and document-level annotation, as well as lexicon (frame file) creation.
►UMR-Writer has both keyboard-based and click-based interfaces to accommodate the annotation habits of different annotators.
►UMR-Writer is web-based and supports UMR annotation for a variety of languages and formats. So far it supports Arabic, Arapaho, Chinese, English, Kukama, Navajo, and Sanapana. It can easily be extended to more languages.

UMR writer: Project management

UMR writer: Project management

UMR writer: Sentence-level interface

UMR writer: Lexicon interface

UMR Writer: Document-level interface

UMR summary
►UMR is a rooted, directed, node-labeled and edge-labeled document-level graph.
►UMR is a document-level meaning representation that builds on sentence-level meaning representations
►UMR aims to achieve semantic stability across syntactic variations and support logical inference
►UMR is a cross-lingual meaning representation that separates language-general aspects of meaning from those that are language-specific
►We are annotating UMR data for English, Chinese, Arabic, Arapaho, Kukama, Sanapana, Navajo, and Quechua

Use cases of UMR
►Temporal reasoning
►UMR can be used to extract temporal dependencies, which can then be used to perform temporal reasoning
►Knowledge extraction
►UMR annotates aspect, and this can be used to extract habitual events or states, which are typical knowledge forms
►Factuality determination
►UMR annotates modal dependencies, and this can be used to verify the factuality of events or claims
►As an intermediate representation for dialogue systems where more control is needed
►UMR annotates entities and coreference, which helps track dialogue states

UMR activities coming up
•The 5th International Workshop on Designing Meaning Representations will be held May 21, 2024 (tomorrow)
•UMR summer schools:
   •June 9-15, 2024, Boulder, University of Colorado
   •Summer 2025: Waltham, MA, USA, Brandeis University
•UMR 1.0 Release:
   •Poster presentation, May 24, Session 3

UMR 1.0
released via
https://umr4nlp.github.io/web/

References
Banarescu, L., Bonial, C., Cai, S., Georgescu, M., Griffitt, K., Hermjakob, U., Knight, K., Koehn, P., Palmer, M., and Schneider, N. (2013). Abstract Meaning Representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178–186.
Hartmann, I., Haspelmath, M., and Taylor, B., editors (2013). The Valency Patterns Leipzig online database. Max Planck Institute for Evolutionary Anthropology, Leipzig.
Van Gysel, J. E. L., Vigus, M., Chun, J., Lai, K., Moeller, S., Yao, J., O'Gorman, T. J., Cowell, A., Croft, W. B., Huang, C. R., Hajic, J., Martin, J. H., Oepen, S., Palmer, M., Pustejovsky, J., Vallejos, R., and Xue, N. (2021). Designing a uniform meaning representation for natural language processing. Künstliche Intelligenz, pages 1–18.
Vigus, M., Van Gysel, J. E., and Croft, W. (2019). A dependency structure annotation for modality. In Proceedings of the First International Workshop on Designing Meaning Representations, pages 182–198.
Yao, J., Qiu, H., Min, B., and Xue, N. (2020). Annotating temporal dependency graphs via crowdsourcing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5368–5380.
Yao, J., Qiu, H., Zhao, J., Min, B., and Xue, N. (2021). Factuality assessment as modal dependency parsing. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1540–1550.
Zhang, Y. and Xue, N. (2018). Structured interpretation of temporal relations. In Proceedings of LREC 2018.

Acknowledgements
We would like to acknowledge the support of the National Science Foundation:
•NSF IIS (2018): "Building a Uniform Meaning Representation for Natural Language Processing", awarded to Brandeis (Xue, Pustejovsky), Colorado (M. Palmer, Martin, and Cowell) and UNM (Croft).
•NSF CCRI (2022): "Building a Broad Infrastructure for Uniform Meaning Representations", awarded to Brandeis (Xue, Pustejovsky) and Colorado (A. Palmer, M. Palmer, Cowell, Martin), with Croft as consultant.
All views expressed here are those of the authors and do not necessarily represent the views of the National Science Foundation.

Tutorial
Meaning Representations for Natural Languages:
Design, Models and Applications
Jeffrey Flanigan
Ishan Jindal
Yunyao Li
Nianwen Xue
Julia Bonn
Jan Hajič

Morning Session
•Part 1: Introduction –Julia Bonn
•Part 2a: Common Meaning Representations:
•AMR –Julia Bonn
•Other Meaning Representations –Jan Hajič
•Break
•Part 2b: Common Meaning Representations
•UMR –Nianwen Xue
Tutorial Outline

Afternoon Session
•Part 3: Modeling Meaning Representation:
•SRL –Ishan Jindal
•AMR –Jeff Flanigan
•Break
•Part 4: Applying Meaning Representations
–Yunyao Li, Jeff Flanigan
•Part 5: Open Questions and Future Work
–Nianwen Xue
Tutorial Outline

Meaning Representations for Natural Languages Tutorial Part 3a
Modeling Meaning Representation:
Semantic Role Labeling (SRL)
Julia Bonn, Jeffrey Flanigan, Jan Hajic, Ishan Jindal, Yunyao Li, Nianwen Xue

Who did what to whom, when, where and how? (Palmer, 1990; Gildea and Jurafsky, 2000; Màrquez et al., 2008)
151
Semantic Role Labeling (SRL)

Derek broke the window with a hammer to escape.
152
Semantic Role Labeling (SRL)
1. Predicate Identification: identify all predicates in the sentence → broke

153
Semantic Role Labeling (SRL)
Derek broke the window with a hammer to escape.
1. Predicate Identification: identify all predicates in the sentence → broke
2. Sense Disambiguation: classify the sense of each predicate → break.01
English PropBank (break.01, break): A0: breaker; A1: thing broken; A2: instrument; A3: pieces; A4: arg1 broken away from what?
FrameNet frame (Breaking_apart): Pieces, Whole, Criterion, Manner, Means, Place, …
VerbNet (Break-45.1): Agent, Patient, Instrument, Result

154
Semantic Role Labeling (SRL)
Derek broke the window with a hammer to escape. (broke → break.01)
1. Predicate Identification: identify all predicates in the sentence
2. Sense Disambiguation: classify the sense of each predicate
3. Argument Identification: find all roles of each predicate
Argument identification can either be identification of spans (span SRL) or identification of heads (dependency SRL).

155
Semantic Role Labeling (SRL)
Derek broke the window with a hammer to escape. (broke → break.01)
1. Predicate Identification: identify all predicates in the sentence
2. Sense Disambiguation: classify the sense of each predicate
3. Argument Identification: find all roles of each predicate
4. Argument Classification: assign a semantic label to each role
   breaker: Derek; thing broken: the window; instrument: with a hammer; purpose: to escape

156
Semantic Role Labeling (SRL)
Derek broke the window with a hammer to escape. (broke → break.01)
If using PropBank:
   A0 (breaker): Derek; A1 (thing broken): the window; A2 (instrument): with a hammer; AM-PRP (purpose): to escape

157
Semantic Role Labeling (SRL)
Derek broke the window with a hammer to escape. (broke → break.01)
1. Predicate Identification → 2. Sense Disambiguation → 3. Argument Identification → 4. Argument Classification
5. Global Optimization: apply global constraints over predicates and arguments
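
The five steps above can be summarized as code. A minimal Python sketch of the pipelined architecture (the four model objects are hypothetical placeholders, not any specific library's API):

def srl_pipeline(tokens, pred_id, sense_clf, arg_id, role_clf):
    """Classic pipelined SRL; each step is a separate trained model."""
    frames = []
    for p in pred_id(tokens):                    # 1. predicate identification
        sense = sense_clf(tokens, p)             # 2. sense disambiguation, e.g. break.01
        spans = arg_id(tokens, p)                # 3. argument identification (spans or heads)
        roles = [(s, role_clf(tokens, p, s))     # 4. argument classification (A0, A1, AM-PRP, ...)
                 for s in spans]
        frames.append((p, sense, roles))
    return frames                                # 5. optionally re-score under global constraints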

158
Outline
❑Early SRL approaches [< 2017]
❑Typical neural SRL model components
   ❑Performance analysis
❑Syntax-aware neural SRL models
   ❑What, when and where?
   ❑Performance analysis
   ❑How to incorporate syntax?
❑Syntax-agnostic neural SRL models
   ❑Performance analysis
   ❑Do we really need syntax for SRL?
   ❑Are high-quality contextual embeddings enough for the SRL task?
❑Practical SRL systems
   ❑Should we rely on this pipelined approach?
   ❑End-to-end SRL systems
   ❑Can we jointly predict dependency and span?
❑Machine Representations for SRL
   ❑Autoencoder models: BERT for SRL; incorporating semantic role label definitions
   ❑Autoregressive models: SRL as an MRC task; SRL generation; LLaMA adapters; probing ChatGPT on SRL
❑Conclusion

159
Outline
❑Early SRL approaches [< 2017]
❑Typical neural SRL model components
   ❑Performance analysis
❑Syntax-aware neural SRL models
   ❑What, when and where?
   ❑Performance analysis
   ❑How to incorporate syntax?
❑Syntax-agnostic neural SRL models
   ❑Performance analysis
   ❑Do we really need syntax for SRL?
   ❑Are high-quality contextual embeddings enough for the SRL task?
❑Practical SRL systems
   ❑Should we rely on this pipelined approach?
   ❑End-to-end SRL systems
   ❑Can we jointly predict dependency and span?
❑Machine Representations for SRL
   ❑Autoencoder models: BERT for SRL; incorporating semantic role label definitions
   ❑Autoregressive models: SRL as an MRC task; SRL generation; LLaMA adapters; probing ChatGPT on SRL
❑Conclusion

160
Early SRL Approaches
2 to 3 steps to obtain the complete predicate-argument structure:
Predicate Identification: generally not considered a task, as existing SRL datasets provide gold predicate locations.
Predicate Sense Disambiguation: logistic regression [Roth and Lapata, 2016]
Argument Identification: binary classifiers [Pradhan et al., 2005; Toutanova et al., 2008]
Role Labeling: performed using a classifier (SVM, logistic regression); an argmax over roles results in a local assignment.
Requires feature engineering, mostly syntactic [Gildea and Jurafsky, 2002]
Global Optimization: enforce linguistic and structural constraints (e.g., no overlaps, discontinuous arguments, reference arguments, ...)
   - Viterbi decoding (k-best list with constraints) [Täckström et al., 2015]
   - Dynamic programming [Täckström et al., 2015; Toutanova et al., 2008]
   - Integer linear programming [Punyakanok et al., 2008]
   - Re-ranking [Toutanova et al., 2008; Björkelund et al., 2009]

161
Outline
❑Early SRL approaches [< 2017]
❑Typical neural SRL model components
   ❑Performance analysis
❑Syntax-aware neural SRL models
   ❑What, when and where?
   ❑Performance analysis
   ❑How to incorporate syntax?
❑Syntax-agnostic neural SRL models
   ❑Performance analysis
   ❑Do we really need syntax for SRL?
   ❑Are high-quality contextual embeddings enough for the SRL task?
❑Practical SRL systems
   ❑Should we rely on this pipelined approach?
   ❑End-to-end SRL systems
   ❑Can we jointly predict dependency and span?
❑Machine Representations for SRL
   ❑Autoencoder models: BERT for SRL; incorporating semantic role label definitions
   ❑Autoregressive models: SRL as an MRC task; SRL generation; LLaMA adapters; probing ChatGPT on SRL
❑Conclusion

Typical Neural SRL Components
162
A typical neural SRL model contains three components:
Embedder: represents each input token as a continuous vector (word embeddings such as FastText, GloVe, ELMo, BERT).
Encoder: encodes context information into each token representation (LSTMs, attention).
Classifier: assigns a semantic role label to each token in the input sentence (an MLP). [Local + global]
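
A minimal PyTorch sketch of this three-part decomposition (the 0/1 predicate flag follows the slides; all sizes and names are illustrative):

import torch
import torch.nn as nn

class NeuralSRL(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim, n_labels):
        super().__init__()
        self.embedder = nn.Embedding(vocab_size, emb_dim)           # token -> vector
        self.encoder = nn.LSTM(emb_dim + 1, hidden_dim,             # +1: predicate flag
                               num_layers=2, bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * hidden_dim, n_labels)       # per-token role logits

    def forward(self, token_ids, predicate_flag):
        # token_ids: (batch, seq) ints; predicate_flag: (batch, seq) 0/1 floats
        x = self.embedder(token_ids)
        x = torch.cat([x, predicate_flag.unsqueeze(-1)], dim=-1)
        h, _ = self.encoder(x)                                      # contextualize tokens
        return self.classifier(h)                                   # (batch, seq, n_labels)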

Neural SRL Components – Embedder
163
Embedder: represents each input token as a continuous vector representation.
He had dared to defy nature
The embeddings (FastText, GloVe, ELMo, BERT) can be static or dynamic, and can incorporate syntax information.
The predicate position is usually marked with a binary flag: 0 for non-predicate tokens, 1 for the predicate. End-to-end systems do not include this flag.
Sub-task: Argument Classification

Neural SRL Components – Embedder
164
Embedder: represents each input token as a continuous vector representation. [Merchant et al., 2020]
Static embeddings:
   GloVe: He et al., 2017; Strubell et al., 2018
   SENNA: Ouchi et al., 2018
Dynamic embeddings:
   ELMo: Marcheggiani et al., 2017; Ouchi et al., 2018; Li et al., 2019; Lyu et al., 2019; Jindal et al., 2020; Li et al., 2020
   BERT: Shi et al., 2019; Jindal et al., 2020; Li et al., 2020; Conia et al., 2020; Zhang et al., 2021; Tian et al., 2022
   RoBERTa: Conia et al., 2020; Blloshmi et al., 2021; Fei et al., 2021; Wang et al., 2022; Zhang et al., 2022
   XLNet: Zhou et al., 2020; Tian et al., 2022
Sub-task: Argument Classification

Performance Analysis
165
Best-performing model for each word-embedding type (CoNLL09 EN):
[Bar charts: WSJ F1 – 85.28 (random), 89.6, 91.4, 91.5 (ELMo; Li et al., 2019), 92.6, 93.3 (BERT; Conia et al., 2020); Brown F1 – 75.09 (random), 79.3, 83.28, 84.67, 85.9, 87.2.]
Does this mean that we only need better contextualized embeddings to perform well on the SRL task?
Sub-task: Argument Classification

Neural SRL Components – Encoder
166
Encoder: encodes the context information into each token.
He had dared to defy nature
The encoder can be stacked BiLSTMs (with left and right passes) or some variant of LSTMs, or an attention network; it may also incorporate syntax information.
Sub-task: Argument Classification

Neural SRL Components – Classifier
167
Classifier: assigns a semantic role label to each token in the input sentence.
He had dared to defy nature → B-A0 O O B-A2 I-A2 I-A2
Usually a feed-forward (MLP) layer followed by a softmax.
Sub-task: Argument Classification

168
Outline
❑Early SRL approaches [< 2017]
❑Typical neural SRL model components
   ❑Performance analysis
❑Syntax-aware neural SRL models
   ❑What, when and where?
   ❑Performance analysis
   ❑How to incorporate syntax?
❑Syntax-agnostic neural SRL models
   ❑Performance analysis
   ❑Do we really need syntax for SRL?
   ❑Are high-quality contextual embeddings enough for the SRL task?
❑Practical SRL systems
   ❑Should we rely on this pipelined approach?
   ❑End-to-end SRL systems
   ❑Can we jointly predict dependency and span?
❑Machine Representations for SRL
   ❑Autoencoder models: BERT for SRL; incorporating semantic role label definitions
   ❑Autoregressive models: SRL as an MRC task; SRL generation; LLaMA adapters; probing ChatGPT on SRL
❑Conclusion

169
What Syntax for SRL?
Everything or anything that explains the syntactic structure of the sentence: surface form, lemma form, U{X}POS, dependency relation.
Parsed with the UDPipe parser: http://lindat.mff.cuni.cz/services/udpipe/

Where is the syntax being used? At the Embedder
170
Concatenate {POS, dependency relation, dependency head, and other syntactic information} with the word embeddings (FastText, GloVe, ELMo, BERT).
Marcheggiani et al., 2017b; Li et al., 2018; He et al., 2018; Wang et al., 2019; Kasai et al., 2019; He et al., 2019; Li et al., 2020; Zhou et al., 2020

Where is the syntax being used? At the Encoder
171
Encode the dependency tree directly, with graph encoders (GCNs) or tree-structured LSTMs.
Marcheggiani et al., 2017; Zhou et al., 2020; Marcheggiani et al., 2020; Zhang et al., 2021; Tian et al., 2022

At what level is syntax used? Joint learning
172
Multi-task learning of syntax and semantics across the embedder, encoder, and classifier.
Strubell et al., 2018; Shi et al., 2020

Performance Analysis: comparing syntax-aware models
173
[Bar chart: WSJ F1 on CoNLL09 EN for syntax-aware models, 2017–2021 (Marcheggiani et al., 2017b; He et al., 2018; Kasai et al., 2019; Lyu et al., 2019; Li et al., 2020): 87.7, 88, 89.5, 89.8, 90.2, 90.86, 90.99, 91.27, 91.7, 92.83. Syntax is used at the embedder (Emb), the encoder (Enc), or both; the BERT/fine-tune regime accounts for roughly +2.0–2.9 F1.]
The encoder level is best suited for utilizing dependency graphs, which provide extra information about how the tokens are syntactically connected to each other.

A Simple and Accurate Syntax-Agnostic Neural Model for Dependency-based Semantic Role Labeling
Marcheggiani et al., 2017
Predicts semantic dependency edges between predicates and arguments.
Uses predicate-specific roles (such as make-A0 instead of A0), as opposed to a generic sequence labeling task.
174
Syntax at the embedder level
Diego Marcheggiani, Anton Frolov, and Ivan Titov. 2017. A Simple and Accurate Syntax-Agnostic Neural Model for Dependency-based Semantic Role Labeling. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 411–420, Vancouver, Canada. Association for Computational Linguistics.

Marcheggiani et al., 2017
He had dared to defy nature
Embedder (input word representation): for each token, concatenate
   Wp: randomly initialized word embedding
   Wr: pre-trained word embedding
   PO: randomly initialized POS embedding
   Le: randomly initialized lemma embedding
   plus a binary predicate-specific feature
175
Syntax at the embedder level

Marcheggiani et al., 2017
He had dared to defy nature
Encoder: several BiLSTM layers on top of the embedder,
   - capturing both the left and the right context;
   - each BiLSTM layer takes the lower layer as input.
176
Syntax at the embedder level

Marcheggiani et al., 2017
He had dared to defy nature
Preparation for the classifier: provide the predicate's hidden state as another input to the classifier, along with each token.
+ ~6% F1 on CoNLL09 EN
177
The two ways of encoding predicate information, using a predicate-specific flag at the embedder level and incorporating the predicate state in the classifier, turn out to be complementary.
Syntax at the embedder level

Marcheggiani et al., 2017
178
[Bar charts, CoNLL09 EN F1: WSJ – 86.9 (Björkelund et al., 2010), 87.3, 87.3 (FitzGerald et al., 2015), 87.7, 87.7 (Marcheggiani et al., 2017); Brown – 75.6, 75.7, 75.2, 76.1, 77.7 (Marcheggiani et al., 2017).]
Syntax at the embedder level

Marcheggiani et al., 2017
He had dared to defy nature
Takeaways
   - Appending POS helps: approx. 1 F1 point gain.
   - Predicate-specific encoding helps: approx. 6 F1 points gain.
   - Quite effective for the classification of arguments that are far from the predicate in terms of word distance.
   - Noted: substantial improvement on the EN out-of-domain set over previous works.
179
Syntax at the embedder level

Encoding Sentences with Graph Convolutional Networks for Semantic Role Labeling
Marcheggiani et al., 2017b
He had dared to defy nature
The basic SRL components remain the same as in [Marcheggiani et al., 2017]. K GCN layers are inserted between the encoder and the classifier, re-encoding the encoder representations based on the syntactic structure of the sentence and thereby modeling the syntactic dependency structure.
180
Syntax at the encoder level
Diego Marcheggiani and Ivan Titov. 2017. Encoding Sentences with Graph Convolutional Networks for Semantic Role Labeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1506–1515, Copenhagen, Denmark. Association for Computational Linguistics.

Marcheggiani et al., 2017b
What is a syntactic GCN?
He had dared to defy nature
Each GCN layer propagates information (through a ReLU nonlinearity) along dependency edges such as nsubj, xcomp, obj, aux, and mark. To encode information from nodes k hops away, use k layers: stacking layers widens the syntactic neighborhood each token sees.
181
Syntax at the encoder level
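
A toy sketch of one such layer in PyTorch (simplified: no per-label weights or edge gates, which the full model uses):

import torch
import torch.nn as nn

class SyntacticGCNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.w_in = nn.Linear(dim, dim)     # messages along head -> dependent edges
        self.w_out = nn.Linear(dim, dim)    # messages along dependent -> head edges
        self.w_self = nn.Linear(dim, dim)   # self-loop

    def forward(self, h, adj):
        # h: (seq, dim) token states; adj: (seq, seq) 0/1 dependency adjacency matrix
        msg = adj @ self.w_in(h) + adj.T @ self.w_out(h) + self.w_self(h)
        return torch.relu(msg)              # stack k layers for a k-hop neighborhood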

Encoding Sentences with Graph Convolutional Networks for Semantic Role Labeling
Marcheggiani et al., 2017b
He had dared to defy nature
Claim: GCNs help capture long-range dependencies.
But: encoding a k-hop neighborhood seems to hurt performance (k = 1 works best).
182
Syntax at the encoder level

Encoding Sentences with Graph Convolutional Networks for Semantic Role Labeling
Marcheggiani et al., 2017b
Gold dependencies can significantly improve performance.
Dev set F1: No Syntax 82.7; GCN (Predicted) 83.3; GCN (Gold) 86.4
183
Syntax at the encoder level

Marcheggiani et al., 2017b
184
[Bar charts, CoNLL09 EN F1: WSJ – 86.9 (Björkelund et al., 2010), 87.3, 87.3 (FitzGerald et al., 2015), 87.7, 87.7 (Marcheggiani et al., 2017), 88 (Marcheggiani et al., 2017b); Brown – 75.6, 75.7, 75.2, 76.1, 77.7, 77.2 (Marcheggiani et al., 2017b).]
Syntax at the encoder level

Marcheggiani et al., 2017b
Takeaways
   - Appending POS helps: approx. 1 F1 point gain.
   - Predicate-specific encoding helps: approx. 6 F1 points gain.
   - Modeling syntactic dependencies via a syntactic GCN further improves SRL performance, but NEEDS a high-quality syntactic parser.
   - Noted: improvement only on the EN in-domain set over previous works; previous work, however, showed improvement on the OOD set.
185

A Unified Syntax-aware Framework for Semantic Role Labeling
Li et al., 2018
He had dared to defy nature
A syntactic layer is inserted between the encoder and the classifier. It can be:
   - a syntactic GCN [Marcheggiani et al., 2017b];
   - a Tree-LSTM [Tai et al., 2015]: an extension of BiLSTMs that models tree-structured topologies;
   - a syntax-aware LSTM [Qian et al., 2017]: an extension of BiLSTMs that incorporates the syntactic information into each word representation by introducing an additional gate.
186
Syntax at the encoder level
Zuchao Li, Shexia He, Jiaxun Cai, Zhuosheng Zhang, Hai Zhao, Gongshen Liu, Linlin Li, and Luo Si. 2018. A Unified Syntax-aware Framework for Semantic Role Labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2401–2411, Brussels, Belgium. Association for Computational Linguistics.

Li et al., 2018
187
[Bar charts, CoNLL09 EN F1: WSJ – 87.3 (Täckström et al., 2015), 87.3 (Roth and Lapata, 2016), 87.7, 87.7, 88 (Marcheggiani et al., 2017/2017b), 89.8 (Li et al., 2018, ELMo); Brown – 75.7, 75.2, 76.1, 77.7, 77.2, 79.8 (Li et al., 2018). GloVe vs. ELMo embeddings.]
Syntax at the encoder level

188
Outline
❑Early SRL approaches [< 2017]
❑Typical neural SRL model components
   ❑Performance analysis
❑Syntax-aware neural SRL models
   ❑What, when and where?
   ❑Performance analysis
   ❑How to incorporate syntax?
❑Syntax-agnostic neural SRL models
   ❑Performance analysis
   ❑Do we really need syntax for SRL?
   ❑Are high-quality contextual embeddings enough for the SRL task?
❑Practical SRL systems
   ❑Should we rely on this pipelined approach?
   ❑End-to-end SRL systems
   ❑Can we jointly predict dependency and span?
❑Machine Representations for SRL
   ❑Autoencoder models: BERT for SRL; incorporating semantic role label definitions
   ❑Autoregressive models: SRL as an MRC task; SRL generation; LLaMA adapters; probing ChatGPT on SRL
❑Conclusion

Syntax-Agnostic Models
189
Syntax-agnostic models keep the embedder (FastText, GloVe, ELMo, BERT), encoder (BiLSTMs, attention), and classifier (MLP) architecture, with no syntactic features:
He et al., 2017; He et al., 2018; Cai et al., 2018; Ouchi et al., 2018; Guan et al., 2019; Li et al., 2019; Shi et al., 2019; Conia et al., 2020; Jindal et al., 2020; Zhou et al., 2020; Conia et al., 2021; Blloshmi et al., 2021; Wang et al., 2022; Zhang et al., 2022

Performance Analysis: comparing syntax-agnostic models
190
[Bar chart, CoNLL09 EN WSJ F1: He et al., 2018: 88.7; Cai et al., 2018: 89.6; Li et al., 2019: 89.1; Guan et al., 2019: 89.6; Jindal et al., 2019: 90.8; Shi et al., 2019: 92.4; Zhou et al., 2020: 91.4; Conia et al., 2020: 92.6; Blloshmi et al., 2021: 92.4; Zhang et al., 2022: 92.2; Wang et al., 2022: 93.3. BERT/fine-tune regime: +2.5–2.1.]

Performance Analysis: comparing syntax-agnostic models
191
[Bar chart, CoNLL09 EN Brown F1: He et al., 2018: 78.8; Cai et al., 2018: 79; Li et al., 2019: 78.9; Guan et al., 2019: 79.7; Jindal et al., 2019: 85; Shi et al., 2019: 85.7; Zhou et al., 2020: 87.3; Conia et al., 2020: 85.9; Blloshmi et al., 2021: 85.2; Zhang et al., 2022: 86; Wang et al., 2022: 87.2. BERT/fine-tune regime: +2.3–6.2.]

He et al., 2017
He had dared to defy nature
Embedder (input word representation): pre-trained word embeddings (wr) concatenated with a binary predicate-specific feature.
192
Luheng He, Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2017. Deep Semantic Role Labeling: What Works and What's Next. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 473–483, Vancouver, Canada. Association for Computational Linguistics.

He et al., 2017
He had dared to defy nature
Encoder: stacked BiLSTMs with
   - highway connections [Srivastava et al., 2015] to alleviate the vanishing gradient problem;
   - recurrent dropout [Gal et al., 2016] to reduce overfitting.
193

He et al., 2017
He had dared to defy nature → B-A0 O O B-A2 I-A2 I-A2
Local classifier: an MLP layer followed by a softmax.
Global optimization: constrained A* decoding with BIO, unique-core-role, continuation, reference, and syntactic constraints. SRL constraints were previously discussed by Punyakanok et al. (2008) and Täckström et al. (2015).
194
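
A toy Python sketch of just the BIO constraint from that list, enforced with Viterbi decoding over a transition mask (the actual system uses A* search with the full constraint set):

import numpy as np

def viterbi_bio(logits, labels):
    # logits: (seq_len, n_labels) local scores; labels: e.g. ["O", "B-A0", "I-A0", ...]
    n, k = logits.shape
    allowed = np.ones((k, k), dtype=bool)            # allowed[i, j]: is tag i -> tag j legal?
    for j, lab in enumerate(labels):
        if lab.startswith("I-"):                     # I-X may only follow B-X or I-X
            for i, prev in enumerate(labels):
                allowed[i, j] = prev in ("B-" + lab[2:], lab)
    score = logits[0].copy()
    score[[j for j, l in enumerate(labels) if l.startswith("I-")]] = -np.inf  # no I-X start
    back = np.zeros((n, k), dtype=int)
    for t in range(1, n):
        trans = np.where(allowed, score[:, None], -np.inf)   # mask illegal transitions
        back[t] = trans.argmax(axis=0)
        score = trans.max(axis=0) + logits[t]
    path = [int(score.argmax())]
    for t in range(n - 1, 0, -1):                    # follow back-pointers
        path.append(int(back[t, path[-1]]))
    return [labels[i] for i in reversed(path)]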

He et al., 2017
195
[Bar charts, CoNLL05 F1: WSJ – 77.2 (Surdeanu et al., 2007), 79.7 (Täckström et al., 2015), 79.9, 79.4 (Zhou and Xu, 2015), 82.8, 83.1 (He et al., 2017); Brown – 67.7, 67.8, 71.3, 71.2, 69.4, 72.1 (He et al., 2017).]

He et al., 2017
How well do LSTMs model global structural consistency, despite conditionally independent tagging decisions?
Long-range dependencies: performance tends to degrade, for all models, for arguments further from the predicate.
196

He et al., 2017
197
He had dared to defy nature
Takeaways
   - General label confusion between core arguments and contextual arguments is due to ambiguous definitions in the frame files.
   - Layers of BiLSTMs help capture long-range predicate-argument structures.
   - The number of BIO violations decreases with a deeper model.
   - Deeper BiLSTMs are better at enforcing structural consistencies, although not perfectly.

Tan, Zhixing, Mingxuan Wang, Jun Xie, Yidong Chen, and Xiaodong Shi. "Deep semantic role labeling with self-attention." In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, no. 1. 2018.
Tan et al., 2018
Do we really need all these hacks?
Let's break recurrence and allow every position in the sentence to attend over all positions in the input sequence:
   - no syntax;
   - use a predicate-specific flag;
   - use multi-head self-attention (stacked 10x, with RNN/CNN/FNN sublayers);
   - use GloVe embeddings.
He had dared to defy nature → B-A0 O O B-A2 I-A2 I-A2 (softmax classifier)
198

Tan et al., 2018
199
Dataset: CoNLL05
[Bar charts, F1: WSJ – 77.2 (Surdeanu et al., 2007), 79.7 (Täckström et al., 2015), 79.9, 79.4 (Zhou and Xu, 2015), 82.8, 83.1, 84.8 (Tan et al., 2018); Brown – 67.7, 67.8, 71.3, 71.2, 69.4, 72.1, 74.1 (Tan et al., 2018).]

Tan et al., 2018
Takeaways
   - Substantial improvements on CoNLL05 WSJ compared to [He et al., 2017].
   - No need for CONSTRAINED decoding (it slows things down); just use argmax decoding: 83.1 vs. 83.0 [token classification].
   - As reported earlier, model depth is the key, compared against model width.
   - An FNN seems a better choice than a CNN or RNN when attention is used as the encoder.
   - Positional embeddings are necessary to reach full performance.
He had dared to defy nature → B-A0 O O B-A2 I-A2 I-A2 (softmax classifier)
200

Simple BERT Models for Relation Extraction and SRL
Shi et al., 2019
He had dared to defy nature
Input: [CLS] He had dared to defy nature [SEP] dared [SEP]
❑Use a BERT LM to obtain predicate-aware contextualized embeddings for the encoder.
❑BiLSTMs form the encoder layer (1x).
❑Concatenate the predicate hidden state to the hidden states of the rest of the tokens, similar to [Marcheggiani et al., 2017], and feed the result into a one-layer MLP classifier.
201
Shi, Peng, and Jimmy Lin. "Simple BERT models for relation extraction and semantic role labeling." arXiv preprint arXiv:1904.05255 (2019).
Are high-quality contextual embeddings enough for the SRL task?
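
A minimal sketch of the predicate-aware input encoding, assuming the HuggingFace transformers package (passing the sentence and the predicate as a text pair yields the [CLS] ... [SEP] ... [SEP] layout):

from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased")

inputs = tokenizer("He had dared to defy nature", "dared", return_tensors="pt")
# input_ids encode: [CLS] He had dared to defy nature [SEP] dared [SEP]

hidden = model(**inputs).last_hidden_state   # predicate-aware contextual embeddings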

Shi et al., 2019
202
[Bar charts, CoNLL05 F1: WSJ – 79.4 (FitzGerald et al., 2015), 82.8, 83.1 (He et al., 2017), 84.8, 86 (Strubell et al., 2018, ELMo), 88.1, 88.8 (Shi et al., 2019, BERT-L): +2.1 over the previous best. Brown – 71.2, 69.4, 72.1, 74.1, 76.5, 80.9, 82.1 (Shi et al., 2019, BERT-L): +4.4.]
Are high-quality contextual embeddings enough for the SRL task?

Shi et al., 2019
He had dared to defy nature
Are powerful contextualized embeddings all we need for SRL? Do we no longer need syntax to perform better on SRL?
Do we know if BERT embeddings encode syntax implicitly? Yes [Jawahar et al., 2019]. And explicit syntax information has been shown to further improve state-of-the-art SRL performance.
203
Are high-quality contextual embeddings enough for the SRL task?

204
Comparison: syntax-agnostic (SG) vs. syntax-aware (SA) models
[Bar chart, CoNLL09 EN WSJ F1, 2017–2021, alternating SG and SA systems (Marcheggiani et al., 2017; Li et al., 2018; Lyu et al., 2019; Li et al., 2020; Fei et al., 2021): 88, 89.6, 89.8, 92.4, 90.99, 92.6, 91.7, 93.3, 92.83. Under the BERT/fine-tune regime the gap between the two families narrows.]

205
Comparison: syntax-agnostic (SG) vs. syntax-aware (SA) models
[Bar chart, CoNLL09 EN Brown F1, 2017–2021 (Marcheggiani et al., 2017b; Li et al., 2018; Kasai et al., 2019; Zhou et al., 2020): 77.7, 79, 79.8, 85.7, 80.8, 87.3, 86.84, 87.2, with a placeholder ("??") for an SA system under the BERT/fine-tune regime.]

206
Outline
❑Early SRL approaches [< 2017]
❑Typical neural SRL model components
   ❑Performance analysis
❑Syntax-aware neural SRL models
   ❑What, when and where?
   ❑Performance analysis
   ❑How to incorporate syntax?
❑Syntax-agnostic neural SRL models
   ❑Performance analysis
   ❑Do we really need syntax for SRL?
   ❑Are high-quality contextual embeddings enough for the SRL task?
❑Practical SRL systems
   ❑Should we rely on this pipelined approach?
   ❑End-to-end SRL systems
   ❑Can we jointly predict dependency and span?
❑Machine Representations for SRL
   ❑Autoencoder models: BERT for SRL; incorporating semantic role label definitions
   ❑Autoregressive models: SRL as an MRC task; SRL generation; LLaMA adapters; probing ChatGPT on SRL
❑Conclusion

He et al., 2018
He had dared to defy nature
❑Jointly predicts all predicates, argument spans, and the relations between them.
❑Builds upon the coreference resolution model of [Lee et al., 2017].
❑Embedder: no predicate location is specified; instead, word embeddings are concatenated with the output of a charCNN.
❑Each edge is identified by independently predicting which role, if any, holds between every possible pair of text spans, using aggressive beam pruning for efficiency. The final graph is simply the union of predicted SRL roles (edges) and their associated text spans (nodes).
207
Syntax-agnostic end-to-end SRL system
Luheng He, Kenton Lee, Omer Levy, and Luke Zettlemoyer. 2018. Jointly Predicting Predicates and Arguments in Neural Semantic Role Labeling. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 364–369, Melbourne, Australia. Association for Computational Linguistics.

He et al., 2018
Task: predict a set of labeled predicate-argument relations, over
   - the set of all tokens (candidate predicates),
   - the set of all possible spans (candidate arguments),
   - the set of all SRL labels.
208
Syntax-agnostic end-to-end SRL system

He et al., 2018
To obtain predicate and argument representations:
   - the predicate representation is simply the BiLSTM output at position index p;
   - the argument representation contains the span end points from the BiLSTM output, a soft head word, and an embedded span-width feature.
209
Syntax-agnostic end-to-end SRL system

He et al., 2018
Jointly predicting predicates and arguments in neural SRL
Unary scores: compute a unary score for each candidate predicate and each candidate argument from its encoder representation.
210
Syntax-agnostic end-to-end SRL system

He et al., 2018
Jointly predicting predicates and arguments in neural SRL
Relation scores: compute a relation score between each predicate and each argument, over the number of possible relations.
211
Syntax-agnostic end-to-end SRL system

He et al., 2018
Jointly predicting predicates and arguments in neural SRL
Combined score (classifier): for every (predicate, argument, role) triple, sum the unary predicate score, the unary argument score, and the relation score.
212
Syntax-agnostic end-to-end SRL system
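
A toy PyTorch sketch of this factored scoring (shapes and names are illustrative; the real model scores spans, not single vectors):

import torch
import torch.nn as nn

class PredArgScorer(nn.Module):
    def __init__(self, dim, n_roles):
        super().__init__()
        self.pred_unary = nn.Linear(dim, 1)              # is this word a predicate?
        self.arg_unary = nn.Linear(dim, 1)               # is this span an argument?
        self.relation = nn.Bilinear(dim, dim, n_roles)   # which role links the pair?

    def forward(self, preds, args):
        # preds: (P, dim) predicate vectors; args: (A, dim) argument span vectors
        P, A = preds.size(0), args.size(0)
        p = preds[:, None, :].expand(P, A, -1).reshape(P * A, -1)
        a = args[None, :, :].expand(P, A, -1).reshape(P * A, -1)
        rel = self.relation(p, a).view(P, A, -1)         # pairwise relation scores
        return rel + self.pred_unary(preds)[:, None] + self.arg_unary(args)[None]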

He et al., 2018
An end-to-end neural SRL model
[Bar chart: argument classification F1 on CoNLL05, gold predicates vs. end-to-end: WSJ 87.4 vs. 86; Brown 80.4 vs. 76.1.]
213
Syntax-agnostic end-to-end SRL system

He et al., 2018
He had dared to defy nature
Takeaways
   - First end-to-end neural SRL model.
   - Strong performance against models with gold predicates.
   - Empirically, the model does better at long-range dependencies and agreement with syntactic boundaries, but is weaker at global consistency, due to its strong independence assumptions.
214
Syntax-agnostic end-to-end SRL system

Strubell et al., 2018
Linguistically-Informed Self-Attention for Semantic Role Labeling
Syntax strikes back:
   - a multi-task learning framework with stacked multi-head self-attention;
   - jointly predicts POS tags and predicates;
   - performs parsing;
   - attends to each token's syntactic parse parent while assigning semantic role labels.
He had dared to defy nature
215
Syntax-aware end-to-end SRL system
Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. 2018. Linguistically-Informed Self-Attention for Semantic Role Labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5027–5038, Brussels, Belgium. Association for Computational Linguistics.

Strubell et al., 2018
He had dared to defy nature
❑Replace one attention head with the deep biaffine model of Dozat and Manning (2017).
❑Use a biaffine operator U to obtain attention weights for that single head.
❑Encode both the dependency and the dependency label.
216
Syntax-aware end-to-end SRL system
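
A toy PyTorch sketch of such a biaffine attention head (illustrative; the full model also scores dependency labels and adds bias terms):

import torch
import torch.nn as nn

class BiaffineSyntaxHead(nn.Module):
    def __init__(self, dim, head_dim):
        super().__init__()
        self.as_dep = nn.Linear(dim, head_dim)    # token viewed as a dependent
        self.as_head = nn.Linear(dim, head_dim)   # token viewed as a candidate head
        self.U = nn.Parameter(torch.randn(head_dim, head_dim))

    def forward(self, h):
        d = self.as_dep(h)                        # (seq, head_dim)
        e = self.as_head(h)                       # (seq, head_dim)
        arc_scores = d @ self.U @ e.T             # (seq, seq) biaffine arc scores
        attn = arc_scores.softmax(dim=-1)         # this head's attention weights,
        return attn @ e, arc_scores               # trained against gold parse parents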

Strubell et al., 2018
He had dared to defy nature
One attention head is trained as the syntactic head; the remaining heads stay as purely semantic heads.
217
Syntax-aware end-to-end SRL system

Strubell et al., 2018
Linguistically-Informed Self-Attention for Semantic Role Labeling
He had dared to defy nature
For role labeling, a predicate-specific representation and an argument-specific representation are combined through a bilinear transformation operator.
218
Syntax-aware end-to-end SRL system

Strubell et al., 2018
219
[Bar charts, CoNLL05 F1: WSJ – 79.9 (Täckström et al., 2015), 79.4 (Zhou and Xu, 2015), 82.8, 83.1, 83.9, 84.8 (He et al., 2018), 86 (Strubell et al., 2018); Brown – 71.3, 71.2, 69.4, 72.1, 73.7, 74.1, 76.5 (Strubell et al., 2018).]
Syntax-aware end-to-end SRL system

Strubell et al., 2018
He had dared to defy nature
Takeaways
   - Shows strong performance gains over other methods, with and without gold predicate locations.
   - Incorporating parse information helps resolve span boundary errors (merged spans, split spans, etc.).
220
Syntax-aware end-to-end SRL system

Zhou et al., 2019
❑Semantics is usually considered a layer of linguistics above syntax, so most previous studies focus on how the latter helps the former.
❑Semantics benefits from syntax, but syntax may also benefit from semantics.
❑Joint training (multi-task learning) of the following 5 tasks:
   ❑Semantics: dependency SRL, span SRL, predicates
   ❑Syntax: constituents, dependencies
He had dared to defy nature
221
Syntax-aware end-to-end SRL system
Junru Zhou, Zuchao Li, and Hai Zhao. 2020. Parsing All: Syntax and Semantics, Dependencies and Spans. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4438–4449, Online. Association for Computational Linguistics.

Zhou et al., 2019
Table 2 from the paper: joint learning analysis on the CoNLL-2005, CoNLL-2009, and PTB dev sets.
Interesting insights:
   ❑SEMANTICS: joint training of dependency and span SRL helps improve both (further strengthened by Fei et al., 2021). Further improvement for both is observed when combined with syntactic constituents, but not when combined with syntactic dependencies.
   ❑SYNTAX: though marginal, semantics does improve syntax.
222
Can we jointly predict dependency and span?
Hao Fei, Shengqiong Wu, Yafeng Ren, Fei Li, and Donghong Ji. 2021. Better Combine Them Together! Integrating Syntactic Constituency and Dependency Representations for Semantic Role Labeling. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 549–559, Online. Association for Computational Linguistics.

Jindal et al., 2022
223
SPADE: SPAn and DEpendency SRL model
He had dared to defy nature ([CLS] ... [SEP] dared [SEP] into BERT)
Dependency labels: A0 O O O A2 O; span labels: B-A0 O O O B-A2 I-A2
A multi-task learning framework: train simultaneously on the argument heads and the argument spans, with enclosing constraints.
Observations:
   ❑a slight drop in argument-head performance;
   ❑a gain in argument-span performance.
These observations are consistent with Zhou et al., 2019.
Can we jointly predict dependency and span?
Ishan Jindal, Alexandre Rademaker, Michał Ulewicz, Ha Linh, Huyen Nguyen, Khoi-Nguyen Tran, Huaiyu Zhu, and Yunyao Li. 2022. Universal Proposition Bank 2.0. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 1700–1711, Marseille, France. European Language Resources Association.

Zhou et al., 2019
224
Parsing All: Syntax and Semantics, Dependencies and Spans
[Bar charts, CoNLL05 F1: WSJ – 79.9 (Täckström et al., 2015), 79.4 (Zhou and Xu, 2015), 82.8, 83.1, 84.8 (Tan et al., 2018, ELMo), 86, 87.8 (Zhou et al., 2019, ELMo), 88.7, 88.1, 88.8 (Shi et al., 2019, BERT-S); Brown – 71.3, 71.2, 69.4, 72.1, 74.1, 76.5, 80.2, 81.2, 80.9, 82.1.]
Can we jointly predict dependency and span?

225
Outline
❑Early SRL approaches [< 2017]
❑Typical neural SRL model components
   ❑Performance analysis
❑Syntax-aware neural SRL models
   ❑What, when and where?
   ❑Performance analysis
   ❑How to incorporate syntax?
❑Syntax-agnostic neural SRL models
   ❑Performance analysis
   ❑Do we really need syntax for SRL?
   ❑Are high-quality contextual embeddings enough for the SRL task?
❑Practical SRL systems
   ❑Should we rely on this pipelined approach?
   ❑End-to-end SRL systems
   ❑Can we jointly predict dependency and span?
❑Machine Representations for SRL
   ❑Autoencoder models: BERT for SRL; incorporating semantic role label definitions
   ❑Autoregressive models: SRL as an MRC task; SRL generation; LLaMA adapters; probing ChatGPT on SRL
❑Conclusion

Understanding BERT-based models better for better SRL performance.
Understand BERT for SRL
226
Ilia Kuznetsov and Iryna Gurevych. 2020. A matter of framing: The impact of linguistic formalism on probing results. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 171–182, Online. Association for Computational Linguistics.
BERT “rediscovers” the classical NLP pipeline [Tenney et al., 2019]
❑Lower layers tend to encode mostly lexical-level information, while
❑Upper layers seem to favor sentence-level information.

Understanding BERT-based models better for better SRL performance.
Understand BERT for SRL
[Model diagram: BERT over "[CLS] He had dared to defy nature [SEP] dared [SEP]"; each token's representation is a function f of the activations from all BERT layers]
227
Simone Conia and Roberto Navigli. 2022. Probing for Predicate Argument Structures in Pretrained Language Models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4622–4632, Dublin, Ireland. Association for Computational Linguistics.
Static: last-layer activations as static embeddings
Top-4: concatenate the activations of the top 4 layers
W-avg: parametric weighted sum of all layer activations
(see the sketch below)
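The sketch below shows the three pooling strategies with HuggingFace Transformers. The model choice is an illustrative assumption, and in the actual probing setup the W-avg weights are trained jointly with the probe rather than left at initialization.

```python
# Minimal sketch of Static / Top-4 / W-avg layer pooling over BERT.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased", output_hidden_states=True)

inputs = tokenizer("He had dared to defy nature", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).hidden_states   # embeddings + 12 layers

layers = torch.stack(hidden[1:])             # (12, batch, seq_len, 768)

static = layers[-1]                          # "Static": last layer only
top4 = torch.cat(tuple(layers[-4:]), dim=-1) # "Top-4": concatenate last 4 layers

# "W-avg": scalar weights over layers (trainable in practice).
w = torch.nn.Parameter(torch.zeros(layers.size(0)))
w_avg = (torch.softmax(w, dim=0)[:, None, None, None] * layers).sum(0)
```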

Understanding BERT-based models better for better SRL performance.
Understand BERT for SRL
[Model diagram: BERT over "[CLS] He had dared to defy nature [SEP] dared [SEP]"; each token's representation is a function f of the activations from all BERT layers]
228
Interesting Insights
❑Predicate senses and argument structures are encoded at different layers in LMs
❑Verbal and nominal predicate-argument structures are represented differently across the layers of an LM
❑SRL systems benefit from treating them separately

Label-aware NLP
•Model is given the definitions of labels, and
can effectively leverage them in many
tasks
▪Sentiment/entailment: (Schick and Schutze,
2021)
▪Event extraction: (Du and Cardie, 2020;
Hongming et al., 2021)
▪Word sense disambiguation: (Kumar et al., 2019)
•Strong even in few-shot settings
•Many more, but NOT for SRL (why?)
▪Semantic roles are specific to predicates
▪There are many predicates, thus many roles; the label space is very sparse
▪~8,500 predicate senses in the CoNLL09 data
▪~8,500 × 3 argument labels ≈ 25K
229
Incorporating Role Definitions

Label-aware NLP for SRL
230
Incorporating Role Definitions
Li Zhang, Ishan Jindal, and Yunyao Li. 2022. Label Definitions Improve Semantic Role Labeling. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5613–5620, Seattle, United States. Association for Computational Linguistics.
[Zhang et al., 2022]
❑Make n+1 copies of the sentence, where n is the number of core arguments defined for the frame
❑n copies, one per core argument
❑+1 for the contextual arguments
❑Append the label definition at the end of each copy
❑Convert the K-class classification problem into binary classification
❑That is, determine whether a token is the "worker" or not in this example
(see the input-construction sketch below)
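The sketch below illustrates the input construction: one copy of the sentence per core-argument definition plus one for contextual arguments, each reduced to binary token classification. The frame name and role glosses are illustrative assumptions, not actual PropBank frame-file entries.

```python
# Hypothetical input construction for definition-augmented SRL.
sentence = "He had dared to defy nature"
core_arg_definitions = {
    "ARG0": "darer",          # hypothetical PropBank-style role glosses
    "ARG1": "daring action",
}

def build_inputs(sentence, definitions):
    """Return (text, role) pairs: sentence plus appended label definition."""
    inputs = [(f"{sentence} [SEP] {role}: {gloss}", role)
              for role, gloss in definitions.items()]
    inputs.append((f"{sentence} [SEP] contextual arguments", "ARGM"))
    return inputs

for text, role in build_inputs(sentence, core_arg_definitions):
    # Each copy goes to a binary token classifier: for every token,
    # "does this token belong to the argument described by the definition?"
    print(role, "->", text)
```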

231
Incorporating Role Definitions
Interesting Insights
Low-Frequency Predicates
- SRL suffers from the long-tail phenomenon.
- LD outperforms the base model by up to 4.4 argument F1 for unseen predicates, notably helping with low-frequency predicates.
Few-Shot Learning
- LD outperforms the base model by up to 3.2 F1 in- and out-of-domain.
- The performance gap diminishes as the training size approaches 100,000.
Distant Domain Adaptation
- Evaluate models trained on CoNLL09 (news articles) on the Biology PropBank.
- The LD model achieves 55.5 argument F1, outperforming the base model at 54.6.

232
Incorporating Role Definitions
Zheng, Ce, Yiming Wang, and Baobao Chang. 2023. Query Your Model with Definitions in FrameNet: An Effective Method for Frame Semantic Role Labeling. In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 37, no. 11, pp. 14029–14037.
Query your model with definitions in FrameNet: an effective method for frame semantic role labeling
- A query-based framework to model label semantics and strengthen interactions between arguments in FSRL.
- AGED achieves better performance on FSRL, especially in zero-shot and few-shot scenarios.

SRL as an extractive machine reading comprehension (MRC) task [Wang et al., 2022] (a query-construction sketch follows below)
SRL as MRC Task
233
Nan Wang, Jiwei Li, Yuxian Meng, Xiaofei Sun, Han Qiu, Ziyao Wang, Guoyin Wang, and Jun He. 2022. An MRC Framework for Semantic Role Labeling. In Proceedings of the 29th International Conference on Computational Linguistics, pages 2188–2198, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
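To illustrate the recipe, the sketch below builds one natural-language query per (predicate, role) pair and extracts the answer span with an off-the-shelf extractive QA model. The query templates and the model choice are illustrative assumptions, not the exact setup of Wang et al. (2022).

```python
# Hedged sketch: SRL arguments as answers to role-specific MRC queries.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

sentence = "He had dared to defy nature"
queries = {  # hypothetical query templates for the predicate "dared"
    "ARG0": 'Who is the one daring, given the predicate "dared"?',
    "ARG1": 'What is the daring action, given the predicate "dared"?',
}

for role, question in queries.items():
    ans = qa(question=question, context=sentence)
    print(role, "->", ans["answer"], f'(score={ans["score"]:.2f})')
```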

DEEPSTRUCT: Pretraining of Language Models for Structure Prediction [Wang et al., 2022]
SRL Generation
234
Wang, Chenguang, Xiao Liu, Zui Chen, Haoyun Hong, Jie Tang, and Dawn Song. 2022. DeepStruct: Pretraining of Language Models for Structure Prediction. In Findings of the Association for Computational Linguistics: ACL 2022, pp. 803–823.
- SRL as a structure prediction task
- Reformulating structure prediction as a series of unit tasks: triple prediction tasks (see the sketch below)
- Showed significant performance gains over autoencoder models.
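The sketch below shows one way such unit triple-prediction targets can be verbalized for a seq2seq model. The exact verbalization format is an illustrative assumption, not DeepStruct's actual one.

```python
# Hypothetical verbalization of SRL as (head, relation, tail) triples.
sentence = "He had dared to defy nature"
triples = [
    ("dared", "instance of", "dare.01"),
    ("dared", "ARG0", "He"),
    ("dared", "ARG1", "to defy nature"),
]
# The seq2seq model is trained to generate this target string from the input.
target = " ; ".join(f"({h} ; {r} ; {t})" for h, r, t in triples)
print(f"input:  {sentence}\ntarget: {target}")
```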

LLaMA-Adapter: Efficient Fine-tuning of Large Language Models with Zero-initialized Attention
SRL Adapters
235
- A lightweight adaptation method to efficiently fine-tune LLaMA into an instruction-following model
- Fine-tunes RoBERTa-large for NER and SRL structured prediction tasks
- Showed significant performance gains over autoencoder models.
Renrui Zhang, Jiaming Han, Chris Liu, Aojun Zhou, Pan Lu, Yu Qiao, Hongsheng Li, Peng Gao. "LLaMA-Adapter: Efficient Fine-tuning of Large Language Models with Zero-initialized Attention", ICLR 2024.

Pushing the limits of ChatGPT on NLP tasks
Probing ChatGPT on SRL
236
- Fine-tuned models for better demonstration retrieval
- Transforming tasks into formats better tailored to the generation nature of LLMs
- A self-verification strategy to address the hallucination issue of LLMs
Sun, Xiaofei, Linfeng Dong, Xiaoya Li, Zhen Wan, Shuhe Wang, Tianwei Zhang, Jiwei Li et al. "Pushing the limits of ChatGPT on NLP tasks." arXiv preprint arXiv:2306.09719 (2023).

237
Conclusion
Observations
❑Syntax matters
❑Yes, at least for argument spans.
❑Not for dependency SRL.
❑Eventually, you need syntax to compute spans.
❑SRL can help syntax
❑Contextualized embeddings
❑Carry the major chunk of performance gains in SRL.
❑Fine-tuning LMs for SRL further raised the bar.
❑End-to-end systems
❑More practical, but computationally expensive
❑Predicate and argument tasks shown to improve each other.
Opportunities
❑SRL in few-shot settings
❑Probe SRL information from large LMs.
❑Given the sparsity of the SRL label space, finding the right prompt is quite challenging.
❑Multilingual SRL
❑Multilingual SRL resources
❑Universal PropBanks for SRL
❑A long way to go
❑Datasets
❑Datasets without predicate sense annotations
❑Ethical issues
❑SRL model re-evaluations

238
References
1. Merchant, A., Rahimtoroghi, E., Pavlick, E., & Tenney, I. (2020, November). What Happens To BERT Embeddings During Fine-tuning? In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (pp. 33-44).
2. Tan, Z., Wang, M., Xie, J., Chen, Y., & Shi, X. (2018, April). Deep semantic role labeling with self-attention. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 32, No. 1).
3. Marcheggiani, D., Frolov, A., & Titov, I. (2017, August). A Simple and Accurate Syntax-Agnostic Neural Model for Dependency-based Semantic Role Labeling. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017) (pp. 411-420).
4. A Unified Syntax-aware Framework for Semantic Role Labeling.
5. Tian, Y., Qin, H., Xia, F., & Song, Y. (2022, June). Syntax-driven Approach for Semantic Role Labeling. In Proceedings of the Thirteenth Language Resources and Evaluation Conference (pp. 7129-7139).
6. Zhang, Z., Strubell, E., & Hovy, E. (2021, August). Comparing span extraction methods for semantic role labeling. In Proceedings of the 5th Workshop on Structured Prediction for NLP (SPNLP 2021) (pp. 67-77).
7. Fei, H., Wu, S., Ren, Y., Li, F., & Ji, D. (2021, August). Better combine them together! Integrating syntactic constituency and dependency representations for semantic role labeling. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 549-559).
8. Wang, N., Li, J., Meng, Y., Sun, X., & He, J. (2021). An MRC framework for semantic role labeling. arXiv preprint arXiv:2109.06660.
9. Blloshmi, R., Conia, S., Tripodi, R., & Navigli, R. (2021). Generating Senses and RoLes: An End-to-End Model for Dependency- and Span-based Semantic Role Labeling. In IJCAI (pp. 3786-3793).
10. Zhang, L., Jindal, I., & Li, Y. (2022, July). Label Definitions Improve Semantic Role Labeling. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 5613-5620).
11. Cai, J., He, S., Li, Z., & Zhao, H. (2018, August). A full end-to-end semantic role labeler, syntactic-agnostic over syntactic-aware? In Proceedings of the 27th International Conference on Computational Linguistics (pp. 2753-2765).
12. He, S., Li, Z., & Zhao, H. (2019, November). Syntax-aware Multilingual Semantic Role Labeling. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) (pp. 5350-5359).
13. Conia, S., Bacciu, A., & Navigli, R. (2021, June). Unifying cross-lingual Semantic Role Labeling with heterogeneous linguistic resources. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 338-351).

239
References
14. Conia, S., & Navigli, R. (2020, December). Bridging the gap in multilingual semantic role labeling: a language-agnostic approach. In Proceedings of the 28th International Conference on Computational Linguistics (pp. 1396-1410).
15. Kasai, J., Friedman, D., Frank, R., Radev, D., & Rambow, O. (2019, June). Syntax-aware Neural Semantic Role Labeling with Supertags. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) (pp. 701-709).
16. He, L., Lee, K., Levy, O., & Zettlemoyer, L. (2018, July). Jointly Predicting Predicates and Arguments in Neural Semantic Role Labeling. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) (pp. 364-369).
17. Shi, T., Malioutov, I., & İrsoy, O. (2020, November). Semantic Role Labeling as Syntactic Dependency Parsing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 7551-7571).
18. Zhou, J., Li, Z., & Zhao, H. (2020, November). Parsing All: Syntax and Semantics, Dependencies and Spans. In Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 4438-4449).
19. Wang, Y., Johnson, M., Wan, S., Sun, Y., & Wang, W. (2019, July). How to best use syntax in semantic role labelling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (pp. 5338-5343).
20. He, S., Li, Z., Zhao, H., & Bai, H. (2018, July). Syntax for semantic role labeling, to be, or not to be. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 2061-2071).
21. Marcheggiani, D., & Titov, I. (2020, November). Graph Convolutions over Constituent Trees for Syntax-Aware Semantic Role Labeling. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 3915-3928).
22. Marcheggiani, D., & Titov, I. (2017, September). Encoding Sentences with Graph Convolutional Networks for Semantic Role Labeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (pp. 1506-1515).
23. Li, Z., Zhao, H., Wang, R., & Parnow, K. (2020, November). High-order Semantic Role Labeling. In Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 1134-1151).

240
References
24. Lyu, C., Cohen, S. B., & Titov, I. (2019, November). Semantic Role Labeling with Iterative Structure Refinement. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) (pp. 1071-1082).
25. Li, Z., He, S., Zhao, H., Zhang, Y., Zhang, Z., Zhou, X., & Zhou, X. (2019, July). Dependency or span, end-to-end uniform semantic role labeling. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 33, No. 01, pp. 6730-6737).
26. Ouchi, H., Shindo, H., & Matsumoto, Y. (2018). A Span Selection Model for Semantic Role Labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (pp. 1630-1642).
27. Strubell, E., Verga, P., Andor, D., Weiss, D., & McCallum, A. (2018). Linguistically-Informed Self-Attention for Semantic Role Labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (pp. 5027-5038).
28. He, L., Lee, K., Lewis, M., & Zettlemoyer, L. (2017, July). Deep semantic role labeling: What works and what's next. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 473-483).
29. FitzGerald, N., Täckström, O., Ganchev, K., & Das, D. (2015, September). Semantic role labeling with neural network factors. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (pp. 960-970).
30. Guan, C., Cheng, Y., & Zhao, H. (2019, June). Semantic Role Labeling with Associated Memory Network. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) (pp. 3361-3371).
31. Jindal, I., Aharonov, R., Brahma, S., Zhu, H., & Li, Y. (2020). Improved Semantic Role Labeling using Parameterized Neighborhood Memory Adaptation. arXiv preprint arXiv:2011.14459.
32. Jindal, I., Rademaker, A., Tran, K.-N., Zhu, H., Kanayama, H., Danilevsky, M., & Li, Y. (2023). PriMeSRL-Eval: A Practical Quality Metric for Semantic Role Labeling Systems Evaluation. In Findings of the Association for Computational Linguistics: EACL 2023 (pp. 1761-1773).
33. Zhang, L., Jindal, I., & Li, Y. (2022). Label Definitions Improve Semantic Role Labeling. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 5613-5620).

Thank You

Meaning Representations for Natural Languages Tutorial Part 3b
Modeling Meaning Representation: AMR
Julia Bonn, Jeffrey Flanigan, Jan Hajič, Ishan Jindal, Yunyao Li, Nianwen Xue

❏AMR Parsing
❏Sequence-to-sequence methods
❏Pre/post processing
❏Transition-based methods
❏Graph-based methods
❏Evaluation
❏AMR Generation:
❏Sequence-to-sequence methods
❏Graph-based methods
❏Silver data
❏Pre-training
Outline
243

❏Linearize the AMR graphs
❏AMR parsing as sequence-to-sequence modeling
❏Can use any seq2seq method and pre-training method (BART, etc.); see the linearization sketch below
Konstas et al. Neural AMR: Sequence-to-Sequence Models for Parsing and Generation. ACL 2017, inter alia.
Seq2seq AMR Parsing
244
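To make linearization concrete, here is a minimal sketch using the `penman` library (pip install penman). The sentence/graph pair is a toy example; real systems apply further normalization (including the variable-handling tricks on the following slides) before training.

```python
# Minimal AMR linearization for seq2seq parsing, using penman.
import penman

amr = """
(d / dare-01
   :ARG0 (h / he)
   :ARG1 (f / defy-01
            :ARG0 h
            :ARG1 (n / nature)))
"""
graph = penman.decode(amr)

# One common linearization: the PENMAN string squeezed onto one line.
linearized = " ".join(penman.encode(graph).split())
print(linearized)
# A seq2seq model (e.g., BART) is then trained on pairs like:
#   "He had dared to defy nature."  ->  linearized
```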

❏Linearization order of the AMR graph usually matters
AMR Linearization
Bevilacqua et al. One SPRING to Rule Them Both: Symmetric AMR
Semantic Parsing and Generation without a Complex Pipeline. AAAI
2021
245

❏Linearization order of the AMR graph usually matters
AMR Linearization
van Noord & Bos. Neural Semantic Parsing by Character-based Translation: Experiments with Abstract Meaning Representations. Computational Linguistics in the Netherlands Journal. 2017.
246

❏Remove variables and add them back in with post-processing heuristics (see the sketch below)
Removing Variables
247
van Noord & Bos. Neural Semantic Parsing by Character-based Translation: Experiments with Abstract Meaning Representations. Computational Linguistics in the Netherlands Journal. 2017.
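Here is a hedged sketch of the idea: strip the "v /" variable prefixes before training, and re-introduce fresh variables in post-processing. The regexes are simplifying assumptions that ignore co-referring (re-entrant) variables, which the real heuristics must handle.

```python
# Toy variable removal / restoration for variable-free AMR training.
import re

def remove_variables(linearized_amr: str) -> str:
    # "(d / dare-01" -> "(dare-01"
    return re.sub(r"\b[a-z][0-9]*\s*/\s*", "", linearized_amr)

def restore_variables(variable_free_amr: str) -> str:
    # "(dare-01" -> "(v0 / dare-01", with a fresh variable per node.
    counter = iter(range(10**6))
    return re.sub(r"\(\s*([^\s()]+)",
                  lambda m: f"(v{next(counter)} / {m.group(1)}",
                  variable_free_amr)

s = "(dare-01 :ARG0 (he) :ARG1 (defy-01 :ARG1 (nature)))"
print(restore_variables(s))
```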

❏Rather than removing variables (lossy), use special tokens
Removing Variables
248
Bevilacqua et al. One SPRING to Rule Them Both: Symmetric AMR
Semantic Parsing and Generation without a Complex Pipeline. AAAI 2021

Pre-Processing for Transition- and Graph-Based Parsing: Recategorization
249
Figure from Zhou et al. Structure-aware Fine-tuning of Sequence-to-sequence Transformers for Transition-based AMR Parsing. EMNLP 2021.
❏Collapsing verbalized concepts
❏Anonymizing named entities (recovered with alignments)
❏Removing sense nodes (predict the most frequent sense)
❏Removing wiki links (predict with a wikifier)
Zhang et al. 2019. AMR Parsing as Sequence-to-Graph Transduction. ACL 2019.

❏AMR Parsing
❏Sequence-to-sequence methods
❏Pre/post processing
❏Transition-based methods
❏Graph-based methods
❏Evaluation
❏AMR Generation:
❏Sequence-to-sequence methods
❏Graph-based methods
❏Silver data
❏Pre-training
Outline
250

❏Construct the graph using a sequence of actions that build the graph (toy sketch below)
❏Use a classifier to predict the next action
❏Inspired by transition-based dependency parsing
Wang et al. A Transition-based Algorithm for AMR Parsing. NAACL 2015, inter alia.
Transition-Based AMR Parsing
251
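Here is a toy illustration of the idea: a hand-written action sequence is applied deterministically to build nodes and edges, whereas in a real parser a classifier predicts each action. The action inventory (SHIFT, NODE, LA, RA) is a simplified assumption loosely in the spirit of these systems, not the exact action set of any cited parser.

```python
# Toy transition system: apply a predicted action sequence to build a graph.
def apply_actions(tokens, actions):
    nodes, edges, stack = [], [], []
    buffer = list(tokens)
    for act in actions:
        if act == "SHIFT":                      # advance to the next token
            buffer.pop(0)
        elif act.startswith("NODE:"):           # create a concept node
            nodes.append(act.split(":", 1)[1])
            stack.append(len(nodes) - 1)
        elif act.startswith("LA:"):             # labeled edge: top <- previous
            edges.append((stack[-1], act.split(":", 1)[1], stack[-2]))
        elif act.startswith("RA:"):             # labeled edge: previous -> top
            edges.append((stack[-2], act.split(":", 1)[1], stack[-1]))
    return nodes, edges

tokens = ["He", "had", "dared", "to", "defy", "nature"]
actions = ["SHIFT", "SHIFT", "NODE:dare-01", "SHIFT", "NODE:defy-01",
           "RA:ARG1", "SHIFT", "NODE:nature", "RA:ARG1"]
print(apply_actions(tokens, actions))
```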

Transition-Based AMR Parsing
252
Zhou et al. AMR Parsing with Action-Pointer Transformer.
NAACL 2021

Transition-Based AMR Parsing
253
Zhou et al. AMR Parsing with Action-Pointer Transformer.
NAACL 2021
Zhou et al. Structure-aware Fine-tuning of Sequence-to-sequence Transformers for Transition-based AMR Parsing. EMNLP 2021.
Simplified Transition Actions

❏Simplified system: the transition system has 6 actions
Transition-Based AMR Parsing
254
Zhou et al. Structure-aware Fine-tuning of Sequence-to-sequence Transformers for Transition-based AMR Parsing. EMNLP 2021.

Transition-Based AMR Parsing
255
Zhou et al. AMR Parsing with Action-Pointer Transformer.
NAACL 2021
Zhou et al. Structure-aware Fine-tuning of Sequence-to-sequence Transformers for Transition-based AMR Parsing. EMNLP 2021.
Simplified Transition Actions

Transition-Based AMR Parsing
256
Zhou et al. Structure-aware Fine-tuning of Sequence-to-sequence Transformers for Transition-based AMR Parsing. EMNLP 2021.

❏AMR Parsing
❏Sequence-to-sequence methods
❏Pre/post processing
❏Transition-based methods
❏Graph-based methods
❏Evaluation
❏AMR Generation:
❏Sequence-to-sequence methods
❏Graph-based methods
❏Silver data
❏Pre-training
Outline
258

❏Graph-based methods use the graph structure when predicting (a biaffine edge-scoring sketch follows below)
❏Inspired by graph-based methods for dependency parsing
❏Can be done incrementally or using a structured prediction method
Flanigan et al. A Discriminative Graph-Based Parser for the Abstract Meaning Representation. ACL 2014, inter alia.
Graph-Based AMR Parsing
259
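As one concrete flavor of graph-based scoring, the sketch below implements a biaffine edge scorer over candidate node states, in the spirit of graph-based dependency parsing. All dimensions are illustrative assumptions, and real AMR parsers additionally predict edge labels and use structured decoding over these scores.

```python
# Sketch: score every (head, dependent) node pair with a biaffine function.
import torch
import torch.nn as nn

class BiaffineEdgeScorer(nn.Module):
    def __init__(self, hidden=256, arc_dim=128):
        super().__init__()
        self.head_mlp = nn.Sequential(nn.Linear(hidden, arc_dim), nn.ReLU())
        self.dep_mlp = nn.Sequential(nn.Linear(hidden, arc_dim), nn.ReLU())
        self.U = nn.Parameter(torch.randn(arc_dim, arc_dim))

    def forward(self, node_states):            # (num_nodes, hidden)
        h = self.head_mlp(node_states)         # head-role projections
        d = self.dep_mlp(node_states)          # dependent-role projections
        return h @ self.U @ d.T                # (num_nodes, num_nodes) scores

scores = BiaffineEdgeScorer()(torch.randn(5, 256))
print(scores.shape)  # torch.Size([5, 5]): score of edge i -> j
```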

Graph-Based AMR Parsing
260
Cai & Lam. AMR Parsing via Graph-Sequence Iterative Inference. ACL 2020.

Graph-Based AMR Parsing
261
Cai & Lam. AMR Parsing via Graph-Sequence Iterative Inference. ACL 2020.

Graph-Based AMR Parsing
262
Cai & Lam 2020. AMR Parsing via
Graph-Sequence Iterative Inference.
ACL 2020.

Graph-Based AMR Parsing
263
Cai & Lam. AMR Parsing via Graph-Sequence Iterative Inference. ACL 2020.

❏AMR Parsing
❏Sequence-to-sequence methods
❏Pre/post processing
❏Transition-based methods
❏Graph-based methods
❏Evaluation
❏AMR Generation:
❏Sequence-to-sequence methods
❏Graph-based methods
❏Silver data
❏Pre-training
Outline
264

❏Can use fine-grained evaluation to examine strengths and weaknesses (a Smatch sketch follows below)
Evaluation
265
Damonte et al. An Incremental Parser
for Abstract Meaning Representation.
EACL 2017
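As a concrete starting point, here is a hedged sketch of computing the overall Smatch score in Python; it assumes the `smatch` package's `get_amr_match` API (pip install smatch), and the two toy AMRs are illustrative. The fine-grained breakdown of Damonte et al. (2017) is computed in the same spirit by restricting which triples are compared (concepts, named entities, reentrancies, SRL roles, etc.).

```python
# Hedged sketch: Smatch from Python, assuming smatch.get_amr_match.
import smatch

gold = "(d / dare-01 :ARG0 (h / he) :ARG1 (f / defy-01 :ARG0 h))"
pred = "(d / dare-01 :ARG0 (h / he))"

# Returns matched / predicted / gold triple counts.
best, n_pred, n_gold = smatch.get_amr_match(pred, gold)
precision, recall = best / n_pred, best / n_gold
f1 = 2 * precision * recall / (precision + recall)
print(f"Smatch P={precision:.2f} R={recall:.2f} F1={f1:.2f}")
```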

❏AMR Parsing
❏Sequence-to-sequence methods
❏Pre/post processing
❏Transition-based methods
❏Graph-based methods
❏Evaluation
❏AMR Generation:
❏Sequence-to-sequence methods
❏Graph-based methods
❏Silver data
❏Pre-training
Outline
266

AMR Generation: Overview
267
Hao et al. A Survey: Neural Networks for AMR-to-Text. 2022.

❏Linearize the AMR graphs
❏AMR generation as sequence-to-sequence modeling
❏Can use any seq2seq method and pre-training method (BART, etc.)
AMR Generation: Seq2seq
268

AMR Generation: Graph-Based
269
Hao et al. A Survey: Neural Networks for AMR-to-Text. 2022.

AMR Generation: Graph-Based
270
Hao et al. Heterogeneous Graph Transformer for Graph-to-Sequence Learning. ACL 2020.

AMR Generation: Graph-Based
271
Damonte & Cohen. Structural Neural
Encoders for AMR-to-text Generation.
NAACL 2019

AMR Generation: Comparison
272
Hao et al. A Survey: Neural Networks
for AMR-to-Text. 2022

❏AMR Parsing
❏Sequence-to-sequence methods
❏Pre/post processing
❏Transition-based methods
❏Graph-based methods
❏Evaluation
❏AMR Generation:
❏Sequence-to-sequence methods
❏Graph-based methods
❏Silver data
❏Pre-training
Outline
273

❏Gold data is human-labeled data
❏Silver data is produced by running an existing parser on unlabeled data
❏You can add silver data to the training data to improve performance (see the sketch below)
❏Usually people use Gigaword for the silver data (more on this later)
Silver Data (Semi-supervised learning)
274
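A minimal, self-contained sketch of the recipe follows. All components are toy stand-ins: the "parser" and "training" functions below are placeholders, not real implementations, and the two-stage schedule (gold + silver, then gold-only fine-tuning) is one common variant of the recipe rather than a fixed prescription.

```python
# Toy silver-data pipeline: parse unlabeled text, mix with gold data.
def existing_parser(sentence: str) -> str:
    return f'(x / parse-placeholder :snt "{sentence}")'  # toy silver AMR

gold = [("He defied nature.",
         "(d / defy-01 :ARG0 (h / he) :ARG1 (n / nature))")]
unlabeled = ["The committee approved the plan.", "Rain delayed the launch."]

silver = [(s, existing_parser(s)) for s in unlabeled]

def train(pairs, stage):
    print(f"{stage}: training on {len(pairs)} pairs")  # stand-in for training

train(gold + silver, "stage 1 (gold + silver)")
train(gold, "stage 2 (fine-tune on gold only)")
```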

❏Silver data sometimes helps parsing, usually on out-of-domain data
Silver Data for AMR Parsing
275
In-domain
Out-of-domain
Bevilacqua et al. One SPRING to Rule Them Both:
Symmetric AMR Semantic Parsing and Generation
without a Complex Pipeline. AAAI 2021

❏Silver data always helps generation, but be careful! Results can be misleading!
❏Silver data hurts on out-of-domain data
Silver Data for AMR Generation
276
In-domain (official test sets)
Out-of-domain
Bevilacqua et al. One SPRING to Rule Them Both:
Symmetric AMR Semantic Parsing and Generation
without a Complex Pipeline. AAAI 2021
Baseline + Silver data

❏Silver data always helps generation, but be careful! Results can be misleading!
Silver Data for AMR Generation
277
Du & Flanigan. Avoiding Overlap in Data
Augmentation for AMR-to-Text Generation. ACL
2020

❏Recommend excluding parts of Gigaword that may overlap with test data
Silver Data for AMR Generation
278
Du & Flanigan. Avoiding Overlap in Data
Augmentation for AMR-to-Text Generation. ACL
2020
https://github.com/jlab-nlp/amr-clean

❏AMR Parsing
❏Sequence-to-sequence methods
❏Pre/post processing
❏Transition-based methods
❏Graph-based methods
❏Evaluation
❏AMR Generation:
❏Sequence-to-sequence methods
❏Graph-based methods
❏Silver data
❏Pre-training
Outline
279

❏Pre-training the encoder, such as BERT, helps a lot
❏Pre-training the decoder, such as BART, helps even more
❏Structural pre-training helps as well
AMR Parsing: Pretraining
280
Bai et al. Graph Pre-training for AMR Parsing and Generation. ACL 2022

❏Structural pre-training helps as well
Structural Pretraining
281
Bai et al. Graph Pre-training for AMR Parsing and Generation. ACL 2022

❏Structural pre-training helps as well
Structural Pretraining
282
Bai et al. Graph Pre-training for AMR Parsing and Generation. ACL 2022

AMR Generation: Pretraining
283
Hao et al. A Survey: Neural Networks
for AMR-to-Text. 2022
❏Pre-training helps a lot
❏Pre-training the encoder and decoder helps the most (BART)

AMR Generation: Pretraining
284
Hao et al. A Survey: Neural Networks
for AMR-to-Text. 2022
❏Pre-training helps a lot
❏Pre-training the encoder and decoder helps the most (BART)

❏There's a lot more work we didn't have time to cover
❏See the AMR bibliography
Lots More Work
285
https://nert-nlp.github.io/AMR-Bibliography/

Meaning Representations for Natural Languages Tutorial Part 4
Applying Meaning Representations
Jeffrey Flanigan, Tim O’Gorman, Ishan Jindal, Yunyao Li, Nianwen Xue, Julia Bonn

Information Extraction
•OneIE [Lin et al., ACL 2020] framework extracts the information graph from a given sentence in
four steps: encoding, identification, classification, and decoding

Moving from Seq-to-Graph to Graph-to-Graph
Slide credit: Heng Ji
●AMR converts an input sentence into a directed and acyclic graph structure with fine-grained node and edge type labels
●AMR parsing shares inherent similarities with information networks (IE output)
●Similar node and edge semantics
●Similar graph topology
●Semantic graphs can better capture non-local context in a sentence
Zixuan Zhang, Heng Ji. AMR-IE: An AMR-guided encoding and decoding framework for IE. NAACL 2021. Slide credit: Heng Ji
Key Idea:
Exploit the similarity between AMR and IE for joint information extraction

AMR-IE
Zixuan Zhang, Heng Ji. AMR-IE: An AMR-guided encoding and decoding framework for IE. NAACL 2021. Slide credit: Heng Ji

AMR Guided Graph Encoding: Using an Edge-Conditioned GAT
Zixuan Zhang, Heng Ji. AMR-IE: An AMR-guided encoding and decoding framework for IE. NAACL 2021. Slide credit: Heng Ji
●Map each candidate entity and event to AMR nodes.
●Update entity and event representations using an edge-conditioned GAT to incorporate information from AMR neighbors.

AMR Guided Graph Decoding: Ordered decoding
guided by AMR
Zixuan Zhang, Heng Ji. AMR-IE: An AMR-guided encoding and decoding framework for IE. NAACL 2021. Slide credit: Heng Ji
●Beam-search-based decoding as in OneIE (Lin et al. 2020).
●The decoding order of candidate nodes is determined by the hierarchy in the AMR, in a top-down manner.
●E.g., the correct ordered decoding in the following graph is:

Examples on how AMR graphs help
Slide credit: Heng Ji

Leverage Meaning Representation for High-quality Rule-based IE
Llio Humphreys et al. Populating Legal Ontologies using Semantic Role Labeling. LREC 2020.
extraction rules

Machine Translation
●Repeating words with the same meaning
●MT methods using Transformers can make semantic errors
●Hallucinate information not contained in the source

Machine Translation
Goal: inject semantic information into machine translation
This is mostly due to failing to accurately capture the semantics of the source in some cases.

Machine Translation
Song et al. Semantic Neural Machine Translation using
AMR. TACL 2019.

Machine Translation
Nguyen et al. Improving Neural Machine
Translation with AMR Semantic Graphs.
Hindawi Mathematical Problems in
Engineering 2021.

Machine Translation
Nguyen et al. Improving Neural Machine
Translation with AMR Semantic Graphs.
Hindawi Mathematical Problems in
Engineering 2021.

Machine Translation
Li & Flanigan. Improving Neural Machine Translation
with the Abstract Meaning Representation by Combining
Graph and Sequence Transformers. DLG4NLP 2022.

Machine Translation
Li & Flanigan. Improving Neural Machine Translation with
the Abstract Meaning Representation by Combining
Graph and Sequence Transformers. DLG4NLP 2022.

Machine Translation
Li & Flanigan. Improving Neural Machine Translation
with the Abstract Meaning Representation by Combining
Graph and Sequence Transformers. DLG4NLP 2022.

Summarization
Liao et al. Abstract Meaning Representation for Multi-Document Summarization. ICCL 2018

Summarization
Liao et al. Abstract Meaning Representation for Multi-Document Summarization. ICCL 2018

Natural Language Inference
Does premise P justify an inference to hypothesis H?
P: The information from the actor stopped the banker.
H: The banker stopped the actor.

Natural Language Inference
Does premise P justify an inference to hypothesis H?
P: The information from the actor stopped the banker.
H: The banker stopped the actor.
Shallow heuristics due to dataset biases (e.g., lexical overlap) lead to low generalization on out-of-distribution evaluation sets.
The HANS challenge dataset [McCoy et al., 2019] showed that NLI models trained on the MNLI or SNLI datasets get fooled easily by heuristics when the input sentence pairs have high lexical similarity.

Semantic information (SRL):
○Improves the semantic knowledge of NLI models
○Less prone to dataset biases
How Can Meaning Representation Help?
P: The information from the actor stopped the banker.
H: The banker stopped the actor.
[Diagram: SRL annotations (VERB, ARG0, ARG1) over P and H, showing the role reversal between the two sentences]

SemBERT: Semantics-Aware BERT
Zhuosheng Zhang, Yuwei Wu, Hai Zhao, Zuchao Li, Shuailiang Zhang, Xi Zhou, Xiang Zhou: Semantics-Aware BERT for Language Understanding. AAAI 2020.
Incorporates SRL information with BERT representations (a fusion sketch follows below).
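A hedged sketch of the fusion idea: embed the SRL tag sequence and concatenate it with BERT's token representations before the task classifier. The label inventory, dimensions, and single-predicate simplification are assumptions; SemBERT itself aggregates tag sequences over multiple predicates.

```python
# Toy SemBERT-style fusion of SRL tag embeddings with BERT outputs.
import torch
import torch.nn as nn

NUM_ROLES, ROLE_DIM, BERT_DIM = 20, 32, 768

role_embed = nn.Embedding(NUM_ROLES, ROLE_DIM)
fuse = nn.Linear(BERT_DIM + ROLE_DIM, BERT_DIM)

seq_len = 6
bert_out = torch.randn(1, seq_len, BERT_DIM)   # stand-in for BERT output
# One SRL label id per token for one predicate, e.g. B-ARG0=1, V=2, ...
srl_ids = torch.tensor([[1, 0, 2, 0, 3, 4]])

fused = fuse(torch.cat([bert_out, role_embed(srl_ids)], dim=-1))
print(fused.shape)  # (1, 6, 768): semantics-aware token representations
```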

SemBERT: Semantics-Aware BERT
Zhuosheng Zhang, Yuwei Wu, Hai Zhao, Zuchao Li, Shuailiang Zhang, Xi Zhou, Xiang Zhou: Semantics-Aware BERT for Language Understanding. AAAI 2020.
Results on GLUE benchmark
Works particularly well for smaller datasets

Joint Training with SRL Improves NLI Generalization
Main idea: Improve sentence understanding
(hence out-of-distribution generalization) with
joint learning of explicit semantics
Cemil Cengiz, Deniz Yuret. Joint Training with Semantic Role Labeling
for Better Generalization in Natural Language Inference.
Rep4NLP’2020

Joint Training with SRL Improves NLI Generalization
Main idea: Improve sentence understanding
(hence out-of-distribution generalization) with
joint learning of explicit semantics
Cemil Cengiz, Deniz Yuret. Joint Training with Semantic Role Labeling
for Better Generalization in Natural Language Inference.
Rep4NLP’2020

Is Semantic-Aware BERT More Linguistically Aware?
Ling Liu, Ishan Jindal, Yunyao Li. Is Semantic-aware BERT more Linguistically
Aware? A Case Study on Natural Language Inference. SUKI’2022
Infuse semantic knowledge via predicate-wise concatenation with BERT

Is Semantic-Aware BERT More Linguistically Aware?
Ling Liu, Ishan Jindal, Yunyao Li. Is Semantic-aware BERT more Linguistically
Aware? A Case Study on Natural Language Inference. SUKI’2022

Performance on HANS non-entailment
examples by models fine-tuned on SNLI.
Examples in black and normal font are where
BERT made wrong predictions and LingBERT
made correct predictions. Examples in blue
and italics are where none of the three models
made the correct prediction. The last three
columns are the accuracy in % on the non-
entailment examples by BERT, SemBERT,
and LingBERT respectively.
Better differentiates lexical similarity from world knowledge
Fails to help with subsequence/constituent heuristics

NSQA: AMR for Neural-Symbolic Question Answering over Knowledge Graphs
Pavan Kapanipathi et al. Leveraging Abstract Meaning Representation for Knowledge Base Question Answering. ACL 2021.

AMR Graph → Query Graph
Acer nigrum is used in making what?
AMR Graph
Query Graph
Count the awards received by the ones who fought the Battle of France?
What cities are located on the sides of the Mediterranean Sea?
Pavan Kapanipathi et al. Leveraging Abstract Meaning Representation for Knowledge Base Question Answering. ACL 2021.

AMR-Based Question Decomposition
Zhenyun Deng et al. Interpretable AMR-Based Question Decomposition for Multi-hop Question Answering. IJCAI 2022.

AMR-Based Question Decomposition
Zhenyun Deng et al. Interpretable AMR-Based Question Decomposition for Multi-hop Question Answering. IJCAI 2022.

AMR-Based Question Decomposition
Better accuracy of the final answer and the quality of sub-questions
Zhenyun Deng et al. Interpretable AMR-Based Question Decomposition for Multi-hop Question Answering. IJCAI 2022.

AMR-Based Question Decomposition
Outperforming existing question-decomposition-based multi-hop QA approaches.
Zhenyun Deng et al. Interpretable AMR-Based Question Decomposition for Multi-hop Question Answering. IJCAI 2022.

Cross-Document Multi-hop Reading Comprehension
Zheng and Kordjamshidi. SRLGRN: Semantic Role Labeling Graph Reasoning Network. EMNLP 2020.

Heterogeneous SRL Graph
Zheng and Kordjamshidi. SRLGRN: Semantic Role Labeling Graph Reasoning Network. EMNLP 2020.

HotpotQA Result
The SRL graph improves the completeness of the graph network over an NER graph
Zheng and Kordjamshidi. SRLGRN: Semantic Role Labeling Graph Reasoning Network. EMNLP 2020.

Dialog Modeling via AMR Transformation & Augmentation
Mitchell Abrams, Claire Bonial, L. Donatelli. Graph-to-graph meaning representation transformations for human-robot
dialogue. SCIL. 2020
Claire Bonial et al. Augmenting Abstract Meaning Representation for Human-Robot Dialogue. ACL-DMR. 2019

Dialog Modeling via AMR Transformation & Augmentation
Xuefeng Bai, Yulong Chen, Linfeng Song, Yue Zhang. Semantic Representation for Dialogue Modeling. ACL 2021.

Dialog Modeling via AMR Transformation & Augmentation
Xuefeng Bai, Yulong Chen, Linfeng Song, Yue Zhang. Semantic Representation for Dialogue Modeling. ACL 2021.
(a) Using AMR to enrich text representation. (b,c) Using AMR independently.

Dialog Modeling via AMR Transformation & Augmentation
Xuefeng Bai, Yulong Chen, Linfeng Song, Yue Zhang. Semantic Representation for Dialogue Modeling. ACL 2021.
Semantic knowledge in formal AMR is helpful for dialogue modeling
Manually added relations are useful in dialog relation extraction and dialog generation

●Reference-free: requires no gold summary
●Adjustable weights for tuple comparison (toy scoring sketch below)
●Extensible: coreference resolution, alternative similarity functions
SRLScore for Factual Consistency in Text Summarization
Jing Fan, Dennis Aumiller, Michael Gertz. Evaluating Factual Consistency of Texts with Semantic Role Labeling. *SEM 2023
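A toy sketch of the scoring idea: extract (agent, relation, patient) tuples from source and summary via SRL, then score each summary tuple against its best-matching source tuple with adjustable per-slot weights. The weights and exact-match similarity used here are simplifying assumptions; the paper also supports softer similarity functions and coreference resolution.

```python
# Toy SRLScore: weighted tuple agreement between source and summary.
def tuple_sim(t1, t2, weights=(0.4, 0.2, 0.4)):
    """Weighted agreement between two (agent, relation, patient) tuples."""
    return sum(w for w, a, b in zip(weights, t1, t2) if a == b)

def srl_score(source_tuples, summary_tuples):
    if not summary_tuples:
        return 0.0
    return sum(max(tuple_sim(s, t) for t in source_tuples)
               for s in summary_tuples) / len(summary_tuples)

source = [("the actor", "stop", "the banker")]
summary = [("the banker", "stop", "the actor")]  # role-swapped: penalized
print(srl_score(source, summary))  # 0.2: only the relation slot matches
```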

SRLScore for Factual Consistency in Text Summarization
Jing Fan, Dennis Aumiller, Michael Gertz. Evaluating Factual Consistency of Texts with Semantic Role Labeling. *SEM 2023
•Pearson (ρ) and Spearman (s) correlation of metrics with human ratings on the evaluated datasets.
•No significant differences between any of the factuality-specific metrics (SRLScore, BARTScore, and CoCo)

SRLScore for Factual Consistency in Text Summarization
Jing Fan, Dennis Aumiller, Michael Gertz. Evaluating Factual Consistency of Texts with Semantic Role Labeling. *SEM 2023
•SRL-based (agent, relation, patient) triplets: a simplified triplet representation

Interpretable Automatic Fine-grained Inconsistency Detection
Hou Pong Chan, Qi Zeng, Heng Ji. Interpretable Automatic Fine-grained Inconsistency Detection in Text Summarization. ACL (Findings) 2023.

Interpretable Automatic Fine-grained Inconsistency Detection
Hou Pong Chan, Qi Zeng, Heng Ji. Interpretable Automatic Fine-grained Inconsistency Detection in Text Summarization. ACL (Findings) 2023.

Improved Open-Domain Dialogue Evaluation
*Incorporate AMR graph features into a traditional SLM (i.e., a sentence transformer)
*Integrate the output score and AMR graph information into the prompt of an LLM for better dialogue evaluation
B. Yang et al. Structured Information Matters: Incorporating Abstract Meaning Representation into LLMs for Improved Open-Domain Dialogue Evaluation. https://arxiv.org/pdf/2404.01129

Improved Open-Domain Dialogue Evaluation
*AMR graph features help correctly identify negative responses despite overlapping words.
B. Yang et al. Structured Information Matters: Incorporating Abstract Meaning Representation into LLMs for Improved Open-Domain Dialogue Evaluation. https://arxiv.org/pdf/2404.01129

Case Study - Watson Discovery Content Intelligence
A. Agarwal et al. Development of an Enterprise-Grade Contract Understanding System. NAACL (Industry) 2021.

Case Study - Watson Discovery Content Intelligence
A. Agarwal et al. Development of an Enterprise-Grade Contract Understanding System. NAACL (Industry) 2021.
Element

Case Study - Watson Discovery Content Intelligence
A. Agarwal et al. Development of an Enterprise-Grade Contract Understanding System. NAACL (Industry) 2021.
Element
Expanded SRL as Semantic NLP Primitives Provided by SystemT [ACL '10, NAACL '18]

Case Study - Watson Discovery Content Intelligence
A. Agarwal et al. Development of an Enterprise-Grade Contract Understanding System. NAACL (Industry) 2021.
Element
Expanded SRL as Semantic NLP Primitives
Business transaction verbs in future tense with positive polarity

Case Study - Watson Discovery Content Intelligence
A. Agarwal et al. Development of an Enterprise-Grade Contract Understanding System. NAACL (Industry) 2021.

Case Study - Watson Discovery Content Intelligence
A. Agarwal et al. Development of an Enterprise-Grade Contract Understanding System. NAACL (Industry) 2021.

Case Study - Watson Discovery Content Intelligence
A. Agarwal et al. Development of an Enterprise-Grade Contract Understanding System. NAACL (Industry) 2021.

Explainability + Tooling → Better Root Cause Analysis
A. Agarwal et al. Development of an Enterprise-Grade Contract Understanding System. NAACL (Industry) 2021.
Yannis Katsis and Christine T. Wolf. ModelLens: An Interactive System to Support the Model Improvement Practices of Data Science Teams. CSCW 2019.

Model Stability with Increasing Complexity
A. Agarwal et al. Development of an Enterprise-Grade Contract Understanding System. NAACL (industry) 2021

Effectiveness of Feedback Incorporation
A. Agarwal et al. Development of an Enterprise-Grade Contract Understanding System. NAACL (industry) 2021

Human & Machine Co-Creation
Prithviraj Sen et al. HEIDL: Learning Linguistic Expressions with Deep Learning and Human-in-the-Loop. ACL 2019.
Prithviraj Sen et al. Learning Explainable Linguistic Expressions with Neural Inductive Logic Programming for Sentence Classification. EMNLP 2020.

User Study: Human & Machine Co-Creation
Prithviraj Sen et al. HEIDL: Learning Linguistic Expressions with Deep Learning and Human-in-the-Loop. ACL 2019.
Prithviraj Sen et al. Learning Explainable Linguistic Expressions with Neural Inductive Logic Programming for Sentence Classification. EMNLP 2020.
User study
–4 NLP engineers with 1–2 years of experience
–2 NLP experts with 10+ years of experience
Key Takeaways
●Explanation of learned rules: Visualization
tool is very effective
●Reduction in human labor: Co-created
model created within 1.5 person-hrs
outperforms black-box sentence classifier
●Lower requirement on human expertise:
Co-created model is at par with the model
created by Super-Experts

Summary: Value of Meaning Representation
[Summary table over SRL and AMR. Columns: Works out-of-box; Deeper understanding of text; Overcoming low-resource challenges; Robustness against linguistic variants & complexity; Better model generalization; Explainability & Interpretability. Rows: Information Extraction (3 ✔), Text Classification (4 ✔), Natural Language Inference, Question Answering (2 ✔), Dialog (1 ✔), Machine Translation (2 ✔), Evaluation (4 ✔)]

Meaning Representations for Natural Languages Tutorial Part 5
Open Questions and Future Work
Julia Bonn, Jeffrey Flanigan, Jan Hajič, Ishan Jindal, Yunyao Li, and Nianwen Xue

•Producing more UMR-annotated data sets for more languages
•More accurate and more robust SRL/AMR/UMR parsers
•New MS-AMR/UMR parsers that can parse text into document-level graphs
•MS-AMR/UMR evaluation metrics
•Exploring the trade-off/complementarity between LLMs and MR systems in
NLP applications
Future work

•More extreme scenarios:
•As an alternative to LLMs, for scenarios where transparency is of paramount importance in every module of an NLP system?
•MRs have no role at all, as MR-based systems vastly underperform end-to-end deep learning systems due to error propagation?
•Between the two extremes:
•As an intermediate representation that can be used to train semantics-aware LLMs, to help with the robustness and generalizability of LLMs?
•As a layer on top of LLMs, to help with explainability and controllability of LLM-based systems?
•As a way of computing rewards for LLM-based systems in an RL framework, to improve applications that produce output similar to MRs (e.g., event argument extraction)?
Open Questions - Symbolic MRs vs LLMs?

•Information Extraction: Exploiting similarities between AMR and information
networks
•Machine Translation: combining AMR graphs with sequence representations
•Text Summarization: Condensing documents into summary graphs to
generate summary sentences
•Question Answering: Knowledge graph QA, multi-hop reasoning
•Dialog systems: Using a graph transformer in a sequence-to-sequence
system.
•……
MRs have been used to improve many applications

•Extracting facts that help train LLM-based systems to respect the facts
•Generating data sets with logical representations to improve the logical
reasoning capabilities of LLM-based systems
•Improving multi-hop reasoning in LLM-based QA systems
•Providing an induction bias in low-resource scenarios
•Providing a representation in dialogue systems where more control is needed
in dialogue state tracking.
•…
New research opportunities that MRs provide in the era of LLM-based systems

•LLMs are arguably deficient in terms of mathematical and logical reasoning. This might be an area where AMR/UMR can help, but it is under-explored.
Open Questions: Can MRs help with reasoning?

•LLMs are known to hallucinate things. Can MRs be used to extract facts that
can help train LLMs that are more truthful?
Open Questions: Can MRs help LLMs be more truthful?