Task 8: Meaning Representation Parsing

Thanks for participating! Want more? Join our task in 2017 here!

 

Overview:

Abstract Meaning Representation (AMR) is a compact, readable, whole-sentence semantic annotation. Annotation components include entity identification and typing, PropBank semantic roles, individual entities playing multiple roles, entity grounding via wikification, and treatments of modality, negation, etc.

Here is an example AMR for the sentence “The London emergency services said that altogether 11 people had been sent to hospital for treatment due to minor wounds.”

(s / say-01
      :ARG0 (s2 / service
            :mod (e / emergency)
            :location (c / city :wiki "London"
                  :name (n / name :op1 "London")))
      :ARG1 (s3 / send-01
            :ARG1 (p / person :quant 11)
            :ARG2 (h / hospital)
            :mod (a / altogether)
            :purpose (t / treat-03
                  :ARG1 p
                  :ARG2 (w / wound-01
                        :ARG1 p
                        :mod (m / minor)))))

Note the inclusion of PropBank semantic frames (‘say-01’, ‘send-01’, ‘treat-03’, ‘wound-01’), entity grounding via wikification (‘London’), and multiple roles played by a single entity (e.g., the ‘11 people’, variable p above, are the ARG1 of send-01, of treat-03, and of wound-01).
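That last property, re-entrancy, is what makes an AMR a graph rather than a tree. As a rough illustration (my own sketch, not part of the task tooling, and assuming well-formed PENMAN-style AMR text rather than using a full AMR reader), the short Python script below finds variables in the example above that are re-used after their introduction:

# A rough sketch: find re-entrant variables in an AMR string, i.e. variables
# that are re-used after their "(var / concept)" introduction.
import re
from collections import Counter

amr = """(s / say-01
      :ARG0 (s2 / service
            :mod (e / emergency)
            :location (c / city :wiki "London"
                  :name (n / name :op1 "London")))
      :ARG1 (s3 / send-01
            :ARG1 (p / person :quant 11)
            :ARG2 (h / hospital)
            :mod (a / altogether)
            :purpose (t / treat-03
                  :ARG1 p
                  :ARG2 (w / wound-01
                        :ARG1 p
                        :mod (m / minor)))))"""

# Each variable is introduced exactly once as "(var / concept)".
introduced = set(re.findall(r"\(\s*([A-Za-z0-9]+)\s*/", amr))

# A bare variable after a role label (with no opening parenthesis) is a
# re-entrancy: the same entity filling another role elsewhere in the graph.
role_fillers = re.findall(r":[A-Za-z0-9-]+\s+([A-Za-z0-9]+)", amr)
reentrancies = Counter(v for v in role_fillers if v in introduced)

print(reentrancies)  # Counter({'p': 2}): 'p' (the 11 people) fills two additional ARG1 roles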

With the recent public release of a sizeable corpus of English/AMR pairs (LDC2014T12), there has been substantial interest in creating parsers to recover this formalism from plain text. Several parsers have already been released (see the reference list below) and more may be on their way soon. It seems an appropriate time to conduct a carefully guided shared task so that this nascent community may cleanly evaluate its various approaches side by side under controlled scenarios.

Rules:

Participants will be provided with parallel English-AMR training data. They must then parse new English data and return the resulting AMRs. Participants may use any resources at their disposal (but may not hand-annotate the blind data or hire other people to do so). The SemEval trophy goes to the system with the highest Smatch score.
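For context, Smatch treats each AMR as a set of (source, relation, target) triples and scores a system AMR against a gold AMR by finding a one-to-one mapping between their variables that maximizes the number of matching triples, reporting precision, recall, and F1 over those triples. The sketch below is my own simplification rather than the official smatch.py (which also searches over candidate variable mappings with restarted hill climbing); the triple format and function name are illustrative. It computes the score for one fixed variable mapping:

# A simplified sketch of the idea behind Smatch scoring (not the official
# smatch.py, which additionally searches over variable mappings).
# Gold and system AMRs are given as sets of (source, relation, target) triples;
# "mapping" maps gold variable names to system variable names.

def triple_f1(gold_triples, sys_triples, mapping):
    def rename(triple):
        source, relation, target = triple
        return (mapping.get(source, source), relation, mapping.get(target, target))

    mapped_gold = {rename(t) for t in gold_triples}
    matched = len(mapped_gold & sys_triples)
    precision = matched / len(sys_triples) if sys_triples else 0.0
    recall = matched / len(gold_triples) if gold_triples else 0.0
    return 0.0 if matched == 0 else 2 * precision * recall / (precision + recall)

# Toy example: gold "(a / dog)" and system "(x / dog)" differ only in variable
# names, so the best mapping ({"a": "x"}) yields a perfect score of 1.0.
gold = {("a", "instance", "dog")}
system = {("x", "instance", "dog")}
print(triple_f1(gold, system, {"a": "x"}))  # 1.0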

Existing AMR Parsers: (send email to jonmay@isi.edu if yours is missing and you want a citation)

 

Contact Info

Organizer

  • Jonathan May, University of Southern California Information Sciences Institute (USC/ISI)

email: Jon May (jonmay@isi.edu)

AMR website: at ISI (amr.isi.edu)

Other Info

Announcements

  • Jan. 7: New Smatch scoring script 2.0.2 fixes small bug in 2.0.1 version. Thanks to Guntis Barzdins and Didzis Gosko for the patch. Get it here.
  • Dec. 22: New Smatch scoring script 2.0.1 fixes some bugs in 2.0 version. Get it here.
  • Dry run was held on Monday, December 14. Thanks for participating. The real evaluation is coming up soon!
  • Sept. 24: Welcome new members! Please register for the task and sign up for the google group. Check out the Data and Tools tab for a plethora of resources.
  • Sept. 18: Sign up for our google group here
  • Sept. 18: Get started with a baseline by using our fork of JAMR
  • Aug. 31: Get the training data by filling out this form