Data and Tools
- Demo data: http://amr.isi.edu/download/amr-bank-v1.4.txt Note: this data is not meant to represent the precise format of the training data; it is intended to give an indication of what AMRs look like over a substantial corpus of naturally occurring text. Note also that the demo data does not include some recent additions to the AMR specification, such as wikification.
- AMR specification: https://github.com/kevincrawfordknight/amr-guidelines/blob/master/amr.md
- Training data: For access to the training data, download this form and mail or email it to LDC (contact info is contained in the form). They will then send you a link to download the data.
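To give a concrete sense of the notation used in the demo data, here is the canonical AMR for "The boy wants to go" in PENMAN notation (this is the standard example from the AMR specification, not a sentence taken from the demo file). Note how the variable b is reused so that the boy is both the wanter and the goer:

```
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-01
            :ARG0 b))
```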
Tools: (Note: all tools are provided as-is, graciously, by their authors. Most are not maintained by the organizers of this task. We do not guarantee their suitability for your needs and will not provide technical support beyond the documentation found on this page, of which there is currently none.)
- JAMR baseline parser, courtesy of Jeff Flanigan, and updated to work out of the box with the task's training data (0.57 on test): https://github.com/isi-nlp/jamr
- Deterministic, input-agnostic baseline 'parser,' courtesy of Ulf Hermjakob (0.26 on test): here
- Unsupervised AMR-to-English aligner, courtesy of Nima Pourdamghani: http://isi.edu/~damghani/papers/Aligner.zip
- [NEW] Smatch 2.0.2 scoring script, a bugfix of the script below (thanks to Guntis Barzdins and Didzis Gosko): here
- Smatch 2.0.1 scoring script, a bugfix of the script below: here
- Smatch 2.0 scoring script, courtesy of Shu Cai: http://amr.isi.edu/download/smatch-v2.0.tar.gz
- Python library, courtesy of Nathan Schneider: https://github.com/nschneid/amr-hackathon
- English tokenizer, courtesy of Ulf Hermjakob: here.
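For intuition about what the Smatch scripts above compute: Smatch scores a candidate parse by searching for the variable mapping that maximizes triple overlap with the gold AMR, then reports precision, recall, and F1 over the matched triples. Below is a minimal sketch of that final F1 step only, over triple sets whose variables are assumed to be already aligned (the mapping search, which is the hard part, is omitted; the function name and triple encoding here are illustrative, not Smatch's actual API):

```python
def smatch_f1(candidate, gold):
    """Precision/recall/F1 over matched triples, as in Smatch's final step.

    candidate, gold: sets of (source, relation, target) triples whose
    variables have already been mapped into a shared namespace.
    """
    if not candidate or not gold:
        return 0.0, 0.0, 0.0
    matched = len(candidate & gold)          # triples agreed on by both
    precision = matched / len(candidate)     # fraction of candidate triples that are correct
    recall = matched / len(gold)             # fraction of gold triples recovered
    f1 = 2 * precision * recall / (precision + recall) if matched else 0.0
    return precision, recall, f1

# Gold triples for "The boy wants to go":
gold = {
    ("w", "instance", "want-01"), ("b", "instance", "boy"),
    ("g", "instance", "go-01"),
    ("w", "ARG0", "b"), ("w", "ARG1", "g"), ("g", "ARG0", "b"),
}
# A candidate parse that missed the reentrant relation g :ARG0 b:
cand = gold - {("g", "ARG0", "b")}
p, r, f = smatch_f1(cand, gold)
print(f"P={p:.3f} R={r:.3f} F1={f:.3f}")  # P=1.000 R=0.833 F1=0.909
```

The actual Smatch tool does this over many candidate mappings (using hill-climbing with restarts), keeping the best score, which is why two runs can in principle differ slightly.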
Eval data format and Validator: (very important, so it gets its own paragraph) Download a simple Python script to check your evaluation submission, and learn about the eval and submission formats, here.