Papers
14 July 2020 - Camera-ready papers are entitled to 1 additional page to address feedback from reviewers. Thus:
- System description papers can be up to 6 pages for most submissions (9 pages if participating in multiple (sub)tasks), not counting acknowledgments/bibliography/appendices.
  - Remember to cite the task description paper and, if possible, include a link to your code.
  - Ensure that author names are properly capitalized in the START metadata and appear in the same order and spelling as in the PDF.
  - Ensure there is a footnote with the license text, per the style guide instructions.
- Task description papers can be up to 10 pages, not counting acknowledgments/bibliography/appendices.
  - Remember to include a link to the dataset release, e.g. on Zenodo.
  - Ensure that author names are properly capitalized in the START metadata and appear in the same order and spelling as in the PDF.
  - Ensure there is a footnote with the license text, per the style guide instructions.
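For the license footnote, the exact command and wording depend on the style files, so check the style guide instructions; as a rough sketch, COLING-style templates typically provide an unnumbered-footnote macro (often named `\blfootnote`) for this purpose:

```latex
% Hypothetical sketch -- verify the macro name and required license
% wording against the official SemEval/COLING style guide.
\blfootnote{This work is licensed under a Creative Commons
  Attribution 4.0 International License.
  License details: \url{http://creativecommons.org/licenses/by/4.0/}.}
```

Placing this on the first page (e.g. just after `\maketitle`) is the usual convention in camera-ready COLING-format papers.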
31 March 2020 - Updates to paper requirements specified below!
Both task organizers and participants are to submit papers describing the tasks and their systems respectively. These will be part of the official SemEval proceedings.
NEW! Best Task, Best Paper Awards
SemEval 2020 will feature two overall awards, one for organizers of a task and one for a team participating in a task. The awards are:
- Best Task (task organizers): This award recognizes a task that stands out for making an important intellectual contribution to empirical computational semantics, as demonstrated by a creative, interesting, and scientifically rigorous dataset and evaluation design, and a well-written task overview paper.
- Best Paper (task participants): This award recognizes a system description paper that advances our understanding of a problem and available solutions with respect to a task. It need not be the highest-scoring system in the task, but it must have a strong analysis component in the evaluation, as well as a clear and reproducible description of the problem, algorithms, and methodology.
Length of papers:
Task organizers are to submit a 9-page task description paper describing their task, data, evaluation, results, and a summary of participating systems. In cases of extenuating circumstances, such as a large number of sub-tasks, more than 9 pages is acceptable.
Task participants are to submit a 5-page system description paper describing their system and submission(s). If you are participating in multiple SemEval tasks, or in multiple sub-tasks within a task, then 8 pages is acceptable. For example, if two similar systems produced by the same team are used to make submissions to Task X and Task Y, the team should write a single system description paper, and its length can exceed five pages (in this case, aim for eight pages). If the same team, or two teams with overlapping members, create two very different systems for two very different tasks, they should write two separate papers (one for each task, each five pages long). For more information about the format and content of the system description paper, see these guidelines.
References do not count against the page limits (9 pages for task descriptions, 5 pages for system descriptions). You may have as many additional pages of references as you want.
Due Dates:
See detailed schedule of reviewing deadline, acceptance notification date, and camera-ready deadline on the home page (Important Dates).
Submission Website: https://www.softconf.com/coling2020/SemEval/
Style Files:
The review process is single-blind. Submissions are not anonymous and should use the COLING camera-ready formatting. LaTeX templates adapted for SemEval can be found here.
Non-Anonymous Reviewing
Reviewing of papers is not anonymous. You are to enter your names and affiliations on the paper (be it the system description paper or the task description paper).
Access to the Task Description Paper
We will set things up such that all participants of a task can view the task description paper when it is uploaded for review.
FAQ:
Q. What to include in a system description paper?
A. Here are some key pointers:
- Replicability: present all details that will allow someone else to replicate your system
- Analysis: focus more on results and analysis and less on discussing rankings; report results on several runs of the system (even beyond the official submissions); present ablation experiments showing usefulness of different features and techniques; show comparisons with baselines.
- Duplication: cite the task description paper; you can avoid repeating details of the task and data, but briefly outlining the task and the relevant aspects of the data is a good idea. (The official BibTeX citations for papers will not be released until the camera-ready submission period, so during the initial submission, please use some sort of placeholder citation.)
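A placeholder citation for the task description paper could look like the following BibTeX sketch (the entry key, title, and author fields here are hypothetical; replace the whole entry with the official BibTeX once it is released during the camera-ready period):

```latex
% Hypothetical placeholder entry -- swap in the official BibTeX
% from the proceedings when it becomes available.
@inproceedings{semeval2020-taskX-placeholder,
    title     = {{S}em{E}val-2020 Task X: Task Title (placeholder)},
    author    = {Task Organizers},
    year      = {2020},
    booktitle = {Proceedings of the 14th International Workshop on
                 Semantic Evaluation ({S}em{E}val-2020)},
    note      = {Placeholder; official citation to appear}
}
```

Citing it in the text with a standard `\cite{semeval2020-taskX-placeholder}` then requires only a one-line change at camera-ready time.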
For a detailed outline, as well as links to some past system description papers, see these guidelines.
It may be helpful to look at some of the papers from past SemEvals, e.g., from https://aclweb.org/anthology/S/S16/ and https://aclweb.org/anthology/S/S17/.
Q. What to focus on in a task-description paper?
A. Here are some key pointers:
- Replicability: present all details that will allow someone else to replicate the data creation process and evaluation.
- Analysis: focus more on results and analysis and less on discussing rankings
- Summary of systems: Summarize the techniques, features, and resources used. Highlight what tended to work and what did not, across the systems.
It may be helpful to look at some of the papers from past SemEvals, e.g., from https://aclweb.org/anthology/S/S16/ and https://aclweb.org/anthology/S/S17/.
Q. Can we change the name of our team after the evaluation results are announced?
A. Contact your task organizer for this. This is usually not a problem, and has been allowed in the past.