SemEval-2015: 8th International Workshop on Semantic Evaluations
Call for Task Proposals
We invite proposals for tasks to be run as part of SemEval-2015.
Starting in 2015, SemEval will run on a two-year cycle, giving both task organizers and task participants more time for all steps of the process, including data preparation, system design, analysis, and paper writing.
We welcome tasks that can test an automatic system for semantic analysis of text, whether application-dependent or application-independent. We especially welcome tasks for different languages and cross-lingual tasks.
We encourage the following aspects in task design:
Common data formats
To ensure that newer annotations conform to existing annotation standards, we encourage the use of existing data encoding standards such as MASC and UIMA. Where possible, reusing existing annotation standards and tools will make it easier to participate in multiple tasks. Moreover, the use of readily available tools should make it easier for participants to spot bugs and to improve their systems.
Common texts and multiple annotations
For many tasks, finding suitable texts for building training and test datasets can itself be a challenge, and the choice is often somewhat ad hoc. To make it easier for task organizers to find suitable texts, we encourage the use of resources such as Wikipedia, ANC, and OntoNotes. Where this makes sense, the SemEval program committee will encourage task organizers to share the same texts across different tasks. In due time, we hope that this process will allow the generation of multiple semantic annotations for the same text.
Baseline systems
To lower the obstacles to participation, we encourage task organizers to provide baseline systems that participants can use as a starting point. A baseline system typically contains code that reads the data, creates a baseline response (e.g., random guessing), and outputs the evaluation results. If possible, baseline systems should be written in widely used programming languages. We also encourage the use of standards such as UIMA.
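To illustrate, a random-guessing baseline of the kind described above can be sketched in a few lines of Python. The tab-separated input format, label set, and function names below are illustrative assumptions, not part of any SemEval task specification:

```python
import random

def read_instances(lines):
    """Parse 'id<TAB>text<TAB>gold_label' lines (an assumed, illustrative format)."""
    return [line.rstrip("\n").split("\t") for line in lines if line.strip()]

def random_baseline(instances, labels, seed=0):
    """Assign each instance a uniformly random label -- the simplest baseline response."""
    rng = random.Random(seed)
    return {inst_id: rng.choice(labels) for inst_id, _text, _gold in instances}

def accuracy(instances, predictions):
    """Fraction of instances whose predicted label matches the gold label."""
    correct = sum(1 for inst_id, _text, gold in instances
                  if predictions[inst_id] == gold)
    return correct / len(instances)

# Tiny in-memory sample standing in for a task's training file.
sample = [
    "1\tThe bank of the river\tRIVERBANK",
    "2\tDeposit money at the bank\tFINANCIAL",
    "3\tFishing from the bank\tRIVERBANK",
]
instances = read_instances(sample)
preds = random_baseline(instances, labels=["RIVERBANK", "FINANCIAL"], seed=42)
print(f"random-baseline accuracy: {accuracy(instances, preds):.2f}")
```

Participants can then replace `random_baseline` with their own system while reusing the data-reading and scoring code unchanged, which is exactly the starting-point role a baseline system is meant to play.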
Umbrella tasks
To reduce fragmentation of similar tasks, we will encourage task organizers to propose larger tasks that include several subtasks. For example, Word Sense Induction in Japanese and Word Sense Induction in English could be combined into a single umbrella task with several subtasks. We welcome proposals for such larger tasks. In addition, the program committee will actively encourage organizers proposing similar tasks to combine their efforts into larger umbrella tasks.
Application-oriented tasks
We welcome tasks devoted to developing novel applications of computational semantics. As an analogy, the TREC Question Answering (QA) track was devoted to building QA systems that could compete with standard IR systems. Similarly, we will encourage tasks with a clearly defined end-user application that showcase and enhance our understanding of computational semantics while extending the current state of the art.
The SemEval-2015 Workshop will be co-located with a major NLP conference in 2015.
The task proposals should ideally contain the following:
If you are not yet in a position to provide outlines of all of these, that is acceptable, but please give some thought to each and present a sketch of your ideas. We will gladly give feedback.
Please submit proposals as soon as possible, preferably by electronic mail in PDF format to the SemEval email address:
Preslav Nakov, Qatar Computing Research Institute
Torsten Zesch, University of Duisburg-Essen, Germany
The SemEval Discussion Group
Please join our discussion group at email@example.com in order to receive announcements and participate in discussions.
The SemEval-2015 Website: