SemEval-2015 Task 2: Semantic Textual Similarity

Semantic textual similarity (STS) has received an increasing amount of attention in recent years, culminating in the SemEval/*SEM tasks organized in 2012, 2013 and 2014, which brought together more than 60 participating teams. Please check http://ixa2.si.ehu.es/stswiki/ for more details on the previous tasks.

Given two sentences of text, s1 and s2, the systems participating in this task should compute how similar s1 and s2 are, returning a similarity score and an optional confidence score. The annotations and system outputs use a scale from 0 (no relation) to 5 (semantic equivalence) to indicate the similarity between the two sentences. Participating systems will be evaluated with the metric traditionally employed in the evaluation of STS systems, also used in previous SemEval/*SEM STS evaluations: the mean Pearson correlation between the system output and the gold standard annotations.
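As a rough illustration of the metric, the sketch below computes Pearson's r for a single dataset using scipy; the overall task score averages these correlations across datasets. The file names here are hypothetical placeholders, and the official evaluation scripts are distributed with the task data.

    # Sketch only: Pearson correlation between system and gold scores
    # for one dataset. File names are hypothetical placeholders.
    from scipy.stats import pearsonr

    def load_scores(path):
        # One similarity score (0-5) per line, same order in both files.
        with open(path) as f:
            return [float(line) for line in f]

    gold = load_scores("gold_standard.txt")
    system = load_scores("system_output.txt")

    r, _ = pearsonr(system, gold)  # returns (correlation, p-value)
    print("Pearson r: %.4f" % r)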

In 2015 we will continue to evaluate STS systems on the following subtasks (see the respective tab above for more details):

  • English STS, with sentence pairs drawn from news headlines, image captions, student answers, answers to questions in public forums, and sentences expressing committed belief.
     
  • Spanish STS, with sentence pairs extracted from encyclopedic content and newswire, and text snippet pairs obtained from news headlines.
     
  • NEW for 2015, we have devised a pilot subtask on interpretable STS. With this pilot task we want to explore whether STS systems are able to explain WHY they think the two sentences are related / unrelated, adding an explanatory layer to the similarity score. As a first step in this direction, participating systems will need to align the segments in one sentence of the pair to the segments in the other sentence, describing what kind of relation exists between each pair of segments (see the illustrative sketch after this list). This pilot subtask will provide its own training data.
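To make the idea concrete, the sketch below shows one possible in-memory representation of such an alignment. The chunk segmentation, relation labels, and scores are illustrative placeholders only; the official data format and relation inventory are defined in the pilot's guidelines and training data.

    # Illustrative sketch only: a toy representation of an interpretable-STS
    # alignment. The relation labels and scores below are placeholders, not
    # the official inventory from the task guidelines.
    s1_chunks = ["A man", "is playing", "a guitar"]
    s2_chunks = ["Someone", "plays", "an instrument"]

    # (index into s1_chunks, index into s2_chunks, relation label, score 0-5)
    alignments = [
        (0, 0, "similar", 4),        # "A man" vs "Someone"
        (1, 1, "equivalent", 5),     # "is playing" vs "plays"
        (2, 2, "more-specific", 3),  # "a guitar" vs "an instrument"
    ]

    for i, j, relation, score in alignments:
        print("%s  --%s/%d-->  %s" % (s1_chunks[i], relation, score, s2_chunks[j]))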

Note that if you want to participate, you need to register.

Registered or not, please join the mailing list for updates at http://groups.google.com/group/STS-semeval.

SemEval-2015 schedule

Evaluation start: December 5, 2014 
Evaluation end: December 20, 2014
Paper submission due: January 30, 2015
Paper reviews due: February 28, 2015
Camera ready due: March 30, 2015
SemEval workshop: June 4-5, 2015 (co-located with NAACL-2015 in Denver, Colorado)

REFERENCES

Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, Janyce Wiebe. SemEval-2014 Task 10: Multilingual Semantic Textual Similarity. Proceedings of SemEval 2014.

Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo. *SEM 2013 Shared Task: Semantic Textual Similarity. Proceedings of *SEM 2013.

Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre. SemEval-2012 Task 6: A Pilot on Semantic Textual Similarity. Proceedings of SemEval 2012.

Contact Info

email list: sts-semeval@googlegroups.com

Announcements

  • NEW Nov. 10: final training data for interpretable STS, with an updated evaluation script
  • Oct. 16: updated description, training data, and guidelines for interpretable STS
  • Aug. 15: subtask descriptions and trial data available
  • Please fill in the SemEval registration form
  • Please join the mailing list for updates