Results

The gold labels, submissions, and scores for all teams can be found here.

The gold labels embedded in the test XML files can be found here.

The task description paper is here.

 

@InProceedings{nakov-EtAl:2017:SemEval,
  author    = {Nakov, Preslav  and  Hoogeveen, Doris  and  M\`{a}rquez, Llu\'{i}s  and  Moschitti, Alessandro  and  Mubarak, Hamdy  and  Baldwin, Timothy  and  Verspoor, Karin},
  title     = {{SemEval-2017 Task 3}: Community Question Answering},
  booktitle = {Proceedings of the 11th International Workshop on Semantic Evaluation},
  series    = {SemEval~'17},
  month     = {August},
  year      = {2017},
  address   = {Vancouver, Canada},
  publisher = {Association for Computational Linguistics},
  pages     = {27--48},
  abstract  = {We describe SemEval-2017 Task 3 on Community Question Answering. This year,
we reran the four subtasks from SemEval-2016: (A) Question--Comment Similarity,
(B) Question--Question Similarity, (C) Question--External Comment Similarity,
and (D) Rerank the correct answers for a new question in Arabic, providing all
the data from 2015 and 2016 for training, and fresh data for testing.
Additionally, we added a new subtask E in order to enable experimentation with
Multi-domain Question Duplicate Detection in a larger-scale scenario, using
StackExchange subforums. A total of 23 teams participated in the task, and
submitted a total of 85 runs (36 primary and 49 contrastive) for subtasks A--D.
Unfortunately, no teams participated in subtask E. A variety of approaches and
features were used by the participating systems to address the different
subtasks. The best systems achieved an official score (MAP) of 88.43, 47.22,
15.46, and 61.16 in subtasks A, B, C, and D, respectively.  These scores are
better than the baselines, especially for subtasks A--C.},
  url       = {http://www.aclweb.org/anthology/S17-2003}
}
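
For reference, the official score reported in the abstract is Mean Average Precision (MAP). Below is a minimal, illustrative sketch of how MAP is typically computed over ranked candidate lists; the function names and input format are assumptions for illustration, not the official scorer, which applies its own conventions (e.g., a rank cutoff).

    # Illustrative MAP computation (Python); not the official SemEval scorer.
    # Each query contributes one list of booleans: the relevance of its
    # candidates in the order the system ranked them.

    def average_precision(relevance):
        """Average precision for one ranked list of relevance flags."""
        hits, precision_sum = 0, 0.0
        for rank, relevant in enumerate(relevance, start=1):
            if relevant:
                hits += 1
                precision_sum += hits / rank  # precision at this rank
        return precision_sum / hits if hits else 0.0

    def mean_average_precision(ranked_lists):
        """MAP: the mean of per-query average precision."""
        return sum(average_precision(r) for r in ranked_lists) / len(ranked_lists)

    # Example: two questions with system-ranked comments (True = relevant).
    print(mean_average_precision([
        [True, False, True],  # AP = (1/1 + 2/3) / 2 = 0.833...
        [False, True],        # AP = (1/2) / 1 = 0.5
    ]))                       # MAP = 0.666...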

Contact Info

Organizers


  • Preslav Nakov, Qatar Computing Research Institute, HBKU
  • Lluís Màrquez, Qatar Computing Research Institute, HBKU
  • Alessandro Moschitti, Qatar Computing Research Institute, HBKU
  • Hamdy Mubarak, Qatar Computing Research Institute, HBKU
  • Timothy Baldwin, The University of Melbourne
  • Doris Hoogeveen, The University of Melbourne
  • Karin Verspoor, The University of Melbourne

Email: semeval-cqa@googlegroups.com

Other Info

Announcements


  • 14 Feb. 2017: Submit your paper by February 27
  • 11 Feb. 2017: The results and all scores have been released.
  • 30 Jan. 2017: The closing date for test submissions is midnight UTC-12 on January 30.
  • 24 Jan. 2017: The test set for subtask E is now available (here).
  • 12 Jan. 2017: The test sets for subtasks A-D are now available (data webpage).
  • 9 Jan. 2017: The release of the test data for subtasks A-D is delayed by a few days. Apologies for the inconvenience.
  • 5 Jan. 2017: The submission deadline has been set to January 30.
  • 5 Jan. 2017: A new web page has been created with instructions on how to submit system results.
  • 8 Dec. 2016: Separate competitions for the subtasks have been set up on CodaLab, where you can submit your results: Subtask A, Subtask B, Subtask C, Subtask D, and Subtask E. You can submit results for both the development set and the test set, receive scores, and choose what to publish on the leaderboard.
  • 8 Dec. 2016: A new scorer, which can also be used for subtask E, is now available from the Data and Tools page.
  • Register to participate here.