Automatic Identification and Verification of Claims in Political Debates

Important Dates

  • Registration opens: 8 November 2017
  • Registration closes: 27 April 2018
  • Beginning of evaluation cycle Task 1: 28 April 2018
  • End of evaluation cycle for Task 1: 4 May 2018
  • Beginning of evaluation cycle Task 2: 5 May 2018
  • End of evaluation cycle for Task 2: 11 May 2018
  • Results posted: TBA
  • Submission of Participant Papers [CEUR-WS]: 31 May 2018
  • Notification of Acceptance Participant Papers [CEUR-WS]: 15 June 2018
  • Camera Ready Copy of Participant Papers [CEUR-WS]: 29 June 2018
  • CEUR-WS Working Notes Preview for Checking by Authors: 18-24 July 2018
  • CLEF-2018: 10-14 September, 2018 (Avignon, France)
     

Discussion Group

Please join our discussion group clef-factcheck@googlegroups.com to receive announcements and participate in discussions. For details, questions, and discussion about a particular task, visit the task website; there you will also find a link to the task mailing list.

Rules of the Game

  1. General Rules
    1. Participation with multiple teams is not allowed.
    2. Teams are not allowed to perform manual predictions; the entire process should be fully automatic.
    3. Any effort to misuse the dataset or its source is forbidden; further details are given in the data-related rules (item 2) below.
    4. Participants are encouraged to release the software they use (e.g., on GitHub).
  2. Data-related rules
    1. For Task 1, using external datasets with fact-checking-related annotations is forbidden; only the released training data is allowed.
    2. For Task 2, we allow any data source except factcheck.org. For example, participants can use datasets such as those described in:
      1. Popat et al., 2016. Credibility Assessment of Textual Claims on the Web. In Proceedings of CIKM 2016.
      2. Ma et al., 2017. Detect Rumors in Microblog Posts Using Propagation Structure via Kernel Learning. In Proceedings of ACL 2017.
    3. For both tasks, it is OK to use information from the Web, from Twitter, etc., but the retrieved URLs must be checked for sanity. We include a Python script in the repository for this purpose; for a submission to be considered valid, participants must use this script to check any URL whose contents they would like to use (an illustrative sketch of such a check appears after this list). In particular, the datasets used in the following papers should not be used:
      1. Gencheva et al., 2017. A Context-Aware Approach for Detecting Worth-Checking Claims in Political Debates. In Proceedings of RANLP 2017.
      2. Patwari et al., 2017. Tathya: A Multi-Classifier System for Detecting Check-Worthy Statements in Political Debates. In Proceedings of CIKM 2017.
  3. Evaluation rules
    1. Runs for Task 1 will be received from 28 April, 00:01 until 4 May, 23:59 CET.
    2. Runs for Task 2 will be received from 5 May, 00:01 until 11 May, 23:59 CET.
    3. Participants can send multiple submissions; only the last one counts. Each submission may contain one primary run and up to two contrastive runs, and each of the (up to) three runs must be clearly flagged as such.
    4. Format checkers are released as part of the dataset and will be run upon submission. A submission that fails the format check will be considered invalid.
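
To make the URL sanity check in the data-related rules concrete, below is a minimal illustrative sketch in Python. It is not the official script shipped in the repository (participants must use that script as released); it merely assumes the check amounts to rejecting URLs hosted on a forbidden domain, such as factcheck.org for Task 2, and the blocklist and function names here are hypothetical.

    # Illustrative sketch only -- NOT the official checker from the repository.
    from urllib.parse import urlparse

    # Hypothetical blocklist for illustration; the official script defines its own rules.
    FORBIDDEN_DOMAINS = {"factcheck.org"}

    def url_is_allowed(url, forbidden=FORBIDDEN_DOMAINS):
        """Return False if the URL's host is a forbidden domain or one of its subdomains."""
        host = urlparse(url).netloc.lower().split(":")[0]  # drop an explicit port, if any
        return not any(host == d or host.endswith("." + d) for d in forbidden)

    if __name__ == "__main__":
        for url in ("https://www.factcheck.org/2016/09/some-article/",
                    "https://en.wikipedia.org/wiki/Fact_checking"):
            print(url, "->", "allowed" if url_is_allowed(url) else "rejected")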

Published on 23 April 2018