Task Description

Due to the widespread use of Web forums, such as Qatar Living, Yahoo! Answers, or Stack Overflow, there has been renewed interest in Community Question Answering (cQA). cQA combines traditional question answering with a modern Web scenario in which users pose questions hoping to get the right answers from other users. If a user posts a new question that is similar (or even semantically equivalent) to a previously posted one, she should not have to wait for answers or for another user to point her to the archived forum thread. An automatic system can search for previously posted relevant questions, addressing the current information need instantly.
In this challenge, given a new user question and a set of previously posted forum questions, together with their corresponding answer threads, a machine learning model must rank the forum questions according to their relevance to the new question.
Although this task involves both natural language processing (NLP) and information retrieval (IR), the goal of the challenge is to focus on the machine learning aspect. Therefore, we take care of the NLP and IR and provide participants with features derived from the text of the new and forum questions, as well as with similarity matrices built by applying kernel functions to their parse trees. A few other features express the relevance of the thread comments, associated with each forum question, with respect to the new question.
Participants are expected to exploit these data to build a machine learning model that predicts the best possible ranking of the forum questions given a new one: the most relevant questions must appear at the top of the ranking.

Task Evaluation

Each new question u has an associated set of forum questions qi (typically 10). Each qi is labeled as Perfect Match, Relevant, or Irrelevant with respect to u. In the following example, q1 is Relevant, q2 is Irrelevant, and q3 is a Perfect Match.

u Good Bank
Which is a good bank as per your experience in Doha
q1 Best Credit Card in Doha
I would like to apply for a credit card that gives me Points when i spend and not just a simple card that does not get me anything in return for using it... Which card would you recommend gives the most reward for using the card and which card should i stay away from?? (PS I understand that i would either need to shift my salary to the bank or put a deposit of some sort...) thanks!!
q2
I am currently working here in Qatar; our employer informed us that they will transfer to contractor with less salary; but we have option not to accept; but the problem is I have Personal Loan in one of the BAnk here in DOHA; what if I will not accept the offer; what will happen to me ; I mean did the BAnk will force to pay or not?
q3 Best bank in Qatar?
Greetings everybody. I will like to see if someone can help me; I want to know which is the best bank in Qatar for opening a personal bank account. Best regards. Have a nice weekend everyone.
For evaluation purposes, Perfect Match and Relevant questions are considered equivalent. That is, a perfect ranking would put at the top all the questions labeled as Perfect Match or Relevant (in any order), followed by all the Irrelevant questions.
The ranking generated for each question u is evaluated by means of average precision. The mean of all the average precision values represents the overall official evaluation measure: mean average precision (MAP).
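The official measure can be sketched in a few lines. The following is an illustrative Python implementation, assuming every relevant question for a query appears somewhere in its ranked candidate list (as is the case here, where all ~10 candidates per question are labeled):

```python
def average_precision(ranked_relevance):
    """AP for one new question: ranked_relevance is a list of booleans
    (True = Perfect Match or Relevant) in the order the system ranked them."""
    hits, precision_sum = 0, 0.0
    for k, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precision_sum += hits / k  # precision at rank k
    return precision_sum / hits if hits else 0.0

def mean_average_precision(queries):
    """MAP: mean of the per-question average precisions."""
    return sum(average_precision(q) for q in queries) / len(queries)

# For the example above, a perfect ranking puts q3 and q1 before q2:
print(average_precision([True, True, False]))  # -> 1.0
```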

Data description

In order to focus on the machine learning aspects of the competition, we provide a set of state-of-the-art features for the task as well as similarity matrices between the parse trees of the questions.


A total of 64 features are provided, divided into three sub-groups:

  • 47 features evaluate various similarities between the new question and its related forum question;
  • If the forum thread contains good answers to the new question, we can assume that the forum question is strongly related to the new question. Following this intuition, an Answer Selection Classifier is applied to estimate the quality of the comments in the forum question thread with respect to the new question, and 4 features are extracted from these scores;
  • An Answer Selection Classifier is applied to estimate the quality of the comments in the forum question thread with respect to both the new and the forum question. A total of 13 features are extracted to evaluate the discrepancy between these scores.

The feature file we provide is in libsvm format (participants are nevertheless free to use any other software in their experiments). The id of the example is added at the end of each line as a comment.
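Reading such a line is straightforward. The sketch below parses the libsvm format with the trailing id comment; the concrete id string is a hypothetical placeholder, as the actual id scheme is defined in the released corpus:

```python
def parse_libsvm_line(line):
    """Parse one feature-file line: '<label> <idx>:<val> ... # <example_id>'.
    The example id after '#' is a comment, per the task description."""
    body, _, comment = line.partition('#')
    example_id = comment.strip()
    tokens = body.split()
    label = float(tokens[0])
    features = {int(i): float(v)
                for i, v in (t.split(':') for t in tokens[1:])}
    return example_id, label, features

# Hypothetical example line (id format is illustrative only):
ex_id, y, x = parse_libsvm_line("1 1:0.53 7:0.21 64:1.0 # Q318-R45")
```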

Similarity matrices between parse trees

We provide some pre-computed kernel matrices that contain the tree kernel similarities between the syntactic parse trees of the questions (multiple combinations between u and q are possible, see this link for details). Given two pairs of new and forum questions pi=(ui,qi) and pj=(uj,qj), we provide four different matrices, which store all the possible Tree Kernel (TK) computations, i.e., TK(ui,uj), TK(qi,qj), TK(ui,qj), and TK(qi,uj). Furthermore, we provide a Java program to combine such values into customizable tree kernel combinations.
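The provided Java program supports its own customizable combinations; as a minimal sketch of the idea, a weighted sum of the four matrices already yields a valid combined kernel (the weighting scheme below is an assumption, not the tool's actual formula):

```python
import numpy as np

def combine_tree_kernels(K_uu, K_qq, K_uq, K_qu, weights=(1.0, 1.0, 1.0, 1.0)):
    """Weighted sum of the four tree-kernel matrices.
    Entry (i, j) combines TK(u_i,u_j), TK(q_i,q_j), TK(u_i,q_j), TK(q_i,u_j).
    The weights are illustrative; the Java tool defines its own combinations."""
    a, b, c, d = weights
    return a * K_uu + b * K_qq + c * K_uq + d * K_qu
```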

Baseline System

The baseline system consists of a combination of vectorial features and tree kernels. Once you have registered and downloaded the data, you can create the kernel matrix used in the baseline by running the following command (on a single line):
java -jar TreeKernelCombinationBuilder.jar trainDevUvsU.txt trainDevQvsQ.txt trainDevUvsQ.txt trainDevQvsU.txt outputKernelMatrix
The jar file and its source code are provided together with the corpus. This program can be used to create more kernel matrices. The source code of the program is provided for further combinations/modifications: refer to file TreeKernelCombination.java and modify line 138.

The baseline is a binary SVM classifier that separates Perfect Match and Relevant questions from Irrelevant ones, obtained by combining a linear kernel on the features with the tree kernels computed by the Java program above.
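The combination can be sketched as summing a linear-kernel Gram matrix computed from the feature vectors with the precomputed tree-kernel matrix; the mixing weight `alpha` is an assumption for illustration, not the baseline's actual setting:

```python
import numpy as np

def baseline_kernel(X, K_tree, alpha=1.0):
    """Combine a linear kernel over feature vectors X (n_examples x n_features)
    with a precomputed tree-kernel matrix K_tree (n_examples x n_examples).
    alpha is a hypothetical mixing weight. The resulting Gram matrix can be
    fed to any SVM that accepts a precomputed kernel."""
    K_lin = X @ X.T  # linear kernel: pairwise dot products
    return K_lin + alpha * K_tree
```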

Submission format

The submission format is a three-column tab-separated file:

<example_id>TAB<relevance_score>TAB<predicted_label>

where the first column is the id of the example, the second is the relevance score produced by the system, and the third is the predicted class. Given the above example, if the developed system ranked the related questions qi as follows (scores are illustrative):

q3	0.9	true
q1	0.7	true
q2	0.1	false

the performance of the system would be perfect: 1.0. Note that:
  • The relevance score must be higher for the questions ranked in higher positions.
  • The examples must appear in the same order as they appear in the development/test set.
  • The third column makes it possible to compute the F-measure. If your model does not generate binary decisions, set all the values to true and consider only the MAP evaluation.
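Producing the file is a one-liner per row; the helper below is an illustrative sketch (the example ids are hypothetical, and rows must already be in the development/test set order):

```python
def write_submission(rows, path):
    """rows: iterable of (example_id, relevance_score, predicted_label) tuples,
    in the same order as the development/test set. Writes the three-column
    tab-separated submission format; higher scores mean higher-ranked questions."""
    with open(path, 'w') as f:
        for ex_id, score, label in rows:
            f.write(f"{ex_id}\t{score}\t{'true' if label else 'false'}\n")

# Hypothetical ids, for illustration only:
write_submission([("Q318-R45", 0.9, True), ("Q318-R46", 0.1, False)], "submission.tsv")
```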
Automated mass submissions are not allowed. A team may be disqualified if we observe anomalous submission behavior.

How to Participate

  • Register for the challenge.
  • You will receive an email with your team passcode.
  • Use the passcode in the top-right box to enter your team page, where you can download the data and submit your runs.
  • Submit your predictions on the development set to track your performance. You are allowed up to 5 submissions per day.
  • The test set will be released in the final period of the competition. You will have a few days to submit your final predictions, and only the final submission will be evaluated and considered to decide the overall winner.
  • A selected number of participants will be invited to submit a report describing their system and to present it at the workshop at the ECML/PKDD 2016 conference. This is mandatory for the winners. The format is the same as in the main conference. We will announce the challenge-specific submission mechanism shortly.
  • The detailed rules of the game are available here.


Prizes

Best performing system on the test set: 1000 EUR*
Best performing system on the development set: 500 EUR*+
*Prize eligibility is subject to certain conditions. The amount will be split if more than one team obtains identical MAP values.
+If the same team performs best on both the development and the test sets, the 500 EUR will be awarded to the second-best system on the test set.

Important dates (all times in CET; 24-hour clock)

May 16th Release of the training and development sets.
May 16th Opening of the online oracle for submissions on the development set.
July 22nd, 12:00:00 Registration deadline.
End of submission period on the development set.
July 23rd Release of the test set.
July 24th, 23:59:59 Deadline for submission of the system description draft version.
July 30th, 12:00:00 End of submission period on the test set.
July 30th Preliminary announcement of the winners.
Aug 7th Deadline for submission of the system description camera-ready version.
September 23rd, 11:40 Workshop on the Challenge at ECML

Registration

Registration is now closed.

cQA Challenge Chairs

  • Alberto Barrón-Cedeño, Qatar Computing Research Institute, HBKU
  • Giovanni Da San Martino, Qatar Computing Research Institute, HBKU
  • Simone Filice, Università degli Studi di Roma "Tor Vergata"
  • Preslav Nakov, Qatar Computing Research Institute, HBKU

Discovery Challenge Chairs

  • Elio Masciari, ICAR CNR, Italy
  • Alessandro Moschitti, University of Trento, Italy