Results

Download the final results for subtask 1 and subtask 2. The full package, including the gold test keys, is also available for download on the Data and Tools page.


The results are divided into separate .tsv (tab-separated) files. We report the Pearson and Spearman correlations and their harmonic mean, with systems sorted by the latter. Apart from the specific scores for each language (or pair of languages in subtask 2), we also show global results for systems that submitted results for at least four languages in subtask 1 or six language pairs in subtask 2. The evaluation followed the procedure described on the Evaluation page.
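The harmonic mean of the two correlations can be computed directly from the gold and system scores. A minimal, self-contained sketch (the function names and toy scores below are illustrative, not the official scorer):

```python
import math

def pearson(x, y):
    """Pearson correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    denom = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return cov / denom

def ranks(x):
    """1-based ranks, averaging ties, as Spearman requires."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and x[order[j + 1]] == x[order[i]]:
            j += 1
        for k in range(i, j + 1):
            r[order[k]] = (i + j) / 2 + 1  # average rank of the tied block
        i = j + 1
    return r

def harmonic_score(gold, system):
    """Harmonic mean of Pearson and Spearman correlations."""
    r = pearson(gold, system)
    rho = pearson(ranks(gold), ranks(system))  # Spearman = Pearson on ranks
    return 2 * r * rho / (r + rho) if r + rho else 0.0

gold = [3.5, 1.0, 2.75, 0.5, 4.0]   # toy gold similarity scores
system = [3.1, 1.2, 2.4, 0.9, 3.8]  # toy system scores
print(round(harmonic_score(gold, system), 4))
```

Because both correlations lie in [-1, 1], the harmonic mean rewards systems that do well on both the raw scores (Pearson) and the induced ranking (Spearman).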


Note: if the run number was not specified in the submission, we considered the submission included in the CodaLab leaderboard as “run1” and the other one as “run2”.

We have marked some system names with the following symbols:


* - Systems with a star (*) used the shared training corpora indicated in Data and Tools (the Wikipedia corpus for subtask 1 and the Europarl parallel corpus for subtask 2).


& - These systems made use of the Wikipedia corpus in subtask 2.


(a.d.) - These teams submitted their results after the deadline.


(baseline) - As a baseline system we have included the results of the sense-based NASARI vectors (Camacho-Collados et al., AIJ 2016). We performed the evaluation with the 300-dimensional English embed vectors (version 3.0, http://lcl.uniroma1.it/nasari/) and used them for all languages.
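Vector-based baselines of this kind typically score a word pair by the cosine similarity of the two vectors. A minimal sketch with made-up toy vectors standing in for the 300-dimensional embeddings (not the actual NASARI data):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Hypothetical toy 3-dimensional vectors for illustration only.
vectors = {
    "car":  [0.9, 0.1, 0.3],
    "auto": [0.8, 0.2, 0.35],
    "bird": [0.1, 0.9, 0.2],
}

print(cosine(vectors["car"], vectors["auto"]))  # near 1: similar words
print(cosine(vectors["car"], vectors["bird"]))  # much lower
```

Because the same English vector space is used for all languages, the baseline scores every test pair with this single similarity function.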


Task Description Paper


The reference paper for the task (bib) is the following:


Contact Info

Contact persons:

email:
collados [at] di.uniroma1.it
mp792 [at] cam.ac.uk

Join our Google Group:
semeval-wordsimilarity
