VarDial Evaluation Campaign

The VarDial Evaluation Campaign 2018 is now finished. We would like to thank all teams that participated in the five tasks.

Results

The results of the five tasks are available here.

Description

The second VarDial Evaluation Campaign follows the previous DSL shared tasks on the identification of similar languages and language varieties (DSL 2014, DSL 2015, and DSL 2016, which also included Arabic dialects), as well as the first VarDial Evaluation Campaign, organized together with VarDial 2017.

We are organizing five shared tasks this year:

  • (ADI) Arabic Dialect Identification
    Task Organizers: Ahmed Ali (Qatar Computing Research Institute, HBKU, Qatar), Preslav Nakov (Qatar Computing Research Institute, HBKU, Qatar), Suwon Shon (Massachusetts Institute of Technology, United States), and James Glass (Massachusetts Institute of Technology, United States)
    Contact: amali(at)hbku.edu.qa
    Task Description: The third edition of the ADI task will address the multi-dialectal challenge of spoken Arabic in the broadcast news domain. In previous editions, we shared acoustic features and lexical word sequences extracted with a large-vocabulary continuous speech recognition (LVCSR) system. This year, we will add phonetic features, enabling researchers to use prosodic and phonetic information, which is helpful for distinguishing between dialects. Many researchers have combined acoustic with lexical features, so it will be interesting to explore the contribution of phonetic features to the overall dialect identification system combination (a minimal feature-fusion sketch appears after the task list).
    Tracks: Closed and Open
  • (GDI) German Dialect Identification
    Task Organizers: Yves Scherrer (University of Helsinki, Finland), and Tanja Samardžić (University of Zurich, Switzerland)
    Contact: yves.scherrer(at)gmail.com
    Task Description: After a successful first edition of the (Swiss) German Dialect Identification task in 2017, we are organizing a second edition. We provide updated data for the same Swiss German dialect areas as last year (Basel, Bern, Lucerne, Zurich) and add a fifth "surprise dialect" for which no training data is made available. Participants may take part in two sub-tracks, depending on whether they focus on the traditional four-way classification (without the surprise dialect) or the five-way classification (with it). A character n-gram baseline sketch for this and the other text-based tasks appears after the task list.
    Tracks: Closed
  • (MTT) Morphosyntactic Tagging of Tweets
    Task Organizers: Nikola Ljubešić (Jožef Stefan Institute, Slovenia and University of Zagreb, Croatia) and Jörg Tiedemann (University of Helsinki, Finland)
    Contact: jorg.tiedemann(at)helsinki.fi
    Task Description: This task focuses on morphosyntactic annotation (900+ labels) of non-canonical Twitter varieties of three South Slavic languages: Slovene, Croatian, and Serbian. Participants will be provided with large manually annotated and raw canonical datasets, as well as small manually annotated Twitter datasets. Two dimensions of variety can be exploited in the task: (i) canonical vs. non-canonical language, and (ii) the overall proximity of the three languages. A toy per-token tagging sketch follows the task list.
    Tracks: Open-Public
  • (DFS) Discriminating between Dutch and Flemish in Subtitles
    Task Organizers: Chris van der Lee (Tilburg University, The Netherlands), Stef Grondelaers (Radboud University, The Netherlands), Nelleke Oostdijk (Radboud University, The Netherlands), Dirk Speelman (University of Leuven, Belgium), and Antal van den Bosch (Meertens Institute and Radboud University, The Netherlands)
    Contact: c.vdrlee(at)tilburguniversity.edu
    Task Description: The task focuses on determining whether a text is written in the Netherlandic or the Flemish variant of the Dutch language. Participants are provided with a dataset of almost 100,000 professionally produced subtitles for movies, documentaries, and television shows. Since automatic classification studies on Netherlandic and Flemish Dutch are scarce and no Netherlandic/Flemish corpus of this size exists, we believe it is a scientifically interesting step forward to develop and compare language variety classifiers on subtitles, and thereby analyze the proximity of the two varieties in a new way. This is of interest not only for improving computational linguistics applications, but also for adding to insights in variationist linguistics in general.
    Tracks: Open-Public
  • (ILI) Indo-Aryan Language Identification
    Task Organizers: Ritesh Kumar (Bhim Rao Ambedkar University, India), Bornini Lahiri (Jadavpur University, India), and Mayank Jain (Jawaharlal Nehru University, India)
    Contact: vardial.ili(at)gmail.com
    Task Description: This task focuses on identifying five closely related languages of the Indo-Aryan language family: Hindi (also known as Khari Boli), Braj Bhasha, Awadhi, Bhojpuri, and Magahi. These languages form part of a continuum stretching from Western Uttar Pradesh (Hindi and Braj Bhasha) to Eastern Uttar Pradesh (Awadhi and Bhojpuri) and the neighbouring eastern state of Bihar (Bhojpuri and Magahi). Participants will be provided with approximately 15,000 sentences per language, mainly from the domain of literature published on the web as well as in print. This will be the first dataset made available for these languages (except Hindi). It will be useful not only for automatic language identification and for developing NLP applications, but also for gaining insights into the proximity of these languages, which are hypothesised to form a continuum and are often mistaken for varieties of Hindi, especially outside scholarly linguistic circles.
    Tracks: Open-Public
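
To make the ADI setup concrete, below is a minimal sketch of early feature fusion: per-utterance acoustic, lexical, and phonetic feature vectors are scaled and concatenated before classification. The variable names, dimensions, and random toy data are illustrative assumptions only and do not reflect the official ADI data format.

    # Hypothetical early-fusion sketch for ADI: concatenate acoustic,
    # lexical, and phonetic feature vectors per utterance, then classify.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_utt = 200
    # Random stand-ins for the three feature views described above.
    acoustic = rng.normal(size=(n_utt, 400))
    lexical = rng.normal(size=(n_utt, 300))
    phonetic = rng.normal(size=(n_utt, 100))
    labels = rng.integers(0, 5, size=n_utt)  # e.g., five dialect classes

    # Scale each view separately, then concatenate along the feature axis.
    views = [StandardScaler().fit_transform(v)
             for v in (acoustic, lexical, phonetic)]
    fused = np.hstack(views)

    clf = LogisticRegression(max_iter=1000).fit(fused, labels)
    print(clf.score(fused, labels))  # training accuracy on the toy data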
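
For the text-based tasks (GDI, DFS, and ILI), a character n-gram model is a common starting point. The sketch below uses a TF-IDF representation over character n-grams with a linear SVM; the file names and tab-separated layout are assumptions, not the official data format.

    # Hypothetical character n-gram baseline for GDI/DFS/ILI-style
    # classification of closely related language varieties.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    def read_tsv(path):
        """Read tab-separated lines: text, then label (assumed layout)."""
        texts, labels = [], []
        with open(path, encoding="utf-8") as f:
            for line in f:
                text, label = line.rstrip("\n").split("\t")
                texts.append(text)
                labels.append(label)
        return texts, labels

    train_texts, train_labels = read_tsv("train.txt")  # hypothetical file name
    with open("test.txt", encoding="utf-8") as f:      # hypothetical file name
        test_texts = [line.rstrip("\n") for line in f]

    # Character n-grams (2-5) are a strong baseline for similar varieties.
    clf = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5), sublinear_tf=True),
        LinearSVC(),
    )
    clf.fit(train_texts, train_labels)
    predictions = clf.predict(test_texts)
    print(predictions[:5])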
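
Finally, for MTT, morphosyntactic tagging can be approached as per-token classification over a small context window, as in the toy sketch below. The token/tag data, feature set, and labels are invented for illustration and do not follow the official MTT format.

    # Hypothetical per-token tagging sketch for MTT-style annotation.
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def token_features(sent, i):
        """Features for token i: the token, a suffix, and its neighbours."""
        return {
            "tok": sent[i].lower(),
            "suf3": sent[i][-3:],
            "prev": sent[i - 1].lower() if i > 0 else "<s>",
            "next": sent[i + 1].lower() if i < len(sent) - 1 else "</s>",
        }

    # Toy data standing in for the manually annotated Twitter sets.
    sents = [["ne", "vem", "kaj", "je", "to"]]
    tags = [["Q", "VERB", "PRON", "AUX", "PRON"]]  # invented labels

    X = [token_features(s, i) for s in sents for i in range(len(s))]
    y = [tag for sent_tags in tags for tag in sent_tags]

    tagger = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
    tagger.fit(X, y)
    print(tagger.predict([token_features(["kaj", "je"], 0)]))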

Submission Tracks

Each shared task has one or more submission tracks, summarized as follows:

  • Closed Training: Using ONLY the training data provided by the shared task organizers.
  • Open Training: Using ANY corpus for training, including the training data provided by the shared task organizers.
  • Open-Public Training: Using ANY corpus for training, including the training data provided by the shared task organizers, as long as all training data used is public or will be made public immediately after the submission.

For both the Open and Open-Public tracks, we ask participants to report the datasets used for training when submitting their system output.

Evaluation Campaign Dates

  • Training set release: March 12, 2018
  • Test set release: April 23, 2018
  • Submissions due: April 25, 2018
  • Results announced: April 27, 2018
  • System papers deadline: May 25, 2018
  • Review feedback: June 20, 2018
  • Camera-ready versions: June 30, 2018

System Description Papers

After the evaluation phase, participants will be invited to submit a paper describing their system. Papers will be published in the VarDial workshop proceedings and presented at the workshop.

Contact

For task-specific questions, please contact the respective task organizer(s).

For general questions about the VarDial evaluation campaign please contact Marcos Zampieri - m.zampieri(at)wlv.ac.uk