
CALL FOR PARTICIPATION

 

SemEval 2016 Task 5 - Aspect Based Sentiment Analysis (ABSA)
(http://alt.qcri.org/semeval2016/task5/)

 

The SemEval ABSA task for 2016 (SE-ABSA16) gives participants the opportunity to experiment with sentence-level ABSA, as in SE-ABSA15 (http://alt.qcri.org/semeval2015/task12/), and/or with text-level ABSA (a new subtask). The task provides training and testing datasets for several domains in 8 languages. For each domain (e.g. restaurants), a common set of annotation guidelines is used across all languages. SE-ABSA16 offers 3 subtasks, which are described below. Participating teams are free to submit runs (system outputs) for the subtasks, slots, domains, and languages of their choice.

 

DOMAINS & LANGUAGES
********************************
Restaurants (Customer Reviews): English, Dutch, French, Russian, Spanish, Turkish
Hotels (Customer Reviews): English, Arabic
Consumer Electronics (Customer Reviews):
          o Laptops: English
          o Mobile Phones: Chinese, Dutch
          o Digital Cameras: Chinese
Telecommunications (Twitter): Turkish

 

TASK DESCRIPTION
**************************

Subtask 1: Sentence-Level ABSA
----------------------------------------------
Given a review text about a target entity (laptop, restaurant, etc.), identify the following information:

Slot 1: Aspect Category. Identify every entity (E) and attribute (A) pair (E#A) towards which an opinion is expressed in the given text. E and A should be chosen from predefined domain-specific inventories of entity types (e.g. laptop, keyboard, operating system, restaurant, food, drinks) and attribute labels (e.g. performance, design, price, quality). Each E#A pair is considered an aspect category of the given text. The inventories of entity types and attribute labels are described in the annotation guidelines. Some examples highlighting the required information follow:
a. It is extremely portable and easily connects to WIFI at the library and elsewhere. → {LAPTOP#PORTABILITY}, {LAPTOP#CONNECTIVITY}
b. The exotic food is beautifully presented and is a delight in delicious combinations. → {FOOD#STYLE_OPTIONS}, {FOOD#QUALITY}

Slot 2: Opinion Target Expression (OTE). An opinion target expression (OTE) is an expression used in the given text to refer to the reviewed entity E of a pair E#A. The OTE is defined by its starting and ending offsets in the given text. The OTE slot takes the value “NULL” when there is no (explicit) mention of the entity E. This slot is required only for particular domains. Below are some examples:
a. Great for a romantic evening, but over-priced. → {AMBIENCE#GENERAL, “NULL”}, {RESTAURANT#PRICES, “NULL”}
b. The fajitas were delicious, but expensive. → {FOOD#QUALITY, “fajitas”}, {FOOD#PRICES, “fajitas”}

Slot 3: Sentiment Polarity. Each identified E#A pair of the given text has to be assigned a polarity (positive, negative, or neutral). The neutral label applies to mildly positive or mildly negative sentiment, as in the second example below.
a. The applications are also very easy to find and maneuver. → {SOFTWARE#USABILITY, positive}
b. The fajitas are nothing out of the ordinary. → {FOOD#QUALITY, “fajitas”, neutral}
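
For illustration only, the following minimal Python sketch shows how the three slots above could be combined into a single sentence-level annotation for example (b) of Slot 2. The data structure and names are our own assumptions; the official input/output format is specified on the task website.

    # Illustrative sketch only; the official data format is specified on the task website.
    # One opinion annotation combines Slot 1 (E#A category), Slot 2 (target
    # expression with character offsets), and Slot 3 (polarity).
    from collections import namedtuple

    Opinion = namedtuple("Opinion", ["category", "target", "start", "end", "polarity"])

    sentence = "The fajitas were delicious, but expensive."
    opinions = [
        Opinion("FOOD#QUALITY", "fajitas", 4, 11, "positive"),
        Opinion("FOOD#PRICES", "fajitas", 4, 11, "negative"),
    ]

    for o in opinions:
        # The offsets must point at the target expression within the sentence.
        assert sentence[o.start:o.end] == o.target
        print(o.category, o.target, o.polarity)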

 

Subtask 2: Text-Level ABSA
--------------------------------------
Given a set of customer reviews about a target entity (e.g. a restaurant), the goal is to identify a set of {aspect, polarity} tuples that summarize the opinions expressed in each review.
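
As an unofficial illustration, one simple way to build such a summary is to aggregate a system's sentence-level predictions for the review, e.g. by taking the majority polarity per aspect category. The aggregation rule below is our own assumption, not part of the task definition:

    # Unofficial baseline sketch: summarize a review by the majority polarity
    # among its sentence-level opinions for each aspect category.
    from collections import Counter, defaultdict

    # Hypothetical sentence-level predictions for one review: (category, polarity) pairs.
    sentence_opinions = [
        ("FOOD#QUALITY", "positive"),
        ("FOOD#QUALITY", "positive"),
        ("FOOD#PRICES", "negative"),
        ("SERVICE#GENERAL", "negative"),
    ]

    votes = defaultdict(Counter)
    for category, polarity in sentence_opinions:
        votes[category][polarity] += 1

    review_summary = {cat: counts.most_common(1)[0][0] for cat, counts in votes.items()}
    print(review_summary)
    # {'FOOD#QUALITY': 'positive', 'FOOD#PRICES': 'negative', 'SERVICE#GENERAL': 'negative'}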

 

Subtask 3: Out-of-domain ABSA
-------------------------------------------
The participating teams will have the opportunity to test their systems in a previously unseen domain for which no training data will be made available.

 

IMPORTANT DATES
************************
Evaluation period starts: January 10, 2016
Evaluation period ends: January 31, 2016
Paper submission due: February 28, 2016 [TBC]
Paper reviews due: March 31, 2016 [TBC]
Camera ready due: April 30, 2016 [TBC]
SemEval workshop: Summer 2016

 

MORE INFORMATION
**************************
The SemEval-2016 Task 5 website includes further details on the training data, evaluation, and examples of expected system outputs: http://alt.qcri.org/semeval2016/task5/

Team registration: see the task website above.
Join our Google Group: semeval-absa@googlegroups.com (Important announcements for the task will be posted there)

 

ORGANIZERS
********************
Maria Pontiki (ILSP, Athena Research and Innovation Center, Greece) [Primary Contact]
Dimitrios Galanis (ILSP, Athena Research and Innovation Center, Greece)
Haris Papageorgiou (ILSP, Athena Research and Innovation Center, Greece)
Suresh Manandhar (University of York, UK)
Ion Androutsopoulos (Athens University of Economics and Business, Greece)

 

Multilingual Datasets are supported by:
---------------------------------------------
Arabic: Mohammad AL-Smadi, Mahmoud Al-Ayyoub, Bashar Talafha, Omar Qawasmeh (Jordan University of Science and Technology, Jordan)
Chinese: Yanyan Zhao, Bing Qin, Duyu Tang, Ting Liu (Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology, China)
Dutch: Orphée De Clercq, Els Lefever, Véronique Hoste (Language and Translation Technology Team (LT3), Ghent University, Belgium)
French: Marianna Apidianaki, Xavier Tannier (LIMSI-CNRS, Orsay, France)
Russian: Natalia Loukachevitch (Lomonosov Moscow State University, Russia), Evgeny Kotelnikov, Pavel Blinov (Vyatka State Humanities University, Russia)
Spanish: Núria Bel (IULA, Universitat Pompeu Fabra (UPF), Barcelona, Spain), Salud María Jiménez Zafra (SINAI, Universidad de Jaén, Spain)
Turkish: Gülşen Eryiğit (Istanbul Technical University, Turkey), Fatih Samet Çetin, Ezgi Yıldırım, Can Özbey, Tanel Temel (Turkcell Global Bilgi, Turkey)


ANNOUNCEMENTS
**********************


  • Gold-standard annotations released! (available on the Data & Tools page)

  • Evaluation results released! (available on the Data & Tools page)

  • Call For Papers: http://alt.qcri.org/semeval2016/index.php?id=call-for-papers

  • Test Data for Phase B have been released! Results for Phase B must be submitted by Friday 29/01/2016, 23:00 GMT.

  • Test Data for Phase A have been released! Results for Phase A must be submitted by Friday 22/01/2016, 23:00 GMT.

  • Evaluation period: January 18-29, 2016

  • Evaluation/validation code and description, submission guidelines, and baselines released!

  • Annotation Guidelines Released!

  • Spanish Training Data Released!

  • Chinese Training Data Released!

  • French Training Data Released!

  • Turkish Training Data Released!

  • Dutch Training Data Released!

  • Russian Training Data Released!

  • Arabic Training Data Released!

  • English Training Data Released!

  • Trial Data Released!

  • ABSA16 Languages: English, Arabic, Chinese, Dutch, French, Russian, Spanish, Turkish!
