SemEval-2015 Task 9: CLIPEval Implicit Polarity of Events

 

Current research in sentiment analysis is mostly centered on lexical resources that store polarity values. In bag-of-words approaches, the polarity of a text depends on the presence or absence of a set of lexical items. This methodology is successful for opinions about entities (such as in reviews), but it fails when texts express complex opinions about events, involving perspectives and points of view.
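The limitation can be illustrated with a minimal sketch of such a lexicon-based bag-of-words scorer (the toy lexicon and function below are illustrative assumptions, not an actual resource or baseline of the task):

```python
# Toy polarity lexicon: illustrative entries only, not a real resource.
POLARITY_LEXICON = {"beautiful": 1, "happy": 1, "bad": -1, "lost": -1}

def bow_polarity(sentence):
    """Sum the lexicon scores of the tokens; the sign gives the polarity."""
    score = sum(POLARITY_LEXICON.get(tok.lower().strip(".,"), 0)
                for tok in sentence.split())
    return "POSITIVE" if score > 0 else "NEGATIVE" if score < 0 else "NEUTRAL"

# Works when polarity is lexically explicit...
print(bow_polarity("Yesterday I met a beautiful woman"))        # → POSITIVE
# ...but yields NEUTRAL on implicit polarity, where no marked item occurs.
print(bow_polarity("Last night I finished the sewing project"))  # → NEUTRAL
```

The second sentence is a pleasant event for its writer, yet no word in it carries lexical polarity, which is exactly the gap the task targets.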

 

With the aim of promoting a more holistic approach to sentiment analysis, combining the detection of implicit polarity with the expression of opinions on events, we propose CLIPEval, a task based on a dataset of events annotated as instantiations of pleasant and unpleasant events previously collected in psychological research as those on which human judgments converge (Lewinsohn and Amenson 1978; MacPhillamy and Lewinsohn 1982).
Some of these events are general and can be instantiated by a cluster of more specific events (for example, re-arranging or redecorating my room or house is the general event for painting the wall or dyeing the curtains).

Given a sentence about an event that contains no polarity-marked items, the CLIPEval task concerns classifying the event as pleasant or unpleasant for an experiencer who writes in the first person.

We will annotate sentences from a web corpus. All extracted textual units will be manually annotated at the sentence level according to four classes:

i.) Explicit pleasant event (Yesterday I met a beautiful woman);
ii.) Explicit unpleasant event (I ate a bad McRib this week);
iii.) Implicit pleasant event (Last night I finished the sewing project);
iv.) Implicit unpleasant event (Today, I lost a bet with my grandma).

 

All the textual units that are instantiations of psychologically grounded pleasant and unpleasant events will be labelled accordingly:

 

After my lesson, I walked the beach to Cabarete, a laid-back village lined with beach bars, seafood restaurants and palm trees [instantiation of Taking a walk]

As a family friend, I’m happy he’s in the place he wanted to be [instantiation of Seeing good things happen to my family or friends]

 

The CLIPEval task will be organized around two subtasks:

SUBTASK A: identify the polarity value associated with the event instance. Participants are required to associate each sentence with a polarity value (POSITIVE, NEGATIVE, or NEUTRAL). Evaluation and ranking will be based on the F1 score for the polarity values. For this subtask, participants are not required to identify the event instantiation.

SUBTASK B: identify the event instantiations and the associated polarity values. Participants are required to associate each sentence with both a class label for the event instance and a polarity value (POSITIVE, NEGATIVE, or NEUTRAL). The training and test sets will provide the set of event-instance labels. Evaluation and ranking will be based on the F1 score for the correct identification of both the event-instance type and the polarity value.
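Both subtasks are ranked by F1 score. As an illustration of how per-label F1 can be computed over the three polarity values (a minimal sketch; the task's exact averaging scheme is not specified here, and the function name is our own):

```python
def per_class_f1(gold, pred, labels=("POSITIVE", "NEGATIVE", "NEUTRAL")):
    """Return the F1 score for each label, computed from paired
    gold/predicted label sequences of equal length."""
    scores = {}
    for lab in labels:
        tp = sum(1 for g, p in zip(gold, pred) if g == p == lab)
        fp = sum(1 for g, p in zip(gold, pred) if p == lab and g != lab)
        fn = sum(1 for g, p in zip(gold, pred) if g == lab and p != lab)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores[lab] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return scores
```

For Subtask B the same computation would apply, with a prediction counted as correct only when both the event-instance label and the polarity value match the gold annotation.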

 

The training and test sets will be composed of implicit pleasant and implicit unpleasant events.
An additional dataset with explicit events will be provided, with the aim of comparing against what can be detected by bag-of-words approaches based on lexical resources.

 

REFERENCES

Lewinsohn, P. M. and C. S. Amenson. 1978. Some Relations between Pleasant and Unpleasant Events and Depression. Journal of Abnormal Psychology 87(6): 644-654.
MacPhillamy, D. and P. M. Lewinsohn. 1982. The Pleasant Event Schedule: Studies on Reliability, Validity, and Scale Intercorrelation. Journal of Consulting and Clinical Psychology 50(3): 363-380.

Contact Info

Organizers

  • Irene Russo, ILC-CNR, Italy
  • Tommaso Caselli, VUA, The Netherlands
  • Carlo Strapparava, Fondazione Bruno Kessler, Italy

email: clipeval2015@gmail.com
google group: clipeval2015@googlegroups.com

Other Info

Announcements

  • 06/06/2014 Trial data (104 examples) online
  • 9/17/2014 Training data (1280 examples) online
  • 11/27/2014 Revised training data online
  • 8/12-15/12 2014 Evaluation period
  • 1/19/2015 Official results released; the gold standard and evaluation script have been uploaded in "Data and Tools"