SemEval-2015 Task 10: Sentiment Analysis in Twitter

 

SUMMARY

In the past decade, new forms of communication such as microblogging and text messaging have emerged and become ubiquitous. While there is no limit to the range of information conveyed by tweets and texts, these short messages are often used to share opinions and sentiments that people have about what is going on in the world around them. We propose this task and the development of a Twitter sentiment corpus to promote research that will lead to a better understanding of how sentiment is conveyed in tweets and short messages. There will be five subtasks: an expression-level task, a message-level task, a topic-related task, a trend task, and a task on the prior polarity of terms; participants may choose to participate in one or more subtasks.

 

INTRODUCTION

In the past decade, new forms of communication such as microblogging and text messaging have emerged and become ubiquitous. While there is no limit to the range of information conveyed by tweets and texts, these short messages are often used to share opinions and sentiments that people have about what is going on in the world around them.
Working with these informal text genres presents challenges for natural language processing beyond those typically encountered when working with more traditional genres, such as newswire data. Tweets and texts are short: a sentence or a headline rather than a document. The language used is very informal, with creative spelling and punctuation, misspellings, slang, new words, URLs, and genre-specific terminology and abbreviations, such as RT for “re-tweet” and #hashtags, which are a type of tagging for Twitter messages. How to handle such challenges so as to automatically mine and understand the opinions and sentiments that people are communicating has only very recently been the subject of research (Jansen et al., 2009; Barbosa and Feng, 2010; Bifet and Frank, 2010; Davidov et al., 2010; O’Connor et al., 2010; Pak and Paroubek, 2010; Tumasjan et al., 2010; Kouloumpis et al., 2011).
Another aspect of social media data such as Twitter messages is that they include rich structured information about the individuals involved in the communication. For example, Twitter maintains information about who follows whom, and re-tweets and tags inside tweets provide discourse information. Modeling such structured information is important because (i) it can lead to more accurate tools for extracting semantic information, and (ii) it provides a means for empirically studying properties of social interactions (e.g., we can study properties of persuasive language or what properties are associated with influential users).
We believe that a freely available, annotated corpus that can be used as a common testbed is needed in order to promote research that will lead to a better understanding of how sentiment is conveyed in tweets and texts. Our primary goal in this task is to create such a resource. The few corpora with detailed opinion and sentiment annotation that have been made freely available, e.g., the MPQA corpus (Wiebe et al., 2005) of newswire data, have proved to be valuable resources for learning about the language of sentiment. While a few Twitter sentiment datasets have been created, they are either small and proprietary, such as the i-sieve corpus (Kouloumpis et al., 2011), or they rely on noisy labels obtained from emoticons or hashtags. Furthermore, no Twitter or text-message corpus with expression-level sentiment annotations has been made available so far, other than in the precursor SemEval tasks (SemEval-2013 Task 2 and SemEval-2014 Task 9), which we extend in the present task.

 

TASK DESCRIPTION

There will be five subtasks: an expression-level subtask, a message-level subtask, a topic-related subtask, a trend subtask, and a subtask on the prior polarity of terms; participants may choose to participate in one or more subtasks.
 

  • Subtask A: Contextual Polarity Disambiguation: Given a message containing a marked instance of a word or phrase, determine whether that instance is positive, negative or neutral in that context.
  •  Subtask B: Message Polarity Classification: Given a message, classify whether the message is of positive, negative, or neutral sentiment. For messages conveying both a positive and negative sentiment, whichever is the stronger sentiment should be chosen.
  • Subtask C: Topic-Based Message Polarity Classification: Given a message and a topic, classify whether the message is of positive, negative, or neutral sentiment towards the given topic. For messages conveying both a positive and negative sentiment towards the topic, whichever is the stronger sentiment should be chosen.
  • Subtask D: Detecting Trends Towards a Topic: Given a set of messages on a given topic from the same period of time, determine whether the dominant sentiment towards the target topic in these messages is (a) strongly positive, (b) weakly positive, (c) neutral, (d) weakly negative, or (e) strongly negative.
  • Subtask E: Determining strength of association of Twitter terms with positive sentiment (or, degree of prior polarity): Given a word or a phrase, provide a score between 0 and 1 that is indicative of its strength of association with positive sentiment. A score of 1 indicates maximum association with positive sentiment (or least association with negative sentiment) and a score of 0 indicates least association with positive sentiment (or maximum association with negative sentiment). If a word is more positive than another, it should have a higher score than the other. See the complete task description for further details. (A minimal illustrative sketch of Subtasks B and E follows this list.)
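
The following is a minimal, hypothetical sketch (in Python) of what systems addressing two of the subtasks might look like: a toy lexicon-based classifier in the style of Subtask B, and a PMI-based prior-polarity score in the spirit of Subtask E. The lexicon, counts, threshold, and scoring formula are invented for illustration only; they are not part of the official task data, baselines, or evaluation.

    import math
    import re

    # Tiny hand-made lexicon, purely for illustration; real systems would use
    # large lexicons learned from emoticon- or hashtag-labelled tweets.
    LEXICON = {"good": 1.0, "great": 1.5, "love": 1.2,
               "bad": -1.0, "awful": -1.5, "hate": -1.2}

    def tokenize(message):
        """Lowercase the message and keep word-like tokens, #hashtags, @mentions."""
        return re.findall(r"[#@]?\w+", message.lower())

    def message_polarity(message, threshold=0.5):
        """Subtask B style: label a whole tweet positive / negative / neutral
        by summing lexicon scores over its tokens (a crude baseline)."""
        score = sum(LEXICON.get(tok, 0.0) for tok in tokenize(message))
        if score > threshold:
            return "positive"
        if score < -threshold:
            return "negative"
        return "neutral"

    def prior_polarity(pos_count, neg_count, pos_total, neg_total):
        """Subtask E style: map a term to a score in (0, 1) from hypothetical
        counts of how often it occurs in positively vs. negatively labelled
        tweets, via a PMI-style difference squashed with a logistic function."""
        pmi_pos = math.log2((pos_count + 1) / (pos_total + 1))  # add-one smoothing
        pmi_neg = math.log2((neg_count + 1) / (neg_total + 1))
        sentiment = pmi_pos - pmi_neg   # > 0 leans positive, < 0 leans negative
        return 1.0 / (1.0 + math.exp(-sentiment))

    if __name__ == "__main__":
        print(message_polarity("I love this phone, it's great!"))   # positive
        print(message_polarity("Monday again"))                     # neutral
        # invented counts: a term seen 800 times with positive labels and
        # 50 times with negative ones, out of 10,000 tweets of each kind
        print(round(prior_polarity(800, 50, 10000, 10000), 2))      # ~0.98

Real systems typically replace the toy lexicon with large automatically built resources (e.g., lexicons derived from emoticon- or hashtag-labelled tweets, as in Mohammad et al., 2013) and use supervised classifiers rather than a fixed threshold.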

 

REFERENCES

  1. Barbosa, L. and Feng, J. 2010. Robust sentiment detection on Twitter from biased and noisy data. Proceedings of COLING.
  2. Bifet, A. and Frank, E. 2010. Sentiment knowledge discovery in Twitter streaming data. Proceedings of the 14th International Conference on Discovery Science.
  3. Davidov, D., Tsur, O., and Rappoport, A. 2010. Enhanced sentiment learning using Twitter hashtags and smileys. Proceedings of COLING.
  4. Jansen, B.J., Zhang, M., Sobel, K., and Chowdury, A. 2009. Twitter power: Tweets as electronic word of mouth. Journal of the American Society for Information Science and Technology 60(11):2169-2188.
  5. Kiritchenko, S., Zhu, X., and Mohammad, S.M. 2014. Sentiment analysis of short informal texts. Journal of Artificial Intelligence Research 50:723-762.
  6. Kouloumpis, E., Wilson, T., and Moore, J. 2011. Twitter sentiment analysis: The good the bad and the OMG! Proceedings of ICWSM.
  7. Mohammad, S.M., Kiritchenko, S., and Zhu, X. 2013. NRC-Canada: Building the state-of-the-art in sentiment analysis of tweets. Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval-2013), Atlanta, Georgia.
  8. O’Connor, B., Balasubramanyan, R., Routledge, B., and Smith, N. 2010. From tweets to polls: Linking text sentiment to public opinion time series. Proceedings of ICWSM.
  9. Nakov, P., Kozareva, Z., Ritter, A., Rosenthal, S., Stoyanov, V., and Wilson, T. 2013. SemEval-2013 Task 2: Sentiment analysis in Twitter. Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval-2013), Atlanta, Georgia.
  10. Pak, A. and Paroubek, P. 2010. Twitter as a corpus for sentiment analysis and opinion mining. Proceedings of LREC.
  11. Tumasjan, A., Sprenger, T.O., Sandner, P., and Welpe, I. 2010. Predicting elections with Twitter: What 140 characters reveal about political sentiment. Proceedings of ICWSM.
  12. Wiebe, J., Wilson, T., and Cardie, C. 2005. Annotating expressions of opinions and emotions in language. Language Resources and Evaluation 39(2-3):165-210.

 

CONTACT

  • Sara Rosenthal, Columbia University
  • Alan Ritter, The Ohio State University
  • Veselin Stoyanov, Facebook
  • Svetlana Kiritchenko, NRC Canada
  • Saif Mohammad, NRC Canada
  • Preslav Nakov, Qatar Computing Research Institute

email: semevaltweet@googlegroups.com
