#HashtagWars: Learning a Sense of Humor

 

Introduction

Humor is an essential trait of human intelligence that has not yet been addressed extensively in current AI research. Most work on humor detection to date has approached the problem as binary classification: humor or not humor. This representation ignores both the continuous nature of humor and the fact that humor is subjective. To address these concerns, we introduce the task #HashtagWars: Learning a Sense of Humor. The goal of this task is to encourage the development of methods that take into account the continuous nature of humor and that aim to characterize the sense of humor of a particular source. To further such research, we propose a dataset based on humorous responses submitted to a Comedy Central TV show.

Picture from hulu.com

Debuting in Fall 2013, the Comedy Central show @midnight (http://www.cc.com/shows/-midnight) is a late-night "game-show" that presents a modern outlook on current events by focusing on content from social media. The show's contestants (generally professional comedians or actors) are awarded points based on how funny their answers are. The segment of the show that best illustrates this attitude is the Hashtag Wars (HW).

 

#HashtagWars

In every episode, the show's host proposes a topic in the form of a hashtag, and the show's contestants must provide tweets that fit this hashtag. Viewers are encouraged to tweet their own responses. In the next episode, the show selects the ten funniest tweets from the viewers' responses, and from this top-10 it picks a single winning tweet. From the point of view of the show, all tweets in the top-10 are funnier than the non-top-10 tweets, and the winning tweet is funnier than the rest of the top-10. Therefore, we are able to apply labels that indicate how relatively humorous the show finds a given tweet.
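The labeling scheme above (winner > rest of top-10 > everything else) can be sketched in a few lines of Python. This is only an illustration of the implied ranking, not the official data format; the tweet identifiers and function names are hypothetical.

```python
def label_tweets(winner, top10_rest, others):
    """Assign a humor rank: 2 = winning tweet, 1 = rest of top-10, 0 = other."""
    labels = {winner: 2}
    labels.update({t: 1 for t in top10_rest})
    labels.update({t: 0 for t in others})
    return labels

def comparable_pairs(labels):
    """Yield (funnier, less_funny) for every pair of tweets with different ranks."""
    tweets = list(labels)
    for i, a in enumerate(tweets):
        for b in tweets[i + 1:]:
            if labels[a] > labels[b]:
                yield (a, b)
            elif labels[b] > labels[a]:
                yield (b, a)

# Toy hashtag with a winner, two other top-10 tweets, and one non-top-10 tweet.
labels = label_tweets("t_win", ["t1", "t2"], ["t3"])
pairs = list(comparable_pairs(labels))
# Winner vs. each of the 3 others, plus each top-10 tweet vs. the other tweet: 5 pairs.
```

Note that tweets with the same rank (e.g. two non-winning top-10 tweets) are not comparable under this scheme, so they produce no pair.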

 

We advise potential participants to watch clips from the HW segment available from the show's webpage for a better understanding of the task.

 

The contest's format provides a natural way to move beyond the coarse binary framing common in machine-learning approaches to humor: tweets are treated not as humor/non-humor, but as exhibiting varying degrees of wit and cleverness. Moreover, given the subjective nature of humor, the labels in the dataset are only "gold" with respect to the show's sense of humor. This framing becomes concrete when supervised systems are trained on the dataset.

 

 

Goal of the task

The goal of the task is to learn to characterize the sense of humor represented in the show: given a set of hashtags, predict which tweets the show will find funnier within each hashtag. The degree of humor of a given tweet is determined by the labels provided by the show.

 

As an initial effort to leverage the HW dataset, we evaluate predictive models on a pairwise comparison task: given a pair of tweets, select the funnier one. The pairs are derived from the labels the show assigns to individual tweets.
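A minimal sketch of this evaluation: a model scores each tweet, and accuracy is the fraction of labeled pairs for which the model ranks the funnier tweet higher. The scoring function and tweet names below are stand-ins, not the official evaluation script.

```python
def pairwise_accuracy(pairs, score):
    """pairs: iterable of (funnier, less_funny) tweet pairs.
    score: function mapping a tweet to a real-valued funniness score.
    Returns the fraction of pairs the model orders correctly."""
    pairs = list(pairs)
    correct = sum(1 for a, b in pairs if score(a) > score(b))
    return correct / len(pairs)

# Toy example with hypothetical model scores.
pairs = [("a", "b"), ("a", "c"), ("b", "c")]   # left tweet is the funnier one
scores = {"a": 0.9, "b": 0.5, "c": 0.7}
acc = pairwise_accuracy(pairs, scores.get)
# The model gets ("a","b") and ("a","c") right but ("b","c") wrong: accuracy 2/3.
```

A model that only produces a total ranking per hashtag (rather than scores) fits the same interface: assign each tweet its rank position as the score.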

 

 

There have been numerous computational approaches to humor within the last decade. However, the majority of this work reduces humor to two classes: humor and non-humor. This representation ignores the continuous nature of humor and does not account for the subjectivity of perceiving humor. Humor is an essential trait of human intelligence that has not been addressed extensively in current AI research, and we believe that moving beyond the binary, "objective" approach to humor detection is a promising path toward advancing this work. Lastly, understanding the humor in many tweets requires external knowledge; for example, tweets are often puns on proper names. Consequently, we believe this is an interesting dataset for developing models that strive to incorporate external knowledge.

Contact Info

Discussion Group
Hashtag Wars SemEval

Other Info

Announcements

  • 2/6/2017 [new]
    The results have been posted!
  • 1/9/2017
    Evaluation data has been released!
  • 12/6/2016
    CodaLab competitions are ready!
  • 10/19/2016
    Evaluation scripts for both subtasks have been released!
  • 9/5/2016
    Train data has been released!
  • 8/1/2016
    Trial data has been released!
  • For participation in any of this year's tasks, please register by completing this form