Natural Language Processing: Classification of Sentiment on Twitter Data
For this project, we use a Twitter dataset created for the SemEval shared tasks and labeled for sentiment. The Semantic Evaluation conference, SemEval, has recently added sentiment detection in Twitter messages and other social media genres to its shared tasks. In 2014, for example, it ran as Task 9, sub-task B, Message Polarity Classification: http://alt.qcri.org/semeval2014/task9/. The data was manually labeled, and each dataset is distributed as Twitter ids paired with the manual labels. Five labels are used for the messages: “positive”, “negative”, “objective”, “neutral”, and “objective OR neutral”. For this series of experiments, three classes are used: “pos”, “neg”, and “neu”. To obtain the actual tweets, a script is run that downloads them from Twitter; if a Twitter user has retracted a tweet, Twitter no longer sends it out and it is marked as “Not Available”. The dataset used here was collected from Twitter in Spring 2014.
The tweet dataset needs to be preprocessed to remove noise, such as stop words and inconsistently cased words. The following steps, in this order, are used to preprocess the tweets for one of the experiments:
- Convert every word to lower case.
- Filter out non-alphabetic tokens, i.e. tokens matching the regular expression ^[^a-z]+$ (applied after lowercasing, this removes tokens that contain no letters).
- Filter out stop words; the list of stop words is read from ‘stopwords_twitter.txt’.
For the other experiments, words are converted to lower case.
Experiments are performed with three kinds of feature sets: document (bag of words), subjectivity lexicon (SL), and part of speech (POS). To create these feature sets, all the words are normally taken into consideration; it is generally recommended not to filter any words for them. However, in one of the experiments, the document feature set is instead created from the filtered (preprocessed) tweets.
In a bag-of-words feature set, all the words in the corpus are collected and some number of the most frequent words are selected as the word features.
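A minimal sketch of such a bag-of-words feature extractor, following NLTK's common `contains(word)` boolean-feature convention; the function names and the feature count of 200 are illustrative, not taken from the project.

```python
from collections import Counter

def bag_of_words_features(all_tweets, num_features=200):
    """Build a document-feature extractor from a corpus.

    `all_tweets` is a list of token lists.  Returns the chosen word
    features and a function mapping a document to a feature dict.
    """
    # Collect every word in the corpus and keep the most frequent ones.
    counts = Counter(w for tweet in all_tweets for w in tweet)
    word_features = [w for w, _ in counts.most_common(num_features)]

    def document_features(document):
        words = set(document)
        # One boolean feature per frequent word, NLTK-style.
        return {f"contains({w})": (w in words) for w in word_features}

    return word_features, document_features
```

The extractor returned by `bag_of_words_features` can be applied to each (tweet, label) pair to build the feature sets a classifier trains on.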
This feature set defines features based on counts of subjectivity-lexicon words in the document. The negative feature is the number of weakly negative words plus 2 × the number of strongly negative words; the positive feature is defined analogously. Neutral words are not counted.
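A sketch of this weighting scheme, assuming a lexicon that maps each word to a (strength, polarity) pair in the style of the MPQA subjectivity lexicon; the small inlined lexicon is a placeholder, not the real one.

```python
# Placeholder lexicon: word -> (strength, polarity), in the style of
# the MPQA subjectivity lexicon the project's SL features are based on.
SL = {
    "good": ("weaksubj", "positive"),
    "great": ("strongsubj", "positive"),
    "bad": ("weaksubj", "negative"),
    "terrible": ("strongsubj", "negative"),
}

def sl_features(document, lexicon=SL):
    """Count subjectivity words: weak words count once, strong twice."""
    weak_pos = strong_pos = weak_neg = strong_neg = 0
    for word in document:
        if word in lexicon:
            strength, polarity = lexicon[word]
            if polarity == "positive":
                if strength == "weaksubj":
                    weak_pos += 1
                else:
                    strong_pos += 1
            elif polarity == "negative":
                if strength == "weaksubj":
                    weak_neg += 1
                else:
                    strong_neg += 1
    # Neutral words are ignored entirely.
    return {
        "positivecount": weak_pos + 2 * strong_pos,
        "negativecount": weak_neg + 2 * strong_neg,
    }
```

For example, a tweet containing one strongly positive word and one weakly negative word yields a positive count of 2 and a negative count of 1.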
The part-of-speech feature set takes a document (a list of words), runs the default POS tagger (the Stanford tagger) on it, and returns a feature dictionary obtained by counting four types of POS tags to use as features.
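A sketch of this counting step, assuming the four tag types are nouns, verbs, adjectives, and adverbs (a common choice, not stated in the text) and Penn Treebank tags. The tagger is injected as a parameter so any tagger with the `nltk.pos_tag` interface (word list in, (word, tag) pairs out) can be used.

```python
def pos_features(document, tagger):
    """Return counts of four coarse POS categories as features.

    `tagger` maps a list of words to (word, tag) pairs, e.g.
    nltk.pos_tag.  Tags are assumed to be Penn Treebank tags.
    """
    counts = {"nouns": 0, "verbs": 0, "adjectives": 0, "adverbs": 0}
    for _, tag in tagger(document):
        if tag.startswith("NN"):
            counts["nouns"] += 1
        elif tag.startswith("VB"):
            counts["verbs"] += 1
        elif tag.startswith("JJ"):
            counts["adjectives"] += 1
        elif tag.startswith("RB"):
            counts["adverbs"] += 1
    return counts
```

Prefix matching on the tag (e.g. `NN`, `NNS`, `NNP` all count as nouns) collapses the fine-grained Treebank tags into the four coarse categories.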
Results are reported for the following experiments:
- Performance of the document feature set on preprocessed (filtered) tweets
- Performance of the document feature set on all tweets
- Performance of the subjectivity lexicon feature set on all tweets
- Performance of the part-of-speech feature set on all tweets
- Performance of the subjectivity lexicon feature set on all tweets with k-fold cross validation
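The k-fold cross-validation used in the last experiment can be sketched as follows. The function names are illustrative; `train_and_eval` stands in for whatever routine trains a classifier on one split and returns its accuracy on the other.

```python
def k_fold_accuracy(featuresets, train_and_eval, k=5):
    """Average accuracy over k folds of a list of (features, label) pairs.

    `train_and_eval(train, test)` trains on `train` and returns the
    accuracy measured on `test`.
    """
    fold_size = len(featuresets) // k
    scores = []
    for i in range(k):
        # The i-th slice is held out for testing; the rest is training data.
        test = featuresets[i * fold_size:(i + 1) * fold_size]
        train = featuresets[:i * fold_size] + featuresets[(i + 1) * fold_size:]
        scores.append(train_and_eval(train, test))
    return sum(scores) / k
```

Averaging over folds gives a more stable performance estimate than a single train/test split, which matters on a dataset of this modest size.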
When the results of all these classification models are compared, there is no major difference in their performance measures. For future work, additional features could be added: entity extraction, for example, could enrich the feature set and improve the results, leading to a better model. This was out of scope for this project, but it would be a good extension to take up.