Sentiment Analysis, Twitter and R

Within the Data Science and Analytic Higher Diploma, I have been asked:

“Research an area of sentiment analysis that is of interest to you. Describe the process that is required to implement the analysis and how you would do this.”

So I threw myself into researching Sentiment Analysis on Google, and I found a YouTube video that explains how to use R and Twitter to do some Sentiment Analysis.

Personally, I think Michael Herman did a fair job with this video; however, it was published in 2012, so between changes on YouTube's side and newer R versions, the code provided shows various errors.

So, after playing around with the code provided in the video and doing some searching, I successfully analysed some data.

Here is my R code for the Sentiment Analysis proposed by Michael:

#Import data
#Install packages required
install.packages('stringr', dependencies = TRUE) #Use this code to install any packages that you don't have already installed i.e. ROAuth, twitteR...
#Open the libraries that you will use
library(twitteR)
library(ROAuth)
library(RCurl)
library(plyr)
library(stringr)
# Set SSL certs globally
options(RCurlOptions = list(cainfo = system.file("CurlSSL", "cacert.pem", package = "RCurl")))
reqURL <- ""    #important at the moment that it is https -> Twitter needs a secure connection
accessURL <- "" #set this to Twitter's OAuth access-token URL
authURL <- ""   #set this to Twitter's OAuth authorise URL
consumerKey <- "xxxxxxxxxxxxxxxxxxxxxxxx" #if you don't have these values, you can get them on the Twitter developer page - create an API (it doesn't cost anything and you get the values pretty easily)
consumerSecret <- "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
twitCred <- OAuthFactory$new(consumerKey = consumerKey,
                             consumerSecret = consumerSecret,
                             requestURL = reqURL,
                             accessURL = accessURL,
                             authURL = authURL)
twitCred$handshake() # the program will ask you for a PIN - this is obtained by authorising the app in your browser.
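As an aside, newer versions of the twitteR package replace this whole ROAuth handshake with a single helper. A hedged sketch (the keys are placeholders for your own credentials from the Twitter developer page):

```r
library(twitteR)

# one-call replacement for the OAuthFactory/handshake steps in newer twitteR
setup_twitter_oauth(consumer_key    = "xxxxxxxx",
                    consumer_secret = "xxxxxxxx",
                    access_token    = "xxxxxxxx",  # access token/secret also come from the developer page
                    access_secret   = "xxxxxxxx")
```

If you are on a recent twitteR release and the handshake above keeps failing, this is the first thing to try.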
tweets = searchTwitter("#abortion", n = 1500) #here we are using #abortion because it is the example from the video
length(tweets) #it tells you how many tweets you downloaded (it should be 1500 - notice n = 1500 on the previous line)
#Data Manipulation and Algorithm Implementation
tweets.text = laply(tweets, function(t) t$getText())
#now if you haven't downloaded the word-list files that Michael mentions in his video, you definitely need to do it now. Remember to save them in the same folder as your R code
score.sentiment = function(sentences, pos.words, neg.words, .progress = 'none')
{
# we got a vector of sentences. plyr will handle a list or a vector as an “l” for us
# we want a simple array of scores back, so we use “l” + “a” + “ply” = laply:
scores = laply(sentences, function(sentence, pos.words, neg.words) {
# clean up sentences with R’s regex-driven global substitute, gsub():
sentence = gsub('[[:punct:]]', '', sentence)
sentence = gsub('[[:cntrl:]]', '', sentence)
sentence = gsub('\\d+', '', sentence)
# and convert to lower case:
sentence = tolower(sentence)
# split into words. str_split is in the stringr package
word.list = str_split(sentence, '\\s+')
# sometimes a list() is one level of hierarchy too much
words = unlist(word.list)
# compare our words to the dictionaries of positive & negative terms
pos.matches = match(words, pos.words)
neg.matches = match(words, neg.words)
# match() returns the position of the matched term or NA
# we just want a TRUE/FALSE:
pos.matches = !is.na(pos.matches)
neg.matches = !is.na(neg.matches)
# and conveniently enough, TRUE/FALSE will be treated as 1/0 by sum():
score = sum(pos.matches) - sum(neg.matches)
}, pos.words, neg.words, .progress=.progress )
scores.df = data.frame(score = scores, text = sentences)
return(scores.df)
}
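To sanity-check the scoring idea before pointing it at real tweets, here is a tiny self-contained sketch of the same match()-based logic. The word lists and sentences are made up for illustration (they are not from the real lexicon files), and base R's strsplit stands in for stringr's str_split so it runs with no extra packages:

```r
# minimal stand-alone version of the scoring logic
toy.score = function(sentence, pos.words, neg.words) {
  sentence = tolower(gsub('[[:punct:]]', '', sentence))   # strip punctuation, lower-case
  words = strsplit(sentence, '\\s+')[[1]]                 # base-R equivalent of str_split
  sum(!is.na(match(words, pos.words))) - sum(!is.na(match(words, neg.words)))
}

pos = c('good', 'great', 'love')   # toy positive lexicon
neg = c('bad', 'awful', 'hate')    # toy negative lexicon

toy.score('I love this great idea', pos, neg)  # 2 positive hits, 0 negative -> 2
toy.score('What an awful, bad day', pos, neg)  # 0 positive hits, 2 negative -> -2
```

The score is simply (positive hits) minus (negative hits), which is exactly what the full function above computes per tweet.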
#these positive and negative word lists are used to score the tweets about abortion
pos = scan('positive-words.txt', what = 'character', comment.char = ';')
neg = scan('negative-words.txt', what = 'character', comment.char = ';')
#Analyse the results
analysis = score.sentiment(tweets.text, pos, neg, .progress = 'none')
table(analysis$score)
mean(analysis$score)
median(analysis$score)
hist(analysis$score)
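Beyond the raw histogram, a quick way to read the results is to bucket the scores into negative / neutral / positive. The cut-offs below are my own choice (score below 0 is negative, 0 is neutral, above 0 is positive), and the scores vector is a toy stand-in for analysis$score:

```r
# toy scores standing in for analysis$score
scores = c(-3, -1, 0, 0, 1, 2, 2, 4)

# bucket scores into three sentiment classes
sentiment = cut(scores, breaks = c(-Inf, -1, 0, Inf),
                labels = c('negative', 'neutral', 'positive'))
table(sentiment)  # counts per class: 2 negative, 2 neutral, 4 positive
```

A table like this gives a one-line summary of the overall mood of the 1500 tweets.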