In my previous blog post, I analyzed my Twitter archive and explored some aspects of my tweeting behavior. When do I tweet, how much do I retweet people, do I use hashtags? These are examples of one kind of question, but what about the actual verbal content of my tweets, the text itself? What kinds of questions can we ask and answer about the text in some programmatic way? This is what is called natural language processing, and I’ll take a first shot at it here. First, let’s load some libraries I’ll use in this analysis and open up my Twitter archive.
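Here is a minimal sketch of that setup. The file name tweets.csv and the text and timestamp column names are assumptions based on the standard Twitter archive export; the libraries loaded are the ones used throughout this post.

```r
library(tm)         # text mining: corpora, transformations, term-document matrices
library(wordcloud)  # word cloud plots (brings RColorBrewer along for palettes)
library(dplyr)      # data manipulation
library(lubridate)  # date/time handling

# read the archive's tweets.csv and parse the timestamps into POSIXct
tweets <- read.csv("tweets.csv", stringsAsFactors = FALSE)
tweets$timestamp <- ymd_hms(tweets$timestamp)
```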
Cloudy With a Chance of Lots of Words
Perhaps the simplest question we might want to ask is about the frequency of words used in text. What words do I use a lot in my tweets? Let’s find out, and make a common visualization of term frequency called a word cloud.
First, let’s remove the Twitter handles (i.e., @juliasilge or similar) from the tweet texts so we can just have the real English words.
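A sketch of one way to do that with a regular expression; the original analysis may have used a different pattern or a different string-handling package.

```r
# drop anything that looks like a Twitter handle (@ followed by word characters)
nohandles <- gsub("@\\w+", "", tweets$text)
```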
Now, let’s go through and clean up the remaining text. We will turn this into a “corpus”, a collection of documents containing natural language text that the tm text mining package knows how to deal with. After that, we’ll go through and apply specific mappings to the documents (i.e., tweet texts). These can actually be done all in one call to tm_map, but I have them separated here to show clearly what is going on.
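A sketch of that cleaning pipeline using tm’s standard transformations; the leftover “words” removed in the second-to-last step are a placeholder, not the original list.

```r
wordCorpus <- Corpus(VectorSource(nohandles))
wordCorpus <- tm_map(wordCorpus, removePunctuation)                  # drop punctuation
wordCorpus <- tm_map(wordCorpus, content_transformer(tolower))       # lower case
wordCorpus <- tm_map(wordCorpus, removeWords, stopwords("english"))  # drop stop words
wordCorpus <- tm_map(wordCorpus, removeWords, c("amp"))              # leftover "words" from special characters (placeholder list)
wordCorpus <- tm_map(wordCorpus, stripWhitespace)                    # tidy up whitespace
```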
The mappings do pretty much what they sound like they do. First, we remove all punctuation, then we change everything to lower case. The next line removes English stop words; these are very common words that aren’t super meaningful. If we didn’t remove stop words, then most people’s text would look pretty much the same and my word cloud would just be an enormous the with various prepositions and conjunctions and such around it. In the next line, let’s remove some “words” that were left over from how special characters were treated in the Twitter archive. The last line strips whitespace from before and after each word so that now the corpus holds nice, neat, cleaned-up words.
A common transformation or mapping to apply when examining term frequency is called stemming; this means taking words like cleaning and cleaned and cleaner and removing the endings to keep just clean for all of these examples. Then we can count all of them together instead of separately, which sounds like it would be an excellent thing to do. The stemming algorithm in tm did not work very well for my tweet texts, however. It turned some common words into misspellings; it changed baby to babi, morning to morn, little to littl, etc.
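For reference, the stemming step is a one-liner in tm (it uses the Porter stemmer via the SnowballC package); I show it here commented out since I left it out of the final analysis.

```r
# the stemming mapping I tried and ultimately left out
# wordCorpus <- tm_map(wordCorpus, stemDocument)
```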
Stemming might actually be a sensible, great thing to do if my ultimate goal were to algorithmically compare my corpus to another corpus. It wouldn’t much matter if the words being counted were correctly spelled English words, and it would matter more to count inflected forms together. However, I want to make a word cloud for us all to look at, so I don’t want anything nonsensical or incorrectly spelled; I will leave out stemming. In any case, including stemming did not make a big difference in my results here. That’s actually pretty interesting; it must be related to the tense and inflection that I use in tweets.
So now let’s make the word cloud. The wordcloud function takes either a corpus like we have here, or a vector of words and their frequencies, or even a bunch of text, and then plots a word cloud. We can tinker with the scale of the sizes of the words to be plotted, the colors used, and how many words to plot. For what it’s worth, I could not get wordcloud to play nicely with svglite, so I am using the default PNG format from knitr here.
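A sketch of the call; the palette, scale, and word counts below are illustrative choices rather than the exact settings behind the plot.

```r
library(RColorBrewer)                   # for brewer.pal(); loaded with wordcloud, but explicit here
pal <- brewer.pal(9, "YlGnBu")[-(1:4)]  # drop the lightest shades
set.seed(123)                           # word clouds are laid out randomly
wordcloud(wordCorpus, min.freq = 10, max.words = 300, random.order = FALSE,
          scale = c(4, 0.5), colors = pal)
```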
I just… Like, really?
You can see that there are a few examples where stemming would have made sense, thing and things, for example. Also, we can definitely see here that I have been a social, personal user of Twitter since I joined in 2008. These words all look conversational, personal, and informal. We would see a very different word cloud for someone who has used Twitter for professional purposes over her whole Twitter career. Another way to quantify the information shown in the word cloud is called a term-document matrix; this is a matrix of counts that keeps track of how many times each document in a corpus uses each term (for documents as short as tweets, the counts are almost all 0s and 1s). Let’s see this for the corpus of my tweets.
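Building the term-document matrix from the cleaned corpus is a single call in tm; a minimal sketch:

```r
tdm <- TermDocumentMatrix(wordCorpus)
tdm
```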
So this term-document matrix contains information on 14712 terms (i.e., words) and 10202 documents (i.e., tweets). We could also make a document-term matrix, which has the rows and columns the other way around. Let’s look at part of this matrix.
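We can pull out a small slice of the matrix with inspect(); the row and column ranges here are illustrative, chosen so that alphabetically neighboring terms like sufjan, suffer, and sugar land in view, and are not the exact slice shown originally.

```r
# row/column ranges are illustrative placeholders
inspect(tdm[1340:1350, 270:280])
```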
This is how the term-document matrix keeps track of which documents (tweets, in this example) contain which terms. Notice that document 272 contains the word sugar and so has a 1 in that row/column. (And also Sufjan shows up in my term-document matrix! So seasonally appropriate. Best Christmas album ever? Or absolutely the best Christmas album ever?) We can see why we might want to use stemming; do we really want to keep suffer and suffering separate in this matrix? That would depend on what we want to do with it. We might want to calculate the similarity of this corpus compared to another corpus using the term-document matrices, or we might want to do some linear algebra, like a principal component analysis, on the matrix. Whatever step we take next, notice what a sparse matrix this is: LOTS of zeroes and only a very few ones. This can be a good thing, because there are special numerical techniques for doing linear algebra with sparse matrices that are faster and more efficient.
My Twitter BFFs
We made the first word cloud by getting the English words from the tweet texts, after removing the Twitter handles from them, but now let’s look at a word cloud of just the Twitter handles from my tweet texts, my Twitter friends and acquaintances, as it were.
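This time we keep the handles and throw away everything else; a sketch using base R’s gregexpr() and regmatches(), though the original analysis may have used a different approach.

```r
handle_matches <- gregexpr("@\\w+", tweets$text)            # find handle positions
handles <- unlist(regmatches(tweets$text, handle_matches))  # pull them out
handles <- tolower(gsub("@", "", handles))                  # drop the @ and normalize case
handleCorpus <- Corpus(VectorSource(handles))
```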
Now we have a new corpus made up of all the twitter handles extracted from all the tweet texts. These have been extracted directly from the text of the tweets, so these will include everyone who I retweet and reply to, including both more famous-type people and also personal friends.
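And the word cloud itself, again with illustrative settings:

```r
set.seed(123)
wordcloud(handleCorpus, min.freq = 5, max.words = 150, random.order = FALSE,
          colors = brewer.pal(8, "Dark2"))
```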
As I look at this word cloud, I also see the evidence that I have used Twitter in a social, personal way; these are almost all personal friends, not professional contacts or the like.
All the Feels
We can use an algorithm to evaluate and categorize the feelings expressed in text; this is called sentiment analysis and it can be, as you might imagine, tricky. For example, sentiment analysis algorithms are built in such a way that they are more sensitive to expressions typical of men than women. And how good will computers ever be at identifying sarcasm? Most of these concerns won’t have a big effect on my analysis here because I am looking at text produced by only one person (me) but it is always a good idea to keep in mind the limitations and problems in the tools we are using. Let’s load some libraries and get sentimental.
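The scores below come from the syuzhet package, which implements the NRC lexicon lookup described next; ggplot2 is for the plots that follow.

```r
library(syuzhet)  # get_nrc_sentiment() for NRC lexicon scores
library(ggplot2)  # plotting
```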
The sentiment analysis algorithm used here is based on the NRC Word-Emotion Association Lexicon of Saif Mohammad and Peter Turney. The idea here is that these researchers have built a dictionary/lexicon containing lots of words with associated scores for eight different emotions and two sentiments (positive/negative). Each individual word in the lexicon will have a “yes” (one) or “no” (zero) for the emotions and sentiments, and we can calculate the total sentiment of a sentence by adding up the individual sentiments for each word in the sentence. Not every English word is in the lexicon because many English words are pretty neutral.
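The exact example sentences from the original aren’t reproduced here; this lyric fragment, which contains the scored words mentioned below, stands in for the first one.

```r
# a stand-in example sentence containing "holy" and "shining"
get_nrc_sentiment("O holy night, the stars are brightly shining")
```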
All the words in that sentence have entries that are all zeroes except for “holy” and “shining”; add up the sentiment scores for those two words and you get the sentiment for the sentence.
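And a stand-in for the second example, built around the word “frightful”:

```r
# another stand-in example sentence
get_nrc_sentiment("Oh the weather outside is frightful")
```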
The only word in that sentence with any non-zero sentiment score is “frightful”, so the sentiment score for the whole sentence is the same as the sentiment score for that one word.
Let’s look at a few of the sentiment scores for my tweets and then bind them to the data frame containing all the rest of the tweet information.
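A sketch of that step, assuming the tweet text lives in a column called text:

```r
tweetSentiment <- get_nrc_sentiment(tweets$text)  # one row of scores per tweet
head(tweetSentiment)                              # peek at a few rows
tweets <- cbind(tweets, tweetSentiment)           # attach the scores to the tweet data
```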
What would some examples look like?
Celebrating Rob's birthday today, and about to try to get a flourless chocolate cake out of a springform pan. Wish me luck.
— Julia Silge (@juliasilge) December 21, 2014
This tweet has anticipation and joy scores of 4, a surprise score of 2, a trust score of 1, and zero for all the other sentiments.
Stomach bug thoughts: cholera would be a terrible way to die.
— Julia Silge (@juliasilge) January 1, 2013
This tweet, in contrast, has disgust and fear scores of 4, a sadness score of 3, an anger score of 1, and zeroes for all the other sentiments.
Let’s look at the sentiment scores for the eight emotions from the NRC lexicon in aggregate for all my tweets. What are the most common emotions in my tweets, as measured by this sentiment analysis algorithm?
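One way to get at that is to sum each emotion column over all tweets and plot the totals; the column names follow get_nrc_sentiment’s output.

```r
emotions <- c("anger", "anticipation", "disgust", "fear",
              "joy", "sadness", "surprise", "trust")
emotionTotals <- data.frame(emotion = emotions,
                            count = colSums(tweets[, emotions]))
ggplot(emotionTotals, aes(x = reorder(emotion, -count), y = count)) +
  geom_bar(stat = "identity") +
  labs(x = "Emotion", y = "Total count across all tweets")
```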
I am a positive person on Twitter, apparently, with joy, anticipation, and trust far outweighing the other emotions. Now let’s see if the sentiment of my tweets has changed over time. We can use dplyr and cut to group the tweets into time intervals for analysis.
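A sketch of that grouping, binning the timestamps into six-month intervals (the bin width is my choice here, not necessarily the original one):

```r
sentimentByTime <- tweets %>%
  mutate(interval = cut(timestamp, breaks = "6 months")) %>%  # bin the timestamps
  group_by(interval) %>%
  summarise(positive = mean(positive), negative = mean(negative))
```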
The positive sentiment scores are always much higher than the negative sentiment scores, but both show some decrease over time. The positive sentiment exhibits a stronger downward trend. Apparently, I am gradually becoming less exuberant in my vocabulary on Twitter?
Now let’s see if there are any associations between day of the week and the emotions from the NRC lexicon. We can use the wday() function from the lubridate package and dplyr to group the tweets by weekday.
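A sketch of the grouping step; the plotting is analogous to what came before.

```r
sentimentByWday <- tweets %>%
  mutate(weekday = wday(timestamp, label = TRUE)) %>%  # Sun, Mon, ...
  group_by(weekday) %>%
  summarise(across(anger:trust, mean))  # mean score for each of the eight emotions
```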
These are mostly consistent with being flat, although it looks like joy and anticipation may be higher on the weekends than on weekdays.
Similarly, let’s see if there are associations between the eight emotions and the months of the year.
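The grouping step looks just like the weekday version, with lubridate’s month() in place of wday():

```r
sentimentByMonth <- tweets %>%
  mutate(month = month(timestamp, label = TRUE)) %>%
  group_by(month) %>%
  summarise(across(anger:trust, mean))
```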
The first thing to notice here is that the variance is much higher for the months of the year than it was for the days of the week. Most (all?) of this must be from our old friend, the Central Limit Theorem. The sample size used to calculate each point is about 12/7 times bigger for the plot showing the days of the week than for the plot showing the months of the year, so we would expect the variance to be about 12/7 (or 1.714) times larger in the months-of-the-year plot. Another thing I notice is that joy and anticipation show a distinct increase in December. May your December also be full of joy and anticipation!
The End
The R Markdown file used to make this blog post is available here. I am happy to hear feedback and suggestions; these are some very powerful packages I used in this analysis and I know I only scratched the surface of the kind of work that can be done with them.