
How often do people talk about a specific topic? How popular is a hashtag on Twitter? To answer these kinds of questions, we can examine how soon the next tweet will arrive. This post shows how to visualize the inter-arrival times of tweets that carry a specific hashtag.

1. Collecting Tweets

Gathering sample tweets with R is covered in the post [Crawling Tweets by using the Search API]( {{site.url}}{{site.baseurl}}{% post_url 2017-02-02-twitter-oauth-search-api %} ). Follow the steps there to set up OAuth authentication before continuing.

Here, we choose the hashtag #coffee as the query and request 1000 sample tweets. The function twListToDF converts the list of tweets into a data frame, tweets.df, from which we will extract the creation time of each tweet by accessing the created field.

tweets <- searchTwitter("#coffee", n=1000)
tweets.df <- twListToDF(tweets)
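The snippet above assumes the twitteR package is loaded and OAuth authentication has already been completed, as described in the linked post. A minimal sketch, with placeholder credentials that you must replace with the keys from your own Twitter app settings:

```r
# load the twitteR package (install.packages("twitteR") if needed)
library(twitteR)

# authenticate; the four values below are placeholders
setup_twitter_oauth(consumer_key    = "YOUR_CONSUMER_KEY",
                    consumer_secret = "YOUR_CONSUMER_SECRET",
                    access_token    = "YOUR_ACCESS_TOKEN",
                    access_secret   = "YOUR_ACCESS_SECRET")
```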

Creating a Histogram Plot of the Creation Times

We will first take a look at the creation times of all the tweets. Each tweet has a created field whose value is its creation time in the UTC time zone. The following shows summary statistics of the creation times, obtained by calling the R function summary.

summary(tweets.df$created)

                 Min.               1st Qu.                Median 
"2017-02-14 16:24:47" "2017-02-14 17:02:24" "2017-02-14 17:43:05" 
                 Mean               3rd Qu.                  Max. 
"2017-02-14 17:41:03" "2017-02-14 18:19:46" "2017-02-14 18:57:11" 

Then we create a histogram of the created column of the data frame tweets.df. The bins are left-inclusive.

# hist
hist(tweets.df$created, breaks=18,
     main="", xlab="Creation time", col='bisque')
# ggplot2
ggplot(tweets.df, aes(x=created)) + 
  geom_histogram(bins=18, closed='left', colour='black', aes(fill=..count..,alpha=0.2)) +
  scale_fill_gradient('Count', low='aliceblue', high ='blue') 

2. Calculating Inter-Arrival Times

How soon will the next tweet arrive? We need to calculate the time interval between every two consecutive tweets. The sample tweets are not ordered, so we first sort them by creation time in ascending order.

Run the following, which shows that the created column has the class POSIXct.

class(tweets.df$created)

[1] "POSIXct" "POSIXt"

Although POSIXct values can be sorted directly, we coerce them to integers (seconds since the Unix epoch) so that sorting and differencing yield plain numbers. The function as.integer converts the times to integers, and the function sort arranges them in ascending order. Run the following:

# integer casting and sorting
created.sort <- sort(as.integer(tweets.df[,'created']))

After running the statement above, we have the sorted time integers in a vector created.sort. We can inspect the first 10 values in created.sort by running:

# inspect the first 10 values
head(created.sort, 10)
To find the difference in seconds between each pair of consecutive tweets, simply call the function diff, which computes the difference between every two consecutive elements of the vector.

# find the difference in seconds for every two consecutive tweets
created.diff <- diff(created.sort)
# inspect created.diff
summary(created.diff)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
  0.000   3.000   6.000   9.153  13.000  57.000 


Next, we will visualize the frequency of the inter-arrival times in created.diff.

3. Creating a Histogram Plot of the Inter-Arrival Times

# Frequency plot
ggplot(data=as.data.frame(created.diff), aes(x=created.diff)) + 
  geom_histogram(bins=32, closed='left', colour='black', aes(fill=..count..,alpha=0.2)) +
  scale_fill_gradient('Count', low='blue', high ='orange')  
# Plot probabilities of each bin
ggplot(data=as.data.frame(created.diff), aes(x=created.diff)) + 
  geom_histogram(aes(y = ..density..,fill=..count..,alpha=0.2),bins=32, closed='left') +
  scale_fill_gradient('Count', low='blue', high ='orange')

Density of a random variable describes the relative likelihood for this random variable to take on a given value.

In the next step, we will calculate the cumulative probability for each possible interval. For each time interval, its cumulative probability describes the likelihood that the next tweet arrives with a wait time shorter than the length of that interval.

4. Calculating cumulative probabilities

The following snippet will calculate the cumulative probabilities in cumProb.

# Calculating the cumulative probabilities of created.diff
# 1. create bins
bin.width <- 1 # specify the width of each bin
min <- min(created.diff)
max <- max(created.diff)  
breaks <- seq(min, max+1, by = bin.width) # specify end points of bins
cuts <- cut(created.diff, breaks, right=FALSE) # assign each interval to a bin
# 2. the table function returns counts/frequency of each level/bin
freq <- table(cuts)
# 3. cumsum returns a vector of cumulative sums
cumFreq <- cumsum(freq)
# 4. divide frequency by the total to get cumulative probability
cumProb <- cumFreq/length(created.diff)
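As a sanity check, the manual computation above can be compared with R's built-in ecdf function, which returns the empirical cumulative distribution function directly. A sketch with a small synthetic vector standing in for created.diff:

```r
# synthetic inter-arrival times standing in for created.diff
waits <- c(0, 1, 1, 2, 3, 3, 3, 5, 8, 13)

# manual computation as in the snippet above: unit-width, left-closed bins
breaks <- seq(min(waits), max(waits) + 1, by = 1)
cuts <- cut(waits, breaks, right = FALSE)
cumProb <- cumsum(table(cuts)) / length(waits)

# built-in empirical CDF gives P(X <= x) directly
F <- ecdf(waits)
F(5)                      # 0.8: proportion of waits <= 5 seconds
unname(cumProb["[5,6)"])  # 0.8: bins up to and including [5,6)
```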

5. Grouping breaks, cumulative probabilities and the hashtag into a data frame

Before plotting, we want to make a data frame to wrap all the data that will be used in the plot.

# create a sequence from min to max
x <- c(min(created.diff) : max(created.diff))
# convert cumProb to a vector
y <- as.vector(cumProb)
# create a factor for legend
tag <- c('#coffee')
legend <- as.factor(rep(tag, times=length(y))) # factor vector of the same length as y
# group x, y and legend into a data frame
dat <- data.frame(x, y, legend) 
# inspect dat
head(dat)

6. Plotting Cumulative Probability Distribution of Inter-Arrival Times

First, be sure to have easyGgplot2 installed in R. The package is not on CRAN; it can be installed from GitHub via devtools:

# install easyGgplot2 from GitHub
install.packages("devtools")
devtools::install_github("kassambara/easyGgplot2")
library(easyGgplot2)
Then run the following snippet which will plot the cumulative probability distribution of inter-arrival times for #coffee.

# one
p1 <- ggplot(data = dat, aes(x=x, y=y)) + geom_line()
# two
p2 <- p1  + 
  geom_line(aes(colour=legend)) +
  xlim(-1,60) +
  xlab('Inter-Arrival Time in Seconds') +
  ylab('Cumulative Probability') + 
  theme(axis.text=element_text(size=7),
        axis.title=element_text(size=7),
        plot.title= element_text(lineheight=.2),
        legend.text=element_text(size=7),
        legend.position='bottom')
# three
p3 <- p2 + geom_point(aes(colour=legend))
# arrange the three plots side by side
ggplot2.multiplot(p1,p2,p3, cols=3)

The generated plots show the classic cumulative distribution of wait times for a Poisson arrival process.

The Poisson distribution is a discrete probability distribution that gives the probability of a number of independent events occurring in a fixed interval of time. It deals with mutually independent events occurring at a known, constant rate r per unit of time (or space); the rate r is the expected number of events per unit.

If the tweets arrive rapidly, the curve will become steep.
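For a Poisson arrival process, the wait time between events is exponentially distributed, so the empirical curve can be checked against the exponential CDF with rate 1/mean. A sketch, using the mean wait of about 9.153 seconds reported in the summary earlier:

```r
# estimated arrival rate: one tweet every ~9.153 seconds on average
mean.wait <- 9.153
rate <- 1 / mean.wait

# theoretical probability of waiting less than 5 seconds
# under an exponential model: 1 - exp(-rate * 5)
pexp(5, rate = rate)  # ~0.42, close to the empirical ~0.46 below
```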

7. Finding the Probability of the Next Tweet Arriving in Less Than x Seconds

To find the likelihood of seeing the next tweet with #coffee in less than 5 seconds, run the following snippet:

total <-  1000
sum(created.diff < 5) / total
The result above tells us that the probability of the wait time being less than 5 seconds is about 0.46. In other words, more often than not we will wait 5 seconds or longer for the next tweet with #coffee.

If instead we want the wait time below which 75 percent of tweets arrive, run the quantile function:

quantile(created.diff, 0.75)

75% 
 13 

The result shows that 75 percent of tweets will arrive in less than 13 seconds.
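Under the same exponential model sketched above, the theoretical 75th percentile can be computed with qexp and compared with the empirical quantile of 13 seconds:

```r
# theoretical 75th percentile of the wait time under an
# exponential model with mean ~9.153 seconds
qexp(0.75, rate = 1 / 9.153)  # ~12.7 seconds, close to the empirical 13
```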

8. Performing a Poisson Test

total <-  1000
mu <- mean(created.diff)
c <- sum(created.diff<=mu)
poisson.test(c, T=total)

The Poisson test shows that, for a sample of 1000 tweets with #coffee, the proportion of tweets arriving within mu seconds or less falls into the 95% confidence interval of 58.2% to 68.1%.
