An Activation Metric for Paying Users

Which Behaviors Indicate Retention?

In a previous analysis we used some simple EDA techniques to explore “activation” for new Buffer users. In this analysis, we’ll use a similar approach to explore what activation could look like for users who subscribe to Buffer’s Awesome plan. We’ll define success in this case as being retained – not cancelling the subscription – for at least six months. The features we’ll analyze include the number of days that the user was a Buffer user before becoming a paid customer. [Read More]
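To make that concrete, here is a minimal sketch of how those two quantities could be derived. It assumes a hypothetical `subs` data frame with one row per Awesome subscription and illustrative column names (`signup_at`, `subscription_start_at`, `canceled_at`), not the actual schema.

```r
# Sketch only: derive "days before becoming a paid customer" and a
# six-month retention flag from a hypothetical `subs` data frame.
library(dplyr)
library(lubridate)

subs_features <- subs %>%
  mutate(
    # days between signing up for Buffer and starting the paid subscription
    days_to_paid = as.numeric(difftime(subscription_start_at, signup_at,
                                       units = "days")),
    # retained if the subscription survived at least ~6 months (180 days)
    retained_six_months = is.na(canceled_at) |
      canceled_at >= subscription_start_at + days(180)
  )
```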

Defining an Activation Rate

What Makes New Users Successful?

A couple of years ago we discovered that new users were much more likely to be successful with Buffer if they scheduled at least three updates in their first seven days after signing up. We defined success as still being an active user three months after signing up. In this analysis we’ll revisit the assumptions we made and determine whether this “three updates in seven days” activation metric is still appropriate today. [Read More]
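As a rough illustration of how that flag can be computed, here is a sketch that assumes hypothetical `users` (with `user_id` and `signup_at`) and `updates` (with `user_id` and `created_at`) data frames; the names are illustrative rather than the actual table schema.

```r
# Sketch only: flag users who scheduled at least three updates in their
# first seven days, then compute the overall activation rate.
library(dplyr)
library(lubridate)

activation <- updates %>%
  inner_join(users, by = "user_id") %>%
  filter(created_at <= signup_at + days(7)) %>%     # first seven days
  count(user_id, name = "updates_first_week") %>%
  right_join(users, by = "user_id") %>%
  mutate(activated = coalesce(updates_first_week, 0L) >= 3)

# share of new users that hit the activation threshold
mean(activation$activated)
```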

How Buffer Uses Slack

Buffer started using Slack (again) on Thursday, June 2, 2016. Slack makes some great data available to all members of the team, so I thought it would be fun to analyze some of Buffer’s usage over the past couple of years. Transparency is one of our core values, so it is always good to check in and see how we’re doing on that front. In this analysis, we will look at message frequency over time, the percentage of messages sent in private and public channels, and the percentage of messages sent in DMs. [Read More]
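The channel-type breakdown is a simple grouped proportion. A minimal sketch, assuming a hypothetical `messages` data frame with a `channel_type` column taking values like "public", "private", and "dm":

```r
# Sketch only: share of messages sent in each channel type.
library(dplyr)

messages %>%
  count(channel_type) %>%
  mutate(percent = n / sum(n) * 100)
```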

How Many Twitter Accounts Are Selected in Tailored Posts Sessions?

This question came to me last week from one of our product managers. Let’s set about answering it! To do so, we’ll gather updates sent in recent months from Tailored Posts sessions, calculate the average number of Twitter profiles selected for each user, then average that average. As of today, Tailored Posts has been rolled out to around 50% of Buffer users. Findings: the vast majority of Tailored Posts sessions that include at least one Twitter profile have only a single profile selected. [Read More]
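The “average of that average” step can be expressed directly with grouped summaries. A minimal sketch, assuming a hypothetical `sessions` data frame with one row per Tailored Posts session, a `user_id`, and a `twitter_profiles_selected` count (illustrative names):

```r
# Sketch only: per-user average, then the average of those averages.
library(dplyr)

sessions %>%
  group_by(user_id) %>%
  summarise(avg_profiles = mean(twitter_profiles_selected)) %>%
  summarise(avg_of_user_avgs = mean(avg_profiles))
```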

Analyzing Tweets with TextFeatures

I recently came across another useful package from Mike Kearney called textfeatures. It’s a simple package for extracting useful features from character objects, like the number of hashtags, mentions, URLs, capital letters, exclamation points, etc. In this analysis we’ll look at tweets from Buffer for Business users and see which features correlate most closely with engagement. First let’s load the libraries we’ll need.

# load libraries
library(buffer)
library(dplyr)
library(tidyr)
library(ggplot2)
library(hrbrthemes)
library(ggridges)
library(textfeatures)
library(corrplot)

We now need to gather tweets sent from Buffer for Business users in the past few weeks. [Read More]
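Once the tweets are loaded, the extraction step itself is a single call. A rough sketch, assuming a hypothetical `tweets` data frame with `text` and `engagements` columns (illustrative names):

```r
# Sketch only: extract text features and correlate them with engagement.
library(dplyr)
library(textfeatures)

features <- textfeatures(tweets$text)

cors <- features %>%
  select_if(is.numeric) %>%                    # keep numeric features only
  mutate(engagements = tweets$engagements) %>%
  cor(use = "pairwise.complete.obs")

# correlation of each feature with engagement
cors[, "engagements"]
```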

Bot Or No Bot?

Identifying Twitter bots with machine learning

I recently happened across this tweet from Mike Kearney about his new R package called botornot. Its core function is to classify Twitter profiles into two categories: “bot” or “not”. Having seen the tweet, I couldn’t not take the package for a spin. In this post we’ll try to determine which of the Buffer team’s Twitter accounts are most bot-like. We’ll also test the botornot model on accounts that we know to be spammy. [Read More]
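For reference, scoring a handful of accounts looks roughly like the sketch below. The screen names are placeholders rather than the accounts analyzed in the post, and the package needs Twitter API credentials since it pulls recent tweets via rtweet.

```r
# Sketch only: score a few accounts with botornot.
library(botornot)

accounts <- c("example_handle_1", "example_handle_2")  # placeholder screen names
results <- botornot(accounts)

# the returned probability estimates how bot-like each account appears
results
```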

Revisiting Churn Surveys

A Tidy Text Analysis

Back in July, I analyzed responses from Buffer’s churn surveys. In this post we’ll recreate that analysis with more recent data. The goal is to see if general themes and trends have changed over time. It will also help remind us of the reasons why people choose to leave Buffer. We’ll use data collected from four separate surveys that represent different types of churn: the exit survey prompts users to explain why they are abandoning the Buffer product completely. [Read More]
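The tidy text workflow behind this kind of analysis is short. A minimal sketch, assuming a hypothetical `responses` data frame with `survey_type` and `text` columns:

```r
# Sketch only: tokenize survey responses and count the most common words
# by survey type.
library(dplyr)
library(tidytext)

responses %>%
  unnest_tokens(word, text) %>%
  anti_join(stop_words, by = "word") %>%
  count(survey_type, word, sort = TRUE)
```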

How does the 280 character limit affect tweet length?

Twitter increased the character limit to 280 for most countries in November of 2017. We quickly followed suit and enabled the functionality in our composer and browser extensions. In this analysis we’ll take a look at a random sample of tweets scheduled with Buffer in the past couple of years to see if people have been taking advantage of the increased character limit. We’ll gather the tweets by querying Buffer’s updates table, but we could also use the handy rtweet package. [Read More]
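Comparing lengths before and after the change is a simple grouped summary. A small sketch, assuming a hypothetical `tweets` data frame with `text` and `created_at` columns; the cutoff date below is an approximation of the rollout, not a value from the post:

```r
# Sketch only: compare tweet lengths before and after the 280-character change.
library(dplyr)

tweets %>%
  mutate(
    length = nchar(text),
    after_280 = created_at >= as.Date("2017-11-07")  # approximate rollout date
  ) %>%
  group_by(after_280) %>%
  summarise(
    median_length = median(length),
    percent_over_140 = mean(length > 140) * 100
  )
```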

Churn Prediction with Deep Learning

In this analysis we’ll try to predict customer churn with an artificial neural network (ANN). We’ll use Keras and R to build the model. Our analysis will mirror the approach laid out in this great blog post. First let’s load the libraries we’ll need.

# load libraries
library(keras)
library(lime)
library(tidyquant)
library(rsample)
library(recipes)
library(yardstick)
library(forcats)
library(corrr)
library(buffer)
library(hrbrthemes)

The features used in our models can be found in this Look. The Look contains all Stripe subscriptions that were active 32 days ago. [Read More]
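For orientation, the kind of Keras model this sets up looks roughly like the sketch below. It assumes `x_train` is an already-preprocessed numeric matrix and `y_train` is a 0/1 churn vector; the layer sizes and training settings are illustrative, not the values used in the post.

```r
# Sketch only: a small binary-classification ANN in Keras.
library(keras)

model <- keras_model_sequential() %>%
  layer_dense(units = 16, activation = "relu", input_shape = ncol(x_train)) %>%
  layer_dropout(rate = 0.1) %>%
  layer_dense(units = 16, activation = "relu") %>%
  layer_dropout(rate = 0.1) %>%
  layer_dense(units = 1, activation = "sigmoid")

model %>% compile(
  optimizer = "adam",
  loss = "binary_crossentropy",
  metrics = "accuracy"
)

history <- model %>% fit(
  x = as.matrix(x_train),
  y = y_train,
  epochs = 30,
  batch_size = 50,
  validation_split = 0.3
)
```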

Which type of post gets the best reach on Facebook?

An analysis of Buffer's Facebook posts

People often ask what type of post gets the most reach on Facebook. It’s evident that videos have become more prevalent on Facebook in the past couple of years, but is the increase in reach worth the time and effort it takes to create video content? In this analysis, we’ll try to answer that question within the narrow scope of Buffer’s Facebook posts. We’ll analyze all of the posts published to Buffer’s Facebook Page in 2017 and estimate the effect that the type of post has on reach. [Read More]
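One simple way to frame that estimate is a grouped summary plus a basic regression. A rough sketch, assuming a hypothetical `posts` data frame with `reach` and `type` columns (e.g. "link", "photo", "video"); this is just an illustration, not necessarily the modeling approach used in the post:

```r
# Sketch only: summarize reach by post type and fit a simple model.
library(dplyr)

# median reach and post counts by type
posts %>%
  group_by(type) %>%
  summarise(median_reach = median(reach), n = n())

# log-scale model of reach as a function of post type
fit <- lm(log1p(reach) ~ type, data = posts)
summary(fit)
```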