After a great run, the Winning the Internet blog has been retired. However, you can still keep in touch with New Media Mentors here.
Jim Pugh, an expert and practitioner of analytics and testing who worked with the Obama campaign, Organizing for America, and the Democratic National Committee, shares his insights and knowledge in our latest interview from the field. This is great advice – not to be missed! (Oh, and he has a Ph.D. in Distributed Robotics. You might want to call him doctor.)
My background is in computer science and, more specifically, robotics. In 2008, I was finishing up my Ph.D. in the field of Distributed Robotics at the Swiss Federal Institute of Technology in Lausanne, Switzerland. I had seen then-Senator Obama speak in video clips online, and had been very impressed with his candor and intelligence.
Since I was finishing up my degree, I decided that I would head out to Chicago once I was done and do what I could to help Obama win the election. Come July, I hadn’t actually quite finished my thesis, but I went out to Chicago anyway and offered my help. While there wasn’t any way for me to directly apply my knowledge of robotics, they had just created an “Analytics team” in the New Media department of the campaign — my background in computer science made me a good fit to help out with that work.
I worked with the Obama campaign through the election in 2008, doing data analysis and helping with testing on our webpages and emails. I hadn’t planned on continuing with political work after that, but with the start of Organizing for America in 2009, I was lured back to eventually run the analytics and testing program for the Obama digital program and the Democratic National Committee.
Are there basic rules of testing you would recommend to nonprofits?
1. Start simple. Testing can be complicated, and it can be easy to feel overwhelmed. Instead of trying to do it all at once, start with the basics: try running a two-way subject-line test on your next email, and see which one does better. Once you’ve gotten into the routine of that, you can move on to testing three or four subject lines at once, email content tests, and webpage experimentation.
2. Randomize your audiences. Results from a head-to-head test are only valid if you use randomized groups for each test variant. If you forget to do this, it introduces bias into your metrics, with one group often having more active people than the other. This makes whatever results you get pretty much meaningless.
3. Ignore open rates. People often judge their emails based on what percentage of people are opening them. In reality, this is a completely useless evaluation metric — you don’t care about how many people open your email, you care about how many people take action. Look at action rates to evaluate how well your email did.
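The randomized split in rule 2 can be sketched in a few lines. Shuffling the full list before slicing off groups is what keeps each variant's audience statistically comparable; the group sizes and addresses below are hypothetical:

```python
import random

def split_test_groups(subscribers, n_variants, group_size, seed=None):
    """Randomly assign subscribers to equally sized test groups.

    Shuffling first makes each group a random sample of the whole list,
    so differences in response reflect the variant, not the audience.
    """
    rng = random.Random(seed)
    pool = list(subscribers)
    rng.shuffle(pool)
    if n_variants * group_size > len(pool):
        raise ValueError("not enough subscribers for the requested groups")
    return [pool[i * group_size:(i + 1) * group_size] for i in range(n_variants)]

# Example: a two-way subject-line test with 1,000 recipients per variant
subscribers = [f"user{i}@example.org" for i in range(10_000)]
group_a, group_b = split_test_groups(subscribers, n_variants=2, group_size=1000, seed=42)
```

Picking groups any other way — say, the first 2,000 addresses on the list, who are often your longest-tenured and most active subscribers — is exactly the bias rule 2 warns about.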
What is the most important element to test?
For email campaigns, the first thing to test is the subject line — it’s easy and straightforward to come up with a couple of different ideas for email subject lines for your draft, send them out to randomly selected groups of subscribers, and see which one people respond to most strongly. You can routinely get boosts of 15% or more with very little effort.
You can see a bigger boost (often 50%+) by doing email content tests, pitting different drafts head-to-head. This requires the additional effort of creating multiple drafts, though, so it’s only a good option if you have the person-power to pull it off.
Really, though, you should test everything you possibly can. It’s incredibly hard to predict which things will have a big impact on user response, so the best approach is to test them all.
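Putting the advice above together — pick the winner by action rate, not open rate — the evaluation step might look like this. The delivery and action counts are made up for illustration:

```python
def action_rate(actions, delivered):
    """Action rate: people who took the desired action / emails delivered.

    This is the metric to compare, per the rule above — open rates
    don't tell you whether anyone actually did anything.
    """
    return actions / delivered

# Hypothetical results from a two-way subject-line test
variants = {
    "Subject A": {"delivered": 1000, "actions": 52},
    "Subject B": {"delivered": 1000, "actions": 67},
}

winner = max(
    variants,
    key=lambda name: action_rate(variants[name]["actions"],
                                 variants[name]["delivered"]),
)
# The winning subject line can then go out to the rest of the list.
```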
Do you have an example of a success story in testing you could share with us?
One of my favorite testing stories is from when I was working on the Obama digital team, and we were running webpage tests on the “splash page” of the Obama website, which was the page that new visitors would see before they got to the homepage. We wanted to maximize the number of people who would sign up for our email list there.
The page we were using had already been through four or five rounds of testing, so we thought it was pretty optimized at that point. It showed a big picture of President Obama, the text “Join Me”, text fields for the user to input their email address and ZIP code, and a button that said “Get Involved”. This simple approach had consistently outperformed more complex pages that we’d tried.
As a final round of testing, we decided to try out some alternatives to the four words we had on the page: “Join Me” and “Get Involved”. We came up with a couple of ideas, one of which was the pairing “Fired Up?” and “Let’s Go!”.
After launching the test and waiting for a few thousand visitors to come to the site, we discovered that the new language increased the signup rate on the page by a solid 29%. That was quite the impressive boost for a difference of only four words.
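Waiting for a few thousand visitors matters because a lift of that size has to be separated from random noise. A standard way to check is a two-proportion z-test; the counts below are hypothetical, chosen only to illustrate a roughly 29% relative lift over a few thousand visitors (the campaign's actual numbers aren't given here):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test.

    Tests whether the difference in signup rates between two page
    variants is likely more than random noise. Returns (z, p_value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF, via erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: control signs up 240 of 2,000 visitors (12.0%),
# new language signs up 310 of 2,000 (15.5%) — about a 29% relative lift.
z, p = two_proportion_z(conv_a=240, n_a=2000, conv_b=310, n_b=2000)
```

With samples that small a 29% lift is still comfortably significant here; with only a few hundred visitors per variant it often wouldn't be, which is why you wait before calling a winner.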
What are the biggest mistakes people make when testing?
Not doing it. Usually when things start to get busy, testing is the first thing that goes out the window (I’ve even been guilty of this myself). Because the benefits you get from testing are often difficult to see, it can be easy to forget just how valuable doing it really is in comparison to your other work.
Any last words of wisdom?
Get to know your data. Test results and other metrics are a lot more meaningful when you have a good sense of what these numbers actually mean and how they vary over time. Pay attention to your email and webpage statistics to become more familiar with them, and all this stuff will start coming to you more naturally.