A-B testing – it’s about what we know

In his post about keeping Apps relevant and prevalent, Rick wrote about the importance of ensuring Apps are functionally relevant to those using them – that they are fit for purpose and contain features that help, not hinder or get in the way. App analytics provide insight into which features are being used, by which users, for how long, and so on. But where do we look to answer how under-performing features can be improved?

This is where A-B testing can come into play. A-B testing is a simple way to test small changes to user interface elements against the current design and determine which variation produces better results. It is a method to validate that changes improve how many users take some desired action – the conversion rate – before those changes are deployed to all users. The high-level process works as follows.


The process begins with a hypothesis – a change that is thought to produce an improvement in conversions. A-B testing shows the two versions of the UI to different users, and lets user behaviour determine the winner. Running an A-B test takes the guesswork out of optimization, and shifts design conversations from “we think” to “we know.” Over time, repeatedly testing and optimizing improves the overall user experience and provides valuable insight about user behaviour, helping the organization to learn what design practices are most effective for its target audience.
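Declaring a "winner" means checking that the observed difference in conversion rates is statistically significant rather than noise. A minimal sketch of that check, using a standard two-proportion z-test (the conversion counts and function name below are made up for illustration):

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare the conversion rates of variations A and B.

    Returns (z, p_value); a small two-sided p-value means the observed
    difference is unlikely to be due to chance alone.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)              # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

# Hypothetical results: A converted 200 of 5000 users, B converted 260 of 5000.
z, p = two_proportion_z_test(200, 5000, 260, 5000)
if p < 0.05:
    print("B wins at the 95% confidence level")
```

This is the "we know" part: the test only reports a winner when the sample is large enough to support the claim.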

A-B testing has been in use on the web since the early 2000s. Google was a pioneer of the experimentation approach to design, famously testing variations of its search algorithm to gauge effectiveness. Facebook recently made headlines with its A-B testing practices, but it is now common among companies large and small. More recently, A-B testing has been making its way into native mobile App development. Optimizely, a leading A-B testing platform, now provides a native iOS SDK that, once integrated into an App, makes it possible to create variations in the native user interface without resubmitting to the App Store.
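Whatever platform is used, the mechanical core of serving variations is assigning each user consistently to one of them. A minimal sketch of one common approach, hash-based bucketing (this is a generic illustration, not the Optimizely API – the function name and 50/50 split are assumptions):

```python
import hashlib

def assign_variation(user_id: str, experiment: str) -> str:
    """Deterministically bucket a user into variation "A" or "B".

    Hashing the user id together with the experiment name gives a
    stable, roughly uniform split, so each user sees the same
    variation on every launch of the App.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100          # 0..99
    return "A" if bucket < 50 else "B"      # 50/50 split

# The same user always lands in the same bucket:
assert assign_variation("user-42", "cta-colour") == assign_variation("user-42", "cta-colour")
```

Keying the hash on the experiment name means a user's bucket in one test is independent of their bucket in another, so concurrent experiments don't skew each other.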


While A-B testing is certainly not the only technique you should consider when it comes to optimizing your user experience, it has some distinct advantages:

  • Results are quantitative, so you can say with confidence which variation is the “winner”
  • It’s very good at accurately measuring the effectiveness of small changes
  • It’s an excellent way to unambiguously resolve design trade-offs
  • Especially when supported by a good tool, it can be done rapidly and inexpensively

A-B testing is not, however, effective for measuring the impact of sweeping design changes. With a major overhaul of an App's user experience, the like-for-like comparison that A-B testing relies upon for systematic measurement is no longer possible. Qualitative input from user feedback (App store reviews, user surveys and/or direct observation of user behaviour) is more insightful for identifying “big” user experience problems and measuring the effectiveness of the correspondingly big user experience changes.

When used as part of an overall usability and optimization strategy, though, A-B testing provides reliable and actionable feedback on design choices. With its focus on small, incremental improvements, A-B testing is a valuable technique to help ensure App functionality is continuously improved to optimize effectiveness and remain relevant to users.
