User Impact at the Heart of Product Development

Keeping user impact at the centre of product development: one AB test at a time

What is AB testing?

AB testing is a simple way of drawing causal inferences. An AB test consists of two variants, A and B. If we want to measure the impact of a specific dimension, the variants are kept identical except for that dimension. Calling group A the control and group B the treatment, the experiment compares the two variants by observing which is more effective on a targeted goal. When the test goes live, users are distributed randomly between the two groups – the randomization removes selection bias.
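As a minimal sketch of the mechanics described above (not taken from the article – the conversion rates, sample size, and function name are all illustrative), the following simulates random assignment of users to control and treatment, then compares conversion rates with a standard two-proportion z-test:

```python
import math
import random

def run_ab_test(n_users, p_control, p_treatment, seed=42):
    """Simulate an AB test: randomly assign each user to A (control)
    or B (treatment), record conversions, and return the conversion
    rates plus a two-proportion z statistic."""
    rng = random.Random(seed)
    stats = {"A": [0, 0], "B": [0, 0]}  # group -> [conversions, users]
    for _ in range(n_users):
        group = rng.choice("AB")  # random assignment removes selection bias
        p = p_control if group == "A" else p_treatment
        stats[group][0] += rng.random() < p  # simulated user behaviour
        stats[group][1] += 1
    (x_a, n_a), (x_b, n_b) = stats["A"], stats["B"]
    rate_a, rate_b = x_a / n_a, x_b / n_b
    # Pooled standard error for the difference in proportions
    pooled = (x_a + x_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (rate_b - rate_a) / se
    return rate_a, rate_b, z

rate_a, rate_b, z = run_ab_test(20_000, p_control=0.10, p_treatment=0.12)
print(f"A: {rate_a:.3f}  B: {rate_b:.3f}  z = {z:.2f}")
```

A |z| above roughly 1.96 would indicate that the observed difference is statistically significant at the 5% level; real tests would also fix the sample size in advance via a power calculation.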

AB tests are also known as randomized controlled trials (RCTs) and are used across fields, from drug testing to the design of social incentives. (The 2019 Nobel Prize in Economics was awarded to Esther Duflo, Abhijit Banerjee and Michael Kremer for their work adapting RCTs to development economics.)

Source: https://towardsdatascience.com/how-to-conduct-a-b-testing-3076074a8458

Causal inference, i.e. “what causes what”, is critical for understanding complex real-life systems and behaviour. We establish causal inferences by conducting experiments. That said, not every experiment can be structured as an AB test; AB testing is just one method of experimentation.

Two common issues in AB tests are (a) the variants differ from each other on more than one dimension, and (b) sampling bias arises because we are unable to distribute users randomly.

In EdTech particularly, an AB test might mean withholding a benefit from one set of users (children) while providing it to others and comparing the end results. Running such a test is not always ethically desirable. In those cases, quasi-experimental or observational studies can help us estimate causal effects.

Why an (AB) test?

Identifying what works and what does not work in EdTech is challenging. Educators often find that their best assumptions about how students interact and learn are not backed up by the data. EdTech tools need to strike a balance between conflicting objectives. A simple example: a piece of content may need to be pedagogically sound as well as engaging, and it might need to offer short-term gratification as well as long-term learning benefits. Parents' expectations may also vary across age segments, income levels, cultures, etc.

Given that these requirements pull product teams in different directions, empirical confirmation through AB tests can be useful in validating ideas scientifically.

AB testing is a key requirement for effective product development. Digital learning systems can collect large amounts of data from users – a huge advantage compared to non-digital conventional systems. With AB testing, product teams can iterate rapidly.

Stages in AB testing

The image below shows the lifecycle of an AB test and how it aligns with a data-centric approach of continuous improvement.

Source: https://www.orderhive.com/knowledge-center/ab-testing

Concluding thoughts

It might seem like AB testing is the obvious way for product teams to innovate quickly. However, AB testing consumes resources. Very often, product teams cannot afford the time and money required to do causal analysis via AB testing. To use AB testing effectively, a product team must have the right attitude, infrastructure, systems and processes.

In the next article, we will present two examples showing how we are leveraging AB testing to improve the fREADom app.

fREADom App

A productive screen time app for ages 3 to 12, that focuses on improving English Language skills.

fREADom LIVE

Online English classes for ages 5 to 12. Proven methods for children to improve academic performance and confidence.
