We wrote earlier about A/B testing in detail: what an A/B test is, and when it can and cannot be used. At the end of that article, we mentioned that we would soon be sharing two A/B testing case studies.
Read on for those two case studies, which optimized design elements and content discovery on the Freadom app.
1. Optimize landing page design elements for better user activation
Every experiment starts with a hypothesis, and every subsequent step is executed in alignment with it. It is important to state the hypothesis explicitly and precisely at the start of the experiment. Most importantly, the hypothesis dictates the key outcome metric to be tracked in order to decide whether the experiment was successful or not.
In this experiment, we wanted to test two design element changes and check whether they would lead to better user activation.
Design Element 1: Change the title of the ‘News’ section to ‘Buzz’, which is more contemporary and better aligned with the type of articles shared in the section.
Design Element 2: Move the ‘Search’ banner to the landing page (Feed page) to make it easier for users to search for content in their first interaction with the app.
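Once user activation is defined as a binary outcome per user, the success check for such a test typically compares activation rates between the control and variant groups. The article does not describe the team's actual statistical test, so the following is only an illustrative sketch using a standard two-proportion z-test, with hypothetical counts:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two activation rates.

    conv_a / n_a : activated users and total users in control
    conv_b / n_b : activated users and total users in variant
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: 1,200 of 10,000 control users activated
# versus 1,350 of 10,000 variant users
z, p = two_proportion_z_test(1200, 10_000, 1350, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these made-up numbers the variant's lift is statistically significant; in practice the sample sizes and the significance threshold would be fixed before the experiment starts, per the hypothesis-first discipline described above.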
Read more about this in detail here: Optimize-landing-page-design-elements-for-better-user-activation
2. Test new algorithm to optimize content discovery on the Freadom app
Reading stories is one of the primary activities users perform on the app. The Freadom team continuously analyses app data to understand user behaviour, with the goal of leveraging this data to improve user experience and learning outcomes.
One such exploratory analysis led to the insight that some users tend to ignore “recommended stories” – instead, they explore the app on their own and pick stories at random. For these users, the “recommended stories” would remain on their feed, since by default a story moved out of the tray only after being read. As a result, some stories were stuck on the user’s home page, occupying precious real estate.
To remedy this, we tweaked our story selection algorithm and added a component of randomness: stories a user ignored on one day would not repeat the next day, yet they remained eligible for the algorithm to pick again in the future. It was important not to remove these stories from the eligible pool, because a user’s failure to engage with a story could not be construed as disinterest, given the random nature of their exploration. Moreover, these stories had been recommended by the algorithm for that user, and our recommendation algorithm has largely proven useful.
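The behaviour described above – exclude yesterday's ignored stories from today's tray while keeping them in the eligible pool – can be sketched as follows. This is not Freadom's actual algorithm; the function, story identifiers, and tray size are illustrative assumptions:

```python
import random

def pick_stories(eligible, ignored_yesterday, k=5, seed=None):
    """Pick k stories at random for today's tray.

    Stories the user ignored yesterday are skipped today so the feed
    doesn't repeat, but they are NOT removed from the eligible pool:
    they can be picked again on future days.
    """
    rng = random.Random(seed)
    fresh = [s for s in eligible if s not in ignored_yesterday]
    # Fall back to the full pool if exclusion leaves too few stories
    pool = fresh if len(fresh) >= k else eligible
    return rng.sample(pool, min(k, len(pool)))

# Hypothetical pool of 20 stories; two were ignored yesterday
pool = [f"story_{i}" for i in range(20)]
today = pick_stories(pool, ignored_yesterday={"story_1", "story_2"}, k=5, seed=42)
```

The key design point is that exclusion is scoped to a single day: the ignored set is rebuilt from scratch each day, so no story is ever permanently penalised for being skipped.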
The change was implemented across the entire user base in one release, so an A/B test could not be run upfront. To ascertain whether the new algorithm leads to better user engagement, we instead used a quasi-experimental research methodology to estimate its impact.
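When a change ships to all users at once, the simplest quasi-experimental comparison is between engagement before and after the release date. The article does not specify which method the team used, so the pre/post comparison below is only a minimal illustrative sketch with made-up numbers (real analyses would also control for seasonality and trend, e.g. via interrupted time series):

```python
from statistics import mean

def pre_post_lift(daily_engagement, release_day):
    """Naive pre/post comparison for a quasi-experiment.

    Splits a daily engagement series at the release day and returns
    the mean before, the mean after, and the relative lift.
    """
    before = daily_engagement[:release_day]
    after = daily_engagement[release_day:]
    lift = (mean(after) - mean(before)) / mean(before)
    return mean(before), mean(after), lift

# Hypothetical average story-reads per user per day around the release
series = [2.0, 2.1, 1.9, 2.0, 2.2, 2.4, 2.5, 2.3, 2.6]
pre, post, lift = pre_post_lift(series, release_day=5)
print(f"before: {pre:.2f}, after: {post:.2f}, lift: {lift:.1%}")
```

A positive lift here is only suggestive, not causal: without a concurrent control group, anything else that changed at release time is confounded with the algorithm change, which is exactly why this counts as a quasi-experiment rather than an A/B test.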
You can go through the detailed description of this case study here: Introducing-new-algorithm-to-optimize-content-discovery-on-the-app