Collecting and Providing User Behavior Data for Better Decisions

Duration: March 2023 - November 2023
Role: Product Manager

Context

When I took on the role of Product Manager for the Checkout product at the company, I struggled to analyze the user journey during the checkout process. We had a few purchase completion events, but nothing that provided insight into errors or into how users interacted with the form fields. I also couldn't tell whether there were behavioral patterns in specific types of checkouts, since each checkout was customized depending on the seller.

Challenge

We needed to collect user interaction events within the Hotmart checkout to help us quantitatively and visually identify pain points or difficulties users encountered while trying to complete their purchases.

My role

As the Product Manager with expertise in Design and Data, my role was to create the project proposal, present it to the leadership team, and be involved in every stage of its validation.

Problem and Impact

Our goal for the checkout was to increase conversion rates, which meant running experiments. However, it was very hard to choose the most promising experiments or analyze their results without seeing how users behaved throughout the entire flow, beyond just clicking the purchase button.

Goal

To collect and display user interaction events in a Dashboard, allowing us to measure friction and errors within the checkout flow.

Team
  • Product Manager (me)

  • Front-end developers (2, non-dedicated)

  • Back-end developers (2, non-dedicated)

  • Data Analyst (1, non-dedicated)

Process

Development followed the Scrum methodology with two-week sprints, and the Discovery and Delivery phases were structured similarly. The main goal during Discovery was to build a proposal that would convince leadership of the project's importance. The idea was not only to create a foundation for new initiatives but also to provide valuable data to other teams that needed information about user interactions in the checkout, for example to test cart recovery experiences.

In this project, we didn’t have a dedicated designer, so I took on that role. Below is the diagram of the activities planned for each stage of the project.

Discovery and Definition

Initial Research

Before choosing the tool we would use to collect data in the checkout, I spoke with other teams in the company to understand the existing tools and how they were used for user experience decisions.

Many tools had been contracted and later canceled due to high costs and low returns on projects.

Tool comparison: existing tools vs. new market alternatives

Internal Tool

The company had a Data Warehouse, a "library" of data organized into categories called Datamarts. This structure allowed efficient, organized access to relevant information. We also had an internal tool, Astrobox, which I had previously used as a Product Designer. Astrobox let us access and visualize data, create custom views, and connect with Google Sheets.

I realized that, due to a lack of knowledge about the tools or difficulties in using them, many teams didn’t fully leverage their resources and struggled to measure the results of their experiments.

My goal was to find a robust tool to gain a broader view of the quantitative data on user engagement. I researched market alternatives, but all presented challenges, mainly in terms of cost.

The three market tools we compared fit our objectives well, but their cost made them hard to justify initially. Combined with the challenges of sending data to an external tool and of cross-referencing behavioral data with specific customer characteristics, this led me to explore an internal solution instead.

Since I had previously worked in the data department, I was familiar with our data structure and the well-organized event-tracking processes, although these weren't widely known or utilized within the company.

After technical evaluations of capacity and cost, we decided to use the internal solution, even though it increased development time. The advantage was having everything personalized and controlled internally, improving market intelligence regarding user behavior.

Solution and Validation

The first step in addressing this opportunity was to create thorough documentation that could justify the project to leadership and explain how it would generate long-term benefits, despite not contributing directly to GMV (gross merchandise value). The project aimed to enable better decision-making based on reliable data and more detailed problem analysis.

I mapped the variations of checkout screens, including payment form options, to track every potential user interaction. Since we sold to over 181 countries, there were many variations to accommodate different payment methods and required documentation.

Next, I listed the metrics we were already tracking in Google Analytics, though these were not always reliable, along with additional ones I wanted to monitor, such as cart abandonment and repeat purchase rates. All of these were included in the project documentation.

To make the proposal more visual, I included mockups illustrating the data we would have in the final outcome. The idea was to present, in one place, metrics for website traffic and conversion, user engagement, and user feedback with their statuses.

As input for the development team, I created a spreadsheet detailing all the events to be tracked, when each should fire, and their descriptions; a sketch of what one such event definition could look like follows the metrics list below. During this process, I worked closely with a highly experienced front-end developer, who helped evaluate whether the events were already in place and assess their feasibility.

  • Number of sessions

  • Number of button clicks

  • Conversion rate

  • Abandonment rate

  • Average purchase time

  • Average engagement time

  • And more...
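As an illustration of what one row of that event spreadsheet captured, here is a minimal Python sketch; the event names, triggers, and properties are hypothetical examples for illustration, not the actual tracking schema.

```python
# Minimal sketch of an event definition from the tracking plan.
# Names, triggers, and properties below are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class CheckoutEvent:
    """One entry of the tracking spreadsheet: what fires, when, and with what data."""
    name: str                  # event identifier sent to the Data Warehouse
    trigger: str               # user action or condition that fires the event
    description: str           # plain-language meaning for analysts
    properties: dict = field(default_factory=dict)  # extra context captured with the event


# Hypothetical entries from the tracking plan.
tracking_plan = [
    CheckoutEvent(
        name="checkout_field_error",
        trigger="Validation fails when the user leaves a form field",
        description="The buyer entered an invalid value (e.g., a malformed document number).",
        properties={"field_name": "document", "error_type": "invalid_format"},
    ),
    CheckoutEvent(
        name="payment_method_selected",
        trigger="The user clicks a payment method tab",
        description="Which payment option the buyer chose before submitting.",
        properties={"payment_method": "credit_card"},
    ),
]
```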

Validation

The validation process involved presenting the proposal to interested parties and gathering feedback at each stage, culminating in a presentation to the leadership team. While there were concerns about internal costs and timelines, the proposal was ultimately supported.

Execution

We delivered the project in stages, focusing on shipping a first version quickly with relevant data, to demonstrate its potential value and keep evolving the insights it provided. The execution phase involved close collaboration with the team, from modeling the events to validating the data that arrived in the database.
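To give a sense of what validating the data in the database looked like, here is a minimal sketch of the kind of sanity check that can be run on incoming events; the column names and sample rows are assumptions for illustration, not the production schema.

```python
# Minimal sketch of a sanity check on events landing in the warehouse.
# Column names and sample rows are illustrative assumptions.
import pandas as pd

# Small inline sample standing in for events queried from the Data Warehouse.
events = pd.DataFrame(
    {
        "event_name": ["checkout_view", "field_error", "purchase_click", "purchase_click"],
        "session_id": ["s1", "s1", "s2", "s2"],
        "occurred_at": ["2023-08-01 10:00", "2023-08-01 10:01", "2023-08-01 11:00", "2023-08-01 11:00"],
    }
)

required_columns = {"event_name", "session_id", "occurred_at"}
missing_columns = required_columns - set(events.columns)   # schema check
null_sessions = int(events["session_id"].isna().sum())     # every event needs a session
duplicate_rows = int(events.duplicated().sum())            # double-fired events

print(f"Missing columns: {missing_columns or 'none'}")
print(f"Events without a session id: {null_sessions}")
print(f"Exact duplicate rows: {duplicate_rows}")
```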

Results and Next Steps

In the first version, we captured key metrics we previously lacked, such as conversion by payment method, by price range, and by customer segment. This allowed us not only to track and identify issues in specific scenarios but also to analyze A/B tests along these dimensions.
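To illustrate how a metric like conversion by payment method can be derived once the events are in the warehouse, here is a minimal Python sketch; the column names and sample values are assumptions, not the production data model.

```python
# Minimal sketch: conversion rate by payment method from per-session data.
# Column names and values are illustrative assumptions.
import pandas as pd

sessions = pd.DataFrame(
    {
        "session_id": [1, 2, 3, 4, 5, 6],
        "payment_method": ["credit_card", "pix", "credit_card", "boleto", "pix", "credit_card"],
        "purchase_completed": [True, True, False, False, True, False],
    }
)

conversion_by_method = (
    sessions.groupby("payment_method")["purchase_completed"]
    .mean()                                # share of sessions that converted
    .rename("conversion_rate")
    .sort_values(ascending=False)
)
print(conversion_by_method)
```

The same grouping approach extends naturally to price range or customer segment.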

We also successfully tracked most engagement events. Although these were not yet surfaced in the dashboard, they were already in the database, which paved the way for the next stage of the project: presenting detailed user engagement data, including a funnel view of how users interacted with the checkout form and of any errors that occurred.

Learnings & Challenges

  • When we started, the Checkout was undergoing a technology migration, which froze many code updates, even in the testing environment, and made it very difficult to validate the events.

  • Leadership changes during the project increased the pressure on us, as the new leaders didn't fully understand why we were using an internal tool instead of a ready-made market solution. In response, we held bi-weekly meetings to update leadership on our progress and reinforce the importance of the project.