UX Metrics - User Satisfaction Process

Duration: December 2021 - April 2022
Role: Design Ops & Data Design

Context

At the company I worked for, user satisfaction feedback collection was disorganized and inconsistent across roughly 20 product teams. Some teams collected feedback, but each used a different method, making it difficult to standardize the data and compare results. Other teams had no structured collection process at all, leaving significant gaps in our knowledge of the user experience. Additionally, a separate team ran an NPS research process and shared some insights with the product teams. That process had two major issues: first, the information was not easily accessible to all teams, limiting its usefulness; second, the feedback collected was usually too broad, providing little specific, actionable input for product teams looking to improve particular aspects of their products.

Challenge

My challenge in this project was to create a unified and efficient process for collecting user satisfaction feedback that could be adopted by all teams in the company. This involved choosing an appropriate research methodology, selecting a tool that allowed customization based on each product's needs, and creating a clear and accessible process for everyone.

Additionally, it was crucial to understand which user journeys were the most problematic, so that feedback collection and analysis efforts could be focused there. The metrics needed to be more precise, targeted at the right user profile, and segmented by engagement profile and sales performance for more detailed analysis. The challenge also included standardizing the type of survey to establish a baseline, while empowering designers to collect and analyze data specific to their journeys. Lastly, it was essential to create collection processes for the key product journeys and to train the teams on the new tool, ensuring they knew how to customize it, apply it to their product, and track the results to generate actionable insights.

My role

I worked as Design Ops, bringing my experience as a Product Designer with a data-driven focus to organizing the processes for measuring the impact of design actions.

Problem & Impact

This lack of organization and standardization in the collection and access to data created inefficiencies, made it difficult to make informed decisions, and compromised the teams' ability to improve the user experience continuously and strategically.

Goal

Establish standardized user satisfaction measurement processes for 100% of the product journeys that involved a Designer or PM.

Team

Design Ops (me), other design teams (support), front-end developers (support)

Process

To create the satisfaction collection process, I used the same approach as in product creation, dividing tasks into the stages listed below:

Discovery & Definition

The goal of the discovery phase was to define the type of survey we would use, which tool to implement, and how we would monitor both the survey results and the rollout of the process.

Current Survey Processes in the Company

In conversations with the product teams, I discovered that each team was trying to conduct its own surveys only at specific moments, using free tools like Hotjar and Typeform. They mostly used the CSAT methodology. These tools were appealing for their simplicity and ease of use.

However, survey segmentation, result consolidation, and distribution control were largely manual.

Another type of survey used within the company, though not led by the product teams, was NPS, which aimed to understand how likely customers were to recommend the company's tools to others. It was conducted outside the platform and sent to segmented customers once a quarter. Although distribution was automated, there was still a lot of manual work involved.

The advantage of this process was that it had been running for several years and enjoyed a high response rate. However, because it was conducted outside the platform, gathering feedback about specific tools was difficult, and the results were monitored by the research team rather than by the product teams themselves.

Benchmarking with Other Companies

By talking to fellow designers at other Brazilian companies and analyzing the foreign tools we used ourselves, I evaluated the satisfaction survey tools available on the market.

Established products already had surveys built into their system flows. Most used the CSAT metric, and some, like iFood, combined CSAT with NPS.

Types of Surveys

The first step was to understand the different types of surveys available and which one would best fit our objective of understanding the user experience within product journeys. The three most widely used metrics in the market were:

  • CSAT (Customer Satisfaction Score): Measures customer satisfaction with a specific product or service. Typically asks a direct question like "How satisfied are you with [product/service]?" with responses on a scale of 1 to 5.

  • CES (Customer Effort Score): Assesses the effort the customer had to make to solve a problem or complete a task. A common question is, "How easy was it to resolve your issue?" on a scale from "very difficult" to "very easy."

  • NPS (Net Promoter Score): Measures the likelihood of a customer recommending the company to others. The key question is: "On a scale of 0 to 10, how likely are you to recommend our company to a friend or colleague?" The score ranges from -100 to 100, categorizing customers as detractors, passives, and promoters.

After evaluating the differences, I found that CSAT was the most suitable metric for a standard survey that could be used at every stage of the product's user flow.
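
To make the scoring differences concrete, here is a minimal sketch of how CSAT and NPS are typically computed; the sample responses are invented for illustration.

```ts
// Illustrative scoring helpers; the sample responses are made up.

// CSAT: percentage of satisfied responses (4 or 5 on the 1-5 scale).
function csat(responses: number[]): number {
  const satisfied = responses.filter((r) => r >= 4).length;
  return (satisfied / responses.length) * 100;
}

// NPS: % promoters (9-10) minus % detractors (0-6) on the 0-10 scale.
function nps(responses: number[]): number {
  const promoters = responses.filter((r) => r >= 9).length;
  const detractors = responses.filter((r) => r <= 6).length;
  return ((promoters - detractors) / responses.length) * 100;
}

console.log(csat([5, 4, 3, 5, 2])); // 60 (3 of 5 responses are 4+)
console.log(nps([10, 9, 9, 7, 3])); // 40 (3 promoters - 1 detractor, out of 5)
```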

Market Tool Comparison

One specific product within the company already had an integrated internal tool used to evaluate the courses sold on the platform. It was similar to market tools but had fewer features and technical limitations that prevented its use with the company's other products. Other companies, such as Miro and Gympass, used external tools built specifically for satisfaction research. So, even though our internal tool could be improved, I decided to look into market tools that could meet our needs quickly and serve as a single platform for various designer-led surveys.

I evaluated the options, focusing mainly on price and implementation time, and after sharing the pros and cons with the designers and managers, we chose Survicate. It met our immediate needs: it worked both in-app and on websites, integrated with our data tools, allowed user segmentation for survey distribution, and gave us control over which surveys were shown to users.

Solution & Validation

Once the metric and the tool were decided, I needed to create a beta version to test the process on a product and establish a way to track survey responses.

Beta:

To implement the process, I needed a front-end developer to integrate Survicate's code into the system and to send events identifying the responding users. This allowed me, through integration with our database, to access additional user information for more in-depth analysis.
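
A minimal sketch of what that integration looked like. The workspace key, the trait names, and the currentUser object are placeholders rather than our actual setup; setVisitorTraits was Survicate's way of attaching visitor attributes at the time.

```ts
// Minimal sketch of the Survicate setup; WORKSPACE_KEY, the trait names,
// and currentUser are placeholders, not our real configuration.
const currentUser = { id: "123", plan: "pro" }; // stand-in for the session user

const script = document.createElement("script");
script.src = "https://survey.survicate.com/workspaces/WORKSPACE_KEY/web_surveys.js";
script.async = true;
script.onload = () => {
  // Attach traits so each response can later be joined with our user data.
  (window as any)._sva?.setVisitorTraits({
    user_id: currentUser.id,
    plan: currentUser.plan,
  });
};
document.head.appendChild(script);
```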

Dashboard:

During the integration with our database, I needed support from a data specialist to help create a performant query that retrieved the survey results and merged them with the user data in our system. My SQL knowledge at the time was intermediate, so their assistance was essential. This gave us not only Survicate's built-in analysis view but also an internal view where we could cross user feedback with other usage characteristics to better analyze behavior and difficulties.
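
In spirit, the query joined the survey responses exported from Survicate with our internal user table. A simplified sketch, with hypothetical table and column names:

```ts
// Hypothetical schema: "survey_responses" mirrors the Survicate export,
// "users" is the internal user table. All names are illustrative.
const surveyJoinQuery = `
  SELECT
    r.survey_id,
    r.score,
    r.answered_at,
    u.engagement_profile,
    u.sales_segment
  FROM survey_responses AS r
  JOIN users AS u
    ON u.id = r.user_id  -- the user_id attached on the front end
  WHERE r.answered_at >= NOW() - INTERVAL '90 days'
`;
```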

Validation

Once the process was established, I activated the test survey in a product I had previously worked on. After making a few necessary adjustments based on initial feedback, I successfully collected the required data.

With the process finalized, I introduced it to two more teams who tested it with me, allowing us to refine it further before sharing it with other teams.

Delivery & Monitoring

After testing the process, I created documentation that could serve as a guide for other designers to apply in their product journeys and begin measuring satisfaction. Additionally, I recorded a training session that provided an overview and instructions on how to create surveys using the tool.

I also gave a presentation to all Product Designers and Product Managers, highlighting the importance of implementing the survey and the satisfaction metric.

How to segment? (Training)
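
The segmentation training came down to slicing responses by the user attributes attached during identification. A rough sketch of the idea, with hypothetical types and attribute names:

```ts
// Hypothetical response shape; "engagementProfile" stands in for the
// attributes attached when identifying users.
type Response = { score: number; engagementProfile: string };

// CSAT per segment: share of 4-5 scores within each engagement profile.
function csatBySegment(responses: Response[]): Record<string, number> {
  const groups: Record<string, number[]> = {};
  for (const r of responses) {
    (groups[r.engagementProfile] ??= []).push(r.score);
  }
  const result: Record<string, number> = {};
  for (const [segment, scores] of Object.entries(groups)) {
    const satisfied = scores.filter((s) => s >= 4).length;
    result[segment] = (satisfied / scores.length) * 100;
  }
  return result;
}
```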

How to calculate your sample size? (Training)
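
For sample size, the training relied on the standard formula for estimating a proportion with a finite population correction; a sketch of the calculation, with illustrative defaults of 95% confidence and a 5% margin of error:

```ts
// Standard sample-size formula (Cochran) with finite population correction.
// Defaults: 95% confidence (z = 1.96), 5% margin of error, p = 0.5 (worst case).
function sampleSize(population: number, z = 1.96, margin = 0.05, p = 0.5): number {
  const n0 = (z * z * p * (1 - p)) / (margin * margin); // infinite-population size
  return Math.ceil(n0 / (1 + (n0 - 1) / population));   // correct for finite N
}

console.log(sampleSize(10_000)); // ≈ 370 responses needed for a 10,000-user journey
```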

Monitoring Implementation

Since the goal of this project was for every product with a Product Designer or Product Manager to have a survey in place, I created a way to track the products that were implementing surveys via our data tool.

Presentation

Results & Learnings

  • Within 5 months, we were able to implement surveys in 45 out of 50 product journeys.

  • Creating a process that involves many stakeholders requires clear communication and explanation. The value proposition must be clear because at some point, the team will need to pause and organize to make the process work in their product.

  • It’s important that user satisfaction be one of the team's KPIs, since any new initiative can affect it. With the survey in place, especially a segmented one, we can investigate and dive deeper into any problems that arise.

  • The satisfaction survey itself can serve as a recruitment tool for future in-depth research.

  • As a Design Ops specialist, I was able to fully dedicate myself to overseeing all phases and providing support. This becomes a complex process when there isn’t a person dedicated to this role.