Article · 02.05.2023 · 7 minutes

So you launched your digital product – now what?

A product launch should be celebrated: you made it! But let’s not roll down our sleeves just yet... As we drive the roadmap forward, fill our backlog and develop feature sets to grow the product beyond the MVP (minimum viable product) stage, we sometimes forget to look in the rearview mirror. Are users engaging with the product? Are there any major friction points or funnel drop-offs?

Periodic health checks are important to ensure we don’t lose any customers along the way – and they can pave the way to timely product improvements. Here are a few ways to determine how to optimize your product’s performance.

Measure

We initially create a digital solution to attain business goals like increasing sales, driving self-serve or automating crucial business processes. After launching, we want to know if the solution is meeting these expectations by regularly following key performance indicators (KPIs).

How many app downloads or sessions do we have? This is usually the first question we ask ourselves, which is quickly followed by “where are users navigating to?” and “what did they do on this or that screen?”

Analytics, or a similar platform, is the usual go-to for these types of quantitative results. If tracking is implemented conscientiously during the development stage, the data we surface will be representative of the business KPIs. We will be able to tell if the conversion funnels are performing well and, if not, where we are losing users along the way. This is generally an excellent starting point for deeper investigation.
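As a rough illustration, here is a minimal TypeScript sketch of funnel instrumentation. The track helper, the event names and the /analytics/events endpoint are hypothetical stand-ins for whichever analytics SDK the product actually uses.

  type FunnelEvent =
    | "checkout_started"
    | "seat_selected"
    | "payment_submitted"
    | "checkout_completed";

  // Hypothetical thin wrapper around the product's analytics SDK.
  // The event names and the /analytics/events endpoint are illustrative only.
  function track(event: FunnelEvent, properties: Record<string, string> = {}): void {
    const payload = JSON.stringify({ event, properties, ts: Date.now() });
    // Fire-and-forget delivery; a real SDK would batch, retry and respect consent.
    navigator.sendBeacon("/analytics/events", payload);
  }

  // Instrument each step of the flow so drop-off between steps becomes measurable.
  track("checkout_started", { flow: "flight_booking" });
  track("seat_selected", { seat: "14C" });

Consistent, step-by-step event naming like this is what later makes funnel reports and dashboards trustworthy.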

To know why users are dropping off, we gather more data – and different types of data. Are there any performance issues like bugs, crashes or slow loading times? Or could the issue be an unclear user journey? Adding complementary performance and qualitative measurement tools equipped with features like crash reporting, session recordings and heat maps can give us valuable insights into the user experience and locate areas where improvements are needed.
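To make this concrete, below is a minimal sketch of client-side crash and slow-load capture, assuming a hypothetical /monitoring/report collector; dedicated crash-reporting and session-recording tools do far more than this.

  // Minimal client-side capture of crashes and slow page loads.
  // The /monitoring/report endpoint is a hypothetical collector, not a real service.
  function report(kind: "error" | "slow_load", detail: Record<string, unknown>): void {
    navigator.sendBeacon("/monitoring/report", JSON.stringify({ kind, detail, ts: Date.now() }));
  }

  // Unhandled exceptions and promise rejections are what users experience as "crashes".
  window.addEventListener("error", (e) => report("error", { message: e.message, source: e.filename }));
  window.addEventListener("unhandledrejection", (e) => report("error", { reason: String(e.reason) }));

  // Flag page loads that blow past an (arbitrary) 3-second budget.
  window.addEventListener("load", () => {
    const nav = performance.getEntriesByType("navigation")[0] as PerformanceNavigationTiming;
    if (nav && nav.duration > 3000) {
      report("slow_load", { durationMs: Math.round(nav.duration), page: location.pathname });
    }
  });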

Finally, sharing is caring… Not everyone has the technical savvy to dig around for numbers and analyze their meaning. Dashboards showcasing real-time indicators are easy to understand and share with stakeholders and decision-makers. A consolidated view of key performance metrics gives Product Managers a solid argument for securing the budget to include improvements in the roadmap.

Test (with testers and users)

Measuring helps us determine fail points – technical or otherwise – but testing is the key to understanding them.

Technical issues such as crashes and bugs can be a major pain point for users, and they should be addressed quickly. However, before bugs are sent in for repairs, they need to be examined and re-tested by experts to identify two key pieces of information for the developers: repro steps and repro rate. The Quality Assurance team will narrow down factors and causes with device-specific and in-context testing. This information is essential to resolving the bug, and it helps us size and prioritize the fix in the backlog according to impact and urgency.

When it comes to figuring out why users are not engaging with the product, we can further investigate by empathizing with them. User research and usability testing can be done at any phase of a project, but they can be particularly insightful in the post-launch phase.

User research is often used in the initial stages of a project, pre-development, to find out if the product’s concept has any value. This is most often done through interviews to better understand the user’s attitudes and behaviours. The outcome can confirm (or disprove) our assumptions and strongly influence the product vision. This method can also be applied to an existing product to test an idea for a feature or concept.

Usability testing helps determine if the product is easy to use. It can take on various forms, but it usually involves getting a limited number of users to accomplish tasks and collecting their feedback. These types of tests can be more or less structured depending on time and needs, from laboratory research to informal guerrilla testing. The aim is to understand what the user thinks and feels as they go through tasks and flows.

Fast iterative prototyping to adjust the design or content can be particularly useful in this context to validate new ideas directly with the user.

Analyze, then optimize

Quantitative data gathered from analytics, combined with qualitative insights collected through observation and testing, gives us the building blocks to answer the crucial questions:

1. Which business goal KPIs are not meeting expectations?

Analytics and dashboard data point to elements that are lagging in performance; a small sketch of this kind of drop-off calculation follows the examples below.
For example:

  • Users starting, but not completing an intake form;
  • Users dropping off partway through a task like reserving a seat on a flight;
  • Many users failing to authenticate.
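As promised above, here is a toy drop-off calculation over funnel event counts; the step names and numbers are placeholders, not real data.

  // A toy step-to-step drop-off calculation over funnel event counts.
  // The steps and numbers are illustrative placeholders.
  const funnel: Array<{ step: string; users: number }> = [
    { step: "form_opened", users: 1000 },
    { step: "form_step_2", users: 640 },
    { step: "form_submitted", users: 410 },
  ];

  for (let i = 1; i < funnel.length; i++) {
    const prev = funnel[i - 1];
    const curr = funnel[i];
    const dropOffPct = ((prev.users - curr.users) / prev.users) * 100;
    console.log(`${prev.step} -> ${curr.step}: ${dropOffPct.toFixed(1)}% drop-off`);
  }

The step with the steepest drop-off is usually the first candidate for deeper qualitative investigation.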

2. Why are the users failing to engage?

Recorded user sessions, heat maps and crash logs can surface more insights quickly. Further testing of technical issues, or of the product’s user experience, helps us gain a deeper understanding of the underlying problems and find potential solutions. For example:

  • Watch session recordings of users filling out the form to determine fail points;
  • Get feedback from users to determine if calls to action and navigational elements are clear enough;
  • Test the authentication flow and pinpoint the root causes of users’ frustrations.

3. How can we improve?

With the root cause of the issue identified and rich insights garnered through observation and testing, Designers, Developers and Product Owners can suggest solutions that will improve user engagement through design, content or technical enhancements.
For example:

  • Re-design the intake form to reduce the completion time;
  • Rework the reservation flow content for usability;
  • Implement a more forgiving password retrieval solution.

As previously mentioned, fast prototyping in the context of usability testing can be fertile ground for testing these potential solutions directly with users. A/B testing with a targeted audience can also be an effective tactic to validate potential solutions.
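To illustrate the quantitative side of A/B testing, here is a minimal sketch of a two-proportion z-test; the conversion numbers are made up, and a real experimentation platform would also handle variant assignment, sample-size planning and significance reporting.

  // Rough two-proportion z-test comparing conversion between two variants.
  // The sample numbers below are placeholders, not real results.
  function zScore(convA: number, usersA: number, convB: number, usersB: number): number {
    const pA = convA / usersA;
    const pB = convB / usersB;
    const pooled = (convA + convB) / (usersA + usersB);
    const se = Math.sqrt(pooled * (1 - pooled) * (1 / usersA + 1 / usersB));
    return (pB - pA) / se;
  }

  // Example: original intake form (A) vs. redesigned form (B).
  const z = zScore(410, 1000, 505, 1020);
  console.log(`z = ${z.toFixed(2)} (|z| > 1.96 roughly corresponds to ~95% confidence)`);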

Make backlog space for the fixes

Having insights into the product’s areas of improvement with the measure-test-optimize method enables us to allocate a portion of our sprints to fixing the basics and the rest to growing the product features as planned.

With this strategy, we ensure we don’t carry user pain points into the future and we stay on top of product performance.