Support Layer

The Setup

I was embedded in a cross-functional team working across Customer Service, Analytics, Technology, and Operations. The challenge was clear: we needed to build a self-service support system that would work first for the Baltics, then scale to all of Wolt's markets.

This meant designing for different comprehension levels and languages while exploring how AI and machine learning could make the experience smarter. One product team tackling a problem that touched thousands of users and support staff daily.


The People

We were designing for three distinct groups:

End Users

The people ordering from the app who needed help when something went wrong.

Support Teams

The local teams and customer service representatives helping users resolve issues.

Operations Team

The internal teams organizing and managing support information across markets.


The Problem

How might we help users help themselves?

When we mapped out how we handled support, we found only two modes: Proactive (where our platform automatically sends messages, like notifying a user about a delay) and Reactive (where users had to reach out to us to get help).

There was nothing in between.

This meant small, simple issues – like a missing drink or a quick question about refunds – were clogging up our support inbox, forcing our support teams to spend time on tasks that users could easily handle themselves and keeping them from the more complex problems that actually needed a human touch.


Understanding the Journey

I started by creating a service blueprint that mapped the entire support journey: every touchpoint between users, support staff, and our systems. This helped us understand where the friction was and where we had opportunities to intervene.

From this exercise, we identified three key support modes:

Proactive – Automated outreach before users need to ask
Self-Service – Users solving issues independently (this was the gap)
Reactive – Human support for complex problems

The self-service layer was our opportunity. We weren't trying to replace human support – we were trying to create a middle ground where users could get instant resolution for straightforward issues.

How We'd Measure Success: The CARES Framework

Before building anything, I needed to define what "good" looked like. Reducing ticket volume was one metric, but it wasn't enough. We could deflect every ticket and still create a terrible experience if users couldn't actually solve their problems.

I created the CARES Framework to guide both what we built and how we'd measure it:

Clarity – Do users understand the help they're getting?
We'd track follow-up questions on the same topic and helpfulness scores on articles. If users kept asking the same thing in different ways, we weren't being clear enough.

Anticipate – Can we show users we know them by solving problems proactively?
We'd measure how many issues were resolved or deflected before they escalated. Success meant fewer users needing human intervention because the system anticipated their needs.

Resolution – Are issues actually getting solved?
First contact resolution rate, average resolution time, and repeat contacts for the same issue would tell us if we were really helping or just pushing problems around.

Efficiency – Are we respecting everyone's time?
Self-service success rate and average handle time would show whether users could quickly find answers or were wasting time before giving up and contacting support anyway.

Satisfaction – Are users happy with the experience?
CSAT scores would tell us if solving problems faster actually translated to happier users, or if they felt abandoned by automation.

This framework meant we couldn't just build features and hope they worked. Every design decision had to serve at least one of these goals, and we'd measure ruthlessly to see if we were actually delivering.
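
To make this concrete, here's a minimal sketch – in Python, with hypothetical field names rather than our actual tracking schema – of how a few of the CARES measures could be computed from a simple log of support contacts:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical contact record; the fields are illustrative, not the real schema.
@dataclass
class SupportContact:
    topic: str                        # e.g. "missing_item", "refund_question"
    channel: str                      # "self_service" or "agent"
    resolved_on_first_contact: bool
    handle_time_min: float
    csat: Optional[int] = None        # 1-5 survey score, if the user answered

def cares_snapshot(contacts: list[SupportContact]) -> dict:
    agent = [c for c in contacts if c.channel == "agent"]
    self_serve = [c for c in contacts if c.channel == "self_service"]
    scored = [c.csat for c in contacts if c.csat is not None]

    def avg(values):
        return sum(values) / len(values) if values else None

    return {
        # Resolution: solved on the first try, no repeat contact needed.
        "first_contact_resolution": avg([c.resolved_on_first_contact for c in agent]),
        # Efficiency: self-service journeys that ended without an agent.
        "self_service_success_rate": avg([c.resolved_on_first_contact for c in self_serve]),
        # Efficiency: average agent handle time in minutes.
        "avg_handle_time_min": avg([c.handle_time_min for c in agent]),
        # Satisfaction: mean CSAT where users answered the survey.
        "csat": avg(scored),
    }
```

The Clarity and Anticipate signals (follow-up questions on the same topic, issues deflected before escalation) would come from the same kind of log, just sliced differently.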

The Solution

We built a modular, scalable self-service layer that integrated into the user's profile through "Quick links" – making support accessible right where users naturally look for help.

The system had to be smart enough to:

  • Recognize common issues from order data
  • Determine when automatic refunds made sense (a missing €1 drink vs a larger order issue)
  • Present relevant help content based on the user's specific situation
  • Know when to escalate to human support

For example, if a user reported a missing item, the system could:

  • Identify what was missing from the order
  • Calculate if the value was low enough for an automatic refund
  • Process the refund instantly
  • Let the user get back to their day

For bigger issues, it would smoothly hand off to our support team with all the context already gathered.
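
A rough sketch of what that triage might look like in code – the threshold, names, and fields below are assumptions for illustration, not the production logic or the actual refund policy:

```python
from dataclasses import dataclass

# Assumed cut-off for illustration only: below this, refund automatically.
AUTO_REFUND_LIMIT_EUR = 4.00

@dataclass
class ReportedItem:
    name: str
    price_eur: float

@dataclass
class Resolution:
    action: str        # "auto_refund" or "escalate"
    amount_eur: float
    context: dict      # handed to support so the user never repeats themselves

def resolve_missing_item(order_id: str, item: ReportedItem) -> Resolution:
    """Triage a missing-item report: instant refund for small amounts,
    escalation with full context for anything bigger."""
    context = {"order_id": order_id, "item": item.name, "value_eur": item.price_eur}
    if item.price_eur <= AUTO_REFUND_LIMIT_EUR:
        # Small, unambiguous issue: refund instantly, no human in the loop.
        return Resolution("auto_refund", item.price_eur, context)
    # Larger issue: hand off to an agent with everything already gathered.
    return Resolution("escalate", item.price_eur, context)

# A missing €1 drink resolves itself; a €30 order issue goes to a human.
resolve_missing_item("order-123", ReportedItem("Sparkling water", 1.00))
resolve_missing_item("order-456", ReportedItem("Family pizza bundle", 30.00))
```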

Building It Out

We broke down the page structure into dynamic and static components:

  • Dynamic content that changed based on the user's order and issue
  • Static content for general FAQs and help articles
  • Clear information hierarchy (L1, L2, L3) so users could drill down from category to specific answers
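
As an illustration of that split (the types and names here are a sketch, not the actual content model), the page could be described as a tree of static and dynamic blocks, each tagged with its hierarchy level:

```python
from dataclasses import dataclass, field
from typing import Literal

# Hypothetical content model for the self-service page; names are illustrative.
@dataclass
class SupportBlock:
    title: str
    kind: Literal["static", "dynamic"]   # dynamic blocks render from the user's order data
    level: Literal["L1", "L2", "L3"]     # L1 category -> L2 topic -> L3 specific answer
    children: list["SupportBlock"] = field(default_factory=list)

order_help = SupportBlock(
    title="Order issues", kind="dynamic", level="L1",
    children=[
        SupportBlock(
            title="Missing or incorrect items", kind="dynamic", level="L2",
            children=[
                SupportBlock(title="Report a missing item from this order",
                             kind="dynamic", level="L3"),
            ],
        ),
        SupportBlock(
            title="How refunds work", kind="static", level="L2",
            children=[
                SupportBlock(title="When will I get my money back?",
                             kind="static", level="L3"),
            ],
        ),
    ],
)
```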

A prioritization framework helped us decide what to build first. We focused on high-opportunity, low-risk features that we could ship quickly and measure. Issues with ambiguous solutions or high risk? Those got more careful, iterative design and testing.

The Impact

We tested this first in Sweden, then expanded to Estonia and Latvia. The results told an interesting story:

What stayed the same:

  • Delivery and item ratings – No significant difference
  • 30-day retention – No significant difference
  • Refunded purchase percentage – No significant difference
  • Average refunded amount – No significant difference

What improved:

  • Contact rate dropped significantly in both test markets
  • Sweden: -0.77pp (-15.33%)
  • Estonia/Latvia: -0.58pp (-16.94%)
  • Even when adjusted for Intercom chatbot deflection
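
For anyone cross-checking the numbers, the percentage-point drop and the relative drop are two views of the same change. A quick back-of-the-envelope in Python (the baseline contact rates are implied by the figures above, not separately reported):

```python
# relative change = percentage-point change / baseline contact rate
sweden_baseline  = 0.77 / 0.1533   # ≈ 5.0% of orders led to a support contact
baltics_baseline = 0.58 / 0.1694   # ≈ 3.4% for Estonia/Latvia

print(round(sweden_baseline, 2), round(baltics_baseline, 2))  # 5.02 3.42
```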

Users were getting their problems solved without needing to contact support. And critically, they were just as satisfied with the experience. Our support teams could now focus on the complex, challenging cases that actually needed human judgment.

What I Learned

This project taught me that good self-service isn't about avoiding human contact – it's about respecting people's time. Most users don't want to chat with support for a missing item. They want it fixed, fast, so they can move on.

The key was understanding the entire service ecosystem: what users needed, what support teams were handling repeatedly, and where technology could create the most value for both groups.

TL;DR

Users were contacting support for simple issues that could be resolved instantly. We built a modular self-service layer that let users fix common problems themselves – missing items, refunds, order questions – without waiting for human help. The result: contact rates dropped 15-17% across test markets while satisfaction, ratings, and retention showed no significant change. Our support teams got their time back to handle complex cases that actually needed human judgment.