Overview
At OneAssist, we provide digital protection plans, from mobile and gadget insurance to extended warranties for used cars and appliances.
This project started with a simple goal: to make buying and managing protection plans easier and more human.
We noticed two big issues: our sales funnel was attracting unqualified leads, and our service team was flooded with status-check calls.
That’s where the idea of a conversational AI chatbot came in: something that could talk to users, understand their questions, and guide them through the right journey, whether they were looking to buy a plan or check their claim status.
Over time, this evolved into a scalable framework that could power both sales and service experiences across our ecosystem.
Problem Statement
Before we built the chatbot, users landed on a basic lead form to show interest in mobile protection plans. But most of those leads weren’t genuine; many people filled out the form just to explore, often with incomplete or incorrect details.
Our CRM data showed that agents were spending 15–20 minutes per call answering the same questions about coverage, claims, and pricing. These were things users should have been able to clarify instantly online.
We also noticed another pattern:
Even when agents followed up, many users didn’t pick up or had already lost interest.
Multiple follow-ups created frustration on both sides. Users felt chased, and agents lost time managing low-quality leads.
On the service side, we saw similar friction.
Customers who had already bought a plan were still calling just to check claim status.
It became clear:
People didn’t want forms or follow-up calls. They wanted quick, trustworthy conversations.
That realisation became our foundation to design a unified system that could assist users both before and after purchase, making the experience faster, clearer, and more human.
Goals
| Business | User/Customer | Agent (Ops) |
|---|---|---|
| Improve lead quality and conversion efficiency for mobile protection plans | Get quick, trustworthy answers about plan coverage, pricing, and eligibility | Receive only qualified leads through chatbot pre-screening |
| Reduce agent call duration and lower cost per acquisition | Ask questions naturally without filling forms or waiting for a callback | Reduce repetitive explainer conversations to focus on real conversions |
| Streamline post-purchase service queries through self-service claim and pickup updates | Track claim or pickup updates instantly without contacting support | Access clear context before each call for personalised follow-ups |
| Build a reusable conversational framework scalable across multiple OneAssist products | Feel in control and informed through a transparent experience | Avoid manual follow-ups on service queries handled automatically by the chatbot |
My Role
I led this project as the sole UX designer, defining both the experience and the strategy behind OneAssist’s first AI-powered chatbot.
Early on, I partnered with the data science team to structure the knowledge base and intent taxonomy, making sure the chatbot could interpret real user questions instead of just matching keywords. This helped the system handle real-world conversations like “Is screen damage covered?” or “Can I claim if I bought my phone online?” naturally and accurately.
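To make this concrete, here is a simplified Python sketch of the kind of intent-to-response mapping we aligned on with the data science team; the intent names, sample utterances, and fields below are illustrative placeholders, not the production taxonomy.

```python
# Illustrative intent taxonomy: names, utterances, and fields are examples,
# not the production knowledge base.
INTENT_TAXONOMY = {
    "coverage_scope": {
        "description": "What the protection plan does and does not cover",
        "sample_utterances": [
            "Is screen damage covered?",
            "Does the plan cover water damage?",
        ],
        "response_node": "explain_coverage",
    },
    "purchase_eligibility": {
        "description": "Whether a device or purchase qualifies for a plan",
        "sample_utterances": [
            "Can I claim if I bought my phone online?",
            "Is my one-year-old phone still eligible?",
        ],
        "response_node": "check_eligibility",
    },
}


def route_to_node(predicted_intent: str) -> str:
    """Map a classified intent to the conversational node that answers it."""
    node = INTENT_TAXONOMY.get(predicted_intent, {}).get("response_node")
    return node or "fallback_human_handoff"  # unknown intents go to a human
```

Structuring the knowledge this way, rather than as flat FAQ text, is what let the chatbot answer differently phrased questions with the same consistent response.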
To make the chat feel human and aligned with our brand, I collaborated with a UX writer to refine tone and microcopy. We designed response templates that balanced clarity and empathy, keeping them short, conversational, and transparent about what the system could or couldn’t do.
Together with the product manager, marketing, and sales teams, we ran controlled A/B experiments to test real user engagement. Around 60% of incoming traffic was routed directly to the chatbot, while the rest experienced the existing form. This helped us validate that conversational flows actually improved lead quality before we scaled the chatbot further.
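As a minimal sketch of how a 60/40 split like this can be implemented deterministically (the hashing scheme and visitor IDs here are assumptions; only the ratio comes from the experiment):

```python
import hashlib

CHATBOT_SHARE = 0.60  # share of traffic routed to the chatbot variant


def assign_variant(visitor_id: str) -> str:
    """Deterministically bucket a visitor so they always see the same experience."""
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform value in [0, 1]
    return "chatbot" if bucket < CHATBOT_SHARE else "lead_form"


print(assign_variant("visitor-12345"))  # same visitor, same variant, every time
```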
Finally, I worked with engineering to integrate backend APIs for lead capture and service tracking, connecting chatbot interactions directly to CRM and claim systems.
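As a rough illustration of what that hand-off looked like conceptually, here is a sketch of pushing a chatbot-qualified lead into the CRM; the endpoint, field names, and scoring are placeholders, not the actual integration.

```python
import requests

# Placeholder endpoint; the real CRM integration details are internal.
CRM_LEAD_ENDPOINT = "https://crm.example.internal/api/leads"


def push_qualified_lead(session: dict) -> bool:
    """Send a chatbot-qualified lead to the CRM with its conversation context."""
    payload = {
        "name": session.get("name"),
        "phone": session.get("phone"),
        "device_model": session.get("device_model"),
        "intent": session.get("intent"),              # e.g. "buy_mobile_protection"
        "qualification_score": session.get("score"),  # from chatbot pre-screening
        "source": "chatbot",
    }
    response = requests.post(CRM_LEAD_ENDPOINT, json=payload, timeout=10)
    return response.ok
```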
This cross-functional effort turned the chatbot into more than a lead-generation tool. It became a bridge between sales, service, and users, helping everyone: customers, agents, and the business alike.
Research & Planning
Before jumping into design, I wanted to understand not just what users struggled with, but how our internal teams (sales and service/ops) were affected by those same gaps.
We used a mixed-methods research approach (qualitative and quantitative), combining real conversations, behavioural data, and process mapping to capture the full picture.
Call Listening & Observation
Reviewed around 90 pre- and post-purchase call recordings to identify common user questions and emotional triggers. We noticed recurring queries around coverage, claim timelines, and pickup delays, most of which could have been handled automatically with clearer information upfront.
Operational Insights from CRM Data
| Sales | Service/Support |
|---|---|
| Our CRM data revealed that agents were overwhelmed with a flood of low-quality leads, many containing incomplete or incorrect information. | On the service side, most support calls were about basic claim status updates, things users could’ve easily checked themselves. |
Key Research Insights
1. Users seek conversational guidance to make sense of choices
Many users weren’t sure what to ask or how to describe their problem. When faced with blank input fields, they froze or dropped off.
We learned that giving them simple, guided options not only reduced hesitation but also built confidence in the conversation.
2. Empathy turns hesitation into trust
Our call audits revealed that overly formal replies made users second-guess the brand.
When responses sounded conversational and acknowledged doubt or emotion, users opened up faster and were more likely to continue.
3. Availability and consistency across channels build assurance
Users wanted to reach support wherever they felt most comfortable (WhatsApp, SMS, or email), without needing to open the app or website to chat.
However, static updates like emails or SMS alerts were often ignored because they felt impersonal and one-sided.
Maintaining the same empathetic, conversational tone across every touchpoint, not just formal notifications, became essential to keeping users informed and reassured.
4. Automation should support, not replace, human touch
Users were comfortable using automation for quick queries but expected a clear fallback when stuck or unsure. For this customer base, having visible options like “Call support” or “Connect with an agent” wasn’t just nice to have. It was essential.
Design needed to ensure automation improved efficiency without ever removing the reassurance of human help.
User Personas
To keep things simple, I’ve represented our two main user types through hypothetical personas, not real individuals, but composites built from the patterns we saw in research and data.


User Journey

✨ Crafting the Conversation Experience
Customer support (logged-in User on app/web)
Use case: Claim already raised

Sales intent (entry point -> landing page, new user)

UX Challenges & How I Tackled Them
1. Inconsistent Knowledge Responses
Early replies felt fragmented and robotic.
I restructured FAQs into intent-based conversational nodes with the data science team, ensuring consistent, human-like responses.
2. Balancing Automation with Empathy
Speed alone wasn’t enough; users needed reassurance.
I added contextual fallbacks like “Would you like to connect with an expert?” to make automation feel safe and optional.
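For illustration, here is a rough sketch of how one of these intent-based nodes, with its human fallback always attached, could be modelled; the intent name, copy, and structure are examples rather than the production schema.

```python
from dataclasses import dataclass, field


@dataclass
class ConversationNode:
    """One intent, one clear answer, and a safe exit to a human (illustrative)."""
    intent: str
    reply: str
    quick_replies: list = field(default_factory=list)
    fallback_prompt: str = "Would you like to connect with an expert?"


screen_damage = ConversationNode(
    intent="coverage_scope.screen_damage",
    reply="Accidental screen damage is covered under the mobile protection plan.",
    quick_replies=["How do I raise a claim?", "What isn't covered?", "Talk to an expert"],
)
```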
3. Omnichannel Consistency
Different channels (chat, WhatsApp, SMS) created tone gaps.
I defined adaptable templates and tone rules so the experience felt like one assistant, not three systems.
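A simplified sketch of what “one assistant, three channels” means in practice; the channel limits and copy below are illustrative assumptions, not the actual template rules.

```python
# Same assistant voice everywhere; only the constraints change per channel.
CHANNEL_RULES = {
    "web_chat": {"max_chars": 320, "quick_replies": True},
    "whatsapp": {"max_chars": 320, "quick_replies": True},
    "sms":      {"max_chars": 160, "quick_replies": False},
}


def render_claim_update(channel: str, first_name: str, status: str) -> str:
    """Render the same claim update in one consistent, conversational tone."""
    rules = CHANNEL_RULES[channel]
    message = f"Hi {first_name}, quick update: your claim is now {status}."
    if rules["quick_replies"]:
        message += " Tap 'Details' for the full timeline."
    return message[: rules["max_chars"]]


print(render_claim_update("sms", "Priya", "approved for pickup"))
```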
4. Reducing Cognitive Load
Early designs sent too much text at once.
We adopted a “one idea per bubble” approach: concise, empathetic messages that boosted readability and engagement.
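As a rough sketch of the idea (the splitting rule here is a naive, illustrative one, not the actual logic):

```python
def split_into_bubbles(reply: str) -> list:
    """Break a long reply into short, single-idea bubbles (one sentence each)."""
    sentences = [s.strip() for s in reply.split(". ") if s.strip()]
    return [s if s.endswith((".", "!", "?")) else s + "." for s in sentences]


long_reply = (
    "Your claim has been approved. A pickup is scheduled for tomorrow between 10 am and 1 pm. "
    "You'll receive an SMS with the technician's details."
)
for bubble in split_into_bubbles(long_reply):
    print(bubble)
```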
Next Steps
Agent Copilot: Build an AI assistant to surface user intent and sentiment for agents before calls.
Context Memory: Enable the chatbot to recall past interactions for smoother follow-ups.
Sentiment Escalation: Auto-route frustrated users to human support.
Multilingual Support: Localize the chatbot for Tier-2/3 audiences.




