A co-pilot experience for agents


Role
UX lead, rapid experimentation for MVP, vision and strategy development

Timeline
12 months

How might we create a unified AI assistive experience that empowers agents to deliver higher customer satisfaction?

Challenge: Fix a fragmented experience

While aiming to boost customer satisfaction with AI-driven automation, High-touch Support (HTS), a part of Google Support, overlooked the fact that the evolving technology still required significant agent input. HTS initially focused on resolution speed without fully understanding agent workflows. The result was a confusing mix of AI tools, developed independently by various sub-teams, that ultimately hindered agent efficiency. There was a clear need for a set of patterns supporting a cohesive assistive experience.

Building Cases Assistant MVP

My team took on this challenge by building Cases Assistant. We aimed to give agents AI “superpowers” to resolve issues quickly and deliver exceptional customer support. We also realized that a shared surface would let other teams experiment with AI capabilities without adding more confusing tools for agents.

Balancing usability with velocity

Our first problem was identifying the optimal placement for the assistive surface, balancing workflow efficiency, accessibility, and technical constraints. When cross-functional partners raised concerns over the development cost of the optimal solution, I used research findings to demonstrate its critical impact on agent adoption. I led design iterations and concept validation, advocating for a vertical panel to ensure a seamless experience that scales well.

Exploring placement options for Cases Assistant

Prioritizing diverse agent needs for the MVP

I developed a range of concepts for research to understand our agents' diverse needs; some were driven by personal preference, while others reflected experience level and product specialization. We discovered that agents generally valued clear, predictable assistance aligned with their workflows, which were often poorly documented. These insights shaped our MVP: step-by-step guidance based on established procedures, starting with Ads billing issues.

Exploring various styles and capabilities of agent assistance

Scaling towards unified agent assistance

Following the Cases Assistant MVP launch, our goal was to expand it into a unified platform for AI assistance across all channels. To drive adoption, I proactively collaborated with UX counterparts from other agent AI teams, sharing our progress and understanding their challenges. This enabled me to articulate a clear vision for Cases Assistant as a central hub, unifying disparate AI experiences. To validate this vision, I facilitated a design sprint, bringing all teams together to pressure-test Cases Assistant’s ability to integrate diverse workflows and automation levels.

Mapped out workflows for the different support channels to assess potential gaps in Cases Assistant

To ensure team alignment, I developed a framework categorizing agent assistance by automation level, and mapped key components against the agent's issue resolution journey. The visualizations fostered a shared understanding of both offline and live assistance models, enabling more effective collaboration.

We explored numerous designs for supporting agents through the different stages of case handling across all communication channels, and checked these ideas against the product roadmap to identify gaps and conflicts. The final set of design recommendations was built into prototypes for validation with agents.

An assistive vision to drive the roadmap

The final outcome was a two-year vision for HTS to build toward a cohesive assistive experience, giving agents the “superpower” to deliver high customer satisfaction. We also recommended a phased strategy that translates the vision into a feasible roadmap with cross-functional buy-in.