Overture

An AI-powered web platform that guides small-to-medium business marketers through Optimizely, simplifying A/B test setup and analysis to directly combat the platform's highest source of customer churn.

The solution elevates user confidence and engagement by integrating a proactive AI assistant, clear visual guides, and a "Launch Readiness" monitor, ensuring customers can consistently run successful experiments.

My Role

Product Designer

Skills

End-to-end Design

Interaction Design

Information Architecture

Visual & UI Design

User Flows

Team

3 Product Designers

1 UX Researcher

1 Senior Product Manager @ Optimizely


Head of Product, Experimentation @ Optimizely

Timeline

6 months

Key Features

Meet Opal: The Proactive AI Assistant

We thoughtfully embedded Optimizely’s existing AI, Opal, across all stages of test setup: creation, launch, analysis, and iteration. This accelerates setup and design while guiding customers through best practices, empowering everyone to run smarter, more effective experiments.

Launch Confidently: Checkpoints for Every Step

A redesigned preview, a Launch Readiness monitor, and a new Publishing Center ensure customers can launch without worry.

Show Don’t Tell: Unveiling the Connections During Experiment Setup

Overture organizes key setup elements in one clear, interactive node layout paired with the visual editor, making it easy for users to follow each step, build their experiment smoothly, and see their progress at a glance.

Impact

Overture’s design solution is projected to boost user confidence, accelerate experimentation, and improve retention—especially for small and medium businesses:

45%

Fewer experiments running longer than two months.

With clearer statistical guidance, users can reach significance more quickly and confidently conclude tests.

50%+

Churned ARR (Annual Recurring Revenue) pain points directly addressed for SMBs.

Targeted solutions now make experimentation faster, easier, and more accessible for this segment.

What our Sponsors are Saying

Our work helped shape the Web Experimentation roadmap at Optimizely and contributed to the beta launch of the Experiment QA Agent for Opal:

“We’ve taken a ton of the feedback and already started implementing it…the research and recommendations are already being folded into Opal.

Today we are launching the first beta of the Optimizely Experiment QA Agent, which is the feature they talked about that gives warnings and updates and helps [the user] build a really good test. It is being built into Opal and going into our beta today.”

-Britt Hall

Vice President of Product, Digital Optimization

“I loved working with this team. Throughout the entire experience you could see they cared about making a better experience for the customers and making sure the customers felt confident in what they were doing…That’s great news that we are already starting to implement some things!”

-Brittany Buttler

Senior Product Manager

Context

Optimizely’s Web Experimentation (WebEx) is a platform that helps businesses compare different versions of their digital experiences to learn what resonates most with their customers and improve key performance metrics.

A/B testing platforms like WebEx are powerful, but they can be confusing for newcomers.

The Problem

Customers face the frustration of campaign concepts getting stuck in complicated setup, test failures, and slow feedback loops.

Small-to-medium businesses who use Web Experimentation struggle to launch successful tests, then leave the platform for other solutions that fit their experimentation needs or abandon experimentation altogether. Web Experimentation customers can pay more for dedicated onboarding support, but for those who don’t, Optimizely’s Customer Success team can only intervene as a last resort to re-engage struggling users, creating additional workload and delays.

Goals

Human 🤝: Establish trust and customer confidence in order to empower customers when using Optimizely for experimentation.

Business 💵: Reduce churn and loss of revenue.

Current Interface

Users face multiple points of friction in setting up a successful experiment.

Design Challenge

How do we create an experience that guides users to launch their first test, use that initial success to build confidence, and reveal the compounding value of experimentation — all while teaching best test practices?

What I discovered through Research

Main Insight:

Participants lack confidence in setting up and launching A/B experiments on Optimizely because they have insufficient guidance and feedback to validate their choices. This uncertainty undermines trust in both their results and the platform itself.

To understand the early experiences of self-serve users on the Web Experimentation platform and what was going wrong, we:

➜ Conducted desk research, consisting of competitive analysis and a heuristic evaluation of the platform.

➜ Spoke with 5 Subject Matter Experts (Optimizely employees).

➜ Ran 6 contextual usability studies with professionals who had A/B testing experience but had not used Optimizely WebEx.

Key Findings

Many customers quit because of a lack of help, hidden steps, and roadblocks within the test-setup process. Our findings revealed the need to fully transform setup to improve the customer experience.

➜ Click for the final Research Report

Mapping the Experiment Set-Up Journey

All of those methods and sessions helped us put together a Journey Map to break down the complexity of the user task flow and visualize how participants moved through each step.

It helped us capture user emotions along the way and highlight where people got stuck, allowing us to prioritize pain points and uncover key opportunities for improvement.

We split the Web Experimentation journey into four phases: account creation, creating a new project, setting up the experiment (the bulk of the journey), and seeing results.

Design Ideation

Exploring 2 different layouts to reduce complexity and embed help and guidance exactly where it is needed

We started ideation by rapidly brainstorming 8+ high-level concepts that addressed our "How might we" statements. Not all of our ideas at this stage involved AI, but we kept AI and Opal at the forefront of our minds throughout, knowing that Opal is a huge priority for the future of Optimizely.

What are ways we can resolve the breakpoints in the customer journey?

What's the story for a new user of WebEx?

Once we finished brainstorming individually, we came together, affinitized similar ideas into themes, and narrowed down to the two ideas we found most promising:

Node Layout

Simplify setup by visualizing all components in one place, connected.

Word Clouding

Users describe tests in their own words ➜ AI generates setup + highlights input mapping. 

Rapid Prototyping & Feedback Loops

Ensuring Concepts Fit with Real User Workflows

My co-designers and I mapped out a user flow combining the word clouding and node layout concepts, then built low-fidelity wireframes in Figma for concept testing with seven participants in full-time, marketing-adjacent roles with A/B testing experience.

1. New experiment interface with a blank "notepad" or word clouding area for users to brainstorm their experiment ideas.

2. Users enter "notepad" content for Opal AI to generate an experiment on the right side in a node layout format.

3. Deep dive into Node Layout: nesting Metrics under Event components + Opal AI suggestions.

4. Exploring how we might keep Optimizely's unique Visual Editor feature in this design.

5. Rethinking how we might allow users to preview multiple variants at a time.

6. Providing reassurance to the user that their experiment has been published and thinking about next steps.

I conducted two of those concept testing sessions, starting with a generative interview about the user's current workflow and approach to AI before showing them a low-fidelity prototype of Overture. Grounding their feedback in real-world needs produced reliable, contextual insights.

How User Insights Led Us to a Visual-First Approach

There was not enough time to implement every change we wanted, so we prioritized by impact and effort using a priority matrix.

In particular, I championed the following decision, based on new user findings from concept testing:

Finding 1: Expectations for AI

While Word Clouding with Opal AI, represented by the notepad space on the left side of the UI, was positively received, participants saw its utility strictly as a brainstorming or ideation feature, not as the primary avenue for setting up final experiment parameters.

Finding 2: Visuals First

Participants anchor their understanding of an experiment to its visual components and expect an early visual representation of the test to help them conceptualize and validate their work during setup.

Design Decision

I convinced the team to swap the notepad editor for a visual editor. Feedback showed that users found the notepad editor repetitive and confusing, so we pivoted toward the visual editor, whose uniqueness had delighted and inspired users in our initial usability research. This made the design more dynamic and visual-first.

Building on these insights, I contributed to the high-fidelity prototype using the Optimizely design system. We then conducted usability testing to further understand the value of our revised workflows and perceptions of Opal's utility and trust.

We tested the high-fidelity prototype with eight participants in full-time, marketing-adjacent roles with A/B testing experience. Four of the eight were current Optimizely WebEx customers.

Introducing Overture

At the end of our 10-week capstone, we delivered a comprehensive guided web experimentation setup experience. Rather than covering every technical feature, these scenarios highlight the key moments where Overture dramatically improved clarity and confidence for users: guided setup, AI-powered recommendations, and worry-free test launches.

Opal as an AI Setup Assistant

Opal actively guides users in building stronger hypotheses and experiment designs.

Then

Opal’s AI agent was largely passive and easily missed, offering little more than generic prompts. It didn’t adapt to user intent or context, so it rarely helped users clarify what they wanted to achieve beyond pointing them to help documentation, much less guided them in shaping stronger experiments and hypotheses.

Now

Opal now actively learns user objectives and context from the start to deliver tailored guidance for clarifying goals, refining hypotheses, and structuring experiments. Based on feedback, every AI suggestion is reviewable and customizable, ensuring transparency and user control throughout setup.
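As a rough sketch of what "reviewable and customizable" could mean in practice, each suggestion might carry its rationale and an explicit review state. Every name below is a hypothetical illustration, not Opal's actual API.

```typescript
// Hypothetical shape for a reviewable Opal suggestion; names are illustrative,
// not Opal's real API.
type ReviewState = "pending" | "accepted" | "edited" | "dismissed";

interface OpalSuggestion {
  id: string;
  field: "hypothesis" | "audience" | "primaryMetric" | "variantChange";
  proposedValue: string;    // what Opal proposes
  rationale: string;        // why, surfaced to the user for transparency
  reviewState: ReviewState; // the user stays in control of every change
  userOverride?: string;    // set when the user edits the suggestion
}

// Example: a suggestion the user tweaked rather than accepting as-is.
const suggestion: OpalSuggestion = {
  id: "sug-01",
  field: "hypothesis",
  proposedValue: "Moving the CTA above the fold will increase sign-ups.",
  rationale: "Your stated goal is sign-ups, and the CTA is currently below the fold.",
  reviewState: "edited",
  userOverride: "Moving the CTA above the fold will increase sign-ups by 5%.",
};
```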

Unified & Simplified Experiment Setup

Consolidated setup and visual clarity make configuring experiments easier for every user.

Then

Setting up experiment components in Classic WebEx was confusing and fragmented across workflow steps. During usability testing, 5 of 6 users struggled to find where to set up events (what to measure) and how they connected to metrics (how the event is measured), resulting in frequent errors or complete blockers.

Now

Opal automates experiment setup based on the confirmed hypothesis, generating a unified node layout where all components are arranged in a single interactive view. Event components are visually nested within metrics, inspired by block coding, making dependencies explicit and setup straightforward, especially for new users.
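To make that nesting concrete, here is a minimal sketch of the setup graph the node layout could be built on, using hypothetical type names rather than Optimizely's real data model:

```typescript
// Hypothetical sketch of the setup graph behind the node layout.
// None of these names come from Optimizely's actual schema.

interface TrackedEvent {
  id: string;
  name: string;                       // e.g. "Clicked 'Add to cart'"
  type: "click" | "pageview" | "custom";
  selector?: string;                  // CSS selector for click events
}

// A metric nests the event it measures, so the dependency is explicit:
// removing the event would orphan the metric, and the UI can surface that.
interface Metric {
  id: string;
  event: TrackedEvent;                // nested, block-coding style
  aggregation: "unique_conversions" | "total_conversions" | "revenue";
  winningDirection: "increase" | "decrease";
}

interface Variant {
  id: string;
  name: string;                       // e.g. "Variation #1"
  trafficSplit: number;               // share of traffic, 0..1
  changes: string[];                  // references to visual-editor edits
}

interface ExperimentDraft {
  hypothesis: string;
  audience: string;
  variants: Variant[];
  metrics: Metric[];                  // primary metric first
}
```

Nesting the event inside the metric mirrors the block-coding metaphor the design borrows: the connection a user sees on screen is the same dependency the data carries.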

Driving Confidence during Experiment Preview

A slider overlay enables direct side-by-side variant comparison for trustworthy previews.

Then

Participants had difficulty comparing variants in Classic WebEx: the one-time pop-up only allowed one variant preview at a time and used technical jargon in its navigation. This made it hard to spot differences and reduced confidence in experiment outcomes.

Now

A permanent top navigation with simple dropdowns and a full-screen slider overlay lets users seamlessly switch between variants for clear, side-by-side comparisons. Nearly all participants preferred this feature, and it helps users confidently validate tests before publishing.

Reaching Statistical Significance for Launching Experiments

A centralized Publishing Center with a launch readiness monitor improves quality and planning.

Then

Users wanted trustworthy results and to realize value from their experiments sooner, but that is undermined when users are unaware of statistically sound test practices. Classic WebEx lacked built-in quality checks and pre-launch planning tools, and users wanted to see an experiment summary before launching their test rather than after.

Now

The Publishing Center offers a Launch Readiness Checklist, a runtime calculator, and scheduling tools for greater experiment success. A Pre-Launch Summary and easy sharing make collaboration seamless; all participants preferred to align with a co-worker before publishing.
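For a sense of the math a runtime calculator could surface, here is a minimal sketch using the standard two-proportion sample-size approximation. The function names and the fixed alpha and power values are illustrative assumptions, not how Optimizely's Stats Engine computes significance.

```typescript
// Hypothetical sketch of the estimate a runtime calculator could show.
// Standard two-proportion sample-size approximation, for planning only.

function requiredSamplePerVariant(
  baselineRate: number,      // e.g. 0.04 = 4% conversion rate
  minDetectableLift: number, // relative lift, e.g. 0.10 = +10%
  zAlpha = 1.96,             // two-sided alpha = 0.05
  zBeta = 0.8416             // power = 0.80
): number {
  const p1 = baselineRate;
  const p2 = baselineRate * (1 + minDetectableLift);
  const pBar = (p1 + p2) / 2;
  const numerator =
    zAlpha * Math.sqrt(2 * pBar * (1 - pBar)) +
    zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2));
  return Math.ceil((numerator ** 2) / ((p2 - p1) ** 2));
}

function estimatedRuntimeDays(
  dailyVisitors: number,     // visitors entering the experiment per day
  variantCount: number,
  baselineRate: number,
  minDetectableLift: number
): number {
  const perVariant = requiredSamplePerVariant(baselineRate, minDetectableLift);
  const perVariantPerDay = dailyVisitors / variantCount;
  return Math.ceil(perVariant / perVariantPerDay);
}

// Example: 4% baseline, +10% relative lift, 2 variants, 5,000 daily visitors
// -> roughly 39,500 visitors per variant, about 16 days.
console.log(estimatedRuntimeDays(5000, 2, 0.04, 0.1));
```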

Reflection

Contextual Unfolding

Setting up an A/B test can become technical and quickly overwhelming, so designing for experimentation taught me the value of context and progressive guidance. We aimed to create an experience where users wouldn’t need heavy instructions to succeed. Hence, I found journey mapping very helpful in shaping intuitive, step-by-step flows where we could make guidance feel naturally built-in. I also learned that good design offers clear direction with flexible options, a mindset I brought to designing for AI, making support timely and relevant to each user’s situation.

Balancing Design Systems

Optimizely’s design system was massive, and at first it felt overwhelming to take it all on. I found that paring it down to a few basic elements for early, low-fidelity prototyping made it much easier to keep things consistent without getting bogged down by complexity. Yet as we iterated, I found value in stepping beyond the established components to freely explore new ideas. Balancing the design system with creative freedom let us brainstorm openly and use system elements where they worked best, ensuring both consistency and innovation in our final outcome.