
Designing Experimentation as a System at Scale

Standardizing A/B testing to drive learning, velocity, and component evolution across att.com

Portions of this case study are abstracted due to proprietary experimentation data and internal workflows.

Public summary

I helped standardize how experimentation worked across att.com so teams could move faster without fragmenting the design system. I partnered with PMs, strategists, and engineering to design variants, shape hypotheses, and convert results into durable patterns. The core shift was turning “one-off tests” into a repeatable learning loop that improved decision quality and increased velocity. Successful experiments informed reusable components and updated standards tied to KPIs. The details below are gated due to internal workflows and proprietary data.

TL;DR: Gated
Role: Senior UX Designer
Scope: Experiment design, variant creation, hypothesis development, insight application
Context: A/B testing across att.com amid evolving business and design system needs
Impact: Established standards for experimentation, increased test velocity, and translated results into reusable components tied to KPIs

Full case study (gated)

Context

A/B testing on att.com began within my team as a way to validate design decisions against business outcomes. Over time, experimentation expanded in scope and importance, influencing how new designs were evaluated and adopted across the organization.

However, early experimentation efforts lacked:

  • Consistent standards for variant creation
  • Clear alignment with the existing design system
  • A repeatable way to turn test results into durable design improvements

What started as isolated tests needed to become a learning system.

My Role

I worked as a Senior UX Designer embedded in experimentation workflows, partnering closely with PMs, strategists, and engineering.

My responsibilities included:

  • Designing test variants
  • Ideating and shaping new experiments
  • Partnering on hypothesis development
  • Translating insights into future design work
  • Helping define how experimentation fit within the design system

This role sat at the intersection of design, data, and decision-making.

The Problem

As experimentation scaled, friction emerged:

  • Variants were created inconsistently
  • Tests didn’t always map cleanly to design system components
  • Insights lived in PM summaries rather than informing future patterns
  • Successful designs were difficult to operationalize beyond a single test

Without structure, experimentation risked becoming tactical validation rather than strategic learning.

Establishing a Standard for Experimentation

My team became the starting point for how A/B testing was designed and executed.

Rather than treating each test as bespoke, we focused on:

  • Defining how variants should be structured
  • Ensuring tests aligned with system constraints
  • Making results reusable beyond the immediate experiment

This allowed experimentation to scale without fragmenting the design system.
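
To make this concrete, here is a minimal sketch of what a shared experiment spec can look like, written in TypeScript. The field names and the example test are illustrative assumptions, not AT&T's internal schema or actual test data.

```typescript
// Illustrative sketch only: field names and the example below are assumptions,
// not AT&T's internal experiment schema or real test data.

type VariantSource = "existing-component" | "flexed-component" | "net-new";

interface VariantSpec {
  id: string;                    // e.g. "control", "v1"
  description: string;           // what changes relative to control
  source: VariantSource;         // how the design relates to the system library
  accessibilityReviewed: boolean;
}

interface ExperimentSpec {
  name: string;
  hypothesis: string;            // "If we ..., then ..., because ..."
  primaryKpi: string;            // the KPI the test is accountable to
  guardrailKpis: string[];       // metrics that must not regress
  systemComponents: string[];    // design-system components the test touches
  variants: VariantSpec[];
}

// Hypothetical example: a density test on a plan-comparison module.
const planCompareDensity: ExperimentSpec = {
  name: "plan-compare-density",
  hypothesis:
    "If we reduce visual density in the plan comparison table, " +
    "then add-to-cart rate increases, because key differences are easier to scan.",
  primaryKpi: "add-to-cart rate",
  guardrailKpis: ["bounce rate", "support contact rate"],
  systemComponents: ["ComparisonTable", "PricingCard"],
  variants: [
    {
      id: "control",
      description: "Current production table",
      source: "existing-component",
      accessibilityReviewed: true,
    },
    {
      id: "v1",
      description: "Condensed rows with progressive disclosure",
      source: "flexed-component",
      accessibilityReviewed: true,
    },
  ],
};
```

A spec in this spirit keeps every test answerable to the same questions: what is the hypothesis, which KPI decides it, and which parts of the system it touches.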

Designing Variants Within System Constraints

A key challenge was that experimentation often required designs that:

  • Did not yet exist in the AT&T design library
  • Needed to flex existing components
  • Had to meet accessibility and implementation constraints

I worked to:

  • Design variants that extended the system responsibly
  • Identify when new components were required
  • Ensure experimental designs could graduate into the broader library

This prevented successful tests from becoming dead ends.
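
One way to picture "graduation" is as an explicit lifecycle for each pattern. The sketch below models that lifecycle in TypeScript; the statuses, fields, and helper are hypothetical, not the actual AT&T library tooling.

```typescript
// Illustrative sketch only: statuses, fields, and the helper below are hypothetical,
// not AT&T's actual design-system tooling.

type MaturityStatus = "experimental" | "candidate" | "standard" | "deprecated";

interface ComponentRecord {
  name: string;
  status: MaturityStatus;
  originExperiment?: string;   // the test that produced or validated the pattern
  kpiEvidence: string[];       // results cited when the pattern is promoted
}

// A winning variant enters the library as a "candidate" instead of being rebuilt
// from scratch, then becomes "standard" once accessibility and reuse reviews pass.
function promote(record: ComponentRecord, evidence: string[]): ComponentRecord {
  if (record.status !== "candidate") {
    throw new Error(`${record.name} must be a candidate before promotion`);
  }
  return { ...record, status: "standard", kpiEvidence: [...record.kpiEvidence, ...evidence] };
}

const condensedTable: ComponentRecord = {
  name: "ComparisonTable/condensed",
  status: "candidate",
  originExperiment: "plan-compare-density",
  kpiEvidence: [],
};

const promoted = promote(condensedTable, ["Primary KPI improvement in plan-compare-density"]);
console.log(promoted.status); // "standard"
```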

Collaboration & Data Flow

Experimentation relied on tight cross-functional collaboration:

Cross-functional roles

  • PMs: hypothesis framing and success criteria
  • Strategists: KPI alignment and business interpretation
  • Engineering: implementation via Adobe Target and related tooling
  • Analytics: Adobe Analytics as the source of truth

How insights moved

  • Insights shared through PM summaries and analytics reviews
  • Translated back into design decisions and component evolution
  • Ensured intent, data interpretation, and system impact stayed connected (see the sketch below)
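
A minimal sketch of the record that carries an insight back into the system might look like the following. It intentionally avoids Adobe Target and Adobe Analytics API details; the structure, field names, and example values are assumptions for illustration only.

```typescript
// Hypothetical insight record: an assumption for illustration, not an Adobe Target /
// Adobe Analytics API or AT&T's internal reporting format.

type Outcome = "win" | "loss" | "flat" | "inconclusive";

interface ExperimentInsight {
  experiment: string;                          // links back to the experiment spec
  outcome: Outcome;
  kpiSummary: string;                          // plain-language read agreed with strategy
  decision: "ship" | "iterate" | "abandon";
  systemActions: string[];                     // follow-ups owed to the design system
}

// The decision and the system follow-up live in the same record, so a winning test
// cannot quietly end at a PM summary.
const insight: ExperimentInsight = {
  experiment: "plan-compare-density",
  outcome: "win",
  kpiSummary: "Primary KPI improved with no guardrail regressions.",
  decision: "ship",
  systemActions: [
    "Promote ComparisonTable/condensed from experimental to candidate",
    "Update density guidance in the table standards",
  ],
};
```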

Experimentation → Learning → System Evolution


Impact

This work resulted in:

  • Improved test velocity
  • Clearer standards for how variants were created
  • Successful experiments informing new component designs
  • Design decisions more explicitly tied to KPIs
  • Stronger feedback loops between experimentation and the design system

Rather than isolated wins, experimentation became a mechanism for evolving the platform.

What Made This Staff-Level Work

This wasn’t about optimizing individual tests.

It was about:

  • Designing experimentation as a repeatable system
  • Connecting short-term learning to long-term infrastructure
  • Balancing speed with system integrity
  • Ensuring insights compounded over time

This required thinking beyond the interface — into process, standards, and organizational behavior.

Reflection

Tests don’t create value on their own. Value comes from how learning is captured, shared, and reused.

Designing experiments without a path back into the system leads to fragmentation. Designing experiments as inputs to a platform creates leverage.

That focus on longevity, governance, and leverage continues to shape how I approach platform and AI-adjacent work.
