
Your Product Just Got a New User. It Is Not Human.
Something shifted in product design during early 2026. The S&P 500 Software Index dropped 13% in five trading sessions. Salesforce and Adobe are down more than 25% since the start of the year. Forrester declared "SaaS as we know it is dead."
The catalyst was not a recession. It was the realization that AI agents are becoming the primary operators of software products. And the interfaces we spent decades perfecting for human hands, human eyes, and human attention spans are not built for this new reality.
This is the origin of AX Design.
The Term "AX" and Its Two Meanings
Matt Biilmann, CEO of Netlify, coined the term Agent Experience (AX) in early 2025. His definition: "the holistic experience AI agents will have as the user of a product or platform." He drew a deliberate parallel to Don Norman's coinage of "User Experience" in the 1990s.
John Maeda amplified the concept in his 2025 Design in Tech Report, calling the shift from UX to AX "perhaps the most profound shift I've observed in the eleven years of publishing this report." At SXSW 2025, he described AX as "moving from crafting interfaces to orchestrating outcomes with zero visual affordances."
But here is the nuance that most articles miss: the term AX is being used for two distinct conversations.
Two Definitions of AX
Same acronym, different audiences, both essential:
- Developer AX: making products usable BY agents. "Can an AI agent operate my product?"
- Human AX: designing experiences WITH agents. "What does the human experience feel like?"
Developer AX: Making Products Usable BY Agents
Biilmann's original framing is infrastructure-first. His four pillars of AX focus on Access (can the agent authenticate?), Context (does the LLM understand your product?), Tools (are capabilities machine-readable?), and Orchestration (can agents be triggered and passed context?).
This is critical work. Without it, agents cannot operate your product at all. It is the reason standards like MCP, llms.txt, and A2A exist.
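As a concrete instance of the Context pillar, an llms.txt file gives agents a machine-readable map of your documentation. A minimal sketch follows the proposed llms.txt convention (an H1 title, a blockquote summary, then linked sections); the product name and URLs here are hypothetical:

```text
# ExampleApp

> ExampleApp is a scheduling product. This file points AI agents at the
> documentation they need to operate it.

## Docs

- [API reference](https://example.com/docs/api.md): endpoints and authentication
- [Quickstart](https://example.com/docs/quickstart.md): first agent-driven workflow
```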
Human AX: Designing Experiences WITH Agents
The second conversation is about the human side. When an AI agent is doing most of the work inside your product, what does the human experience feel like? How do users maintain trust, set boundaries, and recover from mistakes?
This is the conversation where Salesforce frames the designer's evolution from "interface architect" to "experience orchestrator," where Smashing Magazine coins "Agentic Sludge" to describe the dark pattern of removing friction until users accidentally authorize things they should not, and where Microsoft Design publishes principles for transparency, control, and consistency in agent-powered interfaces.
A complete AX strategy needs both threads. But this series focuses on the human side, because that is where the design discipline is being reinvented, and where the fewest established patterns exist.
Why AX Matters Now: The Data
The urgency is not theoretical. Three data points make the case.
The Trust Gap
Figma surveyed 2,500 designers across seven countries for their 2025 AI Report. The findings reveal a fundamental disconnect:
- 78% of designers say AI makes them more efficient
- Only 47% say AI makes them better at their jobs
- Only 32% trust AI output
That gap between efficiency and trust is the design problem AX exists to solve. The technology works. The human experience of using it does not.
The Maturity Gap
Deloitte's State of AI 2026 survey found that most companies are nowhere near production-ready for agentic systems:
AX Maturity: Where Companies Are Today
Based on Deloitte's State of AI 2026 enterprise survey:
- Level 1: No AX design considerations. AI bolted onto existing UX.
- Level 2: Aware of agentic patterns. Exploring intent-first interfaces.
- Level 3: Some trust patterns implemented. Autonomy controls in production.
- Level 4: Full AX framework. Agents in production with governance models.
When 77% of companies have no agents in production, the design patterns you establish now will define what "normal" looks like for the next decade. This is the same window that existed for mobile UX in 2008 and responsive design in 2012.
The Research Gap
A 2025 CSCW study using the TOAST transparency scale found that higher transparency "significantly improved trust, satisfaction, and willingness to use AI agents" across three transparency levels. Meanwhile, a systematic review in Frontiers in Psychology found that the uncanny valley extends into text-based AI: agents that behave "almost but not quite" like humans score lowest on trust metrics.
The implication for designers: partial humanization is worse than no humanization. Either make the agent clearly an agent, or invest fully in relationship design. The middle ground destroys trust.
The AX Design Framework
After building agentic systems across Vector (AI lead qualification), Hive (multi-agent orchestration), and client products, we have converged on a three-layer framework that every agentic experience must address.
- Capabilities (what the agent can do): what it sees, how it thinks, what it remembers, what it does.
- Trust Patterns (how humans stay in control): see before it acts, set boundaries, know its certainty, undo any action.
- Protocols (how the system connects): agent to frontend, agent to agent, agent to tools, agent to UI catalog.
Layer 1: Capabilities (What the Agent Can Do)
Every AI agent has four capabilities that need dedicated design surfaces:
Perception is what the agent sees. Users need to know what data, documents, and signals the agent is reading. Design implication: make inputs visible, not invisible. Show document context indicators, active tool badges, and user state awareness panels.
Reasoning is how the agent thinks. Users need access to the agent's hypotheses and plans at the right level of detail. Design implication: provide thinking panels, step-by-step plan previews, and confidence indicators.
Memory is what the agent remembers across sessions. Users need to see, edit, and delete what the agent has learned about them. Design implication: build preference surfaces, past decision audit logs, and editable context panels.
Agency is what the agent does. Users need clear distinction between proposed, scheduled, in-progress, and completed actions. Design implication: show action status timelines, approval workflow cards, and undo controls.
In traditional UX, these four capabilities are invisible. The system just works. In AX, each one gets a dedicated design surface that users can inspect and control, because the cost of invisible agent behavior is user trust.
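The four capabilities can be modeled as a single state shape that the UI renders into its inspection surfaces. This is a hedged sketch, not a product API; all names are illustrative:

```typescript
// Illustrative state shape: each field maps to a dedicated,
// user-inspectable design surface for one agent capability.
interface AgentState {
  perception: {                 // what the agent sees
    documents: string[];        // document context indicators
    activeTools: string[];      // active tool badges
  };
  reasoning: {                  // how the agent thinks
    plan: string[];             // step-by-step plan preview
    confidence: number;         // 0..1, drives confidence indicators
  };
  memory: {                     // what the agent remembers
    preferences: Record<string, string>; // editable context panel
  };
  agency: {                     // what the agent does
    actions: {
      label: string;
      status: "proposed" | "scheduled" | "in_progress" | "completed";
    }[];
  };
}

// A simple audit check: the experience has a transparency gap when the
// agent is reading nothing visible or planning nothing visible.
function hasTransparencyGap(state: AgentState): boolean {
  const visibleInputs =
    state.perception.documents.length + state.perception.activeTools.length;
  return visibleInputs === 0 || state.reasoning.plan.length === 0;
}
```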
Layer 2: Trust Patterns (How Humans Stay in Control)
Trust is the central design challenge of AX. We have identified six patterns, organized across the interaction lifecycle:
Pre-Action (Establishing Intent):
- Intent Preview: The agent states its plan before executing. Users can approve, modify, or reject. This is the most important AX pattern because it prevents the "what just happened?" moment that destroys trust.
- Autonomy Dial: A spectrum from "full manual" to "full autonomy" that users control. Different tasks warrant different autonomy levels, and the setting should be adjustable per context.
In-Action (Providing Context):
- Explainable Rationale: Human-readable explanations of why the agent made a specific decision. Not technical logs, but plain language reasoning.
- Confidence Signal: Visual indicators of the agent's certainty. When an agent acts with 95% confidence, users need less oversight than when it acts with 60%.
Post-Action (Safety and Recovery):
- Action Audit: A chronological log of everything the agent did, with undo windows for each action. Users must be able to reverse any agent action.
- Escalation Pathway: Clear handoff from agent to human when the agent hits the limits of its capability. The agent must know when to stop and ask for help.
These patterns are not optional additions. They are the structural requirements that make agentic interfaces trustworthy enough for production use.
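The autonomy dial and intent preview can be combined into one gating rule: the dial sets a default, and risk signals override it. A minimal sketch under assumed names and thresholds (the 0.8 confidence cutoff is illustrative, not a recommendation):

```typescript
// Autonomy spectrum from "full manual" to "full autonomy".
type AutonomyLevel = "manual" | "suggest_and_wait" | "act_and_report" | "full_autonomy";

interface PlannedAction {
  description: string;
  reversible: boolean;  // irreversible actions always escalate
  confidence: number;   // agent's self-reported certainty, 0..1
}

// Decide whether a planned action must pass through the Intent Preview
// approval step before executing.
function needsApproval(level: AutonomyLevel, action: PlannedAction): boolean {
  // Low-autonomy settings: everything waits for the human.
  if (level === "manual" || level === "suggest_and_wait") return true;
  // Even at higher autonomy, risky actions escalate back to the human.
  if (!action.reversible) return true;
  if (action.confidence < 0.8) return true;
  return false;
}
```

In practice the dial would be set per context (per task type, per workspace), so the same action can auto-execute in one setting and require approval in another.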
Layer 3: Protocols (How the System Connects)
The protocol layer is the infrastructure that makes AX possible. Four protocols are emerging as the standard stack:
AG-UI (CopilotKit) handles real-time agent-to-frontend communication. It defines approximately 16 event types over HTTP/SSE, enabling agents to stream their state, actions, and UI updates to the frontend in real time.
A2UI (Google) provides a declarative format for agents to request UI rendering from a trusted component catalog. Instead of agents generating arbitrary HTML, they describe what they need and the frontend renders it safely.
A2A (Google) enables agent-to-agent communication. When multiple agents work together on a task, they need a standard way to coordinate, hand off context, and report status.
MCP (Anthropic) connects agents to tools and data sources. It provides a standard interface for agents to access databases, APIs, file systems, and other capabilities.
For designers, the protocol layer matters because it determines what is architecturally possible in the trust layer. You cannot build an intent preview pattern if the agent has no way to stream its planned actions to the frontend. You cannot build an action audit if agent actions are not logged in a standard format.
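To make the dependency concrete, here is a sketch of a frontend reducer over an AG-UI-style event stream. The event names below are illustrative stand-ins, not AG-UI's actual event types; the point is that streamed agent state is what powers thinking indicators and intent previews:

```typescript
// Stand-in event types for an agent-to-frontend stream (illustrative,
// not the AG-UI spec's real event vocabulary).
type AgentEvent =
  | { type: "run_started"; runId: string }
  | { type: "step"; description: string }            // drives the "thinking" indicator
  | { type: "action_proposed"; description: string } // feeds the intent preview
  | { type: "run_finished"; runId: string };

interface UiState {
  running: boolean;
  currentStep: string;
  pendingApprovals: string[];
}

// Pure reducer: each streamed event updates the trust-layer UI state.
function reduce(state: UiState, event: AgentEvent): UiState {
  switch (event.type) {
    case "run_started":
      return { ...state, running: true };
    case "step":
      return { ...state, currentStep: event.description };
    case "action_proposed":
      return { ...state, pendingApprovals: [...state.pendingApprovals, event.description] };
    case "run_finished":
      return { ...state, running: false, currentStep: "" };
  }
}
```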
What Actually Changes from UX to AX
The shift from UX to AX is not a rebrand. It changes fundamental assumptions about who is using the interface and how.
| Aspect | Traditional UX | AX Design |
|---|---|---|
| Core assumption | Human directly operates software | Human and agent share control |
| Primary user | Human operates the interface | Human and agent share the interface |
| Primary input | Clicks, taps, typed text | Natural language intent + constraints |
| UI generation | Designer creates all screens | Agent can request or generate UI elements |
| Design artifact | Wireframes, user flows, prototypes | Trust policies, autonomy maps, escalation paths |
| Onboarding | Teach the human to use the tool | Teach the human to trust the agent |
| Feedback loop | Error states and success messages | Ongoing confidence and transparency signals |
| Error handling | Error messages with retry | Agent explains what went wrong and proposes a fix |
| Personalization | User settings and preferences | Agent learns and adapts over sessions |
| Success metric | Task completion time | Appropriate delegation rate |
| Designer role | Interface architect | Experience orchestrator |
| Session model | Start, use, close | Ongoing relationship that evolves over weeks |
The most important shift is the last one. Traditional UX optimizes for individual sessions. AX optimizes for ongoing relationships. The agent that helps you today remembers what it learned and becomes more useful tomorrow. Designing for that arc, the progression from cautious stranger to trusted coworker, is the new frontier.
Who Is Building AX Today
The landscape is moving fast. Here is where the major players stand as of March 2026:
Platform vendors are leading the pattern definition. Salesforce's Agentforce introduced Intent-First Architecture for mapping user goals to agent capabilities across platforms. Microsoft Design published official AX principles covering transparency, control, and consistency. These are product-specific implementations, but the patterns they establish influence the broader industry.
Publications are documenting the emerging discipline. Smashing Magazine published the most practitioner-focused piece with their Practical UX Patterns for Agentic AI, covering consent, control, and accountability. UX Magazine has published multiple articles on agentic UX patterns. The World Economic Forum addressed trust at the governance level.
Research institutions are cautious. Nielsen Norman Group has flagged AX as an open research question but has not yet published a definitive framework. IDEO is notably absent from the conversation, treating AI as a design tool enhancement rather than a paradigm shift.
Design agencies are the gap. No independent agency has published a named, referenceable AX methodology with its own terminology and framework. The platform vendors write for their own products. The publications write editorial coverage. The research institutions move deliberately. The space between, where a practitioner methodology with real production experience lives, is open.
That is the space this series occupies.
Getting Started: The AX Readiness Audit
You do not need to redesign your entire product to start with AX. Start with three questions:
1. Transparency: Can Users See What the Agent Is Doing?
Audit every point where an AI agent takes action in your product. For each action, ask: can the user see what the agent is doing, why it made that choice, and what data it used? If the answer is no, you have a transparency gap.
Quick win: Add a "thinking" indicator that shows the agent's current step in real time. Even a simple "Searching your documents..." or "Comparing three options..." builds trust.
2. Autonomy: Can Users Control What the Agent Does?
Map the autonomy spectrum for each agent capability. Which actions should require explicit approval? Which can happen automatically within boundaries? Which should the user set and forget?
Quick win: Add a simple toggle that lets users choose between "suggest and wait" and "act and report" for the agent's most common action. This is the autonomy dial at its simplest.
3. Safety: Can Users Recover from Agent Mistakes?
Identify every irreversible action the agent can take. For each one, ask: can the user undo it? Is there a time window? Is there a clear escalation path to a human?
Quick win: Add an action log that shows the last 10 agent actions with a one-click undo for each. This single feature can transform user confidence.
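This quick win can be sketched as a small audit structure: keep the last N actions, each undoable until its window expires. This is an assumed design, not a product API; the 10-entry cap and window length are illustrative:

```typescript
interface AuditEntry {
  id: number;
  label: string;        // human-readable description of what the agent did
  undoDeadline: number; // timestamp (ms) after which undo is no longer offered
  undone: boolean;
}

// Action audit log with per-action undo windows.
class ActionAudit {
  private entries: AuditEntry[] = [];

  constructor(
    private maxEntries = 10,            // show the last 10 agent actions
    private undoWindowMs = 5 * 60_000,  // illustrative 5-minute undo window
  ) {}

  record(id: number, label: string, now: number): void {
    this.entries.unshift({ id, label, undoDeadline: now + this.undoWindowMs, undone: false });
    this.entries = this.entries.slice(0, this.maxEntries);
  }

  // Returns true when the undo is accepted; the caller performs the
  // actual reversal of the agent's action.
  undo(id: number, now: number): boolean {
    const entry = this.entries.find((e) => e.id === id);
    if (!entry || entry.undone || now > entry.undoDeadline) return false;
    entry.undone = true;
    return true;
  }

  visible(): AuditEntry[] {
    return this.entries;
  }
}
```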
What This Series Covers
This is Part 1 of The AX Design Playbook, a five-part series that takes you from understanding AX to implementing it in production. Each part builds on the last, moving from conceptual framework to practical implementation patterns you can apply to your own products.
The Window Is Open
AX Design is not a future discipline. It is happening now, defined by the products shipping today and the patterns they establish. Nielsen Norman Group has not published their definitive framework yet. IDEO has not entered the conversation. Most companies are still at Level 1 or Level 2 maturity.
The organizations and agencies that define AX patterns in this window will set the standard for how humans and AI agents work together for the next decade. Just as responsive design became the default after 2012, and mobile-first became the default after 2015, AX patterns will become the default expectation for any product with AI capabilities.
The question is not whether your product needs AX Design. It is whether you will define the patterns or follow someone else's.
Ready to design agentic experiences that users actually trust?
- Full-Stack AI Services - From audit to production implementation
- Read Part 4 of our Multi-Agent AI series - Deep dive into the six patterns and four protocols
- Contact Us - Start your AX readiness audit
