Stop Buying AI Tools. Start Designing AI Experiences.
The AI software market is enormous and getting bigger. Enterprise buyers are choosing from customer service platforms, chatbot builders, LLM wrappers, orchestration layers, knowledge base tools, voice AI, agent frameworks, and analytics dashboards for the AI they just bought. The vendor landscape gets more crowded every quarter. Sales cycles keep getting shorter.
And yet results, at scale, remain underwhelming. Most organizations that have invested heavily in AI tooling are not seeing the returns the demos promised. The dashboards are live. The integrations are running. But the customer experience still feels robotic. Containment rates are flat. Customers are routing around the AI to reach a human, which defeats the entire point.
Here is what ICX has come to believe after working with teams across this landscape: the problem is not that organizations are buying the wrong tools. The problem is that they are leading with tools at all.
The Tool-First Trap (And Why It's So Easy to Fall Into)
Most AI purchasing decisions follow a recognizable arc. A customer service leader sees a compelling demo, hears about a competitor's deployment, or gets a budget approved. The first question on the table becomes: which platform?
That is a reasonable question. It just should not be the first one.
When platform selection leads, everything downstream gets shaped by what the platform can do rather than what customers actually need. Features get enabled because they exist, not because they serve a specific experience goal. The knowledge base gets structured to fit the platform's taxonomy. Response templates get written to match the platform's defaults. Escalation flows get designed around integration constraints, not the customer journey.
By the time the AI launches, it has been through months of technical work and almost no experience design work. The team wonders why customers are not engaging with it the way the demo suggested they would. The answer, almost always, is that nobody asked the right questions before the contract was signed.
This is the tool-first trap. It feels like building. But it is really configuring. And configuration without experience design has a consistent ceiling: functional, but never good. McKinsey's State of AI research consistently finds that the gap between AI leaders and laggards is not a technology gap but an execution and design gap. The tools are largely the same. What differs is what teams do with them.
What "AI Experience Design" Actually Means
Experience design is a phrase that gets used loosely. In the AI context, ICX means something specific: the deliberate work of shaping how a conversation feels from the customer's perspective, not just what information it delivers.
This work happens at several layers. The most visible is language quality: whether the chatbot sounds like a knowledgeable, genuine presence aligned with the brand, or like neutral model output with a logo attached. The post on why most chatbot problems are language problems goes deep on this distinction. The short version: customers form trust judgments within the first two or three exchanges, and those judgments are based entirely on language, not features.
But experience design also happens at the structural level. How long are responses? How does the AI handle multi-turn conversations where intent shifts mid-thread? What happens when the customer's message is ambiguous? How does the AI navigate that ambiguity without making the customer repeat themselves? These are conversation design decisions. They do not emerge automatically from good model selection. They have to be made explicitly, by people who understand both language and customer behavior.
Experience design also happens at the governance layer: who owns the language, who has the authority to change it, what happens when the AI says something wrong, and how quality gets measured over time. Organizations that skip this layer end up with capable AI that nobody manages, drifting gradually away from brand standards and customer expectations without anyone noticing until something breaks publicly. ICX's post on who owns the words your AI says is one of the most practical pieces on this problem.
The Platform Is a Vehicle, Not the Destination
Here is a useful reframe: the AI platform is infrastructure. It is how the experience gets delivered, not what the experience is.
Think about how most organizations approach their website. The CMS is infrastructure. Nobody evaluates CMS options before deciding what the website should say, how it should feel, what user journeys it should support, and what success looks like. Content strategy and experience design come first. The CMS gets selected to support a defined experience, not to create one.
AI should work the same way. But it usually does not. And the cost of that reversal is paid in every conversation where the AI produces something technically correct and experientially wrong.
In practice, leading with experience design means having a conversation design brief before opening a vendor portal. A brief that answers the real questions: who are the customers using this AI, what do they most commonly need, what does success look like in language terms, where are the edges of what the AI should handle, and what does a genuinely helpful version of this interaction actually sound like? Once those questions have real answers, platform selection becomes far more straightforward. You are evaluating against specific requirements instead of comparing abstract feature lists.
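One way to make the brief concrete is to capture it as structured data that can be checked before any vendor evaluation begins. This is a minimal sketch, not a standard: every field name and value below is illustrative, invented for this example.

```python
# Illustrative sketch of a conversation design brief as structured data.
# All field names and values are hypothetical; the point is that these
# answers exist in writing before a vendor portal is ever opened.

design_brief = {
    "audience": "existing customers managing orders and billing",
    "top_intents": ["order status", "billing dispute", "plan change"],
    "success_in_language_terms": (
        "resolves the need in under four exchanges, in brand voice, "
        "without asking the customer to repeat information"
    ),
    "out_of_scope": ["legal advice", "refund exceptions", "account closure"],
    "escalation_rule": "hand off with full context after one failed clarification",
}

def unanswered_questions(brief: dict) -> list[str]:
    """Return the brief fields that are still empty: a simple readiness
    check to run before platform evaluation starts."""
    required = [
        "audience",
        "top_intents",
        "success_in_language_terms",
        "out_of_scope",
        "escalation_rule",
    ]
    return [field for field in required if not brief.get(field)]

# An empty result means the brief is complete enough to evaluate platforms
# against specific requirements rather than abstract feature lists.
print(unanswered_questions(design_brief))
```

A team could extend a sketch like this with whatever fields its own brief requires; the useful property is that "the brief is done" becomes a checkable state rather than a feeling.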
This sequencing shift is one of the clearest patterns in organizations that get AI right. The post on what separates the 20% of AI implementations that succeed covers the full set of decisions that distinguish effective deployments from struggling ones. The platform sequencing issue shows up consistently. Teams that define the experience first, then select the platform, almost always outperform teams that do it in reverse.
Three Signs Your Organization Is in Tool-Buying Mode
The tool-first pattern tends to show up in recognizable ways. Here are three signals that an organization is investing in AI tooling without investing in AI experience design.
Nobody owns the system prompt. If the prompt that governs your AI's behavior was written once during setup and has not been reviewed since, there is no experience design practice in place. The system prompt is the most direct expression of how your AI should think, speak, and behave. An unowned system prompt is an unmanaged AI experience. Harvard Business Review's research on AI in customer service consistently identifies ongoing language quality management, not initial tool sophistication, as the primary driver of long-term performance. Without an owner for the language layer, that management cannot happen.
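The ownership check can be made mechanical. The sketch below assumes a team keeps minimal metadata alongside its system prompt; the field names and the 90-day review window are assumptions for illustration, not a standard.

```python
# Hypothetical sketch: the minimum metadata an "owned" system prompt needs,
# and a check that flags it as unmanaged. The 90-day window and all field
# names are illustrative assumptions.

from datetime import date, timedelta

prompt_record = {
    "owner": "conversation-design@example.com",
    "last_reviewed": date(2024, 1, 15),
    "version": 7,
}

def is_unmanaged(record: dict, today: date, max_age_days: int = 90) -> bool:
    """A prompt is unmanaged if nobody owns it, or if it has not been
    reviewed within the agreed window."""
    if not record.get("owner"):
        return True
    last = record.get("last_reviewed")
    return last is None or (today - last) > timedelta(days=max_age_days)
```

A check like this could run in CI or a weekly report; the mechanism matters less than the fact that "written once during setup and never reviewed" becomes a visible failure state.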
Success is measured in containment, not resolution. Containment rate measures how often the AI handled a conversation without escalating. It is a valid metric. It is not sufficient on its own. An AI can contain a conversation and completely fail to meet the customer's actual need. Organizations that measure containment without measuring conversation quality or real customer outcomes are optimizing for the wrong signal. The chatbot looks good on paper while trust quietly erodes in practice.
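The gap between the two metrics is easy to see with a small example. The conversation records below are invented, and the fields ("escalated", "need_met") are assumptions; in practice, whether a need was met would come from follow-up contact rates, surveys, or QA review, not a simple flag.

```python
# Illustrative sketch of containment rate vs. resolution rate over the same
# conversations. All data is made up; "need_met" is a stand-in for whatever
# outcome signal a team actually has (repeat contacts, CSAT, QA sampling).

conversations = [
    {"escalated": False, "need_met": True},
    {"escalated": False, "need_met": False},  # contained, but the customer left unhelped
    {"escalated": False, "need_met": False},  # contained, but the customer left unhelped
    {"escalated": True,  "need_met": True},   # escalated, and the need was met
    {"escalated": False, "need_met": True},
]

containment_rate = sum(not c["escalated"] for c in conversations) / len(conversations)
resolution_rate = sum(c["need_met"] for c in conversations) / len(conversations)

# Containment looks strong while resolution tells a different story.
# prints: containment: 80%, resolution: 60%
print(f"containment: {containment_rate:.0%}, resolution: {resolution_rate:.0%}")
```

An 80% containment rate here hides the fact that two of the four "contained" conversations failed the customer, which is exactly the trust erosion the metric cannot see on its own.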
Platform capability drove scope. If the AI handles certain topics primarily because the platform made them easy to configure rather than because they represent high-value customer needs, the cart is in front of the horse. Scope should be defined by what customers need and what the AI can do well. It should not be defined by what was easiest to set up during implementation.
If any of these patterns are present, the right move is not to buy a better tool. It is to build a design practice. The guide to choosing an AI customer support platform is worth reading alongside this post for teams navigating the vendor selection process. The most important advice in any platform guide is the same: make sure the experience design conversation happens before the vendor conversation.
What Happens When You Flip the Order
Organizations that lead with experience design instead of tool selection tend to see different outcomes, and not just in customer satisfaction scores.
They launch with cleaner integrations. When the experience requirements are defined before implementation begins, there are fewer scope changes, fewer pivots after the demo environment, and fewer "we did not realize the platform could not do that" moments. Technical work is more efficient when it is building toward a defined target rather than exploring what is possible.
They iterate more purposefully. When language quality is a managed discipline with a named owner and a review cadence, improvement is systematic rather than reactive. Teams with a real conversation design practice know exactly what to change when performance slips. They can diagnose the issue and fix it quickly because they have the vocabulary and the authority to do so.
And they build AI that customers actually use. This matters more than anything else. The tools that generate real ROI are not always the most technically sophisticated ones. They are the ones that feel like genuine help. Gartner's research on AI in customer service shows that the single biggest driver of AI adoption by customers is perceived helpfulness, not capability. Customers return to AI experiences that make them feel understood. They route around ones that feel robotic, even if those experiences are technically capable of helping them.
The investment that makes an AI experience feel helpful is almost never a new model or a platform upgrade. It is better conversation design, better language governance, and better measurement of the things that actually predict long-term trust. These are skills and practices. They are also things ICX helps teams build, whether from scratch or as part of an existing deployment that needs a reset.
There is more coming in this series. The next posts go deeper on specific questions: which customer interactions should stay human rather than get automated, and what AI vendors almost never tell you about what implementation actually costs. Bookmark the blog and keep checking back. There is a lot more ground to cover.
If this post resonated with where your team is right now, ICX would love to hear what you are working on. The services page covers how experience design and conversation design work get structured in practice. And the contact page is the fastest path to a real conversation about what your specific situation needs.
AI Transparency Disclosure
This article was created with the assistance of AI tools, including Anthropic's Claude, and reviewed by the ICX team for accuracy, tone, and alignment with current industry reporting. ICX believes in transparent, responsible use of AI in all business practices.
Why this disclosure matters: As an AI consulting firm, ICX holds itself to the same transparency standards it recommends to clients. Disclosing AI involvement in content creation builds trust, aligns with Anthropic's responsible AI guidelines, and reflects the belief that honesty about AI usage strengthens rather than undermines credibility.