Let AI carry the weight of your product’s abstractions, so your users don’t have to.
I still remember feeling awe-struck the first time I used ChatGPT. I knew it was the beginning of a new era, but I struggled to wrap my head around what it meant for design. Only after reading Jakob Nielsen’s piece “AI: First New UI Paradigm in 60 Years” did I begin to grasp the scale of the shift. For sixty years, we’ve been telling computers how to do things, one command at a time. AI inverts this with an ‘intent-based outcome specification’ paradigm: you tell it what you want, and it figures out the how. That isn’t a feature. It’s a different kind of computer.
We all remember how companies scrambled in the months after ChatGPT launched. Every major product added a chatbot or a co-pilot. AI assistants landed in enterprise tools, creative software, cloud platforms. And today, three years in, those assistants have genuinely delivered something: users can ask instead of clicking, get answers instead of digging through documentation, and offload routine tasks they used to do manually. That’s real value.
But here’s the thing: strip out the AI assistant from most of these products, and you can still use them. The product is the same. AI sits on top of it, not inside it. Adrian Levy calls this the test for embedded intelligence — and most products today are failing it.
Is that the right way to leverage the revolutionary force that is AI?

To understand what’s missing, we have to think about how products get learned. Every product has a learning threshold: the minimum a user needs to learn before they can use it. Not use it well. Use it at all. Consider a door. At some point in your life, you learned that twisting the knob unlocks it and pushing or pulling opens it. You do it on autopilot now, but there was a moment when you didn’t know that. Below the threshold, the door is just a wall.
Digital products stack two layers onto this threshold. The first is conceptual: the core ideas the product operates on. The second is interaction: how the interface represents those ideas. Both need to be cleared before the product can be used at all.
For most consumer products, the conceptual threshold is thin: the ideas map onto things you already understand. But enterprise software is different. The builders of those products invented concepts to structure capability and make it tangible. Before AI, this trade-off was unavoidable: the concepts introduced a steep learning threshold, but once a user crossed it, they genuinely helped them use the product.
Take AWS. To store data in the cloud, at the very least, you need to know what an account is, what a service is, what a resource is. Not because the real world works this way, but because AWS invented these concepts to organize its offerings. They’re the toll. You pay it, or you don’t get in.
Add an AI assistant to AWS without changing any of this, and you do see improvement. Ask “how do I set up storage?” and it explains the concepts. Tell it to create an S3 bucket, and it executes on your behalf. The assistant lowers both layers — but only partially. It teaches you AWS’s language; it doesn’t free you from needing to learn it. The conceptual overhead remains yours to carry.
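To make that toll concrete, here is a minimal sketch of the kind of call such an assistant runs on your behalf. It assumes boto3 with credentials already configured; the bucket name and region are illustrative placeholders.

```python
# What "create storage" means once translated into AWS's own language.
# Assumes boto3 is installed and credentials are configured; the bucket
# name and region below are illustrative placeholders.
import boto3

# A client for a *service*: one product-invented concept already.
s3 = boto3.client("s3", region_name="eu-west-1")

# A *bucket* as the resource, scoped to an *account* and a region:
# three more concepts the user has to hold to make sense of this.
s3.create_bucket(
    Bucket="my-app-data",
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)
```

The assistant can run this for you, but to verify what it did, you still have to know what a client, a bucket, and a region are.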
This is the limit of bolt-on AI: it improves efficiency within the existing conceptual frame. It doesn’t change the frame.

Rethinking the experience looks different.
Imagine a user who needs to store data for an app they’re building. In a truly AI-native product, they say exactly that — in their own words — and the AI acts as the translator between their intent and the product’s primitives. It asks clarifying questions to understand the intent more deeply, maps their goal to the right underlying concepts, configures them, and hands the user a path to their storage. The user never needs to learn what a service is or that their storage lives inside one. The AI absorbs that conceptual weight, so they don’t have to.
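To picture where that translation lives, here is a hypothetical sketch. Every name in it is invented for illustration; none of this is a real API. The point is the shape: the user speaks in goals, and the mapping to product primitives happens out of sight.

```python
# Hypothetical sketch of an intent-to-primitives translation layer.
# All names here are invented for illustration, not a real API.
from dataclasses import dataclass


@dataclass
class StoragePlan:
    """The product primitives the AI configures on the user's behalf."""
    service: str        # e.g. object storage vs. key-value store
    resource_name: str  # an internal identifier the user never sees
    access: str         # "private" or "public"


def plan_from_intent(intent: str) -> StoragePlan:
    """Map a plain-language goal to product primitives.

    A real system would use an LLM plus clarifying questions;
    this stub only shows where that mapping would live.
    """
    wants_files = "file" in intent or "image" in intent
    return StoragePlan(
        service="object-storage" if wants_files else "key-value-store",
        resource_name="app-storage-001",
        access="private",
    )


# The user states an outcome, never the product's vocabulary:
plan = plan_from_intent("I need somewhere to keep user-uploaded images")
print(f"Done: your app now has {plan.service} that grows with it.")
```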
This doesn’t eliminate the learning threshold entirely. That same user still needs to understand what cloud storage is and why their app requires it. Domain knowledge can’t be abstracted away, and it shouldn’t be. That’s not the product’s debt to clear.
What AI can minimize is the additional conceptual layer that exists purely because of how the product was built. Concepts like accounts, services, and resources have no identity or meaning outside the product itself. They were invented by the builders of the product to organize their own system. Those are the product’s overhead, not the user’s. AI can carry them.

And it can continue to carry them throughout the lifecycle, not just at the point of setup. Once a user has their storage running, they’ll need to manage it: monitor usage, handle capacity, respond when things break. In a conventional product, that means learning another layer of product-invented vocabulary. The error message reads: “Your S3 bucket has exceeded its storage quota. Review your IAM policy to update access permissions.” The user is back inside AWS’s language, responsible for decoding it mid-crisis.
In an AI-native product, the same moment reads: “Your storage is getting full. Here’s how to increase it.” The conceptual burden doesn’t resurface when things go wrong. The AI keeps translating.
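As a hypothetical sketch of that ongoing translation (the error codes and phrasings below are invented for illustration, not drawn from any real API):

```python
# Hypothetical sketch: translating product-vocabulary errors into
# outcome language. Codes and messages are invented for illustration.
OUTCOME_MESSAGES = {
    "BucketQuotaExceeded": (
        "Your storage is getting full. Want me to increase it?"
    ),
    "AccessDeniedByPolicy": (
        "Your app can't reach its storage right now. I can restore "
        "access if you'd like."
    ),
}


def translate(error_code: str) -> str:
    """Surface what happened and what to do, not the internal concept."""
    return OUTCOME_MESSAGES.get(
        error_code,
        "Something went wrong with your storage. I'm looking into it.",
    )


print(translate("BucketQuotaExceeded"))
# -> Your storage is getting full. Want me to increase it?
```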
One clarification worth making: this isn’t an argument for replacing products with a chat interface. The traditional point-and-click interface affords granular control and is still important for users who need it. The point isn’t to hide the product behind AI. It’s to stop making the product’s internal abstractions the price of admission.
Trust was always important for digital products; here, it becomes non-negotiable. When users are freed from understanding the underlying concepts, they’re also freed from the ability to audit them. Trust is what holds the whole model together. That means building structures that earn and maintain trust: guardrails that keep the AI within the bounds of its competence, rigorous evaluation frameworks, and feedback loops that catch errors over time. And trust must shape how the product behaves: the AI should convey what it did in terms of outcomes rather than abstractions, something like “Your storage is set to grow automatically and stay cost-efficient”. High-stakes decisions should require explicit confirmation. Everything the AI configures should be reversible.
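What might that machinery look like in practice? A hypothetical sketch, with every name invented for illustration:

```python
# Hypothetical sketch of the trust machinery: high-stakes actions need
# an explicit yes, and every change records a way to reverse it.
from typing import Callable


class GuardedExecutor:
    def __init__(self) -> None:
        self._undo_stack: list[Callable[[], None]] = []

    def run(
        self,
        action: Callable[[], None],
        undo: Callable[[], None],
        summary: str,
        high_stakes: bool = False,
    ) -> None:
        # High-stakes decisions require explicit confirmation.
        if high_stakes:
            answer = input(f"{summary} Proceed? [y/N] ")
            if answer.strip().lower() != "y":
                print("Okay, nothing was changed.")
                return
        action()
        self._undo_stack.append(undo)  # everything stays reversible
        print(f"Done: {summary}")

    def undo_last(self) -> None:
        """Roll back the most recent change the AI made."""
        if self._undo_stack:
            self._undo_stack.pop()()


# The summary speaks in outcomes, not in the product's abstractions.
executor = GuardedExecutor()
executor.run(
    action=lambda: None,  # stand-in for the real configuration call
    undo=lambda: None,    # stand-in for the real rollback
    summary="Your storage is set to grow automatically and stay cost-efficient.",
    high_stakes=True,
)
```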
Most AI in products today lives in the interaction layer, executing faster, navigating smarter, answering questions better. That’s valuable, but it’s only half the leverage. The real opportunity is in rethinking which ideas a user actually needs to hold in order to use your product at all.
This means going back through every concept your product operates on and examining it with a fine-toothed comb. Is this concept something a user genuinely needs? Or is it an internal abstraction that your team invented to organize the functionality, something AI could silently handle instead?
AI needs to extend beyond the interaction layer into the conceptual one too. That’s where it can truly reduce the learning threshold, not just accelerate the path through it.

Works that informed this article:
- “AI: First New UI Paradigm in 60 Years” by Jakob Nielsen
- “The state of enterprise AI” by OpenAI
- “Perplexity and NotebookLM don’t use better AI — they use better intelligence flow architecture” by Adrian Levy
- “The ‘Bolt-On’ Fallacy: Why Chatbots Aren’t AI Strategies” by Kevin Bluett
- “The agentic era of UX” by Alex Klein
- “Overcoming the Articulation Barrier in Generative AI Using Hybrid Interfaces” by Tarun Mugunthan
- “Beyond Usability: Designing UX for Trust in the Age of AI” by LINC Interaction Architects
- “Why AI Can’t Be a Feature (and Why Most Products Get This Wrong)” by Brett G
