
Agentic UX Over "Chat": How to Design Multi-Agent Systems People Actually Trust

2025/11/28 03:38
12 min read

When I was first tasked with integrating generative AI into Viamo’s IVR platform, which serves millions of people across emerging markets in Africa and Asia, it didn’t take me long to recognise that we couldn’t just stick a chat interface on it and call it a day, as much as that would have simplified some of our technical and development challenges. We were designing for people who rely on voice interfaces out of necessity. They need information about healthcare, agriculture, and finance, and they have little patience for AI that fails them or gives them wrong information when both their time and their bandwidth are limited.

That project taught me a lesson about designing for AI that I think all designers should learn: designing for agentic AI is not about making it chat-friendly, but about designing intelligent systems that work reliably, transparently, and predictably inside workflows that people already trust. Over seven years of designing products spanning fintech, logistics, and software platforms, I have realised that the most effective way of implementing AI is not to replace human judgement, but to augment it in ways that people can, and ultimately will, trust.

The Fatal Flaw of Chat-First Thinking

There is a dangerous paradigm ingrained in this industry by its obsession with chat interfaces on AI products: everyone is trying to build a “ChatGPT for Y”. Almost no one stops to ask whether, just because we can build this and chat is part of it, chat interaction is actually what the product needs.

Often it isn’t. Chat is perfect for open-ended exploration and creative tasks, where the journey matters as much as the destination. But most business tasks demand accuracy, auditability, and repeatability. When I designed the supplier interface for Waypoint Commodities, a system handling million-dollar fertiliser and chemical trade transactions, users didn’t need a friendly chat interface for exploratory conversations about their transactions. They needed interfaces through which AI could point out errors, identify optimal routes, and highlight compliance concerns without clouding critical transactions with uncertainty or vagueness.

The primary issue with chat-centric AI is that it hides decision-making behind a facade of conversation. Users cannot easily inspect what information was used, what reasoning was applied, and what alternatives were considered. That is acceptable for low-stakes queries, but disastrous for consequential choices. When we designed Waypoint’s shipment monitoring system, which tracked orders all the way through fulfilment, users needed assurance that AI messages about potential delays or market fluctuations were grounded in facts the system had actually retrieved and verified, not hallucinated.

Multi-Agent Systems Require Multi-Modal Interfaces

The paradigm shift in my thinking came when I stopped designing for a single AI model and started designing for environments in which multiple specialised AI agents operate together as a system.

That meant abandoning the one-window chat paradigm entirely. Instead, we built a multi-modal interface in which several interaction methods could operate simultaneously. Quick facts got immediate responses through AI voice output. Troubleshooting involved a guided interaction in which the AI asked preliminary questions before redirecting the user to an expert system. Users searching for information on government facilities got formatted replies that cited their sources. Each interaction method had distinct visual and audio signals that set user expectations accordingly.
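The routing described above can be sketched as a small dispatcher. This is a minimal illustration, not Viamo's implementation: the query types, keyword heuristics, and function names are all hypothetical, and a production system would use a trained classifier rather than keyword matching.

```python
from enum import Enum, auto

class QueryType(Enum):
    QUICK_FACT = auto()       # answered immediately via AI voice output
    TROUBLESHOOTING = auto()  # guided Q&A before handoff to an expert system
    INFO_LOOKUP = auto()      # formatted reply with cited sources

# Hypothetical keyword heuristics, for illustration only.
KEYWORDS = {
    QueryType.TROUBLESHOOTING: ("not working", "error", "problem", "broken"),
    QueryType.INFO_LOOKUP: ("where", "office", "facility", "government"),
}

def route(query: str) -> QueryType:
    """Pick an interaction mode so the UI can set the right expectation."""
    q = query.lower()
    for qtype, words in KEYWORDS.items():
        if any(w in q for w in words):
            return qtype
    return QueryType.QUICK_FACT
```

The point of the dispatcher is not classification accuracy but that each branch maps to a distinct interaction mode with its own visual and audio signals.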

The outcomes validated this strategy: response accuracy improved by more than thirty per cent, and user engagement rose. Far more significantly, abandonment decreased by twenty per cent as users stopped leaving conversations out of frustration at mismatched expectations. Because users understood they were speaking to an AI system with a defined body of knowledge rather than waiting for human expertise, they adjusted their enquiries and their patience accordingly.

Designing for Verification, Not Just Automation

One of the most important principles of agentic UX design that I uphold is that automation without verification is merely technical debt masquerading as AI. Every AI agent in a system should ship with an escape hatch that lets users validate its reasoning and override its decisions when required, not because one lacks faith in the AI’s abilities, but because one respects the fact that users bear final responsibility in regulated environments and high-value transactions.

When I designed the admin dashboard for onboarding new users at Waypoint, we had a typical automation project: AI would process incorporation documents, extract essential information, and automatically populate user profiles, reducing onboarding from several hours to minutes. But we understood that inaccuracies could push a company into non-compliance or, worse, create fraudulent user profiles. The remedy, we realised, was not more accurate AI processing but a verification system in which AI-generated profiles remained pending until a human admin activated them.

In our interface, we indicated the AI’s confidence level for each extracted field:

  • High-confidence fields were shown in black text with a green tick mark;
  • Medium-confidence fields were shown in orange with a neutral symbol;
  • Low-confidence or missing fields were shown in red with a warning symbol.

This gave admins enough context to spot any errors the AI had missed in roughly thirty seconds per profile.

The outcome was clear: onboarding time fell by forty per cent compared with fully manual methods, with greater accuracy than either humans or AI achieved alone. More significantly, the admin personnel trusted the system because they could actually follow its logic. Any error on the AI’s part was easy to spot on the verification page, and that built the trust which enabled us to successfully roll out further AI functionality later on.

Progressive Disclosure of Agent Capabilities

Another subtle but essential area of agentic UX that most designers struggle with is telling users what their agents can and cannot accomplish without overwhelming them with possibilities. This is especially true for systems built on generative AI, whose capabilities range widely but unpredictably across tasks, as we found at FlexiSAF Edusoft, where I developed such systems. Our users, students and parents, needed guidance through often complex admission procedures, and they needed to know which answers the AI could provide and which would require human interaction.

Our implementation surfaced capability hints based on interaction: as users typed, the system showed examples of questions the AI was strong at answering alongside questions better handled by the institution’s staff. A user typing about application deadlines would see an example the AI handles well, such as “When is the deadline for engineering applications?”, next to one that a human should answer, such as “Can I be exempted from payment of application fees?”
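One way to sketch such interaction-driven hints is a topic table consulted as the user types. The topics, example questions, and function name here are illustrative assumptions, not FlexiSAF's actual implementation.

```python
# Hypothetical hint table: topics mapped to questions the AI answers well
# versus questions that should be routed to institution staff.
HINTS = {
    "deadline": {
        "ai": ["When is the deadline for engineering applications?"],
        "human": ["Can I be exempted from payment of application fees?"],
    },
    "fees": {
        "ai": ["How much is the application fee?"],
        "human": ["Can my fee payment be deferred?"],
    },
}

def capability_hints(typed: str) -> dict:
    """Return example questions to display as the user types a query."""
    t = typed.lower()
    for topic, examples in HINTS.items():
        if topic in t:
            return examples
    return {"ai": [], "human": []}  # no hint matched: show nothing
```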

Additionally, we built a feedback cycle through which users could indicate whether an AI response had fully answered their question. This did more than improve the model: it gave users a way to request escalation rather than feel stranded by an AI system. The system would surface relevant resources and, failing that, connect users with human staff. Support tickets decreased without sacrificing satisfaction, because people felt listened to rather than abandoned.

Transparency as a Trust-Building Factor

Trust, of course, is established not by better AI algorithms but by transparent system design that lets a user see what the system knows, why it reached its conclusions, and where its limitations lie. At eHealth Africa, our project involving logistics and supply-chain data for the medical sector, this was non-negotiable: if AI agents predicted the timing of vaccine shipments or suggested optimal delivery routes, those recommendations had to be explainable, because human decision-makers were deciding whether rural clinics received life-saving commodities on time.

To address this, we built what I call “reasoning panels”, displayed alongside each AI suggestion. They did not expose the model’s internal computations, only the factors behind its recommendation: road conditions, previous delivery times on the route, weather, and available transport capacity. Reasoning panels let field operatives quickly check whether the AI’s advice was outdated or had missed an essential, more recent fact, such as a bridge closure, and they turned the agents into transparent, indispensable advisers rather than opaque black boxes.
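A reasoning panel of this kind is essentially a recommendation plus the labelled factors behind it. The sketch below is a minimal, hypothetical data shape, assuming a simple text rendering; field names and the class itself are illustrative, not the eHealth Africa schema.

```python
from dataclasses import dataclass, field

@dataclass
class ReasoningPanel:
    """Structured justification shown beside an AI route recommendation."""
    recommendation: str
    factors: list = field(default_factory=list)  # (label, value) pairs
    data_as_of: str = ""                         # freshness, for staleness checks

    def render(self) -> str:
        lines = [f"Recommended: {self.recommendation}"]
        lines += [f"  - {label}: {value}" for label, value in self.factors]
        if self.data_as_of:
            lines.append(f"  (based on data as of {self.data_as_of})")
        return "\n".join(lines)
```

Surfacing `data_as_of` is what lets a field operative notice that the advice predates, say, a bridge closure.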

Transparency mattered in failure as well as success. We built helpful failure states that described why the AI could not offer a recommendation, instead of falling back on a generic error message. If, for instance, it could not compute an optimal route because it lacked connectivity information, that was communicated explicitly, along with what the user could do while no route recommendation was available.
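A helpful failure state pairs a specific reason with a concrete next step. The catalogue below is a hypothetical sketch of that pattern; the failure codes and messages are invented for illustration.

```python
# Illustrative failure catalogue: each known failure mode maps to a reason
# the user can understand plus a next step, instead of a generic error.
FAILURE_STATES = {
    "no_connectivity_data": (
        "We could not compute an optimal route because connectivity data "
        "for this region is unavailable.",
        "Use the last known good route, or retry when a data link is restored.",
    ),
    "stale_inputs": (
        "Road-condition data is more than 48 hours old.",
        "Confirm conditions with the local field officer before dispatch.",
    ),
}

def explain_failure(code: str) -> str:
    """Render a failure message with a reason and an actionable next step."""
    reason, next_step = FAILURE_STATES.get(
        code, ("The recommendation could not be generated.", "Contact support.")
    )
    return f"{reason}\nWhat you can do: {next_step}"
```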

Designing Handoffs Between Agents and Humans

Perhaps the most underdeveloped theme in agentic UX is the handover: exactly when and how an AI agent should pass control of a system or an interaction to a human, whether that human is a colleague or the user themselves. This is where most trust is lost in multi-agent systems. One of the first projects in which I dealt with this explicitly was Bridge Call Block for Viamo, a system that transferred users from IVR interactions to human customer service reps.

Our context-transfer protocol ensured that after every AI interaction, a structured summary appeared on the operator’s screen before they greeted the user: what the user had asked, what the AI had intended to say, and why the AI had escalated the call. Users never had to repeat themselves, and operators had the full interaction context. This small detail of interaction design vastly improved average handling time and user satisfaction, because people felt respected and that their time had not been wasted.
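The three-part summary described above maps naturally onto a small record handed to the operator console. This is a hedged sketch: the class, field names, and rendering are illustrative assumptions, not the Bridge Call Block implementation.

```python
from dataclasses import dataclass

@dataclass
class HandoffSummary:
    """Context passed to a human operator when the AI escalates a call."""
    user_request: str       # what the user asked
    ai_response: str        # what the AI said or intended to say
    escalation_reason: str  # why the AI handed the call over

    def operator_card(self) -> str:
        """Text shown on the operator's screen before they greet the caller."""
        return (
            f"Caller asked: {self.user_request}\n"
            f"AI attempted: {self.ai_response}\n"
            f"Escalated because: {self.escalation_reason}"
        )
```

Because all three fields are mandatory, an agent cannot escalate a call without stating why, which is what keeps the handoff accountable.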

The handoff in the other direction, from human back to AI agent, had to be considered just as carefully. When operators referred users back to the automated system, the interface helped them set accurate expectations about what the AI could handle autonomously for particular tasks, so users returned to automation prepared rather than frustrated.

Principles of Pragmatic Design of Agentic UX

As a practitioner designing AI-enabled systems for many years, today I have formulated some pragmatic guidelines that help me design agentic UX effectively:

Firstly, design for the workflow, not for the technology. Users don’t care whether they’re being helped by AI, rules, or human intelligence; they care about whether they can accomplish their tasks effectively and conveniently. Begin by working backwards from the target outcome, identify where AI-enabled agents add value and where they merely add complexity, and proceed accordingly.

Secondly, define meaningful boundaries between AI-enabled agents. Users need to know when they are leaving one realm of intelligence and entering another, whether retrieval, model intelligence, or human intelligence. Establish consistent visual and interaction conventions so they never have to wonder what kind of answer they’re going to get, or when.

Thirdly, build verification into your workflow design and respect user expertise. AI systems should hasten decision-making by surfacing pertinent information and suggesting courses of action, but the decisions themselves should be made by human users who hold context unavailable to the AI. Design decision-verification flows into the user interface to make this possible.

These projects secured funding, measurably boosted engagement, and served thousands of users, but not because we possessed, or attempted to create, the most sophisticated AI systems. They succeeded as examples of agentic UX because our interfaces let users understand what was happening on their side of the AI system, and that understanding earned enough trust for them to take on increasingly complex tasks over time.
