The conversation around artificial intelligence has reached a predictable impasse. Users say they don’t trust AI. Companies promise transparency. Regulators threaten intervention. Yet the core issue remains: people cannot trust what they do not understand, and most AI systems still communicate in ways that feel foreign to users.
The trust crisis is less about trust itself and more about translation. When a loan application is rejected, a job candidate is filtered out, or a student’s statement of purpose is flagged for AI plagiarism, the system rarely explains its reasoning in terms humans can process. Users are left guessing, frustrated, and sceptical.
The technology is highly functional, but it does not show its work: it offers no explainability.
This translation gap has economic and social consequences. A 2023 KPMG global study found that 61 per cent of people are wary of trusting AI systems, with only half believing the benefits outweigh the risks. This mistrust slows AI adoption and costs businesses billions in unrealised productivity.
But the problem extends beyond business outcomes. In many sectors, AI systems now shape decisions with significant personal impact. When these systems cannot explain themselves, they become unaccountable gatekeepers.
Education is one clear example. Algorithms assess thousands of data points, from academic performance and financial capacity to location and career goals, and produce recommendations that influence students' futures.
Yet students rarely know why certain options appear or how the system interprets their information. Similar opacity appears across healthcare, hiring, finance, and public services.
The argument that AI is “too complex to explain” misses the point. Complexity is not the barrier; communication is. Other fields translate complex information for non-experts every day. The challenge is not making the underlying systems simpler; it is expressing their logic in ways users can understand.
Technical explainability research continues to advance, offering methods to trace model behaviour. But these methods mean little if interpreting the explanations requires specialist domain knowledge. Addressing the translation problem requires more than exposing internal logic; it requires producing explanations that are comprehensible, relevant, and usable.
Solving the translation gap would enable faster, more confident adoption. People use tools they understand. When users grasp why a system behaves in a certain way, they are more likely to accept and effectively use its recommendations.
Moving forward, developers must ask not only “does this work?” but “can users understand why it works?” Organisations deploying AI should invest in communication design alongside technical optimisation.
Regulators should require explanations aimed at users, not just documentation for auditors. Clear explanations support better decisions, more engagement, and more equitable outcomes.
Translation must become a core feature of AI systems. That means designing tools that communicate in plain language, testing explanations with real users, and withholding deployment of systems that cannot clearly articulate their reasoning. Technology that influences people’s lives must be able to explain itself. Anything less is not a trust issue; it is a translation failure.
Mathilda Oladimeji is a doctoral researcher in Information Systems at Louisiana State University, where she studies AI explainability and user trust.
She previously served as Regional Marketing Manager for Intake Education across Africa, managing digital campaigns for over 100 universities.


