Just another day at work. You prompt a GenAI tool to draft a board update: a quick recap of company performance, key metrics and upcoming priorities.
Most of it looks solid. But one chart pulls last year’s data instead of this year’s. A bullet point misstates your growth rate. And the closing paragraph promises a product launch that’s actually been delayed.
Is it good enough? Maybe as a first draft. But if that version makes its way to the board, it’s not just sloppy. It’s a credibility killer.
Generative and agentic AI tools are proving to be incredible time-savers and strategic assets, helping businesses streamline workflows, surface trends and insights faster, and scale content creation. The upsides are tangible and obvious, so now the content floodgates are wide open.
But not enough people are pausing to ask: “Does this need to exist?” Or: “Is it actually right?”
And too often, AI output gets accepted at face value.
So across business functions, there’s been a surge in content that looks good on the surface: slick, well-formatted and seemingly on-point. But look closer, and the veneer starts to crack.
Depending on the prompt and training data, AI-generated content is often about 90% right… but the other 10% isn’t just typos or formatting glitches. As in the board example above, it often includes material inaccuracies: a revenue figure off by a decimal point, a misread of industry nuance, or a hallucinated stat or statement that simply isn’t true.
These aren’t just internal speed bumps. They become external liabilities, especially when public-facing content contains gaffes. (“Does that company even know what they’re talking about?!”)
That’s why when content is 90% right and 10% wrong, it’s effectively 100% useless. That final 10% is where trust unravels, and the damage is done.
Many of us have seen the fallout: summer reading lists published by major news outlets featuring made-up books, hallucinated legal citations submitted in court filings, academic research riddled with fabricated references. The pattern is the same: 90% right (if that), 10% wrong, 100% damaging.
The stakes are especially high in my line of work, partnering with investor relations (IR) professionals, where every detail counts. A single misstatement in an earnings call script or shareholder letter can shake investor confidence, impact valuation or even move markets.
Historically, IR has been slower to adopt AI than other finance functions, but that’s changing fast. There’s enormous potential, in IR and beyond, if we use AI correctly.
We have to get this right because AI is no longer a novelty. It’s as ubiquitous in business as GPS is in everyday life. And like GPS, it’s a powerful tool, but only if it’s taking you in the right direction.
The goal isn’t to let AI shortcut strategy; it’s to help guide it. Here are a few ways to make that happen, so you can move beyond “good-enough” output toward content that’s 100% worth standing behind.
Like any tool, AI is only as useful as the way we use it. If the inputs are off — or worse, no one checks the outputs — then we’re really just speeding off in the wrong direction.
The real opportunity comes when businesses combine AI’s potential with human judgment: training teams to use their tools well, ask smart questions, scrutinize results, and build in checkpoints that catch mistakes before they matter.
Get that right, and AI stops being a content factory that churns out work of dubious quality. Instead, it’s a driver of trust, clarity and 100% valuable work.