Customer service has become one of the most active frontiers for applied AI, yet most organizations are struggling to keep up with the speed at which the landscape is emerging and evolving. They still treat transformation as a long execution cycle: spend a year or more putting a plan together, then attempt a mass transformation. Think long roadmaps, big re-platforms, and multi-quarter projects that stall before value reaches the customer. Meanwhile, customer expectations evolve monthly, and AI capabilities advance weekly. Transformation remains a plan, not a practice.
The numbers tell the story. In early 2024, 65% of companies were already using generative AI somewhere in their business. The bar keeps rising, but human planning can’t keep up. When the landscape shifts this fast, a 12-month roadmap risks obsolescence before it ships. To deliver real impact, service leaders need an operating model that can learn, deliver, and, most importantly, iterate at the pace of AI itself.
Traditionally, customer service transformations were built for slower technological cycles and predictable updates. Now, features that didn’t exist in Q1 are expected by Q3. Every “multi-year roadmap” turns into a rewrite before it’s finished.
Data backs this up: 55% to 70% of executives expect it will take at least 12 months to overcome adoption barriers like governance, training, trust, and data quality. That lag is where most AI programs lose steam: instead of delivering outcomes, teams spend months re-planning, and if the plan is too rigid they end up restarting it over and over. What leaders need is a framework that turns progress into a rhythm, not a one-time event.
The alternative is practical and repeatable. Instead of a single, fragile overhaul, leaders “red-circle” three meaningful, shippable AI improvements each quarter. Because these are short proof-of-concept deployments that can improve on the fly, they don’t all have to be winners: expect two to work and one to miss. Ship, measure, learn, and move to the next three. After four quarters, you’ll have 8–12 live improvements that represent real change in your enterprise.
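To make the cadence concrete, here is a minimal sketch of the arithmetic behind that expectation. The constant names are illustrative, and the two-of-three hit rate is the rule of thumb above, not a measured benchmark.

```python
# Illustrative sketch of the Red-Circle cadence described above.
# Assumption: each quarter a team "red-circles" three candidate improvements
# and, on average, two of them survive into production.
IMPROVEMENTS_PER_QUARTER = 3   # shippable AI improvements chosen each quarter
EXPECTED_WINS_PER_QUARTER = 2  # expect two to work, one to miss

def live_improvements_after(quarters: int) -> tuple[int, int]:
    """Return (expected, best-case) count of live improvements."""
    return (EXPECTED_WINS_PER_QUARTER * quarters,
            IMPROVEMENTS_PER_QUARTER * quarters)

print(live_improvements_after(4))  # (8, 12) -> the "8-12 live improvements" after a year
```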
This iterative approach isn’t just efficient; it’s safer. Large-scale projects are inherently risky: the longer they run, the more exposed they become to shifting technology, budgets, and priorities. Only 31% of IT projects are deemed completely successful, 50% run over budget or schedule, and nearly one in five never reach completion. Those rates have barely improved in a decade, and at AI pace, large, long-term “transformations” will be downright unreliable.
By contrast, smaller, time-boxed initiatives beat the odds because they limit exposure and keep scope narrow enough to deliver. Each 90-day Red-Circle cycle becomes its own self-contained project: defined, measured, and shipped before the next wave of technology shifts the landscape again.
The Red-Circle mindset doesn’t just change how teams deliver AI; it changes how they decide what to deliver.
Take the modern contact center. Nearly everything can be automated; the smarter question is whether it should be. The most effective teams approach these choices the way they would a product decision: grounded in value and user impact, not just technical capability.
A simple framing helps leaders choose wisely. Map service journeys into three lanes: work to automate end to end (routine, high-volume requests), work where AI assists a human agent with context and suggestions, and work that stays human-led because emotion, regulation, or judgment is in play.
This lane-based model should be revisited quarterly as models improve and customer preferences shift. Poorly aligned automation doesn’t just waste effort; it fragments the experience and erodes customer loyalty over time. The pressure to deliver service that is instant, informed, and human when it matters has never been higher. The future of applied AI is automation and human judgment working together, not competing for control.
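As a rough illustration of the lane framing (the lane names, signals, and thresholds below are assumptions for the sketch, not a standard taxonomy), a team might encode the decision as a simple routing rule:

```python
from enum import Enum

class Lane(Enum):
    AUTOMATE = "full automation"          # routine, high-volume, low-risk work
    ASSIST = "contextual AI assistance"   # AI drafts and suggests, a human decides
    HUMAN_LED = "human-led"               # emotion, regulation, or judgment in play

def choose_lane(is_routine: bool, is_regulated: bool, is_emotive: bool,
                prior_ai_failures: int) -> Lane:
    """Toy decision rule mirroring the three-lane framing; flags and the
    failure threshold are illustrative assumptions."""
    if is_regulated or is_emotive or prior_ai_failures >= 2:
        return Lane.HUMAN_LED
    if is_routine:
        return Lane.AUTOMATE
    return Lane.ASSIST

# Example: a routine order-status request with no red flags goes to full automation.
print(choose_lane(is_routine=True, is_regulated=False, is_emotive=False,
                  prior_ai_failures=0))  # Lane.AUTOMATE
```

The value of a rule like this is not the specific flags but that the decision is explicit, reviewable, and easy to revisit each quarter as models and preferences shift.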
Here’s how to activate the Red-Circle approach in your own operation: pick three shippable improvements for the quarter, time-box each to the 90-day cycle with clear success metrics, ship and measure (expecting two wins and one miss), pair every deployment with ongoing governance checks and a defined human handoff, and reset the list for the next quarter.
This discipline is what separates momentum from motion. Gartner notes that the vast majority (85%) of service leaders plan to explore or pilot customer-facing conversational GenAI in 2025. The ones who win will be those who deliver production-level value in short, repeatable bursts, with strong guardrails in place.
In the era of generative AI, credibility depends on both speed and responsibility. Keep AI credible by combining discipline, ongoing evaluation, and human oversight on a 90-day rhythm.
Governance should be continuous: weekly checks that track bias, drift, and accuracy, the way product teams track quality. And when emotion, regulation, or repeated failure calls for a human, that handoff should be intentional and seamless. Human trust and machine intelligence should feel like two halves of one service experience.
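As a sketch of what a continuous check might look like in practice (the metric names and thresholds are hypothetical, chosen only to show the shape of the guardrail), a team could gate each AI workflow on simple weekly tests:

```python
# Hypothetical weekly governance check for a deployed AI service workflow.
# Metric names and thresholds are illustrative assumptions, not a standard.

WEEKLY_THRESHOLDS = {
    "accuracy": 0.92,        # minimum acceptable resolution accuracy
    "drift_score": 0.15,     # maximum tolerated input/intent drift
    "bias_disparity": 0.05,  # maximum gap in outcomes across customer segments
}

def weekly_governance_check(metrics: dict[str, float]) -> list[str]:
    """Return the list of breached guardrails; an empty list means the workflow passes."""
    breaches = []
    if metrics["accuracy"] < WEEKLY_THRESHOLDS["accuracy"]:
        breaches.append("accuracy below floor")
    if metrics["drift_score"] > WEEKLY_THRESHOLDS["drift_score"]:
        breaches.append("input drift above ceiling")
    if metrics["bias_disparity"] > WEEKLY_THRESHOLDS["bias_disparity"]:
        breaches.append("bias disparity above ceiling")
    return breaches

# Example run: this week's numbers trip the drift guardrail, so the workflow
# is flagged for human review rather than left to degrade silently.
this_week = {"accuracy": 0.94, "drift_score": 0.21, "bias_disparity": 0.03}
print(weekly_governance_check(this_week))  # ['input drift above ceiling']
```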
The combination of real-time process automation, contextual AI assistance, and continuous human oversight is what defines the next generation of service operations. Customers don’t care about your transformation roadmap; they care whether the experience works right now. The leaders defining the next era of AI won’t be those chasing reinvention. They’ll be the ones practicing it: with smaller wins, more often, and with the right guardrails. Transformation, after all, isn’t one big project. It’s a habit of progress.


