
AI becomes transformative not when machines think faster, but when human imagination and intelligent systems push each other forward. Models can generate output, but true impact happens only when creative ideas are paired with robust, scalable systems that span workflows, data, and real-time decision-making.
At TechSparks 2025, the panel ‘Making AI Real: Where Creativity Meets Intelligence and Scale’ unpacked this powerful intersection. Moderated by Sandeep Alur, CTO of Microsoft Innovation Hub, Microsoft India, the session brought together Mathangi Sri Ramachandran, Head of YuVerse, Yubi Group; Rahul Regulapati, Founder, Galleri5—acquired by Collective Artists; Samanyou Garg, Founder, Writesonic; and Vishal Virani, Co-founder and CEO, Rocket. Alur co-moderated alongside Copilot, Microsoft’s AI assistant, which posed real-time questions, offering a live demonstration of human–AI collaboration in action.
The conversation opened with a question on enterprise readiness: how can teams using AI-driven development tools balance creativity with the demands of security, scalability, and reliability?
Vishal Virani broke it down through the lens of what vibe coding can—and cannot—deliver today. He said current platforms still don’t generate production-grade applications independently. What they can do is dramatically accelerate early development. Rocket, for instance, allows teams to assemble front-end flows, integrate private APIs using Postman collections, and align builds with internal design systems—all within a compact setup period. Its real strength lies in customization: within 15–20 days, Rocket can configure its entire environment to the needs of an enterprise client.
He added that stronger models will expand what these tools can do. “By the time GPT-5 or Sonnet 5 or Sonnet 6 comes, we’ll be able to generate pure, secure code,” he said. During the exchange, Alur mentioned that “vibe coding” was recently declared Collins Dictionary’s Word of the Year.
Virani shared an example from a property-tech team where a product manager entered a problem statement and generated a set of options, including versions tailored for Gen Z users and a 60-plus audience. The manager told him the platform shifted the effort from writing PRDs to exploring possibilities.
He also referred to Devin, a tool dismissed when it launched during the GPT-3 era but built for what later-generation models would enable. When those models arrived, its adoption in the US increased rapidly.
The conversation then turned to how AI is reshaping product discovery. Samanyou Garg noted that users are increasingly relying on AI assistants instead of traditional search engines; ChatGPT alone has crossed 700 million weekly active users. A recent Forrester study shows that 89% of B2B buying decisions now involve AI-mediated conversations, making visibility within these systems indispensable.
Yet there is no “search console” for AI today. Companies have little insight into which prompts surface their products or how their competitors appear. Writesonic aims to solve this by tracking brand visibility inside AI assistants and identifying where optimization is needed.
Garg highlighted that AI models give three to six times more weight to third-party sources like Reddit and Wikipedia than to brand-owned content, making external sentiment and reviews crucial inputs. He also stressed the importance of continuously updating brand-owned content, since outdated information can still be pulled up by models.
Garg framed this shift as the rise of generative engine optimization (GEO), optimizing not just for search engines, but for how AI models interpret, rank, and present a brand.
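Writesonic’s tooling is proprietary, but the core mechanic of GEO visibility tracking can be illustrated with a minimal sketch. The function below is purely hypothetical: it assumes assistant answers have already been collected for a set of prompts, and simply reports which prompts surface which brands.

```python
from collections import defaultdict

def visibility_report(answers, brands):
    """Map each tracked brand to the prompts whose answers mention it.

    answers: dict of prompt -> assistant answer text (collected separately)
    brands:  list of brand names to track
    """
    hits = defaultdict(list)
    for prompt, text in answers.items():
        lowered = text.lower()
        for brand in brands:
            if brand.lower() in lowered:
                hits[brand].append(prompt)
    # Return an entry for every brand, even those never mentioned
    return {brand: hits.get(brand, []) for brand in brands}

# Illustrative sample data, not real assistant output
answers = {
    "best AI writing tools": "Popular options include Writesonic and Jasper.",
    "tools for SEO content": "Jasper is often recommended for long-form SEO.",
}
report = visibility_report(answers, ["Writesonic", "Jasper"])
```

A production system would, of course, sample many prompts per category, run them repeatedly against multiple assistants, and track sentiment as well as raw mentions; this sketch only shows the shape of the measurement.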
The discussion then moved from discovery to creativity. Rahul Regulapati spoke about Galleri5’s work on the AI-generated Mahabharat series, now among the top shows on Star Plus. While Galleri5 built the AI infrastructure powering the production, a full creative team, including a director, scriptwriter, stunt and choreography units, and character artists, handled the filmmaking. Between 50 and 100 people worked across creative and technical roles.
Regulapati described AI as infrastructure that removes production constraints. Teams can now shoot basic movements in smaller studios and use AI pipelines to scale them into expansive, cinematic sequences that would previously require massive budgets or complex physical setups. Production timelines that once stretched to two years can now shrink to two months, and costs that previously hit Rs 100 crore can be cut drastically. Throughout, creative control stays intact—the director of photography still dictates lenses and visual style, and the output must match that vision.
He added that audiences tune into Mahabharat for its storytelling, not its technology. Looking ahead, he expects most content to become partially or fully AI-enabled within the next 12 to 24 months, with creative teams firmly steering the process.
The panel then turned to the question of scale, with Mathangi Sri Ramachandran outlining how YuVerse operates AI systems in high-volume, high-stakes environments. The company handles nearly 30 million calls a month through conversational bots, alongside large-scale document processing and personalized video generation. As she put it, “anything we try today is scale tomorrow.”
One of YuVerse’s flagship products, YuVin, creates millions of personalized videos, each tailored to an individual user rather than relying on a single mass-produced asset. The team is now experimenting with interactive video-to-video formats that could further change how brands communicate.
Ramachandran highlighted a persistent gap between machine performance and user expectations. Traditional call centres typically operate at 60–70% accuracy, while AI bots achieve 98–99%, yet clearing a proof of concept still demands 50 to 100 iterations because tolerance for machine errors remains extremely low. India’s preference for voice, even on WhatsApp, makes conversational AI especially compelling, but these expectations continue to slow adoption.
Tracing the evolution of the field, she described how conversational AI has shifted from linguist-led systems to ones built by computer scientists, then to RNNs and LSTMs, and now to LLMs. She noted that India has always been “an extremely conversational market”, but widespread adoption has only taken off recently with the arrival of more capable language models. Still, she stressed, tolerance for machine error must increase for deployment to accelerate.
Moderator Sandeep Alur offered a reframing: instead of focusing on error rates alone, companies should consider the cost of error. If that cost is low, he argued, teams should move forward rather than chasing perfection that slows scale.
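Alur’s reframing is easy to make concrete. The snippet below is a back-of-the-envelope sketch with entirely illustrative numbers (volume, error rate, and per-error cost are assumptions, not figures from the panel): the go/no-go signal is expected error cost, not error rate alone.

```python
def expected_error_cost(volume, error_rate, cost_per_error):
    """Expected cost of machine errors over a given volume of interactions."""
    return volume * error_rate * cost_per_error

# Illustrative only: 30M monthly calls, 1% residual error rate,
# Rs 2 average cost per failed interaction
monthly_cost = expected_error_cost(30_000_000, 0.01, 2)
# If this is small relative to the value the automation creates,
# Alur's argument is to ship and scale rather than chase perfection.
```

The same error rate can justify opposite decisions depending on `cost_per_error`, which is the point of the reframing.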
When the conversation moved to the next wave of low-code and no-code tooling, Virani reiterated that vibe-coding platforms are best used for conceptualization, not for shipping production-grade code. Returning to the property-tech example, he noted that the manager had used Rocket to explore ideas competitors hadn’t considered, and that the exercise of testing versions aimed at Gen Z versus users over 60, work that typically takes a month, was completed in a single day.
According to Virani, the bigger barrier to adoption isn’t capability but mindset. Enterprises often struggle because they try to force existing workflows into new tools. “If you try to fit your workflow inside any tool, you will not be able to adopt AI anytime,” he said. The US market, he added, has taken the opposite approach: evaluating tools first, then adapting workflows around them. Security and compliance concerns also shift depending on whether teams are exploring ideas or deploying at scale.
On product discovery, Garg returned to the weighting of third-party sources: because Reddit, Wikipedia, and independent reviews carry several times the weight of brand-owned content, outdated or negative information anywhere online can surface in AI-generated answers. For brands hoping to appear in AI recommendations, reputation management must now span review platforms, media, and social conversations.
As the session wrapped up, each panelist offered a concise takeaway on what it means to “make AI real”. Ramachandran emphasized focusing on outcomes that truly matter and ensuring AI remains human-governed. Regulapati argued that AI becomes real when it replaces processes, not people. Garg pointed to fundamentals: talk to users, identify friction, and fix workflows with a human-led approach. Virani underscored a technical truth: the right context matters more than large inputs, and without that grounding, AI systems can hallucinate.
Copilot, closing the loop on its role throughout the discussion, offered a final on-stage summary: stay anchored to real outcomes, use AI to support people, maintain human-led workflows, and always work with the right context.