Pricing work is no longer confined to a spreadsheet, a finance review, and a launch email. A change to tiers or packaging shows up in the day to day life of a product as a support ticket, a sales objection, or a quiet spike in cancellations. Teams feel it in the messy edges, where a plan name does not match what a user expects, or where a new feature lands behind a paywall before anyone has proven that it fits real workflows. That is why pricing has become a product problem, and why experimentation is becoming the language that keeps it honest.
Jyoti Yadav, Senior Data Science Manager at Atlassian working on Loom, builds inside that reality. Her operating principle is simple: treat every major change as a testable promise to users, and make the evidence legible enough that product, engineering, marketing, and sales can commit without guessing.

When A Test Has To Carry The Rollout
That same shift toward proof is visible across industries, because teams have learned how expensive it is to be confidently wrong. Across retailers and brands running analytics driven experimentation, 46% of ideas do not break even or fail to prove the initial hypothesis, which is a blunt reminder that intuition is not a rollout plan. The discipline is practical, not academic. In the same research, 68% say experimentation meaningfully changes decisions about what should be rolled out, what should be refined, and what should be killed early.
Yadav learned that logic in a setting where the operational risks were visible. While working on McDonald’s national “All Day Breakfast” rollout via the Test and Learn platform, she used advanced SQL and automated ETL pipelines to process large scale point of sale data and compare test stores against carefully matched control stores. The question was not only demand. It was kitchen flow, supplier constraints, and whether breakfast items would slow speed of service for lunch and dinner. The analysis contributed to a 5.7% increase in same store sales in Q4 2015 and supported a shift that generated $1.2 billion in earnings in that quarter, beating expectations, while the organization retrained staff to operate dual menus at scale. It was a national change with real friction, and the data had to survive that friction.
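The matched test-versus-control approach described above can be sketched in miniature. This is an illustrative example, not the actual SQL and ETL pipeline used in the rollout: the store records, the greedy nearest-neighbor matching on pre-period sales, and the difference-in-differences lift calculation are all hypothetical simplifications of how such comparisons typically work.

```python
# Illustrative sketch of a matched test-vs-control comparison.
# All data and matching logic here are hypothetical, simplified
# stand-ins for a real test-and-learn pipeline.

def match_controls(test_stores, control_pool):
    """Pair each test store with the unused control store whose
    pre-period sales are closest (greedy nearest-neighbor matching)."""
    pairs = []
    available = list(control_pool)
    for t in test_stores:
        best = min(available, key=lambda c: abs(c["pre"] - t["pre"]))
        available.remove(best)
        pairs.append((t, best))
    return pairs

def estimate_lift(pairs):
    """Average difference-in-differences across matched pairs:
    (test post - test pre) - (control post - control pre)."""
    diffs = [(t["post"] - t["pre"]) - (c["post"] - c["pre"])
             for t, c in pairs]
    return sum(diffs) / len(diffs)

# Hypothetical weekly same-store sales (indexed units), before and
# after the change, for test stores and a pool of candidate controls.
test_stores = [
    {"id": "T1", "pre": 100.0, "post": 108.0},
    {"id": "T2", "pre": 120.0, "post": 127.0},
]
control_pool = [
    {"id": "C1", "pre": 101.0, "post": 102.0},
    {"id": "C2", "pre": 119.0, "post": 120.5},
    {"id": "C3", "pre": 150.0, "post": 151.0},
]

pairs = match_controls(test_stores, control_pool)
lift = estimate_lift(pairs)
print(round(lift, 2))  # average lift attributable to the change
```

The point of the matching step is the one the article makes: a test store is only informative relative to a control that was on a similar trajectory before the change, so the measured lift reflects the rollout rather than seasonality or store mix.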
“Experiments only matter if they protect the rollout,” Yadav says. “If the measurement ignores how work is actually done, you ship a story, not a result.”
Pricing And Packaging In Subscription Products
Once you have seen how a rollout breaks in the real world, you stop treating subscription changes as a purely commercial decision. In B2B SaaS, pricing and packaging updates are now routine rather than rare, with 94% of companies updating pricing and packaging at least once per year and nearly 40% doing so as often as once per quarter. That pace makes governance around experiments unavoidable. When teams adjust tiers that frequently, the cost of unclear measurement is not theoretical. It becomes churn, discounting, and internal confusion that compounds every quarter.
Yadav applied that cadence during Loom's end to end pricing and packaging overhaul following Atlassian's acquisition. She led a team of six data scientists and built a biannual data meta synthesis to unify analyses, align stakeholders, and drive roadmap pivots with a shared view of risk and upside. The work required balancing the value of new AI features, including a 33% premium for Business plus AI, against retention and bundling complexity, then translating those tradeoffs into pricing tiers such as Business at $12.50 per month and Enterprise plans that could reach $10k annually. The launch also had to respect how Loom was already used at scale, including the 49M videos created with Loom AI, because packaging decisions land differently when usage is already habitual. That same rigor underpins her work beyond Loom as an editorial board member and peer reviewer at the SARC Journal of Technology Perception and the Journal of Economics Intelligence and Technology, where she evaluates applied research and data driven decision making at scale. The job was not to "set a price." It was to make the change defensible across functions.
“Packaging is where strategy becomes real to customers,” Yadav says. “If you cannot explain why a tier exists, you will end up defending it in support threads and renewals.”
Proving AI Value Before You Charge For It
As teams add AI capabilities to products, the pressure to monetize early can outrun what has been proven in use. That gap shows up in the market. In enterprise AI efforts, 74% of companies are not yet achieving tangible value at scale, and only 26% have developed the capabilities needed to move beyond pilots. Those numbers do not argue against AI. They argue for measurement that is honest about adoption, workflow fit, and the difference between novelty and habit.
Yadav’s Loom AI launch work was built around that distinction. She led a team of data scientists through analysis and experimentation, drove the final recommendation, and supported the launch that increased annual recurring revenue by $2.85M per year. Adoption signals were treated as product evidence, not marketing garnish, with 67% of users using AI generated titles and 73% reporting the AI suite as extremely valuable. Those are the kinds of usage rates that change how a product team thinks about where AI belongs and how it should be packaged, because they speak to repeat behavior, not a one time click. This was not an abstract exercise. It shipped.
“AI features earn their price the same way any feature does,” Yadav says. “You watch what people do repeatedly, then you decide what is worth paying for.”
Keeping Global Teams Aligned On One Version Of The Truth
After an AI launch and a pricing overhaul, the hardest part is often not the analysis. It is getting global teams to agree on what the analysis means. In modern work patterns, people are interrupted 275 times a day by meetings, emails, and pings, and about 30% of meetings now span multiple time zones. That is a brutal environment for careful decisions. When the narrative shifts with every meeting, teams stop trusting the numbers and start optimizing for the loudest room.
Yadav’s work at Loom sat directly in that context, because the product is an answer to coordination friction. As part of Loom’s growth and AI assisted workflows, the platform reached 88M videos recorded in 2024 and reduced the need for 202M meetings, a scale that makes “alignment” more than a cultural preference. It becomes an operating requirement. Her approach emphasized repeatable synthesis and clear experimentation outputs so stakeholders could evaluate changes without re-litigating the basics in every time zone. Integration with Atlassian’s ecosystem also raised the bar for consistency, because pricing, packaging, and AI feature expectations do not live inside a single product boundary anymore. The point was to keep one shared truth even as decisions moved across functions.
“Data does not travel well when every team has its own version,” Yadav says. “Your job is to make the evidence portable, so the decision stays consistent.”
Experimentation That Keeps Monetization Honest
The subscription economy is projected to grow 67% over the next five years, rising from $722 billion in 2025 to $1.2 trillion by 2030, which raises the stakes on pricing decisions that protect trust. At the same time, global enterprises are expected to invest $307 billion in AI solutions in 2025, with spending expected to reach $632 billion by 2028, a pace that will keep pushing AI features into packaging decisions whether teams are ready or not. The advantage will belong to organizations that standardize experimentation so cross functional teams can move quickly without turning customers into test subjects.
“Growth is not the goal by itself,” Yadav says. “The goal is to grow without losing clarity about what actually worked.”


