
I Built an Enterprise-Scale App With AI. Here’s What It Got Right—and Wrong

To get a firsthand understanding of the impact of AI on the software development lifecycle (SDLC), I decided to run an experiment: writing a reasonably complex system from scratch using AI. I didn't want a "Hello World" or another "To Do" app; I wanted something realistic, something that could be used at scale, like we'd build in the enterprise world.

The result is Under The Hedge—a fun project blending my passion for technology and wildlife.

The experiment yielded several key findings, validating and adding practical context to the broader industry trends:

  • Is AI making developers faster or just worse? I ran an experiment to find out.
  • The "Stability Tax": Discover the hidden cost of high-speed AI code generation and why it's fueling technical debt.
  • Vibe Coding is Dead: Learn why generating code via natural language prompts is raising the bar for developer mastery, not lowering it.
  • The Trust Paradox: Why 90% of developers use AI, but 30% don't trust a line of code it writes.
  • The Bricklayer vs. The Site Foreman: A new model for the developer's role in the age of AI.

The Project: Under The Hedge

I set out to build a community platform for sharing and discovering wildlife encounters—essentially an Instagram/Strava for wildlife.

To give you a sense of the project's scale, it includes:

  • AI-Powered Analysis: Users upload photos, and the system uses Gemini to automatically identify species, describe behavior, and assign an "interest score" based on awareness of what’s going on in the image and the location it was taken.
  • Complex Geospatial Data: Interactive maps, geohashing for location following, and precise coordinate extraction from EXIF data.
  • High-Performance Data Layer: A scalable, blazing-fast single-table design in AWS DynamoDB to handle complex data access patterns with sub-millisecond latency.
  • Scalable Media Infrastructure: A robust media component using AWS CloudFront to efficiently cache and serve high-resolution images and videos to users globally.
  • Social Graph: A full following system (follow Users, Species, Locations, or Hashtags), threaded comments, and activity feeds.
  • Gamification: Place leaderboards to engage locals.
  • Enterprise Security: Secure auth via AWS Cognito, privacy controls, and moderation tools.
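To make the geospatial bullet concrete, here is a minimal sketch of geohash encoding. This is an illustrative implementation of the standard algorithm, not the app's actual code; a production system would more likely use an existing library.

```python
# Minimal geohash encoder (illustrative sketch; a production app would
# likely use a library such as python-geohash).
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat: float, lon: float, precision: int = 6) -> str:
    """Encode a lat/lon pair into a geohash string of `precision` chars."""
    lat_range, lon_range = [-90.0, 90.0], [-180.0, 180.0]
    bits, use_lon = [], True  # geohash interleaves bits, longitude first
    while len(bits) < precision * 5:
        rng, val = (lon_range, lon) if use_lon else (lat_range, lat)
        mid = (rng[0] + rng[1]) / 2
        if val >= mid:
            bits.append(1)
            rng[0] = mid
        else:
            bits.append(0)
            rng[1] = mid
        use_lon = not use_lon
    # Pack every 5 bits into one base32 character.
    return "".join(
        BASE32[int("".join(map(str, bits[i:i + 5])), 2)]
        for i in range(0, len(bits), 5)
    )
```

Because nearby points share a common prefix, "following a location" reduces to matching stored geohash prefixes rather than exact coordinates.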

You can check it out here: https://www.underthehedge.com
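For readers unfamiliar with single-table design, here is a sketch of what a key layout covering a few of the access patterns above might look like. The entity names (USER, SIGHTING, FOLLOWER) and key shapes are hypothetical illustrations of the pattern, not Under The Hedge's real schema.

```python
# Hypothetical single-table DynamoDB key layout; entity names and key
# shapes are illustrative of the pattern, not the app's real schema.
def user_profile_key(user_id: str) -> dict:
    return {"PK": f"USER#{user_id}", "SK": "PROFILE"}

def sighting_key(user_id: str, ts: str, sighting_id: str) -> dict:
    # A sortable SK lets a single Query (PK equals, SK begins_with
    # "SIGHTING#") return a user's sightings in chronological order.
    return {"PK": f"USER#{user_id}", "SK": f"SIGHTING#{ts}#{sighting_id}"}

def follow_key(follower_id: str, target: str) -> dict:
    # The same table stores the social graph; `target` can name a user,
    # a species, a geohash prefix, or a hashtag.
    return {"PK": f"FOLLOWER#{follower_id}", "SK": f"FOLLOWS#{target}"}
```

Packing all entities into one table with composite keys is what allows the varied access patterns (profiles, feeds, follows) to be served with single-digit-millisecond point queries.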

The Industry Context

Before I share what I found while developing Under The Hedge, it's worth reviewing what the rest of the industry is saying, based on studies from the last couple of years.

As we come to the end of 2025, the narrative surrounding AI-assisted development has evolved from simple "speed" to a more nuanced reality. The 2025 DORA (DevOps Research and Assessment) report defines this era with a single powerful concept: AI is an amplifier. It does not automatically fix broken processes; rather, it magnifies the existing strengths of high-performing teams and the dysfunctions of struggling ones.

Throughput vs. Stability

The 2025 data reveals a critical shift from previous years. In 2024, early data suggested AI might actually slow down delivery. However, the 2025 DORA report confirms that teams have adapted: AI adoption is now positively correlated with increased delivery throughput. We are finally shipping faster.

But this speed comes with a "Stability Tax." The report confirms that as AI adoption increases, delivery stability continues to decline. The friction of code generation has been reduced to near-zero, creating a surge in code volume that is overwhelming downstream testing and review processes.

Vibe Coding Bug Spike

This instability is corroborated by external studies. Research by Uplevel in 2024 found that while developers feel more productive, the bug rate spiked by 41% in AI-assisted pull requests. This aligns with the "vibe coding" phenomenon—generating code via natural language prompts without a deep understanding of the underlying syntax. The code looks right, but often contains subtle logic errors that pass initial review.

The Trust Paradox

Despite 90% of developers now using AI tools, a significant "Trust Paradox" remains. The 2025 DORA report highlights that 30% of professionals still have little to no trust in the code AI generates.

We are using the tools, but we are wary of them—treating the AI like a "junior intern" that requires constant supervision.

Code Churn and Technical Debt

The Death of "DRY" (Don't Repeat Yourself)

The most damning evidence regarding code quality comes from GitClear’s 2025 AI Copilot Code Quality report. Analyzing 211 million lines of code, they identified a "dubious milestone" in 2024: for the first time on record, the volume of "Copy/Pasted" lines (12.3%) exceeded "Moved" or refactored lines (9.5%).

The report details an 8-fold increase in duplicated code blocks and a sharp rise in "churn": code that is written and then revised or deleted within two weeks. This indicates that AI is fueling a "write-only" culture where developers find it easier to generate new, repetitive blocks of code than to refactor existing logic to be modular. We are building faster, but we are building "bloated" codebases that will be significantly harder to maintain in the long run.

Security Risks

Finally, security remains a major hurdle. Veracode’s 2025 analysis found that 45% of AI-generated code samples contained security vulnerabilities, with languages like Java seeing security pass rates as low as 29%.

So what do these studies tell us?

The data paints a clear picture: AI acts as a multiplier. It amplifies velocity, but if not managed correctly, it also amplifies bugs, technical debt, and security flaws.

What my Experiment Taught Me

My chosen tools were Gemini for architecture/planning and Cursor for implementation. In Cursor I used agent mode with the model set to auto.

Building Under The Hedge was an eye-opening exercise that both confirmed the industry findings and highlighted the practical, human element of AI-assisted development.

The Velocity Multiplier

While I didn't keep strict time logs, I estimate I could implement this entire system—a reasonably complex, enterprise-scale platform—in less than a month of full-time work (roughly 9-5, 5 days a week). This throughput aligns perfectly with the DORA report's finding that AI adoption is positively correlated with increased delivery throughput.

The greatest personal impact for me, which speaks perhaps more about motivation than pure speed, was the constant feedback loop. In past personal projects, I often got bogged down in small, intricate details, leading to burnout. Using these tools, I could implement complete, complex functionality—such as an entire social feed system—in the time it took to run my son’s bath. The rapid progress and immediate results are powerful endorphin hits, keeping motivation high.

The "Stability Tax" in Practice

My experience also validated the industry's growing concerns about the "Stability Tax"—the decline in delivery stability due to increased code volume. I found that AI handles well-defined, isolated tasks exceptionally well: building complex map components or sophisticated media UIs took seconds, tasks that would typically take me days or even weeks. However, this speed often came at the expense of quality:

  • Bloat and Duplication: The AI consistently defaulted to the fastest solution, not the best one, unless explicitly instructed otherwise. This led to inefficient, bloated code. When tackling a difficult issue, it would often "brute force" a solution, implementing multiple redundant code paths in the hope of fixing the problem.
  • The Death of "DRY" Confirmed: I frequently observed the AI duplicating whole sections of code instead of creating reusable components or helper methods. This is direct evidence of the "write-only" culture highlighted in the GitClear report, fueling the rise in copied/pasted lines and code churn. If I changed a simple data contract (e.g., renaming a database property), the AI would often try to maintain backwards compatibility by handling both the old and new scenarios, leading to unnecessary code bloat.
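The backwards-compatibility bloat described above can be sketched as follows. The key names (img_url, image_url) are hypothetical; the contrast is between the AI's defensive dual-path reads and a single normalization at the data boundary.

```python
# Illustrative sketch of backwards-compatibility bloat: after a rename
# from "img_url" to "image_url", AI-generated code tends to keep both
# read paths forever. Key names here are hypothetical.

def get_image_url_bloated(item):
    # AI-style defensive duplication: every historical key name survives,
    # and this pattern gets copy/pasted into every call site.
    if "image_url" in item:
        return item["image_url"]
    if "img_url" in item:  # legacy key, kept "just in case"
        return item["img_url"]
    return None

def normalize_item(item):
    # The cleaner fix: migrate legacy records once, at the data boundary,
    # so the rest of the codebase only ever sees the new key.
    if "img_url" in item and "image_url" not in item:
        item["image_url"] = item.pop("img_url")
    return item
```

With normalization at the boundary, a contract change touches one function; the bloated version scatters the old key name across the codebase indefinitely.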

Ultimately, I had to maintain a deep understanding of the systems to ensure best practices were implemented, confirming the "Trust Paradox" where developers treat the AI like a junior intern requiring constant supervision.

Security and Knowledge Gaps

The security risks highlighted by Veracode were also apparent. The AI rarely prioritized security by default; I had to specifically prompt it to consider and implement security improvements.

Furthermore, the AI is only as good as the data it has access to. When I attempted to integrate the very new Cognito Hosted UI, the model struggled significantly, getting stuck in repetitive loops due to a lack of current training data. This forced me to step back and learn the new implementation details myself. Once I understood how the components were supposed to fit together, I could guide the AI to the correct solution quickly, highlighting that a deep conceptual understanding is still paramount.

AI as a "Coaching Tool"

Despite its flaws, AI proved to be a magnificent tool for learning. As a newcomer to Next.js and AWS Amplify, the ability to get working prototypes quickly kept me motivated. When I encountered functionality I didn't understand, I used the AI as a coach, asking it to explain the concepts. I then cross-referenced the generated code with official documentation to ensure adherence to best practices. By actively seeking to understand and then guiding the AI towards better solutions, I was able to accelerate my learning significantly.

How to Help AI Be a Better Code Companion

To mitigate the "Stability Tax" and maximize the AI's velocity, a proactive, disciplined approach is essential:

  1. Detailed Pre-Planning is Key: Use tools like Gemini (leveraging its deep research feature) to create detailed specifications, architecture diagrams, and design documents before starting implementation. This "specification first" approach provides the AI with a clearer target, leading to more predictable and robust output.
  2. Explicitly Enforce Quality Gates: Instead of relying on the AI to spontaneously generate quality code, we must proactively instruct it to maintain standards. This includes designing regular, specific prompts focused on:
  • Identifying security improvements.
  • Identifying performance issues or potential optimisations.
  • Identifying duplicated or redundant code.
  3. Leverage AI for Quality Assurance: Use the AI to retrospectively analyze generated code and identify areas for refactoring or improvement, a task it can perform far faster than a manual human review.
  4. Use AI for the Entire SDLC: We should deploy AI to write and self-assess feature design documents, epics, and individual tasks, and crucially, to write comprehensive test plans and automated tests to catch the subtle logic errors associated with "vibe coding."
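As an example of the kind of check a quality gate could automate, here is a minimal sketch that flags structurally identical Python functions. The approach (hashing normalized syntax trees) is illustrative; a real pipeline would use a dedicated clone-detection tool.

```python
# Minimal duplicate-function detector: the kind of automated check a
# quality gate could run over AI-generated code. Illustrative only; a
# real pipeline would use a dedicated clone-detection tool.
import ast
import hashlib
from collections import defaultdict

def duplicate_functions(source: str):
    """Group function names whose bodies are structurally identical."""
    groups = defaultdict(list)
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            # ast.dump omits line/column info by default, so functions
            # with identical structure produce identical dumps.
            body = ast.dump(ast.Module(body=node.body, type_ignores=[]))
            key = hashlib.sha256(body.encode()).hexdigest()[:12]
            groups[key].append(node.name)
    return [names for names in groups.values() if len(names) > 1]
```

Running a check like this after each AI-generated change surfaces the copy/paste duplication early, while it is still cheap to refactor into a shared helper.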

Conclusion: The End of "Vibe Coding"

So, should we stop using AI for software development?

Absolutely not. To retreat from AI now would be to ignore the greatest leverage point for engineering productivity we have seen in decades. Building Under The Hedge proved to me that a single developer, armed with these tools, can punch well above their weight class, delivering enterprise-grade architecture in a fraction of the time.

However, the era of blind optimism must end. The "Sugar Rush" of easy code generation is over, and the "Stability Tax" is coming due.

The data and my own experience converge on a single, inescapable truth: AI lowers the barrier to entry, but it raises the bar for mastery.

Because AI defaults to bloating codebases and introducing subtle insecurities, the human developer is more critical than ever. Paradoxically, as the AI handles more of the syntax, our value shifts entirely to semantics, architecture, and quality control. We are transitioning from being bricklayers to being site foremen.

If we treat AI as a magic wand that absolves us of needing to understand the underlying technology, we will drown in a sea of technical debt, "dubious" copy-paste patterns, and security vulnerabilities. But, if we treat AI as a tireless, brilliant, yet occasionally reckless junior intern—one that requires strict specifications, constant code review, and architectural guidance—we can achieve incredible things.

The path forward isn't to stop using the tools. It is to stop "vibe coding" and start engineering again. We must use AI not just to write code, but to challenge it, test it, and refine it.

The future belongs to those who can tame the velocity. I only wish my experiment resulted in building something that would make me loads of money instead of just tracking pigeons! 😂

Thank you for reading; please check out my other thoughts at denoise.digital.
