Published: September 3, 2025

Is Code Rotting Due To AI?

Yes, AI lets us ship a lot faster, but we're often blind to the unintended consequences this has on our codebase, security, and long-term maintainability. Code written today has half the lifespan it had five years ago; put another way, code churn has roughly doubled since AI assistants went mainstream1. Is this code rot? Or is it a fundamental change in how we build products? The data tells a surprising story.

The Code Survival Data

Erik Bernhardsson's analysis had already shown that modern projects have short code half-lives2. Angular's codebase turned over in 0.3 years, while Linux kernel code lasted over 6 years. The ship of Theseus effect - code was constantly being replaced.

AI accelerated this beyond expectations.

Code churn roughly doubled since 2021, per GitClear's analysis of 153 million lines of code1. This isn't just faster iteration - it's a change in how code gets written and why it gets replaced. When Cursor generates 100 lines of working code in 30 seconds, the psychological barrier to rewriting disappears. Why modify existing code when you can regenerate it? Not inherently bad - but it has consequences.

The Analysis

To test our hypothesis about code longevity in the AI era, I analyzed five repositories using Erik Bernhardsson's git-of-theseus method:

  • Redis (Traditional development, 2009) - Established project with careful, long-lived code
  • Angular (Pre-AI framework, 2010) - Major framework built before AI tooling era
  • Browser-use (AI-first startup, 2024) - Fellow YC company embracing AI coding
  • Firecrawl (Modern AI-era startup, 2024) - Web scraping tool built in the AI era
  • SAMMY Dashboard (AI-first startup, 2024) - Our internal dashboard with heavy AI assistance

The results are clear: AI-generated code has dramatically lower survival rates.

First Year Code Survival Rates

Figure: first-year code survival rates across development eras (365 days) - development era significantly impacts code stability.

The graph shows the likelihood of code lines surviving over the first year of the repository's life. Put simply: if I write a line of code today, what are the chances it still exists unchanged in a year?

After one year, the survival rates tell a clear story:

  • Redis (2009): 80.2% survived - careful, deliberate development
  • Angular (2010): 59.0% survived - pre-AI framework with structured development
  • Browser-use (2024): 27.5% survived - modern iterative practices in the AI era
  • Firecrawl (2024): 17.6% survived - fast-moving startup leveraging AI tools
  • SAMMY Dashboard (2024): 12.4% survived - fully AI-first development

Five repositories spanning 16 years, showing three distinct eras:

  • Traditional Era (Redis): Write carefully, change rarely
  • Pre-AI Modern Era (Angular): Structured iteration with long-term stability focus
  • AI Era (Browser-use, Firecrawl, SAMMY): Generate fast, iterate aggressively

This isn't necessarily bad - it reflects different development philosophies:

  • Traditional approach: Write code carefully, modify incrementally
  • AI approach: Generate quickly, replace when needed

But it fundamentally changes how we think about code quality, documentation, security and system architecture.

What This Means

These numbers represent fundamentally different approaches to building software:

  • Traditional Development (Redis, 80% survival): "Let's think through this carefully before we write it." Code is crafted to last, with deliberate architectural decisions.
  • Modern Pre-AI (Angular, 59% survival): "Ship it, then refactor based on what we learn." Structured iteration, but still focused on code longevity.
  • AI-First Development (Browser-use, Firecrawl, SAMMY, 12-27% survival): "Generate, test, replace." Code becomes disposable - the system architecture matters more than individual implementations.

That 12% survival rate looks like dysfunction. It's not. It's a fundamental shift in how we build software.

How this is measured (for the data nerds)

For those who are interested in the methodology, the tool uses Kaplan-Meier survival estimation - the same statistical method medical researchers use to study patient survival rates. Instead of tracking patients, it tracks every line of code through time.

It doesn't just count lines added or removed. It tracks the survival probability of code over time, accounting for recent commits (limited observation time) versus older code that's been around long enough to show its full lifecycle. The approach is to track each line from creation through modification or deletion, then calculate survival probability:

S(t) = \prod_{t_i \le t} \left(1 - \frac{d_i}{n_i}\right)

Where S(t) is the survival probability at time t, d_i is the number of lines modified or deleted at time t_i, and n_i is the number of lines still at risk. This gives us the probability that a randomly selected line survives t days unchanged.
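
As a rough illustration of the estimator (a toy sketch, not the actual git-of-theseus implementation, with made-up inputs), the same calculation in Python looks like this:

from collections import Counter

def kaplan_meier(durations, observed):
    """Kaplan-Meier survival curve from per-line lifetimes.

    durations: days each line lived before being changed/deleted, or days
               observed so far if it is still alive.
    observed:  True if the line was changed/deleted (an event), False if it
               is still alive today (right-censored).
    """
    events = Counter(d for d, seen in zip(durations, observed) if seen)
    survival, at_risk = 1.0, len(durations)
    curve = {}
    for t in sorted(set(durations)):
        d_i = events.get(t, 0)                 # lines changed/deleted at time t
        if d_i:
            survival *= 1 - d_i / at_risk      # S(t) = prod(1 - d_i / n_i)
        curve[t] = survival
        at_risk -= durations.count(t)          # drop events and censored lines
    return curve

# Toy usage: three lines deleted after 30/90/400 days, two still alive at day 365.
print(kaplan_meier([30, 90, 400, 365, 365], [True, True, True, False, False]))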

The New Methodology

Figure: development methodologies - Waterfall, Agile, AI, and Vibe Coding.

That 12% survival rate represents a complete paradigm shift. We don't build incrementally anymore. We generate completely, then subtract aggressively.

Remember agile? Start with a skateboard. Get feedback. Add a handle to make it a scooter. Then a bike. Then a motorcycle. Eventually, a car. Ship something usable immediately, iterate based on feedback.

AI broke this.

AI tools let you ship a complete car on day one. Not a skateboard you'll eventually turn into a car - an actual working car from the start. But it starts out fanciful: unnecessary features, some missing components, extra fluff. You iteratively refine it into something practical. Strip what doesn't work. Add what's missing. Polish until production-ready.

This is why AI code has such low survival rates. We don't build on top of existing code - we constantly regenerate, replace and refine it. That 12% survival rate is the methodology working as intended.

Code longevity in the AI era isn't about writing code that lasts forever - it's building systems that can evolve rapidly without breaking.

Not Vibe Coding

This is not vibe coding. Vibe coding is accepting every AI suggestion verbatim, letting dead code accumulate, adding features without cleaning up. You end up with a car buried under spaghetti code.

The AI methodology demands more discipline than traditional development:

  • Generate ambitiously - Let AI build the full feature set
  • Test ruthlessly - Ship to users, measure what works
  • Cut aggressively - Remove everything that doesn't deliver value
  • Refine constantly - Regenerate and improve based on feedback

Why it happened:

  • Speed became table stakes: When competitors ship complete products in days using AI, delivering a skateboard means you've lost. Users don't want to imagine your future car - they expect it now.
  • Expectations rose: The baseline for "minimum viable" moved up. What counted as impressive five years ago looks incomplete. AI raised the floor.
  • Rewriting became cheaper than incrementing: When AI regenerates entire features in minutes, the economics flip. Why modify a skateboard into a bike when you can generate the bike directly?

Nobody writes code anymore. We prompt, review, curate - but don't write. This shift happened in months, and it brings real dangers.

The Security Problem

This methodology is powerful but dangerous:

  • Security Vulnerabilities: AI-generated code has real security problems. Studies show up to 40% of AI code fails secure coding guidelines34. SQL injection, hardcoded credentials, insufficient input validation - AI mirrors insecure patterns from its training data. As you ship faster, your attack surface grows. Developers miss issues because they trust AI output more than they should56.
  • Missing Context: AI doesn't understand your business logic. It generates code that looks correct but misses domain requirements, compliance rules, or how your architecture actually works78. You know your system - AI only knows patterns.
  • Code Bloat and Duplication: AI doesn't know what exists in your codebase. It regenerates helper functions, recreates utilities, and duplicates logic because it's pattern-matching, not understanding your context9. This fragmentation makes maintenance harder over time.
  • Skill Atrophy: Accept AI output without thinking, and your problem-solving muscles weaken. You lose chances to learn and innovate. "Algorithmic dependence" is real - if AI becomes a crutch, your ability to think through complex problems atrophies1011.
  • Compliance Issues: AI doesn't consistently meet industry standards - GDPR, OWASP Top 10, PCI-DSS. Ship unreviewed AI code and you're creating legal exposure4.
  • Model Collapse: Here's the long-term concern: future AI models trained on AI-generated code risk amplifying flaws. As more code becomes AI-generated, training data degrades, creating a feedback loop of worse code quality12.

These aren't theoretical. They're happening now in production.

Making It Work

After a year building with AI, this is what works:

1. Modularization

Instead of letting AI generate code anywhere, we architect for containment. Modular code isn't just good practice - it's your primary defense against AI-generated chaos1314.

Why Modularization Works for AI Code

When AI is confined to working inside well-defined modules, any unintended consequences stay isolated. Changes, errors, or security vulnerabilities can't easily propagate beyond the module boundary. This makes code easier to review, replace, or roll back without affecting the rest of your system14.

Our Modular Boundaries:

  • Backend: Separate routers in FastAPI - PRs live in one router
  • Frontend: Split code by feature to keep AI-generated code segmented
  • Database: Move security logic to database level where AI can't bypass it
  • Services: Each service has strict input/output contracts that AI must follow

AI-generated code should be easy to remove. When modules have clear boundaries and minimal dependencies, you can regenerate entire features without fear.
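
As a minimal sketch of what that containment looks like on the backend - the feature names and handlers here are illustrative, not our actual code:

from fastapi import APIRouter, FastAPI

# Hypothetical feature routers - in practice each lives in its own module/file.
billing_router = APIRouter(prefix="/billing", tags=["billing"])
reports_router = APIRouter(prefix="/reports", tags=["reports"])

@billing_router.get("/invoices")
async def list_invoices():
    return []    # AI is free to regenerate handlers inside this module...

@reports_router.get("/summary")
async def report_summary():
    return {}    # ...without ever touching the billing module

app = FastAPI()
app.include_router(billing_router)   # the only place feature modules are wired together
app.include_router(reports_router)

A PR that only touches one router is easy to review - and just as easy to throw away and regenerate.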

2. Design Patterns

Beyond modularization, principled design patterns channel AI's generative freedom into well-scoped parts of the code. These patterns force AI to conform to clear contracts and separation of concerns - turning architectural constraints into active guardrails13.

Template Pattern: Define the skeleton of an algorithm, let AI fill in the details

class DataProcessor:  # Human writes this
    def process(self):
        data = self.load_data()
        transformed = self.transform(data)  # AI implements this
        return self.save(transformed)

The core workflow stays intact. AI can only modify specific steps, not the overall structure. This prevents AI from accidentally breaking critical business logic while still allowing rapid iteration on implementation details.

Strategy Pattern: Swap AI-generated algorithms safely

from abc import ABC, abstractmethod

class PaymentStrategy(ABC):  # Human-defined contract
    @abstractmethod
    def execute(self, amount): ...

class PaymentProcessor:
    def __init__(self, strategy: PaymentStrategy):
        self.strategy = strategy  # AI can generate new strategies

    def process_payment(self, amount):
        return self.strategy.execute(amount)  # Core logic protected

AI can generate entire payment strategies without touching the payment processor itself. If an AI-generated strategy has issues, you swap it out - the system architecture remains stable.

Factory Pattern: Control AI's object creation

class ComponentFactory:
    @staticmethod
    def create_component(component_type):  # AI fills this in
        # AI generates the component creation logic here
        # (e.g. mapping component_type to a concrete class),
        # but can't change how components are used elsewhere.
        ...

These patterns do more than organize code - they train AI through consistency. The more your codebase exhibits a pattern, the more likely AI will align with it in future generations.

It's more effective to architecturally constrain AI than to rely solely on human review1315. If there's only one safe way for AI to accomplish a task, review becomes both easier and more reliable.

3. Security by Architecture

You can't rely on code review to catch every security issue when AI generates code at scale. Instead:

  • Row-Level Security enforced at database level
  • API key verification as required dependencies, not optional checks
  • Org-scoped database clients that enforce data isolation automatically

If something could become a security hole, application code shouldn't be the layer responsible for preventing it. We write route wrappers that include safe dependencies for Cursor to go wild with.
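
Here's a minimal sketch of that idea, assuming FastAPI - verify_api_key, the in-memory key store, and the route names are illustrative stand-ins, not our production code:

from fastapi import APIRouter, Depends, FastAPI, Header, HTTPException

API_KEYS = {"demo-key": "org_123"}           # stand-in for a real key store

async def verify_api_key(x_api_key: str = Header(...)) -> str:
    """Required dependency: resolves the key to an org or rejects the request."""
    org_id = API_KEYS.get(x_api_key)
    if org_id is None:
        raise HTTPException(status_code=401, detail="Invalid API key")
    return org_id

def secure_router(prefix: str) -> APIRouter:
    """Route wrapper: every endpoint on this router gets auth for free."""
    return APIRouter(prefix=prefix, dependencies=[Depends(verify_api_key)])

router = secure_router("/projects")

@router.get("/list")
async def list_projects(org_id: str = Depends(verify_api_key)):
    # AI-generated handler code only ever sees an org-scoped identity;
    # row-level security in the database enforces isolation even if this code is wrong.
    return {"org": org_id, "projects": []}

app = FastAPI()
app.include_router(router)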

4. Guidelines Matter

Our .cursorrules file contains:

  • Coding standards and naming conventions
  • Required security patterns (e.g. every endpoint must use verify_api_key)
  • Architecture constraints (which directories for what code)
  • Common gotchas and anti-patterns to avoid

More specific rules = better AI output. Not just for consistency - for preventing AI from repeating mistakes. These same rules now drive our automated PR reviews through tools like Greptile, catching issues before human review even starts.
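
For flavor, here's a hypothetical excerpt of the kind of rules that live in such a file - trimmed and paraphrased, with made-up paths, not our actual .cursorrules:

# Security
- Every endpoint must take Depends(verify_api_key); never read API keys by hand.
- Always use the org-scoped database client; never query shared tables directly.

# Architecture
- Backend routes live in app/routers/<feature>.py; never add routes directly to main.py.
- Frontend code is split by feature under src/features/; no cross-feature imports.

# Gotchas
- Check app/utils/ for an existing helper before generating a new one.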

5. Different Reviews

Traditional code review assumes human-written code. AI-generated code needs different scrutiny:

  • Security-first review - Check authentication, authorization, input validation
  • Architecture compliance - Does this follow our patterns?
  • Duplication detection - Is this recreating existing code?
  • Context validation - Does AI understand the broader system implications?

Despite the risks, the benefits are real. We prototype in hours instead of days. Our codebase is better documented. New team members contribute on day one. With AI handling implementation, we focus on system design, user experience, business logic.

The Bottom Line

Your code isn't rotting. The entire development paradigm has shifted.

AI didn't make coding easier; it made it fundamentally different. We generate complete products, test with real users, and ruthlessly cut what doesn't work. The code is disposable, but the systems we build are more resilient than ever.

The winners will:

  • Embrace disposable code as a feature, not a flaw
  • Build modular architectures that contain AI's blast radius
  • Implement security by design, not by review
  • Focus on system architecture over code longevity

The era of carefully crafted, long-lived code is ending. The era of rapidly evolving systems has begun.


References

  1. GitClear Team: "Coding on Copilot: 2023 Data Suggests Downward Pressure on Code Quality", 2023

  2. Erik Bernhardsson: "The half-life of code & the ship of Theseus", 2016

  3. Qwiet AI: "Risks in AI-Generated Code: A Security and Reliability Perspective", 2024

  4. Qwiet AI: "Risks in AI-Generated Code: A Security and Reliability Perspective", 2024

  5. Forbes Technology Council: "How AI-Generated Code Is Unleashing A Tsunami Of Security Risks", 2025

  6. CodeStringers: "Risk of AI Code", 2024

  7. IoT For All: "AI Code Human Oversight", 2024

  8. 8th Light: "Potential and Pitfalls of AI-Assisted Coding", 2024

  9. Turing Tech: "The Hidden Cost of AI-Generated Code: What Research and Industry Trends Are Revealing", 2024

  10. Binmile: "Pros and Cons of AI-Assisted Coding", 2024

  11. Dev.to: "AI-Assisted Coding: The Hype vs the Hidden Risks", 2024

  12. 8th Light: "Potential and Pitfalls of AI-Assisted Coding", 2024

  13. ForgeCode (Dev.to): "Simple Over Easy: Architectural Constraints That Make AI-Generated Code Maintainable", 2024

  14. LimaCodes (Dev.to): "Monolithic Code vs Modularized Code: Choosing the Right Fit for Your AI Project", 2024

  15. Mark Shust (LinkedIn): "Most developers think AI-generated code is the problem", 2024