AI Code is Obsolete
The feedback loop that is stalling software evolution.

Over the past two years, AI agents have reshaped the daily workflow of software engineers across the industry. I now rely on them constantly: writing code with Cursor, getting AI code reviews from CodeRabbit, and drafting or refining documentation with Notion AI.
These tools have taken much of the grunt work off my plate, but they’ve also revealed some critical limitations of today’s LLMs. One of the most glaring is how poorly they handle new technology.
Can’t Teach an Old Dog New Tricks
AI agents shine when scaffolding new projects. I often use Bolt to spin up the boilerplate and tooling, then switch to Cursor for feature development. It’s a great way to skip setup overhead and jump straight into real work.
But there’s a consistent frustration: Bolt scaffolds projects with outdated dependencies.
For example, I recently asked Bolt to scaffold a simple blog with Astro, Tailwind, Biome, and shadcn/ui. Here’s the full prompt:
You are a web developer tasked with creating a simple blog website. Build a complete blog application with the following specifications:
Tech Stack Requirements:
- Astro as the main framework
- Tailwind CSS for styling
- Biome for linting and formatting
- shadcn/ui for UI components
Make sure to use the latest version of all these tools
Core Features to Implement:
- Homepage with a list of blog posts
- Individual blog post pages
- About page
- Responsive design that works on mobile and desktop
- Clean, modern UI using shadcn components
Technical Requirements:
- Set up proper project structure with organized folders
- Configure Biome for code quality (linting and formatting)
- Implement proper TypeScript types where applicable
- Use Astro's content collections for blog posts
- Create at least 3 sample blog posts in Markdown format
- Implement proper SEO meta tags
- Add navigation between pages
Deliverables:
- Complete project setup with all dependencies
- Configuration files for Biome and Tailwind
- Reusable components using shadcn/ui
- Sample blog posts with frontmatter
- Responsive layout with header and footer
- Basic styling that demonstrates Tailwind integration
Output Format:
Provide step-by-step instructions with code examples for each major component, including file structure, configuration files, and key implementation details. Include commands for installation and setup.
Here’s a slice of the generated package.json:
{
  ...
  "dependencies": {
    ...
    "astro": "^4.15.9",
    "react": "^18.3.1",
    "react-dom": "^18.3.1",
    "tailwindcss": "^3.4.13",
    "zod": "^3.23.8",
    "@biomejs/biome": "^1.8.3"
  }
}
Even though I explicitly asked for the latest versions of these packages, it defaulted to legacy releases of Astro, React, Tailwind, Biome, and Zod.
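The gap is easy to verify: npm reports the newest published version of any package. Assuming you have npm installed, a quick check looks like this:

npm view astro version
npm view react version
npm view tailwindcss version
npm view @biomejs/biome version
npm view zod version

Each command prints the latest version on the registry, which you can compare against what the scaffold pinned.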
The pinned versions aren’t ancient, but they’re far from current. Why does this keep happening?
Why LLMs Gravitate Toward the Past
There are two main reasons:
- Outdated training data. LLMs are usually trained on code that’s months—or even years—behind the latest releases. If a version didn’t exist during training, the model simply doesn’t know about it.
- Bias toward familiarity. Even if newer versions are in the training set, older versions are more common. The model has more examples to draw from, so it defaults to what it “knows” best.
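The second point is worth making concrete. Here’s a toy sketch in TypeScript, with hypothetical counts and none of a real model’s machinery, of how raw frequency pulls a completion toward the older version:

// Hypothetical counts of how often each continuation of
// `"tailwindcss": "^` appears in the training data.
const completionCounts: Record<string, number> = {
  "3": 9_000, // years of projects pinned to v3
  "4": 250,   // the newer major, barely represented yet
};

// Greedy choice: pick whichever continuation was seen most often.
const [likeliest] = Object.entries(completionCounts)
  .reduce((best, next) => (next[1] > best[1] ? next : best));

console.log(likeliest); // "3": the old version wins on familiarity alone

A real model samples over far richer context, but the pressure is the same: what it has seen most, it suggests most.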
This means AI is often more effective with legacy packages than with bleeding-edge ones. We’re only a couple of years into the agentic coding boom, so we’ve yet to see its long-term effects on productivity and innovation in software. But it puts us at risk of reinforcing a vicious cycle:
- Developers hesitate to adopt new tools because AI struggles with them.
- Without adoption, there’s less real-world usage of the new tools.
- Less usage means less training data.
- Which, in turn, keeps the AI biased toward the old tools and patterns.
Eventually, AI itself could slow the pace of software evolution. If everyone waits for LLMs to “catch up,” new tools and best practices may never get the traction they deserve.
And to make matters worse, more and more of today’s AI-generated code will feed into tomorrow’s training data—compounding the bias toward the past.
Breaking the Cycle
So what do we do?
- Push AI forward. Don’t blindly accept the versions it scaffolds. When possible, update dependencies yourself and force the AI to adapt; one way to do that is sketched after this list.
- Stay sharp. Learn new versions of tools before AI gains familiarity. Struggling through the edge cases now pays off later when debugging.
- Build better agents. Eventually, coding tools may evolve beyond LLMs into agents with reasoning capabilities that don’t rely so heavily on statistical frequency. Until then, it’s on us to resist stagnation.
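On that first point: in an npm project, a minimal version of “update dependencies yourself” is to bump everything before handing the code back to the agent. This sketch assumes the npm-check-updates tool and whatever lint and build scripts your project defines:

npx npm-check-updates -u    # rewrite package.json to the latest published versions
npm install                 # reinstall against the updated manifest
npm run lint && npm run build   # surface breakages, then let the AI fix them

Once package.json pins the new versions, they become the ground truth the model has to code against, rather than the defaults it would have reached for.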
The future of software shouldn’t be held back by the past baked into training data. If we want AI to help us build what’s next, we can’t let it trap us in what’s already been done.