AI Coding Slop and Degradation: Is It True?

The complaining is everywhere. AI is producing slop at unprecedented scale. Production systems are degrading. High-profile outages are climbing. Engineers warn of a coming reliability collapse. It's fashionable to blame the tool. Is any of this actually new?


A Familiar Cleanup Job

Early in my career, I got paid well to replace Visual Basic apps.

VB6 was the original "anyone can code" moment. Drag four buttons onto a form, wire up some logic, and you had a Windows app. All kinds of non-developers created business software without source control, without tests, and without anyone reviewing what got put into users' hands.

Then someone like me cleaned it up later.

VB wasn't the villain. Plenty of skilled developers built solid VB apps. The villain was an organizational system that let a new tool run unchecked.

Sound familiar?

Slop Has a Long History

The current moaning around "AI slop" treats bad AI-generated code like a new animal. It isn't new; it's just arriving at a scale so large it's impossible to ignore.

Slop has been the steady output of every wave of democratized coding for decades. VB6, PHP, WordPress plugins, Stack Overflow copy-paste. Each wave produced a mountain of garbage that real engineers had to clean up.

AI is the latest entry in that tradition. If anything, AI-generated code today is dramatically better than what earlier democratized tools produced. Decades of human-written code as training material lifted the floor. A modern AI coding tool, well prompted, will out-write the average non-developer of the Information Age.

A typical VB app solved a narrow problem for one user or one team. Vibe-coding now produces systems. The blast radius when something goes wrong is orders of magnitude larger than it was before.

"AI writes worse code than humans" is too easy a claim to throw around. The models were trained on human code to begin with.

Degradation Is Just Tech Debt

A recent post from a respected developer warned of "unprecedented software degradation...increasing outages across high-profile products as AI-generated code accumulates."

The concern is fair, but "unprecedented" isn't about the code itself; it's about the scale at which AI operates.

Consider Y2K. Yes, I'm showing my age. So what. The millennium bug created urgent demand to expand two-digit years to four digits everywhere. Tech debt has been quietly rotting in production systems for decades. Every legacy codebase is a graveyard of compromises that will need to be refactored eventually.

Anyone framing AI degradation as a uniquely scandalous event is forgetting that we have always lived in this situation. So what's really going on here? What should we be focusing on?

Follow the Money

Software has always been built under the same pressures: ship faster, spend less, and push the maintenance bill into a later quarter.

None of this was invented by AI. AI is a new tool for an old motivation. It lets shareholders or investors keep margins while the maintenance debt compounds in someone else's quarter.

The same incentives that produced offshoring, deferred Y2K-era modernization, and "do more with less" are shaping AI adoption right now. The tool changed, and it's a massive tool, no doubt. The motivation didn't. Ask: who decided to ship without review? Who deferred the refactor? Who cut the engineers who would have caught it?

Follow the money. It's all there.

The Mirror

AI didn't introduce slop. It didn't invent tech debt. It didn't create the incentive to defer maintenance for shareholder value.


AI didn't break the system. It just runs it bigger and faster. Welcome to the Intelligence Age.