Technical Debt Part 2: LLMs, AI, and the New Frontier
Apr 3, 2025

LLMs, AI and the new Frontier
In Part 1, we explored what technical debt is, why it accumulates, and how it affects productivity, delivery, and retention.
If you haven't read it yet, you can find it here: https://blar.io/blog/technical-debt-part-1-introduction
But what happens when we add AI into the mix?
The rise of Large Language Models (LLMs) and the tools built on them, such as GitHub Copilot, ChatGPT, and Cursor, has changed how developers write code. We're moving faster than ever, but with speed comes risk.
Let’s unpack how LLMs can increase technical debt… and how to flip the script and use them to reduce it instead.

LLMs and the Acceleration of Debt
LLMs help developers generate code at lightning speed. But speed without structure can be dangerous. Here’s how technical debt can accumulate:
1. Code Without Context
LLMs can generate “working” code without fully understanding your system's architecture, constraints, or edge cases. The result:
Fragile solutions
Hidden coupling
Inconsistent patterns
Security vulnerabilities
Example: A developer uses Cursor to implement an auth flow, but the model doesn’t follow your team’s encryption practices or naming conventions. It works… until it breaks.
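To make this concrete, here is a minimal Python sketch of the gap between code that merely works and code that follows team encryption practices. The function names are hypothetical, and PBKDF2 stands in for whatever standard your team has actually agreed on:

```python
import hashlib
import hmac
import os

# What a model might plausibly emit: a fast, unsalted hash.
# It "works" for a login check, but unsalted SHA-256 is easy
# to attack with precomputed tables.
def hash_password_generated(password):
    return hashlib.sha256(password.encode()).hexdigest()

# Closer to a team standard: a salted, deliberately slow key
# derivation (PBKDF2 here, as a stand-in for your agreed KDF).
def hash_password(password, salt=None):
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)
```

Both versions pass a quick manual test, which is exactly the problem: nothing in the generated version's behavior reveals that it violates the team's standard.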

2. Lack of Ownership: The Hidden Cost of AI-Generated Code
One of the biggest risks of using AI to generate code is the illusion of progress. Developers see functional code, commit it, and move on without fully understanding how it works. Over time, this creates hidden complexity and technical debt that compounds.
The 42% Problem:
A 2018 study (link) found that developers spend 42% of their time dealing with technical debt: debugging, refactoring, and fixing bad code rather than writing new features or delivering direct value.

Why does this happen?
When AI-generated code enters a codebase without proper review, context, or understanding, it multiplies the future maintenance burden.
Here’s how:
Poor Maintainability
Code that isn't well understood is hard to maintain. Over time, developers inherit AI-generated code they didn't write, leading to:
Unclear logic – Why was this implemented this way?
Missing documentation – What does this function actually do?
Unexpected edge cases – AI-generated code often lacks robust error handling.
🔹 Example:
A junior developer uses Copilot to generate a caching mechanism. It works, but nobody realizes it doesn’t invalidate properly. Months later, a major bug surfaces, and no one knows where to start debugging.
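As a sketch (all names hypothetical), this failure mode often looks like a generated cache that never evicts, next to the small time-to-live change a reviewer would have asked for:

```python
import time

_cache = {}

# The generated version: caches forever. Stale profiles keep being
# served after the underlying data changes, because nothing evicts them.
def get_profile_cached(user_id, fetch_fn):
    if user_id not in _cache:
        _cache[user_id] = fetch_fn(user_id)
    return _cache[user_id]

_ttl_cache = {}
TTL_SECONDS = 60

# The fix: entries expire after TTL_SECONDS, so staleness is bounded.
def get_profile_ttl(user_id, fetch_fn):
    entry = _ttl_cache.get(user_id)
    now = time.monotonic()
    if entry is None or now - entry[1] > TTL_SECONDS:
        _ttl_cache[user_id] = (fetch_fn(user_id), now)
    return _ttl_cache[user_id][0]
```

The diff between the two is a few lines, but only someone who understands why the cache exists will ever write it.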
Fear of Touching Code Later
Developers often hesitate to modify or refactor AI-generated code because they don’t fully grasp its implications. This leads to:
Code silos – Only the original dev (or AI) understands it.
“Don’t touch it if it works” mentality – Bugs get patched rather than properly fixed.
Accumulating risk – Unclear logic remains untouched until it causes a crisis.
🔹 Example:
A team generates an AI-assisted function for database transactions. Six months later, they need to optimize performance, but no one wants to touch it for fear of breaking it. The inefficiency persists.
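One way out of "afraid to touch it" is to pin down the current behavior before refactoring. A minimal sketch, using `sqlite3` and a hypothetical helper name, of the kind of transaction wrapper involved:

```python
import sqlite3
from contextlib import contextmanager

# A hypothetical transaction helper like the one the team is afraid
# to touch: commits on success, rolls back on any error, so callers
# can never leave the database half-written.
@contextmanager
def transaction(conn):
    try:
        yield conn
        conn.commit()
    except Exception:
        conn.rollback()
        raise
```

A characterization test asserting "a failed block leaves no rows behind" turns the scary refactor into a safe one: if an optimization breaks rollback, the test fails loudly instead of the bug surfacing in production.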
Accidental Duplication of Logic
When developers don’t deeply understand the AI-generated code they use, they may:
Reimplement similar logic elsewhere, creating redundant code.
Ignore existing utilities that do the same thing better.
Break consistency by introducing different patterns for the same problem.
🔹 Example:
An AI model suggests a function to handle error logging, but the team already has a standardized logging utility. Over time, multiple logging methods appear, making debugging a nightmare.
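A minimal sketch of what a single standardized logging path looks like, so new code (human- or AI-written) can be pointed at it instead of reinventing it. The `log_error` helper and the `"app"` logger name are hypothetical:

```python
import logging

# The team's one standardized logger: a single place controls
# levels, formatting, and destinations.
logger = logging.getLogger("app")

def log_error(message, **context):
    # Everything flows through the same handler chain, so debugging
    # means reading one consistent log format, not three.
    logger.error("%s | context=%s", message, context)

# What drift looks like: a second, AI-suggested path that bypasses
# the handlers entirely and fragments the logs.
def log_error_duplicate(message):
    print(f"ERROR: {message}")
```

The cheapest defense is social, not technical: reviewers who know the codebase well enough to say "we already have a utility for this."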

[Illustrative image: we also love LLMs when they're used correctly ❤️]
How to Use LLMs Without Drowning in Debt
Establish Guardrails
Define team-wide standards, and use linters, formatters, and CI checks to enforce consistency.
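As a sketch of what such guardrails can look like in practice, here is an illustrative pre-commit configuration. The tool choices and pinned revisions are examples, not a prescription; the point is that formatting and linting run automatically on every commit, including AI-generated ones:

```yaml
# Illustrative .pre-commit-config.yaml sketch (tools and revs are examples)
repos:
  - repo: https://github.com/psf/black
    rev: 24.3.0
    hooks:
      - id: black          # one agreed formatter, no style debates
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.3.4
    hooks:
      - id: ruff           # linting catches patterns LLM output drifts on
```

Running the same checks again in CI closes the loop: code that skips the local hooks still can't merge without passing them.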
Treat LLM Output as a Draft, Not a Final Product
Every suggestion needs human review and adaptation. Encourage curiosity, not copy/paste habits.
Track Technical Debt Actively
Use tools like Blar to track where debt is forming, and prioritize fixes based on impact.
Balance Speed with Sustainability
Don’t fear shipping fast, but be intentional about when you’re taking shortcuts. Always have a plan to revisit and refactor.
Intelligent Code Review: The Blar Advantage
Traditional code reviews can be slow and inconsistent, especially as teams grow. Blar changes the game by using specialized AI agents to analyze pull requests, identify issues, and suggest improvements before technical debt accumulates.
🔹 Key Benefits of Blar’s Intelligent Code Review:
✅ Less Manual Overhead – Blar automates the tedious parts of code reviews, flagging security vulnerabilities, inefficiencies, and style inconsistencies, so developers can focus on high-value work.
✅ Faster Onboarding – New team members learn best practices faster as Blar provides real-time feedback aligned with the team’s coding standards.
✅ Preemptive Issue Detection – Instead of fixing problems later, Blar catches issues at the source, preventing technical debt from creeping into the codebase.
By integrating seamlessly with existing workflows, Blar ensures cleaner, more maintainable code from day one, reducing the long-term cost of bad decisions.
TL;DR
LLMs can either bury you in technical debt or help you clean it up; it all depends on how you use them.
Just like any tool, AI requires strategy. With the right practices, LLMs can reduce cognitive load, increase productivity, and help teams write better, cleaner code faster.
Blar helps on both fronts by analyzing pull requests, identifying issues, and suggesting improvements, keeping your code efficient and maintainable.
Struggling with technical debt? Let’s fix it.
We help teams regain control of their codebase, ship faster, and stay ahead of technical challenges. If technical debt is slowing you down, let’s talk.
Blar: blar.io
