Coding for Obsolescence
Avoiding that painful rewrite by planning for it
A common refrain in software development is death by a thousand cuts. Hundreds of commits — each of which seems reasonable in isolation — combine to create an impenetrable tangle of complexity.
As quality degrades, the cost of doing things the right way increases. Developers become less and less likely to fix systemic issues, instead choosing to implement workarounds, layers of abstraction, and quick fixes. This in turn makes things worse, which makes fixing them even harder. The cycle continues.
Productivity slows, the product becomes increasingly fragile, and developers start refusing to touch parts of the code with a ten-foot barge pole.
Eventually things become so bad that something has to be done.
The result: a massive effort to refactor the offending code. Or maybe it got so bad that the only way out is a complete rewrite. Either way, it will be a costly process, both in direct cost and in opportunity cost, and it will inevitably take longer than first planned.
I used to think this was avoidable through thorough code reviews and good engineering design. While these practices can definitely postpone the inevitable, I now think it is just that… inevitable.
Even the most entrenched products evolve over time. For a start-up, the product may look nothing like it did six months prior. In engineering terms this means technical decisions made in the past are no longer relevant, data models have grown and evolved, and remnants of past product iterations lie in wait, looking for unsuspecting victims.
If we accept evolutionary technical debt as a force of nature, as an inevitability, we can start to look for ways to mitigate the impact. Componentize and compartmentalize your system with the expectation that some day you’ll need to replace each part.
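One way this compartmentalization can look in practice is to make business logic depend on a narrow interface rather than on a concrete implementation, so the component behind the interface can be rewritten or replaced wholesale. A minimal sketch in Python, with hypothetical names (`EmailSender`, `notify`) invented for illustration:

```python
from typing import Protocol


class EmailSender(Protocol):
    """The narrow boundary: callers depend on this, not on a vendor."""

    def send(self, to: str, subject: str, body: str) -> None: ...


class SmtpSender:
    """One implementation; a real SMTP call would go here."""

    def send(self, to: str, subject: str, body: str) -> None:
        ...  # e.g. smtplib, or a third-party email API


class InMemorySender:
    """A stand-in implementation; swapping it in needs no caller changes."""

    def __init__(self) -> None:
        self.sent: list[tuple[str, str, str]] = []

    def send(self, to: str, subject: str, body: str) -> None:
        self.sent.append((to, subject, body))


def notify(sender: EmailSender, user_email: str) -> None:
    # Business logic sees only the interface. When the email component
    # becomes obsolete, only code behind the boundary is rewritten.
    sender.send(user_email, "Welcome", "Thanks for signing up")
```

The point is not the plumbing but the expectation baked into the design: `SmtpSender` is assumed to be temporary, and replacing it is a local change rather than a rewrite that ripples through every caller.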
It’s a mindset: plan for obsolescence, nothing lasts forever. We’re not building the pyramids.