Programming for obsolescence
The importance of programming for deletion
I first heard the expression “programming for deletion” from Kaushik Gopal on his Fragmented podcast. He mentions it briefly in the context of not overcomplicating an application’s architecture. After listening to him and thinking about it for a moment, I couldn’t agree more with his remark. Looking back on my programming career, I’ve worked on many projects knowing full well that some of their features would eventually be replaced, removed or at least heavily refactored. Many of those projects had detailed roadmaps that clearly stated a feature would be removed at some point in the future. Planned obsolescence was integral to those projects’ lifecycles.
“Wait, what!?” I hear you say. “Shouldn’t we aim to program the best way we can, so that our code lives as long as it can?” True, we should write the best code we can at any given time. Programming for deletion has very little to do with code quality, though, and much more to do with the quality of a project.
You’ve probably heard the expression “program like the next person assigned to your project is an axe murderer”. That still holds true, and programming for obsolescence adds a new abstraction layer on top of it. Your code should be maintainable and easy to replace or remove altogether. No feature should be tightly coupled to the project. In theory, you should be able to remove every feature of the project and it would still boot up: empty and very likely useless (unless someone enjoys staring at blank screens), but building and running nonetheless.
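To make that idea concrete, here is a minimal sketch in Kotlin of a shell that only knows about features through an abstraction; the `Feature` interface and `AppShell` class are hypothetical names invented for illustration, not from any framework:

```kotlin
// A hypothetical plug-in point: every feature hides behind this interface.
interface Feature {
    val name: String
    fun start()
}

// The shell knows nothing about concrete features; it only iterates
// over whatever was registered. With zero features it still boots.
class AppShell(private val features: List<Feature>) {
    fun boot() {
        println("Shell booted with ${features.size} feature(s).")
        features.forEach { feature ->
            println("starting ${feature.name}")
            feature.start()
        }
    }
}

fun main() {
    // Removing a feature is a one-line change here, not a rewrite.
    AppShell(features = emptyList()).boot()
}
```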
Imagine a project…
Programmers are not fortune tellers; we cannot predict the future of software development. What we can do is look back and analyze the history of software development, technologies and trends.
If we could look at an imaginary graph of software development over the past, let’s say, 40 years, we could clearly spot spikes on it. Those spikes would correspond to the introductions of new programming languages, programming paradigms, interfaces, frameworks, hardware and so on, each having a huge impact on the industry. Although tempting, those spikes should not grab our interest. We should know about them, but build on the in-betweens, because that’s where we find the common, long-lasting trends: the ones that remain largely unaffected by the industry’s latest fashions. Once we realize that most of those fashions are just surface layers, an implementation detail of sorts (to borrow from the Clean Architecture paradigm), on top of well-structured, concrete software principles, we can focus on the more lasting, future-proof ideas.
Let’s imagine a well-structured, well-programmed project written, for example, in the C programming language in the 1980s. How would that project look in the eyes of a contemporary software developer?
Well, it would probably execute very fast, being written in C. Yes, probably, but that’s not what we are looking for. What else? What does the project structure look like? Can we guess what the project is about just by looking at its structure? Yes, we can. That’s excellent!
A project’s structure is its signature, and if we can at least guess what the project is about just by looking at its structure, we know that, at least at a surface level, some code separation is going on.
What else are we looking at in our imaginary C project? Glancing over some of the source files, headers and structures, do we see large code blocks with cryptic names, or lots of smaller source files with a few small functions with clearly defined names? We’re very likely to see the latter. Comments? Check. Loose dependencies? Check. Project-specific properties grouped together, tests, clearly defined points of entry? Check, check and check. Now imagine that the project in question has a console client for entering commands and printing some output based on those commands. How hard would it be to replace that client with a GUI-based one? Probably not very hard. We already have clear separation going on, so our job is not to rewrite the whole project, but to implement the new client routines. The only reason we can do that is because the original developers focused on the project’s core functionality, on the whys, not the hows. They most likely didn’t try to jump on the latest trends and continued the development process with as little outside noise as possible. The developers probably didn’t have programming for obsolescence in the back of their minds while writing the project; they were very likely focused on deadlines and/or other requirements. Nonetheless, they achieved the primary goals of programming for obsolescence: code simplicity and loose coupling.
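Sketched in Kotlin rather than C for brevity, the boundary that makes such a client swap cheap could look like this; the `Core`, `Client` and `ConsoleClient` names are hypothetical, invented for this sketch:

```kotlin
// The core exposes its functionality without knowing how it is presented.
class Core {
    fun execute(command: String): String = "result of '$command'"
}

// Every client, console or GUI, implements the same boundary.
interface Client {
    fun run(core: Core)
}

// The original console client: reads commands, prints results.
class ConsoleClient : Client {
    override fun run(core: Core) {
        while (true) {
            print("> ")
            val command = readLine() ?: break   // EOF ends the session
            if (command == "quit") break
            println(core.execute(command))
        }
    }
}

fun main() {
    // Swapping in a GUI client later means writing one new class
    // implementing Client, without touching Core at all.
    ConsoleClient().run(Core())
}
```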
What does that actually mean?
How would the process of programming for obsolescence go? By combining TDD or BDD with architectural principles that favor separation over tight coupling, such as Clean Architecture, we have already done most of the work needed for our code to be classified as “ready for deletion”. What remains is to ask ourselves two very important questions:
- “How hard is it to remove this block of code or feature?”
- “Will this be subject to change sometime in the future?”
In modern, agile-driven development, things are bound to change frequently, and if we find ourselves constantly adding or removing a given feature or block of code, it’s probably time to extract it as much as possible and hide it behind an abstraction layer, as in the sketch below. That way, complete features can be removed with little to no hassle.
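A minimal sketch of such an extraction, assuming a reporting feature we keep toggling; the `Reporting` interface and both implementations are hypothetical names, not from any library:

```kotlin
// The rest of the codebase depends only on this abstraction.
interface Reporting {
    fun report(event: String)
}

// The actual feature, free to change or disappear.
class ConsoleReporting : Reporting {
    override fun report(event: String) = println("reported: $event")
}

// A no-op stand-in: deleting the feature means binding this instead,
// and the rest of the project never notices.
object NoReporting : Reporting {
    override fun report(event: String) = Unit
}

class OrderService(private val reporting: Reporting) {
    fun placeOrder(id: Int) {
        // ...core domain logic...
        reporting.report("order $id placed")
    }
}

fun main() {
    OrderService(ConsoleReporting()).placeOrder(42)  // feature present
    OrderService(NoReporting).placeOrder(43)         // feature deleted
}
```

Deleting the feature now amounts to deleting one class and changing one binding.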
Marked for obsolescence
What are some of the candidates to be marked for possible future obsolescence?
Aside from the common blocks of code or files that usually get removed during a project’s lifecycle (temporary structures, obsolete test cases, deprecated code, unused references, badly written workarounds and hotfixes), libraries and frameworks are the most likely to change.
Frameworks, framework modules, layers and any framework-dependent lines of code should always be looked at as something temporary, something that’s there more as a decoration rather than something structurally integral to the project. There are, of course, framework-dependent projects that simply wouldn’t exist without the said framework. Think of Android, React, Spring, Node.js, etc. Those are all heavy frameworks without which a project wouldn’t be made in the first place, but, in general, we should aim to extract all the references to those frameworks into an implementation detail layer, safely hidden from the rest of our core domain logic. The same logic applies to smaller libraries, such as JSON parsers or bug reporting services.
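As an illustration, a small library such as a JSON parser could hide behind a boundary we own. This is a hedged sketch: `JsonCodec` and its implementation are hypothetical stand-ins for whatever parsing library is actually in use:

```kotlin
// Our own boundary: core code only ever sees this interface.
interface JsonCodec {
    fun encode(fields: Map<String, String>): String
}

// The implementation detail: the only class that would import the
// third-party library. Swapping parsers (or deleting this one)
// touches nothing else in the project.
class ThirdPartyJsonCodec : JsonCodec {
    override fun encode(fields: Map<String, String>): String {
        // Imagine a call into the real library here; a hand-rolled
        // stand-in keeps this sketch self-contained.
        return fields.entries.joinToString(
            separator = ",", prefix = "{", postfix = "}"
        ) { (key, value) -> "\"$key\":\"$value\"" }
    }
}

fun main() {
    val codec: JsonCodec = ThirdPartyJsonCodec()
    println(codec.encode(mapOf("feature" to "deletable")))
}
```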
One of the more integrated and intertwined parts of a project that will most likely be marked for deletion is the support for, or dependency on, specific versions of clients or services.
No feature should be tightly bound to a specific version of a service, even if it’s the most current one at the time of writing that part of the code. New versions of web clients, services and libraries are released all the time, and we should try to keep up with them as much as possible. Sometimes we could even be blocked from publishing a product if some minimum-requirements criterion is not met. Static code analysis tools such as lint checks can usually warn us about those cases, but we shouldn’t rely on them alone; we should also read the documentation and release notes for the services and libraries we use in our projects.
Hardware lifecycles differ from software lifecycles, and the parts that fall outside their intersection are the parts that will be removed. In the world of mobile development, for example, we see a lot of devices being deprecated due to a lack of software support or because they are discontinued. Our code should be prepared for those scenarios, so that all the code referencing such devices can be removed efficiently.
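One possible way of quarantining device- and version-specific knowledge behind a single boundary; the `DeviceQuirks` interface and the legacy-device check below are invented for illustration:

```kotlin
// All device- and version-specific knowledge lives behind this boundary.
interface DeviceQuirks {
    fun needsLegacyRendering(): Boolean
}

// The single place that knows about a soon-to-be-dropped device family.
// When support ends, this class is deleted and the default one remains.
class LegacyDeviceQuirks(private val model: String) : DeviceQuirks {
    override fun needsLegacyRendering() = model.startsWith("OldPhone")
}

object DefaultQuirks : DeviceQuirks {
    override fun needsLegacyRendering() = false
}

fun render(quirks: DeviceQuirks) {
    if (quirks.needsLegacyRendering()) {
        println("rendering with the legacy path")
    } else {
        println("rendering normally")
    }
}

fun main() {
    render(LegacyDeviceQuirks(model = "OldPhone-3"))  // legacy path
    render(DefaultQuirks)                             // normal path
}
```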
The performance optimization conundrum
As a general rule of thumb, the more we optimize parts of our code for specific scenarios, the more tightly we couple that code to the rest of the project.
If we are building a game engine and write a single, highly optimized collision detection function for a last-gen x86 processor, and that function is called throughout our codebase, refactoring it could destabilize the whole engine.
Sometimes a project genuinely needs more optimized parts of the code. If that’s the case, what we as developers can do is push those optimizations away from our core domain layers into the outermost abstraction layers, using loose coupling, and make them part of the implementation detail. As a result, the majority of the codebase stays perfectly reusable, with the tightly coupled, optimized code living in a plugin or extension.
In our game engine example, the more performant, processor-specific code can live in the hardware abstraction layer: if the right conditions are met, we use the optimized function; if not, we fall back to a more generic implementation of the same function. Instead of rewriting the whole engine, we only ever rewrite that part of the code.
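A minimal sketch of that fallback arrangement; the `CollisionDetector` interface, the capability flag and both implementations are hypothetical, and the optimized path is a placeholder for real processor-specific code:

```kotlin
// The engine only ever talks to this abstraction.
interface CollisionDetector {
    fun collides(ax: Int, ay: Int, bx: Int, by: Int, radius: Int): Boolean
}

// Generic implementation: always available, always correct.
object GenericDetector : CollisionDetector {
    override fun collides(ax: Int, ay: Int, bx: Int, by: Int, radius: Int): Boolean {
        val dx = ax - bx
        val dy = ay - by
        return dx * dx + dy * dy <= radius * radius
    }
}

// Stand-in for the processor-specific fast path; in reality this would
// delegate to native, SIMD-optimized code.
object OptimizedDetector : CollisionDetector {
    override fun collides(ax: Int, ay: Int, bx: Int, by: Int, radius: Int): Boolean =
        GenericDetector.collides(ax, ay, bx, by, radius)  // placeholder
}

// The hardware abstraction layer picks an implementation once, at startup.
fun selectDetector(cpuSupportsFastPath: Boolean): CollisionDetector =
    if (cpuSupportsFastPath) OptimizedDetector else GenericDetector

fun main() {
    val detector = selectDetector(cpuSupportsFastPath = false)
    println(detector.collides(0, 0, 3, 4, radius = 5))  // true: falls back safely
}
```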
As a small side note: we should always measure before optimizing.
A note on best practices
We should be cautious when implementing best practices.
Best practices are what their name suggests: the current best practices in the industry. One should always note that best practices are tailored for most projects, not all of them. They were not made for our particular project, and each project has its own little quirks that make it stand out in some way.
Even some industry titans tend to throw the term around, yet every year they come out with a new set of best practices and solutions, noting that the previous ones have become obsolete and are no longer recommended.
Sometimes we see best practices and recommendations for solving particular problems in a way that makes our codebase more integrated and harder to refactor or delete later. At first glance it looks perfectly OK, but on closer inspection, in exchange for code that’s more in line with the latest trends, we give up some of the ability to delete it easily later on. Best practices are a good starting point to build upon.
No rules, just suggestions
Planned obsolescence carries a very negative connotation, mainly from other industries, but in software development it should have the complete opposite effect. It should be looked at as a way of enhancing already well-written, clean code.
It’s sometimes hard to wrap our heads around too much abstraction, and to decide when or where something can or will change during a project’s lifecycle. The more projects we work on and the more experience we gain, the clearer it becomes what can be abstracted into some layer, and to what degree.
Ideally, what we want in the end is for our projects, and by extension our code, to evolve effortlessly over time.
It should be noted that nothing written in this article should be looked at as a requirement for writing great code. Quite the opposite: like any other programming principle, programming for obsolescence should be applied carefully and wisely, with the all-important decision of when and where to implement it.
What are your thoughts on this subject? Were you familiar with the concept, or have you already been using it without knowing, like myself? Thank you for reading.