We aim to please the customer at short notice and always overestimate our capacity to comprehend a system as it gets more complex. That’s a recipe for technical debt. The antidote to this psychological shortfall is more team discipline in writing clean code with good test coverage. Static analysis tools with strict validation settings should be an integral part of your continuous integration process.
Born to create technical debt
In a previous post I talked about the vicious circle of bad test code, a common cause of poor maintainability in many large code bases. I suggested that you need to take (unit) testing seriously and approach the practice of writing code more systematically and certainly less artistically. The discipline this requires of the team is a must-have trait that’s unfortunately not hard-wired in our genetic firmware. On the contrary, it often looks like we were born to create technical debt. How so? We aim to please. We like the satisfaction of cooking something up that other people find yummy, and we feed in turn on their compliments. Nothing sinister about that. The crucial and galling difference with cookery is that serious software is too costly for one-time consumption. You’re dealing with an evolving entity, and you can’t afford to sit back and put Mother Nature in the driver’s seat, or you will be witnessing the making of a Frankenstein’s monster.

We often underestimate the longevity of the product we’re eventually going to build, so we don’t bother upgrading obsolete components of the system. We underestimate the burgeoning complexity of its essential logic, so we don’t put in enough refactoring effort to keep it under control. And here’s the biggest cognitive bias of all: we always overestimate our capacity to grasp what has actually become a tangled mess.
The hacker mentality: more dirty than quick
I have never been on any project where going quick and dirty would have yielded more business value in the end than working cleanly. Except for rapid prototypes and proofs of concept, coding for maintainability is always the safer option. Neither have I met a junior programmer who started off writing clean code, no matter their IQ. The longer I have been a developer, the less I am impressed by the hacker mentality of coding. There is nothing, absolutely nothing intelligent about writing convoluted, ‘clever’ code. It is self-serving and disrespectful of your colleagues. Many professional arenas, from law and medicine to aviation, have safety regulations, checklists and disciplinary bodies that can take away your license. Newspapers have style guides and grammar nazis. The average software development team doesn’t come close. Bending the rules to breaking point rarely gets you fired. Pressed for time, we too often deliver the goods through a mixture of quick fixes and evil hacks. This should be cause for embarrassment, not pride.
Any fool can write code that a computer understands, and many foolishly do so. To produce code that’s easy on the brain we need some baseline for clarity, brevity and simplicity (the hallmarks of maintainable code) and muster the discipline to insist on it. These metrics are certainly not wholly subjective. Static analysis tools do a fine job of sniffing out poor maintainability. Modern IDEs come equipped with tooling that packs decades of best practices to tell you in detail what to improve, sometimes even offering to fix it on the spot. Some people downplay the importance of static analysis because it doesn’t catch logical errors. True, clean code can still be very incorrect, in the same way that a spell checker can’t help you out when you write dependent but mean dependant. That doesn’t make the tool useless. Besides, forcing yourself to keep units concise and simple does reduce errors, albeit indirectly.
IntelliJ’s extensive code inspection options
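To make that concrete, here is a small contrived sketch (class name, method names and discount rules are all invented for illustration) of the kind of ‘clever’ construct an inspection profile will typically flag, next to a plain version with the same behaviour:

```java
public class Discounts {

    // 'Clever': a chain of nested ternaries. Correct, but most inspection
    // profiles flag nested conditional expressions as hard to scan.
    static double clever(int qty) {
        return qty > 100 ? 0.15 : qty > 50 ? 0.10 : qty > 10 ? 0.05 : 0.0;
    }

    // Plain: one guard clause per rule; trivially readable and diff-friendly.
    static double plain(int qty) {
        if (qty > 100) return 0.15;
        if (qty > 50) return 0.10;
        if (qty > 10) return 0.05;
        return 0.0;
    }

    public static void main(String[] args) {
        for (int qty : new int[] {5, 20, 60, 200}) {
            System.out.printf("qty=%d clever=%.2f plain=%.2f%n",
                    qty, clever(qty), plain(qty));
        }
    }
}
```

Both versions pass the same tests; only the second one passes a code review without squinting.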
Too much of anything is bad for you
A judge applies the law but isn’t allowed to question its validity; that’s up to parliament. In the same vein, the rules you agree on as a team are up for debate, but you shouldn’t @Suppress them willy-nilly. Try to make the code fit the rule first. Only when that is obviously impossible or ridiculous can you remove or adapt the rule, and that should be a consensus decision. You may be the judge of your particular bit of code, but coding guidelines are a team effort. As a first line of quality control before the code is shared for review, validation should be part of your automatic build, preferably in the form of a pre-commit hook.
Some rules are uncontroversial and very pertinent. Others are a matter of taste (tabs or spaces). Then there are stylistic standards that do little to manage complexity but keep things uniform, which is good because it reduces cognitive load. I strongly believe in standard naming conventions. If design patterns have taught us anything, it’s the value of a shared idiom, and I look forward to the day when AI can detect an apparent factory that goes by the name of a Creator, Generator or – the height of unhelpfulness – a Helper.
The most useful checks, however, are about simplicity and brevity. A long method is a drain on your short-term memory and a telling sign that the method has taken too much on its plate in terms of responsibility, i.e. low cohesion. Also watch out for anything bearing a name like registerNewUserAndSendEmail() or classes hinting at godlike powers that end in *Manager. Every class that’s not mere data manages something or other; you might as well call it SolutionWizard. Long methods, or ones with more than three parameters, often hide too many possible execution paths, or cyclomatic complexity if you want to look extra smart. Setting a strict limit on cyclomatic complexity is my all-time favourite, because highly complex code is hard to understand and even harder to test thoroughly. Which brings me to test coverage.
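Taking the offender named above as a starting point (every class and method name in this sketch is hypothetical), the ‘And’ in the name can be split into two cohesive, separately testable methods, with the composition made explicit:

```java
import java.util.ArrayList;
import java.util.List;

public class UserService {
    private final List<String> users = new ArrayList<>();
    private final List<String> outbox = new ArrayList<>();

    // One responsibility: validate and store the user.
    String registerNewUser(String email) {
        if (email == null || !email.contains("@")) {
            throw new IllegalArgumentException("invalid email: " + email);
        }
        users.add(email);
        return email;
    }

    // One responsibility: queue the notification.
    void sendWelcomeEmail(String email) {
        outbox.add("Welcome, " + email);
    }

    // The composition is now a one-liner in one obvious place,
    // instead of being buried inside registerNewUserAndSendEmail().
    void signUp(String email) {
        sendWelcomeEmail(registerNewUser(email));
    }

    List<String> outbox() {
        return outbox;
    }
}
```

Each half now has a short, focused test of its own, and the cyclomatic complexity of every method stays in the low single digits.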
Unit-test coverage: forget about averages
Test coverage can be expressed as the percentage of classes, methods and lines that are covered by unit tests. I believe all (100%) classes and methods should be touched, with at least 80% line coverage, and you should be adamant that this coverage applies to every class. Don’t take it as an average: you can get an 80% average with most classes at 90% and some at 50% coverage. But then please explain why those units were so poorly covered. ‘Hard or impossible to test’ is not a valid argument. Treating the threshold as a minimum average will only invite you to bump your test coverage by reaching for the low-hanging fruit, i.e. the methods with few execution paths. These are quick wins, as you need only one or a few test invocations to get 100% coverage. But such units are less error-prone by nature. Would you be flying if maintenance staff only did the items on the checklist for which they didn’t need to reach or crouch? You want to focus instead on the code with high cyclomatic complexity, because through writing the tests you’re more likely to stumble on errors in your design. If your team has too many test writers of the low-hanging-fruit variety, you should definitely add mutation testing to the mix.
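A toy illustration of why averages mislead (the example is mine, not from any real coverage report): a method with four execution paths needs four distinct inputs for full line coverage, while a trivial getter hits 100% with a single call and proves almost nothing.

```java
public class LeapYear {

    // Four execution paths: full line coverage needs four distinct inputs.
    static boolean isLeap(int year) {
        if (year % 400 == 0) return true;   // path 1: e.g. 2000
        if (year % 100 == 0) return false;  // path 2: e.g. 1900
        if (year % 4 == 0) return true;     // path 3: e.g. 2024
        return false;                       // path 4: e.g. 2023
    }

    public static void main(String[] args) {
        // One input per path; drop any of these and a line goes uncovered.
        System.out.println(isLeap(2000));  // true
        System.out.println(isLeap(1900));  // false
        System.out.println(isLeap(2024));  // true
        System.out.println(isLeap(2023));  // false
    }
}
```

A mutation testing tool (PIT is the best-known one on the JVM) goes one step further and catches the case where a path is executed by a test but never actually asserted on.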
Painting yourself into a corner
My first experience on a project that incorporated strict validation as part of its continuous integration took some getting used to. At first I was ever so annoyed by the pedantry of it, but after breaking the build several times I quickly came round. Writing maintainable code doesn’t come naturally to anyone. It sure didn’t to me. Over the last 18 years I have abandoned several hobby projects because I ended up in a self-inflicted maze of incomprehensible code and painted myself into a corner. The mantra ‘write as if the next person to edit your code is a homicidal maniac who knows where you live’ should still hold if you’re working solo. Especially then, I would say.
EDIT: I first titled this post ‘autistic tools’, but realized this might give offense. It was certainly not my intention to make light of what is in fact an incapacitating mental condition.