During my IT career I have had to analyze many code bases – something you would call an architecture review, or at least a part of one. Personally, I don't believe in reviews that don't actively touch the source code. Touching means: static analysis, load and stress tests, and manual walk-throughs.
You can try to do everything manually, but as soon as the code base under review has more than a handful of artifacts, you're lost with that approach. So what you need are tools. One helpful class of tools is static code analysis. You can get a lot of interesting information out of its results. These don't even have to be the obvious, mission-critical, dangerous spots in the code that can crash the whole thing once reached. Most problems are the result of technical debt, and their impact will only be felt much later – when it's too late.
Year after year, new features get implemented on top of the code base without any considerable – and necessary – restructuring. Often it is not even possible to adequately restructure the code base by the time the technical debt hits you hard: the original developers left the company years ago, and the new ones just have to keep the new features coming. The developers push for a complete reimplementation, management rejects it as too expensive, and the story goes on with a lot of frustration but no action. The point is not that we can understand this. The point is that we shouldn't accept it. But that is a completely different story.
What is relevant for this post is that you can at least learn the size of your technical debt using some savvy tools. I have used different tools for static code analysis. Years back, a manually configured bunch of tools like PMD/CPD, Checkstyle, FindBugs, Classycle, JDepend, JavaNCSS and such helped a lot when analyzing big Java code bases. Some of them may be pretty dead by now.
The most uncomfortable thing about these tools was their very static view of the snapshot under analysis. You measure violations and their severities and need to decide, based on the numbers, what to do. If you want to learn how much a restructuring would help, you first have to actually perform it. If you were wrong, you need to restructure again. And so on.
Along the way, I found tools like Dependometer and later its commercial successor SonarJ, which allowed you to simulate restructurings / refactorings. You defined your ideal architecture in terms of layers and vertical slices, assigned packages to the resulting cells, and threw your code at it. The tool found architectural violations and reported them to you. Then you defined a bunch of restructurings, which were performed purely virtually, so the tool measured against the virtual state of the code base, and you knew how much a certain restructuring would bring you closer to the ideal. If it helped, you threw the restructuring tasks over to development – or better, did them yourself.
The free tools I mentioned earlier didn't allow this. And even when Sonar arrived, it was more of a standardized wrapper around these or similar tools. It only measures the actual code base, and when you do reviews, you do them on the actual (or actually restructured) code base. No simulation. No play.
But even though it only provides a static view of things, Sonar is THE tool of choice for so many projects. It's a great foundation for extensions and has become a whole ecosystem. So I thought: why not teach it restructuring / refactoring simulation?
I still review Java code bases, so I need a tool for simulation – once I had enjoyed it, I didn't want to miss it. But not everybody is willing to pay for commercial product licenses with so many great open source products around. This is where my thoughts fit in perfectly, and they're not only thoughts anymore: I have actually started teaching Sonar simulation.
You can find the current progress in my Sonar fork on GitHub. I have named it whatif. Right now, whatif can already rename packages, so you can see how this action breaks cycles and unnecessary dependencies. You need a refactoring definition file, which you pass to Sonar through the parameter sonar.whatif. This file looks like this:
org.pbit.sonar.test.a: org.pbit.sonar.test.b
org.pbit.sonar.test.c: org.pbit.sonar.test.b
And so on. The left side defines what is; the right side is what it should become. A simple Java properties file.
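To illustrate how such a mapping could be resolved, here is a minimal sketch of my own (the class and method names are hypothetical, not code from the whatif fork). It loads the properties format shown above and remaps a package name, letting subpackages follow their renamed parent:

```java
import java.io.StringReader;
import java.util.Properties;

// Hypothetical sketch of applying a whatif-style rename mapping.
public class PackageRemapper {
    private final Properties mapping = new Properties();

    public PackageRemapper(String config) throws Exception {
        // The whatif file is a plain Java properties file,
        // so java.util.Properties can parse it directly.
        mapping.load(new StringReader(config));
    }

    /** Returns the virtual (renamed) package for a given real package. */
    public String remap(String pkg) {
        // Exact match first.
        String direct = mapping.getProperty(pkg);
        if (direct != null) {
            return direct;
        }
        // Otherwise, let subpackages follow a renamed parent package.
        for (String from : mapping.stringPropertyNames()) {
            if (pkg.startsWith(from + ".")) {
                return mapping.getProperty(from) + pkg.substring(from.length());
            }
        }
        return pkg; // unmapped packages stay as they are
    }

    public static void main(String[] args) throws Exception {
        String config = "org.pbit.sonar.test.a: org.pbit.sonar.test.b\n"
                      + "org.pbit.sonar.test.c: org.pbit.sonar.test.b\n";
        PackageRemapper r = new PackageRemapper(config);
        System.out.println(r.remap("org.pbit.sonar.test.a"));      // org.pbit.sonar.test.b
        System.out.println(r.remap("org.pbit.sonar.test.a.impl")); // org.pbit.sonar.test.b.impl
        System.out.println(r.remap("org.pbit.other"));             // org.pbit.other (unchanged)
    }
}
```

Note that `Properties` accepts `:` as a key-value separator, which is exactly the notation used in the configuration file above.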
The project and the restructuring / refactoring configuration are moving targets, so expect changes here. But the principles will stay the same. I'm already working on interface extraction. It's much trickier, and I will surely need a week or two to finish the first version.
When these pieces are done, I might implement a plugin (or reuse and adapt existing ones) for target architecture definition (non-UI, via a config file for sure). Then I can also measure how much the virtual refactorings have helped get closer to the ideal architecture – just the way I enjoyed it with the commercial tools.
How do I do this technically? I manipulate the AST virtually, after it is created from a source file and right before any analyzing plugin kicks in. Some Sonar plugins don't work on the AST but look at the binary code and the original text files, for analysis as well as for presentation – here I need to invest some more work. I would need to modify the relevant components so they know about my virtual code modifications.
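The actual hook into Sonar's AST involves Sonar internals, but the underlying principle can be sketched on plain source text: before any analysis sees a file, the package declaration and imports are virtually rewritten according to the mapping. The following is purely illustrative and assumes nothing about Sonar's API:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative only: shows the idea of a "virtual" rename applied to a
// source file before analysis, not the real whatif/Sonar implementation.
public class VirtualRename {

    public static String apply(String source, Map<String, String> mapping) {
        for (Map.Entry<String, String> e : mapping.entrySet()) {
            // Only touch package and import statements, not arbitrary text.
            Pattern p = Pattern.compile(
                "(?m)^(package|import)(\\s+)" + Pattern.quote(e.getKey()));
            source = p.matcher(source)
                      .replaceAll("$1$2" + Matcher.quoteReplacement(e.getValue()));
        }
        return source;
    }

    public static void main(String[] args) {
        String src = "package org.pbit.sonar.test.a;\n"
                   + "import org.pbit.sonar.test.c.Helper;\n"
                   + "public class Foo {}\n";
        // Same mapping as in the whatif configuration file above.
        String virtual = apply(src, Map.of(
            "org.pbit.sonar.test.a", "org.pbit.sonar.test.b",
            "org.pbit.sonar.test.c", "org.pbit.sonar.test.b"));
        // The analyzed (virtual) view now lives in the renamed packages,
        // while the file on disk stays untouched.
        System.out.print(virtual);
    }
}
```

The key point is that the rewrite happens only in the in-memory representation handed to the analyzers; the file on disk is never changed, which is exactly what makes the restructuring a simulation.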
Step by step, I will grow this. And of course I appreciate any help, contribution, and feedback.