The Best Kind of Programming is Refactoring
On one side, I'm trying to use this opportunity to continuously improve the existing code base, and I'd like to apply another refactoring pass as a follow-up to a previous refactoring.
I consider the refactoring performed last year a success. It consolidated embedded SQL that was hard-coded and spread through about five different parts of the application. The refactoring was fairly simple, consolidated the code into one object, and has been working great in production since we deployed it. I don't think we've seen even one defect come from it. It also let us implement some complicated logic, including recursion, for a later project request in a much more controlled fashion (modifying one object instead of five or more).
But it could still be improved. It consolidated the embedded SQL but didn't move it to the most central place such logic could live: on the server, in an Oracle PL/SQL package. If it were on the server, its functionality would be reusable not only by the client software but also by other PL/SQL procedures. With this new project, I'm considering making this refactoring part of the applicable change request, but I'm very conflicted about it.
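A minimal sketch of what that server-side home might look like. The package name, table, and columns here are hypothetical stand-ins, not from the actual system; the point is only the shape of the refactoring — the consolidated SQL, recursion included, becomes callable both from the client and from other PL/SQL code.

```sql
-- Hypothetical package illustrating the proposed refactoring: the
-- consolidated SQL lives on the server, reusable by the client
-- software and by any other PL/SQL procedure.
CREATE OR REPLACE PACKAGE part_hierarchy_pkg AS
  -- Returns component rows for a given assembly, including
  -- recursive expansion of sub-assemblies.
  FUNCTION get_components(p_assembly_id IN NUMBER) RETURN SYS_REFCURSOR;
END part_hierarchy_pkg;
/

CREATE OR REPLACE PACKAGE BODY part_hierarchy_pkg AS
  FUNCTION get_components(p_assembly_id IN NUMBER) RETURN SYS_REFCURSOR IS
    l_cur SYS_REFCURSOR;
  BEGIN
    OPEN l_cur FOR
      SELECT component_id, quantity, LEVEL AS depth
        FROM assembly_components
       START WITH assembly_id = p_assembly_id
       CONNECT BY PRIOR component_id = assembly_id;  -- Oracle's built-in recursion
    RETURN l_cur;
  END get_components;
END part_hierarchy_pkg;
/
```

Another PL/SQL procedure could then call `part_hierarchy_pkg.get_components` directly, which is exactly the reuse the client-side version can't offer.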
If It Ain't Broke, Don't Fix It
In an earlier post I said that writers say "The best kind of writing is re-writing," and that perhaps "the best kind of programming is refactoring." I still feel those are valid statements and that disciplined, regular refactoring is a good thing. However, I'm also under pressure to deliver this project in a timely fashion. So what else is new?
Here's the dilemma. I have to decide how to effect the required functionality change. Do I add the new code object at the client level and have it integrate with the existing, recently refactored business object, or do I build it with the logic at the PL/SQL level?
Building the new object's guts at the PL/SQL level seems to make sense; it's essentially doing what I did not do with last year's object. But the new object will need to communicate with the existing one, and that means I also have to refactor the existing object. That adds overhead to the project for a benefit whose value to the organization I can't yet quantify. In addition, this refactoring would pose a few challenges.
I'm sure PL/SQL can handle the logic demands, but I'm unsure whether the way I'm currently performing dataset management in the client will translate cleanly to PL/SQL. The object's internal data grids will need to be replaced with cursors or work tables, and those perform differently from the client's grid technology. Some of the more complicated functionality in the current object may also need to be adjusted for the paradigm shift. So I have concerns that moving the logic to the PL/SQL layer isn't just a rote migration; there will also be a translation effort that introduces some risk.
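To make the paradigm shift concrete: where the client object iterates over an in-memory grid with random access to rows, the PL/SQL version would walk a cursor (or stage rows in a work table). A rough sketch, again with hypothetical names:

```sql
-- Hypothetical translation of grid-style iteration into a cursor loop.
-- The client object walks grid rows in memory; server-side, the same
-- pass becomes a cursor FOR loop over the result set.
CREATE OR REPLACE PROCEDURE apply_component_logic(p_assembly_id IN NUMBER) IS
BEGIN
  FOR rec IN (SELECT component_id, quantity
                FROM assembly_components
               WHERE assembly_id = p_assembly_id) LOOP
    -- The complicated per-row logic from the client object would be
    -- re-expressed here. Cursor rows arrive one at a time, so any
    -- logic that relied on random access to the grid (lookahead,
    -- backtracking) would need a work table or collection instead.
    NULL;  -- placeholder for the translated logic
  END LOOP;
END apply_component_logic;
/
```

This is the translation effort in miniature: the loop itself is trivial, but the access pattern changes, and that is where the risk lives.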
It now becomes a battle of "The best kind of programming is refactoring" versus "If it ain't broke, don't fix it." Of course, that's the whole point of refactoring, as McConnell notes in Code Complete: "Refactoring refers to changes in working code..."
On the other hand, "If it ain't broke, don't fix it," says that you might be better off spending time on other improvements. The old saying has some merit with regard to stability, but it can also be perceived as a philosophy of complacency. Fortunately, I don't think complacency is the problem here since the current object is already the product of a recent refactoring and is performing well.
Conclusion
I don't want to work unnecessary overtime. I did that on the last project; it went on for a year and a half and represented a sizable sacrifice on my part that I'm fairly certain was unmatched by anyone else, and certainly not by the compensation. So I will probably keep the code at the client level, but still encapsulated in a business object; this saves us the additional refactoring pass over the existing object. The new business objects will be able to communicate with it, and I don't foresee significant performance problems. If we need to, we can attack the logic migration later, when such refactoring is not subject to the project's time constraints.
Is that the right decision? You could argue either way, but a few points from McConnell's refactoring checklist bear emphasis here:
- Are you keeping each refactoring small? A complete move of the logic would carry a proportionate amount of risk and regression testing, so electing to postpone the full refactoring helps here. I could still peel off outer bits and at least start the process, leaving the rest in the client.
- Have you considered the riskiness of the specific refactoring and adjusted your approach accordingly? See above.
- Does the change enhance the program's internal quality rather than degrade it? This one is trickier to answer. I don't think refactoring the entire object would have a significant effect on performance or readability, but it would give us more options in future solution designs. As I said above, perhaps that's not valuable enough to pursue at this time.
The battle between fixing something and cobbling more crappy code onto a crappy foundation is an endless one for most developers, but especially for those who maintain code in shops that aren't in the business of commercial software development. Developers in commercial software shops can afford to be purists; developers subject to corporate sensibilities (or insensibilities) have to accept that their leaders prize time to market over software quality. Many of those leaders don't even understand the concept of software quality, and perhaps they don't need to if their business is not software. But at some point someone does, and that someone has to fight the battles I've described here. Whether the people at the top understand it or not, software quality does affect the company's costs and efficiencies.