Category Archives: Refactoring

Update on Continuous Deployment a few months in

Since I started doing continuous deployment on this project in early August 2011, I’ve averaged just under four fully automated, zero-downtime production deployments per day.

Obviously, that’s an average, so some days there are more, and some days there are fewer.

How many times during that span has improperly tested code broken things in a way that impacted users? Just once, and it was fixed in just a few minutes by adding back the code that I over-zealously removed.

Also in that time, we went from 0% automated test coverage to 18% automated test coverage, refactored a lot of duplicate code, and added new features.

Oh no, not more of the same tedious TDD discussion.

Every once in a while, otherwise reasonable people get together to argue about TDD with religious zeal. In the most recent flare-up, I’ve been disappointed by all sides, as nobody is saying anything new.

Yawn

At the risk of adding yet more noise, I did have two nuanced thoughts that are at least new to me, and I thought I would share them, along with a recent personal anecdote. If this is obvious or old hat to you, then I’m sorry. If you think I’m too stupid for words and that I’m drinking and/or selling snake-oil enriched kool-aid, I look forward to what will undoubtedly be informed and insightful feedback.

1. s/driven/aware/

The hardcore position of “never write a line of production code without a failing test for it” probably does more harm than good. Different kinds of code require varying degrees of effort (cost) to write and maintain tests for. Different kinds of code give varying degrees of benefit from test automation. Without always realizing it, developers make cost-benefit decisions all the time, and good development organizations empower their developers to act on those decisions.

That said, the cost-benefit decisions developers make must at least be informed decisions. A professional developer who hasn’t taken the time to learn how to use the appropriate test frameworks for their language/environment (JUnit, NUnit, whatever) is just plain negligent in 2009. Test automation is just one tool in a competent developer’s toolbox, but a critical one. I wouldn’t trust a carpenter who didn’t know what a hammer was, or a cardiologist who hadn’t bothered to learn about this newfangled angioplasty business.

Test-driven may not be appropriate for every context, but everyone needs to be at least test-aware.

2. s/first/concurrently/

Test-first is a really helpful approach, but it doesn’t work with the way everyone thinks, and mandating that everyone must always think in exactly the same way is the worst sort of micro-management.  The other extreme, writing test automation for a large system after it’s complete, is often prohibitively difficult and (frankly) boring as hell.

My advice is to always at least think about how you would write tests for your code before writing it. That will help keep you from painting yourself into untestable corners. Also, interleaving test writing as you finish each small piece of the system is going to be much easier than testing the whole thing after the fact. Personally, I move back and forth between writing the tests first and writing the code first. The key for me is that I’m working in short code-test-code-test cycles, using persistent (that is, I don’t throw it away when I’m done) test code as the primary mechanism for executing the code I’m writing as I’m writing it. I don’t think of the process as being test-first; I think about testing concurrently with coding.
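For what it’s worth, here’s a minimal sketch of the kind of persistent, run-as-I-go test code I mean, in Java with JUnit 4. The DiscountCalculator class and its single pricing rule are invented for illustration; they’re not from any project mentioned here.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // The test lives alongside the production code and sticks around after I'm
    // done; running it is how I exercise the code while I'm writing it.
    public class DiscountCalculatorTest {

        @Test
        public void ordersUnderTheThresholdGetNoDiscount() {
            DiscountCalculator calculator = new DiscountCalculator();
            assertEquals(0.00, calculator.discountFor(99.99), 0.001);
        }

        @Test
        public void ordersAtOrAboveTheThresholdGetTenPercentOff() {
            DiscountCalculator calculator = new DiscountCalculator();
            assertEquals(10.00, calculator.discountFor(100.00), 0.001);
        }
    }

    // The production code, written in the same short code-test-code-test cycle
    // as the tests above. Sometimes the test comes first, sometimes the code.
    class DiscountCalculator {
        private static final double THRESHOLD = 100.00;

        double discountFor(double orderTotal) {
            return orderTotal >= THRESHOLD ? orderTotal * 0.10 : 0.00;
        }
    }

Whether the test or the production method gets written first varies; the point is that the test exists before I call the code done, and it keeps running long after.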

Recent Anecdote

Sure, anecdotes aren’t data, and they can’t prove anything, so take from this one what you will.

I just finished a pretty big refactoring project (that is, a “pure” refactoring: the external interfaces and behavior of the existing system stayed the same, but the underlying implementation was improved) of a system that had some decent test automation. Every time I got an edge-case behavior wrong, introduced a side effect, or removed a necessary side effect (yuck), a test would go from green to red. This saved me at least a few days of development and testing time, and it reduced the chances that I would release bugs to our QA guy (bad) or our production system (even worse).

Code Smells, Correlations, and Poisoned Coffee

Winston Churchill had a much faster and sharper wit than I have. Consider this* famous exchange:

Lady Astor: “If you were my husband, I would put poison in your coffee.”

Churchill: “If I were your husband, I would drink it.”

I, on the other hand, always think of the correct thing to say in an argument a few days later. It’s not just for ex-post-facto arguments, either. I’ve been doing some technical trainings lately, and I try to have a more conversational style than just reading lots of bullet points from slides. I figure that if I don’t understand a topic well enough to speak about it extemporaneously, then I have no business talking about it.

In the last presentation I made, I talked about the practice of refactoring and the concept of code smells. I gave examples of a few high-value smells and then offered this little wishy-washy disclaimer:

“When you encounter these code smells, it doesn’t mean that you have to change it, it just means that you should look at your code closely here, as you may have problems.”

It’s not so bad; it’s pretty much what everyone says about refactoring. What I should have said was this:

Code smells are correlated with quality problems. Heavily commented code blocks aren’t necessarily bad, but lots of comments very strongly correlate with readability problems. You don’t fix it by deleting the comments. You fix it by making your code readable enough to stand without the “how does it work” comments.

Long methods aren’t necessarily bad, but long methods very strongly correlate with cohesion problems. It’s possible, and sometimes required, to have a long method that’s perfectly cohesive, but it’s outside the norm.

And, of course, you don’t treat the correlated symptom; you fix the actual underlying problem. Breaking a long ProcessThings() method into three arbitrary methods called StepOne(), StepTwo(), and StepThree() doesn’t actually make the code any better.

You see, that’s not wishy-washy at all, and it appeals to the distinction between correlation and causation. It’s not as funny as poisoned coffee, but it has some concreteness to it.
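To hang some code on those two points, here’s a contrived sketch in Java; every class, method, and business rule in it is invented for illustration. The “before” version leans on “how does it work” comments and one long, do-everything method; the “after” version extracts methods that are named for what they do.

    import java.util.ArrayList;
    import java.util.List;

    // Before: one long method, propped up by comments and magic numbers.
    class OrderProcessorBefore {

        void processOrder(Order order) {
            // reject orders that contain no items
            if (order.items.isEmpty()) {
                throw new IllegalArgumentException("empty order");
            }
            // gold customers (tier 2) get 10% off totals of 100.00 or more
            if (order.customerTier == 2 && order.total >= 100.00) {
                order.total = order.total - (order.total * 0.10);
            }
            // tell the warehouse to ship it
            System.out.println("Ship order " + order.id);
        }
    }

    // After: the extracted methods carry the meaning the comments used to,
    // and each one does a single coherent thing.
    class OrderProcessorAfter {

        void processOrder(Order order) {
            rejectEmptyOrders(order);
            applyGoldCustomerDiscount(order);
            notifyWarehouse(order);
        }

        private void rejectEmptyOrders(Order order) {
            if (order.items.isEmpty()) {
                throw new IllegalArgumentException("empty order");
            }
        }

        private void applyGoldCustomerDiscount(Order order) {
            if (order.isGoldCustomer() && order.total >= 100.00) {
                order.total -= order.total * 0.10;
            }
        }

        private void notifyWarehouse(Order order) {
            System.out.println("Ship order " + order.id);
        }
    }

    class Order {
        String id;
        double total;
        int customerTier;
        List<String> items = new ArrayList<String>();

        boolean isGoldCustomer() {
            return customerTier == 2;
        }
    }

Chopping processOrder() into StepOne(), StepTwo(), and StepThree() at arbitrary line boundaries would make the long-method smell go away without touching the underlying cohesion problem, which is exactly the correlation-versus-causation trap.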

*After looking up the Astor/Churchill exchange, I found that it’s very possibly an apocryphal story. Oh well, it’s still funny.