I like code review but complain about it all the time. Why? The other day I had a eureka moment: There are two quite different kinds, which I will give the misleading names agile review and waterfall review, and I'm annoyed when the reviewers perform one kind of review and I want the other.
Waterfall development is centered around planning, development and deployment in stages: First plan well and get rid of most bugs, then implement what's planned, then review and test to fix the remaining bugs, then deploy. In this model, code review focuses on catching minor bugs, ensuring code style harmony, proper use of internal/third-party APIs, etc. Big questions such as "is this feature a good idea at all?" are out of scope, because that's been settled in the planning phase. (I'm not saying waterfall works in practice, only describing the theory.)
Agile development is different. Instead of planning well, agile development is based around an iterative process: Implement something that seems to be a good idea, consider it, learn something from it, finish it and then deploy, then pick something else to do, something one now knows more about. Code review can be the first part of the consideration and learning process, so it's a good time to discover that although a feature seemed like a good idea, it's going to be a morass and should be dropped at once.
An example may show this better. Consider a server that serves requests from clients and uses a sharded database backend, and a customer who complains that although performance is fine in general, a few requests are much too slow. The reason for this hypothetical problem is that combining results from different shards is too slow.
In an agile process, the team might open an issue saying "some requests are too slow, see support ticket 123456" and asking to bring the 95th-percentile response time closer to the 50th. The developer investigates, writes some code, sees that it works on the happy path, and asks for review. In this case, the first thing to discuss during code review is what problem is solved by the change and whether it is the ultimate problem or more of a bandaid. Next, perhaps, how to test this approach. Only after that does it make sense to write the tests, implement the sad paths, and review the code style, level of testing and other implementation details.
In a waterfall process, one person does the investigation and opens an issue saying something more like "access database shards in parallel so that performance is better when many shards are needed for one request", and the questions for code review are whether the implementation conforms to the style guide, whether the tests cover the problem and will avoid false positives/negatives, etc.
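The change that waterfall issue asks for might look something like the following sketch. Everything here is my illustration, not code from the post: `query_shard`, the shard list and the merge step are all hypothetical stand-ins for the real server.

```python
from concurrent.futures import ThreadPoolExecutor

def query_shard(shard_id, request):
    # Placeholder for a real network call to one database shard.
    return [(shard_id, request)]

def query_all_shards(shard_ids, request):
    # Issue the per-shard queries concurrently instead of one by one,
    # so total latency is roughly that of the slowest shard rather
    # than the sum over all shards.
    with ThreadPoolExecutor(max_workers=len(shard_ids)) as pool:
        partials = pool.map(lambda s: query_shard(s, request), shard_ids)
    # Combine the partial results; in the hypothetical bug report,
    # this combining step was what made a few requests slow.
    merged = []
    for part in partials:
        merged.extend(part)
    return merged
```

A waterfall review of this would check style and test coverage; an agile review would first ask whether parallel fan-out addresses the customer's actual complaint at all.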
These two kinds of review discuss different questions, and they even work best on different code. An agile review should start much earlier than a waterfall review: If you're going to discuss whether you've understood the bug at all, then it makes sense to discuss that before you bother with 100% unit test coverage. If the code's going to be dropped there's no point in getting 100% test coverage and fixing every corner case.
The waterfall review is based on a much more firmly worded issue. There's little scope for learning about the problem in "access database shards in parallel…", so there's no point in doing the review until the code is complete and all the aspects of the review can be done at the same time.
I wrote above that the names are misleading. What I call waterfall review can be necessary in an agile process. A very agile team where I once worked had the freedom to modify or drop almost every feature as we learned about the problem space. But only almost every feature. There was a contract that demanded that we interoperate with a named third-party service using a specific protocol. Using that protocol was a poor choice; it should have been buried, not used, but that wasn't our choice to make.
Most code related to that third-party service had to be reviewed in the waterfall manner, not our usual agile style. I remember a terribly frustrating review where a reviewer wanted to do his usual excellent agile thing and it just wasn't helpful. The contract was signed, the details were fixed and the discussion could not affect anything.
The opposite mistake is more common: Asking waterfall questions even though the team is supposed to work in an agile manner. I think the main reason for that is that the reviewers want to write code, and a waterfall review doesn't require mental immersion in the way an agile review does. Just scan for style aberrations or other surface issues, ask for another code revision and get back to work.
It's a bit of a pity that the waterfall model of software development is out of fashion, isn't it? But maybe it's possible to tweak scrum/agile development to be almost like waterfall? What would be necessary?
First, introduce cycles and milestones to more or less replace the continuous delivery. Adult supervision requires setting goals. It's okay to run CI tools, as long as you have at least a couple of future magic dates; call them "next step along the waterfall" and "final arrival at the ocean", or call them "internal/partner alpha launch" and "public launch date".
Second, do it right instead of the continuous improvement. The waterfall model says that it's cheaper to fix bugs earlier, so do it right once and it'll stay right. This too is simple to integrate into Scrum: just boost the code review and pre-implementation consultation. Talk about how important it is to have the right design and to avoid technical debt. Make technical debt seem worse than the many other kinds of software deficiencies.
The author of the most pleasant Java code I've ever seen wrote the following paragraph on an internal mailing list last year, or perhaps the year before:
By the time I met my last significant mentor I had been programming for 15-16 years. I was skeptical of much of what he wanted me to do, but I shut up and gave it a few weeks — eventually incorporating much of his thinking into how I work and evolving my own style. He was amazingly bad at explaining things verbally, and to be honest: his code looks like shit. But he kept coming into my office and demanding all the right things of me; that I think about what I output ("seriously, do you need that log output!?"), that I have near 100% test coverage even if it means the test code is 5 times larger than the code it tests, that I always remove code that isn't used, that I never implement methods that I don't need right now, that all public methods and classes are always documented and that the documentation be kept in sync with the implementation, that the code will build and run its tests on any machine and without screwing over other developers by assuming what is and isn't installed on the machine, that it compiles without warnings, that software is trivial to fire up, that any and all performance-critical code is benchmarked, that code is never prematurely optimized but that design work always considers the performance implications of choices, etc. etc. I got yelled at every day for at least 3 months.
That mailing list was internal to a company that cared deeply about doing all the right things, avoided almost all of the cliché mistakes and yet missed deadlines serially. (I didn't know it yet, but later, the company would cancel its main product, weeks before release.) The demanding mentor was someone whose code looked like shit, yet demanded all the right things. There were parallels there, I thought, maybe I could use one to learn about the other. So I put it aside for a think and a ramble.
The daily standup and periodic sprint planning in Scrum expose ideas to more people. There's more chat about things. Methodic code review pushes in the same direction.
That has many good effects. One which may not be so good is that dubious features are easily killed before they're implemented, or rejected when the first part is reviewed. Sometimes that makes me unhappy.
Dubious features are Schrödinger's cats: They can be anything from damaging to insanely great.
Insanely great features are insanely great in hindsight, but the ones I've written weren't great in advance, and I fear most of them wouldn't have passed the Scrum process. It's so easy to say no and concentrate on the sprint story. An answer like "write it and see, we can always ditch it afterwards" isn't in the spirit of Scrum.
I suppose that's no bad thing, overall. More deadlines met. But fewer inspired features.
I am reminded of the Inmos Transputer.
That, as my older readers may still vaguely remember, was a freak processor in the eighties. It was designed for parallelism: Its fundamental design was for a computer with many Transputers, not one with a single humongous blob. Each CPU was small, simple (the Wikipedia page includes the complete instruction set) and linked to four other CPUs using bidirectional message-passing connections, and the design allowed vast CPU meshes with message routing and forwarding.
The thing that reminds me of the transputer is the way those links worked. When a Transputer received a message that had to be forwarded, it would prioritise communication over its own computation.
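That scheduling rule can be sketched as a toy loop. This is entirely my illustration of the idea, not Transputer code; the node, its queues and the step count are all made up for the example.

```python
from collections import deque

def run_node(inbox, outbox, local_work, steps):
    # Each step, forwarding a queued message takes priority over the
    # node's own computation, like a Transputer servicing its links.
    done = []
    for _ in range(steps):
        if inbox:
            outbox.append(inbox.popleft())     # forward first
        elif local_work:
            done.append(local_work.popleft())  # compute only when idle
    return done

inbox = deque(["msg1", "msg2"])
outbox = []
work = deque(["task1", "task2"])
finished = run_node(inbox, outbox, work, steps=4)
# Both queued messages are forwarded before any local task runs.
```

A node in a busy mesh can therefore end up doing nothing but forwarding, which is exactly the state the next paragraph describes.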
I am reminded of this because my mail is down. A great big failure happened during Christmas vacation. Then a routing mishap left me unable to take part in a video conference this morning. I am forced to prioritise my own programming over message passing, and it feels so good. Yesterday was great.
Tomorrow I shall apologise to borud about my unresponsiveness. But today, I plan to wallow in solitary hacking.
Actually I'll wait a few hours with publishing this. There's a chance someone might see it.
One really good thing I learnt from working with Abhijit on Archiveopteryx is how to get quickly into the flow. Click, flow.
The basic pattern involves a good kind of task, and a good way to get to it. Both parts are vital.
Blog posting number one, from a typical silver bullet merchant