Friday, December 15, 2006

Do unto others...

Ron Jeffries enigmatically once said of the XP practices, "they're only rules". What I think he meant was that the practices are a social contract within the team - everyone is expected to abide by the contract, but the team is also free to change the contract when that's appropriate. What we don't want on an XP team is someone unilaterally deciding that the rules don't apply to them. Instead, we want them to raise an issue in a retrospective so the team can hear why some rule isn't working for that person, so that the team can provide advice, support, suggestions, or possibly change the rule.

Developers understand this in the development team context, and also with regard to the rules covering the interactions of the customer and management teams with the developers, but I find that they are rather more tolerant of breaches of the rules covering the interactions going the other way. In particular, I regularly hear people suggesting that although the customer/project manager wants to do things a certain way, the developers have decided to do something else "in the interests of the project". What's worse, I feel the urge to do this sort of thing myself!

This pattern appears mainly in two contexts - priorities, and quality. When developers know better, they choose to implement features in an order different to the order requested by the customer, usually because it will be "more efficient". This is fine when everything gets done, but if there are some hiccups the customer may be left with features they didn't really need while also missing features they think are more important. While one order may truly be more efficient, that may not be the most important thing from the business perspective, and if there's disagreement the final decision on priority rests with the customer.

The equivalent behaviour regarding quality is harder to detect. Sometimes the developers decide to implement lower quality than the customer requested, but more often I see the developers decide to implement higher quality. The context is usually that the developers have two approaches - one can be implemented quickly but is hard to change/maintain, and the other is slower to implement but easier to maintain - and the customer prefers the "quick and dirty" approach. In spite of this, the developers decide to implement the "nice" approach. Once again, this is fine if the developers can somehow do it in the same timeframe (overtime perhaps?), but not if it's done at the cost of other features.

Developers need to recognise (intellectually and emotionally) that they are in a service industry, and when push comes to shove they take direction from the business, represented by the customers and management. Developers have a responsibility to present their views and recommendations, with all the supporting information, and then let the business make the business decisions, which include how to spend time/money. When business and development don't agree there are two main reasons: either development haven't presented their position in a way that the business understands, in which case the developers need to improve their presentation and influence skills, or the business understands the development position quite well but it isn't the most critical factor in the outcome. Sometimes developers don't have the big picture! In neither case is the problem that "the business is stupid" or "the business made the wrong decision", though they certainly may make decisions that the developers don't agree with (it undoubtedly happens the other way around as well!).

Of course, none of this applies when people are trying to make decisions that they aren't actually responsible for - in that situation everyone needs to say "thanks very much for your input, but I don't think you're responsible for the final decision". It's common for this to happen with estimation, when the business tells the developers what needs to be done (their responsibility) and how long it's going to take (the developers' responsibility). Sometimes the lines are blurred though - the business is making "draft" decisions, fully expecting them to be reviewed by developers, rather than final decisions - and the situation needs to be clarified.

So if you're a developer who interacts with the business then you have a responsibility to respect the decisions of others, and you'll probably benefit from honing your influence skills as well as your technical skills.

Tuesday, December 12, 2006

Easy Access Training

In 2007 I'd like to try a few different things, and the first of these is what I'm calling Easy Access Training. Many developers have trouble getting budget and/or time off for training, so I hope to offer one day of training per month on a weekend, and have the price float depending on demand. I expect the rate to come out lower than a commercial course, which seems to be about $600/day (+GST), but I honestly don't know how much lower; probably it will depend on the courses that I offer. There seems to be a lot of interest in test driven development, so I'm starting there, but I'm keen to hear what other courses people would be interested in on this basis.

What do people think of this idea?

Thursday, December 7, 2006

Shu Ha Ri

I'm fascinated to read various posts (mostly blogs) about how "Agile" is bad, sometimes accompanied by a caveat that "agile" is ok. To me any distinction between the two is artificial - from its inception agile development has encouraged adaptation of the method. I see the distinction arising for two reasons - (i) simple misunderstanding, since adaptation isn't included in every description of agile development; and (ii) a mismatch between the question and the answer in shu-ha-ri terms.

In these discussions "Agile" is characterised as a fixed set of practices that must be adhered to, regardless of context or experience. While some people may approach their use of agile this way, I feel this is a misinterpretation of agile development - false doctrine, or if you want to be extreme, heresy!

Adaptation is certainly part of extreme programming, even if this word isn't used. In the second edition Kent and Cindee talk about reflection as one of the XP principles, and using root cause analysis as one of the corollary practices. These are both part of adapting XP to the local context. When the agile manifesto said that "...we have come to value...responding to change over following a plan" I took that to include the process itself! During a recent panel discussion I was asked which agile practice I would introduce first, if I could only introduce one, and my eventual answer was "retrospectives". If you introduce regular, open, honest retrospectives then, given enough time and some knowledge of alternatives, you'll eventually get the process that you need. And I think you'd probably end up with a process you'd classify as agile. Diana Larsen speculated along the same lines.

So if adaptation is an integral part of agile development, why doesn't everyone understand that? I think the answer is embedded in how we teach and talk about agile. In "Agile Software Development" Alistair Cockburn talks about Shu, Ha, and Ri as three stages in learning. In the Shu stage, the student needs a clear set of instructions or rules, and the faith that these rules will give them the results they need. In the Ha stage, the student can automatically apply the rules, but has begun to understand that the rules don't always give the best results. They start to look for these exceptional cases and apply different rules in these contexts. They gain flexibility but, and this is the important part, they need to be fluent in the basic rules first. Kent had something similar in mind when he suggested renaming the XP practices "etudes" - things that you practiced for fluency, but that you didn't necessarily apply rigorously every single day. In the Ri stage the student has moved beyond rules, but that's beyond the context of this discussion!

Most people, and I don't think this is restricted to software development, want to skip the Shu stage and move straight to Ha. They want to hear that the rules don't apply all the time and then decide when and when not to apply them. I hear this all the time - we want to do risk driven pairing, we'll only write tests for the complex parts of the code, things like that. The inconvenient truth is that this approach doesn't work. Until you've personally applied the practices in both appropriate and inappropriate situations you're just not in a position to make those decisions. You'll apply the practice where you want to apply the practice, but that won't be the same as where you should apply the practice.

So when I'm introducing people to agile practices, I'll say that you pair all the time, that you write unit tests for every method, that you should have 100% code coverage from your unit tests. I know these things aren't true, and it's not what I would say to someone who already had some experience with the practices, but they are very useful first approximations. They are Shu statements. People experienced with software development, but not with agile practices, can sense that these aren't complete answers - if they think that this is all there is to agile, that there isn't more sophistication further down the track, then they are liable to reject agile as snake oil. It's easy for them to get this impression from introductory presentations, since it's hard to come up with material suitable for everyone in the audience, and it's even easier if the presenter hasn't moved past Shu themselves. We teach students in groups of similar ability for a reason!

In an ideal world this distinction between "Agile" and "agile" would disappear, but I'm not hopeful. Instead I think we'll see two things happen - some people will create branded, proprietary agile methods that aren't subject to community reinterpretation, and other people will stop talking about 'agile' altogether and just 'do stuff that works for them'. Time will tell.