Here's an email from one of our consultants, and my response (edited for anonymity and to remove the internal cursing!).
So, our product manager asked me today whether we could (just) let quality slide for a while, in the interests of getting stuff done faster.
I gibbered and muttered for a bit, and then gave an answer which amounted to "I don't know how to do that". Which is true; having personally experienced the benefits of practices like pairing, automated tests, and refactoring, and having done things that way exclusively for quite a while, I'm not sure I know HOW to do it the old way.
I'm not sure that his concern has gone away, though. He describes his experience as "mostly startup-like stuff": throwing things together quickly, and potentially throwing them away a few short years later. He's certainly never worked in an agile environment before, and is doubtful about the value of some of our practices.
Any advice?
As in most companies with a history of software development, there are people who have been burned by non-agile development and people who haven't had that experience, and the two groups have very different perceptions.
Here are some principles:
1) The customer is in control of the quality sliders for a project. We (IT, not just Cogent) are a service provider that gives advice and recommendations, and then acts on the instructions and direction of the business.
2) If the customer asks us to do something we (Cogent) fundamentally disagree with, we tell them that we are not the right group to provide that service and we quietly, respectfully disengage. Think of a surgeon who was told to perform operations without sterilisation, and that someone else would deal with the infection later.
3) There certainly are times when "not doing things the right way" is appropriate. I wrote the first cuts of Cogent Times with no test cases, because I didn't know if it would get any traction (i.e. be useful enough for people to use it). I don't regret it, and I would do it that way again. Now I'm adding test cases incrementally. When Runway was only developed by Simon it didn't have tests either - we demonstrated that this didn't scale. Kent's blog on the Flight of a Startup describes this well.
4) I think it's important to understand timeframes, though. I believe that letting (internal) quality slide will get stuff done faster for a few weeks. Other people seem to think that it will let them go faster for months or years. I disagree. Put simply, beyond those first few weeks, dropping quality won't get things done faster. We have the courage of our convictions here - we use these practices when we spend our own money on our own applications, when we really, truly, want to go as fast as we can.
5) The impact of lower (internal) quality is exacerbated by:
number of people working on the application
quality of the people working on the application
size of the application
level of understanding of the problem domain
level of change in requirements
flexibility of the technology stack
tolerance for defects in production
If your application is being built in a modern language, by one skilled person who is familiar with the product domain, working with people who know what they want and can describe it in detail without hesitation, false starts or iterations, and no one loses any money (or lives) if there are production defects, then what we do is definitely overkill. I've yet to see an environment that matches this description.
Having said all that, in the bigger picture the product manager isn't necessarily the person who makes the decisions on where the quality sliders are. Those are frequently decisions that need to be made at a higher level, as they affect the total life cycle cost of the application, and the product manager is only responsible for the development part. This is the classic breakdown mode of process change - the people who bear the cost of the process change are not the people who reap its benefits. Producing a stable, defect-free application that can be extended isn't necessarily in the product manager's interest, though it may very well be in the company's overall interest, especially when funding is allocated per project rather than per product. I present that argument mostly for completeness, since I think that the payback period of our practices is short enough that they're in the product manager's interest as well.
Things get even worse when the organisation funds work as "projects" rather than investing in "products". In these situations the value of feedback is reduced (since success is judged on the delivery of the promised features) and the focus is on the short term (the delivery phase for those features).
However, it's also fair for the customer (and I mean the big-picture customer, not just the product manager) to suggest experiments - a good question in those situations is what will be measured to determine whether things are better, and better for whom? You need all the stakeholders involved in these discussions, not just the product owners.
Also bear in mind that we are not the only people to say these kinds of things. I believe part of what you're seeing is descent into the "easiest" possible implementation of Scrum. Here's what they're implicitly asking for:
"To be able to change their mind on requirements each iteration, and to obtain the benefits of this flexibility, without either performing enough up-front analysis to understand the fundamentals of the problem domain, or making any investment in the technical robustness of the application to support change"
This is very appealing from a business perspective, but it's an illusion. Jeff Sutherland has said he's never seen a hyper-productive Scrum team that didn't use the XP technical practices. Martin Fowler has written on what he calls Flaccid Scrum. The trend is to no longer consider the practices you mention as "agile", but simply "contemporary" - things you should be doing without a second thought.
So my first response to these kinds of suggestions (actually, my second, after I stop cursing) is that if the proposals would make things go faster then we'd happily implement them. But they won't. If I'm challenged further then I would ask them to find someone else on the technical staff who believes that the proposals will make things go faster, and then I'd step aside for them.
Seriously, hospital administrators don't tell surgeons how to maintain hygiene and where to cut. The customer can do whatever they want - we don't have to do it for them.
Not sure if that helps or not.
Steve
There's significant evidence that "lowering quality" actually increases project timeframes. When faced with this dilemma, there's nearly always something else at play - like a desperate desire to deliver, combined with a lack of understanding of how to do it properly.
The thing is, some of these aspects of 'quality' aren't really about quality at all. Much of it is like asking the pit crew to 'only screw the wheel nuts on halfway so the driver can get out faster'.
No version to print?! I need to hang it on the wall here.
A point of order: Runway did have tests (though certainly not sufficient in my opinion) for the server side but not the client side. Your main argument still holds though - client-side development was much harder as a result.
Simon, I stand (or sit) corrected. :-)
Jon,
We have trouble getting over the barrier of explaining why our practices are hygiene. We haven't found a way to express this clearly - they don't see the risks that we do, and we can't justify things well.
I'm not really having issues. It seems that when I explain it to people, they understand. They don't like it, or they may disagree, but they certainly trust that what I say is valid.
I wonder what the difference is.