Sunday, April 28, 2013

Agile Clinic: Dear Allan, we have a little problem with Agile...

Consider this blog an Agile Clinic. On Friday an e-mail dropped into my mailbox asking if I could help. The sender has graciously agreed to let me share the mail and my advice with you, all anonymously of course…



The sender is new to the team, new to the company, they are developing a custom web app for a client, i.e. they are an ESP or consultancy.



“the Developers work in sprints, estimating tasks in JIRA as they go. Sprints last three weeks, including planning, development and testing. I have been tasked to produce burndowns to keep track of how the Dev cells are doing.”



OK, sprints and estimates are good. I’m no fan of Jira, or any other electronic tool, but most teams use one so nothing odd so far. But then:



“three week sprints”: these are usually a sign of a problem themselves.



I’ll rant about 3-week Sprints some other time but right now the main points are:


Three weeks is not a natural rhythm; there is no natural cycle I know of which takes three weeks. One week, yes; two weeks, yes; four weeks (a month), yes; but three? No.



In my experience teams do 3 weeks because they don’t feel they are up to shorter iterations. But the point of short iterations is to make you good at working in short cycles, so extending the period means you’ve ducked the first challenge.



“planning, development and testing” Good, but I immediately have two lines of enquiry to pursue. Planning should take an afternoon; if it needs to be called out as an activity I wonder if it is occupying a lot of time.



Second: testing should come before development, well, at least “test scripts”, and it should be automated so if it comes after development it is trivial.



Still, many teams are in this position so it might be nothing. Or again, it could be a sign the team are not challenging themselves.



Also, you are producing the burndowns but by the sounds of it you are not in the dev teams? And you have Jira? I would expect that either Jira produces them automatically or each dev “cell” produces their own. Again, more investigation needed.



Continuing the story:



“The problem I’m encountering is this: we work to a fixed timetable, so it isn’t really Agile.”



No, Agile works best in fixed deadline environments. See Myth number 9 in my recent Agile Connection piece “12 Myths of Agile” - itself based on an earlier blog, “11 Agile Myths and 2 Truths”.



“We have three weeks to deliver xyz and if it gets to the end of the sprint and it isn’t done, people work late or over the weekend to get it done.”



(Flashback: I worked on Railtrack privatisation in 1996/7; then too we worked weekends - a death march.)



Right now the problem is becoming clear, or rather two problems.



Problem 1: It isn’t done until it is done and accepted by the next stage (testers, users, proxy users, etc.). If it isn’t done then carry it over. Don’t close it and raise bugs, just don’t close it.



Problem 2: The wrong solution is being applied when the problem is encountered, namely: Overtime.



As a one-off, overtime might fix the problem but it isn’t a long-term fix. Only the symptoms of the problem are being treated, not the underlying problem, which explains why it is recurring. (At least it sounds like the problem is recurring.)



Overtime, and specifically weekend working, is also a particularly nasty medicine to administer because it detracts from the team’s ability to deliver next time. If you keep taking this medicine you might stave off the disease, but the medicine will kill you in the end.



The old “40 hour work week” or “sustainable pace” seems to be ignored here - but then to be fair, an awful lot of Scrum writing ignores these XP ideas.



Lines of enquiry here:


  • What is the definition of done?
  • Testing again: do the devs know the end conditions? Is it not done because it hasn’t finished dev or test?
  • What is the estimation process like? Sounds like too much is being taken into the sprint
  • Are the devs practicing automated test first unit testing? aka TDD
  • Who’s paying for the Overtime? What other side effects is it having?
“This means burndowns don’t make any sense, right? Because there’s no point tracking ‘time remaining’ when that is immaterial to the completion of the task.”

Absolutely Right.



In fact it is worse than that: either you are including overtime in your burn-downs, in which case your sprints should be longer, or you are not, in which case you are ignoring evidence you have in hand.



The fact that burndowns are pointless is itself a sign of a problem.



Now we don’t know here: What type of burn-downs are these?



There are (at least) two types of burn downs:


  • Intra-sprint burn-downs, which track progress through a sprint and are often done in hours
  • Extra-sprint burn-downs, which track progress against a goal over multiple sprints; you have a total amount of work to do and you burn a bit down each sprint.
I’ve never found much use for intra-sprint burn-downs, though some people do. I simply look at the board and see how many cards are in “done” and how many in “to-do”.

And measuring progress by hours worked is simply flawed. (Some of my logic on this is in last year’s “Story points series” but I should blog more on it.)



Extra-sprint burn-downs on the other hand I find very useful because they show the overall state of work.



From what is said here it sounds like hours based intra-sprint burn-downs are in use. Either the data in them is bad or the message they are telling is being ignored. Perhaps both.
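To make the distinction concrete, here is a minimal sketch in Python of the extra-sprint variety (all numbers are made up for illustration): count your units of work and burn down the completed units at the end of each sprint.

```python
# Extra-sprint burn-down: track remaining units of work (e.g. User
# Stories) at the end of each sprint, not hours inside a sprint.
total_stories = 60                      # total scope at the start
completed_per_sprint = [7, 5, 8, 6]     # stories accepted each sprint

remaining = [total_stories]
for done in completed_per_sprint:
    remaining.append(remaining[-1] - done)

print(remaining)  # [60, 53, 48, 40, 34]
```

Plot `remaining` against sprint number and you have the chart; the slope is your real delivery rate, independent of hours worked or overtime.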



“I was hoping you might be able to suggest a better way to do it? I feel like we should be tracking project completion, instead, i.e. we have xyz to do, and we have only done x&y. My main question is: Is there a useful way to use estimates when working to a fixed deadline by which everything needs to be completed by?”



Well Yes and Yes.


But, the solution is more than just changing the burn-down charts and requires a lot of time - or words - to go into. I suspect your estimating process has problems, so without fixing that you won’t have good data.



Fortunately I’ve just been writing about a big part of this: Planning meetings.



And I’ve just posted a Guide to Planning Meetings on the Software Strategy website. It is designed to accompany a new dialogue sheet style exercise. More details soon. I should say both the guide and sheet fall under my “Xanpan” approach but I expect they are close enough to XP and Scrum to work for most teams.



This quote also mentions deadlines again. I have another suspicion I should really delve into, another line of enquiry.



Could it be that the Product Owners are not sufficiently flexible in what they are asking for and are therefore setting the team up to fail each sprint? By fail I mean asking them to take on too much - which, if the burn-downs and velocity measurements aren’t useful, could well be the case.



We’re back to the Project Manager’s old friend, “The Iron Triangle.”



Now as it happens I’ve written about this before: a while ago in my ACCU Overload piece “Triangle of Constraints” and again more recently (I’ve been busy of late) in Principles of Software Development (which is a work in progress but available for download).



This is where the first mail ended, but I asked the sender a question or two and I got more information:



“let's say the Scrum planners plan x hours work for the sprint. Those x hours have to be complete by the end - there's no room for anything moving into later sprints.”



Yikes!


Scrum Planners? I need to know more about that.


Planning in hours - there is a big part of your problem.



No room to move work to later sprints - erh… I need to find out more about this, but my immediate interpretation is that someone has planned out future sprints rather rigidly. If this is the case you aren’t doing Agile, you aren’t doing Scrum, and we really need to talk.



I’m all for thinking about future work - I call them quarterly plans these days - but they need to be flexible. See Three Plans for Agile from a couple of years back (the long version is better; the short version was in RQNG).



“Inevitably (with critical bugs and change requests that [deemed] necessary to complete in this sprint (often)) the work increases during the sprint, too.”



Work will increase, new work will appear, and that’s why you should keep the sprints FLEXIBLE. You’ve shot yourself in the foot by the sounds of it. I could be wrong, I might be missing something here.



Right now:


  • Bugs: I’m worried about your technical practices, what is your test coverage? How are the developers at TDD? You shouldn’t be getting enough bugs to worry about
  • Change requests are cool if you are not working to a fixed amount of work and if you haven’t locked your sprints down in advance.
You can have flexibility (space for bugs and change requests) or predictability (forward scheduling) but you can’t have both. And I can prove that mathematically.

You can approach predictability with flexibility if you work statistically - something I explore in Xanpan - but you can only do this with good data. And I think we established before your data is shot through.
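As a sketch of what “working statistically” might look like (all the velocities and backlog size here are hypothetical): sample your historical sprint velocities to forecast how many sprints the remaining backlog is likely to need, then quote a percentile rather than a single number.

```python
import random

random.seed(42)  # reproducible sketch

historical_velocity = [7, 5, 8, 6, 4, 7]   # stories completed in past sprints
backlog = 30                               # stories still to deliver

def sprints_needed(backlog, velocities, trials=10_000):
    """Monte Carlo: repeatedly replay randomly chosen past velocities
    until the backlog is exhausted; return the sprint counts observed."""
    results = []
    for _ in range(trials):
        remaining, sprints = backlog, 0
        while remaining > 0:
            remaining -= random.choice(velocities)
            sprints += 1
        results.append(sprints)
    return results

runs = sorted(sprints_needed(backlog, historical_velocity))
# Quote, say, the 85th percentile: "85% of simulated futures finish
# within this many sprints" - a forecast, not a promise.
print(runs[int(len(runs) * 0.85)])
```

The point is that the forecast is only as good as the velocity data feeding it - which is why fixing the estimation and tracking problems comes first.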



“This leads to people 'crunching' or working late/weekends at the end of the sprint to get it all done. It is my understanding that this isn't how Agile is supposed to work.”



Yes. You have a problem.



So how should you fix this?



Well obviously the first thing to do is to hire me as your consultant, I have very reasonable rates! So go up your management chain until you find someone who sees you have a problem and would like it fixed, if they don’t have the money then carry on up the chain.



Then I will say, first at an individual level:


  • The intra-sprint hours-based burn-downs are meaningless. Replace them with extra-sprint charts counting your delivery units, e.g. User Stories, Use Cases, Cukes, Functional Spec items - whatever the unit of work is you give to developers and get paid to deliver; count them and burn down the completed units each sprint
  • Track bugs which escape the sprint; this should be zero but in most cases is higher - if it’s in double figures you have serious problems. The more bugs you have, the longer your schedule will be and the higher your costs will be.
  • See if you can switch to a cumulative flow diagram to show: work delivered (bottom), work done (developed) but not delivered, work to do (and how it is increasing with change requests), and bugs to do
  • Alternatively produce a layered burn-down chart (total work to do on the bottom), new work (change requests) and outstanding bugs (top)
  • Track the overtime and find who is paying for it; they have the pain - find out what problem they see
None of these charts is going to fix your problems but they should give you something more meaningful to track than what you have now.
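For the cumulative flow option, a minimal sketch of the data such a chart needs (all counts made up for illustration): per sprint, record how many work items sit in each state.

```python
# Cumulative flow data: for each sprint, count work items in each state.
# Stacked bottom-to-top, these bands form the cumulative flow diagram.
sprints = [
    # (delivered, done-but-not-delivered, to-do, open bugs)
    (0, 4, 56, 2),
    (8, 3, 52, 5),
    (14, 6, 47, 4),
]

# Total work in the system each sprint - if this keeps rising, change
# requests and bugs are arriving faster than work is being delivered.
totals = [sum(state) for state in sprints]
print(totals)  # [62, 68, 71]
```

Here the rising totals make the scope growth visible at a glance - exactly the signal an hours-based intra-sprint burn-down hides.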

Really you need to fix the project. For this I suspect you need:


  • Overhaul the planning process, my guess is your estimation system is not fit for purpose and using dice would be more accurate right now
  • Reduce sprints to 1 week, with a 4 weekly release
  • Push Jira to one side and start working with a physical board (none was mentioned so I assume there is none)
  • Ban overtime
We should also look at your technical practices and testing regime.

These are educated-guesses based on the information I have, I’d like to have more but I’d really need to see it.



OK, that was fun - that’s why I’ve done it at the weekend!



Anyone else got a question?