Converting Waterfall Requirements into Underground Agile Features

(Originally posted by AdrianK, 20-Sep-2010 at: Muse Infuse/Converting Waterfall Requirements into Underground Agile Features.aspx)

Well, here’s an interesting thing: we’re doing a decent-sized project using a standard requirements-driven approach (in a spreadsheet) at a government agency which has mixed feelings about agile – and guess what?
Thanks to some discussion, some people who aren’t afraid to try new things, and a bit of luck, we’ve managed to start taking an approach which might be known to some of you as a word that starts with the letter “A”.

Those “in the know” know what we’re up to – because they’re the ones doing it – while most other people are blissfully unaware. For those of you unfamiliar with this hugely successful approach, it’s called “Agile Undercover”. One of the biggest barriers to successful agile adoption is the weight and baggage currently associated with the agile buzzwords. As soon as you say “agile” it instantly becomes political and messy: cowboys use it as an excuse to do “iterations”, the naysayers bring out their pitchforks, and no one listens to the poor folks in the middle who actually know what they’re talking about. This might not be the case everywhere, but I doubt it’s hard to imagine.

Agile undercover is where the purists go back to basics. If you really want to “do agile” then what you’re really saying is “I want to do the important things” – this means honouring the principles and probably implementing some of the practices; the one thing we can leave behind is the buzzwords.

Real-Life War Story

Some quick context: the core internal project team includes a Project Manager (PM), a Business Analyst (BA), the Manager of the Web Team, and me as the Solution Architect; all development will be done by a vendor – we don’t have any developers in-house, so this won’t be your typical developer-driven uprising.

So, here’s what happened. A friend of mine who heads up the Project Managers is keen to look into agile with some degree of seriousness; in fact he recently did a “lunchbox” session on agile – skipping most of the buzzwords but including concepts like “features”. He’s mostly coming at this from his own angle – I wouldn’t describe him as a long-time SCRUM practitioner or anything – so good on him.

“Our” PM was at the session; she’s never done agile before in her life, but she’s open to new ideas and always keen to learn new things. Awesome. The rest of the core team were in a similar position – not averse to the idea. Timing is everything, so I need to explain the point we were at in the project. There’d been a massive requirements-gathering effort; after removing duplicates we managed to get down to 196 “Business Requirements”, which had then been prioritised by the wider internal project team.

The next step was to confirm to our vendor what we wanted included in the first delivery so they could estimate effort and cost for us. No problem there except going into this we had no idea what the total effort would be – would we be including too much or too little?

I suggested a SCRUM-based approach – though I didn’t quite put it like that; it was an approach I’d floated earlier, probably with a view that it’d take people time to get used to the idea. This aligned nicely with our PM raising some concerns about how the requirements and estimation process was likely to shake down – not concerns with the vendor or any of the people involved, purely the process.

My suggestion was simple – let’s rank all the requirements in terms of business value, then get the vendor to score all of them for effort, giving us a rough total cost and date. The PM (who attended the lunchbox session) wanted to group requirements into logical “features” (her words, not mine). “Bring it on” I said. Boy did I have a plan! First we took the existing spreadsheet of requirements, which looks something like this:

The key point here is that we have 196 requirements that are broadly prioritised (90 were “critical”). We also have a column that states whether the requirement can be met Out-Of-The-Box (OOTB), requires configuration, or needs custom development (this isn’t the same as an actual score for effort, but it’s a good start: if something’s OOTB there’s not much point arguing about timescales). I then made a copy of this worksheet and fiddled with the layout until I got something that would resemble a Story Card.
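If you’d rather script the card layout than fight with worksheet formatting, the same transformation is easy to sketch in a few lines. This is a minimal illustration only – the column names (`ID`, `Category`, `Requirement`, `Priority`, `Build`) and sample rows are assumptions, not the actual spreadsheet:

```python
import csv
import io
import textwrap

# Hypothetical slice of the requirements worksheet; the real column
# names and values will differ.
SAMPLE_CSV = """\
ID,Category,Requirement,Priority,Build
BR-001,Search,Users can search documents by keyword,Critical,OOTB
BR-002,Search,Search results can be filtered by date,Highly Desirable,Configuration
BR-003,Publishing,Editors can schedule page publication,Critical,Custom
"""

def story_card(row, width=38):
    """Format one requirement row as a plain-text story card."""
    body = textwrap.fill(row["Requirement"], width)
    return (
        f"[{row['ID']}] {row['Category']}\n"
        f"{'-' * width}\n"
        f"{body}\n"
        f"{'-' * width}\n"
        f"Priority: {row['Priority']}   Build: {row['Build']}"
    )

# One card per requirement row, ready to print four-up and cut out.
cards = [story_card(r) for r in csv.DictReader(io.StringIO(SAMPLE_CSV))]
print(cards[0])
```

The point is only that each row becomes a standalone, printable card; any reporting tool or mail-merge feature would do the same job.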

The idea is that there’s one requirement per row, and if you format it appropriately you can print one requirement per page; all you need to do then is print four pages per sheet of paper and take to them with scissors or a guillotine. The original list of requirements had a “category” column which broke the requirements into 5 or 6 categories – they were a bit broad, but they helped us break up the next part of the process.

We spread all the requirements for a given category out on the table and started sorting them into logical features – things which sounded the same or would naturally fit together, things which would naturally make a unit of work. At this stage we treated all requirements as equal, ignoring the existing priority rankings (“Critical”, “Highly Desirable”, etc.). Naturally this process led to some great discussion.

Some points that came out:

  1. Don’t worry if a feature pile gets too big – until it’s scored you just don’t know, and it’ll be easy enough to break that feature into smaller parts later (did someone say iterations? Naughty! – this is undercover, remember?).
  2. Some requirements seemed better suited to feature piles from the other “legacy” categories – no worries, just add / remove them from piles as required.
  3. By flicking through a feature pile you can easily get a feel for how important it is, as the existing requirement-level priorities were plainly visible (the big red line means “Critical”).
  4. We needed to easily identify the feature piles, so we started labelling them with sticky notes.

Pretty soon we ended up with feature piles, which we grouped by our “legacy” categories, and roughly ranked for good measure.

The final act in this immediate process was to rank all the feature piles (in terms of business value), relative to each other. At this point we numbered all the piles: 1 (most important) down to 39 (least important); that’s an average of 5 requirements per pile.

After that it was a simple matter of adding a new column to the spreadsheet thus capturing the results.

We offered our proud piles to the vendor but they chose to take the digital copy; I think that’s actually the best idea: we safeguard the “originals” and the vendor can print their own copy of the requirement cards – including any additional columns they want to use.

Next Steps

The requirements now need to be scored for effort by the implementation team (our vendor), and rather than ask them to score each feature pile we’ve asked them to score each requirement – “why?” I hear you ask. The problem with scoring only the feature piles is that if for any reason we have to split a pile up, we won’t know the effort for each part. Scoring each requirement gives us the best of both: a score per requirement and, by summing, a score per feature pile – one that still holds if we need to start breaking piles up.
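That summation is trivial once each requirement carries its own score. Here’s a small sketch of the idea – the requirement IDs, scores, and pile names are made up for illustration:

```python
from collections import defaultdict

# Hypothetical per-requirement effort scores from the vendor.
requirement_scores = {
    "BR-001": 3, "BR-002": 5, "BR-003": 8,
    "BR-004": 2, "BR-005": 5,
}

# Hypothetical feature-pile assignment for each requirement
# (the new column added to the spreadsheet).
feature_of = {
    "BR-001": "Search", "BR-002": "Search",
    "BR-003": "Publishing", "BR-004": "Publishing", "BR-005": "Publishing",
}

def pile_scores(scores, piles):
    """Sum per-requirement effort scores into a score per feature pile."""
    totals = defaultdict(int)
    for req, effort in scores.items():
        totals[piles[req]] += effort
    return dict(totals)

print(pile_scores(requirement_scores, feature_of))
```

If a pile later gets split, you just update the assignments and re-run the sum – no re-estimation needed, which is exactly the flexibility we were after.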

Naturally the implementers will be concerned with managing dependencies, so how best to score for effort? I know from experience that you want to score realistically but take advantage of anything you can – and therein lies the danger: if they score a particular requirement’s effort as low based on a dependent requirement already being implemented, we’ll all run into problems if we start changing the contents of the feature piles.

This is where simple common sense comes into play: the feature piles represent our current thinking as to how they should be implemented, so things are only going to change for a good reason. The build team should be able to use this knowledge to their advantage – scoring effort based on any advantage via dependencies is fine (it helps the bottom line), but they need to let us know; likewise, we need to discuss with them any changes we’re thinking of making to the feature piles.

Now we get to the second part of the pep talk I had with our PM: once we have a ranking for business value and a score for effort, we can start to look at how we “phase” the work for implementation – big bang or chunks. One rule (of many?) I suggested putting in place was that regardless of how long a “phase” is, we identify its scope up front and don’t change it once the phase starts; the time-box for the phase must not change.

My reasoning for the fixed phase length (yes – iteration, if you will) is based on the tried and true agile practice of establishing history – we get to the point of being able to say “we get through X worth of effort per iteration”, and therefore we know how long the project will take overall, or how much we can do for a fixed cost. So, we’re all looking forward to seeing how the effort scoring goes.
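The arithmetic behind that history argument is simple enough to sketch. Assuming fixed-length phases and some completed iterations to learn from (the numbers below are invented), the forecast is just average-velocity division:

```python
import math

def forecast_iterations(remaining_effort, completed_per_iteration):
    """Estimate how many fixed-length iterations remain, using the
    effort actually finished in past iterations as the velocity."""
    velocity = sum(completed_per_iteration) / len(completed_per_iteration)
    return math.ceil(remaining_effort / velocity)

# Hypothetical history: three iterations done, 120 points of effort left.
# Average velocity is (22 + 18 + 20) / 3 = 20, so six iterations remain.
print(forecast_iterations(120, [22, 18, 20]))
```

This is why the time-box must not change: if every phase is the same length, each one adds a comparable data point, and the velocity average actually means something.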

Key Takeaways

  1. Avoiding Agile buzzwords helped avoid political minefields and allowed everyone (regardless of agile experience) to get involved in the process and contribute to it.
  2. Working with requirements in a tactile fashion is easily the fastest way to deal with them; there’s no bottleneck of people fighting over the mouse, and working this way is very liberating.
  3. You get a good overview of the entire project’s scale.