Saturday, 11 June 2011

Eli Goldratt RIP

I just received the very sad news that Eli Goldratt died today. He had aggressive lung cancer, presumably brought on by a lifetime of pipe-smoking. I feel extremely lucky to have met him. He has influenced my life tremendously. I only wish I'd met him earlier. :-(

Picture of Dr. Eliyahu M. Goldratt

Friday, 29 April 2011

Why do teams fail to sustain code quality?

Code quality always seems to get worse and worse. Even when a team is actively fighting against it, complexity inevitably wins in the end. What's going on? Why is this pull towards complexity such a powerful force?

In my view, a lot of the problem stems from the wrong actions being taken, with the best of intentions.

Code quality quickly spirals down

Developers don't want to introduce bugs, so they naturally take actions to avoid doing so. Making changes to working code is risky, so developers make heavy use of conditional logic ("in my particular case do this, otherwise, do the same as before") and duplication to minimise the changes to existing code.

There's a good intention behind it. Unfortunately, it makes the code increasingly difficult to work with — to understand the ramifications of a change — and in turn increases the pressure to tread carefully and avoid making those bigger and bolder changes to the design that are really needed to keep the code clean.
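To illustrate the pattern (a made-up example, not from any real code-base): the "safe" change bolts a special case onto the old path instead of reworking the rule.

```java
// Hypothetical illustration; the class and the discount rule are invented.
class DiscountCalculator {

    // Original behaviour: everyone gets a 10% discount.
    // The "safe" change adds a flag rather than rethinking the design:
    int discountPercent(boolean isKeyAccount) {
        if (isKeyAccount) {
            return 20;   // "in my particular case do this..."
        } else {
            return 10;   // "...otherwise, do the same as before"
        }
    }
}
```

Each special case like this is individually harmless; it's the accumulation of them that makes the design rigid.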

Automated regression tests aren't enough to stop it

Agile teams tend to make heavy use of automated regression tests. This not only allows the team to release code frequently, but allows them to refactor it and keep the design in good shape. The tests should catch any bugs introduced when the design is reworked.

That's the theory, but, in practice, the developers don't keep the design clean. Why? Because their old "don't make unnecessary changes" habit is so deeply-ingrained. And so, the vicious circle continues.

You have to change the habits too

How do you change habits? Education, constant reminders, code reviews, management oversight, tie a knot in your handkerchief... whatever it takes, because if you don't change the habits, your code quality will deteriorate and your team's agility will disappear with it.

"Circuit Diagram" for Options

After drawing the diagrams for my last post, I'm playing with ideas for a notation for describing options. What I've come up with so far isn't very good — I'm not even very clear on what the dimensions are... I think it's roughly time along the x-axis and choices along the y-axis, though I may be mixing other concepts in as well — but, anyway, I thought I'd put it out there in the hope that someone else might help me improve it.

Here's a real-life example from my last project (read it from left to right):

We actually followed these steps. The team in Hong Kong worked on improving the speed of the database reports, while, in London, I built an alternative report generator that wasn't as flexible, but was very fast. By release time the Hong Kong team had ironed out all the problems with the queries and optimised the database indexes, so their reports were adequately fast. In the end, we went with a mixture of the two.

I also got in trouble by trying to be too clever. I put code into the test environment to do live cross-checking of the reports generated by the two methods. It did reveal some discrepancies. But I got in trouble because they were trying to do speed tests at the time. Communication, communication, communication...

Let me know if you have any ideas about a notation for describing options (I'm talking about real-life options, not financial options).

Thursday, 28 April 2011

Always have a plan 'B'

You cannot predict exactly what's going to happen in the future. Having options allows you to respond quickly as different situations arise.

Option to Abandon

Lifeboats on an ocean liner are an example of the option to abandon. The same goes for backups of your software. With luck you'll never need them, but it's worth creating the option just in case.

Option to Backtrack

The option to backtrack is similar to the option to abandon, except that you don't abandon the direction completely, you go back to a previous "safe" point ready to try again.

If you release a new version of some software and then find it's unstable, you want to be able to go back to the previous version. Any kind of "rollback" or "undo" capability is providing the option to backtrack.

Being able to backtrack is important for learning. To experiment with a piece of software, try out a new refactoring, or test the boundaries of a technique, you want to feel safe that you can go back to where you were.

Option to Choose

This option is about alternatives. To travel between cities you might take a train, but if the train drivers are on strike you'll need to find another way. By thinking about your choice of options in advance, you won't need to panic.

An example in software is where the business logic is decoupled from the data storage technology, so that users can choose between several different databases. This can create value by widening the potential market.
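As a sketch of that decoupling (the names here are made up): the business logic talks to an interface, and each database gets its own interchangeable implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch; OrderStore and the class names are my inventions.
interface OrderStore {
    void save(String order);
    List<String> findAll();
}

// One interchangeable implementation; a JdbcOrderStore or MongoOrderStore
// could implement the same interface without touching the business logic.
class InMemoryOrderStore implements OrderStore {
    private final List<String> orders = new ArrayList<>();
    public void save(String order) { orders.add(order); }
    public List<String> findAll() { return new ArrayList<>(orders); }
}

// The business logic depends only on the OrderStore interface.
class OrderService {
    private final OrderStore store;
    public OrderService(OrderStore store) { this.store = store; }
    public void placeOrder(String order) { store.save(order); }
    public int orderCount() { return store.findAll().size(); }
}
```

Swapping the storage technology then means writing one new class, not reworking the business logic.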

Option to Defer

This option is about timing. Do we have to decide now or can we decide later? The option to defer is about pushing back decision points, to allow more information to come in, or being able to put something on hold, with the possibility of returning to it later on — for example, postponing a space shuttle launch due to icy conditions.

Traditional software development methods force the development team to commit to features a long time before they are developed or deployed. Agile methods let you defer commitment (see this article by Chris Matts and Olav Maassen for a more in-depth explanation of how options-thinking relates to agile).

Option to Expand (or Contract)

This option is about resources. If a customer wants to place a huge order with you, can you accommodate it? Can you recruit quickly? Can you move resources from a less successful project to a more successful one?

Tuesday, 29 March 2011

Start with clear acceptance tests

One of the advantages of acceptance test driven development (ATDD) is that it helps a team to agree on the objectives before diving into coding. The clearer the acceptance tests, the fewer misunderstandings there are likely to be. Every misunderstanding wastes time and adds cruft to the software.

Releasing frequently gives an opportunity for feedback and correction. Small corrections early on can prevent the need for large corrections later.

1. Well-articulated acceptance tests and frequent releases:

Relatively straight line

2. Poorly articulated acceptance tests:

Line with some zig-zagging

3. Longer times between releases exaggerate the deviations:

Line with fewer segments and severe zig-zagging

4. Without any vision or objectives:

Random loopy patterns

This often happens in start-ups where a product idea is hazy and the team is basically chasing every hare that runs past. Eventually they might happen to catch one, but it's a pretty inefficient way to work.

Friday, 25 March 2011

Evaluating start-up business ideas

I often have ideas for start-up businesses and get quite excited about them until it slowly dawns on me there are major flaws. In a bid to spot the flaws sooner, I've come up with some evaluation criteria. I thought I'd post them, in case anyone finds them interesting or has any of their own thoughts on the subject they'd like to share.

Judging a business

Large scope for growth

Risks can be kept low

Friday, 21 January 2011

What's wrong with the Agile Manifesto?

I've never liked the Agile Manifesto. I'm not blaming the authors because I know it must have been a nightmare trying to come to a consensus, and the manifesto has served a useful purpose bringing like-minded people together under a single brand. It is slightly ironic that the wording hasn't changed in 10 years, but there you go. I'm not brave enough to propose an alternative yet. In this article, I'm just going to explain where I think the logic is faulty.

Here is the wording of the manifesto:

Manifesto for Agile Software Development

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

It doesn't distinguish agile from "slapdash"

The aim of the manifesto seems to be to try to distinguish agile from waterfall. Unfortunately, agile's worst enemy isn't waterfall but the "we don't write documentation, so we're agile" brigade. And the manifesto doesn't defend agile very well against this threat. Perhaps there's something in the twelve principles, but who on earth reads them? Certainly not the slapdash developers claiming to be agile.

It mixes up goals and solutions

"We value working software over comprehensive documentation". OK, but surely whatever software development method you select, working software is the ultimate goal, isn't it? Some people believe that by writing comprehensive documentation you are more likely to produce working software. That's part of their solution. But you can't go round comparing goals with solutions. It's nonsensical. Perhaps what the manifesto authors wanted to convey was that "the code is the documentation", or something along those lines, but couldn't agree on the wording, so plumped for the goal instead. That's cheating!

The dilemmas are easy to solve

The manifesto poses dilemmas and then tries to position agile at a certain point in each trade-off continuum. But why are we compromising? If both sides have value (as the manifesto plainly says), why don't we find solutions that resolve the conflicts? Let's take each one in turn:

I don't know whether the value of documentation is to do with maintainability, improving clarity or good governance. The point I'm trying to make is that there does not have to be any compromise between those needs and agile methods.

So what is agile then?

I'm not sure. Please give me feedback on this post and share your ideas. Leave a comment, or blog about it, or tweet me (@davidp99). Thanks.

Tuesday, 11 January 2011

FlowChain - Ideal company structure?

"If you could start from a clean slate, how would you structure a large company?"

A little Twitter conversation with Simon Baker (@energizr) about product streams reminded me of a talk at last July's Agile Coaches Gathering, where Bob Marshall (@flowchainsensei) answered the question above by describing his "FlowChain" concept.

This is my interpretation of what he said (from memory) so it may not be quite right and he's probably refined it since then.

Running the whole company in an "agile" way

Essentially FlowChain scales agile practices up to the company level. There is a single (large) development team that works from a prioritised backlog of MMFs (Minimal Marketable Features). I'm not sure who manages the backlog. I've called it a "product strategist", but that's my name for it, not Bob's.

The team works on multiple value streams. Each value stream has its own operations staff.

Zooming in:

Extreme flexibility

What I like about FlowChain is the flexibility it provides. If a value stream is doing well, resources can quickly be redeployed to capitalise on its success. Alternatively, new products (value streams) can be developed. In most organisations, moving developers from one project to another is a major upheaval, and that puts pressure on the project to be a success. When you can move developers quickly to something else, it relieves the pressure and allows the company to tackle more risky (but potentially more lucrative) opportunities.

Note that in a large company, the pool of resources for developing products could be very large, and there may therefore be many MMFs in progress at the same time. Bob didn't explain how to resolve resource conflicts, sequencing etc. We may have to wait for his book...

Prototypes, Spikes and Tracer Bullets

Here's an attempt to classify various development techniques and the sequence in which they tend to be used.



Monday, 10 January 2011

Don't kid yourself

"Our team's agile, but we haven't released yet because some of the teams we depend on aren't agile."

Fine, then you're not agile!

Sunday, 9 January 2011

BDD: Concrete Examples Aren't Enough

The Given/When/Then style of Behaviour Driven Development (BDD), favoured by Cucumber and JBehave, puts a lot of context in the examples. I claimed this was unnecessary clutter, but I had quite a few comments that puzzled me, until it dawned on me that there's another difference in my approach compared with the Given/When/Then style.

I always state the required behaviour in a sentence or two before giving the examples. Each behaviour is described by a specification like this:
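For illustration, a made-up behaviour in that style (not one of my real specifications) might read:

```text
The "password hint" is the first letter of the password
followed by one dot for each remaining character.

Example:

  password:  sausages
  hint:      s.......
```

The rule is stated up front in plain language; the example just pins it down.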

Maybe it looks like a lot of work to write, but it isn't really. The structure is standard and each part is just a sentence or two.

In the Given/When/Then approach, by contrast, the business rule describing the behaviour generally isn't made explicit. The reader is expected to infer the rule from the examples, so naturally the examples have to carry more context.

Occasionally it's difficult for the domain expert to immediately state the abstract rule and easier to start with concrete examples, but once the expert has explained a few examples then we can usually begin to take an educated guess at the rules behind them—"Ah OK, so the rule is X?"—and then have a useful discussion.

But if we don't make the rule explicit we only have the concrete examples, so they have to be made much more verbose—and potentially implementation-specific—so that readers can correctly interpret them.

It is true that Given/When/Then style examples are much better at avoiding implementation lock-in than test scripting ("click this, click that..."), but I would argue that context-free examples with explicitly stated rules are even better, both in terms of avoiding lock-in and in terms of readability (how long it takes the reader to understand the behaviour expected).

Friday, 7 January 2011

Object-Oriented Example

For ages I treated objects as glorified data structures to be operated on by procedural code. It took me a long time to get out of the procedural mindset and into an object-oriented mindset.

Well-written object-oriented code helps you deal with complexity by hiding internal details so that you can operate at ever higher levels of abstraction. Object-oriented code increases flexibility – objects can be plugged together in different ways like Lego™ pieces – and it keeps duplication and churn to a minimum by keeping data and operations in the same place instead of spreading them throughout the code-base.

The recent discussion on the GOOS mailing list has made me realise that others are also struggling with the transition. Nat Pryce has posted a couple of blog entries today, and I'd like to pick up the same theme with an example in the hope it might help.

Let's start with a simple PopGroup class that implements the role of SongPerformer (in Java this would be an interface):
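Something like this, say (the type names are from the text; the method signatures are my guesses):

```java
// Illustrative sketch; the method signatures are assumptions,
// the type names come from the post.
interface SongPerformer { void perform(String song); }

interface Singer { void sing(String song); }
interface Drummer { void drum(); }
interface Keyboardist { void play(String song); }

class PopGroup implements SongPerformer {
    private final Singer singer;
    private final Drummer drummer;
    private final Keyboardist keyboardist;

    PopGroup(Singer singer, Drummer drummer, Keyboardist keyboardist) {
        this.singer = singer;
        this.drummer = drummer;
        this.keyboardist = keyboardist;
    }

    public void perform(String song) {
        // PopGroup coordinates; the actual skills are delegated.
        keyboardist.play(song);
        drummer.drum();
        singer.sing(song);
    }
}
```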

To construct a valid PopGroup you need a Singer, a Drummer and a Keyboardist. These are the PopGroup's dependencies.

Coordinating the various activities needed to perform songs is the responsibility of the PopGroup object, but the actual acts of singing, drumming and keyboard playing are separated out. This is the single responsibility principle combined with dependency injection.

This approach makes the design highly flexible because we can plug together all kinds of variations. As long as the dependencies we inject implement the right interfaces we can make the pop group perform a song.

For example:
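Something like this, where only DrumMachine is a name from the text and the rest is my illustration:

```java
// Illustrative sketch; DrumMachine is from the post, everything else is assumed.
interface SongPerformer { void perform(String song); }
interface Singer { void sing(String song); }
interface Drummer { void drum(); }
interface Keyboardist { void play(String song); }

// An electronic drummer we can plug in...
class DrumMachine implements Drummer {
    int beats = 0;
    public void drum() { beats++; }  // tss-tss-tss
}

class PopGroup implements SongPerformer {
    private final Singer singer;
    private final Drummer drummer;
    private final Keyboardist keyboardist;

    PopGroup(Singer singer, Drummer drummer, Keyboardist keyboardist) {
        this.singer = singer;
        this.drummer = drummer;
        this.keyboardist = keyboardist;
    }

    public void perform(String song) {
        keyboardist.play(song);
        drummer.drum();
        singer.sing(song);
    }
}
```

As long as each injected object implements the right interface, the group performs the same way.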

If we don't have a DrumMachine, we can substitute a PhysicalDrummer and the PopGroup will still function the same.

Our PhysicalDrummer object depends on interfaces, not implementations. If we're desperate we could grab a passer-by, a dustbin and a broom handle and create a drummer:
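Something along these lines (the interface shapes are my guesses):

```java
// Illustrative sketch; the interface shapes are assumptions based on the text.
interface Drummer { void drum(); }
interface Drumstick { }
interface DrumSurface { void strike(Drumstick stick); }

class PhysicalDrummer implements Drummer {
    private final DrumSurface surface;
    private final Drumstick stick;

    PhysicalDrummer(DrumSurface surface, Drumstick stick) {
        this.surface = surface;
        this.stick = stick;
    }

    public void drum() {
        surface.strike(stick);  // interfaces only; no concrete classes
    }
}

// The dustbin will happily serve as a drum surface:
class Dustbin implements DrumSurface {
    int hits = 0;
    public void strike(Drumstick stick) { hits++; }  // clang!
}

// But `new PhysicalDrummer(new Dustbin(), broomHandle)` won't compile yet,
// because BroomHandle doesn't implement Drumstick...
```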

Unfortunately the BroomHandle class does not implement the Drumstick interface. We could make it implement Drumstick, but the BroomHandle is in a different domain. It's in the domain of sweeping, rather than musical performances. We don't really want to couple its class to an interface from the musical performances domain, so we create an adapter to map between the two. This is known as a ports and adapters architecture.
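Such an adapter might look something like this (the adapter's name is my invention):

```java
// Illustrative sketch; the names are assumptions based on the text.
interface Drumstick { }   // port in the musical-performance domain

class BroomHandle {
    // belongs to the sweeping domain; knows nothing about music
}

// The adapter maps the sweeping domain onto the musical-performance
// domain, so BroomHandle itself stays uncoupled from Drumstick.
class BroomHandleDrumstickAdapter implements Drumstick {
    private final BroomHandle broomHandle;

    BroomHandleDrumstickAdapter(BroomHandle broomHandle) {
        this.broomHandle = broomHandle;
    }
}
```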

To make things easier to understand we can wrap the objects into a class that has a name that better expresses the intent.
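For instance (ImprovisedDrummer is my made-up name; the post doesn't say what the wrapper was called):

```java
// Illustrative sketch; ImprovisedDrummer and the supporting shapes are assumptions.
interface Drummer { void drum(); }
interface Drumstick { }
interface DrumSurface { void strike(Drumstick stick); }

class Dustbin implements DrumSurface {
    int hits = 0;
    public void strike(Drumstick stick) { hits++; }
}

class BroomHandle { }

class BroomHandleDrumstickAdapter implements Drumstick {
    BroomHandleDrumstickAdapter(BroomHandle handle) { }
}

class PhysicalDrummer implements Drummer {
    private final DrumSurface surface;
    private final Drumstick stick;

    PhysicalDrummer(DrumSurface surface, Drumstick stick) {
        this.surface = surface;
        this.stick = stick;
    }

    public void drum() { surface.strike(stick); }
}

// The wrapper hides the wiring behind an intention-revealing name.
class ImprovisedDrummer implements Drummer {
    final Dustbin dustbin = new Dustbin();
    private final Drummer delegate =
            new PhysicalDrummer(dustbin,
                    new BroomHandleDrumstickAdapter(new BroomHandle()));

    public void drum() { delegate.drum(); }
}
```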

Now we've hidden all the complexity and we're operating at a higher level of abstraction.

In reality, a ports and adapters architecture is not so much mapping between entirely unconnected domains as mapping lower-level domains to higher-level domains, so that the technical detail is hidden within simpler more abstract concepts.

Concordion Extensions

Nigel Charman has started a concordion-extensions sub-project that adds some useful extra features to Concordion, such as the ability to add screenshots to the Concordion output so you can more clearly see what state the application is in when something fails.

Another valuable extension inserts a little icon into the output; when you hover over it, it shows the logging output that occurred during the test. This is a neat solution to a dilemma that I've sometimes encountered: "If I don't write down the steps, how can I be sure the test has been implemented right?"

We want to write high-level tests that express the business requirements without reference to a particular implementation, so we can change the implementation later. To do this, we hide the test's implementation steps in its accompanying Java fixture. But some people aren't comfortable navigating and reading Java code. If you're a non-developer tester trying to make sure the system is well-tested, it can be scary to "trust" the developers to implement your tests correctly.

The temptation is therefore to explicitly write into the test specification the steps required to execute the test. The trouble is that by encoding the "how", you end up locking the test into a specific implementation. The lock-in problem is exacerbated by duplication across tests: when you're detailing all the steps, you find that many tests require a similar set of steps.

Our conflict cloud looks something like this:

Testing Conflict

Some testing frameworks, such as Robot and FitNesse, allow you to pull out common set-up, but this is a compromise. Effectively what you're doing is programming, but not in a proper programming language. The intentions of your tests become lost amongst increasingly complex interwoven test scripts.

Nigel's solution allows you to write the tests in a way that retains a clear separation between intent and implementation, yet allows non-developers to be reassured that the test has been implemented correctly.

Testing Conflict Resolved