Saturday, 30 May 2009

Internal vs. External Quality

Recently I was asked about the need to cut corners in order to get things to production on time.

I believe that software has two types of quality: internal and external. External quality is understood by your clients: it's the number of features, the ease of use of the software, and the slickness of the UI, amongst other things. Internal quality is understood by your developers: it's about the code, and includes measures such as code coverage, cyclomatic complexity, coupling and cohesion, as well as the elegance and simplicity of solutions.

There is a very interesting difference between these two types of quality. If you reduce external quality, you get to market quicker. If you reduce internal quality, you get to market later.

For external quality: if there are fewer features, the system is ready sooner. If you don’t need to spend time analysing user interactions or polishing the UI to pixel perfection, the system is ready sooner.

For internal quality: if the code is elegant and well designed, then it is easily understood, changed and repurposed; the system is ready sooner. If code is covered by tests, then you know that changes haven't broken it, and you can be more confident and aggressive with refactoring and other changes; the system is ready sooner. If code is reviewed or pair-programmed, then it has far fewer defects; defects found in UAT or, even worse, production are far more time-consuming to fix than those found at the moment the code is written; the system is ready sooner.

So my advice is, if you are in a situation where there is real business value in getting a system to production fast, then do not sacrifice internal quality (increase it if you can) and only sacrifice external quality.

And bear in mind, clients understand external quality. You can work with them to make the appropriate trade-offs for the project. If time-to-market really is key, and external quality matters, but not as much, then the clients will allocate time to address that quality in the future - something they will rarely do for internal quality.

Monday, 17 March 2008

Polishing Iterations

In Rachel Davies' Agile Mashup talk at QCON 2008, she noted that many teams have a "polishing" iteration, where no new functionality is released.

My team have recently added this: we find we need a bit of space to step back and look at the application from a "big picture" point-of-view. Sometimes it's useful to look at consistency across the application: particularly from a GUI point-of-view. It's also good to make time for exploratory testing. Finally, we like to make some space to incorporate the feedback we've got from the business during the iteration.

Perhaps we should be doing some of this stuff as we go; but at the moment we're finding a polishing iteration just before release is working very well for us.

Agile Mashups

I attended Rachel Davies' talk at QCON 2008 about Agile Mashups. She made the point that in the real world people take a variety of practices from different Agile methods; as a simple example there are Scrum teams using TDD and XP teams using burndown charts. She pointed out a few practices that seem more optional, such as pair programming and sitting together.

I find there's a very interesting tension. On the one hand I think it's important to know what "good agile" looks like. There is a danger that some teams throw away their documentation, hack and claim to be Agile. So to sort this "cowboy agile" from the real thing, you could use the Nokia Test, which is a checklist: tick the boxes and you are doing Scrum :-)

Particularly, when you start Agile and don't understand all the subtle interrelationships of practices and sometimes unexpected effects of doing certain things, it's useful to try to stick closely to a process and really try it, to find out what works for you and what doesn't.

On the other hand, a central piece of Agile is self-organising teams that inspect and adapt. A good Agile team will have assessed these practices for themselves, tried tweaking them, thrown away the ones that don't work, tried new things. So trying to assess Agility through a checklist runs the risk of constraining a team and stopping them adapting.

I think really good teams find how to get this balance right: you should feel free to try new practices, even quite radical ones, and think of ways to properly assess and modify them; at the same time, you should be serious about really trying the practices from your method of choice and getting a deep understanding of why those practices are there.

Friday, 14 March 2008

Multi-core: more bugs faster?

A year or so ago I noticed the news that Intel were experimenting with an 80-core processor. I was worried... multi-core processors are now standard; people are starting to write software to take advantage of this. But concurrent programming is hard. I run a team with some good programmers in it, but in most pieces of code I see that use threads and synchronisation I can usually find a subtle threading bug. These bugs are not the kind of thing you can test for with the nice unit testing tools that we normally use.
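To illustrate the kind of subtle bug I mean (this is a hand-rolled example, not code from my team): a shared `count++` looks like a single operation, but it is really a read, an increment and a write, so two threads can interleave and silently lose updates. No unit test will reliably catch it, because it only fails some of the time.

```java
// LostUpdate: a classic race condition. count++ is a read-modify-write
// sequence, not an atomic operation, so concurrent increments can be lost.
public class LostUpdate {
    private static int count;

    // Two threads each increment the shared counter 100,000 times.
    static int run() throws InterruptedException {
        count = 0;
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) count++; // not atomic!
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        return count; // usually less than 200,000: some updates were lost
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(LostUpdate.run());
    }
}
```

The fix is easy once you see it (synchronise, or use `java.util.concurrent.atomic.AtomicInteger`); the problem is that in real code the race is buried several layers down and the program passes every test you throw at it.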

I was lucky to attend a talk at QCON 2008 by Joe Armstrong. He pointed out that no one in the hardware world currently anticipates a limit to scaling by multiple cores; thousands of cores are anticipated within the next 10 years. Joe also pointed out Amdahl's Law and noted that if 10% of your program is serial then the most speedup you can get is 10x. This is very thought-provoking: we will need to push concurrent programming into the core of development but, from my own experience, we desperately need new programming paradigms to make sure we don't create terribly buggy software.
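Joe's 10x figure falls straight out of the formula. Writing the serial fraction as s and the number of cores as N, Amdahl's Law gives the speedup as:

```latex
S(N) = \frac{1}{s + \frac{1 - s}{N}}
\qquad\Longrightarrow\qquad
\lim_{N \to \infty} S(N) = \frac{1}{s}
```

With s = 0.1 (10% serial), the limit is 1/0.1 = 10: even with a thousand cores, that program can never run more than ten times faster.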

Fortunately, maybe there's some help on the horizon. For a start, the Java folks are focussing on fine-grain concurrency features for Java 7 (JSR166y). By using high-level constructs, programmers can describe problems and algorithms in a way that these features can execute effectively on multi-processor systems, yet the constructs are easy to get right.
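To give a flavour of the style JSR166y proposes (this is a sketch using the fork/join API as it later shipped in `java.util.concurrent` in Java 7, so names may differ from the early drafts): you describe how to split a problem in half and the framework schedules the pieces across cores for you, with no explicit locks to get wrong.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sum an array with fork/join: split the range in half until chunks are
// small enough to add up serially, then combine the partial sums.
class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;
    private final long[] data;
    private final int lo, hi;

    SumTask(long[] data, int lo, int hi) {
        this.data = data;
        this.lo = lo;
        this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {        // small enough: sum serially
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) / 2;
        SumTask left = new SumTask(data, lo, mid);
        SumTask right = new SumTask(data, mid, hi);
        left.fork();                          // run left half asynchronously
        return right.compute() + left.join(); // do right half here, then wait
    }
}

public class ForkJoinDemo {
    public static void main(String[] args) {
        long[] data = new long[10_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        long sum = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
        System.out.println(sum);
    }
}
```

The appeal is exactly the point made above: the programmer states the recursive decomposition, and the hard concurrency (work-stealing, scheduling, synchronisation) lives inside the framework where it is written once and written carefully.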

Even better, we have Erlang, a 20-year-old language that has been used in some very hardcore situations (telephone switches) and has a simple and massively scalable concurrency model built into the very core of the language.

The advent of multi-core and the imminent arrival of massively multi-core processors makes this a very good time to be learning more about Erlang and the concurrency features planned for your language of choice.