My last few posts have focussed on the state of the nation with respect to the Agile movement in general. Today’s post focusses on Agile for software engineering, and on a key assumption commonly made about Agile teams that is fraught with danger.

Teams can be doing apparently ‘great’ Agile, and be delivering crap code.

“Whaaaaaat?” I hear you scream. “A key principle of Agile is that we build quality in! You heretic!” <angry face, angry finger pointing>

Sure… but here is the thing. Half a dozen Stanford PhDs sitting in a basement in Silicon Valley, writing code for a startup they want to make them insanely rich, can probably be relied upon to build in a quality culture as part of an Agile implementation. But the reality is that in the broader corporate world, that’s not an assumption you can or should make.

The ideological assumption is that empowered teams will naturally take ownership of the underlying quality of the product, as well as quickly delivering high-value solutions (the visible part of the iceberg). This is not necessarily the case, and some Agile practices actually drive the opposite.

So let me define ‘Great Agile’ in a software engineering context. In addition to the list of effective indicators in my post “That’s not Agile…” (see https://www.shibusa.com.au/thats-not-agile/), add Automated Testing, Continuous Integration, and Automated, Regular Deployment.

The code always works, because it is comprehensively tested; customers are embedded and getting exactly what they want: fast, reliable, transparent, business-value-adding incremental delivery. The teams are fully integrated and working like clockwork.

Sounds pretty damned good, hey? Well, here is the trap: none of this guarantees that the code is being delivered within a sound architectural framework, nor does it guarantee good-quality code. Code built to a consistently high standard, a focus on leverage and re-use, a bias towards building one reusable API rather than duplicating similar functionality: these are not necessarily byproducts of a highly automated Agile delivery machine.

The quality of the code, I have found, varies massively and is mostly down to the individual team’s culture. Some teams pride themselves not only on the quality of the product, but on the sophistication of their engineering practices. Great. My point is that you cannot assume that, and that what looks like great engineering (extensive use of automation, for example) may not be reflected when you look under the hood.

I love the aspect of Agile that plots the velocity. I think that it provides great focus to the team. The problem can be that we all get obsessed with the delivery rate. It’s hard not to, and unless the team is extremely disciplined, the quality of the code base can suffer. This is not always evident.

Here are some things to keep an eye on:

The Code Base: is it running out of control, expanding like a thermonuclear chain reaction?
Code Quality: run a tool over the code to assess it. Easy to do, often very telling.
Re-use: are you measuring it? Are you rewarding or punishing it?
Architectural Governance: how is this built into the process?
Performance: do you have intermittent but persistent performance problems that nobody seems to be able to get a handle on?
Tech Debt: are you tracking and reporting it?
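The “run a tool over the code” point really is as easy as it sounds. As a minimal sketch (the branch-count heuristic and the threshold of 10 are my own illustrative assumptions, not a standard metric; dedicated linters and complexity analysers do this far more rigorously), even the Python standard library’s `ast` module can flag functions drifting toward unmanageable complexity:

```python
import ast

# Branching constructs counted as a rough proxy for cyclomatic complexity.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try,
                ast.With, ast.BoolOp, ast.ExceptHandler)


def branch_count(func_node):
    """Count branching constructs nested inside one function definition."""
    return sum(isinstance(node, BRANCH_NODES) for node in ast.walk(func_node))


def flag_complex_functions(source, threshold=10):
    """Return (name, branch_count) for functions exceeding the threshold.

    The threshold is an illustrative default, not an industry standard.
    """
    tree = ast.parse(source)
    flagged = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            count = branch_count(node)
            if count > threshold:
                flagged.append((node.name, count))
    return flagged


if __name__ == "__main__":
    sample = (
        "def tangled(x):\n"
        "    if x > 0:\n"
        "        for i in range(x):\n"
        "            while i:\n"
        "                if i % 2:\n"
        "                    i -= 1\n"
    )
    # With a deliberately low threshold, the nested function is flagged.
    print(flag_complex_functions(sample, threshold=2))
```

Run it over a directory on a schedule and chart the output alongside velocity; the trend line is usually more telling than any single number.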
I have a strong view that architectural integrity, code quality, and code re-use measures should be defined for any significant implementation of Agile in a software engineering environment.

Set up, measured and reported alongside the Burn-up, and given equal weight when assessing ‘success’.

Otherwise, one day down the track, you might find that your hot Agile delivery team is sitting on some dirty little (costly) secrets…..