Test of Time

Compromising Test Coverage Under Pressure

As with life, the universe and everything, everyone has their own standards. Perceptions of quality differ between individuals. Agreeing on a common definition of done across the team is an important first step in any Agile journey.


When the going gets tough, ensuring each feature is built to these principles can be challenging. Even the most well-intentioned developer can fall back on shortcuts. When managers pile on the pressure, programmers attempt to meet expectations by compromising on best practices.


Fork in the road

How do we make sure our teams take the correct route under pressure, rather than a shortcut?


Unfortunately for us, these compromises have included test coverage. This key criterion of our definition of done has been under threat. Our recent adoption of minimum coverage gates prompted several discussions about temporarily lowering these metrics, as engineers felt functionality was being blocked. Here I reflect on the contributing factors that push our programmers off the right path, and on how to lead them down the correct route.


Begin the End


If we are compromising on test coverage when under pressure, it is safe to assume we are not writing our tests first. We advertise ourselves as practising Test Driven Development, or TDD for those in the know. When it comes to the code review stage, there are occasions where feature and project coverage doesn’t meet the minimum percentage agreed in the team definition of done.


For the past 3 years we have been aggressive with our adoption of Behaviour Driven Development, a subset of TDD. We are exceptionally fortunate that we have client representatives writing behavioural scenarios for all new features. A story is not ready for development if we haven’t received a set of scenarios. Despite these scenarios being available from the outset, encouraging developers to write tests first is still proving troublesome.
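To make the scenario-to-test relationship concrete, here is a minimal sketch. The feature, names and values are hypothetical, and in practice tools such as Cucumber bind each Given/When/Then step to code, but the same mapping can be shown in plain Python:

```python
# Hypothetical scenario, as a client representative might write it:
#
#   Scenario: A returning customer sees their saved delivery address
#     Given a customer with a saved delivery address
#     When they start the checkout process
#     Then the saved address is pre-filled
#
# Each Given/When/Then step maps onto a phase of the test.

def start_checkout(customer):
    # Stand-in for the real checkout service.
    return {"delivery_address": customer.get("saved_address", "")}

def test_returning_customer_sees_saved_address():
    # Given: a customer with a saved delivery address
    customer = {"name": "Arthur", "saved_address": "42 Example Street"}

    # When: they start the checkout process
    checkout = start_checkout(customer)

    # Then: the saved address is pre-filled
    assert checkout["delivery_address"] == "42 Example Street"
```

Because the scenario arrives before development starts, the test skeleton can be written first and the implementation grown underneath it.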


TDD Lifecycle

Engineers can struggle to write tests first when adapting to the TDD circle of life


Ideally we would want them to follow the TDD approach: write a failing test first, then implement the least amount of code required to make the test pass. Writing effective tests was not given much focus in my Computer Science degree. Speaking to students and recent graduates, it still appears to be the case that they have little experience writing unit tests as part of their studies, and generally no opportunity to write integration tests at all. Changing the mindset of writing tests last is an arduous process. I’ll admit I still labour over following the TDD cycle.
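One pass around the cycle can be sketched as follows; the discount feature and its numbers are invented purely for illustration:

```python
# A minimal sketch of one red-green-refactor cycle, using a
# hypothetical discount rule: orders over 100 get 20 off.

# Step 1 (red): write the failing tests first. At this point
# price_with_discount does not exist yet, so the tests fail.
def test_discount_applied_over_threshold():
    assert price_with_discount(120) == 100

def test_no_discount_under_threshold():
    assert price_with_discount(80) == 80

# Step 2 (green): write the least amount of code to make them pass.
def price_with_discount(price):
    return price - 20 if price > 100 else price

# Step 3 (refactor): tidy names and structure while the tests stay green.
```

The discipline is in stopping at "least amount of code": the implementation earns complexity only when a new failing test demands it.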


Nevertheless, the benefits are numerous. Many are documented across the Web, however one often overlooked advantage is the scenario’s dual purpose as feature documentation. Lengthy user manuals akin to War and Peace are no longer viable for clients who want to start using our software immediately.


Out of Time


Another aspect programmers and technical leads alike find excruciatingly difficult is providing accurate estimates. Humans struggle to determine how long things will take, regardless of whether the writing of integration and unit tests is included in those timelines. Against their better judgement, developers will provide estimates that are anything but concrete. Call it the programmer’s curse of perpetual optimism. Development will most likely take longer than anticipated as they are tormented by library conflicts, logic errors and other blockers. Omitting testing effort only adds fuel to the fire.


A common misconception is that an estimate is an exact measure of time from which we can project a delivery date. The noun estimate is defined as an approximate calculation or judgement of the value, number, quantity, or extent of something. Approximate is the key word in this definition. Despite our best intentions to include our testing effort, we often overshoot.



Estimating effort is exceptionally difficult, but cutting corners to meet deadlines results in users finding more defects


Sounding the trumpets at the grand unveiling of the latest and greatest tool may appear to be the right thing to do, even if corners were cut to get there. But writing unit and integration tests does not prove the absence of bugs, and manual user testing will not verify that functionality is defect free either. Arguably, users will find ever more defects in their testing cycles if this pattern continues.


Why should our customers be handed potentially substandard systems because we didn’t have time to write automated tests? Appreciation for the time of others is an important aspect of compassionate collaboration. By selfishly saving ourselves time and avoiding writing tests, we are suggesting it’s acceptable for our clients to spend more time searching for system shortcomings. A client’s place is not on a pedestal; developers and stakeholders should work together. But we must respect each other’s time.


Crossing the Threshold


Although our Happy Developer Guide advocates minimum coverage across all of our code bases, tools are required to enforce this benchmark. Reliance on developer diligence is not enough. Over the past year we have created several new projects, initially without any mandated thresholds on test coverage.
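Tooling makes the benchmark enforceable. As a hedged example for a Python code base, the pytest-cov plugin can fail the build outright when coverage slips below an agreed figure; the project name and the 80% threshold below are illustrative, not our actual values:

```shell
# Illustrative coverage gate using pytest-cov; "myproject" and the
# 80% figure are example values, not a real project or agreed minimum.
pytest --cov=myproject --cov-fail-under=80

# The equivalent coverage.py setting, kept in .coveragerc:
#   [report]
#   fail_under = 80
```

Wiring a command like this into continuous integration turns the definition of done from a written aspiration into a gate that a pull request cannot pass without meeting.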


Adding gates after the fact can foster frustration as programmers have to pick up the pieces of others not adhering to these axioms. Collective code ownership is a key quality of strong Agile teams. Each engineer should follow the Boy Scout rule, and leave the code base better than they found it.



Like a campground, we should leave code better than we found it to preserve its natural beauty


Given developers’ differing principles, you cannot rely on the strong, proud engineers to write the additional tests required to bring your code repository up to scratch. Avoid this animosity by including these limits from the beginning, encouraging compliance with your coverage commandments.


Thanks for reading!
