Beyond the Limit

Pondering Test Coverage Limits and Thresholds

1729, 1089, 42, 3.14159… History, pop culture and mathematics are littered with magical numbers. The fame of each sequence of digits is established in different ways. People will remember those constants they come to use regularly over time in equations. Or those that dictate a formal limit that they must follow.



What is the right number for test coverage percentage?


All good musings start with a question. What is the right percentage test coverage to enforce? Someone posing this quandary on social media got me thinking about test coverage again. What is the right number? Is there a right number at all? This week I revisit test coverage limits, focusing on what the limit should be, and mechanisms to enforce it.


Test for Echo


As expected, the social media responses to the aforementioned question varied. Typical answers vary between 80 and 100%. My own opinion is that it should be at least 90%. Searching the expanse of the Internet doesn’t give you a concrete answer either, with similar ranges being discussed. Differing opinions on the effectiveness of 100% coverage are also easy to find.


Inside Lottery Machine

Some engineers still see writing tests as a bonus ball moment, rather than a mandated part of feature development


Perceptions differ vastly across my workplace as well. I would love to say responses are similar to the above. In certain circles they are thankfully within that range. However, there are still some that see tests as a bonus in the development of new features. Some even argue that the logic is so simple to understand that writing tests is pointless. This lack of craftsmanship allows you to identify those developers unable to own the features they develop like a flashing green diamond above their Sim.


Test Pilot Blues


Irrespective of an engineer's dedication to the craft, the right number is one that is collectively agreed. Squads should be encouraged to aim high, rather than scrape the barrel for the lowest achievable threshold. Utilising lead engineers will help establish a high bar. The sole way to establish N% coverage as dogma is to have the team define N for themselves, and document it as part of the definition of done.


Strong lead developers will also be mindful that the right number depends on the current state of the project. Legacy codebases such as some that we own have low test coverage due to a previous lack of dedication to the automated testing cause.


Child in Chainmail

Legacy applications with historically poor coverage can cause developers to aim low in establishing their coverage metrics


Regardless of past sins, new components should not fall foul of the same poor practices. The team should agree a high threshold for all new components together. That threshold should be as high as the team can realistically sustain.


Put to the Test


Once test coverage threshold consensus has been achieved, it is vital to enforce the threshold. Coverage regression can be caused by several factors, which have been discussed previously. Of late, differing craftsmanship has been a lesser cause thanks to threshold quality gates, and the enforcement of strong coverage practices through regular pull requests.
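A threshold quality gate amounts to a very simple check wired into the build. A minimal sketch in Python, assuming the 90% figure discussed earlier and a coverage percentage supplied by whatever coverage tool the team already runs:

```python
import sys

def check_coverage_gate(coverage_pct: float, threshold: float = 90.0) -> bool:
    """Return True when the build meets the agreed coverage threshold."""
    return coverage_pct >= threshold

if __name__ == "__main__":
    # Hypothetical CI step: in practice this figure would be parsed from
    # your coverage tool's report rather than hard-coded.
    current = 87.5
    if not check_coverage_gate(current):
        sys.exit(f"Coverage {current}% is below the agreed 90.0% threshold")
```

Failing the build with a non-zero exit code is what turns a documented number into an enforced one.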


Deadlines have been the greatest single contributor to coverage dips in the development of recent features. Even the most diligent of programmers will cut corners when it gets hot in the kitchen. This may be driven by a lack of dedication to the practice of TDD. Tests are still seen as an exercise to be undertaken once something works. This week I've seen an engineer writing tests for a feature developed last sprint, who raised concerns that their inexperience meant it took considerably longer. This mindset drastically needs to change to ensure testing thresholds are adhered to and instilled among junior developers.


Timer on Laptop Screen

As we inch increasingly closer to deadlines, the first thing developers drop is writing automated tests


Going back to our legacy components, we need to be mindful of coverage dips when striving to improve our adoption. Gradually increasing thresholds is one way, but without regular discipline coverage can still drop between increments. It also means that once we overachieve, engineers can let coverage fall back to the bare threshold when the going gets tough. The use of delta gates should be considered to prevent such falls.
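A delta gate compares each build against the previous one rather than against a fixed bar, so overachievement cannot quietly erode. A minimal sketch, with the tolerance value purely illustrative:

```python
def check_delta_gate(previous_pct: float, current_pct: float,
                     tolerance: float = 0.0) -> bool:
    """A delta gate: coverage may never fall below the previous build's
    figure (minus an optional tolerance), even when it already sits above
    the hard minimum threshold."""
    return current_pct >= previous_pct - tolerance
```

A legacy component sitting at 45% passes as long as each change holds or improves that figure, which is exactly the upward trend a fixed low threshold fails to enforce.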


The Test of Time


This journey of discovery has helped me realise that there is no single solution to the coverage equation. Teams should strive to enforce a high standard that they can work towards, rather than imposing a minimum standard that has already been achieved.


Hands in Middle

Teams must agree on test coverage metrics together to build trust and consensus


Collective agreement on what the percentage coverage should be is important. I cannot impose my own 90% preference on the entire team. How can they possibly buy into a number that they don't consider magical? Factors such as the current state of the codebase can be a starting point. Legacy codebases will require the use of delta gates to ensure an upward trend towards your desired result. It's by no means the end of the journey. Pick your percentage wisely.


Thanks for reading!

My Jekyll Doesn’t Hide

Client Visibility of Technical Debt Over Feature Stories

We all know that old idiom about skeletons in the closet. And the other advising us not to air our dirty laundry in public. Secrets in life are constructs that we prefer to keep hidden to prevent public shaming.

Many monsters lurk in our legacy code due to a lack of discipline in addressing technical debt

Secrets are also common to our technology platforms. Technical debt is lurking in every corner of our legacy systems. When not addressed in a timely fashion, these items become more costly to remedy. Without discipline, our newer platforms can suffer the same fate.

Hiding certain misgivings may be human nature. However, lack of transparency with clients in the accrual and remediation of technical debt is a common pitfall in software development practice. This week I reflect on the lack of transparency of technical debt with feature stories, and the need for product teams to engage with clients in prioritisation of all items together.

Hide Away Blues

My understanding of why technical hygiene tasks may be better off hidden doesn't just stem from empathy. Our current setup purposefully hides them. A separate backlog of hygiene items is maintained and shared among multiple squads. While this approach has led to a reduction of known debt over the years, we've also been presented with mixed results.

If technical debt is not repaid promptly, the cost of implementation increases exponentially

Small items are tackled easily with this approach, without severely impacting functional achievements. The value of regularly addressing smaller hygiene items should not be underestimated. Nevertheless, a shared backlog can introduce ownership issues for small items such as minor library upgrades. Those can be handled alongside regular development if collective ownership is instilled.

Yet the challenge remains on the handling of larger remediations. These tend to undergo the pass the parcel treatment. No matter how many developers make progress on the item, the final feature takes significant time to complete.

Somethin’ to Hide

Legacy systems introduce significant challenges to hygiene transparency. The aforementioned sizeable items exist purely because the effort was not invested while they were easier to manage. In our case, we're now having to pay the accrued interest. Our newer components don't suffer from these trials.

Hiding hygiene items in the closet projects a false impression of system reliability

Yet our hidden hygiene history paints a picture that these systems are reliable, supportable and maintainable. In reality, these systems are rarely changed, so older library dependencies are still utilised. Changes are, as a result, painful to implement: an upgrade spanning several versions requires more extensive testing, and manual testing at that, due to the lack of automated tests.

Motivating developers to undertake extensive work on such systems is challenging. Sure the sense of achievement at the end is such a high. Nevertheless, strong leadership is required to recognise and reward these efforts.

Where Do I Hide?

Once remediation is complete, senior stakeholder education is another hurdle we must jump in the race to production. If these items have not been transparent from the beginning, business sign-off of the changes and resulting testing are difficult to obtain.

Engineers need to be comfortable explaining the reasons for remediating technical debt in language clients can understand

The primary motivation for only discussing such work post-implementation appears to stem from the technologist's inherent fear of explaining technical details in a comprehensible format. On a small level, I've witnessed engineers object to explaining the technical detail in stand-ups when Business Analysts and the Product Owner are present. Like any skill it needs to be refined over time, and I would suggest that regular discussion justifying why we are undertaking such hygiene work is appreciated far more than a last-minute heads up.

Once hygiene work is completed, does it become any easier to justify why the work is secretly prioritised? By not being transparent with these items, we are failing to trust that a non-technical Product Owner will engage and understand why quality and reliability are important. A common backlog of feature and technical debt items is the sole mechanism for building mutual trust.

Hide And Seek

I’ve recently discovered that hiding of technical debt is not limited to just my team, but is an endemic problem across larger organisations. These fears are not limited to explaining to business stakeholders, but senior technology management as well. One colleague raised a concern that explaining these items can result in management becoming bogged down in unnecessary detail.

Transparency, transparency, transparency

Transparency is the key to Agile adoption. Ownership of the platform is a collective effort between business and technology stakeholders. A regular complaint is that business units don't engage with Agile adoption, and that clients provide insufficient time to support technology in the development of new features. If technology wants to be treated as an equal partner, it needs to be transparent with the work it is undertaking to maintain the old, as well as build the new.

Thanks for reading!

Making Plans

Emerging Agile Planning Pitfalls

Life is filled with best-laid plans. From recent January resolutions to our travel bucket lists, everyone attempts to form short- and long-term life milestones. Yet sometimes we need to reset the timeline. A recent goal of mine, undertaking initial coaching training, has also triggered reflections on how effective and evolved our Agile practices are.

When grassroots Agile is employed, bad habits can plague all ceremonies, including planning

Regular planning and grooming are important activities in any effective Agile practice. They ensure we embody the manifesto principle of responding to change over blindly following a plan. Following my recent reset, I reflect on some of the pitfalls that have plagued our planning processes. Furthermore, in the spirit of continuous improvement, I outline potential changes currently being undertaken to kick these habits.

Makin’ Plans

A key consideration should be that an element of upfront planning is necessary. Winging it is just not an option. The project mindset of numerous large organisations in my experience leads to one of two bad patterns. Significant upfront planning that delays development and refuses to adapt to evolving client needs. Or no upfront planning at all, resulting in a product that is simply a set of disjointed features.

There is a danger that development teams initially misinterpret responding to change as not requiring any upfront planning at all. That is certainly my experience. Without an initial direction and clearly communicated business strategy, developers will struggle to appreciate how distinct features connect into a centralised product.

Upfront mapping activities, using techniques such as User Story Mapping, are essential to defining a product strategy and roadmap

To ensure consistent client value is delivered, techniques such as User Story Mapping, coined by Jeff Patton, should be leveraged to give us our initial backlog items. The added benefit is that such exercises help us establish a baseline for a product roadmap. Furthermore, the strategy helps identify an initial MVP that can then respond to change through grooming, planning and estimation tasks.

I Want to Know Your Plans

Client engagement is one of the biggest challenges that we currently face. Rather than being an intentional act, it is simply a byproduct of their busy work lives. The common fix is to introduce a mediator role between users and technology to free up the time. Although this may be perceived to save time, this instance of The Telephone Game can unintentionally influence the product deliverables.

The more people you add to the communication channel, the more disjointed the deliverables and product strategy become

A commitment to direct collaboration from expert users is the sole mechanism to ensure we build the right product. A strong Product Owner will ensure all features take us step by step towards the product goal. User Story Mapping is merely the start of the journey. Our biggest mistake is Business Analysts performing the prioritisation and grooming individually. A close second would be not providing sufficient training on the roles and responsibilities of a successful Product Owner.

Rather than using our BA mediators, the Product Owner must contribute to planning sessions, including regular backlog review. This should be performed in conjunction with the development team and analysts in a centralised medium, with all updates committed against the story. That prevents the chaotic grooming ceremony I observed recently for another squad, where previously agreed acceptance criteria were discussed yet again.

Plans Within Plans

Over the years we have experimented with various formats for breaking down and estimating stories. One of the biggest mistakes I've seen several teams make is planning and estimating stories at the same time. To date, a lack of Product Owner presence in our planning sessions has meant that planning work for the upcoming sprint has been a development-team-specific activity. This leads to a lack of transparency when items overrun, or even in the communication of items currently under development.

Keeping planning activities amongst just the engineers reduces client transparency

With our current focus on improving client engagement, attendance by the Product Owner in planning sessions has been agreed. This means that the breaking-down and estimating portions need to be conducted separately. Having separate meetings avoids bamboozling our owner with technical detail. Coupled with improved backlog grooming, we can also ensure stories are ready to break down and estimate.

Stealing Time From the Faulty Plan

To paraphrase a well known saying, I'll leave the worst to last. The cardinal sin I've witnessed teams committing is estimating in time and not points. Engineers will still refer to a story taking X points, but really they have established a one-to-one mapping between these disparate constructs. Time does not work for several reasons. A simple search will reveal numerous opinions on estimation pitfalls. I of course have several arguments myself.

Firstly, optimistic programmers quite simply cannot give accurate estimates of how long something will take them to complete. Blockers and manual mistakes are never counted in these estimates. Other commitments such as hackathons, one-day holidays and knowledge shares are rarely included either. Yet clients incorrectly assume that X days is an accurate estimate, which they will question when delivery is delayed to X+1 and beyond.

Using the passage of time for estimates introduces several planning challenges

Another relates to developer experience. Particular tasks that take one day for one engineer will take several for another depending on experience. That’s not to say that we should always assign to the more experienced programmer. Good technical leaders will ensure all developers are grown and supported to foster a skills balance within the team.

Use of points in conjunction with velocity is key to addressing these issues. Breaking the point-time relationship is going to be an exceptionally difficult undertaking in my initial coaching attempt. Agreeing a day-agnostic points system is the first step in this journey. Soon enough I'll find out how bumpy the ride will be.
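The points-plus-velocity mechanism can be sketched in a few lines. This is only an illustration of the arithmetic, with the sprint figures and three-sprint window entirely made up for the example:

```python
import math

def average_velocity(completed_points, window: int = 3) -> float:
    """Rolling average of story points completed over the last few sprints."""
    recent = completed_points[-window:]
    return sum(recent) / len(recent)

def sprints_remaining(backlog_points: int, velocity: float) -> int:
    """Rough forecast of how many sprints clear the remaining backlog."""
    return math.ceil(backlog_points / velocity)

# Hypothetical history: points delivered in each of the last four sprints.
velocity = average_velocity([18, 20, 24, 22])
forecast = sprints_remaining(110, velocity)
```

The forecast comes from observed throughput rather than anyone's optimistic day estimate, which is precisely what breaks the one-to-one point-to-day mapping.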

Thanks for reading my reflections!

Just the Two of Us

Effects of Work Environment on Pair Programming Productivity

Lennon and McCartney. Tom and Jerry. Macaroni and Cheese. Life is filled with famous duos. It is indeed true that two heads are better than one. Collaboration in all forms allows for diversification of thought that more often than not contributes to a better solution.

It is this same notion that drives the driver navigator relationship of pair programming. With origins rooted in XP, it allows for programmers to work together on all code writing tasks. Despite adoption challenges, firms that actively practice pair programming state many benefits.

Pair programming has been known to yield productivity benefits, if work environments support it effectively

This week I was excited to see some local developers trying out pair programming without encouragement from myself. For us this is a long time coming. Hence my exhibition of extreme enthusiasm. Overall results were positive, minus the identification of a few workspace-specific quirks. This week I reflect on environmental factors that affect pair programming productivity, and that otherwise encourage our continued solo programming efforts.

Two Hearts

Although not directly related to workspace, it is important to note that developer attitude is important. The personality traits of the team are just one environmental factor that affects pair programming adoption.

Alone we can do so little; together we can do so much.

Helen Keller

You can lead a horse to water, but you can't make it drink. Stubborn software engineers who prefer solitary coding are not uncommon. The team cannot just leave such individuals to work alone. One toxic perspective will spread throughout the developer population. Not just in causing dismissal of the practice, but also in the accumulation of tension. Employing open-minded programmers is the sole method to prevent the accumulation of hostility.

When Two Worlds Collide

In an ideal world, development teams are co-located to strengthen collaboration. Certainly our ongoing strategy is to try and reduce the number of regions any given team spans. With current expertise spread across the globe, co-location remains a distant dream. This makes full pairing on all features challenging.

Organisations need to invest in powerful collaboration tools to support cross-regional development. Screen shares and phones are a great initial step. They will allow a common view of the code. Many also have integrated features such as digital whiteboards that, where exposed, can help with design. Regardless, these tools will not guarantee a successful pairing.

Sharing across regions is a necessary evil that makes pairing problematic

Building rapport over the phone is difficult. Psychologists suggest rapport is built more quickly when eye contact is established between people. Pairs are regularly rotated, so building rapport quickly is important to produce productive pairs. Web cams can help build strong developer relationships across regions. In addition to pairing, they can also be utilised across stand-ups and retrospectives alike to build team bonds.

Two Old Friends

Assuming the regional and opinionated impediments have been removed, we must also consider the physical barriers. Collaborating across a single machine means individual workspaces must support two people sitting and viewing code.

Pedestals are the biggest physical impediment that we have to at-desk collaboration today. The traditional drawers stick out like a sore thumb, enforcing a single seat rule at any desk. This leaves your observer squinting at the code from further back. Or sitting on the pedestal and perching over, which is also not ideal.

Pedestals are the biggest physical impediment to pair programming that we currently have

Monitors must be height-adjustable and able to rotate, so code can be shared effectively at different eye levels. Even this doesn't solve the pedestal problem. Under-desk pedestals that leave space for chairs should be preferred, so your moveable chairs are useful for more than just chair races. Or a pedestal-stool hybrid, subject to health and safety constraints (yes, really).

Two Minutes Silence

Be mindful of the sounds of the environment as well as the sights. The environment should not broadcast the buzz of conversations. One of the biggest issues with our open-floor setup is the travel of noise. Laughs and repartee reverberate across the area. It's not the first time that I've been vocal about the need for noise-cancelling headphones to support concentration, both in blog and in person.

Even when conversing, programming requires significant thought. Therefore the environment needs to reduce the transfer of noise. Consider noise-reduction technologies to drown out the noise. Furthermore, be mindful of the layout. Consider clustering teams together in small huddle spaces to reduce the buzz of irrelevant chatter. Drop-in booths are great for escape, but reliance on them for a full day of pairing is not sustainable.

Noise cancelling mechanisms, combined with considered layout, are required to reduce the noise from coding discussions

Pair programming requires a balanced ecosystem to ensure ease of practice and attainment of benefits. Attitude, co-location and workspace considerations are just as important as management buy-in to foster this collaboration technique. Note I suggest it is supported, not explicitly enforced. Hopefully our first foray into elective pair programming will yield benefits. Watch this space!

Thanks for reading!

The Seeker

Evaluating Data and Exception Driven Workflows in UI Design

New year, new start. January brings news of resolutions into our conversations and social media feeds. Brought on by last year's triumphs and tribulations, we use data to inform how we should improve in the upcoming year. Deriving meaning from the events of the prior year, if you will.

With a new year, people seek resolutions to improve meaning and effectiveness of their lives

The majority of systems I've developed over the years are data-intensive. Past and present design considerations affect how users process information and achieve their goals. With the start of a new year, I examine data and exception driven workflows, reflecting on their differing usage and future impact.

Where Do I Hide?

Historically, I’ve witnessed development teams providing expansive grids of data for convenience. To the extent that I’ve recently discussed the need to consider visualisation alternatives in our systems. Exposing all data columns a user might need to see at some point does reduce time to market for clients. It allows us to throw grid controls or Business Intelligence software on top of sources without giving much consideration to how data is actually used. This introduces unintended challenges into business processes.

Exposing all data increases the cognitive load of making decisions and reacting to events. Originally, engineers exhibited good intentions in allowing easy analysis and the flexibility of transforming data by varying dimensions. But manually seeking context and identifying patterns is not always acceptable. Where results dictate a call to action, users exhibit a higher cognitive load. A data driven workflow introduces the need to constantly validate decisions. For new joiners, such processes require the accrual of knowledge over time to master business processes.

Care should be taken to identify key user goals over throwing grids and BI software over data and forcing users to dig through their data

These challenges teach us that presenting all data fields is not always the right approach. Good engineers should be comfortable asking questions and forming an understanding of the processes clients follow, and the goals they strive to achieve. Developers should collaborate with users and designers alike rather than wait for permission to try something new.

Tell Me a Tale

Notification-based flows are a viable alternative to presenting vast expansive grids of data. If users are seeking answers from our data, there is no reason the system cannot present the answer itself, thus informing users of what action to take.

This does address the minimum knowledge requirements of a data driven application. New clients can begin using applications from the beginning without a significant fear of making mistakes, or of executing particular actions in the incorrect order. Dedicated engineers should focus on alleviating the cognitive load of users. Reducing the training overhead of our users should be a key consideration in our product strategy.

Notifications, in moderation, can help users identify actions far more easily than searching through datasets

Designing such a workflow introduces challenges. One key threat to user adoption is alert volume. The possibility of cognitive overload remains unless caution is taken to send correctly targeted alerts. Notifications should only be sent to the intended audience. Understanding the target population can help identify subscriptions that reduce the cognitive load for all consumers.
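Subscription-based targeting can be sketched very simply; a minimal illustration in Python, where the topic and user names are invented for the example:

```python
from collections import defaultdict

class NotificationRouter:
    """Routes each alert only to users subscribed to its topic, keeping
    alert volumes, and therefore cognitive load, down."""

    def __init__(self) -> None:
        self._subscriptions = defaultdict(set)

    def subscribe(self, user: str, topic: str) -> None:
        self._subscriptions[topic].add(user)

    def recipients(self, topic: str) -> set:
        # Alerts on topics nobody subscribed to simply go nowhere.
        return set(self._subscriptions[topic])
```

Routing by explicit subscription means each consumer only ever sees the alerts they opted into, rather than the full firehose.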

Migrating from manual procedures to automated notifications relies heavily on trust. Stakeholders need to be satisfied that the system generates the correct decision. Having comfort in the underlying data is a critical starting point. Only then can we construct a reliable notification based approach.

Citizens of Tomorrow

Creating an alerting mechanism is the start and not the end of the automation journey. Once notifications have been identified, there is no reason that the risk of manual error could not be eliminated by automating the corresponding action.

Regardless of the benefits of automating repetitive manual steps, false positives and negatives will affect uptake. Especially in heavily controlled environments where incorrect decisions result in potentially disastrous consequences. Humans will begin to disregard particular alerts if regularly presented with false positives.

Recent experiences using automated tickets got me thinking about the desensitisation of users to false results

A recent experience over Christmas vacation with mobile ticketing brought home the potential impact of such false results. My husband's bus ticket always failed to validate. The reader emitted an angry beep every time he boarded a bus. Every time without fail, the driver would roll his eyes and wave him through. It became abundantly clear that false negatives were such a common occurrence that failed validations were always dismissed as false alarms.

Failure to strike the balance of these false results against true negatives will erode client trust in the system. Confidence in both the workflow and action automation can only be built through validation and measurement of commonly agreed KPIs. Yet another case where collaboration between software engineers and users is the sole solution to building solutions that address client problems.

Thanks for reading!

Grand Designs

Challenges Integrating Design Thinking and Agile Development Practices

Have you ever seen rabbits in the clouds? Or perhaps a face in the flames? Psychology dictates that in our constant search for patterns, humans look for context and meaning in the most insignificant of constructs.

There is an element of order in building software. Design and architectural patterns help us avoid the same old pitfalls. The old ways must make way for new inventions. Out of the box thinking is the sole method of solving new problems. Mechanisms that encourage innovative thinking can stop us seeking patterns and ignite ingenious ideas.

Can Design Thinking encourage out of the box thinking and integrate with our Agile development practices?

Design Thinking is regularly touted as a process for the development of innovative solutions by designers. Yet integration with software development practices is critical to ensure the successful adoption of the solution. As we embark on our latest development journey, I ponder the potential for integrating Design Thinking and Agile practices, reflecting on my recent readings and challenges.

Think Again

One key concern with utilisation of Design Thinking is the premise that it allows definition of a full solution upfront. Locking in every requirement upfront, regardless of the published format, doesn’t allow for lessons to be learned.

Design Thinking and Agile methodologies can encourage iterative design and development

As illustrated above, Design Thinking is an iterative circle, not a fixed-length line. It doesn't have to be weeks of upfront workshops. Iterative development must start from the beginning, in the empathy stages. Some teams have had success using Sprint Zero workshops to establish common ground and agree the required high engagement levels between technologists and the business. A session where all parties identify the challenges and the initial story set sounds like an ideal compromise.

Utilising a cyclic method in both Design Thinking and Agile practices allows more than just the product definition to evolve. Knowledge of user processes is not static. Our education undergoes peaks and troughs not only as features are adopted, but also as they change to address new challenges. Relying on a single initial empathy-gathering stage prevents the identification of additional walls that are built as user processes mutate.

Think How It’s Gonna Be

There is a common misconception that exercising agility means you mindlessly react to new requests. Some think it is a reactionary journey, where teams take every turn on the route, rather than driving straight down the motorway. At times I have felt our journey has turned into blind country backroad turns taking us further into darkness.

Building a solution to such complex business problems needs us to understand where we are going and how it will solve our clients' problems. A product strategy serves as the GPS on our Agile journey. Challenges and high-level themes are identified using the empathy and define stages. As stated previously, precise stories can be defined over time in line with our goals, rather than in an initial big bang approach.

The Hills format designed under the IBM Design Thinking format does look rather familiar…

The definition of hills may provide a mapping between these processes. Recent reading this week helped me discover these tools for defining the identified problem, with a corresponding user outcome. Peering at the above example, doesn't the who, what, wow format appear surprisingly similar to our favourite user story format? Given this similarity, formulating a journey using user story mapping techniques can help you identify a product strategy.

Think of Tomorrow

Designers are often the primary facilitators driving Design Thinking processes. A common challenge discussed online is how to foster collaboration between software developers and designers. Programmers, designers and business users alike all speak different languages. Nevertheless, early engagement with all three parties will establish the required communication channels.

While formulating blue sky ideas, sometimes we need to keep our feet closer to the ground and focus on technically feasible solutions

Although blue sky thinking is important for ideation, how do you assess technical feasibility? Even in the dream evolutionary paradigm, new solution ideas will need to integrate with the existing software.

Excluding technologists from the table means the technical feasibility of any solution is not considered. Designers and users require early engagement from technologists to help advise on the product details before the prototyping stage. Software engineers need to meet them halfway and collaborate to build the best possible product.

Think it Over

We have been travelling down the Agile road for several years now. Clearly this is just the start of our journey with Design Thinking. Musing over any identified obstacles in our software development has always helped us employ continuous improvement. Owning our process means we must strive to make things better.

Be mindful of the walls being built between technologists, clients and designers, and break them down

Integration with Design Thinking has presented blockages in our process as we wait for stories to begin prioritisation. Regardless of the above musings, it is vital to consider this the start of the road. This is a great opportunity to learn best practices of design to improve system workflow.

Collective ownership by clients, software developers and designers is vital to ensure the success of our latest endeavour. Continuing the conversation will ensure we address some of our challenges in defining our solutions together.

Thanks for reading!

Space Oddity

The Importance of Breathing Space in Software Development

Writing code is a pretty thought-intensive activity: breaking down the individual steps, reviewing API documentation, testing and debugging as you go. We switch between some pretty intensive tasks. The satisfaction when the problem is solved is indeed thrilling. Any factors that impede these steps will prevent engineers from yielding the desired results.

I have seen first hand how the work environment can affect developer productivity. Engineers in our environment still rely on noise-cancelling headphones to become engrossed in their work. The state of their calendar is another influencing factor that should be explored.

Whether we are collaborating or working alone, we need space in our schedules to be productive

A recent catch up with a colleague got me thinking about space. They were very frank about how extending free time in their calendar caused their productivity to thrive. To mark my thirtieth blog post, I reflect on how space has contributed to my own productivity over the course of my career, and how my tactics to focus on deliverables have evolved.

A Sky Full of Stars

Back in the good ol’ days, the ratio of scheduled meetings to space in my calendar generally favoured the latter. My productivity was measured by the features I developed. In the majority of my early projects, meetings were reserved for key updates only.

Scheduled meetings are a big productivity killer when they outweigh unscheduled time, especially if there are many with small gaps between them. Keeping meetings to a minimum is required for developer productivity to thrive.

Some may think stand ups tip the scales on too many meetings, however my own experiences have found them to be a far lighter touch

I’ve seen differing attitudes to the number of meetings dictated by Agile methodologies such as Scrum. Some consider the meetings too frequent, or a surveillance mechanism to monitor developers. I recently reflected on different experiences, where under a waterfall model I was subjected to a myriad of update meetings that impeded my own deadlines. This is a clear example of how a lack of space to code affects developer deadlines. Becoming engrossed in a coding task only to switch out thirty minutes later will affect the quality of any code submitted for review.

Supermassive Black Hole

Minimal meetings do more than produce programmer productivity. In my early career, they allowed me to build a reputation for reliable delivery. They also granted me the opportunity to develop expert knowledge in the technologies I used daily.

Once in a position of knowledge, you are faced with two options. The first is to keep your cards close to your chest to ensure a dominant position. The second is to disseminate that knowledge among colleagues to ensure a long-term strategy. While the latter is always the best option, it has presented me with challenges on an old high-profile project.

Imparting knowledge to less experienced colleagues is a legitimate, productive task that should be given equal weight to coding

Providing training is not always a scheduled classroom activity. The majority of the training we receive comes as part of the daily grind. More commonly it is ad-hoc conversations as developers encounter issues with their own deliverables. As an oracle of expertise, you need to embrace that providing on-the-job training is a valid pursuit.

Good managers will protect inexperienced engineers through the use of office hours, where it makes sense. Developers will rarely opt for such a solution themselves; they will aim to please even as their own individual items start to suffer. Specific knowledge sharing meetings have been the right call in half of the situations that I’ve encountered. Then again, spending all day training and all night coding is, in hindsight, never the right call!


Transitions to senior engineer and manager make space harder to find. Your success is no longer measured by the code you write, but by the features delivered by others. Quickly you become a facilitator, rather than a technical contributor.

This is certainly true in my case. I did touch code this week. Shock horror! However, my commits are often small tidy ups to improve software quality. The ratio of meetings to space very much lies in the former camp now.

Dedicated thinking time, be it for code, slides or just getting stuff done, now has to be scheduled for me. Fake calendar entries are a great mechanism for achieving that.

Even leaders need gaps to support developer productivity

The lack of gaps in my calendar is fast becoming a running joke in the office. To be productive I also need space to support and guide developers when they need it. All my hours need to be office hours, not just a couple of dedicated sessions per week. No matter how senior you get, you need space in your calendar to get the job done. The question becomes whether it is explicitly scheduled or available freely.

Thanks for reading!

Start Me Up

Managing The Pitfalls of Persistent Testing Environments

Despite living over two hundred miles from my immediate family, I have always been called upon as dedicated tech support when gadgets go wrong. I can regale many stories of fixing PCs, printers and mobiles on family visits or over phone calls. The age-old strategy of switching it off and on again, or some variant, has proven successful most of the time.

Similar tactics are employed to keep some of our persistent testing environments alive. I dream of a day when we no longer rely on permanent testing environments. The ultimate fantasy would be for all services, queues and any other infrastructure to be spun up and down at the click of a button.

Managing our legacy testing environments is like trekking through an overgrown and unfamiliar jungle

While we have made significant strides in some of our newer microservice components, our legacy applications fall far short of this goal. To this day they are reliant on physical infrastructure that cannot yet be created on demand. Here I reflect on past and present challenges of maintaining our legacy testing environments, and the effects on team productivity.

Constructing the Connection

Accommodating connected systems has been the greatest complication of late. Working in a large multinational corporation introduces many challenges to agility. In the midst of our current transformation, adoption of agile techniques is taking time to permeate through the organisation. While the message echoes through the grapevine, we need to engage with traditional waterfall teams and Agile evangelists alike.

In large Agile transformations, opinions are changed via grapevine whispers as the message ripples through the organisation

Managing upstream applications with less frequent releases means also managing their expectations on our testing environment availability. If their instance is up and running for three months to support a long laboured testing cycle, their default expectation is ours must also be continually available for the same period. This manifests itself in urgent, short notice requests that are expected to be fulfilled, fostering frustration among our developers.

Communication channels need to be well established on both sides for this relationship to succeed. Striking a balance between facilitating their testing as well as our own is critical. Agreements on notice and availability of our testing environments have only just been established. The jury is still out on the effectiveness of these SLAs. Only time will tell if our ongoing strategic transformation will better support all applications.

Inside Out

Outside perceptions are important, but looking inward at the effect on team productivity has provided some fascinating observations. Despite having support rotations to ease the burden, fixing fractured environments often falls on the teams testing features. Obviously production takes precedence. Regardless, even a short stint of fixing testing environments that break every few days is far from satisfying.

Automated deployments only get us so far. Without collective ownership and automated verification techniques, shared components can be left in a broken state. Nothing breeds animosity more than feeling you are continually firefighting issues introduced by another squad.
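As a minimal sketch of what such automated verification might look like (every probe name here is hypothetical), a post-deploy smoke check can flag a broken shared component before another squad burns a testing cycle on it:

```python
# Hypothetical post-deploy smoke check for a shared testing environment.
# Each probe pairs a component name with a zero-argument callable that
# returns True when healthy; real probes would hit health endpoints,
# check queue depths, or verify database migrations.

def run_smoke_checks(probes):
    """Run every probe; return (all_passed, names of failing probes)."""
    failures = [name for name, check in probes if not check()]
    return (not failures, failures)

if __name__ == "__main__":
    # Illustrative probes only; in practice these would call services.
    probes = [
        ("app-health", lambda: True),
        ("queue-depth", lambda: True),
        ("db-migrations", lambda: False),  # the broken shared component
    ]
    ok, failing = run_smoke_checks(probes)
    print("PASS" if ok else "FAIL: " + ", ".join(failing))
```

Wired into the deployment pipeline, a failing check attributes the breakage to the deploy that caused it, rather than leaving the next squad to discover it mid-test.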

Hitting the reset button restores order for a little while, at the expense of building engineer expertise

The introduction of a reset button process to refresh the environment has addressed many of these issues. An unintended side-effect has been that engineers are less likely to investigate and diagnose issues before applying the fix. Mentoring by more experienced engineers addresses knowledge gaps. Nevertheless, the level of ownership exhibited by senior developers takes far more time to instil in more junior programmers.

Stop Breaking Down

Continuous improvement is an imperative technique for addressing the trials of our testing environments. The aforementioned reset button and communication protocols with other teams are great strides forward by the team. These small increments should be nurtured by managers, and balanced with the delivery of client features.

These changes can only go so far. A legacy system will remain legacy without significant intervention to reduce the behemoth of technical debt. Microservices and utilisation of container frameworks can be used to better scale solutions. Nevertheless, this solution should be used with caution to avoid creating a distributed monolithic monster.

The legacy application monster will continue to live until we commit to the significant intervention needed to eradicate technical debt

Infrastructure such as queues and databases is the greater challenge to address. Large organisations need to invest in technologies for generating full application environments, including communication and persistence layers. Replacing the reset button with start and stop will put an end to the ongoing productivity killer that is our testing environments.
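As a sketch of what start and stop could mean in practice, assuming a containerised stack and with every image and service name purely illustrative, the application, queue and persistence layers can be declared together and created or destroyed on demand:

```yaml
# Hypothetical on-demand testing environment. `docker compose up -d`
# starts the whole stack; `docker compose down -v` destroys it,
# including any state, so every test cycle begins from a clean slate.
services:
  app:
    image: example/legacy-app:test   # illustrative application image
    depends_on: [queue, db]
  queue:
    image: rabbitmq:3                # stands in for the shared queues
  db:
    image: postgres:15               # disposable persistence layer
    environment:
      POSTGRES_PASSWORD: test-only
```

Starting and stopping the environment this way makes the reset button redundant: instead of fixing a broken environment, you simply recreate it.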

Thanks for reading!

Bad Reputation

How Poor Leadership Taints Software Development Practices

Everything evolves. Technology is a common example of the ongoing digital revolution, with new frameworks and languages appearing at a constant pace. The field of how we engineer software is also changing, with new practices and techniques being defined every few weeks. Agile has existed in name since that infamous conference in 2001. Since then, opinions on its adoption have existed in equal measure.

I’m fortunate enough to work with engaged colleagues who read as much as me. One of our team recently found a 2015 article decrying Agile techniques, including some aspects of Scrum, which has caused considerable debate in the office.

The entry is… interesting. It certainly lives up to the initial disclaimer of rants, essays and diatribes. Some points, such as the effect of open plan office spaces on work rate, align with my recent musings on the effect of workspace on developer productivity. Other musings raise valid concerns over the deprioritisation of technical debt in favour of user features, which can be a symptom of poor craftsmanship.

Weak leadership will derail software delivery, regardless of whether Agile or Waterfall practices are adopted

The main premise of this article is to call out how terrible Agile, and Scrum in particular, are without proposing alternative solutions. Digging into the details, I find the article to be an unfortunate symptom of bad experiences. Here I reflect on my experiences of developing software over the last seven years in Agile and Waterfall environments. Not to rebut, but simply to highlight how weak leadership can taint practices within software development.

I Want You

The Technology industry has greatly suffered from sweeping statements regarding developer stereotypes. The quintessential developer that people imagine is still the pale antisocial guy sitting in the corner furiously typing away late into the night to create his vision.

Programmers tend not to be great at managing clients. We’re very literal people. We like systems that have precisely defined behaviors. This makes working with non-technical people harder … because we tend to take every request literally rather than trying to figure out what they actually want.

Michael O’Church, Why “Agile” and especially Scrum are terrible

Long gone are the days when programming was a solitary occupation. Working with people, technical or not, is a challenge. We are all fundamentally different. In business driven domains technologists may speak the same language, but ideas can be presented differently based on the individual. Jargon between different frameworks and languages contributes to these challenges. Our recent rounds of discussing Angular and REST services are a classic example of engineers themselves speaking different languages.

Coders need to choose collaboration over solitary programming to build software

Clients have diverging opinions on what features they want. Fostering curiosity in engineers is vital to encourage them to figure out what users want. Our strongest developers sit with our clients regularly and observe their processes. By understanding the process, and the technical possibilities, those engineers propose great solutions to the problems our users face. These ideas often differ to the stated client request, and have often been accepted as the preferred solution. Leaders will encourage such collaboration, and see that this time is better spent than daily pinning programmers to keyboards, expecting a churn of N lines of code per day.

Wherever You Will Go

Product strategy is an often overlooked concept in both the Waterfall and Agile projects that I have worked on over the years. I’ve had different successes and failures utilising both paradigms. A long-term vision needs to be balanced with smaller, more manageable deliverables.

Corporate Agile, removed from the consulting environment, goes further and assumes that the engineers aren’t smart enough to figure out what their internal “customers” want. This means that the work gets atomized into “user stories” and “iterations” that often strip a sense of accomplishment from the work, as well as any hope of setting a long-term vision for where things are going.

Michael O’Church, Why “Agile” and especially Scrum are terrible

I’ll admit the strategy surrounding the product is often obfuscated from developers. I’ve been fortunate to experience the empowerment of writing user stories collaboratively with users that provided an explicit benefit. However, with both paradigms I have experienced a lack of clarity on what the wider goal is, and what the roadmap is to achieve that goal.

A transparent product strategy ensures we are all competing in the same race

One historic project I worked on, managed in a Waterfall fashion, simply had a goal of automating calculations, with no strategy for exposing users to the results. This was partially down to a refusal to adapt to change, resulting in an unsuitable solution being presented months down the line. In comparison, our early Agile experiences resulted in lots of small, useful features being delivered in a matter of weeks that combined didn’t form a cohesive workflow. Talented leaders will foster a sense of purpose, promoting passion among programmers through a defined product strategy.

If your firm is destined to be business-driven, that’s fine. Don’t hire full-time engineers, though, if you want talent … Good engineers want to work in engineer-driven firms where they will be calling shots regarding what gets worked on, without having to justify themselves to “scrum masters” and “product owners” and layers of non-technical management.

Michael O’Church, Why “Agile” and especially Scrum are terrible

Successful technology solutions become ingrained into our regular routines. Early innovations at PARC labs on smaller devices established that their success is dependent on their usage becoming knitted into the fabric of daily life. Our strongest developers understand user needs as well as the technical challenges. When they leave, they take that knowledge with them. As leaders it’s important for us to retain our full-time talent. Hiring consultants will help us address technical challenges, at the expense of understanding client needs.

I’ll Be Watching You

No one wants to feel micromanaged. Agile is advertised as fostering collective ownership among developers, over the traditional top-down management profile of Waterfall. I’ve had differing experiences of adoption of nanny-state surveillance by managers.

Scrum is sold as a process for “removing impediments”, which is a nice way of saying “spotting slackers”. The problem with it is that it creates more underperformers than it roots out. It’s a surveillance state that requires individual engineers to provide fine-grained visibility into their work and rate of productivity.

Michael O’Church, Why “Agile” and especially Scrum are terrible

Levels of supervision depend on the levels of trust exhibited by leads. One of my last frustrations of a Waterfall project was the sheer number of status meetings I had to attend to satisfy the obsessions of the Project Manager and Technical Manager. Deliverables were severely impacted as we spent more time talking about our deliverables than actually producing them.

I’ve found management keep a far more watchful eye under Waterfall development than in teams where Agile has been adopted

By comparison, my experiences of Scrum and Kanban have been far lighter touch. Stand ups are more around communicating status and voicing blockers. I can then engage with colleagues to remediate any problems. The culture of these two teams has had more of a contributing factor in my experience than the software management paradigm.

No Silver Bullet

Across all software development techniques, there is considerable debate on the best approach. Large organisations prefer to establish a one-size-fits-all approach to software development. It’s human nature to want what we want delivered more quickly.

Scrum is the worst, with its silliness around two-week “iterations”. It induces needless anxiety about microfluctuations in one’s own productivity. There’s absolutely no evidence that any of this snake oil actually makes things get done quicker or better in the long run. It just makes people nervous. There are many in business who think that this is a good thing because they’ll “work faster”.

Michael O’Church, Why “Agile” and especially Scrum are terrible

Expectations do need to be managed to ensure clients don’t expect you to deliver the world. We did initially find with Scrum that we delivered more features faster, which won user favour. Nevertheless, the need for slack was spotted quite soon, to ensure engineers had space to learn. Otherwise, you establish an inconsistent work rate, with effort upticks immediately before and after releases.

Although agreement would allow standardisation of development practices, neither Agile nor Waterfall is the silver bullet we have been looking for to defeat our development demons

We found Kanban to be more effective at establishing a constant rate. This prevented team burnout in our case. Other teams in our area still prefer Scrum, as they haven’t had the same cadence challenges.

Engaging with technical and non-technical colleagues alike is important to evolve your development techniques. Brooks stated that there is no silver bullet to solve the trials of software engineering. Neither Waterfall nor any Agile methodology will fit every use case. Leaders need to trust teams to establish the techniques to take down the monsters taunting their development processes, using many smaller bullets.

Thanks for reading!

Beyond the Great Divide

Encouraging Developers to Cross Platform Divides

The comfort zone is comfortable for a reason. Feeling content, engaged and productive is a thrilling experience. If we’ve ignited passion in our programmers, we’ll see their fingers dance eagerly across the keys. Yet developers can become complacent, not only with particular technologies, but with specific platform components.


If you always do what is easy and choose the path of least resistance, you never step outside your comfort zone. Great things don’t come from comfort zones.

Roy Bennett


To address our recent bottleneck and design challenges, I’ve been reading Your Code as a Crime Scene by Adam Tornhill. One clear challenge described early in the book is that engineers struggle to reason about the entire system. This is a difficulty we are finding as our platform starts to scale more widely. Here I reflect on our own experiences of encouraging developer laser precision on parts of the system, and how it has impacted platform performance.



We have encouraged programmers to exhibit laser precision over the components to which they always contribute


Bridges Crossing Rivers


As I alluded to before, engineers are building knowledge of only a part of the system. This is partially a symptom of the myriad of microservices that we have constructed over the past few years. Our architecture has made it easier to decouple services and create new features. Nevertheless, delivery pressures result in our engineers persistently touching the same services. Reasoning about the performance of the entire platform becomes impossible.


Bridge in Japanese Forest

It’s vital that we encourage engineers to cross component bridges and develop knowledge of the entire system


Our team is not the worst for coupling programmers to components. Several developers have migrated across domains. Such moves are normally a reaction to increased project demands in a particular area. In such circumstances, experienced engineers are utilised over junior developers due to time pressures. These rare occurrences have grown more experienced developers into leaders. What is the effect on junior developers?


This preference can introduce poor design into components when junior developers do not receive appropriate mentorship. Component design can become bloated, impacting performance. Giving all developers experience of other parts of the platform provides them with the guidance they need to better design performant microservices. To grow our entire team and platform, we need to make better use of rotations.


Another Spin


Whispers echo through the grapevine that other organisations are better at utilising enforced rotations. Reflecting on a talk I attended back in March reminded me of one particular example.


As part of their adoption of Extreme Programming practices at Pivotal, Elisabeth Hendrickson discussed mandatory rotations. Combined with pair programming and other XP techniques, they have found great benefits to their product development. Enforced rotations expose engineers to different projects, teams and technologies.


Record Player

Leads need to be proactive in rotating engineers across projects, technologies and teams


I’m tempted to consider more regular rotations for our engineers. To adopt a similar approach, we must determine how often developers should rotate. Like sweet and salty popcorn, a balance must be maintained to ensure people have space to learn, while ensuring client features are consistently delivered.

The Tender Trap


We should be growing our workforce to become adaptable. Building their transferable skills will prevent technologists being trapped in a particular domain. Seeking developers skilled in a single technology will also prevent them from building expertise across all the systems that we build and support.


Boy With Periscope

It’s more important to employ curious engineers that can investigate technologies and system components than be the best coder in technology X


An emerging trend discussed at a recent conference was the need for organisations to evolve their hiring practices. Our historical strategy of employing Java experts to write Java, or Angular developers to build Single Page Applications will not scale. Employing engineers with intrinsic curiosity can aid us in nurturing cross-component experience. Fostering the ability to learn new technologies, architectures and performance evaluation techniques will make for productive platform-wide programmers.


Thanks for reading!