Bug Prevention

This is the final part of my series on the relationship between testing and product risks. In part two, I discussed my process for identifying product risks through testing, and shared a model to support it.

In this third and final post, I'll discuss proactive and reactive quality: bug prevention and bug detection. We can test both before and after the code is written to uncover risks and unknown variables, and use that information to improve our designs and reduce those risks. I'll also share a model that helps explain the feedback loops we get from this testing.

Design can help to reduce risks

Although bug prevention isn't a new idea, it's still relatively uncommon in the industry. It's also a source of confusion, because many people believe bug prevention doesn't involve any testing. In reality, it's testing our ideas that allows us to discover information about risk, which we can then use to refactor our artefacts, designs and code designs in order to avoid problems.

This loose SDLC model helps me describe the feedback loops we get from continuous testing activities throughout the lifecycle. It doesn't represent a waterfall approach, because we're working on many smaller features simultaneously within the SDLC.

The first half of the model is the investigative testing portion of the SDLC. It happens before any code is developed, so there's no software to verify against our expectations. Much as we would explore a software product, we test ideas, artefacts, code designs, UX and UI designs, and architecture designs to uncover information. The only difference is that the information we uncover takes the form of identified risks, and our testing notes become risk maps.

Some of these risks we can reduce through design; others we can't. Those we can't design away can be added to our test charters, so that once the software exists we can investigate it and find out whether those risks have turned into real problems.
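To make that concrete, here is a minimal sketch of how identified risks might feed into test charters, using the well-known "explore... with... to discover..." charter template. This is my own illustration rather than anything from a specific tool or process; the class, its fields and the example risk are all hypothetical.

```python
# A minimal, hypothetical sketch: recording pre-code risks as test charters.
from dataclasses import dataclass


@dataclass
class TestCharter:
    risk: str            # product risk identified while testing ideas/designs
    explore: str         # area of the built software to explore
    with_resources: str  # tools, data or conditions to use
    to_discover: str     # information we hope to uncover


charters = [
    TestCharter(
        risk="Checkout may behave badly on slow or flaky connections",
        explore="the checkout flow",
        with_resources="network throttling in the browser dev tools",
        to_discover="whether payment state is lost or duplicated on retry",
    ),
]

# Print each charter in the "explore... with... to discover..." form.
for c in charters:
    print(f"Explore {c.explore} with {c.with_resources} "
          f"to discover {c.to_discover} (risk: {c.risk})")
```

Keeping the risk attached to the charter like this makes the purpose of each testing session explicit, which is the link many teams lose.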

Proactive and reactive quality in bug prevention and detection

It's easy to distinguish between testing to identify risks (in other words, potential problems within the ideas, designs and so on) and testing the software itself to identify actual problems in it.

But it's not as simple as saying that all testing done after the code has been written is about finding problems based on our pre-discovered risks. Because it's impossible to imagine every unknown that could relate to the small feature, MVP or SMURFS we're creating, complete unknowns will always exist (that is, unknowns we aren't even aware of). Even with testing activities in place to try to surface unknown risks and variables, there will always be many things we don't know. So even after the code has been written and we've begun to investigate the software, we'll discover new information about risks and variables we didn't know about.

Stuart Firestein used a very useful analogy, describing ignorance as "ripples in a pond". He said, "As knowledge grows, so does ignorance." It isn't that ignorance gets transformed into knowledge; rather, knowledge leads to better-quality ignorance, and we end up looking for better and more relevant questions.

I find this quote very inspiring, and it applies to all the ideas and information we gather about our software throughout the SDLC: the more we uncover and learn, the further the ripples of our knowledge spread.

So yes, there is a difference between preventing bugs and detecting them in software. But looked at this way, the earlier we test, the more design changes we can drive to reduce the risks we've identified. We must also accept that new, unknown risks and variables will keep appearing between the time we write the code and the time it's released into production, and beyond.

There are risks to not being aware of risks

I hope you can see the value in this way of thinking about the relationship between product risks and testing activities.

It still shocks me how many software teams fail to focus on product risk when testing. Speaking with teams in this situation, I've noticed that while they may come up with great test ideas, they struggle to link those ideas back to the test's purpose: to investigate a product risk.

The teams I've spoken to also struggle to include testers before any code is written.

The point of involving testers that early isn't only to ensure that whatever code gets written is tested; it's to find out about the risks and variables in the ideas and designs for the software solution (think of the feedback loops in the first half of the SDLC model). Without this perspective on product risks, testing is likely to be reactive: you can only test the code once it's been written.

A common response to this point is TDD: writing tests that help drive the design of the code. The problem is that those scripted tests don't actually run until the code has been written, because only then can the stated expectations be checked against real behaviour. And because they're automated checks, they can't reveal unexpected information (i.e. new risks and variables); for that, you need investigative testing.
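To illustrate that limitation, here is a generic sketch of a TDD-style check written before any code exists. It's my own example rather than one from the post, and the pricing module, function name and discount rule are all hypothetical.

```python
# test_pricing.py -- a hypothetical TDD-style check, written before the code exists.
import pytest


def test_loyal_customers_get_ten_percent_off():
    # The pricing module doesn't exist yet, so this test fails when first run:
    # that failure is the "red" step that drives the design of the code.
    from pricing import calculate_discount

    assert calculate_discount(price=100.0, loyalty_years=3) == pytest.approx(90.0)
```

The check encodes an expectation we already hold. Once the code exists it can confirm or refute that expectation, but it will never surprise us with a risk we didn't think to write down, such as what should happen for a negative price. Discovering those surprises is what investigative testing is for.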

Some teams have tried this approach, rearranging their processes to inject testing throughout the entire SDLC and put more emphasis on product risks. These teams saw huge benefits: the quicker feedback loops from early testing meant fewer bug-fixing cycles, and they were able to drive more design changes to mitigate some of their risks.

How have you dealt with product risk?

Are you able to view product risks in this way? Can you see the dangers of not considering risks like this?

I hope you get the chance to test this way and put this thinking into practice. I'd love to hear about your experiences.