Software development is a tricky business. When you consider all that can go wrong, the possibilities can be overwhelming. From coding errors to borrowed libraries to myriad other causes, testing is fundamental to the development process. Testing can uncover many of the errors and oversights that occur, and failure to test effectively prior to release can be very costly. Fortunately, the software security lifecycle includes testing methodologies designed to catch many of these errors.
As a security professional, understanding testing techniques is an important part of the job. If you are on the technical side of information security, you may be conducting the tests yourself. One way an employer can ensure they have a qualified person is by seeking someone who understands the software security lifecycle. However, even the most seasoned professional can fall victim to a hidden problem that undermines testing and leads to other challenges: cognitive bias.
Existing Methods Towards More Secure Software
Many software testing approaches are similar to those used in network testing. One of these is penetration testing, whereby a tester attempts to force the system (or in this case, the software) to behave in an unexpected or unanticipated way.
Pen testing can follow different methodologies. One such method is known as a “black box” test, where the tester knows nothing about the system prior to testing. Another is the “white box” method, where information about the system is available prior to testing. These, along with other techniques such as “fuzzing”, integration testing, and stress testing, are all important steps toward better, more secure software.
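To make the fuzzing idea concrete, the sketch below shows a naive random fuzzer: it feeds randomized strings to a target function and collects any inputs that raise an exception. The parse_header function is a hypothetical stand-in for the code under test, not part of any real product.

```python
import random
import string

def parse_header(data: str) -> dict:
    # Hypothetical function under test: parses "Key: Value" lines.
    result = {}
    for line in data.splitlines():
        key, value = line.split(":", 1)  # raises ValueError on malformed lines
        result[key.strip()] = value.strip()
    return result

def fuzz(target, iterations=1000, max_len=50, seed=0):
    """Feed random printable strings to `target`; collect inputs that raise."""
    rng = random.Random(seed)
    failures = []
    for _ in range(iterations):
        length = rng.randint(0, max_len)
        data = "".join(rng.choice(string.printable) for _ in range(length))
        try:
            target(data)
        except Exception as exc:
            failures.append((data, exc))
    return failures

crashes = fuzz(parse_header)
print(f"{len(crashes)} crashing inputs found")
```

Real fuzzers are far more sophisticated (coverage guidance, input mutation), but even this toy version illustrates the principle: generate inputs the developer never anticipated and see what breaks.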
One has to wonder: how does bias slip into the testing environment, and can it be avoided?
While testing can find many errors, it lacks the ability to detect errors caused by cognitive bias. Also, no matter how hard we try, we cannot escape bias. We can be aware of our biases, but that may not eliminate them.
One of the most prominent biases is confirmation bias: the tendency to favor evidence that supports what we already believe, or want to be true. Can you see how this can affect your thinking while evaluating the security of a software product? One way confirmation bias manifests is when a flaw is discovered but not easily reproduced, leading you to conclude, incorrectly, that the flaw was an anomaly in your method rather than a problem with the test subject. Another way it can skew results is by convincing yourself of a test's outcome beforehand, regardless of what the results show.
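Confirmation bias thrives on single data points. One simple guard against writing off a hard-to-reproduce flaw as an anomaly is to automate the retries: rerun the failing case many times and record the failure rate, rather than trusting one manual retry. The sketch below is a minimal illustration; the intermittent check is simulated with a seeded random number generator standing in for a real, flaky test case.

```python
import random

def failure_rate(check, runs=100):
    """Rerun a pass/fail check `runs` times and return the fraction of failures.

    `check` is any zero-argument callable returning True on pass. A nonzero
    rate is evidence of a real (if intermittent) flaw, not a testing anomaly.
    """
    failures = sum(1 for _ in range(runs) if not check())
    return failures / runs

# Hypothetical intermittent check: passes roughly 70% of the time.
rng = random.Random(42)
rate = failure_rate(lambda: rng.random() > 0.3)
print(f"observed failure rate: {rate:.0%}")
```

A recorded failure rate is much harder to argue away than a memory of "it passed the second time I tried it."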
Another bias that can affect software security testing is the fundamental attribution error: the tendency to attribute outcomes to inherent character rather than to circumstances, substituting judgement for observation. For example, if you are not a supporter of a particular operating system and you are testing software designed for that system, you may already have a predisposition toward the success or failure of the software, regardless of its capabilities. This bias is closely related to “anchoring”, where one jumps to conclusions based on first impressions.
Flaws Can Be Skin Deep
The most prominent recent examples of bias in software are the controversies surrounding facial recognition systems. It is hard to imagine that the designers and programmers intentionally programmed their software to behave in a discriminatory manner, or that the artificial intelligence algorithms were intentionally misguided.
What About Ethics?
One would think that our moral compass would step in to help us avoid biases; however, biases do not work that way. In most cases, a bias is the result of an unknown fact, or an unconscious preference for, or dislike of, a particular thing. A person who consciously practices prejudiced behaviour is far beyond the discussion of simple bias or ethics.
Can We Overcome Bias?
The good news is that bias is not a death sentence. It can be overcome, but doing so requires a strong conscious effort on the part of the testers. The first step is proper training, so that all phases of the testing process are well understood. Gaps in understanding the testing process can allow undetected biases to emerge. A trained and certified software security lifecycle professional can help to avoid this simple pitfall.
Another way to uncover and avoid biases is through diverse collaboration. Candid perspectives should be welcomed from other qualified professionals. While this is no absolute guarantee against bias, multiple viewpoints dramatically reduce its likelihood.
How the CSSLP can help you succeed
The Certified Secure Software Lifecycle Professional (CSSLP) credential is the perfect way to show that you not only understand testing methodologies and techniques, but also have a strong ethical foundation in information security. These two characteristics give your employer confidence that the likelihood of successful software testing and implementation is at its highest, with the least room for bias to creep into the mix.
To find out more about cognitive bias in software security testing, download our white paper, The Confessions of a Software Developer, today.