Avoiding Technical Debt: An Intro to Software Testing
February 21, 2017
Software testing is known to be one of the most expensive, intensive and frustrating parts of the software development process. If major problems or bugs are discovered as a product is about to be launched, it can mean sizable delays to the launch date or even signal the need to re-write large sections of the program.
However, just because the testing process can be unpleasant doesn’t mean it should be avoided or skimped on. How your product performs in the marketplace will depend on how the software itself performs, and it is always best to release the most functional version of your product possible.
Discovering bugs is inevitable, but a buggy product is terrible for a company’s reputation, and fixing these bugs post-launch can cost you big time. Technical debt is what happens when testing is neglected, and it can pose serious problems for an on-time product launch:
“As a change is started on a codebase, there is often the need to make other coordinated changes at the same time in other parts of the codebase or documentation. Required changes that are not completed are considered debt that must be paid at some point in the future. Just like financial debt, these uncompleted changes incur interest on top of interest, making it cumbersome to build a project.”
The longer you put off testing, the more potential your project has to accumulate technical debt and the longer it will take to launch an optimal product. Software testing is a crucial step in the development cycle and, no matter when it occurs, it needs to be thorough and corners should never be cut.
Testing and the Software Development Cycle
In the past, many companies used the Waterfall workflow model for software development. In this model, testing was saved until the software had been completely built, often causing this process to be rushed in favor of an earlier release date.
Today, most companies are using an Agile development process, in which individual units of the end product are designed, developed and tested before moving onto the next functional unit of the software. This helps to avoid leaving errors in the source code of programs or otherwise overlooking problems that will only worsen as more code is written around faulty components.
A typical testing cycle will focus on different levels of the software at different times. Unit Testing, for instance, is the process of testing individual units of the software to see if they are coded properly. Integration Testing, on the other hand, is testing to see if the interactions between different units are happening like they’re supposed to.
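To make the distinction concrete, here is a minimal sketch in Python’s built-in unittest framework. The checkout functions (`parse_price`, `apply_discount`) are hypothetical examples invented for illustration: a unit test exercises one function in isolation, while an integration test checks that two units produce the right result when combined.

```python
import unittest

# Two hypothetical "units" of a checkout feature (illustrative only).
def parse_price(text):
    """Convert a price string like '$4.50' into an integer number of cents."""
    return int(round(float(text.lstrip("$")) * 100))

def apply_discount(cents, percent):
    """Apply a percentage discount to a price given in cents."""
    return cents - (cents * percent) // 100

class UnitTests(unittest.TestCase):
    # Unit test: one function, checked in isolation.
    def test_parse_price(self):
        self.assertEqual(parse_price("$4.50"), 450)

    def test_apply_discount(self):
        self.assertEqual(apply_discount(1000, 20), 800)

class IntegrationTests(unittest.TestCase):
    # Integration test: the two units working together.
    def test_discounted_checkout(self):
        self.assertEqual(apply_discount(parse_price("$10.00"), 20), 800)
```

A file like this can be run with `python -m unittest`, which discovers and executes every test case.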
These tests are followed by System Testing and Validation Testing. System Testing checks software performance in the systems that will support it: cloud, browser, etc. Validation Testing evaluates the software system to see if it is fulfilling its intended purpose: an app that accurately measures network speed, an all-in-one financial management platform, etc.
It is also good to run performance and stability tests on a piece of software to determine if it can handle the stresses of widespread use on multiple platforms. For instance, Load Testing simulates web traffic to the software to determine its capabilities and readiness for real world use.
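Real load tests are usually run with dedicated tools such as Apache JMeter or Locust, but the core idea can be sketched in a few lines of Python: fire many concurrent requests at the system and measure throughput and error rate. The `handle_request` function below is a hypothetical stand-in for a real web endpoint.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical request handler standing in for a real web endpoint.
def handle_request(n):
    time.sleep(0.01)   # simulate a small amount of I/O work
    return 200         # pretend HTTP status code

def load_test(workers, total_requests):
    """Fire many concurrent requests; report throughput and error count."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        statuses = list(pool.map(handle_request, range(total_requests)))
    elapsed = time.perf_counter() - start
    return {
        "requests": total_requests,
        "errors": sum(1 for s in statuses if s != 200),
        "req_per_sec": total_requests / elapsed,
    }

if __name__ == "__main__":
    print(load_test(workers=20, total_requests=200))
```

Watching how `req_per_sec` and `errors` change as `workers` grows is the essence of load testing: you are looking for the point at which the system starts to degrade.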
Whether this testing starts at the functional-unit level (bottom-up testing) or at the most macro level of the user experience (top-down testing), it needs to be thorough and it needs to ensure that:
The individual components and functional units of the program are built correctly.
These components/units are interacting/communicating correctly.
The software is fully integrated into the systems that will support it.
These component interactions result in the desired functionality of the software.
Software Testing Methods
As far as the testing itself goes, the two major methods are manual testing and automated testing. Each method has its advantages and drawbacks and many companies use a combination of the two in their software testing process.
Manual Testing is performed by QA (Quality Assurance) professionals, who manually test software and its components for bugs, errors and user experience issues. The main advantage of manual testing is the human perspective you’re getting and the experience manual testers have in identifying common problems and bugs. The main disadvantage of manual testing is human limitation: testers cannot test as thoroughly as scripts can. They can also be subject to fatigue on a long project, and lose perspective as they get bogged down in extensive testing.
Automated Testing is performed by an automation testing professional, who sets up scripts to test software at various levels. Scripts can be used to test individual functional units of software and to mimic user behavior to test for errors and bugs. Once set up, these scripts can test more quickly and exhaustively than a human, and they are especially good for continuous testing in an agile software development context.
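In practice, scripts that mimic user behavior would drive the real UI or API (for example, with a browser-automation tool). As a hedged sketch of the shape such a script takes, here is a scripted user journey against a toy in-memory to-do app; the `TodoApp` class is invented purely for illustration.

```python
# A toy in-memory "app" (hypothetical, for illustration only).
class TodoApp:
    def __init__(self):
        self.items = []

    def add(self, text):
        if not text.strip():
            raise ValueError("empty item")
        self.items.append({"text": text, "done": False})

    def complete(self, index):
        self.items[index]["done"] = True

# A scripted user journey: add two items, complete one, verify the state.
def test_user_can_complete_a_todo():
    app = TodoApp()
    app.add("write tests")
    app.add("ship release")
    app.complete(0)
    assert app.items[0]["done"] is True
    assert app.items[1]["done"] is False

test_user_can_complete_a_todo()
```

Because the script asserts on the final state rather than clicking through manually, it can be re-run on every build, which is what makes automated testing a natural fit for continuous integration.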
No matter how you test your software, you should be sure that your coders don’t have a “write first and ask questions later” approach to code quality. There will always be issues to iron out, but far fewer when quality is stressed from the beginning. Testing early and often will help mitigate the possibility of discovering major problems with a piece of software just before it’s scheduled to be released.
Ronny Cheng is one of the co-founders of Digital Astronauts and has helped drive lead generation in the software industry for organizations of all sizes — from start-ups to Fortune 500s. He helped build one of the first online software review websites, specializing in CRM, ERP, and HR software. He’s a nationally published author with extensive experience working with the HR/recruiting industry’s largest brands. In his spare time, you can catch him on Instagram doing his best food blogger impersonation.