Automated software testing is becoming increasingly crucial in the GxP world, where Pharma 4.0 is the future, the EU MDR is asserting its influence, and quality management systems are more reliant on electronic tools. These advancements come with mounting regulatory and best practice pressures [An11, GAMP5]. Embracing software testing automation can help your company prepare for the future.
However, implementing automated software testing involves balancing several goals:
- reducing the time and budget allocated to Computer System Validation (CSV) or Computer System Assurance (CSA),
- increasing confidence in computerized systems,
- and ensuring that the investments made in automation deliver significant value.
Additionally, friction can arise when mandatory re-validation conflicts with the company's agenda, which can increase risks during audits.
In this article, we will delve into the various aspects of automated software testing, providing you with a comprehensive overview and essential considerations for this promising journey, with the ultimate goal of achieving success.
A complete guide to Computer System Validation
This 100+ page guide aims to provide context and define the necessary and appropriate strategies for validating computerized systems in the pharmaceutical, biologics, biotechnology, blood products, medicinal products, and medical device industries, for activities subject to Good Practices (GxP). Download it now for free:
What is (the use of) automated software testing?
What is automated software testing, and how can it assist us with various automated system validation tasks? To answer this question, we need to explore some of the tool families and their applications.
First, let’s look at examples of where these tools can be helpful before summarizing them into an overall picture.
Some families of automated software testing tools
1. Test management tools
A crucial aspect of automated software testing is grouping tests so that both risks and requirements are covered, and automating the traceability matrix. Executed tests must be traced, evidence attached to those executions, and the environment used documented.
Additionally, it is essential to determine which features the discovered bugs affect, track failed implementations, and measure progress using KPIs under time and budget constraints.
After fixing the bugs, some tests must be re-run to confirm the fix and to ensure no regression has been introduced. Post-release, test execution results should be archived, and specific tests should be selected for future regression testing. The effect of that test selection on the traceability matrix should be demonstrated, and each future test execution kept in a separate archive.
All data recorded during the testing process must adhere to ALCOA+ principles.
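To make the traceability idea concrete, here is a minimal sketch of how a test management tool could map requirements to test executions and flag coverage gaps. The data model, requirement IDs, and test IDs are invented for illustration; real tools offer far richer models.

```python
# Minimal sketch of an automated traceability matrix.
# Requirement and test IDs below are hypothetical.

def build_traceability(requirements, tests):
    """Map each requirement ID to the tests that cover it and their results."""
    matrix = {req: [] for req in requirements}
    for test in tests:
        for req in test["covers"]:
            if req in matrix:
                matrix[req].append((test["id"], test["result"]))
    return matrix

def uncovered(matrix):
    """Requirements with no passing test: a gap the validation report must flag."""
    return [req for req, runs in matrix.items()
            if not any(result == "pass" for _, result in runs)]

requirements = ["URS-001", "URS-002", "URS-003"]
tests = [
    {"id": "OQ-010", "covers": ["URS-001"], "result": "pass"},
    {"id": "OQ-011", "covers": ["URS-001", "URS-002"], "result": "fail"},
]

matrix = build_traceability(requirements, tests)
print(uncovered(matrix))  # ['URS-002', 'URS-003']
```

Re-running this after each test campaign keeps the traceability matrix current without manual spreadsheet edits.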
2. Test case deduction tools
Imagine a situation where there are ten parameters, each with five values, that need to be tested based on risk assessment. Describing each combination of parameters would be time-consuming.
We are also replacing an existing system with new software and need to ensure that the new software provides the same answers as the old one. Fortunately, the old software follows ALCOA+ principles, and we have a database of all past decisions and records.
To automate the test case creation process, we create scripts, spreadsheets, and database extracts, among other tools.
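The combinatorial explosion mentioned above can be sketched in a few lines. The parameter names and values below are invented for illustration; only two parameters are shown, but the arithmetic for ten is printed as well.

```python
# Sketch: generating test cases from parameter value sets instead of
# describing each combination by hand. Parameters are illustrative.
import itertools

parameters = {
    "user_role": ["admin", "operator", "auditor", "guest", "service"],
    "batch_state": ["draft", "released", "rejected", "archived", "locked"],
}

# Full cartesian product: every combination of every parameter value.
all_cases = list(itertools.product(*parameters.values()))
print(len(all_cases))  # 25 combinations for just two parameters

# With ten parameters of five values each the product explodes to 5**10
# cases, which is why risk-based selection or pairwise techniques are
# applied before generation.
print(5 ** 10)  # 9765625
```

In practice the generated cases are then filtered by risk assessment or reduced with pairwise (all-pairs) techniques before execution.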
3. Test oracles
Data exchange with authorities requires following a specific standard. The standard involves exchanging data via a comma-separated values file where the first and last lines include specifications such as file format, number of records, and timestamp, while each line in the middle represents a record with a fingerprint based on complex mathematics.
As we replace our existing system with new software, we use test case deduction to test all different cases in the old database. To facilitate this process, we create two test oracles.
The first test oracle compares the decisions made by the new system with the decisions made by the old system stored in the old database. The second test oracle compares the output file with the authorities’ standard by checking each line, including fingerprint comparisons with its own calculations.
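A sketch of the second oracle follows. The real fingerprint algorithm and the exact header/trailer layout are not specified here, so a SHA-256 prefix and a simple comma layout stand in for them; everything in this example is an assumption for illustration.

```python
# Sketch of a file-format oracle for a hypothetical authority standard.
# A SHA-256 prefix stands in for the real fingerprint calculation.
import hashlib

def fingerprint(fields):
    """Stand-in fingerprint: hash of the record fields joined by commas."""
    return hashlib.sha256(",".join(fields).encode()).hexdigest()[:8]

def check_export(lines):
    """Return human-readable findings; an empty list means the file passes."""
    findings = []
    header, records, trailer = lines[0], lines[1:-1], lines[-1]
    if not header.startswith("FMT"):
        findings.append("header: missing format specifier")
    declared = int(trailer.split(",")[1])
    if declared != len(records):
        findings.append(f"trailer: declares {declared} records, found {len(records)}")
    for i, line in enumerate(records, start=2):
        *fields, fp = line.split(",")
        if fingerprint(fields) != fp:
            findings.append(f"line {i}: fingerprint mismatch")
    return findings

record = ["A", "42"]
lines = ["FMT01,2024-01-01T00:00:00",
         ",".join(record + [fingerprint(record)]),
         "END,1,2024-01-01T00:05:00"]
print(check_export(lines))  # [] -- the file conforms
```

The same `check_export` function can then be run over every file the old database can produce, giving a systematic verdict per file.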
4. Stubs and drivers
As the system is still under development, some components are testable, but certain input or output components may not be available yet. To overcome this issue, we create stubs and drivers that replace these components for testing purposes.
Once in operation, the system will require an expensive GC-MS (gas chromatography–mass spectrometry) device. Since we don't have the budget to purchase one for testing, we replace it with a stub that replicates the device's state with a simple set of lights.
To ensure our disaster recovery plan works against GC-MS failure, we need to test the software without breaking our actual GC-MS device. To do this, we use a stub to simulate the failure event.
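A minimal sketch of such a stub is shown below. The interface (`status`, `fail`) and the toy recovery handler are invented for illustration; a real stub would mimic whatever interface the vendor's device driver exposes.

```python
# Sketch of a stub replacing the GC-MS device during testing.
# The interface and states below are hypothetical.

class GCMSStub:
    """Replicates the device's externally visible state without any hardware."""

    def __init__(self):
        self.status = "idle"

    def start_run(self):
        self.status = "running"

    def fail(self):
        # Simulate the device failure event needed to exercise the
        # disaster recovery plan without breaking a real instrument.
        self.status = "error"

def recover(device):
    """Toy disaster-recovery handler under test: reset the device on error."""
    if device.status == "error":
        device.status = "idle"
        return "recovered"
    return "no action"

stub = GCMSStub()
stub.start_run()
stub.fail()
print(recover(stub))  # recovered
```

Because the stub is pure software, the failure scenario can be repeated on every build at no hardware cost.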
5. Static code analysis
To enhance the maintainability of source code by team members, we have implemented specific guidelines for code formatting and organization, which are outlined in a work instruction. Additionally, we adhere to best practices for the programming language, which includes proper code formatting and avoiding language features that are considered bad practice.
These guidelines are automatically verified through static code analysis, and a report is generated overnight to ensure adherence to the established conventions. Furthermore, the tool is deployed on developers’ machines to detect and report any violations in real-time as they make changes to the code.
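For illustration, here is a minimal home-grown static check of the kind such a nightly report could aggregate. Real projects would rely on established tools (pylint, flake8, SonarQube, and similar); the three rules below are invented examples of formatting conventions a work instruction might mandate.

```python
# Minimal sketch of a custom static check. Rules are illustrative only;
# production teams should prefer established static analysis tools.

MAX_LINE_LENGTH = 99

def check_source(source):
    """Return (line_number, message) findings for a few simple conventions."""
    findings = []
    for n, line in enumerate(source.splitlines(), start=1):
        if len(line) > MAX_LINE_LENGTH:
            findings.append((n, "line too long"))
        if "\t" in line:
            findings.append((n, "tab character (spaces required)"))
        if line.rstrip() != line:
            findings.append((n, "trailing whitespace"))
    return findings

sample = "def f():\n\treturn 1 \n"
print(check_source(sample))
```

Running such checks both overnight and on developers' machines gives the two feedback loops described above: a formal report and immediate local warnings.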
6. Unit and integration testing
The source code follows a clear structure that uses different levels of abstraction, and it uses objects and functions to represent concepts and actions, respectively.
The behavior of the software is predictable, which allows us to create tests for each function and object, as well as for their interactions. These tests are automatically run to ensure that the software behaves as expected, and the results are associated with the built software.
Apart from checking the behavior of the software, we also use code-level testing to measure the percentage of statements and decisions that are executed during those tests.
One approach to developing the software is to use the test-driven development agile methodology, which involves creating unit and integration tests first and developing the code until all the tests pass successfully.
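The cycle above can be sketched with the standard library's `unittest`. The dose-rounding function is invented for illustration; in test-driven development the three tests would be written first and the function developed until they pass.

```python
# Sketch of unit tests written test-first against a small, hypothetical
# dose-rounding function.
import unittest

def round_dose(mg, step=0.5):
    """Round a dose to the nearest dispensable step (illustrative logic)."""
    return round(mg / step) * step

class RoundDoseTests(unittest.TestCase):
    def test_exact_value_is_unchanged(self):
        self.assertEqual(round_dose(2.5), 2.5)

    def test_value_rounds_to_nearest_step(self):
        self.assertEqual(round_dose(2.6), 2.5)

    def test_custom_step(self):
        self.assertEqual(round_dose(2.6, step=0.25), 2.5)

# A CI job runs the whole suite on every build and fails it on regression;
# coverage tools such as coverage.py can then report which statements and
# decisions the tests exercised.
suite = unittest.TestLoader().loadTestsFromTestCase(RoundDoseTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

The same suite, run automatically on each build, is what ties test results to a specific version of the built software.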
7. Continuous integration
After a developer completes their work, the code is added to a central source code repository. Each night, we use this central repository to build the software and run unit tests and static code analysis. We then link the generated reports to the software build and make them available on an intranet website.
We also create a report that outlines the changes made since the last build. This allows the QA team to access the most up-to-date version of the software with a degree of confidence and understand what modifications have been made.
8. Black box test execution automations
Test scripts have been developed to verify software functions against their requirements during operational and performance qualification (OQ/PQ), including testing of the user interface.
However, repeating these tests with different variable combinations and performing revalidation and regression activities can be time-consuming and demotivating for testers.
To address this, black box testing automation has been implemented to automate the execution of these tests.
9. Performance testing tools
To ensure that the system can handle 100 concurrent connections and terabytes of data, we employ a tool that generates 100 parallel connections and sends data continuously until 100 terabytes have been processed. The connections’ actions may be continuous or paused, and the tool provides reports on the system’s responsiveness throughout the scenario. This allows us to verify that the system can handle the required load and provides insights into potential performance issues.
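The load-generation pattern can be sketched as follows. The system under test is replaced here by a stand-in function, and the connection counts are scaled down; a real performance tool would open network connections to the deployed system and record response times the same way.

```python
# Sketch of a load-generation harness. The "system" is a stand-in
# function; counts are scaled down for illustration.
import time
from concurrent.futures import ThreadPoolExecutor

def system_under_test(payload):
    """Stand-in for a request to the real system."""
    time.sleep(0.001)  # simulate processing latency
    return len(payload)

def run_load(n_connections, requests_per_connection):
    """Fire parallel 'connections' and collect per-request latencies."""
    latencies = []

    def worker(_):
        for _ in range(requests_per_connection):
            start = time.perf_counter()
            system_under_test(b"x" * 1024)
            latencies.append(time.perf_counter() - start)

    with ThreadPoolExecutor(max_workers=n_connections) as pool:
        list(pool.map(worker, range(n_connections)))
    return latencies

latencies = run_load(n_connections=10, requests_per_connection=5)
print(len(latencies))  # 50 requests measured
```

Reports built from such latency samples (percentiles, maxima, throughput over time) are what reveal whether the system degrades under the required load.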
10. Security testing tools
We utilize a tool that performs security testing by applying various hacker patterns to attempt to breach the software’s security or render it unavailable. The results of each test are reported, indicating whether the attempt was successful or unsuccessful.
The big picture: addressing some testing challenges using automation
Let’s summarize these examples and try to define these families. The image below gives that big picture, in relation to the V-model activities of GAMP5.
Test tasks can also benefit from automations that are not themselves test automation. For example:
- Test environment pre-conditions can be achieved automatically through the use of automation scripts. This is crucial, for example, after running destructive tests (in which case we can restore the environment from a backup), or when running tests on an environment that needs to mimic the production environment (export from production and import to test every night, for example).
- Data validation frameworks can test data against specifications or discover data specifications through automated exploration; this is not as powerful as an oracle, but can be a great component of one.
- Artificial intelligence and machine learning algorithms can also produce great oracles … once they are validated. Note that sensitivity, specificity, precision, and recall, used to evaluate those algorithms, are in fact a comparison between actual and expected results. Since datasets are huge, those comparisons are done using oracles based on special validation datasets.
Where to begin with automated software testing?
To begin with automated software testing, technical knowledge of test automation theory and programming is essential, and people must be trained accordingly. Developers can create and run unit tests and perform static code analysis, but understanding the full picture takes time. Specialized tools, such as security and performance tools, require specific knowledge.
The first step is to consider what aspects of software testing you want to automate, why you want to do so, and what you expect to achieve. To answer these questions, you can read articles and resources on the topic, such as the ISTQB syllabi.
A good starting point for automated software testing is to begin with the easier aspects such as unit testing and static code analysis. You can then create a reliable regression suite and automate it to benefit from iterations. Maturity models and other tips can also be considered to improve the reliability and effectiveness of automated software testing.
Benefits, costs, risks, and limitations
Before deciding to move to software testing automation, there are a few things to consider [ATMbook, CTAL-TM § 6.2]. While some of them are obvious, reading the rest of this article can give you good insights into the other points. In any case, you can rely on QbD’s experts to help you…
Benefits of automated software testing
Automated software testing provides numerous benefits, the most apparent of which is time and cost savings. It also reduces variability in planning, increases traceability, and produces more consistent results through systematic evaluation. Automated testing also enables the testing of inaccessible elements that cannot be assessed by humans, and it reduces the burden of testing, making it less tedious and more engaging for testers.
Costs of automated software testing
Introducing an automated testing tool entails both initial and ongoing costs.
Acquiring knowledge about the tool is a necessary initial step. Additionally, there are tasks such as selecting and acquiring the tool, and integrating it into the company’s ecosystem and other tools. These tasks require resources and time.
On an ongoing basis, there are traditional IT tool ownership costs to consider, as well as the maintenance and porting of tests to other platforms. The tool’s use must be constantly evaluated and improved to ensure that it remains effective. Any unavailability of the tool could potentially impact the business strategy.
Risks of automated software testing
In addition to the usual risks of new technologies (change management, unrealistic expectations, poor evaluation of maintenance and of return on investment, etc.), the tool could be used too systematically, reducing efficiency (automation of non-recurring or difficult-to-automate tests), and testers could lose manual testing experience and commitment, reducing test coverage.
And of course, we are in the GxP industry, where there are many regulations…
Limitations of automated software testing
An automated tool cannot be as creative as a human and is limited to what it is asked to do: a human tester has eyes everywhere while performing a test, which a machine does not, so automation alone finds fewer bugs. And how can a machine judge the usability and appearance of software? Finally, pressing a device’s emergency stop button and reviewing images are examples of actions that still need to be performed by a human.
Test tools also cannot write bug reports for humans: any failed test execution requires manual interpretation of the error before the bug is documented.
Where does automated software testing make sense?
Since an automaton cannot perceive like a human, automating tests involving senses and opinions takes enormous effort and may be impossible. So we must forget about look and feel, intuitiveness of the user interface, quality of images, quality of sound, etc. Also, software cannot (yet) open doors, change screws, or push physical buttons…
In addition, automating takes time, more than a one-time manual execution. The more complex the automation, the more re-executions are needed to reach a return on investment. In other words, complex tests are only worth automating if they will be executed recurrently.
On the other hand, comprehensive systematic checks (code style, passing through all statements and decisions, etc.) and high-volume testing with massive combinations, connections, data sets, etc. are very difficult to perform manually.
Let’s conclude with some examples related to test execution:
Figure 3 – Automated software testing examples related to test execution
Some pitfalls can be avoided, and automation can be facilitated:
- Automation involves technical writing that may not be understandable to non-programmers. It is crucial that each test case be documented in human language before it is automated.
- We have seen that using an end-user interface complicates things. There are some software architecture principles that can help exercise components without using such interfaces, allowing for greater automated test coverage:
- Microservices where components are highly constrained and “talk to each other” and … can talk to tests.
- Application programming interfaces, which allow calling pieces of code outside of internal software code.
- Automation has great value in recurring testing tasks, especially regression testing. It means that:
- Automated regression tests can only be run once regression tests have been defined.
- Regression testing is subject to the pesticide paradox:
- The system increasingly “resists” the same tests, which stop revealing new defects.
- To prevent this, regression tests must be changed regularly.
- That risk increases with automated regression.
- Automation programming benefits from abstraction: the more we isolate system implementation details from test logic, the easier the tests are to maintain. This is reflected in three maturity levels:
- Capture and replay: We capture and replay actions manually created by testers.
- Different values? Different tests.
- System changes? Need to rewrite tests.
- Writing tests? Expert knowledge.
- Data-driven: We isolate the data in a separate set (spreadsheet, etc.), which is automatically read by the test.
- Different values? Same test.
- System changes? Need to rewrite tests.
- Writing tests? Expert knowledge.
- Keyword driven: We define a language for testing that talks about “business actions,” which is implemented using software libraries.
- Different values? Same test, as with data-driven.
- System changes? Only the libraries need to be changed.
- Writing tests? Business knowledge + “language” documentation.
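The keyword-driven level above can be sketched as follows. The keywords, the fake in-memory "system", and the batch workflow are all invented for illustration; real frameworks (Robot Framework, for example) provide this layering out of the box.

```python
# Sketch of a keyword-driven layer: test steps are written in business
# terms and mapped onto implementation functions. Everything here is
# hypothetical, for illustration only.

system = {"logged_in": False, "user": None, "batches": {}}

def login(user):
    system["logged_in"] = True
    system["user"] = user

def create_batch(batch_id):
    system["batches"][batch_id] = "draft"

def release_batch(batch_id):
    system["batches"][batch_id] = "released"

def check_batch_status(batch_id, expected):
    assert system["batches"][batch_id] == expected, batch_id

KEYWORDS = {
    "log in as": login,
    "create batch": create_batch,
    "release batch": release_batch,
    "check batch status": check_batch_status,
}

# The test itself reads like a business procedure; if the system changes,
# only the keyword implementations above need to be rewritten.
test_case = [
    ("log in as", "qa_reviewer"),
    ("create batch", "B-001"),
    ("release batch", "B-001"),
    ("check batch status", "B-001", "released"),
]

for keyword, *args in test_case:
    KEYWORDS[keyword](*args)
print("test passed")
```

Note how the `test_case` list contains no implementation detail at all: that is the property that lets business experts write and review automated tests.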
- There is great value in “making the glue” between automation tools and the rest of quality assurance. For example, automated test execution should update the status of test cases, which, if documented in a test management tool, can immediately update the traceability matrix and provide part of the validation report.
Modern software development methods, like scrum, involve short and frequent development cycles. Similarly, test-driven development relies heavily on the effectiveness of testing.
Thus, test automation is critical for the success of these projects. The GAMP5 standard emphasizes this point: “Tools play an essential role in demonstrating that the system is suitable for its intended use, that functionality can be traced to requirements, and that testing has been completed.”
For example, adding automated unit tests and static code analysis to agile projects is a common practice. Code is built automatically each night, and unit tests are run, with the results added to the “nightly build bill of materials” so that each developer receives a quality report each morning.
GAMP5 also provides a great example of DevOps, an extension of Agile methodology, which uses source code control and automated testing as part of the continuous integration process to prevent the release of flawed code.
The ISPE’s Good Practice Guide on innovation provides further guidance, stating that “continuous integration and continuous deployment is an extension of DevOps where there is more use/dependence on automated software test analysis tools.”
If we consider the benefits, costs, risks, limitations and the huge variety of automatable software testing actions, we can build a great story.
As a summary:
- The closer you are to the code (unit test, static code analysis, etc.), the easier it is to automate.
- Conversely, end-user interface testing is complex to automate. It only makes sense to automate those tests when they recur:
- Regression testing
- Recurring testing in scrum methodology
- Test-driven development
- Test automation has initial and recurring costs and requires trained people.
- Test automation is not just exercising the end-user interface. It has many aspects that can help you.
- Automate what computers do better (systematic, recurring, etc.) and keep manual testing for what humans do better (interaction, visualization, etc.).
- Automated and manual testing each have their advantages and disadvantages; a mixed strategy will strengthen your CSV.
Need help setting up your automated software testing? Or do you have additional questions? Our experts will be happy to help you!
Expert knowledge in Computer Systems Validation
- [An11] EudraLex Volume 4, Annex 11: Computerised Systems
- [ATMbook] P. Hendrickx and C. Van Bael, Advanced Test Management, PS_TestWare, 2010
- [BlogV] Y. Soufyani, What is GAMP 5 V-model in CSV?, QbD blog, Feb. 2023
- [CTFL] ISTQB Certified Tester Foundation Level (CTFL) Syllabus, v3.1.1, 2018
- [CTAL-TA] ISTQB Certified Tester Advanced Level Test Analyst (CTAL-TA) Syllabus, v3.1.2, 2022
- [CTAL-TAE] ISTQB Certified Tester Advanced Level Test Automation Engineer (CTAL-TAE) Syllabus, 2016
- [CTAL-TM] ISTQB Certified Tester Advanced Level Test Manager (CTAL-TM) Syllabus, 2012
- [CTAL-TTA] ISTQB Certified Tester Advanced Level Technical Test Analyst (CTAL-TTA) Syllabus, v4.0, 2021
- [GAMP5] ISPE GAMP 5 Guide, 2nd edition, 2022
- [GPG-Inn] ISPE GAMP Good Practice Guide: Enabling Innovation, 2021
- [GPG-Test] ISPE GAMP Good Practice Guide: Testing of GxP Systems, 2005
- [ISTQBGlo] ISTQB Standard Glossary of Terms Used in Software Testing
- [Part 11] 21 CFR Part 11: Electronic Records; Electronic Signatures – Scope and Application