Automated software testing for the GxP world

Automated software testing is becoming increasingly crucial in the GxP world, where Pharma 4.0 is the future, the EU MDR is asserting its influence, and quality management systems are more reliant on electronic tools. These advancements come with mounting regulatory and best practice pressures [An11, GAMP5]. Embracing software testing automation can help your company prepare for the future.

Considering software testing automation is a way of shaping your company for the future.

However, there are several factors to consider when implementing automated software testing, such as:

  • reducing the time and budget allocated to Computer System Validation (CSV) or Computer System Assurance (CSA),
  • increasing confidence in computerized systems,
  • and ensuring investments made in automation provide significant value.

Additionally, issues may arise when mandatory re-validation conflicts with the company’s agenda, which can lead to increased risks during audits.

In this article, we delve into the various aspects of automated software testing, giving you a comprehensive overview and the essential considerations for making this promising journey a success.

A complete guide to Computer System Validation

This +100 page guide aims to bring context and define the necessary and appropriate strategies for the validation of computerized systems used in activities related to compliance with Good Practices (GxP) in the pharmaceutical, biologics, biotechnology, blood products, medicinal products, and medical devices industries. Download it now for free.

What is (the use of) automated software testing?

What is automated software testing, and how can it assist us with various automated system validation tasks? To answer this question, we need to explore some of the tool families and their applications. 

First, let’s look at examples of where these tools can be helpful before summarizing them into an overall picture.

Some families of automated software testing tools

1. Test management tools

Grouping tests to ensure that both risks and requirements are covered, and automating the traceability matrix, are crucial aspects of automated software testing. It is necessary to trace executed tests, attach evidence to those executions, and document the environment used.

Additionally, it is essential to determine which features the discovered bugs affect and to keep a trace of failed executions, while also measuring progress using KPIs within time and budget constraints.

After fixing the bugs, it is necessary to re-run some tests to confirm the fix and ensure no regression occurs. Post-release, test execution results should be kept for archiving purposes, and specific tests should be selected for future regression testing. The effect of that selection on the traceability matrix must be demonstrated, and each future test execution should be kept in a separate archive.

All data recorded during the testing process must adhere to ALCOA+ principles.

A test management tool is a tool that supports the planning, scheduling, estimating, monitoring, reporting, control, and completion of test activities. [ISTQBGlo]
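
To make this concrete, here is a minimal sketch, in Python, of the kind of traceability matrix such a tool maintains for you. The requirement and test case identifiers are purely hypothetical, and a real test management tool records and reports all of this automatically.

    # Minimal traceability-matrix sketch: which requirements are covered,
    # and what is the latest result of each covering test?
    # (Hypothetical data; a real test management tool records this for you.)
    requirements = {
        "URS-001": ["TC-010", "TC-011"],   # requirement -> covering test cases
        "URS-002": ["TC-020"],
        "URS-003": [],                     # not yet covered
    }
    executions = {                         # latest execution result per test case
        "TC-010": "PASS",
        "TC-011": "FAIL",
        "TC-020": "PASS",
    }

    print(f"{'Requirement':<12}{'Test case':<12}{'Result':<8}")
    for req, tests in requirements.items():
        if not tests:
            print(f"{req:<12}{'<none>':<12}{'NOT COVERED':<8}")
        for tc in tests:
            print(f"{req:<12}{tc:<12}{executions.get(tc, 'NOT RUN'):<8}")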

2. Test case deduction tools

Imagine a situation where there are ten parameters, each with five values, that need to be tested based on risk assessment. Describing each combination of parameters would be time-consuming.

We are also replacing an existing system with new software and need to ensure that the new software provides the same answers as the old one. Fortunately, the old software follows ALCOA+ principles, and we have a database of all past decisions and records.

To automate the test case creation process, we create scripts, spreadsheets, and database extracts, among other tools.
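
As a minimal sketch of such a script, assuming three hypothetical parameters and using only the Python standard library to enumerate the combinations (a risk-based or pairwise approach would typically reduce the set further):

    # Generate test cases as combinations of parameter values (hypothetical parameters).
    import csv
    import itertools

    parameters = {
        "sample_type":   ["blood", "plasma", "serum"],
        "volume_ml":     [0.5, 1.0, 2.0],
        "operator_role": ["technician", "supervisor"],
    }

    with open("generated_test_cases.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["test_id", *parameters])
        for i, combo in enumerate(itertools.product(*parameters.values()), start=1):
            writer.writerow([f"TC-{i:04d}", *combo])
    # 3 x 3 x 2 = 18 test cases here; with 10 parameters of 5 values each, a full
    # combination would be 5**10, hence the need for risk-based or pairwise reduction.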

3. Test oracles

Data exchange with authorities requires following a specific standard. The standard involves exchanging data via a comma-separated values file where the first and last lines include specifications such as file format, number of records, and timestamp, while each line in the middle represents a record with a fingerprint based on complex mathematics.

As we replace our existing system with new software, we use test case deduction to test all different cases in the old database. To facilitate this process, we create two test oracles.

The first test oracle compares the decisions made by the new system with the decisions made by the old system stored in the old database. The second test oracle compares the output file with the authorities’ standard by checking each line, including fingerprint comparisons with its own calculations.

A test oracle is a source to determine an expected result to compare with the actual result of the system under test. [ISTQBGlo]
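
Here is a minimal sketch of the second oracle, assuming a hypothetical file layout (a header declaring the format, one record per line, and a footer declaring the record count) and using a SHA-256 digest as a stand-in for the standard’s “complex mathematics”:

    # Oracle sketch: check a hypothetical authority export file line by line.
    import hashlib

    def record_fingerprint(fields):
        # Stand-in for the standard's fingerprint calculation (assumption).
        return hashlib.sha256(";".join(fields).encode("utf-8")).hexdigest()[:16]

    def check_export(path):
        findings = []
        with open(path, encoding="utf-8") as f:
            lines = [line.rstrip("\n") for line in f]
        header, records, footer = lines[0], lines[1:-1], lines[-1]
        if not header.startswith("FORMAT;"):
            findings.append("Header does not declare the file format")
        declared_count = int(footer.split(";")[1])
        if declared_count != len(records):
            findings.append(f"Footer declares {declared_count} records, found {len(records)}")
        for i, record in enumerate(records, start=2):
            *fields, fingerprint = record.split(";")
            if record_fingerprint(fields) != fingerprint:
                findings.append(f"Line {i}: fingerprint mismatch")
        return findings  # an empty list means the file passes this oracle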

4. Stubs and drivers

As the system is still under development, some components are testable, but certain input or output components may not be available yet. To overcome this issue, we create stubs and drivers that replace these components for testing purposes.

Once in operation, the system will require an expensive GC-MS device. Since we don’t have the budget to purchase one for testing, we replace it with a stub that replicates the device’s states with a simple set of lights.

To ensure our disaster recovery plan works against GC-MS failure, we need to test the software without breaking our actual GC-MS device. To do this, we use a stub to simulate the failure event.

A driver is a temporary component or tool that replaces another component and controls or calls a test item in isolation. [ISTQBGlo] A stub is a skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component. [ISTQBGlo] Often, the word “driver” is used for both stubs and drivers.
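
As an illustration, here is a minimal sketch of such a stub, assuming a hypothetical GC-MS driver interface; the class and method names are ours for the example, not a real device API:

    # Stub sketch: replaces the real GC-MS driver during testing (hypothetical interface).
    class GcMsStub:
        def __init__(self):
            self.state = "IDLE"        # visualized as a simple set of lights in our rig

        def start_acquisition(self, method_name):
            self.state = "ACQUIRING"
            return {"method": method_name, "status": "started"}

        def read_result(self):
            # Return a canned chromatogram instead of talking to real hardware.
            return {"peaks": [(1.2, 1500), (3.4, 4200)], "status": "ok"}

        def simulate_failure(self):
            # Used to exercise the disaster recovery plan without harming a real device.
            self.state = "ERROR"

    # The software under test is configured (dependency injection, config flag, etc.)
    # to use GcMsStub instead of the real driver in the test environment.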

5. Static code analysis

To enhance the maintainability of source code by team members, we have implemented specific guidelines for code formatting and organization, which are outlined in a work instruction. Additionally, we adhere to best practices for the programming language, which includes proper code formatting and avoiding language features that are considered bad practice.

These guidelines are automatically verified through static code analysis, and a report is generated overnight to ensure adherence to the established conventions. Furthermore, the tool is deployed on developers’ machines to detect and report any violations in real-time as they make changes to the code.

Static analysis is the process of evaluating a component or system without executing it, based on its form, structure, content, or documentation. [ISTQBGlo]
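
A minimal sketch of the overnight report generation, assuming the open-source linter flake8 is installed and the source code lives in a hypothetical src/ folder:

    # Nightly static-analysis report sketch (assumes flake8 is installed).
    import datetime
    import subprocess

    result = subprocess.run(
        ["flake8", "src/"],              # check our coding conventions on src/
        capture_output=True, text=True,
    )

    report_name = f"static_analysis_{datetime.date.today():%Y%m%d}.txt"
    with open(report_name, "w", encoding="utf-8") as report:
        report.write(result.stdout or "No violations found.\n")

    # Exit code 0 means no violations; anything else should be reviewed next morning.
    print(f"flake8 exit code: {result.returncode}, report: {report_name}")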

6. Unit and integration testing

The source code follows a clear structure that uses different levels of abstraction, and it uses objects and functions to represent concepts and actions, respectively.

The behavior of the software is predictable, which allows us to create tests for each function and object, as well as for their interactions. These tests are automatically run to ensure that the software behaves as expected, and the results are associated with the built software.

Apart from checking the behavior of the software, we also use code-level testing to measure the percentage of statements and decisions that are executed during those tests.

One approach to developing the software is to use the test-driven development agile methodology, which involves creating unit and integration tests first and developing the code until all the tests pass successfully.

Unit testing, or component testing, is a test level that focuses on individual (hardware or) software components. [ISTQBGlo]

Integration testing is a test level that focuses on interactions between components or systems. [ISTQBGlo]

Component integration testing is testing in which the test items are interfaces and interactions between integrated components. [ISTQBGlo]
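
As a minimal sketch, here is what a unit test can look like with the pytest framework; the dilution-factor function is a hypothetical example standing in for real application code, and in practice the test would live in its own test module:

    # Unit test sketch (pytest). The function under test is a hypothetical example.
    import pytest

    def dilution_factor(stock_volume_ml, final_volume_ml):
        if final_volume_ml <= 0 or stock_volume_ml <= 0:
            raise ValueError("volumes must be positive")
        return final_volume_ml / stock_volume_ml

    def test_dilution_factor_nominal():
        assert dilution_factor(1.0, 10.0) == 10.0

    def test_dilution_factor_rejects_invalid_volume():
        with pytest.raises(ValueError):
            dilution_factor(0.0, 10.0)

    # Run with coverage, e.g.:  pytest --cov=.   (requires the pytest-cov plugin)

In test-driven development, the two tests would be written first and the function developed until both pass.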

7. Continuous integration

After a developer completes their work, the code is added to a central source code repository. Each night, we use this central repository to build the software and run the unit tests and static code analysis. We then connect the generated reports to the software and make them available on an intranet website.

We also create a report that outlines the changes made since the last build. This allows the QA team to access the most up-to-date version of the software with a degree of confidence and understand what modifications have been made.

Continuous integration is an automated software development procedure that merges, integrates, and tests all changes as soon as they are committed. [ISTQBGlo]
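
A minimal sketch of such a nightly job, written as a Python script that a scheduler could run; it assumes a Git repository, pytest, and flake8, and the report layout is purely illustrative (real continuous integration servers provide this out of the box):

    # Nightly build/check sketch: run tests and static analysis, record recent changes.
    import datetime
    import subprocess

    def run(cmd):
        proc = subprocess.run(cmd, capture_output=True, text=True)
        return proc.returncode, proc.stdout + proc.stderr

    stamp = f"{datetime.date.today():%Y%m%d}"
    sections = {
        "Changes since yesterday":  run(["git", "log", "--oneline", "--since", "1 day ago"]),
        "Unit tests (pytest)":      run(["pytest", "-q"]),
        "Static analysis (flake8)": run(["flake8", "src/"]),
    }

    with open(f"nightly_report_{stamp}.txt", "w", encoding="utf-8") as report:
        for title, (code, output) in sections.items():
            status = "OK" if code == 0 else f"ISSUES (exit {code})"
            report.write(f"== {title}: {status} ==\n{output}\n")
    # The report is then published on the intranet alongside the built software.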

8. Black box test execution automations

Black box testing doesn’t need knowledge of the code and its organization; the software is seen as a black box. The opposite approach, which covers for example static code analysis and unit testing, is called white box.

Black-box test techniques are based on an analysis of the specification of a component or system. [ISTQBGlo]

Some testing activities are based on partial code discovery and guessing; these are gray box techniques. Performance and security testing are examples.

White-box test techniques are based on the internal structure of a component or system. [ISTQBGlo]

Note that a driver can be considered a white box tool when we create it (e.g., we call pieces of code directly), but it is used most of the time in a black box context, where testers cannot call pieces of code directly. On the other hand, oracles might need development to be created, but since they do not involve the code of the software under test, that development is a black-box activity.

Test scripts have been developed to verify software functions against their requirements during operational and performance qualifications (OQ/PQ), which includes testing the user interface.

However, repeating these tests with different variable combinations and performing revalidation and regression activities can be time-consuming and demotivating for testers.

To address this, black box testing automation has been implemented to automate the execution of these tests.

Test execution automation is the use of software […] to control the execution of tests, the comparison of actual results to expected results, the setting up of test preconditions, and other test control and reporting functions.
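
As a minimal sketch of what such an automated execution can look like, assuming the Selenium WebDriver library and a hypothetical login screen (the URL and element IDs are assumptions for the example):

    # Black-box UI test sketch (assumes Selenium and a Chrome driver are installed).
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get("https://test-environment.example/login")   # hypothetical URL
        driver.find_element(By.ID, "username").send_keys("qa_user")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "login-button").click()

        # Expected result from the OQ script: the dashboard title is shown.
        title = driver.find_element(By.ID, "dashboard-title").text
        assert title == "Dashboard", f"Unexpected title: {title!r}"
        print("PASS: login leads to the dashboard")
    finally:
        driver.quit()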

9. Performance testing tools

To ensure that the system can handle 100 concurrent connections and terabytes of data, we employ a tool that generates 100 parallel connections and sends data continuously until 100 terabytes have been processed. The connections’ actions may be continuous or paused, and the tool provides reports on the system’s responsiveness throughout the scenario. This allows us to verify that the system can handle the required load and provides insights into potential performance issues.

A performance testing tool generates load for a designated test item and measures and records its performance during test execution. [ISTQBGlo]
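
A minimal sketch of the load-generation idea, assuming the requests library and a hypothetical endpoint; dedicated performance testing tools add ramp-up profiles, think times, and much richer reporting:

    # Load-test sketch: 100 parallel connections against a hypothetical endpoint,
    # reporting response times (assumes the 'requests' library).
    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor
    import requests

    URL = "https://test-environment.example/api/records"    # hypothetical endpoint

    def one_call(_):
        start = time.perf_counter()
        status = requests.get(URL, timeout=30).status_code
        return time.perf_counter() - start, status

    with ThreadPoolExecutor(max_workers=100) as pool:        # 100 concurrent connections
        results = list(pool.map(one_call, range(1000)))      # 1000 requests in total

    durations = [d for d, _ in results]
    errors = sum(1 for _, status in results if status >= 400)
    print(f"median {statistics.median(durations):.3f}s, "
          f"max {max(durations):.3f}s, errors {errors}/{len(results)}")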

10. Security testing tools

We utilize a tool that performs security testing by applying various hacker patterns to attempt to breach the software’s security or render it unavailable. The results of each test are reported, indicating whether the attempt was successful or unsuccessful.
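
As a toy illustration only (dedicated tools such as OWASP ZAP go far deeper), here is a sketch that sends a few classic attack payloads to a hypothetical search endpoint and flags suspicious responses:

    # Toy security-test sketch: probe a hypothetical endpoint with classic attack patterns
    # (assumes the 'requests' library; real security testing uses dedicated tools).
    import requests

    URL = "https://test-environment.example/api/search"      # hypothetical endpoint
    payloads = ["' OR '1'='1", "<script>alert(1)</script>", "../../etc/passwd"]

    for payload in payloads:
        response = requests.get(URL, params={"q": payload}, timeout=30)
        reflected = payload in response.text
        if response.status_code >= 500 or reflected:
            print(f"SUSPICIOUS: payload {payload!r} -> "
                  f"status {response.status_code}, reflected={reflected}")
        else:
            print(f"OK: payload {payload!r} handled gracefully")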

The big picture: addressing some testing challenges using automation

Let’s summarize these examples and try to define these families. The image below gives that big picture, in relation to the V-model activities of GAMP5.

Figure 1 – Overview of automated software testing tools

But also…

Test tasks can also benefit from automations that are not themselves test automation. For example:

  • Test environment pre-conditions can be achieved automatically through the use of automation scripts (a minimal sketch follows this list). This is crucial, for example, after running destructive tests (in which case we can restore the environment from a backup), or when running tests on an environment that needs to mimic the production environment (export from production and import to test every night, for example).
  • Data validation frameworks can test data against specifications or discover data specifications through automated exploration; this is not as powerful as an oracle, but can be a great component of an oracle.
  • Artificial intelligence and machine learning algorithms can also produce great oracles … once they are validated. Note that sensitivity, specificity, precision, and recall, used to evaluate those algorithms, are in fact a comparison between actual and expected results. Since datasets are huge, those comparisons are done using oracles based on special validation datasets.
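
For the first point, here is a minimal sketch of an environment-reset script, assuming the test system stores its data in a single SQLite file and that a validated “golden” backup exists; the paths are hypothetical, and a real setup might instead restore a server database from a dump:

    # Environment-reset sketch: restore a known-good test database before each run.
    import shutil
    from pathlib import Path

    GOLDEN_BACKUP = Path("backups/test_environment_golden.sqlite")
    ACTIVE_DB     = Path("runtime/test_environment.sqlite")

    def reset_test_environment():
        if not GOLDEN_BACKUP.exists():
            raise FileNotFoundError(f"Golden backup missing: {GOLDEN_BACKUP}")
        ACTIVE_DB.parent.mkdir(parents=True, exist_ok=True)
        shutil.copyfile(GOLDEN_BACKUP, ACTIVE_DB)   # overwrite with the known-good state
        print(f"Test environment restored from {GOLDEN_BACKUP}")

    if __name__ == "__main__":
        reset_test_environment()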

Where to begin with automated software testing?

To begin with automated software testing, it is essential to have technical knowledge of test automation theory and programming, which means people need to be trained. Developers can create and run unit tests and perform static code analysis, but understanding the full picture takes time. Specialized tools, such as security and performance tools, require specific knowledge.

The first step is to consider what aspects of software testing you want to automate, why you want to do so, and what you expect to achieve. To answer these questions, you can read articles and resources on the topic, such as the ISTQB syllabi.

A good starting point for automated software testing is to begin with the easier aspects such as unit testing and static code analysis. You can then create a reliable regression suite and automate it to benefit from iterations. Maturity models and other tips can also be considered to improve the reliability and effectiveness of automated software testing.

Benefits, costs, risks, and limitations

Before deciding to adopt software testing automation, there are a few things to consider [ATMBook, CATL-TM § 6.2]. While some of those are obvious, reading the rest of this article can give you good insights into the other points. In any case, you can rely on QbD’s experts to help you…

The topic of automated software testing can be quite complex, and the decisions made can greatly impact the success or failure of a project. With proper guidance and knowledge, however, automated testing can help bring a team's dreams to fruition. At QbD, we strive to provide the necessary support and expertise to ensure that our clients' projects are successful and that their goals are achieved.

Figure 2 – Benefits, costs, risks, and limitations of automated software testing

Benefits of automated software testing

Automated software testing provides numerous benefits, the most apparent of which is time and cost savings. It also reduces variability in planning, increases traceability, and produces more consistent results through systematic evaluation. Automated testing also enables the testing of inaccessible elements that cannot be assessed by humans, and it reduces the burden of testing, making it less tedious and more engaging for testers.

Costs of automated software testing

Introducing an automated testing tool entails both initial and ongoing costs.

Acquiring knowledge about the tool is a necessary initial step. Additionally, there are tasks such as selecting and acquiring the tool, and integrating it into the company’s ecosystem and other tools. These tasks require resources and time.

On an ongoing basis, there are traditional IT tool ownership costs to consider, as well as the maintenance and porting of tests to other platforms. The tool’s use must be constantly evaluated and improved to ensure that it remains effective. Any unavailability of the tool could potentially impact the business strategy.

Risks of automated software testing

In addition to the usual risks of new technologies (change management, poor expectations, maintenance evaluation, poor return on investment evaluation, etc.), people could use the tool too systematically, leading to reduced efficiency (automation of non-recurring or difficult-to-automate tests). Testers could also lose manual testing experience and commitment, leading to reduced test coverage.

And of course, we are in the GxP industry, where there are many regulations…

Limitations of automated software testing

An automated tool cannot be as creative as humans and is limited to what is asked: humans have their eyes everywhere when performing a test, which a machine cannot do, so automation alone finds fewer unexpected bugs. And how can a machine judge the usability and appearance of software? Finally, pressing a device’s emergency stop button and viewing images are examples of things that still need to be performed by a human.

On the other hand, test tools cannot write bug reports as humans do. Any failed test execution requires a manual interpretation of the error before the bug is documented.

Where does automated software testing make sense?

Since an automaton cannot feel like a human, it takes a huge effort, and is perhaps impossible, to automate tests involving senses and opinions. So we must forget about the look and feel, intuitiveness of the user interface, quality of images, quality of sound, etc. Also, software cannot (yet) open doors, change screws, or push physical buttons…

An automaton is not a human being! Engage people where they are at their best, engage automation where it is at its best…

In addition, automating takes time: more time than a one-time execution by hand. So, the more complex the automation, the more re-executions must be performed to achieve a return on investment. In other words, complex tests are only worth automating if they will be executed repeatedly.

On the other hand, performing comprehensive systematic checks (code style, passing through all statements and decisions, etc.) and high volumes of tests using massive combinations, connections, data sets, etc. is very difficult to do manually.

Computers can only look at … data. If your expected results are SMART (specific, measurable, achievable, relevant, time-bound), you are on the right track for automation. So you need to know exactly what you are looking for, and accept that this is the only thing being checked; humans are better for the rest. Computers can easily check large amounts of data (e.g., checking constraints in a spreadsheet), while humans can easily check complex data (e.g., a photograph).

Let’s conclude with some examples related to test execution:

Figure 3 – Automated software testing examples related to test execution

Improving things

Some pitfalls can be avoided, and automation can be facilitated:

  • Automation involves technical writing that may not be understandable to non-programmers. It is crucial that each test case be documented in human language prior to automation.
  • We have seen that using an end-user interface complicates things. There are some software architecture principles that can help exercise components without using such interfaces, allowing for greater automated test coverage:
    • Microservices, where components are highly constrained, “talk to each other” and … can also talk to tests.
    • Application programming interfaces, which allow calling pieces of code outside of internal software code.
  • Automation has great value in recurring testing tasks, especially regression testing. It means that:
    • Automated regression tests can only be run if regression tests are already defined.
    • Regression is subject to the pesticide paradox:
      • The system increasingly “resists” those tests: run unchanged over and over, they find fewer and fewer new defects instead of confirming the expected behavior.
      • To prevent this, regression tests must be changed regularly.
      • That risk increases with automated regression.

A system that is a victim of the pesticide paradox is like a QA manager closing all CAPAs to management review, regardless of whether they are good or not. Running automated tests very often and never changing them is like having management review every day and only looking at the number of open CAPAs...

  • Automation programming benefits from abstraction: the more we isolate the system implementation from the test logic, the easier tests are to maintain. This is concretized through three maturity levels:
    • Capture and replay: We capture and replay actions manually created by testers.
      • Different values? Different tests.
      • System changes? Need to rewrite tests.
      • Writing tests? Expert knowledge.
    • Data-driven: We isolate the data in a separate set (spreadsheet, etc.); the data is automatically read by the test.
      • Different values? Same test.
      • System changes? Need to rewrite tests.
      • Writing tests? Expert knowledge.
    • Keyword driven: We define a language for testing that talks about “business actions,” which is implemented using software libraries.
      • Different values? Same test, as with data-driven.
      • System changes? Only the libraries need to be changed.
      • Writing tests? Business knowledge + “language” documentation.

Capture and replay, data-driven, and keyword-driven: from the honest and humble beginner to the test automation superhero, there is one path, no secret! Efficient data-driven and keyword-driven approaches are a must once the internal code can change a lot; otherwise your tests will break.
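
To make the difference tangible, here is a minimal sketch of a keyword-driven test fed by data rows, with hypothetical business keywords; in practice a keyword-driven framework such as Robot Framework provides this plumbing for you:

    # Keyword-driven sketch: tests are written as business keywords plus data rows,
    # and only this small library "knows" how the system is really driven (hypothetical).
    KEYWORDS = {}

    def keyword(name):
        def register(func):
            KEYWORDS[name] = func
            return func
        return register

    @keyword("create sample")
    def create_sample(sample_id, sample_type):
        # Here the library would call the UI, an API, or a driver of the system under test.
        print(f"[system] sample {sample_id} of type {sample_type} created")

    @keyword("verify sample status")
    def verify_sample_status(sample_id, expected_status):
        actual = "REGISTERED"            # stand-in for a real query of the system
        assert actual == expected_status, f"{sample_id}: {actual} != {expected_status}"

    # Data-driven part: the same business steps run over several data rows.
    test_steps = [
        ("create sample",        ("S-001", "plasma")),
        ("verify sample status", ("S-001", "REGISTERED")),
        ("create sample",        ("S-002", "serum")),
        ("verify sample status", ("S-002", "REGISTERED")),
    ]
    for name, args in test_steps:
        KEYWORDS[name](*args)            # if the system changes, only the keywords change

If the system implementation changes, only the two keyword functions need to be updated; the test steps and their data stay untouched.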

  • There is great value in “making the glue” between automation tools and other quality assurance tools. For example, automated validation should update the status of test cases, which, if documented using a test management tool, can immediately update the traceability matrix and provide part of the validation report.

Why spend weeks manually turning automation results into good documentation practices and documented traceability when this glue can do it for you?

Enabling agility

Modern software development methods, like scrum, involve short and frequent development cycles. Similarly, test-driven development relies heavily on the effectiveness of testing.

Thus, test automation is critical for the success of these projects. The GAMP5 standard emphasizes this point: “Tools play an essential role in demonstrating that the system is suitable for its intended use, that functionality can be traced to requirements, and that testing has been completed.”

For example, adding automated unit tests and static code analysis to agile projects is a common practice. Code is built automatically each night, and unit tests are run, with the results added to the “nightly build bill of materials” so that each developer receives a quality report each morning. 

GAMP5 also provides a great example of DevOps, an extension of Agile methodology, which uses source code control and automated testing as part of the continuous integration process to prevent the release of flawed code. 

The ISPE’s Good Practice Guide on innovation provides further guidance, stating that “continuous integration and continuous deployment is an extension of DevOps where there is more use/dependence on automated software test analysis tools.”

Conclusion

If we consider the benefits, costs, risks, limitations and the huge variety of automatable software testing actions, we can build a great story.

As a summary:

  • The closer you are to the code (unit test, static code analysis, etc.), the easier it is to automate.
  • Conversely, end-user interface testing is complex to automate. It only makes sense to automate those tests when they are executed repeatedly:
    • Regression testing
    • Recurring testing in scrum methodology
    • Test-driven development
  • Test automation has initial and recurring costs and requires trained people.
  • Test automation is not just exercising the end-user interface. It has many aspects that can help you.
  • Automate what computers do better (systematic, recurring, etc.) and keep manual testing for what humans do better (interaction, visualization, etc.).
  • Automated and manual testing each have their advantages and disadvantages; a mixed strategy will strengthen your CSV.

Need help setting up your automated software testing? Or do you have additional questions? Our experts will be happy to help you!

Please do not hesitate to contact us.
