CSA & CSV: What the Experts Know
Monday November 14, 2022
Brian Slocum is a Senior Consultant and Project Manager for Arbour Group, a global provider of regulatory products and services to the Life Sciences industry for over 25 years. He has broad experience with software applications enabling manufacturing, distribution, engineering, quality management, marketing, clinical affairs, and related business functions. Brian also has extensive experience in various software development life cycle (SDLC) methodologies and their effective use within a regulated environment. In this article, Arbour Group offers its perspective on the new draft guidance.
On September 13, 2022, the FDA released draft guidance, for comment purposes only, on Computer Software Assurance (CSA) for Production and Quality System Software. Now that we have the draft guidance, we have an objective basis for another subjective opinion.
This release has been widely anticipated and the topic of many seminars and panel discussions over the last couple of years. Likely, you’ve already heard of CSA, and perhaps you’ve listened to the differing opinions of what it is, what it means, and how it compares to Computer Software Validation (CSV).
If you’ve heard of CSA from one of its many proponents, you may have heard it compared favorably to CSV in somewhat absolute terms: something to the effect that CSA adds value, whereas CSV merely produces documentation. Most knowledgeable practitioners would recognize such statements as simplistic and hyperbolic, but hyperbole usually contains a grain of truth.
CSA may have greater utility and may provide results vastly superior to CSV. However, this is only the case if you’re practicing CSV badly and are capable of practicing CSA well.
CSA vs. CSV
CSA was talked about and promoted for years before the FDA's draft guidance was released. But CSA, with a minor exception, doesn't change or revoke any portion of CSV[i]. What the CSA guidance does bring is new clarity in how to define risk based on the intended use and what types of testing are appropriate for different levels of risk. As the draft guidance states, CSA is a supplement to existing CSV guidance, not a revocation.
More than anything, CSA corrects misinterpretations and misapplications of CSV in the way some organizations have practiced it. Much of the misapplication can be traced to software-centric risk assessments: risks based on software categories, or risk-and-mitigation calculations that produce complex numeric scores with no practical meaning and often no differentiation in the test approach. Misapplying a risk-based approach to Computer Software Validation in this way is likely to produce poor results. The new guidance clearly states that risk should be process- and patient-centric[ii], not software-centric.
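To make the contrast concrete, the draft guidance's two-tier model can be sketched in a few lines of code. This is an illustrative sketch only, with invented function names, not an FDA or Arbour Group tool: the point is that the risk question is whether a foreseeable software failure could harm a patient or user, and the assurance rigor follows from that answer, not from what category of software it is.

```python
# Hypothetical sketch of process- and patient-centric risk tiering.
# All names are invented for illustration.

def process_risk(failure_could_harm_patient_or_user: bool) -> str:
    """Classify intended use as 'high' or 'not high' process risk,
    based solely on the potential for patient/user harm."""
    return "high" if failure_could_harm_patient_or_user else "not high"

def assurance_activities(risk: str) -> list[str]:
    """Suggest assurance rigor proportional to process risk."""
    if risk == "high":
        # Higher risk warrants more rigorous, scripted evidence.
        return ["scripted testing", "documented objective evidence"]
    # Lower risk can lean on less burdensome methods.
    return ["unscripted or ad-hoc testing", "leveraged vendor/SDLC artifacts"]

# Example: software controlling a device manufacturing parameter,
# where a failure could plausibly reach a patient.
risk = process_risk(failure_could_harm_patient_or_user=True)
print(risk, assurance_activities(risk))
```

Note how nothing in the sketch asks what kind of software it is; a software-centric scoring scheme would start (and often end) with that question instead.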
The benefits of CSA depend on your organization, your regulated products or services, and your level of validation (and organizational) maturity. If your organization has strayed far from the principles of risk-based CSV, you may be producing documentation with no inherent value. You may even burden your organization with non-value-added testing and compromise its effectiveness. If that's your situation, you may have much to gain from applying CSA principles. If, on the other hand, you have been following CSV guidance as intended, your practices may already be in close alignment with CSA.
What is CSA? How does CSA differ from how CSV is practiced?
Q: Can CSA provide better quality outcomes and take less time than CSV?
A: Yes. But only if it's implemented well.
Q: What is required to implement CSA well?
A: This requires organizational transformation because it moves processes and responsibilities upstream. If you tell your CSV department or CSV provider to do less testing and take less time, you will almost certainly fail.
Q: Can we get CSV involved earlier and tell them to accept development or user acceptance tests?
A: No. You can get CSV involved sooner if they're not involved soon enough, but you still need to implement CSA. With CSA, you can no longer wait until the end of the process to confer a quality or compliance stamp of approval (i.e., you can no longer pass the final exam by cramming at the last minute). That may have been possible with CSV. But with CSA, you must go to class and do the homework. With CSA, you must define intended use, determine risk, and assign appropriate assurance activities at the outset.
A fundamental tenet of CSA is to use upstream resources. This extends to choosing and qualifying suppliers. You can only leverage upstream resources if you know what is required and what artifacts are acceptable early on.
CSA = Agile
The key to understanding CSA is agile development. CSV, as initially conceived, presumed a waterfall methodology; CSA assumes an agile development environment. That said, you can still employ CSV in an agile environment[iii] or CSA in a waterfall environment, because CSA presumes agile methodologies only as a conceptual framework. By contrast, CSV, as typically practiced, is a waterfall step at the end of the development process, usually performed by a specialized group, before handing off to QA for final approval.
Most organizations have some level of hybridization, containing some elements of agile and some of waterfall. But for illustration purposes, we’ll assume CSA and CSV occupy hegemonic extremes.
In concept, CSA is part of an agile process in which you only document what has value to your organization. In contrast, CSV documents everything that may be needed to satisfy an auditor (either internal or external). At its hegemonic and dysfunctional worst, CSV can be a bloated bureaucracy, detached from the rest of the process, that adds weeks or months to the end of any development or deployment project to no good effect.
With CSA (properly implemented), the CSV function no longer occupies its own silo of self-contained information. Less information needs to be captured in that silo because, with CSA, it no longer needs to stand on its own. Instead, a broader array of upstream processes and related artifacts, such as vendor testing, SDLC artifacts, and quality assurance artifacts from earlier in the development process, may be relied upon to assure software quality.
All requirements will still be documented. The requirements traceability matrix (RTM) will map to a broader array of tests, which will come in various forms, from various sources, at different points in the process.
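A minimal sketch of what such an RTM might look like as a data structure follows. All requirement IDs, evidence sources, and identifiers below are invented for illustration; the point is that each requirement still traces to assurance evidence, but the evidence is drawn from multiple upstream sources rather than only from end-stage CSV scripts.

```python
# Hypothetical RTM: each requirement maps to a list of
# (evidence source, evidence identifier) pairs. All values invented.
rtm = {
    "REQ-001: user authentication": [
        ("vendor test report", "VT-117"),
        ("unit test", "test_login.py::test_lockout"),
    ],
    "REQ-002: batch record e-signature": [
        ("scripted CSV test", "OQ-042"),
        ("user acceptance test", "UAT-009"),
    ],
    "REQ-003: report formatting": [
        ("unscripted exploratory session", "EX-003 session notes"),
    ],
}

def untraced(rtm: dict) -> list[str]:
    """Return requirements with no assurance evidence mapped to them."""
    return [req for req, evidence in rtm.items() if not evidence]

# Every requirement is still documented and traced; only the
# form and source of the evidence vary with risk.
assert untraced(rtm) == []
```

The lower-risk requirement (REQ-003) is covered by an unscripted session alone, while the higher-risk ones carry scripted and vendor evidence, which is the kind of differentiation the RTM now has to express.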
A CSA implementation effort will affect more people than a self-contained CSV model. CSA will likely require a redefinition of some job duties as CSV becomes a more integrated part of the SDLC. For example, regular business users may perform unscripted or ad-hoc testing, because the documentation standard is no longer stringent enough to require a CSV specialist. But the “existing protocol” language in 820.70(i) still applies; it now applies more to a documented risk and test management program than to a single test case or test plan.
As everything shifts upstream, quality will improve, and the CSV silo will be smaller and take less time to build. That doesn’t mean the whole process will take less time, especially not in the short run, because you have shifted more responsibility, and likely more rigor in passing stage gates, earlier in the process.
The stage gates will be smaller but more numerous. Agile requires more work cycles, more focused content per cycle, and a more significant commitment to producing agreed-upon deliverables from each cycle. Each work cycle has an inspection step that requires stakeholder involvement. This will identify issues earlier in the cycle. Identifying issues earlier in the cycle is where the benefits of Agile and CSA start to be realized.
With CSV, sloppy and poorly executed SDLC processes can be covered up by the back-end artifacts in the CSV silo. With CSA, this is no longer possible. You will need documented SDLC procedures (for defining intended use, assessing risk, defining SDLC testing, etc.), and you will need related artifacts to show you are following those procedures. This requires a more coordinated level of IT governance (including IT, business, and quality stakeholders) to oversee the entire process, from design to release.
The best approach for you depends on your organization and its current practices. Every organization should strive for continuous improvement, but how, where, and at what pace should be based on an honest assessment of your current state. Regardless, you should not see yourself as faced with a binary set of choices. CSA is a supplement, a codification, and a refinement of CSV, not a divergent path.
To further discuss Arbour Group’s position on CSA and managing a holistic approach to validation, contact us to address your concerns and questions.
[i] The September draft guidance states that when final it will supplement the existing “General Principles of Software Validation” except for Section 6 (“Validation of Automated Process Equipment and Quality System Software”), which will be superseded.
[ii] The draft guidance simplifies risk assessment into “high process risk” and “not high process risk”. High process risk applies when a reasonably foreseeable software failure could harm patients or users (also known as “medical device risk”).
[iii] See AAMI TIR45, “Guidance on the Use of Agile Practices in the Development of Medical Device Software” (2012).