Once provenance assessment, licence review, governance gap analysis, and bias and adversarial evaluation are complete, a residual non-conformity risk profile remains for each open-source model component. This residual profile aggregates the risks that the downstream provider’s compensating controls cannot fully eliminate.
The AI System Assessor documents each residual risk with its source (provenance gap, governance gap, testing gap, or licence uncertainty), its potential impact on the system’s compliance posture, the compensating controls applied, and the residual risk rating after those controls. The AI Governance Lead reviews the aggregate residual profile and makes a formal risk acceptance decision, retained in the evidence pack.
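The documented fields above lend themselves to a structured register entry. The following is a minimal sketch of one possible representation; the field names, rating scale, and class names are illustrative assumptions, not part of the prescribed process:

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskSource(Enum):
    # The four gap categories named in the assessment process
    PROVENANCE_GAP = "provenance gap"
    GOVERNANCE_GAP = "governance gap"
    TESTING_GAP = "testing gap"
    LICENCE_UNCERTAINTY = "licence uncertainty"

class Rating(Enum):
    # Hypothetical three-level residual risk scale
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class ResidualRisk:
    component: str                    # open-source model component identifier
    source: RiskSource                # origin of the gap
    impact: str                       # effect on the system's compliance posture
    compensating_controls: list[str] = field(default_factory=list)
    residual_rating: Rating = Rating.MEDIUM  # rating after controls are applied

@dataclass
class RiskAcceptanceDecision:
    # Formal decision by the AI Governance Lead, retained in the evidence pack
    approver: str
    accepted: bool
    rationale: str
```

A register built from such entries makes the aggregate residual profile straightforward to review and to carry into the evidence pack.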
The residual non-conformity risk should factor into the model selection decision. A model with high residual non-conformity risk may be inappropriate for a high-risk system even if its technical performance is superior, because the documentation and compliance gaps may be difficult or impossible to close. The model selection rationale document should record this trade-off explicitly, demonstrating that compliance risk was weighed alongside performance in the selection process.
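One way to make that trade-off explicit in the selection rationale is a simple weighted score. The sketch below is purely illustrative: the weights, scales, and candidate figures are hypothetical assumptions, not a method prescribed by this process:

```python
def selection_score(performance: float, residual_risk: float,
                    w_perf: float = 0.6, w_risk: float = 0.4) -> float:
    """Combine a 0-1 performance score with a 0-1 residual non-conformity
    risk (higher means riskier); a higher overall score is better."""
    return w_perf * performance - w_risk * residual_risk

# Hypothetical candidates: model_a performs better but carries high
# residual non-conformity risk; model_b is well documented.
candidates = {
    "model_a": selection_score(performance=0.92, residual_risk=0.8),
    "model_b": selection_score(performance=0.85, residual_risk=0.2),
}
best = max(candidates, key=candidates.get)  # → "model_b"
```

Even a crude scheme like this documents that the technically superior model can lose the selection on compliance grounds, which is exactly the record the rationale document should contain.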
Residual risk from open-source components is subject to the same periodic review as all risk register entries. Changes in the open-source community (new evaluation results, disclosed vulnerabilities, licence amendments, provider cessation) may alter the residual risk profile and trigger reassessment.
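The change events listed above can drive an automated reassessment flag on the risk register. A minimal sketch, with illustrative event names that would need mapping to real monitoring feeds:

```python
# Community change events named in the process as reassessment triggers.
TRIGGER_EVENTS = {
    "new_evaluation_results",
    "disclosed_vulnerability",
    "licence_amendment",
    "provider_cessation",
}

def needs_reassessment(observed_events: set[str]) -> bool:
    """Return True if any observed change to the open-source component
    matches a reassessment trigger."""
    return bool(observed_events & TRIGGER_EVENTS)
```

In practice such a check would run as part of the periodic review cycle, so that trigger events surface before the next scheduled review date.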
Key outputs
- Residual non-conformity risk profile per open-source component
- AI Governance Lead risk acceptance decision
- Periodic review schedule for open-source component risks