v2.4.0

Maintainability asks whether the model can be retrained, fine-tuned, or recalibrated in response to post-market monitoring findings without triggering a substantial modification under Article 3(23). It also asks whether the model’s behaviour is stable across minor updates.

The assessment evaluates the model's sensitivity to retraining. Gradient-boosted trees and logistic regression produce stable, predictable changes when retrained on augmented data; the performance shift is typically proportional to the data change and can be estimated in advance. Deep neural networks can exhibit large behavioural shifts from small data changes, making it harder to perform incremental maintenance without triggering a substantial modification assessment.
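Retraining sensitivity can be made concrete as prediction churn: the fraction of points whose predicted label flips between the original model and the same model retrained on slightly augmented data. A minimal sketch, assuming scikit-learn and a synthetic dataset (the 100-sample augmentation and all names are illustrative, not part of the assessment procedure):

```python
# Sketch: estimate retraining sensitivity as prediction churn between a base
# model and the same architecture retrained on slightly augmented data.
# Dataset sizes and the augmentation batch are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_new, y_new = make_classification(n_samples=100, n_features=20, random_state=1)

base = GradientBoostingClassifier(random_state=0).fit(X, y)
retrained = GradientBoostingClassifier(random_state=0).fit(
    np.vstack([X, X_new]), np.concatenate([y, y_new])
)

# Churn: fraction of points whose predicted label flips after retraining.
churn = np.mean(base.predict(X) != retrained.predict(X))
print(f"prediction churn after augmentation: {churn:.3f}")
```

Running the same measurement for each candidate architecture gives a comparable, pre-deployment estimate of how much behaviour drifts under routine maintenance.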

The assessment also considers the operational effort required for maintenance. Models that can be retrained and revalidated through the existing CI/CD pipeline score higher than models requiring manual intervention, custom infrastructure, or lengthy retraining cycles. The availability of parameter-efficient fine-tuning methods (LoRA, adapters) for the candidate architecture may improve the maintainability score by enabling targeted updates without full retraining.
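The appeal of parameter-efficient methods is that only a small correction term is trained while the base weights stay frozen. A minimal NumPy sketch of the LoRA idea (shapes, rank, and initialisation are illustrative assumptions; real fine-tuning would use a library such as PEFT):

```python
# Sketch of a LoRA-style parameter-efficient update: instead of retraining the
# full weight matrix W, learn a low-rank correction B @ A. All shapes and the
# rank are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
d_out, d_in, rank = 64, 128, 4

W = rng.normal(size=(d_out, d_in))        # frozen base weights
A = rng.normal(size=(rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))               # trainable up-projection, zero init

def forward(x):
    # Effective weights are W + B @ A; only A and B change during maintenance.
    return (W + B @ A) @ x

x = rng.normal(size=d_in)
# With B initialised to zero, the adapted model exactly matches the base model,
# so a maintenance update starts from identical behaviour.
assert np.allclose(forward(x), W @ x)
print("trainable params:", A.size + B.size, "vs full:", W.size)
```

The zero-initialised up-projection is the design point that matters for maintainability: an update begins as a no-op and perturbs behaviour only as far as the low-rank correction is trained, which keeps the change small and auditable.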

Quantitative substantial modification thresholds should be estimated for each candidate: what magnitude of retraining-induced performance change would cross the threshold? Architectures where normal maintenance frequently crosses the threshold impose a heavy governance burden.
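Such a threshold check can be automated in the revalidation pipeline. A hedged sketch, in which the 3-percentage-point threshold and the AUC values are hypothetical placeholders rather than values from any regulation:

```python
# Sketch: flag when a retraining-induced performance change would cross an
# assumed substantial-modification threshold. The 3 pp threshold and the
# metric values below are hypothetical placeholders.
SUBSTANTIAL_MODIFICATION_THRESHOLD_PP = 3.0  # percentage points, assumed

def crosses_threshold(baseline_auc: float, retrained_auc: float,
                      threshold_pp: float = SUBSTANTIAL_MODIFICATION_THRESHOLD_PP) -> bool:
    """True if the absolute AUC shift, in percentage points, meets the threshold."""
    return abs(retrained_auc - baseline_auc) * 100 >= threshold_pp

print(crosses_threshold(0.91, 0.93))  # 2.0 pp shift -> False
print(crosses_threshold(0.91, 0.87))  # 4.0 pp shift -> True
```

Logging this flag for every scheduled retraining run gives a direct measure of how often a candidate's normal maintenance would force a substantial modification assessment.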

Key outputs

  • Maintainability score per candidate model
  • Estimated substantial modification sensitivity