Organisations that fine-tune a foundation model for use in a high-risk system must document the fine-tuning process with the same rigour applied to in-house model development. This obligation arises because fine-tuning can change the model’s intended purpose or risk profile and may therefore trigger provider status under Article 25(1)(b).
Fine-tuning records should capture data governance for the fine-tuning dataset, addressing the Article 10 requirements: provenance, composition, demographic representativeness, bias assessment, and known limitations. The fine-tuning methodology must also be documented: the approach (full fine-tuning, LoRA, QLoRA, prefix tuning, adapters), hyperparameters, training duration, convergence metrics, and random seed. Evaluation results must compare the fine-tuned model against the AISDP-declared performance and fairness thresholds.
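As an illustration, the record fields listed above can be captured in a structured schema. A minimal sketch in Python; the type names and field names are illustrative assumptions, not a mandated format:

```python
from dataclasses import dataclass, field

@dataclass
class DatasetGovernance:
    """Article 10 data-governance fields for the fine-tuning dataset (illustrative)."""
    provenance: str                      # where the data came from
    composition: str                     # what the dataset contains
    demographic_representativeness: str  # coverage of affected groups
    bias_assessment: str                 # summary of the bias analysis
    known_limitations: list[str] = field(default_factory=list)

@dataclass
class FineTuningRecord:
    """One auditable record of a fine-tuning run (illustrative schema)."""
    base_model_id: str              # identifier of the GPAI base model
    approach: str                   # "full", "LoRA", "QLoRA", "prefix", or "adapter"
    hyperparameters: dict           # learning rate, batch size, epochs, ...
    training_duration_hours: float
    convergence_metrics: dict       # e.g. final training and evaluation loss
    random_seed: int
    data_governance: DatasetGovernance
    evaluation_vs_thresholds: dict  # measured metric vs AISDP-declared threshold

# Example record for a hypothetical LoRA run.
record = FineTuningRecord(
    base_model_id="vendor/base-model-v2",
    approach="LoRA",
    hyperparameters={"lr": 2e-4, "rank": 16, "epochs": 3},
    training_duration_hours=6.5,
    convergence_metrics={"final_train_loss": 0.42, "final_eval_loss": 0.47},
    random_seed=1234,
    data_governance=DatasetGovernance(
        provenance="internal claims data, 2020-2023",
        composition="120k labelled examples",
        demographic_representativeness="age and gender coverage reviewed",
        bias_assessment="disparate impact ratio within tolerance",
        known_limitations=["sparse coverage of non-EU applicants"],
    ),
    evaluation_vs_thresholds={"accuracy": (0.91, 0.88)},
)
```

Storing the random seed and convergence metrics alongside the hyperparameters is what makes the run reproducible for audit purposes.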
A clear delineation is required between the characteristics inherited from the GPAI provider’s base model and the characteristics introduced by fine-tuning, which fall under the organisation’s responsibility. This boundary determines which compliance obligations are addressed by the provider’s own documentation and which the fine-tuning organisation must satisfy independently.
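This boundary can be made explicit in the documentation itself by tagging each documented characteristic with its origin. A minimal sketch; the tag values, characteristic names, and `obligations_for` helper are hypothetical:

```python
# Each documented characteristic is tagged with its origin, so the audit
# trail shows which party's documentation covers it.
INHERITED = "base_model"   # covered by the GPAI provider's documentation
OWN = "fine_tuned"         # the fine-tuning organisation's responsibility

characteristics = {
    "pretraining_data_summary": INHERITED,
    "base_architecture": INHERITED,
    "acceptable_use_policy": INHERITED,
    "fine_tuning_dataset_governance": OWN,
    "task_specific_evaluation": OWN,
    "intended_purpose_statement": OWN,
}

def obligations_for(party: str) -> list[str]:
    """Return the characteristics a given party must document."""
    return sorted(k for k, v in characteristics.items() if v == party)
```

A reviewer can then enumerate exactly which obligations the fine-tuning organisation must evidence independently, rather than inferring the split from prose.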
For parameter-efficient fine-tuning methods, the compliance boundary does not depend on the number or proportion of parameters modified; it depends on whether the modification changes the model’s intended purpose or risk profile. A LoRA adapter that redirects a general-purpose model toward a high-risk use case triggers the same Article 25(1)(b) analysis as full fine-tuning.
Fine-tuning records are stored in the model registry alongside the provenance metadata for the base model. The Model Selection Record should document the base model selection as a GPAI integration decision and the fine-tuning as a development decision, with a separate risk assessment for each.
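In the registry, the base-model provenance and the fine-tuning record can be linked as two decisions with separate risk assessments. A sketch of one possible entry shape; the keys and identifier formats are illustrative, not a prescribed schema:

```python
# Hypothetical registry entry: two decisions, two risk assessments.
model_registry_entry = {
    "model_id": "claims-triage-ft-v1",
    "base_model": {
        "decision_type": "gpai_integration",   # Model Selection Record entry
        "provenance": {"provider": "vendor", "model": "base-model-v2"},
        "risk_assessment_id": "RA-2024-017",   # risk assessment for the selection
    },
    "fine_tuning": {
        "decision_type": "development",        # documented as a development decision
        "record_id": "FT-2024-031",            # points to the fine-tuning record
        "risk_assessment_id": "RA-2024-018",   # separate risk assessment
    },
}

# The two risk assessments are deliberately distinct artefacts.
assert (model_registry_entry["base_model"]["risk_assessment_id"]
        != model_registry_entry["fine_tuning"]["risk_assessment_id"])
```

Keeping the two risk assessments as distinct artefacts mirrors the delineation above: one covers the decision to integrate a third-party GPAI model, the other covers what the organisation changed.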