System Purpose & Constraints
Before any architectural work begins, the Business Owner must articulate a precise statement of business intent. This statement defines what the system is intended to achieve, for whom it operates, and within what constraints. Precision matters here: “to assist human recruiters in screening high-volume applications by ranking candidates against role-specific competency profiles” is adequate. “To improve recruitment efficiency” is too vague to constrain design decisions or enable meaningful compliance assessment.
The Business Owner assesses the business intent for alignment with the organisation’s values, the EU AI Act’s requirements, and the fundamental rights of affected persons. If the intent cannot be satisfied without creating unacceptable risks to fundamental rights, the organisation must modify the intent or decline to develop the system. This assessment is documented in the AISDP as a precondition to Module 1.
The statement of business intent also serves as the reference point against which every subsequent design decision is measured. Architectural choices, feature selection, threshold calibration, and post-processing rules should all trace back to this statement. Where a design decision cannot be justified in terms of the stated purpose and constraints, it requires either revision of the decision or revision of the intent statement through the appropriate governance gate.
Key outputs
- Statement of business intent with specific purpose, beneficiaries, and constraints
- Business Owner’s assessment of alignment with organisational values and regulatory requirements
- Documented precondition to AISDP Module 1
Prohibited Outcomes
The ethical framework established before development must explicitly identify outcomes that the system is prohibited from producing. These prohibitions translate high-level principles into concrete design constraints that the engineering team can implement and test against.
The development team addresses several foundational questions when defining prohibited outcomes. What are the potential harms this system could cause? Who bears those harms? Are the harms distributed equitably? The answers inform a set of boundaries that the system must never cross, regardless of what the model’s raw outputs might suggest.
For a recruitment screening system, a prohibited outcome might be that no protected characteristic subgroup receives a selection rate below 90% of that of the group with the highest selection rate. For a credit scoring system, it might be that no applicant is rejected solely on the basis of postcode. These prohibitions become testable acceptance criteria, embedded in the CI/CD pipeline and monitored in production.
The AISDP must demonstrate the translation from principles to constraints. “The system must not discriminate” is a principle; a specific selection rate ratio threshold is a design constraint. The distinction is important because principles alone cannot be verified through testing, while constraints can.
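As a sketch of how such a prohibition becomes a testable constraint, the selection-rate check described above can be expressed as a small function suitable for a CI/CD acceptance test. This is an illustrative example only, not taken from the source: the group labels, the 0.9 ratio, and the function names are assumptions.

```python
"""Illustrative sketch (assumptions, not the source's implementation):
testing the prohibited outcome 'no protected subgroup's selection rate
falls below 90% of the highest subgroup's selection rate'."""

from collections import defaultdict


def selection_rates(decisions):
    """decisions: iterable of (group_label, selected: bool) pairs.
    Returns the per-group selection rate."""
    totals = defaultdict(int)
    chosen = defaultdict(int)
    for group, is_selected in decisions:
        totals[group] += 1
        if is_selected:
            chosen[group] += 1
    return {g: chosen[g] / totals[g] for g in totals}


def violates_prohibition(decisions, min_ratio=0.9):
    """True if any subgroup's selection rate is below min_ratio times
    the highest subgroup's rate -- i.e. the prohibited outcome occurred."""
    rates = selection_rates(decisions)
    highest = max(rates.values())
    return any(rate < min_ratio * highest for rate in rates.values())


# Hypothetical data: group A is selected 2/2, group B only 1/2,
# so B's rate (0.5) falls below 0.9 * A's rate (1.0).
decisions = [("A", True), ("A", True), ("B", True), ("B", False)]
```

A test of this form can run on every candidate model build, turning the documented prohibition into a pass/fail gate rather than a statement of principle.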
Key outputs
- Enumerated list of prohibited outcomes with measurable thresholds
- Mapping from ethical principles to testable design constraints
- Integration points with CI/CD acceptance criteria and production monitoring
Ethical Framework — Design Constraints & Non-Deployment Thresholds
The ethical framework documented before development begins should reference recognised principles such as the EU’s Ethics Guidelines for Trustworthy AI, the OECD AI Principles, or the organisation’s own responsible AI framework. Its purpose is to translate those principles into concrete design constraints and, critically, to establish non-deployment thresholds: the conditions under which the system must not proceed to production.
Design constraints derived from the ethical framework address questions of harm distribution and redress. What mechanisms allow affected persons to understand, challenge, and seek redress for the system’s decisions? What safeguards ensure the system serves its intended beneficiaries without unfairly disadvantaging others? These questions yield specific requirements for the explainability layer, the human oversight interface, and the post-processing rules.
Non-deployment thresholds define the performance and fairness boundaries below which the system is considered unfit for production. If the system’s fairness metrics, accuracy measures, or robustness scores fall below these thresholds during pre-deployment validation, deployment is blocked. The AI Governance Lead approves the ethical framework and the thresholds it establishes before development resources are committed.
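The gating logic described above can be sketched as a simple pre-deployment check. This is a hedged illustration under stated assumptions: the metric names, threshold values, and function names below are hypothetical, and real thresholds would come from the framework the AI Governance Lead approves.

```python
"""Illustrative sketch (assumptions, not the source's implementation):
a pre-deployment gate that blocks release when validation metrics fall
below the approved non-deployment thresholds."""

# Hypothetical thresholds; actual values are set in the ethical framework
# and approved by the AI Governance Lead before development begins.
NON_DEPLOYMENT_THRESHOLDS = {
    "accuracy": 0.85,
    "selection_rate_ratio": 0.90,
    "robustness_score": 0.80,
}


def failing_metrics(validation_metrics, thresholds=NON_DEPLOYMENT_THRESHOLDS):
    """Return the metrics that fall below their thresholds.
    A non-empty result means deployment must be blocked."""
    return [
        name
        for name, minimum in thresholds.items()
        if validation_metrics.get(name, 0.0) < minimum
    ]


# Hypothetical validation results: fairness is below threshold, so the
# gate reports a failure and the release pipeline halts.
metrics = {"accuracy": 0.91, "selection_rate_ratio": 0.87, "robustness_score": 0.83}
failures = failing_metrics(metrics)
```

Wiring a check like this into the release pipeline makes the non-deployment thresholds enforceable: deployment proceeds only when the failure list is empty.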
Key outputs
- Ethical framework document referencing recognised principles
- Design constraints derived from the framework, with measurable criteria
- Non-deployment thresholds for performance, fairness, and robustness
- AI Governance Lead approval record
Transparency Commitment (Deployer, Affected Person, Regulator, Internal)
Before development begins, the organisation commits to a level of transparency appropriate to the system’s risk tier and the expectations of its stakeholders. For high-risk systems, this commitment spans four dimensions.
Transparency to deployers specifies what information will be provided about the system’s capabilities, limitations, and operational requirements. This feeds directly into the Instructions for Use documentation required under Article 13. Transparency to affected persons specifies how individuals will be informed of the system’s involvement in decisions that affect them, and how they can obtain explanations of individual outcomes. This dimension intersects with the right to explanation under AI Act Article 86 and the right not to be subject to solely automated decision-making under GDPR Article 22 (see also Recital 71).
Transparency to regulators specifies how the AISDP and its evidence base will be made available for inspection by national competent authorities. The commitment should address response timelines and export formats. Internal transparency specifies how the development team, governance leads, and organisational leadership will maintain ongoing visibility into the system’s behaviour during both development and production operation.
These commitments are documented and approved by the AI Governance Lead before development resources are committed. They become the basis for the transparency measures documented in AISDP Module 8 and the monitoring dashboards described above.
Key outputs
- Transparency commitment document covering all four dimensions
- AI Governance Lead approval record
- Mapping to Module 8 deliverables and Article 13 requirements