Five Reputational Risk Dimensions
Reputational risk, though not explicitly within the AI Act’s scope, is among the most consequential risks an organisation faces when deploying AI systems. Five dimensions structure the reputational risk assessment. Customer reputational risk considers how deployers and end users would respond to a publicised system failure; customer attrition in AI-dependent services tends to be abrupt. Market reputational risk considers the broader market perception; this risk is amplified for organisations in regulated sectors.
Regulatory reputational risk considers the organisation’s visibility to national competent authorities; early enforcement actions will attract disproportionate media attention. Shareholder and investor reputational risk considers ESG rating impacts and cost-of-capital effects. Employee reputational risk considers the effect on talent recruitment and retention; engineers and data scientists increasingly evaluate employers’ AI governance practices.
For each identified technical, fairness, and compliance risk, the AI System Assessor evaluates the reputational dimension using five factors: the probability of public discovery, the narrative severity, the stakeholder groups affected, the organisation’s ability to contain damage, and the likely duration of the reputational effect. Reputational risk mitigations include proactive transparency measures, crisis communication planning, and deployer notification procedures.
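As a minimal sketch of how the five-factor severity analysis could be recorded consistently across risks: the class below models one assessment. The 1–5 ordinal scale, the field names, and the equal-weight mean aggregation are illustrative assumptions, not prescribed by the methodology.

```python
from dataclasses import dataclass

@dataclass
class ReputationalSeverity:
    """One five-factor reputational severity assessment for a single risk.

    Each factor is rated on an assumed 1-5 ordinal scale (1 = low, 5 = high).
    """
    public_discovery_probability: int  # likelihood the failure becomes public
    narrative_severity: int            # how damaging the resulting story would be
    stakeholder_breadth: int           # reach across affected stakeholder groups
    containment_difficulty: int        # inverse of the ability to contain damage
    effect_duration: int               # likely persistence of the reputational effect

    def score(self) -> float:
        """Equal-weight mean of the five factors, yielding a value in [1, 5]."""
        factors = (
            self.public_discovery_probability,
            self.narrative_severity,
            self.stakeholder_breadth,
            self.containment_difficulty,
            self.effect_duration,
        )
        if not all(1 <= f <= 5 for f in factors):
            raise ValueError("each factor must be rated on the 1-5 scale")
        return sum(factors) / len(factors)

# Example: a failure with a severe narrative but good containment prospects.
risk = ReputationalSeverity(3, 5, 4, 2, 3)
print(risk.score())  # 3.4
```

In practice an organisation might weight the factors unevenly (e.g. narrative severity higher for consumer-facing systems); the equal-weight mean here is simply the most neutral starting point.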
Key outputs
- Five-dimension reputational risk assessment per identified risk
- Five-factor reputational severity analysis
- Reputational mitigations (transparency, crisis planning, notification procedures)
- Module 6 AISDP documentation