Microservice Dependency Mapping
High-risk AI systems built on microservice architectures require a current dependency map showing which services communicate and how, the data contracts between them, the sequence in which services process data for a given inference request, and the failure modes that propagate across service boundaries. This is a compliance artefact, not an architectural convenience.
Without a dependency map, the organisation cannot assess whether a change to one service constitutes a substantial modification to the system as a whole. A modification to the data ingestion service that alters how missing values are handled will change the feature vectors produced by the feature engineering service, which will change the model’s inference behaviour. The Technical SME must be able to trace this chain for every proposed change.
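The chain-tracing described above can be sketched as a reachability query over the dependency map. The service names and map structure below are hypothetical examples, not a prescribed schema:

```python
from collections import deque

# Illustrative dependency map: each service lists the downstream services
# that consume its output. Names are hypothetical examples.
DEPENDENCY_MAP = {
    "data_ingestion": ["feature_engineering"],
    "feature_engineering": ["model_inference"],
    "model_inference": ["decision_api"],
    "decision_api": [],
}

def downstream_impact(changed_service, dependency_map):
    """Return every service reachable from the changed service,
    i.e. the set whose behaviour a proposed change may affect."""
    affected, queue = set(), deque([changed_service])
    while queue:
        service = queue.popleft()
        for consumer in dependency_map.get(service, []):
            if consumer not in affected:
                affected.add(consumer)
                queue.append(consumer)
    return affected
```

On this example map, a change to `data_ingestion` reaches `feature_engineering`, `model_inference`, and `decision_api`, which is exactly the chain the Technical SME must assess.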
Before any service is updated, a change impact analysis traces the change’s effects through the dependency map. The analysis references the specific AISDP modules affected and assesses whether the combined effect crosses the substantial modification threshold. The composite version identifier captures the specific combination of service versions currently deployed; a deployment event that changes one microservice changes the composite version, even if the other services remain unchanged.
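One way to derive such a composite version identifier is to hash the sorted set of deployed service versions; the encoding below (name=version pairs, SHA-256, truncated) is an illustrative convention, not a required format:

```python
import hashlib

def composite_version(service_versions):
    """Derive a composite version identifier from the deployed version of
    each microservice. Sorting makes the result order-independent, and a
    change to any single service version yields a new identifier.
    Sketch only: the encoding is one possible convention."""
    canonical = ";".join(
        f"{name}={version}" for name, version in sorted(service_versions.items())
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:12]
```

A deployment that bumps one service (say, the ingestion service from 2.1.0 to 2.1.1) produces a different identifier even though every other entry is unchanged, which is the property the change impact analysis relies on.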
Key outputs
- Service dependency map with communication paths, data contracts, and failure modes
- Change impact analysis template
- Integration with the composite version identifier
- Module 3 AISDP evidence
Contract Tests (Consumer Expectation Validation)
Contract testing addresses a failure mode that integration testing misses: the silent breaking change. When a data provider modifies an API response format, or a feature computation service changes its rounding behaviour, the dependent system may continue operating without errors yet produce incorrect results. Contract testing detects these breaks before they reach production.
Consumer-driven contract testing (Pact) works by having each consumer of a service define a contract: “I expect to send this request and receive a response with these fields, of these types, within these value ranges.” The contracts are stored in a broker and verified against the provider on every provider build. If the provider makes a change that violates a consumer’s contract, the provider’s build fails before the change is deployed.
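The shape of such a contract check can be illustrated with a minimal hand-rolled sketch; this is not the Pact API (Pact contracts are JSON documents verified via a broker), just the consumer-expectation idea in plain Python, with a hypothetical contract format:

```python
def verify_contract(response, contract):
    """Check a provider response against a consumer's expectations:
    required fields, their types, and optional value ranges.
    A simplified stand-in for what a broker-verified contract encodes."""
    violations = []
    for field, spec in contract.items():
        if field not in response:
            violations.append(f"missing field: {field}")
            continue
        value = response[field]
        if not isinstance(value, spec["type"]):
            violations.append(
                f"{field}: expected {spec['type'].__name__}, got {type(value).__name__}"
            )
            continue
        lo, hi = spec.get("range", (None, None))
        if lo is not None and value < lo:
            violations.append(f"{field}: {value} below minimum {lo}")
        if hi is not None and value > hi:
            violations.append(f"{field}: {value} above maximum {hi}")
    return violations
```

For a contract such as `{"score": {"type": float, "range": (0.0, 1.0)}, "customer_id": {"type": str}}`, a provider response whose `score` drifts to 1.4 fails verification even though the response parses cleanly: precisely the silent breaking change described above.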
Statistical contract testing (Great Expectations applied to data interfaces) extends the concept to data quality. A data consumer defines statistical expectations: “I expect the income column to have no null values, to be non-negative, and to have a mean within 10% of the historical mean.” Statistical contracts are particularly important for ML systems, because a delivery that satisfies the schema contract but violates the statistical contract may be silently accepted and degrade model performance. Contract tests run as part of the CI pipeline for every service, and a failure blocks deployment.
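The expectations in the income-column example can be sketched as follows; this is equivalent logic written in plain Python rather than the Great Expectations API, and the column semantics and 10% tolerance are the illustrative assumptions from the text:

```python
def check_statistical_contract(values, historical_mean):
    """Evaluate the statistical expectations from the example above:
    no nulls, non-negative values, mean within 10% of the historical mean.
    Sketch only: real expectations would be defined per data interface."""
    if not values:
        return ["empty delivery"]
    if any(v is None for v in values):
        # Nulls make the remaining numeric checks meaningless; stop here.
        return ["null values present"]
    failures = []
    if any(v < 0 for v in values):
        failures.append("negative values present")
    mean = sum(values) / len(values)
    if abs(mean - historical_mean) > 0.10 * abs(historical_mean):
        failures.append(
            f"mean {mean:.2f} outside 10% of historical mean {historical_mean:.2f}"
        )
    return failures
```

A CI step would run these checks on each delivery and exit non-zero when the returned list is non-empty, which is what blocks the deployment.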
Key outputs
- Consumer-driven contract definitions (Pact or equivalent)
- Statistical contract definitions (Great Expectations or equivalent)
- CI pipeline integration with deployment blocking on failure
- Module 5 and Module 3 AISDP evidence