AI Risk FAQ
How do you manage third-party AI risk?
Quick answer
Third-party AI risk arises when AI tools provided by external vendors create risk for your organisation, spanning data privacy, AI reliability, and EU AI Act compliance. Managing third-party AI risk starts with an inventory: which AI tools are in use, who provides them, and under what contractual terms. For high-risk AI systems, the EU AI Act places obligations on deployers even when the AI is developed by a third party, so due diligence on AI vendors must now include assessing their compliance posture under the Act.
Identifying third-party AI risk
Identifying third-party AI risk requires a complete inventory of AI tools in use, including AI features embedded in software platforms, not just standalone AI tools. Once the inventory exists, each tool should be assessed against three risk dimensions:

- Data privacy: what data enters the tool, where is it processed, is it used for training, and what are the contractual protections?
- Reliability: what is the vendor's documented accuracy and error rate for this tool in contexts like yours, what monitoring exists, and what is the vendor's escalation process for reliability issues?
- Regulatory compliance: what is the EU AI Act classification of this tool in your usage context, has the vendor provided the required technical documentation, and what deployer obligations does your use of this tool create?

Many organisations have no formal process for this assessment. AI tools end up adopted based on functionality and price, with no systematic evaluation of the risks the vendor relationship creates.
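For teams that track their inventory in a spreadsheet or register, the three dimensions above can be sketched as a simple record with a gap check. This is an illustrative schema only; the field names, and the idea of flagging unanswered questions as "open", are assumptions, not a standard or regulatory format.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AIToolAssessment:
    """One inventory entry, assessed against the three risk dimensions.
    All field names are illustrative, not a prescribed schema."""
    name: str
    vendor: str
    embedded_in: Optional[str] = None          # platform the AI feature ships inside, if any
    # Data privacy
    data_categories: list[str] = field(default_factory=list)
    used_for_training: Optional[bool] = None   # None = vendor has not confirmed either way
    processing_region: Optional[str] = None
    # Reliability
    documented_error_rate: Optional[float] = None
    vendor_escalation_process: bool = False
    # Regulatory compliance
    ai_act_classification: str = "unclassified"  # e.g. "high-risk", "limited-risk"
    technical_docs_received: bool = False

    def open_questions(self) -> list[str]:
        """Unanswered items that block a complete assessment."""
        gaps = []
        if self.used_for_training is None:
            gaps.append("training use unconfirmed")
        if self.documented_error_rate is None:
            gaps.append("no documented error rate")
        if self.ai_act_classification == "unclassified":
            gaps.append("EU AI Act classification pending")
        return gaps
```

A newly catalogued tool with nothing confirmed would report all three gaps, which makes "adopted without assessment" visible at a glance across the inventory.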
Due diligence on AI vendors
AI vendor due diligence has become a governance requirement, not just a procurement best practice. For high-risk AI systems under the EU AI Act, deployers must receive specific documentation from providers: an EU declaration of conformity, access to technical documentation, information about the training data, and support for the required fundamental rights impact assessment. Requesting these documents is a good due diligence step even where the system is not definitively classified as high-risk, because the vendor's response indicates their compliance posture. For all AI vendors, minimum due diligence should include:

- review of the data processing agreement for GDPR compliance;
- verification of whether data is used for model training;
- confirmation of data residency and transfer safeguards; and
- review of the vendor's published security and incident response documentation.

Vendors who cannot answer these questions clearly should not be used for business-critical AI applications.
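The two tiers of due diligence described above (a minimum set for all vendors, plus provider documentation for high-risk systems) can be expressed as a simple clearance gate. The check names and the pass/fail logic are assumptions for illustration, not a regulatory checklist.

```python
# Illustrative due-diligence gate: check identifiers are hypothetical labels,
# not terms defined by the EU AI Act.
MINIMUM_CHECKS = [
    "dpa_reviewed_for_gdpr",
    "training_use_verified",
    "data_residency_and_transfers_confirmed",
    "security_and_incident_docs_reviewed",
]

HIGH_RISK_CHECKS = MINIMUM_CHECKS + [
    "eu_declaration_of_conformity_received",
    "technical_documentation_access_granted",
    "training_data_information_provided",
    "fria_support_confirmed",   # fundamental rights impact assessment
]

def vendor_cleared(answers: dict[str, bool], high_risk: bool = False) -> bool:
    """True only when every applicable check has a documented 'yes'.
    A missing answer counts as a failure, matching the principle that
    vendors who cannot answer clearly should not be cleared."""
    required = HIGH_RISK_CHECKS if high_risk else MINIMUM_CHECKS
    return all(answers.get(check, False) for check in required)
```

Treating an absent answer as a failure, rather than a pass, encodes the closing point of this section: silence from a vendor is itself a due-diligence finding.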
Acuity AI Advisory helps Irish organisations assess EU AI Act compliance — including deployer obligations for third-party AI systems. See our EU AI Act compliance services.