Subcontractors are already using AI on your projects. Without a board-level AI policy, the liability exposure and quality control implications land with you — whether you knew about it or not.
Construction firms have spent years developing robust subcontractor management frameworks — prequalification, performance monitoring, insurance requirements, health and safety obligations. Most of those frameworks say nothing about AI. That gap is closing fast.
Subcontractors are using AI tools today, on live projects, for structural calculations, cost estimating, design coordination, document drafting, and site monitoring. In many cases, the main contractor or client does not know this is happening. In some cases, the subcontractor does not fully understand what the tool is doing or what obligations its use creates.
This is a board-level risk, not an IT inconvenience.
What the liability exposure looks like
When a subcontractor uses an AI tool to assist with a structural calculation or a safety-critical specification and that output contains an error, the question of who carries the liability is not straightforward. It depends on the contract terms, the scope of the subcontractor's obligations, what the main contractor reviewed, and whether any contractual restriction on AI use existed.
Most existing construction contracts say nothing about AI use; they were written before the question was live. That silence creates ambiguity — and ambiguity in construction claims is expensive.
Beyond liability, there are quality control implications. An AI-generated drawing or specification that has not been reviewed by a competent professional may contain errors that are difficult to detect on casual review. A site team working to a drawing that originated from an AI tool, without knowing its origin, has no reason to apply additional scrutiny.
What an AI policy for construction should cover
An effective AI policy in a construction context needs to address several distinct questions.
Scope of permitted use. Which types of AI-assisted outputs are acceptable, and for which work packages? Preliminary design exploration is different from final specification. AI-assisted document drafting is different from AI-assisted structural analysis. The policy needs to distinguish between these categories, not treat AI as a single undifferentiated capability.
Disclosure requirements. When a subcontractor uses AI to produce a deliverable, the main contractor and client should know. A requirement to disclose AI use in submittals and documentation is a basic governance measure. Without it, review processes cannot be appropriately calibrated.
Human review requirements. Any safety-critical or contractually consequential output produced with AI assistance should require sign-off by a named competent professional — not just approval in the general sense, but documented review that confirms the output has been independently verified.
Intellectual property and data. What data is being uploaded to AI tools? Many AI products process inputs on third-party servers. Project data — drawings, specifications, commercial information — uploaded to a cloud-based AI tool may be processed in ways the contractor has not considered. Data provisions in the AI policy need to reflect actual project confidentiality obligations.
EU AI Act classification. Some AI applications in construction may meet the threshold for high-risk classification under the EU AI Act, particularly where they are used in safety-critical contexts. Boards and their management teams should understand whether any AI tool in their supply chain falls into this category and what obligations that creates. See our overview of EU AI Act compliance for the relevant framework.
The board's role
Boards of Irish construction businesses do not need to understand the technical detail of every AI tool their subcontractors use. They do need to ensure that a policy framework exists, that it is flowed down to subcontractors through contract terms, and that there is a process for reviewing and updating that framework as AI capabilities and regulatory requirements evolve.
The firms that have moved first on AI governance in construction are not doing so out of excessive caution. They are doing it because they have correctly identified that the liability and quality control exposure from unmanaged AI use across a complex supply chain is real, current, and growing.
We work with construction boards and senior management teams on AI governance frameworks appropriate to the sector. If you want to understand what good looks like and what your current exposure is, talk to us.