According to McKinsey, 78% of organizations report using AI in at least one business function.
Translation: Your organization’s use of AI is no longer your only concern.
The frontier of exposure now extends to your partners’ and vendors’ use. The main question: are they embedding AI into their operations in ways you don’t see until something goes wrong? A vendor’s chatbot that mishandles sensitive data, an algorithm that delivers biased outputs or a partner that trains its models on your information can all cascade into regulatory penalties and reputational damage. And unless your contracts anticipate these scenarios, the burden is likely to shift to you.
To stay ahead of this risk, organizations can (and should):
- Require disclosure of where and how AI is used
- Restrict how their data can be fed into external models
- Mandate human oversight for high-stakes decisions
- Assign liability for errors or bias back to the vendor
These aren’t just legal details. They are your organization’s first line of defense in managing AI risk beyond your own walls.
1. Disclosure of AI use
You can’t govern what you can’t see. Require vendors to formally disclose where and how AI is used in their delivery of services. That includes the obvious tools (like chatbots) and embedded functions in productivity suites, automated analytics and third-party plug-ins.
Without disclosure, you may be relying on AI-generated work product without even knowing it — a compliance nightmare, especially if you operate in multiple jurisdictions.
This isn’t a hypothetical gap. While nearly four out of five organizations use AI, McKinsey reports that only 21% have fully mapped and documented their AI use cases. That internal blind spot shows how easily “shadow AI” can infiltrate workflows, and it underscores why the same transparency must be demanded from vendors.
Action to take
Spell out that disclosure must be proactive, not only upon request. In Europe, for example, the EU Artificial Intelligence Act already requires such transparency when AI is used in customer-facing roles.
2. Data usage limitations
Your data is one of your most valuable assets, yet you may not know how it’s used once it leaves your control. Many AI vendors want to leverage client data to train and refine their models. Unless your third-party contracts explicitly restrict this, sensitive information could end up in systems you don’t govern or even embedded in a model that benefits your competitors. The lack of transparency around AI use cases makes it nearly impossible to know whether your data is being repurposed in ways you never agreed to.
Action to take
Include explicit language that your data may not be used to train external models, incorporated into vendor offerings or shared with other clients. Require that all data handling comply with the strictest applicable privacy laws (GDPR, HIPAA, CCPA, etc.) and specify that these obligations survive the termination of the contract.
3. Human oversight requirements
AI can accelerate workflows and reduce costs, but it also introduces risks that can’t be left unchecked. Human oversight ensures that automated outputs are interpreted in context, reviewed for bias and corrected when the system goes astray. Without it, organizations risk over-relying on AI’s efficiency while overlooking its blind spots. Regulatory frameworks are moving in the same direction: under the EU AI Act, for example, high-risk AI systems must have documented human oversight mechanisms.
The consequences of skipping human oversight are already visible. In the US, Workday is facing a federal discrimination lawsuit, still unresolved as of September 2025, alleging that its AI-powered recruiting software discriminated against applicants based on race, age and disability. Even though the alleged bias originated in the vendor’s system, the case is being brought under federal employment law, which means the employers who relied on Workday’s tool are not insulated from accountability.
That’s a critical lesson for third-party contracts: regulators and courts don’t just look at the technology provider when a vendor’s AI makes a flawed or biased decision. They also look at the organization that used the tool in its operations.
Action to take
Define specific oversight requirements in contracts with vendors, such as requiring that a qualified recruiter review AI-driven hiring recommendations. Just as importantly, internal processes should be built to ensure those reviews actually happen.
4. Liability for output error or bias
When AI gets it wrong, the costs can be steep — from reputational fallout to regulatory fines. The critical question is who bears that liability. Without explicit clauses, the default may be that your organization is responsible for damages, even if the issue originated with a vendor’s AI tool.
Many vendors attempt to limit their own exposure. Research shows that 88% of AI technology providers cap their liability, often at no more than a single month’s subscription fee. While this data comes from AI software contracts, it illustrates a broader reality: third-party partners are unlikely to assume meaningful responsibility for AI-driven errors unless you require it in your agreement. That misalignment matters. Regulators and courts typically look first to the organization using the tool, not the vendor providing it.
Action to take
Negotiate liability provisions that explicitly cover AI-driven issues, including discriminatory outputs, regulatory violations and errors in financial or operational recommendations. Avoid generic indemnity language. Instead, make AI-specific liability its own section of the contract, with remedies that scale to the potential impact.
AI contracts as your first line of AI governance
As your vendors embed AI deeper into their services, liability, bias and data misuse can easily become your problem. The clauses outlined here provide a starting point for protection, but they’re not the end of the story. Your contracts must work in tandem with internal oversight, including maintaining an AI inventory, training employees and establishing clear policies for responsible use.
Regulators are moving quickly, lawsuits are beginning to test accountability and vendors will continue to push liability onto their clients. The organizations that thrive will be those that treat contracts as part of a broader AI risk framework — not an afterthought. By embedding disclosure, data protections, oversight and liability into agreements today, you create guardrails that protect your business tomorrow, no matter how the technology evolves.