AI risk isn’t a side quest anymore. It’s the main storyline.
The prize? Faster decisions, smarter systems, limitless automation.
The catch? Blind spots so deep even your best model couldn’t predict them.
And yet, as companies race toward “AI-first,” most are dragging governance built for the fax-machine era, the corporate equivalent of trying to stream Netflix on a Nokia 3310.
Risk models born in a world of passwords and firewalls can’t cope with self-modifying agents or models that rewrite their own rules mid-sentence.
Over the last 18 months, I’ve been neck-deep in fixing that. Building frameworks from scratch. Shaping industry-first initiatives like the OWASP Top 10 for Agentic AI Systems and the WEF Cyber Resilience Compass. Not as a side hustle. Not in the comfort of a conference room with pastel sticky notes. I’m talking messy workshops, impossible deadlines and governance debates that could have melted steel.
Here are three lessons no textbook or ISO glossary will hand you: the kind you only learn by sweating through the uncertainty while trying to design AI risk standards that work.
1. Strategy needs tension, not just consensus
Everyone claims they want alignment. But too much alignment? That’s the fast lane to mediocrity.
One of the first things I learned was this: If everyone at the table agrees too quickly, you’re probably solving the wrong problem, or not solving anything at all.
Early in our AI governance work, we had engineers fixated on model weights, ethicists locked on fairness and compliance teams twitching over regulations that didn’t even exist yet. The polite thing would have been to dilute everything until everyone nodded in quiet agreement.
That’s how you end up with governance so bland it couldn’t stop a rogue chatbot from recommending bleach as a detox.
We did the opposite. We leaned into disagreement the way climbers use tension in a rope: not to fight, but to keep from falling. Heated debates weren’t dysfunction; they were design tools.
Instead of chasing a perfect, immovable framework, we built scaffolding: modular principles that could stretch as capabilities evolved. We embedded concepts for autonomy, feedback loops and emergent behaviour, not just static controls.
If your strategy sessions feel comfortable, you’re not building for the real world. You’re building a brochure.
2. Execution lives in the edge cases
The most significant AI threat isn’t evil robots. It’s a misunderstood system.
AI governance diagrams look beautiful in slide decks. Clean. Linear. Colour-coded.
But out in the wild, models wander. They learn things you didn’t teach them.
They drift into untested territory.
They simulate scenarios in the background, then make decisions you can’t fully trace.
We hit one of those traps head-on: Anthropic was experimenting with a self-improving language model, a system that could continuously adjust its own algorithms and code.
Clever in theory, until you realise the audit trail just deleted itself. Try governing a ghost.
The problem with most risk registers is that they assume the system “plays fair.” Self-modifying agents don’t. They can sidestep your spreadsheet.
So we shifted our approach. We built intent-aware safeguards: not rigid rules, but adaptive guardrails that adjusted to what the model was trying to do.
We didn’t just map architecture; we mapped behaviour.
- What happens when the AI lies?
- …when it makes a recursive call?
- …when it ignores or refuses your instructions?
Most governance frameworks crumble in these unusual and often overlooked corners. That’s where your playbook needs teeth.
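To make that concrete, here’s a minimal sketch of what an intent-aware guardrail can look like in code. Everything in it is illustrative: ProposedAction, classify_intent, the tool names and the thresholds are hypothetical stand-ins for the pattern, not any vendor’s production API.

```python
# Illustrative sketch of an intent-aware guardrail. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    tool: str            # what the agent wants to invoke
    stated_goal: str     # what the agent says it is trying to do
    call_depth: int = 0  # recursion depth of self-invocations

@dataclass
class GuardrailDecision:
    allowed: bool
    reason: str

MAX_CALL_DEPTH = 3
SENSITIVE_TOOLS = {"modify_own_code", "delete_logs"}

def classify_intent(action: ProposedAction) -> str:
    """Stand-in for a real intent classifier (a small model or rule set)."""
    if action.tool in SENSITIVE_TOOLS:
        return "self_modification"
    return "routine"

def evaluate(action: ProposedAction) -> GuardrailDecision:
    # Edge case: runaway recursion -- block before it spirals.
    if action.call_depth > MAX_CALL_DEPTH:
        return GuardrailDecision(False, f"recursion depth {action.call_depth} exceeds limit")
    # Edge case: the system tries to change itself or erase its trail.
    if classify_intent(action) == "self_modification":
        return GuardrailDecision(False, "self-modification requires human review")
    return GuardrailDecision(True, "within adaptive policy")

if __name__ == "__main__":
    print(evaluate(ProposedAction(tool="delete_logs", stated_goal="cleanup")))
    print(evaluate(ProposedAction(tool="search", stated_goal="lookup", call_depth=5)))
```

The specific checks matter less than the shape: the guardrail evaluates what the system is trying to do, and each of the edge cases above gets an explicit branch instead of a blank cell in a risk register.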
3. Build with, not for, the business
Nothing kills a governance standard faster than designing it in a vacuum.
You can’t lock yourself in a room, type up a 90-page PDF and expect product teams to salute. Real adoption happens where the friction lives: inside the sprints, in the workflow shortcuts, in the “just-ship-it” culture.
The people plugging AI into business processes often don’t read policies. Some don’t even know they exist. That’s why we co-created everything with engineers, product owners and even marketing.
We ran workshops where teams role-played AI failures. We red-teamed our frameworks to see where they’d snap. And we stopped asking, “Is this compliant?” and started asking, “Would this help you make a better decision under pressure with half the facts?”
The result? A living playbook. Not a governance tombstone gathering dust in SharePoint. Principles, triggers and templates baked directly into product and security lifecycles. Something that breathes with the business instead of policing it from afar.
When the people closest to the risk help shape the guardrails, they own them.
The future isn’t about control; it’s about readiness
Here’s the part many risk leaders still don’t want to hear: You will never fully control AI risk.
These systems move too fast, think too strangely and break too many assumptions to be fenced in forever. That doesn’t mean you’re powerless. It means you need a different muscle, one built for adaptation, not dominance.
- If you’re in policy, draft guardrails that flex.
- If you’re in engineering, build observability from day one (a minimal sketch follows this list).
- If you’re in audit, hunt for signals, not just evidence.
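Here’s one minimal sketch of what “observability from day one” can mean in practice: a hash-chained, append-only audit trail, so that if an agent edits or deletes its own history, verification fails loudly. The class and method names are hypothetical, chosen only to illustrate the idea.

```python
# Sketch of day-one agent observability: an append-only, hash-chained
# audit log, so a self-modifying system cannot silently rewrite history.
# All names here are illustrative, not a real library's API.
import hashlib
import json
import time

class AuditTrail:
    def __init__(self) -> None:
        self._entries: list[dict] = []
        self._last_hash = "genesis"

    def record(self, actor: str, event: str, detail: dict) -> None:
        entry = {
            "ts": time.time(),
            "actor": actor,
            "event": event,
            "detail": detail,
            "prev": self._last_hash,  # chain each entry to the one before it
        }
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._last_hash
        self._entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain; any edited or deleted entry breaks it."""
        expected_prev = "genesis"
        for e in self._entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev"] != expected_prev:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != e["hash"]:
                return False
            expected_prev = digest
        return True

trail = AuditTrail()
trail.record("agent-7", "tool_call", {"tool": "search", "goal": "lookup"})
assert trail.verify()
```

Hash chaining is one design choice among many; the non-negotiable part is that the record of what the system did lives outside the system’s reach.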
AI risk governance isn’t a one-time fix. It’s a posture. A muscle. And it only strengthens when you use it.
So stress-test your frameworks. Break your tools. Assume you’re missing something, because you are. And build with the expectation that you’ll be wrong, but ready to pivot fast.
The risk that matters most
The riskiest move in AI governance isn’t pushing a flawed framework into production.
It’s pretending you’re in control when you’re not.
Start small. Start now. Build the scaffolding. Test the edge cases. Involve the people who live with the risk daily. And keep your frameworks alive, because dead ones won’t defend you.
I’ve seen enough to know: no perfect governance model is waiting around the corner. There’s only the one you start today and evolve tomorrow.
If you’re building too, I want to hear from you. Bring your ideas. Challenge the thinking. Let’s make something that works in the real world, before the real world makes something that works around us.