Nepal's new regulations for climbing Mount Everest offer surprising parallels, lessons learned, and best practices that connect the physical risks of mountaineering with the governance risks of high-stakes AI.
The stringent new Everest regulations center on mandatory use of local guides, prior climbing experience, electronic tracking, strict health certifications, and waste management: a clear focus on experience, real-time observability, safety, and sustainability.
High-risk AI systems, as defined by the EU AI Act based on their potential impact on health, safety, or fundamental rights, are classified this way if they either fall under EU product safety legislation or are used in sensitive areas such as biometrics, critical infrastructure, education, employment, essential services, law enforcement, or justice.
To help CIOs navigate high-risk AI implementations, here are five lessons from the top of the world.
Proof of acclimatization
In recent seasons, Everest experienced a surge in aspiring climbers who lacked basic high-altitude skills and equipment knowledge. Those factors, combined with refusals to turn around at hard time stops, resulted in several deaths.
So under the 2025/2026 Tourism Bill, climbers must now provide a verified certificate proving they’ve summited at least one peak above 7,000 meters in Nepal before they can apply for an Everest permit. Why 7,000 meters? Because that altitude marks the transition from high to extreme altitude, a critical physiological and technical threshold.
For CIOs, this situation mirrors shadow AI and AI sprawl, where teams may lack the experience to mitigate the underlying risks of their implementations. To resolve it, ensure that teams working on high-risk AI projects have proven experience with at least moderate-risk implementations and understand the governance requirements of the higher-risk projects they’re about to tackle.
This experience rule should apply to the technologies involved as well: both the teams and the tech they use need to be fit for the task. For example, CIOs may decide to prohibit the deployment of autonomous systems in core financial or customer-facing workflows unless the underlying model and its orchestration layer have successfully passed a pilot with documented safety metrics. According to KPMG’s Q1 2026 AI Pulse Survey, these types of restrictions are well underway, with 43% of organizations identifying high-risk use cases where autonomous agent decision-making isn’t allowed.
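Such a deployment gate can be made concrete in code. The sketch below is illustrative only; the schema, metric names, and thresholds are all hypothetical, and real gates would draw on whatever safety metrics your pilot actually documents:

```python
from dataclasses import dataclass

@dataclass
class PilotResult:
    """Documented safety metrics from a completed pilot (hypothetical schema)."""
    completed: bool
    error_rate: float        # fraction of decisions flagged as incorrect
    escalation_rate: float   # fraction of cases handed off to a human

def deployment_allowed(pilot: PilotResult,
                       max_error_rate: float = 0.02,
                       max_escalation_rate: float = 0.20) -> bool:
    """Gate: an autonomous system enters a core workflow only after a passing pilot."""
    return (pilot.completed
            and pilot.error_rate <= max_error_rate
            and pilot.escalation_rate <= max_escalation_rate)

# A pilot with a 1% error rate and 10% escalation rate passes the gate.
print(deployment_allowed(PilotResult(True, 0.01, 0.10)))  # True
# A 5% error rate exceeds the threshold, so deployment is blocked.
print(deployment_allowed(PilotResult(True, 0.05, 0.10)))  # False
```

The value of encoding the rule this way is that the go/no-go decision becomes auditable and repeatable rather than a judgment call buried in a meeting.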
Mandatory black box and tracking
On Everest, all climbers are now required to rent a GPS tracking chip that’s sewn into their jackets to expedite search-and-rescue operations if needed.
“On Everest, tracking isn’t optional, it’s survival,” says Steven Pivnik, an entrepreneur and advisor who draws on an endurance mindset built from years of Ironman racing and mountaineering, including Mt. Everest. “In high-risk AI, if you can’t see how decisions are made or trace outcomes, you don’t have control, you have exposure.”
In the AI world, this tracking requirement translates to real-time agentic observability. Every high-risk AI project should include a dedicated observability budget, typically 10% to 15% of total project cost. Teams should also implement trust verification frameworks that provide a real-time heartbeat of agent intent, ensuring that if an agent drifts into a non-compliant decision path, it’s located and paused before it can execute.
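A minimal sketch of what such a heartbeat check might look like, assuming a hypothetical agent that reports each intended action before executing it. The blocked-action list and function names are invented for illustration; real frameworks would apply richer policies:

```python
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    PAUSE = "pause"

# Hypothetical policy: actions an agent may never take autonomously.
BLOCKED_ACTIONS = {"transfer_funds", "delete_records", "change_credit_limit"}

def heartbeat_check(intended_action: str, audit_log: list) -> Verdict:
    """Inspect each intended action in real time; pause it before execution
    if it drifts into a non-compliant decision path, and log everything so
    outcomes remain traceable."""
    verdict = Verdict.PAUSE if intended_action in BLOCKED_ACTIONS else Verdict.ALLOW
    audit_log.append((intended_action, verdict.value))  # the traceable record
    return verdict

log = []
print(heartbeat_check("summarize_account", log))  # Verdict.ALLOW
print(heartbeat_check("transfer_funds", log))     # Verdict.PAUSE
```

The key design point is that the check runs before execution, not after: a paused action waits for human review, and the audit log preserves the decision trail either way.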
Certified local guides — the Sherpa requirement
On Everest, solo climbing is now strictly prohibited. Every climber must be accompanied by at least one certified Nepali guide or high-altitude worker. This ensures local knowledge and safety are prioritized.
The business lesson is to move away from generalist AI teams and toward specialist, hybrid ones with the necessary technical, contextual, and compliance-related expertise. This includes team members with deep, industry-specific domain knowledge, dedicated compliance or ethics officers, cybersecurity specialists, and external partners as needed.
“Enterprises considering the implementation of complex AI projects should integrate cybersecurity early in their planning process,” says Jude Sunderbruch, MD at cybersecurity consulting firm OakTruss Group. “Some organizations have the necessary skills in house but in other cases, it’s advisable to leverage outside partners with relevant experience.”
The KPMG AI Pulse Survey also found that when it comes to managing agent risk in the next six to 12 months, 48% of organizations are looking to deploy AI agents developed by trusted tech providers versus going it alone.
Strict health certification
Climbers must submit a medical fitness certificate issued within 30 days of the expedition start date. And for those over 50, tests such as an ECG and a stress test may be required as well.
In the AI world, there’s an expansive number of vendor and tool-specific certifications available to validate expertise. Organizations such as Thinkers360 offer holistic ones that cover an expert’s lifetime body of work in specific domains by examining their authored content and experience. In a world exploding with self-proclaimed AI experts, reviewing third-party credentials can be a useful way for CIOs and their teams to review vendor and practitioner capabilities.
An additional way to conduct the medical check-up for your AI project is to run a formal impact assessment to identify potential health risks to the organization or the public before a single line of code is deployed. Having a pre-defined incident response and liability plan can also help establish the requisite financial and legal insurance for added protection.
Sustainability and waste management
Climbers are now mandated to use government-sanctioned biodegradable waste alleviation and gelling (WAG) bags to carry their waste down from higher camps to base camp for proper disposal.
In the AI world, this translates to a similar environmental focus as boards and executives increasingly turn their attention to the sustainability impact of AI data centers. With global data center investment projected to exceed $3 trillion over the next five years to meet AI-driven demand, some organizations are already reporting AI-related infrastructure costs and emissions doubling month-over-month as experimentation and pilots expand.
To manage this aggregate energy consumption, CIOs need to work more closely with their sustainability teams to set goals for the environmental footprint of their sovereign data centers, as well as those of their partners. They can then pursue those goals by favoring technologies designed to address energy efficiency at the architectural level.
By paying attention to lessons learned from Everest, and new regulations focused on quality over quantity, you’ll be in a stronger position to mitigate risk in your next high-stakes AI project.

