5 lessons from Everest for high-risk AI projects

Nepal’s recent regulations for climbing Mount Everest offer some surprising parallels, lessons learned, and best practices that connect the physical risks of mountaineering to the governance risks of high-stakes AI.

The new, stringent Everest regulations center on mandatory local guides and prior experience, electronic tracking, strict health certifications, and waste management — a clear focus on experience, real-time observability, safety, and sustainability.

High-risk AI systems, as defined by the EU AI Act based on their potential impact on health, safety, or fundamental rights, are classified this way if they either fall under EU product safety legislation or are used in sensitive areas such as biometrics, critical infrastructure, education, employment, essential services, law enforcement, or justice.

So to help CIOs deal with high-risk AI implementations, here are five lessons from the top of the world.

Proof of acclimatization

In recent seasons, Everest experienced a surge in aspirational climbers who lacked basic high-altitude skills and equipment knowledge. Those factors, and refusals to turn around at hard time stops, resulted in several deaths.

So under the 2025/2026 Tourism Bill, climbers must now provide a verified certificate proving they’ve summited at least one peak above 7,000 meters in Nepal before they can apply for an Everest permit. Why 7,000? Because this altitude marks the transition from high to extreme altitude, a critical physiological and technical threshold.

For CIOs, this situation mirrors shadow AI and AI sprawl, where teams may lack the experience to mitigate the underlying risks of their implementations. To resolve it, it’s important that teams working on high-risk AI projects have proven experience with at least moderate-risk implementations, and understand the governance requirements of the higher-risk projects they’re about to tackle.

This experience rule should apply to the technologies involved as well: both the teams and the tech they work with need to be fit for the task. For example, CIOs may decide to prohibit the deployment of autonomous systems in core financial or customer-facing workflows unless the underlying model and its orchestration layer have successfully passed a pilot with documented safety metrics. According to KPMG’s Q1 2026 AI Pulse Survey, these types of restrictions are well underway, with 43% of organizations identifying high-risk use cases where autonomous agent decision-making isn’t allowed.
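As a sketch of how such a restriction might be enforced in practice, the gate below checks a pilot record against safety thresholds before permitting deployment into a high-risk workflow. The `PilotRecord` schema, metric names, and threshold values are illustrative assumptions, not from the article or the KPMG survey.

```python
from dataclasses import dataclass

@dataclass
class PilotRecord:
    """Documented pilot results for a model plus its orchestration layer
    (hypothetical schema for illustration)."""
    system: str
    completed: bool
    safety_metrics: dict  # metric name -> observed value in the pilot

# Hypothetical thresholds a governance board might set
SAFETY_THRESHOLDS = {
    "hallucination_rate": 0.02,    # at most 2% of sampled outputs
    "policy_violation_rate": 0.0,  # zero tolerance during the pilot
}

def may_deploy_high_risk(record: PilotRecord) -> bool:
    """Allow deployment into core financial or customer-facing workflows
    only if the pilot completed and every tracked safety metric is
    within its threshold; a missing metric fails the gate."""
    if not record.completed:
        return False
    return all(
        record.safety_metrics.get(name, float("inf")) <= limit
        for name, limit in SAFETY_THRESHOLDS.items()
    )

pilot = PilotRecord(
    system="loan-triage-agent",
    completed=True,
    safety_metrics={"hallucination_rate": 0.01, "policy_violation_rate": 0.0},
)
print(may_deploy_high_risk(pilot))  # True: pilot documented and within thresholds
```

The key design choice is that the default for an unreported metric is failure, so teams can’t pass the gate by simply not measuring.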

Mandatory black box and tracking

On Everest, all climbers are now required to rent some kind of GPS tracking chip that’s sewn into their jackets to expedite search and rescue operations, if needed.

“On Everest, tracking isn’t optional, it’s survival,” says Steven Pivnik, an entrepreneur and advisor with an endurance mindset built from years of Ironman racing and mountaineering, including Mt. Everest. “In high-risk AI, if you can’t see how decisions are made or trace outcomes, you don’t have control, you have exposure.”

In the AI world, this tracking requirement translates to real-time agentic observability. Every high-risk AI project should include a dedicated observability budget, typically 10 to 15% of total project cost. Teams should also implement trust verification frameworks that provide a real-time heartbeat of agent intent, ensuring that if an agent drifts into a non-compliant decision path, it’s located and paused before it can execute.
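A minimal way to picture such a trust verification framework: log every intended action as a heartbeat and vet it against a compliance allow-list before execution, pausing the agent the moment it drifts. The class, action names, and allow-list below are illustrative assumptions, not a real framework or vendor API.

```python
from enum import Enum

class AgentState(Enum):
    RUNNING = "running"
    PAUSED = "paused"

# Hypothetical allow-list of compliant actions for one high-risk workflow
COMPLIANT_ACTIONS = {"read_record", "summarize", "escalate_to_human"}

class ObservedAgent:
    """Wraps an agent's proposed actions in a trust-verification check:
    every intended action is logged (the 'heartbeat') and vetted before
    execution; a non-compliant action pauses the agent instead of running."""

    def __init__(self):
        self.state = AgentState.RUNNING
        self.audit_log = []  # real-time trace of intent, for observability

    def propose(self, action: str) -> bool:
        self.audit_log.append(action)       # record intent before acting
        if action not in COMPLIANT_ACTIONS:
            self.state = AgentState.PAUSED  # drift detected: pause, don't execute
            return False
        return True                         # safe to execute

agent = ObservedAgent()
agent.propose("summarize")       # allowed, logged
agent.propose("delete_account")  # drift: agent is paused before execution
print(agent.state)               # AgentState.PAUSED
```

Note that intent is logged before the compliance check, so the audit trail captures the attempted non-compliant action even though it never runs.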

Certified local guides — the Sherpa requirement

On Everest, solo climbing is now strictly prohibited. Every climber must be accompanied by at least one certified Nepali guide or high-altitude worker. This ensures local knowledge and safety are prioritized.

The business lesson is to move away from generalist AI teams and toward specialist, hybrid ones with the necessary technical, contextual, and compliance-related expertise. This includes team members with deep, industry-specific domain knowledge, dedicated compliance or ethics officers, cybersecurity specialists, and external partners as needed.

“Enterprises considering the implementation of complex AI projects should integrate cybersecurity early in their planning process,” says Jude Sunderbruch, MD at cybersecurity consulting firm OakTruss Group. “Some organizations have the necessary skills in house but in other cases, it’s advisable to leverage outside partners with relevant experience.”

The KPMG AI Pulse Survey also found that when it comes to managing agent risk in the next six to 12 months, 48% of organizations are looking to deploy AI agents developed by trusted tech providers versus going it alone.

Strict health certification

Climbers must submit a medical fitness certificate issued within 30 days of an expedition start date. And for those over 50, tests like an ECG and stress test may be required too.

In the AI world, there’s an expansive number of vendor and tool-specific certifications available to validate expertise. Organizations such as Thinkers360 offer holistic ones that cover an expert’s lifetime body of work in specific domains by examining their authored content and experience. In a world exploding with self-proclaimed AI experts, reviewing third-party credentials can be a useful way for CIOs and their teams to review vendor and practitioner capabilities.  

An additional way to conduct the medical check-up for your AI project is to run a formal impact assessment to identify potential health risks to the organization or the public before a single line of code is deployed. Having a pre-defined incident response and liability plan can also help establish the requisite financial and legal insurance for added protection.

Sustainability and waste management

Climbers are now mandated to use government-sanctioned biodegradable waste alleviation and gelling (WAG) bags to carry their waste down from higher camps to base camp for proper disposal.

In the AI world, this translates to a similar environmental focus as boards and executives increasingly turn their attention to the sustainability impact of AI data centers. With global data center investment projected to exceed $3 trillion over the next five years to meet AI-driven demand, some organizations are already reporting AI-related infrastructure costs and emissions doubling month-over-month as experimentation and pilots expand. 

To manage this aggregate energy consumption, CIOs need to work closely with their sustainability teams to set goals for the environmental footprint of their sovereign data centers, as well as those of their partners. They can achieve this by looking for technologies designed to address this challenge at the architectural level.

By paying attention to lessons learned from Everest, and new regulations focused on quality over quantity, you’ll be in a stronger position to mitigate risk in your next high-stakes AI project.  


Source: News | April 22, 2026
