Healthcare organizations are under intense pressure to operationalize gen AI. But unlike many industries, they can’t afford to move fast and fix problems later. The earliest large-scale deployments, especially ambient clinical documentation, are already delivering measurable gains. At the same time, though, they’re exposing new fault lines around protected health information (PHI) and clinical trust.
What’s emerging isn’t a slowdown in AI adoption, but a redesign of how it’s introduced. Healthcare CIOs, CISOs, and clinical informatics leaders are converging on a shared understanding that scaling AI safely requires rethinking governance, security controls, and infrastructure in parallel.
According to Mark Mabus, CMIO and SVP of electronic health records at Parkview Health, ambient documentation, otherwise known as ambient listening or AI charting, has quickly become healthcare’s most visible gen AI use case. By capturing and summarizing physician-patient conversations, the technology promises to reduce clinician burnout while improving documentation quality. “It helps our providers get their notes done faster,” he says. “It reduces the amount of typing and their cognitive burden.”
That momentum, however, is forcing IT leaders to confront new operational questions that traditional healthcare architectures weren’t designed to answer. The closer organizations get to production scale, the more complex the risk profile becomes.
“Where’s the audio processed?” asks Mabus. “Is it on site, in a cloud? Is protected health information retained in there or not, and who validates the output? Those are things we have to assess and validate even before we consider putting a tool into production.”
Central to the emerging healthcare AI playbook is the idea that all decisions are made by humans. Assistive systems can draft notes, summarize charts, or suggest responses, but clinicians remain firmly in the loop. “Physicians still have to edit it and sign off on it,” says Mabus.
Mark Mabus, CMIO and SVP of electronic health records, Parkview Health
This human-in-the-loop requirement does more than just satisfy regulators — it shapes how organizations tier risk and prioritize deployments. At Parkview, AI use cases are formally categorized by clinical impact and automation level, with higher-risk scenarios facing stricter review. The cautious posture reflects hard-earned lessons from early pilots. In some cases, Mabus says, technically impressive tools failed to deliver clinical value. “When I’m expecting three lines and I get nine paragraphs, that creates extra cognitive burden,” he says.
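Parkview's actual framework isn't public, but a tiering scheme along the lines described, crossing clinical impact with automation level so higher-risk combinations face stricter review, might be sketched like this. All category names, scores, and thresholds below are illustrative assumptions, not Parkview's real criteria:

```python
from dataclasses import dataclass
from enum import IntEnum

class ClinicalImpact(IntEnum):
    ADMINISTRATIVE = 1    # e.g., scheduling or billing summaries
    DOCUMENTATION = 2     # draft notes a clinician edits and signs
    DECISION_SUPPORT = 3  # suggestions that could influence care

class AutomationLevel(IntEnum):
    ASSISTIVE = 1   # output is a draft; a human reviews everything
    SUPERVISED = 2  # system acts, but a human approves each action
    AUTONOMOUS = 3  # system acts without per-action review

@dataclass
class AIUseCase:
    name: str
    impact: ClinicalImpact
    automation: AutomationLevel

    def review_tier(self) -> str:
        # Higher impact x automation product -> stricter governance.
        score = self.impact * self.automation
        if score <= 2:
            return "standard review"
        if score <= 4:
            return "enhanced review"
        return "full clinical governance review"

# Ambient documentation: high visibility, but assistive and clinician-signed.
ambient = AIUseCase("ambient documentation",
                    ClinicalImpact.DOCUMENTATION, AutomationLevel.ASSISTIVE)
print(ambient.review_tier())
```

The key design point mirrors the article: the same documentation use case would jump tiers the moment its automation level rose, which is why assistive-first deployment keeps review burden manageable.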
This experience reinforces the broader point now resonating across healthcare IT: clinical usability and compliance readiness must advance together.
The governance problem of shadow AI
Even as formal deployments expand, healthcare leaders are grappling with a familiar enterprise problem: users experimenting outside approved channels. “I think it reminds me of texting in the healthcare environment,” Mabus says. “People will still text even though they’re provided secure tools. It’s just human nature.”
The analogy is instructive. Just as secure messaging platforms never fully eliminated SMS workarounds, gen AI policies alone are unlikely to stop clinicians from testing public tools when they perceive a productivity benefit.
Some organizations have attempted technical blocks, but experience suggests those measures have limits. Users can quickly route around network controls using personal devices and cellular connections. Instead, many health systems are pairing policy with education and enterprise-grade alternatives. The goal isn’t to eliminate experimentation but to channel it safely.
The risk of unmanaged experimentation isn’t theoretical. “I’ve seen large language models give completely different responses,” Mabus says. “And one of those responses would probably cause patient harm if used.” That variability is pushing healthcare organizations to emphasize validation, transparency, and clinician training alongside traditional compliance controls.
More broadly, healthcare is relearning a lesson familiar to enterprise IT leaders: governance is as much behavioral as it is technical.
The threat curve bending upward
While clinical teams focus on workflow integration, security leaders are watching a different trend line: the accelerating speed of AI-enabled attacks. “It’s not necessarily the complexity of the attacks, it’s the velocity,” says Kevin Torres, CISO and VP of IT at MemorialCare. “It’s coming at us in a relentless fashion.” He points to a recent password spray campaign against his health system that showed a tenfold spike in failed login attempts, an indication that adversaries are increasingly automating credential attacks.
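The kind of velocity signal Torres describes, a tenfold spike in failed logins over a historical baseline, is the sort of thing a simple rate monitor can surface. A minimal sketch, assuming access to timestamped failed-login events and a known hourly baseline (the threshold factor of 10 is taken from the anecdote, not from any vendor's detection logic):

```python
from collections import Counter
from datetime import datetime

def spike_alerts(failed_logins, baseline_per_hour, factor=10):
    """Return hours whose failed-login count is >= factor x baseline.

    failed_logins: iterable of datetime timestamps of failed attempts.
    baseline_per_hour: historical average failures per hour.
    """
    # Bucket each failure into its containing hour.
    by_hour = Counter(
        ts.replace(minute=0, second=0, microsecond=0) for ts in failed_logins
    )
    return {hour: count for hour, count in by_hour.items()
            if count >= factor * baseline_per_hour}

# Simulated data: a quiet hour, then a password-spray burst at 03:00.
events = [datetime(2024, 1, 1, 1, 5), datetime(2024, 1, 1, 1, 40)]
events += [datetime(2024, 1, 1, 3, m) for m in range(50)]
print(spike_alerts(events, baseline_per_hour=2))
```

Real SIEM detections layer on per-account and per-source dimensions, but the core idea, velocity against baseline rather than attack sophistication, matches the trend Torres describes.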
At the same time, the spread of AI-powered clinical tools is expanding the third-party risk surface. Ambient listening platforms, analytics engines, and generative assistants often process highly sensitive patient interactions outside the traditional boundaries of electronic health records. In response, MemorialCare has intensified vendor scrutiny. “We go through an exhaustive third-party risk management process and score whether it’s safe to share data with them,” says Torres. Reviews include NIST alignment, penetration testing history, access controls, and breach track records.
Kevin Torres, CISO and VP of IT, MemorialCare
The growing executive visibility around AI risk is also reshaping governance. Torres says his organization now provides its board with an enterprise risk management dashboard that explicitly tracks AI-related exposure alongside cybersecurity and business continuity risks. Even with those controls, uncertainty remains high. “We don’t know what we don’t know right now,” he says. “I think we’re due for a big disruption in one of the core AI vendors.”
That expectation is reinforcing a broader shift toward continuous monitoring rather than one-time compliance checks.
Healthcare architecture must be rebuilt for AI
Beneath the policy and security layers lies a deeper structural issue: many healthcare environments weren’t designed for the speed and fluidity of gen AI workflows. According to Cletis Earle, healthcare field CTO at cloud computing company Citrix, the first cracks often appear when clinicians begin experimenting with external tools. “If you don’t have a secure environment with de-identified information, clinicians think they’re doing a great thing,” he says. “But it creates a chaotic event.”
The problem isn’t malicious behavior but workflow friction. When approved tools lag behind user needs, clinicians may copy and paste data into consumer-grade AI services to save time, inadvertently exposing PHI.
Traditional perimeter controls are poorly suited to this pattern. So Earle argues organizations need to build what many now call a safe runway for AI innovation — an architectural approach that enables experimentation while containing risk. “You need to create sandboxes to allow clinicians to experiment,” he says. “But make sure the data is de-identified and contained.” In practice, that means tighter data segmentation, automated de-identification pipelines, and isolated environments where models can be tested without touching production PHI.
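A de-identification pipeline of the sort Earle describes might, at its simplest, redact obvious identifiers before text ever reaches a sandboxed model. The sketch below uses toy regex patterns; production de-identification relies on clinical NLP tools and must cover the full set of HIPAA Safe Harbor identifiers, so every pattern here is an illustrative assumption:

```python
import re

# Illustrative patterns only. Real pipelines handle names, addresses,
# and the other HIPAA Safe Harbor identifiers, typically via clinical NER.
PATTERNS = {
    "MRN": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def deidentify(text: str) -> str:
    """Replace identifier matches with bracketed placeholders
    before the text is allowed into an experimentation sandbox."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Pt seen 3/14/2024, MRN: 0048213, callback 562-555-0142."
print(deidentify(note))
# -> Pt seen [DATE], [MRN], callback [PHONE].
```

The architectural point is the ordering: redaction sits upstream of the sandbox boundary, so even a leaky experiment inside the sandbox never sees production PHI.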
Another emerging risk lies in how quickly POCs can outgrow their original guardrails. “Proofs of concept are essential, but if they’re not done thoroughly, they can break the framework of the architecture later,” Earle says. The warning highlights a growing concern among healthcare IT leaders that early AI pilots must be designed so governance, identity controls, and monitoring can scale with successful deployments.
Cletis Earle, healthcare field CTO, Citrix
Taken together, these experiences are beginning to crystallize into a recognizable operating pattern across health systems. Rather than pursuing fully autonomous AI, many organizations are advancing through a deliberately staged approach. Assistive-first deployments keep clinicians in control while teams build confidence in model performance and data handling. Risk tiering frameworks help separate low-impact automation from clinically sensitive use cases. And sandboxed environments allow experimentation without exposing production PHI.
At the same time, security teams are tightening third-party reviews and expanding behavioral monitoring while boards demand clearer visibility into AI-related enterprise risk. Education has also become a central pillar. Instead of relying solely on technical blocks, leading organizations are investing in clinician training and transparent communication about where AI can and can’t be used safely.
The result isn’t a slowdown in innovation but a more engineered approach to scale, one that treats compliance and security as design constraints rather than after-the-fact controls.
Compliance by design: the new CIO mandate
For now, assistive AI remains the dominant pattern in healthcare. But most leaders expect the pressure toward greater automation to increase as models improve and vendors push more advanced capabilities into clinical workflows. That shift will likely reopen many of today’s governance questions at a higher level of urgency. Autonomous ordering, agentic workflows, and cross-system orchestration will introduce new safety and accountability challenges that current frameworks only partially address.
Security teams, in particular, are entering a more turbulent phase. As Torres argues, the real impact of AI-enabled disruption is still ahead, with rising attack velocity and an expanding threat surface likely to test current defenses. Moreover, the current human-in-the-loop equilibrium is unlikely to hold indefinitely.
If there’s a unifying theme across healthcare AI adoption today, it’s that momentum and caution are advancing together. Health systems aren’t pulling back from gen AI. Ambient documentation, clinical summarization, and intelligent workflow support are already delivering tangible benefits. But the organizations moving most confidently are those investing early in governance redesign, architectural containment, and continuous risk monitoring.
The lesson for healthcare CIOs is becoming clear. The challenge is no longer whether to deploy AI, but how to build the guardrails that allow it to scale safely. The future of AI in healthcare will belong to the best runway engineers, not the fastest adopters.

