Most enterprises have been dabbling in AI for a few years, running isolated pilots and POCs with limited follow-through. But a different pattern has started to emerge. Some organizations are now turning so-called random acts of AI into repeatable, measurable, and mission-aligned business practices.
Here, IT leaders from different industries who’ve made that journey share four key lessons they learned along the way.
Organize for intentionality
When Dan Jennings became CTO of Walgreens in 2023, the company was full of energy around AI, but it lacked coordination. “We had pockets of AI activity everywhere,” he says. “Different teams were experimenting with models, pilots, and vendor tools, but there wasn’t a unified strategy.”
His first step was to bring order to that experimentation by creating an AI Center of Enablement (COE), a virtual structure that connects technology, data, InfoSec, and business units under a common framework. The COE serves as both innovation engine and control tower, designed to filter, prioritize, and scale AI initiatives that align with Walgreens’ business strategy. “It’s about fail fast, learn fast, but with discipline around innovation,” he adds.
Dan Jennings, CTO, Walgreens
The COE evaluates each proposal, then guides teams through a consistent process: POC, MVP testing, and measurable deployment. “We’re treating AI like any other product investment,” Jennings says. “There’s a roadmap, a business case, and a set of outcomes we can track.”
That structure allows Walgreens to balance its dual identity as a retailer and healthcare provider — industries with very different levels of risk tolerance. On the retail side, AI can move fast, driving inventory optimization, personalization, and digital engagement. On the healthcare side, every initiative requires governance, transparency, and validation before rollout. “We’re bringing both worlds together,” Jennings says. “The agility of retail and the discipline of healthcare.”
He describes the company’s evolution as a shift from enthusiasm to intentionality. Early on, employees were eager to try generative tools on their own, creating what Jennings calls random acts of AI. Today, those efforts are being funneled through the COE into governed use cases that support core priorities, like pharmacy forecasting and staff scheduling. “It’s no longer about playing with models,” he says. “It’s about using AI to drive measurable business outcomes, safely and at scale.”
Measure ROI beyond the balance sheet
For Will Landry, SVP and CIO of Franciscan Missionaries of Our Lady Health System (FMOL Health), traditional financial metrics only tell part of the story. “We’re a nonprofit health system,” he says, “so our ROI is about physician engagement and satisfaction, and patient engagement and satisfaction — not just dollars.” In other words, success is measured as much in human experience as in economic efficiency.
Landry says the organization evaluates how AI tools reduce clinician fatigue, shorten documentation time, and improve the quality of patient interaction. For example, with ambient listening systems now deployed in hundreds of clinics, physicians spend less “pajama time” finishing notes after hours and more time in meaningful patient dialogue, a shift that has boosted both satisfaction and morale. Moreover, patients receive their notes sooner, and those notes are more accurate.
Will Landry, SVP and CIO, FMOL Health
Even so, FMOL Health’s investments in AI are delivering measurable operational returns. Landry points out that as the health system has steadily expanded its AI adoption, from clinical documentation tools to back-office automation, total technology spending has grown at a slower rate than the increase in revenue and service volume. “That tells me the efficiencies are real,” he says. “We’re automating more with the same team and the same headcount.” For Landry, that balance — higher engagement and flat tech spend — is the truest measure of AI’s return: a healthier organization built on both fiscal prudence and human well-being.
At office furniture manufacturer Steelcase, CTO Steve Miller also takes a broad view of ROI, one that extends well past profit margins and cost reduction. “It depends on what area of the business we’re trying to address,” he says. “In some cases, it’s a sustainability metric we’re trying to influence. In others, it’s experience and sentiment levels of people working with our products.”
Instead of treating AI as purely an efficiency engine, Miller and his AI Business Group link each initiative to outcomes that matter most to that part of the company, whether that means lower energy use, reduced material waste, or improved customer and employee experience. For example, AI-driven analytics and simulation tools help optimize manufacturing flow, minimizing energy consumption and scrap rates, while design-focused systems measure success in how seamlessly users interact with Steelcase products. “Some of our results aren’t pure financial metrics,” Miller says. “Some are sustainability metrics and sentiment, for example.”
By connecting AI investments to sustainability and human experience, Steelcase builds value that compounds over time. The company’s predictive analytics, for instance, can forecast when organizations are likely to renovate offices — insights that help customers plan responsibly while reducing overproduction and waste. For Miller, this is the essence of deliberate AI. “It’s about using data and intelligence to improve how people work and how we make things, not just to make them cheaper.”
Build trust through guardrails and governance
For Landry, trust is the foundation for any AI initiative in healthcare. “We’re custodians of patient data,” he says. “Our patients trust us to protect their information.” That responsibility means AI innovation at FMOL Health must always balance experimentation with safety and ethical oversight.
To achieve that balance, Landry’s team embeds governance into every layer of technology design and deployment. Even experimental systems such as ambient clinical listening tools that draft visit notes are rolled out under strict supervision, with explicit patient consent and post-session clinician review. “We’re conservative by design,” he says. “If the AI drafts something, the physician always reviews and signs off before it goes anywhere near the record.”
FMOL Health’s governance structure also extends beyond clinical applications. The organization’s data management and security teams work together to monitor how data is shared among hospitals, clinics, and other partners through its Community Connect program, which allows independent clinics to use FMOL Health’s Epic system securely. The goal, Landry says, isn’t just compliance but confidence: clinicians and patients need to know AI is being used responsibly.
For Landry, those guardrails don’t slow innovation; they make it sustainable. “Healthcare has to move fast, but it can’t break things,” he says. “Our governance framework gives us the confidence to move forward safely.”
According to Miller, trust in AI starts with structure. “We have a data governance council that helps provide the guardrails of not just what you can do with AI, but what you should do,” he says. The council, composed of leaders from data governance, information security, legal, and HR, reviews every AI initiative to ensure data quality, privacy, and ethical use. Their work includes defining which datasets are allowed for which purposes, monitoring for bias, and enforcing clear rules on how sensitive information is handled.
Steve Miller, CTO, Steelcase
Miller insists this framework isn’t a bureaucratic layer but a living governance system built directly into AI development. “We have a data governance expert actually embedded in our AI development team,” he says. That person’s job is to oversee model behavior, flag potential risks, and ensure responsible iteration as new tools are tested. “You have to do a lot to make sure data is being used properly,” Miller says. “And if it’s not producing good results, you rein it in quick.”
He emphasizes that these practices enable innovation. By clarifying ethical boundaries early, teams can experiment freely within safe parameters. “When you get into agentic AI, you really have to define and enforce boundaries,” Miller says.
At Steelcase, governance is more than compliance; it’s a foundation for trust. The company’s designers and engineers rely on AI systems daily to create product configurations, simulate plant layouts, and analyze space utilization data. “It’s about giving our teams confidence that the AI will behave predictably and ethically,” Miller says. “That’s what allows people to actually use it.”
Empower people through AI
For Russell Levy, chief strategy officer at data broker ZoomInfo, the real power of AI lies in amplifying human capability, not automating people out of the process. “Some of our best AI agents weren’t built by data scientists,” he says. “They were built by sales reps who knew exactly what worked for them, and they wanted to share it with everyone else.”
That principle has shaped ZoomInfo’s entire AI strategy. The company gives employees the tools to design and deploy their own agentic AI but under a governance model that keeps humans firmly in charge. “Every agent we deploy has a human in the loop,” Levy explains. “An agent can draft an email, log a meeting, or summarize a call, but a person decides when and how that output gets used.”
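The human-in-the-loop pattern Levy describes — an agent may draft output, but only a person can release it — can be sketched in a few lines. This is a minimal illustration with hypothetical names, not ZoomInfo’s actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Draft:
    """Output an agent proposes but never sends on its own."""
    kind: str        # e.g. "email", "call_summary"
    content: str
    approved: bool = False

def agent_draft_email(context: str) -> Draft:
    # Stand-in for a model call; a real agent would generate this text.
    return Draft(kind="email", content=f"Following up on: {context}")

def human_review(draft: Draft, approve: bool) -> Draft:
    # A person decides when and how the agent's output gets used.
    draft.approved = approve
    return draft

def send(draft: Draft) -> str:
    # The gate: unapproved drafts never leave the system.
    if not draft.approved:
        raise PermissionError("draft requires human sign-off")
    return f"sent {draft.kind}"

draft = agent_draft_email("Q3 renewal")
draft = human_review(draft, approve=True)
print(send(draft))
```

The key design choice is that the send path checks the approval flag, so the human review step cannot be skipped by any agent, no matter what it drafts.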
Russell Levy, chief strategy officer, ZoomInfo
Levy sees this as a shift from automation to augmentation. By codifying the instincts of high-performing employees into reusable AI agents, ZoomInfo makes expertise scalable across the organization. A salesperson’s best follow-up strategy or a support rep’s phrasing for tough conversations can now be captured and shared automatically. “It’s about taking tribal knowledge and making it reusable,” Levy says.
That democratization of AI, where nontechnical employees create agents that embody their own workflow intelligence, has become a cultural turning point. “Once people realize they can build something that helps them and their whole team, adoption takes care of itself,” Levy explains. The result is a workplace where humans and agents collaborate continuously, each learning from the other. “AI doesn’t replace our people,” he says. “It scales their impact.”
Across these sectors, the contrasts are clear, but so is the common thread: deliberate AI isn’t a single playbook. It’s a mindset that balances innovation with integrity, and automation with human judgment.