In my last column, I shared the findings from my informal poll of leading CISOs on AI’s impact on people, policies, and processes. Many readers told me it was an eye-opening article, putting a spotlight on the momentum AI has gained in enterprise operations since last year.
Looking back at my April 2023 column, I wrote that, from my perspective in the venture world, AI penetration in the enterprise had barely scratched the surface. What a difference 18 months makes.
Today, leading enterprises are implementing and evaluating AI-powered solutions to help automate data collection and mapping, streamline administrative support, elevate marketing efficiencies, boost customer support, strengthen their cybersecurity defenses, and gain a strategic edge.
Over the summer, I connected again with C-suite members on these topics, asking several questions about AI’s changing influence within their organizations. Their answers are summarized below, and I think you’ll find them equally enlightening.
What existing (or legacy) technologies within your enterprise is AI impacting?
From my discussions around this topic, it is clear that AI has truly begun to penetrate the enterprise in the past year, often incorporated into existing systems, in some cases replacing legacy systems, and in others arriving as net new product deployments. A year ago, most of these AI deployments were tire-kicking exercises and proofs of concept (POCs); real deployments at scale are now occurring for a variety of use cases.
An industry-renowned CISO summed up the general theme by stating, “There is not much that isn’t being impacted or at least assessed to see how AI can be adopted. Everything from data processing, marketing, customer support, business content/records, as well as security.”
Another technology executive at a leading financial services firm told me that their organization has incorporated AI into a number of existing platforms to maintain a competitive advantage in their industry. AI has been rolled out into systems including their trading technologies, data analytics, development tooling, security technologies (from identity to log analytics), and even security training.
A Fortune 500 CISO remarked that AI is being used to analyze legacy datasets in new and innovative ways. They shared the following scenario: “Imagine you have several years’ worth of risk assessments, penetration tests, and internal audit reports, for example, and you manually have been trying to analyze this information to show trends in your security program. Previously, we would have likely hired a consulting company to help us start over with a new assessment or analyze the data to create the trend. We can now perform these assessments, saving our companies’ human labor and significant dollars.”
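For readers curious what that scenario might look like in practice, here is a minimal sketch of the workflow: feeding years of assessment reports to a large language model and asking it to surface trends. The directory layout, model name, and prompt are my own illustrative assumptions, not anything the CISO described, and a real deployment would need the data-handling and oversight controls the interviewees raise later in this piece.

```python
# Minimal sketch: ask an LLM to surface multi-year trends across old
# security assessment reports. Assumes the openai Python package and an
# OPENAI_API_KEY environment variable; the "assessments" directory and
# the model name are hypothetical placeholders.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical folder of past risk assessments, pen tests, and audits.
reports = sorted(Path("assessments").glob("*.txt"))

# Concatenate the reports, labeling each so the model can cite sources.
corpus = "\n\n".join(f"=== {p.name} ===\n{p.read_text()}" for p in reports)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; substitute an approved model
    messages=[
        {
            "role": "system",
            "content": (
                "You are a security analyst. Identify multi-year trends "
                "across the assessment reports provided, citing the "
                "report names that support each trend."
            ),
        },
        {"role": "user", "content": corpus},
    ],
)

print(response.choices[0].message.content)
```

In practice, long report archives would exceed a single context window, so a production version would chunk or summarize reports first; the sketch simply shows the shape of the idea.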
A large healthcare system CISO told me that they are currently evaluating clinical administrative support in the form of intelligent notation and charting, noting that there’s also some serious discussion internally about possible AI use with clinical decision support. “We’re evaluating how we can leverage AI to monitor supplementation in the policing and patient safety areas,” they said. “The latter primarily identifies fall risk, predicts patient issues, etc. In the data analytics and information security space, everything’s AI at this point.” The executive mentioned that the organization now has AI integrated into EDR, device visibility, and patient trend analytics.
Another CISO in the healthcare sector mentioned that all of their cloud-based systems now have some level of AI incorporated, and that Epic was a notable on-prem system demonstrating AI-based innovation. AI is clearly entering a phase of mass adoption in the enterprise.
What new AI-based or AI-enabled technologies are you deploying into production (not just a pilot)?
While much of the AI deployed today has been delivered via an update to an existing product, several executives pointed to new technologies that they have deployed. One organization shared that they have acquired AI-based tools from CrowdStrike and Armis for information security monitoring, as those platforms lean heavily on AI for fundamental, routine decision support.
Another CISO told me about their organization’s use of Glean AI and the implementation of an AI chatbot to assist with customer service, which is much more intuitive than traditional chatbots.
The financial services executive highlighted his company’s use of AI, in particular Copilot across all platforms: M365 services, GitHub, and Microsoft security services, including Entra (for identity), Sentinel (for log analytics), and Azure (for AI integration into custom applications).
Another CISO at a healthcare organization informed me that the company was “…deploying all that we can, using Abridge, Bing, Copilot, etc., as well as all the AI that’s built into existing technology like Epic, which provides additional insight and coverage.”
Again, the narrative has changed considerably in the past year, as AI has been both integrated into legacy systems and adopted through new product purchases.
What are AI’s positives and negatives?
Not surprisingly, lower costs, productivity increases, and gains in security compliance dominated most of the C-level responses. The non-profit healthcare system CISO is happy with AI’s benefits. “We can do more with less,” they noted. “Due to nursing shortages and austerity measures (common in healthcare), we’re lean on staff, so creative ways are needed to alleviate the pressures caused by those shortfalls.”
Another healthcare CISO told me that AI eases the overall documentation burden, reducing time and energy through workflow automation, summarization, note-taking, and recaps.
A top security exec noted that one of the bigger challenges was making the right AI investment, as the AI tools marketplace is flooded with innovations. Cutting through the noise and allocating budget for the right AI initiatives is a complex task.
On a positive note, another Fortune 500 CISO noted the benefits of using AI tools to harness the power of data and lower the costs of manual intervention. However, they pointed out that human supervision is still critical when using AI, to catch hallucinations or ‘fake’ data.
But taking the high road to AI-powered productivity has its challenges. Among them is the tricky balancing act of meeting AI regulation and compliance requirements while still being able to generate revenue. As AI moves throughout the organization, departmental regulation and compliance may have to follow suit.
What concerns do you have about using AI-based products, with or without your approval?
Risk is another area of concern: CISOs must watch for issues with AI’s accuracy, explainability, and reliability. “Most Gen AI models are not ‘intelligent’ per se, but more complex pattern management systems,” said one CISO. “The AI is error-checking for the probability of a pattern match based on its training, not for the quality of the content of the material.”
Other concerns include keeping a tight rein on which tools are allowed within the enterprise, who is authorized to use them, and overseeing AI for hallucination errors in data, marketing, and other areas. “I’m pretty certain our developers are using tools outside our trusted development zones, and we have few ways to detect this,” another CISO told me. “Trust, training, and awareness are important to ensure the employees are ‘doing the right thing,’ but honestly, if they are not, we’d likely not know.”
Reinforcing this notion, a healthcare CISO added another concern: “I have concerns about AI tools being used unintelligently. Many folks see AI and say, ‘I don’t need to proof this,’ without realizing that Gen AI is susceptible to quality issues and hallucinations. So if someone relies entirely on AI for clinical decision support without reviewing it, then eventual patient harm is the likely result.”
Another C-suite executive noted that approval is especially concerning in the clinical space as patient and research data may be impacted or touched.
A final concern cuts across all industry segments: reliance on third parties for AI adoption, which creates a higher risk of data leakage and privacy breaches. As a leading industry CISO put it, “An increasing number of organizations do not have the ability to build or integrate and host AI natively within a controlled environment. So, they are turning to 3rd and 4th party outsourcers. The compromise of a 3rd or 4th party through a data breach could be very business consequential. Not only from a cybersecurity reputation perspective but also the loss of data which is extremely valuable in business today.”
Do you think most companies are experiencing unapproved use of Gen AI for various purposes? And if yes, how do you think you can control it?
This is a significant area of concern for the CISOs I spoke with. All of them believe that unapproved use is fairly rampant.
One remarked, “Absolutely. There’s no way to police the use of AI on personal devices, and with the proliferation of devices and apps that are AI supported, it’s almost guaranteed some workflow or data is leaking into those products.”
Another added, “Yes, as it is free and easy to use. Think ChatGPT, which is available on your phone, so people use it all the time.”
And a final blunt quote from another executive, “The answer is unequivocally yes. Some of it is well intended but others are looking for ways to get more done with less as there is an increasing demand for gains in productivity and the ability to climb career ladders.”
CISOs hope to use trust, governance, and training as key factors to control the unapproved use of AI. “We expect that our employees will do the right thing and have the company’s best interest in mind when they use AI,” a CISO remarked. “We lack appropriate ways to trust, but it is a balancing act in which we hope our workers verify and act with integrity.”
Another executive added, “This starts with a conversation, quite frankly. Not everyone is going to agree, but coming to a resolution on how to guide and direct the use will allow both those eager to adopt and those who want to cautiously adopt to come to a growth path and likely strategy. Not communicating and putting guard rails in place is when it is unclear how it is to be used and what the velocity for increased adoption is. This is like a risk tolerance approach. Understanding the risk tolerance in an organization and defining it is paramount.”
Meanwhile, another hoped that a stronger governance approach would eventually come into play. “We have to find a way to leverage innovation to help us do this. If AI can solve so many problems, it should be able to solve the governance aspect easily.”
How will emerging regulations impact AI use?
Regarding emerging AI regulations, the C-Suite notes that it’s all still a little up in the air. One CISO told me, “The governance process will be critical in how we approach our use of AI. We need to follow the law, but there is no law and very little guidance right now, so companies are doing what they can to manage its use.”
For healthcare CISOs, there are additional concerns about emerging regulations, such as potential revisions to the HIPAA Security Rule. At least one CISO also mentioned uncertainty about how enterprises will respond to the Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA), signed under the Biden Administration, which requires organizations to report covered cyber incidents and ransomware payments to the Cybersecurity and Infrastructure Security Agency (CISA). Understanding how to report incidents, and what penalties may exist if incidents go unreported, could become a significant issue in the years ahead.

AI in the enterprise can offer pearls of productivity. However, it also carries risks and can produce perils for CISOs and CTOs. Striking the right balance between opportunity and risk will likely be the winning equation for enterprise organizations in the coming years.