When I wrote my previous column in May about generative AI uses and the cybersecurity risks they could pose, CISOs told me that their organizations hadn’t deployed many (if any) generative AI-based solutions at scale.
What a difference a few months makes. Generative AI has now infiltrated the enterprise through tools and platforms like OpenAI’s ChatGPT and DALL-E, Anthropic’s Claude.ai, Stable Diffusion, and others, in ways both expected and unexpected.
In a recent post, McKinsey noted that generative AI is expected to have a “significant impact across all industry sectors.” As an example, the consultancy estimates that the technology could add $200 billion to $400 billion in annual value to the banking industry if its various use cases are fully implemented. The potential value across a broader spectrum of institutions could be enormous. But while some organizations look to incorporate generative AI to strengthen efficiency and productivity, others worry about controlling its usage within the enterprise.
Generative AI on the loose in enterprises
I contacted a range of world-leading CIOs, CISOs, and cybersecurity experts across industries to get their take on the recent surge in unmanaged usage of generative AI in company operations. Here’s what I learned.
Organizations are seeing a dramatic rise in informal adoption of gen AI – tools and platforms used without official sanction. Employees are using them to develop software, write code, create content, and prepare sales and marketing plans. A few months ago, monitoring these unsanctioned uses was not on a CISO’s to-do list. Today it is, because they create a murky new risk and attack surface to defend.
One cybersecurity expert told me, “Companies are unprepared for the influx of AI-based products today – from a people, process, or technology perspective. Furthermore, heightening the issue is that a lot of the adoption of AI is not visible at the product level but at a contractual level. There are no regulations around disclosure of ‘AI Inside.’”
Another CISO told me that his primary concerns included the potential for IP infringement and data poisoning. He also pointed to the need for technologies to secure the AI engines and workflows used by the company (or its third-party partners) to support creative content development.
A high-level CISO in capital management feared “plagiarism, biased information impacting decisions or recommendations, data loss to numerous organizations, and reliance on – and economic waste from – products that don’t prove short- or medium-term value.”
One CIO told me that his most significant concern right now is having his company’s proprietary data or content incorporated into the training set (or information-retrieval repository) of a third-party product and then presented as that company’s own work product.
Privacy leaks?
Among the respondents, the clear message was that companies fear unintended data leakage. A CISO at a major marketing software firm worried about this explicitly, stating, “The real risk is that you have unintentional data leakage of confidential information. People send things into ChatGPT that they shouldn’t, and those things are now stored on ChatGPT servers. Maybe it gets used in modeling. Maybe it then winds up getting exposed. So I think the real risk here is the exposure of sensitive information. We have to ask ourselves, ‘Is that data being adequately protected or not?’”
Another respondent provided a recent example of an engineer trying to send a source-code snippet that included an API key up to ChatGPT. While that company was able to detect and stop the upload, behavior like this is dangerous in general: not all companies have security systems that can detect, block, or remediate it.
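To make that kind of control concrete, here is a minimal sketch of a pre-send filter a security team might place in front of an LLM integration. The patterns and function names are illustrative assumptions, not any particular vendor’s DLP product:

```python
import re

# Illustrative credential patterns; real DLP tools ship far broader rule sets.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                                 # AWS access-key-ID shape
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                              # common API-key prefix shape
    re.compile(r"(?i)\b(api[_-]?key|secret|token)\b\s*[:=]\s*\S+"),  # generic key assignments
]

def contains_secret(text: str) -> bool:
    """Return True if outbound text appears to contain a credential."""
    return any(p.search(text) for p in SECRET_PATTERNS)

def guarded_send(prompt: str) -> None:
    """Block prompts that look like they carry secrets; otherwise forward them."""
    if contains_secret(prompt):
        # In practice: block, log, and alert the security team.
        raise ValueError("Blocked: prompt appears to contain a secret")
    # ... forward the prompt to the approved LLM endpoint here ...
```

Pattern matching of this sort is noisy, which is why commercial DLP products layer on entropy checks and contextual rules, but even a simple filter would have caught an API key embedded in a code snippet.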
Another information security executive cited Samsung’s temporary ban of ChatGPT on its systems. The electronics company learned the hard way that content entered into a ChatGPT prompt leaves the company’s control and can be retained – and potentially exposed – on external servers. In this case, the input contained the source code of software responsible for the company’s semiconductor equipment. A knee-jerk ban followed.
Controlling the Gen AI outbreak
What can CISOs and corporate security experts do to put limits on this AI outbreak? One executive said that it’s essential to toughen up basic security measures like “a combination of access control, CASB/proxy/application firewalls/SASE, data protection, and data loss prevention.”
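One coarse but effective control in that toolbox is an egress policy at the proxy layer. The sketch below is a simplified illustration of the decision logic only, assuming hypothetical domain lists rather than any specific CASB or SASE product’s configuration:

```python
from urllib.parse import urlparse

# Hypothetical allow/deny lists; a real CASB or SASE platform manages these centrally.
APPROVED_AI_DOMAINS = {"api.openai.com"}                # sanctioned, under contract
BLOCKED_AI_DOMAINS = {"chat.openai.com", "claude.ai"}   # unsanctioned consumer front ends

def egress_decision(url: str) -> str:
    """Classify an outbound request to an AI service: allow, block, or inspect."""
    host = urlparse(url).hostname or ""
    if host in BLOCKED_AI_DOMAINS:
        return "block"    # deny and log for security review
    if host in APPROVED_AI_DOMAINS:
        return "allow"    # permitted under the vendor agreement
    return "inspect"      # unknown endpoint: route through DLP inspection
```

The design point is that allow/block lists alone can’t keep pace with new AI services, so unknown endpoints default to inspection rather than silent approval.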
Another CIO pointed to reading and implementing some of the concrete steps offered in the National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework. Senior leaders must recognize that risk is inherent in enterprise generative AI usage, and proper risk-mitigation procedures are likely to evolve.
Still another respondent mentioned that in their company, generative AI usage policies have been incorporated into employee training modules, and the policy is straightforward to access and read. The person added, “In every vendor/client relationship we secure with GenAI providers, we ensure that the terms of the service have explicit language about the data and content we use as input not being folded into the training foundation of the third-party service.”
Corporate governance and regulatory requirements
And what about corporate governance and regulatory requirements? What can organizations do in this area? One of the CISOs surveyed suggested that executive boards should determine what governance practices to establish to balance the benefits of generative AI against the potential risks and legal/regulatory requirements.
In a nutshell, the same executive provided the following checklist:
- Organizational awareness of existing and in-development legal and regulatory requirements
- Clearly identified roles and governance processes required to map, measure, and manage AI risks. This includes documentation of the risks and potential impacts of AI technology.
- Identification/discovery mechanisms to inventory AI systems (see the sketch after this list)
- Processes to address the AI lifecycle for development, implementation, and end-of-life software
- Development of policies and procedures for addressing third-party and supply-chain risks that leverage AI
- Processes to address failures or incidents in AI systems
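As a minimal sketch of the inventory point above – with field names that are illustrative assumptions, not a standard schema – each discovered AI system could be recorded with enough metadata to support the mapping, measuring, and managing of its risks:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in an enterprise AI-system inventory (illustrative schema)."""
    name: str                                   # e.g., "marketing-copy-assistant"
    vendor: str                                 # third-party provider, or "internal"
    owner: str                                  # accountable business/IT owner
    data_classes: list[str] = field(default_factory=list)  # data classifications it may touch
    sanctioned: bool = False                    # formally approved for enterprise use?
    training_opt_out: bool = False              # contract bars vendor training on our data?
    next_review: date | None = None             # next governance review date

# Example entry; the vendor and owner names are hypothetical.
inventory = [
    AISystemRecord(
        name="marketing-copy-assistant",
        vendor="ExampleAI Inc.",
        owner="marketing-it",
        data_classes=["public", "internal"],
        sanctioned=True,
        training_opt_out=True,
    )
]
```

Fields like training_opt_out tie the inventory directly back to the contractual language discussed earlier, so governance reviews can confirm that vendor terms match what the record claims.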
In summary, enterprise employees are working with AI tools, with or without the corporate blessing to do so. To help head off what could become a widespread information leak or other seriously damaging incident, CIOs, CISOs, and their corporations need to control generative AI use in the organization.
They will need to determine whether that control takes the form of greater adherence to existing corporate security measures, augmentation of those measures, and/or new forms of internal controls on employees’ use of third-party vendors.
In my next article, I’ll share some processes to manage and remediate the use of generative AI in enterprise organizations. Stay tuned!