Generative AI has quickly changed what the world thought was possible with artificial intelligence, and its mainstream adoption may seem shocking to many who don’t work in tech. It inspires awe and unease — and often both at the same time.
So, what are its implications for the enterprise and cybersecurity?
A technology inflection point
Generative AI operates on neural networks powered by deep learning systems, loosely modeled on the way the human brain learns. But unlike human learning, Generative AI combines crowd-sourced data at massive scale with the right information, so it can process answers orders of magnitude faster. What might take an individual 30 years to work through could take a model an eyeblink. How much benefit is derived depends on both the quality and the sheer volume of data that can be fed into it.
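To make the underlying mechanics concrete at the smallest possible scale, here is a minimal sketch of the forward pass through a single feed-forward layer, the building block that deep learning models stack many times over. The layer sizes and values are arbitrary illustrations, not any particular model.

```python
import numpy as np

# A minimal sketch of the forward pass through one feed-forward layer,
# the basic building block that deep learning models stack many times.
# The sizes and values are arbitrary illustrations, not a real model.

rng = np.random.default_rng(0)

def dense_layer(x, weights, bias):
    """One layer: a weighted sum of inputs followed by a ReLU nonlinearity."""
    return np.maximum(0.0, x @ weights + bias)

# A toy numeric representation of an input: 8 features in, 4 features out.
x = rng.normal(size=(1, 8))
w = rng.normal(size=(8, 4))  # Training would adjust these weights over many examples.
b = np.zeros(4)

print(dense_layer(x, w, b))  # The layer's output representation.
```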
It is a scientific and engineering game-changer for the enterprise: a technology that can greatly improve the efficiency of organizations, allowing them to be significantly more productive with the same number of human resources. But the speed with which Generative AI applications such as ChatGPT, Bard, and GitHub Copilot emerged, seemingly overnight, has understandably taken enterprise IT leaders by surprise. In just six months, the popularization of Generative AI tools has already reached a technology inflection point.
The cybersecurity challenges
Generative AI, including ChatGPT, is primarily delivered through a software-as-a-service (SaaS) model by third parties. One challenge this poses is that interacting with Generative AI requires handing data to that third party. The large language models (LLMs) behind these AI tools typically retain that data in order to respond intelligently to subsequent prompts.
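In practice, that interaction is simply an outbound web request carrying the user's text to the provider's servers. The sketch below assumes a hypothetical endpoint, model name, and API key (none belong to a real vendor) to show how a prompt, including anything sensitive it contains, leaves the enterprise network.

```python
import requests

# Hypothetical endpoint and credentials, for illustration only:
# the point is that the prompt text leaves your network entirely.
API_URL = "https://api.example-llm-provider.com/v1/chat"
API_KEY = "sk-placeholder"

prompt = "Summarize our Q3 acquisition plan: ..."  # Sensitive text, now in a third party's hands.

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"model": "example-model", "messages": [{"role": "user", "content": prompt}]},
    timeout=30,
)
print(response.json())
```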
The use of AI presents significant issues around sensitive data loss and compliance. Providing sensitive information, such as personally identifiable information (PII), protected health information (PHI), or intellectual property (IP), to Generative AI programs needs to be viewed through the same lens as other data processor and data controller relationships. As such, proper controls must be in place.
Information fed into AI tools like ChatGPT can become part of the model's pool of knowledge, and any subscriber has access to that common dataset. Data that is uploaded or asked about may therefore be replayed, within certain application guardrails, to other third parties who ask similar questions. This mirrors a familiar SaaS problem: once user-provided data is used as a training set, it can influence the responses to future queries. As it stands today, most Generative AI tools do not have concrete data security policies for user-provided data.
The insider threat also becomes more significant with AI. Insiders with intimate knowledge of their enterprise can use ChatGPT to craft highly realistic emails, duplicating a colleague's style down to the typos. Moreover, attackers can produce near-exact duplicates of legitimate websites.
What enterprises need for security
Fortunately, there are Generative AI Protection solutions, such as Symantec DLP Cloud, Adaptive Protection in Symantec Endpoint Security Complete (SESC), and real-time link protection in email security, that address these emerging challenges and block attacks in different, targeted ways.
Symantec DLP Cloud extends Generative AI Protection for enterprises with the capabilities they need to discover, and subsequently monitor and control, interactions with generative AI tools within their organizations. Among other benefits, DLP can use AI to speed incident prioritization, helping senior analysts triage the most significant incidents and recognize those that pose no critical threat to the enterprise.
The benefits include:
- Providing enterprises with the ability to understand the risks they are exposed to, on a per-tool basis, with generative AI.
- Allowing the safe and secure use of popular AI tools by supplying the safeguards needed to block sensitive data from being uploaded or posted, whether intentionally or inadvertently (a simplified sketch of such a pre-submission check follows this list).
- Identifying, classifying, and documenting compliance for PHI, PII, and other critical data.
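To make the idea of blocking sensitive data before it leaves the enterprise concrete, here is a minimal, generic sketch of a pre-submission screen. It is not Symantec DLP's implementation; real DLP products use far richer detection techniques, and the two patterns below are deliberately simplistic, hypothetical examples.

```python
import re

# Generic illustration of a DLP-style check before a prompt is submitted.
# Real products use far richer detection (classifiers, fingerprinting,
# exact data matching); these two regexes are deliberately simplistic.
PII_PATTERNS = {
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any PII patterns found in the prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

def submit_to_ai(prompt: str) -> None:
    findings = screen_prompt(prompt)
    if findings:
        print(f"Blocked: prompt appears to contain {', '.join(findings)}.")
        return
    print("Prompt passed screening; forwarding to the AI tool...")

submit_to_ai("Customer SSN is 123-45-6789, draft an apology email.")  # Blocked
submit_to_ai("Draft an apology email for a delayed shipment.")        # Allowed
```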
The bottom line: Symantec Generative AI Protection allows enterprises to “say yes” to generative AI’s productivity-enhancing innovations without compromising data security and compliance.
Learn more about the implications of Generative AI for the enterprise here.
About Alex Au Yeung
Broadcom Software
Alex Au Yeung is the Chief Product Officer of the Symantec Enterprise Division at Broadcom. A 25+ year software veteran, Alex is responsible for product strategy, product management and marketing for all of Symantec.