Generative AI is an innovation that is transforming everything. How much, and in what ways, is the subject of much discussion and controversy. But as with many new technologies, the anxieties it creates may have more to do with fear of the future than with what that future will actually hold.
ChatGPT and the emergence of generative AI
The unease is understandable. Indeed, ten years ago, some experts warned that artificial intelligence would eliminate nearly 50% of today's jobs by 2033. Yet we are now nearly halfway there, and we don't even have fully self-driving, autonomous cars.
The reason for this conversation is the seemingly overnight emergence of generative AI and its most well-known application, OpenAI's ChatGPT. The reality closely mirrors the early days of many paradigm-changing technologies, and that is where we are today with generative AI. In just six months, it has already reached a technology inflection point as enterprises rush to put generative AI apps into widespread use.
The implications for enterprise security
For most enterprises, the present moment is an educational process. They see generative AI as a way to accelerate their efficiency. But in the rush to adopt it, they are putting themselves at risk.
Some of these risks are accidental, such as copying or pasting sensitive corporate data, files, or images into public generative AI apps. ChatGPT's pool of knowledge is essentially the whole of the Internet, and information loaded into it becomes data that any other subscriber can access. That data leakage is the principal generative AI security concern for enterprises today.
Another major concern is copyright infringement and intellectual property (IP). Who owns the output when an enterprise's own IP is combined with another party's in a publicly accessible third-party service? Generative AI does not vet for bias, attribution, or copyright protection.
A third major concern is its use as a tool by cyber attackers. It’s important to note here that generative AI today is a content development engine. ChatGPT can tell you the ways attackers have broken into a particular operating system; it can’t independently develop a new way that’s never been done before. At least, not yet. And probably not for a good five years or more.
So, how do we keep the train rolling with generative AI while securing the enterprise?
The importance of policy
Protecting the enterprise from potential generative AI cybersecurity risks doesn't start with technology. It starts with the business policies of the organization — with education and setting a foundation to understand and recognize the risks that generative AI entails. The importance of policy extends to the regulatory sphere. Indeed, just recently, several leaders in the AI field called for a pause on aspects of AI development while an official regulatory environment is developed to put guardrails in place.
The final element is for enterprises to put controls in place that will allow them to enforce and automate policies to help monitor generative AI use and minimize the risks to the enterprise.
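As a minimal illustration of what such automated controls could look like, the sketch below screens an outbound prompt against a handful of pattern-based rules before it leaves the enterprise for a public generative AI service. The rule names, patterns, and function names are hypothetical examples, not a real DLP ruleset or any Symantec API.

```python
import re

# Illustrative policy rules only — a real data loss prevention product
# uses far richer detection (classifiers, fingerprinting, OCR, etc.).
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_-]{16,}\b"),
    "internal_marker": re.compile(r"(?i)\bconfidential\b|\binternal use only\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of the policy rules the prompt violates."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

def gate_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Allow the prompt only if no rule matches; otherwise report violations."""
    violations = check_prompt(prompt)
    return (len(violations) == 0, violations)
```

In practice, a gate like this would sit in a proxy or browser extension between users and public generative AI apps, blocking or redacting flagged prompts and logging violations for the security team.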
Symantec and generative AI
Symantec has a long history with AI. Key to our focus is protecting user and enterprise IP. Organizations should feel especially confident when it comes to the threat posed by generative AI systems if they already have a data protection solution like Symantec Data Loss Prevention Cloud. A solution like this helps enterprises adopt generative AI tools by ensuring that images and data sent to generative tools remain compliant.
We are still in the early days of generative AI. That's why we have a large engineering team dedicated to keeping Symantec at the forefront of this technology. Indeed, we are applying many of the same machine learning and even generative AI techniques to identify malicious behavior that attackers use to create it.
When it comes to generative AI, it’s not a question of whether it’s a win for the enterprise or not. Enterprises that don’t embrace it will be at a severe disadvantage. Enterprises need to invest in security to safely take full advantage of the technology that is transforming everything.
To learn more about generative AI and cybersecurity, download the whitepaper.
About Alex Au Yeung
Broadcom
Alex Au Yeung is the Chief Product Officer of the Symantec Enterprise Division at Broadcom. A 25+ year software veteran, Alex is responsible for product strategy, product management and marketing for all of Symantec.