As I reflect on the biggest technology innovations of my career (the Internet, smartphones, social media), a new breakthrough deserves a spot on that list. Generative AI has seemingly taken the world by storm, impacting everything from software development to marketing to conversations with my kids at the dinner table.
At the recent Six Five Summit, I had the pleasure of talking with Pat Moorhead about the impact of Generative AI on enterprise cybersecurity. As with many disruptive innovations, Generative AI holds great promise to deliver fundamentally better outcomes for organizations, while at the same time posing an entirely new set of cybersecurity risks and challenges.
Key Risks from Generative AI
There are three key risks posed by Generative AI in enterprises today:
Sensitive Data Loss: Enterprise users can input sensitive or otherwise confidential company information into Generative AI systems such as ChatGPT and, intentionally or unintentionally, expose that information and put their company's reputation at risk.
Copyright Issues: Enterprise employees use Generative AI to create content such as source code, images, and documents. However, one cannot know the origin of the content provided by ChatGPT, and that content may not be copyright-free, posing risk to the organization.
Abuse by Attackers: Concerns have also been raised that attackers will leverage Generative AI tools such as ChatGPT to develop novel attacks. While Generative AI can make attackers more efficient at certain tasks, it cannot, as of today, create entirely new attacks. Generative AI systems are information content development tools, not robots: you can ask such a tool to “Tell me all the common ways to infect a machine,” but you cannot ask it to “Infect these machines at this company.”
Protecting the Enterprise
So, what can security professionals do to properly safeguard the use of Generative AI tools by their employees?
First, every organization must determine its own policies for the use of Generative AI within its environment, i.e., how best to enable the business while applying appropriate security controls. Given that we are still in the early stages of Generative AI, organizations should regularly review and evolve those policies as needed.
Symantec Enterprise Cloud enables our customers to enforce their specific Generative AI policies. Some organizations have decided to ban the use of these tools for the time being as they work through the issues, and they leverage our Secure Web Gateway to enforce such controls. Others allow the use of Generative AI, with caution, and use Symantec’s DLP Cloud for real-time, granular inspection of submitted data and remediation so that no confidential information is exposed. Our DLP Cloud has out-of-the-box templates that allow blocking of data across key regulatory categories such as HIPAA, PCI, and PII. Organizations can also create new DLP policies for Generative AI or leverage their existing policies. Please see our Symantec Enterprise Blog and our Generative AI Protection Demo for more details.
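The real-time inspection described above can be illustrated with a minimal sketch. This is a hypothetical example, not Symantec's actual DLP engine; the pattern names and rules are assumptions for illustration, and a real product ships far richer out-of-the-box templates for categories like HIPAA, PCI, and PII:

```python
import re

# Hypothetical sensitive-data patterns for illustration only; real DLP
# templates cover many more categories with far more robust detection.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # US Social Security number
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # 13-16 digit card number
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{20,}\b"),
}

def inspect_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data categories found in the prompt."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

def allow_submission(prompt: str) -> bool:
    """Allow the prompt to reach a Generative AI tool only if no category matches."""
    return not inspect_prompt(prompt)
```

In a real deployment this kind of check would run inline (for example, at a secure web gateway) so that a flagged prompt can be blocked or redacted before it ever leaves the organization.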
Organizations should also consider providing explicit, documented requirements on the obligation of every employee to validate output from Generative AI tools for accuracy, copyright compliance, and adherence to overall company policies.
We do expect attackers to eventually use Generative AI to create and deliver new threats far more efficiently. Organizations must therefore be extremely vigilant about ensuring that their overall cybersecurity posture, including information, threat, network, and email tools, can handle this increased attacker sophistication. To date, Generative AI has been unable to create entirely novel attack techniques that have not previously been created by humans, so our Symantec products are well tuned to catch these attacks, and we also use Generative AI as part of building our defenses for customers.
AI vs. AI
Given that Generative AI tools are freely available to both the attackers and the defenders (cybersecurity companies), there are understandable concerns about how such an “arms race” may evolve.
Over time, Generative AI tools will surely improve, and there may come a time when such tools can generate and execute entirely new attacks on specifically targeted organizations. At the same time, security companies will be able to leverage such tools to super-charge their defenses. At Symantec, we are investigating the use of Generative AI across every product line to improve our protection and make the day-to-day jobs of security professionals easier. Over time, we could leverage Generative AI in our products to optimize customer-specific security policies, to quickly generate remediation instructions, to summarize technical security information for SOC analysts, and to perform many other critical activities.
We believe that whoever has the most computing power will ultimately have the advantage here. The massive computing power used by OpenAI to develop ChatGPT has been a key factor in the early success of this tool. We feel that security companies will invest appropriately in compute power and research to keep the defenders ahead in this race.
Where do we go from here?
As we’ve seen with other disruptive technologies, it is impossible to predict how the use of Generative AI will develop over time. Social media started as a tool to help people stay connected with friends and family via their desktop and laptop computers; nobody imagined all the ways in which its use would evolve.
Similarly, Generative AI is transforming our personal and work lives. Just as with the groundbreaking technologies that preceded it (the Internet, smartphones, and social media), Generative AI will usher in a new set of cybersecurity and privacy concerns. Enabling organizations to benefit from the full power of Generative AI, while protecting them from the associated risks, will surely drive a new wave of cybersecurity innovation. At Symantec, we are fully investing to be at the cutting edge of this space.
To learn more, read our Symantec Enterprise Blog and our Generative AI Protection Demo.
About Rob Greer
Broadcom Software
Rob Greer is Vice President and General Manager of the Symantec Enterprise Division (SED) at Broadcom. In this role, he is responsible for the go-to-market, product management, product development, and cloud service delivery functions.