In my recent column, I delved into the challenges enterprises face in integrating AI into the workplace and outlined strategies for CISOs to monitor or control the use of AI effectively. The focus was on ensuring safe generative AI practices within organizations.
Here are the key recommendations I provided:
- AI training implementation: Introduce AI training aligned with company policies and processes to empower employees with the necessary skills and awareness.
- Public LLMs in the sandbox: Safely test publicly available Large Language Models (LLMs) in a sandbox environment, separate from the production setting, to assess their impact without risking operational disruptions.
- Enterprise AI traffic monitoring: Vigilantly monitor AI activity within the enterprise to identify anomalies or potential security threats and allow for prompt intervention (see the sketch following this list).
- Firewall capability for AI security: Add firewall capabilities that can inspect and control AI-bound traffic, safeguarding against AI-related data leakage and other vulnerabilities.
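To make the monitoring recommendation concrete, here is a minimal sketch, not a production tool, of how a security team might flag outbound requests to well-known generative AI endpoints in a web-proxy log and build an inventory of who is using which services. The domain list, the CSV log format, and the `user`/`dest_host` column names are illustrative assumptions rather than any standard.

```python
# Minimal sketch: summarize proxy-log traffic to known generative AI endpoints.
# Assumes a CSV proxy log with 'user' and 'dest_host' columns (hypothetical format).
import csv
from collections import Counter

AI_DOMAINS = {
    "api.openai.com",
    "chat.openai.com",
    "claude.ai",
    "api.anthropic.com",
    "gemini.google.com",
}

def summarize_ai_traffic(proxy_log_path: str) -> Counter:
    """Count requests per (user, AI domain) pair found in the proxy log."""
    hits = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("dest_host", "").lower()
            if host in AI_DOMAINS:
                hits[(row.get("user", "unknown"), host)] += 1
    return hits

if __name__ == "__main__":
    # Print the ten heaviest users of generative AI services.
    for (user, host), count in summarize_ai_traffic("proxy.csv").most_common(10):
        print(f"{user:20s} {host:25s} {count}")
```

A report like this can feed both the training conversation (who needs guidance) and the firewall conversation (which destinations warrant inspection or blocking).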
Given the nascent stage of generative AI implementation in organizations, I sought further insights from Patricia Titus, one of the top security executives and thought leaders in our industry. Patricia was previously the CISO at Markel Insurance, Freddie Mac, Symantec, and Unisys, and her insights have always been extremely valuable to her peers. Our discussion explored various aspects that CISOs should prioritize in this evolving landscape.
Feel free to share our conversation below on your social channels to spark reactions and discussions on the challenges and opportunities of integrating generative AI into the corporate environment.
How has AI penetrated the typical enterprise?
Depending on the type of AI being used, many organizations are in the exploratory phase, just scratching the surface of the ‘art of the possible’ with AI usage. Some organizations include machine learning in the category, which makes the AI conversation more inclusive. Remember, AI has been around for a long time, and the definition makes a difference.
The benefits of AI for some industries will drive major strategy changes, and the impact will be vast. Documenting these plans and use cases will be critical to minimize the future workload if regulators come knocking (and they will).
How much of this usage is part of ‘approved and budgeted’ corporate policy and programs?
That is a great question, and there’s a lot to unpack in answering it. Approved usage is probably further along than many think if it’s part of a formal program. Understanding what has actually been approved is where it may get a bit murky. If you have a digital transformation office, you have likely used AI in several facets of a program.
For example, within general automation, a Chief Digital Officer would be exploring ways to increase productivity using Robotic Process Automation, and now AI, with the addition of ML. However, those industries that have yet to embark on a digital transformation journey are likely well behind the curve and could be severely impacted financially by the power of AI implemented by their competition.
The discussion around policies is a great one. I’ve heard of companies creating a separate set of policies for every transformative technology. If you need to do that, your control framework is lacking. Your policies and controls should be the same for cloud or AI; however, your enforcement of those controls may differ. But the premise remains: protect sensitive data when adopting these transformative technologies.
Your privacy officer or the CISO should have been involved in these investments from the onset to ensure they remind company employees of their responsibilities to be good stewards of customer, citizen, or corporate sensitive information. A good rule of thumb for implementing a new capability like AI is to set guidelines in collaboration with IT, legal, and the CISO organization.
And how much is ‘bootleg’ usage? You know, those individuals, teams or employees using AI-based products (LLMs) independently, potentially without corporate support or knowledge?
If your company suffers from shadow IT, this will be no different. It is indicative of a larger cultural problem, but one that technology can help solve. Some vendors have developed, or are developing, ways to discover AI usage across the enterprise so you can inventory it and then make smart decisions.
One common scenario: you own a software or platform capability that solves a business problem, and suddenly the vendor says, ‘Hey, we now do AI.’ Using it without performing a risk assessment sounds enticing, since the security office is the office of ‘no’ and will block it. This happens more frequently than we like to admit.
In response to the enthusiasts embracing AI, many CISOs are simply blocking it. We all know that well-intended workers will figure out how to use it anyway, bypassing the corporate firewalls. Embrace AI—it’s here!
What areas or functions are using AI today? Combine that question with “Where do you see that usage transitioning and expanding over time?”
AI is mainly used for convenience in most companies, like writing performance evaluations or researching specific topics. Many businesses are in the exploratory phase, creating unique business cases and testing their theories about potential uses. It’s quite fascinating that many people think companies that don’t get on the AI bandwagon will become extinct. I’m not sure I agree: there are plenty of companies that haven’t embraced digital transformation and data analytics properly, and they may just take a hit to their bottom line. Extinction is a fear tactic. I know that companies are approaching this cautiously for many reasons, especially around the ethical use of AI.
I also know that companies must have high-quality, scrubbed data to feed the AI models, or the outputs will be worthless. This means there could be a lot of work to do here. There is also the concern about how data is used. Your privacy policy may state clearly that data will be used one way; does feeding that data into AI constitute a change that requires rethinking the consent needed to use consumer or personal data (PII)? Then you will have to figure out when the AI models need to be retrained because the data has gotten stale or the model starts to behave in an unethical way. Plus, there is a difference between supervised and unsupervised AI implementation. Most companies will be very cautious about letting AI run models that replace humans if there’s a risk of AI running amok.
If I use my crystal ball, unsupervised AI doing menial human tasks, like sorting emails for service center agents to speed up emergency responses, will likely hit the top of the adoption bell curve. But when it comes to making large financial decisions based on AI without someone checking and rechecking, I think (maybe hope) that’s still a few years out, if ever. Look at what happened when everyone got on board with cryptocurrency, and then what happened with FTX.
How much adoption are you seeing in the security team today, and how much AI is under the hood of the products most organizations have deployed? Also, please address the bootlegs in your comments under SBOM.
Many security companies have integrated machine learning and robotic process automation (RPA) into their tools. When AI hit the mainstream media, all of a sudden ML and RPA became AI. It didn’t help that many governing bodies blended ML and AI, which complicated things a bit for us in security.
How much is there? More than we think, but less than the vendors say. We’re going to solve this with the mandates for SBOMs (software bill of materials), which will move us from fiction to fact. What we can’t lose sight of in all the noise of AI is that if we’re using it, so are the threat actors.
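On the SBOM point, here is a hedged sketch of how a team might scan a CycloneDX-style SBOM in JSON for well-known ML/AI libraries, as one way to separate a vendor’s “we now do AI” claim from what actually ships in the software. The library hint list and the `sbom.json` filename are illustrative assumptions.

```python
# Minimal sketch: look for ML/AI-related components in a CycloneDX-style SBOM (JSON).
# The hint list is illustrative, not exhaustive.
import json

ML_COMPONENT_HINTS = {
    "tensorflow", "torch", "pytorch", "transformers",
    "onnxruntime", "scikit-learn", "xgboost", "langchain",
}

def find_ml_components(sbom_path: str) -> list[str]:
    """Return SBOM components whose names suggest ML/AI functionality."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    findings = []
    for comp in sbom.get("components", []):
        name = comp.get("name", "").lower()
        if any(hint in name for hint in ML_COMPONENT_HINTS):
            findings.append(f"{comp.get('name')} {comp.get('version', '')}".strip())
    return findings

if __name__ == "__main__":
    for item in find_ml_components("sbom.json"):
        print(item)
```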
Using AI in social engineering will blow the top off our methods for authorization and authentication. What has been touted as the silver bullet, ZTNA (Zero Trust Network Access), won’t mean a thing if threat actors keep moving at their current pace.
Regarding the bootlegs, most security teams are skeptical about coloring outside the lines themselves, so AI used without proper approval and vetting shouldn’t be much of a problem within those teams. It is, however, an opportunity to work with startups in design partnerships to move faster on AI capabilities that solve real problems.
Regarding CISOs managing AI use, CISOs need to be part of a cross-functional team of leaders in a company that lays out guidance for employees. A governance framework and an inventory of existing AI use should be developed. You don’t want to stifle innovation, so you must develop a safe environment for innovators to work. CISOs cannot be the only decision-makers in the usage of AI.
I am also not a believer in creating different policies for each new technology adoption. If your policies and control framework follow an industry standard, it doesn’t matter what tech you adopt. Monitoring standards bodies like NIST is a must for CISOs to keep their organizations aligned with a framework.
Lastly, what do you think CISOs are missing?
Many CISOs are missing a mindset for innovation. With workloads already overflowing, adding the complexities of AI seems overwhelming, so the quick reaction is to stifle innovation. I’ve seen that lead many CISOs to block and ban the use of AI. That’s the fastest way to get shown the door in that role. Embrace it, because it’s not going anywhere.
The bottom line
I hope you’ve gained insights and knowledge from what Patricia shared above. As a leading voice in security, Patricia speaks with authority. She is in the trenches of data, cloud, and security and is at the forefront of understanding AI’s impact. She sees the landscape and knows what CISOs deal with on a daily basis.
As you can see, there are obstacles to implementing AI in any organization, but there are also common-sense strategies that can work. The bottom line is to move swiftly but carefully, maintain focus, and implement a well-thought-out plan.