AI has whetted the appetites of organizations across nearly every sector. As AI pilots move toward production, discussions about the need for ethical AI are growing, along with terms like “fairness,” “privacy,” “transparency,” “accountability,” and the big one: “bias.”
But ensuring those and other considerations are addressed is a weighty task that CIOs will grapple with as AI becomes integral to how people work and conduct business.
For many CIOs, implementations may be nascent, but mitigating biases in AI models and balancing innovation with ethical considerations are already among their biggest challenges. What they are finding is that the line between advancing technologically and ensuring AI doesn’t result in detrimental outcomes is thin.
Christoph Wollersheim, a member of the services and artificial intelligence practices group at global consulting firm Egon Zehnder, pinpoints five critical areas most organizations need to address when implementing AI: accuracy, bias, security, transparency, and societal responsibility.
Unfortunately, achieving 100% accuracy with AI is “impossible,” says Wollersheim, who recently co-authored The Board Member’s Guide to Overseeing AI. “The real ethical concern lies in how companies safeguard against misinformation. What’s the plan if customers are presented with false data, or if critical decisions are based on inaccurate AI responses? Companies need both a practical plan and a transparent communications strategy in their response.”
Bias can be inadvertently perpetuated when AI is trained on historical data, he notes.
“Both executive management and boards must ensure fairness in the use of AI and guard against discrimination.” Research is under way to correct biases, using synthetic data to address attributes such as gender, race, and ethnicity, he says, “but there will always be a need for a human-centric lens to be applied.”
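The fairness audit Wollersheim describes can be made concrete. Below is a minimal, hypothetical sketch (the data, group labels, and 0.1 tolerance are illustrative assumptions, not anything from the article) of the kind of demographic-parity check a human reviewer might run on a model’s approval decisions:

```python
# Hypothetical bias audit: demographic parity difference between groups.
# The decision data and the 0.1 tolerance below are illustrative only.

def approval_rate(decisions):
    """Fraction of positive (approve) decisions."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in approval rates across groups; 0 means parity."""
    rates = [approval_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% approved
}
gap = demographic_parity_gap(decisions)
print(f"Parity gap: {gap:.3f}")
if gap > 0.1:  # illustrative tolerance; a real one would be policy-driven
    print("Flag for human review")
```

A check like this does not fix bias on its own; it simply surfaces a disparity so that the “human-centric lens” Wollersheim mentions can be applied.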
Securing sensitive information is likewise paramount for ethical AI deployment, because AI’s heavy dependency on data increases the risk of breaches and unauthorized access, Wollersheim says. “Companies must fortify against attacks that could mislead AI models and result in ill-informed decisions,” he says.
As for transparency, it’s not just about algorithms, but building trust, he says. “Stakeholders need to comprehend how AI makes decisions and handles data. A transparent AI framework is the linchpin for ethical use, accountability, and maintaining trust.”
Organizations must also consider what values guide them, and what obligations they have in terms of retraining, upskilling, and job protection. “Ethical AI is about shaping a responsible future for our workforce,” Wollersheim says.
To address these issues, establishing an AI review board and implementing an ethical AI framework are critical, Wollersheim says. “An ethical AI framework provides clear guidance on monitoring and approval for every project, internal or external. An AI review board, comprised of technical and business experts, ensures ethical considerations are at the forefront of decision-making.”
Here is a look at how CIOs are addressing ethical AI in their organizations.
Making ethical AI a team sport
Plexus Worldwide is one organization using AI to identify fraudulent account creation and transactions, says Alan McIntosh, CIO and CTO of the $500 million global health and wellness company. As McIntosh sees it, bias is fundamentally a data problem. “We attempt to eliminate bias and incorrect results by leveraging and validating against multiple, complete data sources,” he says.
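McIntosh’s approach of validating against multiple, complete data sources could look something like the following sketch. The signal names and quorum rule are hypothetical, invented here to illustrate the idea of never trusting a single model or source:

```python
# Hypothetical sketch of validating a fraud verdict against multiple
# independent sources: flag an account only when a majority of signals
# agree, rather than trusting any single model or data source.

def flag_fraud(signals, quorum=0.5):
    """signals: dict of source name -> bool verdict.
    Returns True only if more than `quorum` of the sources agree."""
    votes = sum(signals.values())
    return votes / len(signals) > quorum

account_signals = {
    "velocity_model": True,     # rapid account creation detected
    "device_reputation": True,  # device previously linked to fraud
    "address_check": False,     # shipping address looks normal
}
print(flag_fraud(account_signals))  # True: 2 of 3 sources agree
```

The design choice here is that disagreement among sources becomes a signal in itself: a split vote is exactly the case where a human should take a second look.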
Plexus IT is also in the analysis phase of using AI within the company’s e-commerce platform “to gain better insights for predicting and optimizing the customer experience and enhancing personalization,” McIntosh says. “We also see automation opportunities to eliminate many legacy manual and repetitive tasks.”
To ensure ethical AI practices are adhered to, Plexus Worldwide has formed a team of IT, legal, and HR representatives responsible for the development and evolution of AI governance and policy, he says. This team establishes the company’s risk tolerance, acceptable use cases and restrictions, and applicable disclosures.
Even with a team focused on AI, identifying risks and understanding how the organization intends to use AI both internally and publicly is challenging, McIntosh says. Team members must also understand and address the inherent possibility of AI bias, erroneous claims, and incorrect results, he says. “Depending on the use cases, the reputation of your company and brand may be at stake, so it’s imperative that you plan for effective governance.”
With that in mind, McIntosh says it’s critical that CIOs “don’t rush to the finish line.” Organizations must create a thorough plan and focus on developing a governance framework and AI policy before implementing and exposing the technology. Identifying appropriate stakeholders, such as legal, HR, compliance and privacy, and IT, is where Plexus started its ethical AI process, McIntosh says.
“We then created a draft policy to outline the roles and responsibilities, scope, context, acceptable use guidelines, risk tolerance and management, and governance,” he says. “We continue to iterate and evolve our policy, but it is still in development. We intend to implement it in Q1 2024.”
McIntosh recommends seeking out third-party resources and subject matter expertise. “It will greatly assist with expediting the development and execution of your plan and framework,” McIntosh explains. “And, based on your current program management practices, provide the same level of rigor — or more — for your AI adoption initiatives.”
Treading slowly so AI doesn’t ‘run amok’
The Laborers’ International Union of North America (LIUNA), which represents more than 500,000 construction workers, public employees, and mail handlers, has dipped its toes into using AI, mainly for document accuracy and clarification, and for writing contracts, says CIO Matt Richard.
As LIUNA expands AI use cases in 2024, “this gets to the question about how we use AI ethically,” he says. The organization has started piloting Google Duet to automate the process of writing and negotiating contractor agreements.
Right now, union officials are not using AI to identify members’ wants and needs, nor to comb through hiring data that might be sensitive and return biases on people based on how the models are trained, Richard says.
“Those are the areas where I get nervous: when a model tells me about a person. And I don’t feel we’re ready to dive into that space yet, because frankly, I don’t trust publicly trained models to give me insights into the person I want to hire,” he says.
Still, Richard expects a “natural evolution” in which, down the road, LIUNA may want to use AI to derive insights into its members to help the union deliver better benefits to them. For now, “it’s still a gray area on how we want to do that,” he says.
The union is also trying to grow its membership and part of that means using AI to identify prospective members efficiently, “without identifying the same homogenous people,” Richard says. “Our organization is pushing very hard and does a good job of empowering minorities and women, and we want to grow those groups.”
That’s where Richard worries about how AI models are used, because avoiding “the rabbit hole of finding the same stereotypical demographic” and introducing biases means humans must be part of the process. “You don’t just let the models do all the work,” he says. “You understand where you are today, and then we stop and say, ‘OK, humans need to intervene here and look at what the models are telling us.’”
“You can’t let AI run amok … with no intervention. Then you’re perpetuating the problem,” he says, adding that organizations shouldn’t take the “easy way out” with AI and only delve into what the tools can do. “My fear is people are going to buy and implement an AI tool and let it go and trust it. … You have to be careful these tools aren’t telling us what we want to hear,” he says.
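The intervention point Richard describes can be expressed as a simple routing rule. This is a hypothetical sketch (the function, confidence threshold, and labels are assumptions for illustration): model output about a person is never auto-accepted, and low-confidence output goes to a human queue.

```python
# Hypothetical human-in-the-loop gate: model suggestions about people are
# never auto-accepted, and low-confidence outputs are queued for review.

def route(prediction, confidence, about_a_person, threshold=0.9):
    """Decide whether a model output may be used without human review."""
    if about_a_person or confidence < threshold:
        return ("human_review", prediction)
    return ("auto_accept", prediction)

print(route("strong candidate", 0.95, about_a_person=True))
# ('human_review', 'strong candidate')
print(route("contract clause OK", 0.97, about_a_person=False))
# ('auto_accept', 'contract clause OK')
```

A gate like this encodes Richard’s rule that “humans need to intervene here” as a default, so trusting the model is an explicit decision rather than the path of least resistance.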
To that end, Richard believes AI can be used as a kick-starter, but IT leaders must apply their teams’ intuition “to make sure we’re not falling into the trap of just trusting flashy software tools that aren’t giving us the data we need,” he says.
Taking AI ethics personally
Like LIUNA, Czech-based global consumer finance provider Home Credit is early in its AI journey, using GitHub Copilot for coding and documentation processes, says Group CIO Jan Cenkr.
“It’s offered a huge advantage in terms of time-saving, which in turn has a beneficial cost element too,” says Cenkr, who is also CEO of Home Credit’s subsidiary EmbedIT. Ethical AI has been top of mind for Cenkr from the start.
“When we started rolling out our AI tool pilots, we also had deep discussions internally about creating ethical governance structures to go with the use of this technology. That means we have genuine checks in place to ensure that we do not violate our codes of conduct,” he says.
Those codes are regularly refreshed and tested to ensure they are as robust as possible, Cenkr adds.
Data privacy is the most challenging consideration, he adds. “Any information and data that we feed into our AI platforms absolutely has to comply with GDPR regulations.” Because Home Credit operates in multiple jurisdictions, IT must also ensure compliance in all those markets, some of which have different laws, adding to the complexity.
Organizations should develop their governance structures “in a way that reflects your own personal approach to ethics,” Cenkr says. “I believe that if you put the same care into developing these ethical structures that you do into the ethics you apply in your personal, everyday life, these structures will be all the safer.”
Further, Cenkr says IT should be prepared to update its governance policies regularly. “AI technology is advancing daily and it’s a real challenge to keep pace with its evolution, however exciting that might be.”
Put in guardrails
AI tools such as chatbots have been in use at UST for several years, but generative AI is a whole new ballgame, one that fundamentally changes business models and has made ethical AI part of the discussion, says Krishna Prasad, chief strategy officer and CIO of the digital transformation company, though he admits the topic is “a little more theoretical today.”
Ethical AI “doesn’t always come up” in implementation considerations, Prasad says, “but we do talk about … the fact that we need to have responsible AI and some ability to get transparency and trace back how a recommendation was made.”
Discussions among UST leaders focus on what the company doesn’t want to do with AI “and where do we want to draw boundaries as we understand them today; how do we remain true to our mission without producing harm,” Prasad says.
Echoing the others, Prasad says this means humans must be part of the equation as AI is more deeply embedded inside the organization.
One question that has come up at UST is whether it is a compromise of confidentiality if leaders are having a conversation about employee performance as a bot listens in. “Things [like that] have started bubbling up,” Prasad says, “but at this point, we’re comfortable moving forward using [Microsoft] Copilot as a way to summarize conversations.”
Another consideration is how to protect intellectual property around a tool the company builds. “Based on protections that have been provided by software vendors today, we still feel data is contained within our own environment, and there’s been no evidence of data being lost externally,” he says. For that reason, Prasad says he and other leaders don’t have any qualms about continuing to use certain AI tools, especially because of the productivity gains they see.
Even as he believes humans need to be involved, Prasad also worries about their input. “At the end of the day, human beings inherently have biases because of the nature of the environments we’re exposed to and our experiences and how it formulates our thinking,” he explains.
He also worries about whether bad actors will gain access to certain AI tools as they use clients’ data to develop new models for them.
These are areas leaders will have to worry about as the software becomes more prevalent, Prasad says. In the meantime, CIOs must lead the way and demonstrate how AI can be used for good and how it will impact their business models, and bring leadership together to discuss the best path forward, he says.
“CIOs have to play a role in driving that conversation because they can bust myths and also execute,” he says, adding that they also have to be prepared for those conversations to at times become very difficult.
For example, if a tool offers a certain capability, “do we want it to be used whenever possible, or should we hold back because it’s the right thing to do,” Prasad says. “It’s the most difficult conversation,” but CIOs must present that a tool “could be more than you bargained for. To me, that part is still a little fuzzy, so how do I put constraints around the model … before making the choice to offer new products and services that use AI.”
Read More from This Article: CIOs grapple with the ethics of implementing AI