When a new wave of technology innovation seems to be breaking over the horizon, the fear of missing out (FOMO) can drive hasty decisions on new IT investments. Recent, rapid advances in artificial intelligence (AI) may represent one of the biggest FOMO moments ever, so it’s critical that decision-makers get out in front of the wave and figure out how to implement Trustworthy AI.
The launch of Microsoft-backed OpenAI’s ChatGPT — based on generative AI technology that provides a consumer-ready, conversational interface to large language models — almost instantly spurred what has been likened to a corporate arms race. It thrust into the spotlight the potential of generative AI to revolutionize customer interactions, generate images from text input, and even automate software coding.
CEOs have taken notice, and a Gartner, Inc., survey of more than 2,500 executives found that 70% “said that their organization is in investigation and exploration mode with generative AI, while 19% are in pilot or production mode.”
The business implications are huge. “The rapid rise of artificial intelligence has sparked excitement in industries from fast food to theme parks, with executives rushing to show how they will be among beneficiaries of the new technology,” observed the Financial Times, citing data that almost 40% of S&P 500 companies mentioned AI or related terms in earnings calls in a recent financial quarter.
If you’re late to the party, you may be wondering what all the fuss is about. “Generative AI is poised to unleash the next wave of productivity,” gushed McKinsey & Co., which has released its own generative AI tool for its associates.
Avoiding pitfalls in AI adoption
As business executives evaluate these and other AI-driven tools and technologies, it’s up to IT leaders to help them avoid pitfalls that could alienate customers and employees, or cause lasting damage to corporate reputations if something goes amiss.
“Generative AI is just one strand of AI and business leaders need to determine what they mean by AI and to determine what kinds of AI they might need for their business’s automated decision-making, which can vary greatly across industries,” says Reggie Townsend, Vice President of the Data Ethics Practice at global AI and analytics provider SAS, and board member of the National Artificial Intelligence Advisory Committee (NAIAC).
“Whatever form of AI they end up pursuing, they really need to be on the alert for what could either damage their reputation or just lead them down some wrong paths,” he adds. “Where your data comes from, who it comes from, how it’s governed is all very important. You must appreciate how your models are built and optimized and have the ability to validate that over time. It’s OK to automate along the way, but we have to have strategic insertion points where humans are involved.”
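Townsend’s point about “strategic insertion points where humans are involved” can be made concrete with a small illustration. The sketch below is a hypothetical example, not SAS’s implementation; the 0.80 threshold, field names, and outcomes are assumptions chosen purely for illustration. It routes automated decisions with low model confidence to a human review queue rather than acting on them automatically.

```python
from dataclasses import dataclass

# Hypothetical threshold: decisions below this confidence go to a person.
REVIEW_THRESHOLD = 0.80

@dataclass
class Decision:
    applicant_id: str
    score: float            # model confidence for the proposed outcome
    outcome: str             # e.g. "approve" or "decline"
    needs_human_review: bool  # True when confidence is below the threshold

def route_decision(applicant_id: str, score: float, outcome: str) -> Decision:
    """Automate high-confidence decisions; flag the rest for human review."""
    return Decision(
        applicant_id=applicant_id,
        score=score,
        outcome=outcome,
        needs_human_review=score < REVIEW_THRESHOLD,
    )

if __name__ == "__main__":
    samples = [("A-100", 0.95, "approve"), ("A-101", 0.62, "decline")]
    for applicant, score, outcome in samples:
        decision = route_decision(applicant, score, outcome)
        queue = "human review queue" if decision.needs_human_review else "automated"
        print(f"{decision.applicant_id}: {decision.outcome} ({queue})")
```

The design choice is simply that automation handles the clear-cut cases while people remain accountable for the ambiguous ones, which is one way to keep humans involved at defined points along an otherwise automated workflow.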
Adherence to ethical considerations
That’s the essence of Trustworthy AI: Businesses should ensure adherence to ethical considerations aimed at avoiding unintentional harms that could result from a lack of awareness, expertise, or planning. That requires incorporating appropriate levels of accountability, inclusivity, transparency, completeness, and robustness.
“Fundamentally, this is about making software that doesn’t harm people,” Townsend explains. “When it comes to implementing AI, we see regulatory requirements as the floor — or minimum requirements — and see our principles as the ceiling. Businesses have to make sure that in their desire to be first to market, they don’t sacrifice on quality.”
For more insight into employing Trustworthy AI, view this survey.