Generative artificial intelligence (AI) is hot property when it comes to investment, but there’s a pronounced hesitancy around adoption. AI faces a fundamental trust challenge due to uncertainty over safety, reliability, transparency, bias, and ethics. In a recent global survey, 86% of participants said their organizations had dedicated budget to generative AI, but three-quarters admitted to significant concerns about data privacy and security.
What makes AI responsible and trustworthy?
At the top of the list of trust requirements is that AI must do no harm. Compliance is necessary but not sufficient. Rather, the guiding principle for making AI trustworthy is to align it with societal values. Yet determining what AI should do is challenging. What’s considered right, accurate, and ethical can vary depending on context, use case, industry, country, and culture. AI teams have to figure out what values their organizations want to reflect and what “fair” and “accurate” mean in that context.
Governance implications for key gen AI use cases
Some key use cases for generative AI include increasing productivity, improving business functions, reducing risk, and boosting customer engagement. A good governance framework makes generative AI not only more responsible but also more effective.
“Aligning AI with organizational goals and deploying it responsibly and efficiently ensure long-term productivity benefits,” noted Bruno Domingues, CTO for Intel’s financial services industry practice. “Establishing guardrails based on organizational principles ensures efficient resource allocation, fosters accountability and transparency, and builds trust among stakeholders.”
A solid governance structure addresses ethical issues related to AI across the organization. As part of its model, SAS has an AI Oversight Committee that might reject a generative AI marketing message as inappropriate, for example. “The committee essentially acts as an additional audit layer, ensuring that AI applications and decisions align with SAS’s ethical standards,” said Josefin Rosén, Trustworthy AI specialist in SAS’s Data Ethics Practice.
Structure, policies, and oversight
A solid AI governance framework bridges the divide between generative AI’s promise and the realization of its benefits, which include:
- Increased productivity due to more distributed decision-making
- Competitive advantage and market agility from staying ahead of compliance requirements
- Improved trust thanks to better accountability in data use
- Heightened brand value in response to addressing AI’s impact on society and the environment
- Ability to win and keep top talent who value responsible innovation
Partnering for a sustainable future
A strong AI governance framework also supports sustainability goals, which require intelligent data management, model development and deployment, and decision monitoring and management.
SAS and Intel have forged a partnership that integrates high-performance computing hardware with advanced analytics software to drive sustainability, energy efficiency, and cost-effectiveness. “SAS’s tools enable organizations to analyze and optimize energy consumption, carbon footprints, and operational efficiencies, while Intel’s processors and accelerators deliver the performance needed for these analytics with reduced power consumption,” noted Domingues.
The great promise of generative AI to deliver transformative business benefits rests on the willingness of organizations to commit to good governance and ethical AI practices. Those who make that connection will be well positioned as the AI revolution gains steam.
Check out this webinar to learn how to unlock the benefits of generative AI ethically and responsibly.