Artificial intelligence (AI) accounts for an increasingly large share of information technology investments and figures ever more prominently in societal discussions. Many governments have begun defining laws and regulations to govern how AI affects citizens, with a focus on safety and privacy; IDC predicts that by 2028, 60% of governments worldwide will adopt a risk management approach in framing their AI and generative AI policies (IDC FutureScape: Worldwide National Government 2024 Predictions). This article focuses on nascent regulations in Europe and the U.S. and their implications for CIOs.
AI regulations in Europe
In December 2023 the European Union (EU) reached agreement on a draft AI Act, which the EU Parliament approved on March 13, 2024. As one member of Parliament noted, the EU now has the first binding law on artificial intelligence, one that will protect the human rights of workers and citizens. The regulation will fully come into effect 24 months after its publication. The Act balances the need to protect democratic rights, the rule of law, and environmental sustainability with the need to encourage innovation, particularly in Europe. AI applications that threaten citizens’ rights, such as predictive policing and untargeted scraping of facial images from the internet, are banned. Law enforcement’s use of real-time biometric identification systems in public spaces is likewise prohibited, except in narrowly defined circumstances.
The EU AI Act also establishes an EU-wide database of high-risk AI systems to monitor activity in the EU market. National governments will be required to enforce the regulations and monitor AI market developments.
Like the General Data Protection Regulation (GDPR) adopted by the European Parliament in 2016, which became fully effective in May 2018, the AI Act is the product of extensive discussions among member states that began five years ago. As the world’s first comprehensive AI regulatory framework, the EU AI Act may set AI standards for other jurisdictions, much as GDPR has done for information privacy.
The United Kingdom, while not an EU member, announced its intention to create an AI regulatory framework on February 6, 2024, based on responses to a March 2023 AI regulation white paper and on the U.K.-sponsored AI Safety Summit held at Bletchley Park in November 2023. However, the U.K. Parliament was dissolved on May 30 ahead of the July 4, 2024, general election, so any new AI legislation must wait for a new U.K. government to take office later in 2024.
AI regulations in the U.S.
The United States has begun discussions on AI regulation, but no AI-specific laws are yet in place. In September 2023, the U.S. Senate took steps to map out potential AI regulations through public hearings and private consultations. Several laws have been proposed to regulate topics such as AI in political advertising or to protect individuals’ rights to their voice and visual likeness from replication by generative AI. The National Institute of Standards and Technology (NIST) has developed an AI Risk Management Framework. While AI-specific laws are being formulated, several existing laws provide some degree of AI regulation, such as the Federal Aviation Administration Reauthorization Act and the National AI Initiative Act at the federal level, and several state laws, such as California’s CCPA privacy regulations and Illinois’ Biometric Information Privacy Act.
As the U.S. government debates potential regulations, the AI industry advocates self-regulation. In July 2023, seven of the leading U.S. AI companies, including Microsoft, Meta, Alphabet, and Amazon, agreed to a short voluntary code of conduct emphasizing safety, security, and trust. It is worth noting that these four companies, with a combined market capitalization of over $8 trillion, rank among the six most valuable U.S. companies. Expect the AI industry to voice strong resistance to regulation and to continue advocating self-regulation.
As in the U.K., passage of any AI legislation will depend on the outcome of the U.S. general and presidential elections in November 2024.
AI regulatory implications for CIOs
Matters get interesting for the CIO of a global enterprise that uses AI in external interactions with clients or suppliers, such as a chatbot that assists with online buying. As with GDPR, an enterprise with operations in the EU must comply with EU regulations, which means the AI Act will affect global enterprises by 2026 at the latest. Internal operations supported by AI, such as hiring or promoting employees, will also be subject to the EU regulations. Meanwhile, U.S. regulations will be slower to come into effect and, depending on the government’s direction, may lean toward industry self-regulation. A CIO will therefore need to understand and navigate a different set of AI regulations in each place the enterprise operates. As regulations emerge beyond the U.S. and the EU, in China, India, and Singapore, for example, navigating compliance across multiple jurisdictions will only become more challenging for CIOs.
CIOs must discuss regulatory compliance with AI providers and ensure, through independent verification, that AI products comply with relevant laws and regulations.
One final complexity will be enforcement. AI is evolving rapidly, and gathering evidence and filing a regulatory complaint takes time, as do the legal proceedings that follow. The EU may be a first mover on enforcement, with fines for non-compliance of up to 7% of global revenue; as GDPR has shown, the EU is not shy about enforcement. On the other side, AI advocates include some of the largest companies in the world, capable of mounting strong and protracted legal defenses in any jurisdiction. The coming years will begin to show to what extent the new AI regulations will be enforced.
International Data Corporation (IDC) is the premier global provider of market intelligence, advisory services, and events for the technology markets. IDC is a wholly owned subsidiary of International Data Group (IDG Inc.), the world’s leading tech media, data, and marketing services company. IDC, recently voted Analyst Firm of the Year for the third consecutive time, offers Technology Leader Solutions that provide expert guidance backed by industry-leading research and advisory services, robust leadership and development programs, and best-in-class benchmarking and sourcing intelligence data from the industry’s most experienced advisors. Contact us today to learn more.
Learn more about AI regulations globally in these reports from IDC: AI Regulations and Policies Around the World 2023 and Navigating the AI Regulatory Framework Landscape – Differing Destinations and Journey Times Exemplify Regulatory Complexity.
Dr. Ron Babin, an adjunct research advisor for IDC, is a senior management consultant and professor who specializes in outsourcing and IT management (ITM) issues. Dr. Babin is a professor in IT management at the Ted Rogers School of Management at Ryerson University in Toronto, as well as its director of Corporate and Executive Education.
Babin has extensive experience as a senior management consultant at two global consulting firms. As a partner at Accenture, and prior to that at KPMG, he was responsible for IT management and strategy practices in Toronto. While at KPMG, he was a member of the Nolan Norton consulting group. His consulting activities focus on helping client executives improve the business value delivered by IT within their organizations. In his more than 20 years as a management consultant, Babin has worked with dozens of clients in most industry sectors, mainly in North America and Europe. Currently, Babin’s research is focused on outsourcing, with particular attention to the vendor/client relationship and social responsibility. He has written several papers and a book on these topics.