An earlier article described emerging AI regulations for the U.S. and Europe. Building on that perspective, this article describes examples of AI regulations in the rest of the world and provides a summary of global AI regulation trends.
First, while the EU has defined a leading and strict AI regulatory framework, China has implemented a similarly strict framework to govern AI within its borders. Second, some countries, such as the United Arab Emirates (UAE), have implemented sector-specific AI requirements while allowing other sectors to follow voluntary guidelines.
Lastly, many countries, such as Singapore and Japan, have proposed voluntary frameworks to encourage AI innovation, and the G7 collection of nations has proposed a voluntary AI code of conduct. India has so far avoided any commitment to AI regulation, relying instead on existing legislation that protects personal digital privacy, an example that many other countries are following.
The complexity of varying global AI regulations is challenging for CIOs. Indeed, as IDC reported in a study earlier this year, the U.S. has a complex web of differing state laws regarding AI (Navigating the Fragmented U.S. AI and GenAI Regulatory Landscape, IDC, July 2024). The complexity increases for CIOs who operate in a global environment, where national regulations span the spectrum from detailed and prescriptive, as in the EU or China, to voluntary or non-existent, as in India.
China follows the EU, with additional focus on national security
In March 2024, the People’s Republic of China (PRC) published a draft Artificial Intelligence Law, and a translated version became available in early May. The Law provides a set of frameworks as comprehensive as the EU AI Act, with the intention of balancing the need for innovative AI development with the need to safeguard society. Importantly, where the EU AI Act identifies different risk levels, the PRC AI Law identifies eight specific scenarios and industries where a higher level of risk management is required for “critical AI.” These scenarios include judicial, news, medical, biometric recognition, autonomous driving, social credit, and social bot applications, as well as AI used by state organizations. This allows for more rapid and targeted legislation when needed. Lastly, China’s AI regulations are focused on ensuring that AI systems do not pose any perceived threat to national security.
The UAE provides a model similar to China’s, although it is less prescriptive regarding national security. The UAE has proactively embraced AI, both to foster innovation and to provide secure and ethical AI capabilities. In particular, the UAE AI Office created an AI license requirement for applications in the Dubai International Financial Centre. Further, the Dubai Health Authority requires an AI license for ethical AI solutions in healthcare.
The G7 AI code of conduct: Voluntary compliance
In October 2023, the Group of Seven (G7) countries agreed to a code of conduct for organizations that develop and deploy AI systems. The code of conduct is directed by 11 guiding principles, many of which focus on risks, vulnerabilities, security, and protections. The principles also address the need for accountability, authentication, and international standards. The G7 leaders directed their national ministers to implement the code of conduct, stressing the need to maximize the benefits of AI while mitigating its risks. Notably absent from the code, however, is any form of enforcement or penalty; compliance is entirely voluntary.
Similar voluntary guidance can be seen in Singapore and Japan. Singapore emphasizes AI innovation, particularly in the financial sector, with no specific set of AI regulations; the government continues to rely on digital privacy protection as a mechanism for controlling inappropriate AI. Japan has taken a slightly different approach, with two directions: voluntary guidelines for all industries and “sector-specific restrictions on large platforms to safeguard the use of AI” (Navigating the AI Regulatory Landscape: Differing Destinations and Journey Times Exemplify Regulatory Complexity, IDC, March 2024).
The rest of the world: Light-touch or non-existent AI regulations
India provides a model of how much of the rest of the world approaches AI, one that aligns with the G7 model of voluntary compliance. As described by the Carnegie Endowment for International Peace, India has a “light touch approach to AI regulation,” with a model that strikes a balance between innovation and safety without delaying the country’s steady progress toward a growing and profitable digital economy. While India has multiple laws and regulations regarding electronic data and the protection of digital privacy (e.g., the Information Technology Act of 2000), it has no single point of responsibility for AI, nor a focused AI act such as the EU’s. Recognizing the global economic importance of AI, India’s approach is to encourage AI development while monitoring AI usage to prevent societal abuse.
For many countries in the world, AI is recognized as economically important yet is dominated by the U.S. and countries of the EU. Innovation is seen as key to societal and economic improvement, with AI leading the list of innovation levers. Regulations are sometimes seen as a hindrance to innovation, and many jurisdictions will wait and watch for global consensus to emerge on AI regulations.
Unfortunately for CIOs, the global AI regulatory map will continue to be incomplete and uneven, with developments occurring asynchronously in various countries. As IDC points out in a review of 11 jurisdictions, each country begins with a different set of goals, a different destination, and a different timeline for AI regulation (Navigating the AI Regulatory Landscape: Differing Destinations and Journey Times Exemplify Regulatory Complexity, IDC, March 2024). However, for the larger jurisdictions, such as the UK, the EU, and China, as well as some U.S. states, CIOs must pay attention to established and emerging AI regulations and the probability of government enforcement. This is an unexpected new role for CIOs, but it offers an opportunity for leadership in a fast-developing and complex global environment.
Dr. Ron Babin, an adjunct research advisor for IDC, is a senior management consultant and professor who specializes in outsourcing and IT management (ITM) issues. Dr. Babin is a professor in IT management at the Ted Rogers School of Management at Ryerson University in Toronto, as well as its director of Corporate and Executive Education.
Babin has extensive experience as a senior management consultant at two global consulting firms. As a partner at Accenture, and prior to that at KPMG, he was responsible for IT management and strategy practices in Toronto. While at KPMG, he was a member of the Nolan Norton consulting group. His consulting activities focus on helping client executives improve the business value delivered by IT within their organizations. In his more than 20 years as a management consultant, Babin has worked with dozens of clients in most industry sectors, mainly in North America and Europe. Currently, Babin’s research is focused on outsourcing, with particular attention to the vendor/client relationship and social responsibility. He has written several papers and a book on these topics.