The US Department of Commerce’s Bureau of Industry and Security (BIS) plans to introduce mandatory reporting requirements for developers of advanced AI models and cloud computing providers.
The proposed rules would require companies to report on development activities, cybersecurity measures, and results from red-teaming tests, which assess risks such as AI systems aiding cyberattacks or enabling non-experts to create chemical, biological, radiological, or nuclear weapons.
“This proposed rule would help us keep pace with new developments in AI technology to bolster our national defense and safeguard our national security,” Gina M. Raimondo, secretary of commerce, said in a statement.
Impact on enterprises
The proposed regulations follow a pilot survey by the BIS earlier this year and come amid global efforts to regulate AI.
After the EU’s landmark AI Act, countries such as Australia have introduced their own proposals to oversee AI development and usage. For enterprises, these regulations could raise costs and slow operations.
“Enterprises will need to invest in additional resources to meet the new compliance requirements, such as expanding compliance workforces, implementing new reporting systems, and possibly undergoing regular audits,” said Charlie Dai, VP and principal analyst at Forrester.
From an operational standpoint, companies may need to modify their processes to gather and report the required data, potentially leading to changes in AI governance, data management practices, cybersecurity measures, and internal reporting protocols, Dai added.
The extent of BIS’ actions based on the reporting remains uncertain, but the agency has previously played a key role in preventing software vulnerabilities from entering the US and restricting the export of critical semiconductor hardware, according to Suseel Menon, practice director at Everest Group.
“Determining the impact of such reporting will take time and further clarity on the extent of reporting required,” Menon said. “But given most large enterprises are still in the early stages of implementing AI into their operations and products, the effects in the near to mid-term are minimal.”
Concerns over stifling innovation
Beyond concerns of costs, there is also a potential impact on innovation, according to Swapnil Shende, associate research manager at IDC.
“The proposed AI reporting requirements seek to bolster safety but risk stifling innovation,” Shende said. “Striking a balance is crucial to nurture both compliance and creativity in the evolving AI landscape.”
Significantly, this follows California’s recent passage of a contentious AI safety bill, SB 1047, which could set the toughest AI regulations in the US.
The tech industry had pushed back against SB 1047, with over 74% of companies expressing opposition. Major firms like Google and Meta have raised concerns that the bill could create a restrictive regulatory environment and stifle AI innovation.
Innovation in most sectors tends to vary inversely with regulatory complexity, Menon added. High regulatory barriers tend to stifle innovation, which is also why the US has historically favored looser regulations than the EU.
“Complex regulations could also draw innovative projects and talent out of certain regions with an emergence of ‘AI havens,’” Menon said. “Much like tax havens, these could draw important economic activity into countries more willing to experiment.”