The White House’s new executive order, “Safe, Secure, and Trustworthy Artificial Intelligence,” is poised to usher in a new era of national AI regulation, focusing on safety and responsibility across the sector. But will it?
The executive order represents the U.S. government’s opening salvo in creating a comprehensive regulatory framework for AI, applicable to both the federal government and the private sector. While it addresses a broad spectrum of objectives for the AI ecosystem and builds upon previous directives related to AI, it is not without its challenges, notably a lack of accountability and specific timelines, alongside potentially over-reaching reporting requirements.
Instead of setting up a few guardrails for the AI industry to guide itself, the executive order holds fast to an outdated approach to regulation in which the government will somehow guide AI’s future on its own. As other recent technology waves have taught us, developments will simply come too fast for such an approach and will be driven by the speed of private industry. Here are my thoughts regarding its potential implications and effectiveness.
AI regulations
The executive order calls for creating new safety and security standards for AI, most notably by requiring the largest model developers to share safety test results with the federal government. Importantly, however, the reporting requirements for the many companies and builders fine-tuning those regulated models for their particular use cases remain unclear.
AI must be regulated. It is a very powerful technology, and while it is not inherently good or bad, given its sheer power, guardrails must be put in place. While the executive order takes a focused approach toward applying these standards to the largest model developers, reporting requirements should continue to mirror the progressive structure of other regulated industries, so that the largest underlying infrastructure providers that affect every American carry the regulatory burden. By contrast, U.S. regulators must take a light touch with startups to maintain the country’s leadership position in innovation.
AI security
While it is refreshing to see the specificity in a handful of elements, such as the Department of Commerce’s development of guidance for content authentication and watermarking to label AI-generated content clearly, many security goals remain open to interpretation.
In citing the Defense Production Act, which can be used to nationalize businesses during emergencies, as its precedent for requiring companies to share “test results and other critical information with the US government,” the order’s breadth and vague wording could pave the way for overreach. While the measure might be touted as upholding American values, the potential for abuse is palpable. This section of the executive order also includes open-ended language such as “The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks.” Such sweeping, imprecise wording leaves room for ambiguity and potentially unintended consequences.
Protecting consumer privacy
The AI Executive Order also calls to protect consumer privacy by creating guidelines for agencies to assess privacy techniques used in AI. If the agencies adopt these guidelines, they could be helpful to entrepreneurs who wish to avoid run-ins with the regulatory agencies. Naturally, such guidelines are only effective in encouraging innovation if they are consistently applied.
Similarly, the executive order calls for the Department of Education to develop programs and resources on how educators can responsibly use AI tools. Such a use-case-specific approach to regulation is effective because it allows regulators to apply the existing frameworks that already guide technological innovation in these domains.
Opening up the code
An alarming detail in the executive order is its requirement that AI companies ‘pull back the curtain’ and share their products’ internal testing data with the National Institute of Standards and Technology, the body designated to set the safety standards for “red-team” testing that is meant to ensure safety before any public release.
Regulation of AI research and development makes little sense, given the speed of innovation. Instead, it would be best to apply a lighter touch on R&D and to adapt as innovation unfolds. Thus, additional clarity around reporting of deep R&D is needed.
Recruiting AI talent
Perhaps the most encouraging aspect of the executive order is its call to open up existing visa criteria, interviews, and reviews so that more ‘highly skilled immigrants and non-immigrants’ with critical AI expertise can stay and work in the United States.
This would include loosening the tight H-1B work authorization criteria for non-U.S.-born workers. Easing immigration for highly skilled workers is a critical element in continuing the growth of American AI companies, research, and the professional workforce.
It has been five years since I addressed this issue and how attracting and retaining fresh talent, educators, and data scientists must be a part of our national agenda. I stand by those words even more so today.
Immigrants have provided the fuel behind America’s technology startups. According to a 2022 National Foundation for American Policy analysis, immigrants have founded 55% of U.S. startup companies valued at $1 billion or more. Even more importantly, 64% of those billion-dollar unicorns were founded or co-founded by immigrants or their children.
The same think tank recently estimated that immigrants founded or co-founded 65% of top U.S. AI companies (28 out of 43 companies), with 42% of the founders having come to the United States as international students.
The numbers clearly show that international entrepreneurs are meaningfully contributing to the tech/AI industry, and it is in our national interest to continue to attract and retain fresh talent to take a leadership position in AI.
It is critical to invest in our existing students, along with international workers. The executive order launches a pilot of the National AI Research Resource to strengthen AI research by giving students and AI researchers access to essential AI resources and data, and by providing technical assistance to small businesses.
These students will become the next machine learning experts, the AI algorithm engineers, and tomorrow’s company founders who can foster AI’s full potential and help drive American innovation and leadership. These are encouraging steps to foster development in this fast-growing industry.
The executive order represents a pivotal step in the regulation and advancement of AI in the United States. However, it has its challenges and ambiguities, which warrant further scrutiny and refinement.