The US government has so far taken a hands-off approach to AI regulation, with the US Congress yet to pass any major legislation regulating the technology.
Legislators have introduced dozens of bills, and President Joe Biden did issue an executive order detailing the safe use of AI in October 2023. But to date, the most the US federal government has to show for its efforts is a set of voluntary commitments on AI use from about 15 major companies.
IT leaders and professionals — especially those in AI- and data-related roles — have taken notice.
Nearly six in 10 US-based AI, privacy, and data management leaders surveyed by The Harris Poll don’t trust the current US government’s approach to AI. Moreover, the same percentage believes the US is lagging behind other countries in regulating the technology, according to the survey, which was commissioned by data intelligence company Collibra.
In lieu of federal action, several US states have passed AI legislation, with California Senate Bill 1047 recently drawing opposition from OpenAI.
Three-quarters of US-based IT leaders surveyed by The Harris Poll, however, applaud US states for passing their own AI regulations, and four in five want big tech to pay for the use of personal data to train AI. In addition, 84% would support an update of US copyright laws to protect creators from AI.
Support among AI, privacy, and data management professionals for AI regulation isn’t surprising, says Felix Van de Maele, CEO and co-founder of Collibra. Strong support for changes in copyright law and for compensation for the use of personal data makes sense, given that AI presents new challenges in both areas, he adds.
“Current US copyright laws are not equipped to protect creators against AI-related inaccuracy, misinformation, and discrimination, and data is, and always will be, a precious commodity in the AI world,” he says.
Copying Europe
A central truism about data and AI appears to be uniting AI professionals and content creators in the desire for AI regulation.
“Data is the backbone of AI, and all models need high-quality, trusted data — like copyrighted content — to provide high-quality, trusted responses,” Van de Maele notes. “It seems only fair that content creators receive the fair compensation and protection that they deserve.”
To that end, Van de Maele points to a bill introduced in the US Senate in July that aims to give creators such as artists and musicians control over how AI uses their content.
The high level of support for new copyright rules and compensation for the use of personal data doesn’t surprise Anthony Cammarano, CTO and vice president of engineering at Protegrity, a data privacy and security provider.
Cammarano, whose company uses AI internally and in its products, believes the US Congress should create both AI and privacy regulations that apply nationwide. AI will make it easier to copy someone else’s ideas or creations without giving credit, he says.
Outside the US, the EU AI Act, which bans some uses of AI and adds several transparency requirements, went into effect on Aug. 1. Some AI professionals see it as a model for other legislation, including in the US, in the name of global consistency. The EU AI Act also has extraterritorial reach: any AI used by citizens of EU countries is subject to the law.
“The EU AI Act puts technology providers on notice to develop adequate safeguards for their models and AI systems to ensure benefits far outweigh potential risks,” says Richard Sonnenblick, chief data scientist at Planview, a cloud software provider focused on connected work.
The EU law’s definitions of high-risk and low-risk uses of AI make sense, as do its requirements for humans to remain in the loop for some AI uses, he adds.
Cammarano also likes the risk-based approach of the EU law, but he calls for any US AI laws to be more “prescriptive” about protecting personal data. There’s a fine line between passing generalized legislation that will stand up to long-term technological advancement and enacting specific protections that work now but may not age well.
He fears that AI regulation may end up the same way as data privacy and security regulations in the US, EU, and other jurisdictions. “Even in the face of all of these different privacy regulations, data privacy has significant problems,” he says.
Cammarano also recognizes that passing major new legislation through the US Congress may be difficult.
“Individually, we all want some kind of protection and compensation,” he says. “What becomes difficult is, the path to achieve that is not well laid out.”
The question of AI bans
One problem with the EU AI Act, Sonnenblick says, is that it bans some types of AI, including ones that engage in cognitive behavioral manipulation of people or vulnerable groups.
New legislation, whether in national or state legislatures, should focus on defining illegal activities that may be aided by AI, and not on banning certain types of AI, Sonnenblick adds. Those nations that outlaw some forms of AI risk an innovation gap with other countries, he says.
“There are going to be a lot of types of fraud that are much easier to create, or may be impossible, without present and future generative AI models,” he says. “There’s a catalyst element, or an accelerant that AI can provide, and it offers criminals the ability to speed up what would otherwise be low-bandwidth activities.”
Like many survey respondents, Sonnenblick supports copyright reforms to protect creators from AIs stealing or repurposing their work without credit. The issue may play out more in US courts than in Congress, however, with several current lawsuits in progress.
The copyright fights are driven, he says, by uncertainty over whether AI will live up to its potential.
“If it does, it’s safe to say that the way we create in society is going to fundamentally change, and the tools and the sources of creation are going to evolve as well,” he adds. “It’s super important that we give credit to the people who have created and whose shoulders that large language models or other generative networks are standing on.”