The US government has been scrambling to keep up with AI technologies that are advancing at an unprecedented pace. To provide a framework for AI governance and use in the US, the bipartisan House of Representatives Task Force on AI has released a 273-page AI report offering recommendations in areas including skilling and hiring, energy consumption, data privacy, and financial services.
The report, said the task force in its introduction, “is intended to serve as a blueprint for future actions that Congress can take to address advances in AI technologies.”
But at this point, it is still just a guideline, and some experts question how useful it really is for enterprise leaders.
“I don’t know that this is helpful,” said Forrester Senior Analyst Alla Valente. “This is really a summary of what we’ve done up to now, with some ‘key’ recommendations.”
Readying the workforce for AI
One of the challenges addressed in the report is the need for an AI-skilled workforce. There is a widespread lack of basic literacy in science, technology, engineering, and math (STEM) concepts, the task force pointed out. Also, required AI knowledge and skills are not clearly defined.
The task force advised organizations to reskill existing employees to work alongside AI, embrace a workforce that is more technically skilled in science and engineering, and look beyond traditional bachelor’s and advanced degrees to certificate programs and industry training programs. It pointed out that, while “AI” wasn’t generally a keyword in job descriptions before 2022, many skills required for AI have already been present in jobs such as IT, data science, and computer engineering.
Additional task force recommendations included facilitating public-private partnerships, evaluating existing workforce development programs, and standardizing job roles, categories, tasks, and skill sets.
“The skills gap exists because, unlike previous game-changing technology revolutions, AI — especially generative AI tools such as ChatGPT — is the exact opposite of fit-for-purpose tech,” said Arthur O’Connor, academic director of data science at the City University of New York (CUNY) School of Professional Studies. “It’s a technical marvel looking for a purpose.”
Organizations are still struggling to understand where genAI will be most useful, he noted, now that “massive natural language and visual computational processing power, once the exclusive realm of specialists and their supercomputers,” is “in the hands of rank and file employees.”
It’s ultimately an organizational problem, he pointed out, as data science expertise is not “sufficiently diffused” in most organizations, but is instead concentrated in IT departments. But AI requires new types of roles and functions in which data expertise is “an interconnected discipline spanning almost every aspect of a business.”
Not breaking the grid
Another critical concern is the sheer amount of energy AI models gobble up. The report pointed out that electricity consumption in the US has grown steadily, by about 0.5% a year, over the last two decades. That annual growth rate, however, could nearly double, to 0.9%, through the end of this decade.
A lot of this growing demand comes from data centers that power AI models.
The task force suggested creating new standards, metrics, and definitions for energy use and efficiency, and said organizations should track and project data center power usage. It also said the organizations that benefit the most from new infrastructure should bear the bulk of the growing costs. Further, the electric grid must be modernized and secured, and AI itself can be used to enhance energy infrastructure, production, and efficiency.
O’Connor detailed several more steps organizations can take that are “aligned with or supported by government recommendations.” In addition to simply using more energy-efficient hardware, some of these include:
- Optimizing models to reduce energy consumption through model pruning, which removes redundant neurons from neural networks to shrink model size and computational load.
- Reducing the numerical precision of AI calculations through a technique known as quantization, which can result in up to 50% computational cost savings.
- Training a smaller model to replicate the behaviors of a larger model, a technique known as knowledge distillation, reducing the need for extensive computational resources.
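To make the quantization step above concrete, here is a minimal sketch of post-training symmetric int8 quantization using NumPy. The function names (`quantize_int8`, `dequantize`) are illustrative, not from any particular library; production systems would typically use a framework’s built-in quantization tooling instead.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 plus a scale factor (symmetric quantization).

    Illustrative helper, not a library API: the largest absolute weight is
    mapped to 127, and every weight is rounded to the nearest int8 step.
    """
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Approximate reconstruction of the original float32 weights."""
    return q.astype(np.float32) * scale

# A mock weight matrix standing in for one layer of a neural network
w = np.random.default_rng(0).normal(size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(w)

# int8 storage is one quarter the size of float32 storage
ratio = w.nbytes // q.nbytes  # 4

# Rounding error is bounded by half a quantization step (scale / 2)
err = np.abs(dequantize(q, scale) - w).max()
```

Storing and computing on int8 values instead of float32 cuts memory traffic by 4x, which is where much of the energy and cost saving in quantized inference comes from, at the price of a small, bounded approximation error per weight.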
Clear and practical or 273 pages too long?
The extensive report also touches on other areas, including financial services, healthcare, data privacy, and R&D, and calls for federal preemption when it comes to AI — that is, federal law taking precedence over state law.
Some in the industry, including Center for Data Innovation Director Daniel Castro, lauded it for its “clarity and practicality,” saying it offers a “clear and actionable blueprint” for AI governance in specific sectors.
However, others call it vague and superficial, saying it offers no new real insights or recommendations.
Valente pointed to one section that suggests strengthening National Institute of Standards and Technology (NIST) best practices for genAI, but noted, “it’s a framework, it’s not a regulation, it’s not a requirement, it’s kind of a recommendation.” Enterprises can implement it (or not) as they will — in whole or in part.
As a whole, recommendations and findings in the report tend to contradict one another, she noted. “The only thing we’re left with is ‘genAI is here, we’re continuing to invest in it, we want to keep using it, we want to reap all the benefits, but we don’t want to stifle innovation, and we’re not really sure what we’re going to do when bad things happen.’”
Also, it’s not yet clear what level of oversight the incoming Trump administration will seek over AI. “We don’t actually know that there will be more government oversight of AI,” Valente pointed out.
Recognize that AI is coming, and prepare now
Whatever their takeaway from the report, experts say it clearly underscores the fact that organizations must prepare for AI now. “You can’t pretend genAI is going away, or it’ll skip over you,” said Valente. “It’s here. If you want to reap the rewards, you have to prepare for uncertainty.”
From a security and risk standpoint, it’s important to understand that AI is already in your organization by way of employees, vendors, suppliers, and software platforms.
“You need to understand where it’s coming in and at what pace, and make sure that you have a strategy that will keep genAI use within your organization’s risk appetite or risk tolerance,” Valente advised. “Every organization has to plan for generative AI, which means that if your company doesn’t have a genAI strategy, that is your genAI strategy.”
O’Connor noted that the real danger is not genAI, but humans’ “natural stupidity” when using technology.
The best preparation, he said, is to get into compliance with standards for transparency and ethical/responsible AI being put forth by organizations such as OpenAI, IBM, Anthropic, Microsoft, and industry groups.
“Guarding against hallucinations, preventing bias in training data, and ensuring transparency in the way an output is generated won’t eliminate all the regulatory risks,” he noted, “but it will go a long way in reducing them.”