Meta will allow US government agencies and contractors in national security roles to use its Llama AI. The move relaxes Meta’s acceptable use policy restricting what others can do with the large language models it develops, and brings Llama ever so slightly closer to the generally accepted definition of open-source AI.
Llama will be available to US government agencies and private-sector partners, including Lockheed Martin, Microsoft, and Amazon, to support applications like logistics planning, cybersecurity, and threat assessment, Meta's president of global affairs, Nick Clegg, wrote in a blog post Monday.
“We believe it is in both America and the wider democratic world’s interest for American open-source models to excel and succeed over models from China and elsewhere,” Clegg wrote. “As open-source models become more capable and more widely adopted, a global open-source standard for AI models is likely to emerge, as it has with technologies like Linux and Android.”
Significantly, the move comes just days after Reuters reported that Chinese research institutions linked to the People's Liberation Army had used Llama to develop a chatbot for intelligence gathering and decision support.
Still not open source
Meta has long described its Llama models as "open source," but the 630-word acceptable use policy it imposes on anyone downloading them puts the company at odds with the broader open-source movement.
While open-source software has long had a clear definition, it was only last week that the Open Source Initiative (OSI) finally published its definition of open-source AI: a model that can be used, studied, modified, and shared by anyone without permission.
The cornerstone of Meta's partnership with the US government is its approach to data sharing, which remains unclear, says Sharath Srinivasamurthy, associate vice president at IDC.
Clarity on that point could be crucial, as it may determine how effectively the model can be adapted to government-specific needs while maintaining data security.
“The model needs to be trained on government-specific data, so they will need to build on top of the model Meta has developed,” Srinivasamurthy said. “As long as Meta keeps the training data confidential, CIOs need not be concerned about data privacy and security. However, if Meta decides to make the training data available to governments, it could raise concerns, leading them to reconsider their enterprise strategy for adopting Llama.”
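In practice, "building on top of" a foundation model usually means fine-tuning it on an organization's own corpus. The sketch below is illustrative only, not anything Meta or its partners have disclosed: it shows one common pattern, parameter-efficient fine-tuning (LoRA) of a Llama checkpoint using the Hugging Face transformers and peft libraries, in which the sensitive training data shapes only a small set of adapter weights that can remain on-premises. The checkpoint name and hyperparameters here are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE = "meta-llama/Llama-3.1-8B"  # hypothetical choice of Llama checkpoint

# The tokenizer would be used to prepare the agency's own corpus for training.
tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)

# LoRA trains small low-rank adapter matrices instead of the full weights,
# so domain-specific data only touches a few million trainable parameters.
lora_config = LoraConfig(
    r=16,                                 # adapter rank (illustrative)
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # attention projections in Llama
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# ...a training loop over the organization's own data would follow here.
```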
A strategic move
Meta’s move comes at a strategic moment, as AI companies contend with mounting regulatory challenges and privacy concerns. It also unfolds amid the ongoing US-China trade war, marked by strict controls on AI technology.
"China is investing heavily in AI, and the US government has had to move fast on leveraging this technology to stay ahead in the game," Srinivasamurthy said. "Considering this, and the fact that Meta has faced some backlash from the US government on various fronts, Meta's announcement is a strategic move. And the fact that it wants to make Llama open to government agencies might also set a trend toward large-scale adoption of open-source AI compared to closed models."
The public sector presents a vast opportunity, spanning federal, state, and local governments, as well as enterprises that support public sector functions in areas like cybersecurity, defense, utilities, and transportation.
Many of these entities operate on a large scale, managing significant data flows and complex information systems, which amplifies the demand for robust AI solutions.
“GenAI-based models can solve a multitude of these large-scale yet disparate system-level problems,” said Neil Shah, VP of research and partner at Counterpoint Research. “The CIO’s role in these enterprises is among the toughest, as it involves issues of privacy, security, and criticality. Using the open-source Llama as a foundational model to build intelligent, automated systems with highly sensitive, locally trained data introduces a new level of complexity, compliance, governance, and engagement.”
However, the benefits are expected to remain largely confined to the US, with limited effect on CIO decisions in other regions, particularly Europe, where stringent regulations mean that credibility alone may not be enough to secure trust, according to Priya Bhalla, practice director at Everest Group.
“Trust in AI solutions across Europe and other regulated regions will hinge more on how well companies address concerns around data sovereignty, privacy, and compliance with local regulations, and simply having the endorsement of the US government may not be enough to win trust,” Bhalla said.