Proof that even the most rigid of organizations are willing to explore generative AI arrived this week when the US Department of the Air Force (DAF) launched an experimental initiative aimed at Guardians, Airmen, civilian employees, and contractors.
Known as NIPRGPT, the tool will be part of the Dark Saber software ecosystem developed at the Air Force Research Laboratory (AFRL) Information Directorate in Rome, New York.
Dark Saber is an “ecosystem of Airmen and Guardians from across the DAF that brings together innovators and developers and equips them to create next-generation software and operational capabilities deployable to the Force at a rapid pace.” (Guardians are members of the US Space Force, a service created under the DAF umbrella in 2019. They don’t train to fight in zero gravity, though: They are mostly computer experts charged with tasks such as preventing cyberattacks, maintaining computer networks, and managing satellite communications.)
NIPRGPT is an AI chatbot that will operate on the Non-classified Internet Protocol Router Network, enabling users to have human-like conversations to complete various tasks, DAF said. The chatbot works with the Department of Defense’s Common Access Card (CAC) authentication system and can answer questions and assist with tasks such as correspondence, preparing background papers, and programming.
“Technology is learned by doing,” said Chandra Donelson, DAF’s acting chief data and artificial intelligence officer. “As our warfighters, who are closest to the problems, are learning the technology, we are leveraging their insights to inform future policy, acquisition and investment decisions.”
Not instant perfection
The NIPRGPT experiment is an opportunity to conduct real-world testing, measuring generative AI’s computational efficiency, resource utilization, and security compliance to understand its practical applications.
For now, AFRL is experimenting with self-hosted open-source LLMs in a controlled environment. It is not training the models, nor are responses refined based on any user inputs.
Users will be able to provide feedback that shapes policies and informs future procurement conversations with vendors of such tools.
Alexis Bonnell, AFRL CIO, described the experiment as a “critical bridge to ensure we get the best tools we have into our team’s hands while larger commercial tools are navigating our intense security parameters and other processes. Changing how we interact with unstructured knowledge is not instant perfection; we each must learn to use the tools, query, and get the best results. NIPRGPT will allow Airmen and Guardians to explore and build skills and familiarity as more powerful tools become available.”
At a launch event for NIPRGPT this week, Donelson said that DAF is not committing to any single AI model or group of technology vendors because it is too early in the process for that.
“As tech leaders, we have a responsibility to ensure that models are fit for the purpose. So, we aim to partner with the best minds from government, industry, and academia to identify which models perform better on our specific task domains, as well as use cases, to meet the needs of tomorrow’s warfighters,” she said.
IDC’s research manager for government trust and resiliency strategy, Aaron Walker, said his initial thoughts on the launch are that “learning by doing is well and good until sensitive information is exposed, or bad actors poison the model. It is good they are experimenting on the non-classified networks.”
DAF and other defense agencies, he said, are “ambitious and innovative, but could potentially benefit from letting civilian agencies work out some of the kinks,” though defense agencies often have more financial resources and personnel with the skills needed to use and develop emerging technologies.
The tool, he said, could eventually be helpful with generating threat intelligence reports, reverse engineering malware, suggesting policy configurations, aggregating security data, and writing code, among other less technical use cases.
As for security concerns, Walker said that even though NIPRGPT is on the “non-classified network, humans are imperfect and could mislabel or misuse sensitive information, potentially risking its exposure. That would be a situation where Air Force staff or a contractor on the network inputs sensitive data to generate a response.”