The use of synthetic data to train AI models is about to skyrocket, as organizations look to fill in gaps in their internal data, build specialized capabilities, and protect customer privacy, experts predict.
The synthetic data trend will extend beyond the giant large language model (LLM) vendors to widespread adoption, including among enterprise CIOs, these experts contend. Gartner, for example, projects that by 2028, 80% of data used by AIs will be synthetic, up from 20% in 2024.
The concept of using synthetic data to train AI models has been around for years, and many companies in highly regulated industries have already adopted the technique, says Alexandra Ebert, chief AI and data democratization officer at Mostly AI, a synthetic data vendor.
“One of the biggest pain points for organizations when they want to go towards AI development is that the most valuable data they own, most often the customer data, is locked away due to the [EU] GDPR or other privacy laws,” she says. “Thanks to synthetic data, they can anonymize this data in a much more efficient and higher quality way than all the legacy anonymization technologies like masking and obfuscation.”
In addition to the GDPR, the EU AI Act points to synthetic data as a way to protect privacy and sensitive information, as does the UK AI Opportunities Action Plan, released in January. Also in January, South Korea’s government announced an $88 million investment to drive the use of synthetic data in the biotechnology industry.
In addition to privacy challenges, some AI experts also suggest that large AI companies are running out of real-world information to train their AI models. A growing number of copyright lawsuits against AI vendors, including a recent court victory for copyright holder Thomson Reuters, may also drive AI vendors to embrace synthetic data.
Building better datasets
One of the strongest cases for synthetic data arises when an organization’s internal data is incomplete or in bad shape. Synthetic data takes many forms (an AI-generated picture of a unicorn riding a train on Mars counts as synthetic data), but building better data from internal sources will soon be an essential capability for many organizations, says Jonathan Frankle, chief AI scientist at AI platform vendor Databricks.
Using internal organic information to build new datasets yields a form of synthetic data Frankle calls “bionic” data.
“That kind of bionic data is my favorite tool in the world of synthetic data, with the ability to leverage the information you have and transform it into the form that you need,” he says. “It would be very fortunate, very fortuitous, if the problem you were trying to solve happened to match an exact data set that you already had.”
This blending process can create domain- or context-specific data that can be a huge benefit to users, Frankle adds. “It can be very powerful, because it can help you get exactly the right data you want, exactly the right behaviors, properties, and shape of data you want,” he adds.
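The article doesn’t describe Databricks’ actual pipeline, but a minimal, hypothetical Python sketch conveys the blending idea: raw internal records (invented support tickets here) are reshaped into the prompt/response pairs a model actually needs. All field names and templates below are illustrative assumptions, not anyone’s real schema.

```python
# Hypothetical sketch: reshape raw internal records into instruction-style
# training pairs ("bionic" data in Frankle's sense). Fields are invented.
import json
import random

tickets = [
    {"subject": "Password reset loop", "body": "User stuck on reset page.",
     "resolution": "Cleared stale session cookie and reissued reset link."},
    {"subject": "Invoice mismatch", "body": "Totals differ from PO.",
     "resolution": "Re-synced billing records against the purchase order."},
]

# Multiple phrasings so the model sees varied prompts for the same record.
TEMPLATES = [
    "How should I resolve this support ticket?\nSubject: {subject}\n{body}",
    "A customer reports: {body} (re: {subject}). What is the fix?",
]

def to_training_pair(ticket: dict) -> dict:
    """Blend a raw record into the prompt/response shape a model needs."""
    prompt = random.choice(TEMPLATES).format(**ticket)
    return {"prompt": prompt, "response": ticket["resolution"]}

with open("bionic_pairs.jsonl", "w") as f:
    for t in tickets:
        f.write(json.dumps(to_training_pair(t)) + "\n")
```

The point of the template step is the “shape” Frankle describes: the information already exists in the records; the transformation gives it the behaviors and format the training task requires.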
Self-driving cars and AI software development
One good use of synthetic data is training autonomous cars to recognize when they need to hit the brakes, Mostly AI’s Ebert says. Instead of filming millions of hours of video showing multiple weather conditions, obstacles, and other potential variables, car makers can use synthetically generated visuals to mimic real-world conditions.
“We can use seed data, so some videos of rabbits or kids or whatever you want to train on, allowing us to create these millions of distinct examples which are still realistic,” she says.
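Ebert doesn’t name specific tooling, and production automotive pipelines rely on simulators or generative models rather than pixel tricks, but a toy NumPy sketch illustrates the seed-to-many-variants idea she describes: perturbing one seed frame into thousands of altered copies that mimic different lighting, sensor noise, and orientations.

```python
# Toy illustration only: expand one seed frame into many synthetic
# variants. Real pipelines use simulators or generative models.
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a real camera frame (64x64 RGB).
seed_frame = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)

def make_variant(frame: np.ndarray) -> np.ndarray:
    out = frame.astype(np.float32)
    out *= rng.uniform(0.5, 1.5)              # lighting change
    out += rng.normal(0, 10, out.shape)       # sensor noise / weather
    if rng.random() < 0.5:
        out = out[:, ::-1, :]                 # mirrored scene
    return np.clip(out, 0, 255).astype(np.uint8)

# One seed, many distinct-but-realistic training examples.
variants = [make_variant(seed_frame) for _ in range(1000)]
```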
Another example comes from Poolside, an AI developer focused on software engineering. The company uses synthetic data to create a “massive coding training ground” allowing its AI models to focus on complex coding tasks, says Eiso Kant, CTO and co-founder.
“Synthetic data addresses data scarcity by providing a cost-effective way to generate large, diverse datasets tailored to specific needs, such as software development,” he says. “In essence, synthetic data enables AI to learn from a broader and cleaner source of information, resulting in more efficient, secure, and robust AI systems.”
Synthetic data can also give companies a competitive advantage, Kant says, now that the first wave of LLMs has been trained on similar data sources.
“When the major AI vendors rely on the same readily available data to train their models, their only real competitive advantages are talent and access to more powerful computing resources,” he says. “These companies have been drawing from the same data well and limiting the potential for unique advancements.”
Human in the loop
Creating synthetic data, however, comes with its own challenges. Generating useful synthetic data takes careful curation by data professionals, Frankle says.
“Synthetic data is a powerful tool, but the tool still needs an operator,” he adds. “You can’t just open the spigot and get synthetic data.”
Using customer information to generate synthetic data, for example, can leave a residue of private data without careful oversight of the process, Frankle says. “It’s not a panacea for the problem of trying to obfuscate customer information and get a training data set,” he adds. “There’s no easy button for it. It’s not a cure-all, and it requires a lot of care.”
Synthetic data can be generated using several techniques, including random data generation and generative models, a type of machine learning. It’s also possible for an AI model to generate new training data for itself, but rigorous testing is necessary, because the process can lead to so-called self-referential loops, Kant says.
“This can introduce inaccuracies, as the model reinforces its own potentially flawed understanding,” he adds. “Just as a snake eating its own tail provides no real sustenance and can be self-destructive, a model trained on its own distorted output can become increasingly detached from reality.”
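To ground the simplest technique mentioned above, random data generation, here is a minimal Python sketch that samples each column of a small, invented customer table independently from its observed values. Unlike a trained generative model, it preserves per-column distributions but discards cross-column correlations, which is precisely the gap generative approaches close.

```python
# Minimal sketch of naive random generation for tabular synthetic data:
# sample each column independently from its empirical distribution.
# Note: this breaks cross-column relationships (e.g., plan vs. spend),
# which trained generative models are designed to preserve.
import numpy as np
import pandas as pd

real = pd.DataFrame({
    "age": [34, 45, 29, 52, 41],
    "plan": ["basic", "pro", "basic", "enterprise", "pro"],
    "monthly_spend": [20.0, 49.0, 20.0, 199.0, 49.0],
})

rng = np.random.default_rng(42)

def sample_synthetic(df: pd.DataFrame, n: int) -> pd.DataFrame:
    """Draw each column independently from its observed values."""
    return pd.DataFrame({
        col: rng.choice(df[col].to_numpy(), size=n) for col in df.columns
    })

synthetic = sample_synthetic(real, n=1000)
print(synthetic.head())
```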