According to Kari Briski, VP of AI models, software, and services at Nvidia, successfully implementing gen AI hinges on effective data management and evaluating how different models work together to serve a specific use case. While a few elite organizations like Nvidia use gen AI for things like designing new chips, most have settled on less sophisticated use cases that employ simpler models and can focus on achieving excellence in data management.
And Doug Shannon, automation and AI practitioner, and Gartner peer community ambassador, says the vast majority of enterprises are now focused on two categories of use cases that are most likely to deliver positive ROI. The first is knowledge management (KM), which consists of collecting enterprise information, categorizing it, and feeding it to a model that allows users to query it. The second is retrieval-augmented generation (RAG), where pieces of data from a larger source are vectorized to allow users to “talk” to the data. For example, they can take a thousand-page document, have it ingested by the model, and then ask the model questions about it.
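The retrieval step Shannon describes can be sketched in a few lines. Below is a minimal, hedged illustration in Python: a toy term-frequency vector stands in for a real embedding model and vector database, and the document and question are hypothetical.

```python
import math
from collections import Counter

def chunk(text: str, size: int = 200) -> list[str]:
    """Split a long document into roughly size-word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Toy term-frequency vector; a real system would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, chunks: list[str], k: int = 3) -> list[str]:
    """Rank chunks against the question and keep the top k as model context."""
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

document = "..."  # stand-in for the thousand-page document in the article
context = retrieve("What does the document say about renewals?", chunk(document))
# In production, these chunks would be prepended to the prompt so the
# model answers from the enterprise's own data.
```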
“In both of these kinds of use cases, the enterprise relies on its own data, and it costs money to leverage your own information,” says Shannon. “Small- and medium-sized companies are at a big advantage compared to large enterprises burdened with legacy processes, tools, applications, and people. We all get in our own way sometimes when we hang on to old habits.”
Data management, when done poorly, results in both diminished returns and extra costs. Hallucinations, for example, which bad data often fuels, take a lot of extra time and money to fix, and they turn users off from the tools. But some IT leaders are getting it right because they focus on three key aspects.
Collect, filter, and categorize data
The first is a series of processes — collecting, filtering, and categorizing data — that may take several months for KM or RAG models. Structured data is relatively easy to handle, but unstructured data, while much more difficult to categorize, is the most valuable. “You need to know what the data is, because it’s only after you define it and put it in a taxonomy that you can do anything with it,” says Shannon.
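As a rough sketch of what a first categorization pass can look like, here is a simple rule-based categorizer that maps documents onto a taxonomy. The taxonomy nodes and keywords are hypothetical; a real deployment would use a trained classifier and a taxonomy agreed on with the business.

```python
# Hypothetical taxonomy: node name -> keywords that signal membership.
TAXONOMY = {
    "hr/policies": ["vacation", "leave", "benefits"],
    "sales/contracts": ["renewal", "pricing", "terms"],
    "support/tickets": ["error", "outage", "refund"],
}

def categorize(doc: str) -> str:
    """Return the taxonomy node whose keywords best match the document."""
    text = doc.lower()
    scores = {node: sum(text.count(kw) for kw in kws) for node, kws in TAXONOMY.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "uncategorized"

print(categorize("Please review the renewal terms and pricing schedule."))
# -> sales/contracts
```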
Nvidia provides open-source tools and enterprise software for filtering, which can be configured to remove things like personally identifiable information (PII) or information that’s toxic for a given domain. Classifiers are provided in the toolkits to allow enterprises to set thresholds. “We also do data blending, where you combine data from different sources,” says Briski.
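A minimal sketch of that kind of threshold-based filtering follows, assuming a regex PII check and a stubbed toxicity scorer in place of the production classifiers Nvidia’s toolkits provide; both stand-ins are illustrative only.

```python
import re

# Stand-in PII detectors; real pipelines use trained classifiers.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-like numbers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
]

def toxicity_score(text: str) -> float:
    """Stub: a real pipeline would call a trained, domain-tuned classifier."""
    return 0.0

def keep(record: str, toxicity_threshold: float = 0.8) -> bool:
    """Drop records containing PII or scoring above the toxicity threshold."""
    if any(p.search(record) for p in PII_PATTERNS):
        return False  # contains PII; drop (or redact) before training
    return toxicity_score(record) < toxicity_threshold

corpus = ["Contact me at jane@example.com", "The quarterly report is attached."]
print([r for r in corpus if keep(r)])  # only the second record survives
```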
During the blending process, data can be re-arranged to change relative quantities. Some enterprises, for example, might want 30% of their data to be from people between the ages of 18 and 25, and only 15% from those over the age of 65. Or they might want 20% of their training data from customer support and 25% from pre-sales. During the blending process, duplicate information can also be eliminated.
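Here is one way the blending step might be sketched: sample each source down to its target share of the final mix and drop exact duplicates along the way. The sample-to-quota strategy and the source names are assumptions, echoing the article’s customer-support and pre-sales example.

```python
import random

def blend(sources: dict[str, list[str]], targets: dict[str, float], total: int) -> list[str]:
    """Draw from each source so it contributes its target share of `total`."""
    mix: list[str] = []
    seen: set[str] = set()
    for name, share in targets.items():
        # Dedupe within and across sources before sampling.
        pool = list(dict.fromkeys(r for r in sources[name] if r not in seen))
        take = min(round(total * share), len(pool))
        sample = random.sample(pool, take)
        seen.update(sample)
        mix.extend(sample)
    random.shuffle(mix)
    return mix

sources = {
    "customer_support": ["ticket A", "ticket B", "ticket C", "ticket C"],  # note the duplicate
    "pre_sales": ["demo notes", "RFP answer"],
}
targets = {"customer_support": 0.6, "pre_sales": 0.4}
print(blend(sources, targets, total=5))
```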
Nvidia
Information should also be filtered for quality. According to Briski, this is an iterative process that involves a variety of tasks to get to the highest quality data — those signals that improve the accuracy of a model. And quality is relative to the context of the domain you’re in, so an accurate response for finance, for example, may be completely wrong for healthcare. “As a result of quality filtering, we find the right signals and we synthetically generate similar types of data to boost the importance of that signal,” she says.
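A hedged sketch of that iterative loop: score records, keep the strong ones, and synthetically expand the best signals. The quality heuristic and the paraphrase stub below are placeholders for the domain-tuned scoring and LLM-based generation Briski describes.

```python
def quality(record: str) -> float:
    """Toy heuristic: longer records score higher; real scoring is domain-specific."""
    return min(len(record.split()) / 50, 1.0)

def paraphrase(record: str, n: int = 3) -> list[str]:
    """Stub: a real pipeline would prompt an LLM for varied rewrites."""
    return [record] * n  # placeholder variants

def refine(corpus: list[str], keep_above: float = 0.6, boost_above: float = 0.9) -> list[str]:
    """Keep high-quality records and boost the strongest signals with synthetic data."""
    kept = [r for r in corpus if quality(r) >= keep_above]
    boosted = [v for r in kept if quality(r) >= boost_above for v in paraphrase(r)]
    return kept + boosted

corpus = ["Short note.", "A detailed, well-sourced support resolution " + "with steps " * 20]
print(len(refine(corpus)))  # 1 kept record plus 3 synthetic boosts -> 4
```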
Briski also points out the importance of version control on the data sets used to train AI. With different people filtering and augmenting data, you need to trace who makes which changes and why, and you need to know which version of the data set was used to train a given model.
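One lightweight way to get that traceability, sketched below, is a manifest that fingerprints every file and records who changed the data and why. Purpose-built tools such as DVC or lakeFS do this at scale; the manifest format here is an assumption.

```python
import datetime
import hashlib
import json
import pathlib

def manifest(data_dir: str, author: str, reason: str) -> dict:
    """Fingerprint every file in a dataset and record why it changed."""
    files = {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(pathlib.Path(data_dir).rglob("*"))
        if p.is_file()
    }
    return {
        "created": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "author": author,  # who filtered or augmented the data
        "reason": reason,  # why this version exists
        "files": files,    # content hashes pin the exact bytes used
        "version": hashlib.sha256(
            json.dumps(files, sort_keys=True).encode()
        ).hexdigest()[:12],
    }

# A training job would log manifest("corpus/", "jdoe", "removed PII")["version"]
# next to the model checkpoint, tying each model to the exact data it saw.
```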
And with all the data an enterprise has to manage, it’s essential to automate the processes of data collection, filtering, and categorization. “Many organizations have data warehouses and reporting with structured data, and many have embraced data lakes and data fabrics,” says Klara Jelinkova, VP and CIO at Harvard University. “But as the datasets grow with generative AI, making sure the data is high quality and consistent becomes a challenge, especially given the increased velocity. Having automated and scalable data checks is key.”
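A minimal sketch of such an automated gate follows, with illustrative checks for empty text, missing source labels, and duplicates; frameworks like Great Expectations industrialize this pattern, and the record schema below is hypothetical.

```python
def check_batch(records: list[dict]) -> list[str]:
    """Return a list of problems; an empty list means the batch may proceed."""
    problems = []
    if not records:
        problems.append("empty batch")
    for i, r in enumerate(records):
        if not r.get("text", "").strip():
            problems.append(f"record {i}: missing or empty text")
        if "source" not in r:
            problems.append(f"record {i}: missing source label")
    texts = [r.get("text") for r in records]
    if len(texts) != len(set(texts)):
        problems.append("duplicate records in batch")
    return problems

batch = [{"text": "Q3 revenue grew 4%.", "source": "finance"}, {"text": ""}]
for issue in check_batch(batch):
    print("blocked:", issue)  # a real pipeline would fail the run here
```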
Hone data governance and compliance
The second aspect of data management to focus on is data governance and compliance, clearly illustrated by experiments run at Harvard. Last year, the IT department launched the AI Sandbox, a gen AI environment developed in-house and made available at no cost to its community of users. The sandbox offers access to several different LLMs to allow people to experiment with a broad range of tools.
The Harvard IT department also ran innovation programs, in which people pitched projects that use gen AI. The pitches had to address the expected ROI, which isn’t necessarily about financial returns but could be some combination of other gains, like new knowledge and discovery, or improved processes. If a project was accepted, it was given a small seed grant, and projects that demonstrated the expected benefits could be scaled up.
According to Jelinkova, one of the important aspects of data management in regard to gen AI projects is having a second look at data governance and thinking about what needs to change. “We started with generic AI usage guidelines, just to make sure we had some guardrails around our experiments,” she says. “We’ve been doing data governance for a long time, but when you start talking about automated data pipelines, it quickly becomes clear you need to rethink the older models of data governance that were built more around structured data.”
Compliance is another important area of focus. As a global enterprise thinking about scaling some of its AI projects, Harvard keeps an eye on evolving regulatory environments in different parts of the world. It has an active working group dedicated to following and understanding the EU AI Act, and before its use cases go into production, it runs them through a process to make sure all compliance obligations are satisfied.
“When you work with new technology, you’re on the bleeding edge and you run a risk that the legislative landscape shifts under you over time,” she says. “For us, it’s all part of data governance. You need to have a compliance framework that allows you to rework things you’ve done before as the legislative landscape changes.”
Prioritize data privacy and protecting IP
Third is data privacy and protection of intellectual property (IP). For most organizations, data management is intrinsically tied to privacy. They need to make sure they’re not exposing themselves to risk. “You have filtering, normalization, some sort of augmentation, and you have to annotate the data,” says Jelinkova. “But then you also address security and privacy of the data, and you need to protect your own IP.”
As they dig down into their data, many enterprises discover they don’t understand the role-based access control (RBAC) associated with some of it — if there was any. As a result, they have no idea what data was shared within, or even outside, the enterprise. That’s where guidelines and guardrails show their importance, and why they need to be put in place well in advance, as the sketch below illustrates.
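The sketch shows the deny-by-default posture that guidance implies: before retrieved data reaches a user or a model, check that the requesting role is actually allowed to see its category. The roles, categories, and rules here are hypothetical.

```python
# Hypothetical mapping of data categories to the roles allowed to read them.
ACCESS = {
    "hr/policies": {"hr", "admin"},
    "sales/contracts": {"sales", "legal", "admin"},
}

def can_read(role: str, category: str) -> bool:
    """Deny by default: uncategorized or unmapped data is never shared."""
    return role in ACCESS.get(category, set())

def retrieve_for_user(role: str, docs: list[tuple[str, str]]) -> list[str]:
    """Filter retrieved (category, text) pairs by the caller's role."""
    return [text for category, text in docs if can_read(role, category)]

docs = [("hr/policies", "Leave policy v3"), ("sales/contracts", "ACME renewal")]
print(retrieve_for_user("sales", docs))  # -> ['ACME renewal']
```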
Jelinkova says Harvard is very proactive on privacy principles, and it has a comprehensive data security program that includes data classification and guidance on which data can be used for different types of AI. “We’re very thoughtful about IP,” she says. “When we collect data to construct an AI tutor, we need to make sure we have all the IP rights for all the data we’re going to feed it.”
And because, like most universities, Harvard creates a lot of its own IP, it has to make sure it protects that, too. That’s not hard to do with AI tools created in-house. But when public models are used, extra measures have to be taken so they don’t use your precious information, either directly or indirectly, for commercial benefit. To be safe, Harvard puts contractual protections in place with third-party AI tool vendors to ensure the security and privacy of its data.
“When it comes to using your own data in very large foundational models, there’s still a lot of misunderstanding and not a lot of transparency about what some of the tools do with your data,” says Shannon. “Azure backs into using OpenAI, so even when they say they don’t take user data and give you a long list of all the stuff you’re protected from, it’s still a black box.”