CIOs are under pressure to drive through AI initiatives that will make their organizations more competitive, effective, and productive.
Yet for many, data is as much an impediment as a key resource. At Gartner’s London Data and Analytics Summit earlier this year, Senior Principal Analyst Wilco Van Ginkel predicted that at least 30% of genAI projects would be abandoned after proof of concept through 2025, with poor data quality listed as one of the primary reasons. At the same summit, Senior Director Analyst Roxane Edijlala noted that “having data ready for AI drives greater business outcomes by 20%.”
The problem is that many businesses are unclear about how they should prepare their data. They’re concerned the process will be onerous, particularly for organizations without in-house data or AI expertise.
Any organization trying to make progress on its AI journey needs to understand one fundamental truth: AI is dependent on good data, and the results you get are only as powerful as the data you feed in.
This data needs to be well prepared, easy to manage, and accessible across the range of different environments in which you plan to implement genAI tools. It’s the bedrock of AI innovation, increased productivity, and new revenue streams.
Data fundamentals
For organizations to realize the potential of genAI, they need access to all the data across the organization. This immediately adds complexity. A typical organization has massive amounts of data in both physical and digital formats, spread across different, complex enterprise systems, and numerous data silos. Step one, then, is to create a comprehensive inventory, cataloging all your data, where it’s located, and how it’s formatted. This provides a starting point from which you can organize it.
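To make that first step concrete, here is a minimal sketch of what one catalog entry might record. The field names and sample assets are purely illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    """One inventoried data asset. All field names here are illustrative."""
    name: str            # e.g., "customer_contracts"
    location: str        # system, share, or physical archive where it lives
    data_format: str     # "pdf", "database table", "paper", ...
    owner: str           # accountable team or person
    structured: bool     # structured vs. unstructured
    tags: list[str] = field(default_factory=list)

# A two-item inventory, standing in for the thousands of assets a real
# organization would catalog across its systems and silos.
inventory = [
    CatalogEntry("customer_contracts", "sharepoint://legal/contracts",
                 "pdf", "legal-ops", structured=False, tags=["contracts"]),
    CatalogEntry("orders", "warehouse.sales.orders",
                 "database table", "sales-eng", structured=True),
]
```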
Step two is to assess and address data quality, establishing key standards by which to evaluate the accuracy, completeness, and reliability of the data. IT teams can then use those standards to identify where data falls short and what remediation is needed. This starts with a one-time pass over existing data, but once quality standards are established, they can also be applied to all incoming data, forming an ongoing data hygiene mechanism.
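As a rough sketch of what such a hygiene gate might look like, the pandas-based checks and thresholds below are illustrative assumptions; real standards would be set per dataset:

```python
import pandas as pd

# Hypothetical standards; real thresholds would be agreed per dataset.
QUALITY_RULES = {
    "min_completeness": 0.95,  # minimum share of non-null values in any column
    "max_duplicates": 0.01,    # maximum share of duplicate rows
}

def quality_report(df: pd.DataFrame) -> dict:
    """Score a dataset against the standards above."""
    completeness = float(df.notna().mean().min())  # worst-scoring column
    duplicates = float(df.duplicated().mean())
    return {
        "completeness": completeness,
        "completeness_ok": completeness >= QUALITY_RULES["min_completeness"],
        "duplicates": duplicates,
        "duplicates_ok": duplicates <= QUALITY_RULES["max_duplicates"],
    }

# Run once over existing data, then on every incoming batch as a hygiene gate.
```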
Implementing governance and security for that data is also critically important, ensuring that it’s both protected against breaches and compliant with regulation. This in turn requires effective governance tools and a clear retention schedule. Data can live on forever, and spend years being fed into genAI tools, so it’s vital to track when it becomes redundant or irrelevant, or risk poor-quality results.
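A retention schedule can be as simple as a lookup from record category to holding period. The categories and periods below are hypothetical; in practice they would come from legal and records-management teams:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention periods; real ones are set by legal/records teams.
RETENTION = {
    "contract": timedelta(days=365 * 7),
    "chat_log": timedelta(days=365 * 2),
}

def is_expired(category: str, created: datetime) -> bool:
    """Flag records past retention so they stop feeding genAI tools."""
    return datetime.now(timezone.utc) - created > RETENTION[category]

# Example: a chat log created in early 2022 is past a two-year retention period.
old_chat = datetime(2022, 1, 1, tzinfo=timezone.utc)
print(is_expired("chat_log", old_chat))  # True: more than two years old
```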
Lastly, businesses need to be sure that their data is sourced legally and ethically, and in a way that respects privacy and confidentiality, along with any relevant intellectual property rights.
Quality, relevance, governance, and responsibility all matter because genAI tools are powerful: they generate responses and insights that can have a real impact on your organization – and on individual lives. That makes the ability to trace data back to its source crucial.
Basic errors
However, there are pitfalls that can spoil success. Failing to maintain a comprehensive dataset is one of them. It’s critical to take a unified approach that covers both structured and unstructured data.
Based on what we see with our customers, only about 20% of the data you require for any use case is typically visible, while another 20% is what we call ROT: redundant, obsolete, or trivial. The other 60% will be unstructured, sitting in paper documents, or hidden away on siloed systems.
Some of that 60% – audio, video, chat – has become hugely significant because so much communication now happens in those channels. Being able to mine it is a genAI superpower, but that can only happen if the data is visible and accessible.
As a result, the inability to mine unstructured data limits your ability to get truly meaningful results. What’s more, you’re also missing information that can be used to train and fine-tune algorithms, and make them more intelligent.
Some businesses make the mistake of getting caught up in the momentum around AI and going big, trying to do everything at once. Yet it actually makes more sense to do the opposite: pick a use case with a manageable number of data sources, one that doesn’t force you to work across several different data formats.
The fast track to getting data AI-ready
Getting your data AI-ready is one of the core value propositions of Iron Mountain InSight® Digital Experience Platform (DXP). It brings your data together for intelligent decision-making and helps you create workflows to prepare that data and realize its value.
Iron Mountain works with clients to build a thorough inventory of their physical and digital data, so they can consolidate all the useful information into a single, accessible location in InSight DXP. This eliminates guesswork and ensures that data is pulled out of fragmented environments into one source of truth.
What’s more, much of this is done using AI to sift through, evaluate, and consolidate your data. Once it’s there, you can extract value using genAI-powered chat, interacting with your documents – digital or physical – conversationally and in real time.
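InSight DXP’s internals aren’t public, but document-chat features of this kind generally follow a retrieve-then-generate pattern: find the passages relevant to a question, then have a genAI model compose an answer from them. Here is a toy version of just the retrieval step, over invented document snippets, using scikit-learn:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented snippets standing in for digitized documents; a real system would
# index far more text and hand the retrieved passages to a genAI model.
docs = [
    "Master services agreement with Acme Corp, effective January 2025.",
    "Invoice 4417: storage services, Q3, payment terms net 30.",
    "Records policy: contracts are retained for seven years after expiry.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k snippets most relevant to the question."""
    q = vectorizer.transform([question])
    scores = cosine_similarity(q, doc_vectors)[0]
    return [docs[i] for i in scores.argsort()[::-1][:k]]

print(retrieve("How long do we keep contracts?"))  # matches the records policy
```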
At Iron Mountain, we use InSight DXP ourselves to enhance our contracts lifecycle management (CLM) systems and streamline data entry. Before InSight DXP, our operations team manually extracted and entered contract data into Salesforce and other systems, which was time-consuming. By automating data extraction and API integration through InSight DXP, we’ve reduced manual activity by 65%.
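InSight DXP’s own integration API isn’t shown here, but the general pattern – extract the fields, then create the record programmatically instead of keying it in – looks roughly like this sketch using the open-source simple-salesforce library. The extracted values and credentials are placeholders:

```python
from simple_salesforce import Salesforce

# Stand-in for the output of an automated extraction step; field values are
# placeholders, and InSight DXP's actual API is not depicted here.
extracted_fields = {
    "AccountId": "001XXXXXXXXXXXXXXX",  # placeholder Salesforce record ID
    "StartDate": "2025-01-01",
    "ContractTerm": 12,                 # months
    "Status": "Draft",
}

sf = Salesforce(username="user@example.com",
                password="********",
                security_token="********")

# Create the Contract record instead of entering the data by hand.
result = sf.Contract.create(extracted_fields)
print(result["id"], result["success"])
```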
Ongoing data governance
It’s crucial that enterprises have an ongoing cadence around their data, rather than regard the process as a ‘one and done’. This makes governance vital, and that can be intimidating without the right technology.
Using InSight DXP, with its integrated information governance suite, can help overcome that challenge. It also provides a unified platform, so that how you source and maintain your data – and how you assess its quality – remains subject to the same clear rules. Enterprises can create their own workflows, using InSight DXP’s low-code and automation capabilities to enforce data quality and drive compliance.
By spending time setting up their processes the right way, enterprises can ensure their data remains clean, relevant, and trustworthy going forward, and put their genAI initiatives on the fast track to success.
To find out more about getting AI-ready with Iron Mountain InSight DXP, visit www.ironmountain.com/insight.