Whether it’s a financial services firm looking to build a personalized virtual assistant or an insurance company in need of ML models capable of identifying potential fraud, artificial intelligence (AI) is primed to transform nearly every industry. In fact, a recent Cloudera survey found that 88% of IT leaders said their organization is currently using AI in some way.
AI’s ability to automate repetitive tasks leads to significant time savings on processes related to content creation, data analysis, and customer experience, freeing employees to work on more complex, creative issues. But adoption isn’t always straightforward. The path to achieving AI at scale is paved with myriad challenges: data quality and availability, deployment, and integration with existing systems among them.
To overcome those challenges and successfully scale AI enterprise-wide, organizations must create a modern data architecture that leverages a mix of technologies, capabilities, and approaches, including data lakehouses, data fabric, and data mesh.
Barriers to AI at scale
Despite so many organizations investing in AI, the reality is that the value derived from those solutions has been limited. The factors behind this vary and aren’t confined to purely technical limitations. There’s also an element of employee buy-in that can cause AI adoption to lag, or even stall out altogether. Cloudera’s survey revealed that 39% of IT leaders who have already implemented AI in some way said that only some or almost none of their employees currently use any kind of AI tools. So even where projects are being implemented widely, in more than one-third of cases employees simply aren’t using the tools.
Another challenge stems from the existing architecture within these organizations. They may implement AI, but their current data architecture isn’t equipped to scale with the huge volumes of data that power AI and analytics. Scaling requires more flexible systems that can manage data storage and maintain data quality as data is fed into new AI models.
As data is moved between environments, fed into ML models, or leveraged in advanced analytics, security and compliance are top-of-mind concerns for many. In fact, among surveyed leaders, 74% identified security and compliance risks surrounding AI as one of the biggest barriers to adoption. These IT leaders need a data architecture that can both support rapid AI scaling and prepare users for an evolving regulatory landscape.
This challenge is particularly front and center in financial services with the arrival of new regulations and policies like the Digital Operational Resilience Act (DORA), which puts strict ICT risk management and security guidelines in place for firms in the European Union. Rapidly evolving regulatory requirements mean organizations need to ensure they have total control and visibility into their data, which requires a modern approach to data architecture.
Building a strong, modern foundation
But what goes into a modern data architecture? While every platform is different, there are three key elements organizations should look for: data lakehouses, data mesh, and data fabric. Each represents an approach to data management that can help organizations adhere to security requirements, break through barriers like data silos, and deliver stronger outcomes from enterprise-wide AI adoption.
Before we go further, let’s quickly define what we mean by each of these terms. A data mesh is a set of best practices for managing data in a decentralized organization, allowing for easy sharing of data products and a self-service approach to data management. A data fabric is a series of cooperating technologies that help create a unified view of data from disparate systems and services across the organization. Then there’s the data lakehouse—an analytics system that allows data to be processed, analyzed, and stored in both structured and unstructured forms.
With AI models demanding vast amounts of structured and unstructured data for training, data lakehouses offer a highly flexible approach that is ideally suited to supporting them at scale. A data mesh delivers greater ownership and governance to the IT team members who work closest to the data in question. A data fabric unifies the data architecture, making data seamlessly connected and accessible through a single layer of abstraction.
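To make the lakehouse idea concrete, the sketch below shows how a single engine can query structured tables and unstructured text side by side. It is a minimal illustration assuming a Spark-based lakehouse; the storage paths, table layout, and column names are hypothetical rather than drawn from any specific deployment described in this article.

```python
# Minimal lakehouse-style access sketch (assumed Spark environment).
# Paths, table names, and columns are illustrative placeholders only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("lakehouse-sketch").getOrCreate()

# Structured data: customer transactions stored as Parquet tables.
transactions = spark.read.parquet("s3a://lakehouse/tables/transactions/")

# Unstructured data: raw support-chat transcripts stored as text files,
# tagged with their source file so they can be traced and governed.
transcripts = (
    spark.read.text("s3a://lakehouse/raw/chat_transcripts/")
    .withColumn("source_file", F.input_file_name())
)

# The same engine can aggregate structured records for analytics while the
# unstructured text remains available for model training or enrichment.
spend_per_customer = (
    transactions
    .groupBy("customer_id")
    .agg(F.sum("amount").alias("total_spend"))
)
spend_per_customer.show(5)
```

The point of the sketch is not any particular query, but that structured and unstructured data live behind one interface, which is what lets a lakehouse feed both analytics and AI training pipelines without separate silos.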
Those benefits are widely understood, with 67% of IT leaders surveyed by Cloudera noting that data lakehouses reduce the complexity of data pipelines. Similarly, both data mesh and data fabric have gained significant attention among IT leaders in recent years, with 54% and 48% of respondents respectively stating they planned to have those components in place by the end of 2024.
Whatever the end goal of an organization’s AI adoption is, its success can be traced back to the foundational elements of IT and data architecture that support it. And the results for those who embrace a modern data architecture speak for themselves.
For example, Cloudera customer OCBC Bank leveraged Cloudera Machine Learning and a powerful data lakehouse to develop personalized recommendations and insights that can be pushed to customers through the bank’s mobile app. This was made possible by the hybrid data platform OCBC Bank utilized, enabling the bank to fast-track AI deployment and deliver a major return on investment.
With a strong foundation of modern data architecture, IT leaders can move AI initiatives forward, scale them over time, and generate more value for their business.
To learn more about how enterprises can prepare their environments for AI, click here.