In our continuous effort to understand and support our customers better, we regularly conduct end-user surveys. Recently, we sponsored a study with IDC* that surveyed teams of data scientists, data engineers, developers, and IT professionals working on AI projects across enterprises worldwide. Our goal was to identify the top challenges they face and the best practices of more mature AI teams. The study revealed that enterprise AI challenges vary with the maturity of teams as they begin to test and operationalize AI and GenAI. However, several key findings stood out as persistent issues across all maturity levels, and they are now informing our approach to developing intelligent data infrastructure in support of AI.
One of the top findings was that 63% of respondents indicated that right-sizing storage for AI needs major improvement or a complete overhaul, with storage bottlenecks persistently slowing down AI modeling. To address this, we have partnered closely with NVIDIA to qualify solutions for model training and have optimized ONTAP for SuperPOD qualification, which is currently in testing. Our CEO recently announced a new data infrastructure designed for the AI era, which will scale capacity and performance independently to handle the needs of the largest foundation model development and scale down for inferencing. This infrastructure will run both on premises and as software in the world’s largest public clouds, offering the ONTAP data management services critical to efficient and responsible AI.
Another significant finding was that respondents cited data access restricted by infrastructure limitations as the #1 cause of AI project failure. To tackle this challenge, we are focused on simplifying and unifying data storage to support better data access. We provide a single data and control plane across the NetApp data estate, spanning the edge, the data center, and the public clouds. This approach natively supports all data formats and efficient data movement, bringing data to the right resources, whether on-premises or in one of the hyperscalers, for each stage of the data pipeline. Additionally, we expose our capabilities to the tools data teams already use, such as AWS SageMaker, Google Vertex AI, and Azure ML Studio.
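To make that idea concrete, here is a minimal, illustrative sketch of how a training job might read a dataset from an ONTAP-backed NFS volume mounted into its environment. The mount path, dataset name, and file layout are assumptions for illustration only, not a documented NetApp, SageMaker, Vertex AI, or Azure ML interface.

```python
# Illustrative only: read training data from an NFS export (e.g., an ONTAP-backed
# volume) mounted into the training environment. The mount point and layout below
# are hypothetical.
from pathlib import Path

import pandas as pd

DATA_ROOT = Path("/mnt/ontap_datasets/churn")  # hypothetical mount point


def load_training_frame(root: Path) -> pd.DataFrame:
    """Concatenate all Parquet partitions found under the mounted volume."""
    parts = sorted(root.glob("*.parquet"))
    if not parts:
        raise FileNotFoundError(f"No Parquet files found under {root}")
    return pd.concat([pd.read_parquet(p) for p in parts], ignore_index=True)


if __name__ == "__main__":
    df = load_training_frame(DATA_ROOT)
    print(f"Loaded {len(df)} rows with columns: {list(df.columns)}")
```

The point of the sketch is simply that when the data plane presents the same volume on-premises and in the cloud, the training code does not need to change as the workload moves between stages of the pipeline.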
The third key finding was that only 20% of respondents have mature, centralized policies for data governance and security for AI. This is an area where we are placing significant emphasis going forward. NetApp differentiates itself with policy-driven security at the data layer, employing continuously updated AI/ML models to detect and respond to threats in real time with 99%+ precision. This protection can be applied to both the data and the models within the AI workflow. We are also developing tools that help data scientists get their work done more safely and quickly, in compliance with privacy laws. These tools will accelerate and simplify data discovery and curation, provide assurance of secure and compliant AI, ensure accuracy and traceability, and integrate with data science workflow tools.
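As a generic illustration of the kind of ML-based detection described above (not NetApp’s actual model, features, or precision figures), the sketch below flags an anomalous burst of file writes and renames in access telemetry using a standard isolation-forest detector.

```python
# Illustrative only: a toy anomaly detector over file-access telemetry using
# scikit-learn's IsolationForest. The features and thresholds are assumptions
# for this sketch, not NetApp's production detection pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Features per time window: [files_read, files_written, files_renamed, unique_extensions]
baseline = rng.normal(loc=[200, 50, 5, 12], scale=[40, 15, 2, 3], size=(1000, 4))

# A hypothetical ransomware-like burst: heavy writes and renames, few extensions.
suspicious = np.array([[180, 900, 850, 2]])

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

for window in (baseline[:1], suspicious):
    label = detector.predict(window)[0]  # +1 = normal, -1 = anomaly
    print("anomaly" if label == -1 else "normal", window.tolist())
```

In practice, a detector like this would sit at the data layer, scoring access patterns continuously and triggering a policy-driven response (for example, snapshotting or blocking) when a window is scored as anomalous.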
In conclusion, the insights from the IDC survey have significantly influenced our planning and approach to developing AI-ready intelligent data infrastructure. By addressing storage bottlenecks, improving data access, and enhancing data governance and security, we are better positioned to help our customers leverage AI and GenAI effectively. For more details, you can:
- Watch this conversation between NetApp’s Gabie Boko, CMO, and Hoseb Dermanilian, Global Head of AI GTM & Sales
- Access the IDC study to self-assess your organization’s AI maturity and learn best practices
- Register for the NetApp-IDC-NVIDIA webinar to dive deeper into the survey results
- Understand our CEO’s AI vision from the recent Insight Conference
*N=1220 – IDC Transformation Study, March 2024