When companies first start deploying artificial intelligence and building machine learning projects, the focus tends to be on theory. Is there a model that can provide the necessary results? How can it be built? How can it be trained?
But the tools that data scientists use to create these proofs of concept often don’t translate well into production systems. As a result, it takes more than nine months on average to deploy an AI or ML solution, according to IDC data.
“We call this ‘model velocity,’ how much time it takes from start to finish,” says IDC analyst Sriram Subramanian.
This is where MLOps comes in. MLOps — machine learning operations — is a set of best practices, frameworks, and tools that help companies manage data, models, deployment, monitoring, and other aspects of taking a theoretical proof-of-concept AI system and putting it to work.
“MLOps brings model velocity down to weeks — sometimes days,” says Subramanian. “Just like the average time to build an application is accelerated with DevOps, this is why you need MLOps.”
By adopting MLOps, he says, companies can build more models, innovate faster, and address more use cases. “The value proposition is clear,” he says.
IDC predicts that by 2024, 60% of enterprises will have operationalized their ML workflows using MLOps. And when companies were surveyed about the challenges of AI and ML adoption, lack of MLOps was a major obstacle, second only to cost, Subramanian says.
Here we examine what MLOps is, how it has evolved, and what organizations need to accomplish and keep in mind to make the most of this emerging methodology for operationalizing AI.
The evolution of MLOps
When Eugenio Zuccarelli first started building machine learning projects several years ago, MLOps was just a set of best practices. Since then, Zuccarelli has worked on AI projects at several companies, including ones in healthcare and financial services, and he’s seen MLOps evolve over time to include tools and platforms.
Today, MLOps offers a fairly robust framework for operationalizing AI, says Zuccarelli, who’s now innovation data scientist at CVS Health. By way of example, Zuccarelli points to a project he worked on previously to create an app that would predict adverse outcomes, such as hospital readmission or disease progression.
“We were exploring data sets and models and talking with doctors to find out the features of the best models,” he says. “But to make these models actually useful we needed to bring them in front of actual users.”
That meant creating a mobile app that was reliable, fast, and stable, with a machine learning system on the back end connected via API. “Without MLOps we would not have been able to ensure that,” he says.
His team used the H2O MLOps platform and other tools to create a health dashboard for the model. “You don’t want the model to shift substantially,” he says. “And you don’t want to introduce bias. The health dashboard lets us understand if the system has shifted.”
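The article doesn’t detail how that dashboard works under the hood, but drift checks of this kind are commonly automated by comparing the distribution of live feature values against a reference sample taken at training time. A minimal sketch, assuming a NumPy/SciPy pipeline and a Kolmogorov-Smirnov test; the feature names and alert threshold are made up for illustration:

```python
# Minimal drift-check sketch (illustrative only; not the H2O MLOps implementation).
# Compares live feature distributions against a training-time reference sample.
import numpy as np
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.01  # hypothetical alerting threshold


def check_drift(reference: dict[str, np.ndarray], live: dict[str, np.ndarray]) -> dict[str, bool]:
    """Flag features whose live distribution differs significantly from training."""
    flags = {}
    for feature, ref_values in reference.items():
        stat, p_value = ks_2samp(ref_values, live[feature])
        flags[feature] = p_value < DRIFT_P_VALUE
    return flags


# Example: synthetic data with a deliberate shift in one feature to simulate drift
rng = np.random.default_rng(0)
reference = {"age": rng.normal(50, 10, 5000), "bmi": rng.normal(27, 4, 5000)}
live = {"age": rng.normal(58, 10, 1000), "bmi": rng.normal(27, 4, 1000)}
print(check_drift(reference, live))  # e.g. {'age': True, 'bmi': False}
```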
Using an MLOps platform also allowed for updates to production systems. “It’s very difficult to swap out a file without stopping the app from working,” Zuccarelli says. “MLOps tools can swap out a system even though it’s in production with minimal disruption to the system itself.”
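The article doesn’t spell out how H2O MLOps handles the swap, but the general pattern on most MLOps platforms is to promote a new model version in a registry while the serving code resolves a stable stage or alias, so nothing in the running app or its API contract has to change. A hedged sketch using the open-source MLflow registry as a stand-in (not the tool Zuccarelli’s team used); the model name, version number, and request payload are hypothetical:

```python
# Illustrative model-promotion sketch using MLflow's model registry
# (a stand-in for the platform described in the article).
import mlflow.pyfunc
import pandas as pd
from mlflow.tracking import MlflowClient

MODEL_NAME = "readmission-risk"  # hypothetical registered model name

client = MlflowClient()

# Promote the newly trained version to Production; the previous version is archived.
client.transition_model_version_stage(
    name=MODEL_NAME,
    version="7",                    # hypothetical new version number
    stage="Production",
    archive_existing_versions=True,
)

# Serving code always loads "the current Production model", so swapping versions
# never requires redeploying the app itself.
model = mlflow.pyfunc.load_model(f"models:/{MODEL_NAME}/Production")

incoming_features = pd.DataFrame([{"age": 72, "prior_admissions": 3}])  # hypothetical API payload
prediction = model.predict(incoming_features)
```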
As MLOps platforms mature, they accelerate the entire model development process because companies don’t have to reinvent the wheel with every project, he says. And the data pipeline management functionality is also critical to operationalizing AI.
“If we have multiple data sources that need to talk to each other, that’s where MLOps can come in,” he says. “You want all the data flowing into the ML models to be consistent and of high quality. Like they say, garbage in, garbage out. If the model has poor information, then the prediction will itself be poor.”
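What “consistent and of high quality” looks like in code is usually an automated validation gate at the point where the sources merge into the training or scoring pipeline. A minimal sketch, assuming a pandas-based pipeline; the column names and rules are invented for illustration:

```python
# Minimal data-quality gate for a feature pipeline (illustrative; column names are made up).
import pandas as pd

REQUIRED_COLUMNS = {"patient_id", "age", "bmi", "last_visit_date"}
VALID_AGE_RANGE = (0, 120)


def validate_features(df: pd.DataFrame) -> pd.DataFrame:
    """Fail fast on schema or range problems instead of letting bad rows reach the model."""
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"Missing required columns: {sorted(missing)}")

    if df["patient_id"].duplicated().any():
        raise ValueError("Duplicate patient_id values found; sources may be inconsistent")

    out_of_range = ~df["age"].between(*VALID_AGE_RANGE)
    if out_of_range.any():
        raise ValueError(f"{int(out_of_range.sum())} rows have implausible ages")

    # Drop rows with missing critical fields rather than imputing silently.
    return df.dropna(subset=["age", "bmi"])
```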
MLOps fundamentals: A moving target
But don’t assume that because platforms and tools are becoming available, you can ignore the core principles of MLOps. Enterprises that are just starting to move to this discipline should keep in mind that, at its core, MLOps is about creating strong connections between data science and data engineering.
“To ensure the success of an MLOps project, you need both data engineers and data scientists on the same team,” Zuccarelli says.
Moreover, the tools necessary to protect against bias, to ensure transparency, to provide explainability, and to support ethics platforms — these tools are still being built, he says. “It definitely still needs a lot of work because it’s such a new field.”
So, without a full turnkey solution to adopt, enterprises must be versed in all facets that make MLOps so effective at operationalizing AI. And this means developing expertise in a wide range of activities, says Meagan Gentry, national practice manager for the AI team at Insight, a Tempe-based technology consulting company.
MLOps covers the full gamut from data collection, verification, and analysis, all the way to managing machine resources and tracking model performance. And the tools available to aid enterprises can be deployed on premises, in the cloud, or on the edge. They can be open source or proprietary.
But mastering the technical aspects is only part of the equation. MLOps also borrows the agile methodology of DevOps and its principle of iterative development, says Gentry. Moreover, as with any agile-related discipline, communication is crucial.
“Communication in every role is critical,” she says. “Communication between the data scientist and the data engineer. Communication with the DevOps and with the larger IT team.”
For companies just starting out, MLOps can be confusing. There are general principles, dozens of vendors, and even more open-source tool sets.
“This is where the pitfalls come in,” says Helen Ristov, senior manager of enterprise architecture at Capgemini Americas. “A lot of this is in development. There isn’t a formal set of guidelines like what you’d see with DevOps. It’s a nascent technology and it takes time for guidelines and policies to catch up.”
Ristov recommends that companies start their MLOps journeys with their data platforms. “Maybe they have data sets, but they’re living in different locations and they don’t have a cohesive environment,” she says.
Companies don’t need to move all the data to a single platform, but there does need to be a way to bring in data from disparate data sources, she says, and this can vary by application. Data lakes, for example, work well for companies that run a lot of high-frequency analytics and are looking for low-cost storage.
MLOps platforms generally come with tools to build and manage data pipelines and keep track of different versions of training data, but it’s not press and go, she says.
Then there’s model creation, versioning, logging, weighing the feature sets and other aspects of managing the models themselves.
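In practice, much of that bookkeeping is experiment tracking: each training run logs its parameters, metrics, and resulting artifact so a production model can be traced back to the run that produced it. A brief sketch using the open-source MLflow tracking API, one of the many tools in the ecosystem Ristov describes; the model, hyperparameters, and data here are synthetic placeholders:

```python
# Illustrative experiment-tracking sketch with MLflow (one tool among many; values are made up).
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

params = {"n_estimators": 200, "learning_rate": 0.05, "max_depth": 3}

with mlflow.start_run(run_name="readmission-baseline"):
    mlflow.log_params(params)                      # versioned hyperparameters
    model = GradientBoostingClassifier(**params).fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    mlflow.log_metric("test_auc", auc)             # the metric recorded for this run
    mlflow.sklearn.log_model(model, "model")       # the artifact itself, tied to params and metrics
```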
“There is a substantial amount of coding that goes into this,” Ristov says, adding that setting up an MLOps platform can take months and that platform vendors still have a lot of work to do when it comes to integration.
“There’s so much development running in different directions,” she says. “There’s a lot of tools that are being developed, and the ecosystem is very big and people are just picking whatever they need. MLOps is at an adolescent stage. Most organizations are still figuring out optimal configurations.”
Making sense of the MLOps landscape
The MLOps market is expected to grow to around $700 million by 2025, up from about $185 million in 2020, says IDC’s Subramanian. But that is probably a significant undercount, he says, because MLOps products are often bundled in with larger platforms. The true size of the market, he says, could be more than $2 billion by 2025.
MLOps vendors tend to fall into three categories, starting with the big cloud providers, including AWS, Azure, and Google Cloud, which provide MLOps capabilities as a service, Subramanian says.
Then there are ML platform vendors such as DataRobot, Dataiku, and Iguazio.
“The third category is what they used to call data management vendors,” he says. “The likes of Cloudera, SAS, and Databricks. Their strength was data management capabilities and data operations, and they expanded into ML capabilities and eventually into MLOps capabilities.”
All three areas are exploding, Subramanian says, adding that what makes an MLOps vendor stand out is whether they can support both on-prem and cloud deployment models, whether they can implement trustworthy and responsible AI, whether they’re plug-and-play, and how easily they can scale. “That’s where differentiation comes in,” he says.
According to a recent IDC survey, the lack of methods to implement responsible AI was one of the top three obstacles to AI and ML adoption, tied in second place with lack of MLOps itself.
That lack of MLOps looms so large in part because there is no real alternative to embracing it, says Sumit Agarwal, AI and machine learning research analyst at Gartner.
“The other approaches are manual,” he says. “So, really, there is no other option. If you want to scale, you need automation. You need traceability of your code, data, and models.”
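Traceability, in its simplest form, means recording which code commit and which data snapshot produced a given model. A minimal, tool-agnostic sketch of that idea; the file paths are hypothetical and the manifest format is invented for illustration:

```python
# Minimal traceability manifest: tie a model artifact to its code commit and data snapshot.
# Illustrative only; the paths below are hypothetical.
import hashlib
import json
import subprocess
from datetime import datetime, timezone


def sha256_of(path: str) -> str:
    """Content hash so a model can be traced back to the exact file it was built from."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


manifest = {
    "trained_at": datetime.now(timezone.utc).isoformat(),
    "git_commit": subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip(),
    "training_data_sha256": sha256_of("data/train.parquet"),    # hypothetical path
    "model_artifact_sha256": sha256_of("artifacts/model.pkl"),  # hypothetical path
}

with open("artifacts/model_manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)
```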
According to a recent Gartner survey, the average time it takes to take a model from proof of concept to production has dropped from nine to 7.3 months. “But 7.3 months is still high,” Agarwal says. “There’s a lot of opportunity for organizations to take advantage of MLOps.”
Making the cultural shift to MLOps
MLOps also requires a cultural change on the part of a company’s AI team, says Amaresh Tripathy, global leader of analytics at Genpact.
“The popular image of a data scientist is a mad scientist trying to find a needle in a haystack,” he says. “The data scientist is a discoverer and explorer — not a factory floor churning out widgets. But that’s what you need to do to actually scale it.”
And companies often underestimate the amount of effort it will take, he says.
“People have a better appreciation for software engineering,” he says. “There’s a lot of discipline about user experience, requirements. But somehow people don’t think that if I deploy a model I have to go through the same process. Then there’s the mistake of assuming that all the data scientists who are good in a test environment will naturally be able to deploy it, or that they can throw in a couple of IT colleagues and be able to do it. There’s a lack of appreciation for what it takes.”
Companies also fail to understand that MLOps can have ripple effects on other parts of the company, often leading to dramatic change.
“You can put MLOps in a call center and the average response time will actually increase because the easy stuff is taken care of by the machine, by the AI, and the stuff that goes to the human actually takes longer because it’s more complex,” he says. “So you need to rethink what the work is going to be, and what people you require, and what the skill sets should be.”
Today, he says, fewer than 5% of decisions in an organization are driven by algorithms, but that’s changing rapidly. “We anticipate that 20 to 25% of decisions will be driven by algorithms in the next five years. Every statistic we look at, we’re at an inflection point of rapid scaling up for AI.”
And MLOps is the critical piece, he says.
“One hundred percent,” he says. “Without that, you will not be able to do AI consistently. MLOps is the scaling catalyst of AI in the enterprise.”