Five things you might not be considering about your MLOps needs

By Hernan Alvarez from Algorithmia

MLOps—short for “machine learning operations”—has become a buzzword in recent years, and for good reason. Algorithmia’s 2021 enterprise trends in machine learning report found that while organizations have dramatically increased their investments in machine learning (ML), the time required to deploy an ML model to production has actually increased. As their ML investments grow, organizations need an effective solution for delivering models reliably and at scale.

In simple terms, MLOps is the discipline of delivering machine learning models through repeatable and efficient workflows. If you’re investing in ML, then you need MLOps; otherwise, the models you’ve poured so much time and effort into might not have any impact at all.

However, despite MLOps’ growing popularity, there are still many misconceptions about it. This post explores five things you might not be considering about MLOps, and what you need to know to succeed.

1. It’s not as simple as you might think

Many organizations want to take a do-it-yourself approach to MLOps, but this can incur a lot of unforeseen costs, risks, and operational overhead. One common question we get is, “Doesn’t deployment just mean using a container?” In reality, it entails a whole lot more than that—and MLOps is about a lot more than just deployment.

Why is MLOps so complicated? It only looks simple because deploying a single model is relatively easy; a successful ML project involves far more than deployment. In reality, MLOps has several components that organizations often fail to consider up front. Taken together, they can quickly become too complicated for most teams to manage on their own. And even if a team could manage them, the opportunity cost rarely justifies it.

To give you a taste of just some of the factors at play, a full MLOps pipeline comprises multiple steps, including:

  • Hardware orchestration
  • Integration of language and framework SDKs
  • Container management
  • Model versioning
  • Incorporation of advanced hardware such as multithreaded CPUs and GPUs
  • Inference API management
  • Load balancing
  • Security and governance of users, models, data, and operations

For example, once you select a cloud computing platform to host and serve your models—a complex task on its own—you then need to containerize all your models and manage their scaling using Kubernetes or a similar service. This alone usually takes an entire team, or at least a few dedicated developers. On top of this, you need to layer the many supporting processes that handle the minutiae of model deployment: versioning models, upgrading libraries, monitoring resources, and more. And, of course, there are critical security considerations.
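
To see why “just use a container” undersells the problem, consider the part that actually is easy. Below is a minimal sketch of a single model behind an HTTP endpoint, using Flask and a placeholder model.pkl file (both hypothetical choices, not a prescription). Everything the sketch omits, such as replication, authentication, versioning, GPU scheduling, and monitoring, is what the rest of the pipeline must supply.

    # Minimal model-serving sketch (hypothetical): the part a container wraps.
    # Scaling, versioning, monitoring, and auth all live outside this file.
    import pickle

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # Assumes a scikit-learn-style model serialized to model.pkl.
    with open("model.pkl", "rb") as f:
        model = pickle.load(f)

    @app.route("/predict", methods=["POST"])
    def predict():
        features = request.get_json()["features"]  # e.g. [[5.1, 3.5, 1.4, 0.2]]
        return jsonify({"prediction": model.predict(features).tolist()})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)

Containerizing this service takes a single Dockerfile; running dozens of such services reliably, with load balancing, access control, and version upgrades, is the part teams underestimate.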

The bottom line? What may seem like a simple task actually entails many complicated, interacting factors that must operate in lockstep to succeed, all while being constantly maintained. Organizations that take a do-it-yourself approach ultimately end up with:

  • Long lead times
  • Expensive talent
  • High total cost of ownership
  • Polyglot programming language and framework integration that’s time-consuming to build, add to, and update over time
  • Authentication and security that’s difficult, expensive, and time-consuming to build and maintain
  • Limited model governance with only basic model and API versioning and no integration with DevOps tooling

Organizations that outsource their MLOps see the best results; our 2021 report revealed that companies using third-party MLOps solutions spend less on infrastructure, take less time to deploy models, and spend a smaller percentage of their data scientists’ time on model deployment when compared to those that build and maintain their own MLOps systems from scratch.

2. It’s critical as soon as you have your first model

You would think that if you only have one or a few models, you’d be better off just bootstrapping their deployment as you go, right? Wrong. All the issues you face when deploying many models are present when you only have a few. You still need to create APIs, train on advanced hardware like GPUs and TPUs, version iterations of your models, scale inference in response to user demand, generate logs and other metadata, establish security practices, and create governance around the ways your model is used. It’s always best to develop with the future in mind, and a robust MLOps pipeline begins paying dividends with the very first model. You don’t want to do things in an ad hoc manner and then find yourself stymied and disorganized once you begin adding models to your pipeline.
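
As a rough illustration of two of those concerns, versioning and logging, here is a sketch of what even single-model hygiene can involve. The registry, version strings, and placeholder models are hypothetical stand-ins rather than a prescribed implementation.

    # Sketch of model versioning and inference logging (hypothetical names).
    # Even one model needs a way to pin callers to a version and to record
    # what each prediction was based on.
    import json
    import logging
    import time
    import uuid

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("inference")

    # Maps version strings to loaded models; a real system would pull
    # these from a model store rather than defining them inline.
    MODEL_REGISTRY = {
        "1.0.0": lambda features: [0],  # placeholder model callables
        "1.1.0": lambda features: [1],
    }

    def predict(features, version="1.1.0"):
        model = MODEL_REGISTRY[version]
        start = time.time()
        result = model(features)
        # Structured log entry: enough metadata to audit or replay the call.
        logger.info(json.dumps({
            "request_id": str(uuid.uuid4()),
            "model_version": version,
            "latency_ms": round((time.time() - start) * 1000, 2),
            "prediction": result,
        }))
        return result

    print(predict([[5.1, 3.5, 1.4, 0.2]], version="1.0.0"))

None of this is hard for one model; the point is that each piece must exist on day one, and retrofitting it across ten ad hoc deployments later is far more painful.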

3. ML can’t use the same processes as traditional software

Traditional software and ML serve very different purposes and have significantly different needs. For one, data is at the core of every ML application in a way it isn’t for traditional software, so the code is built around servicing the data rather than the application behavior. Machine learning applications require more and larger databases, data lakes, and distributed file systems. ML code is usually modular, built up as containerized microservices orchestrated around these interlocking data components. Those components require more versioning, more monitoring, and more frequent deployments, and they interact unpredictably at scale, creating a need for frequent troubleshooting and inspection.

The ML lifecycle also differs from the traditional software lifecycle because of the developers themselves and the nature of the problems they tackle. In classic software development, teams stick to a single language and paradigm as much as possible. A firmware product might be architected entirely in C++, or a website back end in Go, with perhaps minor variations in language for specialized tasks. ML, however, is an ever-evolving ecosystem in which data scientists, who often don’t share the background of traditional software engineers, adopt whatever framework, language, or library helps them solve a given data-intensive task. This is why machine learning code runs the gamut: Java, Scala, Spark, Python, Julia, R, and many others, oftentimes all within a single codebase. Integrating all these tools in a fail-safe way takes a much different mindset and process than building a standard full-stack app.

Finally, machine learning is a continuously evolving, closed-loop system: models are created, trained, and deployed, then retrained and redeployed in response to feedback from users. In traditional software development, code is launched in fixed, incremental releases; the development team makes changes, pushes the code live, and feedback may or may not be gathered for future features or upgrades. In machine learning, however, a model may need to be retrained daily: as customers interact with the product and generate new data, the model can drift and its accuracy can decline.
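
As a minimal sketch of that feedback loop, with hypothetical thresholds and a stand-in retrain_fn, the idea is to compare recent accuracy on freshly labeled traffic against the accuracy measured at deployment and trigger retraining on a sustained drop:

    # Drift check sketch (hypothetical numbers): compare live accuracy to
    # the baseline measured when the model shipped, and retrain if it sags.
    BASELINE_ACCURACY = 0.92   # accuracy at deployment time
    DRIFT_TOLERANCE = 0.05     # allowed drop before retraining

    def accuracy(predictions, labels):
        correct = sum(p == y for p, y in zip(predictions, labels))
        return correct / len(labels)

    def maybe_retrain(recent_predictions, recent_labels, retrain_fn):
        acc = accuracy(recent_predictions, recent_labels)
        if acc < BASELINE_ACCURACY - DRIFT_TOLERANCE:
            print(f"accuracy {acc:.2f} below threshold; retraining")
            retrain_fn()  # kick off the training pipeline
        else:
            print(f"accuracy {acc:.2f} within tolerance; no action")

    # Example: yesterday's predictions vs. ground truth that arrived later.
    maybe_retrain([1, 0, 1, 1, 0, 1, 0, 0],
                  [1, 1, 0, 1, 1, 0, 0, 1],
                  retrain_fn=lambda: None)

Traditional release processes have no equivalent of this loop, which is why bolting ML onto a standard software pipeline tends to fall short.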

4. ML without governance is extremely risky

ML governance refers to the policies that organizations set around their models and overall machine learning platforms. These policies may cover security, access rights, versioning, data gathering, and documentation. Governance is important because ML behavior is hard to pin down: it’s often unclear how a trained model will act in production, and model performance can drift and decline over time. Access to ML models and their code must also be rigorously protected, because customer trust is at the heart of every business.
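
As a small, hypothetical illustration of what such policies look like when enforced in code, the sketch below pairs an access allow-list with an audit trail of model invocations; real MLOps platforms enforce these controls centrally rather than per function.

    # Governance sketch (hypothetical): an allow-list for access rights
    # plus an audit trail recording who invoked which model version.
    import datetime

    ALLOWED_USERS = {"analyst@example.com", "svc-batch-scoring"}
    AUDIT_LOG = []

    def governed_predict(user, model_version, features, predict_fn):
        if user not in ALLOWED_USERS:
            raise PermissionError(f"{user} may not invoke models")
        AUDIT_LOG.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "user": user,
            "model_version": model_version,
        })
        return predict_fn(features)

    # Example with a stand-in model:
    print(governed_predict("analyst@example.com", "1.1.0",
                           [[5.1, 3.5, 1.4, 0.2]], lambda f: [1]))
    print(AUDIT_LOG)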

Governance may seem like a concern that’s only relevant for businesses with a lot of models, but it’s actually important from the very beginning. Whether you have one model or thousands, if you’re using machine learning then you need governance.

The good news is, governance doesn’t need to be a manual effort. MLOps gives you the tools you need to govern your ML models without having to create a piecemeal solution that will require constant maintenance.

Yet another reason why you need MLOps as soon as you begin investing in ML—not after.

5. MLOps is a business necessity, not an afterthought

If the economic impacts of COVID-19 have taught us anything, it’s that companies that can’t adapt to new and difficult circumstances often get left in the dust. At a time when consumer behavior and concerns are changing rapidly, understanding your customer base using machine learning, data, and statistics is vital.

In times of economic uncertainty, many organizations respond by cutting costs. However, now is not the time to decrease your investment in ML. As we’ve seen, the businesses that survive economic disruptions are the ones that embrace the technological innovations needed to weather the storm.

You need to invest in ML now. And in order to get the most out of your ML investments, you need MLOps.

Get started with MLOps

Algorithmia’s enterprise MLOps platform manages all stages of the production ML lifecycle within existing operational processes, so you can put models into production quickly, securely, and cost-effectively.

Don’t wait to get started with MLOps. Get your free trial of Algorithmia and make sure your ML investments don’t go to waste.
