Interpretable Machine Learning and How to Build Trust in Our Models

By Data Science Salon

Machine learning models can undoubtedly help humans make more informed decisions, with an increasing number of use cases across different industries. When fed large amounts of data, these algorithms can identify patterns and make predictions accordingly.

While the reasoning behind some models’ predictions is relatively easy for humans to understand, other models are hard or even impossible to interpret. This raises crucial questions about accountability for machine-based decisions, such as who should be held responsible when a self-driving car causes an accident. The more complex the models become, the harder they are for humans to interpret, and we speak of such models as “black boxes”: unable to reveal information about their inner processes and possibly biased predictions.

Serg Masis, currently a Data Scientist at Syngenta, is the author of the book “Interpretable Machine Learning with Python” and has been working on ways to build more trust in our machine learning models. In this interview, he’ll tell us more about the challenges of building fair models, how we can build trust in our ML products, and the future of model interpretability.

Machine learning interpretability is an exciting field in ML. Can you provide us with an overview of what it’s actually all about?

"It's about fairness, accountability, transparency, and the many ethical concerns associated with each of these concepts. It's connected to Ethical AI, but not exclusively. Ultimately, it's about making AI systems more trustworthy. After all, trust is a desirable property for every technology because it drives adoption, solidifies reputations, and increases profits."

Why should we care about why a ML model makes a certain prediction?

"We don't use machine learning models to solve simple deterministic problems. For such problems procedural programming would suffice as a solution. Any problem solved by machine learning is only partially known. For instance:

  • Will a customer churn?
  • By how much will sales increase next month? 
  • Does a mammogram show a malignant breast tumor?
  • What is the user composing an email going to type next?

We probably don't have, nor ever will have, enough data to support answers to these questions with 100% certainty, so any machine learning solution will be incomplete. Thus, we must accept that it will be wrong, and sometimes extremely wrong, a certain percentage of the time. It might even be right for the wrong reasons, right today and wrong tomorrow, or even be tricked into being wrong. To understand all of this and begin to correct these problems, we ought to do more than assess models with predictive performance metrics alone. That's where machine learning interpretability tools can help."
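To make that last point concrete, here is a minimal sketch of going beyond a single performance metric: permutation feature importance with scikit-learn on a synthetic classification task. The library, model, and data are assumptions chosen for illustration; the interview does not prescribe a specific tool.

```python
# A minimal sketch: look beyond a single accuracy number and inspect which
# features the model actually relies on. The dataset and model are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"Accuracy: {model.score(X_test, y_test):.3f}")  # the usual single metric

# Permutation importance measures how much the score drops when each feature is shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f} +/- {result.importances_std[i]:.3f}")
```

A high accuracy score alone says nothing about whether the model leans on a sensible feature or a spurious one; the per-feature breakdown is a first step toward answering that.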

What’s the main challenge when it comes to building fair and more reliable machine learning models?

"It's a challenge on several fronts such as:

  • Complex definitions: There are unresolved ethical questions about what constitutes fairness, and the answers may vary enormously from case to case. To whom do you have to be fair? How do you measure fairness? [One common metric is sketched after this list.] What thresholds of fairness are reasonable to require? Reliability isn't easy either. First of all, you have to define failure. Should the scope, gravity, or frequency of the failure be a factor? Again, what thresholds make the deliverable acceptable?
  • Engineering mindset: Like software, ML models are ones and zeroes. However, unlike traditional software, humans did not program the logic behind an ML model. They are clearly not the same deterministic animal, yet ML models are replacing or enhancing software in practice, so this difference gets overlooked. Also, software engineers are becoming ML engineers in droves, and they are trained to think deterministically. As much as the engineering skill set does wonders for making data science possible, that mindset does not ask the hard questions the model interpretability field demands.
  • Objective simplicity is incentivized: Business leaders incentivize optimizing for a single metric. It's a lot simpler to train and select models based on a single performance metric than to look under the hood and realize that a model is underperforming for an underprivileged group or prone to fail in scenarios that are not represented in the training data. Once you have more than one objective, especially objectives that compete with each other, it's harder to define the greater objective."
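As one concrete answer to "How do you measure fairness?", the sketch below computes the demographic parity difference: the gap in selection rates between two groups. The predictions, group labels, and the 0.1 threshold are hypothetical, and many other definitions of fairness exist.

```python
# Minimal sketch of one fairness metric: demographic parity difference.
# The group labels and the 0.1 threshold are hypothetical illustrations.
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # model decisions (e.g., loan approved)
group = np.array(["a", "a", "a", "b", "b", "b", "a", "b", "a", "b"])  # protected attribute

rate_a = y_pred[group == "a"].mean()  # selection rate for group a
rate_b = y_pred[group == "b"].mean()  # selection rate for group b
dp_diff = abs(rate_a - rate_b)        # demographic parity difference

print(f"Selection rates: a={rate_a:.2f}, b={rate_b:.2f}, difference={dp_diff:.2f}")
if dp_diff > 0.1:  # the acceptable threshold itself is a judgment call
    print("Potential disparity: investigate before deployment.")
```

As the answer above notes, picking which metric and which threshold count as "fair enough" is exactly the unresolved part; the arithmetic is the easy bit.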

What are some ways we can build more trust in the machine learning products we build?

  • "Before deployment, certify models for adversarial robustness, fairness, and even level of uncertainty with sensitivity analysis.
  • Along with the model, deploy a "model card" that tells stakeholders important details about an ML model, such as the provenance of the training data, potential weaknesses, and intended uses.
  • Allow users to see why they got a particular prediction. For a high-risk model, practice prediction abstention for low-confidence predictions or use a human-in-the-loop to ensure that the risk is minimized [see the abstention sketch after this list].
  • Include more than predictive performance when monitoring a production ML model. Check for data drift and continually monitor robustness, feature importance, and/or fairness metrics [see the drift-check sketch after this list].
  • Have a model manifest that can let auditors trace model decisions much as a black box does in aviation. Blockchain will eventually have a role here.
  • Models should have a strict shelf life and, like milk gone bad, should be tossed out as soon as they reach that date. However, a retraining procedure should aim to train a replacement model well before the expiration date."
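A minimal sketch of the prediction-abstention idea mentioned above: defer any prediction whose confidence falls below a threshold to a human reviewer. The predict_or_defer helper, the 0.8 threshold, and the synthetic data are illustrative assumptions, not something prescribed in the interview.

```python
# Minimal sketch of prediction abstention: defer low-confidence cases to a human.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def predict_or_defer(model, X, threshold=0.8):
    """Return class predictions, with -1 marking cases deferred to human review."""
    proba = model.predict_proba(X)      # per-class probabilities
    confidence = proba.max(axis=1)      # confidence of the top class
    preds = proba.argmax(axis=1)
    preds[confidence < threshold] = -1  # abstain when the model is unsure
    return preds, confidence

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = LogisticRegression().fit(X, y)

preds, conf = predict_or_defer(model, X)
print(f"Deferred to human review: {(preds == -1).sum()} of {len(preds)} cases")
```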
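And a minimal sketch of a per-feature data drift check, using a two-sample Kolmogorov-Smirnov test from SciPy; the simulated data and the 0.05 significance level are assumptions for illustration, and other drift tests are equally valid.

```python
# Minimal sketch of a per-feature data drift check with a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_data = rng.normal(loc=0.0, scale=1.0, size=(1000, 3))             # training-time features
live_data = rng.normal(loc=[0.0, 0.5, 0.0], scale=1.0, size=(1000, 3))  # production features (feature 1 drifted)

for i in range(train_data.shape[1]):
    stat, p_value = ks_2samp(train_data[:, i], live_data[:, i])
    status = "DRIFT" if p_value < 0.05 else "ok"  # 0.05 cutoff is an illustrative choice
    print(f"feature_{i}: KS statistic={stat:.3f}, p={p_value:.4f} -> {status}")
```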

How do you see the future of model interpretability given that ML models are getting more and more complex?

"Model complexity is often seen as the culprit for all the ills of machine learning but it isn't always. After all, the problems we attempt to solve with machine learning are inherently complex. And for many problems we will find that the only way to improve the solution is to increase model complexity. That being said, it can go too far. For instance, we must wonder if we need to leverage trillions-parameter language models for Natural Language Processing tasks? Perhaps there's a simpler, less brute force way of achieving the same goals. After all, humans only have 86 billion neurons, and we only use a fraction of them for language at any given time. In any case, we can interpret the largest models that exist today, but one example at a time because understanding them holistically is impossible and not really necessary. The challenge remains in defining a comprehensive framework to delineate and prioritize where and how to concentrate our interpretation efforts — this would be especially useful with complex models. 

Another big issue is the training data. Generally, the idea is to train machine learning models that reflect the reality on the ground. But this reality is often biased, and so the data is biased too. A data-centric approach would call for scrutinizing the data generation process and mitigating these biases accordingly. After all, the models don't have to reflect the truth we HAVE but the truth we WANT. We can fix so much of this with the data alone.

There's so much that can be improved with model interpretability. However, I think it has a promising future, mostly because it will get much more attention in a few years. Right now, the nuts and bolts of machine learning are data cleaning, data engineering, training pipelines, and the drudgery of writing all the code to orchestrate training and inference. In coming years, new and better no-code and low-code ML solutions will displace this artisanal ML approach. I believe the best ones will make interpretability prominent, because once creating a sophisticated ML pipeline is less than one day's work in a drag-and-drop interface, we can devote the rest of the time to model interpretability.

For this reason, eventually every data scientist and machine learning engineer will have to master this topic. My book, “Interpretable Machine Learning with Python”, covers introductory and advanced topics."

Are you interested in joining the AI conversation with industry leaders and senior practitioners? Check out the Data Science Salon 2023 events; early bird rates are now available.
