Is there any trade off between the accuracy and the interpretability?

In many cases, there is a clear trade-off between accuracy on the one hand and interpretability on the other: the models that achieve the highest accuracy are often the most complex and opaque.

What is model interpretability in machine learning?

A (non-mathematical) definition I like by Miller (2017) is: interpretability is the degree to which a human can understand the cause of a decision. The higher the interpretability of a machine learning model, the easier it is for someone to comprehend why certain decisions or predictions have been made.

What is model complexity?

In machine learning, model complexity often refers to the number of features or terms included in a given predictive model, as well as whether the chosen model is linear, nonlinear, and so on. It can also refer to the algorithmic learning complexity or computational complexity.
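
To make "number of terms" concrete, here is a small illustration (the feature counts and degrees are hypothetical): the number of terms in a full polynomial expansion of the inputs grows combinatorially with the degree, which is one simple way complexity outpaces interpretability.

```python
from math import comb

def num_polynomial_terms(n_features: int, degree: int) -> int:
    """Number of terms in a full polynomial expansion of the given
    degree over n_features inputs (including the constant term)."""
    return comb(n_features + degree, degree)

# A linear model over 5 features has only a handful of terms...
print(num_polynomial_terms(5, 1))  # 6 (5 coefficients + intercept)
# ...while a cubic expansion over the same features has many more.
print(num_polynomial_terms(5, 3))  # 56
```

Each extra term is one more parameter a human would have to reason about when trying to understand the model's decisions.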

Which is an example of an interpretable model?

A model is said to be interpretable if we can directly interpret the impact of its parameters on the outcome. Interpretable models include, for example: linear and logistic regression, Lasso and Ridge regression, and decision trees. Explainability, by contrast, can be applied to any model, even models that are not interpretable.
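
As a minimal sketch of what "interpreting the impact of a parameter" means (the data here is made up), a linear model fit by ordinary least squares yields a slope that can be read directly: each one-unit increase in the input changes the prediction by that amount.

```python
# Hypothetical data, roughly following y = 2x + 1.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 4.9, 7.2, 8.8]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form OLS estimates for slope w and intercept b.
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - w * mean_x

# Interpretation: every one-unit increase in x raises the prediction by w.
print(f"slope w = {w:.2f}, intercept b = {b:.2f}")
```

No extra machinery is needed to explain this model: the fitted parameters are the explanation.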

Which is an interpretable model of a decision tree?

Linear models and decision trees are often cited as interpretable models on these grounds: the computation they require is simple, and it is relatively easy to interpret each step executed when a prediction is made.
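
To illustrate that step-by-step quality, here is a toy, hand-built decision tree (the features and thresholds are invented for the example) whose prediction function returns not only a label but the exact sequence of rules it applied.

```python
# A toy decision tree: internal nodes hold a split rule, leaves hold a label.
TREE = {
    "feature": "age", "threshold": 30,
    "left": {"label": "low risk"},
    "right": {
        "feature": "income", "threshold": 50_000,
        "left": {"label": "high risk"},
        "right": {"label": "low risk"},
    },
}

def predict(tree, sample, path=None):
    """Return (label, path): the prediction plus the rules applied to reach it."""
    path = [] if path is None else path
    if "label" in tree:
        return tree["label"], path
    feature, threshold = tree["feature"], tree["threshold"]
    if sample[feature] <= threshold:
        path.append(f"{feature} <= {threshold}")
        return predict(tree["left"], sample, path)
    path.append(f"{feature} > {threshold}")
    return predict(tree["right"], sample, path)

label, path = predict(TREE, {"age": 45, "income": 32_000})
print(label)  # high risk
print(path)   # ['age > 30', 'income <= 50000']
```

The returned path is a complete, human-readable justification of the decision, which is exactly the property the paragraph above describes.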

What’s the difference between interpretable and explainable machine learning?

First of all, in order to begin understanding interpretable machine learning, let's define the difference between machine learning explainability and interpretability. Interpretability is linked to the model: a model is said to be interpretable if we can directly interpret the impact of its parameters on the outcome.

Which is the most important approach to interpretability?

Approaches to interpretability that explain the model's representations, or which features are most relevant, could help diagnose such issues earlier and provide more opportunities to remedy the situation. A third, and perhaps most interesting, benefit is contestability: a decision whose reasoning can be understood can also be challenged.