I have a model and I want to interpret it using feature contributions. In the end, I want a contribution per feature such that the sum of the contributions equals the model's prediction.
One approach may be to use Shapley values. The issue with Shapley values, and their implementation in the `shap` Python library, is that they also come with an expected value, which I'll call $E$. To obtain the model's prediction, one must add up all the Shapley values and $E$. Is there a way to derive feature contributions without needing $E$?
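For concreteness, here is a minimal sketch of what I mean (the model, dataset, and `TreeExplainer` choice are just illustrative assumptions):

```python
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Toy regression model (hypothetical example).
X, y = make_regression(n_samples=200, n_features=5, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)      # shape (n_samples, n_features)
expected_value = explainer.expected_value   # this is E

# Additivity: prediction = sum of Shapley values + E
pred = model.predict(X[:1])[0]
print(np.isclose(shap_values[0].sum() + expected_value, pred))
```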
My solution
I've derived a solution, but I'm not sure it makes sense. Say I have features $x_1, \dots, x_n$ with Shapley values $\phi_1, \dots, \phi_n$ for the model $f$. Then $$ f(x) = \phi_1 + \dots + \phi_n + E \;\Rightarrow\; f(x) = \left(1 + \frac{E}{\sum_i \phi_i}\right)(\phi_1 + \dots + \phi_n) $$
(a simple algebraic rearrangement: multiplying the sum by $1 + E/\sum_i \phi_i$ absorbs $E$). Then I claim that I have Shapley values $\hat{\phi}_1, \dots, \hat{\phi}_n$, where $$ \hat{\phi}_j = \left(1 + \frac{E}{\sum_i \phi_i}\right)\phi_j $$
These values satisfy $$f(x) = \hat{\phi}_1 + \dots + \hat{\phi}_n$$ But does it make sense to call them Shapley values?
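In code, the rescaling would look something like this (continuing the toy example above; note that the scale factor is undefined when the Shapley values sum to zero):

```python
phi = shap_values[0]                     # Shapley values for one observation
scale = 1 + expected_value / phi.sum()   # (1 + E / sum_i phi_i)
phi_hat = scale * phi                    # the rescaled contributions hat{phi}_j

# Now the contributions alone sum to the prediction, with no base value.
print(np.isclose(phi_hat.sum(), pred))
```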
Any criticism is more than welcome.