I use the genetic/evolutionary algorithms in Python's TPOT package to find the overall best model (GBM, RF, SVM, elastic net, etc.) and its tuning parameters. Now I need a way to measure each variable's contribution to the chosen model's predictive performance. How can I do this in a model-agnostic way?
My current approach is to retrain the best model architecture after holding out each variable in turn. For example, if my variables are [a, b, c], I retrain on [a, b], [a, c], and [b, c]. I define the most important variable as the one whose removal produces the worst-performing model, and I define a variable's predictive contribution as the resulting decrease in predictive performance. I measure every variable's contribution this way. Is there anything obviously wrong with this approach? Is there a better approach? I'm familiar with variable importance in decision trees and p-values in linear models, but I need a model-agnostic approach.
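To make the scheme concrete, here is a minimal sketch of the drop-one-variable retraining idea on simulated data. The `fit_predict` helper (ordinary least squares here) is just a stand-in for whatever model TPOT selects; the function names, the synthetic data, and the R² scorer are all illustrative, not part of any library API:

```python
import numpy as np

def fit_predict(X_tr, y_tr, X_te):
    """OLS with intercept -- a stand-in for any retrainable model."""
    A = np.column_stack([np.ones(len(X_tr)), X_tr])
    coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
    return np.column_stack([np.ones(len(X_te)), X_te]) @ coef

def r2(y_true, y_pred):
    """Coefficient of determination on held-out data."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def drop_column_importance(X_tr, y_tr, X_te, y_te):
    """For each column, retrain without it and record the drop in test R^2."""
    base = r2(y_te, fit_predict(X_tr, y_tr, X_te))
    drops = []
    for j in range(X_tr.shape[1]):
        keep = [k for k in range(X_tr.shape[1]) if k != j]
        pred = fit_predict(X_tr[:, keep], y_tr, X_te[:, keep])
        drops.append(base - r2(y_te, pred))
    return base, drops

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))                      # variables [a, b, c]
y = 3 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300)
X_tr, X_te = X[:200], X[200:]
y_tr, y_te = y[:200], y[200:]
base, drops = drop_column_importance(X_tr, y_tr, X_te, y_te)
# variable a (index 0) should show the largest performance drop
```

Note this retrains on the same training split and scores on a held-out test split each time, so the "contribution" is measured on out-of-sample performance rather than training fit.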