For some models, such as PCA or SVM, scaling and centering of the training data is essential. For others, mostly tree-based models, scaling and centering are not required at all.
I don't think linear models such as linear or logistic regression need it, but I could be wrong. What are the implications of normalizing the data for these models?
In general, is there a mental model or framework I can use to determine whether scaling and centering are needed? And how does the choice affect the interpretability of a fitted model?
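To make the question concrete, here is a minimal numpy sketch (with made-up data) of the two behaviors I have in mind: distance-based computations are dominated by the feature with the largest scale until you standardize, whereas a tree-style threshold split is unchanged by any monotone rescaling of a feature.

```python
import numpy as np

# Two features on very different scales (hypothetical data)
rng = np.random.default_rng(0)
X = np.column_stack([rng.normal(0, 1, 100),      # feature roughly in [-3, 3]
                     rng.normal(0, 1000, 100)])  # feature roughly in [-3000, 3000]

# Centering and scaling (z-score standardization): z = (x - mean) / std
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# Distance-based models (SVM, k-NN, PCA) see distances dominated by the
# large-scale feature before standardization:
d_raw = np.linalg.norm(X[0] - X[1])  # driven almost entirely by column 2
d_std = np.linalg.norm(Z[0] - Z[1])  # both features contribute comparably

# A tree split "x > t" is invariant to any monotone rescaling of x,
# which is (as I understand it) why tree-based models don't need scaling:
t = 0.5
mask_raw = X[:, 0] > t
mask_scaled = (X[:, 0] * 1000 + 5) > (t * 1000 + 5)
assert np.array_equal(mask_raw, mask_scaled)
```

So my question is essentially: which of these two regimes does each model family fall into, and is there a principled way to tell in advance?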