SHORT ANSWER: Bayesian cost/benefit calculations tie "usefulness" directly to the evaluation of a model with metrics. They are therefore the only metrics (and there are infinitely many of them) that are actually useful.
For classification, use a Bayesian prior estimate of each class's prevalence (the relative class balance or imbalance) to convert a row-stochastic confusion matrix, whose entries are the conditional probabilities P(predicted class | true class), into an unconditional joint probability estimate of every classification outcome, both mistakes and correct classifications. Multiply each term of this joint matrix by the corresponding term of a cost/benefit matrix and sum all the products; then, when comparing classification algorithms, choose the one that maximizes the expected benefit (equivalently, minimizes the expected cost). A minimal sketch of the computation follows below.
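Here is that computation in Python. The priors, the two confusion matrices, and the benefit matrix are all made-up numbers for illustration, not values from any real problem; you would estimate the confusion rates from a validation set and supply your own costs and benefits.

```python
import numpy as np

# Prior prevalence estimate for each class, P(true class).
# Hypothetical numbers for illustration.
priors = np.array([0.9, 0.1])

# Row-stochastic confusion matrices: row i holds P(predicted j | true i),
# so each row sums to 1. In practice, estimated on a validation set.
conf_a = np.array([[0.95, 0.05],
                   [0.20, 0.80]])
conf_b = np.array([[0.99, 0.01],
                   [0.40, 0.60]])

# Benefit matrix: the value of predicting j when the truth is i
# (negative entries are the costs of mistakes). Also hypothetical.
benefit = np.array([[  1.0,  -5.0],
                    [-20.0,  10.0]])

def expected_benefit(conf, priors, benefit):
    """Weight the conditional confusion rates by the priors to get the
    unconditional joint probabilities P(true i, predicted j), multiply
    each cell by its benefit, and sum over all outcomes."""
    joint = priors[:, None] * conf
    return np.sum(joint * benefit)

for name, conf in [("A", conf_a), ("B", conf_b)]:
    print(f"model {name}: expected benefit = "
          f"{expected_benefit(conf, priors, benefit):.3f}")
```

Note that which model wins depends on the priors and the cost/benefit entries, which is exactly the point: the ranking of classifiers is a property of your problem, not of the confusion matrices alone.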
For regression, select the point estimator (vector) that minimizes the posterior expected loss; in general this is a calculus-of-variations problem. For example, use the posterior mean if your loss is quadratic, the posterior median if your loss is the absolute error (the V-shaped loss), and the posterior mode if your loss is an all-or-nothing loss (the limiting negative Dirac delta). Please note that I have never derived any such point estimator with the calculus of variations myself; I am going entirely off my memory of E. T. Jaynes's text Probability Theory: The Logic of Science.
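The quadratic case needs no variational calculus: differentiating E[(a - theta)^2] with respect to a and setting it to zero gives a = E[theta], the posterior mean. As a numerical sanity check for the first two cases, you can approximate the posterior with Monte Carlo samples and verify by brute force that the mean minimizes the expected quadratic loss and the median the expected absolute loss. The Gamma "posterior" below is an arbitrary assumption, chosen only because it is skewed enough that mean, median, and mode differ.

```python
import numpy as np

# A toy skewed "posterior", represented by Monte Carlo samples.
rng = np.random.default_rng(0)
samples = rng.gamma(shape=2.0, scale=1.0, size=50_000)

# Candidate point estimates on a grid.
candidates = np.linspace(0.0, 6.0, 601)

# Expected quadratic loss E[(a - theta)^2] for each candidate a.
quad_loss = [np.mean((a - samples) ** 2) for a in candidates]
# Expected absolute loss E[|a - theta|] for each candidate a.
abs_loss = [np.mean(np.abs(a - samples)) for a in candidates]

print("argmin of quadratic loss:", candidates[np.argmin(quad_loss)],
      "vs posterior mean:", samples.mean())
print("argmin of absolute loss: ", candidates[np.argmin(abs_loss)],
      "vs posterior median:", np.median(samples))
```

Both brute-force minimizers should land on the mean and median respectively, up to the grid spacing and Monte Carlo noise.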