Well, that’s a tricky question. Do the gradients blow up somewhere, perhaps because the function is discontinuous? (plain gradient descent will likely not work then.) Are the variables on very different scales (Newton & related methods would help)? Is there some sort of barrier that limits the domain (e.g. log(x))? Does the function have a flat plateau, e.g. __/? Are these saddle points? Is your problem constrained, so that these are solutions sitting on the boundary? In some of these cases switching to something different, like subgradient methods for non-differentiable functions (see the sketch below), might help you, but this depends heavily on why your problem has the optima you want to avoid.
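For the non-differentiable case, here is a minimal sketch of subgradient descent on a made-up toy objective f(x) = |x − 3| (the function, the hand-picked subgradient, and the step-size schedule are all illustrative assumptions, not your problem):

```python
import numpy as np

def f(x):
    # Toy non-differentiable objective: kink at x = 3 (hypothetical example).
    return abs(x - 3.0)

def subgradient(x):
    # Any value in [-1, 1] is a valid subgradient at the kink; sign() picks one.
    return np.sign(x - 3.0)

x = 10.0
for k in range(1, 201):
    step = 1.0 / k              # diminishing step size, needed for convergence
    x -= step * subgradient(x)

print(x, f(x))                  # x should end up close to the minimizer 3.0
```

The diminishing step size matters here: a fixed step would keep bouncing around the kink instead of settling into it.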
Generally speaking, gradient-based techniques converge to whichever local optimum they find first, and if you are not happy with that, you’ll have to layer other metaheuristics on top (e.g. add random restarts, incorporate penalties at the optima you already know about, use derivative-free methods). Techniques that carry momentum, like Adam, might also let you roll past some of these points, though there is no guarantee.
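To illustrate the restart idea, here is a minimal sketch that runs scipy.optimize.minimize from several random starting points and keeps the best result; the multimodal objective and the number of restarts are invented for the example:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def objective(x):
    # Made-up multimodal function with many local minima (global minimum at 0).
    return np.sum(x**2) + 3.0 * np.sum(np.sin(5.0 * x)**2)

best = None
for _ in range(20):                        # 20 independent restarts
    x0 = rng.uniform(-5, 5, size=2)        # random starting point
    res = minimize(objective, x0, method="L-BFGS-B")
    if best is None or res.fun < best.fun:
        best = res                         # keep the lowest local optimum seen

print(best.x, best.fun)
```

Each run still converges to some local optimum; the restarts just give you several candidates to choose the best from, which is often good enough in practice.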