There are many pitfalls to not scaling your data, and it is generally very advisable to scale it. It is easy to do, it is reversible, and it is useful in other operations like removing outliers.
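To show what I mean by "easy and reversible", here is a minimal sketch using scikit-learn's `StandardScaler` on some made-up data (the features and their scales are purely illustrative):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix with wildly different scales.
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.normal(0.0, 1e-3, 500),   # tiny-scale feature
    rng.normal(50.0, 10.0, 500),  # moderate-scale feature
    rng.normal(1e6, 1e5, 500),    # huge-scale feature
])

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)   # zero mean, unit variance per feature

# The operation is reversible: recover the original values when needed.
X_back = scaler.inverse_transform(X_scaled)
assert np.allclose(X, X_back)
```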
Looking more closely at the specifics of Hidden Markov Models with multivariate Gaussian emissions, the theoretical accuracy should not suffer as a result of drastic differences in the scales of your features. However, an important operation involving the multivariate Gaussian distribution is inverting the covariance matrix.
Though the theoretical invertibility won't change, the practical numerical solution to your problem will likely lose accuracy due to the difference in scales: iterative methods will have trouble converging, and direct solvers will suffer from the resulting ill-conditioning. This is especially true when common complications like (near) linear dependence between features are present.
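To make the conditioning point concrete, here is a rough sketch (made-up data, not from the original discussion) comparing the condition number of the sample covariance matrix before and after standardising the features; a huge condition number is what makes the inversion numerically fragile:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical features on very different scales.
n = 1000
X = np.column_stack([
    rng.normal(0.0, 1e-3, n),   # tiny scale
    rng.normal(50.0, 10.0, n),  # moderate scale
    rng.normal(1e6, 1e5, n),    # huge scale
])

def cond_of_cov(X):
    """Condition number of the sample covariance matrix."""
    return np.linalg.cond(np.cov(X, rowvar=False))

print("unscaled:", cond_of_cov(X))   # enormous, inversion is fragile

# Standardise each feature to zero mean and unit variance.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
print("scaled:  ", cond_of_cov(X_std))   # orders of magnitude smaller
```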
Here is a set of three lectures on HMMs with some specifics on multivariate Gaussians (1-2-3).
I know we've already discussed this in the comments, but I wanted to close out this question, so I have added it as an answer.
I hope this helps!