I don't work in industry, but in my current field of research (text classification), (deep) neural networks do seem to be becoming the standard.
Although "classic" statistical methods like gradient boosting, random forest and SVMs are still being used, the interest (and possibly the better results) lean towards architectures like LSTM and RNN.
However, statistical methods still appear regularly in published research and achieve results comparable to (and in some cases better than) the fancy NNs. In many cases the choice of algorithm alone does not solve the underlying problem, so something else usually decides the outcome, such as the chosen features or the preprocessing.
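To make that concrete, here is a minimal sketch of the kind of "boring" baseline I mean (assuming scikit-learn and the 20 Newsgroups data as a stand-in task, both picked purely for illustration): the classifier itself is a plain linear SVM, and most of the knobs that actually move the score live in the TF-IDF step, i.e. in the preprocessing and feature choices rather than in the learning algorithm.

```python
# Sketch only: scikit-learn + 20 Newsgroups are assumed stand-ins, not a recommendation.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

data = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes"))

baseline = Pipeline([
    # Lowercasing, n-gram range, min_df, tf scaling: these are feature/preprocessing
    # decisions, and they tend to matter more than which classifier sits at the end.
    ("tfidf", TfidfVectorizer(lowercase=True, ngram_range=(1, 2), min_df=2, sublinear_tf=True)),
    ("clf", LinearSVC(C=1.0)),
])

scores = cross_val_score(baseline, data.data, data.target, cv=5, scoring="accuracy")
print(f"TF-IDF + linear SVM: {scores.mean():.3f} +/- {scores.std():.3f}")
```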
Everything related to AI goes through boom-and-bust cycles: things become popular, more people work with them, and then something else grabs the attention. I know this was not part of the question (I don't assume that this is why you asked), but my suggestion to anyone reading this is to treat algorithms simply as tools and pick the one that works best for the task at hand, instead of following trends and searching for a gold standard.
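In that spirit, here is a rough sketch of the "algorithms as tools" mindset (same assumed scikit-learn setup as above, with a small, arbitrary set of candidate estimators): keep the features fixed, swap the estimator, and let the validation scores decide rather than the current trend.

```python
# Sketch only: the candidate list and dataset are illustrative assumptions.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

data = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes"))

candidates = {
    "linear_svm": LinearSVC(),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200),
}

for name, clf in candidates.items():
    # Same features for every candidate; only the final estimator changes.
    pipe = Pipeline([("tfidf", TfidfVectorizer(min_df=2)), ("clf", clf)])
    scores = cross_val_score(pipe, data.data, data.target, cv=3)
    print(f"{name}: {scores.mean():.3f}")
```

Whichever one wins on your validation data for your task is the right tool; next year it may well be a different one.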