In the book "Deep Learning with Python" by Francois Chollet (2018), section 1.2.4 says:
Decision trees learned from data began to receive significant research interest in the 2000s, and by 2010 they were often preferred to kernel methods.
...
In particular, the Random Forest algorithm introduced a robust, practical take on decision-tree learning that involves building a large number of specialized decision trees and then ensembling their outputs. Random forests are applicable to a wide range of problems—you could say that they’re almost always the second-best algorithm for any shallow machine-learning task. When the popular machine-learning competition website Kaggle (http://kaggle.com) got started in 2010, random forests quickly became a favorite on the platform—until 2014, when gradient boosting machines took over.
It sounds to me like the author is trying to sketch the following evolution of classification methods:
| Method | From | To |
|-------------------|------|--------------|
| Kernel Methods | ... | 2000 |
| Decision Trees | 2000 | 2010 |
| Random Forest | 2010 | 2014 |
| Gradient Boosting | 2014 | Today (2019) |
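For concreteness, here is a minimal sketch (in Python with scikit-learn; the dataset and hyperparameters are purely illustrative, not from the book) of the two tree ensembles the quote mentions, a random forest and a gradient boosting machine:

```python
# Minimal sketch: comparing the two tree ensembles discussed above.
# Dataset and hyperparameters are purely illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Random forest: many decision trees trained independently on random
# subsamples/features, with their outputs ensembled (averaged/voted).
rf = RandomForestClassifier(n_estimators=200, random_state=0)

# Gradient boosting: shallow trees added sequentially, each one trained
# to correct the errors of the ensemble built so far.
gb = GradientBoostingClassifier(n_estimators=200, random_state=0)

for name, model in [("random forest", rf), ("gradient boosting", gb)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```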
Is gradient boosting the most popular method nowadays?
Can it be applied universally to any kind of problem?
What do you think?