Standard decision trees are directed acyclic graphs. Are there generalizations of decision trees that contain cycles, analogously to recurrent neural networks? If such trees exist, can they be applied to sequences of variable length? I could not find any information about such trees. Or is their existence impossible or infeasible?
3 Answers
The goal of decision trees is to partition the feature space into successively smaller regions, where each region is best characterized by a single label or value. Adding cyclical components would not help accomplish this goal; it would just unnecessarily repeat the model fitting for a given region.
That's an interesting proposition ... What would you hope to gain from adding a cyclic component?
Various quick answers of my own to this question have led to existing methods like boosting (e.g., xgboost) and ensembling (e.g., random forest), which help address bias, variance, smoothing, etc.
Maybe you could use a cyclical component to create a reinforcement-esque framework?
I am thinking about classification of sequences of variable length, or about making the classification depend on the order of the vectors in the input sequence. – Vladislav Gladkikh May 22 '18 at 00:47
Maybe you can use the sequence length plus fixed fields element_01, element_02, etc. to capture the length and position features of a string, and then see how various classification methods perform on that task? – Eduard Gelman May 22 '18 at 01:39
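A minimal sketch of the pad-and-flatten idea from the comment above, assuming scikit-learn is available. `MAX_LEN`, `PAD`, and `featurize` are illustrative names chosen here, not part of the original discussion:

```python
# Encode each variable-length sequence as its length plus fixed-width
# element_01..element_05 columns (padded with a sentinel), then fit an
# ordinary tree-based classifier on the resulting fixed-size vectors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

MAX_LEN = 5   # assumed cap on sequence length
PAD = 0.0     # sentinel value for missing positions

def featurize(seq):
    """Turn one variable-length sequence into [length, element_01, ..., element_05]."""
    padded = list(seq[:MAX_LEN]) + [PAD] * (MAX_LEN - len(seq))
    return [len(seq)] + padded

# Toy data: the label depends on the order of the first two elements.
sequences = [[1, 2, 3], [2, 1], [3, 1, 2, 2], [1, 3], [2, 3, 1], [3, 2]]
labels    = [1, 0, 0, 1, 1, 0]   # 1 if seq[0] < seq[1]

X = np.array([featurize(s) for s in sequences])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(np.array([featurize([1, 2]), featurize([3, 2, 1])])))
```

This makes order and length visible to the tree, at the cost of an assumed maximum length: sequences longer than `MAX_LEN` are truncated.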
Abstractly, if you've already considered decision trees as decomposable into directed acyclic graphs, then one example of what you're looking for is, straightforwardly, a Markov chain.
Markov chains can, indeed, model sequences of arbitrary length. Additionally, Markov chains containing cycles are possible, sometimes referred to as Hamiltonian-embedded Markov chains.
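A minimal sketch of that idea, assuming a small finite state alphabet: fit one transition matrix per class and classify a new sequence, whatever its length, by which chain assigns it the higher log-likelihood. The helper names and the add-alpha smoothing are assumptions made here, not part of the answer:

```python
# Estimate one Markov transition matrix per class from training sequences,
# then score a test sequence under each chain and pick the likelier class.
import numpy as np

N_STATES = 3  # assumed finite state alphabet {0, 1, 2}

def fit_chain(sequences, alpha=1.0):
    """Estimate a row-stochastic transition matrix with add-alpha smoothing."""
    counts = np.full((N_STATES, N_STATES), alpha)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(seq, T):
    """Sum log transition probabilities along the sequence, of any length."""
    return sum(np.log(T[a, b]) for a, b in zip(seq, seq[1:]))

# Toy data: class 0 tends to cycle 0 -> 1 -> 2 -> 0, class 1 tends to stay put.
class0 = [[0, 1, 2, 0, 1], [1, 2, 0], [2, 0, 1, 2]]
class1 = [[0, 0, 0, 1], [2, 2, 2], [1, 1, 1, 1, 2]]

T0, T1 = fit_chain(class0), fit_chain(class1)
test = [0, 1, 2, 0]
print("class 0" if log_likelihood(test, T0) > log_likelihood(test, T1) else "class 1")
```

The cycles live in the transition graph of each chain, which is exactly what a decision tree, being acyclic, cannot represent.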