Decision trees are not just a standalone learning algorithm — they are really a generic architecture for recursive ensemble learning.
Given any classifier, you can train it, partition your datapoints into subsets according to its predicted class, and then train a fresh copy of it on each of these subsets, recursively.
Vanilla decision trees just do this in the most basic possible way: the "classifier" at each node is a test on a single feature (a binary feature, or a threshold on a numeric one), and the data is split accordingly.
You could in theory “arborise” a Support Vector Classifier, Multi-Layer Perceptron Classifier, Nearest-Neighbour Classifier, etc.
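To make the idea concrete, here is a minimal sketch of such an "arborised" classifier in plain Python. Everything here is illustrative, not from any library: `ClassifierTree` is a hypothetical wrapper that fits a base model, routes points by its predicted class, and recursively fits fresh models on the resulting subsets, and `NearestCentroid` is a toy stand-in for whatever base classifier you would plug in (an SVC, MLP, etc.).

```python
class NearestCentroid:
    """Toy base classifier: predicts the class of the nearest class centroid."""

    def fit(self, X, y):
        sums, counts = {}, {}
        for x, label in zip(X, y):
            s = sums.setdefault(label, [0.0] * len(x))
            for i, v in enumerate(x):
                s[i] += v
            counts[label] = counts.get(label, 0) + 1
        self.centroids_ = {
            label: [v / counts[label] for v in s] for label, s in sums.items()
        }
        return self

    def predict(self, X):
        def dist2(a, b):
            return sum((u - w) ** 2 for u, w in zip(a, b))

        return [
            min(self.centroids_, key=lambda c: dist2(x, self.centroids_[c]))
            for x in X
        ]


class ClassifierTree:
    """Hypothetical 'arborised' classifier: recursively partitions the data
    by a base classifier's predictions and refits on each subset."""

    def __init__(self, base_factory, max_depth=3, min_samples=2):
        self.base_factory = base_factory  # callable returning a fresh base model
        self.max_depth = max_depth
        self.min_samples = min_samples

    def fit(self, X, y):
        self.model_ = self.base_factory().fit(X, y)
        self.children_ = {}
        if self.max_depth > 0:
            preds = self.model_.predict(X)
            for c in set(preds):
                idx = [i for i, p in enumerate(preds) if p == c]
                ys = [y[i] for i in idx]
                # Recurse only on impure, sufficiently large subsets.
                if len(idx) >= self.min_samples and len(set(ys)) > 1:
                    child = ClassifierTree(
                        self.base_factory, self.max_depth - 1, self.min_samples
                    )
                    self.children_[c] = child.fit([X[i] for i in idx], ys)
        return self

    def predict(self, X):
        out = []
        for x in X:
            # Route the point down the tree, refining the prediction.
            node = self
            pred = node.model_.predict([x])[0]
            while pred in node.children_:
                node = node.children_[pred]
                pred = node.model_.predict([x])[0]
            out.append(pred)
        return out
```

Swapping in a different base learner only requires passing a different `base_factory`; with a single-feature threshold test as the base classifier, this reduces to an ordinary decision tree.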