Double random forest

Sunwoo Han, Hyunjoong Kim, Yung Seop Lee

Research output: Contribution to journal › Article › peer-review

32 Citations (Scopus)

Abstract

Random forest (RF) is one of the most popular parallel ensemble methods, using decision trees as classifiers. One of the hyper-parameters to set when fitting an RF is nodesize, which determines the size of the individual trees. In this paper, we begin with the observation that for many data sets (34 out of 58), the best RF prediction accuracy is achieved when the trees are grown fully by minimizing the nodesize parameter. This observation leads to the idea that prediction accuracy could be further improved if we find a way to generate even bigger trees than the ones grown with the minimum nodesize. In other words, the largest tree created with the minimum nodesize parameter may not be sufficiently large for the best performance of RF. To produce bigger trees than those of RF, we propose a new classification ensemble method called double random forest (DRF). The new method applies bootstrapping at each node during tree construction, instead of bootstrapping only once at the root node as in RF. This, in turn, provides an ensemble of more diverse trees, allowing for more accurate predictions. Finally, for data where RF does not produce trees of sufficient size, we demonstrate that DRF provides more accurate predictions than RF.
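The node-level resampling described in the abstract can be sketched as follows. This is a minimal illustration of the idea, not the authors' implementation: the splitting criterion (Gini impurity), the dictionary tree representation, and all function names are assumptions made for the sketch. At each internal node, the split is chosen on a fresh bootstrap sample of that node's data, while the partition itself is applied to the original node data; passing `node_bootstrap=False` recovers ordinary CART-style growth, for which standard RF would draw a single bootstrap sample at the root before calling `grow_tree`.

```python
import numpy as np

def bootstrap(X, y, rng):
    """Sample with replacement (same size as the input) from this node's data."""
    idx = rng.integers(0, len(y), size=len(y))
    return X[idx], y[idx]

def gini(y):
    """Gini impurity of a label vector."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(X, y):
    """Exhaustive search for the (feature, threshold) pair that minimizes
    the weighted impurity of the two children. Returns (None, None) if no
    candidate threshold exists."""
    best_j, best_t, best_imp = None, None, np.inf
    n, d = X.shape
    for j in range(d):
        for t in np.unique(X[:, j])[:-1]:  # midpoints omitted for brevity
            left = X[:, j] <= t
            imp = (left.sum() * gini(y[left])
                   + (~left).sum() * gini(y[~left])) / n
            if imp < best_imp:
                best_j, best_t, best_imp = j, t, imp
    return best_j, best_t

def grow_tree(X, y, rng, node_bootstrap=True, min_size=1):
    """Grow a classification tree. With node_bootstrap=True (the DRF idea),
    each node's split is selected on a bootstrap sample of that node's data;
    the resulting partition is then applied to the original node data."""
    if len(y) <= min_size or len(np.unique(y)) == 1:
        return {"leaf": int(np.bincount(y).argmax())}
    Xs, ys = bootstrap(X, y, rng) if node_bootstrap else (X, y)
    j, t = best_split(Xs, ys)
    if j is None:
        return {"leaf": int(np.bincount(y).argmax())}
    left = X[:, j] <= t
    if left.all() or not left.any():  # bootstrap-chosen split may be degenerate here
        return {"leaf": int(np.bincount(y).argmax())}
    return {"feat": j, "thr": t,
            "left": grow_tree(X[left], y[left], rng, node_bootstrap, min_size),
            "right": grow_tree(X[~left], y[~left], rng, node_bootstrap, min_size)}

def predict(tree, x):
    """Route a single sample down the tree to its leaf label."""
    while "leaf" not in tree:
        tree = tree["left"] if x[tree["feat"]] <= tree["thr"] else tree["right"]
    return tree["leaf"]
```

Because every node sees a different resample, two trees grown on the same root data can split differently at any depth, which is the source of the extra diversity the abstract attributes to DRF.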

Original language: English
Pages (from-to): 1569-1586
Number of pages: 18
Journal: Machine Learning
Volume: 109
Issue number: 8
DOIs
Publication status: Published - 2020 Aug 1

Bibliographical note

Publisher Copyright:
© 2020, The Author(s), under exclusive licence to Springer Science+Business Media LLC, part of Springer Nature.

All Science Journal Classification (ASJC) codes

  • Software
  • Artificial Intelligence
