DM: FYI: Empirical comparison of classification trees with missing values

From: Tjen-Sien Lim
Date: Mon, 19 Apr 1999 10:52:33 -0400 (EDT)

I've updated my empirical comparison of the prediction accuracy of
classification tree and rule induction algorithms on data sets with
missing covariate values. The following results are averages over 34
data sets. More details are available at
http://www.stat.wisc.edu/~limt/treeprogs.html

Thanks for your attention.

      Method                      Mean Rank   Mean Error Rate (%)
  ==================================================================
   1  Boosted C5.0 Rules             6.7            21.20
   2  Boosted C5.0 Tree              8.1            21.80
   3  Bagged CART(r)                 8.7            21.96
   4  M5class                       10.3            22.97
   5  ARCed CART(r)                 10.8            22.38
   6  QUEST (linear, 0-SE)          11.1            22.91
   7  CART(r)                       11.2            23.41
   8  Boosted-RIPPER                11.5            22.67
   9  IND CART(r) (0-SE)            12.6            23.76
  10  IND MML options               13.3            24.74
  11  IND MML                       13.8            24.47
  12  IND CART(r) (1-SE)            13.8            24.15
  13  QUEST (univariate, 0-SE)      14.0            24.54
  14  C5.0 Rules                    15.1            24.53
  15  IND Bayes options             15.4            24.59
  16  Ripley's Tree function        15.8            25.61
  17  C5.0 Tree                     15.8            24.72
  18  C4.5 Tree                     16.2            24.77
  19  RPART                         17.2            26.10
  20  C4.5 Rules                    17.3            25.10
  21  RIPPER                        17.4            25.85
  22  IND Bayes                     17.4            24.89
  23  CN2                           18.2            26.14
  24  KnowledgeSEEKER               19.0            26.17
  25  S-Plus Tree                   19.1            27.07
  26  ITI                           19.2            26.02
  27  FIRM                          21.4            29.10
  28  DMTI                          21.5            26.55
  29  T1                            23.0            33.74

--
Tjen-Sien Lim                  (608) 262-8181
Dept. of Statistics            limt@stat.wisc.edu
Univ. of Wisconsin-Madison     http://www.stat.wisc.edu/~limt
1210 West Dayton Street
Madison, WI 53706