GradientBoosting Module

A Gradient Boosting model.

It seems to perform quite well, but it is very slow!

The documentation is here: http://scikit-learn.org/dev/modules/generated/sklearn.ensemble.GradientBoostingClassifier.html#sklearn.ensemble.GradientBoostingClassifier
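
For reference, a minimal sketch of using this classifier with scikit-learn. The data here is a stand-in generated with make_classification (not the real Titanic features), and the parameter values match the best ones found below:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    # Toy data standing in for the Titanic features (891 training passengers).
    X, y = make_classification(n_samples=891, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, train_size=0.67, random_state=0)

    clf = GradientBoostingClassifier(n_estimators=100, max_depth=5)
    clf.fit(X_train, y_train)
    print("Quality on the held-out part = {:.2%}".format(
        clf.score(X_test, y_test)))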


Script output

$ python GradientBoosting.py
Opening the file 'train.csv' and 'test.csv'...
Find the best value for the meta parameter n_estimators, with 10 run for each...
Searching in the range : [1, 2, 5, 7, 10, 15, 20, 50, 100]...
Using the first part (68.00%, 605 passengers) of the training dataset as training, 
and the second part (32.00%, 286 passengers) as testing !
For 1 estimator(s), learning from the first part of the dataset...
... this value of n_estimators seems to have a (mean) quality = 61.79%...
For 2 estimator(s), learning from the first part of the dataset...
... this value of n_estimators seems to have a (mean) quality = 67.98%...
For 5 estimator(s), learning from the first part of the dataset...
... this value of n_estimators seems to have a (mean) quality = 81.06%...
For 7 estimator(s), learning from the first part of the dataset...
... this value of n_estimators seems to have a (mean) quality = 82.02%...
For 10 estimator(s), learning from the first part of the dataset...
... this value of n_estimators seems to have a (mean) quality = 82.18%...
For 15 estimator(s), learning from the first part of the dataset...
... this value of n_estimators seems to have a (mean) quality = 83.31%...
For 20 estimator(s), learning from the first part of the dataset...
... this value of n_estimators seems to have a (mean) quality = 83.47%...
For 50 estimator(s), learning from the first part of the dataset...
... this value of n_estimators seems to have a (mean) quality = 85.67%...
For 100 estimator(s), learning from the first part of the dataset...
... this value of n_estimators seems to have a (mean) quality = 86.64%...
With trying each of the following n_estimators ([1, 2, 5, 7, 10, 15, 20, 50, 100]), each 10 times, the best one is 100. (for a quality = 86.64%)
Find the best value for the meta parameter max_depth, with 10 run for each...
Searching in the range : [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 20, 1000]...
Using the first part (67.00%, 596 passengers) of the training dataset as training, 
and the second part (33.00%, 295 passengers) as testing !
For random trees with depth <= 1, learning from the first part of the dataset...
... this value of max_depth seems to have a (mean) quality = 81.43%...
For random trees with depth <= 2, learning from the first part of the dataset...
... this value of max_depth seems to have a (mean) quality = 83.74%...
For random trees with depth <= 3, learning from the first part of the dataset...
... this value of max_depth seems to have a (mean) quality = 86.81%...
For random trees with depth <= 4, learning from the first part of the dataset...
... this value of max_depth seems to have a (mean) quality = 88.67%...
For random trees with depth <= 5, learning from the first part of the dataset...
... this value of max_depth seems to have a (mean) quality = 89.60%...
For random trees with depth <= 6, learning from the first part of the dataset...
... this value of max_depth seems to have a (mean) quality = 89.45%...
For random trees with depth <= 7, learning from the first part of the dataset...
... this value of max_depth seems to have a (mean) quality = 89.14%...
For random trees with depth <= 8, learning from the first part of the dataset...
... this value of max_depth seems to have a (mean) quality = 88.93%...
For random trees with depth <= 9, learning from the first part of the dataset...
... this value of max_depth seems to have a (mean) quality = 89.24%...
For random trees with depth <= 10, learning from the first part of the dataset...
... this value of max_depth seems to have a (mean) quality = 89.23%...
For random trees with depth <= 15, learning from the first part of the dataset...
... this value of max_depth seems to have a (mean) quality = 88.20%...
For random trees with depth <= 20, learning from the first part of the dataset...
... this value of max_depth seems to have a (mean) quality = 88.37%...
For random trees with depth <= 1000, learning from the first part of the dataset...
... this value of max_depth seems to have a (mean) quality = 88.66%...
With trying each of the following max_depth ([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 20, 1000]), each 10 times, the best one is 5. (for a quality = 89.60%)
Creating the Gradient Boosting classifier with best meta parameters.
Learning...
 Proportion of perfect fitting for the training dataset = 95.85%
Predicting for the testing dataset
Prediction: wrote in the file csv/GradientBoosting_best.csv.
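
The search procedure that produces this log can be sketched as follows. This is a reconstruction under stated assumptions, not the module's actual code: the helper name find_best_value is hypothetical, and toy data from make_classification stands in for the features read from train.csv:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    def find_best_value(X, y, param_name, values,
                        number_try=10, proportion_train=0.67):
        """Return the value of param_name with the best mean held-out accuracy."""
        mean_scores = []
        for value in values:
            scores = []
            for _ in range(number_try):
                # A fresh random split for every run, which would explain the
                # slightly different 67%/68% proportions reported in the log.
                X_tr, X_te, y_tr, y_te = train_test_split(
                    X, y, train_size=proportion_train)
                clf = GradientBoostingClassifier(**{param_name: value})
                scores.append(clf.fit(X_tr, y_tr).score(X_te, y_te))
            mean_scores.append(np.mean(scores))
            print("For {} = {}, mean quality = {:.2%}".format(
                param_name, value, mean_scores[-1]))
        best = values[int(np.argmax(mean_scores))]
        print("Best {} = {} (quality = {:.2%})".format(
            param_name, best, max(mean_scores)))
        return best

    X, y = make_classification(n_samples=891, random_state=0)  # stand-in data
    best_n_estimators = find_best_value(
        X, y, "n_estimators", [1, 2, 5, 7, 10, 15, 20, 50, 100])
    best_max_depth = find_best_value(
        X, y, "max_depth", [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 20, 1000])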

Results

Submitting this result to Kaggle gives 75.59%, noticeably below the validation quality reported above, which suggests some overfitting to the training data during the meta-parameter search.


GradientBoosting.list_n_estimators = [1, 2, 5, 7, 10, 15, 20, 50, 100]

Search space.

GradientBoosting.best_n_estimators = 100

The optimal value found for the n_estimators parameter.

GradientBoosting.list_max_depth = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 20, 1000]

Search space.

GradientBoosting.Number_try = 10

Number of runs used for meta-learning (kept small because the algorithm is slow).

GradientBoosting.proportion_train = 0.67

Proportion of the samples used for meta-learning.

GradientBoosting.best_max_depth = 10

The optimal value found for the max_depth parameter.

GradientBoosting.score = 98.204264870931539

The score for this classifier.
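
As a rough sketch of the final step from the log ("Creating the Gradient Boosting classifier with best meta parameters" onward): refit with the best values found, check the fit on the training data, and write one prediction per test passenger. The stand-in data and the plain CSV writing are illustrative assumptions, not the module's real I/O:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier

    # Stand-in features; the real module reads them from train.csv and test.csv.
    X_train, y_train = make_classification(n_samples=891, random_state=0)
    X_test, _ = make_classification(n_samples=418, random_state=1)

    # Refit with the best meta-parameters reported in the log above.
    clf = GradientBoostingClassifier(n_estimators=100, max_depth=5)
    clf.fit(X_train, y_train)
    print("Proportion of perfect fitting for the training dataset = {:.2%}"
          .format(clf.score(X_train, y_train)))

    # Write the predictions, mimicking csv/GradientBoosting_best.csv.
    np.savetxt("GradientBoosting_best.csv", clf.predict(X_test), fmt="%d")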
