A K Nearest Neighbors model.
Since the model relies on a distance in \(\mathbb{R}^d\) with \(d = 8\), the input data must be floating-point numbers, with no missing values.
The documentation is here: http://scikit-learn.org/dev/modules/generated/sklearn.neighbors.KNeighborsClassifier.html#sklearn.neighbors.KNeighborsClassifier
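As a reminder of the API, here is a minimal sketch of building such a classifier with scikit-learn. The data below is a random placeholder, not the actual Titanic features used by KNN.py:

```python
# Minimal sketch, not the actual KNN.py: a KNeighborsClassifier on
# d = 8 purely numeric, complete features (no missing values).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X_train = np.random.rand(579, 8)             # placeholder: 8 float features
y_train = np.random.randint(0, 2, size=579)  # placeholder: survived (0/1)

knn = KNeighborsClassifier(n_neighbors=3)    # n_neighbors is tuned below
knn.fit(X_train, y_train)
print(knn.score(X_train, y_train))           # accuracy on the training part
```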
$ python KNN.py
Opening the file 'train.csv' and 'test.csv'...
Find the best value for the meta parameter n_neighbors, with 10 run for each...
Searching in the range : xrange(1, 30)...
Using the first part (65.00%, 579 passengers) of the training dataset as training,
and the second part (35.00%, 312 passengers) as testing !
For 1 Nearest Neighbors, learning from the first part of the dataset...
... this value of n_neighbors seems to have a (mean) quality = 86.18%...
For 2 Nearest Neighbors, learning from the first part of the dataset...
... this value of n_neighbors seems to have a (mean) quality = 87.05%...
For 3 Nearest Neighbors, learning from the first part of the dataset...
... this value of n_neighbors seems to have a (mean) quality = 88.43%...
For 4 Nearest Neighbors, learning from the first part of the dataset...
... this value of n_neighbors seems to have a (mean) quality = 87.74%...
For 5 Nearest Neighbors, learning from the first part of the dataset...
... this value of n_neighbors seems to have a (mean) quality = 87.74%...
For 6 Nearest Neighbors, learning from the first part of the dataset...
... this value of n_neighbors seems to have a (mean) quality = 87.56%...
For 7 Nearest Neighbors, learning from the first part of the dataset...
... this value of n_neighbors seems to have a (mean) quality = 87.05%...
For 8 Nearest Neighbors, learning from the first part of the dataset...
... this value of n_neighbors seems to have a (mean) quality = 87.22%...
For 9 Nearest Neighbors, learning from the first part of the dataset...
... this value of n_neighbors seems to have a (mean) quality = 86.70%...
For 10 Nearest Neighbors, learning from the first part of the dataset...
... this value of n_neighbors seems to have a (mean) quality = 86.87%...
For 11 Nearest Neighbors, learning from the first part of the dataset...
... this value of n_neighbors seems to have a (mean) quality = 86.70%...
For 12 Nearest Neighbors, learning from the first part of the dataset...
... this value of n_neighbors seems to have a (mean) quality = 86.70%...
For 13 Nearest Neighbors, learning from the first part of the dataset...
... this value of n_neighbors seems to have a (mean) quality = 87.05%...
For 14 Nearest Neighbors, learning from the first part of the dataset...
... this value of n_neighbors seems to have a (mean) quality = 87.39%...
For 15 Nearest Neighbors, learning from the first part of the dataset...
... this value of n_neighbors seems to have a (mean) quality = 87.22%...
For 16 Nearest Neighbors, learning from the first part of the dataset...
... this value of n_neighbors seems to have a (mean) quality = 87.74%...
For 17 Nearest Neighbors, learning from the first part of the dataset...
... this value of n_neighbors seems to have a (mean) quality = 87.56%...
For 18 Nearest Neighbors, learning from the first part of the dataset...
... this value of n_neighbors seems to have a (mean) quality = 87.74%...
For 19 Nearest Neighbors, learning from the first part of the dataset...
... this value of n_neighbors seems to have a (mean) quality = 87.74%...
For 20 Nearest Neighbors, learning from the first part of the dataset...
... this value of n_neighbors seems to have a (mean) quality = 87.56%...
For 21 Nearest Neighbors, learning from the first part of the dataset...
... this value of n_neighbors seems to have a (mean) quality = 87.91%...
For 22 Nearest Neighbors, learning from the first part of the dataset...
... this value of n_neighbors seems to have a (mean) quality = 87.74%...
For 23 Nearest Neighbors, learning from the first part of the dataset...
... this value of n_neighbors seems to have a (mean) quality = 87.74%...
For 24 Nearest Neighbors, learning from the first part of the dataset...
... this value of n_neighbors seems to have a (mean) quality = 87.91%...
For 25 Nearest Neighbors, learning from the first part of the dataset...
... this value of n_neighbors seems to have a (mean) quality = 87.74%...
For 26 Nearest Neighbors, learning from the first part of the dataset...
... this value of n_neighbors seems to have a (mean) quality = 87.74%...
For 27 Nearest Neighbors, learning from the first part of the dataset...
... this value of n_neighbors seems to have a (mean) quality = 87.74%...
For 28 Nearest Neighbors, learning from the first part of the dataset...
... this value of n_neighbors seems to have a (mean) quality = 87.56%...
For 29 Nearest Neighbors, learning from the first part of the dataset...
... this value of n_neighbors seems to have a (mean) quality = 87.22%...
With trying each of the following n_neighbors (xrange(1, 30)), each 10 times, the best one is 3. (for a quality = 88.43%)
Creating the classifier with the optimal value of n_neighbors.
Learning...
Proportion of perfect fitting for the training dataset = 97.42%
Predicting for the testing dataset
Prediction: wrote in the file csv/KNN_best.csv.
For the attribut survived , chi2=30.8736994366 , and pval=2.75378563203e-08.
For the attribut pclass , chi2=92.7024469789 , and pval=6.07783826353e-22.
For the attribut sex , chi2=0.308599072344 , and pval=0.578541125456.
For the attribut age , chi2=2.58186537899 , and pval=0.108094210127.
For the attribut sibsp , chi2=10.0974991118 , and pval=0.00148470675869.
For the attribut parch , chi2=8.81917152221 , and pval=0.00298081971968.
For the attribut fare , chi2=4.16460364538 , and pval=0.0412770695637.
Submitting this result to Kaggle gives a score of 71.70%.
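The meta-parameter search shown in the transcript can be sketched as follows. This is a rough reconstruction, not the actual KNN.py code: it assumes the 65%/35% split is redrawn at random for each of the 10 runs, and it uses Python 3's range and sklearn.model_selection instead of the xrange and older API visible in the output.

```python
# Rough sketch of the n_neighbors search (assumptions: the real KNN.py
# may differ; X and y are the cleaned, all-numeric training features/labels).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def mean_quality(X, y, n_neighbors, nb_runs=10, train_size=0.65):
    """Average validation accuracy of a KNN over nb_runs random 65/35 splits."""
    scores = []
    for _ in range(nb_runs):
        X_tr, X_va, y_tr, y_va = train_test_split(X, y, train_size=train_size)
        knn = KNeighborsClassifier(n_neighbors=n_neighbors).fit(X_tr, y_tr)
        scores.append(knn.score(X_va, y_va))
    return np.mean(scores)

def best_n_neighbors(X, y, search_space=range(1, 30)):
    """Return the n_neighbors value with the best mean validation accuracy."""
    qualities = {k: mean_quality(X, y, k) for k in search_space}
    return max(qualities, key=qualities.get)

# Then refit with the best value on the whole training set and predict, e.g.:
# best_k = best_n_neighbors(X_train, y_train)
# final = KNeighborsClassifier(n_neighbors=best_k).fit(X_train, y_train)
# survived_pred = final.predict(X_test)   # written to csv/KNN_best.csv
```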
Summary for this classifier:
Search space: xrange(1, 30)
Number of runs used to meta-learn each value: 10
Proportion of the training set used to meta-learn: 65%
Optimal value found for the parameter n_neighbors: 3
Score for this classifier (Kaggle submission): 71.70%
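The per-attribute chi2 and p-value lines at the end of the transcript can be reproduced along these lines. This is illustrative only: it assumes the CSV has already been cleaned into the non-negative numeric columns listed in the output (the raw Kaggle file uses capitalized column names and string values), and it scores each feature against the target rather than listing 'survived' itself.

```python
# Illustrative sketch of the per-attribute chi2 report (not the actual script).
import pandas as pd
from sklearn.feature_selection import chi2

# Assumption: 'train.csv' here stands for an already-cleaned, all-numeric
# file with these lowercase column names.
train = pd.read_csv('train.csv')
features = ['pclass', 'sex', 'age', 'sibsp', 'parch', 'fare']
scores, pvalues = chi2(train[features].values, train['survived'].values)
for name, c2, pv in zip(features, scores, pvalues):
    print("For the attribute", name, ", chi2 =", c2, ", and pval =", pv)
```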