From 1461c639abf0720f3219f0bb3f3f2f95ed37a18a Mon Sep 17 00:00:00 2001
From: SUR Frederic <frederic.sur@univ-lorraine.fr>
Date: Tue, 13 Dec 2022 14:52:39 +0000
Subject: [PATCH] Replace TP4_ex1_sujet.ipynb

---
 TP4/TP4_ex1_sujet.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/TP4/TP4_ex1_sujet.ipynb b/TP4/TP4_ex1_sujet.ipynb
index d4e1a78..79bc3ff 100755
--- a/TP4/TP4_ex1_sujet.ipynb
+++ b/TP4/TP4_ex1_sujet.ipynb
@@ -260,7 +260,7 @@
    "source": [
     "__Question 4__. Pour le noyau RBF et la valeur par défaut de $\gamma$, la cellule suivante présente différentes classifications selon les valeurs de $C$. Retrouvez les situations identifiées dans le cas linéaire.\n",
     "\n",
-    "__Rappel__ : d'après __[la documentation](http://scikit-learn.org/stable/auto_examples/svm/plot_rbf_parameters.html)__ : _\"The C parameter trades off misclassification of training examples against simplicity of the decision surface. A low C makes the decision surface smooth, while a high C aims at classifying all training examples correctly by giving the model freedom to select more samples as support vectors.\"_ "
+    "__Rappel__ : d'après __[la documentation](http://scikit-learn.org/stable/auto_examples/svm/plot_rbf_parameters.html)__ : _\"The C parameter trades off correct classification of training examples against maximization of the decision function’s margin. For larger values of C, a smaller margin will be accepted if the decision function is better at classifying all training points correctly. A lower C will encourage a larger margin, therefore a simpler decision function, at the cost of training accuracy. In other words C behaves as a regularization parameter in the SVM.\"_ "
    ]
   },
   {
--
GitLab
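
Not part of the patch: a minimal sketch of the trade-off described in the quoted scikit-learn documentation, assuming scikit-learn is installed. The toy dataset (make_moons) and the specific C values are illustrative choices, not taken from the notebook.

# Illustrative sketch only: effect of C on an RBF-kernel SVC in scikit-learn.
# The dataset and the C values below are arbitrary choices for demonstration.
from sklearn.datasets import make_moons
from sklearn.svm import SVC

# A small, non-linearly-separable toy problem.
X, y = make_moons(n_samples=200, noise=0.25, random_state=0)

for C in (0.01, 1.0, 100.0):
    # gamma="scale" is scikit-learn's default value of gamma.
    clf = SVC(kernel="rbf", gamma="scale", C=C).fit(X, y)
    # Low C: larger margin, simpler decision function, lower training accuracy.
    # High C: tries to classify every training point correctly (risk of overfitting).
    print(f"C={C:>6}: train accuracy = {clf.score(X, y):.3f}, "
          f"support vectors = {int(clf.n_support_.sum())}")

One would typically observe training accuracy rising and the number of support vectors falling as C grows, matching the regularization behaviour described in the quoted documentation.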