How does XGBoost generate class probabilities in a multiclass classification problem? #1746
I also have the same question. The only thing I noticed is that if you have 20 classes and set the number of estimators to 100, you'll have 20*100 = 2000 trees printed in your model. My guess was that the first 100 estimators classify the first class vs. the others, the next 100 estimators the second class vs. the others, and so on. I can't confirm it, but maybe it works something like that?
That's a very good observation, which I hadn't noticed before. It's true: the number of trees in the model is n_estimators x n_classes. To figure out the order of the trees, I used the following toy example:

```python
import numpy as np
import xgboost as xgb

x = np.concatenate([np.ones([10, 1]) * np.array([1, 0, 0]),
                    np.ones([10, 1]) * np.array([0, 1, 0]),
                    np.ones([10, 1]) * np.array([0, 0, 1])], axis=0)
y = np.array([['a'] * 10 + ['c'] * 10 + ['b'] * 10]).reshape([30, 1])
model = xgb.XGBClassifier(n_estimators=2, objective='multi:softprob').fit(x, y)
model.booster().dump_model('trees.txt')
```

This basically creates a training dataset where [1,0,0] is mapped to 'a', [0,1,0] is mapped to 'c', and [0,0,1] is mapped to 'b'. I intentionally switched the order of 'b' and 'c' to see whether xgboost sorts classes by their labels before dumping the trees. Here is the resulting dumped model:

```
booster[0]:
0:[f0<0.5] yes=1,no=2,missing=1
	1:leaf=-0.0674157
	2:leaf=0.122449
booster[1]:
0:[f2<0.5] yes=1,no=2,missing=1
	1:leaf=-0.0674157
	2:leaf=0.122449
booster[2]:
0:[f1<0.5] yes=1,no=2,missing=1
	1:leaf=-0.0674157
	2:leaf=0.122449
booster[3]:
0:[f0<0.5] yes=1,no=2,missing=1
	1:leaf=-0.0650523
	2:leaf=0.10941
booster[4]:
0:[f2<0.5] yes=1,no=2,missing=1
	1:leaf=-0.0650523
	2:leaf=0.10941
booster[5]:
0:[f1<0.5] yes=1,no=2,missing=1
	1:leaf=-0.0650523
	2:leaf=0.10941
```

and here are the conclusions:

- The classes are sorted by label before training: booster[0] splits on f0 (class 'a'), booster[1] on f2 (class 'b'), and booster[2] on f1 (class 'c'), even though 'c' appeared before 'b' in the training data.
- The trees are interleaved across classes rather than blocked: booster[k] belongs to class k mod n_classes, so boosters 0-2 form the first boosting round and boosters 3-5 the second.
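The ordering visible in this dump can be sketched in a few lines. The modulo rule and the variable names here are my own reading of the dump, not an official API guarantee:

```python
# Map each booster index in a multiclass dump to its class, assuming
# booster[k] belongs to class k % n_classes (round-robin over classes).
n_classes = 3
n_estimators = 2
labels = ['a', 'b', 'c']  # xgboost sorts class labels before training

tree_class = [labels[k % n_classes] for k in range(n_estimators * n_classes)]
print(tree_class)
```

With two boosting rounds over three classes, this yields the interleaved sequence a, b, c, a, b, c, matching the dump above.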
It would be nice if this information were added to the xgboost documentation.
I'm closing this issue now.
I still have a comment here. I agree with how the trees are structured in the multi-class case. But how are the leaf values converted to probabilities? It seems that for the binary case you have to apply 1/(1+exp(value)), while for the multi-class case you have to apply 1/(1+exp(-value)).
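The two formulas differ only in the sign of the argument, which means they report complementary probabilities of the same sigmoid. A quick check (my own illustration, not xgboost code):

```python
import math

def sigmoid(x):
    """Standard logistic function 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

v = 0.122449  # a leaf value from the dump above
# 1/(1+exp(v)) equals 1 - sigmoid(v): the same sigmoid read from
# the other class's point of view, not a different function.
assert abs(1.0 / (1.0 + math.exp(v)) - (1.0 - sigmoid(v))) < 1e-12
```

So whether a dump "needs" exp(value) or exp(-value) may come down to which class's probability the leaf margin is taken to represent.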
That seems to be true, looking at the examples. Thanks for the tip! But I don't understand why these are defined differently for the binary and multiclass cases.
Yes indeed, it is very weird and confusing, to say the least. I came to that conclusion by testing it on the UCI Car dataset (where you classify cars into unacceptable, acceptable, good, or very good). It would be nice to have an extra feature where you can convert the trees to sklearn trees or something like that (currently I have a code base where I can convert both xgb trees and sklearn trees to my own decision-tree representation). Of the latter, the sklearn trees, I'm 100% sure that they are converted correctly. For the xgb trees, doubt remains.
Agreed, that would be a great feature.
@GillesVandewiele @sosata : so in that logistic (say for class |
For the example in my earlier comment, if I try to predict the class probabilities for [1,0,0]:

```python
print(model.predict_proba(np.array([[1, 0, 0]])))
```

it will generate approximately [0.4185, 0.2907, 0.2907]. The probabilities appear to be calculated by a softmax function:

P(i) = exp(val[i]) / (exp(val[1]) + ... + exp(val[N]))

where i is the target class (out of N classes), and val[i] is the sum of all values generated by the trees belonging to that class. So:

```python
print(np.exp(+0.122449+0.10941) / (np.exp(+0.122449+0.10941) + np.exp(-0.0674157-0.0650523) + np.exp(-0.0674157-0.0650523)))
print(np.exp(-0.0674157-0.0650523) / (np.exp(+0.122449+0.10941) + np.exp(-0.0674157-0.0650523) + np.exp(-0.0674157-0.0650523)))
print(np.exp(-0.0674157-0.0650523) / (np.exp(+0.122449+0.10941) + np.exp(-0.0674157-0.0650523) + np.exp(-0.0674157-0.0650523)))
```

will generate the same three numbers, which is exactly what the predict_proba() function gave us.
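The computation above can be packaged as a small self-contained softmax check over the per-class leaf sums. The values are copied from the dump; no xgboost install is needed:

```python
import math

# Per-class sums of leaf values for the input [1, 0, 0], read off the dump above.
val = {'a': 0.122449 + 0.10941,
       'b': -0.0674157 - 0.0650523,
       'c': -0.0674157 - 0.0650523}

# Softmax: P(i) = exp(val[i]) / sum_j exp(val[j])
denom = sum(math.exp(v) for v in val.values())
probs = {c: math.exp(v) / denom for c, v in val.items()}
print(probs)  # class 'a' gets the highest probability
```

The three probabilities sum to 1, and classes 'b' and 'c' come out identical because their trees produced identical leaf values for this input.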
That seems to be correct, @sosata. In my case I wanted to convert each of the trees individually (thus not using leaf values of other trees); there, a sigmoid function seemed to do the job.
@sosata Is there any way to get feature importance per class? I think the current implementation just sums all contributions over all classes. The R API seems to have a parameter for that, though (see CodingCat@e940523).
@MLxAI I have also only tried the Python API, and I don't remember being able to get them per class. However, in my opinion, defining feature importance in XGBoost as the total number of splits on a feature is a bit simplistic. Personally, I calculate feature importance by removing a feature from the feature set and measuring the resulting drop (or increase) in accuracy when the model is trained and tested without it. I do this for every feature, and I define "feature importance" as the drop (or minus the increase) in accuracy. Since training and testing can be repeated with different train/test subsets for each feature removal, one can also estimate confidence intervals for the importances. And by calculating importance separately for each class, you get feature importances per class. This has produced more meaningful results in my projects than counting the number of splits.
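The drop-one-feature procedure described above can be sketched generically. The helper names and the nearest-centroid stand-in classifier are my own illustration (any real model, e.g. an XGBClassifier, could be plugged in as `train_fn`):

```python
import numpy as np

def drop_column_importance(train_fn, X_tr, y_tr, X_te, y_te):
    """Importance of feature j = baseline accuracy minus accuracy
    when the model is retrained without column j."""
    def acc(cols):
        predict = train_fn(X_tr[:, cols], y_tr)
        return np.mean(predict(X_te[:, cols]) == y_te)
    all_cols = list(range(X_tr.shape[1]))
    base = acc(all_cols)
    return [base - acc([c for c in all_cols if c != j]) for j in all_cols]

def nearest_centroid(X, y):
    """Toy stand-in for any classifier: predict the class whose centroid is closest."""
    labels = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in labels])
    return lambda Q: labels[np.argmin(((Q[:, None, :] - centroids) ** 2).sum(-1), axis=1)]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)  # only feature 0 carries signal
imp = drop_column_importance(nearest_centroid, X[:100], y[:100], X[100:], y[100:])
print(imp)  # feature 0 should dominate; features 1 and 2 stay near zero
```

Repeating this over several random train/test splits, as the comment suggests, gives a distribution of importances from which confidence intervals can be estimated.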
@sosata Your example is very intuitive, but I still don't see how you came up with the scores for each class. Can you explain that a little bit? Thanks!
@chrisplyn You evaluate all the decision trees in your ensemble that belong to a specific class and sum the resulting scores. As @sosata stated correctly earlier, the number of trees in your ensemble will be equal to n_estimators x n_classes.
@sosata Your example is very clear, thank you very much! @chrisplyn The instance [1,0,0] is scored by booster[0] through booster[5], giving the leaf values [0.122449, -0.0674157, -0.0674157, 0.10941, -0.0650523, -0.0650523] respectively.
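Summing those six leaf values per class with the round-robin rule (booster k belongs to class k % n_classes, as the dump above suggests) reproduces the raw per-class scores. A minimal sketch using the numbers from that comment:

```python
# Leaf values produced by booster[0]..booster[5] for the instance [1, 0, 0].
leaf_vals = [0.122449, -0.0674157, -0.0674157, 0.10941, -0.0650523, -0.0650523]
n_classes = 3

# Score of class i = sum of the leaves of the trees assigned to class i.
scores = [sum(v for k, v in enumerate(leaf_vals) if k % n_classes == i)
          for i in range(n_classes)]
print(scores)  # class 0 ('a') has the highest raw score
```

Feeding these sums through the softmax discussed earlier in the thread would then yield the class probabilities.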
When I try to dump a trained XGBoost tree, I get it in the following format:
While the tree structure is clear, it is not clear how to interpret the leaf values. For a binary classification problem with a log-loss cost function, the leaf values can be converted to class probabilities using a logistic function: 1/(1+exp(value)). For a multiclass classification problem, however, there is no information on which class those values belong to, and without that information it is not clear how to calculate the class probabilities.
Any ideas? Or is there another function that can get that information out of the trained trees?