I would recommend always using features as floats. XGBoost is explicit that it treats values as 32-bit floats for performance reasons (one example: dmlc/xgboost#1410). If a model was trained with XGBoost, its split values are stored as floats, so feeding it doubles can produce inaccurate predictions when a value lands on just the wrong side of a split threshold.
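A minimal sketch of why this happens: the float literal 0.3f and the double literal 0.3 are different numbers, so a comparison against a float split value can flip depending on whether the feature is first narrowed to a float. (The class and variable names below are illustrative, not from the library.)

```java
public class SplitPrecisionDemo {
    public static void main(String[] args) {
        // A split threshold as stored in the trained model: a 32-bit float.
        float splitValue = 0.3f;

        // The same nominal feature value held as a 64-bit double.
        double featureAsDouble = 0.3;

        // 0.3f widened to double is 0.30000001192092896, while the double
        // literal 0.3 is 0.2999999999999999888..., so the two comparisons
        // send the same feature down different branches of the tree.
        System.out.println(featureAsDouble < splitValue);          // true
        System.out.println((float) featureAsDouble < splitValue);  // false
    }
}
```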
Hi -
Have you done any parity tests between the scored output of the C++ models and the Java models?
I'm asking because I'm seeing large differences (greater than 1.0 in the predicted values) when passing double-precision feature values for regression.
Using these training parameters:
When I cast the doubles in the FVec to floats first, the results are much closer, within a 0.0001 tolerance.
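For reference, a sketch of that workaround: down-cast each double to a float before building the FVec. This assumes `FVec.Transformer.fromArray` accepts a `float[]` overload as in xgboost-predictor; if the overloads differ in your version, a custom FVec implementation that stores floats achieves the same effect. The `toFloats` helper is mine, not part of the library.

```java
import biz.k11i.xgboost.util.FVec;

public class FloatCastExample {
    // Narrow a double[] feature vector to float[] so lookups match the
    // 32-bit split values stored in the trained model.
    static float[] toFloats(double[] features) {
        float[] out = new float[features.length];
        for (int i = 0; i < features.length; i++) {
            out[i] = (float) features[i];
        }
        return out;
    }

    public static void main(String[] args) {
        double[] raw = {0.3, 1.7, 42.0};
        // Assumed overload: fromArray(float[] values, boolean treatsZeroAsNA).
        FVec fvec = FVec.Transformer.fromArray(toFloats(raw), false);
    }
}
```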