Is there a way to use the nltools functions for classification/prediction across several tasks?
That is, training a classifier on data from one task and testing it on another.
In sklearn, training a classifier involves a fit() method followed by a predict() method, but in nltools everything seems to be bundled into predict().
Thanks for your help,
Sebastian
hi @SebastianSpeer , apologies for the VERY long delay in answering this. Hopefully, this will help others as well.
Currently, the way to perform prediction is to use the Brain_Data.similarity() method, which takes a method keyword indicating whether you want the dot product, correlation, or cosine similarity between your data and the model estimated by the predict function. This is, I think, what most people want in practice, but it is slightly different from what predict() does in sklearn, which computes the dot product plus an intercept. You can do this manually yourself, and if you're interested in that type of approach, I recommend just rolling your own sklearn pipeline, or possibly even using himalaya from the Gallant Lab.
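A minimal sketch of the DIY sklearn route, with the explicit fit()/predict() split the question asks about. The arrays here are synthetic stand-ins for vectorized brain maps (all shapes and data are made up for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-ins for vectorized beta maps (observations x voxels)
X_train = rng.normal(size=(40, 500))          # task A
y_train = rng.integers(0, 2, size=40)         # binary labels for task A
X_test = rng.normal(size=(30, 500))           # task B

# fit() on task A, then predict() on task B
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)                  # class label per task-B map
scores = clf.decision_function(X_test)        # continuous pattern expression
```

With real data you would pull the arrays out of Brain_Data objects (e.g., their .data attribute) before passing them to the pipeline.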
We've been considering completely refactoring the predict method to accommodate more use cases. It is currently modeled on a similar function in Tor Wager's CanlabCore Matlab toolbox. In practice, I have found that training models on one dataset and testing on others raises a number of issues that differ slightly from those in more traditional machine learning contexts. Most notably, we often don't have access to the first-level data and can't normalize it beforehand to ensure that the test datasets are on the same scale as the training dataset. This makes traditional regression predictions difficult to interpret, as the predicted values will be way off. In my own work, I tend to use scale-free evaluation methods when testing the generalizability of a model (e.g., Pearson correlations or forced-choice accuracy).
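As a toy illustration of those scale-free metrics: everything below is synthetic, and `pattern` stands in for a trained weight map. Because both metrics depend only on relative ordering (or standardized covariation) of the pattern-expression scores, they are insensitive to the scaling differences between datasets described above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels = 20, 200

# Hypothetical trained pattern (stand-in for a vectorized weight map)
pattern = rng.normal(size=n_voxels)

# Paired test maps per subject: condition A carries signal, B is pure noise
X_a = 0.5 * pattern + rng.normal(size=(n_subjects, n_voxels))
X_b = rng.normal(size=(n_subjects, n_voxels))

# Pattern expression: dot product of each test map with the pattern
score_a = X_a @ pattern
score_b = X_b @ pattern

# Forced-choice accuracy: fraction of pairs where A scores above B
forced_choice_acc = np.mean(score_a > score_b)

# Pearson correlation between pattern expression and a continuous outcome
outcome = score_a + rng.normal(scale=score_a.std(), size=n_subjects)
r = np.corrcoef(score_a, outcome)[0, 1]
```

Note that neither metric requires the test data to be on the same scale as the training data, which is exactly why they are useful for cross-dataset generalization tests.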