
feat: mondrian conformal prediction #168

Open
mcapuccini opened this issue Dec 11, 2023 · 2 comments
Labels
enhancement New feature or request

@mcapuccini
Feature Request

Describe the Feature Request
Looking at the documentation, it seems that Fortuna implements Inductive Conformal Prediction (ICP). I couldn't tell whether you use a Mondrian approach or not, meaning that the non-conformity scores (alphas) for the calibration set are computed separately for each class, and conformal p-values are computed per class.

Describe Preferred Solution
If Fortuna already implements Mondrian ICPs, it would be good to state this in the documentation; otherwise, it would be nice to add the Mondrian approach, which handles class imbalance better.

Related Code
n/a

Additional Context
n/a

If the feature request is approved, would you be willing to submit a PR?
Yes (if time permits, I am not sure if I have the capacity during my working hours)

@mcapuccini mcapuccini added the enhancement New feature or request label Dec 11, 2023
@gianlucadetommaso
Contributor

Hi Marco, thanks a lot for your input! No, at the moment the Mondrian version is not implemented.

If you did have time to work on this, it would be great! I believe it does not require major changes: you would mostly need to extend this class to allow for a class-dependent threshold.
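For reference, a class-dependent threshold could look roughly like this. It is a minimal NumPy sketch, assuming split/inductive conformal nonconformity scores for the calibration set are already available; `mondrian_thresholds` is a hypothetical helper, not part of Fortuna's API.

```python
import numpy as np

def mondrian_thresholds(cal_scores, cal_labels, error=0.05):
    """Per-class conformal score thresholds (illustrative sketch).

    cal_scores: nonconformity score of each calibration example
    cal_labels: true label of each calibration example
    error:      target miscoverage level (significance level)
    """
    thresholds = {}
    for label in np.unique(cal_labels):
        scores = np.sort(cal_scores[cal_labels == label])
        n = len(scores)
        # conservative finite-sample quantile index: ceil((n+1)(1-error)) - 1,
        # clipped to the largest available score
        k = min(int(np.ceil((n + 1) * (1 - error))) - 1, n - 1)
        thresholds[int(label)] = scores[k]
    return thresholds
```

At prediction time, a label would be included in the set whenever its nonconformity score falls below that label's own threshold, rather than a single global one.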

Let me know if you get the chance to contribute!

@mcapuccini
Author

mcapuccini commented Dec 15, 2023

Hi Gianluca, thanks for the quick answer. The threshold (significance level) would remain the same, but the p-value would be computed for each class separately, using only the calibration examples of that class. This means I would need to look at the true labels of the calibration samples in order to filter the examples on which each p-value is computed.
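The per-class p-value described above could be sketched as follows. This is illustrative NumPy code, not Fortuna's implementation, and the function name is made up:

```python
import numpy as np

def mondrian_p_value(test_score, label, cal_scores, cal_labels):
    """Conformal p-value for a candidate label, computed against the
    calibration examples whose true label matches (Mondrian/label-conditional).
    """
    # keep only calibration scores from the candidate class
    scores = cal_scores[cal_labels == label]
    n = len(scores)
    # fraction of same-class calibration scores at least as nonconforming
    # as the test score; the +1 counts the test example itself
    return (np.sum(scores >= test_score) + 1) / (n + 1)
```

A label would then be included in the prediction set whenever its p-value exceeds the significance level, which gives class-conditional validity under the usual exchangeability assumption.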
