Adding Thresholder and RejectOptionClassifier #997

Open
wants to merge 58 commits into base: main

Conversation

bramreinders97
Contributor

For issue #958

So far I have the first version for Thresholder, and the first version of a unit test for Thresholder.

To those files, I for sure want to add:

  • A test for prefit = True, and some more options for predict_method
  • A way to deal with multiple sensitive features, when we have decided on how to do so (see discussion in issue)

Furthermore, I will start working on RejectOptionClassifier when everything for Thresholder is done.

@bramreinders97
Contributor Author

Note that in _thresholder.py, I changed from relative to absolute imports. The reason is that I had many problems with the relative imports, and this fixed it for me. Before this gets merged I'll change it back to relative imports.

@bramreinders97
Contributor Author

I'm wondering if I should include more tests. I feel like the main thing to test is whether the output is as expected, which it is.

Member

@romanlutz romanlutz left a comment

I like your plan as outlined in the description to resolve the high-level questions in the issue. This should be a good way to get started, and that way the changes stay relatively small. I'll respond on the issue as well. Thanks for this contribution!!!

fairlearn/postprocessing/__init__.py (outdated review thread, resolved)
Comment on lines 12 to 18
# for me to get it to work now
from utils._common import _get_soft_predictions
from utils._input_validation import _validate_and_reformat_input

# how it should be
# from ..utils._common import _get_soft_predictions
# from ..utils._input_validation import _validate_and_reformat_input
Member

Anything you want help with here, or do you know how to work with this? Feel free to ask!

Contributor Author

I was/am struggling to get these relative imports to work for some reason, so at some point I figured I'd just change them to absolute imports, since for writing the actual code it does not really matter how these things are imported. If you have a clear idea of what might be going wrong here, I'm all ears; the error I get is:

from ..utils._common import _get_soft_predictions
ImportError: attempted relative import beyond top-level package

As I said, I can manage with these absolute imports if not.

Contributor

I think your syntax is correct, I was having similar issues at some point too! Perhaps you are running this script directly? If so, that file is not treated as part of the fairlearn package but becomes the top-level __main__ module itself, and hence .. points above the top-level package.

Otherwise, you might find an answer here
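If it helps with debugging, here is a one-line check of that mechanism (illustration only; drop it temporarily at the top of _thresholder.py):

# Prints "__main__" plus an empty package when the file is run directly,
# and "fairlearn.postprocessing._thresholder fairlearn.postprocessing" when it
# is imported as part of the package (e.g. via pytest), in which case the
# relative imports resolve fine.
print(__name__, __package__)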

Member

I think @SeanMcCarren is right. I removed your import changes (going back to the .. relative imports) and made one other import change (removing postprocessing in one import, as mentioned in another comment), and everything worked fine. Perhaps try python -m pytest test/unit/postprocessing from the repo root directory to confirm.

Contributor Author

It seems to work now, thx :)!

Member

Are we reverting this change or leaving as is?

Contributor Author

We revert the change (so we go with the relative imports).

fairlearn/postprocessing/_thresholder.py (two outdated review threads, resolved)

for a, threshold in self.threshold_dict.items():

operation = ThresholdOperation('>', threshold)
Member

So we don't allow < thresholds? Wouldn't the threshold dict have the full specification of the threshold operation? Maybe I'm misunderstanding the input type of threshold_dict, though, which is entirely possible given how long it's been since I worked on it. If you're not sure yourself I'll be happy to dig in a bit more, just lmk!

Contributor Author

Consider the following example threshold_dict: {'A' : .6}

Now, for group 'A' we predict 1 if the output > .6, and 0 otherwise. It seems a bit redundant (and prone to confusion) to also allow threshold operations going the other way to be included in the dict.
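As a minimal NumPy sketch of that behaviour (illustration only, not the implementation in this PR):

import numpy as np

scores = np.array([0.55, 0.65, 0.45, 0.70])   # e.g. predict_proba(X)[:, 1]
groups = np.array(['A', 'A', 'B', 'B'])
threshold_dict = {'A': 0.6, 'B': 0.5}

# predict 1 where the score exceeds the group's threshold, 0 otherwise
thresholds = np.array([threshold_dict[g] for g in groups])
y_pred = (scores > thresholds).astype(int)
print(y_pred)  # [0 1 0 1]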

Member

I see. I think what confused me is that I used the term threshold_dict elsewhere in the past as a dict mapping the sensitive feature group names to ThresholdOperation objects, not just to floating point numbers. In my case, I can make this richer in the sense that I get to specify not just the float, but also the operator, so that's where my confusion stems from.

Taking a step back, I'm still not convinced we should give up on such a capability (thresholds in both directions). What do others think @fairlearn/fairlearn-maintainers ?

Contributor

I don't have a very strong opinion on this. I think defaulting to '>' suffices for most use cases and simplifies the API. I'm not opposed to having the capability though, provided it does not add too much complexity for how the user needs to supply the thresholds.

What would you suggest the API to look like if we go for both directions, @romanlutz? E.g., should the user supply a dict of ThresholdOperation objects, a dict of tuples (float, operator), a dict of floats and a separate argument for operator, or perhaps something else entirely?

test/unit/postprocessing/test_thresholder.py (outdated review thread, resolved)
@bramreinders97
Contributor Author

There is a strange problem when testing Thresholder with a simple Pipeline (see the code below / in one of the last commits). When testing in my local IDE (Visual Studio Code using Python 3.9.9), I get BASE_ESTIMATOR_NOT_FITTED_WARNING, while if I run the same tests in Google Colab (using Python 3.7.12), there are no problems. I guess this is probably down to the Python version? Is this something we want to look into? And if so, what kind of things should I be looking into?

[screenshot of the test code]

@bramreinders97
Contributor Author

The idea behind the validate_and_reformat function is the following:

The keys of threshold_dict represent the values of the sensitive feature column(s). If there is just one sensitive feature, these values are from just 1 column, and should therefore have the same dtype. If there are multiple sensitive features, I plan to have the user input the combinations in the threshold_dict like this: { ('SF1', 'SF2') : threshold }, so all keys should be tuples, hence have the same dtype.

In the last commit I have included two warnings: one to warn the user if the situation explained above is not the case, and one to warn the user about providing sensitive features at prediction time. The warn() function also asks for a Type[Warning], but I'm not sure which type of warning I should be providing. Do you have any suggestions for this? @hildeweerts @romanlutz @adrinjalali @mmadaio @MiroDudik @riedgar-ms
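For illustration, a sketch of the kind of check I have in mind (UserWarning is just a placeholder here, not a settled choice):

import warnings

def _check_threshold_dict_keys(threshold_dict):
    # Warn if the keys of threshold_dict do not all share the same type.
    key_types = {type(k) for k in threshold_dict}
    if len(key_types) > 1:
        warnings.warn(
            "threshold_dict keys should all have the same type, got {}".format(key_types),
            UserWarning,
        )

_check_threshold_dict_keys({('A', 'X'): 0.6, 'B': 0.5})  # mixed tuple/str keys -> warns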


@romanlutz
Member

> The idea behind the validate_and_reformat function is the following:
>
> The keys of threshold_dict represent the values of the sensitive feature column(s). If there is just one sensitive feature, these values are from just 1 column, and should therefore have the same dtype. If there are multiple sensitive features, I plan to have the user input the combinations in the threshold_dict like this: { ('SF1', 'SF2') : threshold }, so all keys should be tuples, hence have the same dtype.

Instead of using tuples of strings as dict keys, I'd suggest utilizing something similar to the existing functionality that maps such tuples to a single string, with the difference that you're mapping just the sensitive feature names rather than entire arrays of them, so it's probably somewhat less convoluted than the function I pointed to above.

@romanlutz
Member

> There is a strange problem when testing Thresholder with a simple Pipeline (see the code below / in one of the last commits). When testing in my local IDE (Visual Studio Code using Python 3.9.9), I get BASE_ESTIMATOR_NOT_FITTED_WARNING, while if I run the same tests in Google Colab (using Python 3.7.12), there are no problems. I guess this is probably down to the Python version? Is this something we want to look into? And if so, what kind of things should I be looking into?

Where do I find this code? This is a screenshot so I couldn't copy. The tests all ran fine (after the import adjustments mentioned in other comments) for me locally using python 3.8.5 on MacOS.

@bramreinders97
Contributor Author

> Instead of using tuples of strings as dict keys, I'd suggest utilizing something similar to the existing functionality that maps such tuples to a single string, with the difference that you're mapping just the sensitive feature names rather than entire arrays of them, so it's probably somewhat less convoluted than the function I pointed to above.

I am struggling to see the difference between what you're proposing and what is already implemented in reformat_threshold_dict_keys(). The main reason I proposed using tuples as dict keys is to have a simple and clear way for the user to specify subgroups (if there are multiple sensitive features). Internally this tuple is changed to a single string.

@bramreinders97
Contributor Author

> Where do I find this code? This is a screenshot so I couldn't copy. The tests all ran fine (after the import adjustments mentioned in other comments) for me locally using python 3.8.5 on MacOS.

Here you go

@bramreinders97
Contributor Author

> I am struggling to see the difference between what you're proposing and what is already implemented in reformat_threshold_dict_keys(). The main reason I proposed using tuples as dict keys is to have a simple and clear way for the user to specify subgroups (if there are multiple sensitive features). Internally this tuple is changed to a single string.

To clarify, the tuple structure is what the user inputs as keys into the threshold_dict, to make it clearer for the user. Internally, this tuple structure is transformed into a single string.
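A rough sketch of that internal transformation (hypothetical helper name, not the exact code in this PR):

def _flatten_key(key, sep=','):
    # Turn a tuple key like ('SF1', 'SF2') into the single string 'SF1,SF2';
    # non-tuple keys are passed through as strings.
    return sep.join(map(str, key)) if isinstance(key, tuple) else str(key)

threshold_dict = {('SF1', 'SF2'): 0.6, ('SF1', 'SF3'): 0.4}
flat_dict = {_flatten_key(k): v for k, v in threshold_dict.items()}
# {'SF1,SF2': 0.6, 'SF1,SF3': 0.4}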

@bramreinders97
Contributor Author

In an offline discussion with @hildeweerts this morning, we decided to add a keyword argument default_threshold, which is used for any group that is not mentioned in threshold_dict. This seems nicer for the user, as it is a realistic situation that you'd only want to change the thresholds of a subset of the groups in the data instead of manually specifying them all.
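Roughly, the behaviour would look like this (small sketch; default_threshold as discussed above, not a finalized API):

import numpy as np

scores = np.array([0.55, 0.65, 0.45, 0.70])
groups = np.array(['A', 'A', 'B', 'B'])
threshold_dict = {'A': 0.6}   # only group 'A' is overridden
default_threshold = 0.5       # used for every group not mentioned in threshold_dict

thresholds = np.array([threshold_dict.get(g, default_threshold) for g in groups])
y_pred = (scores > thresholds).astype(int)  # -> [0 1 0 1]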

@adrinjalali adrinjalali removed this from the V.8 milestone May 3, 2022
@bramreinders97
Contributor Author

When is the deadline to have it included in the milestone? @adrinjalali

@adrinjalali
Member

Not sure, with the pace we're going, I'd say June.

@bramreinders97
Contributor Author

> Small comment: is "examples/filename.png" supposed to be part of this PR?

No

@bramreinders97
Contributor Author

I have tried to fix and resolve all comments. Some are still left open so I can reply directly to explain why I did something and/or to keep the discussion for that particular comment open. There were also some comments aimed at the maintainers, so I did not do anything about those.

I would really love for Thresholder and RejectOptionClassifier to be merged before the next milestone. The algorithms themselves did not get any feedback that’s unresolved at this point, so from that perspective it should be possible to get them merged.

However, the new changes made to the user guide still have to be reviewed, so the user guide is probably not ready for production? I'm not sure whether the algorithms themselves can be merged now and the user guide at a later stage? @adrinjalali

Then there is the matter of the plotting functions. Having the plots in a separate PR would mean that Thresholder and RejectOptionClassifier are not blocked. However, it would also mean that the user guide has to be rewritten, as the plots will not be there for now, only to be changed back to its current state later (when we finish the plotting PR).

I think that if this is the only way to get Thresholder and RejectOptionClassifier included in the milestone, we should definitely do this. However, if the plotting is the only thing blocking this PR from getting merged, and we can get that done before the milestone deadline, I’d say we try to finish it in this PR?

Wdyt is the wisest thing to do here @hildeweerts @romanlutz @adrinjalali ?

@@ -447,3 +446,16 @@ you pause about using it in the future.

.. [#11] Kimberlé Crenshaw, Mapping the margins: Intersectionality, identity politics, and violence against women of color,
Stanford Law Review, 1993, 43(6), 1241-1299.

.. _hospital_readmissions_dataset:
Member

That seems reasonable. Seems related to #1086, or is that yet another dataset?

@@ -0,0 +1,25 @@
v0.7.1
Member

^^^^^^^^^^^^
:class:`Thresholder` is an easy to interpret tool, that can help explore
trade-offs between fairness metrics and predictive performance. Where an algorithm like
:class:`ThresholdOptimizer` “blindly” satisfies
Member

Instead of "blindly" I would suggest using "simply" or "merely."
From a cursory search I'd say the internet isn't quite in unanimous agreement about this yet, but there are people who have called the use of "blindly" as an adverb ableist. That was my hunch, too. I learned quite a bit about this topic recently and was shocked how many expressions that plenty of people (including myself) use quite frequently are ableist. So much to learn, so little time 🙁

This may be a short starter on the topic: https://languageplease.org/ableism-ableist-language/

since :class:`Thresholder` is a post-processing technique, it is not necessary to alter
an already trained estimator, which might be the only feasible unfairness
mitigation approach when developers cannot influence training, due to
practical reasons or security or privacy.
Member

Suggested change:
- practical reasons or security or privacy.
+ practical reasons, security or privacy.

*******************************************
Say, you want to change the threshold of a specific group to 0.4 (all
probabilities greater than 0.4 will be predicted as 1). By default,
:class:`Thresholder` assumes a provided threshold to be positive, which
Member

What is a positive threshold?


Changing this threshold for groups 'Asian' and 'Other' shows the impact a
change in threshold of just 0.1 can have on the false negative rates and the
amount of positive predictions. Note that the purpose of this user guide is
Member

If I read this as a user I'd be super curious to know what the debate is all about 😆 Why would I not want to do this? I'm not sure if it's easy to capture that in a somewhat concise manner, though...

Member

FWIW I'm fine with merging this as is and creating an issue as a follow-up (that does not necessarily need to be done by you, of course) since this is a bit of a rabbit hole in and of itself.

.. figure:: ../auto_examples/images/user_guide_thresholder_multiple_sf_with_threshold.png
:align: center

RejectOptionClassifier
Member

Suggested change:
- RejectOptionClassifier
+ Reject Option Classifier

... y_true=Y_test,
... y_pred=Y_pred_reject_clf,
... sensitive_features=sf_test_reject_clf).by_group
race
Member

If you're using a contrived example then it's probably best not to call it "race"

"""Utilities to plot information about the output of an estimator"""


def _check_sensitive_features(sensitive_features):
Member

Should this live in the utilities file where merge_columns lives?



def plot_proba_distr(sensitive_features, Y_pred_proba, specified_sf_to_plot=None):
"""Create a stacked barplot showing the distribution of probabilities.
Member

I meant here 🙂 Thanks!

@bramreinders97
Contributor Author

Hey @adrinjalali, I would really love for Thresholder and RejectOptionClassifier to be merged before the next milestone.

I’m wondering if that is a realistic option at all, and if so what exactly I should do for that to happen? In this comment, I tried to explain the current situation as best as possible.

Member

@romanlutz romanlutz left a comment

Made another pass over Thresholder. I felt like I was just nitpicking on tiny things, so I feel like we're close to merging that part. I don't know if the same applies to Reject Option Classifier since I didn't quite get that far.

docs/user_guide/mitigation.rst (five resolved review threads, four outdated)
>>> Y_pred_proba_clf = classifier.predict_proba(X_test)[:, 1]
>>> Y_pred_clf = classifier.predict(X_test)

>>> MetricFrame(metrics=false_negative_rate,
Member

You can pass multiple metrics, so it doesn't take up more space.

metrics={"false negative rate": false_negative_rate, "selection rate": selection_rate}


>>> plot_positive_predictions(sf_test,Y_pred_switched_threshold)

.. figure:: ../auto_examples/images/user_guide_thresholder_switch_predictions.png
Member

I'm a bit confused as to where these get generated. Don't you need a .py file in examples for that to work?

Contributor Author

I have created these locally and added them to the images folder.

Member

Hmm, that immediately raises the question for me: What if we need to adjust them in some way? You can just add your code as an example which will make it show up under auto_examples/...



This can be useful if it is desired to transform the continuous output
of a regressor into a binary prediction. To illustrate how this could work,
consider a simple LinearRegression example on the
:ref:`boston housing dataset<boston_dataset>`, where the sensitive feature
Member

@fairlearn/fairlearn-maintainers do we know of a useful regression dataset where this might make sense?
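Whichever dataset we end up using, the mechanism itself is simple; a generic sketch with synthetic data (not tied to any particular dataset):

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)
reg = LinearRegression().fit(X, y)

continuous_scores = reg.predict(X)
# e.g. treat "above the median target" as the positive class
binary_pred = (continuous_scores > np.median(y)).astype(int)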




Specify threshold for multiple sensitive features
Member

A few notes on this section:

  • Don't we want to make it workable as it is with all other mitigation techniques, i.e., you specify a DataFrame with multiple columns and the code takes care of it for you?
  • Follow-up to the previous question: assuming the answer is "yes, but it's more work and I fear this PR is already huge" then let's perhaps create an issue to add that functionality rather than add user guide sections on workarounds? Or I'm even fine with the workaround if you want to put it there until we have the functionality, but let's track the issue.
  • Removing sensitive features from the training data is not necessarily obvious, or in other words, if you do it perhaps explain why you're doing it. I know we haven't been terribly consistent about this so far, so I certainly won't block this PR for it, but I feel like we need to elaborate on this somewhere centrally (not in your changes at all) since it's one of the most-asked questions.

Contributor Author

I’d say we keep it in until we have the functionality, and we create a separate PR for the functionality. Should I create that PR, and if so what exactly should I specify in the PR description?

Member

Rereading the section after a long time, I'm mostly confused by the word "tuple."
AFAIK the line

A_multiple = df.loc[:, ['race','gender']]

just creates a DataFrame with two columns, rather than 1 column with a tuple as the value, right? I'm guessing that's what you meant and I'm just being pedantic about wording.

If that's true, then I think we already have the best solution and you can ignore what I wrote above as it doesn't really apply.
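A quick toy check of that (made-up data, just to confirm the wording):

import pandas as pd

df = pd.DataFrame({'race': ['A', 'B'], 'gender': ['F', 'M'], 'score': [0.3, 0.8]})
A_multiple = df.loc[:, ['race', 'gender']]

type(A_multiple)            # pandas.core.frame.DataFrame
list(A_multiple.columns)    # ['race', 'gender'] -- two columns, not tuple-valued cells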

hildeweerts and others added 5 commits July 12, 2022 09:34
Co-authored-by: Roman Lutz <romanlutz13@gmail.com>
Co-authored-by: Roman Lutz <romanlutz13@gmail.com>
Co-authored-by: Roman Lutz <romanlutz13@gmail.com>
Co-authored-by: Roman Lutz <romanlutz13@gmail.com>
Co-authored-by: Roman Lutz <romanlutz13@gmail.com>
Contributor

@hildeweerts hildeweerts left a comment

I just noticed I still had a bunch of comments I forgot to post...

fairlearn/postprocessing/_reject_option_classifier.py (five outdated review threads, resolved)

References
----------
.. [1] F. Kamiran, A. Karim and X. Zhang,
Contributor

We've recently moved to BibTeX for references; please adjust here, see e.g. #936

Contributor Author

If I understand correctly, if I add the BibTeX reference using :footcite:kamiran2012rejectoptionclassifier, I should also include :footbibliography: at the end of the page. As this shows up in the API docs page of postprocessing, it doesn't seem right to include this in the separate code files that end up on that page. I'm not sure if I should include :footbibliography: somewhere, and if so where exactly?

Member

Correct.

References
----------

.. footbibliography::

You can use https://raw.githubusercontent.com/fairlearn/fairlearn/main/docs/user_guide/fairness_in_machine_learning.rst as an example of how to do it.

>>> sensitive_features_train = pd.DataFrame(
... [['A'], ['A'], ['A'], ['B'], ['B'], ['B']], columns=['SF1'])
>>> estimator = RandomForestClassifier(random_state=1)
>>> estimator.fit(X_train, y_train) # doctest: +SKIP
Contributor

what is the # doctest: +SKIP for?

Contributor Author

I'm not sure what exactly the error was, but something went wrong when the doctest called .fit. The results from the subsequent lines are correct though, so the code works as intended.

fairlearn/postprocessing/_reject_option_classifier.py (outdated review thread, resolved)
"critical_width should be between 0 and 1, but is {}".format(self.critical_width))

def _validate_selection_label(self):
"""Check if selection_label is 0 or 1, raise error if not."""
Contributor

Would we want to allow for other types of selection_labels as well?

Contributor Author

That might be something for the ‘follow-up’ on this PR after it gets merged as it is now?

docs/user_guide/mitigation.rst (review thread, resolved)
@bramreinders97
Contributor Author

> You can pass multiple metrics, so it doesn't take up more space.

I did it like this in order to show how to use the plot_positive_predictions function. With that in mind, is it okay like this?

@bramreinders97
Contributor Author

> FWIW I'm fine with merging this as is and creating an issue as a follow-up (that does not necessarily need to be done by you, of course) since this is a bit of a rabbit hole in and of itself.

Awesome, is there an action I should take for this to happen?

Member

@romanlutz romanlutz left a comment

If there's anything I can do to get this review process sped up, please let me know. I have a lot more availability now than I had the last few months and I'm happy to prioritize this.
