
This issue was moved to a discussion. You can continue the conversation there.


Option to make the Contribution Scores more Interpretable #2

Closed
joshdunnlime opened this issue Feb 13, 2022 · 3 comments

Comments

@joshdunnlime

In several domains, contribution scores/shape functions have real-world meaning. It would be useful to visualise these learned shape functions on their correct scale.

For example, when using an EBM to predict the effect of an individual Air Handling Unit's (AHU) heating valve on the energy used by an attached boiler system, the shape functions approximate the efficiency of the AHU, e.g. a 0.2 kWh increase in energy usage for each 1% increase in valve openness. However, as you can see, the predicted values are in kWh (energy). In this domain, negative energy doesn't make sense, and at 0% valve openness we would expect 0 extra heat usage. Therefore, the ability to rescale the shape function so that it is non-negative (in this case) would be really useful for stakeholder understanding and model interpretation.
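To make the idea concrete, here is a minimal sketch of the rescaling I have in mind, using a hypothetical shape function on synthetic data (the numbers and shape are made up, not from a real model). Shifting the shape so it is 0 at 0% openness and absorbing the shift into the intercept leaves every prediction unchanged:

```python
import numpy as np

# Hypothetical learned shape function for "valve openness" (0-100%),
# sampled on a grid; values are contributions in kWh.
openness = np.linspace(0, 100, 11)
shape = 0.2 * openness - 5.0        # learned scores; negative near 0% openness
intercept = 12.0                    # model intercept, in kWh

# Shift the shape so its value at 0% openness is exactly 0 kWh,
# and move the shift into the intercept: predictions are unchanged.
shift = shape[0]
shape_rescaled = shape - shift      # shape_rescaled[0] is now 0.0
intercept_rescaled = intercept + shift

pred_before = intercept + shape
pred_after = intercept_rescaled + shape_rescaled
assert np.allclose(pred_before, pred_after)
```

This is purely a presentational transform: the model's output is identical, but the per-feature contribution now reads as "extra kWh relative to a closed valve", which is what stakeholders expect.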

@joshdunnlime
Author

I will add plots with examples when I get the chance. Currently, I don't have my example to hand.

@xiaohk
Collaborator

xiaohk commented Feb 13, 2022

Agreed!

We conducted a user study for GAM Changer last year: interpretml/interpret#283 (we would have learned a lot from you if we had recruited you! 😁). One participant told us something similar: the predicted score should not be negative, given the definition of the value in their task. They therefore found the alignment and delete tools helpful, as they could easily "rescale" the scores in the negative region to be 0.

With GAM Changer you can rescale the shape function to be non-negative. Does GAM Changer meet your need for this example? Or do you think there could be more sophisticated methods to recalibrate contribution scores?

Negative scores should be interpreted with care, though. If you use multiple features to predict AHU energy usage, the negative-score region of one feature should be interpreted in the context of all the other features. For example, the score on valve openness can be negative if data in this region has larger positive scores on other features. With GAM Changer, you can also check correlated features with the feature panel. I would suggest selecting the negative region on valve openness and seeing if some "suspicious" features pop up in the feature panel.

(Video attachment: correlation.mp4)
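The point about correlated features can be shown with a tiny sketch (hypothetical shape functions on synthetic data, not from a real model): when two features are perfectly correlated, a constant can be moved between their shape functions without changing any prediction, so the sign of one feature's scores in a region is not identifiable on its own.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 100, 200)
x1, x2 = x, x  # two perfectly correlated features

def f1(v):
    return 0.2 * v - 5.0   # hypothetical shape with a negative region

def f2(v):
    return 0.05 * v        # hypothetical shape for the correlated feature

pred_a = f1(x1) + f2(x2)

# Move a constant c from f1 to f2: the additive decomposition changes
# (f1's negative region can disappear), but predictions on this
# perfectly correlated data are identical.
c = 5.0
pred_b = (f1(x1) + c) + (f2(x2) - c)
assert np.allclose(pred_a, pred_b)
```

In other words, a negative region on valve openness may simply be an artifact of how the model split credit with a correlated feature, which is why inspecting the feature panel for that region is worthwhile.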

@xiaohk
Collaborator

xiaohk commented Feb 13, 2022

Thanks for raising this issue. I think it would be a good topic to discuss, and I will convert this issue into a discussion.

@interpretml interpretml locked and limited conversation to collaborators Feb 13, 2022
@xiaohk xiaohk converted this issue into discussion #3 Feb 13, 2022

