update release process notes and readme
paulbkoch committed Feb 13, 2024
1 parent 979d93f commit fe334e8
Showing 2 changed files with 15 additions and 12 deletions.
README.md: 3 additions & 0 deletions
@@ -596,6 +596,7 @@ We also build on top of many great packages. Please check them out!

# External links

- [Explainable AI: unlocking value in FEC operations](https://analytiqal.nl/2024/01/22/fec-value-from-explainable-ai/)
- [Interpretable Machine Learning – Increase Trust and Eliminate Bias](https://ficonsulting.com/insight-post/interpretable-machine-learning-increase-trust-and-eliminate-bias/)
- [Machine Learning Interpretability in Banking: Why It Matters and How Explainable Boosting Machines Can Help](https://www.prometeia.com/en/trending-topics-article/machine-learning-interpretability-in-banking-why-it-matters-and-how-explainable-boosting-machines-can-help)
- [Interpretable or Accurate? Why Not Both?](https://towardsdatascience.com/interpretable-or-accurate-why-not-both-4d9c73512192)
@@ -615,6 +616,7 @@ We also build on top of many great packages. Please check them out!

# Papers that use or compare EBMs

- [DimVis: Interpreting Visual Clusters in Dimensionality Reduction With Explainable Boosting Machine](https://arxiv.org/pdf/2402.06885.pdf)
- [Distill knowledge of additive tree models into generalized linear models](https://detralytics.com/wp-content/uploads/2023/10/Detra-Note_Additive-tree-ensembles.pdf)
- [Explainable Boosting Machines with Sparsity - Maintaining Explainability in High-Dimensional Settings](https://arxiv.org/abs/2311.07452)
- [Cost of Explainability in AI: An Example with Credit Scoring Models](https://link.springer.com/chapter/10.1007/978-3-031-44064-9_26)
@@ -708,6 +710,7 @@ We also build on top of many great packages. Please check them out!
- [On the Physical Nature of Lya Transmission Spikes in High Redshift Quasar Spectra](https://arxiv.org/pdf/2401.04762.pdf)
- [GRAND-SLAMIN’ Interpretable Additive Modeling with Structural Constraints](https://openreview.net/pdf?id=F5DYsAc7Rt)
- [Identification of groundwater potential zones in data-scarce mountainous region using explainable machine learning](https://www.sciencedirect.com/science/article/pii/S0022169423013598)

# Books that cover EBMs

- [Machine Learning for High-Risk Applications](https://www.oreilly.com/library/view/machine-learning-for/9781098102425/)
scripts/release_process.txt: 12 additions & 12 deletions
@@ -39,7 +39,7 @@
- conda env remove --name interpret_bdist && conda create --yes --name interpret_bdist python=3.10 && conda activate interpret_bdist
- pip install interpret_core-*-py3-none-any.whl[debug,notebook,plotly,lime,sensitivity,shap,linear,treeinterpreter,dash,skoperules,testing]
- cd <REPO_ROOT>
- - cd examples/python
+ - cd docs/interpret/python/examples
- pip install jupyter
- jupyter notebook
- open all the example notebooks, run them, and check the visualizations
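A note on the `pip install interpret_core-*-py3-none-any.whl[...]` step above: on shells that treat square brackets as glob characters (zsh in particular), the extras suffix must reach pip inside quotes. A minimal sketch of building the requirement string, using a placeholder wheel filename (the version number below is an assumption, not the real release):

```shell
# Sketch only: construct the "wheel + extras" requirement string that
# pip expects. The wheel filename here is a placeholder version.
wheel="interpret_core-0.0.0-py3-none-any.whl"
extras="debug,notebook,plotly"
spec="${wheel}[${extras}]"
echo "$spec"   # -> interpret_core-0.0.0-py3-none-any.whl[debug,notebook,plotly]
# then install with the spec quoted so the shell cannot expand it:
#   pip install "$spec"
```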
@@ -57,7 +57,7 @@ set_visualize_provider(InlineProvider())
- IN WINDOWS: get the Visual studio environment with: "C:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Auxiliary\Build\vcvars64.bat"
- pip install interpret-core-*.tar.gz[debug,notebook,plotly,lime,sensitivity,shap,linear,treeinterpreter,dash,skoperules,testing]
- cd <REPO_ROOT>
- - cd examples/python/
+ - cd docs/interpret/python/examples
- pip install jupyter
- jupyter notebook
- open all the example notebooks, run them, and check the visualizations
@@ -173,7 +173,7 @@ set_visualize_provider(InlineProvider())
- conda env remove --name interpret_conda && conda create --yes --name interpret_conda python=3.10 && conda activate interpret_conda
- conda install --yes -c conda-forge interpret-core psutil ipykernel ipython plotly lime SALib shap dill dash dash-core-components dash-html-components dash-table dash_cytoscape gevent requests
- cd <REPO_ROOT>
- - cd examples/python
+ - cd docs/interpret/python/examples
- pip install jupyter
- jupyter notebook
- open all the example notebooks, run them, and check the visualizations
@@ -196,21 +196,21 @@ set_visualize_provider(InlineProvider())
https://pypi.org/project/interpret/#files

- test PyPI release on colab:
- - https://githubtocolab.com/interpretml/interpret/blob/develop/examples/python/Interpretable_Classification_Methods.ipynb
- - https://githubtocolab.com/interpretml/interpret/blob/develop/examples/python/Interpretable_Regression_Methods.ipynb
- - https://githubtocolab.com/interpretml/interpret/blob/develop/examples/python/Differentially_Private_EBMs.ipynb
- - https://githubtocolab.com/interpretml/interpret/blob/develop/examples/python/Merging_EBM_Models.ipynb
- - https://githubtocolab.com/interpretml/interpret/blob/develop/examples/python/EBM_Importances.ipynb
- - https://githubtocolab.com/interpretml/interpret/blob/develop/examples/python/Explaining_Blackbox_Classifiers.ipynb
- - https://githubtocolab.com/interpretml/interpret/blob/develop/examples/python/Explaining_Blackbox_Regressors.ipynb
- - https://githubtocolab.com/interpretml/interpret/blob/develop/examples/python/Prototype_Selection_with_SPOTgreedy.ipynb
+ - https://githubtocolab.com/interpretml/interpret/blob/develop/docs/interpret/python/examples/Interpretable_Classification_Methods.ipynb
+ - https://githubtocolab.com/interpretml/interpret/blob/develop/docs/interpret/python/examples/Interpretable_Regression_Methods.ipynb
+ - https://githubtocolab.com/interpretml/interpret/blob/develop/docs/interpret/python/examples/Differentially_Private_EBMs.ipynb
+ - https://githubtocolab.com/interpretml/interpret/blob/develop/docs/interpret/python/examples/Merging_EBM_Models.ipynb
+ - https://githubtocolab.com/interpretml/interpret/blob/develop/docs/interpret/python/examples/EBM_Importances.ipynb
+ - https://githubtocolab.com/interpretml/interpret/blob/develop/docs/interpret/python/examples/Explaining_Blackbox_Classifiers.ipynb
+ - https://githubtocolab.com/interpretml/interpret/blob/develop/docs/interpret/python/examples/Explaining_Blackbox_Regressors.ipynb
+ - https://githubtocolab.com/interpretml/interpret/blob/develop/docs/interpret/python/examples/Prototype_Selection_with_SPOTgreedy.ipynb
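Each Colab link above is a mechanical rewrite of the notebook's GitHub path with the host swapped to githubtocolab.com, so a link for any notebook in the examples directory can be generated the same way. A small sketch:

```shell
# Sketch only: build a githubtocolab.com link from a notebook filename
# in the docs/interpret/python/examples directory of the develop branch.
base="https://githubtocolab.com/interpretml/interpret/blob/develop/docs/interpret/python/examples"
nb="Interpretable_Classification_Methods.ipynb"
echo "${base}/${nb}"
```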

- test PyPI release locally:
- open anaconda console window
- conda env remove --name interpret_pypi && conda create --yes --name interpret_pypi python=3.10 && conda activate interpret_pypi
- pip install interpret lime # remove lime if we remove lime from example notebooks
- cd <REPO_ROOT>
- - cd examples/python
+ - cd docs/interpret/python/examples
- pip install jupyter
- jupyter notebook
- open all the example notebooks, run them, and check the visualizations
