Is your feature request related to a problem? Please describe.
I'm wondering if it would be possible to "bring your own" Spark model for use with the interpretability functions, like ICETransformer()? For instance, we have several tools that allow us to run inference on Spark data frames by calling a .predict() or .transform() method. Is it possible to wrap such a non-PySpark-MLlib model so that we could still use this package for generating explanations and ICE plots in Spark?
Describe the solution you'd like
Suppose I have a custom model (e.g., some kind of scoring code that operates on Spark data frames, transforming the data by adding prediction columns). I'd like to wrap said model in a way that would allow me to call the ICETransformer() method, e.g., where custom_model has a .transform() method, similar to PySpark MLlib models, that returns the input data with additional prediction columns.
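A sketch of the kind of wrapper imagined here (names like CustomModelWrapper and custom_model are illustrative, not an existing API; whether ICETransformer would actually accept such an object is exactly the question being asked):

```python
# Illustrative only: a duck-typed wrapper exposing the Transformer-style
# interface (.transform returning the input plus a prediction column)
# that the interpretability functions expect from a fitted model.
class CustomModelWrapper:
    def __init__(self, custom_model, prediction_col="prediction"):
        self.custom_model = custom_model
        self.prediction_col = prediction_col

    def transform(self, df):
        # Delegate to the custom model's own scoring method, which is
        # assumed to return df with an added prediction column.
        return self.custom_model.predict(df)

# Imagined usage (ICETransformer is from synapse.ml.explainers):
#   ice = ICETransformer(model=CustomModelWrapper(custom_model),
#                        targetCol="prediction")
```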
Hey @brandongreenwell-8451 👋!
Thank you so much for reporting the issue/feature request 🚨.
Someone from SynapseML Team will be looking to triage this issue soon.
We appreciate your patience.
I assume you have a custom model, but it is not implemented as a Spark Transformer object. If that's the case, I'm afraid the current implementation of the explainers does not support such a scenario. The explainer logic is implemented entirely in Scala, and I don't see a way to bring a Python model to the JVM side for interpretation.
Hi @memoryz, thanks for the reply, and that makes sense. I was more so asking about models that do operate on Spark data frames via a .predict() or .transform() method. For example, DataRobot and H2O both provide scoring code for models that can be used to make predictions on Spark data frames (e.g., in Python/pyspark). Is it possible to create a wrapper of some sort that would allow us to use it with some of SynapseML's RAI functions, like ICE curves?
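For context, the scoring-code pattern described above usually attaches predictions column-wise to each batch of rows. A minimal sketch of that contract, assuming a hypothetical ExternalScorer standing in for DataRobot/H2O-style scoring code (the class, column names, and toy scoring rule are all illustrative):

```python
import pandas as pd

# Hypothetical stand-in for external scoring code (e.g., DataRobot or
# H2O exported scorers); the real objects expose similar predict APIs.
class ExternalScorer:
    def predict(self, pdf: pd.DataFrame) -> pd.Series:
        # Toy rule in place of real scoring logic.
        return (pdf["x1"] + pdf["x2"]).rename("prediction")

scorer = ExternalScorer()

def score_batch(pdf: pd.DataFrame) -> pd.DataFrame:
    """Return the input batch with an added prediction column.

    On a Spark DataFrame, a function with this shape could be applied
    batch-wise via DataFrame.mapInPandas (PySpark >= 3.0), preserving
    the .transform()-like contract: input rows plus a prediction column.
    """
    out = pdf.copy()
    out["prediction"] = scorer.predict(pdf)
    return out
```

As the reply above notes, though, wrapping this in Python alone would not make it visible to SynapseML's Scala-side explainer logic.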