
shadow model inference #1051

Answered by yubozhao
jiyer2016 asked this question in General
Sep 1, 2020 · 2 comments · 4 replies

It's pretty great to see you guys are doing more advanced ML serving use cases.

Right now, I think it is challenging to do shadow inference well with BentoService.

  1. I don't think BentoML has an out-of-the-box capability to mark a model to run in a low-priority thread. @parano might have a good idea for this; I am going to defer to him.

  2. You can leverage BentoML's logging when you deploy both models in the same service:

from bentoml import BentoService, api
from bentoml.adapters import DataframeInput

class MyService(BentoService):
    def prediction_current(self, df):
        return self.artifacts.current_model.predict(df)

    def prediction_shadow(self, df):
        return self.artifacts.shadow_model.predict(df)

    @api(input=DataframeInput(), batch=True)
    def predict(self, df):…
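
To make the shadow pattern above concrete, here is a minimal sketch of what the truncated predict method could look like. It is an assumption, not the original code: the artifact names current_model and shadow_model, the SklearnModelArtifact framework choice, and the use of the standard library logging module are all illustrative; the only requirement is that the shadow model's output is logged and never returned to the caller.

import logging

from bentoml import BentoService, api, artifacts
from bentoml.adapters import DataframeInput
from bentoml.frameworks.sklearn import SklearnModelArtifact

logger = logging.getLogger(__name__)

# Hypothetical artifact names; register whichever framework artifacts you use.
@artifacts([
    SklearnModelArtifact("current_model"),
    SklearnModelArtifact("shadow_model"),
])
class MyService(BentoService):
    @api(input=DataframeInput(), batch=True)
    def predict(self, df):
        # The current model's prediction is what the caller receives.
        current = self.artifacts.current_model.predict(df)

        # Run the shadow model on the same input, but only log its output,
        # so a failure in the shadow model never affects the response.
        try:
            shadow = self.artifacts.shadow_model.predict(df)
            logger.info("shadow_prediction=%s", list(shadow))
        except Exception:
            logger.exception("shadow model prediction failed")

        return current

Logging the shadow prediction rather than returning it keeps the response contract unchanged; you can then compare the logged shadow outputs against the served ones offline.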

Answer selected by jiyer2016