# Serving your pipeline with fastdeploy: an example

- Create a recipe folder with the following structure:

```
recipe_folder/
├── example.py
├── predictor.py
├── requirements.txt (optional)
└── extras.sh (optional)
```
- `example.py` exposes a name for your app or model and a list of example inputs:

```python
name = "your_app_or_model_name"

example = [
    example_object_1,
    example_object_2,
]
```
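As a concrete illustration, an `example.py` for a hypothetical echo-style text recipe might look like the following (the name and inputs are illustrative, not part of fastdeploy itself):

```python
# example.py -- hypothetical "echo" recipe.
name = "echo"

# Example inputs should look exactly like the real requests your
# predictor will receive; fastdeploy uses them to warm up and
# sanity-check the pipeline.
example = [
    "hello world",
    "a second example input",
]
```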
- `predictor.py` contains whatever code and imports you need to load your model and make predictions. It must define a `predictor` function with exactly the signature below, where `batch_size` is the optimal batch size for your model, the length of `inputs` may or may not equal `batch_size`, and `len(outputs) == len(inputs)`:

```python
# Whatever code and imports you need to load your model and make predictions

def predictor(inputs, batch_size=1):
    # must return one output per input: len(outputs) == len(inputs)
    return outputs
```
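A minimal sketch of a working `predictor.py`, assuming a trivial "echo" model where uppercasing stands in for real inference (the chunking loop and `batch_size=4` default are illustrative choices, not requirements):

```python
# predictor.py -- hypothetical echo predictor.
# A real recipe would load its model here, at import time, so the
# loop process pays the loading cost only once.

def predictor(inputs, batch_size=4):
    # fastdeploy may pass fewer or more items than batch_size, so
    # process inputs in batch_size-sized chunks regardless.
    outputs = []
    for i in range(0, len(inputs), batch_size):
        batch = inputs[i:i + batch_size]
        # Stand-in for model inference on one batch:
        outputs.extend(s.upper() for s in batch)
    # The contract: one output per input.
    assert len(outputs) == len(inputs)
    return outputs
```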
- `requirements.txt` (optional): all Python dependencies for your pipeline

- `extras.sh` (optional): any bash commands to run before `requirements.txt` is installed

- Start the loop:

```shell
fastdeploy --loop --recipe recipes/echo_chained
```
- Start the server:

```shell
fastdeploy --rest --recipe recipes/echo_chained
```

## Chained recipe example

- A chained recipe has multiple `predictor_X.py` files that run sequentially: `predictor_1.py` is called first, then `predictor_2.py`, and so on.

- Each `predictor_X.py` must define a `predictor` function as described above.

- Each `predictor_X.py` runs as a separate process, so the stages can use different virtualenvs.
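The chaining contract can be sketched in plain Python: each stage's outputs become the next stage's inputs, and every stage preserves length. The two stages below are hypothetical stand-ins, and they are renamed `predictor_1`/`predictor_2` only so they fit in one file; in a real recipe each file defines its own `predictor` function:

```python
# predictor_1.py (sketch): tokenize each input string.
def predictor_1(inputs, batch_size=1):
    return [s.split() for s in inputs]

# predictor_2.py (sketch): count the tokens produced by stage 1.
def predictor_2(inputs, batch_size=1):
    return [len(tokens) for tokens in inputs]

# What the fastdeploy loops do conceptually: feed each stage's
# outputs to the next stage, in order, preserving length throughout.
def run_chain(inputs):
    stage_1 = predictor_1(inputs)
    return predictor_2(stage_1)
```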

- Start all the loops:

```shell
fastdeploy --loop --recipe recipes/echo_chained --config "predictor_name:predictor_1.py"

fastdeploy --loop --recipe recipes/echo_chained --config "predictor_name:predictor_2.py"
```
- Start the server:

```shell
fastdeploy --rest --recipe recipes/echo_chained
```