
GSoC_2020_project_deployments


Integration with frameworks for model deployments

Shogun offers a large number of ML models, but deploying them into a production environment currently requires adding libshogun to the runtime environment. A handful of solutions from various vendors already try to standardize how ML models should be exported, so that once a model has been exported it can be used without the training library being present in the runtime environment.

The aim of this project is to integrate with some of these standards so that Shogun models can be easily used in production environments.

Mentors

Difficulty & Requirements

Medium.

You need to know:

  • C++
  • protobuf

Description

Both CoreML and TensorFlow Serving try to solve the problem of how one serves and uses a trained model in production. In both cases, serious engineering effort has been invested to create a framework for using ML models in a reliable, efficient way.

Ideally, one would start by integrating with CoreML, namely adding support for exporting Shogun models in the protobuf format that CoreML specifies. For details, check the imported protobuf files: https://github.com/shogun-toolbox/shogun/tree/develop/src/interfaces/coreml. The easiest starting point would be to add support for the normalizers and the trained SVM models; a rough sketch of such an exporter follows.
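As a rough illustration, a minimal exporter for a trained binary RBF-kernel SVM could fill in the C++ classes generated from the imported CoreML protos and serialize them to a `.mlmodel` file. This is a sketch, not a working implementation: the message and field names below (`Model`, `SupportVectorClassifier`, `RBFKernel`, etc.) follow the CoreML specification protos linked above, but the exact accessors, and the mapping from Shogun's SVM internals (alphas, bias, support vectors), should be verified against the actual generated headers.

```cpp
// Sketch: export a trained binary RBF-kernel SVM into CoreML's protobuf
// format. Message/field names assume the classes generated from the
// CoreML .proto files imported into src/interfaces/coreml; verify them
// against the generated headers before relying on this.
#include <fstream>
#include <string>
#include <vector>

#include "Model.pb.h"  // generated from the imported CoreML protos

bool export_svm_to_coreml(
    const std::vector<std::vector<double>>& support_vectors,
    const std::vector<double>& alphas,  // signed coefficients y_i * alpha_i
    double bias, double gamma, const std::string& path)
{
    CoreML::Specification::Model model;
    model.set_specificationversion(1);

    auto* svc = model.mutable_supportvectorclassifier();

    // RBF kernel with the gamma used at training time.
    svc->mutable_kernel()->mutable_rbfkernel()->set_gamma(gamma);

    // Dense support vectors, one DenseVector per support vector.
    auto* dense = svc->mutable_densesupportvectors();
    for (const auto& sv : support_vectors)
    {
        auto* vec = dense->add_vectors();
        for (double v : sv)
            vec->add_values(v);
    }

    // A binary classifier needs a single set of coefficients.
    auto* coeffs = svc->add_coefficients();
    for (double a : alphas)
        coeffs->add_alpha(a);

    // CoreML stores rho, which corresponds to the negated Shogun bias
    // (the sign convention should be double-checked).
    svc->add_rho(-bias);

    // A complete exporter would also fill in model.description()
    // (input/output features) and the class labels; omitted here.

    // A .mlmodel file is just this message in binary form.
    std::ofstream out(path, std::ios::binary);
    return model.SerializeToOstream(&out);
}
```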

The other interesting framework to integrate with would be TF Serving, which allows serving models via gRPC. To achieve this, read up on the architecture of TF Serving and check how to define new servables for TF Serving here; a sketch of what a custom servable might look like follows.
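For orientation, a custom servable roughly means wrapping a deserialized Shogun model in TF Serving's `Loader` interface, so the server's lifecycle management (loading, resource estimation, unloading) can treat it like any other servable. The `Loader` API below is from `tensorflow_serving/core/loader.h` as of the time of writing and may differ across versions; the `ShogunModel` type and `LoadShogunModelFromFile` call are hypothetical placeholders for whatever deserialization Shogun ends up exposing.

```cpp
// Sketch of a custom TF Serving servable for Shogun models, based on
// the Loader interface in tensorflow_serving/core/loader.h. ShogunModel
// and LoadShogunModelFromFile are hypothetical placeholders.
#include <memory>
#include <string>
#include <utility>

#include "tensorflow_serving/core/loader.h"
#include "tensorflow_serving/util/any_ptr.h"

// Hypothetical handle to a deserialized Shogun model.
struct ShogunModel { /* deserialized model state */ };
std::unique_ptr<ShogunModel> LoadShogunModelFromFile(const std::string& path);

class ShogunLoader : public tensorflow::serving::Loader {
 public:
  explicit ShogunLoader(std::string path) : path_(std::move(path)) {}

  // Report how much RAM/CPU the servable will need so the server can
  // decide whether it fits; an empty estimate is the naive choice.
  tensorflow::Status EstimateResources(
      tensorflow::serving::ResourceAllocation* estimate) const override {
    estimate->Clear();
    return tensorflow::Status();
  }

  // Deserialize the Shogun model from disk.
  tensorflow::Status Load() override {
    model_ = LoadShogunModelFromFile(path_);
    return tensorflow::Status();
  }

  void Unload() override { model_.reset(); }

  // Hand out a type-erased pointer to the model; request handlers
  // downcast it back to ShogunModel.
  tensorflow::serving::AnyPtr servable() override { return model_.get(); }

 private:
  std::string path_;
  std::unique_ptr<ShogunModel> model_;
};
```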

As a bonus task, one could integrate Shogun into AWS SageMaker as well. The prerequisite for this would be to have the Shogun PyPI package already available.
