
Modal.com processing backend? #103

Open
DaniruKun opened this issue Feb 13, 2023 · 2 comments
Labels: type/feature Issue or PR related to a new feature

@DaniruKun

Describe the feature you'd like to request

It would be great if you could choose which "back end" to use for processing; at the moment it seems to rely on the host having a dGPU.

However, cloud GPU platforms like Modal make it possible to massively speed up transcription by spinning up hundreds of containers in parallel.

Describe the solution you'd like

Something like https://github.com/modal-labs/modal-examples/tree/26c911ba880a1311e748c6b01f911d065aed4cc4/06_gpu_and_ml/whisper_pod_transcriber, where the API facade remains the same, but the work queue splits the large audio files into chunks and hands the chunks to Modal containers for processing.
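
For illustration, a minimal sketch of what that fan-out could look like with Modal's Python SDK. The app name, GPU selection, model size, and the chunking step are assumptions for the example, not anything that exists in this project, and the Modal API shown is the one used in their public examples around this time:

```python
# Hypothetical sketch: fan Whisper transcription of pre-chunked audio out to Modal containers.
import modal

stub = modal.Stub("parallel-whisper")

image = (
    modal.Image.debian_slim()
    .apt_install("ffmpeg")          # whisper needs ffmpeg to decode audio
    .pip_install("openai-whisper")
)

@stub.function(image=image, gpu="any")
def transcribe_chunk(chunk: bytes) -> str:
    """Transcribe one audio chunk inside a GPU container."""
    import tempfile
    import whisper

    with tempfile.NamedTemporaryFile(suffix=".wav") as f:
        f.write(chunk)
        f.flush()
        model = whisper.load_model("base")
        return model.transcribe(f.name)["text"]

@stub.local_entrypoint()
def main():
    # The existing work queue would produce these chunks by splitting the big file.
    chunks = [open(p, "rb").read() for p in ("chunk_000.wav", "chunk_001.wav")]
    # .map() runs one container per chunk in parallel and yields results in order.
    print(" ".join(transcribe_chunk.map(chunks)))
```

The local side of the facade stays unchanged; it only has to split the audio, call the mapped function, and stitch the transcripts back together in chunk order.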

@DaniruKun DaniruKun added the type/feature Issue or PR related to a new feature label Feb 13, 2023
@3ddyBoi (Member) commented Feb 13, 2023

@auduny is this something you could look at?

@olekenneth (Collaborator)

That's really not a use case for us atm, but feel free to open a PR with this feature.
