Whisper support #180
Supporting encoder-decoder models is on our roadmap, as mentioned in #187. Feel free to join the discussion and potentially contribute!
+1 for this feature
+2 for this feature
+3 for this feature
+4 for this feature
+555
+1
+1
monitoring
@zhuohan123 I am working on Whisper support.
NO WAY!!!!!!!!!!!!!!!!!!! THAT WILL BE AWESOME!!!!!!!!!!!!!!!!!!!!!
I am working on this PR, and will soon submit the draft.
THIS IS GOING TO BE HUGE, THX!
Hey @libratiger, together with @afeldman-nm I am now working full-time on the same target. Would you like to sync? It would be more efficient to share knowledge, rather than develop the same thing in two silos.
You're right. I've just discovered the discussion about T5 in #187 (comment), where there are differing opinions on encoder-decoder models. Perhaps things will improve after that PR is merged?
@libratiger the current status is as follows: Neural Magic has finalized the original T5 PR, and we are now benchmarking the solution. In parallel, we are also developing support for Whisper.
@dbogunowicz any update on this issue? Looking forward to it.
Hi! I am working on Whisper on our team's fork: neuralmagic#147
@dbogunowicz I ran the feature/demian/Whisper branch to try the Whisper model and hit an error: vllm/worker/model_runner.py, line 477, in prepare_decode
@junior-zsy fixed for now. Please remember that we are still working on that PR, so it is very much in a WIP state. Let me explicitly set the appropriate PR flag.
@dbogunowicz Ok, thank you. Hope it can be used soon.
same here, this is going to be really cool!
@dbogunowicz thanks for your work on Whisper! Since there is clearly interest in this feature and its completion timeline, I want to add the context that Whisper support takes a dependency on encoder/decoder support, which is also WIP (it currently works partially but is not quite complete). I expect to complete encoder/decoder support soon. JFYI for anyone interested in timelines.
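For anyone wondering why Whisper is gated on encoder/decoder infrastructure: a toy sketch (not vLLM code; all names here are made up for illustration) of why encoder-decoder inference needs different scheduling than the decoder-only models vLLM already serves. The encoder runs exactly once per request over the whole audio input, while the decoder generates tokens autoregressively and cross-attends to the cached encoder states at every step, so the serving engine must manage both a per-request encoder output and a growing decoder KV state.

```python
# Toy illustration of encoder-decoder inference flow (NOT vLLM internals).
# encode() stands in for Whisper's audio encoder; decode_step() stands in
# for one decoder step with cross-attention to the fixed encoder states.

def encode(audio_features):
    # One full pass over the input; the result is fixed for the request.
    return [x * 2 for x in audio_features]  # pretend hidden states

def decode_step(prev_tokens, encoder_states):
    # Pretend next-token computation: depends on all previous tokens
    # (self-attention) and on the encoder states (cross-attention).
    return (sum(prev_tokens) + sum(encoder_states)) % 7

def generate(audio_features, max_new_tokens=4, bos=1):
    enc = encode(audio_features)  # computed exactly once per request
    tokens = [bos]
    for _ in range(max_new_tokens):
        # enc is reused at every step, never recomputed or extended;
        # only the decoder-side token history grows.
        tokens.append(decode_step(tokens, enc))
    return tokens
```

Decoder-only serving only has to grow one KV cache per request; this two-phase shape is the extra bookkeeping the encoder/decoder PRs are adding.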
+1
Hi, any update on serving faster-whisper via vLLM?
Hi @twicer-is-coder, Whisper (or any variant thereof) is high on the list of models to add once infrastructure support is in; you can see the roadmap for infrastructure support in this PR:
Is support for Whisper on the roadmap? Something like https://github.com/ggerganov/whisper.cpp would be great.