Issues: bentoml/OpenLLM
bug: error while installing vLLM via pip install "openllm[vllm]"
#967 opened Apr 25, 2024 by Developer-atomic-amardeep
bug: WARNING: openllm 0.4.44 does not provide the extra 'gemma'
#965 opened Apr 24, 2024 by infinite-Joy
bug: An exception occurred while instantiating runner 'llm-mistral-runner'
#952 opened Apr 14, 2024 by billy-le
Deploying an LLM on an on-premises server so users can access it locally from their work laptops via web browser
#934 opened Mar 18, 2024 by sanket038
I'm having trouble getting started with openllm; I don't want to use conda, and I have WSL2
#929 opened Mar 11, 2024 by Lightwave234
bug: Error when sending a POST request to the BentoML container service
#904 opened Feb 13, 2024 by hahmad2008
bug: Requests with "use_beam_search: true" fail with an unclear exception message.
#903 opened Feb 13, 2024 by yan-virin
How to update the prompt template without changing the openllm-core config
#895 opened Feb 11, 2024 by hahmad2008