Issues: ollama/ollama

Issues list

#4163: llava broke in new version v0.1.33 [bug] (opened May 5, 2024 by VideoFX)
#4157: Bunny-Llama-3-8B-V [model request] (opened May 5, 2024 by rawzone)
#4142: More quants for command-r-plus, please? [model request] (opened May 3, 2024 by chigkim)
#4140: Original 2GB and 4GB Jetson Nano Developer Kits (not the Orin version): GPU possible? [feature request, nvidia] (opened May 3, 2024 by dtischler)
#4139: Only 1 GPU found, regression 1.32 -> 1.33 [bug, nvidia] (opened May 3, 2024 by AlexLJordan)
#4136: [Feature] Rapid Modelfile updates [feature request] (opened May 3, 2024 by Arcitec)
#4130: Docker build is failing because libcurl-httpd24.so.4 cannot be loaded [bug, docker] (opened May 3, 2024 by SoniCoder)
#4128: Normalization of output from embedding model [feature request] (opened May 3, 2024 by hagemon)
#4127: Add LLaVA++ model [model request] (opened May 3, 2024 by ddpasa)