Issues: ollama/ollama
#4165 [bug] OLLAMA_NUM_PARALLEL and multi-modal models lead to a "failed processing images" error (opened May 5, 2024 by jmorganca)
#4161 [feature request] Implement an LRU cache for GPU VRAM when running inference on MoE models (opened May 5, 2024 by davinwang)
#4158 [bug] On Windows, with version 0.1.33, assembling two models creates a path error; version 0.1.32 works correctly (opened May 5, 2024 by amonpaike)
#4156 [bug] Can't delete all characters when typing non-English characters (opened May 4, 2024 by ktkalpha)
#4155 [feature request] Add an option in the install scripts to auto-set the OLLAMA_HOST environment variable (opened May 4, 2024 by centopw)
#4152 [bug] v0.1.33 can't load gemma:7b-instruct-v1.1-fp16 due to "failed to create context with model" (opened May 4, 2024 by MarkWard0110)
#4151 [bug] mixtral:8x22b causes intermittent system freezes on Mac and runs very slowly (opened May 4, 2024 by joliss)
#4150 [feature request] Support the NPUs and GPUs provided by Intel Ultra processors (opened May 4, 2024 by Perry961002)
#4148 [bug] Importing a Mistral finetune into Ollama fails with "invalid file magic" (opened May 4, 2024 by BruceMacD)
#4142 [model request] More quants for command-r-plus, please? (opened May 3, 2024 by chigkim)
#4140 [feature request, nvidia] Original 2 GB and 4 GB Jetson Nano Developer Kits (not the Orin version): is GPU support possible? (opened May 3, 2024 by dtischler)
#4139 [bug, nvidia] Only 1 GPU found: regression 1.32 -> 1.33 (opened May 3, 2024 by AlexLJordan)
#4137 [model request] Support for HyperGAI/HPT1_5-Air-Llama-3-8B-Instruct-multimodal (opened May 3, 2024 by Extremys)
#4136 [feature request] Rapid Modelfile updates (opened May 3, 2024 by Arcitec)
#4134 [bug, windows] WithSecure quarantined ollama_llama_server.exe as a harmful file / malware (opened May 3, 2024 by sjdevcode)
#4133 [feature request] "which/max" command-line options to help with sizing (opened May 3, 2024 by bigattichouse)
#4132 [feature request, ollama.com] Model run command not rendered on mobile (opened May 3, 2024 by userforsource)
#4131 [bug] Error "timed out waiting for llama runner to start" on larger models (opened May 3, 2024 by CalvesGEH)
#4130 [bug, docker] Docker build fails because libcurl-httpd24.so.4 cannot be loaded (opened May 3, 2024 by SoniCoder)
#4128 [feature request] Normalization of output from embedding models (opened May 3, 2024 by hagemon)
#4126 [bug] Some Ollama models apparently affected by the llama.cpp BPE pretokenization issue (opened May 3, 2024 by sealad886)