Issues: ollama/ollama
#4904 Need Support: Local Model Parameters Override Like Llama.cpp (opened Jun 7, 2024 by DirtyKnightForVi)
#4903 [bug] Intel/neural-chat-7b-v3 prompts itself (opened Jun 7, 2024 by 0x2E16CF0F)
#4902 [bug] Performance issue with CPU-only inference, starting in 0.1.39 through the latest version to date (opened Jun 7, 2024 by raymond-infinitecode)
#4901 [bug] Error: pull model manifest: ssh: no key found (opened Jun 7, 2024 by 674316)
#4899 [bug] Failed to get max tokens for LLM with name qwen2:7b-instruct-fp16 with ollama (opened Jun 7, 2024 by wenlong1234)
Add "use_mmap" to environment variable
feature request
New feature or request
#4895
opened Jun 7, 2024 by
sisi399
#4894 [feature request] Feature: Allow setting OLLAMA_NUM_PARALLEL per model (opened Jun 7, 2024 by sammcj)
#4893 [bug] Error loading llama server: "llama runner process has terminated: exit status 0xc0000409" (opened Jun 7, 2024 by Hsiayukoo)
#4888 [feature request] Is there a way to implement API-key authentication with the Ollama client? (opened Jun 7, 2024 by claudiocassimiro)
#4887 [bug] qwen2:7b-instruct is not running correctly; it seems the model is not loaded correctly (opened Jun 7, 2024 by henryclw)
#4884 [bug] No proper response when IPEX-LLM is set up with Ollama for Intel CPU/GPU (opened Jun 6, 2024 by filip-777)
#4882 [bug] macOS Ollama v0.1.41 app won't install the command-line tool (opened Jun 6, 2024 by saimgulay)
#4880 [feature request] Extend ollama show command (opened Jun 6, 2024 by royjhan)
#4878 [bug] generate calls with llava:latest randomly come back incomplete with format='json', stream=False (opened Jun 6, 2024 by mfriedman-pr; see the call sketch after this list)
"ollama run" command loads until timeout
bug
Something isn't working
#4861
opened Jun 6, 2024 by
Vassar-HARPER-Project
#4855 [bug] [needs more info] Environment variable OLLAMA_MAX_LOADED_MODELS does not seem to work (opened Jun 6, 2024 by troy256)
#4851 [feature request] Add strings module from Go for template processing (opened Jun 6, 2024 by qbit-)
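
For context on the generate parameters named in #4878, here is a minimal sketch of a non-streaming JSON-mode call with the ollama Python client. The model tag matches the issue title; the prompt text is an illustrative placeholder and is not taken from the issue.

# Minimal sketch of a generate call with format='json' and stream=False,
# as referenced in issue #4878. Assumes a local Ollama server is running
# and the llava:latest model has been pulled; the prompt is a placeholder.
import ollama

response = ollama.generate(
    model="llava:latest",                          # model named in the issue title
    prompt="Describe the scene and reply as JSON.",
    format="json",                                 # ask the server for valid JSON output
    stream=False,                                  # return one complete response object
)

# The generated text lives in the 'response' field of the result.
print(response["response"])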