Issues: ggerganov/llama.cpp
- #7176: selects too many cores by default on Orange Pi 5 (2x slower) [bug-unconfirmed] (opened May 9, 2024 by calculatortamer)
- #7175: Messy CUDA graph error output on Mixtral/MoE models [bug-unconfirmed] (opened May 9, 2024 by CISC)
- #7174: Should we add an autolabeler for PRs? [devops, enhancement, help wanted] (opened May 9, 2024 by mofosyne)
- #7170: Add support for Mistral Dutch and Armenian models: Tweeties/tweety-7b-dutch-v24a and Tweeties/tweety-7b-armenian-v24a [enhancement] (opened May 9, 2024 by JohnClaw)
- #7168: Support for Consistency Large Language Models? [enhancement] (opened May 9, 2024 by unoexperto)
- #7167: How can I modify the settings to make it answer in Chinese by default? [enhancement] (opened May 9, 2024 by LiangZeFenglzf)
- #7165: Add metadata override and generate a dynamic default filename when converting GGUF [enhancement, help wanted, need feedback] (opened May 9, 2024 by mofosyne)
- #7164: Looking for help using llama.cpp with the Phi3 model and LoRA [bug-unconfirmed] (opened May 9, 2024 by SHIMURA0)
- #7159: Gibberish response from server and main exits on M1 Mac Studio Ultra with GPU (CPU OK) [bug-unconfirmed] (opened May 9, 2024 by jrozentur)
- #7148: Impact of bf16 on Llama 3 8B perplexity? [enhancement] (opened May 8, 2024 by jim-plus)
- #7147: error: implicit declaration of function ‘vld1q_s8_x4’; did you mean ‘vld1q_s8_x2’? [bug-unconfirmed] (opened May 8, 2024 by CaptainOfHacks)
- #7145: Make -DLLAMA_HIP_UMA a dynamic setting [enhancement] (opened May 8, 2024 by sebastian-philipp)
- #7141: [SYCL] Implement Flash Attention [enhancement] (opened May 8, 2024 by qnixsynapse)
- #7137: Is it extending the pre-trained model or fine-tuning the pre-trained model? (opened May 8, 2024 by eswarthammana)
- #7128: llama : make vocabs LFS objects? [enhancement, need feedback] (opened May 7, 2024 by ggerganov)
- #7126: Could we get Aryanne/Calypso-3B-alpha-v2-gguf added to the demo? [enhancement] (opened May 7, 2024 by Louis654)
- #7121: Minor improvement in CMake script for MSVC/clang-cl [enhancement] (opened May 7, 2024 by skoulik)
- #7118: llama : add DeepSeek-v2-Chat support [good first issue, model] (opened May 7, 2024 by DirtyKnightForVi)
- #7116: Add support for IBM Granite [enhancement] (opened May 7, 2024 by YorkieDev)