Releases: ggerganov/llama.cpp

b3091

05 Jun 09:25
2b33896
ggml : refactor rope norm/neox (#7634)

* ggml : unify rope norm/neox (CPU)

* ggml : fix compile warning

* ggml : remove GLM rope mode

ggml-ci

* metal : better rope implementation

ggml-ci

* cuda : better rope implementation

ggml-ci

* naming : n_orig_ctx -> n_ctx_orig

ggml-ci

* dev : add reminders to update backends

ggml-ci

* vulkan : fix ggml_rope_ext() usage

* cuda : fix array size + indents

ggml-ci
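
For reference, the "norm" and "neox" modes unified here differ only in which element pairs get rotated: norm rotates adjacent pairs (2i, 2i+1), while the GPT-NeoX style pairs element i with element i + n/2. A minimal CPU sketch of the idea (illustrative only, not ggml's actual code):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Sketch of the two RoPE pairing modes that #7634 unifies across backends.
// "norm" rotates adjacent pairs (2i, 2i+1); "neox" pairs element i with
// element i + n/2. theta_i = pos * base^(-2i/n) in both modes.
enum class rope_mode { norm, neox };

static void rope_apply(std::vector<float> & x, int pos, float base, rope_mode mode) {
    const int n = (int) x.size(); // head dimension, assumed even
    for (int i = 0; i < n / 2; ++i) {
        const float theta = pos * std::pow(base, -2.0f * i / n);
        const float c = std::cos(theta), s = std::sin(theta);
        // pick the index pair according to the mode
        const int i0 = mode == rope_mode::norm ? 2 * i : i;
        const int i1 = mode == rope_mode::norm ? 2 * i + 1 : i + n / 2;
        const float x0 = x[i0], x1 = x[i1];
        x[i0] = x0 * c - x1 * s;
        x[i1] = x0 * s + x1 * c;
    }
}
```

Since both modes reduce to the same 2D rotation over different index pairs, one kernel with a mode switch can serve both, which is what makes the CPU/Metal/CUDA unification possible.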

b3089

05 Jun 00:21
c90dbe0
Fix per-token attributes bits (#7749)

b3088

04 Jun 21:55
b90dc56
Allow number of nodes in CUDA graph to change (#7738)

Previously the code would fail when the number of nodes in an existing
CUDA graph changed. This fixes the issue by removing an unnecessary
conditional.
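
The before/after decision can be modeled abstractly (hypothetical names, plain C++ standing in for the CUDA graph calls): the old path rejected reuse whenever the node count changed, while the fixed path attempts the in-place exec update and re-instantiates only when that update actually fails.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical model of the logic #7738 changes. Not the actual ggml-cuda
// code: plain booleans stand in for cudaGraphExecUpdate and re-instantiation.

// Old behavior: any node-count change forced a full graph rebuild.
static bool can_reuse_old(std::size_t prev_nodes, std::size_t cur_nodes) {
    return prev_nodes == cur_nodes; // the unnecessary conditional
}

// New behavior: try the exec update regardless of node count, and rebuild
// only when the update itself reports failure.
static bool can_reuse_new(std::size_t prev_nodes, std::size_t cur_nodes,
                          bool update_succeeded) {
    (void) prev_nodes;
    (void) cur_nodes;
    return update_succeeded;
}
```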

b3087

04 Jun 21:21
1442677
common : refactor cli arg parsing (#7675)

* common : gpt_params_parse do not print usage

* common : rework usage print (wip)

* common : valign

* common : rework print_usage

* infill : remove cfg support

* common : reorder args

* server : deduplicate parameters

ggml-ci

* common : add missing header

ggml-ci

* common : remove --random-prompt usages

ggml-ci

* examples : migrate to gpt_params

ggml-ci

* batched-bench : migrate to gpt_params

* retrieval : migrate to gpt_params

* common : change defaults for escape and n_ctx

* common : remove chatml and instruct params

ggml-ci

* common : passkey use gpt_params
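
The end state of the refactor is that every example consumes one shared parameter struct with shared defaults and one shared parser. A self-contained sketch of that pattern (struct fields, flags, and defaults here are illustrative, not llama.cpp's actual gpt_params API):

```cpp
#include <cstdlib>
#include <string>
#include <vector>

// Illustrative sketch of centralized CLI parsing in the spirit of #7675:
// one struct holds the defaults, one parser is reused by every tool.
// Names and defaults are hypothetical, not llama.cpp's actual API.
struct params_sketch {
    int         n_ctx  = 0;    // hypothetical default: take size from model
    bool        escape = true; // hypothetical default: escapes on
    std::string model  = "model.gguf";
};

static bool parse_args_sketch(const std::vector<std::string> & args, params_sketch & p) {
    for (std::size_t i = 0; i < args.size(); ++i) {
        if (args[i] == "-c" || args[i] == "--ctx-size") {
            if (++i >= args.size()) return false; // missing value
            p.n_ctx = std::atoi(args[i].c_str());
        } else if (args[i] == "--no-escape") {
            p.escape = false;
        } else if (args[i] == "-m" || args[i] == "--model") {
            if (++i >= args.size()) return false;
            p.model = args[i];
        } else {
            return false; // unknown argument: caller prints usage
        }
    }
    return true;
}
```

Deduplicating the parser this way is what lets the server and the examples stop carrying their own divergent copies of the same options.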

b3086

04 Jun 21:13
554c247
ggml : remove OpenCL (#7735)

ggml-ci

b3085

04 Jun 20:53
0cd6bd3
llama : remove beam search (#7736)

b3083

04 Jun 17:16
adc9ff3
llama-bench : allow using a different printer for stderr with -oe (#7722)

compare-commits.sh : hide stdout, use -oe to print markdown

b3082

04 Jun 16:47
987d743
Improve hipBLAS support in CMake (#7696)

* Improve hipBLAS support in CMake

This improves the detection of the correct CMAKE_PREFIX_PATH when using different distributions or a self-built ROCm SDK.

* Set ROCM_PATH correctly

b3080

04 Jun 11:46
3b38d48
Per token attributes (#7685)

* Add per token attributes enum
* Using phi-3 for testing 'rstrip'
* Using jina-v2 for testing 'lstrip'
* Brute force test for 'lstrip' and 'rstrip'
* Implement 'rstrip' and 'lstrip'
* Update phi-3 GGUF file (obsolete since 917dc8c)
* Replace llama_token_type with llama_token_attribs
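
The lstrip/rstrip attributes mark special tokens that consume adjacent whitespace, in the style of SentencePiece and Hugging Face tokenizer flags. A hypothetical sketch of the bitflag idea (names are illustrative, not llama.cpp's actual enum):

```cpp
#include <cstdint>
#include <string>

// Hypothetical sketch of per-token attribute bits in the spirit of #7685.
// An lstrip token consumes whitespace that precedes it; an rstrip token
// consumes whitespace that follows it.
enum token_attr : std::uint32_t {
    TOKEN_ATTR_NONE   = 0,
    TOKEN_ATTR_LSTRIP = 1u << 0,
    TOKEN_ATTR_RSTRIP = 1u << 1,
};

// Strip whitespace from raw text adjacent to a special token, according
// to that token's attribute bits. `text_is_before_token` selects which
// side of the token the text sits on.
static std::string apply_attrs(std::string text, std::uint32_t attrs,
                               bool text_is_before_token) {
    if (text_is_before_token && (attrs & TOKEN_ATTR_LSTRIP)) {
        while (!text.empty() && text.back() == ' ') text.pop_back();
    }
    if (!text_is_before_token && (attrs & TOKEN_ATTR_RSTRIP)) {
        while (!text.empty() && text.front() == ' ') text.erase(0, 1);
    }
    return text;
}
```

Storing these as independent bits (rather than a single token-type value) is what allows the brute-force test above to exercise lstrip and rstrip in every combination.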

b3079

04 Jun 11:50
6d16169
ggml : prevent builds with -ffinite-math-only (#7726)

This enforces a check that -fno-finite-math-only is set and that the compiler
is not operating in finite-math mode. During the rewrite of silu and softmax
for CPU in #7154, @JohannesGaessler found that results were nondeterministic
when more than one slot was used. @LostRuins narrowed the problem down to
-ffinite-math-only, theorized to cause SiLU to return NaN or other garbage
instead of flushing small values to 0. @jart proposed a fix that @ggerganov
then implemented here.

ref https://github.com/ggerganov/llama.cpp/pull/7154#issuecomment-2145661825
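
GCC and Clang define `__FINITE_MATH_ONLY__` to 1 when -ffinite-math-only (or -ffast-math, which implies it) is in effect, so a guard in this spirit can reject such builds at compile time. The sketch below (illustrative, not ggml's exact code) pairs the guard with the SiLU definition whose infinity handling the flag breaks:

```cpp
#include <cmath>

// GCC and Clang set __FINITE_MATH_ONLY__ to 1 under -ffinite-math-only.
// A guard in this spirit fails the build before the bug can manifest:
#if defined(__FINITE_MATH_ONLY__) && __FINITE_MATH_ONLY__ == 1
#error "compile with -fno-finite-math-only: SiLU/softmax rely on IEEE inf semantics"
#endif

// SiLU(x) = x / (1 + e^(-x)). For very negative x, e^(-x) overflows to
// +inf and the quotient correctly flushes to 0 - exactly the infinity
// arithmetic that finite-math mode lets the compiler assume never occurs.
static float silu(float x) {
    return x / (1.0f + std::exp(-x));
}
```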