Activity · ggerganov/llama.cpp

Repository: ggerganov/llama.cpp (public, default branch master, created 2023-03-10). Most recent 30 events, newest first; all times UTC.

- 2024-05-09 04:09 - compilade pushed 1 commit to compilade/lazy-bfloat16-convert-hf: "convert-hf : add missing space after comma"
- 2024-05-09 03:26 - compilade created branch compilade/lazy-bfloat16-convert-hf ("gguf-py : flake8 fixes")
- 2024-05-08 23:55 - cebtenzzre (Jared Van Bortel) merged into master: "cmake : fix typo (#7151)"
- 2024-05-08 22:16 - compilade merged into master: "convert-hf : save memory with lazy evaluation (#7075)". From the commit body:
  - begin refactoring write_tensor; upgrade to sentencepiece v0.2.0; remove unused n_dims in extra_*_tensors
  - simplify MoE weights stacking; fix stacking MoE expert tensors (torch.stack and torch.cat don't do the same thing)
  - allow unusual model part names (e.g. loading model-00001-of-00001.safetensors now works)
  - fix Mamba conversion (tested to work even with a SentencePiece-based tokenizer); use a string for the SentencePiece tokenizer path
  - display tensor shape; convert norms to f32 by default; fix Refact conversion
  - sort model part names: os.listdir is said to list files in arbitrary order, and sorting lets "model-00009-of-00042.safetensors" be loaded before "model-00010-of-00042.safetensors"
  - use a plain class for Model instead of an ABC (Protocol can't serve as a statically type-checked ABC because its subclasses can't be instantiated, and no abstract methods are used anyway); forbid direct instantiation, and still raise an error when a registered Model subclass forgets to define the model_arch property
  - save memory with lazy evaluation; faster model parts loading: instead of pre-loading all parts into a dict, iterate on the tensors progressively as needed in Model.write_tensors; since conversion for some architectures relies on checking for specific tensor names, for multi-part models the weight map is read from the relevant JSON file to get those names up-front
  - remove the einops requirement for InternLM2; more consistent cmdline-arg formatting and aligned tensor-log messages
  - gguf-py : add tqdm as a dependency (small, used for a progress bar in GGUFWriter.write_tensors_to_file)
- 2024-05-08 20:55 - slaren merged into master: "Introduction of CUDA Graphs to LLama.cpp (#6766)". From the commit body:
  - CUDA graphs are disabled for multi-batch (batch size > 1) use, for old GPU architectures, and via an environment variable (renamed to GGML_CUDA_DISABLE_GRAPHS); the build option was renamed from GGML_ALLOW_CUDA_GRAPHS to GGML_CUDA_USE_GRAPHS and enabled in the Makefile build
  - tidied to use only the CUDA runtime (not mixed with driver calls); added missing CUDA_CHECKs; more comprehensive graph-node checking
  - falls back to the regular path if graph capture fails; graph-capture failure checking was removed from ggml_cuda_error (a global variable is not thread safe, and checking an error by string is unsatisfying; if needed, the ggml_backend_cuda_context could be passed to the error-checking macro and the result stored there)
  - instead of counting evaluations before starting capture, the capture mode was changed to relaxed; instead of disabling for any multi-device setup, split buffers are checked so CUDA graphs are disabled with -sm row while single-device use still works
  - the batch-size check now uses GGML_OP_ADD, which should be more reliable than GGML_OP_SOFT_MAX
  - fixed several resource leaks and an issue with zero-node graphs; changed fixed-size arrays to vectors; replaced the minimum compute-capability value with a constant
  - open items: VRAM usage of the cudaGraphExec_t (may need to be optional if significant); possibly use cudaStreamBeginCaptureToGraph to track which ggml graph nodes correspond to which CUDA graph nodes
- 2024-05-08 20:24 - cebtenzzre created branch ceb/fix-cmake-typo ("cmake : fix typo")
- 2024-05-08 19:53 - ngxson (Xuan Son Nguyen) merged into master: "JSON: [key] -> .at(key), assert() -> GGML_ASSERT (#7143)"
- 2024-05-08 19:36 - compilade force-pushed compilade/lazy-convert-hf ("Merge branch 'master' into compilade/lazy-convert-hf")
- 2024-05-08 19:15 - ggerganov pushed to master: Revert "llava : add support for moondream vision language model (#6899)" (reverts commit 46e12c4692a37bdd31a0432fc5153d7d22bc7f72)
- 2024-05-08 19:12 - ggerganov merged into master: "server : add themes + favicon (#6848)" (adds themes support with two sample themes and a favicon; the per-theme copies of completion.js, index.js and json-schema-to-grammar.mjs were deleted because they are served from the static string built into the server; opacity was increased for contrast)
- 2024-05-08 19:08 - ggerganov merged into master: "metal : use vm_allocate instead of posix_memalign on macOS (#7078)" (avoids crashing Electron processes; also avoids calling newBufferWithBytesNoCopy with NULL when ggml_metal_host_malloc returns NULL; vm_allocate is used only on macOS)
- 2024-05-08 18:50 - compilade pushed 1 commit to compilade/lazy-convert-hf: "gguf-py : add GGMLFileType" (convert-hf now uses GGMLFileType)
- 2024-05-08 18:32 - compilade force-pushed compilade/lazy-convert-hf ("Merge branch 'master' into compilade/lazy-convert-hf")
- 2024-05-08 16:12 - compilade pushed 24 commits to compilade/lazy-convert-hf: "convert-hf : support bfloat16 conversion"
- 2024-05-08 14:32 - ggerganov merged into master: "main : add --conversation / -cnv flag (#7108)"
- 2024-05-08 14:29 - ggerganov merged into master: "sgemm : AVX Q4_0 and Q8_0 (#6891)" (basic AVX implementation; combine denibble with load; reduce 256-bit to 128-bit conversions and back; SSE load)
- 2024-05-08 12:27 - ggerganov merged into master: "server : add_special option for tokenize endpoint (#7059)"
- 2024-05-08 12:22 - ggerganov merged into master: "convert.py : --vocab-only generates false but valid params (#7027)" (an example of use in the style of baby-llama is attached to the PR)
- 2024-05-08 12:06 - ggerganov merged into master: "llama : add BPE pre-tokenization for Qwen2 (#7114)" (co-authored by Ren Xuancheng and Georgi Gerganov)
- 2024-05-08 11:24 - ngxson merged into master: "clean up json_value & server_log (#7142)"
- 2024-05-08 10:43 - ggerganov merged into master: "convert : add BPE pre-tokenization for DBRX (#7132)" (vocab GGUFs and the test were added, then removed again before merge)
- 2024-05-08 09:47 - ggerganov pushed to master: "py : also print the normalizers"
- 2024-05-08 08:54 - JohannesGaessler (Johannes Gäßler) merged into master: "compare-llama-bench.py: add missing basicConfig (#7138)" (also adds a line break between the error message and print_help(), and a regular print() markdown table)
- 2024-05-08 07:53 - ggerganov pushed to gg/lfs: "ci : enable git lfs for build.yml"
- 2024-05-08 07:42 - ggerganov pushed 2 commits to gg/lfs: Revert "tmp : dummy change to trigger ci" (reverts commit 97e40df5d63d55ba012a41e60be82657036d36af)
- 2024-05-08 07:30 - ggerganov pushed to gg/lfs: "ci : try lfs true"
- 2024-05-08 07:25 - ggerganov pushed to gg/lfs: "ci : deps before checkout"
- 2024-05-08 07:18 - ggerganov force-pushed gg/lfs ("ci : add git-lfs")
- 2024-05-08 07:15 - ggerganov pushed to gg/lfs: "ci : add git-lfs"
- 2024-05-08 06:55 - ggerganov created branch gg/lfs ("models : convert vocab files to LFS")

Older activity is available on further pages.
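The lazy-evaluation PR (#7075) notes that torch.stack and torch.cat don't do the same thing, which mattered when stacking MoE expert tensors. A minimal sketch of that distinction, using NumPy's analogous np.stack / np.concatenate as a stand-in for the PyTorch calls (the shapes here are illustrative, not the actual expert tensor shapes):

```python
import numpy as np

# Two "expert" weight matrices of identical shape (n_out, n_in).
expert_a = np.ones((4, 8))
expert_b = np.zeros((4, 8))

# stack inserts a NEW leading axis: result shape (n_expert, n_out, n_in).
stacked = np.stack([expert_a, expert_b], axis=0)
print(stacked.shape)        # (2, 4, 8)

# concatenate joins along an EXISTING axis: result shape (2 * n_out, n_in).
concatenated = np.concatenate([expert_a, expert_b], axis=0)
print(concatenated.shape)   # (8, 8)
```

Stacking is what MoE conversion needs: each expert stays a separate slice along a new experts dimension, rather than being glued end-to-end.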
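The same PR sorts model part names because os.listdir returns files in arbitrary order; since Hugging Face shard names are zero-padded, a plain lexicographic sort already yields numeric order. A small illustration (the shard naming scheme is real; the unsorted input order is made up):

```python
# os.listdir may return shard files in any order; zero-padded names
# sort into the correct numeric order with a plain lexicographic sort.
part_names = [
    "model-00010-of-00042.safetensors",
    "model-00009-of-00042.safetensors",
    "model-00001-of-00042.safetensors",
]
part_names = sorted(part_names)
print(part_names[0])   # model-00001-of-00042.safetensors
```

Without the zero padding, "model-10-..." would sort before "model-9-..." and a natural-sort key would be needed instead.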
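The memory saving in #7075 comes from iterating over tensors in the model parts progressively instead of pre-loading them all into one dict. A hedged sketch of the idea using a generator (the function and variable names here are hypothetical, not the actual convert-hf code, and plain ints stand in for tensors):

```python
from typing import Any, Iterator, Tuple

def iter_tensors(parts: list) -> Iterator[Tuple[str, Any]]:
    """Yield (name, tensor) pairs one part at a time, so at most one
    model part's tensors needs to be materialized at once."""
    for part in parts:              # each part is a dict in this sketch
        for name, tensor in part.items():
            yield name, tensor

parts = [{"a": 1, "b": 2}, {"c": 3}]

# Eager approach: the whole mapping lives in memory at once.
eager = {name: t for part in parts for name, t in part.items()}

# Lazy approach: tensors are produced and consumed one at a time.
lazy_names = [name for name, _ in iter_tensors(parts)]
print(lazy_names)   # ['a', 'b', 'c']
```

Per the commit body, architectures that need to check for specific tensor names before iterating get them cheaply from the multi-part weight-map JSON instead of loading any tensor data.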
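One refactor in #7075 replaced the ABC base with a plain Model class that forbids direct instantiation and still errors when a registered subclass forgets to define model_arch. A possible shape of that pattern (a sketch only; the real convert-hf code may differ):

```python
class Model:
    model_arch: str  # subclasses are expected to set this

    def __init__(self) -> None:
        # Forbid direct instantiation of the base class.
        if type(self) is Model:
            raise TypeError("Model is a base class and cannot be instantiated directly")
        # Raise if a subclass forgot to define model_arch.
        if not hasattr(self, "model_arch"):
            raise NotImplementedError(f"{type(self).__name__} must define model_arch")

class LlamaModel(Model):
    model_arch = "llama"

m = LlamaModel()
print(m.model_arch)   # llama
```

This gives the two guarantees mentioned in the commit body without ABCMeta: no abstract methods are declared, yet misuse fails loudly at construction time.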
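The compilade/lazy-convert-hf branch adds bfloat16 conversion support. For context, bf16 is just the upper 16 bits of an IEEE float32, so the simplest conversion is a bit shift; this sketch shows that truncation scheme only (round-toward-zero), which is not necessarily what the converter itself does (real converters usually round to nearest even):

```python
import numpy as np

def f32_to_bf16_truncate(x: np.ndarray) -> np.ndarray:
    """Keep the top 16 bits of each float32 (sign, exponent, 7 mantissa bits)."""
    bits = x.astype(np.float32).view(np.uint32)
    return (bits >> 16).astype(np.uint16)

def bf16_to_f32(b: np.ndarray) -> np.ndarray:
    """Widen bf16 bit patterns back to float32 by zero-filling the low 16 bits."""
    return (b.astype(np.uint32) << 16).view(np.float32)

x = np.array([1.0, -2.5, 3.14159], dtype=np.float32)
roundtrip = bf16_to_f32(f32_to_bf16_truncate(x))
print(roundtrip[:2])   # [ 1.  -2.5]  (both exactly representable in bf16)
```

Values whose mantissa fits in 7 bits survive the round trip exactly; others (like 3.14159) lose low-order mantissa bits.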