v0.1.28
New models
- StarCoder2: the next generation of transparently trained open code LLMs, available in three sizes: 3B, 7B, and 15B parameters.
- DolphinCoder: a chat model based on StarCoder2 15B that excels at writing code.
What's Changed
- Vision models such as `llava` should now respond better to text prompts
- Improved support for `llava` 1.6 models
- Fixed issue where switching between models repeatedly would cause Ollama to hang
- Installing Ollama on Windows no longer requires a minimum of 4GB disk space
- Ollama on macOS will now more reliably determine available VRAM
- Fixed issue where running Ollama in `podman` would not detect Nvidia GPUs
- Ollama will correctly return an empty embedding when calling `/api/embeddings` with an empty `prompt` instead of hanging
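The embeddings fix above concerns the `/api/embeddings` REST endpoint. A minimal sketch of calling it, assuming Ollama is listening on its default local port 11434 and that a model has already been pulled (the model name `starcoder2` here is illustrative):

```python
import json
import urllib.request

# Build the request body; an empty prompt now yields an empty embedding
# instead of the request hanging (per the fix above).
payload = json.dumps({"model": "starcoder2", "prompt": ""}).encode()

def get_embedding(body: bytes, url: str = "http://localhost:11434/api/embeddings"):
    """POST a JSON body to the embeddings endpoint and return the vector."""
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read()).get("embedding", [])
```

This is an untested sketch against a locally running server, not an official client; the response field `embedding` matches the endpoint's JSON response shape.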
New Contributors
- @Bin-Huang made their first contribution in #1706
- @elthommy made their first contribution in #2737
- @peanut256 made their first contribution in #2354
- @tylinux made their first contribution in #2827
- @fred-bf made their first contribution in #2780
- @bmwiedemann made their first contribution in #2836
Full Changelog: v0.1.27...v0.1.28