v0.1.29
AMD Preview
Ollama now supports AMD graphics cards in preview on Windows and Linux. All of Ollama's features are accelerated by AMD graphics cards, and support is included by default in Ollama for Linux, Windows, and Docker.
Supported cards and accelerators
| Family | Supported cards and accelerators |
| --- | --- |
| AMD Radeon RX | 7900 XTX, 7900 XT, 7900 GRE, 7800 XT, 7700 XT, 7600 XT, 7600, 6950 XT, 6900 XTX, 6900 XT, 6800 XT, 6800, Vega 64, Vega 56 |
| AMD Radeon PRO | W7900, W7800, W7700, W7600, W7500, W6900X, W6800X Duo, W6800X, W6800, V620, V420, V340, V320, Vega II Duo, Vega II, VII, SSG |
| AMD Instinct | MI300X, MI300A, MI300, MI250X, MI250, MI210, MI200, MI100, MI60, MI50 |
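For Docker, AMD acceleration ships in a separate ROCm image. A minimal sketch of starting it, following Ollama's documented Docker setup (the `ollama/ollama:rocm` tag and device flags come from those docs, not from these release notes):

```shell
# Run Ollama with AMD GPU acceleration (ROCm) in Docker.
# /dev/kfd and /dev/dri expose the AMD GPU to the container;
# the named volume persists downloaded models across restarts.
docker run -d \
  --device /dev/kfd \
  --device /dev/dri \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama:rocm
```

The server then listens on `localhost:11434`, the same endpoint the CPU and NVIDIA images use.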
What's Changed
- `ollama <command> -h` will now show documentation for supported environment variables
- Fixed issue where generating embeddings with `nomic-embed-text`, `all-minilm`, or other embedding models would hang on Linux
- Experimental support for importing Safetensors models using the `FROM <directory with safetensors model>` command in the Modelfile
- Fixed issues where Ollama would hang when using JSON mode
- Fixed issue where `ollama run` would error when piping output to `tee` and other tools
- Fixed an issue where memory would not be released when running vision models
- Ollama will no longer show an error message when piping to stdin on Windows
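The experimental Safetensors import above works by pointing a Modelfile's `FROM` at a local directory. A minimal sketch, where the path is a placeholder for your own model checkout (not a path from these release notes):

```
# Modelfile: import a local Safetensors model (experimental)
# Replace the placeholder path with your own directory of safetensors weights.
FROM /path/to/safetensors/model/directory
```

The model can then be built with `ollama create <model-name> -f Modelfile` and run like any other local model.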
New Contributors
- @tgraupmann made their first contribution in #2582
- @andersrex made their first contribution in #2909
- @leonid20000 made their first contribution in #2440
- @hishope made their first contribution in #2973
- @mrdjohnson made their first contribution in #2759
- @mofanke made their first contribution in #3077
- @racerole made their first contribution in #3073
- @Chris-AS1 made their first contribution in #3094
Full Changelog: v0.1.28...v0.1.29