
v0.1.29

@jmorganca released this 10 Mar 02:24
· 612 commits to main since this release
e87c780

AMD Preview

Ollama now supports AMD graphics cards in preview on Windows and Linux. All of Ollama's features can now be accelerated by AMD graphics cards, and this support is included by default in Ollama for Linux, Windows, and the official Docker image.
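For example, on a Linux host with AMD drivers installed, the ROCm-enabled image can be started by passing the AMD device nodes through to the container. This is a minimal sketch: the ollama/ollama:rocm tag and the /dev/kfd and /dev/dri device paths reflect a standard ROCm setup and may differ on your system.

```
docker run -d --device /dev/kfd --device /dev/dri \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama:rocm
```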

Supported cards and accelerators

| Family | Supported cards and accelerators |
| ------ | -------------------------------- |
| AMD Radeon RX | 7900 XTX, 7900 XT, 7900 GRE, 7800 XT, 7700 XT, 7600 XT, 7600, 6950 XT, 6900 XTX, 6900XT, 6800 XT, 6800, Vega 64, Vega 56 |
| AMD Radeon PRO | W7900, W7800, W7700, W7600, W7500, W6900X, W6800X Duo, W6800X, W6800, V620, V420, V340, V320, Vega II Duo, Vega II, VII, SSG |
| AMD Instinct | MI300X, MI300A, MI300, MI250X, MI250, MI210, MI200, MI100, MI60, MI50 |

What's Changed

  • ollama <command> -h will now show documentation for supported environment variables
  • Fixed an issue where generating embeddings with nomic-embed-text, all-minilm, or other embedding models would hang on Linux
  • Experimental support for importing Safetensors models using the FROM <directory with safetensors model> command in the Modelfile (see the import sketch after this list)
  • Fixed issues where Ollama would hang when using JSON mode (an example request is shown after this list)
  • Fixed issue where ollama run would error when piping output to tee and other tools
  • Fixed an issue where memory would not be released when running vision models
  • Ollama will no longer show an error message when piping to stdin on Windows
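
To illustrate the experimental Safetensors import, a Modelfile can point FROM at a local directory containing the weights and then be built with ollama create. This is a minimal sketch: the directory path and the my-imported-model name are placeholders, and the set of architectures accepted may be limited while the feature is experimental.

```
# Modelfile — FROM points at a local directory of safetensors weights (placeholder path)
FROM /path/to/safetensors/model
```

```
ollama create my-imported-model -f Modelfile
ollama run my-imported-model
```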
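For the JSON mode fix, a typical request sets "format": "json" on the generate API. This is a sketch under assumptions: the server is listening on the default port 11434 and a model named llama2 has already been pulled.

```
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "List three primary colors as a JSON array.",
  "format": "json",
  "stream": false
}'
```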

New Contributors

Full Changelog: v0.1.28...v0.1.29