Releases · ollama/ollama
v0.0.13
New improvements
- Using Ollama CLI without Ollama running will now start Ollama
- Changed the buffer limit so that conversations would continue until it is complete
- Models now stay loaded in memory automatically between messages, so series of prompts are extra fast!
- The white fluffy Ollama icon is back when using dark mode
- Ollama will now run on Intel Macs. Compatibility & performance improvements to come
- When running `ollama run`, the `/show` command can be used to inspect the current model
- `ollama run` can now take in multi-line strings:

  ```
  % ollama run llama2
  >>> """
  Is this a multi-line string?
  """

  Thank you for asking! Yes, the input you provided is a multi-line string. It contains multiple lines of text separated by line breaks.
  ```

- More seamless updates: Ollama will now show a subtle hint that an update is ready in the tray menu, instead of a dialog window
- `ollama run --verbose` will now show load duration times
Bug fixes
- Fixed crashes on Macs with 8GB of shared memory
- Fixed issues in scanning multi-line strings in a `Modelfile`
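As an illustration of the kind of multi-line string this fix covers, a `Modelfile` can contain a triple-quoted block spanning several lines. This is a minimal sketch; the base model and template contents are assumptions for the example, not taken from the release:

```
FROM llama2
# A multi-line string delimited by triple quotes
TEMPLATE """
{{ .System }}
User: {{ .Prompt }}
"""
```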
v0.0.12
New improvements
- You can now rename models you've pulled or created with `ollama cp`
- Added support for running k-quant models
- Performance improvements from enabling Accelerate
- Ollama's API can now be accessed by websites hosted on `localhost`
- `ollama create` will now automatically pull models in the `FROM` instruction you don't have locally
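For example, building from a `Modelfile` whose `FROM` line references a model that hasn't been pulled yet no longer requires a manual `ollama pull` first. A minimal sketch (the model name and system prompt here are illustrative assumptions):

```
# ollama create mymodel -f Modelfile
# pulls llama2 automatically if it isn't available locally
FROM llama2
SYSTEM You are a concise assistant.
```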
Bug fixes
- `ollama pull` will now show a better error when pulling a model that doesn't exist
- Fixed an issue where cancelling and resuming downloads with `ollama pull` would cause an error
- Fixed formatting of different errors so they are readable when running `ollama` commands
- Fixed an issue where prompt templates defined with the `TEMPLATE` instruction wouldn't be parsed correctly
- Fixed error when a model isn't found
v0.0.11
- `ollama list`: stay organized — see which models you have and their size:

  ```
  % ollama list
  NAME            SIZE    MODIFIED
  llama2:13b      7.3 GB  28 hours ago
  llama2:latest   3.8 GB  4 hours ago
  orca:latest     1.9 GB  35 minutes ago
  vicuna:latest   3.8 GB  35 minutes ago
  ```

- `ollama rm`: have a model you don't want anymore? Delete it with `ollama rm`
- `ollama pull` will now check the integrity of the model you've downloaded against its checksum
- Errors will now correctly print, instead of showing another error
- Performance updates: run models faster!
v0.0.10
v0.0.9
v0.0.8
v0.0.7
- Performance improvements with `ollama create`: it now uses less memory and will create custom models in less time
- Fixed an issue where running `ollama create name -f` required an absolute file path to the model file; relative paths are now supported
- Fixed an issue where running `ollama pull` for a model that is already downloaded would show `0B`