
Releases: ollama/ollama

v0.0.13

02 Aug 16:07

New improvements

  • Using the Ollama CLI when Ollama isn't running will now start Ollama automatically
  • Adjusted the buffer limit so that conversations continue until the response is complete
  • Models now stay loaded in memory automatically between messages, so a series of prompts is extra fast!
  • The white fluffy Ollama icon is back when using dark mode
  • Ollama will now run on Intel Macs. Compatibility & performance improvements to come
  • When running ollama run, the /show command can be used to inspect the current model (see the sketch after this list)
  • ollama run can now take in multi-line strings:
    % ollama run llama2
    >>> """       
      Is this a
      multi-line
      string?
    """
    Thank you for asking! Yes, the input you provided is a multi-line string. It contains multiple lines of text separated by line breaks.
    
  • More seamless updates: Ollama will now show a subtle hint that an update is ready in the tray menu, instead of a dialog window
  • ollama run --verbose will now show load duration times
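
A minimal sketch of the /show flow mentioned above; the model name is just an example, and what /show prints varies by version, so its output is elided here:

    % ollama run llama2
    >>> /show
    ... details about the current model are printed ...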

Bug fixes

  • Fixed crashes on Macs with 8GB of shared memory
  • Fixed issues in scanning multi-line strings in a Modelfile

v0.0.12

26 Jul 15:04

New improvements

  • You can now rename models you've pulled or created by copying them with ollama cp
  • Added support for running k-quant models
  • Performance improvements from enabling Accelerate
  • Ollama's API can now be accessed by websites hosted on localhost
  • ollama create will now automatically pull models referenced in the FROM instruction that you don't have locally (see the sketch after this list)
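
A minimal sketch of the FROM behaviour described above; the model name mymodel and the Modelfile contents are placeholders. If llama2 hasn't been pulled yet, ollama create fetches it before building the custom model:

    % cat Modelfile
    FROM llama2
    % ollama create mymodel -f Modelfile
    % ollama run mymodel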

Bug fixes

  • ollama pull will now show a better error when pulling a model that doesn't exist
  • Fixed an issue where cancelling and resuming downloads with ollama pull would cause an error
  • Fixed the formatting of various errors so they are readable when running ollama commands
  • Fixed an issue where prompt templates defined with the TEMPLATE instruction wouldn't be parsed correctly (see the sketch after this list)
  • Fixed the error shown when a model isn't found
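
For context, a prompt template is declared in a Modelfile with the TEMPLATE instruction, roughly as in the sketch below; the exact template variables shown are illustrative assumptions rather than text from these notes:

    % cat Modelfile
    FROM llama2
    TEMPLATE """
    {{ .Prompt }}
    """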

v0.0.11

21 Jul 20:59
  • ollama list: stay organized by seeing which models you have and their sizes

    % ollama list
    NAME             SIZE      MODIFIED
    llama2:13b       7.3 GB    28 hours ago
    llama2:latest    3.8 GB    4 hours ago
    orca:latest      1.9 GB    35 minutes ago
    vicuna:latest    3.8 GB    35 minutes ago
    
  • ollama rm: have a model you don't want anymore? Delete it with ollama rm (see the sketch below)

  • ollama pull will now check the integrity of the model you've downloaded against its checksum

  • Errors will now print correctly, instead of showing another error

  • Performance updates: run models faster!
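
A quick sketch of removing a model you no longer want, using one of the names from the listing above:

    % ollama rm orca:latest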

v0.0.10

20 Jul 09:15
Fix a broken link in `README.md`

v0.0.9

20 Jul 08:13
Ctrl+C on an empty line now exits (#135)

v0.0.8

19 Jul 08:01
  • Fixed an issue where the ollama command line tool wouldn't install correctly

v0.0.7

19 Jul 03:32
  • Performance improvements with ollama create: it now uses less memory and will create custom models in less time
  • Fixed an issue where running ollama create name -f required an absolute file path to the model file; relative paths are now supported (see the sketch after this list)
  • Fixed an issue where running ollama pull for a model that is already downloaded would show 0B
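
A sketch of the relative-path form mentioned above; the model name and path are placeholders:

    % ollama create mymodel -f ./Modelfile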

v0.0.6

18 Jul 19:45

Early preview release

v0.0.5

13 Jul 17:45
  • Show performance details, such as tokens per second, with ollama run --verbose (see the sketch below)
  • Fixed a bug where ollama run would show an error when a model was already pulled
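
A sketch of the verbose flag mentioned above; timing statistics print after each response, with the exact fields depending on the version:

    % ollama run --verbose llama2
    >>> Why is the sky blue?
    ... model response ...
    ... timing statistics, including tokens per second ...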

v0.0.4

13 Jul 02:01
  • Minor bug fixes