Releases: mudler/LocalAI

v2.16.0

24 May 17:35

Welcome to LocalAI's latest update!

🎉🎉🎉 woot woot! So excited to share this release, a lot of new features are landing in LocalAI!!!!! 🎉🎉🎉

🌟 Introducing Distributed Llama.cpp Inferencing

Now it is possible to distribute the inference workload across different workers with llama.cpp models!

This feature has landed with #2324 and is based on the upstream work of @rgerganov in ggerganov/llama.cpp#6829.

How it works: a front-end server (LocalAI) handles OpenAI-compatible API requests, while llama.cpp workers share the computation. This makes it possible to run larger models split across different nodes!

How to use it

To start a worker that offloads computation, run:

local-ai llamacpp-worker <listening_address> <listening_port>

Alternatively, you can follow the llama.cpp README and build the rpc-server (https://github.com/ggerganov/llama.cpp/blob/master/examples/rpc/README.md), which is also compatible with LocalAI.

When starting the LocalAI server, which accepts the API requests, you can set the list of worker addresses with the LLAMACPP_GRPC_SERVERS environment variable:

LLAMACPP_GRPC_SERVERS="address1:port,address2:port" local-ai run

At this point, requests hitting the LocalAI server are distributed across the nodes!
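
For example, assuming the workers are registered and a model is already installed (the model name below is a placeholder), clients keep sending standard OpenAI-style requests to the front-end - the split across workers is transparent:

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "<your-model>", "messages": [{"role": "user", "content": "Hello!"}]}'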

🤖 Peer2Peer llama.cpp

LocalAI is the first free, open source AI project offering complete, decentralized, peer-to-peer, and private LLM inferencing on top of the libp2p protocol. There is no "public swarm" to offload the computation to; rather, it empowers you to build your own cluster of local and remote machines to distribute LLM computation.

This feature leverages llama.cpp's ability to distribute the workload, explained just above, together with features from one of my other projects, https://github.com/mudler/edgevpn.

LocalAI builds on top of the two and lets you create a private peer-to-peer network between nodes, with no centralized connections and no manual IP configuration: it unlocks totally decentralized, private, peer-to-peer inferencing capabilities. It also works across NAT-ed networks (using DHT and mDNS as discovery mechanisms).

How it works: A pre-shared token can be generated and shared between workers and the server to form a private, decentralized, p2p network.


How to use it

  1. Start the server with --p2p:
./local-ai run --p2p
# 1:02AM INF loading environment variables from file envFile=.env
# 1:02AM INF Setting logging to info
# 1:02AM INF P2P mode enabled
# 1:02AM INF No token provided, generating one
# 1:02AM INF Generated Token:
# XXXXXXXXXXX
# 1:02AM INF Press a button to proceed

A token is displayed; copy it and press Enter.

You can re-use the same token later by restarting the server with --p2ptoken (or the P2P_TOKEN environment variable).
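
For instance (token value illustrative):

P2P_TOKEN="XXXXXXXXXXX" ./local-ai run --p2p
# or, equivalently:
./local-ai run --p2p --p2ptoken "XXXXXXXXXXX"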

  2. Start the workers. Now you can copy the local-ai binary to other hosts and run as many workers as you like with that token:
TOKEN=XXX ./local-ai p2p-llama-cpp-rpc
# 1:06AM INF loading environment variables from file envFile=.env
# 1:06AM INF Setting logging to info
# {"level":"INFO","time":"2024-05-19T01:06:01.794+0200","caller":"config/config.go:288","message":"connmanager disabled\n"}
# {"level":"INFO","time":"2024-05-19T01:06:01.794+0200","caller":"config/config.go:295","message":" go-libp2p resource manager protection enabled"}
# {"level":"INFO","time":"2024-05-19T01:06:01.794+0200","caller":"config/config.go:409","message":"max connections: 100\n"}
# 1:06AM INF Starting llama-cpp-rpc-server on '127.0.0.1:34371'
# {"level":"INFO","time":"2024-05-19T01:06:01.794+0200","caller":"node/node.go:118","message":" Starting EdgeVPN network"}
# create_backend: using CPU backend
# Starting RPC server on 127.0.0.1:34371, backend memory: 31913 MB
# 2024/05/19 01:06:01 failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). # See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details.
# {"level":"INFO","time":"2024-05-19T01:06:01.805+0200","caller":"node/node.go:172","message":" Node ID: 12D3KooWJ7WQAbCWKfJgjw2oMMGGss9diw3Sov5hVWi8t4DMgx92"}
# {"level":"INFO","time":"2024-05-19T01:06:01.806+0200","caller":"node/node.go:173","message":" Node Addresses: [/ip4/127.0.0.1/tcp/44931 /ip4/127.0.0.1/udp/33251/quic-v1/webtransport/certhash/uEiAWAhZ-W9yx2ZHnKQm3BE_ft5jjoc468z5-Rgr9XdfjeQ/certhash/uEiB8Uwn0M2TQBELaV2m4lqypIAY2S-2ZMf7lt_N5LS6ojw /ip4/127.0.0.1/udp/35660/quic-v1 /ip4/192.168.68.110/tcp/44931 /ip4/192.168.68.110/udp/33251/quic-v1/webtransport/certhash/uEiAWAhZ-W9yx2ZHnKQm3BE_ft5jjoc468z5-Rgr9XdfjeQ/certhash/uEiB8Uwn0M2TQBELaV2m4lqypIAY2S-2ZMf7lt_N5LS6ojw /ip4/192.168.68.110/udp/35660/quic-v1 /ip6/::1/tcp/41289 /ip6/::1/udp/33160/quic-v1/webtransport/certhash/uEiAWAhZ-W9yx2ZHnKQm3BE_ft5jjoc468z5-Rgr9XdfjeQ/certhash/uEiB8Uwn0M2TQBELaV2m4lqypIAY2S-2ZMf7lt_N5LS6ojw /ip6/::1/udp/35701/quic-v1]"}
# {"level":"INFO","time":"2024-05-19T01:06:01.806+0200","caller":"discovery/dht.go:104","message":" Bootstrapping DHT"}

(Note: you can also supply the token via CLI arguments.)

At this point, you should see messages in the server logs stating that new workers have been found.

  3. Now you can run inference as usual against the server (the node started in step 1).

Interested in trying it out? While we are still updating the documentation, you can read the full instructions in #2343.

📜 Advanced Function calling support with Mixed JSON Grammars

LocalAI gets better at function calling with mixed grammars!

With this release, LocalAI introduces a transformative capability: support for mixed JSON BNF grammars. This lets you specify a grammar that allows the LLM to output both structured JSON and free text.

How to use it:

To enable mixed grammars, set function.grammar.mixed_mode: true in the YAML configuration file, for example:

  function:
    # disable injecting the "answer" tool
    disable_no_action: true

    grammar:
      # This allows the grammar to also return messages
      mixed_mode: true

This feature significantly enhances LocalAI's ability to interpret and manipulate JSON data coming from the LLM through a more flexible and powerful grammar system. Users can now combine multiple grammar types within a single JSON structure, allowing for dynamic parsing and validation scenarios.
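
As a sketch of what this enables (the model and tool names below are illustrative): with mixed mode on, a standard OpenAI-style tools request may come back either as a structured tool call or as a plain assistant message, depending on what the model decides:

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "<your-model>",
    "messages": [{"role": "user", "content": "What is the weather like in Rome?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "parameters": {
          "type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"]
        }
      }
    }]
  }'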

Grammars can also be turned off entirely, leaving it to the user to define how the LLM output is parsed so that LocalAI can still stay compliant with the OpenAI REST spec.

For example, to interpret Hermes results, you can specify regexes in function.json_regex_match to extract the LLM response:

  function:
    grammar:
      disable: true
    # disable injecting the "answer" tool
    disable_no_action: true
    return_name_in_function_response: true

    json_regex_match:
    - "(?s)<tool_call>(.*?)</tool_call>"
    - "(?s)<tool_call>(.*?)"
  
    replace_llm_results:
    # Drop the scratchpad content from responses
    - key: "(?s)<scratchpad>.*</scratchpad>"
      value: ""
    replace_function_results:
    # Replace everything that is not JSON array or object, just in case.
    - key: '(?s)^[^{\[]*'
      value: ""
    - key: '(?s)[^}\]]*$'
      value: ""
    # Drop the scratchpad content from responses
    - key: "(?s)<scratchpad>.*</scratchpad>"
      value: ""

Note that regexes can still be used when mixed grammars are enabled.

This is especially important for models that do not support grammars - such as transformers or OpenVINO models, which can now support function calling as well. While we update the docs, further documentation can be found in the PRs listed in the changelog below.

🚀 New Model Additions and Updates

Our model gallery continues to grow with exciting new additions like Aya-35b, Mistral-0.3, and Hermes-Theta, along with updates to existing models to keep them at the cutting edge.

This release brings major enhancements to tool-calling support. Besides making the default models in our AIO images more performant, you can now try an enhanced out-of-the-box function-calling experience with the Hermes model family (Hermes-2-Pro-Mistral and Hermes-2-Theta-Llama-3).

Our LocalAI function model!

I have fine-tuned a function-calling model specifically to fully leverage LocalAI's grammar support; you can already find it in the model gallery and on Hugging Face.

🔄 Single Binary Release: Simplified Deployment and Management

In our continuous effort to streamline the user experience and deployment process, LocalAI v2.16.0 proudly introduces a single binary release. This enha...

Read more

v2.15.0

09 May 17:20
f69de3b

🎉 LocalAI v2.15.0! 🚀

Hey awesome people! I'm happy to announce the release of LocalAI version 2.15.0! This update introduces several significant improvements and features, enhancing usability, functionality, and user experience across the board. Dive into the key highlights below, and don't forget to check out the full changelog for more detailed updates.

🌍 WebUI Upgrades: Turbocharged!

🚀 Vision API Integration

The Chat WebUI now seamlessly integrates with the Vision API, making it easier to test image processing models directly through the browser interface - it's a very simple and hackable interface, in less than 400 lines of code with Alpine.js and HTMX!

💬 System Prompts in Chat

System prompts can now be set in the WebUI chat, guiding interactions more intuitively and making our chat interface smarter and more responsive.

🌟 Revamped Welcome Page

New to LocalAI or haven't installed any models yet? No worries! The updated welcome page now guides users through the model installation process, ensuring you're set up and ready to go without any hassle. This is a great first step for newcomers - thanks for your precious feedback!

🔄 Background Operations Indicator

Don't get lost with our new background operations indicator on the WebUI, which shows when tasks are running in the background.

🔍 Filter Models by Tag and Category

As our model gallery balloons, you can now effortlessly sift through models by tag and category, making finding what you need a breeze.

🔧 Single Binary Release

LocalAI is expanding into offering single binary releases, simplifying the deployment process and making it easier to get LocalAI up and running on any system.

For the moment we have condensed the builds into one which disables the AVX and SSE instruction sets for maximum compatibility. We are also planning to include CUDA builds.
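
As a sketch of the workflow (the exact asset name depends on your platform; check the release assets on GitHub):

# Download the single binary, make it executable, and run it
curl -L -o local-ai "https://github.com/mudler/LocalAI/releases/download/v2.15.0/<asset-for-your-platform>"
chmod +x local-ai
./local-ai run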

🧠 Expanded Model Gallery

This release introduces several exciting new models to our gallery, such as 'Soliloquy', 'tess', 'moondream2', 'llama3-instruct-coder' and 'aurora', enhancing the diversity and capability of our AI offerings. Our selection of one-click-install models keeps growing! We carefully pick models from the most trending ones on Hugging Face; feel free to submit requests in a GitHub issue, hop into our Discord, or contribute by hosting your own gallery, or even by adding models directly to LocalAI!


Want to share your model configurations and customizations? See the docs: https://localai.io/docs/getting-started/customize-model/

📣 Let's Make Some Noise!

A gigantic THANK YOU to everyone who’s contributed—your feedback, bug squashing, and feature suggestions are what make LocalAI shine. To all our heroes out there supporting other users and sharing their expertise, you’re the real MVPs!

Remember, LocalAI thrives on community support—not big corporate bucks. If you love what we're building, show some love! A shoutout on social (@LocalAI_OSS and @mudler_it on twitter/X), joining our sponsors, or simply starring us on GitHub makes all the difference.

Also, if you haven't yet joined our Discord, come on over! Here's the link: https://discord.gg/uJAeKSAGDy

Thanks a ton, and.. enjoy this release!


What's Changed

Bug fixes 🐛

  • fix(webui): correct documentation URL for text2img by @mudler in #2233
  • fix(ux): fix small glitches by @mudler in #2265

Exciting New Features 🎉

  • feat: update ROCM and use smaller image by @cryptk in #2196
  • feat(llama.cpp): do not specify backends to autoload and add llama.cpp variants by @mudler in #2232
  • fix(webui): display small navbar with smaller screens by @mudler in #2240
  • feat(startup): show CPU/GPU information with --debug by @mudler in #2241
  • feat(single-build): generate single binaries for releases by @mudler in #2246
  • feat(webui): ux improvements by @mudler in #2247
  • fix: OpenVINO winograd always disabled by @fakezeta in #2252
  • UI: flag trust_remote_code to users // favicon support by @dave-gray101 in #2253
  • feat(ui): prompt for chat, support vision, enhancements by @mudler in #2259


Full Changelog: v2.14.0...v2.15.0

v2.14.0

03 May 07:29
b58274b

🚀 AIO Image Update: llama3 has landed!

We're excited to announce that our AIO image has been upgraded with the latest LLM, llama3, enhancing our capabilities with more accurate and dynamic responses. Behind the scenes it uses https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF, which is ready for function calling, yay!

💬 WebUI enhancements: Updates in Chat, Image Generation, and TTS


Our interfaces for Chat, Text-to-Speech (TTS), and Image Generation have finally landed. Enjoy streamlined and simple interactions thanks to the efforts of our team, led by @mudler, who have worked tirelessly to enhance your experience. The WebUI serves as a quick way to debug and assess models loaded in LocalAI - there is much to improve, but we now have a small, hackable interface!

🖼️ Many new models in the model gallery!

The model gallery has received a substantial upgrade with numerous new models, including Einstein v6.1, SOVL, and several specialized Llama3 iterations. These additions are designed to cater to a broader range of tasks, making LocalAI more versatile than ever. Kudos to @mudler for spearheading these exciting updates - now you can select the model you like with a couple of clicks!

🛠️ Robust Fixes and Optimizations

This update brings a series of crucial bug fixes and security enhancements to ensure our platform remains secure and efficient. Special thanks to @dave-gray101, @cryptk, and @fakezeta for their diligent work in rooting out and resolving these issues 🤗

✨ OpenVINO and more

We're introducing OpenVINO acceleration, along with many OpenVINO models in the gallery. You can now enjoy fast-as-hell speeds on Intel CPUs and GPUs. Applause to @fakezeta for the contributions!

📚 Documentation and Dependency Upgrades

We've updated our documentation and dependencies to keep you equipped with the latest tools and knowledge. These updates ensure that LocalAI remains a robust and dependable platform.

👥 A Community Effort

A special shout-out to our new contributors, @QuinnPiers and @LeonSijiaLu, who have enriched our community with their first contributions. Welcome aboard, and thank you for your dedication and fresh insights!

Each update in this release not only enhances our platform's capabilities but also ensures a safer and more user-friendly experience. We are excited to see how our users leverage these new features in their projects; feel free to drop us a line on Twitter or any other social network, we'd be happy to hear how you use LocalAI!

📣 Spread the word!

First off, a massive thank you (again!) to each and every one of you who've chipped in to squash bugs and suggest cool new features for LocalAI. Your help, kind words, and brilliant ideas are truly appreciated - more than words can say!

And to those of you who've been heroes, giving up your own time to help out fellow users on Discord and in our repo, you're absolutely amazing. We couldn't have asked for a better community.

Just so you know, LocalAI doesn't have the luxury of big corporate sponsors behind it. It's all us, folks. So, if you've found value in what we're building together and want to keep the momentum going, consider showing your support. A little shoutout on your favorite social platforms using @LocalAI_OSS and @mudler_it or joining our sponsors can make a big difference.

Also, if you haven't yet joined our Discord, come on over! Here's the link: https://discord.gg/uJAeKSAGDy

Every bit of support, every mention, and every star adds up and helps us keep this ship sailing. Let's keep making LocalAI awesome together!

Thanks a ton, and.. exciting times ahead with LocalAI!

What's Changed

Bug fixes 🐛

  • fix: config_file_watcher.go - root all file reads for safety by @dave-gray101 in #2144
  • fix: github bump_docs.sh regex to drop emoji and other text by @dave-gray101 in #2180
  • fix: undefined symbol: iJIT_NotifyEvent in import torch ##2153 by @fakezeta in #2179
  • fix: security scanner warning noise: error handlers part 2 by @dave-gray101 in #2145
  • fix: ensure GNUMake jobserver is passed through to whisper.cpp build by @cryptk in #2187
  • fix: bring everything onto the same GRPC version to fix tests by @cryptk in #2199

Exciting New Features 🎉

  • feat(gallery): display job status also during navigation by @mudler in #2151
  • feat: cleanup Dockerfile and make final image a little smaller by @cryptk in #2146
  • fix: swap to WHISPER_CUDA per deprecation message from whisper.cpp by @cryptk in #2170
  • feat: only keep the build artifacts from the grpc build by @cryptk in #2172
  • feat(gallery): support model deletion by @mudler in #2173
  • refactor(application): introduce application global state by @dave-gray101 in #2072
  • feat: organize Dockerfile into distinct sections by @cryptk in #2181
  • feat: OpenVINO acceleration for embeddings in transformer backend by @fakezeta in #2190
  • chore: update go-stablediffusion to latest commit with Make jobserver fix by @cryptk in #2197
  • feat: user defined inference device for CUDA and OpenVINO by @fakezeta in #2212
  • feat(ux): Add chat, tts, and image-gen pages to the WebUI by @mudler in #2222
  • feat(aio): switch to llama3-based for LLM by @mudler in #2225
  • feat(ui): support multilineand style ul by @mudler in #2226


Read more

🖼️ v2.13.0 - Model gallery edition

25 Apr 20:34
c9451cb

Hello folks, Ettore here - I'm happy to announce the v2.13.0 LocalAI release is out, with many features!

Below is a small breakdown of the hottest features introduced in this release - however, there are many other improvements (especially from the community) as well, so don't miss out on the changelog!

Check out the full changelog below for an overview of all the changes that went into this release (this one is quite packed).

🖼️ Model gallery

This is the first release with the model gallery in the WebUI: you can now see a "Model" button in the WebUI which takes you to a selection of models.

You can now choose models among stablediffusion, llama3, tts, embeddings and more! The gallery is growing steadily and is kept up-to-date.

The models are simple YAML files which are hosted in this repository: https://github.com/mudler/LocalAI/tree/master/gallery - you can host your own repository with your model index, or if you want you can contribute to LocalAI.

If you want to contribute models, you can open a PR against the gallery directory: https://github.com/mudler/LocalAI/tree/master/gallery.

Rerankers

I'm excited to introduce a new backend for rerankers. LocalAI now implements the Jina API (https://jina.ai/reranker/#apiform) as a compatibility layer, so you can point existing Jina clients at the LocalAI address. Under the hood, it uses https://github.com/AnswerDotAI/rerankers.

You can test this by using the container images with Python (this does NOT work with the core images) and a model config file like the one below, or by installing cross-encoder from the gallery in the UI:

name: jina-reranker-v1-base-en
backend: rerankers
parameters:
  model: cross-encoder

and test it with:

    curl http://localhost:8080/v1/rerank \
      -H "Content-Type: application/json" \
      -d '{
      "model": "jina-reranker-v1-base-en",
      "query": "Organic skincare products for sensitive skin",
      "documents": [
        "Eco-friendly kitchenware for modern homes",
        "Biodegradable cleaning supplies for eco-conscious consumers",
        "Organic cotton baby clothes for sensitive skin",
        "Natural organic skincare range for sensitive skin",
        "Tech gadgets for smart homes: 2024 edition",
        "Sustainable gardening tools and compost solutions",
        "Sensitive skin-friendly facial cleansers and toners",
        "Organic food wraps and storage solutions",
        "All-natural pet food for dogs with allergies",
        "Yoga mats made from recycled materials"
      ],
      "top_n": 3
    }'

Parler-tts

There is a new backend available for TTS: parler-tts (https://github.com/huggingface/parler-tts). It is possible to install and configure the model directly from the gallery.
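
Once a parler-tts model is installed, it can be driven through LocalAI's /tts endpoint; a minimal sketch (the model name is illustrative and depends on what you installed from the gallery):

curl http://localhost:8080/tts \
  -H "Content-Type: application/json" \
  -d '{"model": "<parler-tts-model-from-gallery>", "input": "Hello from LocalAI!"}' \
  --output out.wav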

🎈 Lots of small improvements behind the scenes!

Thanks to our outstanding community, we have enhanced the performance and stability of LocalAI across various modules. From backend optimizations to front-end adjustments, every tweak helps make LocalAI smoother and more robust.

📣 Spread the word!

First off, a massive thank you (again!) to each and every one of you who've chipped in to squash bugs and suggest cool new features for LocalAI. Your help, kind words, and brilliant ideas are truly appreciated - more than words can say!

And to those of you who've been heroes, giving up your own time to help out fellow users on Discord and in our repo, you're absolutely amazing. We couldn't have asked for a better community.

Just so you know, LocalAI doesn't have the luxury of big corporate sponsors behind it. It's all us, folks. So, if you've found value in what we're building together and want to keep the momentum going, consider showing your support. A little shoutout on your favorite social platforms using @LocalAI_OSS and @mudler_it or joining our sponsors can make a big difference.

Also, if you haven't yet joined our Discord, come on over! Here's the link: https://discord.gg/uJAeKSAGDy

Every bit of support, every mention, and every star adds up and helps us keep this ship sailing. Let's keep making LocalAI awesome together!

Thanks a ton, and here's to more exciting times ahead with LocalAI!

What's Changed

Bug fixes 🐛

  • fix(autogptq): do not use_triton with qwen-vl by @thiner in #1985
  • fix: respect concurrency from parent build parameters when building GRPC by @cryptk in #2023
  • ci: fix release pipeline missing dependencies by @mudler in #2025
  • fix: remove build path from help text documentation by @cryptk in #2037
  • fix: previous CLI rework broke debug logging by @cryptk in #2036
  • fix(fncall): fix regression introduced in #1963 by @mudler in #2048
  • fix: adjust some sources names to match the naming of their repositories by @cryptk in #2061
  • fix: move the GRPC cache generation workflow into it's own concurrency group by @cryptk in #2071
  • fix(llama.cpp): set -1 as default for max tokens by @mudler in #2087
  • fix(llama.cpp-ggml): fixup max_tokens for old backend by @mudler in #2094
  • fix missing TrustRemoteCode in OpenVINO model load by @fakezeta in #2114
  • Incl ocv pkg for diffsusers utils by @jtwolfe in #2115

Exciting New Features 🎉

  • feat: kong cli refactor fixes #1955 by @cryptk in #1974
  • feat: add flash-attn in nvidia and rocm envs by @golgeek in #1995
  • feat: use tokenizer.apply_chat_template() in vLLM by @golgeek in #1990
  • feat(gallery): support ConfigURLs by @mudler in #2012
  • fix: dont commit generated files to git by @cryptk in #1993
  • feat(parler-tts): Add new backend by @mudler in #2027
  • feat(grpc): return consumed token count and update response accordingly by @mudler in #2035
  • feat(store): add Golang client by @mudler in #1977
  • feat(functions): support models with no grammar, add tests by @mudler in #2068
  • refactor(template): isolate and add tests by @mudler in #2069
  • feat: fiber logs with zerlog and add trace level by @cryptk in #2082
  • models(gallery): add gallery by @mudler in #2078
  • Add tensor_parallel_size setting to vllm setting items by @Taikono-Himazin in #2085
  • Transformer Backend: Implementing use_tokenizer_template and stop_prompts options by @fakezeta in #2090
  • feat: Galleries UI by @mudler in #2104
  • Transformers Backend: max_tokens adherence to OpenAI API by @fakezeta in #2108
  • Fix cleanup sonarqube findings by @cryptk in #2106
  • feat(models-ui): minor visual enhancements by @mudler in #2109
  • fix(gallery): show a fake image if no there is no icon by @mudler in #2111
  • feat(rerankers): Add new backend, support jina rerankers API by @mudler in #2121

🧠 Models

  • models(llama3): add llama3 to embedded models by @mudler in #2074
  • feat(gallery): add llama3, hermes, phi-3, and others by @mudler in #2110
  • models(gallery): add new models to the gallery by @mudler in #2124
  • models(gallery): add more models by @mudler in #2129

👒 Dependencies

  • deps: Update version of vLLM to add support of Cohere Command_R model in vLLM inference by @holyCowMp3 in #1975
  • ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1991
  • build(deps): bump google.golang.org/protobuf from 1.31.0 to 1.33.0 by @dependabot in #1998
  • build(deps): bump github.com/docker/docker from 20.10.7+incompatible to 24.0.9+incompatible by @dependabot in #1999
  • build(deps): bump github.com/gofiber/fiber/v2 from 2.52.0 to 2.52.1 by @dependabot in #2001
  • build(deps): bump actions/checkout from 3 to 4 by @dependabot in #2002
  • build(deps): bump actions/setup-go from 4 to 5 by @dependabot in #2003
  • build(deps): bump peter-evans/create-pull-request from 5 to 6 by @dependabot in #2005
  • build(deps): bump actions/cache from ...
Read more

v2.12.4

11 Apr 10:36

Patch release to include #1985

v2.12.3

10 Apr 09:15
d692b2c

I'm happy to announce the v2.12.3 LocalAI release is out!

🌠 Landing page and Swagger

Ever wondered what to do after LocalAI is up and running? Integration with a simple web interface has been started, and you can now see a landing page when hitting the LocalAI front page.

You can also now enjoy Swagger to try out the API calls directly.

🌈 AIO images changes

Now the default model for CPU images is https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B-GGUF - pre-configured for functions and tools API support!
If you are an Intel-GPU owner, the Intel profile for AIO images is now available too!

🚀 OpenVINO and transformers enhancements

There is now support for OpenVINO, and transformers got token streaming support, thanks to @fakezeta!

To try OpenVINO, you can use the example available in the documentation: https://localai.io/features/text-generation/#examples

🎈 Lots of small improvements behind the scenes!

Thanks to our outstanding community, we have enhanced several areas:

  • The build time of LocalAI was sped up significantly, thanks to @cryptk for the efforts in enhancing the build system
  • @thiner worked hard to get vision support into AutoGPTQ
  • ... and much more! See below for a full list, and be sure to star LocalAI and give it a try!

📣 Spread the word!

First off, a massive thank you (again!) to each and every one of you who've chipped in to squash bugs and suggest cool new features for LocalAI. Your help, kind words, and brilliant ideas are truly appreciated - more than words can say!

And to those of you who've been heroes, giving up your own time to help out fellow users on Discord and in our repo, you're absolutely amazing. We couldn't have asked for a better community.

Just so you know, LocalAI doesn't have the luxury of big corporate sponsors behind it. It's all us, folks. So, if you've found value in what we're building together and want to keep the momentum going, consider showing your support. A little shoutout on your favorite social platforms using @LocalAI_OSS and @mudler_it or joining our sponsors can make a big difference.

Also, if you haven't yet joined our Discord, come on over! Here's the link: https://discord.gg/uJAeKSAGDy

Every bit of support, every mention, and every star adds up and helps us keep this ship sailing. Let's keep making LocalAI awesome together!

Thanks a ton, and here's to more exciting times ahead with LocalAI!

What's Changed

Bug fixes 🐛

  • fix: downgrade torch by @mudler in #1902
  • fix(aio): correctly detect intel systems by @mudler in #1931
  • fix(swagger): do not specify a host by @mudler in #1930
  • fix(tools): correctly render tools response in templates by @mudler in #1932
  • fix(grammar): respect JSONmode and grammar from user input by @mudler in #1935
  • fix(hermes-2-pro-mistral): add stopword for toolcall by @mudler in #1939
  • fix(functions): respect when selected from string by @mudler in #1940
  • fix: use exec in entrypoint scripts to fix signal handling by @cryptk in #1943
  • fix(hermes-2-pro-mistral): correct stopwords by @mudler in #1947
  • fix(welcome): stable model list by @mudler in #1949
  • fix(ci): manually tag latest images by @mudler in #1948
  • fix(seed): generate random seed per-request if -1 is set by @mudler in #1952
  • fix regression #1971 by @fakezeta in #1972

Exciting New Features 🎉

  • feat(aio): add intel profile by @mudler in #1901
  • Enhance autogptq backend to support VL models by @thiner in #1860
  • feat(assistant): Assistant and AssistantFiles api by @christ66 in #1803
  • feat: Openvino runtime for transformer backend and streaming support for Openvino and CUDA by @fakezeta in #1892
  • feat: Token Stream support for Transformer, fix: missing package for OpenVINO by @fakezeta in #1908
  • feat(welcome): add simple welcome page by @mudler in #1912
  • fix(build): better CI logging and correct some build failure modes in Makefile by @cryptk in #1899
  • feat(webui): add partials, show backends associated to models by @mudler in #1922
  • feat(swagger): Add swagger API doc by @mudler in #1926
  • feat(build): adjust number of parallel make jobs by @cryptk in #1915
  • feat(swagger): update by @mudler in #1929
  • feat: first pass at improving logging by @cryptk in #1956
  • fix(llama.cpp): set better defaults for llama.cpp by @mudler in #1961

📖 Documentation and examples

  • docs(aio-usage): update docs to show examples by @mudler in #1921

Full Changelog: v2.11.0...v2.12.3

v2.12.1

09 Apr 13:46
cc3d601

(The release notes are identical to v2.12.3 above.)

Full Changelog: v2.11.0...v2.12.1

v2.12.0

09 Apr 07:03

(The release notes are identical to v2.12.3 above.)

Full Changelog: v2.11.0...v2.12.0

v2.11.0

26 Mar 17:18
1395e50

Introducing LocalAI v2.11.0: All-in-One Images!

Hey everyone! 🎉 I'm super excited to share what we've been working on at LocalAI - the launch of v2.11.0. This isn't just any update; it's a massive leap forward, making LocalAI easier to use, faster, and more accessible for everyone.

🌠 The Spotlight: All-in-One Images, OpenAI in a box

Imagine having a magic box that, once opened, gives you everything you need to get your AI project off the ground with generative AI. A full clone of OpenAI in a box. That's exactly what our AIO images are! Designed for both CPU and GPU environments, these images come pre-packed with a full suite of models and backends, ready to go right out of the box.

Whether you're using Nvidia, AMD, or Intel, we've got an optimized image for you. If you are CPU-only, you can enjoy even smaller and lighter images.

To start LocalAI pre-configured with function calling, LLM, TTS, speech-to-text, and image generation, just run:

docker run -p 8080:8080 --name local-ai -ti localai/localai:latest-aio-cpu

## Do you have an Nvidia GPU? Use one of these instead
## CUDA 11
# docker run -p 8080:8080 --gpus all --name local-ai -ti localai/localai:latest-aio-gpu-cuda-11
## CUDA 12
# docker run -p 8080:8080 --gpus all --name local-ai -ti localai/localai:latest-aio-gpu-cuda-12

❤️ Why You're Going to Love AIO Images:

  • Ease of Use: Say goodbye to the setup blues. With AIO images, everything is configured upfront, so you can dive straight into the fun part - hacking!
  • Flexibility: CPU, Nvidia, AMD, Intel? We support them all. These images are made to adapt to your setup, not the other way around.
  • Speed: Spend less time configuring and more time innovating. Our AIO images are all about getting you across the starting line as fast as possible.

🌈 Jumping In Is a Breeze:

Getting started with AIO images is as simple as pulling them from Docker Hub or Quay and running them. We take care of the rest, downloading all necessary models for you. For all the details, including how to customize your setup with environment variables, our updated docs have you covered, along with more details on the AIO images themselves.

🎈 Vector Store

Thanks to a great contribution from @richiejp, LocalAI now has a new backend type, "vector stores", that allows using LocalAI as an in-memory vector DB (#1792). You can learn more about it in the documentation!
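
A rough sketch of the idea, based on the stores API introduced in #1792 (vectors truncated for brevity; in practice the keys would be embeddings):

# Store some vectors together with their payloads
curl http://localhost:8080/stores/set \
  -H "Content-Type: application/json" \
  -d '{"keys": [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]], "values": ["doc one", "doc two"]}'

# Query the closest entries for a given vector
curl http://localhost:8080/stores/find \
  -H "Content-Type: application/json" \
  -d '{"key": [0.1, 0.2, 0.3], "topk": 1}'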

🐛 Bug fixes

This release contains major bug fixes to the watchdog component, and a fix for a regression introduced in v2.10.x which caused --f16, --threads and --context-size not to be applied as model defaults.

🎉 New Model defaults for llama.cpp

Model defaults have changed to automatically offload the maximum number of GPU layers if a GPU is available, and saner defaults are now set for models to enhance the LLM's output.
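
If you prefer pinning these values rather than relying on the new defaults, the usual model YAML fields still apply; a minimal sketch (all values illustrative):

name: my-model
parameters:
  model: my-model.gguf
f16: true
threads: 8
context_size: 4096
gpu_layers: 90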

🧠 New pre-configured models

You can now run llava-1.6-vicuna, llava-1.6-mistral and hermes-2-pro-mistral; see "Run other models" for the list of all pre-configured models available in this release.
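
For example, a pre-configured model can be started straight from the container image by passing its name (the image tag below is illustrative; pick the one matching your setup):

docker run -ti -p 8080:8080 localai/localai:v2.11.0-ffmpeg-core llava-1.6-mistral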

📣 Spread the word!

First off, a massive thank you (again!) to each and every one of you who've chipped in to squash bugs and suggest cool new features for LocalAI. Your help, kind words, and brilliant ideas are truly appreciated - more than words can say!

And to those of you who've been heroes, giving up your own time to help out fellow users on Discord and in our repo, you're absolutely amazing. We couldn't have asked for a better community.

Just so you know, LocalAI doesn't have the luxury of big corporate sponsors behind it. It's all us, folks. So, if you've found value in what we're building together and want to keep the momentum going, consider showing your support. A little shoutout on your favorite social platforms using @LocalAI_OSS and @mudler_it or joining our sponsors can make a big difference.

Also, if you haven't yet joined our Discord, come on over! Here's the link: https://discord.gg/uJAeKSAGDy

Every bit of support, every mention, and every star adds up and helps us keep this ship sailing. Let's keep making LocalAI awesome together!

Thanks a ton, and here's to more exciting times ahead with LocalAI!

🎁 What's More in v2.11.0?

Bug fixes 🐛

  • fix(config): pass by config options, respect defaults by @mudler in #1878
  • fix(watchdog): use ShutdownModel instead of StopModel by @mudler in #1882
  • NVIDIA GPU detection support for WSL2 environments by @enricoros in #1891
  • Fix NVIDIA VRAM detection on WSL2 environments by @enricoros in #1894

Exciting New Features 🎉

  • feat(functions/aio): all-in-one images, function template enhancements by @mudler in #1862
  • feat(aio): entrypoint, update workflows by @mudler in #1872
  • feat(aio): add tests, update model definitions by @mudler in #1880
  • feat(stores): Vector store backend by @richiejp in #1795
  • ci(aio): publish hipblas and Intel GPU images by @mudler in #1883
  • ci(aio): add latest tag images by @mudler in #1884

🧠 Models

  • feat(models): add phi-2-chat, llava-1.6, bakllava, cerbero by @mudler in #1879

📖 Documentation and examples

  • ⬆️ Update docs version mudler/LocalAI by @localai-bot in #1856
  • docs(mac): improve documentation for mac build by @tauven in #1873
  • docs(aio): Add All-in-One images docs by @mudler in #1887
  • fix(aio): make image-gen for GPU functional, update docs by @mudler in #1895


Full Changelog: v2.10.1...v2.11.0

v2.10.1

18 Mar 18:44
ed5734a

What's Changed

Bug fixes 🐛

  • fix(llama.cpp): fix eos without cache by @mudler in #1852
  • fix(config): default to debug=false if not set by @mudler in #1853
  • fix(config-watcher): start only if config-directory exists by @mudler in #1854

Exciting New Features 🎉

  • deps(whisper.cpp): update, fix cublas build by @mudler in #1846

Other Changes

Full Changelog: v2.10.0...v2.10.1