Releases: langgenius/dify

v0.4.6

12 Jan 04:51
9245f0a

What's Changed

Full Changelog: 0.4.5...0.4.6

v0.4.5

11 Jan 02:57
f7939c7

Fix some bugs.

What's Changed

Full Changelog: 0.4.4...0.4.5

v0.4.4

05 Jan 19:10
9f58912

Add Together.ai model provider and fix some bugs.

What's Changed

Full Changelog: 0.4.3...0.4.4

v0.4.3

04 Jan 13:17
97c972f

Optimize performance and fix a few bugs.

What's Changed

New Contributors

Full Changelog: 0.4.2...0.4.3

v0.4.2

03 Jan 17:36
91ff07f

What's Changed

New Contributors

Full Changelog: 0.4.1...0.4.2

v0.4.1

03 Jan 02:07
4de27d0

What's Changed

Full Changelog: 0.4.0...0.4.1

v0.4.0

02 Jan 16:20
d70d61b

🎉🎉 Dify's Version 0.4 is out now.

We've made some serious under-the-hood changes to how the Model Runtime works, making it more straightforward for our specific needs, and paving the way for smoother model expansions and more robust production use.

What's Changed

  • Model Runtime Rework: We've moved away from LangChain, simplifying the model layer. Now, expanding models is as easy as setting up the model provider in the backend with a bit of YAML.

    For more details, see: https://github.com/langgenius/dify/blob/main/api/core/model_runtime/README.md
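Under this scheme, a model definition might look like the following sketch. The field names here are illustrative approximations, not the exact schema; the Model Runtime README linked above is the authoritative reference.

```yaml
# Illustrative sketch only -- field names approximate the real schema.
model: my-fine-tuned-gpt        # hypothetical model identifier
label:
  en_US: My Fine-tuned GPT
model_type: llm
model_properties:
  mode: chat
  context_size: 16384
parameter_rules:
  - name: temperature
    type: float
    default: 0.7
    min: 0.0
    max: 2.0
```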

  • App Generation Update: Replacing the old Redis Pubsub queue with threading.Queue for a more reliable, performant, and straightforward workflow.
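The pattern is roughly the following (a minimal sketch to illustrate the idea, not Dify's actual implementation): a worker thread pushes generated chunks onto an in-process queue, and the request handler drains the queue until a sentinel value signals completion.

```python
import queue
import threading

def generate(events: "queue.Queue[str | None]") -> None:
    # Worker: push generated chunks, then a sentinel to signal completion.
    for chunk in ("Hello", ", ", "world"):
        events.put(chunk)
    events.put(None)  # sentinel: generation finished

def stream() -> str:
    # Handler: start the worker and drain the queue until the sentinel arrives.
    events: "queue.Queue[str | None]" = queue.Queue()
    worker = threading.Thread(target=generate, args=(events,))
    worker.start()
    parts = []
    while (chunk := events.get()) is not None:
        parts.append(chunk)
    worker.join()
    return "".join(parts)
```

Because the queue lives in the same process as the handler, there is no external broker to connect to or lose messages in, which is where the reliability gain comes from.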

  • Model Providers Upgraded: Support for both preset and custom models, ideal for adding OpenAI fine-tuned models or fitting into various MaaS platforms. Plus, you can now check out supported models without any initial configuration.

  • Context Size Definition: Introduced distinct context size settings, separate from Max Tokens, to handle the different limits and sizes in models like OpenAI's GPT-4 Turbo.
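For example, GPT-4 Turbo accepts a 128k-token context window but caps completions at 4,096 tokens, so the two limits have to be tracked independently. A hypothetical validation helper (not Dify's actual code) makes the distinction concrete:

```python
def validate_request(prompt_tokens: int, max_tokens: int,
                     context_size: int, max_output_tokens: int) -> bool:
    """Check a request against two independent limits:
    the model's completion cap and its total context window."""
    if max_tokens > max_output_tokens:
        return False  # requested completion exceeds the model's output cap
    return prompt_tokens + max_tokens <= context_size

# GPT-4 Turbo-like limits: 128k context window, 4,096-token completions.
ok = validate_request(prompt_tokens=100_000, max_tokens=4_096,
                      context_size=128_000, max_output_tokens=4_096)
```

With a single Max Tokens setting, a request like the one above would be rejected or misconfigured; separating the two limits lets long prompts coexist with short completion caps.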

  • Flexible Model Parameters: Customize your model's behavior with easily adjustable parameters through YAML.

  • GPT-2 Tokenizer Files: Now cached within Dify's codebase, making builds quicker and solving issues related to acquiring tokenizer files in offline source deployments.

  • Model List Display: The App now displays all supported preset models, including details on any that aren't available and how to configure them.

  • New Model Additions: Including Google's Gemini Pro and Gemini Pro Vision models (Vision requires an image input), Azure OpenAI's GPT-4V, and support for OpenAI-API-compatible providers.

  • Expanded Inference Support: Xorbits Inference now includes chat mode models, and there's a wider range of models supporting Agent inference.

  • Updates & Fixes: We've updated other model providers to be in sync with the latest version APIs and features, and squashed a series of minor bugs for a smoother experience.

Catch you in the code,

The Dify Team 🛠️

Change Log

New Contributors

Full Changelog: 0.3.34...0.4.0

v0.3.34

19 Dec 06:22
43741ad

Features

  • Annotation Reply, see details: Link
  • Dify Knowledge supports unstructured.io as the file extraction solution.
  • Azure OpenAI adds support for the gpt-4-1106-preview and gpt-4-vision-preview models.
  • SaaS plans now support replacing the WebApp logo after subscribing.

Important Upgrade Notice

  • Annotation Reply

The annotation feature now supports direct replies to related questions, so values must be backfilled for previously unstored questions in the message_annotations table.

Run the following commands in your api Docker container:

    docker exec -it docker-api-1 bash
    flask add-annotation-question-field-value
    

Or run the following commands directly if you launched from source code:

    cd api
    flask add-annotation-question-field-value
    
  • Unstructured.io Support

With this feature, we have added parsing support for four new text formats (msg, eml, ppt, pptx) and optimized two existing ones (text, markdown) in our SaaS environment.

For local deployments, take the following steps to enable unstructured.io support:

    Unstructured Document

1. Pull the image from unstructured's image repository:
docker pull downloads.unstructured.io/unstructured-io/unstructured-api:latest

2. Once pulled, launch the container:
docker run -d --rm --name unstructured-api downloads.unstructured.io/unstructured-io/unstructured-api:latest --port 8000 --host 0.0.0.0

3. In docker-compose.yaml, add two new environment variables to the api and worker services:
ETL_TYPE=Unstructured
UNSTRUCTURED_API_URL=http://unstructured:8000/general/v0/general

4. Restart Dify's services:
docker-compose up -d
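In a standard docker-compose.yaml, the environment-variable step above amounts to something like the following sketch. Service names follow Dify's default compose file; note that the unstructured-api container must be reachable from the api/worker network under the hostname used in UNSTRUCTURED_API_URL.

```yaml
# Sketch only -- merge into your existing docker-compose.yaml.
services:
  api:
    environment:
      ETL_TYPE: Unstructured
      UNSTRUCTURED_API_URL: http://unstructured:8000/general/v0/general
  worker:
    environment:
      ETL_TYPE: Unstructured
      UNSTRUCTURED_API_URL: http://unstructured:8000/general/v0/general
```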
    

What's Changed

New Contributors

Full Changelog: 0.3.33...0.3.34

v0.3.33

08 Dec 05:30
bee0d12

What's Changed

New Contributors

Full Changelog: 0.3.32...0.3.33

v0.3.32

25 Nov 08:46
3cc6978

New Features

  • Support rerank models from Xinference for local deployment, such as bge-reranker-large and bge-reranker-base.
  • ChatGLM2/3 support

⚠️ Breaking Change

We've recently switched the ChatGLM provider to the OpenAI API protocol, so from now on we'll only be supporting ChatGLM3 and ChatGLM2. Support for ChatGLM1 has been dropped.

What's Changed

New Contributors

Full Changelog: 0.3.31...0.3.32