
[Feature] Plans to add model provider support #4030

Open · 3 of 27 tasks
fred-bf opened this issue Feb 10, 2024 · 13 comments
Labels: planned (planned feature, will support in the future)
Milestone: v3.0

Comments

@fred-bf (Contributor)
fred-bf commented Feb 10, 2024

There have been many discussions in the community regarding support for multiple models.

Here we will collect NextChat's current support plans for the different models and keep this issue updated as the overall work progresses.

First, we plan to separate the model-related logic from the frontend, possibly as a standalone JavaScript component (which could be managed as an independent package). We will then build an adapter for each model on top of this component/package. We expect each adapter to provide at least the following basic capabilities: multimodality (text and images), token billing, and customizable model parameters (temperature, max_tokens, etc.).
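To make the adapter idea concrete, the contract described above might look roughly like the sketch below. All names (`ModelProviderAdapter`, `EchoAdapter`, etc.) are hypothetical illustrations, not NextChat's actual API:

```typescript
// Hypothetical sketch of a per-provider adapter contract.
// Names are illustrative only, not NextChat's real interfaces.

type Modality = "text" | "image";

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ModelParams {
  temperature?: number; // customizable model parameters
  max_tokens?: number;
}

interface ModelProviderAdapter {
  readonly name: string;
  readonly modalities: Modality[]; // multimodality support
  chat(messages: ChatMessage[], params?: ModelParams): Promise<string>;
  countTokens(text: string): number; // basis for token billing
}

// A trivial mock adapter, useful for exercising the UI without a backend.
class EchoAdapter implements ModelProviderAdapter {
  readonly name = "echo";
  readonly modalities: Modality[] = ["text"];

  async chat(messages: ChatMessage[]): Promise<string> {
    const last = messages[messages.length - 1];
    return `echo: ${last?.content ?? ""}`;
  }

  countTokens(text: string): number {
    // Rough whitespace tokenization; a real adapter would use the
    // provider's own tokenizer (e.g. tiktoken for OpenAI models).
    return text.split(/\s+/).filter(Boolean).length;
  }
}
```

Real adapters (OpenAI, Ollama, Claude, …) would then implement the same interface, so the UI layer never needs provider-specific branches.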

We have roughly divided the work into the following parts:

NextChat UI Separation

  • Separation of UI components
  • Allow registering model providers on the Settings page
  • Standardization of configuration, statistics, sharing, and other functionality
  • Standardization of model features, including function calling, agent loaders, etc.

Implementation of Multi-Model Providers

  • Basic multi-model package (using OpenAI GPT as an example)
  • Ollama
  • Derivatives of GPT
    • Azure
    • OneAPI (to be determined)
  • Open-source models (considering support for a local App Model Manager)
    • Llama
    • Mistral
  • Closed-source models
    • Claude
    • AWS Lex
    • Google Gemini
  • Other models
    • Wenxin Yiyan (Baidu)
    • Zhipu AI
  • Hosting Platforms
    • Poe.com
    • together.ai
    • Cloudflare AI

Local Model Manager

  • Support for local model downloading and running

Server-Side Multi-Model Service

  • Support for independently deploying a multi-model service as NextChat's API proxy
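Such a server-side service would, at minimum, need to route a unified chat request to the right upstream provider. The sketch below shows one simple way to do that, by model-name prefix; the routing table and prefixes are assumptions for illustration, not NextChat's actual design:

```typescript
// Hypothetical sketch: route a unified chat request to different
// upstream providers by model-name prefix. The mapping below is an
// illustrative assumption, not NextChat's real configuration.

interface UpstreamTarget {
  baseURL: string;
  path: string;
}

const PROVIDER_ROUTES: Record<string, UpstreamTarget> = {
  gpt: { baseURL: "https://api.openai.com", path: "/v1/chat/completions" },
  claude: { baseURL: "https://api.anthropic.com", path: "/v1/messages" },
  gemini: {
    baseURL: "https://generativelanguage.googleapis.com",
    path: "/v1beta/models",
  },
};

// Pick the upstream for a model name like "gpt-4" or "claude-3-opus".
function resolveUpstream(model: string): UpstreamTarget {
  const prefix = Object.keys(PROVIDER_ROUTES).find((p) =>
    model.startsWith(p)
  );
  if (!prefix) throw new Error(`no provider route for model: ${model}`);
  return PROVIDER_ROUTES[prefix];
}
```

A deployed proxy would additionally need to translate request/response bodies between provider formats and attach per-provider credentials.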

Current implementation:

fred-bf pinned this issue Feb 10, 2024
fred-bf changed the title from “[Feature] NextChat support multi model provider” to “[Feature] Plans to add model provider support” Feb 10, 2024
fred-bf added the "planned" label (planned feature, will support in the future) Feb 10, 2024
fred-bf added this to the v3.0 milestone Feb 10, 2024
@WBinBin001

Are there any plans to support mistral.ai?

@fred-bf (Contributor, Author)
fred-bf commented Feb 28, 2024

@WBinBin001 you can use Mistral through Ollama: https://docs.nextchat.dev/models/ollama

@WBinBin001

> @WBinBin001 you can use mistral through Ollama https://docs.nextchat.dev/models/ollama

Will the mistral.ai platform's own API and key be supported?

@PPoooMM
PPoooMM commented Feb 29, 2024

The Mistral AI API offers Mistral-small-latest, Mistral-medium-latest, and Mistral-large-latest. The Mistral Large model in particular ranks second on MMLU, behind only GPT-4 (86.4%): it scored 81.2%, surpassing even GPT-4 Turbo (80.48%). That makes it especially interesting, and I support its inclusion in popular cross-platform chatbots like ChatGPTNextWeb.

@EarlyBedEarlyUp

Looking forward to Claude 3 support.

@snowords

Vote for Moonshot.

@GrayXu
GrayXu commented Mar 19, 2024

I think some of this functionality should not be implemented in this repo. Different LLM backends can standardize their APIs through xusenlinzy/api-for-open-llm or BerriAI/litellm; ChatGPTNextWeb then only needs to focus on letting users set the URL and model per conversation.
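Under this approach, once every backend sits behind an OpenAI-compatible gateway, the client-side work reduces to building a request against a configurable base URL and model name. A minimal sketch, assuming a gateway such as a local LiteLLM proxy (the port, model name, and `buildChatRequest` helper are illustrative assumptions):

```typescript
// Sketch of the "gateway" approach: the client only knows a base URL
// and a model name; the gateway routes to the actual backend.
// buildChatRequest and the example values are hypothetical.

interface GatewayConfig {
  baseURL: string; // e.g. a local LiteLLM proxy: "http://localhost:4000"
  model: string;   // any model the gateway routes, e.g. "ollama/mistral"
}

function buildChatRequest(cfg: GatewayConfig, userMessage: string) {
  return {
    url: `${cfg.baseURL}/v1/chat/completions`, // OpenAI-compatible path
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: cfg.model,
        messages: [{ role: "user", content: userMessage }],
      }),
    },
  };
}
```

Swapping backends then means changing only `baseURL` and `model` in settings; no provider-specific client code is required.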

@Genuifx
Genuifx commented Mar 23, 2024

Kimi is awesome. Support it!

@0x5c0f
0x5c0f commented Mar 30, 2024

Looking forward to AWS Bedrock support.

@fred-bf (Contributor, Author)
fred-bf commented Apr 9, 2024

Claude is now supported; see PR #4457.

@qiqitom
qiqitom commented Apr 19, 2024

I'd really like to see Gemini Pro 1.5 added. The Gemini Pro 1.0 model already performs surprisingly well, and the video-processing capabilities of Gemini Pro 1.5 are impressive. Thank you.

Status: In Progress
10 participants