
feat: Support for sequential model integration for multi-layered language translation and specialized tasks #5

Open
arnediehm opened this issue May 12, 2024 · 2 comments

Comments

@arnediehm

Is your feature request related to a problem? Please describe.
I really support the idea of a language translation layer in #4 and would like to build on it with a suggestion of my own.
I'm always frustrated when a model trained on, e.g., a medical dataset struggles with languages other than English, and sometimes even with English.

Describe the solution you'd like
Currently, it's possible to include a second model in a chat, allowing both models to respond to input. However, I think it would be beneficial to have the option to use a second model not only simultaneously for responses but also sequentially, with a second 'system prompt'. This means the second model would process the output of the first model along with a separate system prompt. Such a feature could also help with translation issues by enabling the addition of a second model for the desired language translation, along with a prompt like 'Translate the following into Spanish:' or 'Refine the following text:'.
This would also allow incorporating specialized models for particular tasks to boost the precision and relevance of responses. For example, a model dedicated to fact-checking or identifying objects in images could be added, and its output further processed with another model.
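A minimal sketch of the sequential chaining described above, assuming nothing about Open WebUI's internals: each stage pairs its own system prompt with a model call, and each model sees only its system prompt plus the previous stage's output. The `run_sequential` helper and the stand-in models are hypothetical, not an existing API.

```python
def run_sequential(user_input, stages):
    """Run `stages` in order; each stage is (system_prompt, call_model).

    Every stage receives only its own system prompt plus the previous
    stage's output as the user message, matching the proposal above.
    """
    text = user_input
    for system_prompt, call_model in stages:
        text = call_model(system_prompt, text)
    return text

# Toy stand-in models; a real setup would call chat-completion endpoints.
medical_model = lambda sys, msg: f"[answer to: {msg}]"
translator = lambda sys, msg: f"[es] {msg}"

result = run_sequential(
    "What causes anemia?",
    [
        ("You are a medical assistant.", medical_model),
        ("Translate the following into Spanish:", translator),
    ],
)
print(result)  # [es] [answer to: What causes anemia?]
```

The key design point is that the second model never sees the original user message, only the first model's output plus its own system prompt.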

Describe alternatives you've considered
Writing a Python script that implements the solution myself. However, this would be less convenient and would limit use cases, for example if I want to use this feature from my phone or give my roommates access to it. Therefore, it would be amazing to be able to use this functionality from the web UI.


@tjbck tjbck transferred this issue from open-webui/open-webui May 24, 2024
@tcztzy

tcztzy commented May 24, 2024

The pipeline operator `|` should be supported, e.g.:

model = "translate_to_en | llm | translate_en_to"
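One way such a pipe-separated model string could be resolved is to split on `|`, look each name up in a registry, and compose the stages left to right. This is a hypothetical sketch of that parsing idea, not how any existing feature works; the registry entries are toy callables.

```python
# Hypothetical registry mapping stage names to callables; real entries
# would invoke actual models or translation services.
MODEL_REGISTRY = {
    "translate_to_en": lambda text: f"<en>{text}</en>",
    "llm": lambda text: f"<reply>{text}</reply>",
    "translate_en_to": lambda text: f"<out>{text}</out>",
}

def run_pipeline(spec, text):
    """Split a 'a | b | c' spec and feed `text` through each stage in order."""
    for name in (part.strip() for part in spec.split("|")):
        text = MODEL_REGISTRY[name](text)
    return text

output = run_pipeline("translate_to_en | llm | translate_en_to", "hola")
print(output)  # <out><reply><en>hola</en></reply></out>
```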

@tjbck
Collaborator

tjbck commented May 30, 2024

Implemented as a filter pipeline using inlet and outlet hooks!

pipelines-filter-inlet-outlet.mov

https://github.com/open-webui/pipelines/blob/main/pipelines/examples/libretranlsate_filter_pipeline.py
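The shape of such a filter is roughly as follows, a self-contained sketch modeled on the linked example: `inlet` rewrites the request before the main model sees it, and `outlet` rewrites the reply afterwards. The class name, exact method signatures, and the stubbed translation calls are illustrative assumptions; the real filter calls a translation service such as LibreTranslate.

```python
import asyncio

class TranslateFilter:
    """Sketch of a filter-style pipeline with inlet/outlet hooks."""
    type = "filter"

    async def inlet(self, body: dict) -> dict:
        # Runs before the main model: translate the incoming user
        # message to English (stubbed with markers here).
        messages = body["messages"]
        messages[-1]["content"] = f"<to-en>{messages[-1]['content']}</to-en>"
        return body

    async def outlet(self, body: dict) -> dict:
        # Runs after the main model: translate the reply back to the
        # user's language (stubbed with markers here).
        messages = body["messages"]
        messages[-1]["content"] = f"<from-en>{messages[-1]['content']}</from-en>"
        return body

async def demo():
    f = TranslateFilter()
    request = {"messages": [{"role": "user", "content": "hola"}]}
    request = await f.inlet(request)
    # ... the main model would run here; we fake its reply ...
    reply = {"messages": [{"role": "assistant", "content": "hello"}]}
    reply = await f.outlet(reply)
    return request, reply

request, reply = asyncio.run(demo())
print(request["messages"][-1]["content"])  # <to-en>hola</to-en>
print(reply["messages"][-1]["content"])    # <from-en>hello</from-en>
```

This hook pair is what makes the sequential-processing request above work without chaining whole models: any transformation can be slotted in before and after the main model.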
