Add support for Llama 2, Palm, Anthropic, Cohere Models - using litellm #114
base: main
Conversation
cc @geekan can I get a review on this?
Thanks! Can we make llama2 a local file and call it? So everything is local without the need to call external APIs.
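For context, one way litellm can keep inference fully local is to route requests to a locally hosted server such as Ollama. This is a sketch under that assumption (the `ollama/` model prefix is a litellm routing convention; none of this is part of the PR as submitted):

```python
# Sketch: calling a locally served Llama 2 through litellm.
# Assumes `pip install litellm` and an Ollama server on localhost:11434.
def local_llama_call(prompt: str):
    from litellm import completion  # deferred import so the sketch loads without litellm
    return completion(
        model="ollama/llama2",  # the "ollama/" prefix routes to the local server
        messages=[{"role": "user", "content": prompt}],
    )
```

No external API key is needed for this path, which addresses the "everything local" request above.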
```python
class liteLLM(BaseGPTAPI, RateLimiter):
    def __init__(self):
        self.__init_openai(CONFIG)
```
The base class doesn't have `__init_openai`.
```python
self._cost_manager = CostManager()
RateLimiter.__init__(self, rpm=self.rpm)

def _chat_completion(self, messages: list[dict], model: str) -> dict:
```
Suggest implementing `acompletion_text`.
```python
model_name = "replicate/llama-2-70b-chat:2c1608e18606fad2812020dc541930f2d0495ce32eee50074220b87300bc16e1"

# llama2 call
response = completion(model_name, messages)
```
This is not a modified version that integrates with MetaGPT through the API or via local loading. The PR is not enough for users to use llama2 inside MetaGPT.
Any update here? @ishaan-jaff
Thanks for bumping @stellaHSR, taking a look.
```python
self._cost_manager = CostManager()
RateLimiter.__init__(self, rpm=self.rpm)

def _chat_completion(self, messages: list[dict], model: str) -> dict:
```
It is necessary to implement the `acompletion_text` function in the liteLLM class because it is used within `Action`.
Hi @ishaan-jaff, I suggest implementing `acompletion_text()` to make using liteLLM in metagpt easier.
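A hypothetical sketch of what the suggested method could look like. The function names and the OpenAI-shaped response dict are assumptions based on the discussion above and on litellm's async `acompletion` entry point; this is not merged code:

```python
# Hypothetical sketch of acompletion_text: call the model asynchronously,
# then extract the reply string. Only the extraction half needs no network.
def get_choice_text(rsp: dict) -> str:
    """Pull the first choice's message content out of an OpenAI-shaped response."""
    return rsp["choices"][0]["message"]["content"]

async def acompletion_text(model: str, messages: list[dict]) -> str:
    from litellm import acompletion  # litellm's async counterpart to completion()
    rsp = await acompletion(model=model, messages=messages)
    return get_choice_text(rsp)
```

In the actual class these would be methods on `liteLLM`, with `model` coming from the instance configuration.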
Would love to be able to use local models, especially llama2 70B.
""" | ||
from metagpt.provider.base_gpt_api import BaseGPTAPI | ||
from metagpt.provider.openai_api import CostManager, RateLimiter | ||
from metagpt.config import CONFIG |
I don't see any litellm integration inside this codebase.
Any updates on this? @ishaan-jaff
Addressing #97

I'm the maintainer of litellm (https://github.com/BerriAI/litellm), a simple & light package to call OpenAI, Azure, Cohere, Anthropic, and Replicate API endpoints.

This PR adds support for models from all the above-mentioned providers (by creating a class `liteLLM`). Here's a sample of how it's used:
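A minimal sketch of how litellm's unified interface is typically used (assuming litellm is installed and the matching provider API key is set in the environment; the helper names here are illustrative, not from the PR):

```python
# Sketch: one call signature across providers; only the model string changes.
def build_messages(prompt: str) -> list[dict]:
    """Every provider takes the OpenAI chat-message format."""
    return [{"role": "user", "content": prompt}]

def ask(model: str, prompt: str):
    from litellm import completion  # requires `pip install litellm`
    return completion(model=model, messages=build_messages(prompt))

# With the matching API key in the environment:
#   ask("gpt-3.5-turbo", "hi")       # OpenAI    (OPENAI_API_KEY)
#   ask("claude-instant-1", "hi")    # Anthropic (ANTHROPIC_API_KEY)
#   ask("command-nightly", "hi")     # Cohere    (COHERE_API_KEY)
```

The response comes back in the OpenAI response shape regardless of provider, which is what lets a single `liteLLM` class sit behind `BaseGPTAPI`.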