
Add support for Llama 2, Palm, Anthropic, Cohere Models - using litellm #114

Open · wants to merge 1 commit into main

Conversation

@ishaan-jaff (Author)

Addressing #97

I'm the maintainer of litellm (https://github.com/BerriAI/litellm), a simple and lightweight package for calling OpenAI, Azure, Cohere, Anthropic, and Replicate API endpoints.

This PR adds support for models from all of the providers mentioned above, by adding a liteLLM class.

Here's a sample of how it's used:

import os

from litellm import completion

## set ENV variables
# ENV variables can be set in a .env file, too; see .env.example
os.environ["OPENAI_API_KEY"] = "openai key"
os.environ["COHERE_API_KEY"] = "cohere key"

messages = [{ "content": "Hello, how are you?","role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)

# anthropic call
response = completion(model="claude-instant-1", messages=messages)
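Whichever provider handled the call, litellm normalizes the result to the OpenAI chat-completion shape, so the reply text can be read the same way each time. A minimal sketch on a hand-built dict (any fields beyond choices/message/content are an assumption):

```python
# Hand-built stand-in for a litellm response; only the
# choices[0].message.content path is relied on here.
response = {
    "choices": [
        {"message": {"role": "assistant", "content": "I'm doing well, thanks!"}}
    ],
}
text = response["choices"][0]["message"]["content"]
print(text)
```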

@ishaan-jaff (Author)

cc @geekan, can I get a review on this?

@hohoCode commented Aug 4, 2023

Thanks! Can we make llama2 a local file and call it? So everything is local without the need to call external APIs.


class liteLLM(BaseGPTAPI, RateLimiter):
    def __init__(self):
        self.__init_openai(CONFIG)
Collaborator:
the base class doesn't have __init_openai

        self._cost_manager = CostManager()
        RateLimiter.__init__(self, rpm=self.rpm)

    def _chat_completion(self, messages: list[dict], model: str) -> dict:
Collaborator:
I suggest implementing acompletion_text.

model_name = "replicate/llama-2-70b-chat:2c1608e18606fad2812020dc541930f2d0495ce32eee50074220b87300bc16e1"

# llama2 call
response = completion(model_name, messages)
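For the Replicate-hosted Llama 2 model above, an API token has to be set before the call; litellm reads it from an environment variable. The variable name used here (REPLICATE_API_KEY) is an assumption to verify against the litellm docs:

```python
import os

# Assumption: litellm picks up the Replicate token from
# REPLICATE_API_KEY; check the variable name in the litellm docs.
os.environ["REPLICATE_API_KEY"] = "replicate key"
```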
Collaborator:
This is not modified to integrate with MetaGPT through an API or local loading.
The PR is not enough for users to use Llama 2 inside MetaGPT.

@stellaHSR (Collaborator)

Any update here? @ishaan-jaff

@ishaan-jaff (Author)

Thanks for bumping this, @stellaHSR. Taking a look.

        self._cost_manager = CostManager()
        RateLimiter.__init__(self, rpm=self.rpm)

    def _chat_completion(self, messages: list[dict], model: str) -> dict:
Collaborator:
It is necessary to implement the acompletion_text function in the liteLLM class because it is used within Action.

@stellaHSR (Collaborator)

Hi @ishaan-jaff, I suggest implementing acompletion_text() to make using liteLLM in metagpt easier.
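The acompletion_text the reviewers ask for could look roughly like the sketch below. This is not code from the PR: the class name, the injected completion_fn (standing in for litellm's async acompletion so the sketch runs without the real dependency), and the stub are all illustrative; only the OpenAI-style choices/message/content response shape is taken from litellm's behavior.

```python
import asyncio
from typing import Awaitable, Callable


class LiteLLMSketch:
    """Hedged sketch of the reviewers' suggestion, not code from this PR."""

    def __init__(self, completion_fn: Callable[..., Awaitable[dict]]):
        # completion_fn stands in for litellm's async acompletion
        self._acompletion = completion_fn

    async def acompletion_text(
        self, messages: list[dict], model: str = "gpt-3.5-turbo"
    ) -> str:
        # litellm returns an OpenAI-style response, so the reply text
        # lives at choices[0].message.content
        response = await self._acompletion(model=model, messages=messages)
        return response["choices"][0]["message"]["content"]


async def fake_acompletion(**kwargs) -> dict:
    # stand-in for a real provider call
    return {"choices": [{"message": {"content": "hello from stub"}}]}


api = LiteLLMSketch(fake_acompletion)
print(asyncio.run(api.acompletion_text([{"role": "user", "content": "hi"}])))
```

With a method of this shape on the liteLLM class, MetaGPT Actions could await plain-text completions directly.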

@izevar commented Oct 31, 2023

I would love to be able to use local models, especially Llama 2 70B.

"""
from metagpt.provider.base_gpt_api import BaseGPTAPI
from metagpt.provider.openai_api import CostManager, RateLimiter
from metagpt.config import CONFIG
Contributor:
I don't see any litellm integration inside this codebase.

@raphant commented Apr 16, 2024

Any updates on this? @ishaan-jaff
