I'm considering integrating LiteLLM + Anthropic with AutoGen for a project I'm working on. Before diving into the details, I'd like to ask the community:

Is it technically feasible to integrate LiteLLM + Anthropic with AutoGen?

If anyone has experience or insights on this topic, I'd greatly appreciate your input. Thank you!

Replies: 1 comment
Yep - steps below 👇

1. Set up the LiteLLM proxy config

```yaml
model_list:
  - model_name: claude-3 ### RECEIVED MODEL NAME ###
    litellm_params: # all params accepted by litellm.completion() - https://docs.litellm.ai/docs/completion/input
      model: claude-3-opus-20240229 ### MODEL NAME sent to `litellm.completion()` ###
      api_key: "os.environ/ANTHROPIC_API_KEY" # reads os.getenv("ANTHROPIC_API_KEY") at runtime
```

2. Start the proxy (a quick sanity check for this step is sketched after step 3)

```shell
litellm --config /path/to/config.yaml

# RUNNING on http://0.0.0.0:4000
```

3. Call LiteLLM in AutoGen

```python
from autogen import UserProxyAgent, ConversableAgent
local_llm_config={
"config_list": [
{
"model": "NotRequired", # Loaded with LiteLLM command
"api_key": "NotRequired", # Not needed
"base_url": "http://0.0.0.0:4000" # Your LiteLLM URL
}
],
"cache_seed": None # Turns off caching, useful for testing different models
}
# Create the agent that uses the LLM.
assistant = ConversableAgent("agent", llm_config=local_llm_config)
# Create the agent that represents the user in the conversation.
user_proxy = UserProxyAgent("user", code_execution_config=False)
# Let the assistant start the conversation. It will end when the user types exit.
assistant.initiate_chat(user_proxy, message="How can I help you today?")
```
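If the agents error out, it's worth confirming the proxy itself is healthy before debugging AutoGen. Here's a minimal sanity check - not part of the steps above - assuming the proxy from step 2 is running on http://0.0.0.0:4000 with the `claude-3` alias from the config, and using the `openai` package (an extra dependency) since the LiteLLM proxy exposes an OpenAI-compatible endpoint:

```python
# Sanity check against the LiteLLM proxy (illustrative, not from the original
# answer). Assumes the proxy from step 2 is listening on http://0.0.0.0:4000
# and that "claude-3" is the model_name defined in config.yaml.
from openai import OpenAI

client = OpenAI(
    base_url="http://0.0.0.0:4000",  # the LiteLLM proxy, not api.openai.com
    api_key="NotRequired",           # any non-empty string works if no master key is set
)

response = client.chat.completions.create(
    model="claude-3",  # routed by the proxy to claude-3-opus-20240229
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```

If this prints a reply, the proxy and your Anthropic key are working, and any remaining issue is on the AutoGen side.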