Support functions/tools in OpenAI API #121

Open
carsonwang opened this issue Feb 23, 2024 · 4 comments
Labels
enhancement New feature or request

Comments

@carsonwang
Contributor

Support functions/tools in the API to enable more use cases.
Refer to the OpenAI document below:
https://platform.openai.com/docs/api-reference/chat/create#chat-create-tools

tools:
A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for.

tool_choice:
Controls which (if any) function is called by the model. none means the model will not call a function and instead generates a message. auto means the model can pick between generating a message or calling a function. Specifying a particular function via {"type": "function", "function": {"name": "my_function"}} forces the model to call that function.

none is the default when no functions are present. auto is the default if functions are present.
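The default behavior described above can be sketched as a small helper (the function name is hypothetical, not part of the API):

```python
def effective_tool_choice(tools=None, tool_choice=None):
    """Resolve the effective tool_choice per the documented defaults:
    'none' when no tools are present, 'auto' when tools are given.
    An explicit tool_choice always wins."""
    if tool_choice is not None:
        return tool_choice
    return "auto" if tools else "none"
```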

curl https://api.openai.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "user",
      "content": "What is the weather like in Boston?"
    }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "The city and state, e.g. San Francisco, CA"
            },
            "unit": {
              "type": "string",
              "enum": ["celsius", "fahrenheit"]
            }
          },
          "required": ["location"]
        }
      }
    }
  ],
  "tool_choice": "auto"
}'
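For reference, the same request body can be built in Python (no network call shown; send it with any HTTP client against an OpenAI-compatible endpoint):

```python
import json

# Equivalent of the curl example above, as a Python dict.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "user", "content": "What is the weather like in Boston?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather in a given location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                    },
                    "required": ["location"],
                },
            },
        }
    ],
    "tool_choice": "auto",
}

# Serialize for the POST body.
body = json.dumps(payload)
```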
@carsonwang carsonwang added the enhancement New feature or request label Feb 23, 2024
@carsonwang
Contributor Author

We can use mistral 7b to test this. It will also be useful to add a Langchain example that leverages this API.

@carsonwang carsonwang changed the title Support functions/calls in OpenAI API Support functions/tools in OpenAI API Feb 23, 2024
@carsonwang
Contributor Author

@xuechendi is working on it.

@xuechendi
Contributor

@carsonwang, I've been working on this issue but am a little lost on how to enable it.
Observations:

  1. HuggingFace support for 'tools'/'tool_choice': the generate function has no native parameters for "tools" or "tool_choice". HuggingFace seems to have its own way of supporting tools: https://huggingface.co/docs/transformers/main/en/custom_tools

  2. RayLLM tools/tool_choice support: RayLLM accepts these two keywords in its client API, but after digging into the inference code, I can't find where they are passed to the model inference function in either vLLMEngine or TRTEngine.
    Even though an Anyscale blog post mentions full support for function calls, it has not landed in the RayLLM code yet. Anyscale Endpoints: JSON Mode and Function calling Features

  3. llama_cpp tools/tool_choice support: I found a function_call implementation in llama_cpp_python for two models, functionary and chatML. Only functionary accepts these two keywords natively; chatML still converts them to plain text, as Langchain.initialize_agent does. https://github.com/abetlen/llama-cpp-python/blob/main/llama_cpp/llama_chat_format.py#L2032

  4. Research into vLLM / Triton TensorRT / HuggingFace issues on tools/tool_choice support: no consistent conclusion found.

Below is my branch duplicating the RayLLM approach to tools/tool_choice:
xuechendi@42d9613
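The plain-text fallback mentioned in observation 3 (chatML rendering tools into the prompt rather than handling them natively) could be sketched roughly like this; the function name and prompt wording are hypothetical, not the actual llama_cpp_python implementation:

```python
import json

def tools_to_system_prompt(tools):
    """Hypothetical fallback: render an OpenAI-style tools list into a
    plain-text system prompt, similar in spirit to how chatML-style
    handlers (and Langchain.initialize_agent) fall back to text when
    the model has no native tool-calling support."""
    lines = [
        "You have access to the following functions. To call one, reply "
        'with JSON of the form {"name": <function-name>, "arguments": <args>}.'
    ]
    for tool in tools:
        fn = tool["function"]
        lines.append(f"- {fn['name']}: {fn.get('description', '')}")
        lines.append(f"  parameters: {json.dumps(fn.get('parameters', {}))}")
    return "\n".join(lines)
```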

@xuechendi
Contributor

pr: #134

cheehook pushed a commit to JoshuaL3000/llm-on-ray that referenced this issue Mar 8, 2024
* merge and update

* upd

* update asyncio.sleep time