Passing an Authorization Token to the tool & avoiding the LLM. #534
-
The general strategy would be to subclass StructuredTool, implement some configurable tools for tokens, and use configurable runnables. However, because the agent executor does not currently propagate configuration information, this strategy will not work (this needs to be fixed in LangChain). The way to do this right now is to instantiate an appropriate agent at run time, with auth bound to the tools. Instantiating most of these objects at run time is very fast (well under ~1 ms), so performance is not a concern here. Here are two reference examples that show how an agent can be instantiated at run time:
If the token is supplied directly by the user, the examples are sufficient as they are. If the token comes from a database lookup that relies on the user's identity, take a look at the auth examples here (these assume you're comfortable with FastAPI / Python): https://python.langchain.com/docs/langserve#examples

Here's the documentation for configurable runnables: https://python.langchain.com/docs/expression_language/how_to/configue

Configurable-runnable example: https://github.com/langchain-ai/langserve/blob/main/examples/configurable_chain/server.py
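A minimal sketch of the runtime-instantiation approach described above, in plain Python (the function and endpoint names here are hypothetical; in a real app `call_private_api` would typically be wrapped with `StructuredTool.from_function` and handed to a freshly built agent executor):

```python
from functools import partial

# Hypothetical private-API tool; the token is a keyword-only argument so it
# must be bound explicitly and never flows through the LLM.
def call_private_api(query: str, *, token: str) -> str:
    # The token is closed over at construction time, not read from the prompt.
    return f"queried {query!r} with token ending ...{token[-4:]}"

def make_tools(token: str):
    """Build per-request tools with the auth token already bound."""
    return [partial(call_private_api, token=token)]

def handle_request(question: str, token: str) -> str:
    # In a real LangServe app you would construct the agent executor here,
    # once per request, passing make_tools(token); as noted above, object
    # construction is cheap, so this is fine to do on every call.
    tools = make_tools(token)
    return tools[0](question)
```

The key property is that the token only exists inside the tool closure: nothing in the prompt or model input ever contains it.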
-
Let's say the auth is passed to the endpoint like this:

```python
class Input(BaseModel):
    question: str
    token: str
```

I want to pass the question to the LLM, but once the corresponding function is mapped, I want to intercept the function call and pass the token before it is executed. It is unclear from the shared examples how I can pass the token directly to the mapped function. I attempted to write a custom parser and pass the token to it, but I am not able to extract the token from the input request, as it gets overwritten by the previous step. (I am not an expert with LCEL, so do point out if there is a better way to do this; I tried using RunnablePassthrough with RunnableParallel and still didn't get the result I wanted.)

```python
agent = (
    {
        "input": lambda x: x["question"],
        "agent_scratchpad": lambda x: format_to_openai_functions(x["intermediate_steps"]),
    }
    | prompt
    | llm_with_tools
    | CustomOpenAIFunctionsAgentOutputParser(config_param={"token": lambda x: x["token"]})  # pass the "token" key
)
```

Would appreciate a nudge in the right direction @eyurtsev
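A hedged sketch of one way around the overwriting problem (plain Python standing in for the LCEL pieces; all names here are hypothetical): keep the token out of the prompt → LLM → parser branch entirely, run that branch alongside a passthrough of the raw input (the RunnableParallel / RunnablePassthrough idea), and merge the token into the parsed tool call just before the tool executes:

```python
def model_branch(x: dict) -> dict:
    # Stand-in for prompt | llm_with_tools | output parser: this branch only
    # ever sees the question, so the token cannot leak into the LLM.
    return {"tool": "call_api", "args": {"query": x["question"]}}

def chain(x: dict) -> dict:
    # RunnableParallel-style merge: the model branch runs on the input while
    # the original input dict stays available, and the token is attached to
    # the parsed tool call only at the last step, before tool execution.
    parsed = model_branch(x)
    parsed["args"]["token"] = x["token"]
    return parsed

result = chain({"question": "list invoices", "token": "abc123"})
```

Because the merge happens after parsing, nothing earlier in the chain can overwrite the token: it is read from the untouched input, not from the mapped dict.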
-
I am using the LangServe conversation agent with GPT models to map user questions to private APIs and do function calling.
I want to be able to pass the authorization token directly to the tool instead of having it go through the LLM.
Could you point me in the right direction, and what is the best practice for this?