[Bug]: Exception when using LangChain with GPTCache #567
Comments
@dwillie I will check it.
Thank you @SimFG
Any solution to this?
Did you get any solution to this?
@theinhumaneme you can use the inner cache of LangChain, like:
More details: #585 (comment)
Thank you @SimFG
Current Behavior
When following the LangChain instructions from the docs for a custom LLM I'm getting:
I'm trying to follow the section below (from https://gptcache.readthedocs.io/en/latest/usage.html), but importantly I haven't included `get_prompt` or `postnop`, as I don't know what those are (I can't see them anywhere in the doc). I have tried using an older version of LangChain and also the `dev` branch of GPTCache, to avoid the metaclass issue, and I'm getting this `NoneType` not subscriptable error in both.
Code example excerpt from docs:
Hopefully I'm just doing something wrong. I've followed the instructions from LangChain to make my own custom LLM (https://python.langchain.com/docs/modules/model_io/models/llms/custom_llm), which appears to be working as expected.
Expected Behavior
I'd expect to get the response returned from the LLM and the cache to be populated.
Steps To Reproduce
This script reproduces the error for me using the GPTCache `dev` branch and LangChain `0.0.332`.
Environment
Anything else?
No response