Is your feature request related to a problem? Please describe.
There is currently no straightforward way to run LangChain-supported models in serverless environments without dealing with infrastructure or state concerns. This creates a barrier for developers who want to deploy and manage LLM and chat model applications in serverless environments.
Describe the solution you'd like
I propose the integration of LangChain into Inngest. This would allow developers to run LangChain-managed language models in a serverless environment, handling all infrastructure and state concerns automatically. This would greatly simplify the process of deploying and managing LLM applications in serverless environments.
Describe alternatives you've considered
Alternatives to LangChain include LlamaIndex (previously known as GPT Index).
Additional context
Inngest's blog post on May 16, 2023, highlighted their interest in integrating with LangChain to allow people to run LangChain models in serverless environments. Given Inngest's mission to simplify and automate serverless workflows, and LangChain's goal to enable developers to build LLM applications, this integration seems like a natural fit.
Hey @slavakurilyak - I'd love to hear more about this LangChain integration idea. From our testing with LangChain JS, we've explored using it primarily by wrapping each LangChain call within a `step.run`. That means it's mostly LangChain code with some added help from Inngest's `step.run`, and not much integration work (see the basic example below).
What type of integration would you like to see? How might that work for you?
```ts
// Imports assume LangChain JS-era module paths and an Inngest client
// instance created elsewhere in the app (e.g. ./client).
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { LLMChain } from "langchain/chains";
import { inngest } from "./client";

export const basicChain = inngest.createFunction(
  { name: "Basic Chain" },
  { event: "ai/basic.chain" },
  async ({ event, step }) => {
    // Get the input data from the event payload
    const product = event.data.product;

    const model = new OpenAI({ temperature: 0 });
    const prompt = PromptTemplate.fromTemplate(
      "What is a good name for a company that makes {product}?"
    );
    const chainA = new LLMChain({ llm: model, prompt });

    // Wrap the LangChain call in step.run so Inngest handles
    // retries and durable state for this step.
    const result = await step.run("First prompt", async () => {
      return await chainA.call({ product });
    });

    return { message: "success" };
  }
);
```
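For context, a function like the one above is invoked by sending a matching event through the Inngest client. A minimal sketch of the payload shape (the `product` value here is just an illustrative placeholder):

```typescript
// Hypothetical event payload that would trigger the "Basic Chain"
// function above. The name must match the event it was registered with.
const basicChainEvent = {
  name: "ai/basic.chain",
  data: { product: "colorful socks" },
};

// In an app this would be dispatched with:
//   await inngest.send(basicChainEvent);
console.log(basicChainEvent.name);
```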