
AI/LLM Chaining with Inngest

This project is a demo showing how to use Inngest to chain multiple calls to OpenAI's API within a Next.js application.

The demo is a function that accepts a technical description of a new product feature and generates branding for it: headline and description copy, plus a new-feature announcement blog post.

Why might you need chaining with LLMs?

To learn why you may need chaining with LLMs (large language models), read our full blog post, but here are some quick highlights:

  • Given a user’s input, you might need to run 4 different prompts or ideas and present the output to users as choices (think Midjourney)
  • You might need to chunk a user’s input to reduce context/tokens in each call
  • You might need to continue to refine input, such as going from question → data → SQL → human readable answer
  • You might just want the LLM to introspect on whether it gave the right answer (e.g. ask “Are you sure?”). This is a basic, but common, approach to testing LLM output
  • You might ask an LLM whether the prompt is susceptible to injection before running the actual prompt

Why chain with Inngest?

Inngest allows you to easily and reliably create chains without having to manage state or queue jobs between the parts of your chain. Some benefits of using Inngest for chaining are:

  • You can define your chain in a single function instead of multiple separate functions or workers
  • You can define parts of your chain as Inngest "steps" using step.run(), as shown in the sketch after this list
  • You can pass state from one step to the next without having to manage the state/context yourself
  • Each step is retried automatically, improving reliability
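
To make this concrete, here is a minimal sketch of what such a chained function can look like. The app id, function id, event name, and the callOpenAI() helper are illustrative assumptions, not code taken from this repo:

```typescript
import { Inngest } from "inngest";

const inngest = new Inngest({ id: "llm-chain-demo" }); // illustrative app id

// Hypothetical helper: sends one prompt to OpenAI's chat completions API.
async function callOpenAI(prompt: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const json = await res.json();
  return json.choices[0].message.content;
}

export const brandFeature = inngest.createFunction(
  { id: "brand-new-feature" },        // illustrative function id
  { event: "app/feature.described" }, // illustrative event name
  async ({ event, step }) => {
    // Each step.run() is retried independently on failure, and its return
    // value is persisted, so later steps can use earlier results as state
    // without any queue or database of your own.
    const headline = await step.run("write-headline", () =>
      callOpenAI(`Write headline copy for this feature: ${event.data.description}`)
    );

    const post = await step.run("write-blog-post", () =>
      callOpenAI(`Write an announcement blog post for the feature "${headline}"`)
    );

    return { headline, post };
  }
);
```

Because each step's result is memoized, a failure while writing the blog post reruns only that step; the earlier OpenAI call is not repeated.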

Getting started

This is a Next.js project, so after installing dependencies, start the dev environment with:

npm run dev
# or
yarn dev
# or
pnpm dev

This command runs two processes concurrently:

  • next dev - The Next.js app dev server on port 3000
  • npx inngest-cli@latest dev -u http://localhost:3000/api/inngest - The Inngest dev server on port 8288. The -u flag points to the Inngest endpoint on the app.
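
For illustration, such a script could be wired up in package.json with a tool like concurrently; the repo may do this differently, so treat this as a sketch:

```json
{
  "scripts": {
    "dev": "concurrently \"next dev\" \"npx inngest-cli@latest dev -u http://localhost:3000/api/inngest\""
  }
}
```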

Once both processes are running, you can open the app at http://localhost:3000 and the Inngest Dev Server UI at http://localhost:8288.

Environment variables

To run this project you'll need to set your OpenAI API key (and any other required keys) as environment variables. See .env.example for the exact variable names.

The code
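
The app exposes its Inngest functions on the /api/inngest route that the dev server's -u flag points to. Here is a minimal sketch of that route, assuming the Pages Router, the serve handler from inngest/next, and the brandFeature function sketched earlier (the repo's actual file names and paths may differ):

```typescript
// pages/api/inngest.ts (sketch; file location assumes the Pages Router)
import { serve } from "inngest/next";
import { inngest, brandFeature } from "../../inngest/functions"; // hypothetical path

// Serves the endpoint Inngest uses to discover and invoke your functions.
export default serve({ client: inngest, functions: [brandFeature] });
```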

Deploying the code

You can easily deploy this code to Vercel and then register your functions with Inngest using the Inngest Vercel integration.
