
feat: add caching and benchmark #417

Open · joschkabraun wants to merge 1 commit into main

Conversation

@joschkabraun (Author)

Hey, I opened a PR which allows you to quickly iterate on your app locally.

Adding the init statement will automatically use a local Redis cache for any of your LLM requests (more here). With that you won't need to wait for slow & expensive API requests when you change your code/prompts and want to make sure everything still works as expected.
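For illustration, the init wiring is roughly as follows; this is a sketch, and the exact Parea SDK entry points and cache wiring may differ by version:

```python
import os

from parea import Parea  # parea-ai SDK; import path assumed

# Constructing the client is assumed to wire up the local Redis cache
# for subsequent (Azure) OpenAI calls; depending on the SDK version the
# cache may instead be passed explicitly (e.g. a RedisCache instance).
p = Parea(api_key=os.getenv("PAREA_API_KEY"))
```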

This will also enable you to test MetaGPT across many inputs at the same time via:

```
parea benchmark --func startup:startup --csv_path benchmark-inputs.csv
```

The benchmark will create a CSV file with all the traces for you to debug.

I ran the benchmark with this CSV file:

```csv
idea
"Write a chess game in cli"
"Write a cli snake game"
```

@joschkabraun (Author)

CC @garylin2099 @stellaHSR: is that helpful for you guys?

@rusenask

I think the idea is good, but you don't really need anything beyond Redis, or an even more generic cache set/get interface. It should be exposed and available to either roles or actions, so you could set any kind of values you want on demand.
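A minimal sketch of such a set/get interface (names are illustrative, not code from the PR):

```python
from abc import ABC, abstractmethod
from typing import Any, Optional

class Cache(ABC):
    """Generic set/get interface that roles or actions could depend on."""

    @abstractmethod
    def get(self, key: str) -> Optional[Any]:
        """Return the cached value for key, or None on a miss."""

    @abstractmethod
    def set(self, key: str, value: Any) -> None:
        """Store value under key."""
```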

@joschkabraun (Author) commented Nov 10, 2023

> It should be exposed and available to either roles or actions, so you could set any kind of values you want on demand.

So, you would want to cache/log any actions taken by a certain role? Is that to speed up writing code or as a form of memory?

@geekan (Owner) commented Dec 21, 2023

Please resolve all conflicts and review comments.

@geekan (Owner) commented Feb 17, 2024

Frankly, I think this implementation is probably better than the current rsp_cache. It doesn't pollute our commit history, and it seems to require only minimal changes.

@joschkabraun (Author)

@geekan should I resolve all conflicts? Note that the current implementation only helps with caching OpenAI calls. Is that sufficient?

@garylin2099 (Collaborator)

Hi, thanks for the proposal! However, to enable caching, the solution requires a user-maintained Redis instance, which is a bit heavy. A simpler mechanism might be better.

@joschkabraun (Author)

Oh, we can simply read/write a file for caching. Is the idea to have the cache always on?
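A minimal sketch of what that file-backed cache could look like, assuming responses are keyed by a hash of the request payload (all names illustrative):

```python
import hashlib
import json
from pathlib import Path

CACHE_DIR = Path(".llm_cache")  # illustrative location, not from the PR
CACHE_DIR.mkdir(exist_ok=True)

def _key(payload: dict) -> str:
    # Stable hash of the request (model, messages, params, ...).
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def cache_get(payload: dict):
    f = CACHE_DIR / _key(payload)
    return f.read_text() if f.exists() else None

def cache_set(payload: dict, response: str) -> None:
    (CACHE_DIR / _key(payload)).write_text(response)
```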

@garylin2099 (Collaborator)

Ideally, there should be a global switch that you can use to enable or disable caching.

@joschkabraun (Author)

Okay! Should that be done via OS environment variables or another CLI argument?
BTW, the current implementation only supports caching for (Azure) OpenAI calls; does that suffice?
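If the environment-variable route were chosen, the switch could be as small as this (the variable name is hypothetical):

```python
import os

def cache_enabled() -> bool:
    # METAGPT_CACHE is a hypothetical variable name; any of these
    # truthy spellings would enable the cache.
    return os.getenv("METAGPT_CACHE", "0").lower() in ("1", "true", "yes")
```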

@geekan (Owner) commented Feb 27, 2024

@joschkabraun That really isn't enough. Our existing cache takes over almost all network requests. Maybe it would be better at the HTTP layer.
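For a sense of what HTTP-layer caching looks like in Python, the requests-cache library patches the requests stack so every call is cached transparently; a sketch (MetaGPT's async OpenAI client would need an equivalent hook in its own HTTP layer):

```python
import requests
import requests_cache

# Install a persistent cache; from here on, every requests call is
# intercepted at the HTTP layer and served from the cache when possible.
requests_cache.install_cache("metagpt_http_cache", expire_after=3600)

resp = requests.get("https://example.com")  # first call hits the network
resp = requests.get("https://example.com")  # second call is served locally
print(resp.from_cache)  # True when the response came from the cache
```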
