
costs for LLM #393

Answered by mspronesti
UweW asked this question in Q&A
Jul 25, 2023 · 2 comments · 1 reply

We don't inject the entire dataframe into the prompt, only its header (the column names). Therefore, given two dataframes with the same columns and the same question, there is no difference in token consumption, as PandasAI will likely generate the same code. The execution result will of course differ, but that's a different story.
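To illustrate why header-only injection keeps token counts identical, here is a hypothetical sketch; `header_prompt` is an invented helper, not PandasAI's actual prompt builder:

```python
import pandas as pd

def header_prompt(df: pd.DataFrame, question: str) -> str:
    # Invented helper for illustration: only column names reach the prompt,
    # so the prompt size is independent of the number of rows.
    return f"Columns: {', '.join(df.columns)}\nQuestion: {question}"

df_small = pd.DataFrame({"country": ["US"], "gdp": [1]})
df_large = pd.DataFrame({"country": ["US"] * 1000, "gdp": range(1000)})

question = "Calculate the sum of the gdp column"
# Same columns, same question -> byte-identical prompt, identical token count.
assert header_prompt(df_small, question) == header_prompt(df_large, question)
```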

By the way, the total token consumption (and cost) can be displayed using PandasAI's callbacks as follows:

"""Example of using PandasAI with a Pandas DataFrame"""

import pandas as pd
from data.sample_dataframe import dataframe

from pandasai import PandasAI
from pandasai.llm.openai import OpenAI
from pandasai.helpers.openai_info import get_openai_callback

df = pd.DataFrame(dataframe)

llm = OpenAI()

# conversational…
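For reference, `get_openai_callback` follows the familiar context-manager pattern of accumulating token usage and cost across LLM calls. Below is a minimal, self-contained sketch of that pattern; the `TokenUsage`/`track_usage` names and the flat $0.002-per-1K pricing are assumptions for illustration, not PandasAI's actual implementation:

```python
from contextlib import contextmanager
from dataclasses import dataclass

@dataclass
class TokenUsage:
    # Accumulated counters, updated once per (simulated) LLM call.
    prompt_tokens: int = 0
    completion_tokens: int = 0
    total_cost: float = 0.0

    def add(self, prompt: int, completion: int, cost_per_1k: float = 0.002) -> None:
        # cost_per_1k is an assumed flat price per 1,000 tokens.
        self.prompt_tokens += prompt
        self.completion_tokens += completion
        self.total_cost += (prompt + completion) / 1000 * cost_per_1k

@contextmanager
def track_usage():
    # Yields a fresh accumulator; every call made inside the `with`
    # block reports its usage into it.
    usage = TokenUsage()
    yield usage

with track_usage() as cb:
    cb.add(prompt=120, completion=45)  # simulated LLM call
    total = cb.prompt_tokens + cb.completion_tokens
    print(f"Tokens: {total}, cost: ${cb.total_cost:.4f}")
```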

