OpenAI released their gpt-4o model today and it's now immediately available to all API users. It makes sense for Langfuse to include this as one of the official pre-filled models. The docs say to request official support via an issue like this.
Additional information
There's both a gpt-4o and a gpt-4o-2024-05-13 per the usual convention. Also I believe Langfuse will need to update to the latest release of tiktoken (0.7.0), which was just updated with gpt-4o support.
Here's a screenshot of the working model definition I created for myself:
But note: in the model's OpenAI tokenizer config I had to claim the model was the older gpt-4-turbo. Setting { ..., "tokenizerModel": "gpt-4o-2024-05-13", ... } resulted in gpt-4o logs showing 0 tokens, hence the need for a tiktoken upgrade.
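For reference, a minimal sketch of the interim custom model definition described above. The field names follow Langfuse's custom model definition format as I understand it; the exact schema and match pattern are assumptions, not the official definition:

```json
{
  "modelName": "gpt-4o",
  "matchPattern": "(?i)^(gpt-4o)(-2024-05-13)?$",
  "tokenizerId": "openai",
  "tokenizerConfig": {
    "tokenizerModel": "gpt-4-turbo",
    "tokensPerMessage": 3,
    "tokensPerName": 1
  }
}
```

The "tokenizerModel": "gpt-4-turbo" line is the workaround: it makes token counts non-zero until a tiktoken upgrade lets "gpt-4o" resolve natively.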
Discussed in https://github.com/orgs/langfuse/discussions/2045
Originally posted by varenc May 13, 2024