Upcoming architecture changes for Langfuse 3.0 #1902
Replies: 7 comments 10 replies
-
As requested on Discord, my comment: I really don't want to move off serverless infra to a dedicated VM. A major reason I chose Langfuse was its Cloud Run deployment, which I could couple with my existing AlloyDB; and perhaps AlloyDB is quick enough that it doesn't need help with analytical queries. Cloud Run recently introduced sidecar containers, so perhaps that is an option? There is a managed Redis option too, but it's a bit more pricey. With my current serverless deployment, I don't pay for Langfuse unless I'm browsing the UI or it's capturing traces, aside from the already sunk cost of the database.
-
The current docs advise against using Docker Compose for production. I'm guessing that will change for v3? It would be great to have a ready-to-deploy Docker Compose file that just needs an env file to get started. I'd use the cloud-hosted service, but I have relatively strict data privacy requirements.
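For illustration, such a setup might look roughly like the sketch below. This is a hypothetical compose file, not an official Langfuse v3 one; the service names, image tags, and volume layout are assumptions, and the Redis/worker pieces the v3 architecture may require are omitted.

```yaml
# Hypothetical sketch only -- not the official Langfuse compose file.
# All secrets and connection strings would live in a single .env file.
version: "3.8"
services:
  langfuse:
    image: langfuse/langfuse:latest   # assumed image name
    env_file: .env
    ports:
      - "3000:3000"
    depends_on:
      - postgres
  postgres:
    image: postgres:15
    env_file: .env
    volumes:
      - pg_data:/var/lib/postgresql/data
volumes:
  pg_data:
```

With a file like this, `docker compose up -d` plus a filled-in `.env` would be the entire deployment surface.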
-
I'm hesitant about a more complicated docker-compose setup. One of the reasons we were open to using Langfuse in the first place was how easy it was to deploy on a serverless platform.
-
It feels weird to deploy using docker-compose. For me, it has always been a good tool during development, but not really for production. My team is currently planning to deploy Langfuse; we haven't decided yet whether we'll deploy to Kubernetes or Cloud Run. We'll stay tuned to see what works best for us.
-
I want Redis and the ClickHouse OLAP store to stay opt-in. I am deploying Langfuse on AWS with a simple architecture.
-
Any specific reason for not providing ARM-based images?
-
As for ClickHouse, I think it's great.
-
Hi all,
Langfuse is growing a lot, both in feature scope and in usage on single instances, so we are planning a number of changes that will ship in Langfuse v3.
We currently need to mature our architecture as we work on the following challenges:
✅ Building model-based evals, which requires us to run asynchronous tasks, rate limited, with failover capabilities.
🧑‍🍳 Improving performance as instances scale out.
I wanted to give you a heads-up on the upcoming changes required to make these features work. Currently, Langfuse runs as a single Docker container that takes care of everything we do. This made Langfuse fast to set up initially, but we now need more technical capabilities. In addition to the existing components (Docker container + Postgres database), we will add the following:
If you self-host Langfuse, this means we will likely advise changing to the following setup to benefit easily from new infra changes. We are happy to hear your thoughts on this:
Feel free to share your thoughts below on these topics:
Find more context in the last Langfuse Townhall meeting. We will provide an easy-to-follow upgrade path for self-hosters once v3 is generally available. The infrastructure change does not affect public APIs, so users of Langfuse Cloud will not be affected. We are currently piloting the async container & queue for the evals feature, which is in public beta on Langfuse Cloud.
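The worker pattern behind the evals pilot (asynchronous tasks, rate limited, with failover) can be sketched in-process with stdlib primitives. This is an illustrative stand-in, not Langfuse's actual implementation: the real system would use Redis and a separate worker container, and the queue, rate limit, and retry parameters below are assumptions.

```python
import queue
import time

def rate_limited_worker(tasks: queue.Queue, results: list,
                        max_per_second: float = 5.0, max_retries: int = 2):
    """Consume callables from a queue, rate limited, retrying failures."""
    interval = 1.0 / max_per_second
    while True:
        try:
            task = tasks.get(timeout=0.1)
        except queue.Empty:
            return                              # queue drained, stop
        attempts = 0
        while True:
            try:
                results.append(task())          # run the eval task
                break
            except Exception:
                attempts += 1
                if attempts > max_retries:      # failover: give up, record failure
                    results.append(None)
                    break
        time.sleep(interval)                    # crude rate limit between tasks
        tasks.task_done()

# Usage: enqueue three "eval" tasks; one fails once, then succeeds on retry.
tasks: queue.Queue = queue.Queue()
state = {"failed_once": False}

def flaky():
    if not state["failed_once"]:
        state["failed_once"] = True
        raise RuntimeError("transient error")
    return "ok"

for fn in [lambda: 1, flaky, lambda: 3]:
    tasks.put(fn)

results: list = []
rate_limited_worker(tasks, results, max_per_second=100.0)
print(results)  # [1, 'ok', 3]
```

Moving this loop into its own container (fed by Redis instead of an in-process queue) is what decouples slow, retry-prone eval work from the web-serving container.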
We plan to release v3 in June and will post updates in this thread.