Tried out the project, very impressed. Thanks for open sourcing. Quick question on latency.
Noticed a minimum latency of at least 3-4s. I'm measuring latency as the delay between when the human stops speaking and when the AI starts responding. This was with everything deployed on Fly.io in Ashburn, using the exact demo setup from the instructions.
Looks like the biggest bottleneck is the round trip Twilio -> Fly.io and Fly.io -> Twilio. The second biggest looks like transcription via Deepgram.
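For reference, here's roughly how I broke out the per-stage numbers: a minimal sketch, not from the repo, with hypothetical stage names (`speech_end`, `transcript_received`, etc.) standing in for wherever the server-side callbacks actually fire.

```python
import time

class StageTimer:
    """Record a monotonic timestamp at each pipeline stage so the
    end-to-end delay can be split into per-stage latencies
    (e.g. Twilio ingress, Deepgram transcription, GPT, TTS)."""

    def __init__(self):
        self.marks = []

    def mark(self, stage):
        self.marks.append((stage, time.monotonic()))

    def deltas_ms(self):
        # Latency of each stage relative to the previous mark, in ms.
        return {
            self.marks[i][0]: (self.marks[i][1] - self.marks[i - 1][1]) * 1000
            for i in range(1, len(self.marks))
        }

# Simulated pipeline; sleeps stand in for real stage durations.
timer = StageTimer()
timer.mark("speech_end")           # human stops talking
time.sleep(0.05)
timer.mark("transcript_received")  # final transcript back from Deepgram
time.sleep(0.05)
timer.mark("gpt_first_token")      # first token from the LLM
time.sleep(0.05)
timer.mark("tts_audio_to_twilio")  # first TTS audio frame sent to Twilio

for stage, ms in timer.deltas_ms().items():
    print(f"{stage}: {ms:.0f} ms")
```

In the real deployment the marks would be dropped into the websocket/webhook handlers rather than around sleeps, but the breakdown logic is the same.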
The README suggests a latency of ~1s. Can you clarify how latency is defined there? Is that just the GPT response + TTS?
Any ideas on how to reduce latency? Is there a roadmap for this project we can follow somewhere?