Live transcription in Next.js by Deepgram
TypeScript - Updated May 4, 2024
Your own personal voice assistant: Voice to Text to LLM to Speech, displayed in a web interface
End-to-end platform for building voice-first multimodal agents.
Official Python SDK for Deepgram's automated speech recognition APIs.
Official JavaScript SDK for Deepgram's automated speech recognition APIs.
.NET SDK for Deepgram's automated speech recognition APIs.
Go SDK for Deepgram's automated speech recognition APIs.
Rust SDK for Deepgram's automated speech recognition APIs.
This sample demonstrates interacting with the Deepgram API from Node to make transcriptions of prerecorded files.
A work-in-progress exploration using Twilio Media Streams and generative AI.
Sample app to display live captioning to a WebRTC video session with the Deepgram API.
A Next.js based demo showing how you can transcribe a YouTube video using Deepgram.
An API playground for Deepgram, built with Streamlit.
A TypeScript Chrome extension that uses Deepgram to provide live transcription and translation.
A basic speech-to-text app.
A simple express server setup for live audio transcriptions using Deepgram.
A Python script that generates subtitles and renders them onto a video.
Sample app for generating and displaying speaker talk-time using the Deepgram API with diarization.
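Several of the repositories above transcribe prerecorded audio with Deepgram. A minimal sketch of that flow against Deepgram's prerecorded REST endpoint (`POST https://api.deepgram.com/v1/listen` with `Token` authorization) might look like the following; the `DEEPGRAM_API_KEY` environment variable, the `nova-2` model choice, and the audio URL are assumptions, not details taken from any of the listed projects:

```typescript
// Abridged shape of the part of a Deepgram /v1/listen response we read.
interface DeepgramResponse {
  results: { channels: { alternatives: { transcript: string }[] }[] };
}

// Pure helper: the transcript text lives on the first channel's
// first alternative in the response JSON.
function firstTranscript(res: DeepgramResponse): string {
  return res.results.channels[0].alternatives[0].transcript;
}

// POST a hosted audio URL to Deepgram's prerecorded endpoint.
// DEEPGRAM_API_KEY and the query parameters here are assumptions;
// adjust the model and options for your use case.
async function transcribeUrl(audioUrl: string): Promise<string> {
  const res = await fetch(
    "https://api.deepgram.com/v1/listen?model=nova-2&smart_format=true",
    {
      method: "POST",
      headers: {
        Authorization: `Token ${process.env.DEEPGRAM_API_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ url: audioUrl }),
    }
  );
  if (!res.ok) throw new Error(`Deepgram request failed: ${res.status}`);
  return firstTranscript((await res.json()) as DeepgramResponse);
}
```

The live-transcription samples in this list use a WebSocket connection instead of this one-shot request, but the response parsing is similar: each message carries channel/alternative objects with a `transcript` field.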