Resample audio file in chunks to reduce memory usage #16
Hi @finnvoor, totally makes sense, thanks for reporting this. There is an option to try that I'll recommend with the current codebase, and a path we could take moving forward that I'm curious to get your feedback on.

The first option would be handling the chunking on the app side by using the transcribe interface that accepts an `audioArray`:

```swift
public func transcribe(audioArray: [Float],
                       decodeOptions: DecodingOptions? = nil,
                       callback: TranscriptionCallback = nil) async throws -> TranscriptionResult?
```

Pseudocode for that would look similar to how we do streaming:

```swift
var currentSeek: AVAudioFramePosition = 0
guard let audioFile = try? AVAudioFile(forReading: URL(fileURLWithPath: audioFilePath)) else { return nil }
audioFile.framePosition = currentSeek

// Read at most 30 seconds of audio at the file's native sample rate
let inputBuffer = AVAudioPCMBuffer(
    pcmFormat: audioFile.processingFormat,
    frameCapacity: AVAudioFrameCount(audioFile.fileFormat.sampleRate * 30.0)
)
try? audioFile.read(into: inputBuffer!)

// Convert to 16 kHz mono Float32, the format the model expects
let desiredFormat = AVAudioFormat(
    commonFormat: .pcmFormatFloat32,
    sampleRate: Double(WhisperKit.sampleRate),
    channels: AVAudioChannelCount(1),
    interleaved: false
)!
let converter = AVAudioConverter(from: audioFile.processingFormat, to: desiredFormat)
let audioArray = try? AudioProcessor.resampleBuffer(inputBuffer!, with: converter!)

let transcribeResult = try await whisperKit.transcribe(audioArray: audioArray, decodeOptions: options)

// Segment end times are in seconds; scale by the *file's* sample rate
// (not WhisperKit.sampleRate) to get a frame offset in the source file
let nextSeek = AVAudioFramePosition((transcribeResult?.segments.last?.end ?? 0) * Float(audioFile.fileFormat.sampleRate))
audioFile.framePosition = currentSeek + nextSeek
```

Using this you could generate a multitude of `TranscriptionResult`s and merge them together as they come in. This is similar to how we do streaming in the example app.

As for a new option that would make this easier and built in: there might be a protocol method we'd want to add that simply requests audio from the input file at predefined intervals (like 20s -> 50s, 50s -> 80s) and loads from disk rather than storing it all in memory. That way, when we reach the end of the current 30s window and update the seek point, we could request the next window from whatever is available on disk, and otherwise end the loop. We have also been thinking about a way to use the "streaming" logic for static audio files from disk (bulk transcription is an upcoming focus for us), so this might be a good way to keep the codebase simple. Curious to hear what you think?
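The protocol idea above might look something like the following sketch. Everything here is hypothetical — the protocol name `AudioWindowProvider`, the `window(at:length:)` method, and the driver function are illustrative assumptions, not part of the current WhisperKit API:

```swift
import Foundation

// Hypothetical sketch: a provider serves fixed windows of 16 kHz mono
// audio loaded from disk on demand, so only one window is ever resident.
protocol AudioWindowProvider {
    /// Total audio length in samples at the target (16 kHz) rate.
    var sampleCount: Int { get }
    /// Up to `length` samples starting at `offset`, resampled to 16 kHz mono.
    func window(at offset: Int, length: Int) throws -> [Float]
}

func transcribeInWindows(
    provider: AudioWindowProvider,
    windowLength: Int = 30 * 16_000,
    transcribe: ([Float]) async throws -> (text: String, endOffset: Int)
) async throws -> [String] {
    var results: [String] = []
    var seek = 0
    while seek < provider.sampleCount {
        // Request the next window from disk; previous windows are released.
        let samples = try provider.window(at: seek, length: windowLength)
        if samples.isEmpty { break }
        let (text, endOffset) = try await transcribe(samples)
        results.append(text)
        // Advance to the end of the last decoded segment, as in streaming.
        seek = max(seek + 1, endOffset)
    }
    return results
}
```

The loop mirrors the pseudocode above: seek, read one window, transcribe, advance to the last decoded segment's end, and stop when the provider runs out of audio.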
Thanks for the info! We can definitely split the audio and transcribe in chunks ourselves, but what I like so much about WhisperKit is how it handles all the annoying bits for you, so I think it would be nice if it split large files automatically. Ideally we could just pass a

I do think the easiest and simplest way to fix these bugs is just to add a loop in
Many moons ago I wrote a pure AVFoundation-based CMSampleBuffer decoder which only keeps 30 seconds of buffers in memory, so you never go above that. I'm unsure if it's helpful, but you can find the code here: https://github.com/vade/OpenAI-Whisper-CoreML/blob/feature/RosaKit/Whisper/Whisper/Whisper/Whisper.swift#L361 I lost steam on my Whisper CoreML port, but I'd be happy to contribute if anything I can add is helpful!
@vade This looks nice, thanks for sharing!
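For reference, the bounded-memory idea described above can be sketched with plain AVFoundation: read the file one native-rate window at a time instead of all at once, so at most ~30 seconds of PCM is resident. This is a minimal sketch, not the linked decoder — the function name and the 30-second window are assumptions:

```swift
import AVFoundation

// Minimal sketch: stream an audio file in ~30 s windows so memory use is
// bounded by one window, regardless of the file's total length.
func readInWindows(path: String, seconds: Double = 30,
                   process: (AVAudioPCMBuffer) -> Void) throws {
    let file = try AVAudioFile(forReading: URL(fileURLWithPath: path))
    let windowFrames = AVAudioFrameCount(file.fileFormat.sampleRate * seconds)
    while file.framePosition < file.length {
        guard let buffer = AVAudioPCMBuffer(
            pcmFormat: file.processingFormat,
            frameCapacity: windowFrames
        ) else { break }
        // read(into:) fills up to frameCapacity and advances framePosition.
        try file.read(into: buffer)
        if buffer.frameLength == 0 { break }
        process(buffer)
    }
}
```

Each window could then be resampled and fed to `transcribe(audioArray:)` as in the pseudocode earlier in the thread.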
WhisperKit/Sources/WhisperKit/Core/AudioProcessor.swift, lines 197 to 217 in fed90c7

Creating an `AVAudioPCMBuffer` for the whole input audio buffer can easily surpass iOS memory limits. Attempting to transcribe a 44,100 Hz, 2-channel, ~1 hr long video crashes on iOS due to running out of memory. It would be nice if, instead of reading all the input audio into a buffer at once and converting it, the audio were read and converted in chunks to reduce memory usage.
Another, less common issue that would be solved by chunking the audio: `AVAudioPCMBuffer` has a maximum size of `UInt32.max`, which can be hit when transcribing a 1–2 hr, 16-channel, 44,100 Hz audio file. This is a fairly typical audio file for a podcast recorded with a RODECaster Pro.
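Back-of-the-envelope arithmetic makes the overflow plausible (assuming the limit is hit by the total per-channel sample count, i.e. frames × channels, stored in a `UInt32`):

```swift
// Rough check of the UInt32.max claim for a 2 hr, 16-channel, 44.1 kHz file.
let sampleRate = 44_100
let channels = 16
let hours = 2

let frames = hours * 3600 * sampleRate   // 317,520,000 frames: still under UInt32.max
let samples = frames * channels          // 5,080,320,000 samples: over UInt32.max

print(samples > Int(UInt32.max))         // true (5.08e9 > ~4.29e9)
```

So the frame count alone fits, but multiplying by 16 channels pushes the total sample count past `UInt32.max`, which matches the reported failure mode.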