
Comments (4)

finnvoor avatar finnvoor commented on August 25, 2024 2

Thanks for the info! We can definitely split the audio and transcribe in chunks ourselves, but what I like so much about WhisperKit is how it handles all the annoying bits for you, so I think it would be nice if it would split large files automatically.

Ideally we could just pass a URL to a file of any length and get back a transcript. For our use case we don't need any streaming, but the protocol method could work (it's just a bit more effort on the client side).

I do think the easiest and simplest fix for these bugs is to add a loop in resampleAudio that reads the input file in chunks (the input file could easily exceed memory limits, but the resampled audio would have to be incredibly long to hit them), but I understand if you want a more general solution.
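A minimal sketch of what that chunked read loop might look like (assuming AVFoundation; `resampleChunk` is a hypothetical helper standing in for WhisperKit's existing per-buffer resampling logic, not an actual API):

```swift
import AVFoundation

/// Sketch: read an arbitrarily long audio file in fixed-size chunks so only
/// one chunk is resident in memory at a time. `resampleChunk` is a
/// hypothetical stand-in for the existing per-buffer resampling.
func resampleAudioInChunks(url: URL, chunkSeconds: Double = 30) throws -> [Float] {
    let file = try AVAudioFile(forReading: url)
    let chunkFrames = AVAudioFrameCount(file.fileFormat.sampleRate * chunkSeconds)
    var output: [Float] = []
    while file.framePosition < file.length {
        guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                            frameCapacity: chunkFrames) else { break }
        try file.read(into: buffer)          // advances framePosition
        if buffer.frameLength == 0 { break } // end of file
        output.append(contentsOf: try resampleChunk(buffer)) // hypothetical helper
    }
    return output
}
```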

from whisperkit.

ZachNagengast avatar ZachNagengast commented on August 25, 2024 1

Hi @finnvoor, totally makes sense, thanks for reporting this. There is an option to try that I'll recommend with the current codebase, and a path we could take moving forward that I'm curious to get your feedback on.

First option would be handling the chunking on the app side by using the transcribe interface that accepts an audioArray:

    public func transcribe(audioArray: [Float],
                           decodeOptions: DecodingOptions? = nil,
                           callback: TranscriptionCallback = nil) async throws -> TranscriptionResult?

Pseudocode for that would look similar to how we do streaming:

  1. Generate a 30s array of samples from the audio file
        var currentSeek: AVAudioFramePosition = 0
        guard let audioFile = try? AVAudioFile(forReading: URL(fileURLWithPath: audioFilePath)) else { return nil }
        audioFile.framePosition = currentSeek
        let inputBuffer = AVAudioPCMBuffer(pcmFormat: audioFile.processingFormat, frameCapacity: AVAudioFrameCount(audioFile.fileFormat.sampleRate * 30.0))
        try? audioFile.read(into: inputBuffer!)
  2. Convert it to 16kHz, 1 channel
        let desiredFormat = AVAudioFormat(
            commonFormat: .pcmFormatFloat32,
            sampleRate: Double(WhisperKit.sampleRate),
            channels: AVAudioChannelCount(1),
            interleaved: false
        )!
        let converter = AVAudioConverter(from: audioFile.processingFormat, to: desiredFormat)
        let audioArray = try? AudioProcessor.resampleBuffer(inputBuffer!, with: converter!)
  3. Transcribe that section and find the last index of the sample we have transcribed so far
        let transcribeResult = try await whisperKit.transcribe(audioArray: audioArray, decodeOptions: options)
        // Convert the last segment's end time (seconds) into file-native frames
        let nextSeek = AVAudioFramePosition((transcribeResult?.segments.last?.end ?? 0) * Float(audioFile.fileFormat.sampleRate))
  4. Restart from step one using that as the new frame position
        audioFile.framePosition = currentSeek + nextSeek

Using this you could generate a multitude of TranscriptionResults and merge them together as they come in. This is similar to how we do streaming in the example app.
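Put together, the four steps above might look something like this loop (a sketch under the same assumptions; the `whisperKit` instance, `DecodingOptions`, and the `AudioProcessor.resampleBuffer` signature returning `[Float]` are taken from the snippets above, not verified against the current API):

```swift
import AVFoundation

/// Sketch: chunked transcription loop built from the four steps above.
func transcribeLongFile(whisperKit: WhisperKit,
                        url: URL,
                        options: DecodingOptions? = nil) async throws -> [TranscriptionResult] {
    let audioFile = try AVAudioFile(forReading: url)
    let desiredFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                      sampleRate: Double(WhisperKit.sampleRate),
                                      channels: 1,
                                      interleaved: false)!
    let converter = AVAudioConverter(from: audioFile.processingFormat, to: desiredFormat)!
    var results: [TranscriptionResult] = []
    var currentSeek: AVAudioFramePosition = 0

    while currentSeek < audioFile.length {
        // 1. Read up to 30s starting at the current seek point
        audioFile.framePosition = currentSeek
        let capacity = AVAudioFrameCount(audioFile.fileFormat.sampleRate * 30.0)
        guard let inputBuffer = AVAudioPCMBuffer(pcmFormat: audioFile.processingFormat,
                                                 frameCapacity: capacity) else { break }
        try audioFile.read(into: inputBuffer)
        if inputBuffer.frameLength == 0 { break }

        // 2. Resample to 16kHz mono, then 3. transcribe the window
        let audioArray = try AudioProcessor.resampleBuffer(inputBuffer, with: converter)
        guard let result = try await whisperKit.transcribe(audioArray: audioArray,
                                                           decodeOptions: options) else { break }
        results.append(result)

        // 4. Advance by the end of the last decoded segment (file-native frames)
        guard let lastEnd = result.segments.last?.end, lastEnd > 0 else { break }
        currentSeek += AVAudioFramePosition(lastEnd * Float(audioFile.fileFormat.sampleRate))
    }
    return results
}
```

The caller would then merge the returned `TranscriptionResult`s, adjusting each window's segment timestamps by its start offset.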

As for a new option that would make this easier & built in - there might be a protocol method we'd want to add that simply requests audio from the input file at predefined intervals (like 20s -> 50s, 50s -> 80s) and loads from disk rather than storing it all in memory. That way when we reach the end of the current 30s and update the seek point, we could request the next window from whatever is available on disk, otherwise end the loop.
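One shape that protocol method could take (purely hypothetical names, not an existing WhisperKit API) is a provider that returns each requested window from disk on demand:

```swift
import AVFoundation

/// Hypothetical protocol sketch: the transcription loop requests each window
/// of samples on demand, so nothing beyond the current window stays in memory.
protocol AudioWindowProvider {
    /// Returns samples for [startSeconds, endSeconds), or nil at end of audio.
    func samples(from startSeconds: Double, to endSeconds: Double) throws -> [Float]?
}

/// Example conformance that seeks into a file on disk for each request.
struct DiskAudioWindowProvider: AudioWindowProvider {
    let audioFile: AVAudioFile

    func samples(from startSeconds: Double, to endSeconds: Double) throws -> [Float]? {
        let rate = audioFile.fileFormat.sampleRate
        let startFrame = AVAudioFramePosition(startSeconds * rate)
        guard startFrame < audioFile.length else { return nil } // no more audio
        audioFile.framePosition = startFrame
        let frames = AVAudioFrameCount((endSeconds - startSeconds) * rate)
        guard let buffer = AVAudioPCMBuffer(pcmFormat: audioFile.processingFormat,
                                            frameCapacity: frames) else { return nil }
        try audioFile.read(into: buffer)
        // Resampling to 16kHz mono would happen here before returning
        // (conversion elided in this sketch).
        return []
    }
}
```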

We have also been thinking about a way to use the "streaming" logic for static audio files from disk (bulk transcription is an upcoming focus for us), so this might be a good way to go to keep the codebase simple. Curious to hear what you think?


ZachNagengast avatar ZachNagengast commented on August 25, 2024 1

@vade This looks nice, thanks for sharing!


vade avatar vade commented on August 25, 2024

Many moons ago I wrote a pure AVFoundation-based CMSampleBuffer decoder which only keeps 30 seconds of buffers in memory at a time, so you never go above that.

I'm unsure if it's helpful, but you can find the code here: https://github.com/vade/OpenAI-Whisper-CoreML/blob/feature/RosaKit/Whisper/Whisper/Whisper/Whisper.swift#L361

I lost steam on my Whisper CoreML port, but would be happy to contribute if anything I can add is helpful!

