• GenderNeutralBro@lemmy.sdf.org · 14 points · 1 day ago

    In my experiments, the Whisper models I can run locally are comparable to YouTube’s: not production-quality, but certainly better than nothing.
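
    For anyone who wants to try this, here’s a minimal sketch of that local setup using the openai-whisper package, dumping the timed segments straight to an .srt file. The file name and model size are placeholders:

    ```python
    import whisper  # the openai-whisper package

    # "small" is one of the standard sizes (tiny/base/small/medium/large);
    # bigger models get closer to production quality but need more compute.
    model = whisper.load_model("small")
    result = model.transcribe("episode.mp3")  # placeholder file name

    def fmt(t: float) -> str:
        """Format seconds as an SRT timestamp (HH:MM:SS,mmm)."""
        h, rem = divmod(int(t), 3600)
        m, s = divmod(rem, 60)
        return f"{h:02}:{m:02}:{s:02},{int((t % 1) * 1000):03}"

    # Whisper returns timed segments, so subtitles fall out directly.
    with open("episode.srt", "w", encoding="utf-8") as f:
        for i, seg in enumerate(result["segments"], start=1):
            f.write(f"{i}\n{fmt(seg['start'])} --> {fmt(seg['end'])}\n"
                    f"{seg['text'].strip()}\n\n")
    ```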

    I’ve also had some success cleaning up the output with a modest LLM. I suspect the VLC folks could do a good job with this, though I’m put off by the mention of cloud services. Depends on how they implement it.
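
    For a flavor of that cleanup pass, here’s a rough sketch against a locally running Ollama server. The /api/chat endpoint is Ollama’s standard chat API, but the model name and prompt wording are just assumptions about what a “modest LLM” setup might look like:

    ```python
    import requests

    def clean_transcript(raw: str, model: str = "llama3.1:8b") -> str:
        """Ask a local LLM (via Ollama's /api/chat endpoint) to tidy up
        a raw Whisper transcript without rewriting its content."""
        resp = requests.post(
            "http://localhost:11434/api/chat",
            json={
                "model": model,  # assumption: any modest model you have pulled
                "stream": False,
                "messages": [
                    {"role": "system",
                     "content": ("Fix punctuation, casing, and obvious "
                                 "transcription errors in the user's text. "
                                 "Do not add, remove, or reorder content.")},
                    {"role": "user", "content": raw},
                ],
            },
            timeout=300,
        )
        resp.raise_for_status()
        return resp.json()["message"]["content"]
    ```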

  • IrateAnteater@sh.itjust.works · 5 points · 24 hours ago

      Since VLC runs on just about everything, I’d imagine that the cloud service will be best for the many devices that just don’t have the horsepower to run an LLM locally.

    • GenderNeutralBro@lemmy.sdf.org · 2 points · 23 hours ago

        True. I’d guess they’ll require you to enter your own OpenAI/Anthropic/whatever API token, because there’s no way they can afford to run that centrally. Hopefully you can point it at whatever server you like (such as a self-hosted Ollama or similar).
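
        If it’s built on an OpenAI-compatible client, pointing it at a different server is just a base-URL change. A rough sketch, where the base URL, dummy key, and model name are assumptions for a stock local Ollama install (which serves an OpenAI-compatible API under /v1):

        ```python
        from openai import OpenAI

        # Assumption: a default local Ollama install; the key only
        # has to be a non-empty string for local servers.
        client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

        reply = client.chat.completions.create(
            model="llama3.1:8b",  # assumption: whatever model you have pulled
            messages=[{"role": "user", "content": "Clean up this transcript: ..."}],
        )
        print(reply.choices[0].message.content)
        ```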

    • zurohki@aussie.zone · 1 point · 24 hours ago

        It’s not just computing power; you don’t always want your device burning through its battery, either.

    • GenderNeutralBro@lemmy.sdf.org · 1 point · 23 hours ago

        Cool, thanks for sharing!

        I see you prompt it to “Make sure to only use knowledge found in the following audio transcription”. Have you found that sufficient to keep it from hallucinating or going off track?
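
        Something like this is how I’d picture the prompt being assembled; the quoted sentence is your instruction verbatim, and everything around it is my own guess:

        ```python
        # Sketch of a constrained summarization prompt. The quoted sentence is
        # the instruction mentioned above; the surrounding wording is invented.
        def summary_prompt(transcript: str) -> str:
            return (
                "Summarize the following episode. "
                "Make sure to only use knowledge found in the following "
                "audio transcription.\n\n"
                f"{transcript}"
            )
        ```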

      • troed@fedia.io · 2 points · 23 hours ago

          Yes, I’ve been impressed with how well the summaries stick to the content. I have seen rare attribution errors, though, where who said what got mixed up in unfortunate ways.