User:Ralsettem: Difference between revisions

From SGUTranscripts
Self-proclaimed nerd venturing out into the world of transcription supporting a podcast I love to listen to.


I’m using a local installation of [https://openai.com/blog/whisper/ Whisper] by OpenAI ([https://github.com/openai/whisper GitHub]) with the large model to transcribe podcast episodes.
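For anyone wanting to try the same setup, a minimal sketch of large-model transcription in Python (assuming the <code>openai-whisper</code> package is installed; the file name is just a placeholder):

```python
import whisper

# Load the large multilingual model (weights are downloaded on first run).
model = whisper.load_model("large")

# Transcribe an episode; "episode.mp3" is a placeholder file name.
result = model.transcribe("episode.mp3")
print(result["text"])
```

The same thing works from the command line with <code>whisper episode.mp3 --model large</code>, which also writes transcript files to disk.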
After some computer issues I'm back, and I'm using [https://github.com/MahmoudAshraf97/whisper-diarization whisper-diarization] to create diarized transcriptions. It uses [https://openai.com/blog/whisper/ Whisper] and NVIDIA [https://docs.nvidia.com/nemo-framework/user-guide/latest/overview.html NeMo].
It’s theoretically possible to get [https://github.com/lablab-ai/Whisper-transcription_and_diarization-speaker-identification- diarization] of speakers using [https://github.com/pyannote/pyannote-audio pyannote.audio] on GitHub; however, I don’t know Python, and there is a lack of video tutorials that show the process.
Hopefully some kind developer will create a web UI that can transcribe with diarization.
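For what it's worth, the pyannote side is fairly short in code. A rough sketch (assuming pyannote.audio 3.x is installed and you have a Hugging Face access token, since the pretrained pipeline is gated; the token and file name below are placeholders):

```python
from pyannote.audio import Pipeline

# Load the pretrained speaker-diarization pipeline.
# Requires accepting the model's terms on Hugging Face and a valid token.
pipeline = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1",
    use_auth_token="HF_TOKEN_HERE",  # placeholder token
)

# Run diarization on an audio file (placeholder name).
diarization = pipeline("episode.wav")

# Print who speaks when.
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{turn.start:.1f}s - {turn.end:.1f}s: {speaker}")
```

The speaker labels (<code>SPEAKER_00</code>, <code>SPEAKER_01</code>, …) then have to be matched against the Whisper transcript by timestamp, which is essentially what whisper-diarization automates.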

Latest revision as of 06:58, 4 May 2024
