Hugging Face has developed Distil-Whisper, a distilled version of OpenAI's Whisper speech recognition model. Distil-Whisper is 5.8 times faster, has 51% fewer parameters, and stays within 1% WER (Word Error Rate) of the original model. The smaller size and higher speed make it suitable for low-latency or resource-constrained environments. Distil-Whisper achieves this by shrinking the Whisper decoder down to 2 layers without losing robustness, while chunked decoding and speculative sampling improve speed further.
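Because Distil-Whisper is billed as a drop-in replacement for Whisper, it can be tried through the standard Hugging Face transformers pipeline. The following is a minimal sketch, not taken from the posts below: the checkpoint name "distil-whisper/distil-large-v2" and the audio file path are assumptions, and chunk_length_s enables the chunked (long-form) decoding mentioned above.

import torch
from transformers import pipeline

# Pick a GPU if one is available; fp16 keeps memory use low on GPU.
device = "cuda:0" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if torch.cuda.is_available() else torch.float32

asr = pipeline(
    "automatic-speech-recognition",
    model="distil-whisper/distil-large-v2",  # assumed Hub checkpoint name
    torch_dtype=dtype,
    device=device,
)

# chunk_length_s splits long audio into windows (chunked decoding);
# batch_size runs those windows through the model in parallel.
result = asr("sample_audio.wav", chunk_length_s=15, batch_size=4)  # assumed file
print(result["text"])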
It's AI tools Thursday. We're highlighting https://t.co/gJgrQAZZTH an amazing enhanced text-to-speech tool that makes adding voiceovers to presentations across elearning, sales, podcasts, and more super easy and efficient. #murfAI https://t.co/mdj9gQHCIb
Audio Hijack for Mac now includes speech to text transcription powered by OpenAI's Whisper https://t.co/ozArFSrK1Q
Distil-Whisper (https://t.co/ghjAUwwD7M by @sanchitgandhi99) 5x faster speech recognition. Shrinks the Whisper decoder down to 2 layers without losing robustness. Shows that chunked decoding and speculative sampling improve speed further. Weights: https://t.co/6pIecU420y
Whisper just got smaller, faster!🔔 The audio team at @huggingface released DistilWhisper, a distilled version of @OpenAI Whisper🧪 DistilWhisper is a drop-in replacement for Whisper on English speech recognition, being 5.8 times faster with accuracy within 1% WER🤯 🧶 https://t.co/l5lMnrwgpa
Distil-Whisper: Robust Knowledge Distillation via Large-Scale Pseudo Labelling paper page: https://t.co/NcrgLnsVjp As the size of pre-trained speech recognition models increases, running these large models in low-latency or resource-constrained environments becomes challenging.… https://t.co/7k08wc8fGa https://t.co/0lwc2HI08m
Distil-Whisper: Robust Knowledge Distillation via Large-Scale Pseudo Labelling abs: https://t.co/3ZEf0jpxvC code: https://t.co/ATlGbkz45R @huggingface has trained a distilled Whisper model that is 5.8x faster with 51% fewer parameters. This is done with both KL divergence loss… https://t.co/DooyNYZ0XL https://t.co/LByZlvt1oS
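Several of the posts above mention speculative sampling (speculative decoding) as an extra speed-up: the small distilled model drafts tokens and the full Whisper model verifies them, so the output matches full Whisper while most of the work runs on the cheaper model. Below is a rough sketch of that idea using transformers' assisted generation; the checkpoint names, the assistant_model keyword, and the silent 16 kHz dummy waveform are assumptions for illustration, not code from the paper.

import numpy as np
import torch
from transformers import AutoProcessor, WhisperForConditionalGeneration

main_id = "openai/whisper-large-v2"          # verifier: full Whisper (assumed checkpoint)
draft_id = "distil-whisper/distil-large-v2"  # draft model: Distil-Whisper (assumed checkpoint)

processor = AutoProcessor.from_pretrained(main_id)
model = WhisperForConditionalGeneration.from_pretrained(main_id)
assistant = WhisperForConditionalGeneration.from_pretrained(draft_id)

# Placeholder input: one second of silence at 16 kHz; replace with real audio.
audio = np.zeros(16000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

# assistant_model turns on assisted (speculative) generation: the draft model
# proposes several tokens per step and the main model accepts or rejects them,
# so the transcription is identical to running full Whisper alone.
generated_ids = model.generate(inputs.input_features, assistant_model=assistant)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])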