Researchers from Cornell have identified significant issues with OpenAI's Whisper, a speech-to-text AI tool. The AI has been found to hallucinate violent language, false facts, and fake websites. These hallucinations are more likely to occur with speakers who have long pauses, such as those with speech impairments or aphasia. The AI also generates random names, fragments of addresses, and irrelevant websites, and even incorporates YouTuber lingo into its transcriptions. These findings, reported by Tech Xplore, raise concerns about the reliability and ethical implications of using AI for speech-to-text applications.
"Unlike other widely used speech-to-text tools, Whisper is more likely to hallucinate when analyzing speech from people who speak with longer pauses between their words, such as those with speech impairments, researchers found." #ethics #tech #data #AI #research https://t.co/sIADIq2mEl
"In other examples of hallucinated transcriptions, #Whisper conjured random names, fragments of addresses and irrelevant... websites. Hallucinated traces of YouTuber lingo, like 'Thanks for watching and Electric Unicorn,' also wormed into transcriptions." #ethics #AI #data #LLMs https://t.co/sIADIq2mEl
AI speech-to-text can hallucinate violent language - Tech Xplore https://t.co/hiI1PKC0Fo
OpenAI's Speech-to-Text Hallucinations Cornell researchers found that OpenAI's Whisper sometimes makes up violent language, false facts, and fake websites. These mistakes often happen with speakers who have long pauses, like those with speech impairments or aphasia.… https://t.co/gwNHls5ERB
AI speech-to-text can hallucinate #violentLanguage @cornell @arxiv https://t.co/eqMLslUg1E
AI speech-to-text can hallucinate violent language - Cornell Chronicle https://t.co/5bRGTh1Pmi
Researchers Say There's a Vulgar But More Accurate Term for AI Hallucinations https://t.co/XlBKNqfZ9u