In one case from the study cited by AP, when a speaker described "two other girls and one lady," Whisper added fictional text specifying that they "were Black." In another, the audio said, "He, the boy, was going to, I'm not sure exactly, take the umbrella." Whisper transcribed it to, "He took a big piece of a cross, a teeny, small piece … I'm sure he didn't have a terror knife so he killed a number of people."
An OpenAI spokesperson told the AP that the company appreciates the researchers' findings and that it actively studies how to reduce fabrications and incorporates feedback in updates to the model.
Why Whisper confabulates
The key to Whisper's unsuitability in high-risk domains comes from its propensity to sometimes confabulate, or plausibly make up, inaccurate outputs. The AP report says, "Researchers aren't certain why Whisper and similar tools hallucinate," but that isn't true. We know exactly why Transformer-based AI models like Whisper behave this way.
Whisper is based on technology that is designed to predict the next most likely token (chunk of data) that should appear after a sequence of tokens provided by a user. In the case of ChatGPT, the input tokens come in the form of a text prompt. In the case of Whisper, the input is tokenized audio data.
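To make that concrete, here is a minimal sketch using the open-source whisper Python package; the checkpoint name and audio filename below are placeholder choices, not anything from the AP study. Internally, the audio is converted into tokens and the decoder predicts, one token at a time, the text most likely to follow.

```python
# Minimal sketch with the open-source whisper package (pip install openai-whisper).
# "base" and "meeting.wav" are placeholders for illustration only.
import whisper

model = whisper.load_model("base")        # load a pretrained checkpoint
result = model.transcribe("meeting.wav")  # audio is tokenized, then text tokens are predicted one by one
print(result["text"])                     # the decoder's most likely guess, not a verified transcript
```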
The transcription output from Whisper is a prediction of what is most likely, not what is most accurate. Accuracy in Transformer-based outputs is typically proportional to the presence of relevant accurate data in the training dataset, but it is never guaranteed. If there is ever a case where there isn't enough contextual information in its neural network for Whisper to make an accurate prediction about how to transcribe a particular segment of audio, the model will fall back on what it "knows" about the relationships between sounds and words it has learned from its training data.
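As a rough illustration of that fallback behavior (a conceptual toy, not Whisper's actual decoding code), picture the decoder scoring candidate words at each step and emitting whichever one its training data makes most probable, even when the audio itself is ambiguous:

```python
import numpy as np

def next_token(scores: np.ndarray, vocabulary: list[str]) -> str:
    """Pick the single highest-scoring token: the 'most likely', not the 'most accurate'."""
    return vocabulary[int(np.argmax(scores))]

# Hypothetical scores for an ambiguous stretch of audio: a word that was common in
# the training data can outscore the word that was actually spoken.
vocabulary = ["umbrella", "cross"]
scores = np.array([1.2, 1.9])
print(next_token(scores, vocabulary))  # -> "cross"
```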