Whisper, the AI transcription tool developed by OpenAI, has been found to fabricate text that was never spoken, including racial commentary, violent rhetoric, and made-up medical treatments. The finding has raised concern among software engineers, developers, and academic researchers, who report encountering these “hallucinations” in a significant share of the transcriptions they have examined.
Despite OpenAI’s warnings against using Whisper in high-risk domains, medical centers have been quick to adopt the tool for transcribing patient consultations. The prevalence of fabricated passages in those transcripts has prompted experts, advocates, and former OpenAI employees to call for AI regulation and for fixes from the company.
Whisper is also widely used to generate closed captions for deaf and hard-of-hearing viewers, a group with no practical way to detect fabrications hidden within otherwise accurate text. Experts in the field have stressed the need for higher standards for AI transcription tools, especially in critical settings such as hospitals.
While OpenAI acknowledges the issue and says it is working to reduce these “hallucinations,” Whisper’s widespread use across industries and consumer technologies makes resolving the problem urgent. The tool’s integration into leading chatbots and cloud computing platforms underscores the need for greater accuracy and reliability in transcription technology.
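Part of that reach comes from how little code is needed to put Whisper to work. As a rough illustration only (the model size and audio file name below are placeholders, not details from any reported deployment), the open-source Python package can transcribe a recording in a few lines:

```python
import whisper

# Load a pretrained Whisper checkpoint; "base" trades accuracy for speed.
model = whisper.load_model("base")

# Transcribe a local audio file; the path here is purely illustrative.
result = model.transcribe("consultation.mp3")

# The output reads as fluent prose, so any fabricated sentences appear
# inline, indistinguishable at a glance from accurately transcribed ones.
print(result["text"])
```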
Another major concern is the deletion of original audio recordings by companies such as Nabla, which uses Whisper for medical transcription. Without the source audio, there is no way to check an AI-generated transcript against what was actually said, a particular problem for sensitive conversations between patients and healthcare providers.
As the debate around AI transcription tools continues, the importance of data security, accuracy, and transparency in transcription technology cannot be overstated. The potential impact of inaccurate transcripts on patient care and diagnosis underscores the need for thorough evaluation and improvement of AI-powered transcription tools.