A Benchmark of French ASR Systems Based on Error Severity

Bibliographic Details
Title: A Benchmark of French ASR Systems Based on Error Severity
Authors: Tholly, Antoine; Wottawa, Jane; Rouvier, Mickael; Dufour, Richard
Publication Year: 2025
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language, I.2.7
Abstract: Automatic Speech Recognition (ASR) transcription errors are commonly assessed with metrics that compare the hypothesis against a reference transcription, such as Word Error Rate (WER), which measures orthographic deviations from the reference, or semantic score-based metrics. However, these approaches often overlook what remains understandable to humans when interpreting transcription errors. To address this limitation, a new evaluation is proposed that categorizes errors into four levels of severity, further divided into subtypes, based on objective linguistic criteria, contextual patterns, and the use of content words as the unit of analysis. This metric is applied to a benchmark of 10 state-of-the-art ASR systems for the French language, encompassing both HMM-based and end-to-end models. Our findings reveal the strengths and weaknesses of each system, identifying those that provide the most comfortable reading experience for users.
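
For context on the baseline the abstract contrasts with its severity-based evaluation, the following is a minimal sketch of standard word-level WER (the textbook edit-distance formulation, not the paper's proposed severity metric); the example sentences are hypothetical.

    # Word Error Rate: (substitutions + deletions + insertions) / reference word count,
    # computed via Levenshtein alignment over word tokens.
    def wer(reference: str, hypothesis: str) -> float:
        ref, hyp = reference.split(), hypothesis.split()
        # dp[i][j] = minimum edits turning the first i reference words
        # into the first j hypothesis words
        dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            dp[i][0] = i          # i deletions
        for j in range(len(hyp) + 1):
            dp[0][j] = j          # j insertions
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                               dp[i][j - 1] + 1,         # insertion
                               dp[i - 1][j - 1] + cost)  # substitution / match
        return dp[len(ref)][len(hyp)] / len(ref)

    # Hypothetical French example: one substitution over four words -> WER = 0.25
    print(wer("le chat est noir", "le chat et noir"))

Note that WER scores the "est" -> "et" confusion identically to any other single-word error, regardless of how much it disrupts comprehension; the paper's severity levels are intended to capture that distinction.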
Comment: To be published in COLING 2025 Proceedings
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2501.10879
Accession Number: edsarx.2501.10879
Database: arXiv