AI benchmarking: Speech-to-Text

A tool designed for simplicity, automation and minimal preparation of test data, developed to meet the needs of broadcasters.

In response to demands from broadcasters and other organizations that process large volumes of A/V content, the EBU has launched and coordinates a framework for benchmarking Artificial Intelligence (AI) and Machine Learning (ML) services. The project is spearheaded by ‘BenchmarkSTT’, a tool designed to facilitate the benchmarking of speech-to-text systems and services.

BenchmarkSTT

Unlike tools used by ML experts in academic settings, BenchmarkSTT targets non-specialists in production environments. It does not require meticulous preparation of test data, and it prioritises simplicity, automation and relative ranking over scientific precision and absolute scores.

With a single command, the tool calculates the accuracy of Automatic Speech Recognition (ASR) transcripts against a reference. Optionally, the user can apply normalization rules to remove non-significant differences such as case or punctuation. Supporting multiple languages and user-defined normalizations, this CLI tool can be integrated into production workflows to perform real-time benchmarking.
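The core computation behind such benchmarking is word error rate (WER): the word-level edit distance between hypothesis and reference, divided by the reference length, typically after normalization. BenchmarkSTT's own metrics and normalization rules are richer than this; the following is a minimal Python sketch of the idea, with a hypothetical normalization (lowercasing and punctuation stripping) standing in for user-defined rules.

```python
import re

def normalize(text):
    # Hypothetical normalization rule: lowercase and strip punctuation,
    # removing the kind of non-significant differences mentioned above.
    return re.sub(r"[^\w\s']", "", text.lower()).split()

def word_error_rate(reference, hypothesis):
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = normalize(reference), normalize(hypothesis)
    # prev[j] holds the edit distance between ref[:i-1] and hyp[:j].
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cost = 0 if r == h else 1
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + cost))  # substitution
        prev = cur
    return prev[-1] / len(ref)

wer = word_error_rate("The quick brown fox.", "the quick browne fox")
print(f"WER: {wer:.2f}")  # one substitution over four words -> 0.25
```

With punctuation and case normalized away, the only remaining difference is one misrecognized word, so the score reflects genuine ASR errors rather than formatting noise.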

Open Source

This collaborative project is open source, allowing broadcasters and other organizations to adopt, inspect and extend the tool.