- Add the Levenshtein distance to the code, as sketched after this list (Q1 2020)
- Test the first release with the three metrics (Q1 2020)
- Test the API and Docker image (Q1 2020)
- Publish release 1.0.0 on PyPI (Q2 2020)
- Update the documentation on ReadTheDocs (Q2 2020)
- Study the metrics for release 2.0.0 (Q3 2020)
- Develop the new metrics for 2.0.0 on GitHub (Q3 2020)
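The Levenshtein distance referred to in the roadmap is the minimum number of insertions, deletions and substitutions needed to turn one sequence into another, and it underlies word error rate calculations. The sketch below shows the standard dynamic-programming formulation as an illustration only, not BenchmarkSTT's own implementation.

```python
def levenshtein(a, b):
    """Minimum number of single-item edits (insertions, deletions,
    substitutions) needed to transform sequence `a` into sequence `b`."""
    previous = list(range(len(b) + 1))  # distances against the empty prefix of `a`
    for i, item_a in enumerate(a, start=1):
        current = [i]
        for j, item_b in enumerate(b, start=1):
            cost = 0 if item_a == item_b else 1
            current.append(min(
                previous[j] + 1,         # deletion
                current[j - 1] + 1,      # insertion
                previous[j - 1] + cost,  # substitution (or match)
            ))
        previous = current
    return previous[-1]


# Works on strings (character level) or on word lists (word level, as in WER)
assert levenshtein("kitten", "sitting") == 3
```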
In response to demands from broadcasters and other organizations that process large volumes of A/V content, the EBU has launched and is coordinating a framework for benchmarking Artificial Intelligence (AI) and Machine Learning (ML) services. The project is spearheaded by ‘BenchmarkSTT’, a tool designed to facilitate the benchmarking of speech-to-text systems and services.
Unlike tools used by ML experts in academic settings, BenchmarkSTT targets non-specialists in production environments. It does not require meticulous preparation of test data, and it prioritises simplicity, automation and relative ranking over scientific precision and absolute scores.
With a single command, the tool calculates the accuracy of Automatic Speech Recognition (ASR) transcripts against a reference. Optionally, the user can apply normalization rules to remove non-significant differences such as case or punctuation. Supporting multiple languages and user-defined normalizations, this CLI tool can be integrated into production workflows to perform real-time benchmarking.
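As a rough illustration of what such a benchmark involves, rather than the tool's actual API or command-line interface, the sketch below applies two simple normalization rules (lowercasing and punctuation removal) and then computes the word error rate of a hypothesis transcript against a reference; the example sentences and rule choices are hypothetical.

```python
import re


def normalize(text):
    """Apply simple normalization rules: lowercase and strip punctuation."""
    text = text.lower()
    return re.sub(r"[^\w\s']", "", text)


def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference word count,
    computed with a word-level edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)


reference = normalize("Hello, world! This is a test.")
hypothesis = normalize("Hello world this is the test")
print(f"WER: {word_error_rate(reference, hypothesis):.2f}")  # one substitution: 1/6
```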
This collaborative project is open source.