AI and Automatic Metadata Extraction

AI and Automatic Metadata Extraction (AME) in Production.

As production moves to (micro)service-based architectures, broadcasters need to consider the widespread adoption of automatic information extraction tools as new (cloud) processes in agile workflows. These tools can produce more of the information (including structured metadata) that modern production systems need, at lower cost.


2020

  • Study on action detection and identification (completed Q1 2020)
  • Study on fake news detection (completed Q1 2020)

Artificial intelligence techniques such as machine learning and deep learning (neural networks) are behind most Automatic Metadata Extraction (AME) tools. AME tools are characterised by the data they can extract; ideally, their capabilities should be registered to facilitate service registration and discoverability in microservice-based architectures.
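The capability-registration idea above can be sketched as a minimal in-memory service registry. This is an illustrative assumption, not an EBU specification: the field names (`service`, `capabilities`, `languages`) and the registry functions are hypothetical.

```python
# Minimal sketch of AME capability registration and discovery.
# All field names and services below are hypothetical examples.

REGISTRY = []

def register(descriptor):
    """Add a service capability descriptor to the registry."""
    REGISTRY.append(descriptor)

def discover(capability):
    """Return names of registered services offering the given capability."""
    return [d["service"] for d in REGISTRY if capability in d["capabilities"]]

register({
    "service": "stt-worker",          # hypothetical speech-to-text service
    "capabilities": ["speech-to-text"],
    "languages": ["en", "fr", "de"],
    "output": "structured-metadata",
})
register({
    "service": "face-id-worker",      # hypothetical face recognition service
    "capabilities": ["face-recognition", "face-identification"],
    "output": "structured-metadata",
})

print(discover("speech-to-text"))     # → ['stt-worker']
```

In a real microservice deployment the registry would be a shared service (and descriptors would follow an agreed schema), but the discovery pattern is the same: a production workflow asks for a capability, not for a specific tool.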

Main goal

The EBU AME group is part of the EBU "Metadata and Artificial Intelligence" activities. Its main goal is to help Members adopt AI-based automatic information extraction tools, such as:

  • speech-to-text
  • face recognition and identification
  • voice identification
  • location, event and object detection and identification
  • natural language processing (NLP)
  • action detection and identification


Related EBU work

The EBU "Media Cloud and Microservices Architecture" project

The EBU "AI Benchmarking" project, which develops tools to evaluate speech-to-text transcription and entity recognition services.

The EBU "AI Data Pool" project, which proposes a framework to share resources for training and assessing AI tools.
