Alexandre Rouxel, data scientist and coordinator of the Data Group at the Technology & Innovation Department, talks about AI and Machine Learning in the world of broadcasting, and about the projects that his group is working on.

T&I is at a convergence point from which you get a broad view of the field. There is a significant amount of AI work going on at T&I, some of which I'm going to explain here.

The raw material we work with is data, which can come from written content, video, audio or other media. Because of their ability to process huge amounts of data quickly and generate value, AI and ML are having an impact on many different areas of media. Sometimes that impact is invisible, for example when optimising the efficiency of 5G communication networks. Sometimes it is very tangible, as in media archives, where AI tools can improve access to our cultural heritage or generate new content from existing material. AI also opens up a whole new set of possibilities in production, such as “deepfaking” an actor’s face into a film sequence. Other areas include audience analytics, newsroom support, recommendation systems, AI-generated creative content, and so on.

While the AI and Data Initiative focuses on strategic and cross-disciplinary aspects of the field, the Data Group at T&I is developing a series of technical projects leveraging the potential of ML for public service broadcasters.

In the AI Benchmarking project, we are developing tools for benchmarking AI applications. It’s a collaborative open source project led by members. In a market where many AI tools are available, with varying properties and price tags, the goal is to give broadcasters easy-to-use benchmarking tools so that they can evaluate the quality of those tools in production. Currently we are focusing on tools for evaluating the quality of speech-to-text engines and services, with metrics that are specific to public service media.
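
To make this concrete, here is a minimal sketch – not the project’s actual tooling – of the kind of metric such a benchmark relies on: the classic word error rate (WER), i.e. the word-level edit distance between a reference transcript and the engine’s output, divided by the length of the reference. The transcripts below are hypothetical.

```python
# Minimal sketch of a speech-to-text benchmark metric (word error rate).
# Not the AI Benchmarking project's actual tooling; transcripts are hypothetical.
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance between reference and hypothesis,
    divided by the number of words in the reference."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance (deletions, insertions, substitutions)
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

reference = "the council voted to increase the licence fee next year"
hypothesis = "the council voted to increase the license fee next to year"
print(f"WER: {word_error_rate(reference, hypothesis):.1%}")  # one substitution + one insertion
```

Real benchmarking tools report more than a single number, but the principle – scoring an engine’s output against a trusted reference – is the same.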

Media Cloud Micro-service Architecture (MCMA) is a project that deals with serverless cloud computing and micro-service architectures for media applications. In serverless cloud computing, micro-services communicate with each other by exchanging messages. Developing a software architecture for this is crucial and unavoidable, but it can also be very complex. Micro-services – as opposed to monolithic applications – are best suited to serverless cloud computing, but jumping into that world is nevertheless not an easy task for a developer. For starters, it is very cloud-provider specific. The idea behind this project is to simplify the development process by gathering software that has proven reliable in production into cloud-agnostic libraries, so that developers can focus on generating value. But how do you handle the state of such an architecture? We think the answer is to standardise the messages exchanged between these micro-services. We are currently working with the Open Services Alliance for Media and SMPTE to write a standard for this and to develop good practice. A standard also means well-documented developments, which makes everything easier to use. MCMA is a very ambitious project requiring specific expertise in cloud computing, and we’re very happy to have top-notch engineers working on it.
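
Purely as an illustration of the idea – the actual formats are being defined with the Open Services Alliance for Media and SMPTE, so the field names below are invented for this sketch – a job message published by one micro-service for another could look something like this:

```python
# Illustrative only: the field names below are invented for this sketch,
# not the message standard being written with OSA for Media and SMPTE.
import json
import uuid
from datetime import datetime, timezone

def make_job_message(job_type: str, input_file: str, notification_url: str) -> str:
    """Build a job message that one media micro-service could publish
    for another (e.g. an AI transcription service)."""
    message = {
        "id": str(uuid.uuid4()),
        "type": job_type,                          # e.g. "TranscribeJob"
        "status": "QUEUED",                        # the job state travels with the message
        "input": {"file": input_file},
        "notificationEndpoint": notification_url,  # where progress and results are reported
        "dateCreated": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(message, indent=2)

print(make_job_message("TranscribeJob",
                       "s3://archive-bucket/interview.wav",
                       "https://example.org/jobs/callback"))
```

Because every service produces and consumes the same message shape, the state of a job travels with the messages rather than being hidden inside any one service – which is the point of standardising them.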

In ML, many algorithms can generate tags to describe content, but without explaining why. In the X-Tagging project, where the X stands for 'explainable', we provide journalists with high-level tags for written content and also explain why the content was tagged that way. For instance, we first started working on tagging fake news based on the linguistic properties of a text and the assumption that fake news carries certain grammatical patterns, vocabulary and even psychological keywords. Analysing these properties yields a statistical score of how likely the content is to be fake news – as opposed to following an if-then rule set or relying on fact-checking. We do not aim to compete with journalists, who are very good at fact-checking and at understanding a text’s context (a humorous context, for example). Our aim is to support their work by using what ML is good at and humans aren’t, or simply cannot do: evaluating a text’s likelihood of being fake without any context, and processing huge amounts of data. The same method can also be used to determine the likely author of a text, or to target the right audience.
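
As a toy illustration of the principle – not the X-Tagging implementation – the sketch below scores a text with a simple linear model over a handful of hand-crafted linguistic features; because the model is linear, its weights can be shown alongside the tag as the reason for the score. All texts, features and labels here are invented for the example.

```python
# Toy illustration of explainable tagging, not the X-Tagging implementation.
# Texts, features and labels below are invented for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["exclamation_marks", "superlatives", "second_person", "avg_sentence_len"]

def extract_features(text: str) -> np.ndarray:
    """A handful of hand-crafted linguistic features."""
    words = text.lower().split()
    sentences = [s for s in text.replace("!", ".").split(".") if s.strip()]
    return np.array([
        text.count("!"),
        sum(w in {"best", "worst", "incredible", "unbelievable"} for w in words),
        sum(w in {"you", "your"} for w in words),
        len(words) / max(len(sentences), 1),
    ])

# Tiny invented training set: 1 = flagged as likely fake, 0 = not flagged
texts = [
    "You won't believe this! The most incredible cure ever!",
    "The committee published its annual report on Tuesday.",
    "Unbelievable! Your government is hiding the worst secret!",
    "Researchers observed a small increase in rainfall this year.",
]
X = np.array([extract_features(t) for t in texts])
y = np.array([1, 0, 1, 0])
model = LogisticRegression(max_iter=1000).fit(X, y)

new_text = "You will not believe the incredible thing they found!"
score = model.predict_proba([extract_features(new_text)])[0, 1]
print(f"fake-news likelihood score: {score:.2f}")
# The "explainable" part: show which features pushed the score, and by how much
for name, weight, value in zip(FEATURES, model.coef_[0], extract_features(new_text)):
    print(f"{name}: value={value:.1f}, weight={weight:+.2f}")
```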

The metadata structure used by Google in its services – such as search and its voice assistant – does not necessarily match the metadata that public service broadcasters use to describe their programmes and make them accessible: several forms of cultural content, for instance, are not covered by Google’s metadata structure. One of the EBU’s main strengths is its ability to federate: the EBU’s Metadata community speaks with one voice to Google and to schema.org (which structures data on the web) in order to propose changes to their metadata structure and achieve a better fit with public service content. The result would be increased visibility of PSM content to audiences.
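
For illustration, the sketch below describes a hypothetical radio programme using existing schema.org types (RadioEpisode, RadioSeries); the additional properties that would be needed to cover public service content – the kind of change the community proposes – are not shown here.

```python
# Illustrative only: a hypothetical radio programme described with existing
# schema.org types. The extra properties proposed for public service content
# are not shown here.
import json

episode = {
    "@context": "https://schema.org",
    "@type": "RadioEpisode",
    "name": "The Archive Reopened",          # hypothetical programme title
    "description": "A documentary on restoring a broadcaster's audio archive.",
    "inLanguage": "fr",
    "partOfSeries": {"@type": "RadioSeries", "name": "Culture Tonight"},
}

print(json.dumps(episode, indent=2, ensure_ascii=False))
```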

For more information about other ML, data and AI projects I work on at the Technology & Innovation Department, please visit the T&I Production team's Data Group.

About the author

Alexandre Rouxel works at the EBU as a data scientist and project coordinator. His background is in Signal Processing, where he started to work on what is now called Machine Learning (ML) in the late ’90s. He has 20 years of industry experience in the field of communications systems – specifically in algorithms, system modelling and standardisation – where he worked for a number of high-tech NASDAQ-listed companies. He has now been at the EBU Technology & Innovation Department for two years, where he coordinates technical development in the Data Group and also manages the EBU’s Metadata community.
