How to contextualize scenes for an enriched viewing experience – WarMemoirSampo

Presenter(s): Eero Hyvönen (HELDIG)

The challenge addressed in this presentation is how to search and access temporal scenes inside long videos based on their time-stamped textual transcriptions. As a case study, the WarMemoirSampo project and system are presented: a collection of interview videos of Finnish World War 2 veterans and a portal for watching them on the Semantic Web. Using time-stamped textual descriptions of video scenes and information extraction methods, a semantic knowledge graph annotating the interviews scene by scene can be created automatically. Published via a SPARQL endpoint, the graph can be used for searching the interviews and for enriching them with links to additional information in external data sources, contextualizing and enriching video watching in real time.
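To illustrate the kind of pipeline the abstract describes, the following is a minimal sketch (not the WarMemoirSampo implementation) of turning time-stamped scene descriptions into simple RDF-style triples using gazetteer-based entity extraction; all IRIs, property names, and the gazetteer entries are hypothetical.

```python
# Toy gazetteer mapping surface forms to hypothetical entity IRIs;
# a real system would use NER and linked-data vocabularies instead.
GAZETTEER = {
    "Karelia": "http://example.org/place/karelia",
    "Winter War": "http://example.org/event/winter_war",
}

def scenes_to_triples(video_iri, scenes):
    """Build (subject, predicate, object) triples annotating each
    scene of a video. `scenes` is a list of
    (start_seconds, end_seconds, description) tuples."""
    triples = []
    for i, (start, end, text) in enumerate(scenes):
        scene_iri = f"{video_iri}/scene/{i}"
        # Structural metadata: which video, and when the scene plays.
        triples.append((scene_iri, "partOf", video_iri))
        triples.append((scene_iri, "startTime", start))
        triples.append((scene_iri, "endTime", end))
        # Naive entity extraction: case-insensitive gazetteer lookup.
        lowered = text.lower()
        for surface, entity in GAZETTEER.items():
            if surface.lower() in lowered:
                triples.append((scene_iri, "mentions", entity))
    return triples

video = "http://example.org/video/42"
scenes = [
    (0, 95, "The veteran recalls leaving Karelia as a child."),
    (95, 210, "Memories of the Winter War front line."),
]
triples = scenes_to_triples(video, scenes)
```

Once triples like these are loaded into a triple store and exposed through a SPARQL endpoint, a video player can query for the scenes that mention a given entity and jump to their start times, which is the real-time contextualization the abstract refers to.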