Program and Abstracts


CHANGING SCENERIES, CHANGING ROLES:
EMBRACING AUTOMATION, ENHANCING DISCOVERABILITY

 

Day 1:     8th June 2017, Morning Session, 09.30-13.00

Seminar Opening | Welcome and introduction | Maurizio Canetta, Director RSI, Switzerland & Eva-Lis Green, Chair MMC FIAT/IFTA, Kungliga Biblioteket, Sweden & Brid Dooley, President FIAT/IFTA, RTE, Ireland

Keynotes | Archives 2020 | Marco Derighetti, SRG SSR, Switzerland
In addition to their use as a means of production, the audiovisual archives of media companies, and in particular those of public service broadcasters, are increasingly perceived as a social memory. SRG SSR takes this responsibility seriously and sees it as an opportunity to reinforce its public service mandate. A task force set up specifically to reposition the archives has analysed the objectives, opportunities, obstacles and risks on the way to ARCHIVE 2020, so that the course can now be set and the resources planned.

Case Study | La Xarxa and VSN: a distributed MAM in the cloud | Miquel Herrada, La Xarxa, Spain & Roberto Pascual Fonte, VSN, Spain
La Xarxa unified its media in a single, multipurpose space, with all shared resources available, to boost collaborative workflows between all its different media outlets: the radio, TV and digital media of over 50 TV and 130 radio stations. The result is a media exchange hub that allows associates to contribute, share, archive, and perform actions and reports through VSNExplorer on top of the Microsoft Azure cloud.

Case Study | Where's the line? The intersection of cloud-based and internal MAM systems | David Klee, Univision, USA
The cloud has driven a tremendous amount of change in the media industry, while also bringing a variety of new capabilities with it. In the last few years the economics have shifted to the point that cloud services are competitive with, or even preferable to, servers maintained locally inside an organization, even for large files like video. This talk will explore the various steps of typical video supply chains in the broadcast space, tracking the points at which content can touch the cloud, and the increasingly blurred line between internally maintained and cloud-hosted media management systems.

Case Study | Transforming a silo into modular services. The continuous evolution of RSI multimedia archives | Sarah-Haye Aziz & Lorenzo Vassallo, RSI, Switzerland & Andrea Ceciarelli, Reply, Italy & Riccardo Savarè, Reply, Italy

 

Day 1:     Afternoon Session, 14.15-18.00

Hors d’oeuvre | Results of the 2nd FIAT/IFTA MAM Survey | Brecht Declercq, VIAA, Belgium & Gerhard Stanz, ORF, Austria
One of the FIAT/IFTA Media Management Commission's primary goals is the collection and interpretation of expert knowledge that exists within the archive community, enabling collaboration, analysis and distribution back to the community. To that end, in 2015 we organised the first FIAT/IFTA MAM Survey. The results were published in the Proceedings of the MMC Seminar in Glasgow: http://fiatifta.org/index.php/fiatiftaevents/2015-seminar-glasgow/. Because it was such a success, in 2017 we decided to run the MAM Survey again, focusing this time on discoverability and automation in metadata creation strategies. In this presentation we will present the results for the first time.

Case Study | From a video archive to a near-live media distribution platform | Olivier Gaches, UEFA, Switzerland
Since 2009 UEFA’s video archive has evolved from being purely tape-based into an end-to-end digital solution serving hundreds of users. The workflow incorporates automated ingest and standardised metadata for archiving of new content, all of which is accessible via a user-friendly web-based portal, from where users can browse, clip, share and order their content. The next evolution of UEFA’s platforms is already being planned, with the aim of making more content (not only video) available to more people in a more efficient and timely manner.

Case Study | PGA Tour and the evolution of metadata | Michael Raimondo, PGA Tour, USA
The PGA TOUR has a unique vocabulary, statistical data and 18 playing fields that create a challenge for an archive. See how the TOUR combines logging, automation, and other tools to enhance the world’s largest golf archive.

Case Study | ATP World Tour Archive: IMG Replay and Imagen Ltd | Simon Jones, IMG, UK & John Hunt, Imagen Ltd, UK
The ATP World Tour Archive offers footage of over 1,100 matches and 2,500 hours of match content from the ATP World Tour Masters 1000 Series, round-robin matches from the season-ending ATP Finals and other significant games from the last 27 years. Powered by Imagen Ltd’s award-winning Video Platform and managed by IMG, the service provides ATP Media’s broadcast partners, as well as production companies and sponsors, with quick access to important content. Rights-holders use the Platform to enhance the ATP Media global product, exploiting a huge library of content showcasing the world’s best players, going back to the inception of the ATP World Tour. This enables ATP Media to protect and enhance the heritage of the Tour whilst constantly evolving. Imagen Ltd and IMG delivered this video management solution to ATP Media in partnership, combining forces to store, manage and distribute ATP’s new and archive media worldwide.

Case Study | Digitisation, industrialisation: Sport broadcasting challenges and the value of real time contents in an integrated newsroom | Emanuele Balossino, Mediaset, Italy
How can a sports content factory be digitised and automated, allowing full rights exploitation across different platforms and devices, keeping pace with digital transformation and user behaviour, and reducing operating costs? Mediaset faced these challenges by deploying a fully digital end-to-end solution, from content acquisition to broadcasting, distribution and archiving. Since “content is king”, the Premium Sport content factory aims to make logs and highlights, as well as selected and archived materials, immediately available to all users in real time, in order to maximise the value of its assets.

Case Study | Augmented Reality, a new frontier for archives valorisation | Antonio Scuderi, Capitale Cultura Group, Italy
The presentation will focus on the disruptive potential of Augmented Reality in the valorisation of media and heritage archives. The case history is the synergy between RSI and Capitale Cultura International/ARtGlass (the leading company in the global field of wearable AR for culture and edutainment) to enhance the visitor experience at the Swiss Customs Museum in Gandria: an exclusive, previously unseen model mixing historic footage with the most futuristic AR solutions. A walk through time and space, and a model for future cooperations.

 

Day 2:     9th June 2017, Morning Session, 9.00-12.30

Case study | Fraunhofer IAIS Audio Mining: Automatic metadata generation of audio streams | Joachim Kohler, Fraunhofer IAIS, Germany
This presentation describes algorithms, technologies and a workflow engine that produce metadata from audio and video assets by analysing the audio stream with advanced machine learning methods. This covers segmentation of the audio stream into meaningful segments, including speaker clustering and speaker recognition using iVector algorithms. The speech segments are then automatically transcribed with a deep-learning-based speech recognition engine. It will be shown how the speech recognition process is realised, with information about the challenges, performance and progress of speech recognition results. All these technologies are integrated in a scalable, web-service-based solution called Fraunhofer Audio Mining, which is already deployed in the IT infrastructures of several media organisations. Finally, the talk will give practical examples of how this solution is used in German broadcast (WDR, HFDB) and research scenarios (KA3 – the Cologne centre for analysis and archiving of AV data) for archiving and searching large media collections.
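For readers unfamiliar with the workflow described above, the following sketch (not the Fraunhofer Audio Mining code, and written in Python purely for illustration) shows the overall shape of such a pipeline: segment the audio stream, cluster segments by speaker, then transcribe the speech segments. The embedding and transcription functions are hypothetical stand-ins for iVector extraction and a deep-learning speech recognition engine.

```python
# Illustrative audio-mining pipeline sketch: segmentation -> speaker clustering -> transcription.
# The embedding and ASR functions are placeholders, not the real Fraunhofer components.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def segment_audio(samples: np.ndarray, rate: int, win_s: float = 2.0):
    """Split the stream into fixed-length windows; a real system would use
    silence/energy or model-based segmentation instead."""
    win = int(win_s * rate)
    return [samples[i:i + win] for i in range(0, len(samples), win)
            if len(samples[i:i + win]) == win]

def speaker_embedding(segment: np.ndarray) -> np.ndarray:
    """Stand-in for iVector extraction: a crude, normalised spectral fingerprint."""
    spectrum = np.abs(np.fft.rfft(segment))[:64]
    return spectrum / (np.linalg.norm(spectrum) + 1e-9)

def transcribe(segment: np.ndarray) -> str:
    """Stand-in for the deep-learning speech recognition engine."""
    return "<transcript>"

def audio_mining(samples: np.ndarray, rate: int, win_s: float = 2.0):
    segments = segment_audio(samples, rate, win_s)
    embeddings = np.stack([speaker_embedding(s) for s in segments])
    # Cluster segments so that each cluster corresponds to one (unnamed) speaker.
    labels = AgglomerativeClustering(n_clusters=None, distance_threshold=0.5).fit_predict(embeddings)
    return [{"start_s": i * win_s, "speaker": int(labels[i]), "text": transcribe(seg)}
            for i, seg in enumerate(segments)]

if __name__ == "__main__":
    fake_audio = np.random.randn(16000 * 10)   # 10 s of noise standing in for real speech
    for entry in audio_mining(fake_audio, 16000):
        print(entry)
```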

Case study | Automated metadata generation projects at YLE | Elina Selkälä, YLE, Finland
Yle broadcasts and publishes online nearly 100 000 hours of radio programmes and about 20 000 hours of television programmes each year. In addition to radio and television programming, Yle produces and publishes a vast number of photographs to be used in online news and articles. Unlike text, audiovisual content is not immediately searchable. The findability of the published materials is largely based on text, so good-quality metadata is important. Since producing metadata is slow and laborious manual work, it is usually necessary to focus on core issues instead of maximising findability. The need to find cost-effective ways of generating metadata for large amounts of material is substantial. Yle has started exploring automatic content analysis methods and applications based on artificial intelligence. In 2016 Yle piloted, for example, image and speech recognition, as well as text analysis. The study of available methods and applications continues on the basis of lessons learned.

Case study | Automatic multi-modal metadata annotation based on trained cognitive solutions | Jakob Rosinski, IBM, Germany
In this talk an example will first be given of how a combination of different cognitive solutions is used to automatically create a trailer from a thriller movie. Various analysis and detection technologies will then be discussed as a foundation for the main topic of the talk: the creation of an automatic model to annotate video content of a specific domain, based on trained solutions from different angles. This validated model is based on a customer project for a large sports video archive. The overall workflow for annotating both archived and freshly ingested content will also be shown, as well as how the solutions and the model can be configured and maintained.
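As a purely hypothetical sketch (this is not IBM's solution; the detector names, weights and scores below are invented), the fragment shows one way the outputs of several analysers could be fused to pick trailer candidates: each shot receives scores from different detectors, the scores are combined with weights, and the highest-scoring shots are kept within a duration budget and restored to chronological order.

```python
# Hypothetical fusion of multiple detector outputs into a trailer shot selection.
from dataclasses import dataclass

@dataclass
class Shot:
    start_s: float
    end_s: float
    scores: dict  # detector name -> score in [0, 1]

# Invented detector weights; a real system would learn or tune these.
WEIGHTS = {"action": 0.5, "music": 0.3, "faces": 0.2}

def fused_score(shot: Shot) -> float:
    """Weighted combination of the per-detector scores."""
    return sum(WEIGHTS.get(name, 0.0) * value for name, value in shot.scores.items())

def select_trailer_shots(shots, max_duration_s: float = 60.0):
    """Greedily keep the highest-scoring shots until the trailer budget is spent."""
    chosen, used = [], 0.0
    for shot in sorted(shots, key=fused_score, reverse=True):
        length = shot.end_s - shot.start_s
        if used + length <= max_duration_s:
            chosen.append(shot)
            used += length
    return sorted(chosen, key=lambda s: s.start_s)  # restore story order

if __name__ == "__main__":
    shots = [  # invented example shots
        Shot(0, 12, {"action": 0.9, "music": 0.7, "faces": 0.2}),
        Shot(12, 40, {"action": 0.1, "music": 0.2, "faces": 0.9}),
        Shot(40, 55, {"action": 0.6, "music": 0.8, "faces": 0.5}),
    ]
    for s in select_trailer_shots(shots, max_duration_s=30):
        print(f"{s.start_s:>5.1f}-{s.end_s:<5.1f}  score={fused_score(s):.2f}")
```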

Case study | Smart Production | Hirokasu Arai & Jun Goto, NHK, Japan
NHK has proposed automatic TV programme production technologies, known collectively as “Smart Production”, driven by artificial intelligence (AI), to rapidly and accurately obtain useful information from a broad range of sources in society and to generate content that is easily conveyed to all viewers, including persons with disabilities. Smart Production consists of several kinds of technologies, such as big data analysis for obtaining useful information from social media, speech recognition for generating text from interviews, image recognition for generating metadata for video materials, and content conversion for creating simplified Japanese, computer-generated sign language and audio description.

 

Day 2:     Afternoon Session, 13.45-17.30

Case study | Private broadcast: public access. Digitisation and semi-automatic indexation of Canal 9 TV Archives for preservation and multi-platform access | Yves Niederhäuser, Memoriav, Switzerland & Damian Elsig, Mediatheque Valais, Switzerland
A (very) short presentation of Memoriav, its general activities and cooperations in the field of broadcast archives. The new broadcast legislation provides new opportunities for preservation and access in Switzerland; Memoriav is trying to seize these and is ready to facilitate public access via its online platform Memobase. The public heritage institution Mediathek Wallis acquired the archives of the private regional TV station "Canal 9" and set up a public-private partnership for a preservation and access project. Digitisation, semi-automatic indexing and online access are milestones of this pilot project under the new law. Examples of first results and the next steps will be presented.

Case study | The adventurous trip of thesaurus terms into the portals of the new MAM system. Thesaurus management and portals at Sound and Vision. | Karin van Arkel, B&G, the Netherlands & Vincent Huis in ‘t Veld, B&G, the Netherlands.
NISV is currently implementing a new Media Asset Management (MAM) system. For descriptive metadata NISV is largely dependent on the ingest of metadata from the production environment and on automated metadata creation. For automated metadata NISV has implemented speaker recognition and term extraction (facial recognition is planned). In metadata ingest, ‘normalisation’ of thesaurus keywords is an important feature: imported keywords that cannot be matched to the thesaurus are represented as ‘tags’. For findability NISV differentiates between multiple user groups, each with its own ‘portal’ suited to its specific needs: the general public, media professionals and education. In these portals, traditional metadata and time-code-based automated metadata are combined to optimise discoverability. The use of thesaurus entries and normalisation boosts both browsing and findability. That is why NISV will create a workflow on tags to reconcile them with the thesaurus or add them to it.
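The normalisation step described above can be pictured with a minimal sketch like the one below. It is not NISV's actual implementation, and the thesaurus entries and keywords are invented examples: imported keywords are matched against preferred and alternative labels, and anything unmatched is kept as a free tag for the later reconciliation workflow.

```python
# Minimal sketch of keyword normalisation against a thesaurus (invented data).
from dataclasses import dataclass, field

@dataclass
class ThesaurusEntry:
    pref_label: str
    alt_labels: set = field(default_factory=set)

class Thesaurus:
    def __init__(self, entries):
        # Index every preferred and alternative label, case-insensitively.
        self._index = {}
        for entry in entries:
            for label in {entry.pref_label, *entry.alt_labels}:
                self._index[label.casefold()] = entry

    def match(self, keyword: str):
        return self._index.get(keyword.strip().casefold())

def normalise(keywords, thesaurus):
    """Split imported keywords into normalised thesaurus terms and free tags."""
    terms, tags = [], []
    for kw in keywords:
        entry = thesaurus.match(kw)
        if entry:
            terms.append(entry.pref_label)   # store the preferred label
        else:
            tags.append(kw)                  # keep for later reconciliation
    return terms, tags

if __name__ == "__main__":
    thesaurus = Thesaurus([
        ThesaurusEntry("Amsterdam"),
        ThesaurusEntry("television broadcasting", {"TV broadcasting", "broadcast TV"}),
    ])
    terms, tags = normalise(["amsterdam", "TV broadcasting", "vlogging"], thesaurus)
    print("thesaurus terms:", terms)          # ['Amsterdam', 'television broadcasting']
    print("tags for reconciliation:", tags)   # ['vlogging']
```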

Case study | Automagically Archiving the BBC’s TV programmes | Lynne Dent, BBC, UK & Simon Allcorn, BBC, UK
Simon and Lynne will talk about the Digital Archiving solution that they have deployed at the BBC, with specific reference to the automated archiving of post-transmission TV programmes, and how a rich data set is automatically attributed to these programmes.

Case study | A framework for visual search in broadcasting companies’ multimedia archives | Federico Maria Pandolfi, RAI, Italy
In today’s digital age, the ability to access, analyse and (re)use ever-growing amounts of data is a strategic asset for the broadcasting and media industry. Despite growing interest in new technologies, archive search and retrieval operations are still usually carried out by means of text-based search over tags and metadata of manually pre-annotated material. This is largely because of its reliability and the broad availability of powerful full-text search platforms. However, this approach still does not completely meet the requirements that searching huge multimedia archives poses, such as the need for semantic-driven indexing and retrieval, or the possibility of accessing content based on visual features. In this presentation, we will describe a framework currently under development at Rai that enables visual search over the company's vast archive, including still images as well as annotated broadcast content and raw footage. The core of the current architecture is based on LIRE (Lucene Image REtrieval), an open source Java library for content-based image retrieval, and Apache SOLR, an enterprise full-text search platform. Possible extensions of the framework to include new technologies such as deep learning or semantic learning will be discussed as well.
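Rai's framework itself is built on LIRE and Apache SOLR in Java; purely as a conceptual illustration of content-based image retrieval (and not of the LIRE API), the Python sketch below extracts a simple global descriptor per image, in the spirit of LIRE's colour-based features, and ranks archive images by descriptor distance to a query. The file paths are invented examples.

```python
# Conceptual content-based image retrieval: global colour descriptor + nearest-neighbour ranking.
import numpy as np
from PIL import Image

def colour_histogram(path: str, bins: int = 8) -> np.ndarray:
    """Global descriptor: a joint RGB histogram, normalised to sum to 1."""
    pixels = np.asarray(Image.open(path).convert("RGB").resize((128, 128))).reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins), range=((0, 256),) * 3)
    hist = hist.flatten()
    return hist / hist.sum()

def build_index(paths):
    """Compute one descriptor per archive image."""
    return {p: colour_histogram(p) for p in paths}

def search(query_path: str, index: dict, k: int = 5):
    """Return the k archive images whose descriptors are closest to the query."""
    q = colour_histogram(query_path)
    ranked = sorted(index.items(), key=lambda item: np.linalg.norm(item[1] - q))
    return [path for path, _ in ranked[:k]]

if __name__ == "__main__":
    # Invented file names; in practice these would be keyframes from the archive.
    archive = build_index(["frame_001.jpg", "frame_002.jpg", "frame_003.jpg"])
    print(search("query.jpg", archive, k=2))
```

In a production system such as the one described above, the descriptors would be indexed in a search platform (in Rai's case, Apache SOLR via LIRE) rather than held in memory, so that ranking scales to millions of keyframes.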

Case study | RTBF.be’s recommender project | Xavier Jacques-Jourion, RTBF.be, Belgium