Moments Lab Unveils New Multi-Language Capabilities for its Groundbreaking MXT-1 AI Indexing Technology

Moments Lab Content Team
September 15, 2023

MXT-1 Now Offers Automated Multilingual Transcription and Automatic Content Descriptions in Several Languages, Speeding Up Media and Sports Workflows.

Leading AI and cloud media company Moments Lab (ex Newsbridge) today announced new language enhancements for MXT-1, the company’s revolutionary multimodal and generative AI indexing technology. At IBC2023, Moments Lab will showcase MXT-1’s new multilingual transcription and automatic content description capabilities, designed to dramatically speed up media and sports workflows.

Moments Lab’s core AI indexing tech uses natural language models to generate human-like descriptions of video content, with the power to index more than 500 hours of video per minute.

“While most AI technologies only support English, many of our customers in Europe and the Middle East are operating in their native tongue and need solutions that speak the same language as their business,” said Frederic Petitpont, co-founder and CTO at Moments Lab. “Moments Lab firmly believes that no language should be left behind when working on generative AI, even if it’s technically challenging. We’re excited to introduce these latest multi-language capabilities to our media, sports, and broadcast customers at IBC2023, enabling faster and more efficient operations.”

MXT-1 is now able to describe what’s happening in every shot of a video, as well as summarize the video in its entirety. These scene descriptions and media summaries are available in English, French, Spanish, German, Portuguese, and Arabic, with more languages to come. 

Moments Lab’s MXT-1 AI automatically detects and transcribes a single media asset when more than one language is spoken.

This flexibility enables users to choose the language they are most comfortable with, improving their engagement and understanding of the video content. In addition, MXT-1 enables speedy multilingual transcription. Media organizations can automatically detect and transcribe a single media asset when more than one language is spoken. 

The new multi-language capabilities complement MXT-1’s already powerful transcription and neural translation functionalities, which are available in more than 130 languages. Through MXT-1, media companies can translate video transcriptions into any of these languages in near real-time to extend the global reach of their video content.

Moments Lab’s MXT-1 generative AI indexing technology is successfully used by leading media organizations, including Le Parisien, an internationally acclaimed French newspaper producing both local and global news.

“When we adopted Moments Lab’s AI technology to manage our video content, my editorial department shifted overnight from a traditional workflow to one that was markedly more efficient at publishing and monetizing our content,” said Guillaume Otzenberger, video technical and creative director at Le Parisien. “MXT-1’s new features, including automatic content descriptions, push the boundaries even further. In a highly competitive environment like news, being first to publish is key, so using an AI technology like MXT-1 – that literally speaks our working language of French as it rapidly generates suggested titles, descriptions, chapters and hashtags for the videos that we publish to social media platforms – gives us an important edge.”

At IBC2023, Guillaume Otzenberger will discuss how Le Parisien is using generative AI in multiplatform video publishing workflows. His presentation, followed by a Q&A and drinks reception, will take place at the Moments Lab stand (7.C30) on Friday, Sept. 15 at 4 p.m. CEST.

Book a demo meeting with Moments Lab at IBC2023.

Or for more information, please contact us.
