The partnership links two leaders who define the AI landscape. Twelvelabs is recognized for its frontier multimodal neural network technology. It was the first video model provider featured on Amazon Bedrock, offering its flagship models Marengo and Pegasus to global customers.
LG CNS is at the forefront of AI transformation in the financial sector, with clients including NH Nonghyup Bank, Shinhan Bank, Shinhan Card, Mirae Asset Life Insurance, Mirae Asset Securities, and Woori Bank. The company recently expanded its AI-transformation leadership to the public sector, securing projects with the Ministry of Foreign Affairs, the Gyeonggi Provincial Office of Education, and the South Korean National Police Agency, and is now pushing into manufacturing and logistics.
Innovating to deliver market-ready solutions
The partnership will focus on two main pillars: technological collaboration and business co-development. On the R&D side, Twelvelabs and LG CNS will work together to enhance the performance and functionality of Twelvelabs' video foundation models (VFMs). At the same time, the two companies will co-design services that unlock business value for customers across a variety of use cases.
In media and broadcasting, for example, the collaboration will enable:
- Automatic summary and highlight generation of news, sports and TV shows
- Keyword and scenario-based search across large video archives
- Contextual ads based on real-time content consumption
In public safety and investigations, the partnership will support:
- Automatic detection of specific events in CCTV footage
- Summary of body cam and dash cam footage
- Scene search for incident-related content
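To make the archive-search use case concrete, here is a toy sketch of ranking video segments against query terms. This is purely illustrative: the `Segment` type, its per-segment labels, and the keyword-overlap scoring are assumptions for demonstration, not Twelvelabs' actual models, which use learned multimodal embeddings rather than tag matching.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    # Hypothetical unit of indexed video: an ID, a time range,
    # and tags a video-understanding model might emit for it.
    video_id: str
    start_s: float
    end_s: float
    labels: set

def search_segments(segments, query_terms):
    """Rank segments by how many query terms their labels match."""
    scored = []
    for seg in segments:
        score = len(seg.labels & set(query_terms))
        if score:
            scored.append((score, seg))
    # Highest-scoring (most relevant) segments first.
    scored.sort(key=lambda pair: -pair[0])
    return [seg for _, seg in scored]

# A tiny mock archive of CCTV segments.
archive = [
    Segment("cam01", 0.0, 12.5, {"car", "intersection", "collision"}),
    Segment("cam01", 12.5, 30.0, {"pedestrian", "crosswalk"}),
    Segment("cam02", 0.0, 8.0, {"car", "parking"}),
]
hits = search_segments(archive, ["collision", "car"])
```

In a production system the keyword overlap would be replaced by similarity between a text-query embedding and video-segment embeddings, but the retrieval shape (score, filter, rank) is the same.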
“The partnership with LG CNS advances video understanding that creates real value in key industries,” said Jerry, CEO of Twelvelabs. “Together we will maximize the value of video data and lay the foundation for Korea to become one of the world's top three AI powerhouses.”
About Twelvelabs
Twelvelabs uses multimodal foundation models to bring human-like understanding to video data. The company's foundation models map natural language to what is happening within a video, including actions, objects, and background sounds, allowing developers to build applications that can search videos, categorize scenes, summarize content, and extract insights with unprecedented accuracy. Headquartered in the US, Twelvelabs serves media, entertainment, sports, advertising, and government customers. For more information, please visit www.twelvelabs.io
Media Contact
Amber Moore, Moore Communications, 1 503-943-9381, [email protected]
Source: Moore Communications
