When it comes to streaming, viewers have a wide selection at their fingertips. With so many options available, the user experience is a key differentiator when customers decide where to watch. Prime Video strives to provide the best streaming experience possible, from helping customers find the perfect movie (or the series worthy of their next binge) to watching big games. Generative AI is driving many of these improvements.
By building generative AI applications with Amazon Bedrock, a fully managed service from Amazon Web Services (AWS), Prime Video can provide audiences with more value and bespoke insights. Streaming experiences in a variety of areas have already been improved, and these only scratch the surface of what is possible.
Below are five ways Prime Video uses generative AI on AWS to provide a premium viewing experience to its customers:
1. Efficient and personalized content recommendations
Between must-see Amazon MGM Studios original films and series, licensed content, and add-on subscriptions (such as Apple TV+, HBO Max, and Crunchyroll), Prime Video offers its customers a vast range of premium programming.
As the amount of content available on Prime Video grows, so does the importance of search and recommendation tools that help customers spend more time watching and less time searching. That's why Prime Video uses AI to power search and discovery of the entertainment experiences customers want.
For example, Amazon Bedrock directly powers personalized content recommendations on Prime Video's “Movies” and “TV shows” landing pages. On each page, viewers see rows such as “Movies we think you'll like” and “TV shows we think you'll like.”
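The source doesn't describe the recommendation mechanics, but a common approach behind rows like these is embedding-based similarity ranking. The sketch below is a minimal, hypothetical illustration (all titles, vectors, and function names are invented for the example), ranking a catalog by cosine similarity to a viewer's taste profile:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def rank_titles(viewer_profile, catalog):
    """Rank catalog titles by similarity to a viewer's taste embedding.

    `viewer_profile` is a vector summarizing watch history; `catalog`
    maps title -> embedding. Both inputs are hypothetical.
    """
    scored = [(title, cosine_similarity(viewer_profile, emb))
              for title, emb in catalog.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)

catalog = {
    "Space Drama": [0.9, 0.1, 0.0],
    "Cooking Show": [0.1, 0.9, 0.2],
    "Sci-Fi Thriller": [0.8, 0.0, 0.3],
}
viewer = [0.7, 0.1, 0.2]  # profile leaning toward sci-fi
ranking = rank_titles(viewer, catalog)
print(ranking[0][0])
```

In a production system, the embeddings themselves would come from a model (for example, one hosted on Amazon Bedrock) rather than hand-written vectors.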
2. Get caught up with X-Ray Recaps
Prime Video's X-Ray Recaps feature helps viewers get up to speed on whatever they're watching without risking spoilers. X-Ray Recaps creates short, easy-to-digest summaries of full seasons of TV shows, single episodes, and even pieces of episodes.
Whether viewers need a refresher a few minutes into a new episode, mid-season, or when returning to a series after a break, X-Ray Recaps provides short text snippets. The snippets cover key cliffhangers, character-driven plot points, and other details, all accessible at any point in the viewing experience.
Powered by a combination of foundation models available in Amazon Bedrock and custom AI models trained with Amazon SageMaker, X-Ray Recaps works by analyzing various video segments. Combined with subtitles and dialogue, it generates detailed descriptions of key events, locations, times, and conversations. Amazon Bedrock Guardrails are applied to help ensure the summaries remain spoiler-free.
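One way to keep a recap spoiler-free is to summarize only material up to the viewer's current position. The sketch below is a hypothetical illustration of that idea, not Prime Video's implementation: it assembles a request body shaped like the Amazon Bedrock Converse API's `messages` parameter from invented segment and subtitle data (the model ID and all inputs are assumptions):

```python
def build_recap_request(segments, subtitles, watched_until):
    """Assemble a summarization prompt covering only content the viewer
    has already seen, so the model cannot reveal later events.

    `segments` and `subtitles` are hypothetical (timestamp_sec, text)
    pairs; `watched_until` is the viewer's position in seconds.
    """
    seen = [text for ts, text in segments + subtitles if ts <= watched_until]
    prompt = (
        "Summarize the key events, characters, and cliffhangers in the "
        "material below. Do not mention anything beyond it.\n\n"
        + "\n".join(seen)
    )
    # Shape mirrors the Bedrock Converse API request; a real call would
    # pass this to boto3's bedrock-runtime client.
    return {
        "modelId": "amazon.nova-lite-v1:0",  # hypothetical model choice
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
    }

request = build_recap_request(
    segments=[(30, "The detective finds a clue."),
              (900, "The culprit is revealed.")],
    subtitles=[(45, "This changes everything.")],
    watched_until=120,
)
print(request["messages"][0]["content"][0]["text"])
```

Note that the actual service also layers Amazon Bedrock Guardrails on the model output; the time filter here is just one complementary safeguard.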
X-Ray Recaps builds on Prime Video's existing X-Ray feature, which lets viewers dive deeper into what they're watching with information about the cast, soundtrack, and more.
Figure 1: X-Ray Recaps on Prime Video.
3. Bringing deeper insights to Thursday Night Football and NASCAR
The latest seasons of Prime Video's exclusive NFL Thursday Night Football (TNF) and NASCAR coverage marked the debut of several new Prime Insights: AI-powered broadcast enhancements designed specifically to bring fans closer to the action. They were built through a unique collaboration among leading sports producers, engineers, on-air analysts, and AI and computer vision experts, alongside the AWS team, using Amazon Bedrock. These new Prime Insights reveal key performance angles and storylines like never before.
Prime Insights expose hidden aspects of the game and predict key moments before they happen. For example, the “Defensive Alerts” feature on TNF is powered by a unique machine learning model that analyzes thousands of data points across defensive and offensive formations to highlight where pressure is coming from, or should come from. The “Burn Bar” on NASCAR coverage on Prime uses an AI model from Amazon Bedrock that combines live tracking data with in-car telemetry signals. It analyzes the fuel consumption and fuel efficiency of every car in the field to determine which drivers are saving fuel, and which are burning it, to reach the finish line and capture the checkered flag.
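The fuel-strategy reasoning described above boils down to two questions: can a car reach the finish on its current fuel, and is a driver burning less than their baseline? Here is a minimal arithmetic sketch of those two checks, with entirely hypothetical telemetry values and function names (the real feature uses far richer live data and an Amazon Bedrock model):

```python
def fuel_to_finish(fuel_remaining_gal, burn_rate_gal_per_lap, laps_remaining):
    """Projected fuel surplus at the checkered flag.

    A negative result means the car must pit or save fuel.
    Inputs are hypothetical telemetry-derived values.
    """
    needed = burn_rate_gal_per_lap * laps_remaining
    return fuel_remaining_gal - needed

def is_saving_fuel(recent_burn_rates, baseline_rate, tolerance=0.05):
    """Flag a driver as fuel-saving when recent laps burn measurably
    less fuel than the driver's race-long baseline rate."""
    avg_recent = sum(recent_burn_rates) / len(recent_burn_rates)
    return avg_recent < baseline_rate * (1 - tolerance)

# 4.0 gal on board, burning 0.5 gal/lap, 10 laps to go -> 1.0 gal short
surplus = fuel_to_finish(4.0, 0.5, 10)
# Recent laps average 0.43 gal vs a 0.50 gal baseline -> saving fuel
saving = is_saving_fuel([0.42, 0.44, 0.43], baseline_rate=0.50)
print(surplus, saving)
```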
Also powered by AWS with Amazon Bedrock, Rapid Recap is a feature that helps fans quickly catch up on the action after joining an event already in progress. Rapid Recap automatically compiles a complete highlight reel of up to two minutes, then drops fans into the live feed.
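Compiling a recap under a fixed time budget is essentially a selection problem: keep the most important moments that fit in two minutes, then play them in order. This is a hypothetical greedy sketch of that idea (the clip data and scoring are invented; the real system's selection logic is not described in the source):

```python
def compile_recap(highlights, budget_seconds=120):
    """Pick the most important highlight clips that fit the time budget,
    then return them in chronological order for playback.

    `highlights` are hypothetical (start_time, duration, importance) tuples.
    """
    chosen, used = [], 0
    # Greedily take clips in descending importance while they still fit.
    for clip in sorted(highlights, key=lambda c: c[2], reverse=True):
        if used + clip[1] <= budget_seconds:
            chosen.append(clip)
            used += clip[1]
    return sorted(chosen, key=lambda c: c[0]), used

highlights = [
    (100, 50, 0.9),   # touchdown
    (400, 40, 0.7),   # interception
    (700, 60, 0.8),   # long run
    (900, 30, 0.4),   # punt
]
recap, total = compile_recap(highlights)
print(total)
```

A greedy pass like this is only an approximation of the underlying knapsack problem, but it keeps the recap assembly fast enough for a live broadcast setting.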
Figure 2: The AI-powered “Defensive Alerts” feature during a Thursday Night Football broadcast.
4. Make content more accessible
A first-of-its-kind feature, Dialogue Boost analyzes the original audio of a movie or series and uses AI to intelligently identify points where dialogue is hard to hear over background music and effects. The feature then isolates those speech patterns and enhances the audio to make the dialogue clearer. This AI-based approach delivers targeted enhancement of spoken dialogue, instead of the general center-channel amplification common in home theater systems.
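The targeted-versus-general distinction can be illustrated with a toy gain rule: boost only the segments where dialogue sits too close to the background level. This sketch assumes a separated dialogue stem already exists and uses invented per-segment loudness values; it is not Prime Video's actual signal processing:

```python
def boost_dialogue(dialogue_levels, background_levels,
                   min_ratio_db=6.0, gain_db=4.0):
    """Apply gain only where dialogue sits too close to the background.

    Levels are hypothetical per-segment loudness values in dBFS; a
    separated dialogue stem is assumed to be available.
    """
    boosted = []
    for d, b in zip(dialogue_levels, background_levels):
        if d - b < min_ratio_db:   # dialogue buried in music/effects
            boosted.append(d + gain_db)
        else:                      # already intelligible: leave untouched
            boosted.append(d)
    return boosted

# Segment 1 is clear (10 dB above background); 2 and 3 get boosted.
out = boost_dialogue([-20.0, -24.0, -18.0], [-30.0, -26.0, -21.0])
print(out)
```

The key property, mirrored from the description above, is that intelligible segments pass through unchanged rather than the whole channel being amplified.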
To power Dialogue Boost, Prime Video utilizes a variety of AWS services, including Amazon Elastic Container Registry (Amazon ECR), Amazon Elastic Container Service (Amazon ECS), AWS Fargate, Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB, Amazon CloudWatch, and more. Originally launched in English, Dialogue Boost now supports six additional languages: French, Italian, German, Spanish, Portuguese, and Hindi.
Figure 3: Prime Video's Dialogue Boost feature.
5. Improved video understanding
Generative AI has enabled media and entertainment companies to better understand their media assets by extracting metadata and creating vector embeddings, a practice known as video understanding. Prime Video's marketing assets are stored across different systems and may have insufficient metadata, making it difficult for teams to effectively discover, track, validate, analyze, quality-control, and monetize content.
To address this, Prime Video has started using the Guidance for Media2Cloud on AWS, which provides comprehensive media analysis at the frame, shot, scene, and audio levels. The guidance helps enrich asset metadata (celebrity recognition, on-screen text, content moderation, mood detection, transcription, and more). Powered by Amazon Bedrock, Amazon Nova, Amazon Rekognition, and Amazon Transcribe, Media2Cloud enables faster, more accurate video understanding to enhance content management, search capabilities, and audience engagement.
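The enrichment step described above amounts to merging outputs from several analyzers into one searchable record per asset. Here is a hypothetical sketch of that merge: the detection input imitates the shape of Amazon Rekognition's video label output, while the record layout, asset ID, and transcript are invented for illustration:

```python
def enrich_asset(asset_id, label_detections, transcript):
    """Merge visual labels and a transcript into one searchable record.

    `label_detections` mimics the shape of Amazon Rekognition label
    detection results ({"Label": {"Name": ...}, "Timestamp": ms});
    the output record layout itself is hypothetical.
    """
    # Deduplicate label names across timestamps for a compact tag list.
    labels = sorted({d["Label"]["Name"] for d in label_detections})
    return {
        "asset_id": asset_id,
        "labels": labels,
        "transcript": transcript,
        # Flattened text field suitable for full-text or vector indexing.
        "search_text": " ".join(labels) + " " + transcript,
    }

record = enrich_asset(
    "trailer-001",
    [{"Label": {"Name": "Car"}, "Timestamp": 1200},
     {"Label": {"Name": "City"}, "Timestamp": 3400},
     {"Label": {"Name": "Car"}, "Timestamp": 5000}],
    "Coming this summer to Prime Video.",
)
print(record["labels"])
```

A record like this is what would then be handed off to a downstream catalog or MAM system for search and discovery.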
The resulting metadata for Prime Video's media assets is automatically fed into Iconik, a media asset management (MAM) system from an AWS Partner. As a result, Prime Video has enriched hundreds of thousands of assets and increased the discoverability of its marketing archives.
Figure 4: Media2Cloud solution architecture used in Prime Video.
Whether it's personalizing recommendations, catching audiences up, surfacing deeper insights, making content more accessible, or understanding media assets better, generative AI on AWS is essential to Prime Video delivering an outstanding viewing experience.
Learn more about how Prime Video uses generative AI on AWS to improve streaming experiences, and how other companies are benefiting from Amazon Bedrock.
Contact an AWS representative to find out how we can help accelerate your business.

