
Machine Learning

Content about machine learning at Prime Video.

The paper, presented at the 2022 Conference on Programming Language Design and Implementation (PLDI), introduced a technique for efficiently computing the difference in cost between two versions of a program.
Simple matrix factorization techniques can be employed to build an accurate and provable clustering algorithm whose performance doesn’t necessarily degrade even when the cluster centers are close together.
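As a rough illustration of the general idea rather than the paper's algorithm, the sketch below clusters synthetic data by computing a rank-k factorization (via SVD) and running k-means in the projected subspace; the data, dimensions, and choice of k-means are assumptions made for the example.

```python
# Minimal sketch (not the paper's algorithm): clustering via low-rank
# matrix factorization. Project the data onto its top-k singular
# directions, then cluster in that subspace; the projection can help
# when cluster centers are closely spaced.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
k, d, n_per = 3, 50, 200

# Synthetic data: k Gaussian clusters in d dimensions.
centers = rng.normal(scale=1.0, size=(k, d))
X = np.vstack([c + 0.5 * rng.normal(size=(n_per, d)) for c in centers])

# Rank-k factorization X ≈ U_k S_k V_k^T; U_k S_k is the projected data.
U, S, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
X_proj = U[:, :k] * S[:k]

labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_proj)
print(np.bincount(labels))  # cluster sizes
```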
Targeted handling of three distinct types of “special events” dramatically reduces the false-alarm rate.
At Chaos Carnival 2022, Olga Hall, Geoff Robinson, and Ali Jalali presented on achieving continuous resilience in DevOps at Prime Video through AI and ML.
In a pilot study, an automated code checker found about 100 possible errors, 80% of which turned out to require correction.
Detectors for block corruption, audio artifacts, and errors in audio-video synchronization are just three of Prime Video’s quality assurance tools.
We developed a real-time, language-adaptive tool that flags misspellings across 24 languages and processes 11,000 subtitle files every month.
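A minimal sketch of what such a check might look like, assuming a simple SRT input and a per-language word list; both are placeholders and this is not the production tool.

```python
# Illustrative sketch only: flag words in an SRT subtitle file that are
# missing from a per-language word list. Word lists are placeholders.
import re

WORD_LISTS = {
    # In practice these would be full dictionaries, one per language.
    "en": {"the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"},
}

TIMESTAMP = re.compile(r"^\d{2}:\d{2}:\d{2},\d{3} --> ")

def flag_misspellings(srt_text: str, lang: str):
    words = WORD_LISTS[lang]
    flagged = []
    for line in srt_text.splitlines():
        line = line.strip()
        if not line or line.isdigit() or TIMESTAMP.match(line):
            continue  # skip blank lines, cue numbers, and timing lines
        for token in re.findall(r"[^\W\d_]+", line):
            if token.lower() not in words:
                flagged.append(token)
    return flagged

sample = "1\n00:00:01,000 --> 00:00:03,000\nThe quickk brown fox\n"
print(flag_misspellings(sample, "en"))  # ['quickk']
```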
Prime Video developed a language-agnostic system to flag and automatically synchronize out-of-sync subtitles.
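The article does not detail the method, but one common way to estimate a constant subtitle offset is to cross-correlate a binary "subtitle on screen" signal with a voice-activity signal derived from the audio. The sketch below illustrates that idea on synthetic intervals; it is not Prime Video's system, and the temporal resolution and inputs are assumptions.

```python
# Hedged sketch: estimate a constant subtitle offset by cross-correlating
# a "subtitle on screen" signal with a voice-activity signal, then report
# the shift to apply to the subtitle timestamps. Inputs are synthetic.
import numpy as np

STEP = 0.1  # temporal resolution in seconds

def activity_signal(intervals, duration):
    """Binary signal that is 1 whenever any (start, end) interval is active."""
    t = np.arange(0, duration, STEP)
    sig = np.zeros_like(t)
    for start, end in intervals:
        sig[(t >= start) & (t < end)] = 1.0
    return sig

def estimate_offset(sub_intervals, voice_intervals, duration):
    subs = activity_signal(sub_intervals, duration)
    voice = activity_signal(voice_intervals, duration)
    corr = np.correlate(voice - voice.mean(), subs - subs.mean(), mode="full")
    lag = np.argmax(corr) - (len(subs) - 1)
    return lag * STEP  # seconds to add to the subtitle timestamps

# Toy example: the subtitles lag the speech by 2 seconds.
voice = [(5, 8), (12, 15), (20, 24)]
subs = [(7, 10), (14, 17), (22, 26)]
print(round(estimate_offset(subs, voice, duration=40), 1))  # ≈ -2.0
```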
Prime Video achieves a 99.4% F1 score in synchronizing dubbed audio to non-dubbed audio using an innovative, fast, and memory-efficient approach.
Actor identification and localization in movies and TV series seasons can enable deeper engagement with the content. Manually identifying and tagging actors at every time instance in a video is error-prone, as it is a highly repetitive, decision-intensive, and time-consuming task. The goal of this paper is to accurately label as many faces as possible in the video with actor names.
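As an illustration of the labeling task rather than the paper's method, the sketch below assigns actor names by nearest-neighbor search of face embeddings against a gallery of known cast embeddings, leaving low-confidence faces unlabeled. The embeddings, similarity threshold, and cast names are synthetic placeholders; in practice the embeddings would come from a face-recognition model.

```python
# Illustrative sketch: label detected faces via cosine similarity to a
# gallery of known cast embeddings; faces below a threshold stay unknown.
import numpy as np

rng = np.random.default_rng(1)

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Gallery: one reference embedding per actor (synthetic here).
cast = ["Actor A", "Actor B", "Actor C"]
gallery = normalize(rng.normal(size=(len(cast), 128)))

# Faces detected in a frame: two near-matches and one outlier.
faces = normalize(np.vstack([
    gallery[0] + 0.1 * rng.normal(size=128),
    gallery[2] + 0.1 * rng.normal(size=128),
    rng.normal(size=128),
]))

def label_faces(faces, gallery, cast, threshold=0.7):
    sims = faces @ gallery.T            # cosine similarity matrix
    best = sims.argmax(axis=1)
    return [cast[j] if sims[i, j] >= threshold else "unknown"
            for i, j in enumerate(best)]

print(label_faces(faces, gallery, cast))  # ['Actor A', 'Actor C', 'unknown']
```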
We propose a new prototype model for no-reference video quality assessment (VQA) based on the natural statistics of space-time chips of videos. Space-time chips (ST-chips) are a new, quality-aware feature space which we define as space-time localized cuts of video data in directions that are determined by the local motion flow.
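A heavily simplified sketch of the kind of statistics involved: it applies local mean-subtraction and contrast normalization to a synthetic video volume and summarizes axis-aligned space-time blocks. The actual ST-chips are cut along the local motion flow, which is omitted here; block sizes and filter parameters are assumptions for the example.

```python
# Simplified sketch of space-time natural-statistics features: normalize
# a video volume (MSCN-style local contrast normalization) and compute
# per-block statistics. Axis-aligned blocks stand in for motion-directed
# ST-chips.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
video = rng.random((16, 64, 64))  # (frames, height, width), stand-in for real video

# Local mean-subtracted, contrast-normalized coefficients over space-time.
mu = gaussian_filter(video, sigma=1.5)
sigma = np.sqrt(np.maximum(gaussian_filter(video**2, sigma=1.5) - mu**2, 0))
mscn = (video - mu) / (sigma + 1e-3)

def block_features(vol, block=(4, 16, 16)):
    """Sample variance and kurtosis of normalized coefficients per block."""
    feats = []
    t, h, w = vol.shape
    bt, bh, bw = block
    for i in range(0, t - bt + 1, bt):
        for j in range(0, h - bh + 1, bh):
            for k in range(0, w - bw + 1, bw):
                chip = vol[i:i+bt, j:j+bh, k:k+bw].ravel()
                var = chip.var()
                kurt = ((chip - chip.mean())**4).mean() / (var**2 + 1e-12)
                feats.append((var, kurt))
    return np.array(feats)

features = block_features(mscn)
print(features.shape)  # one (variance, kurtosis) pair per space-time block
```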
In this paper, we present a novel, accurate, and efficient method for temporal sync detection between dubbed audio tracks and the corresponding non-dubbed, original-language audio tracks.
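As a toy illustration, not the paper's approach, the sketch below estimates the offset between two audio tracks by cross-correlating their short-time energy envelopes; the sample rate, hop size, and waveforms are assumptions made for the example.

```python
# Hedged sketch: estimate the temporal offset between a dubbed track and
# the original track by cross-correlating short-time RMS energy envelopes.
# The waveforms below are synthetic stand-ins for decoded audio.
import numpy as np

SR = 16_000   # sample rate (Hz)
HOP = 160     # 10 ms envelope resolution

def energy_envelope(wave):
    """Root-mean-square energy per 10 ms hop."""
    n = len(wave) // HOP
    frames = wave[: n * HOP].reshape(n, HOP)
    return np.sqrt((frames**2).mean(axis=1))

def estimate_offset_ms(dubbed, original):
    a = energy_envelope(original)
    b = energy_envelope(dubbed)
    a, b = a - a.mean(), b - b.mean()
    corr = np.correlate(a, b, mode="full")
    lag_frames = np.argmax(corr) - (len(b) - 1)
    return 1000.0 * lag_frames * HOP / SR  # ms to shift the dubbed track

# Toy example: the "dub" is the original delayed by 300 ms plus noise.
rng = np.random.default_rng(0)
original = rng.normal(size=SR * 5) * (np.sin(np.linspace(0, 20, SR * 5)) > 0)
delay = int(0.300 * SR)
dubbed = np.concatenate([np.zeros(delay), original])[: len(original)]
dubbed += 0.05 * rng.normal(size=len(dubbed))
print(round(estimate_offset_ms(dubbed, original)))  # ≈ -300
```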