CNN-based audio event recognition for automated violence classification and rating for Prime Video content

By Tarun Gupta, Mayank Sharma, Kenny Qiu, Xiang Hao, and Raffay Hamid

Automated violence detection in Digital Entertainment Content (DEC) typically applies computer vision and natural language processing methods to the visual and textual modalities. These methods struggle to detect violence because of the diversity, ambiguity, and multilingual nature of the data. We therefore introduce an audio-based method to augment existing approaches to violence and rating classification. We develop a generic Audio Event Detector (AED) model on open-source and Prime Video proprietary corpora and use it as a feature extractor. Our feature set comprises a global semantic embedding and sparse local audio-event probabilities extracted from the AED. We demonstrate that this combined global-local view of audio yields the best detection performance. Next, we present a multi-modal detector that fuses several learners across modalities. Our training and evaluation sets are also at least an order of magnitude larger than those in the prior literature. Furthermore, we show that (a) the audio-based approach outperforms other baselines, (b) the benefit of the audio model is more pronounced on global multilingual data than on English data, and (c) the multi-modal model achieves 63% rating accuracy and can backfill the top 90% of Stream-Weighted-Coverage titles in the Prime Video catalog with 88% coverage at 91% accuracy.
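The paper itself does not include code, but a minimal PyTorch sketch may help make the global-local feature view concrete: a CNN audio event detector that emits both a global semantic embedding and per-event probabilities, which a downstream head concatenates for violence classification. All class names, layer sizes, and the number of event classes here are illustrative assumptions, not the authors' architecture.

```python
# A minimal sketch (not the authors' implementation) of the
# global-local audio feature view described above.
import torch
import torch.nn as nn


class AudioEventDetector(nn.Module):
    """CNN over log-mel spectrograms; returns a global semantic embedding
    and audio-event probabilities (the sparse local view)."""

    def __init__(self, n_events: int = 100, embed_dim: int = 512):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool over time and frequency
        )
        self.embed = nn.Linear(128, embed_dim)
        self.event_head = nn.Linear(embed_dim, n_events)

    def forward(self, spec: torch.Tensor):
        # spec: (batch, 1, mel_bins, frames)
        h = self.backbone(spec).flatten(1)
        g = self.embed(h)                       # global semantic embedding
        p = torch.sigmoid(self.event_head(g))   # local event probabilities
        return g, p


class ViolenceClassifier(nn.Module):
    """Concatenates the global and local views for violence detection."""

    def __init__(self, n_events: int = 100, embed_dim: int = 512):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(embed_dim + n_events, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, g: torch.Tensor, p: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.head(torch.cat([g, p], dim=-1)))


def late_fusion(scores: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    """Weighted average of per-modality scores (e.g., audio, visual, text)."""
    return (scores * weights).sum(dim=-1) / weights.sum()
```

A multi-modal detector in this spirit would combine the audio score with the visual- and text-modality learners, for example via the simple weighted late fusion sketched in `late_fusion`; the fusion scheme actually used in the paper may differ.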

For the full paper, see "CNN-based audio event recognition for automated violence classification and rating for Prime Video content" on the Amazon Science website.