SCENE CUT DETECTION IN SHORT VIDEOS USING PRE-PROCESSING TECHNIQUES AND CONVOLUTIONAL NEURAL NETWORKS
Abstract
This study presents a comprehensive approach to scene cut detection in videos that combines frame extraction with a Convolutional Neural Network (the VGG16 model) and a Color Histogram method, preventing reconstruction of the raw video files. The main dataset contains 167,490 frames collected from various TikTok users for development of the scene cut detection model. The research follows a modular design comprising data preparation, data pre-processing, and model implementation stages, which promotes efficient use of resources and safeguards the data. The results demonstrate that the model performs excellently in recognizing scene cuts: frames containing scene cuts achieved a precision of 85.11%, a recall of 93.96%, and an F1-score of 0.8932, indicating the model's ability to identify true transitions between scenes while keeping the false positive rate low. On frames without scene cuts, the model likewise produced few false positives, highlighting its ability to accurately distinguish between static and transitional scenes.
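To illustrate the Color Histogram component of the approach described above, the following is a minimal sketch of histogram-based cut detection between consecutive frames. It is not the paper's implementation: the bin count, the L1 distance measure, the threshold value, and the synthetic demo frames are all illustrative assumptions, and the VGG16 feature stage is omitted entirely.

```python
# Hypothetical sketch of color-histogram scene cut detection.
# Assumptions (not from the paper): 16 bins per channel, L1 distance,
# threshold 0.5, and synthetic random frames for the demo.
import numpy as np


def color_histogram(frame: np.ndarray, bins: int = 16) -> np.ndarray:
    """Concatenated, normalized per-channel histograms of an RGB frame."""
    hists = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()


def histogram_distance(h1: np.ndarray, h2: np.ndarray) -> float:
    """L1 distance between two normalized histograms; lies in [0, 2]."""
    return float(np.abs(h1 - h2).sum())


def detect_cuts(frames, threshold: float = 0.5):
    """Return indices i where a cut occurs between frames[i-1] and frames[i]."""
    cuts = []
    prev = color_histogram(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        cur = color_histogram(frame)
        if histogram_distance(prev, cur) > threshold:
            cuts.append(i)
        prev = cur
    return cuts


# Synthetic demo: five dark frames followed by five bright frames,
# so the only large histogram jump is at index 5.
rng = np.random.default_rng(0)
dark = [rng.integers(0, 60, (32, 32, 3), dtype=np.uint8) for _ in range(5)]
bright = [rng.integers(180, 255, (32, 32, 3), dtype=np.uint8) for _ in range(5)]
print(detect_cuts(dark + bright))  # -> [5]
```

In a full pipeline, a deep-feature distance (e.g. from VGG16 activations) would typically be combined with this histogram signal to reduce false positives on gradual lighting changes.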
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.