Automated music video generation using multi-level feature-based segmentation
Abstract
The expansion of the home video market has created a demand for video-editing tools that allow ordinary people to assemble videos from short clips. However, creating a music video, in which a video stream must be synchronized with pre-composed music, still requires professional skills. Because the music and the video are produced in separate environments, even a professional producer usually needs a number of trials to obtain a satisfactory synchronization, something most amateurs are unable to achieve.

Our aim is to extract a sequence of clips from a video automatically and assemble them to match a piece of music. Previous authors [8, 9, 16] have approached this problem by trying to synchronize passages of music with arbitrary frames in each video clip using predefined feature rules. However, each shot in a video is an artistic statement by the video-maker, and we want to retain the coherence of the video-maker's intentions as far as possible.
Publication details
Published in:
Furht, Borko (ed.) (2009) Handbook of multimedia for digital entertainment and arts. Dordrecht: Springer.
Pages: 385–401
DOI: 10.1007/978-0-387-89024-1_17
Full citation:
Yoon, Jong-Chul, Lee, In-Kwon, Byun, Siwoo (2009) "Automated music video generation using multi-level feature-based segmentation", in: B. Furht (ed.), Handbook of multimedia for digital entertainment and arts, Dordrecht: Springer, 385–401.