Video Scene Segmentation Via Continuous Video Coherence

In extended video sequences, individual frames are grouped into shots, each defined as a sequence taken by a single camera, and related shots are grouped into scenes, each defined as a single dramatic event captured by a small number of related cameras. This hierarchical structure is deliberately constructed, dictated by the limitations and preferences of the human visual and memory systems. We present three novel high-level segmentation results derived from these considerations, some of which are analogous to those involved in the perception of the structure of music. First and primarily, we derive and demonstrate a method for locating probable scene boundaries by computing shot-to-shot coherence under a model of short-term memory. The detection of local minima in this continuous measure permits robust and flexible segmentation of the video into scenes, without first aggregating shots into clusters. Second, and independently of the first, we derive and demonstrate a one-pass, on-the-fly shot clustering algorithm. Third, we demonstrate partially successful results from applying these two methods to the next higher level of video structure, the theme.
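The coherence measure described above can be illustrated in outline. The sketch below is not the paper's algorithm; it assumes shots are represented as normalized color histograms compared by histogram intersection, and it stands in for the short-term memory model with two hypothetical parameters, a buffer size (memory) and a retention weight (decay). Scene boundaries are then read off as local minima of the resulting curve, as the abstract describes.

import numpy as np

def shot_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Histogram intersection between two normalized shot histograms.
    # An illustrative choice; the paper's feature and similarity may differ.
    return float(np.minimum(a, b).sum())

def coherence_curve(shots, memory=8, decay=0.8):
    # Coherence at the boundary between shot i-1 and shot i: a weighted
    # average of similarities between shots still held in a decaying
    # short-term "memory" buffer before the boundary and the shots that
    # follow it. 'memory' and 'decay' are hypothetical parameters.
    n = len(shots)
    curve = np.zeros(n - 1)
    for i in range(1, n):
        total, weight_sum = 0.0, 0.0
        # Walk back through remembered shots, most recent first.
        for back, j in enumerate(range(i - 1, max(i - 1 - memory, -1), -1)):
            w = decay ** back  # older shots are remembered less
            for k in range(i, min(i + memory, n)):
                total += w * shot_similarity(shots[j], shots[k])
                weight_sum += w
        curve[i - 1] = total / weight_sum if weight_sum else 0.0
    return curve

def scene_boundaries(curve):
    # Scene boundaries lie at local minima of the coherence curve:
    # points where incoming shots recall little of what is in memory.
    # Index i of the curve is the boundary before shot i+1.
    return [i + 1 for i in range(1, len(curve) - 1)
            if curve[i] < curve[i - 1] and curve[i] < curve[i + 1]]

Note that no clustering step is needed: the boundary decision is made directly on the continuous coherence signal, which is the point the abstract emphasizes.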
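The one-pass, on-the-fly shot clustering is likewise only named in the abstract. The following leader-style sketch is one plausible reading, reusing shot_similarity from the sketch above; the merge threshold and the running-mean cluster representative are illustrative choices, not the authors' method.

def online_shot_clusters(shots, threshold=0.7):
    # Each arriving shot joins the most similar existing cluster if the
    # similarity exceeds 'threshold' (a hypothetical value); otherwise it
    # starts a new cluster. Requires only one pass over the shot stream.
    centroids, members = [], []
    for idx, shot in enumerate(shots):
        if centroids:
            sims = [shot_similarity(shot, c) for c in centroids]
            best = int(np.argmax(sims))
            if sims[best] >= threshold:
                members[best].append(idx)
                k = len(members[best])
                # Running mean keeps a single representative per cluster.
                centroids[best] = centroids[best] + (shot - centroids[best]) / k
                continue
        centroids.append(shot.astype(float))
        members.append([idx])
    return members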

By: John R. Kender (Columbia University) and Boon-Lock Yeo

Published in: RC21061 (1997)

This Research Report is not available electronically. Please request a copy from the contact listed below. IBM employees should contact ITIRC for a copy.

Questions about this service can be mailed to reports@us.ibm.com.