Fast extraction of spatially reduced image sequences from MPEG-2 compressed video

Copyright 1998 Society of Photo-Optical Instrumentation Engineers. This paper was (will be) published in SPIE Proceedings and is made available as an electronic reprint [preprint] with permission of SPIE. Single print or electronic copies for personal use only are allowed. Systematic or multiple reproduction, distribution to multiple locations through an electronic listserver or other electronic means, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are all prohibited. By choosing to view or print this document, you agree to all the provisions of the copyright law protecting it.

The MPEG-2 video standards are targeted for high-quality video broadcast and distribution, and are optimized for efficient storage and transmission. However, it is difficult to process MPEG-2 for video browsing and database applications without first decompressing the video. Yeo and Liu \cite{yeo95e} have proposed fast algorithms for the direct extraction of spatially reduced images from MPEG-1 video. Reduced images have been demonstrated to be effective for shot detection, shot browsing and editing, and temporal processing of video for video presentation and content annotation. In this paper, we develop new tools to handle the extra complexity in MPEG-2 video for extracting spatially reduced images. In particular, we propose new classes of Discrete Cosine Transform (DCT) domain and DCT inverse motion compensation operations for handling the interlaced modes in the different frame types of MPEG-2, and design new and efficient algorithms for generating spatially reduced images of an MPEG-2 video. The algorithms proposed in this paper are fundamental for efficient and effective processing of MPEG-2 video.
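The paper's MPEG-2 algorithms are not reproduced here, but the underlying idea of a spatially reduced ("DC") image can be illustrated. For the 8x8 2-D DCT-II used in MPEG, the DC coefficient equals the block sum divided by 8, so each block's average pixel value, one pixel of the reduced image, is recoverable from the DC coefficient alone without a full inverse DCT. The sketch below assumes that convention; the function names are illustrative, not from the paper.

```python
import math

def dct_dc(block):
    """DC coefficient of the 8x8 2-D DCT-II (MPEG convention):
    F(0,0) = (1/4) * C(0) * C(0) * sum(block) = sum(block) / 8."""
    c0 = 1.0 / math.sqrt(2.0)
    total = sum(sum(row) for row in block)
    return 0.25 * c0 * c0 * total

def dc_image(frame, n=8):
    """Spatially reduced image: one pixel per n-by-n block, equal to the
    block average, obtained from the DC coefficient alone (DC / n)."""
    h, w = len(frame), len(frame[0])
    reduced = []
    for by in range(0, h, n):
        row = []
        for bx in range(0, w, n):
            block = [frame[y][bx:bx + n] for y in range(by, by + n)]
            row.append(dct_dc(block) / n)  # DC/8 == block mean
        reduced.append(row)
    return reduced
```

For intra-coded blocks the DC coefficient is available directly in the bitstream, which is why reduced images can be extracted far faster than by full decompression; the paper's contribution lies in extending this to the predicted and interlaced block types of MPEG-2.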

By: Junehwa Song and Boon-Lock Yeo

Published in: SPIE Proceedings, volume 3312, pages 93-107, 1998

Please obtain a copy of this paper from your local library. IBM cannot distribute this paper externally.

Questions about this service can be mailed to reports@us.ibm.com .