Persistent Object Tracking by Figure-Ground Segmentation

Open Access
- Author:
- Yin, Zhaozheng
- Graduate Program:
- Computer Science and Engineering
- Degree:
- Doctor of Philosophy
- Document Type:
- Dissertation
- Date of Defense:
- May 08, 2009
- Committee Members:
- Robert T Collins, Dissertation Advisor/Co-Advisor
Robert T Collins, Committee Chair/Co-Chair
Jesse Louis Barlow, Committee Member
Yanxi Liu, Committee Member
David Miller, Committee Member
- Keywords:
- motion segmentation
figure-ground segmentation
feature selection and fusion
tracking failure recovery
object tracking
- Abstract:
- To persistently track objects through changes in appearance and environment, a tracker's object appearance model must be adapted over time. However, adaptation must be done carefully, since background pixels mistakenly incorporated into the object appearance model will contribute to tracker drift. In this thesis, we present a key technique for drift-resistant persistent tracking: figure-ground segmentation. The core idea is that shape-constrained figure-ground segmentation based on multiple local segmentation cues can help avoid drift during adaptive tracking, and can also provide accurate foreground and background data samples (pixels/regions) for feature selection, object modeling and detection. We introduce a figure-ground segmentation system based on a heterogeneous set of segmentation cues, including several novel motion segmentation methods such as forward/backward motion history images and steerable message passing in a 3D random field. Discriminative feature selection and fusion methods are applied to assign classification confidence scores to the different segmentation features. A shape-constrained figure-ground segmentation system is then developed that combines bottom-up and top-down segmentation information. Finally, we provide two tracker-failure recovery approaches for use when a tracker loses its target due to occlusion.
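To give a concrete sense of the motion-segmentation cue named in the abstract, the sketch below implements the classic motion history image (MHI) recursion and a hypothetical forward/backward pairing. The MHI update itself is standard; the `np.minimum` fusion of the two temporal directions is only an illustrative assumption, not the combination rule used in the dissertation.

```python
import numpy as np

def motion_history_image(motion_masks, tau=5):
    """Classic MHI: a pixel that moved in the current frame is set to tau;
    otherwise its previous value decays by 1 toward 0, so recent motion
    stays bright while older motion fades."""
    h = np.zeros(motion_masks[0].shape, dtype=float)
    for mask in motion_masks:
        h = np.where(mask, float(tau), np.maximum(h - 1.0, 0.0))
    return h

def forward_backward_mhi(motion_masks, tau=5):
    """Illustrative forward/backward pairing (assumption, not the thesis's
    exact rule): run the MHI recursion over the frames in both temporal
    directions and keep the pixelwise minimum, so a pixel scores high only
    when motion evidence exists both before and after it in the sequence."""
    fwd = motion_history_image(motion_masks, tau)
    bwd = motion_history_image(motion_masks[::-1], tau)
    return np.minimum(fwd, bwd)

# Toy usage: an object sweeping rightward, one column per frame.
masks = [np.zeros((8, 8), dtype=bool) for _ in range(4)]
for t in range(4):
    masks[t][:, t] = True
fwd = motion_history_image(masks, tau=5)   # brightest at the newest column
both = forward_backward_mhi(masks, tau=5)  # symmetric in time
```

In the forward pass the most recently moved column holds the full value `tau` while earlier columns have decayed, which is what lets an MHI encode both where and roughly when motion occurred from a single image.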