Optical Flow Based Object Motion Tracking With Cascaded Outlier Rejection
Publisher
Department of Computer Science and Engineering (CSE), Islamic University of Technology (IUT), Board Bazar, Gazipur-1704, Bangladesh
Abstract
Object tracking is one of the main areas of computer vision. Tracking is the process of locating a moving object over time using some kind of sensing device; here an image sensor serves as the input device, and the entire tracking is performed by processing the image information. Tracking an object is not easy because of the limitations of the input device and of the available processing power, and the dynamics and physics of real-world scenes make it harder still. Our tracker was developed to track an object regardless of its features, such as shape, color, or intensity, and under illumination change; it was also designed to cope with fast camera motion. To capture the motion of every pixel between two consecutive frames, we first determine the motion vectors, which we obtain using the Lucas-Kanade method. The problem then becomes identifying the actual motion that moved the tracked object. For this we use global motion estimation from a coarsely sampled motion vector field, and we incorporate a cascaded outlier rejection method, where the outliers are the noisy motion vectors; it is applied after the motion vectors are computed in the first stage and before the global motion is estimated. The system was evaluated on three videos with distinctive characteristics. Compared with state-of-the-art trackers, ours showed very good performance and was sometimes better.
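
To make the pipeline described above concrete, the sketch below strings together the three stages the abstract names: Lucas-Kanade flow computed on a coarse grid of sample points, a simplified outlier-rejection cascade, and a least-squares global motion estimate. This is an illustrative sketch using OpenCV and NumPy only; the grid step, window size, thresholds, the particular two cascade stages, and the affine motion model are assumptions made here for illustration and are not taken from the thesis, whose cascade and motion model may differ.

# Hedged sketch: coarse-grid Lucas-Kanade flow, a simplified outlier-rejection
# cascade, and a least-squares global (affine) motion estimate.
# Grid step, window size, and thresholds below are illustrative assumptions.
import cv2
import numpy as np

def coarse_grid_points(shape, step=16):
    # Regular grid of sample points -> a coarsely sampled motion vector field.
    h, w = shape[:2]
    xs, ys = np.meshgrid(np.arange(step // 2, w, step),
                         np.arange(step // 2, h, step))
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
    return pts.reshape(-1, 1, 2)

def lk_motion_vectors(prev_gray, curr_gray, pts):
    # Pyramidal Lucas-Kanade flow for the grid points; keep only tracked ones.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, pts, None, winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    return pts[ok].reshape(-1, 2), nxt[ok].reshape(-1, 2)

def estimate_affine(src, dst):
    # Least-squares 6-parameter global motion model [[a b tx], [c d ty]].
    n = len(src)
    M = np.zeros((2 * n, 6))
    b = dst.reshape(-1)
    M[0::2, 0:2] = src; M[0::2, 2] = 1   # x' = a*x + b*y + tx
    M[1::2, 3:5] = src; M[1::2, 5] = 1   # y' = c*x + d*y + ty
    p, *_ = np.linalg.lstsq(M, b, rcond=None)
    return p.reshape(2, 3)

def reject_outliers(src, dst, med_thresh=3.0, res_thresh=2.0):
    # Simplified two-stage cascade: (1) drop vectors far from the median flow,
    # (2) fit a provisional affine model and drop vectors with large residuals.
    mv = dst - src
    dev = np.linalg.norm(mv - np.median(mv, axis=0), axis=1)
    keep = dev < med_thresh * (np.median(dev) + 1e-6)
    src, dst = src[keep], dst[keep]
    if len(src) < 3:
        return src, dst
    A = estimate_affine(src, dst)
    pred = src @ A[:, :2].T + A[:, 2]
    keep = np.linalg.norm(dst - pred, axis=1) < res_thresh
    return src[keep], dst[keep]

# Per frame pair:
#   pts = coarse_grid_points(prev_gray.shape)
#   src, dst = lk_motion_vectors(prev_gray, curr_gray, pts)
#   src, dst = reject_outliers(src, dst)
#   A = estimate_affine(src, dst)   # global (camera) motion estimate

In this sketch, subtracting the motion predicted by the fitted global model from the raw Lucas-Kanade vectors would leave the residual motion attributable to the moving object, which is the general idea behind separating camera motion from object motion; the thesis's exact formulation should be taken from its method chapters rather than from this example.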
Description
Supervised by
Dr. Md. Hasanul Kabir,
Co-supervised by
Faisal Ahmed,
Computer Science and Engineering (CSE),
Islamic University of Technology (IUT),
Board Bazar, Gazipur-1704, Bangladesh.
Citation
[1] Q. Wang, F. Chen, W. Xu, and M.-H. Yang, "An experimental comparison of online object tracking algorithms," in Proceedings of SPIE: Image and Signal Processing Track, 2011.
[2] B. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision," in Proceedings of the International Joint Conference on Artificial Intelligence, pp. 674-679, 1981.
[3] A. Azarbayejani and A. Pentland, "Recursive estimation of motion, structure, and focal length," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, no. 6, pp. 562-575, 1995.
[4] H. Grabner and H. Bischof, "On-line boosting and vision," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 260-267, 2006.
[5] H. Grabner, C. Leistner, and H. Bischof, "Semi-supervised on-line boosting for robust tracking," in Proceedings of European Conference on Computer Vision, pp. 234-247, 2008.
[6] S. Stalder, H. Grabner, and L. Van Gool, "Beyond semi-supervised tracking: Tracking should be as simple as detection, but not simpler than recognition," in Proceedings of IEEE Workshop on Online Learning for Computer Vision, 2009.
[7] B. Babenko, M.-H. Yang, and S. Belongie, "Visual tracking with online multiple instance learning," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 983-990, 2009.
[8] Z. Kalal, J. Matas, and K. Mikolajczyk, "P-N learning: Bootstrapping binary classifiers by structural constraints," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 49-56, 2010.
[9] Z. Kalal, J. Matas, and K. Mikolajczyk, "Online learning of robust object detectors during unstable tracking," in Proceedings of IEEE Workshop on Online Learning for Computer Vision, 2009.
[10] Z. Kalal, K. Mikolajczyk, and J. Matas, "Forward-backward error: Automatic detection of tracking failures," in Proceedings of International Conference on Pattern Recognition, 2010.
[11] J. Barron and N. A. Thacker, "Tutorial: Computing 2D and 3D optical flow," Tina Memo 2004-012.
[12] Y. Chen and I. Bajic, "Motion vector outlier rejection cascade for global motion estimation," IEEE Signal Processing Letters, vol. 17, no. 2, pp. 197-200, 2010.
[13] M. A. Fischler and R. C. Bolles, "Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography," Communications of the ACM, vol. 24, no. 6, pp. 381-395, 1981.
[14] Y. Su, M.-T. Sun, and V. Hsu, "Global motion estimation from coarsely sampled motion vector field and the applications," IEEE Transactions on Circuits and Systems for Video Technology, vol. 15, no. 2, pp. 232-242, Feb. 2005.
[15] S. Tubaro and S. Rocca, "Motion field estimators and their application to image interpolation," in Motion Analysis and Image Sequence Processing, M. I. Sezan and R. L. Lagendijk, Eds. Norwell, MA: Kluwer, 1993, pp. 153-187.
[16] D. Farin, "Automatic Video Segmentation Employing Object/Camera Modeling Techniques," Ph.D. thesis, Technische Universiteit Eindhoven, Eindhoven, Netherlands, 2005.
[17] I. Haritaoglu, D. Harwood, and L. Davis, "W4S: A real-time system for detecting and tracking people," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 962-968, 1998.
[18] M. de La Gorce, N. Paragios, and D. Fleet, "Model-based hand tracking with texture, shading and self-occlusions," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2008.
[19] Z. Sun, G. Bebis, and R. Miller, "On-road vehicle detection: A review," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 5, pp. 694-711, 2006.
[20] M. Kim, S. Kumar, V. Pavlovic, and H. Rowley, "Face tracking and recognition with visual constraints in real-world videos," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-8, 2008.
[21] X. Zhou, D. Comaniciu, and A. Gupta, "An information fusion framework for robust shape tracking," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 1, pp. 115-129, 2005.
[22] "MPEG-4 Video Verification Model version 18.0," ISO/IEC JTC1/SC29/WG11, 2001.
[23] F. Dufaux and J. Konrad, "Efficient, robust, and fast global motion estimation for video coding," IEEE Transactions on Image Processing, vol. 9, no. 3, pp. 497-501, Mar. 2000.
[24] Y. T. Tse and R. L. Baker, "Global zoom/pan estimation and compensation for video compression," in Proc. ICASSP '91, Toronto, ON, Canada, May 1991, pp. 2725-2728.
[25] H. Jozawa, K. Kamikura, A. Sagata, H. Kotera, and H. Watanabe, "Two-stage motion compensation using adaptive global MC and local affine MC," IEEE Transactions on Circuits and Systems for Video Technology, vol. 7, no. 2, pp. 75-85, Feb. 1997.