In this paper, we propose a new supervised manifold learning approach, supervised preserving projection (SPP), for the depth images of a 3D imaging sensor based on the time-of-flight (TOF) principle. We present a novel manifold perspective for learning the scene information produced by the TOF camera along with its depth images. First, we use local surface patches to approximate the underlying manifold structures represented by the scene information. The fundamental idea is that, because TOF data suffer from nonstatic noise and distance ambiguity, surface patches approximate the local neighborhood structures of the underlying manifold more effectively than individual TOF data points do, and they are robust to the nonstatic noise of TOF data. Second, we propose SPP to preserve the pairwise similarity between local neighboring patches in TOF depth images. Moreover, SPP accomplishes the low-dimensional embedding by incorporating the scene-region class labels that accompany the training samples, and it obtains the predictive mapping by exploiting the local geometrical properties of the dataset. The proposed approach combines the advantages of classical linear and nonlinear manifold learning, and real-time estimates for test samples are obtained from the low-dimensional embedding and the predictive mapping. Experiments show that our approach effectively extracts information from three scenes and is robust to the nonstatic noise of 3D imaging sensor data.
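The general recipe the abstract describes — a supervised, label-aware affinity over neighboring samples, followed by a linear projection that preserves that affinity — can be illustrated with a minimal sketch in the style of supervised locality-preserving projections. This is not the authors' exact SPP formulation (which operates on TOF surface patches); the function name `supervised_lpp`, the neighbor count `k`, and the heat-kernel width `t` are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def supervised_lpp(X, y, n_components=2, k=5, t=1.0):
    """Illustrative supervised locality-preserving projection (not the paper's exact SPP).

    X : (n_samples, n_features) data matrix (e.g., vectorized depth patches).
    y : (n_samples,) class labels (e.g., scene-region labels).
    Returns a (n_features, n_components) projection matrix P; embed with X @ P.
    """
    n = X.shape[0]
    # Pairwise squared Euclidean distances.
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        # k nearest neighbors of sample i, excluding itself.
        nbrs = np.argsort(d2[i])[1:k + 1]
        for j in nbrs:
            if y[i] == y[j]:  # supervised step: connect only same-class neighbors
                w = np.exp(-d2[i, j] / t)  # heat-kernel similarity weight
                W[i, j] = W[j, i] = w
    D = np.diag(W.sum(axis=1))
    L = D - W  # graph Laplacian of the supervised affinity graph
    # Generalized eigenproblem: minimize a^T X^T L X a subject to a^T X^T D X a = 1.
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-8 * np.eye(X.shape[1])  # small ridge for numerical stability
    vals, vecs = eigh(A, B)
    # Eigenvectors with the smallest eigenvalues best preserve local similarity.
    return vecs[:, :n_components]
```

Because the mapping is an explicit linear projection, embedding a new test sample is a single matrix product, which is what makes real-time estimation plausible in such schemes.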
© 2013 Optical Society of America
Original Manuscript: February 22, 2013
Revised Manuscript: June 5, 2013
Manuscript Accepted: June 24, 2013
Published: July 19, 2013
Vol. 8, Iss. 8, Virtual Journal for Biomedical Optics
Yi Jiang, Yong Liu, Yunqi Lei, and Qicong Wang, "Supervised preserving projection for learning scene information based on time-of-flight imaging sensor," Appl. Opt. 52, 5279-5288 (2013)