Projector technology is widely used with curved or non-planar screens for virtual reality and for mobile projectors. In the case of large displays, curved screens are most often used in immersive display systems [1]. Much research has been conducted on compensating for the geometric distortion of projected images [2–6]
. If the screen used in a virtual reality display is non-planar, we need to compensate for the geometric distortion by measuring and modeling the screen. Many methods exist to measure 3D screen geometry: some are based on structured light [7], others on binary patterns that require synchronizing the camera and the projector [8], and still others on 2D Gray codes [9].
Shashua and Toelg proposed a theory of the relationship between two perspective views of a quadric surface [10]. Raskar et al. proposed the quadric transfer, a geometric compensation method for images projected on quadric curved screens, and used a GPU vertex shader for its real-time implementation [11]. Emori and Saito proposed a stereo texture-overlay system with an HMD, which warps the projected images adaptively to the surface of the projected object in real time using a quadratic or cubic geometric transformation [12].
When images are projected on a quadric screen and a camera observes the screen, the quadric matrix of the screen is used to compensate for the geometric distortion on the camera image plane. The quadric transfer by Raskar et al. corrects the image distortion using the relationship among the projector, the camera, and the quadric screen; the camera is located at the observer's position. Figure 1 shows an example of a projector-camera system used for the quadric transfer.
Fig. 1 An example of a projector-camera system for the quadric transfer. Projecting a rectangular image onto a curved surface results in a distorted image. To correct the distortion, images can be pre-warped so as to compensate for the curve of the screen. However, a change of the screen results in image distortion.
For mobile projectors, screens and projectors are not anchored stably, so image distortions may arise from relative movement between them after the initial calibration. Consider a particular case: the touch screen of an interactive projector, where correcting the image distortion is critical for accurate interaction. Image warping can be caused by slight movements of the projector or by deformation of the screen.
In this paper, we extend the quadric transfer method using a projector-camera system. When the curved screen moves or its curvature changes, the image observed by the camera is distorted accordingly. To show distortion-free images to an observer even after the screen is altered, we must estimate the changed 3D surface parameters; conventionally, this means measuring the 3D coordinates of the screen points and recalculating the quadric parameters from those 3D positions.
If the parameter changes are small enough that the change of the quadric transfer can be approximated by a first-order Taylor series, we can instead calculate the change of the quadric matrix from the 2D shift of the camera images. We therefore propose a method that compensates for the image distortion using 2D image coordinates instead of measured 3D screen coordinates: it estimates the perturbation of the quadric matrix from 2D measurements of the distorted image. The proposed method is simpler and faster than recalculating the 3D screen matrix, and real-time monitoring of the image distortion is possible when watermarks are employed.
The remainder of this paper is structured as follows: Section 2 describes the quadric matrix of the curved screen and that of the changed screen. The linear approximation of curved screen change is presented in Section 3. Simulations and experimental results are shown in Section 4, and conclusions are presented in the final section.
2. Quadric Transfer and Screen Change
Since the proposed method relies on the quadric transfer, we first describe the compensation of projected image distortion on a curved screen using the quadric transfer proposed by Raskar et al. [11]. The quadric transfer is the mapping between the image coordinates of two views of a quadric curved screen.
It has the form
\[ x' \cong A x \pm \sqrt{x^{T} E x}\; e, \]
where \(x\) represents the 3D coordinates of the first view, \(x'\) the 3D coordinates of the second view, and \(e\) the epipole, the projection center of the first view in the second view. The ± sign indicates whether the screen type is concave or convex; we can determine the sign using one point correspondence. Matrices \(A\) and \(E\) are defined as follows:
\[ A = B - e\,q^{T}, \qquad E = q\,q^{T} - Q_{33}, \]
where \(B\) is a 3 × 3 homography matrix between the two coordinates, and \(Q\) is the quadric matrix of the screen. \(Q_{33}\) and \(q\) are submatrices of \(Q\) as defined in the following equation:
\[ Q = \begin{bmatrix} Q_{33} & q \\ q^{T} & 1 \end{bmatrix}. \]
If a point \(\tilde{X}\) in 3D homogeneous coordinates is on the screen, \(\tilde{X}^{T} Q \tilde{X} = 0\). We can estimate the 4 × 4 quadric matrix of the screen using nine or more point correspondences between the two views.
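To make the mapping concrete, here is a minimal NumPy sketch of applying the transfer to a single point; the function name and the z = 1 normalization convention are our own, and \(A\), \(E\), and \(e\) are assumed to have been estimated already:

```python
import numpy as np

def quadric_transfer(A, E, e, x, sign=1.0):
    """Apply the quadric transfer x' ~ A x +/- sqrt(x^T E x) e to one
    homogeneous image point x of the first view.  `sign` encodes the
    concave/convex choice; the result is scaled back to the z = 1 plane."""
    x = np.asarray(x, dtype=float)
    s = float(x @ E @ x)
    m = A @ x + sign * np.sqrt(max(s, 0.0)) * np.asarray(e, dtype=float)
    return m / m[2]  # equality up to scale: normalize to z = 1
```

With \(E = 0\) the transfer degenerates to the pure homography \(A\), which is a quick sanity check.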
When the quadric screen sways or changes shape, we must find the new quadric matrix of the altered screen. The conventional method uses the 3D coordinates of the screen: they must be measured continuously for real-time compensation, and the quadric matrix must be recalculated each time. In this paper, however, we propose a compensation method for the image distortion that uses the 2D image coordinates observed by a camera and calculates the change of the quadric matrix to correct the quadric transfer.
Let the changed quadric matrix of the curved screen be \(\hat{Q} = Q + \delta Q\). Then the changed 3D coordinates of the second view can be expressed, using the change of the quadric matrix \(\delta Q\), as
\[ \hat{x}' \cong \hat{A} x \pm \sqrt{x^{T} \hat{E} x}\; e = x' + \delta x', \]
where \(\delta x'\) is the change of coordinates in the second view, and \(\hat{A}\) and \(\hat{E}\) are built from \(\hat{Q}\) in the same way that \(A\) and \(E\) are built from \(Q\). We assume that the centers and the orientations of the two views are fixed, so \(B\) and \(e\) are unchanged. The change in the quadric transfer, \(\delta x'\), can be represented as follows. Let us analyze \(\sqrt{x^{T} \hat{E} x}\) first: it can be expressed with the first-order approximation term, ignoring the higher-order remainder term. This linear approximation is shown in the following section. \(\hat{A} x\) is expanded accordingly to derive the change. We will use the above terms in Section 3 to derive the change of the quadric matrix.
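For checking the linearization numerically, the perturbed transfer can be evaluated directly; a hypothetical NumPy helper (not the paper's implementation):

```python
import numpy as np

def perturbed_transfer(A, E, e, dA, dE, x, sign=1.0):
    """Evaluate the quadric transfer with perturbed parameters
    (A + dA, E + dE); comparing its output with the unperturbed
    transfer gives the exact change that Section 3 linearizes."""
    x = np.asarray(x, dtype=float)
    s = float(x @ (E + dE) @ x)
    return (A + dA) @ x + sign * np.sqrt(max(s, 0.0)) * np.asarray(e, dtype=float)
```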
2.1 Quadric Transfer
A camera 3D point \(\tilde{x}_c\) is mapped to a projector 3D point \(\tilde{x}_p\) by the quadric transfer:
\[ \tilde{x}_p \cong A \tilde{x}_c \pm \sqrt{\tilde{x}_c^{T} E\, \tilde{x}_c}\; e, \]
where the sign ≅ denotes equality up to a scale factor for the homogeneous coordinates, and the tilde indicates that the coordinates are homogeneous. That is, the above equation specifies the mapping relationship between the projection ray through \(\tilde{x}_p\) and the camera ray through \(\tilde{x}_c\), as shown in Fig. 2.
Fig. 2 Quadric transfer after screen change.
Let \(x\) be the image position on the z = 1 plane, i.e., the projection of \(\tilde{x}_c\) on the z = 1 camera image plane. Then we define
\[ m = A x \pm \sqrt{x^{T} E x}\; e. \]
Note that \(m\) is not on the z = 1 plane.
2.2 Quadric Transfer after Screen Change
After screen deformation, a projected point on the screen moves from \(X\) to a new position, which is projected to a new point in the second view. The quadric transfer of the changed screen is
\[ \hat{m} = \hat{A} x \pm \sqrt{x^{T} \hat{E} x}\; e. \]
The above equation expresses the mapping between one point on the camera image plane and one point on the projector image plane, as shown in Fig. 2. Let \(\hat{x}'\) be the projection of \(\hat{m}\) on the z = 1 plane. Then, using the scale factor \(h\), \(\hat{m}\) can be represented as
\[ \hat{m} = h\,\hat{x}' = \hat{A} x \pm \sqrt{x^{T} \hat{E} x}\; e. \qquad (1) \]
The geometrical meaning of Eq. (1) is that \(\hat{x}'\) lies at the intersection of the camera ray through \(\hat{x}'\) and the line through \(\hat{A} x\) with direction vector \(e\). The scale parameter \(h\) can be calculated using the minimum mean square error criterion.
Using the above two equations, we can calculate the change of the quadric matrix. A detailed explanation of the method is presented in the next section.
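The minimum mean square error scale has a closed form: minimizing \(\|\hat{m} - h\,\hat{x}'\|^2\) over \(h\) gives \(h = (\hat{x}'^{T}\hat{m})/(\hat{x}'^{T}\hat{x}')\). A sketch with hypothetical names:

```python
import numpy as np

def fit_scale(m_hat, x_hat):
    """Minimum-mean-square-error scale h for m_hat ~ h * x_hat:
    minimizing ||m_hat - h x_hat||^2 yields h = <x_hat, m_hat> / <x_hat, x_hat>."""
    m_hat = np.asarray(m_hat, dtype=float)
    x_hat = np.asarray(x_hat, dtype=float)
    return float(x_hat @ m_hat) / float(x_hat @ x_hat)
```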
3. Change of Quadric Matrix
To simplify the derivation of the perturbation of the screen quadric matrix, let us omit the superscript ^ and use a prime to denote the quantities after the screen change. Then the original and the changed quadric transfer equations are succinctly represented, using the predefined \(m\), by the following equations:
\[ m = A x \pm \sqrt{x^{T} E x}\; e, \qquad (2) \]
\[ m' = A' x \pm \sqrt{x^{T} E' x}\; e. \qquad (3) \]
Subtracting Eq. (2) from Eq. (3) and using the predefined term \(\delta m = m' - m\) yields the following equation:
\[ \delta m = \delta A\, x \pm \left( \sqrt{x^{T} E' x} - \sqrt{x^{T} E x} \right) e. \]
Because the centers and orientations of the two views are fixed, \(B\) and \(e\) do not change, so \(\delta A = -e\,\delta q^{T}\) and \(\delta m\) is parallel to the epipole. The change of the camera image coordinates is represented by \(\delta m = (\delta m_1, \delta m_2, \delta m_3)^{T}\), and the epipole is \(e = (e_1, e_2, e_3)^{T}\). Then we have
\[ \frac{\delta m_1}{e_1} = \frac{\delta m_2}{e_2} = \frac{\delta m_3}{e_3}. \]
Theoretically, these three ratios should be identical; however, they are not the same due to measurement errors. Therefore, we take their arithmetic mean.
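Averaging the component ratios can be sketched as follows; the guard against near-zero epipole components is our own defensive choice, not part of the derivation:

```python
import numpy as np

def mean_component_ratio(dm, e, eps=1e-9):
    """Arithmetic mean of dm_i / e_i over the components where |e_i|
    is large enough to give a numerically stable ratio."""
    dm = np.asarray(dm, dtype=float)
    e = np.asarray(e, dtype=float)
    mask = np.abs(e) > eps
    return float(np.mean(dm[mask] / e[mask]))
```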
To obtain a linear solution for the quadric matrix change, we take the Taylor expansion of \(\sqrt{x^{T} E' x}\):
\[ \sqrt{x^{T} E' x} = \sqrt{x^{T} E x + x^{T} \delta E\, x} \approx \sqrt{x^{T} E x} + \frac{x^{T} \delta E\, x}{2\sqrt{x^{T} E x}}. \]
We assume that \(|x^{T} \delta E\, x| \ll x^{T} E x\) to obtain a linear solution, and we ignore the second-order term in \(\delta E\). Then we obtain Eq. (4):
\[ \delta m \approx \left( -\delta q^{T} x \pm \frac{x^{T} \delta E\, x}{2\sqrt{x^{T} E x}} \right) e. \qquad (4) \]
By omitting the second-order term \(\delta q\, \delta q^{T}\) in Eq. (4), we can find a linear solution: we rearrange the equation as a multiplication of the quadric matrix perturbation parameters and terms built from the projector coordinates, yielding the linear system of Eq. (5).
Since Eq. (5) is linear in the change of the quadric matrix, we can find the perturbation from the change of the camera coordinates and the projector coordinates. The changes of the nine quadric matrix parameters (three in \(\delta q\) and six in the symmetric \(\delta Q_{33}\)) are calculated from nine or more correspondences between the camera and projector images. Thus the parameters of the quadric transfer, \(A\) and \(E\), are corrected using the linear solution for the change of the quadric matrix. With the corrected quadric transfer, we can compensate for the change and the movement of the screen.
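Once a design matrix stacking the coefficients of the nine perturbation parameters has been built from the correspondences (the row construction follows the rearranged linear equation and is not reproduced here), the solve itself is a routine least-squares problem. A sketch with hypothetical names:

```python
import numpy as np

def solve_quadric_perturbation(M, b):
    """Least-squares solution of M p = b, where p stacks the nine
    perturbation parameters (3 entries of delta-q, 6 of the symmetric
    delta-Q33) and M has one row per camera-projector correspondence."""
    p, _res, rank, _ = np.linalg.lstsq(M, b, rcond=None)
    if rank < M.shape[1]:
        raise ValueError("need at least 9 well-conditioned correspondences")
    return p
```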
4. Simulations and Experimental Results
We verify the accuracy of the proposed correction method through simulations and experiments.
4.1. Simulation Results
For the simulation, we use a spherical screen for simplicity. The radius of the sphere is 50 units, and its center is at (−1, 0, 50). Figure 3 shows the simulation setup: a projector on the left side projects a test pattern onto the spherical screen, and the camera on the right side captures the pattern.
Fig. 3 3D plot of projector center (◊), camera center (□), 3D image points (•) and sphere screen. (a) Before screen translation, (b) After screen translation
Fig. 4 Simulated test patterns on the spherical screen captured by a camera. (a) Compensated image using quadric transfer, (b) Distorted image after shift of the sphere, (c) Corrected image using the proposed method.
Figure 4(a) shows the test pattern observed after the quadric transfer. Figure 4(b) is the distorted pattern resulting from translating the sphere 5 units to the left. Figure 4(c) is the pattern corrected using the perturbation of the quadric matrix, estimated from the observed image change between Figs. 4(a) and 4(b). When the width of the pattern is normalized to unit length, the mean absolute difference (MAD) of the position error is 1.3% before correction and 0.09% after the incremental compensation; the error is thus reduced by a factor of about 15.
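The error metric used here can be computed as follows; the helper name and the percentage convention are our assumptions:

```python
import numpy as np

def mad_percent(observed, reference, width=1.0):
    """Mean absolute difference of corresponding point positions,
    as a percentage of the pattern width (unit length after normalization)."""
    d = np.abs(np.asarray(observed, dtype=float) - np.asarray(reference, dtype=float))
    return 100.0 * float(d.mean()) / width
```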
Fig. 5 Simulated mean absolute image position error before and after compensation when the screen moves from –15 to 15.
Figure 5 shows a plot of the mean absolute errors of the test patterns when the screen is shifted along the x axis. As the shift changes from –15 to 15 units, the MAD increases up to 4 pixels; after compensation with the proposed method, however, the MAD is reduced to under 1 pixel.
We compare the calculation time of the conventional method, which computes a new quadric matrix from the 3D positions of the screen, with that of the proposed 2D perturbation correction. The number of multiplications is reduced to about one fifth of the conventional method; however, because the proposed method involves a square root, the computation time is reduced to about one third on a Core2Duo E6600 CPU with an nVidia GeForce 8800GTX GPU. This reduction in computation time enables rapid correction of the image distortion.
4.2. Experimental Results
We used a Flea® miniature IEEE-1394 camera and an InFocus LP600 projector. The resolution of both the camera and the projector is 1024 × 768 pixels. The experimental quadric curved screen has a cylindrical shape. The experimental setup, which works in real time, is shown in Fig. 6.
Fig. 6 Experimental setup.
We adopt SIFT [13] to extract feature points from the real images. We map the projection image using the quadric transfer so that the ideal pattern is displayed on the camera image plane, and we use a GPU pixel shader coded in Cg to generate the pre-warped pattern. Figures 7(a) and 7(b) show the pre-warped image and the compensated image.
Fig. 7 (a) Real transferred image (b) Camera-captured image.
If the screen deforms or moves after the quadric transfer compensation, we need to update the quadric matrix for the changed screen. However, we do not need to calculate a new quadric matrix from 3D screen coordinates; instead, we calculate the perturbation of the quadric matrix from the changes of the 2D image coordinates captured by the camera, using Eq. (5).
Fig. 8 (a) Camera-captured image with distortion after screen change (b) Camera-captured image after compensation using the proposed method.
The distorted camera-captured image after the screen change is shown in Fig. 8(a): the upper left corner of the image is lower than the upper right corner. The image compensated by the proposed method, using the change of the quadric matrix, is shown in Fig. 8(b). As shown in Figs. 7(b) and 8(b), the images corrected for curvature and for translation are almost identical.
5. Conclusion
In this paper, we proposed a compensation method for the geometric distortion caused by a change of a quadric curved screen. The method does not measure a new quadric matrix after the screen change; it estimates the perturbation of the quadric matrix from changes of the 2D image coordinates. Therefore, neither the 3D shape of the screen nor the calculation of a new quadric matrix is required. The proposed method is simpler and faster than calculating 3D screen matrices, enabling more frequent updates. In the future, we plan to use watermarks to monitor the deformation of the screen in real time.
Acknowledgments
This research was partly supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2011-0010378), and by the Human Resource Training Project for Strategic Technology through the Korea Institute for Advancement of Technology (KIAT) funded by the Ministry of Knowledge Economy, the Republic of Korea.
References and links
1. J. van Baar, T. Willwacher, S. Rao, and R. Raskar, “Seamless multi-projector display on curved screens,” Eurographics Workshop on Virtual Environments, 281–286 (2003).
2. R. Raskar, G. Welch, M. Cutts, A. Lake, L. Stesin, and H. Fuchs, “The office of the future: a unified approach to image-based modeling and spatially immersive displays,” SIGGRAPH, 179–188 (1998).
3. R. Yang, M. S. Brown, W. B. Seales, and H. Fuchs, “Geometrically correct imagery for teleconferencing,” in Proceedings of ACM Multimedia, 179–186 (1999).
4. R. Yang and G. Welch, “Automatic and continuous projector display surface calibration using every-day imagery,” in Proceedings of 9th Int. Conf. in Central Europe in Computer Graphics, Visualization, and Computer Vision (2001).
5. S. Webb and C. Jaynes, “The DOME: a portable multi-projector visualization system for digital artifacts,” IEEE Workshop on Emerging Display Technologies (2005).
6. Y. Oyamada and H. Saito, “Focal pre-correction of projected image for deblurring screen image,” IEEE Int. Workshop on Projector-Camera Systems (2007).
7. R. Raskar, M. Brown, R. Yang, W. Chen, G. Welch, H. Towles, B. Seales, and H. Fuchs, “Multi-projector displays using camera-based registration,” in Proceedings of IEEE Visualization, 161–168 (1999).
8. S. Zollmann, T. Langlotz, and O. Bimber, “Passive-active geometric calibration for view-dependent projections onto arbitrary surfaces,” Workshop on Virtual and Augmented Reality of the GI-Fachgruppe AR/VR (2006).
9. S. Jordan and M. Greenspan, “Projector optical distortion calibration using gray code patterns,” IEEE Int. Workshop on Projector-Camera Systems (2010).
10. A. Shashua and S. Toelg, “The quadric reference surface: theory and applications,” Int. J. Comput. Vis. 23(2), 185–198 (1997).
11. R. Raskar, J. van Baar, T. Willwacher, and S. Rao, “Quadric transfer for immersive curved screen displays,” Comput. Graph. Forum 23(3), 451–460 (2004).
12. M. Emori and H. Saito, “Texture overlay onto deformable surface using HMD,” in Proceedings of IEEE Virtual Reality, 221–222 (2004).
13. D. G. Lowe, “Object recognition from local scale-invariant features,” in Proceedings of ICCV, 1150–1157 (1999).