
Automatic source camera identification using the intrinsic lens radial distortion

Kai San Choi, Edmund Y. Lam, and Kenneth K. Y. Wong


Optics Express, Vol. 14, Issue 24, pp. 11551-11565 (2006)
http://dx.doi.org/10.1364/OE.14.011551


Abstract

Source camera identification refers to the task of matching digital images with the cameras that produced them. This is an important task in image forensics, which in turn is a critical procedure in law enforcement. Unfortunately, few digital cameras are equipped to embed watermarks for this purpose. In this paper, we demonstrate that a high identification accuracy can be achieved by exploiting the intrinsic lens radial distortion of each camera. To reduce manufacturing cost, the majority of digital cameras are equipped with lenses having rather spherical surfaces, whose inherent radial distortion serves as a unique fingerprint in the images. For each image, we extract parameters from aberration measurements, which are then used to train and test a support vector machine classifier. We conduct extensive experiments to evaluate the success rate of source camera identification with five cameras. The results show that this is a viable approach with high accuracy. We also present results on how the error rates change with images captured at various optical zoom levels, since zooming is commonly available in digital cameras.

© 2006 Optical Society of America

1. Introduction

The purpose of this research is to demonstrate that lens aberrations can be exploited for source camera identification. It is generally accepted that the optics of most consumer-level cameras deviate from the ideal pinhole camera model [1, 2, 3]. Among the different kinds of aberrations, lens radial distortion is the most severe. This inherent radial distortion introduces a nonlinear geometric distortion into the images. In this paper, we propose to estimate the lens radial distortion from an image and use it to identify the image's source camera.

Source camera identification is useful in image forensics. With the availability of powerful software, digital images can be manipulated easily even by amateurs, and the alterations may leave no observable traces. This undermines the credibility of digital images presented as news items or as evidence in court. In image forensics, one would therefore like to ascertain the authenticity of a digital image by identifying its source camera. In this paper, we focus on distinguishing between images captured by a limited number of camera models.

The problem of source camera identification can be approached from several directions. An obvious approach is to examine the image file's header. For example, an Exchangeable Image File Format (EXIF) header is attached to JPEG images by most consumer-level cameras, containing information such as the camera model, exposure settings, and the date and time of capture, from which the source camera can be determined. However, this information is fragile: it may be maliciously altered, or discarded during image editing.

Another approach is to exploit differences in image processing methods among camera models. A number of steps are involved in image processing in digital cameras [4], including demosaicing, gamma correction, color processing, white balancing, compression, and storage. The algorithms and their details may vary from one manufacturer to another, so the output image may exhibit certain traits and patterns regardless of the original image content. Kharrazi et al. [5] used these traits to identify the source camera of an image. They proposed to represent an image by a vector of thirty-four features computed from pixel intensities, including the average pixel value, RGB pair correlations, the center of mass of the neighbor distribution, RGB pair energy ratios, wavelet-domain statistics, and a set of Image Quality Metrics (IQM) [6]. A variation of this approach makes specific use of the differences in demosaicking methods among camera models. Most digital cameras employ a color filter array (CFA), in which each element detects only the red, green, or blue intensity; the missing color samples are obtained by interpolating neighboring pixel values. Bayram et al. [7] and Long and Huang [8] proposed that the different interpolation methods used by camera models introduce specific periodic correlations among pixel values, which can be used to identify the source camera of an image.

The third approach uses the noise pattern of the camera sensor. Due to unavoidable defects in the manufacturing process, pattern noise such as pixel non-uniformity, dust on the lens, and dark currents is introduced at the CCD sensor. Lukáš et al. [9] proposed that each CCD sensor has a unique pattern noise which can be used for source camera identification. A Gaussian denoising filter is used to extract the noise pattern from an image, and the reference noise pattern of a camera is prepared by averaging the noise patterns of a number of its images. The source camera of an image is then determined by the correlation between the image's noise pattern and each camera's reference pattern.
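As a rough illustration of this third approach, the Python sketch below extracts a noise residual with a Gaussian filter and compares it against a camera's reference pattern, as described above; the filter width is an assumed value, and the normalized-correlation detector is an illustrative choice rather than the exact procedure of [9].

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=1.0):
    """Pattern-noise estimate: the image minus a Gaussian-denoised copy."""
    return img - gaussian_filter(img, sigma)

def reference_pattern(images):
    """Camera reference pattern: the average residual of many of its images."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def correlation(img, ref):
    """Normalized correlation between an image's residual and a reference."""
    a = noise_residual(img).ravel()
    a = a - a.mean()
    b = ref.ravel() - ref.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

The camera whose reference pattern yields the highest correlation would be declared the source.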

The rest of this paper is organized as follows. In Section 2, we explain the origin of lens radial distortion and describe the method used to measure it in our experiments. In Section 3, we propose an approach that incorporates our lens radial distortion measurements into Kharrazi's feature-based method to increase the classification accuracy. The performance of the identification method is evaluated in Section 4, where the influence of zoom lens settings on the error rates is also examined. Limitations and future work are discussed in Section 5, and the paper is concluded in Section 6.

2. Lens radial distortion

2.1. The imaging system

In a digital camera, light from the scene passes through the lens system, an antialiasing filter, and a color filter array before reaching the sensor [4, 5]. Each light-sensing element of the sensor array integrates the incident light over the whole spectrum to obtain an electric-signal representation of the scene. The electric signal is digitized by an analog-to-digital converter, and the digital signal is then processed by color processing algorithms built into the camera chips, including demosaicing, color correction, white balancing, and gamma correction [13]. Finally, the image is compressed and stored in the camera's memory device. A block diagram of the image processing pipeline is shown in Fig. 1. The imperfect lens system distorts the light from the scene, and owing to limitations in computational complexity this distortion is unlikely to be removed by the later color processing stages in consumer-level digital cameras. We therefore propose to use the lens radial distortion for the source camera identification problem. In the following subsections, we introduce the background and the mathematical models of lens radial distortion.

2.2. Background of lens radial distortion

Owing to their design and manufacturing process, lenses produce aberrations in images. The six major types of aberrations are spherical aberration, coma, astigmatism, field curvature, radial distortion, and chromatic aberration. Among these, lens radial distortion is the most severe, especially in inexpensive wide-angle lenses [3]. In this paper, we focus on lens radial distortion.

Fig. 1. Block diagram of color processing pipeline in digital cameras.

Radial distortion causes straight lines in object space to be rendered as curves on the film or camera sensor. It originates from the fact that the transverse magnification, M_T, is a function of the off-axis distance r, rather than the constant predicted by paraxial theory [14]. In other words, the lens has different focal lengths and magnifications in different areas. Radial distortion deforms the whole image even when every point is in focus. The optical system suffers from pincushion distortion when M_T increases with r, and from barrel distortion when M_T decreases with r. Examples of barrel distortion and pincushion distortion are shown in Fig. 2 and Fig. 3.

Fig. 2. Distortion of a rectangular grid. Left: Undistorted grid. Middle: Grid with barrel distortion. Right: Grid with pincushion distortion.
Fig. 3. A rectangular grid photographed by the Casio camera. The grid exhibits barrel distortion.

2.3. Measuring lens radial distortion

2.3.1. Mathematical model

We adopt a polynomial model that relates the undistorted radius r_u to the distorted radius r_d:

r_u = r_d + k_1 r_d^3 + k_2 r_d^5,     (1)

where the radius is the radial distance sqrt(x^2 + y^2) of a point (x, y) from the center of distortion. Since the image center is a good approximation of the center of distortion [15], we simply take the center of each image as the center of distortion.
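For concreteness, here is a minimal Python sketch of the model in Eq. (1), taking the image center as the center of distortion as described above; the example coordinates and parameter values are arbitrary.

```python
import numpy as np

def undistort_radius(r_d, k1, k2):
    """Eq. (1): undistorted radius from distorted radius."""
    return r_d + k1 * r_d**3 + k2 * r_d**5

def undistort_points(xy, center, k1, k2):
    """Map distorted pixel coordinates to undistorted ones,
    taking the image center as the center of distortion."""
    v = xy - center                        # offsets from the distortion center
    r_d = np.hypot(v[:, 0], v[:, 1])       # distorted radii
    scale = undistort_radius(r_d, k1, k2) / np.where(r_d == 0, 1.0, r_d)
    return center + v * scale[:, None]

# Arbitrary example values: with this convention, k1 > 0 moves off-axis
# points outward, i.e. the captured image exhibited barrel distortion.
pts = np.array([[100.0, 50.0], [750.0, 580.0]])
print(undistort_points(pts, np.array([400.0, 300.0]), k1=1e-7, k2=1e-13))
```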

2.3.2. Overview of Devernay’s straight line method

To find the distortion parameters of a camera, we use Devernay's straight-line method [2]. The method relies on the fundamental property that the projection of every straight line in space onto a pinhole camera is a straight line. Consequently, if we can find a transformation of a radially distorted image under which every straight line in space appears straight in the transformed image, we can estimate the distortion parameters of the image. An iterative process is employed to estimate the distortion parameter k_1: sub-pixel edge detection is first performed on the distorted image, and polygonal approximation is then applied to extract candidate distorted line segments. The distortion error is measured between the distorted segments and their corresponding straight lines, and k_1 is optimized to minimize this error. The optimization is repeated until the relative change in the distortion error falls below a threshold. A Matlab implementation of Devernay's algorithm is publicly available [16]; we modified that program to estimate the radial distortion parameters, k_1 and k_2, for every image under consideration. The details of each step are discussed in the following subsections.

2.3.3. Edge detection method

The first step of Devernay's method is to extract edges from an image. The Canny edge detector is used to obtain the edge magnitude and orientation, after which non-maxima suppression and hysteresis thresholding are applied for edge localization. Non-maxima suppression thins wide contours by selecting the maxima of the edge magnitude perpendicular to the edge orientation, while hysteresis thresholding removes noisy maxima while preserving the continuity of contours. Because the image distortion is sometimes less than a pixel, Devernay suggested extending the edge detection to sub-pixel accuracy.
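A pixel-accuracy stand-in for this step is sketched below using scikit-image, whose canny routine performs the non-maxima suppression and hysteresis thresholding described above (Devernay's sub-pixel refinement is omitted); the file name and threshold values are illustrative.

```python
import numpy as np
from skimage import io, color, feature

img = color.rgb2gray(io.imread("test_image.jpg"))   # hypothetical input file
edges = feature.canny(img, sigma=2.0,
                      low_threshold=0.05, high_threshold=0.15)
ys, xs = np.nonzero(edges)   # edge locations, at pixel accuracy only
```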

2.3.4. Extracting distorted line segments

After edge detection, we need to extract distorted line segments that are most probably straight lines in 3-D space. Devernay proposed a number of thresholds for selecting useful distorted segments. Since some segments may be broken by the edge detector, he proposed joining broken segments whenever the distance between their end points is less than a threshold T_1. He also suggested a threshold on segment length, because short segments are usually noisy.
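The sketch below illustrates both heuristics; the greedy end-to-end joining rule and the numeric values of T_1 and the length threshold are illustrative assumptions, since the paper does not report them.

```python
import numpy as np

def arc_length(seg):
    """Total length of a polyline given as an (n, 2) point array."""
    return float(np.sum(np.linalg.norm(np.diff(seg, axis=0), axis=1)))

def join_and_filter(segments, T1=3.0, min_length=100.0):
    """Join segments whose end points lie within T1 pixels, then
    drop short, noise-prone segments."""
    joined = []
    for seg in segments:
        if joined and np.linalg.norm(joined[-1][-1] - seg[0]) < T1:
            joined[-1] = np.vstack([joined[-1], seg])   # bridge a small gap
        else:
            joined.append(seg)
    return [s for s in joined if arc_length(s) >= min_length]
```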

2.3.5. Measuring distortion error

To measure the curvature of a distorted line segment, Devernay fits a straight line to the points of the segment by least squares. The distortion error is the sum of squared distances from the points to this line: it is zero when the segment is perfectly straight, and grows with the segment's curvature.
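Putting the pieces together, the sketch below measures the distortion error of each segment as just described and searches for the (k_1, k_2) that minimize the total error over all segments. A generic Nelder-Mead search stands in for Devernay's specific iterative update, so this illustrates the idea rather than reproducing the authors' exact procedure.

```python
import numpy as np
from scipy.optimize import minimize

def undistort_points(xy, center, k1, k2):
    """Undistort points via Eq. (1), as in the Section 2.3.1 sketch."""
    v = xy - center
    r = np.hypot(v[:, 0], v[:, 1])
    s = (r + k1 * r**3 + k2 * r**5) / np.where(r == 0, 1.0, r)
    return center + v * s[:, None]

def segment_distortion_error(points):
    """Sum of squared distances from the points to their least-squares
    line; zero exactly when the segment is straight."""
    p = points - points.mean(axis=0)
    normal = np.linalg.svd(p)[2][-1]   # direction of least variance
    return float(np.sum((p @ normal) ** 2))

def estimate_distortion(segments, center):
    """Search for the (k1, k2) that straighten the extracted segments."""
    def total_error(k):
        return sum(segment_distortion_error(undistort_points(s, center, *k))
                   for s in segments)
    return minimize(total_error, x0=[0.0, 0.0], method="Nelder-Mead").x
```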

3. Using lens radial distortion as features

Kharrazi et al. [5] proposed a set of features that capture the photometric footprints left on images by the color processing algorithms. Our lens distortion parameters capture the geometric footprints left on the images by the camera's lens system, and thus serve as complementary features. For each image under consideration, a vector of thirty-six features is extracted: the thirty-four features proposed by Kharrazi et al. plus our two lens radial distortion parameters, k_1 and k_2. Assuming that a collection of images is available for each candidate camera, these vectors are used to train a classifier for distinguishing images originating from a specific camera.
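In code, the combined vector is a simple concatenation; kharrazi_features below is a hypothetical placeholder for an implementation of the thirty-four features of [5], and estimate_distortion is the helper from the Section 2.3.5 sketch.

```python
import numpy as np

def feature_vector(image, segments, center):
    """36-dimensional feature vector: Kharrazi et al.'s 34 photometric
    features plus the two geometric distortion parameters."""
    k1, k2 = estimate_distortion(segments, center)   # geometric footprint
    photometric = kharrazi_features(image)           # hypothetical: 34 values
    return np.concatenate([photometric, [k1, k2]])
```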

4. Experimental results

Four sets of experiments were performed. The first is a feasibility test of using lens radial distortion for image classification. The second shows that our approach yields a statistically significant improvement in accuracy over procedures using only image intensities. The third studies how the proposed features perform with more testing images and more cameras. The fourth studies how the error rates are influenced by changing the focal length of zoom lenses.

4.1. Camera and test images

In our first and second sets of experiments, three cameras were used, all recent models from different manufacturers. The Canon PowerShot A80 and Casio EX-Z4 were used to produce 1600×1200 images, while the Ricoh R2 was used to take 2560×1920 images. The images were taken with no flash, auto-focus, no manual zooming, the best JPEG compression quality, and otherwise default settings. The configurations of the cameras are shown in Table 1. Each camera was used to take 100 images at random around a university campus; samples from our image dataset are shown in Fig. 4. After collecting the dataset, the proposed features were measured for each image.

Table 1. Cameras used in experiments and their properties

Fig. 4. Sample images obtained using the Canon (A80).

4.2. Classification by lens radial distortion only

This experiment is a feasibility test of classifying images from the three camera models by radial distortion alone. We obtained the lens distortion parameters, k_1 and k_2, by the method described in Section 2, and used the SVM classifier available in the LibSVM package [17]. The steps for training and testing the SVM classifier are as follows (a sketch of this protocol is given after the list):

  1. 40 images from each camera were randomly selected to train a classifier.
  2. The remaining images were used to test the classifier.
  3. Steps 1 and 2 were repeated 50 times and the average classification accuracy was recorded.
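The sketch below implements this protocol with scikit-learn's SVC, which wraps the same LibSVM library [17]; the RBF kernel and default regularization are assumptions, as the paper does not report the SVM settings.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def average_accuracy(X, y, trials=50, train_per_class=40, seed=0):
    """Repeat the random train/test split and report the mean accuracy.

    X: feature matrix, here the (k1, k2) pairs of all images.
    y: camera labels, 100 images per camera in this setup."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(trials):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y,
            train_size=train_per_class * len(np.unique(y)),
            stratify=y,                    # 40 training images per camera
            random_state=int(rng.integers(2**31 - 1)))
        clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
        scores.append(clf.score(X_te, y_te))
    return float(np.mean(scores))
```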

The success rates of the classification for the Canon (A80), Casio, and Ricoh are 97.8%, 92%, and 84.8% respectively. The average accuracy is 91.5%, and the corresponding confusion matrix is shown in Table 2.

We obtained the lens distortion parameters, k_1 and k_2, for every image in our database and plotted them in Fig. 5, where the Canon (A80), Casio, and Ricoh are represented by '×', 'o', and '+' respectively. The lens radial distortion parameters are clearly separable into three groups; the outliers in the plot come from images with very short or very few straight lines. The classification results in Table 2 and the scatter plot in Fig. 5 show that it is feasible to identify the source camera of a digital image by its lens radial distortion.

4.3. Classification by lens radial distortion and Kharrazi’s proposed features

In this section, we investigate the improvement in accuracy obtained by adding the radial distortion parameters to Kharrazi's proposed statistics. We evaluated the classification accuracy with and without the radial distortion parameters, using the same training and testing procedures as in Section 4.2. The average accuracy without and with the lens radial distortion parameters is 87.4% and 91.4% respectively, a 4% improvement from our proposed feature. The corresponding confusion matrices of these two experiments are given in Table 3 and Table 4.

Fig. 5. The scatter plot of lens radial distortion parameters, k_1 and k_2, for the Canon (A80) (×), Casio (o), and Ricoh (+) in our experiment. The lens radial distortion parameters are clearly separable into three groups.

Table 2. The confusion matrix for camera identification with 60 testing images by lens radial distortion only.


Table 3. The confusion matrix for camera identification with 60 testing images by features proposed by Kharrazi et al. only.


Table 4. The confusion matrix for camera identification with 60 testing images by lens radial distortion and features proposed by Kharrazi et al.


Table 5. The confusion matrix for camera identification with 100 testing images by lens radial distortion only.


Table 6. The confusion matrix for camera identification with 100 testing images by features proposed by Kharrazi et al. only.


Table 7. The confusion matrix for camera identification with 100 testing images by lens radial distortion and features proposed by Kharrazi et al.


4.4. Effect of optical zoom on classification accuracy

Most consumer digital cameras are equipped with an optical zoom lens, so the images in a dataset may well have been captured at various optical zoom settings. As discussed in Section 2, the lens radial distortion parameters change with focal length, which makes classification by radial distortion more difficult.

Radial distortion is also difficult to calibrate in zoom lenses, which usually go from barrel distortion at the wide end to pincushion distortion at the telephoto end. In this section, we investigate the impact of optical zoom on the reliability of camera identification.

One way of determining the relationship between focal length and radial distortion is to treat each lens configuration as a fixed-focal-length lens and perform the calibration for each configuration [18, 19]. However, this is inefficient because a zoom lens may have many configurations. We therefore calibrate the zoom lens only at intervals of the focal length; it has been suggested that the radial distortion does not vary significantly within an interval [18].

Fig. 6. The scatter plot of lens radial distortion parameters, k_1 and k_2, for the Canon (A80) (×), Casio (o), Ricoh (+), Canon (I55) (∇), and Olympus (Δ) in our experiment. The lens radial distortion parameters are clearly separable into five groups.

Table 8. The confusion matrix for five-camera identification by lens radial distortion only.


A new image dataset was prepared for this experiment. We used the three cameras to take images around the university campus, again with no flash, auto-focus, the best JPEG compression quality, and otherwise default settings. We divided each camera's zoom range into five intervals and took 20 images in each interval, so that 100 images were taken by each camera. After collecting the images, the radial distortion parameters were measured by Devernay's straight-line method of Section 2. The distortion parameters measured over the zoom intervals are plotted in Fig. 7.

Table 9. The confusion matrix for five-camera identification by features proposed by Kharrazi et al. only.


Table 10. The confusion matrix for five-camera identification by lens radial distortion and features proposed by Kharrazi et al.


An SVM classifier was used to evaluate the impact of optical zoom on identification accuracy. We trained the classifier on 8 images randomly selected from each zoom interval of each camera and tested it on the rest; in other words, 40 and 60 images from each camera were used for training and testing respectively. The training and testing process was repeated 10 times and the average accuracy obtained. The corresponding confusion matrix is given in Table 11. The success rates for the Canon (A80), Casio, and Ricoh are 75.9%, 80%, and 86.6% respectively, for an average accuracy of 80.8%.

Fig. 7. The scatter plot of lens radial distortion parameters, k_1 and k_2, for the Canon (A80), Casio, and Ricoh in our experiment, measured from images at various zoom intervals. The first zoom interval represents maximum zoom out (wide), whereas the fifth represents maximum zoom in (tele). The distortion parameters of one camera are clearly separable from those of another in some cases; however, the parameters from the 5th zoom interval of the Canon (A80) overlap considerably with those of the Casio.

Table 11. The confusion matrix for camera identification by lens radial distortion only.


Compared with the classification results of Section 4.2, which used only the lens distortion parameters, the accuracy of this experiment drops by 10.7%. The relatively low accuracy is due to the considerable overlap between the distortion parameters of the 5th zoom interval of the Canon (A80) and those of the Casio. In the overlapping region, the distortion parameters k_1 and k_2 are very close to zero, meaning that the radial distortion in these images is very small; the precision of the radial distortion algorithm may not be able to resolve the difference in distortion between the Canon (A80) and the Casio.

5. Discussions and future work

In Fig. 5 of Section 4.2, we showed a scatter plot of the radial distortion parameters, k_1 and k_2. The outliers in the plot are images with very short lines and an inadequate number of long straight lines. Short straight lines usually contain more noise than useful information about distortion. An example Casio image with only short segments, along with its edge map, is shown in Fig. 8 (top); the estimated k_1 and k_2 are 0.049 and -0.014 respectively. The lack of long edges in this image leads to a wrong estimate of the lens distortion. Apart from the length of the straight lines, the position of a line also affects the estimate. Since radial distortion is a function of the radius from the center of distortion, the farther a line is from the center, the more severe its distortion; a line close to the center is only mildly distorted and provides little useful information for distortion estimation. An example Casio image of this kind is shown in Fig. 8 (bottom). Using only the straight line near the center for distortion estimation, the estimated k_1 and k_2 are 0.037 and 0.023 respectively. The k_1 and k_2 values from both examples are far from those of the majority of Casio images.

Fig. 8. Top: An example image from the Casio and its edge map; only a few short straight lines appear in the edge map. Bottom: Another example image from the Casio and its edge map; only one straight line appears, near the middle of the image. Lines that are short or near the center may not provide useful information for distortion estimation. The k_1 and k_2 values from both examples are far from those of the majority of Casio images in Fig. 5.

From the above discussion, we have two criteria for the straight lines. First, a line should be long enough; in our experiments, we used only lines longer than half of the image height. Second, the farther a line is from the center, the more useful it is, and in general more straight lines provide more information for distortion estimation. When there is at least one straight line fulfilling both criteria, our proposed classifier can be used (a sketch of these selection rules is given below). However, more research and analysis is needed to determine the influence of line length and position on the stability and accuracy of the classifier.
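In the sketch below, the half-image-height length rule is taken from the text, while the numeric radius threshold is an illustrative assumption, since the paper only asks for lines far from the center.

```python
import numpy as np

def arc_length(seg):
    """Total length of a polyline given as an (n, 2) point array."""
    return float(np.sum(np.linalg.norm(np.diff(seg, axis=0), axis=1)))

def useful_segments(segments, image_height, center, min_radius=300.0):
    """Keep segments that are long enough and far enough from the
    distortion center; min_radius (in pixels) is an assumed value."""
    keep = []
    for s in segments:
        long_enough = arc_length(s) >= image_height / 2
        far_enough = np.mean(np.linalg.norm(s - center, axis=1)) >= min_radius
        if long_enough and far_enough:
            keep.append(s)
    return keep
```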

In Section 4.4, we propose to calibrate the relationship between focal length and the radial distortion parameters by sampling over the zoom intervals of a camera. However, it may not be possible to obtain samples at the designated focal-length intervals, because the focal length is determined by the zoom and focus together. Moreover, in most consumer-level digital cameras the focus is controlled automatically and the focus information cannot be retrieved, which may affect the classification accuracy. One solution is to estimate the effective focal length from the image and sample the distortion parameters over intervals of this estimated focal length.

From our experimental results in Section 4.4, images taken by the Canon (A80) in the 5th zoom interval cannot be clearly distinguished from images taken by the Casio. The results could be improved by using a more sophisticated method to estimate the lens distortion from an image. Another solution is to integrate our approach with other methods, such as feature-based identification [5] and sensor pattern noise identification [9], which are less likely to be influenced by the focal length of the lens.

Since camera identification techniques may be used in court, it is worth considering malicious attacks intended to interfere with the identification algorithm, such as adding or removing radial distortion from an image. Using Eq. (1), it is possible to manipulate the distortion parameters of an image by substituting different values of k_1 and k_2. However, such alterations bend the image inward or outward, so the final distorted image is no longer rectangular. Attackers would need to add or discard scene content at the borders to keep the image size the same as the original, which increases the difficulty of the attack. A sketch of such a resampling is given below.
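As an illustration, this sketch resamples a grayscale image so that it exhibits the distortion of Eq. (1) with chosen parameters; the linear interpolation is an implementation choice. The corners map outside the original frame and are filled with a constant, which is precisely the non-rectangular artifact noted above.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def add_radial_distortion(img, k1, k2):
    """Resample a grayscale image so it exhibits the distortion of Eq. (1):
    each output pixel at distorted radius r_d samples the input at r_u."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    r2 = (xx - cx) ** 2 + (yy - cy) ** 2        # squared distorted radius
    scale = 1 + k1 * r2 + k2 * r2 ** 2          # r_u / r_d from Eq. (1)
    src = [cy + (yy - cy) * scale, cx + (xx - cx) * scale]
    return map_coordinates(img, src, order=1, cval=0.0)  # corners fall outside
```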

6. Conclusion

In this paper, we examine the use of lens footprints left on images to identify the source camera of a digital image, and propose to use the lens radial distortion for this purpose. A classifier based on lens radial distortion is built and used to evaluate the effectiveness of this feature. We show that it is feasible to use lens radial distortion to classify images originating from five camera models. We also propose to combine the lens radial distortion parameters with statistics obtained from image intensities, and demonstrate that, compared with procedures using only intensity statistics, our approach yields a statistically significant improvement in accuracy. Since the lens distortion parameters vary with focal length, we also investigate the effectiveness of the parameters k_1 and k_2 on an image dataset captured at various optical zoom settings.

Acknowledgments

The work described in this paper was partially supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project Number HKU 7143/05E).

References and links

1. R. Y. Tsai, "A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses," IEEE Journal of Robotics and Automation 3(4), 323–344 (1987). [CrossRef]
2. F. Devernay and O. Faugeras, "Automatic calibration and removal of distortion from scenes of structured environments," in Investigative and Trial Image Processing, vol. 2567 of Proc. SPIE, pp. 62–67 (1995). [CrossRef]
3. J. Perš and S. Kovačič, "Nonparametric, model-based radial lens distortion correction using tilted camera assumption," in Proceedings of the Computer Vision Winter Workshop 2002, pp. 286–295 (2002).
4. J. Adams, K. Parulski, and K. Spaulding, "Color processing in digital cameras," IEEE Micro 18(6), 20–30 (1998). [CrossRef]
5. M. Kharrazi, H. T. Sencar, and N. Memon, "Blind source camera identification," in IEEE International Conference on Image Processing, pp. 709–712 (2004).
6. I. Avcibas, N. Memon, and B. Sankur, "Steganalysis using image quality metrics," IEEE Transactions on Image Processing 12(2), 221–229 (2003). [CrossRef]
7. S. Bayram, H. T. Sencar, N. Memon, and I. Avcibas, "Source camera identification based on CFA interpolation," in IEEE International Conference on Image Processing, vol. 3, pp. 69–72 (2005).
8. Y. Long and Y. Huang, "Image based source camera identification using demosaicking," in IEEE International Workshop on Multimedia Signal Processing, vol. 3 (2006).
9. J. Lukáš, J. Fridrich, and M. Goljan, "Determining digital image origin using sensor imperfections," in Image and Video Communications and Processing, vol. 5685 of Proc. SPIE, pp. 16–20 (2005).
10. R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification (Wiley, New York, 2001).
11. K. S. Choi, E. Y. Lam, and K. K. Y. Wong, "Source camera identification using footprints from lens aberration," in Digital Photography II, vol. 6069 of Proc. SPIE, pp. 155–162 (2006).
12. K. S. Choi, E. Y. Lam, and K. K. Y. Wong, "Feature selection in source camera identification," in IEEE International Conference on Systems, Man and Cybernetics, pp. 3176–3180 (2006).
13. E. Y. Lam, "Image restoration in digital photography," IEEE Transactions on Consumer Electronics 49(2), 269–274 (2003). [CrossRef]
14. E. Hecht, Optics (Addison-Wesley, San Francisco, California, 2002).
15. B. Tordoff and D. W. Murray, "Violating rotating camera geometry: the effect of radial distortion on self-calibration," in Proc. 15th International Conference on Pattern Recognition, vol. 1, pp. 423–427 (2000).
16. P. D. Kovesi, MATLAB and Octave functions for computer vision and image processing. Software available at http://www.csse.uwa.edu.au/~pk/research/matlabfns/.
17. C.-C. Chang and C.-J. Lin, LIBSVM: a library for support vector machines (2001). Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
18. M. Li and J.-M. Lavest, "Some aspects of zoom lens camera calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence 18(11), 1105–1110 (1996). [CrossRef]

19. Y.-S. Chen, S.-W. Shih, Y.-P. Hung, and C.-S. Fuh, "Camera calibration with a motorized zoom lens," in Proc. 15th International Conference on Pattern Recognition, vol. 4, pp. 495–498 (2000).

OCIS Codes
(080.2720) Geometric optics : Mathematical methods (general)
(100.2000) Image processing : Digital image processing
(100.5010) Image processing : Pattern recognition

History
Original Manuscript: September 13, 2006
Revised Manuscript: November 8, 2006
Manuscript Accepted: November 9, 2006
Published: November 27, 2006

Citation
Kai San Choi, Edmund Y. Lam, and Kenneth K. Y. Wong, "Automatic source camera identification using the intrinsic lens radial distortion," Opt. Express 14, 11551-11565 (2006)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-14-24-11551


