Optics Express

  • Editor: C. Martijn de Sterke
  • Vol. 15, Iss. 20 — Oct. 1, 2007
  • pp: 12922–12940
Geometric distortion-invariant watermarking based on Tschebycheff transform and dual-channel detection

Zhang Li, Qian Gong-bin, and Ji Zhen


Optics Express, Vol. 15, Issue 20, pp. 12922-12940 (2007)
http://dx.doi.org/10.1364/OE.15.012922



Abstract

Many proposed image watermarking techniques are sensitive to geometric distortions such as rotation, scaling, and translation. Even a slight geometric distortion can disable a watermark decoder. In this study, a geometric distortion-invariant watermarking technique is designed by utilizing Tschebycheff moments of the original image to estimate the geometric distortion parameters of corrupted watermarked images. The Tschebycheff moments of an original image can be used as a private key for watermark extraction. The embedding process is a closed-loop system that modifies the embedding intensity according to the results of the performance analysis, and the convergence of this closed-loop system is proved. Different from earlier heuristic methods, the optimal blind watermark detector is designed with the introduction of dual-channel detection, combining high-order spectra detection and likelihood detection. Even with a small signal-to-noise ratio (SNR), the detector can still achieve a satisfactory detection probability if there is enough high-order spectra information; when the high-order spectra are small, the dual-channel detection system reduces to a likelihood detection system. The watermark decoder extracts the watermark blindly utilizing independent component analysis (ICA). The computational aspects of the proposed watermarking technique are also discussed in detail. Experimental results demonstrate that the proposed watermarking technique is robust with respect to attacks performed by the popular watermark benchmark, StirMark.

© 2007 Optical Society of America

1. Introduction

Research has been done to deal with the watermark's vulnerability to geometric distortions, and various methods have been proposed [2,3,4], which can be summarized and categorized as follows. The first category is based on distortion inversion: a registration pattern is inserted into the host signal along with the watermark [5], or the watermark is designed with a special structure [6], so that in the watermark detection stage the geometric distortions involved can be identified and measured, and then removed by an inversion process. The second category is based on image normalization [7]. An image can be normalized to a certain position, orientation, and size that are invariant to image translation, rotation, and scaling. The host image is normalized prior to watermark insertion, and the watermarked image is denormalized back to its original appearance after insertion. At the watermark detector, the test image must undergo the same normalization process before detection. The third category is based on invariance properties of certain image features. Image features that have been used for invariant watermarking include Fourier-Mellin coefficients [8], geometric moment invariants [9], and Zernike moments [10].

2. L. Zhang, G.-B. Qian, W.-W. Xiao, and Z. Ji, "Geometric invariant blind image watermarking by invariant Tschebycheff moments," Opt. Express 15, 2251–2261 (2007).
3. P. Dong, J. G. Brankov, N. P. Galatsanos, Y. Yang, and F. Davoine, "Digital watermarking robust to geometric distortions," IEEE Trans. Image Process. 14, 2140–2150 (2005).
4. M. Alghoniemy and A. H. Tewfik, "Geometric invariants in image watermarking," IEEE Trans. Image Process. 13, 145–153 (2004).
5. Y. Xin, S. Liao, and M. Pawlak, "A multibit geometrically robust image watermark based on Zernike moments," in International Conference on Pattern Recognition 4, 861–864 (2004).
6. S. Pereira and T. Pun, "Robust template matching for affine resistant image watermarks," IEEE Trans. Image Process. 9, 1123–1129 (2000).
7. M. Kutter, "Performance improvement of spread spectrum based image watermarking schemes through M-ary modulation," Lect. Notes Comput. Sci. 1728, 238–250 (1999).
8. P. Dong and N. P. Galatsanos, "Affine transform resistant watermarking based on image normalization," in Proceedings of IEEE International Conference on Image Processing 3, 489–492 (2002).
9. P. Bas, J.-M. Chassery, and B. Macq, "Geometrically invariant watermarking using feature points," IEEE Trans. Image Process. 11, 1014–1028 (2002).
10. J. O'Ruanaidh and T. Pun, "Rotation, scale, and translation invariant spread spectrum digital image watermarking," Signal Process. 66, 303–317 (1998).

Moments of orthogonal basis functions, such as Legendre and Zernike polynomials [11], can be used to represent an image by a set of mutually independent descriptors with a minimal amount of information redundancy. However, these moments present several problems, the most important being their inaccuracy due to quantization errors introduced by the discrete approximation of continuous integrals and of the coordinate space. Tschebycheff moments were proposed [12] to address these problems. They are discrete orthogonal moments and present a number of advantages over moments of continuous orthogonal bases [13]. Mukundan's study showed that the implementation of Tschebycheff moments does not involve any numerical approximation, since the basis set is orthogonal in the discrete domain of the image coordinate space. This property makes Tschebycheff moments superior to conventional continuous orthogonal moments in preserving the analytical properties needed to ensure minimal information redundancy in a moment set. Yap also proposed a local watermarking technique based on Krawtchouk moments [14], which is a private method.

11. H. S. Kim and H. K. Lee, "Invariant image watermark using Zernike moments," IEEE Trans. Circuits Syst. Video Technol. 13, 766–775 (2003).
12. R. Mukundan, S. H. Ong, and P. A. Lee, "Image analysis by Tschebycheff moments," IEEE Trans. Image Process. 10, 1357–1364 (2001).
13. R. Mukundan, "Some computational aspects of discrete orthonormal moments," IEEE Trans. Image Process. 13, 1055–1059 (2004).
14. P.-T. Yap and R. Paramesran, "Local watermarks based on Krawtchouk moments," in Proceedings of IEEE Region 10 Conference (TENCON), pp. 73–76 (2002).

Observing the traditional watermark embedding process shown in Fig. 1, it can be seen to be an open-loop system from the viewpoint of system structure. To ensure invisibility, the embedding intensity of traditional watermarking must be limited. The watermarking technique proposed in this paper is a closed-loop system, shown in Fig. 2.

Fig. 1. Traditional watermark embedding system.
Fig. 2. Closed-loop watermark embedding system.

Recently, blind source separation by independent component analysis (ICA) has received increasing attention because of its potential applications in signal processing, such as speech recognition systems, medical signal processing, and telecommunications. ICA is a general-purpose statistical technique that extracts, for a given set of observed data, the linear transformation such that the resulting variables are as statistically independent as possible.

Higher-order spectra retain both the amplitude and the phase information of the Fourier transform of a signal. They are translation-invariant because the linear phase terms cancel in the products of the Fourier coefficients that define them. Functions defined from higher-order spectra can serve as features for pattern recognition and satisfy other desirable invariance properties, such as invariance to scaling, amplification, and rotation. Higher-order spectra are zero for Gaussian noise and thus give such features high noise immunity.

A geometric distortion-invariant watermarking technique is proposed that utilizes the Tschebycheff moments of the original image to estimate the geometric distortion parameters. This estimation method can be used as a preprocess of watermark detection to recover the synchronization of the watermarking system. The watermark is generated randomly, independently of the original image, and embedded by modifying perceptual Tschebycheff moments of the original image. The embedding intensity is modified according to the results of the performance analysis to obtain optimal embedding; the initial embedding intensity can be selected randomly. The convergence of the closed-loop system is proved. An optimum blind watermark detector is designed utilizing dual-channel detection, with the introduction of high-order spectra detection and likelihood detection. ICA is used to extract the watermark blindly. Computational aspects of the proposed watermarking are discussed in detail. Experimental results demonstrate that the proposed watermarking is robust against the watermark benchmark, StirMark.

2. Background

This section describes in detail how to estimate the geometric distortion parameters utilizing one or more Tschebycheff moments of the original image.

2.1 Definition of Tschebycheff moments

Discrete Tschebycheff moments of order p + q of an image f(x, y) are defined as [12]

$$T_{pq}=\frac{1}{\rho(p,N)\,\rho(q,N)}\sum_{x=0}^{N-1}\sum_{y=0}^{N-1}t_{p}(x)\,t_{q}(y)\,f(x,y),\qquad p,q=0,1,2,\ldots,N-1, \tag{1}$$

where

$$\rho(n,N)=\sum_{x=0}^{N-1}\left\{t_{n}(x)\right\}^{2}=\frac{N(N^{2}-1)(N^{2}-2^{2})\cdots(N^{2}-n^{2})}{2n+1}=(2n)!\binom{N+n}{2n+1}.$$

The discrete Tschebycheff polynomials are expressed as

$$t_{n}(x)=n!\sum_{k=0}^{n}(-1)^{n-k}\binom{N-1-k}{n-k}\binom{n+k}{n}\binom{x}{k}. \tag{2}$$

The inverse moment transform can be defined as

$$f(x,y)=\sum_{p=0}^{N-1}\sum_{q=0}^{N-1}T_{pq}\,t_{p}(x)\,t_{q}(y). \tag{3}$$

2.2 Relationship of geometric moments and Tschebycheff moments

If the geometric moments of order p + q of an image f(x, y) are expressed as [15]

$$m_{pq}=\sum_{x=0}^{N-1}\sum_{y=0}^{N-1}x^{p}y^{q}f(x,y), \tag{4}$$

the Tschebycheff moments can be simplified to

$$T_{pq}=\frac{1}{\tilde\rho(p,N)\,\tilde\rho(q,N)}\sum_{i=0}^{p}\sum_{j=0}^{q}c_{i,p,N}\,c_{j,q,N}\,m_{i,j}. \tag{5}$$

It is seen that Tschebycheff moments depend on geometric moments up to the same order. The explicit expressions of the Tschebycheff moments in terms of geometric moments up to 2nd order [for β(n, N) = N^n] are as follows:

$$T_{00}=\frac{m_{00}}{N^{2}},\qquad T_{10}=\frac{6m_{10}+3(1-N)m_{00}}{N(N^{2}-1)},\qquad T_{01}=\frac{6m_{01}+3(1-N)m_{00}}{N(N^{2}-1)},$$
$$T_{20}=\frac{30m_{20}+30(1-N)m_{10}+5(1-N)(2-N)m_{00}}{(N^{2}-1)(N^{2}-2^{2})},\qquad T_{02}=\frac{30m_{02}+30(1-N)m_{01}+5(1-N)(2-N)m_{00}}{(N^{2}-1)(N^{2}-2^{2})},$$
$$T_{11}=\frac{36m_{11}+18(1-N)(m_{10}+m_{01})+9(1-N)^{2}m_{00}}{(N^{2}-1)^{2}}. \tag{6}$$

15. M. K. Hu, "Visual pattern recognition by moment invariants," IRE Trans. Inf. Theory IT-8, 179–187 (1962).
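The expressions in Eq. (6) can be verified numerically. The sketch below folds the scale factor β(n, N) = N^n into the polynomials and norms, and compares moments computed directly from the definition against the geometric-moment formulas on a hypothetical toy image:

```python
import math

def t_poly(n, x, N):
    # discrete Tschebycheff polynomial t_n(x), Eq. (2)
    return math.factorial(n) * sum((-1) ** (n - k) * math.comb(N - 1 - k, n - k)
                                   * math.comb(n + k, n) * math.comb(x, k)
                                   for k in range(n + 1))

def T_scaled(f, N, p, q):
    # T_pq with beta(n, N) = N^n folded in:
    # t~_n(x) = t_n(x)/N^n and rho~(n, N) = rho(n, N)/N^(2n)
    rho = lambda n: math.factorial(2 * n) * math.comb(N + n, 2 * n + 1) / N ** (2 * n)
    s = sum(t_poly(p, x, N) / N ** p * t_poly(q, y, N) / N ** q * f[x][y]
            for x in range(N) for y in range(N))
    return s / (rho(p) * rho(q))

def m(f, N, p, q):
    # geometric moment m_pq, Eq. (4)
    return sum(x ** p * y ** q * f[x][y] for x in range(N) for y in range(N))

# hypothetical toy image
N = 8
f = [[(x * x + 2 * y + 1) % 5 for y in range(N)] for x in range(N)]
# Eq. (6) predictions from geometric moments
T00 = m(f, N, 0, 0) / N ** 2
T10 = (6 * m(f, N, 1, 0) + 3 * (1 - N) * m(f, N, 0, 0)) / (N * (N ** 2 - 1))
T20 = (30 * m(f, N, 2, 0) + 30 * (1 - N) * m(f, N, 1, 0)
       + 5 * (1 - N) * (2 - N) * m(f, N, 0, 0)) / ((N ** 2 - 1) * (N ** 2 - 4))
```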

2.3 Property of geometric transform

Geometric attacks are common techniques applied to images that do not actually remove the embedded watermark itself but destroy the synchronization of the watermark detector. The sequence of transformations applied to an image matters, since a translation followed by a rotation is not necessarily equivalent to the reverse order. Geometric distortions can be written as

$$\begin{bmatrix}x_2\\ y_2\end{bmatrix}=\begin{bmatrix}a & b\\ c & d\end{bmatrix}\begin{bmatrix}x_1\\ y_1\end{bmatrix}+\begin{bmatrix}e\\ f\end{bmatrix}, \tag{7}$$

where (x1, y1) is a pixel of the input image and (x2, y2) is the corresponding pixel of the output image.

Here, some properties that hold for geometric distortions are described:

Property 2: Ratios of triangle areas are preserved. That is, given two sets of noncollinear points {a, b, c} and {d, e, f} (not necessarily distinct from a, b, and c), if T is a geometric distortion, then

$$\frac{\Delta_{abc}}{\Delta_{def}}=\frac{\Delta_{T(a)T(b)T(c)}}{\Delta_{T(d)T(e)T(f)}},$$

where Δ_x is the area of triangle x.

These properties are important for our method to estimate the parameters of geometric distortions.

3. Methodology of geometric distortion parameter estimation by Tschebycheff moments

3.1 Rotation angle estimation

The rotation operator performs a geometric transform that maps the position of a picture element in an input image onto a position in the corresponding output image by rotating it through a user-specified angle θ about the origin. We define f(x′, y′) as the rotated watermarked image; that is, pixel (x′, y′) is obtained by rotating pixel (x, y) by θ degrees. We assume that the watermarked image is rotated about its center, normalized as (0, 0). The rotation operator can be expressed as

$$\begin{bmatrix}x'\\ y'\end{bmatrix}=\begin{bmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{bmatrix}\begin{bmatrix}x\\ y\end{bmatrix}. \tag{8}$$

The zeroth-order moment m00 is rotation-invariant. The relationships of the 2nd-order geometric moments are

$$m'_{20}=\frac{1+\cos 2\theta}{2}\,m_{20}-(\sin 2\theta)\,m_{11}+\frac{1-\cos 2\theta}{2}\,m_{02},\qquad
m'_{02}=\frac{1-\cos 2\theta}{2}\,m_{20}+(\sin 2\theta)\,m_{11}+\frac{1+\cos 2\theta}{2}\,m_{02},$$
$$m'_{11}=\frac{\sin 2\theta}{2}\,m_{20}+(\cos 2\theta)\,m_{11}-\frac{\sin 2\theta}{2}\,m_{02}, \tag{9}$$

where m20, m02, and m11 are the geometric moments of the original image and m′20, m′02, and m′11 are those of the rotated watermarked image. It can easily be deduced that

$$m'_{20}+m'_{02}=m_{20}+m_{02}. \tag{10}$$

That is, m20 + m02 is rotation-invariant. The relationships of the 1st-order Tschebycheff moments are

$$T'_{10}=T_{10}\cos\theta-T_{01}\sin\theta+\frac{3(1-N)N^{2}T_{00}}{N(N^{2}-1)}\left(1-\cos\theta+\sin\theta\right), \tag{11a}$$

$$T'_{01}=T_{10}\sin\theta+T_{01}\cos\theta+\frac{3(1-N)N^{2}T_{00}}{N(N^{2}-1)}\left(1-\cos\theta-\sin\theta\right). \tag{11b}$$

So it can be obtained that

$$T'_{10}\cos\theta+T'_{01}\sin\theta=T_{10}+\frac{3(1-N)N^{2}}{N(N^{2}-1)}\left(\cos\theta+\sin\theta-1\right)T_{00}, \tag{12}$$

where T00, T10 and T′10, T′01 are the Tschebycheff moments of the original and corrupted images, respectively. Set

$$\Delta=\left|T'_{10}\cos\theta+T'_{01}\sin\theta-T_{10}-\frac{3(1-N)N^{2}}{N(N^{2}-1)}\left(\cos\theta+\sin\theta-1\right)T_{00}\right|.$$

θ can be computed by numerical analysis: once θ satisfies Δ < ε, the estimate is obtained. Here ε is a small enough positive value, set to ε = 10⁻⁵ in our experiments.
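A minimal sketch of this numerical search, assuming hypothetical demo values for N and the original moments T00, T10, T01: the corrupted moments are simulated through the rotation relations of Eq. (11), and θ is then recovered by minimizing Δ of Eq. (12) over a coarse grid.

```python
import math

# Hypothetical demo values: image size and first-order moments of the original.
N = 64
T00, T10, T01 = 1.0, 0.3, -0.2
C = 3 * (1 - N) * N**2 / (N * (N**2 - 1))   # coefficient in Eqs. (11)-(12), = -3N/(N+1)

def rotated_moments(theta):
    # Eq. (11): first-order Tschebycheff moments after rotation by theta.
    c, s = math.cos(theta), math.sin(theta)
    T10r = T10 * c - T01 * s + C * T00 * (1 - c + s)
    T01r = T10 * s + T01 * c + C * T00 * (1 - c - s)
    return T10r, T01r

def estimate_angle(T10r, T01r, lo=0.0, hi=1.5, step=1e-4):
    # Minimize Delta(theta) of Eq. (12) on a coarse grid over an assumed interval.
    def delta(theta):
        c, s = math.cos(theta), math.sin(theta)
        return abs(T10r * c + T01r * s - T10 - C * T00 * (c + s - 1))
    n = int((hi - lo) / step)
    return min((lo + i * step for i in range(n + 1)), key=delta)
```

In practice T10r and T01r would come from the received image; the grid estimate can be refined (e.g., by bisection) until Δ < ε.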

3.2 Scaling factor estimation

The scale operator performs a geometric transformation that can be used to shrink or enlarge the size of an image (or part of an image). Scaling can be divided into two categories. One is symmetric scaling, in which the scaling factor is the same in all directions. The other is asymmetric scaling, sometimes called shearing, in which the scaling factors differ between directions. Since the Tschebycheff transform computes Tschebycheff moments over images of the same size in each direction, only symmetric scaling is considered. Suppose f(x/a, y/a) is the scaled watermarked image, f(x, y) is the original image, and a is the scaling factor. The relationship of the geometric moments of order p + q can be expressed as

$$m'_{pq}=a^{p+q+2}m_{pq}. \tag{13}$$

It can be deduced that

$$m'_{00}=a^{2}m_{00}. \tag{14}$$

It can be obtained that

$$\frac{m'_{pq}}{(m'_{00})^{(p+q+2)/2}}=\frac{m_{pq}}{(m_{00})^{(p+q+2)/2}}. \tag{15}$$

We define η_pq = μ_pq/(μ_00)^((p+q+2)/2), which is translation- and scaling-invariant, where μ_pq is the central moment [15]. The original image size is N × N and the scaled watermarked image size is N′ × N′. So the scaling factor can be estimated by

$$a=\sqrt[3]{\frac{N'(N'^{2}-1)T'_{10}-3(1-N')N'^{2}T'_{00}}{N(N^{2}-1)T_{10}-3(1-N)N^{2}T_{00}}}, \tag{16}$$

where T00, T10 and T′00, T′10 are the Tschebycheff moments of the original and corrupted images, respectively.
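Working purely in the moment domain, Eq. (16) can be sketched as follows; all moment values and sizes are hypothetical, and the scaled moments are simulated through Eq. (13). Note that both the numerator and the denominator of Eq. (16) reduce to 6m′10 and 6m10, so their ratio is a³.

```python
# Hypothetical demo: original 128x128 image, scaling factor a = 1.5.
N, a = 128, 1.5
Np = int(a * N)                       # scaled image size N'
m00, m10 = 2.0e6, 1.3e8               # assumed geometric moments of the original
T00 = m00 / N**2                      # Eq. (6)
T10 = (6 * m10 + 3 * (1 - N) * m00) / (N * (N**2 - 1))

m00s, m10s = a**2 * m00, a**3 * m10   # Eq. (13): m'_pq = a^(p+q+2) m_pq
T00s = m00s / Np**2
T10s = (6 * m10s + 3 * (1 - Np) * m00s) / (Np * (Np**2 - 1))

# Eq. (16)
num = Np * (Np**2 - 1) * T10s - 3 * (1 - Np) * Np**2 * T00s   # = 6 m'_10
den = N * (N**2 - 1) * T10 - 3 * (1 - N) * N**2 * T00         # = 6 m_10
a_hat = (num / den) ** (1 / 3)
```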

3.3 Translation parameter estimation

The translate operator performs a geometric transformation that maps the position of each picture element in an input image into a new position in an output image. Under translation, an image element located at (x1, y1) in the original image is shifted to a new position in the corresponding output image by displacing it through a user-specified translation (c, d). The translation operator can be expressed as

$$\begin{cases}x'=x+c\\ y'=y+d,\end{cases} \tag{17}$$

where c and d are the translation parameters in the x and y directions, respectively. The Tschebycheff moments of the original image can be computed as

$$T_{10}=\frac{6\sum_{x=0}^{N-1}\sum_{y=0}^{N-1}x\,f(x,y)+3(1-N)\sum_{x=0}^{N-1}\sum_{y=0}^{N-1}f(x,y)}{N(N^{2}-1)},\qquad
T_{01}=\frac{6\sum_{x=0}^{N-1}\sum_{y=0}^{N-1}y\,f(x,y)+3(1-N)\sum_{x=0}^{N-1}\sum_{y=0}^{N-1}f(x,y)}{N(N^{2}-1)}. \tag{18}$$

The Tschebycheff moments of the translated watermarked image can be expressed as

$$T'_{10}=\frac{6\sum_{x=0}^{N-1}\sum_{y=0}^{N-1}(x+c)f(x,y)+3(1-N)\sum_{x=0}^{N-1}\sum_{y=0}^{N-1}f(x,y)}{N(N^{2}-1)},\qquad
T'_{01}=\frac{6\sum_{x=0}^{N-1}\sum_{y=0}^{N-1}(y+d)f(x,y)+3(1-N)\sum_{x=0}^{N-1}\sum_{y=0}^{N-1}f(x,y)}{N(N^{2}-1)}. \tag{19}$$

So it can be deduced that

$$T'_{10}=\frac{6\sum\sum x\,f(x,y)+3(1-N)\sum\sum f(x,y)}{N(N^{2}-1)}+\frac{6c\sum\sum f(x,y)}{N(N^{2}-1)}=T_{10}+\frac{6cN\,T_{00}}{N^{2}-1}, \tag{20a}$$

$$T'_{01}=\frac{6\sum\sum y\,f(x,y)+3(1-N)\sum\sum f(x,y)}{N(N^{2}-1)}+\frac{6d\sum\sum f(x,y)}{N(N^{2}-1)}=T_{01}+\frac{6dN\,T_{00}}{N^{2}-1}. \tag{20b}$$

From Eq. (20), c and d can be estimated from the Tschebycheff moments:

$$c=\frac{(N^{2}-1)\left(T'_{10}-T_{10}\right)}{6N\,T_{00}},\qquad d=\frac{(N^{2}-1)\left(T'_{01}-T_{01}\right)}{6N\,T_{00}}. \tag{21}$$
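A moment-domain sketch of Eq. (21) with hypothetical values; the translated moments are simulated through Eq. (20):

```python
# Hypothetical demo values.
N = 256
T00, T10, T01 = 0.9, 0.12, -0.05      # assumed moments of the original image
c, d = 7.0, -3.0                      # true translation parameters

T10p = T10 + 6 * c * N * T00 / (N**2 - 1)   # Eq. (20a)
T01p = T01 + 6 * d * N * T00 / (N**2 - 1)   # Eq. (20b)

c_hat = (N**2 - 1) * (T10p - T10) / (6 * N * T00)   # Eq. (21)
d_hat = (N**2 - 1) * (T01p - T01) / (6 * N * T00)
```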

3.4 Combined distortion parameter estimation: scaling and rotation

Suppose (x1, y1), (x′, y′), and (x2, y2) are pixels of the original image, of the scaled image, and of the corrupted watermarked image, respectively, with a the scaling factor. The corrupted image is obtained by scaling the watermarked image by a and then rotating the scaled image by θ degrees counterclockwise. That is,

$$\begin{cases}x_2=x'\cos\theta-y'\sin\theta=x_1 a\cos\theta-y_1 a\sin\theta\\ y_2=x'\sin\theta+y'\cos\theta=x_1 a\sin\theta+y_1 a\cos\theta.\end{cases} \tag{22}$$

Let T′ denote the Tschebycheff moments of the intermediate scaled image (of size N′ × N′) and T″ those of the final corrupted image (rotation preserves the size). Combining Eq. (13) with the rotation relations of Eq. (11) gives

$$\begin{cases}
N'(N'^{2}-1)T'_{10}-3(1-N')N'^{2}T'_{00}=a^{3}\left[N(N^{2}-1)T_{10}-3(1-N)N^{2}T_{00}\right]\\
N'(N'^{2}-1)T'_{01}-3(1-N')N'^{2}T'_{00}=a^{3}\left[N(N^{2}-1)T_{01}-3(1-N)N^{2}T_{00}\right]\\
N'(N'^{2}-1)T''_{10}-3(1-N')N'^{2}T''_{00}=N'(N'^{2}-1)\left(T'_{10}\cos\theta-T'_{01}\sin\theta\right)-3(1-N')N'^{2}T'_{00}\left(\cos\theta-\sin\theta\right)\\
N'(N'^{2}-1)T''_{01}-3(1-N')N'^{2}T''_{00}=N'(N'^{2}-1)\left(T'_{10}\sin\theta+T'_{01}\cos\theta\right)-3(1-N')N'^{2}T'_{00}\left(\sin\theta+\cos\theta\right).
\end{cases} \tag{23}$$

The scaling factor and rotation angle can then be estimated from

$$\begin{cases}a=\sqrt{\dfrac{N'^{2}T''_{00}}{N^{2}T_{00}}}\\[2mm]
N'(N'^{2}-1)\left(T''_{10}\cos\theta+T''_{01}\sin\theta\right)-3(1-N')N'^{2}T''_{00}\left(\cos\theta+\sin\theta\right)=a^{3}\left[N(N^{2}-1)T_{10}-3(1-N)N^{2}T_{00}\right],\end{cases} \tag{24}$$

where the second equation is solved numerically for θ once a is known.

3.5 Combined distortion parameter estimation: scaling and translation

Suppose the corrupted image (of size N′ × N′) is obtained by scaling the watermarked image by a and then translating it by (c, d):

$$\begin{cases}x_2=x_1 a+c\\ y_2=y_1 a+d.\end{cases} \tag{25}$$

Proceeding as above, the parameters can be estimated by

$$a=\sqrt{\frac{N'^{2}T'_{00}}{N^{2}T_{00}}},$$
$$c=\frac{N'(N'^{2}-1)T'_{10}-3(1-N')N'^{2}T'_{00}-a^{3}\left[N(N^{2}-1)T_{10}-3(1-N)N^{2}T_{00}\right]}{6N'^{2}T'_{00}},$$
$$d=\frac{N'(N'^{2}-1)T'_{01}-3(1-N')N'^{2}T'_{00}-a^{3}\left[N(N^{2}-1)T_{01}-3(1-N)N^{2}T_{00}\right]}{6N'^{2}T'_{00}}. \tag{26}$$
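A sketch of Eq. (26) in the moment domain (all moments hypothetical; the corrupted moments are simulated by scaling with a and then translating by (c, d)):

```python
import math

# Hypothetical demo: scale by a, then translate by (c, d).
N, a, c, d = 128, 1.25, 6.0, -4.0
Np = int(a * N)                            # corrupted image size N'
m00, m10, m01 = 1.5e6, 9.0e7, 8.0e7        # assumed original geometric moments

m00c = a**2 * m00                          # Eq. (13)
m10c = a**3 * m10 + c * m00c               # translation adds c * m'_00
m01c = a**3 * m01 + d * m00c

T00 = m00 / N**2
T10 = (6 * m10 + 3 * (1 - N) * m00) / (N * (N**2 - 1))
T01 = (6 * m01 + 3 * (1 - N) * m00) / (N * (N**2 - 1))
T00c = m00c / Np**2
T10c = (6 * m10c + 3 * (1 - Np) * m00c) / (Np * (Np**2 - 1))
T01c = (6 * m01c + 3 * (1 - Np) * m00c) / (Np * (Np**2 - 1))

a_hat = math.sqrt(Np**2 * T00c / (N**2 * T00))    # Eq. (26), first line
den = 6 * Np**2 * T00c
c_hat = (Np * (Np**2 - 1) * T10c - 3 * (1 - Np) * Np**2 * T00c
         - a_hat**3 * (N * (N**2 - 1) * T10 - 3 * (1 - N) * N**2 * T00)) / den
d_hat = (Np * (Np**2 - 1) * T01c - 3 * (1 - Np) * Np**2 * T00c
         - a_hat**3 * (N * (N**2 - 1) * T01 - 3 * (1 - N) * N**2 * T00)) / den
```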

4. Methodology of geometric distortion invariant watermarking

In this section, the proposed geometric invariant watermarking scheme based on Tschebycheff moments and dual channel detection is described.

4.1 Watermark embedding process

The watermark is imperceptibly embedded into the Tschebycheff moments of the original image, and the embedding intensity is modified according to the results of the performance analysis. The embedding steps of the proposed watermarking system are described as follows:

Step 1: Creation of the watermark. The watermark is generated independently of the original image and has the same size as the original image.

Step 2: The watermark embedding process. Sets of low-order Tschebycheff moments T_o = {T^o_m1,n1, …, T^o_mk,nk} and T_w = {T^w_m1,n1, …, T^w_mk,nk} are constructed from the original image f(x, y) and the watermark W, respectively. The watermark is embedded by adjusting the Tschebycheff moments of f(x, y) with

$$\hat T_{m_i,n_i}=T^{o}_{m_i,n_i}+N_{w}(m_i,n_i)\,T^{w}_{m_i,n_i}, \tag{27}$$

where N_w(m_i, n_i) is the embedding intensity. A set of adjusted Tschebycheff moments T̂ = {T̂_m1,n1, …, T̂_mk,nk} is then obtained. The closed-loop watermark embedding process is as follows:

  1. Embed the watermark with an initial intensity N_w0, selected randomly. Suppose the initial step for modification of the embedded intensity is q_0. Set q_0 = ρN_w0, where 0 < ρ < 1. Set k = 1.
  2. Detect whether or not the embedded watermark renders a visible artifact. Both subjective and objective evaluation can be used to estimate the quality of the embedded watermark. Since a subjective evaluation cannot be carried out by a computer automatically, an objective evaluation, the peak signal-to-noise ratio (PSNR), is utilized to describe the difference between the watermarked image y^(k)_i,j and the original image x_i,j, defined as

     $$\mathrm{PSNR}_k=10\log\left\{\frac{x_{\max}^{2}}{N^{-2}\sum_{i=1}^{N}\sum_{j=1}^{N}\left(x_{i,j}-y^{(k)}_{i,j}\right)^{2}}\right\}, \tag{28}$$

  3. where x_max is the maximum luminance of the original image pixels. Suppose the threshold for the PSNR is PSNR_0. If PSNR_k ≥ PSNR_0, the embedded watermark is invisible; otherwise it is visible, i.e., it renders a visible artifact.

     If PSNR_k < PSNR_0, find an integer n_k > 0 such that the watermark embedded with intensity N_wk = N_w(k-1) − n_k q_(k-1) is invisible while the watermark embedded with intensity N_w(k-1) − (n_k − 1)q_(k-1) is visible; N_wk is then the maximal probable embedded intensity. Set q_k = ρq_(k-1), k = k + 1, and go to step 4. If PSNR_k ≥ PSNR_0, find an integer n_k > 0 such that the watermark embedded with intensity N_wk = N_w(k-1) + n_k q_(k-1) is invisible while the watermark embedded with intensity N_w(k-1) + (n_k + 1)q_(k-1) is visible; N_wk is again the maximal probable embedded intensity. Set q_k = ρq_(k-1), k = k + 1.

  4. Calculate the detection probability pd and the false alarm probability pf to test whether the embedded watermark meets the requirements of different applications. Suppose the thresholds of pd and pf are pd0 and pf0, respectively. The relationship between the probabilities and the embedded watermark is shown in Table 1.

Table 1. The relation between the probabilities and the embedded watermark


pd < pd0 or pf > pf0 means that the watermarking does not meet the requirements. Increase the embedded intensity until it does: that is, find an integer m_(k-1) < 0 such that the watermark embedded with intensity N_w(k-1) + m_(k-1)ρq_(k-1) yields pd < pd0 or pf > pf0. If the embedded watermark intensity N_wk = N_w(k-1) + ρm_(k-1)q_(k-1) yields pd ≥ pd0 and pf ≤ pf0, set q_k = ρq_(k-1), k = k + 1, and go to step 2. If the embedded watermark meets the requirements, that is, pd ≥ pd0 and pf ≤ pf0, find an integer m_(k-1) < 0 such that the watermark embedded with intensity N_wk = N_w(k-1) − ρm_(k-1)q_(k-1) yields pd ≥ pd0 and pf ≤ pf0. If the embedded watermark with N_w(k-1) − ρ(m_(k-1) + 1)q_(k-1) yields pd < pd0 or pf > pf0, N_wk is the final value of the embedded intensity of the invisible watermark.
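The control flow of the steps above can be sketched as follows. The PSNR test and the probability test are replaced by hypothetical monotone stand-ins (invisibility holds below NW_VIS, detectability above NW_DET), so this only illustrates the closed-loop search, not the actual image metrics; a feasible intensity band is assumed to exist.

```python
# Hypothetical stand-ins for the two tests of the closed loop.
NW_VIS, NW_DET = 40.0, 12.0

def invisible(nw):
    # stands in for the PSNR_k >= PSNR_0 test of Eq. (28)
    return nw <= NW_VIS

def meets_requirements(nw):
    # stands in for p_d >= p_d0 and p_f <= p_f0 (Table 1)
    return nw >= NW_DET

def embed_intensity(nw0=5.0, rho=0.5, tol=1e-6):
    q = rho * nw0                 # initial step q_0 = rho * N_w0
    nw = nw0
    while q > tol:
        while not meets_requirements(nw):   # P2 process: raise the intensity
            nw += rho * q
        while not invisible(nw):            # P1 process: lower the intensity
            nw -= rho * q
        q *= rho                  # shrink the step; converges since 0 < rho < 1
    return nw
```

With these stand-ins, embed_intensity() settles inside the assumed feasible band [NW_DET, NW_VIS], mirroring the convergence argument that follows.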

From the above, it can be seen that if k is an even number, N_wk can be written as

$$N_{wk}=N_{w0}-\sum_{i=0}^{k/2}\rho^{2(i-1)}q_{0}\left[(n_{2i-1}+1)-\rho(m_{2i-1}+1)\right], \tag{29}$$

$$N_{w,k+1}-N_{wk}=\rho^{k}q_{0}(m_{k}+1). \tag{30}$$

If k is an odd number, N_wk can be written as

$$N_{wk}=N_{w0}-\sum_{i=0}^{(k-1)/2}\rho^{2(i-1)}q_{0}\left[(n_{2i-1}+1)-\rho(m_{2i-1}+1)\right]-\rho^{k-1}q_{0}(n_{k}+1), \tag{31}$$

$$N_{w,k+1}-N_{wk}=\rho^{k}q_{0}(m_{k+1}+1). \tag{32}$$

Since 0 < ρ < 1, it can be concluded that

$$\lim_{k\to\infty}\left(N_{w,k+1}-N_{wk}\right)=0. \tag{33}$$

From Eq. (33), a conclusion can be drawn:

Conclusion 1: The closed-loop watermarking system shown in Fig. 2 converges.

Proof: Suppose the system can be divided into two processes. One tests whether the embedded watermark is invisible (the P1 process); the other tests whether the embedded watermark meets the requirements of the application (the P2 process). The embedded watermark intensity satisfies N_w ∈ (0, N × N), where the size of the image is N × N.

Step 1: Select initial watermark embedded intensity randomly. Suppose the initial step size for modification of the embedded intensity is q 0. Set q 0 = ρN w0, where 0 < ρ < 1, set k = 1.

Step 2: If the P2 process does not meet the requirements, modify the embedded intensity: N_w(k+1) = N_wk + ρq_k. Test the P2 process again; if it still fails, modify the embedded intensity again. Suppose the final embedded intensity N_w(k+1) = N_wk + ρn_k q_k satisfies the P2 process, where n_k is an integer. At this point the P1 process may no longer be satisfied.

Step 3: If the P1 process does not meet the requirements, decrease the intensity step. Set q_(k+1) = ρq_k and N_w(k+1) = N_wk + ρn_k q_k − ρ²q_k. Test the P1 process again. Suppose that after m_k steps the P1 process meets the requirements. Then N_w(k+1) = N_wk + ρ(n_k q_k − ρm_k q_k), with m_k < ⌈1/ρ⌉, where ⌈·⌉ denotes the ceiling function.

Step 4: Repeat Step 2 and Step 3 until both the P1 process and the P2 process meet the requirements. It then holds that

$$N_{w,k+1}=N_{wk}+\rho\left(n_{k}q_{k}-\rho m_{k}q_{k}\right)=N_{w0}+\left[(n_{0}-\rho m_{0})+\rho^{2}(n_{1}-\rho m_{1})+\cdots+\rho^{2k}(n_{k}-\rho m_{k})\right]q_{0}. \tag{34}$$

Set max_k(n_k − ρm_k) = u. Then

$$N_{w,k+1}-N_{wk}=\rho^{2k}(n_{k}-\rho m_{k})q_{0}<\rho^{2k}u\,q_{0}\to 0. \tag{35}$$

It can be concluded that N_w(k+1) − N_wk → 0 as k → ∞. The closed-loop system converges.

Step 5: Perform the inverse Tschebycheff transform to retrieve the watermarked image.

4.2 Watermark detection

Many existing watermark detectors assume that a portion of the watermark is known in advance. In this paper, the detector is assumed to have no information regarding the watermark or the attacks. The watermark detection process first decides whether the watermark is present by dual-channel detection. If the watermark is detected, ICA is utilized to extract it blindly. The extraction steps of the proposed watermarking system are described as follows:

Step 1: Dual-channel detection. The proposed dual-channel watermark detector is shown in Fig. 3. The decision logic of the dual-channel detector is as follows:

  • If both the SNR and the high-order spectra are large enough, both the likelihood channel and the high-order spectra channel will detect the watermark.
  • If the high-order spectra are small while the SNR is large enough, the likelihood channel will detect the watermark while the high-order spectra channel will not detect the watermark.
  • If the SNR is small while the high-order spectra are large enough, the high-order spectra channel will detect the watermark while the likelihood channel will not detect the watermark.
  • If both the SNR and the high-order spectra are small, the likelihood channel and the high-order spectra channel will not detect the watermark.
Fig. 3. Watermark detection with dual-channel detection.

Step 2: Watermark extraction. If a watermark is detected, ICA is utilized to extract the watermark blindly. The ICA process is the core of the watermark detector and is accomplished by the FastICA algorithm [16]. Before applying the ICA model, it is helpful to conduct preprocessing such as centering and whitening to simplify the ICA process. The process of the watermark extractor is described in detail as follows:

16. A. Hyvärinen and E. Oja, "Independent component analysis: a tutorial," in Notes for International Joint Conference on Neural Networks (1999), http://www.cis.hut.fi/projects/ica/.

  • Preprocessing of the test image: centering and whitening. The observed variable x is centered by subtracting the mean vector m = E{x}; this makes x a zero-mean variable and simplifies the ICA algorithm. After estimating the mixing matrix A with the centered data, the estimation is completed by adding the mean vector of the original source signal back to the centered estimates of the source data. The second preprocessing step whitens the observed variables: x is transformed linearly so that the new variable x̃ is white, i.e., its components are uncorrelated and their variances equal unity. Whitening can be computed by eigenvalue decomposition of the covariance matrix E{xx^T} = EDE^T, where E is the orthogonal matrix of eigenvectors of E{xx^T} and D is the diagonal matrix of its eigenvalues. Note that E{xx^T} can be estimated in a standard way from the available samples of x.
  • Perform ICA on the centered and whitened signal; that is, find the separating vector L:
  • Choose an initial (e.g., random) weight vector L; let L+ = E{y g(L^T y)} − E{g′(L^T y)}L and L = L+/∥L+∥, where E(·) denotes the expectation, g(·) is a nonlinear function, and the following choices of g(·) have proved very useful: g1(u) = tanh(a1 u), g2(u) = u exp(−u²/2). If the difference between successive iterates is below the threshold, that is, ∣L+ − L∣ < ε, the process has converged and the cycle terminates; otherwise, iterate again until convergence. The threshold ε can be defined by the user; ε = 10⁻⁶ is used in our experiments. If the result has still not converged after 3000 cycles, the process is forced to terminate and it is concluded that there is no independent component in the corrupted watermarked image.
  • If there are multiple watermarks in the tested image, each extracted watermark must be subtracted before extracting the next one.
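A minimal one-unit FastICA sketch in the spirit of the steps above, using g(u) = tanh(u); the two sources and the mixing matrix are hypothetical stand-ins for the image and watermark components, not actual image data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
# Two independent non-Gaussian sources mixed linearly, x = A s (ICA model).
s = np.vstack([rng.uniform(-1, 1, n),
               np.sign(rng.uniform(-1, 1, n)) * rng.exponential(1.0, n)])
A = np.array([[2.0, 1.0], [1.0, 3.0]])
x = A @ s

# Centering and whitening via E{x x^T} = E D E^T.
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = E @ np.diag(d ** -0.5) @ E.T @ x        # whitened observations

# One-unit iteration: L+ = E{z g(L^T z)} - E{g'(L^T z)} L, then normalize.
L = rng.standard_normal(2)
L /= np.linalg.norm(L)
for _ in range(200):
    u = L @ z
    L_new = (z * np.tanh(u)).mean(axis=1) - (1.0 - np.tanh(u) ** 2).mean() * L
    L_new /= np.linalg.norm(L_new)
    converged = abs(abs(L_new @ L) - 1.0) < 1e-10
    L = L_new
    if converged:
        break
y = L @ z    # one estimated independent component (up to sign and scale)
```

As in the bulleted procedure, the sign/scale ambiguity is inherent to ICA; extracting further components would require deflation (subtracting the found component) before re-running the iteration.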

Step 3: Recover the exact watermark using the secret key from the watermark embedding process.

5. Performance analysis of proposed watermarking process

The performance of the watermarking includes invisibility, robustness, and correct detection, which are measured in this paper by the PSNR between the watermarked image and the original image and by the probabilities of detection and false alarm. The performance analysis of the watermark is done by calculating the probabilities of false alarm and detection based on the distribution of the likelihood function; these probabilities are used to control the embedding process.

5.1 Likelihood channel detection

To do the performance analysis, the detection process can be formulated as a binary hypothesis test.

$$H_{0}:\;Y(i,j)=X(i,j)\quad(\text{watermark absent}),\qquad H_{1}:\;Y(i,j)=X(i,j)+W(i,j)\quad(\text{watermark present}). \tag{36}$$

In order to compute the probabilities of false alarm and detection, the normalized correlation (NC) sim(w, w′) between the original watermark w(n) and the extracted watermark w′(n) is used as the likelihood function:

$$\mathrm{sim}(w,w')=\frac{\sum w(n)\,w'(n)}{\sqrt{\sum w^{2}(n)\sum w'^{2}(n)}}. \tag{37}$$

If sim(w, w′) > η, H1 is accepted. Define the probability of detection as

$$p_{dE}=\mathrm{Prob}\left\{\mathrm{sim}(w,w')\ge\eta\mid H_{1}\right\}, \tag{38}$$

where η is the decision threshold, and define the probability of false alarm as

$$p_{fE}=\mathrm{Prob}\left\{\mathrm{sim}(w,w')\ge\eta\mid H_{0}\right\}. \tag{39}$$

Suppose w(n) and w′(n) take values ±1; then w²(n) = w′²(n) = 1. Denote k(n) = w(n)w′(n); sim(w, w′) can be rewritten as

$$\mathrm{sim}(w,w')=\frac{\sum w(n)\,w'(n)}{N_{w}}=\frac{\sum k(n)}{N_{w}}. \tag{40}$$

Then Eq. (39) can be rewritten as

$$p_{fE}=P\left(\mathrm{sim}(w,w')>\eta\mid H_{0}\right)=P\left(\sum k(n)>N_{w}\eta\mid H_{0}\right). \tag{41}$$

As k(n) ∈ {−1, 1}, Σk(n) can take only the discrete values {−N_w, −N_w + 2, −N_w + 4, …, N_w − 4, N_w − 2, N_w}. Eq. (41) can be rewritten as

$$p_{fE}=P\left(\sum k(n)>N_{w}\eta\mid H_{0}\right)=\sum_{m=\lceil N_{w}(\eta+1)/2\rceil}^{N_{w}}P\left(\sum k(n)=-N_{w}+2m\mid H_{0}\right), \tag{42}$$

where P(Σk(n) = −N_w + 2m | H0) is the probability that the series {k(n)} contains m ones and N_w − m negative ones. Suppose the probability of k(n) = −1 is p0; then

$$P\left(\sum k(n)=-N_{w}+2m\mid H_{0}\right)=\frac{N_{w}!}{m!\,(N_{w}-m)!}\,p_{0}^{\,N_{w}-m}\,(1-p_{0})^{m}. \tag{43}$$

If no watermark is present in the test image, the extracted watermark consists of a series of random, independent, equally probable values from the set {−1, 1}; thus p0 = 0.5. It can be deduced that

$$p_{fE}=\sum_{m=\lceil N_{w}(\eta+1)/2\rceil}^{N_{w}}\frac{N_{w}!}{m!\,(N_{w}-m)!}\,0.5^{N_{w}}. \tag{44}$$
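Eq. (44) is the upper tail of a Binomial(N_w, 1/2) distribution and can be evaluated exactly; a short sketch:

```python
import math

def p_false_alarm(Nw, eta):
    # Eq. (44): sum of binomial terms with m >= ceil(Nw * (eta + 1) / 2).
    m0 = math.ceil(Nw * (eta + 1) / 2)
    return sum(math.comb(Nw, m) for m in range(m0, Nw + 1)) * 0.5 ** Nw
```

Both monotonicity claims of the conclusion that follows can be checked directly with this function.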

For p_fE, a conclusion can be drawn:

Conclusion 2: Increasing the decision threshold η decreases p_fE. Increasing the watermark intensity N_w decreases p_fE for fixed η.

Proof: Mathematical induction is used. Define the function

$$F(N_{w})=\sum_{m=\lceil (N_{w}+1)(\eta+1)/2\rceil}^{N_{w}+1}\frac{(N_{w}+1)!}{m!\,(N_{w}+1-m)!}\,0.5^{N_{w}+1}-\sum_{m=\lceil N_{w}(\eta+1)/2\rceil}^{N_{w}}\frac{N_{w}!}{m!\,(N_{w}-m)!}\,0.5^{N_{w}}. \tag{45}$$

When N_w = 2, F(2) = 0.25 − 0.5 < 0. Suppose that for N_w = k

$$F(k)=\sum_{m=\lceil (k+1)(\eta+1)/2\rceil}^{k+1}\frac{(k+1)!}{m!\,(k+1-m)!}\,0.5^{k+1}-\sum_{m=\lceil k(\eta+1)/2\rceil}^{k}\frac{k!}{m!\,(k-m)!}\,0.5^{k}<0. \tag{46}$$

When N_w = k + 1,

$$F(k+1)=\sum_{m=\lceil (k+2)(\eta+1)/2\rceil}^{k+2}\frac{(k+2)!}{m!\,(k+2-m)!}\,0.5^{k+2}-\sum_{m=\lceil (k+1)(\eta+1)/2\rceil}^{k+1}\frac{(k+1)!}{m!\,(k+1-m)!}\,0.5^{k+1}$$
$$<\sum_{m=\lceil (k+1)(\eta+1)/2\rceil}^{k+1}\frac{(k+1)!}{m!\,(k+1-m)!}\,\frac{k+2}{k+2-m}\,0.5^{k+2}-\sum_{m=\lceil k(\eta+1)/2\rceil}^{k}\frac{k!}{m!\,(k-m)!}\,\frac{k+1}{k+1-m}\,0.5^{k+1}$$
$$<\sum_{m=\lceil (k+1)(\eta+1)/2\rceil}^{k+1}\frac{(k+1)!}{m!\,(k+1-m)!}\,0.5^{k+1}-\sum_{m=\lceil k(\eta+1)/2\rceil}^{k}\frac{k!}{m!\,(k-m)!}\,0.5^{k}<0. \tag{47}$$

So a conclusion can be drawn that if the decision threshold η is fixed, increasing the embedded intensity Nw will decrease the probability of false detection pfE.

The detection probability p_dE can be deduced from the central limit theorem. Under hypothesis H0, the mean of ww′ is E(ww′) = 0 and the variance is D(ww′) = 1/9, so

$$Z=\frac{\sum_{i=1}^{N_{w}}ww'}{\sqrt{N_{w}/9}}=\frac{N_{w}\,\mathrm{sim}(w,w')}{\sqrt{N_{w}/9}}\sim N(0,1);$$

that is, Z obeys the standard normal distribution, and hence sim(w, w′) ~ N(0, 1/(9N_w)). Under hypothesis H1, the mean of ww′ is E(ww′) = 1/3 and the variance is D(ww′) = 4/45, so

$$Z=\frac{\sum_{i=1}^{N_{w}}\left(ww'-\tfrac{1}{3}\right)}{\sqrt{4N_{w}/45}}=\frac{N_{w}\,\mathrm{sim}(w,w')-\tfrac{N_{w}}{3}}{\sqrt{4N_{w}/45}}\sim N(0,1),$$

so that sim(w, w′) obeys the normal distribution N(1/3, 4/(45N_w)). p_dE is defined as

$$p_{dE}=\mathrm{Prob}\left\{\mathrm{sim}(w,w')\ge\eta\mid H_{1}\right\}=\int_{\eta}^{1}\sqrt{\frac{45N_{w}}{8\pi}}\;e^{-\frac{45N_{w}\left(t-1/3\right)^{2}}{8}}\,dt. \tag{48}$$

It can be seen that increasing N_w increases p_dE for a fixed threshold η.
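Eq. (48) reduces to a difference of error functions (the Gaussian N(1/3, 4/(45N_w)) integrated from η to the upper bound 1 of the normalized correlation); a sketch:

```python
import math

def p_detect(Nw, eta):
    # Eq. (48): sim ~ N(1/3, 4/(45*Nw)) under H1, integrated over [eta, 1].
    s2 = math.sqrt(8.0 / (45.0 * Nw))   # sigma * sqrt(2)
    return 0.5 * (math.erf((1.0 - 1.0 / 3.0) / s2)
                  - math.erf((eta - 1.0 / 3.0) / s2))
```

For a threshold η below the H1 mean 1/3, increasing N_w concentrates sim around 1/3 and drives p_dE toward 1, consistent with the claim above.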

5.2 High-order spectra channel detection

Since a traditional likelihood detector is sensitive to noise, another watermark detection channel based on high-order spectra detection is proposed to suppress noise. The detector model utilizing the bispectrum can also be expressed as a hypothesis test:

$$H_{0}:\;B_{y}(w_{1},w_{2})=B_{x}(w_{1},w_{2})\quad(\text{watermark absent}),\qquad
H_{1}:\;B_{y}(w_{1},w_{2})=B_{w}(w_{1},w_{2})+B_{x}(w_{1},w_{2})\quad(\text{watermark present}), \tag{49}$$

where B_y(w1, w2), B_x(w1, w2), and B_w(w1, w2) are the bispectra of the test image, the original image, and the watermark, respectively. Even with a small SNR, a satisfactory detection probability can be obtained if there is enough bispectrum information. For large N (e.g., N > 256), the statistic can be constructed as

$$T=\frac{2\left|B^{(N)}(w_{i},w_{j})\right|^{2}}{\left[\frac{N}{KL^{2}}\,p_{y}(w_{i})\,p_{y}(w_{j})\,p_{y}(w_{i}+w_{j})\right]^{1/2}}. \tag{50}$$

B(N)(wi, wj) is the bispectrum estimated from N points, and py(w) is the power spectrum of the test image. The data are divided into K segments, each containing M points. The statistic obeys a Gaussian distribution [17]. The conditional probability density of {T(i)} can be expressed as

$$p(T\mid H_0)=\frac{1}{(2\pi\sigma^2)^{N/2}}\exp\left[-\sum_{i=1}^{N}\frac{(T(i)-\mu_0)^2}{2\sigma^2}\right],\qquad p(T\mid H_1)=\frac{1}{(2\pi\sigma^2)^{N/2}}\exp\left[-\sum_{i=1}^{N}\frac{(T(i)-\mu_1)^2}{2\sigma^2}\right].$$
(51)

Here μ1 and μ0 are the means under H1 and H0, respectively, and σ² is the variance. The likelihood ratio can be constructed as

$$\lambda(T)=\frac{p(T\mid H_1)}{p(T\mid H_0)}=\exp\left[\frac{\mu_1-\mu_0}{\sigma^2}\sum_{i=1}^{N}T(i)-\frac{N(\mu_1^2-\mu_0^2)}{2\sigma^2}\right]\ \underset{H_0}{\overset{H_1}{\gtrless}}\ \eta.$$
(52)

In order to calculate the probabilities efficiently, the likelihood ratio is rewritten in logarithmic form

$$\sum_{i=1}^{N}T(i)\ \underset{H_0}{\overset{H_1}{\gtrless}}\ \frac{\sigma^2}{\mu_1-\mu_0}\left[\ln\eta+\frac{N(\mu_1^2-\mu_0^2)}{2\sigma^2}\right].$$
(53)

Set $$\tilde T=\frac{1}{N}\sum_{i=1}^{N}T(i)\ \underset{H_0}{\overset{H_1}{\gtrless}}\ \frac{\sigma^2\ln\eta}{N(\mu_1-\mu_0)}+\frac{\mu_1+\mu_0}{2}\triangleq\tilde T_c.$$ Since T̃ is the mean of N Gaussian variables, the false alarm probability can be expressed as

$$p_{fB}=\int_{\tilde T_c}^{\infty}p(\tilde T\mid H_0)\,d\tilde T=\int_{\tilde T_c}^{\infty}\frac{1}{\sqrt{2\pi\sigma^2/N}}\exp\left[-\frac{(\tilde T-\mu_0)^2}{2\sigma^2/N}\right]d\tilde T.$$
(54)

The detection probability pdB can be expressed as

$$p_{dB}=\int_{\tilde T_c}^{\infty}p(\tilde T\mid H_1)\,d\tilde T=\int_{\tilde T_c}^{\infty}\frac{1}{\sqrt{2\pi\sigma^2/N}}\exp\left[-\frac{(\tilde T-\mu_1)^2}{2\sigma^2/N}\right]d\tilde T.$$
(55)
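The threshold test of Eqs. (52)–(53) reduces to comparing the sample mean of T(i) against the threshold T̃c. A minimal sketch (the function and argument names are ours; the means, variance, and threshold are assumed known):

```python
import math
from statistics import fmean

def likelihood_decision(t_samples, mu0, mu1, sigma2, eta):
    """Gaussian mean test of Eqs. (52)-(53): compare the sample mean of
    T(i) with the threshold T_c and decide between H0 and H1.
    Assumes mu1 > mu0 (the watermark shifts the statistic upward)."""
    n = len(t_samples)
    t_bar = fmean(t_samples)  # (1/N) * sum_i T(i)
    t_c = sigma2 * math.log(eta) / (n * (mu1 - mu0)) + (mu1 + mu0) / 2
    return "H1" if t_bar >= t_c else "H0"
```

With η = 1 the log term vanishes and the threshold is simply the midpoint (μ0 + μ1)/2, which makes the decision rule easy to verify by hand.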

Since the bispectrum estimates of the received image are independent, the central limit theorem gives that the bispectrum test statistic is approximately distributed as $N(\sqrt{4N-1},\,1)$ under H0 and as $N\left(\sqrt{2N(1+B)},\,(N+B)/N^2\right)$ under H1. Setting pfB = α,

$$p_{fB}=1-\operatorname{erf}\left(\sqrt{2\eta}-\sqrt{4N-1}\right)=\alpha.$$
(56)

So η can be determined by the required pfB

$$\eta=\frac{1}{2}\left(\operatorname{erf}^{-1}(1-\alpha)+\sqrt{4N-1}\right)^2.$$
(57)

pdB can be expressed as

$$p_{dB}=1-\operatorname{erf}\left(\frac{\sqrt{2\eta}-\sqrt{2N(1+B)}}{\sqrt{N+B}/N}\right).$$
(58)

From the computed probabilities of false alarm and detection, it can be seen that they do not depend on the SNR. Even at a very low SNR, the detector gives good performance.
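Equations (56)–(57) form a round trip: choosing η from a target false-alarm rate α and substituting back should recover α. A stdlib-only sketch (function names are ours; erf⁻¹ is expressed through the inverse normal CDF since the `math` module lacks it):

```python
import math
from statistics import NormalDist

def erfinv(y):
    # erf(x) = 2*Phi(x*sqrt(2)) - 1  =>  erf^{-1}(y) = Phi^{-1}((1+y)/2) / sqrt(2)
    return NormalDist().inv_cdf((1.0 + y) / 2.0) / math.sqrt(2.0)

def threshold_from_alpha(alpha, n):
    """Eq. (57): eta = (erf^{-1}(1 - alpha) + sqrt(4N - 1))^2 / 2."""
    return 0.5 * (erfinv(1.0 - alpha) + math.sqrt(4 * n - 1)) ** 2

def false_alarm(eta, n):
    """Eq. (56): p_fB = 1 - erf(sqrt(2*eta) - sqrt(4N - 1))."""
    return 1.0 - math.erf(math.sqrt(2 * eta) - math.sqrt(4 * n - 1))
```

Note that neither function takes an SNR argument: as stated above, the false-alarm calibration depends only on N and α.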

5.3 Dual channel detection

The detection probability pD of the dual channel is:

$$p_D=p_{dE}+p_{dB}-p_{dE}\,p_{dB}.$$
(59)
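Equation (59) is the standard OR-fusion of two independent detectors; a one-line sketch (the function name is ours):

```python
def dual_channel_detection(p_de, p_db):
    """Eq. (59): probability that at least one of the two independent
    channels (likelihood channel p_dE, bispectrum channel p_dB)
    detects the watermark."""
    return p_de + p_db - p_de * p_db
```

The fused probability is never below either channel alone, which is why the bispectrum channel can rescue low-SNR cases where the likelihood channel degrades.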

6. Computation aspects of the watermarking system

The symmetry property can considerably reduce the time required to compute the Tschebycheff moments. For an N × N image with N even, the domain is subdivided into four equal parts and the polynomials are computed only in the first quadrant, where 0 ≤ x, y ≤ N/2 − 1. If N is odd, the image can be zero-padded to make N even. The Tschebycheff moments can then be rewritten with the help of Eq. (1) as

$$T_{pq}=\frac{1}{\tilde\rho_p(N)\,\tilde\rho_q(N)}\sum_{x=0}^{\frac{N}{2}-1}\sum_{y=0}^{\frac{N}{2}-1}\tilde t_p(x)\,\tilde t_q(y)\big\{f(x,y)+(-1)^p f(N-1-x,y)+(-1)^q f(x,N-1-y)+(-1)^{p+q} f(N-1-x,N-1-y)\big\}.$$
(60)

Setting β(n, N) = ρ(n, N) further simplifies the computation. Direct calculation of a two-dimensional Tschebycheff moment of order p + q from Eq. (1) requires $\frac{1}{6}N(p+q+2)(p+q+1)(4(p+q)+3N-9)$ multiplications and $\frac{1}{6}(p+q+2)(p+q+1)(2N(p+q)+3N^2-6N-9)$ additions. Since the term ρ̃(n, N) needs to be calculated only once per moment, a Tschebycheff moment of order p + q then requires $\frac{1}{24}(p+q+2)(p+q+1)((p+q)^2+7(p+q)+24)$ multiplications and $\frac{1}{3}(p+q)^3+(p+q)^2+\frac{2}{3}(p+q)$ additions.
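The operation counts above (as reconstructed from the text) can be compared numerically; this sketch simply encodes the two multiplication-count formulas, with function names of our choosing:

```python
def direct_mults(n, p, q):
    """Multiplications for direct computation of a (p+q)-order
    Tschebycheff moment of an N x N image from Eq. (1)."""
    s = p + q
    return n * (s + 2) * (s + 1) * (4 * s + 3 * n - 9) / 6

def cached_rho_mults(p, q):
    """Multiplications when rho~(n, N) is computed once per moment."""
    s = p + q
    return (s + 2) * (s + 1) * (s * s + 7 * s + 24) / 24
```

For a 256 × 256 image and moments up to order p + q = 8, caching ρ̃ reduces the multiplication count from about 3.0 × 10⁶ to 540.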

Because the bispectrum is third order in the Fourier transform of the intensity, and thus sixth order in the electric field of the source, it is extremely demanding of computational resources. To avoid the complexity of calculating and interpolating the bispectrum of two-dimensional data, a simple and effective calculation method is introduced. Let f(x), x = (x, y), x, y = 0, 1, …, N − 1, be the 2D digital image data of N × N pixels. The Fourier transform F(w1) of the image data is calculated with a 2N × 2N point FFT, using the Gaussian window exp(−(x − μ)·(x − μ)/(2σ²)) with μ = (N/2, N/2) and σ = N/3, and zero padding outside the N × N data. For each frequency pair (w1, w2), the triple product F(w1)F(w2)F(−w1 − w2) is calculated, where F(−w1 − w2) is obtained by bilinear interpolation. The calculation runs in O(N⁴) time and O(N²) space.
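The windowed, zero-padded triple product described above can be sketched as follows (assuming numpy; for simplicity the frequencies are integer bins of the 2N × 2N FFT, so the bilinear-interpolation step of the paper is omitted):

```python
import numpy as np

def bispectrum_sample(img, w1, w2):
    """One bispectrum value B(w1, w2) = F(w1) F(w2) F(-w1 - w2) of an
    N x N image, using the Gaussian window exp(-|x - mu|^2 / (2 sigma^2))
    with mu = (N/2, N/2), sigma = N/3, and a zero-padded 2N x 2N FFT."""
    n = img.shape[0]
    coords = np.arange(n)
    xx, yy = np.meshgrid(coords, coords, indexing="ij")
    mu, sigma = n / 2, n / 3
    window = np.exp(-((xx - mu) ** 2 + (yy - mu) ** 2) / (2 * sigma ** 2))
    f = np.fft.fft2(img * window, s=(2 * n, 2 * n))  # zero-pad to 2N x 2N
    def fhat(w):  # F at an integer frequency bin (indices wrap modulo 2N)
        return f[w[0] % (2 * n), w[1] % (2 * n)]
    w3 = (-(w1[0] + w2[0]), -(w1[1] + w2[1]))
    return fhat(w1) * fhat(w2) * fhat(w3)
```

Evaluating this product over all frequency pairs gives the O(N⁴)-time, O(N²)-space behavior stated above, since the FFT is computed once and each pair costs O(1).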

7. Experimental results

The NC between the embedded watermark w(n) and the extracted watermark w′(n) quantifies their similarity: the higher the NC, the more similar the extracted watermark is to the original. The NC is defined as

$$NC=\frac{\sum_{i,j}w_{ij}\,w'_{ij}}{\sum_{i,j}(w_{ij})^2}.$$
(61)
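A direct reading of Eq. (61) in code (the function name is ours; the watermarks are flattened to 1-D sequences):

```python
def normalized_correlation(w, w_prime):
    """Eq. (61): NC = sum(w_ij * w'_ij) / sum(w_ij^2), computed over
    the flattened embedded watermark w and extracted watermark w'."""
    num = sum(a * b for a, b in zip(w, w_prime))
    den = sum(a * a for a in w)
    return num / den
```

An NC of 1 means the extracted watermark matches the embedded one exactly.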

7.1 Experimental results with rotation angle estimation

Figure 4 gives the estimated rotation angles compared with the results in Ref. [3], where '*' marks the results of our method and 'Δ' marks the results of Ref. [3]. The x-axis shows the real rotation angles, and the y-axis shows the estimated rotation angles. It is clear that our estimates are closer to the real rotation angles.

7.2 Experimental results with scaling factor estimation

Figure 5 gives the estimated scaling factors compared with the results in Ref. [3], where 'Δ' marks the results of Ref. [3] and '*' marks the results of our proposed method. The x-axis shows the real scaling factors, and the y-axis shows the estimated scaling factors. The comparison shows that our method performs better.

Fig. 4. Results of estimating the rotation angle.
Fig. 5. Results of estimating the scaling factor and comparison.

7.3 Experimental results with translation parameters estimation

The translation parameter estimation results are listed in Table 2.

Table 2. Translation parameter estimation for images translated by StirMark


7.4 Experimental results with rotation angle and scaling factor combined estimation

Regardless of the order in which rotation and scaling are applied to the watermarked images, the geometric distortion parameters are estimated in the same way; the estimation results are listed in Table 3.

Table 3. Rotation angle and scaling factor estimation in different domains


7.5 Experimental results with translation parameter and scaling factor estimation

Regardless of the order in which translation and scaling are applied to the watermarked image, the translation parameter and scaling factor are estimated with the Tschebycheff moments of the original image. The estimation results are listed in Table 4.

Table 4. Translation parameter and scaling factor estimation under StirMark attacks


7.6 Robustness against additive noise

Generally, the lower-order Tschebycheff moments of an image capture its low-spatial-frequency features, while the higher-order moments capture its high-frequency features. Since additive noise resides mainly in the high-frequency features, reconstructing an image from only a partial set of lower-order moments removes much of the noise. The experimental results listed in Table 5 show that even for watermarked images degraded by additive Gaussian, salt-and-pepper, or speckle noise, the detector still attains a high NC. The NC with closed-loop embedding is achieved by modifying the watermark intensity according to the watermarking performance; the NC with open-loop embedding is obtained by embedding the watermark at the initial intensity of the closed-loop watermarking system.

Table 5. Robustness to additive noise (where μ and σ² are the mean and variance, respectively)


7.7 Robustness against JPEG compression

JPEG compression attenuates the high-frequency components of an image. Experiments examined the robustness of the proposed watermarking technique to JPEG compression produced by StirMark with quality factors Q from 90 down to 10, for both closed-loop and open-loop watermark embedding. Table 6 lists the results.

Table 6. Robustness to JPEG compression produced by StirMark


7.8 Robustness against other attacks performed by StirMark

Experiments were done to test the robustness with respect to the attacks performed by StirMark with closed-loop embedding and open-loop embedding. The experimental results are listed in Table 7.

Table 7. Robustness against attacks produced by StirMark


7.9 Performance of the proposed watermarking detector

The average false alarm probability and detection probability, measured over experiments with 1000 embedded watermarks, are listed in Table 8.

Table 8. The probabilities of the detector


7.10 Experimental results comparison

Experimental results are compared with Ref. [6] and two commercial watermarking techniques. Blank squares in the table indicate that no results are available for those techniques.

Table 9. Experimental results compared to Ref. [6] and two commercial watermarking techniques


7.11 Closed-loop watermark embedding experimental results

The PSNR and embedded intensity of the traditional (open-loop) and closed-loop watermarking processes are listed in Table 10. The experimental results show that the embedded intensity of closed-loop watermarking is much greater than that of traditional watermarking.

Table 10. Experimental data


8. Conclusions

Acknowledgments

This work is partly supported by the National Natural Science Foundation of China under grant 60502027.

References and links

1. S. Voloshynovskiy, S. Pereira, T. Pun, J. J. Eggers, and J. K. Su, "Attacks on digital watermarks: classification, estimation-based attacks, and benchmarks," IEEE Commun. Mag. 8, 2–10 (2001).
2. L. Zhang, G.-B. Qian, X. W.-W. Xiao, and Z. Ji, "Geometric invariant blind image watermarking by invariant Tschebycheff moments," Opt. Express 15, 2251–2261 (2007).
3. P. Dong, J. G. Brankov, N. P. Galatsanos, Y. Yang, and F. Davoine, "Digital watermarking robust to geometric distortions," IEEE Trans. Image Process. 14, 2140–2150 (2005).
4. M. Alghoniemy and A. H. Tewfik, "Geometric invariants in image watermarking," IEEE Trans. Image Process. 13, 145–153 (2004).
5. Y. Xin, S. Liao, and M. Pawlak, "A multibit geometrically robust image watermark based on Zernike moments," in International Conference on Pattern Recognition 4, 861–864 (2004).
6. S. Pereira and T. Pun, "Robust template matching for affine resistant image watermarks," IEEE Trans. Image Process. 9, 1123–1129 (2000).
7. M. Kutter, "Performance improvement of spread spectrum based image watermarking schemes through M-ary modulation," Lect. Notes Comput. Sci. 1728, 238–250 (1999).
8. P. Dong and N. P. Galasanos, "Affine transform resistant watermarking based on image normalization," in Proceedings of IEEE International Conference on Image Processing 3, 489–492 (2002).
9. P. Bas, J.-M. Chassery, and B. Macq, "Geometrically invariant watermarking using feature points," IEEE Trans. Image Process. 11, 1014–1028 (2002).
10. J. O'Ruanaidh and T. Pun, "Rotation, scale, and translation invariant spread spectrum digital image watermarking," Signal Process. 66, 303–317 (1998).
11. H. S. Kim and H. K. Lee, "Invariant image watermark using Zernike moments," IEEE Trans. Circuits Syst. Vid. Technol. 13, 766–775 (2003).
12. R. Mukundan, S. H. Ong, and P. A. Lee, "Image analysis by Tschebycheff moments," IEEE Trans. Image Process. 10, 1357–1364 (2001).
13. R. Mukundan, "Some computational aspects of discrete orthonormal moments," IEEE Trans. Image Process. 13, 1055–1059 (2004).
14. P.-T. Yap and R. Paramesran, "Local watermarks based on Krawtchouk moments," in IEEE Region 10 Conference, pp. 73–76 (2002).
15. M. K. Hu, "Visual pattern recognition by moment invariants," IRE Trans. Inf. Theory IT-8, 179–187 (1962).
16. A. Hyvarinen and E. Oja, "Independent component analysis: a tutorial," in Notes for International Joint Conference on Neural Networks (1999), http://www.cis.hut.fi/projects/ica/.
17. G. Sundaramoorthy, M. R. Raghuveer, and S. A. Dianat, "Bispectral reconstruction of signals in noise: amplitude reconstruction issues," IEEE Trans. Acoust. Speech Signal Process. 38, 1297–1300 (1990).

OCIS Codes
(070.6020) Fourier optics and signal processing : Continuous optical signal processing
(100.0100) Image processing : Image processing
(100.2000) Image processing : Digital image processing

ToC Category:
Image Processing

History
Original Manuscript: March 19, 2007
Revised Manuscript: June 8, 2007
Manuscript Accepted: June 16, 2007
Published: September 24, 2007

Citation
Zhang Li, Qian Gong-bin, and Ji Zhen, "Geometric distortion-invariant watermarking based on Tschebycheff transform and dual-channel detection," Opt. Express 15, 12922-12940 (2007)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-15-20-12922


