## Geometric attack resistant watermarking in wavelet transform domain

Optics Express, Vol. 13, Issue 4, pp. 1307-1321 (2005)

http://dx.doi.org/10.1364/OPEX.13.001307

### Abstract

In this paper, we propose an autocorrelation function (ACF) based watermarking scheme in the discrete wavelet transform (DWT) domain. Conventional ACF-based watermarking embeds a watermark in the spatial domain due to its detection mechanism. We show that the autocorrelation (AC) peaks, which play an important role in estimating the applied geometric attacks in ACF-based watermarking, can also be extracted by embedding the watermark in the DWT domain. In the proposed scheme, a periodic watermark is embedded in the DWT domain by considering the AC peak strength and noise visibility. The proposed scheme also deals efficiently with the image shift problem in the detection process by using the undecimated DWT. Experimental results show that the proposed scheme yields stronger AC peaks than the spatial domain scheme does and, as a result, shows improved robustness against combined geometric-removal attacks.

© 2005 Optical Society of America

## 1. Introduction


## 2. Watermarking algorithm

### 2.1. Watermark embedding in discrete wavelet transform domain

Let the sub-band at the *j*th level in the direction *θ* be denoted accordingly (*θ*=1: horizontal, 2: diagonal, 3: vertical). To embed the watermark into two sub-band levels, two different periodic watermarks are generated. To obtain a period of *M*×*M* in the spatial domain, a watermark with period *M*/2^{j}×*M*/2^{j} must be embedded in the *j*th-level sub-bands. For the watermark pattern of the first-level sub-bands, a random number sequence of size *M*/2×*M*/2 that follows a standard normal distribution is generated with a user key. In the same way, a basic block of size *M*/4×*M*/4 is generated for the second-level sub-bands. Each watermark block is repeated up to the corresponding sub-band size.
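As an illustration, the generation of the periodic watermark patterns can be sketched as follows. The function name and the seeding scheme are our assumptions; the paper specifies only a key-seeded standard-normal basic block tiled up to the sub-band size.

```python
import numpy as np

def periodic_watermark(key: int, block: int, subband_shape: tuple) -> np.ndarray:
    """Generate a periodic watermark for one sub-band: a basic block of
    standard-normal random numbers (seeded by the user key) tiled up to
    the sub-band size."""
    rng = np.random.default_rng(key)
    basic = rng.standard_normal((block, block))
    reps = (subband_shape[0] // block + 1, subband_shape[1] // block + 1)
    return np.tile(basic, reps)[:subband_shape[0], :subband_shape[1]]

# Example: M = 128 -> 64x64 basic block for the first-level sub-bands,
# 32x32 basic block for the second-level sub-bands.
M = 128
w1 = periodic_watermark(key=42, block=M // 2, subband_shape=(256, 256))
w2 = periodic_watermark(key=43, block=M // 4, subband_shape=(128, 128))
```

Tiling the same basic block guarantees the periodicity that later produces the grid of autocorrelation peaks.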

The watermarks *W*_{1} and *W*_{2} are embedded into the sub-bands by an additive rule in which *α* and *λ* are the global and local weighting factors, respectively.


Here (*σ*^{θ}_{j})^{2}(*x*,*y*) and (*σ*^{θ}_{j})^{2}_{max} are the local variance at (*x*,*y*) and the maximum of the local variances of the *j*th-level sub-band in direction *θ*, respectively. *D* is a user-defined constant in [50,100]. The higher the value of *D*, the larger the difference between the NVF values in the plain regions and those in the textured regions.
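A sketch of an NVF computation in this spirit, assuming the stationary-Gaussian form NVF = 1/(1 + *θσ*²) with *θ* = *D*/*σ*²_{max} used in refs. [21, 22]; the window size and function names are illustrative.

```python
import numpy as np

def nvf(subband: np.ndarray, d: float = 75.0, win: int = 3) -> np.ndarray:
    """Noise visibility function: NVF = 1 / (1 + theta * sigma^2) with
    theta = D / max(sigma^2), where sigma^2 is the local variance over a
    win x win window (window size is an assumption)."""
    pad = win // 2
    p = np.pad(subband, pad, mode="reflect")
    # local variance over every win x win window
    windows = np.lib.stride_tricks.sliding_window_view(p, (win, win))
    local_var = windows.var(axis=(-1, -2))
    theta = d / local_var.max()
    return 1.0 / (1.0 + theta * local_var)

x = np.random.default_rng(3).standard_normal((16, 16))
v = nvf(x)
```

The result lies in (0, 1]: near 1 in plain (low-variance) regions and near 0 in textured (high-variance) regions, as the text describes.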

The level weighting factor *L*_{j} controls the relative embedding strength of each decomposition level. For the experiments, we assigned a higher weight to the second-level sub-bands, setting *L*_{1}=0.7 and *L*_{2}=1. *S* and *S*_{1} are the user-defined weighting factors for textured and plain regions, respectively. The NVF value at (*x*,*y*), which lies between 0 and 1, is high (near 1) in plain regions and low (near 0) in textured regions. Therefore, in the equation, *S*_{1} affects the embedding strength in plain regions more than *S* does; by contrast, *S* affects the strength in textured regions more. Accordingly, *S* should be set to a higher value than *S*_{1}. For the experiments, we set *S*=5 and *S*_{1}=1.

### 2.2. Watermark detection using undecimated wavelet transform

#### 2.2.1. Geometric attack estimation

Here *µ*(*x*,*y*) and *σ*^{2}(*x*,*y*) are the local mean and local variance of the original image, respectively, and *s*^{2} is the noise variance. Since the noise variance is not available, we use the average of the local variances for *s*^{2}. The extracted signal *E* is expected to have periodicity. To find the periodicity, the ACF of the extracted signal *E* is calculated; the ACF can be computed by the FFT-based fast correlation method [24].
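The FFT-based ACF calculation rests on the correlation theorem: the autocorrelation is the inverse FFT of the power spectrum. A minimal sketch (function names are ours), demonstrated on a synthetic periodic watermark, which yields the expected grid of AC peaks at multiples of the period:

```python
import numpy as np

def acf_fft(e: np.ndarray) -> np.ndarray:
    """Autocorrelation of the extracted signal E via the FFT:
    ACF = IFFT( FFT(E) * conj(FFT(E)) ), zero shift moved to the center."""
    e = e - e.mean()
    F = np.fft.fft2(e)
    ac = np.fft.ifft2(F * np.conj(F)).real
    return np.fft.fftshift(ac)

# A tiled (periodic) signal yields AC peaks at multiples of its period.
rng = np.random.default_rng(0)
w = np.tile(rng.standard_normal((16, 16)), (8, 8))   # period 16, size 128x128
ac = acf_fft(w)
```

For this circularly periodic input, the AC value at shift (16, 0) equals the zero-shift value exactly, which is what the peak-grid detection exploits.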

Here *µ*_{acf} and *σ*_{acf} denote the average and standard deviation of the autocorrelation function, respectively, and *α*_{acf} is a user-defined value. *α*_{acf} should be chosen by considering the false negative and false positive error rates. Supposing that the AC values of non-peaks in the ACF follow a normal distribution N(*µ*_{acf}, *σ*^{2}_{acf}), we can calculate the false positive error rate as follows. If we define a random variable *X* that follows a standard normal distribution N(0,1), the probability that an AC value is higher than *µ*_{acf}+*α*_{acf}*σ*_{acf} equals the probability that *X* is higher than *α*_{acf}. Thus, when the threshold is *µ*_{acf}+*α*_{acf}*σ*_{acf}, the false positive error rate of the AC peak detection is

*P*(*AC*_{non-peak} > *µ*_{acf}+*α*_{acf}*σ*_{acf}) = *P*(*X* > *α*_{acf}),

where *P*(*A*) denotes the probability of the event *A* and *AC*_{non-peak} is a random variable that follows the normal distribution N(*µ*_{acf}, *σ*^{2}_{acf}).

Such a pair of peaks is called the *base peak pair*; an example is shown in Fig. 5. Using the offset information of the base peak pair, we can calculate the period of the watermark and the rotation angle. For each candidate pair we count the peaks it predicts, the *peak count* of the peak pair, and select the peak pair with the highest peak count as the base peak pair. This method works well in normal cases, but there do exist cases in which errors occur: for example, a peak may be falsely detected. In Fig. 5, there is a false peak at (0, 64). In such a case, the peak pair [(0,64),(128,0)] is selected as the base peak pair, since every peak that can be found by the peak pair [(0,128),(128,0)] can also be found by the peak pair [(0,64),(128,0)]. To avoid this problem, we introduce another term, the *peak ratio*, which is the ratio of the number of actually found peaks to the number of expected peaks for the peak pair under test. The *expected peak count* (the number of expected peaks) of a peak pair can be calculated from the image size and the offset of the peak pair. For example, suppose that the peak pair under test is [(0,128),(128,0)] and the image size is 512×512; then, if this pair is the base peak pair, there must be a fixed number of peaks on the corresponding grid. Combining the peak count and the peak ratio, we select the base peak pair by a *weighted peak count*, as follows.

#### 2.2.2. Watermark signal detection


The *shift4* algorithm is a 2-D extension of the undecimated DWT [12]. It produces four wavelet transform results: from the non-shifted image and from the image shifted by one pixel horizontally, vertically, and diagonally. With these four transform results, we can express every possible shift in the spatial domain. We apply the *shift4* algorithm up to the second level. After the first-level wavelet decomposition, we have four transformed images; the lowpass sub-band of each transform result is transformed again by the *shift4* algorithm, so that finally we have 16 transform results. This process is shown in Fig. 6. By shifting the sub-bands of one of the 16 transform results by the appropriate offset, we can express every possible spatial shift.
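A minimal one-level sketch of the *shift4* idea, using a Haar DWT for concreteness (the wavelet choice and function names here are our assumptions; the paper does not fix them in this passage):

```python
import numpy as np

def haar_dwt2(x: np.ndarray):
    """One level of the decimated 2-D Haar DWT (LL, HL, LH, HH sub-bands),
    with orthonormal scaling."""
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    ll = (a + b + c + d) / 2
    hl = (a - b + c - d) / 2
    lh = (a + b - c - d) / 2
    hh = (a - b - c + d) / 2
    return ll, hl, lh, hh

def shift4(x: np.ndarray):
    """One level of the 'shift4' undecimated transform: decimated DWTs of the
    unshifted, horizontally, vertically, and diagonally 1-pixel-shifted image.
    Together the four results cover every possible 1-pixel phase."""
    shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
    return [haar_dwt2(np.roll(x, s, axis=(0, 1))) for s in shifts]

img = np.arange(64.0).reshape(8, 8)
results = shift4(img)
```

Each of the four results is a full decimated decomposition; applying `shift4` again to each lowpass band would give the 16 second-level results described above.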

The sub-bands are segmented into blocks of the embedded watermark period (*M*/4×*M*/4 for the second level and *M*/2×*M*/2 for the first level), and the average of all segments in each transform result is calculated. The watermark is detected by calculating the correlation between the segment average *E*_{j,k} and the reference watermark pattern *W*_{rj} while shifting the segment average by every possible shift. Here *k* denotes the transform result index (1≤*k*≤16 for the second level and 1≤*k*≤4 for the first level). This process can be performed in reduced time with the FFT.

The detection threshold for level *j* is given adaptively as *µ*_{ncj}+*α*_{ncj}*σ*_{ncj}, where *µ*_{ncj} and *σ*_{ncj} are the average and standard deviation of *NC*_{j,k}, respectively, and *α*_{ncj} is a user-defined value. *α*_{ncj} should also be set by considering the false positive and false negative error rates of the watermark detection. Unlike the AC peak detection, the watermark detection uses the maximum value among the correlations, so the false positive error rate is calculated slightly differently. If we suppose that the correlation values between an unmarked block and the reference pattern follow a normal distribution, the probability that a single correlation value exceeds the threshold can be calculated in the same way as in Eq. (9); let *P*_{fpNC} denote this probability. The probability that the maximum of a group of correlation values exceeds the threshold equals 1-*P*(all correlation values are less than the threshold). Thus the false positive error rate is

1-(1-*P*_{fpNC})^{R},

where *R* is the number of correlation values.

In the detection process described above, the image is transformed by the *shift4* algorithm first, and then the resulting sub-bands are segmented and averaged. Since the DWT is a linear transform, we can reorder this process: (1) the image is segmented and averaged first, and (2) the averaged block is transformed by the *shift4* algorithm. This reordering reduces the computation time drastically by reducing the size of the input data for the DWT. The image is segmented into *N* blocks of size *M*×*M* (*b*_{1}, *b*_{2},…, *b*_{N}), and the average of the blocks, *b*_{avg}, is calculated. Then *b*_{avg} is transformed by the *shift4* algorithm up to the second level, and *E*_{j,k} is obtained by averaging the three directional sub-bands (horizontal, vertical, and diagonal) in each level *j* of each transform result *k*. The watermark is detected from *E*_{j,k} by Eqs. (12)–(14). This reordered detection method yields the same result as the original method described above, but it transforms only a block of size *M*×*M* with the *shift4* algorithm instead of the full image, which greatly reduces the computing time.
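The equivalence behind the reordering — by linearity of the DWT, transforming the average of the blocks equals averaging the transforms of the blocks — can be checked numerically. A one-level Haar LL band stands in for any linear DWT here (an illustrative sketch):

```python
import numpy as np

def haar_ll(x: np.ndarray) -> np.ndarray:
    """LL sub-band of a one-level Haar DWT (a stand-in for any linear DWT)."""
    return (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 2

rng = np.random.default_rng(2)
blocks = [rng.standard_normal((128, 128)) for _ in range(16)]

# Original order: transform each block, then average the transforms.
avg_of_dwt = sum(haar_ll(b) for b in blocks) / len(blocks)

# Reordered: average the blocks first, then transform once.
dwt_of_avg = haar_ll(sum(blocks) / len(blocks))
```

The two results agree to floating-point precision, while the reordered path runs the transform once on an *M*×*M* block rather than *N* times (or once on the full image).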

## 3. Experimental results

For comparison, a spatial-domain ACF-based scheme was also implemented, where *α*_{s} and *λ*_{s} denote the global and local weighting factors, respectively. For *λ*_{s}, we used the NVF-based weighting factor, with the *NVF* calculated in the spatial domain by the same method as Eq. (2). *W*_{s} is the periodic watermark pattern with a 128×128 period; the basic watermark block is a random number sequence with a standard normal distribution. During watermark detection, the geometric attack is estimated by the same method as in the proposed scheme. After the estimation, the extracted signal *E* in Eq. (6) is restored to the original geometry. The restored signal is segmented into blocks of size 128×128, and the blocks are averaged. The watermark is then detected from the average block by using the maximum correlation between the average block and the reference watermark pattern, as in the proposed scheme. The FFT-based correlation calculation is also used here.

### 3.1. Time complexity analysis

The watermark signal detection of the proposed scheme requires 4 DWTs of *M*×*M* blocks (first-level decomposition) and 16 DWTs of *M*/2×*M*/2 blocks (second-level decomposition). For computing the correlation, three FFTs are required for each *E*_{j,k}. Since the complexities of the FFT and the DWT of an *N*×*N* block are O(*N*^{2} log *N*) and O(*N*^{2}), respectively, the computing time for the watermark signal detection can be approximated accordingly. Since the spatial domain method uses *M*×*M* blocks to detect the watermark signal, its computing time is approximately 3*M*^{2} log *M*. Thus, the computing time of the watermark signal detection of the proposed scheme is less than twice that of the spatial domain method, and both schemes have the same order of complexity, O(*M*^{2} log *M*).

For the geometric attack estimation, FFTs of the full image size *X*×*Y* are required, so the computing time of this process is approximately 3*XY*(log *X*+log *Y*). Since *X*, *Y*≫*M*, the watermark signal detection step occupies only a minor portion of the overall computing time. Thus, considering the detection procedure as a whole, the computing time gap between the two schemes is minor. Moreover, since *M* is fixed in a watermarking system, the difference is constant.

### 3.2. Robustness test of the AC peaks and watermark signal

The PSNR between the original image *I* and the marked image *I*′ of size *X*×*Y* is calculated from the mean squared error in the usual way. The watermark remained detectable (with a false positive error rate on the order of 10^{-5}), despite the JPEG compression.
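The PSNR used to measure watermark imperceptibility follows the standard definition for 8-bit images, 10 log₁₀(255²/MSE); a short sketch (function name is ours):

```python
import numpy as np

def psnr(i: np.ndarray, i_marked: np.ndarray) -> float:
    """PSNR in dB between an original and a marked 8-bit image:
    10 * log10(255^2 / MSE)."""
    diff = i.astype(np.float64) - i_marked.astype(np.float64)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

img = np.full((8, 8), 100.0)
marked = img + 5.0          # uniform error of 5 -> MSE = 25
value = psnr(img, marked)   # about 34.15 dB
```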

### 3.3. Watermark detection test against geometric attacks


For the experiments, we set *α*_{acf}=3.5 in Eq. (8) and *α*_{ncj}=6 in Eq. (15). With these values, the false positive error rates of the AC peak detection and the watermark signal detection are about 2.3×10^{-4} by Eq. (9) and 1.6×10^{-5} by Eq. (16), respectively. (If we do not consider the maximum correlation finding in the watermark signal detection, the probability that a single correlation value from an unmarked image exceeds the threshold is about 9.9×10^{-10}.) We set the threshold for AC peak detection somewhat low because the AC peaks are vulnerable to attacks, and the geometric attack estimation process of Section 2.2.1 works well even with a few false peaks.
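The quoted error rates can be reproduced from the standard normal tail. The value *R* = 16×32×32 (16 transform results times 32×32 shifts for *M*=128 at the second level) is our assumption; it is consistent with the reported 1.6×10^{-5}:

```python
import math

def gauss_tail(a: float) -> float:
    """P(X > a) for X ~ N(0,1), via the complementary error function."""
    return 0.5 * math.erfc(a / math.sqrt(2.0))

p_acf = gauss_tail(3.5)          # AC peak detection: about 2.3e-4
p_nc = gauss_tail(6.0)           # one correlation value: about 9.9e-10
R = 16 * 32 * 32                 # assumed number of correlation values
p_fp = 1.0 - (1.0 - p_nc) ** R   # max over R correlations: about 1.6e-5
```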

## 4. Conclusion

## Acknowledgments


**OCIS Codes**

(100.0100) Image processing : Image processing

(100.2000) Image processing : Digital image processing

**ToC Category:**

Research Papers

**History**

Original Manuscript: December 16, 2004

Revised Manuscript: December 15, 2004

Published: February 21, 2005

**Citation**

Choong-Hoon Lee and Heung-Kyu Lee, "Geometric attack resistant watermarking in wavelet transform domain," Opt. Express **13**, 1307-1321 (2005)

http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-13-4-1307


### References

- J. J. K. O’Ruanaidh and T. Pun, “Rotation, scale and translation invariant spread spectrum digital image watermarking,” Signal Processing 66, 303-317 (1998). [CrossRef]
- M. Kutter, S. K. Bhattacharjee, and T. Ebrahimi, “Towards second generation watermarking schemes,” in Proceedings of IEEE Int. Conference on Image Processing (Institute of Electrical and Electronics Engineers, New York, 1999), pp. 320-323.
- S. Pereira and T. Pun, “Fast robust template matching for affine resistant image watermarking,” in International Workshop on Information Hiding, LNCS 1768 (Springer-Verlag, Berlin, Germany, 1999), pp. 200-210.
- M. Kutter, “Watermarking resisting to translation, rotation, and scaling,” in Multimedia systems and applications, Proc. SPIE 3528, 423-431 (1998).
- I. J. Cox, J. Kilian, F. T. Leighton, and T. Shamoon, “Secure spread spectrum watermarking for multimedia,” IEEE Trans. on Image Processing 6, 1673-1687 (1997). [CrossRef]
- M. Barni, F. Bartolini, V. Cappellini, and A. Piva, “A DCT-domain system for robust image watermarking,” Signal Processing 66, 357-372 (1998). [CrossRef]
- J. S. Lim, Two-dimensional signal and image processing (Prentice Hall, New Jersey, 1990).
- F. A. P. Petitcolas, R. J. Anderson, and M. G. Kuhn, “Attacks on copyright marking systems,” in International workshop on information hiding, LNCS 1525 (Springer-Verlag, Berlin, Germany, 1998), pp. 218-238. [CrossRef]
- R. Polikar, “The wavelet tutorial,” http://users.rowan.edu/~polikar/WAVELETS/WTtutorial.html
- E. J. Stollnitz, T. D. DeRose, and D. H. Salesin, “Wavelets for computer graphics: A primer,” IEEE Computer Graphics and Applications 15, 76-84 (1995). [CrossRef]
- A. B. Watson, G. Y. Yang, J. A. Solomon, and J. Villasenor, “Visual thresholds for wavelet quantization error,” in Human Vision and Electronic Imaging, B. E. Rogowitz and J. P. Allebach, eds., Proc. SPIE 2657, 382-392 (1996).
- A. Gyaourova, C. Kamath, and I. K. Fodor, “Undecimated wavelet transforms for image de-noising,” Technical report, Lawrence Livermore National Laboratory, UCRL-ID-150931 (2002).
- G. Beylkin, “On the representation of operators in bases of compactly supported wavelets,” SIAM J. Numer. Anal. 29, 1716-1740 (1992). [CrossRef]
- M. Lang, H. Guo, J. E. Odegard, and C. S. Burrus, “Nonlinear processing of a shift invariant DWT for noise reduction,” in Mathematical Imaging: Wavelet Applications for Dual Use, Proc. SPIE 2491, 640-651 (1995).
- M. Kutter and F. A. P. Petitcolas, “A fair benchmark for image watermarking systems,” in Security and Watermarking of Multimedia Contents, P. W. Wong and E. J. Delp, eds., Proc. SPIE 3657, 226-239 (1999).
- H. C. Huang, J. S. Pan, and H. M. Hang, “Watermarking based on transform domain,” in Intelligent Watermarking Techniques, J. S. Pan, H. C. Huang, and L. C. Jain, eds. (World Scientific, Singapore, 2004), pp.147-163.
- M. Barni, F. Bartolini, and A. Piva, “Improved wavelet-based watermarking through pixel-wise Masking,” IEEE Trans. on Image Processing 10, 783-791 (2001). [CrossRef]
- V. Darmstaedter, J.-F. Delaigle, J. J. Quisquater, and B. Macq, “Low Cost Spatial Watermarking,” Comput. & Graphics 22, 417-424 (1998). [CrossRef]
- W. Bender, D. Gruhl, N. Morimoto, and A. Lu, “Techniques for data hiding,” in Storage and Retrieval for Image and Video Database III, Proc. SPIE 2420, 165-173 (1995).
- C. I. Podilchuk and W. J. Zheng, “Image-adaptive watermarking using visual models,” IEEE Journal on Selected Areas in Communications 16, 525-539 (1998). [CrossRef]
- S. Voloshynovskiy, F. Deguillaume, and T. Pun, “Content adaptive watermarking based on a stochastic multiresolution image modeling,” in Tenth European Signal Processing Conference (EUSIPCO’2000), Tampere, Finland, Sept. 2000.
- S. Voloshynovskiy, A. Herrigel, N. Baumgartner, and T. Pun, “A stochastic approach to content adaptive digital image watermarking,” in International Workshop on Information Hiding, LNCS 1768 (Springer-Verlag, Berlin, Germany, 1999), pp. 212-236.
- I. J. Cox, M. L. Miller, and J. A. Bloom, Digital Watermarking (Morgan Kaufmann Publishers, San Francisco, Calif., 2002).
- T. Kalker, G. Depovere, J. Haitsma, and M. Maes, “A video watermarking system for broadcast monitoring,” in Security and Watermarking Multimedia Contents, P. W. Wong and E. J. Delp, eds., Proc. SPIE 3657, 103-112 (1999).
