Digital cleaning and “dirt” layer visualization of an oil painting

Cherry May T. Palomero and Maricor N. Soriano


Optics Express, Vol. 19, Issue 21, pp. 21011-21017 (2011)
http://dx.doi.org/10.1364/OE.19.021011


Abstract

We demonstrate a new digital cleaning technique which uses a neural network that is trained to learn the transformation from dirty to clean segments of a painting image. The inputs and outputs of the network are pixels belonging to dirty and clean segments found in Fernando Amorsolo’s Malacañang by the River. After digital cleaning we visualize the painting’s discoloration by assuming it to be a transmission filter superimposed on the clean painting. Using an RGB color-to-spectrum transformation to obtain the point-per-point spectra of the clean and dirty painting images, we calculate this “dirt” filter and render it for the whole image.

© 2011 OSA

1. Introduction

2. Motivation for using neural networks

The painting that was digitally cleaned was Malacañang by the River by the Philippines’ first National Artist, Fernando Amorsolo. It was dated 1948 and was still in its original frame. When the painting was taken out of its frame, it was observed that the parts previously covered by the frame were generally less dirty and less darkened than the exposed parts. This discovery gave us a new source of clean samples that does not require actual restoration or micro-sampling of the painting. We used as our “clean” segments the painting parts that were previously covered by the frame (within around 5 mm of the painting’s edge), while the “dirty” segments were the adjacent exposed parts. This kind of sampling limits the dirty-clean sample pairs to colors that are present along the edges and therefore makes the cleaning technique more challenging.

Although neural networks have already been used to solve classification and image in-painting problems [10], they are yet to be applied to digital cleaning. By exploiting a trained neural network’s ability to produce the desired output for completely new inputs, we were able to clean the whole painting using just the training data from the edges of the painting, where our exposed (dirty)–unexposed (clean) pairs are available. The application of neural networks has therefore allowed us to do totally non-invasive digital cleaning of a very colorful painting.

3. Sampling procedure and neural network training

An image of the painting without its frame was captured without flash under ambient museum lights (50W halogen dichroic lamps) using an 8-megapixel Olympus E500 digital SLR camera.

A total of 1,350 pairs of pixels from exposed and unexposed parts were manually selected from all around the edges of the image. The number of sample pairs per color/element was dependent on the occurrence of that color/element on the unexposed part of the painting. A summary of the distribution of these sampling points is listed in Table 1. Because the painting is composed not only of color but also of texture, the effect of shadows and highlights was preserved by taking each exposed-unexposed pair from the same texture component (brushstroke, shadow of brushstroke, bumps and dips of the canvas weave).

Table 1. Distribution of Sample Pairs

The neural network creation and training was implemented using the nftool and nntool interfaces of the Neural Network Toolbox of Matlab 2007 [11]. Of all the neural networks tested, the best training performance was obtained with a standard two-layer feedforward neural network trained using Levenberg-Marquardt optimization, with 30 neurons in the hidden layer. A tangent-sigmoid transfer function was used for both the hidden and the output layers; this limits the output to acceptable RGB values only (0 to 255). The inputs to this network are the RGB values of pixels belonging to dirty paint segments and the desired outputs are the RGB values of pixels from the corresponding clean segments. Of the 1,350 sample pairs, 60% were used for network training, 20% for validation, and 20% for testing.
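As a rough illustration of this training setup, the sketch below uses Python with scikit-learn's MLPRegressor as a stand-in for the Matlab toolbox network; scikit-learn offers L-BFGS rather than Levenberg-Marquardt and uses a linear output layer, so predictions are clipped to the valid RGB range instead of being bounded by a tangent-sigmoid output as in the paper. The file names and variable names are hypothetical.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Hypothetical arrays of the manually selected sample pairs:
# each row is one RGB pixel (0-255), exposed (dirty) vs. unexposed (clean).
dirty_rgb = np.load("dirty_pixels.npy").astype(float)   # shape (1350, 3)
clean_rgb = np.load("clean_pixels.npy").astype(float)   # shape (1350, 3)

# Scale to [-1, 1] so the tanh hidden units operate in a well-behaved range.
X = dirty_rgb / 127.5 - 1.0
Y = clean_rgb / 127.5 - 1.0

# 60% training, 40% held out, then split in half for validation and testing.
X_train, X_hold, Y_train, Y_hold = train_test_split(X, Y, test_size=0.4, random_state=0)
X_val, X_test, Y_val, Y_test = train_test_split(X_hold, Y_hold, test_size=0.5, random_state=0)

# Two-layer feedforward network: one hidden layer of 30 tanh neurons.
net = MLPRegressor(hidden_layer_sizes=(30,), activation="tanh",
                   solver="lbfgs", max_iter=2000, random_state=0)
net.fit(X_train, Y_train)
print("validation R^2:", net.score(X_val, Y_val))
print("test R^2:", net.score(X_test, Y_test))

def clean_image(img_rgb):
    """Apply the trained mapping to every pixel of an (h, w, 3) image."""
    h, w, _ = img_rgb.shape
    x = img_rgb.reshape(-1, 3).astype(float) / 127.5 - 1.0
    y = net.predict(x)
    out = np.clip((y + 1.0) * 127.5, 0, 255).astype(np.uint8)
    return out.reshape(h, w, 3)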

4. Cleaning results

In actual physical cleaning, there is no standard quantitative criterion of restoration performance, and the only gauge of the success of the cleaning process is a visual examination of the results. Proof of this is evident in the trial-and-error approach implemented by most conservators [1]. Likewise, although digital cleaning uses a scientific approach, results are still speculative [3,5]. However, certain factors, like the presence of colored photographs or descriptions of the painting while it was still new, could help gauge the reasonability of the results. In this case, we assess the performance of our technique using two measures. The first is that the neural network should be able to clean even pixels of the boundary that were not part of the training set; after digital cleaning, the transition of colors from the unexposed parts to the adjacent exposed parts should therefore be smoother. The second is whether the color change induced by the digital cleaning is consistent with the context of the painting. For example, knowing that the Malacañang Palace depicted in the center of the painting is supposed to be white and that the upper right corner is supposed to be clear blue sky, we expect a color change towards white and towards blue for these two painting elements, respectively.

A side-by-side comparison of Malacañang by the River before and after digital cleaning is shown in Fig. 1. Notice that the mask-like boundary between the exposed and unexposed portions is now less visible. The blue sky, green tree and the lower-left portion of the river all look more vivid after digital cleaning. The highlights on the clouds and the Malacañang are also more pronounced. Figure 2 shows a detail cropped from the upper right corner of the painting.

Fig. 1. Malacañang by the River by Fernando Amorsolo, oil on canvas board, 43.7 x 56.2 x 4.0 cm, before (left) and after (right) digital cleaning. The painting is from the UP Vargas Museum Collection.

Fig. 2. Detail of Malacañang by the River before (left) and after (right) digital cleaning.

It must be noted that the total number of pixels used in training (1,350 pairs) is less than 0.04% of the total number of pixels in the painting image (7,138,640), and yet the network was able to generalize the cleaning throughout the painting.

However, certain portions, like the pinkish red leaves and shore (Fig. 3), appeared over-cleaned, making it look like the paint has flaked off. It was observed that these over-cleaned areas either had the same color as the dirty pixels from the training set or had colors that were not represented in the training set.

Fig. 3. Detail of Malacañang by the River before (left) and with the over-cleaning (right).

5. Context-based post processing

The over-cleaning was addressed by applying a post processing step that looks at the context of the painting to rule out over-cleaned areas. The first step is to segment the over-cleaned areas. This could be done either manually, by drawing a polygon around the region of interest, or automatically, using any segmentation algorithm. In our case we use histogram back projection [12], a color segmentation algorithm that uses the histogram of a sample portion of the region of interest as a probability distribution function in tagging the rest of the pixels belonging to that region. To remove the effect of brightness variations, the Euclidean color difference, D, in rg space was then computed for these areas using D = (r² + g²)^(1/2), where r = R/(R + G + B) and g = G/(R + G + B). After observing that the over-cleaned parts have a greater Euclidean color difference in rg space than parts that were not over-cleaned, we imposed the condition that once the Euclidean color difference before and after cleaning exceeds a certain threshold, the pixel retains its value prior to cleaning. The result of the post processing is shown in Fig. 4, and the final result for the whole painting is shown in Fig. 5.

Fig. 4. Detail of Malacañang by the River before (left), with the over-cleaning (center) and after the post processing (right).

Fig. 5. Malacañang by the River before (left) and after (right) digital cleaning and context-based post processing.
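A minimal sketch of this post processing step follows, assuming the over-cleaned region has already been segmented into a boolean mask (for example with histogram back projection) and interpreting D as the Euclidean distance between the rg chromaticities before and after cleaning; the threshold value and function names are illustrative and not taken from the paper.

import numpy as np

def rg_chromaticity(img):
    """Return the (r, g) chromaticity coordinates of an (h, w, 3) RGB image."""
    rgb = img.astype(float)
    s = rgb.sum(axis=2, keepdims=True) + 1e-9   # avoid division by zero
    r = rgb[..., 0:1] / s
    g = rgb[..., 1:2] / s
    return np.concatenate([r, g], axis=2)

def revert_overcleaned(dirty, cleaned, mask, threshold=0.05):
    """Inside `mask`, keep the original (dirty) pixel wherever the
    rg-chromaticity shift caused by cleaning exceeds `threshold`."""
    d = np.linalg.norm(rg_chromaticity(cleaned) - rg_chromaticity(dirty), axis=2)
    revert = mask & (d > threshold)
    out = cleaned.copy()
    out[revert] = dirty[revert]
    return out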

6. Visualization of the Dirt Layer

Because the action of our digital cleaning procedure was to virtually remove the painting’s dirt and grime due to exposure and to eliminate the effect of the oxidized varnish, we associate the dirt layer with the oxidized varnish and the dirt and grime that adhered to it. As Cotte and Dupraz [5] have experimentally proven, the effect of this aged varnish is similar to that of a brightness and color filter that is superimposed on the painting. Since the effect of a superimposed filter is to multiply the spectrum of the object beneath it with the filter’s spectrum, the dirt layer spectrum could then be obtained as the quotient of the painting image’s reflectance spectra before and after digital cleaning, that is,

Dirt_spectra(λ) = Dirty_pixel_spectra(λ) / Clean_pixel_spectra(λ)     (1)

The reflectance spectra before and after digital cleaning can be reconstructed from the image RGB values using Imai and Berns’ technique, or alternatively using Haneishi et al.’s method of Wiener estimation [13,14]. In this case we used our variation of Imai and Berns’ technique [15]. Using just the first three principal components of an ensemble of 1600 Munsell color chip reflectance spectra and the measured channel sensitivities of the camera, the point-per-point spectral information of the painting image before and after digital cleaning was obtained. Figure 6 shows the calculation and rendering of the dirt spectrum of a point in the painting image. Applying Eq. (1) and computing the RGB of the dirt filter per pixel for the whole painting results in the dirt layer visualization shown in Fig. 7.

Fig. 6. Reconstruction of the dirty, clean, and dirt spectra of a pixel in the sky portion of the painting. The patches show the corresponding image pixel.

Fig. 7. Visualization of Malacañang by the River’s dirt layer.
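The spectral estimation and the dirt-filter computation can be sketched roughly as follows. This is not the authors’ exact algorithm (their variation of Imai and Berns’ technique includes a minimum negativity constraint [15]); the sketch assumes the measured camera sensitivities and the first three Munsell principal components are available as arrays, solves the resulting 3x3 linear system per pixel, and then applies Eq. (1). File names are hypothetical.

import numpy as np

# Assumed inputs (hypothetical files):
# S: camera channel sensitivities, shape (3, n_wavelengths)
# B: first three principal components of the Munsell reflectance ensemble,
#    shape (n_wavelengths, 3)
S = np.load("camera_sensitivities.npy")
B = np.load("munsell_pc_basis.npy")
M = S @ B          # 3x3 matrix mapping basis coefficients to camera RGB

def estimate_spectrum(rgb):
    """Estimate a reflectance spectrum from one RGB triplet by solving
    RGB = S @ B @ a for the three basis coefficients a.
    The RGB values are assumed to be linear camera responses."""
    a = np.linalg.solve(M, np.asarray(rgb, dtype=float))
    return B @ a   # shape (n_wavelengths,)

def dirt_spectrum(dirty_rgb, clean_rgb, eps=1e-6):
    """Eq. (1): ratio of the dirty to the clean reflectance estimate.
    Negative reconstructions are zeroed, as in the black areas of Fig. 7."""
    dirty = estimate_spectrum(dirty_rgb)
    clean = estimate_spectrum(clean_rgb)
    if (dirty < 0).any() or (clean < 0).any():
        return np.zeros_like(dirty)
    return dirty / (clean + eps)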

More accurate spectral estimation of the dirty and clean paint may be obtained if more than three camera channels are used to image the painting. It is interesting to note, however, that even with just three channels an estimate of the dirt spectra can be obtained. The yellowish cast over the whole painting is consistent with the effect of the yellow oxidized varnish, while the greenish brown color that is visible especially at the corners of the image could be attributed to the dirt and grime that adhered to the painting over time. Although many of the color changes in the painting occur similarly across its entire surface, some changes could also be isolated to a particular area or pigment. Differences in pigment steadfastness and variations in the dirtying of different locations in the painting could also contribute to the non-homogeneity of the actual dirt layer [16]. Also, because the varnish and dirt discoloration would not be equally visible on all colors, some non-uniformity in the dirt layer visualization is to be expected. The black areas correspond to locations where the reconstruction of either the clean or dirty spectrum yielded negative values, which were therefore set to zero.

7. Conclusion

We introduced two innovations in digital cleaning. The first is the use of neural networks and digital color samples of hidden, dirt-free parts of the painting to learn the transformation from dirty to clean segments of the painting. The application of neural networks has allowed us to introduce a totally non-invasive, whole-painting digital cleaning. Although the results showed over-cleaning for portions that resemble the color of the dirty pixels seen during training, this was resolved by context-based post processing. Since the results showed visual correctness, as evidenced by the minimization of the difference between the exposed and unexposed painting portions, we conclude that our methodology was successful in digitally cleaning Fernando Amorsolo’s Malacañang by the River.

The second is a method for visualizing the discoloration of the painting as a transmitting film. From the RGB values of pixels of the dirty and digitally cleaned painting, we can recover an estimate of their reflectance spectra. By taking the ratio of the dirty and the clean spectrum point-per-point, we can calculate the transmission of this “dirt” filter and render it for the whole painting. Even with just three camera channels, the calculated spectra appear consistent with common discoloration processes such as varnish oxidation, differences in color steadfastness, or accumulation of dirt particles.

Acknowledgements

References and links

1. M. Pappas and I. Pitas, “Digital color restoration of old paintings,” IEEE Trans. Image Process. 9(2), 291–294 (2000).

2. M. Barni, F. Bartolini, and V. Cappellini, “Image processing for virtual restoration of artworks,” IEEE Multimed. 7(2), 34–37 (2000).

3. R. Berns, F. Imai, and L. Taplin, “Rejuvenating Seurat’s A Sunday On La Grande Jatte - 1884 using color and imaging science techniques: a simulation,” in ICOM 14th Triennial Meeting, The Hague, 12–16 September 2005: Preprints, I. Verger, ed. (Maney Publishing, 2005), pp. 452–458.

4. C. M. Palomero and M. Soriano, “After digital cleaning: visualization of the dirt layer,” Proc. SPIE 7869, 78690O (2011).

5. P. Cotte and D. Dupraz, “Spectral imaging of Leonardo Da Vinci’s Mona Lisa: a true color smile without the influence of aged varnish,” in Proc. IS&T CGIV’06, University of Leeds, UK, June 19–22, 2006.

6. R. S. Berns, “Rejuvenating the appearance of cultural heritage using color and imaging science techniques,” in Proc. AIC Colour 05 (AIC, 2005), pp. 369–374.

7. M. Bacci, F. Baldini, R. Carla, R. Linari, M. Picollo, and B. Radicati, “Color analysis of the Brancacci chapel frescoes: part II,” Appl. Spectrosc. 47(4), 399–402 (1993).

8. M. Bacci, A. Casini, C. Cucci, M. Picollo, B. Radicati, and M. Vervat, “Non-invasive spectroscopic measurements on the Il Ritratto della figliastra by Giovanni Fattori: identification of pigments and colourimetric analysis,” J. Cult. Herit. 4(4), 329–336 (2003).

9. C. M. Palomero and M. Soriano, “Neural network for the digital cleaning of an oil painting,” in Digital Image Processing and Analysis, OSA Technical Digest (CD) (Optical Society of America, 2010), paper DMD5. http://www.opticsinfobase.org/abstract.cfm?URI=DIPA-2010-DMD5

10. A. Gascadi and P. Szolgay, “Image inpainting methods by using cellular neural networks,” in Int’l Workshop on Cellular Neural Networks and Their Applications (IEEE, 2005), pp. 198–201.

11. Matlab 2007 Neural Network Toolbox documentation.

12. M. J. Swain and D. H. Ballard, “Color indexing,” Int. J. Comput. Vis. 7(1), 11–32 (1991).

13. F. Imai and R. Berns, “Spectral estimation using trichromatic digital cameras,” in Proc. of the International Symposium on Multispectral Imaging and Color Reproduction for Digital Archives (AIC, 1999), pp. 42–49.

14. H. Haneishi, T. Hasegawa, A. Hosoi, Y. Yokoyama, N. Tsumura, and Y. Miyake, “System design for accurately estimating the spectral reflectance of art paintings,” Appl. Opt. 39(35), 6621–6632 (2000).

15. M. Soriano, W. Oblefias, and C. Saloma, “Fluorescence spectrum estimation using multiple color images and minimum negativity constraint,” Opt. Express 10(25), 1458–1464 (2002).

16. K. Martinez, J. Cupitt, D. Saunders, and R. Pillay, “Ten years of art imaging research,” Proc. IEEE 90, 28–41 (2002).

OCIS Codes
(100.0100) Image processing : Image processing
(100.2000) Image processing : Digital image processing
(100.3020) Image processing : Image reconstruction-restoration
(330.1690) Vision, color, and visual optics : Color

ToC Category:
Image Processing

History
Original Manuscript: April 20, 2011
Revised Manuscript: May 19, 2011
Manuscript Accepted: June 27, 2011
Published: October 7, 2011

Citation
Cherry May T. Palomero and Maricor N. Soriano, "Digital cleaning and “dirt” layer visualization of an oil painting," Opt. Express 19, 21011-21017 (2011)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-19-21-21011


