Optics Express

  • Editor: C. Martijn de Sterke
  • Vol. 19, Iss. 21 — Oct. 10, 2011
  • pp: 21091–21097
Optical processing of color images with incoherent illumination: orientation-selective edge enhancement using a modified liquid-crystal display

Ariel Fernández, Julia R. Alonso, Jorge L. Flores, Gastón A. Ayubi, J. Matías Di Martino, and José A. Ferrari


Optics Express, Vol. 19, Issue 21, pp. 21091-21097 (2011)
http://dx.doi.org/10.1364/OE.19.021091


Abstract

We present a novel optical method for edge enhancement in color images based on the polarization properties of liquid-crystal displays (LCDs). In principle, an LCD simultaneously generates two color-complementary, orthogonally polarized replicas of the digital image used as input. The image normally viewed on standard LCD monitors and cell-phone screens (which we will refer to as the "positive" or true-color image) is the one obtained by placing an analyzer in front of the LCD, crossed with respect to the back polarizer of the display. The orthogonally polarized replica of this image (the "negative" or complementary-color image) is absorbed by the front polarizer. In order to generate the positive and negative replicas with a slight displacement between them, we used an LCD monitor whose analyzer (originally a linear polarizer) was replaced by a calcite crystal acting as a beam displacer. When both images are superimposed with a slight lateral displacement across the image plane, one obtains an image with enhanced first-order derivatives along a specific direction. The proposed technique works under incoherent illumination and does not require precise alignment; thus, it could be useful for processing large color images in real-time applications. Validation experiments are presented.

© 2011 OSA

1. Introduction

Contouring, segmentation, and recognition of objects rely on precise edge detection or edge enhancement. Edges reveal the structure and shape of objects as well as fine details of an image, i.e., they correspond to the high spatial frequencies of the Fourier spectrum of the image. Therefore, edge enhancement increases the discrimination capability of segmentation and recognition systems [1–3].

Edges in gray-scale images are defined in an achromatic way as discontinuities in the brightness function. Hence, in such images, edge detection is accomplished by searching for rapid changes in intensity values. Besides extending this procedure to color images, color edge detection also involves finding discontinuities between the adjacent regions of a color image in a certain 3D color space, e.g., RGB or HSI color space, that captures different color characteristics [3,4]. Both changes in brightness and changes in color between neighboring pixels should be exploited for more efficient color-edge extraction. In fact, Novak and Shafer [5] found that 10% of the edges detected in color images fall into this non-intensity class.

Color edge detection methods can be classified as synthetic or vector methods [6]. The first class typically includes earlier approaches to color edge detection, which basically consist of extensions of gray-value algorithms that act on each RGB color channel independently and are combined later by some logic operation [7,8]. Vector methods, on the other hand, take into account the three-dimensional structure of the information at each pixel of an image. Among these, statistical methods based on differences in local vector statistics have been widely applied [9–11]. Their precision in edge detection comes at the cost of more computation time, which makes these methods difficult to implement in real-time applications [6].

2. Description of the method

The proposed setup to obtain first partial derivatives of color images is shown in Fig. 1.

Fig. 1 Proposed setup. For illustrative purposes, narrow light beams with (intentionally) exaggerated lateral displacements are shown. Actually, the beams cover the prism aperture, and the slightly displaced images I+ and I− will overlap.

It consists of a commercial LCD (e.g., a standard PC monitor) with its own incoherent white-light source, an imaging lens (L), a properly oriented piece of calcite crystal (C) placed in front of the lens, an analyzer (A), and a digital color camera (not shown in Fig. 1) to acquire the optically processed images. Our monitor is a twisted-nematic LCD manufactured to work between a crossed polarizer-analyzer pair. For our purpose, we have removed the analyzer glued in front of the LCD.

In the absence of the calcite crystal, the usual crossed polarizer-analyzer configuration gives us what we will call the "positive image", with intensity I+(x,y). For a color image one can write I+(x,y) = (I+R(x,y), I+G(x,y), I+B(x,y)), where the superscripts R, G, B stand for the red, green, and blue components of the light intensity at (x,y). On the other hand, the parallel polarizer-analyzer configuration gives the "negative image" I−(x,y), with RGB information complementary to that of I+(x,y); in other words, a complementary-color image. Let I0 = (1,1,1) be the intensity of the LCD light source. Then, neglecting absorption in the liquid crystal, it can easily be shown that [12,13]

I+(x,y) + I−(x,y) = I0,  (1)

i.e., the white image exiting the LCD (without analyzer) can be regarded as the incoherent superposition of complementary images.
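Relation (1) is easy to illustrate numerically. In the following sketch (our illustration, not part of the experiment), RGB intensities are normalized to [0, 1], so the "negative" replica is simply the source intensity I0 = (1,1,1) minus the "positive" image:

```python
import numpy as np

# "Positive" (true-color) image: H x W x 3 RGB values normalized to [0, 1].
rng = np.random.default_rng(0)
i_pos = rng.random((4, 4, 3))

# Eq. (1): the white image leaving the LCD (no analyzer) is the incoherent
# sum of the two orthogonally polarized replicas, so the "negative"
# (complementary-color) image is I0 - I+ with I0 = (1, 1, 1).
i0 = np.ones(3)
i_neg = i0 - i_pos

# The two replicas always add up to the white source intensity.
assert np.allclose(i_pos + i_neg, i0)
```

For instance, a pure red pixel (1, 0, 0) has the cyan complement (0, 1, 1), which is why the replica absorbed by the front polarizer is a complementary-color image.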

The calcite crystal (C) is oriented with its optical axis lying in the incidence plane (i.e., the plane of Fig. 1). As is well known, a normally incident light wave splits into two partial waves: an ordinary wave with polarization direction orthogonal to the incidence plane, and an extraordinary wave with polarization parallel to it (see, e.g., [14]). These partial waves are laterally displaced (in the incidence plane) by a distance δ with respect to each other. In what follows, we will assume that the partial wave polarized in the x-direction is the extraordinary wave and that it corresponds to the "negative image" I−(x,y). [Of course, by rotating the calcite prism by 90°, it is possible to displace the partial waves laterally along the y-direction. The displaced image will then be the "positive" one; in fact, which of the two images, "positive" or "negative", is actually displaced does not play an essential role in our argument (see below).]

Then, at the output plane we get

Iout(x,y) = c+ I+(x,y) + c− I−(x + Δx, y),  (2)

where c+ and c− are positive real quantities that take into account a possible intensity imbalance due to the position of the analyzer (A). Using Eq. (1), expression (2) can be rewritten as

Iout(x,y) = (c+ − c−) I+(x,y) − c− {I+(x + Δx, y) − I+(x,y)} + c− I0.  (3)

An equivalent expression (with a sign change) is obtained when the displacement is in the y-direction. Thus, we get the superposition of the original image I+ and its directional derivative, plus a homogeneous intensity pattern. When the analyzer (A) is at 45° with respect to the x-direction, the intensities of both (positive and negative) images are the same, i.e., c− = c+ = 1/2, and thus we obtain only the directional derivative of the image on a homogeneous gray intensity background (1/2, 1/2, 1/2).
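The action of Eqs. (2) and (3) can be verified with a small numerical sketch (our illustration; a one-pixel roll with periodic edges stands in for the lateral shift Δx):

```python
import numpy as np

rng = np.random.default_rng(1)
i_pos = rng.random((8, 8, 3))   # "positive" RGB image, values in [0, 1]
i_neg = 1.0 - i_pos             # complementary replica, from Eq. (1)

dx = 1                          # lateral displacement, in pixels
c = 0.5                         # analyzer at 45 deg: c+ = c- = 1/2

# Eq. (2): superpose the positive image with the displaced negative one.
# np.roll(a, -dx, axis=1) plays the role of evaluating at (x + dx, y).
i_out = c * i_pos + c * np.roll(i_neg, -dx, axis=1)

# Eq. (3) with c+ = c- = 1/2: the output is a uniform gray background
# (1/2, 1/2, 1/2) minus half the forward difference of I+ along x.
forward_diff = np.roll(i_pos, -dx, axis=1) - i_pos
assert np.allclose(i_out, 0.5 - 0.5 * forward_diff)

# In a flat (edge-free) region the difference vanishes and only the
# gray background remains.
flat = np.full((8, 8, 3), 0.3)
out_flat = c * flat + c * np.roll(1.0 - flat, -dx, axis=1)
assert np.allclose(out_flat, 0.5)
```

The last check mirrors what the experiments show: wherever the image is locally uniform, the two complementary replicas cancel to a featureless gray, and only the edges survive.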

In our setup we assume that the LCD is relatively far from the prism and that we are dealing with paraxial rays, so that the dependence of δ on the incidence angle, and the defocus produced by the difference between the optical paths traveled by the ordinary and extraordinary waves, can be disregarded. [This condition is fulfilled when s0 >> f and the lens aperture is small (see below).] It is easy to show that, in a first-order approximation, the image displacement (Δx) is given by

Δx ≈ z δ / f.  (4)

Also, comparing the calcite birefringence for red and blue wavelengths, it is not difficult to verify that the variation of δ with wavelength is at most of the order of 5%. [Of course, an equivalent setup can be built using another birefringent prism, or a prism system such as a Babinet or Savart prism.]

3. Experimental results

We performed a series of validation experiments using a setup similar to that shown in Fig. 1. The digital images to be processed were displayed on a color-LCD monitor (model LMS560s, AOC) whose original analyzer was removed. The white-light source was the original monitor light source. The optically processed images were acquired with a commercial digital color camera (model Pentax K-x) with an objective lens (L) of focal length f = 50 mm and aperture f/40. As beam displacer (C) we used a calcite prism (BD27, Thorlabs) with δ = 2.7 mm and a clear aperture of 10 mm × 10 mm. An extra positive lens (not shown in Fig. 1) with 500 mm focal length was introduced between the prism and the monitor, located at a distance d = 400 mm from the latter. As d is less than the focal length, the acquisition system (digital camera) sees a virtual image located at a distance s0 ≈ 2.5 m (>> f) from the prism, and thus we can assume that the rays incident on the calcite prism are nearly parallel. From Newton's lens equation one has z ≈ f²/s0 ≈ 1 mm, and then, from Eq. (4), we can estimate Δx ≈ 55 μm (i.e., ~10 pixels of the digital camera).
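The displacement estimate follows directly from the numbers quoted in the text; a quick arithmetic check (ours), in SI units:

```python
# Parameters quoted in the text (SI units).
f = 50e-3        # objective focal length: 50 mm
s0 = 2.5         # distance of the virtual image from the prism: ~2.5 m
delta = 2.7e-3   # calcite beam separation: 2.7 mm

# Newton's lens equation gives the defocus distance z ~ f^2 / s0.
z = f ** 2 / s0                  # ~1 mm

# Eq. (4): lateral displacement between the two replicas at the image plane.
dx = z * delta / f               # ~54 um, i.e. the ~55 um quoted in the text

print(f"z  = {z * 1e3:.2f} mm")
print(f"dx = {dx * 1e6:.0f} um")
```

At roughly 5.5 μm per camera pixel, this displacement indeed spans on the order of 10 pixels.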

Actually, commercial LCD monitors are designed to work with a polarizer-analyzer configuration rotated 45° relative to the monitor's horizontal (vertical) direction. To simplify the description of the setup and the experimental results, we chose an (x,y)-coordinate system rotated 45° with respect to the monitor's horizontal direction, and the images displayed on the LCD monitor were also rotated through an angle of 45°.

First, we performed a series of experiments using a picture with colored geometric figures created by Byron Callas as the test image [15]. Figure 2(a)

Fig. 2 Experimental results using a geometric abstraction image created by Byron Callas as test image: (a) true-color image; (b) complementary-color image; (c) and (d) optically processed images with first-order derivative in the y- and x-direction, respectively.

shows the "positive" image obtained when only light polarized along the y-direction (i.e., the ordinary wave of the calcite prism) is incident on the camera, while Fig. 2(b) shows the complementary ("negative") image obtained when only light polarized in the x-direction, corresponding to the extraordinary wave, is incident on the camera. Figures 2(c) and 2(d) show both images superimposed, with the "negative" image slightly displaced with respect to the "positive" one along the y- and x-direction, which correspond to the vertical and horizontal directions of the images shown in Fig. 2, respectively. The required (arbitrary) image displacements in the selected directions were achieved by rotating the beam-displacing prism, as explained above.

We observe that the optically processed images have colored edge lines superimposed on a grayish, featureless background. The horizontal image borders in Fig. 2(c) and the vertical image borders in Fig. 2(d) are depicted in colors whose tones depend on the sign of the transition and on the color interface; e.g., a blue-to-yellow transition is depicted in a bluish tone and a yellow-to-blue transition in a yellowish tone. Thus, the technique distinguishes the sign of the derivative along an arbitrarily selected direction.

We repeated the experiment using a digital version of the picture Frida Kahlo picks flowers for her hair, painted by Tascha, as the test image [16]. Figures 3(a)

Fig. 3 Experimental results using Tascha’s picture Frida Kahlo picks flowers for her hair as test image.

and 3(b) show the "positive" and "negative" images obtained experimentally. Figure 3(c) depicts the superposition of both images, with the "negative" image slightly displaced with respect to the "positive" one along the vertical direction. Figure 3(c) clearly shows the enhancement of the edges of the original picture: the colors of the enhanced edges depend on the colors of the adjacent regions of the image.

Figures 4(a)–4(c)

Fig. 4 Experimental results using Eduardo Urculo’s picture Catprovi as test image.

show the experimental results obtained using Eduardo Urculo's picture Catprovi as the test image [17]. Figures 4(a) and 4(b) are the "positive" and "negative" images, while Fig. 4(c) shows the superposition of both images, with the negative replica slightly displaced in the vertical direction.

Again, the borders of the objects are enhanced, but, as mentioned above, the color and magnitude of the enhanced edges depend on the colors and the brightness difference of the adjacent regions of the image.

The optically processed images in Figs. 2(c), 2(d), 3(c), and 4(c) clearly show a grayish background tone in the regions where the first derivative of the intensity pattern is null, which is consistent with the assumption c− = c+ = 1/2 in expression (3).

It can easily be demonstrated [18] that an angular error (ξ) in the orientation of the calcite principal directions with respect to the x- and y-axes would generate an additional (non-linear) intensity term in Eq. (3) proportional to sin(2ξ). In our setup, the calcite prism was mounted on a rotation stage with a precision better than 1°, and thus we estimate that the additional spurious terms due to axis misalignment are smaller than 2% of the total light intensity.

4. Performance analysis

Figure 5 in [19] shows a speed comparison of the Sobel edge detector implemented on a GPU (NVIDIA GeForce 8800GT (1.5 GHz), 512 MB global memory) and on a CPU (Intel Pentium Dual E2160 @ 1.8 GHz, 1 GB RAM) for gray-level images of different sizes. In this figure we observe some trends: the absolute runtime of the Sobel algorithm implemented on the GPU increases almost proportionally with the image size N×M, e.g., the reported times were ~0.2 ms and ~2.5 ms for 0.26-megapixel and 4.2-megapixel images, respectively. For color images the execution times increase by a factor of three. In the proposed method, on the other hand, the processing time depends only on the LCD characteristics; in principle, it does not depend on the image size. Most commercially available LCD computer monitors can run at 60 Hz, 75 Hz, and sometimes 85 Hz, with image sizes up to 4 megapixels. Thus, using a simple optical setup with a standard LCD, one can achieve processing rates of 60 to 85 frames/s (i.e., ~16.6–11.7 ms per image), which is a considerable processing speed (comparable with that of GPUs when processing RGB images), sufficient for most practical applications (e.g., traffic-light detection [20]). Next-generation LCD TVs will refresh at least 240 times a second, which will allow processing at higher rates (e.g., 4.1 ms per image), i.e., three to four times as fast as today's displays [21].
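The per-image times quoted above are simply the reciprocals of the refresh rates; a quick check (our arithmetic):

```python
# Per-image processing time implied by each LCD refresh rate.
for hz in (60, 75, 85, 240):
    ms = 1000.0 / hz  # milliseconds per frame
    print(f"{hz:3d} Hz -> {ms:5.1f} ms per image")
```

Since the optical superposition happens at the speed of light, the LCD refresh rate is the only bottleneck, independent of pixel count; this is the source of the claimed size-independence.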

5. Discussion and conclusions

We presented a novel optical method for edge enhancement in color images based on the polarization properties of liquid-crystal displays and the optical implementation of partial first-order derivatives of the image. This operation can be seen as the superposition of two replicas of the original image with complementary colors, laterally displaced with respect to each other. The proposed method works with incoherent illumination, so the technique is very robust and does not require highly precise alignment, and it allows selecting the direction along which the derivatives are taken. We presented validation experiments that clearly show enhanced edges (colored lines) when a small displacement Δx or Δy is introduced, as predicted by expression (3).

To the best of our knowledge, no other incoherent optical system has been proposed that can perform color edge enhancement in such a simple way. Unlike other methods, which require the cumbersome decomposition (and subsequent recomposition) of color images into their basic RGB components, our technique deals directly with the color images without decomposition. The proposed optical procedure consists of a superposition operation rather than gradients and other digital operations, which are computationally expensive; thus, it could be useful for processing large images in real-time applications requiring color edge detection.

Acknowledgment

J. L. Flores expresses his gratitude to the Programa de Sabaticas en el Extranjero, CONACYT-Mexico (project No. 159889), for funding his academic stay at the Facultad de Ingeniería, UdelaR, Uruguay, where this research was developed. The authors acknowledge financial support from PEDECIBA (Uruguay) and the Comisión Sectorial de Investigación Científica (CSIC, UdelaR, Uruguay). G. Ayubi acknowledges a scholarship from the Agencia Nacional de Investigación e Innovación (ANII, Uruguay).

References and links

1. B.-L. Liang, Z.-Q. Wang, G.-G. Mu, J.-H. Guan, H.-L. Liu, and C. M. Cartwright, “Real-time edge-enhanced optical correlation with a cerium-doped potassium sodium strontium barium niobate photorefractive crystal,” Appl. Opt. 39(17), 2925–2930 (2000).
2. R. Nevatia, “A color edge detector and its use in scene segmentation,” IEEE Trans. Syst. Man Cybern. SMC-7, 820–826 (1977).
3. J. Fan, D. Y. Yau, A. K. Elmagarmid, and W. G. Aref, “Automatic image segmentation by integrating color-edge extraction and seeded region growing,” IEEE Trans. Image Process. 10(10), 1454–1466 (2001).
4. A. R. Weeks, C. E. Felix, and H. R. Myler, “Edge detection of color images using the HSL color space,” Proc. SPIE 2424, 291–301 (1995).
5. C. L. Novak and S. A. Shafer, “Color edge detection,” in Proceedings of the DARPA Image Understanding Workshop, Los Angeles, CA, USA, vol. 1 (1987), pp. 35–37.
6. X. Chen and H. Chen, “A novel color edge detection algorithm in RGB color space,” in Proceedings of the International Conference on Signal Processing (ICSP), art. no. 5655926, pp. 793–796 (2010).
7. M. Hedley and H. Yan, “Segmentation of color images using spatial and color space information,” J. Electron. Imaging 1(4), 374–380 (1992).
8. T. Carron and P. Lambert, “Color edge detector using jointly hue, saturation and intensity,” in Proceedings of ICIP-94 (IEEE, 1994), pp. 977–981.
9. J. Scharcanski and A. N. Venetsanopoulos, “Edge detection of color images using directional operators,” IEEE Trans. Circ. Syst. Video Tech. 7(2), 397–401 (1997).
10. P. E. Trahanias and A. N. Venetsanopoulos, “Color edge detection using vector order statistics,” IEEE Trans. Image Process. 2(2), 259–264 (1993).
11. P. E. Trahanias and A. N. Venetsanopoulos, “Vector order statistics operators as color edge detectors,” IEEE Trans. Syst. Man Cybern. B Cybern. 26(1), 135–143 (1996).
12. J. A. Ferrari, J. L. Flores, and G. Garcia-Torales, “Directional edge enhancement using a liquid-crystal display,” Opt. Commun. 283(14), 2803–2806 (2010).
13. J. L. Flores and J. A. Ferrari, “Orientation-selective edge detection and enhancement using the irradiance transport equation,” Appl. Opt. 49(4), 619–624 (2010).
14. E. Hecht, Optics, 4th ed. (Addison Wesley, 2002), Chap. 8.
15. http://www2.ambientdesign.com/gallery/showimage.php?i=7082
16. http://www.etsy.com/listing/79045428/frida-kahlo-picks-flowers-for-her-hair
17. http://www.foroxerbar.com/viewtopic.php?t=5794&kb=true
18. J. L. Flores, J. A. Ferrari, J. A. Ramos, J. R. Alonso, and A. Fernández, “Analog image contouring using a twisted-nematic liquid-crystal display,” Opt. Express 18(18), 19163–19168 (2010).
19. N. Zhang, J. Wang, and Y. Chen, “Image parallel processing based on GPU,” in Proceedings of the International Conference on Advanced Computer Control (ICACC), pp. 367–370 (2010).
20. C. Claus, R. Huitl, J. Rausch, and W. Stechele, “Optimizing the SUSAN corner detection algorithm for a high speed FPGA implementation,” in Proceedings of the International Conference on Field Programmable Logic and Applications, pp. 138–145 (2009).
21. J. F. Wager and R. Hoffman, “Thin, fast, and flexible semiconductors,” IEEE Spectrum 48(5), 42–56 (2011).

OCIS Codes
(100.1160) Image processing : Analog optical image processing
(100.2980) Image processing : Image enhancement

ToC Category:
Image Processing

History
Original Manuscript: August 17, 2011
Revised Manuscript: September 21, 2011
Manuscript Accepted: September 22, 2011
Published: October 7, 2011

Citation
Ariel Fernández, Julia R. Alonso, Jorge L. Flores, Gastón A. Ayubi, J. Matías Di Martino, and José A. Ferrari, "Optical processing of color images with incoherent illumination: orientation-selective edge enhancement using a modified liquid-crystal display," Opt. Express 19, 21091-21097 (2011)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-19-21-21091


