OSA's Digital Library

Virtual Journal for Biomedical Optics

  • Editors: Andrew Dunn and Anthony Durkin
  • Vol. 8, Iss. 2 — Mar. 4, 2013
Aggregation functions to combine RGB color channels in stereo matching

Mikel Galar, Aranzazu Jurio, Carlos Lopez-Molina, Daniel Paternain, Jose Sanz, and Humberto Bustince


Optics Express, Vol. 21, Issue 1, pp. 1247-1257 (2013)
http://dx.doi.org/10.1364/OE.21.001247


Abstract

In this paper we present a comparative study of different aggregation functions for the combination of RGB color channels in the stereo matching problem. We introduce color information into the stereo matching algorithm by aggregating the similarities of the RGB channels, which are calculated independently. We compare the accuracy obtained with different stereo matching algorithms and aggregation functions. We show experimentally that the best function depends on the stereo matching algorithm considered, but that the dual of the geometric mean stands out as the most robust aggregation.

© 2013 OSA

1. Introduction

The stereo matching problem consists of recovering three-dimensional information from two two-dimensional images of the same scene taken from different viewpoints. When an image is taken, the depth of each point in the scene is lost; the objective of a stereo matching algorithm is to retrieve this information.

The basis of stereo vision is that a single physical point in the scene projects to a unique pair of image locations. Hence, to obtain depth from both images, we first have to estimate the correspondence between the pixels of each image. This step consists in identifying the same physical point in both projections and determining the difference between its positions in the two images. This difference is called the disparity. The disparity, together with the parameters of the cameras, allows us to obtain the depth.
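For a rectified stereo pair, the standard triangulation relation gives depth from disparity as Z = f·B/d, where f is the focal length and B the baseline between the cameras. These parameters are not discussed in the text; the snippet below is only an illustrative sketch of how disparity yields depth:

```python
def depth_from_disparity(disparity, focal_length, baseline):
    """Depth of a point from its disparity, assuming a rectified
    stereo pair of pinhole cameras (standard triangulation).
    Z = f * B / d; a larger disparity means a closer point."""
    return focal_length * baseline / disparity

# Example: disparity 8 px, focal length 400 px, baseline 0.1 m.
print(depth_from_disparity(8.0, 400.0, 0.1))  # → 5.0 (meters)
```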

The main difficulty of stereo matching is therefore finding the correspondences correctly. The images are taken by different cameras with different viewing angles, which often produces occlusions, perspective distortion, differing lighting intensities, reflections, shadows, repetitive patterns, sensor noise, etc. All these facts turn a seemingly simple correspondence task into a very difficult one.

An exhaustive overview of stereo matching can be found in [1], while a complete introduction to stereo vision can be found in [2]. Stereo matching algorithms can be classified into local and global methods. Local approaches compare the intensity levels within a finite window to determine the disparity of each pixel. These methods use different metrics or similarity measures to compare intensity levels, such as SAD [3], SSD [4] or NCC [5], which are widely applied despite their simplicity due to their low computational complexity [6, 7]. Global approaches impose global assumptions about smoothness and try to determine all the disparities at the same time by using optimization techniques such as graph cuts [8–10] or belief propagation [11]. These methods usually start from a local disparity estimation.

In this work we study the performance and influence of different aggregation operators, such as the arithmetic mean, the median or the minimum. To do so, we use several test images from [1] which have been taken using the ideal configuration of the cameras. Our aim is to study their behavior across different measures used in stereo matching and image comparison [6, 14]. We show empirically that using the proper aggregation function can produce significantly better results, whereas using an inappropriate one can decrease the performance. Our objective is to find an aggregation of color similarities that works well whichever method (similarity measure) is used; in this sense, we want to study which aggregation is most robust. While some aggregations excel for particular methods or images, our interest resides in finding a set of aggregations that can be safely used across different metrics and images.

This work is organized as follows: In Section 2 we review the classical stereo matching algorithm and present the metrics that we have considered. In Section 3 we present the aggregation operators that we compare in Section 4, where the experimental study is carried out. Section 5 concludes this work.

2. Stereo matching for color images

In this section, we recall the typical steps of the classical stereo matching algorithm. Afterward, we present the different metrics and similarity measures used in the comparison.

2.1. Stereo matching algorithm

Only minor changes are needed to transform the original stereo matching algorithm for gray scale images into one for color images. In the gray scale case, the algorithm computes the similarity of the window surrounding each pixel in the right image with several windows in the left image (considering the epipolar constraint and the maximum disparity). The pixel whose surrounding window reaches the largest similarity degree is chosen and used to compute the disparity.

Regarding color images, we simply compute the correspondence between color channels independently, and then we aggregate these correspondence scores (similarity degrees). We can summarize the algorithm for color images as follows (in Fig. 1 we depict an overall view of the method):
  • Algorithm Stereo Matching
  •   const
  •     Window size := n × m
  •   begin
  •     for each pixel in the right image
  •       for each pixel in the epipolar line of the left image
  •         for each color channel
  •           Calculate the similarity between the window centered at the pixel of the right image and the window centered at the pixel of the left image
  •         end for
  •         Aggregate the similarities
  •       end for
  •       Set correspondence := arg max{aggregated similarity}
  •       Disparity := difference between the x-positions of the two pixels
  •     end for
  •     Create a disparity map from all the disparities obtained
  •   end.

Fig. 1 Stereo matching algorithm scheme using color information from RGB channels.
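The loop above can be sketched in Python/NumPy as follows. This is only an illustrative version under simplifying assumptions that are not part of the paper: negated SSD as the per-channel similarity, the arithmetic mean as the default aggregation, and image borders left at zero disparity:

```python
import numpy as np

def ssd_similarity(wr, wl):
    """Negated SSD so that larger values mean more similar windows."""
    return -np.sum((wr.astype(float) - wl.astype(float)) ** 2)

def disparity_map(right, left, max_disp, half=2, aggregate=np.mean):
    """Local stereo matching on RGB images (H x W x 3 arrays).
    Channel similarities are computed independently and then
    combined with `aggregate`, following the scheme of Fig. 1."""
    h, w, _ = right.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half, w - half):
            wr = right[y - half:y + half + 1, x - half:x + half + 1]
            best_score, best_k = -np.inf, 0
            # Epipolar constraint: search only along the same row.
            for k in range(0, min(max_disp, w - half - 1 - x) + 1):
                wl = left[y - half:y + half + 1, x + k - half:x + k + half + 1]
                scores = [ssd_similarity(wr[..., c], wl[..., c])
                          for c in range(3)]      # one score per RGB channel
                score = aggregate(scores)         # aggregate the similarities
                if score > best_score:
                    best_score, best_k = score, k
            disp[y, x] = best_k                   # disparity = x-offset
    return disp
```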

There are many different versions of the classical stereo matching algorithm, but most of them follow the scheme we have presented. The metric or similarity measure used is usually the biggest difference between algorithms. Another key factor is the aggregation function used to combine the color similarities. In the following subsection we present seven common similarity measures for the stereo matching problem. Then, in Section 3, we present the aggregation functions considered in the empirical study.

2.2. Correspondence and similarity measures between windows

There exist several methods to compute the similarity between windows, and the results (the obtained disparity maps) directly depend on these measures. In this paper, we study several metrics to show the behavior of the different aggregations within each method.

2.2.1. Sum of Square Differences (SSD)

SSD [4] computes the matching score as the sum of the squared differences between the pixel intensities of the left window and those of the right window. The disparity is then computed from the window with the lowest value (largest correspondence), which indicates the most similar window. SSD can be expressed as follows:
SSD(I_r(x,y), I_l(x+k,y)) = \sum_{(m,n) \in W} \left( I_r(x+m, y+n) - I_l(x+m+k, y+n) \right)^2
(1)
where x, y is the position of the pixel, k the displacement of the left window with respect to the right window, W the window (of size n × m) considered, and I_r, I_l the right and left images, respectively.

2.2.2. Sum of Absolute Differences (SAD)

SAD [3] computes the disparity in the same way as SSD, but using absolute differences between pixel intensities instead of squared differences:
SAD(I_r(x,y), I_l(x+k,y)) = \sum_{(m,n) \in W} \left| I_r(x+m, y+n) - I_l(x+m+k, y+n) \right|
(2)
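Eqs. (1) and (2) differ only in the penalty applied to each pixel difference. Assuming the windows have already been extracted as arrays (an illustrative NumPy sketch, not the authors' implementation), both reduce to a couple of lines:

```python
import numpy as np

def ssd(win_r, win_l):
    """Sum of squared differences between two equal-sized windows, Eq. (1)."""
    d = win_r.astype(float) - win_l.astype(float)
    return np.sum(d ** 2)

def sad(win_r, win_l):
    """Sum of absolute differences between two windows, Eq. (2)."""
    return np.sum(np.abs(win_r.astype(float) - win_l.astype(float)))

a = np.array([[1, 2], [3, 4]])
b = np.array([[1, 3], [5, 4]])
print(ssd(a, b))  # 0 + 1 + 4 + 0 = 5.0
print(sad(a, b))  # 0 + 1 + 2 + 0 = 3.0
```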

2.2.3. Normalized Cross-Correlation (NCC)

NCC [5] is expressed by the following formula:
NCC(I_r(x,y), I_l(x+k,y)) = \frac{\sum_{(m,n) \in W} I_r(x+m, y+n) \, I_l(x+m+k, y+n)}{\left( \sum_{(m,n) \in W} I_r(x+m, y+n)^2 \; \sum_{(m,n) \in W} I_l(x+m+k, y+n)^2 \right)^{1/2}}
(3)

The disparity is obtained from the k reaching the maximum value.
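Eq. (3) normalizes the correlation by the window energies (note that, unlike zero-mean NCC variants, no mean subtraction appears in the formula). A minimal NumPy sketch:

```python
import numpy as np

def ncc(win_r, win_l):
    """Normalized cross-correlation between two windows, Eq. (3).
    Invariant to a common positive scaling of either window."""
    r = win_r.astype(float).ravel()
    l = win_l.astype(float).ravel()
    return np.dot(r, l) / np.sqrt(np.sum(r ** 2) * np.sum(l ** 2))

a = np.array([[1.0, 2.0], [3.0, 4.0]])
print(ncc(a, a))      # identical windows give the maximum value, 1.0
print(ncc(a, 2 * a))  # scaling does not change the score
```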

2.2.4. Fuzzy similarity (SMFS)

The fuzzy similarity introduces fuzzy set theory to compute the correspondence between two windows. It is computed with the following expression [12]:
SMFS(I_r(x,y), I_l(x+k,y)) = \frac{1}{m \times n} \sum_{(m,n) \in W} s\left( I_r(x+m, y+n), I_l(x+m+k, y+n) \right), \quad \text{where} \quad s(a,b) = \begin{cases} 1 - \frac{|a-b|}{\alpha}, & \text{if } |a-b| < \alpha \\ 0, & \text{otherwise} \end{cases}
(4)
The parameter α = 16 is generally used [12]. The disparity is computed from the pixel whose similarity measure attains the maximum value.
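The per-pixel similarity s(a, b) in Eq. (4) is a truncated linear function of the intensity difference; the window score is its average. An illustrative NumPy sketch:

```python
import numpy as np

def smfs(win_r, win_l, alpha=16.0):
    """Fuzzy similarity of Eq. (4): per-pixel truncated linear
    similarity s(a, b) = 1 - |a - b| / alpha when |a - b| < alpha,
    else 0, averaged over the window."""
    d = np.abs(win_r.astype(float) - win_l.astype(float))
    s = np.where(d < alpha, 1.0 - d / alpha, 0.0)
    return float(np.mean(s))

a = np.zeros((2, 2))
print(smfs(a, a))                     # identical windows → 1.0
print(smfs(a, np.full((2, 2), 8.0)))  # difference of 8 with α=16 → 0.5
```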

2.2.5. Distance-based similarities (SMM and SMK)

Distance-based similarities are widely used in image processing for image comparison [15]; hence, they are appropriate to compute the correspondence between windows. The smaller the distance, the greater the similarity. In our experiments we consider two different cases. The first is based on the Minkowski distance d_r with r = 1, which is equivalent to the Manhattan distance, but here the measure is normalized by the sum of the intensities within the windows (note the difference with respect to Eq. (2)). We denote this measure SMM [16, 17]:
SMM(I_r(x,y), I_l(x+k,y)) = 1 - \frac{\sum_{(m,n) \in W} \left| I_r(x+m, y+n) - I_l(x+m+k, y+n) \right|}{\sum_{(m,n) \in W} \left( I_r(x+m, y+n) + I_l(x+m+k, y+n) \right)}
(5)

The second is based on the Kullback distance [18] between fuzzy sets. We denote this similarity SMK:
SMK(I_r(x,y), I_l(x+k,y)) = 1 - \frac{1}{M N \cdot 2 \ln 2} \sum_{(m,n) \in W} \left[ \left( I_r(x+m, y+n) - I_l(x+m+k, y+n) \right) \ln \frac{1 + I_r(x+m, y+n)}{1 + I_l(x+m+k, y+n)} + \left( I_l(x+m+k, y+n) - I_r(x+m, y+n) \right) \ln \frac{2 - I_r(x+m, y+n)}{2 - I_l(x+m+k, y+n)} \right]
(6)

In both cases, we normalize the pixel intensities to the unit interval so that these similarities can be applied; the disparity is then computed from the largest one.
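Both measures of Eqs. (5) and (6) can be sketched directly from the formulas, assuming intensities already normalized to [0, 1] (an illustrative NumPy version; M, N are taken as the window dimensions):

```python
import numpy as np

def smm(win_r, win_l):
    """Normalized Manhattan-distance similarity, Eq. (5).
    Intensities are assumed already scaled to [0, 1]."""
    num = np.sum(np.abs(win_r - win_l))
    den = np.sum(win_r + win_l)
    return 1.0 - num / den

def smk(win_r, win_l):
    """Kullback-distance-based similarity, Eq. (6), for
    intensities in [0, 1]."""
    M, N = win_r.shape
    term = ((win_r - win_l) * np.log((1 + win_r) / (1 + win_l))
            + (win_l - win_r) * np.log((2 - win_r) / (2 - win_l)))
    return 1.0 - np.sum(term) / (M * N * 2 * np.log(2))

a = np.array([[0.2, 0.4], [0.6, 0.8]])
print(smm(a, a), smk(a, a))  # identical windows give 1.0 for both
```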

2.2.6. Similarity Measure based on Union and Intersection (SMUI)

The concept of similarity based on union and intersection operations also comes from fuzzy set theory [19], where the similarity between two fuzzy sets can be computed as the ratio of the cardinality of their intersection to the cardinality of their union. As in the distance-based methods, we normalize the intensities before applying this similarity, and the disparity is obtained from the largest output. The expression is as follows:
SMUI(I_r(x,y), I_l(x+k,y)) = \frac{\sum_{(m,n) \in W} \min\left( I_r(x+m, y+n), I_l(x+m+k, y+n) \right)}{\sum_{(m,n) \in W} \max\left( I_r(x+m, y+n), I_l(x+m+k, y+n) \right)}
(7)
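Eq. (7) is the elementwise min/max ratio (the fuzzy Jaccard index). A one-line NumPy sketch over normalized windows:

```python
import numpy as np

def smui(win_r, win_l):
    """Union/intersection similarity, Eq. (7): sum of elementwise
    minima divided by sum of elementwise maxima (intensities in [0, 1])."""
    return np.sum(np.minimum(win_r, win_l)) / np.sum(np.maximum(win_r, win_l))

a = np.array([[0.2, 0.4], [0.6, 0.8]])
print(smui(a, a))      # identical windows → 1.0
print(smui(a, 2 * a))  # doubling one window halves the ratio → 0.5
```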

3. Aggregation functions

Note: We denote a vector of n elements by x = {x_1, x_2, ..., x_n}.
  • Minimum
    M(x) = \min(x_1, x_2, \dots, x_n)
    (8)
  • Product
    M(x) = \prod_{i=1}^{n} x_i
    (9)
  • Arithmetic Mean (A-Mean)
    M(x) = \frac{1}{n} \sum_{i=1}^{n} x_i
    (10)
  • Weighted Mean (W-Mean)
    M(x) = \sum_{i=1}^{n} x_i w_i
    (11)
    where w = {w_1, w_2, ..., w_n} is a weight vector satisfying \sum_{i=1}^{n} w_i = 1.

    In our comparison we consider different weight vectors to compute the final similarity:
    \mu(x) = w_R \mu_R(x) + w_G \mu_G(x) + w_B \mu_B(x)
    (12)

    For example, if w_R = 0.1, w_G = 0.8 and w_B = 0.1, we obtain
    \mu(x) = 0.1 \mu_R(x) + 0.8 \mu_G(x) + 0.1 \mu_B(x)
    (13)

    If w_R = 0.299, w_G = 0.5870 and w_B = 0.1140, we obtain
    \mu(x) = 0.299 \mu_R(x) + 0.5870 \mu_G(x) + 0.1140 \mu_B(x)
    (14)

    The weight values of Eq. (14) come from the computation of the luminance of an RGB image [20]. The luminance expression is used to transform RGB color images into gray scale; its purpose is to represent the brightness of colors as humans perceive them. For instance, it reflects the fact that humans perceive green as brighter than blue.

  • Harmonic Mean (H-Mean)
    M(x) = n \left( \sum_{i=1}^{n} \frac{1}{x_i} \right)^{-1}
    (15)
  • Median
    M(x) = \begin{cases} \frac{1}{2}\left( x_{(k)} + x_{(k+1)} \right), & \text{if } n = 2k \text{ is even} \\ x_{(k+1)}, & \text{if } n = 2k+1 \text{ is odd} \end{cases}
    (16)
    where x_{(k)} is the k-th smallest component of x.
  • Geometric Mean (G-Mean)
    M(x) = \left( \prod_{i=1}^{n} x_i \right)^{1/n}
    (17)
  • Mode

    The mode is the most frequent value in x.

We also consider in our experiments the dual aggregation functions of the geometric and harmonic means, constructed as M_d(x) = 1 − M(1 − x), where 1 − x denotes (1 − x_1, ..., 1 − x_n).
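The dual construction M_d(x) = 1 − M(1 − x) applies to any aggregation on [0, 1]. The sketch below illustrates it for the geometric and harmonic means (the per-channel similarity values are hypothetical):

```python
import numpy as np

def dual(agg):
    """Dual of an aggregation on [0, 1]: M_d(x) = 1 - M(1 - x),
    with 1 - x taken componentwise."""
    return lambda x: 1.0 - agg(1.0 - np.asarray(x, dtype=float))

def geometric_mean(x):
    x = np.asarray(x, dtype=float)
    return float(np.prod(x) ** (1.0 / x.size))

def harmonic_mean(x):
    x = np.asarray(x, dtype=float)
    return float(x.size / np.sum(1.0 / x))

g_mean_dual = dual(geometric_mean)  # the most robust aggregation in the study
h_mean_dual = dual(harmonic_mean)

sims = [0.9, 0.6, 0.3]  # hypothetical per-channel similarities
print(geometric_mean(sims), g_mean_dual(sims), h_mean_dual(sims))
```

Note that dualizing flips the aggregation's bias: for instance, the dual of the minimum is the maximum, so the dual of the geometric mean is pulled toward the larger inputs rather than the smaller ones.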

4. Experimental results

In our experimental study, we compare the behavior of different aggregation functions to aggregate color similarities. We have three main objectives:
  • To check whether using color information by aggregating the similarities improves the results of using gray scale images.
  • To study which aggregation best fits each correlation method or similarity measure, in order to carry out a global analysis.
  • To study which is the best (most robust) aggregation function for combining color similarities across all methods.

To evaluate the performance we use the Middlebury test bed proposed by Scharstein and Szeliski [1] (http://cat.middlebury.edu/stereo), which is established as the common benchmark for stereo matching methods and allows one to easily reproduce the results obtained. In this test, the disparity maps obtained by each algorithm are compared with the ideal disparity maps. The test images are shown in Fig. 2 with their corresponding ideal disparity maps. We refer to each image pair by the name given in [1]: “Tsukuba”, “Teddy” and “Cones”. We recall that the stereo matching algorithms do not use any pre- or post-processing steps (such as optimization techniques, occlusion detection or image filtering).

Fig. 2 Test images from Middlebury test bed

As a particular case, we start by presenting, in Table 1, the quantitative results for the stereo matching algorithm using the fuzzy similarity measure (SMFS, Eq. (4)) and different aggregation functions to merge the color channel similarities. The leftmost column of Table 1 indicates the aggregation function used. Then, we present three columns for each image pair. These columns report the percentage of pixels with absolute disparity error greater than one for three different regions of the image:
  • no-oc.: only non-occluded pixels are considered.
  • all: whole image is considered.
  • disc.: only pixels near discontinuities are considered.

Table 1. Quantitative evaluation results for different aggregation functions to add color similarities using fuzzy similarity measure where *Weighted Mean 262 means that wR = 0.2, wG = 0.6 and wB = 0.2


Table 2. Total error obtained for each color aggregation and similarity measure where *Weighted Mean 262 means that wR = 0.2, wG = 0.6 and wB = 0.2.


The rightmost column gives the overall performance of the algorithm, computed as the arithmetic mean of all the other columns. The results are listed in descending order of total error, and the row corresponding to the stereo algorithm applied to gray scale images is shaded to ease the comparison with the performance using color images.

From Table 1 we can conclude that using color information is beneficial. Notice that the color aggregations that perform worse than the gray scale version are mainly those that consider only one of the channels (in the case of the weighted means, those whose weights give most of the importance to a single channel). However, these statements are based on a single similarity measure; hence, we should study whether these conclusions hold across all metrics.

Table 2 summarizes the total error obtained for the considered metrics and aggregations. The last two columns present, respectively, the mean error of each aggregation and its average rank (and rank position). The average rank is obtained by assigning a position to each aggregation depending on its performance with each metric: the aggregation achieving the best accuracy for a given method is ranked first (value 1), the aggregation with the second best accuracy is assigned rank 2, and so forth. This is done for all metrics, and the average ranking is computed as the mean of these ranks. It provides a quick view of the overall behavior of each aggregation with respect to the others. Bold numbers indicate the best aggregation (rank 1) for each method. In addition, Fig. 3 shows the best disparity maps obtained using color aggregation (those stressed in bold-face in Table 2) compared with the ones obtained from gray scale images.
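The average-rank computation described above amounts to ranking the aggregations within each metric and averaging the ranks. A sketch with hypothetical error values (ties, which the text does not discuss, are broken by order of appearance here):

```python
import numpy as np

# Hypothetical total errors: rows = aggregations, columns = metrics.
errors = np.array([[10.2, 12.1, 9.8],
                   [11.0, 11.5, 10.4],
                   [9.7, 13.0, 10.1]])

# Rank 1 goes to the lowest error within each metric (column);
# a double argsort turns values into their rank positions.
ranks = np.argsort(np.argsort(errors, axis=0), axis=0) + 1
avg_rank = ranks.mean(axis=1)  # one average rank per aggregation
print(avg_rank)
```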

Fig. 3 Best disparity maps for each method compared with the disparity map extracted from gray scale images

This experiment shows that the optimal aggregation function depends on the stereo matching algorithm considered. However, the facts stated previously are confirmed: the use of color information can improve the matching as long as all color channels are taken into account. Using color information requires extra computational effort to compute the disparity map, but this overhead can be kept low since the process is easily parallelized, whereas the obtained improvements can make a difference. It is remarkable that the total error obtained using the weighted arithmetic mean based on the luminance formula is always lower than that obtained using gray scale images. Since gray scale images are obtained with the RGB luminance formula, it is advisable to compute the matching on the three color channels independently and aggregate the matching scores, instead of aggregating the color information first and then applying the algorithm to a single channel.

Moreover, among the tested aggregations, the dual of the geometric mean (G-Mean Dual), the weighted mean with the luminance weights (W-Mean Luminance) and the dual of the harmonic mean (H-Mean Dual) can be recommended, since they stand out as the most robust ones. They behave well across different images and metrics, which speaks for their appropriateness in different frameworks. We should also note the good behavior of the product and the geometric mean with SAD and SSD, which are commonly used [6, 7]. These two aggregations obtain equivalent results even though their similarity values differ, since the order between the scores is the same (the geometric mean is the n-th root of the product, which preserves the ordering).

5. Conclusions

We have carried out a comparative study of the performance of different aggregation functions used in the stereo matching algorithm to aggregate the similarities of the different channels of the RGB color space. We can conclude that it is better to aggregate the color similarities after they are computed, in order to avoid the ambiguities produced by color, than to aggregate the color channels into a gray scale image and then compute the similarities. That is, color information is useful for the matching process and should not be overlooked.

The experiments have shown that, although the optimal aggregation function depends on the metric used, there are robust aggregations, such as the duals of the geometric and harmonic means, the weighted arithmetic mean based on the luminance formula, the geometric mean and the product, which perform properly with all metrics and images and whose usage can hence be recommended.

Acknowledgments

References and links

1. D. Scharstein and R. Szeliski, “A taxonomy and evaluation of dense two-frame stereo correspondence algorithms,” Int. J. Comput. Vision 47, 7–42 (2002).
2. B. Cyganek and J. P. Siebert, An Introduction to 3D Computer Vision Techniques and Algorithms (Wiley, 2009).
3. R. Zabih and J. Woodfill, “Non-parametric local transforms for computing visual correspondence,” presented at the Third European Conference on Computer Vision, Stockholm, Sweden, 2–6 May 1994.
4. E. Trucco, V. Roberto, S. Tinonin, and M. Corbatto, “SSD disparity estimation for dynamic stereo,” presented at the British Machine Vision Conference, Edinburgh, 1996.
5. O. Faugeras, T. Vieville, E. Theron, J. Vuillemin, B. Hotz, Z. Zhang, L. Moll, P. Bertin, H. Mathieu, P. Fua, G. Berry, and C. Proy, “Real-time correlation-based stereo: algorithm, implementations and applications,” INRIA Technical Report 2013 (1993).
6. X. Hu and P. Mordohai, “Quantitative evaluation of confidence measures for stereo vision,” IEEE T. Pattern Anal. 34, 2121–2133 (2012).
7. X. Xiang, M. Zhang, G. Li, H. Yuyong, and Z. Pan, “Real-time stereo matching based on fast belief propagation,” Mach. Vision Appl. 23, 1219–1227 (2012).
8. M. Bleyer and M. Gelautz, “Graph-based surface reconstruction from stereo pairs using image segmentation,” Proc. SPIE 5665, 288–299 (2005).
9. L. Hong and G. Chen, “Segment-based stereo matching using graph cuts,” in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 2004), pp. 74–81.
10. V. Kolmogorov and R. Zabih, “Computing visual correspondence with occlusions via graph cuts,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2001), pp. 508–515.
11. P. F. Felzenszwalb and D. P. Huttenlocher, “Efficient belief propagation for early vision,” in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 2004), pp. 261–268.
12. G. Tolt and I. Kalaykov, “Measures based on fuzzy similarity for stereo matching of colour images,” Soft Comput. 10, 1117–1126 (2006).
13. A. Klaus, M. Sormann, and K. Karner, “Segment-based stereo matching using belief propagation and a self-adapting dissimilarity measure,” in Proceedings of IEEE International Conference on Pattern Recognition (IEEE, 2006), pp. 15–18.
14. M. Galar, J. Fernandez, G. Beliakov, and H. Bustince, “Interval-valued fuzzy sets applied to stereo matching of color images,” IEEE T. Image Process. 20, 1949–1961 (2011).
15. D. Van der Weken, M. Nachtegael, and E. E. Kerre, “Using similarity measures and homogeneity for the comparison of images,” Image Vision Comput. 22, 695–702 (2004).
16. W. J. Wang, “New similarity measure on fuzzy sets and on elements,” Fuzzy Set. Syst. 85, 305–309 (1997).
17. S. M. Chen, M. S. Yeh, and P. Y. Hsiao, “A comparison of similarity measures of fuzzy values,” Fuzzy Set. Syst. 72, 78–89 (1995).
18. S. Kullback, Information Theory and Statistics (Wiley, 1959).
19. C. P. Pappis and N. I. Karacipilidis, “A comparative assessment of measures of similarity of fuzzy values,” Fuzzy Set. Syst. 56, 171–174 (1993).
20. E. Deza and M. M. Deza, “Image and audio distances,” in Dictionary of Distances, E. Deza and M. M. Deza, eds. (Elsevier, 2006), pp. 262–278.

OCIS Codes
(100.0100) Image processing : Image processing
(150.0150) Machine vision : Machine vision
(330.0330) Vision, color, and visual optics : Vision, color, and visual optics

ToC Category:
Image Processing

History
Original Manuscript: September 18, 2012
Revised Manuscript: December 17, 2012
Manuscript Accepted: January 1, 2013
Published: January 11, 2013

Virtual Issues
Vol. 8, Iss. 2 Virtual Journal for Biomedical Optics

Citation
Mikel Galar, Aranzazu Jurio, Carlos Lopez-Molina, Daniel Paternain, Jose Sanz, and Humberto Bustince, "Aggregation functions to combine RGB color channels in stereo matching," Opt. Express 21, 1247-1257 (2013)
http://www.opticsinfobase.org/vjbo/abstract.cfm?URI=oe-21-1-1247


