Optics Express

  • Editor: C. Martijn de Sterke
  • Vol. 15, Iss. 1 — Jan. 8, 2007
  • pp: 62–75
Discrimination among similar looking noisy color patches using Margin Setting

Kaveh Heidary and H. John Caulfield  »View Author Affiliations


Optics Express, Vol. 15, Issue 1, pp. 62-75 (2007)
http://dx.doi.org/10.1364/OE.15.000062


Abstract

Discriminating one color from among many similar-appearing colors, even when the colored objects show substantial variation or noise, is of obvious importance. We show how to accomplish this using a technique called Margin Setting. It is possible not only to achieve very low error rates but also to exert some control over the types of errors that do occur. Robust spectral filtering prior to spatial pattern recognition allows the subsequent filtering to be based on conventional coherent optical correlation, which can be done monochromatically.

© 2007 Optical Society of America

1. Introduction

Natural Color is a spectral discriminant computed by the brain and attributed to the perceived image. We explore here what we have called Artificial Color [1], which also computes spectral discriminants and uses them to select pixels for inclusion in a conceptually defined class. Artificial Color is a biomimetic technology for spectral sensing and processing that can be utilized for spectral discrimination. Animals use spectral information collected by cone cells with multiple broad sensitivity functions that have considerable spectral overlap. Most humans have three types of cone cells, although some females have four. Some animal species have more types of cone cells than humans, with sensitivity functions extending into the infrared and ultraviolet, evolved in accordance with their habitats and needs. Spectral discriminants called colors (hues) are computed by the animal brain and attributed to scene segments following normalization for brightness. Like biological color, Artificial Color uses two or more overlapping spectral sensitivity functions to collect spectral information from the scene. The optimum number, shape, and extent of spectral overlap of the sensitivity functions are application driven. In some systems data storage and processing speed limitations dictate the number of sensitivity functions. The shape and spectral overlap of the sensitivity functions are then optimized in order to maximize the mutual separations among the color classes of importance to the specific application. The information collected by the spectral sensitivity functions is then electronically processed and discriminants are attributed to scene segments. Natural Color must serve multiple, quite diverse purposes, but Artificial Color can use the same data to accomplish a much more narrowly defined goal. With the narrowing of the task, we expect an enhancement of the capability for that task.

The purpose of this paper is to explore how well spectral discrimination preprocessors specialized to a narrowly defined task can perform. Emphasis will be placed on between-class separation and within-class integrity of the Artificially Colored scenes. We will apply Artificial Color negative filters that set spectrally recognized colors to white and leave the other pixels unchanged. The white regions in the filtered images are those that would then be subjected to conventional Fourier optical filtering for shape recognition. Ideally, all pixels with the chosen color should be set to white (good within-class packing) and no other pixels should be white (good between-class separation). We assume noise in the color space, so perfection is impossible. That allows us to explore means to trade off within-class and between-class figures of merit in the RGB preprocessing.
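As a concrete illustration, a negative filter of this kind can be sketched in a few lines (a minimal sketch of our own; `negative_filter` and the greenish-pixel predicate are illustrative names, and the actual spectral recognizer is the subject of Section 2):

```python
def negative_filter(image, is_recognized):
    # image: 2-D list of RGB tuples with components in [0, 1].
    # Pixels whose color is spectrally recognized are set to white;
    # all other pixels keep their original color.
    WHITE = (1.0, 1.0, 1.0)
    return [[WHITE if is_recognized(px) else px for px in row]
            for row in image]

# Toy usage: "recognize" any pixel whose green channel dominates.
img = [[(0.2, 0.6, 0.2), (0.9, 0.1, 0.1)]]
out = negative_filter(img, lambda px: px[1] > max(px[0], px[2]))
```

The white regions of `out` are exactly the pixels a subsequent monochromatic Fourier-optical stage would operate on.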

The discrimination method we use is called Margin Setting [2,3]. Derived to satisfy both the goal of Learning Theory, namely perfect discrimination of the members of the training set with a classifier of minimum complexity (minimum Vapnik-Chervonenkis dimension), and the goal of Support Vector Machines [4–6], namely maximum margin, Margin Setting had to introduce some new concepts and with them new tradeoffs that we explore here. We will show that Margin Setting allows a tradeoff between within-class and between-class discrimination.

The examples presented here demonstrate that Margin Setting is an extremely powerful and flexible classification technique for differentiating one color class from others in a set of many closely packed color classes. Applications of Margin Setting are numerous and include spectral recognition of camouflaged objects and targets concealed in natural terrain. Other potential applications are the detection of minute color changes in environmental impact studies and remote sensing. Uncovering fraud and identifying counterfeit objects, e.g. currency and personal identification documents, based on visually imperceptible color differences with respect to the genuine articles is another area of potential application.

2. Margin Setting

In statistical pattern recognition (the task opticists wish to perform with Fourier filtering), the goal is to place new, previously unseen data in their proper categories on the basis of what has been learned from the training set. That goal is called generalization. There are many powerful techniques for achieving good generalization, such as boosting [5,6] and the Support Vector Machine [4]. Margin Setting is a new method that claims to be able to achieve better generalization than either of those. It has its origin in early works [2,3], which trained a sequence of classifiers, each being asked to classify only those data that were not classified well by earlier classifiers. In particular, “classifying well” is now defined as having all of the classified points of one class lie a specified distance or more from any member of the other classes. That spacing of classes in the decision space is called the margin. It provides for variations of new data from the old data without having the new data misclassified. Setting big margins allows us to make very few errors. On the other hand, big margins lead to large numbers of data that cannot be classified at all. We will show how the choice of margin translates to within-class and between-class tradeoffs.

The simplest approach to Margin Setting is to classify data (in this case the three-dimensional RGB data) according to the distance of the data vector from various prototype or exemplar vectors. We use a simple evolutionary or immune-systems method to choose these prototypes. All RGB vectors of a color class are potential prototypes for that class. The distance between a potential prototype and the closest vector from other classes is the zero-margin radius. Each potential prototype is rank ordered according to the number of same-class training set vectors inside the zero-margin-radius sphere around that prototype. Superior prototypes (those with a higher ranking or figure of merit) are obtained by mutation of the old prototypes. Prototypes with higher rankings (figures of merit) are mutated preferentially. The mutation process continues for a specific predefined number of generations or until no further improvement in the figure of merit is achieved. The vector with the highest figure of merit is the class prototype vector, the figure of merit being the number of same-class training set vectors contained inside the zero-margin radius around the prototype.
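The zero-margin radius and figure-of-merit ranking just described can be sketched as follows (pure Python; the function names and toy data are ours, not the paper's):

```python
import math

def dist(a, b):
    # Euclidean distance between two RGB vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def zero_margin_radius(candidate, other_class_vectors):
    # Distance from a candidate prototype to the closest training
    # vector belonging to any other class.
    return min(dist(candidate, v) for v in other_class_vectors)

def figure_of_merit(candidate, same_class_vectors, radius):
    # Number of same-class trainers strictly inside the
    # zero-margin sphere around the candidate.
    return sum(1 for v in same_class_vectors if dist(candidate, v) < radius)

# Toy data: two greenish classes in RGB space.
class_a = [(0.2, 0.6, 0.2), (0.22, 0.58, 0.21), (0.21, 0.61, 0.19)]
class_b = [(0.2, 0.7, 0.2), (0.21, 0.72, 0.22)]

# Rank every class-a vector as a potential prototype.
ranked = []
for cand in class_a:
    r = zero_margin_radius(cand, class_b)
    ranked.append((figure_of_merit(cand, class_a, r), r, cand))
best_fom, best_radius, prototype = max(ranked)
```

In the full algorithm the ranked candidates are then refined by mutation rather than taken as-is.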

Calling the distance from any prototype to the nearest member of the other classes the zero-margin radius, the ten-percent margin radius is defined as ninety percent of the zero-margin radius, and so forth. If we use the zero-margin radius, even the slightest variation of the new data from the training data could lead to an erroneous classification. Using smaller radii decreases the likelihood of such an error, but makes the classifier more prone to making no decision at all on some data points. A set of prototypes in conjunction with their respective radii of influence (the ten-percent margin radius, for example) constitutes the classifier.
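In code, the classifier is just a list of (label, prototype, zero-margin radius) triples; the margin shrinks each radius before the membership test. A hedged sketch (names are ours, not the paper's):

```python
import math

def classify(x, classifier, margin=0.10):
    # classifier: list of (class_label, prototype, zero_margin_radius).
    # A point is assigned to a class only if it falls inside the
    # margin-reduced sphere around one of that class's prototypes;
    # otherwise no decision is made (nonclassification).
    for label, proto, r0 in classifier:
        r = (1.0 - margin) * r0   # e.g. the ten-percent margin radius
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, proto)))
        if d <= r:
            return label
    return None                   # nonclassification

clf = [("color-1", (0.2, 0.6, 0.2), 0.10)]
```

For example, `classify((0.21, 0.60, 0.20), clf)` falls inside the shrunken sphere, while `classify((0.2, 0.7, 0.2), clf)` lies just outside it and yields no decision.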

Margin Setting uses multiple rounds of training, each using as the training set only the points in the original training set that do not fall within one of the spheres around one of the existing exemplars. The training ceases when some stopping condition is met. Most frequently, we stop after a prescribed number of rounds. Effects of number of classification rounds on classifier performance are examined in Section 4.

Clearly, two types of failures can occur on new data – misclassification and nonclassification. Experimentally, the sum of those tends to be somewhat margin independent after as few as four rounds. Large margin leads to few misclassifications and many nonclassifications and conversely small margins lead to few nonclassifications at the expense of a higher rate of misclassification.

2.1. Pseudo code

Margin Setting is a hierarchical approach to classification. This section provides a general description of the algorithm. The round-one classifier is trained using all training elements (exemplars) from all classes. This classifier comprises N^{(1)} prototype-radius pairs, where N^{(1)} is the total number of classes. Each higher-round classifier comprises N^{(j)} prototype-radius pairs

N^{(j+1)} \le N^{(j)}; \quad 1 \le j \le M
(1)

Where the superscript j denotes the classification round and M is the total number of rounds. All classes are present during training of the round-one classifier. Training of the higher-round classifiers, however, may involve partial subsets of the classes. Exemplars that are subsumed by the classifier in a certain training round are removed from the training set and are absent in subsequent training rounds. Exemplar sets from one or more classes can potentially be removed in their entirety prior to the conclusion of the training process. In such cases the training process involves the remaining classes, and each training round results in prototype-radius pairs for the classes that are present in that round. In each round of training all elements are assigned a condition number.

X^{(j)} = \bigcup_{i=1}^{N^{(1)}} X_{(i)}^{(j)}; \qquad X_{(i)}^{(j)} = \left\{ x_{(i)k}^{(j)}; \; 1 \le k \le M_{(i)}^{(j)} \right\}
(2)

Where N^{(1)} is defined above, X^{(j)} is the round-j training set, X_{(i)}^{(j)} and x_{(i)k}^{(j)} denote, respectively, the class-i trainer set and a representative element of that set, and M_{(i)}^{(j)} is the number of class-i elements in round j. It is noted that some X_{(i)}^{(j)} sets may be empty. During each training round a radius is attributed to each element, defined as the distance between the element and its closest neighbor belonging to the other classes.

R_{(i)k}^{(j)} = \min_{l \ne i} \left\{ \left\| x_{(i)k}^{(j)} - x_{(l)m}^{(j)} \right\|; \; 1 \le m \le M_{(l)}^{(j)} \right\}; \quad 1 \le k \le M_{(i)}^{(j)}
(3)

To each element is assigned a figure of merit, defined as the number of same-class members whose distance to the element is smaller than the element radius.

F_{(i)k}^{(j)} = n\left( \tilde{X}_{(i)k}^{(j)} \right); \quad 1 \le k \le M_{(i)}^{(j)}
(4a)
\tilde{X}_{(i)k}^{(j)} = \left\{ x_{(i)l}^{(j)} \,\middle|\, \left\| x_{(i)l}^{(j)} - x_{(i)k}^{(j)} \right\| < R_{(i)k}^{(j)}; \; 1 \le l \le M_{(i)}^{(j)} \right\}
(4b)
\tilde{X}_{(i)k}^{(j)} \subseteq X_{(i)}^{(j)}
(4c)

It is noted that the figure of merit of each exemplar is at least one. The largest figure of merit in a class is defined as the class figure of merit.

F_{(i)}^{(j)} = \max_{k} \left\{ F_{(i)k}^{(j)} \right\}
(5)

The computation of class prototypes and radii during each training round is carried out by processing one class at a time. The prototype-radius-pair computation for a particular class starts with generating a large number of objects, far exceeding the number of current class trainers (e.g. by a factor of one hundred), by mutation (perturbation) of the current exemplars. Each member is mutated into a number of new objects commensurate with its figure of merit. Mutated objects are drawn from multivariate Normal (Gaussian) probability distributions with means centered at the corresponding exemplar and standard deviation values derived from the respective round-one (original) class exemplars.

X_{(i)}^{(j)} \rightarrow \bar{\bar{X}}_{(i)}^{(j)}
(6a)
\bar{\bar{X}}_{(i)}^{(j)} = \bigcup_{l=1}^{n\left(X_{(i)}^{(j)}\right)} \bar{X}_{(i)l}^{(j)}
(6b)
\bar{X}_{(i)l}^{(j)} = \left\{ \bar{x}_{(i)l}^{(j)} \,\middle|\, \bar{x}_{(i)l}^{(j)} \sim N\left( \mu_{(i)l}^{(j)}, \Sigma_{(i)} \right) \right\}
(6c)
\mu_{(i)l}^{(j)} = x_{(i)l}^{(j)}
(6d)
\Sigma_{(i)} = \begin{bmatrix} \sigma_{(i)1,1}^{2} & \cdots & \sigma_{(i)1,N_d}^{2} \\ \vdots & \ddots & \vdots \\ \sigma_{(i)N_d,1}^{2} & \cdots & \sigma_{(i)N_d,N_d}^{2} \end{bmatrix}
(6e)

Where \mu and \Sigma denote, respectively, the mean vector and covariance matrix of the Gaussian process from which the mutated objects are drawn, and N_d represents the dimensionality of the problem, which in this case is three. The covariance matrix for each class is obtained from the entire set of exemplars of the respective class. The probability density function of \bar{x}_{(i)l}^{(j)} is

f_{\bar{x}_{(i)l}^{(j)}}\left( \bar{x}_{(i)l,1}^{(j)}, \ldots, \bar{x}_{(i)l,N_d}^{(j)} \right) = \frac{1}{(2\pi)^{N_d/2} \left| \Sigma_{(i)} \right|^{1/2}} \exp\left( -\frac{1}{2} \left( \bar{x}_{(i)l}^{(j)} - \mu_{(i)l}^{(j)} \right)^{T} \Sigma_{(i)}^{-1} \left( \bar{x}_{(i)l}^{(j)} - \mu_{(i)l}^{(j)} \right) \right)
(7)
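The mutation step of Eqs. (6) and (7) can be sketched as below. For simplicity this sketch assumes a diagonal covariance (independent R, G, B perturbations drawn with `random.gauss`) rather than the full covariance matrix of Eq. (6e), and it allocates offspring to each exemplar in proportion to its figure of merit; all names are ours:

```python
import random

def mutate(exemplars, foms, sigmas, factor=100, seed=0):
    # Spawn roughly factor * len(exemplars) offspring; each exemplar
    # gets a share of offspring proportional to its figure of merit,
    # so superior members dominate the mutated pool.
    rng = random.Random(seed)
    total_fom = sum(foms)
    offspring = []
    for x, f in zip(exemplars, foms):
        n_children = max(1, round(factor * len(exemplars) * f / total_fom))
        for _ in range(n_children):
            # Gaussian perturbation of each channel, mean = exemplar.
            child = tuple(rng.gauss(mu, s) for mu, s in zip(x, sigmas))
            offspring.append(child)
    return offspring

pool = mutate([(0.2, 0.6, 0.2), (0.22, 0.58, 0.21)], foms=[3, 1],
              sigmas=(0.02, 0.02, 0.02))
```

The exemplar with the figure of merit 3 contributes three times as many offspring to the pool as the one with figure of merit 1.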

A set of objects is obtained by choosing n(X_{(i)}^{(j)}) members from the set \bar{\bar{X}}_{(i)}^{(j)} of Eq. (6b) by random sampling.

\hat{X}_{(i)}^{(j)} \subset \bar{\bar{X}}_{(i)}^{(j)}
(8)

The set \hat{X}_{(i)}^{(j)} is a mutated version of X_{(i)}^{(j)}, the class-i exemplars during round-j training, with the same number of elements as the exemplar set. The mutated set, however, tends to possess a higher class figure of merit than the parent set. This is due to the preferential treatment of class members with higher figures of merit in the mutation process: members with higher figures of merit spawn larger numbers of offspring, so the set \bar{\bar{X}}_{(i)}^{(j)} from which \hat{X}_{(i)}^{(j)} is drawn is populated with progeny of the superior members of X_{(i)}^{(j)}. The mutation process continues for a user-defined number of mutations or until no further improvement in the class figure of merit is observed from one generation to the next. In the examples presented in this paper four mutations were carried out during each round of training. For these examples the class-figure-of-merit values generally reach their respective plateaus by then, and increasing the number of mutation rounds beyond four tends to yield diminishing returns. The algorithm, however, is flexible and provides for optional dynamic control of the number of mutations, in which the mutation process in each training round continues while the covariance matrix of the random process from which mutated objects are drawn is adaptively adjusted; the mutation process is automatically terminated when no further improvement in the class figure of merit is obtainable.

At the conclusion of the mutation process the object with the highest figure of merit is designated as the class prototype for the respective training round. If two or more objects have figures of merit equal to the maximum value, one of them is chosen randomly and designated the class prototype. The object radius, as defined in Eq. (3), is designated the zero-margin radius.
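Putting the pieces together, the multi-round bookkeeping can be sketched as follows. To stay short, this sketch skips the mutation step and simply promotes the best raw exemplar of each class to prototype, so it illustrates the round structure of Section 2.1 rather than the full algorithm; all names are ours:

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def train_margin_setting(classes, rounds=4):
    # classes: {label: [rgb vectors]}.  Each round picks, per class,
    # the exemplar with the best figure of merit as that round's
    # prototype, then removes the trainers it subsumes (zero margin)
    # before the next round.
    remaining = {k: list(v) for k, v in classes.items()}
    classifier = []
    for _ in range(rounds):
        new_pairs = []
        for label, members in remaining.items():
            if not members:
                continue
            others = [v for k, vs in remaining.items()
                      if k != label for v in vs]
            if not others:
                continue
            best = max(members, key=lambda m: sum(
                1 for v in members
                if dist(v, m) < min(dist(m, o) for o in others)))
            r0 = min(dist(best, o) for o in others)
            new_pairs.append((label, best, r0))
        if not new_pairs:
            break
        classifier.extend(new_pairs)
        for label, proto, r0 in new_pairs:
            remaining[label] = [v for v in remaining[label]
                                if dist(v, proto) >= r0]
    return classifier

clf = train_margin_setting({"a": [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0)],
                            "b": [(1.0, 0.0, 0.0), (0.9, 0.0, 0.0)]})
```

On this toy problem every trainer is subsumed in round one, so the classifier holds one prototype-radius pair per class and training stops early.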

3. Test scenes

As befits the objectives laid out earlier, there are independent spectral and spatial aspects to the scenes used in the following examples.

3.1. Spectral aspects

We chose 24 classes of greenish colors to make the spectral discrimination difficult. Each class was described by a Gaussian distribution of RGBs with means as noted in Table 1. Various variances were used to experiment with within-class variations.

Table 1. Twenty-four mean colors used in the images. The actual pixel colors in each patch of the image under test were drawn stochastically from a normal distribution with one of those means and a standard deviation we varied.


3.2. Spatial aspects

For us humans, color depends not just on the RGB set but also on the color of neighboring patches. To illustrate this and the superiority of Artificial Color to Natural Color in this case, we wanted to choose multiple random neighbors for each point.

To provide a pleasing image, we wrote a program to generate a space-filling array of rectangles of various sizes and aspect ratios, a style borrowed from the Dutch painter Piet Mondrian. The Mondrians used in all examples here are each composed of 1024 × 1024 pixels and have varying numbers of rectangular patches. We then assigned colors one at a time to randomly chosen rectangles to produce our final assignment of color categories. In some examples Mondrians with a large number of patches were painted with colors drawn from the entire set. These examples are used to illustrate the effectiveness of Margin Setting in handling classification problems involving numerous classes. In other examples Mondrians with a few patches were painted with color classes that are exceedingly close to each other in the RGB space. These examples provide images with more apparent visual color differentiation and are intended to illustrate the effects of various parameter settings on classifier performance.

4. Representative results

Figure 1 depicts a 256-patch Mondrian using all 24 color categories in both the pure (zero standard deviation) case and a moderate-noise (0.02 standard deviation) case. Each rectangular patch of the pure Mondrian consists of pixels with identical RGB values drawn from Table 1. Each pixel in a particular patch of the noisy Mondrian, on the other hand, is assigned an RGB value obtained from three Gaussian processes with mean values given in the respective row of Table 1 and a user-specified variance. All negative RGB values are set to zero and a value of one is assigned to all those greater than one.
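The noisy-patch construction just described amounts to drawing each channel from a clipped Gaussian; a minimal sketch (names are ours):

```python
import random

def noisy_patch(mean_rgb, sigma, n_pixels, seed=0):
    # Draw each pixel's R, G, B independently from a Gaussian centered
    # on the patch's mean color, then clip to the valid [0, 1] range,
    # as described for the noisy Mondrians.
    rng = random.Random(seed)
    pixels = []
    for _ in range(n_pixels):
        pixels.append(tuple(min(1.0, max(0.0, rng.gauss(m, sigma)))
                            for m in mean_rgb))
    return pixels

patch = noisy_patch((0.20, 0.60, 0.20), sigma=0.02, n_pixels=100)
```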

Fig. 1. Noiseless (top) and a noisy (bottom) Mondrian using the 24 color categories of Table 1.

In the example of Fig. 1 the classifier is trained to identify pixels that belong to any of the twenty-four color classes used for painting the Mondrian. The Margin Setting algorithm [2] with various parameter settings is utilized to determine the classifier. The classifier is subsequently used to isolate and white out pixels whose RGB values are determined to belong to specific color classes. Although each patch had numerous “noisy” pixels, we trained on only 20 samples from each of the 24 classes, far fewer samples than Learning Theory [4–6] suggests are necessary. Experiments reveal that larger numbers of trainers lead to superior classifiers with improved error performance. The examples shown here, however, are intended to illustrate the efficacy of Margin Setting even when the number of trainers is limited.

Figure 2 shows the results of using a 10% margin and four rounds in discriminating Color-1 from the other 23 in the Mondrian shown in Fig. 1. The top figure shows the original noisy Mondrian with the Color-1 patches indicated. Note how close in appearance the Color-1 patches are to many of their neighbors. The bottom figure is the result of setting to white all pixels identified as belonging to Color-1 and not to any of the other 23 color sets, using a 10% margin and four rounds of training. Clearly this is very effective.

Fig. 2. Original (top) and filtered Mondrian (bottom) using a four-round classifier with ten-percent margin.

To examine the effect of the margin value, we vary the margin in the Margin Setting classification process and observe its efficacy in the identification of two color classes that are very close to each other and are also close to some other colors in Table 1. Figures 3–5 are a sequence of images doing that. In each of these figures the top image shows the result of identifying all pixels whose RGB values are determined to belong to Color-15. The bottom image shows the identification of all pixels belonging to Color-20. In each image the identified pixels are colored white.

In the images of Fig. 3 the aimed-for class is obvious, because its rectangles are colored white. It is seen that numerous pixels from rectangles painted with other color classes are also colored white. This is due to misclassification of those pixels. Pixels belonging to other color classes are erroneously identified (false positives) as Color-15 class members (top image) and Color-20 class members (bottom image) due to the low margin value of ten percent. Figure 4 shows that increasing the margin value to twenty-five percent results in a marked improvement in the classifier performance. The discernible reduction in the misclassification rate resulting from the increased margin value from 10% to 25% is not accompanied by any noticeable increase in the nonclassification rate (false negative). Figure 5 shows that increasing the margin value to 40% results in a dramatic improvement of the classifier as it relates to misclassification rates. This figure shows that 40% margin leads to excellent between-class discrimination performance. It is also seen from this figure that some of the aimed-for rectangular patches have pixels that are left with their original color and are not colored white as desired. This is due to nonclassifications brought about by the higher margin value. Further increase in margin value beyond 40% will result in significant within-class failure rates (nonclassification).

Fig. 3. Pixels classified as Color-15 class (top image) and Color-20 class (bottom image) are colored white. Margin value=10%.
Fig. 4. Pixels classified as Color-15 class (top image) and Color-20 class (bottom image) are colored white. Margin value=25%.
Fig. 5. Pixels classified as Color-15 class (top image) and Color-20 class (bottom image) are colored white. Margin value=40%.
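The tradeoff visible in Figs. 3–5 can be measured directly by sweeping the margin over labeled test pixels. A toy sketch (the classifier layout follows the prototype-radius description of Section 2; all names and data are ours):

```python
import math

def error_rates(test_points, classifier, margin):
    # test_points: list of (rgb, true_label).
    # Returns (misclassification rate, nonclassification rate).
    mis = non = 0
    for x, truth in test_points:
        decided = None
        for label, proto, r0 in classifier:
            d = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, proto)))
            if d <= (1.0 - margin) * r0:
                decided = label
                break
        if decided is None:
            non += 1                      # false negative
        elif decided != truth:
            mis += 1                      # false positive
    n = len(test_points)
    return mis / n, non / n

# One prototype; one on-class point and one nearby off-class point.
classifier = [("c15", (0.5, 0.5, 0.5), 0.2)]
points = [((0.5, 0.5, 0.5), "c15"), ((0.5, 0.5, 0.68), "c20")]
```

With zero margin the off-class point is swallowed by the sphere (a misclassification); with a 40% margin it falls outside and is simply left unclassified, mirroring the behavior seen in the figures.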

The example in Fig. 6 further illustrates the effect of the margin value on the classifier performance and the detrimental consequences of an excessive margin value. This example shows a set of filtering operations with different margins, all using four classification rounds and all applied to one new Mondrian. It shows that there is an optimum margin value: the 25% margin gives too many false positives and acceptable false negatives, whereas the 60% margin leads to a classifier that gives too many false negatives and acceptable false positives. For this example the 40% margin value leads to a classifier with superior performance in that it has a lower false-positive failure rate than the 25%-margin classifier and a lower false-negative failure rate than the 60%-margin classifier.

Fig. 6. Mondrian with pixels classified as Color-15 in the original image colored white in the filtered image. From top left clockwise the margin values are ten, twenty-five, sixty, and forty percent, respectively. Clearly 40% is better than either 25% or 60%.

Figures 7–11 illustrate the effect of the number of classification rounds on the classifier performance. The three-section Mondrian in Fig. 7 (upper left) is filtered using one, two, and four classification rounds. It appears that the effect of increasing the number of classification rounds saturates: no apparent improvement is achieved in going from two rounds to four. The examples of Figs. 8–11 pertain to the application of filtering operations to four-section Mondrians painted with the Color-8, 18, 23, and 24 classes using different standard deviations.

Fig. 7. The top left image is the original. The top right image was obtained after one round. The bottom left used two rounds, and the bottom right used four rounds. No improvement is evident in going from two to four rounds.

In the example of Fig. 8 the standard deviation of the color classes is 0.05, and the classifier uses a margin value of 25%. The original Mondrian (upper left) and a set of filtered images using one, two, four, six, and eight classification rounds are shown. The classifier attempts to identify Color-8 pixels and white them out. This example clearly illustrates the benefit of utilizing more classification rounds in reducing the number of false negatives. It is also seen that the eight-round classification process exhibits a noticeable false-positive rate in that some pixels from other patches are mistakenly identified as Color-8. The reason for choosing a margin value of 25% in this example, despite the fact that the classifier trained with a 40% margin had the best performance in the example of Fig. 6, is the higher color-class variability and interclass proximity in this example compared to the previous one, brought about by the higher standard deviation. The classifier trained with a 25% margin in this case yields superior performance to the 40%-margin classifier in that it gives lower false-negative error rates.

Fig. 8. Original Mondrian (top left) and filtered images using one (top right), two (middle left), four (middle right), six (bottom left), and eight (bottom right) classification rounds. Color standard deviation is 0.05 and classifier Margin is 25%.

In the example of Fig. 9 the standard deviation of color classes used for painting the Mondrian is 0.04. The filtering operation attempts to identify and white out Color-24 pixels. The zero-percent-margin classification process clearly exhibits excessive false-positive failure rate for the four-round classifier. It is seen that increasing the number of classification rounds from one to two improves the filtering process, whereas going to four rounds, despite decreased false-negative failure rate, has an adverse effect in that many Color-23 pixels are mistakenly identified as Color-24. This excessive false-positive failure rate arises because of the low margin value.

Fig. 9. Original Mondrian (top left) and filtered images using one (top right), two (bottom left), four (bottom right) classification rounds. Color standard deviation is 0.04 and classifier margin is zero percent.

The example of Fig. 10 illustrates the result of filtering the same Mondrian as that shown in Fig. 9. The classifier uses a margin value of 10% and the number of classification rounds is increased from one to four. It is noted that a three-round classifier performs better than the one- and two-round classifiers in that it lowers the number of false negatives. Increasing the number of classification rounds to four, however, does not result in a noticeable decrease in false negatives, and it increases false positives. A three-round classifier seems to be optimum for this example.

Fig. 10. Filtered images using one (top left), two (top right), three (bottom left), and four (bottom right) classification rounds. Color standard deviation is 0.04 and classifier Margin is ten percent.

In the Fig. 11 Mondrian the standard deviation of the color classes is 0.035. The results of the filtering operation with one, two, and four classification rounds, utilized to identify and white out Color-8 pixels, are shown here. In this example one hundred samples from each color class are used to train the classifier (in all previous examples only twenty samples from each class were used as training elements) with a margin value of thirty percent.

Fig. 11. Original Mondrian (top left) and filtered images using one (top right), two (bottom left), four (bottom right) classification rounds. Color standard deviation is 0.035 and classifier Margin is thirty percent.

The example of Fig. 12 illustrates the application of this method to spectral filtering of real-world images. The top image was obtained from the Caltech Vision Center Data Repository [7]. It was divided into two classes, namely target (plane) and non-target (background) pixels. For training we used thirty random samples from the target and thirty from the background. In order to have a few training samples representing each area of the two classes, thirty trainers were used here instead of the twenty used in the examples of Figs. 2–10. Sample pixels were chosen to represent all areas of importance, i.e. the white portion of the background, the shadow regions and concrete, as well as the body, nose, engine, and tail areas of the plane, and the printed letters. Margin Setting was used to train a spectral filter which was subsequently applied to the original (top) image. Pixels identified as members of the non-target class were set to white in the processed images. Two different margin values, five classification rounds, and five mutations were used in the training process. The second image from the top is the result of processing using a spectral filter obtained with a margin value of 20%. It is seen that the target area was well recognized and a significant portion of the background pixels was set to white. The third image from the top shows the result of processing and background removal using a 10%-margin filter. It is seen that reducing the margin lowered the number of background pixels identified as target-class members without affecting the number of plane pixels incorrectly identified as background. We suspect that the reason for not recognizing the shadow as non-target is the resemblance of the shadow to the printed letters, which are part of the target. The last image is the result of post-processing the spectrally filtered image with a 5×5 median filter [8].

Fig. 12. Top image is the original. Second from top is the spectrally processed image using a filter trained with 20% margin, and third image is the result of processing with a filter trained with 10% margin filters. Bottom image is the result of application of a median filter to the spectrally filtered (third) image.
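The median-filter post-processing step can be sketched in pure Python; this toy version uses a 3×3 window on a single channel (the paper used 5×5), truncating the window at the image edges:

```python
from statistics import median

def median_filter(img, k=3):
    # img: 2-D list of scalar pixel values.  Each output pixel is the
    # median of its k-by-k neighborhood (window truncated at the
    # edges), which removes isolated misclassified pixels.
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [img[j][i]
                      for j in range(max(0, y - r), min(h, y + r + 1))
                      for i in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = median(window)
    return out

# A lone white (1) pixel in a dark field is removed by the filter.
img = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
cleaned = median_filter(img)
```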

5. Conclusions

The results of the experiments presented here are typical of a very large set of experiments conducted in our laboratory, whose results are consistent with those presented in this paper. These experiments lead to several conclusions important to those who seek to recognize RGB images optically. We list them here in no particular order.

  • 1. Artificial Color filtering can isolate objects by very subtle RGB differences. This is because it can be specialized to the task of interest, as opposed to our Natural Color system that is quite general and hence not likely to be optimum for a specialized case such as the 24-class Mondrians with noisy and similar greenish colors.
  • 2. That observation justifies the concept of doing such filtering before spatial pattern recognition – the equivalent of working on the white pixels in optimally filtered Mondrians above. The subsequent filtering is then conventional coherent optical correlation that can be done monochromatically.
  • 3. Margin Setting allows an effective means to trade off between-class and within-class errors as seems best for any particular task. It does appear, however, that an optimum margin and a minimum number of rounds of training may exist for effective spectral discrimination. The examples here show that optimum margin value and minimum number of training rounds are problem dependent.

References

1. H. J. Caulfield, “Artificial Color,” Neurocomputing 51, 463–465 (2003).
2. H. J. Caulfield and K. Heidary, “Exploring Margin Setting for good generalization in multiple class discrimination,” Pattern Recognition 38, 1225–1238 (2005).
3. H. J. Caulfield, A. Karavolos, and J. E. Ludman, “Improving optical Fourier pattern recognition by accommodating the missing information,” Information Sciences 162, 35–52 (2004).
4. C. J. Burges, A Tutorial on Support Vector Machines for Pattern Recognition (Kluwer Academic Publishers, New York, 1998).
5. L. G. Valiant, “A theory of the learnable,” Communications of the ACM 27, 1134–1142 (1984).
6. R. E. Schapire, “The strength of weak learnability,” Machine Learning 5(2), 197–227 (1990).
7. http://www.vision.caltech.edu/Image_Datasets/Caltech101/Caltech101.html.
8. R. C. Gonzalez and R. E. Woods, Digital Image Processing (Prentice Hall, New York, 2002).

OCIS Codes
(100.2000) Image processing : Digital image processing
(100.2980) Image processing : Image enhancement
(330.1690) Vision, color, and visual optics : Color

ToC Category:
Image Processing

History
Original Manuscript: August 18, 2006
Revised Manuscript: December 9, 2006
Manuscript Accepted: December 18, 2006
Published: January 8, 2007

Virtual Issues
Vol. 2, Iss. 2 Virtual Journal for Biomedical Optics

Citation
Kaveh Heidary and H. John Caulfield, "Discrimination among similar looking noisy color patches using Margin Setting," Opt. Express 15, 62-75 (2007)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-15-1-62


