## Regional multifocus image fusion using sparse representation

Optics Express, Vol. 21, Issue 4, pp. 5182-5197 (2013)

http://dx.doi.org/10.1364/OE.21.005182


### Abstract

Due to the nature of the optics involved, the depth of field of an imaging system is usually limited, so a captured image has only part of the scene in focus. Fusing images acquired at different focus levels is a promising approach to extending the depth of field. This paper proposes a novel multifocus image fusion approach based on clarity-enhanced image segmentation and regional sparse representation. On the one hand, by segmenting a clarity enhanced image that contains both intensity and clarity information, the proposed method decreases the risk of partitioning in-focus and out-of-focus pixels into the same region. On the other hand, owing to the regional selection of sparse coefficients, the proposed method is more robust to the distortions and misplacement that usually result from pixel-based coefficient selection. In short, the proposed method combines the merits of regional image fusion and sparse-representation-based image fusion. Experimental results demonstrate that the proposed method outperforms six recently proposed multifocus image fusion methods.

© 2013 OSA

## 1. Introduction


## 2. Related work

### 2.1. Normalized cuts and image fusion

A graph *G* can be represented by a set of nodes *V*, which is to be divided into two sets *A* and *B* (*A* ∪ *B* = *V* and *A* ∩ *B* = ∅) by removing the edges connecting these two parts. If the image needs to be divided into more parts, the same principle applies recursively. Shi and Malik [12] proposed the normalized cut (Ncut) criterion:

$$\mathrm{Ncut}(A,B)=\frac{\mathrm{cut}(A,B)}{\mathrm{assoc}(A,V)}+\frac{\mathrm{cut}(A,B)}{\mathrm{assoc}(B,V)},\qquad(1)$$

where $\mathrm{cut}(A,B)=\sum_{u\in A,\,v\in B}w(u,v)$ is the degree of dissimilarity between the two parts *A* and *B* in graph theory, and *w*(*u*, *v*) denotes the weight of the edge between nodes *u* and *v*. $\mathrm{assoc}(A,V)=\sum_{u\in A,\,t\in V}w(u,t)$ is the total connection from the nodes in *A* to all nodes in the graph *G*; $\mathrm{assoc}(B,V)$ is defined analogously as the total connection from the nodes in *B* to all nodes in *G*. The within-part association is measured by

$$\mathrm{Nassoc}(A,B)=\frac{\mathrm{assoc}(A,A)}{\mathrm{assoc}(A,V)}+\frac{\mathrm{assoc}(B,B)}{\mathrm{assoc}(B,V)}.\qquad(2)$$

Due to the two identities $\mathrm{assoc}(A,V)=\mathrm{assoc}(A,A)+\mathrm{cut}(A,B)$ and $\mathrm{assoc}(B,V)=\mathrm{assoc}(B,B)+\mathrm{cut}(A,B)$, Eq. (1) can be transformed into

$$\mathrm{Ncut}(A,B)=2-\mathrm{Nassoc}(A,B).\qquad(3)$$

From Eq. (3), it can be seen that Ncut measures not only the dissimilarity between the two parts *A* and *B* but also the similarity among nodes within the same part. Hence, the optimal segmentation is obtained when the value of Ncut is minimized; reference [12] shows how this minimization can be solved efficiently as a generalized eigenvalue problem. Normalized cuts have been combined with spatial-frequency-based region selection for multifocus image fusion [15, 20].
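To make the criterion concrete, the following sketch (an illustrative assumption, not code from the paper; numpy is assumed) evaluates Ncut for a binary partition of a small affinity matrix:

```python
import numpy as np

def ncut_value(W, mask):
    """Normalized-cut value Ncut(A, B) of a binary partition, per Eq. (1).

    W    : symmetric (N, N) affinity matrix with weights w(u, v) >= 0
    mask : boolean array of length N, True for the nodes assigned to part A
    """
    A, B = mask, ~mask
    cut = W[A][:, B].sum()      # cut(A, B): total weight crossing the partition
    assoc_A = W[A].sum()        # assoc(A, V): A's connections to all nodes
    assoc_B = W[B].sum()        # assoc(B, V)
    return cut / assoc_A + cut / assoc_B

# Toy graph: two tight clusters {0, 1} and {2, 3} joined by one weak edge.
W = np.array([[0, 5, 1, 0],
              [5, 0, 0, 0],
              [1, 0, 0, 5],
              [0, 0, 5, 0]], dtype=float)
good = ncut_value(W, np.array([True, True, False, False]))
bad = ncut_value(W, np.array([True, False, True, False]))
assert good < bad   # the natural split achieves the smaller Ncut
```

Minimizing this value over all partitions is NP-complete, which is why [12] relaxes it to a generalized eigenvalue problem.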

### 2.2. Sparse representation and image fusion

According to sparse representation theory [21–23], a signal vector *v* can be approximately represented as a sparse linear combination of prototype atoms:

$$v \approx Dx = \sum_{t=1}^{T} x_t d_t,\qquad(4)$$

where *T* is the number of atoms and *D* = {*d*₁, *d*₂, ..., *d*<sub>T</sub>} is the given overcomplete dictionary, which can be created using discrete cosine transforms, short-time Fourier transforms, wavelet transforms, or even learned directly from images [24]. Here *x* = {*x*₁, *x*₂, ..., *x*<sub>T</sub>} denotes the coefficient vector of *v* under the overcomplete dictionary. Based on sparse representation theory, if the actual pixel values of the source images are determined by the given dictionary, the number of non-zero entries in the coefficient vector *x* should be minimized. Letting ||*x*||₀ denote the number of non-zero entries of *x*, the above discussion can be formulated as

$$\min_x \|x\|_0 \quad \text{s.t.} \quad \|v - Dx\|_2 \le \varepsilon,\qquad(5)$$

where *ε* denotes the global error tolerance. The optimization problem in (5) could in principle be solved by systematically testing all potential combinations of columns of *D*, which is intractable [25]. In this paper, we choose a greedy algorithm, orthogonal matching pursuit (OMP), to solve the problem; details of the OMP algorithm can be found in [26]. Sparse representation was introduced to multifocus image fusion in [18].
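As an illustration of how OMP greedily approximates (5), here is a minimal sketch (our own naming and simplifications, not the implementation used in the paper; numpy is assumed):

```python
import numpy as np

def omp(D, v, sparsity, tol=1e-6):
    """Orthogonal matching pursuit for min ||x||_0 s.t. ||v - Dx||_2 <= tol.

    D : (n, T) overcomplete dictionary with unit-norm columns (atoms)
    v : (n,) signal vector
    Returns a length-T sparse coefficient vector x.
    """
    n, T = D.shape
    x = np.zeros(T)
    residual = v.astype(float).copy()
    support = []
    coeffs = np.array([])
    for _ in range(sparsity):
        if np.linalg.norm(residual) <= tol:
            break
        # Greedy step: pick the atom most correlated with the residual.
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        # Orthogonal step: least-squares fit on the selected atoms.
        coeffs, *_ = np.linalg.lstsq(D[:, support], v, rcond=None)
        residual = v - D[:, support] @ coeffs
    x[support] = coeffs
    return x

# With a trivial orthonormal dictionary, a 2-sparse signal is recovered exactly.
x = omp(np.eye(4), np.array([0.0, 3.0, 0.0, 1.0]), sparsity=2)
```

Each iteration adds at most one atom, so the sparsity budget directly bounds ||x||₀.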

## 3. Proposed method

### 3.1. Clarity measurement based on sparse representation

In phase 1, each source image is divided into patches of size *n* × *n*. Each image patch is translated into a vector that can be approximated by a linear combination of atoms from the fixed, known overcomplete dictionary with *T* atoms. For any image patch of a source image, the corresponding vector *v* can be expressed as

$$v \approx \sum_{t=1}^{T} x_t d_t,\qquad(6)$$

where $d_t$ denotes one atom of the overcomplete dictionary $D = [d_1, d_2, \ldots, d_T]$. Assuming that each source image can be divided into *J* patches, all *J* patches (their corresponding vectors) can be written as

$$V = [v_1, v_2, \ldots, v_J] \approx DS,\qquad(7)$$

where the sparse coefficient matrix *S* is defined as

$$S = [x_1, x_2, \ldots, x_J].\qquad(8)$$

We adopt the clarity metric of the traditional sparse representation method [18]: the clarity level of the *j*-th patch is measured by

$$C_j = \|x_j\|_1,\qquad(9)$$

where ||·||₁ is the Manhattan norm. For the two source images *A* and *B*, the sparse coefficients $S_A$ and $S_B$ can be calculated by the OMP algorithm according to the principle of sparse representation [23]. From $S_A$ and $S_B$, the patch clarity levels $C_A$ and $C_B$ of the image patches are derived as in (9). We then obtain the clarity level images $A_s$ and $B_s$, in which the clarity level of a pixel is calculated by averaging the clarity levels of all the patches that cover the pixel. Finally, the relative clarity level image $B'_s$ can be calculated as

$$B'_s = \frac{B_s}{A_s + B_s}.\qquad(10)$$
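In code, the patch- and pixel-level clarity computation of this subsection might look like the following sketch (the helper names and the sliding-patch layout are our assumptions; numpy is assumed):

```python
import numpy as np

def patch_clarity(S):
    """Clarity of each patch: the l1-norm of its sparse-coefficient column, Eq. (9)."""
    return np.abs(S).sum(axis=0)               # shape (J,)

def pixel_clarity(C, positions, shape, n):
    """Clarity level image: average the clarities of all patches covering a pixel."""
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for c, (i, j) in zip(C, positions):
        acc[i:i + n, j:j + n] += c             # spread patch clarity over its pixels
        cnt[i:i + n, j:j + n] += 1
    return acc / np.maximum(cnt, 1)

# Relative clarity image of source B, Eq. (10), given clarity maps A_s and B_s:
#   B_rel = B_s / (A_s + B_s)
```

The same routine applied to both coefficient matrices yields $A_s$ and $B_s$, from which the relative clarity follows elementwise.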

### 3.2. Segmentation based on clarity enhanced image

Since $B'_s$ lies in the interval [0, 1], we normalize the source images into the same interval and denote them as $A'$ and $B'$. The clarity enhanced image $CC$ is obtained by

$$CC = \alpha\, B'_s + (1-\alpha)\,\frac{A' + B'}{2}.\qquad(11)$$

Here the parameter $\alpha$ adjusts the contribution of the relative clarity measure against the original information from the source images. When $\alpha = 1$, the relative clarity measure is directly selected as the clarity enhanced image. On the contrary, when $\alpha = 0$, the clarity enhanced image degenerates to the traditional image to be partitioned: the simple average of the source images. Because the clarity enhanced image contains both the information from the source images and the clarity information generated by the sparse coefficients, the risk of partitioning in-focus and out-of-focus pixels into the same region is decreased, and a better segmentation for image fusion can be obtained. As an illustrative example, Fig. 3 shows a pair of source images, their simple average, the corresponding clarity enhanced image, and the segmentation results based on the different images. Clearly, the segmentation result from the clarity enhanced image is much better than the one derived from the simple average of the source images.
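The blend above can be sketched directly (the combination form follows our reading of Eq. (11), consistent with the two limiting cases $\alpha = 1$ and $\alpha = 0$ described in the text):

```python
import numpy as np

def clarity_enhanced(A_norm, B_norm, B_rel, alpha):
    """Clarity enhanced image CC: blend of relative clarity and source average.

    A_norm, B_norm : source images normalized to [0, 1]
    B_rel          : relative clarity image B'_s, values in [0, 1]
    alpha          : 1 -> pure relative clarity; 0 -> simple average of sources
    """
    return alpha * B_rel + (1.0 - alpha) * 0.5 * (A_norm + B_norm)
```

Scanning a few candidate $\alpha$ values and keeping the one with the best fusion score is how the paper selects this parameter per image set.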

### 3.3. Regional image fusion

Using the clarity enhanced image $CC$ obtained in phase 2, we partition the normalized images $A'$ and $B'$ into homogeneous regions. After calculating the mean clarity of each region of $A'$ and $B'$, we compare the means of corresponding regions of the source images and adopt the choose-max rule to select regions. According to the selected regions, we then choose the corresponding column vectors of $S_A$ and $S_B$ to construct the fused sparse coefficient matrix $S_F$. Following Eq. (7), the patch vectors of the fused image are derived from $S_F$ and the overcomplete dictionary $D$ by

$$V_F = D\, S_F.\qquad(12)$$

Finally, the fused image $I_F$ is reconstructed from $V_F$. We reshape each vector $v_{Fj}$ in $V_F$ into a patch of size $n \times n$ and then combine all the image patches according to their respective positions; this is essentially the inverse of phase 1. For each pixel position, the pixel value is the sum of the values of the overlapping patches, divided by the number of patches covering that pixel, to obtain the final reconstructed result. In this phase, we use regional selection to construct the sparse coefficient matrix and thereafter the fused image. As a result, we take advantage of sparse representation, i.e., effectively and completely extracting more valuable information from the original images [23], while the regional selection strengthens robustness against distortions.
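The patch-to-image reconstruction step (the inverse of phase 1) can be sketched as follows (variable names are our own; numpy is assumed):

```python
import numpy as np

def reconstruct(V_F, positions, shape, n):
    """Rebuild the fused image from the fused patch vectors V_F = D @ S_F.

    V_F       : (n*n, J) matrix whose columns are fused patch vectors
    positions : list of J top-left (i, j) patch positions
    shape     : (H, W) of the fused image
    """
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for col, (i, j) in enumerate(positions):
        acc[i:i + n, j:j + n] += V_F[:, col].reshape(n, n)  # place patch back
        cnt[i:i + n, j:j + n] += 1
    return acc / np.maximum(cnt, 1)   # average over patches covering each pixel
```

Dividing by the per-pixel patch count implements the averaging of overlapping patches described above.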

## 4. Experimental results

The proposed method is compared with six recently proposed multifocus image fusion methods:

- Multifocus image fusion based on sparse representation [18].
- Multifocus image fusion based on region segmentation and spatial frequency [15].
- Multifocus image fusion based on homogeneity similarity [2]: the initial fused image, produced with a multi-resolution fusion method, is refined using homogeneity similarity; the fused image is obtained by weighting the neighborhood pixels of each point of the source images.
- Multifocus image fusion based on blurring measure [11]: also a region-based method, in which a blurring measure decides whether each image block is in focus or not.
- Multifocus image fusion based on bilateral gradient [10]: a bilateral sharpness criterion decides whether each pixel of a source image is in focus, and the fused image is obtained accordingly.
- Multifocus image fusion based on sum-modified-Laplacian [17]: a typical MSD method that uses the Sharp Frequency Localized Contourlet Transform (SFLCT), a multi-scale transformation; the sum-modified-Laplacian (SML) distinguishes SFLCT coefficients of clear parts from those of blurry parts.

### 4.1. $Q^{AB/F}$

The first quantitative criterion is the objective fusion performance measure $Q^{AB/F}$ proposed by C. Xydeas and V. Petrovic [27]. In the $Q^{AB/F}$ method, a Sobel edge detector is used to calculate the edge strength and orientation information at each pixel in both the source and fused images [15, 27]. $Q^{AB/F}$ quantifies the total edge information transferred during multifocus image fusion: a larger $Q^{AB/F}$ value indicates that more edge information from the source images is transferred to the fused image, while a smaller value indicates that less edge information is transferred and more is lost.

### 4.2. The average correlation coefficient between blocks of ground truth and blocks of fused image

Given two source images *A* and *B* of size $m \times n$, we extract from each a clearly in-focus image block, identified by observing the source images; each block has size $cm \times cn$ ($cm < m$, $cn < n$). Blocks of the same size and location are then extracted from the fused image *F*. The correlation coefficient between a ground-truth block and the co-located block of the fused image is

$$corr(A,F)=\frac{\sum_{i}\sum_{j}\left(A_{ij}-\bar{A}\right)\left(F_{ij}-\bar{F}\right)}{\sqrt{\left(\sum_{i}\sum_{j}\left(A_{ij}-\bar{A}\right)^2\right)\left(\sum_{i}\sum_{j}\left(F_{ij}-\bar{F}\right)^2\right)}},\qquad(13)$$

where $\bar{A}$, $\bar{B}$ and $\bar{F}$ are the means of the corresponding matrix elements. The average correlation coefficient between the blocks of ground truth and the fused image is calculated as

$$\overline{corr}=\frac{corr(A,F)+corr(B,F)}{2}.\qquad(14)$$

Since a good multifocus fusion method should transfer as much information as possible from the in-focus regions of the source images to the fused image, a larger average correlation coefficient indicates a better fusion method.

The $Q^{AB/F}$ values and the average correlation coefficients between blocks of ground truth and blocks of the fused images are listed in Tables 1 and 2, where the values in bold indicate the highest quality measures obtained among the fusion methods. As Tables 1 and 2 show, according to both quantitative performance criteria, the proposed method is better than the other methods on all testing image sets. Considering all the results together, we can safely conclude that the quantitative evaluation coincides well with the visual comparison and that our method provides the best performance in these experiments.

The $\alpha$ values finally selected for the different image sets (chosen according to $Q^{AB/F}$) are all larger than zero, which shows that embedding clarity information into the image to be partitioned brings better image fusion results. One additional interesting finding is that 6 of the 8 testing image sets select $\alpha = 1$, i.e., they directly take the relative clarity measure as the clarity enhanced image to be partitioned. This can be explained by Fig. 3: comparing the source images with the relative clarity measure (the clarity enhanced image in this case), the relative clarity measure already contains enough information, such as texture and edges, from the source images, so the supplementary combination with the source images is unnecessary. However, as Fig. 13 illustrates, the relative clarity measure alone is sometimes insufficient (in that case, the important information inside the white box is missing), and information from the source images must be added to obtain a better candidate that produces good partitions for image fusion. Considering the computational cost of selecting among different $\alpha$ values, a simplified version of the proposed method directly uses the relative clarity measure as the image to be partitioned to obtain the regional information for regional sparse coefficient selection.

## 5. Conclusion

## Acknowledgments


**OCIS Codes**

(100.0100) Image processing : Image processing

(350.2660) Other areas of optics : Fusion

(100.4994) Image processing : Pattern recognition, image transforms

**ToC Category:**

Image Processing

**History**

Original Manuscript: December 17, 2012

Revised Manuscript: January 21, 2013

Manuscript Accepted: February 10, 2013

Published: February 22, 2013

**Virtual Issues**

Vol. 8, Iss. 3 *Virtual Journal for Biomedical Optics*

**Citation**

Long Chen, Jinbo Li, and C. L. Philip Chen, "Regional multifocus image fusion using sparse representation," Opt. Express **21**, 5182-5197 (2013)

http://www.opticsinfobase.org/vjbo/abstract.cfm?URI=oe-21-4-5182


### References

- H. Li, B. Manjunath, and S. K. Mitra, “Multisensor image fusion using the wavelet transform,” Graph. Model. Im. Proc. **57**(3), 235–245 (1995). [CrossRef]
- H. Li, Y. Chai, H. Yin, and G. Liu, “Multifocus image fusion and denoising scheme based on homogeneity similarity,” Opt. Commun. **285**(2), 91–100 (2012). [CrossRef]
- Y. Song, M. Li, Q. Li, and L. Sun, “A new wavelet based multi-focus image fusion scheme and its application on optical microscopy,” in *Proceedings of IEEE Conference on Robotics and Biomimetics* (Institute of Electrical and Electronics Engineers, Kunming, China, 2006), pp. 401–405.
- Y. Chen, L. Wang, Z. Sun, Y. Jiang, and G. Zhai, “Fusion of color microscopic images based on bidimensional empirical mode decomposition,” Opt. Express **18**(21), 21757–21769 (2010). [CrossRef] [PubMed]
- Q. Guihong, Z. Dali, and Y. Pingfan, “Medical image fusion by wavelet transform modulus maxima,” Opt. Express **9**(4), 184–190 (2001). [CrossRef] [PubMed]
- T. Stathaki, *Image Fusion: Algorithms and Applications* (Academic Press, 2008).
- X. Bai, F. Zhou, and B. Xue, “Fusion of infrared and visual images through region extraction by using multi-scale center-surround top-hat transform,” Opt. Express **19**(9), 8444–8457 (2011). [CrossRef] [PubMed]
- H. Hariharan, “Extending Depth of Field via Multifocus Fusion,” PhD Thesis, The University of Tennessee, Knoxville, 2011.
- H. B. Mitchell, *Image Fusion: Theories, Techniques and Applications* (Springer, 2010). [CrossRef]
- J. Tian, L. Chen, L. Ma, and W. Yu, “Multi-focus image fusion using a bilateral gradient-based sharpness criterion,” Opt. Commun. **284**(1), 80–87 (2011). [CrossRef]
- Y. Zhang and L. Ge, “Efficient fusion scheme for multi-focus images by using blurring measure,” Digital Sig. Process. **19**(2), 186–193 (2009). [CrossRef]
- J. Shi and J. Malik, “Normalized cuts and image segmentation,” IEEE Trans. Pattern Anal. Mach. Intell. **22**(8), 888–905 (2000). [CrossRef]
- A. Bleau and L. J. Leon, “Watershed-based segmentation and region merging,” Comput. Vis. Image Und. **77**(3), 317–370 (2000). [CrossRef]
- N. R. Pal and S. K. Pal, “A review on image segmentation techniques,” Pattern Recogn. **26**(9), 1277–1294 (1993). [CrossRef]
- S. Li and B. Yang, “Multifocus image fusion using region segmentation and spatial frequency,” Image Vis. Comput. **26**(7), 971–979 (2008). [CrossRef]
- L. Guo, M. Dai, and M. Zhu, “Multifocus color image fusion based on quaternion curvelet transform,” Opt. Express **20**(17), 18846–18860 (2012). [CrossRef] [PubMed]
- X. Qu, J. Yan, and G. Yang, “Multifocus image fusion method of sharp frequency localized contourlet transform domain based on sum-modified-laplacian,” Opt. Precis. Eng. **17**(5), 1203–1212 (2009).
- B. Yang and S. Li, “Multifocus image fusion and restoration with sparse representation,” IEEE Trans. Instrum. Meas. **59**(4), 884–892 (2010). [CrossRef]
- Z. Wang, Y. Ma, and J. Gu, “Multi-focus image fusion using PCNN,” Pattern Recogn. **43**(6), 2003–2016 (2010). [CrossRef]
- S. Li, J. T. Kwok, and Y. Wang, “Combination of images with diverse focuses using the spatial frequency,” Inf. Fusion **2**(3), 169–176 (2001). [CrossRef]
- K. Huang and S. Aviyente, “Sparse representation for signal classification,” Adv. Neural Inf. Process. Syst. **19**, 609–616 (2007).
- D. L. Donoho, “Compressed sensing,” IEEE Trans. Inform. Theory **52**(4), 1289–1306 (2006). [CrossRef]
- B. A. Olshausen and D. J. Field, “Emergence of simple-cell receptive field properties by learning a sparse code for natural images,” Nature (London) **381**, 607–609 (1996). [CrossRef]
- R. Rubinstein, A. M. Bruckstein, and M. Elad, “Dictionaries for sparse representation modeling,” Proc. IEEE **98**(6), 1045–1057 (2010). [CrossRef]
- G. Davis, S. Mallat, and M. Avellaneda, “Adaptive greedy approximations,” Constr. Approx. **13**(1), 57–98 (1997).
- M. Aharon, M. Elad, and A. Bruckstein, “K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Trans. Sig. Proces. **54**(11), 4311–4322 (2006). [CrossRef]
- C. Xydeas and V. Petrovic, “Objective image fusion performance measure,” Electron. Lett. **36**(4), 308–309 (2000). [CrossRef]
- J. Huang, T. Zhang, and D. Metaxas, “Learning with structured sparsity,” in *Proceedings of the 26th Annual International Conference on Machine Learning*, 417–424 (2009).
- J. Huang, X. Huang, and D. Metaxas, “Learning with dynamic group sparsity,” in *Proceedings of the 12th International Conference on Computer Vision*, 64–71 (2009).


### Figures

Fig. 1 – Fig. 13 (figure images not reproduced).
