## Multi-channel data acquisition using multiplexed imaging with spatial encoding

Optics Express, Vol. 18, Issue 22, pp. 23041-23053 (2010)

http://dx.doi.org/10.1364/OE.18.023041


### Abstract

This paper describes a generalized theoretical framework for a multiplexed spatially encoded imaging system to acquire multi-channel data. The framework is confirmed with simulations and experimental demonstrations. In the system, each channel associated with the object is spatially encoded, and the resultant signals are multiplexed onto a detector array. In the demultiplexing process, a numerical estimation algorithm with a sparsity constraint is used to solve the underdetermined reconstruction problem. The system can acquire object data in which the number of elements is larger than that of the captured data. This case includes multi-channel data acquisition by a single-shot with a detector array. In the experiments, wide field-of-view imaging and spectral imaging were demonstrated with sparse objects. A compressive sensing algorithm, called the two-step iterative shrinkage/thresholding algorithm with total variation, was adapted for object reconstruction.

© 2010 Optical Society of America

## 1. Introduction

When multiple objects distributed in (*x*, *y*, *z*) space are captured by a single pinhole camera without occlusion, the objects are superimposed on the image plane, and detection of the object range is difficult. A coded-aperture imaging system for single-shot object range detection consists of a single coded-aperture mask that is used to spatially encode objects [4, 5]. *Depth from defocus* imaging can also be considered as spatially encoded imaging for range detection [6].


## 2. Compressive sensing


In CS, the measurement model is written as

**g** = **Φ** **f** = **ΦΨ** **β** = **Θ** **β**,

where **g** ∈ ℝ^{N_g×1}, **Φ** ∈ ℝ^{N_g×N_f}, **f** ∈ ℝ^{N_f×1}, **Ψ** ∈ ℝ^{N_f×N_β}, and **β** ∈ ℝ^{N_β×1} are the vectorized captured data, a system matrix, the vectorized object data, a basis matrix, and a transform coefficient vector, respectively. ℝ^{N_i×N_j} denotes an N_i × N_j matrix of real numbers. In CS, N_g is smaller than N_f and N_β.

When **β** is *s*-sparse, **Θ** should satisfy a sufficient condition for any *s*-sparse **β** so that the object data **f** can be reconstructed accurately. The sufficient condition is called the restricted isometry property (RIP) and is expressed as

(1 − ε) ||**β**_Λ||₂² ≤ ||**Θ**_Λ **β**_Λ||₂² ≤ (1 + ε) ||**β**_Λ||₂²,

where ε ∈ (0, 1) is a constant and ||·||₂ denotes the ℓ₂-norm [20].

Here Λ denotes the support of the *s* nonzero coefficients in **β**; **β**_Λ and **Θ**_Λ are the elements of **β** and the columns of **Θ** that support the *s* nonzero coefficients. A smaller ε indicates a better RIP, meaning that **Θ**_Λ preserves the Euclidean length of **β**_Λ well. When ε is larger, a larger N_g (the number of elements in the captured data) or a smaller *s* is required for accurate reconstruction. A Gaussian random matrix is known as an ideal compressive sensing matrix, since it satisfies the RIP with high probability [8].


For such a matrix, N_g should be roughly four times larger than *s* for accurate reconstruction, regardless of the numbers of elements in the object data **f** and the transform coefficient data **β**. The *s* nonzero coefficients in **β** can be estimated accurately by solving

minimize ||**β**||₁ subject to **g** = **Θ** **β**,

where ||·||₁ denotes the ℓ₁-norm. The ideal compressive sensing matrix is difficult to implement physically. The proposed system in this paper may have a worse RIP than the ideal compressive sensing matrix and require a larger N_g and a smaller *s* for accurate reconstruction.

Note that the RIP must be satisfied by **Θ**, which contains not only the system matrix **Φ** but also the basis matrix **Ψ** [7, 21].

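To make the underdetermined recovery concrete, the following sketch solves the ℓ₁-regularized form of the problem above with FISTA, an accelerated iterative shrinkage/thresholding method related to the TwIST algorithm used later in the paper. All dimensions, the regularization weight, and the iteration count are arbitrary toy choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
Ng, Nf, s = 50, 200, 5                     # N_g < N_f: underdetermined system
Phi = rng.standard_normal((Ng, Nf)) / np.sqrt(Ng)
beta_true = np.zeros(Nf)
support = rng.choice(Nf, size=s, replace=False)
beta_true[support] = rng.choice([-1.0, 1.0], size=s) * (1.0 + rng.random(s))
g = Phi @ beta_true                        # noiseless multiplexed measurement

# FISTA for: min_beta 0.5*||g - Phi beta||_2^2 + lam*||beta||_1
lam = 1e-3
L = np.linalg.norm(Phi, 2) ** 2            # Lipschitz constant of the gradient
x = np.zeros(Nf)
y, t = x.copy(), 1.0
for _ in range(2000):
    z = y - Phi.T @ (Phi @ y - g) / L                          # gradient step
    x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    y = x_new + ((t - 1.0) / t_new) * (x_new - x)              # momentum step
    x, t = x_new, t_new

rel_err = np.linalg.norm(x - beta_true) / np.linalg.norm(beta_true)
print(f"relative reconstruction error: {rel_err:.4f}")
```

With N_g = 10 *s* measurements and a Gaussian matrix, the 5-sparse vector is recovered almost exactly, even though the system has four times more unknowns than measurements.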

## 3. Generalized system model

Let *f*(*x*, *c*) denote a discretized multi-channel object in ℝ^{N_x×N_c}, where *x* and *c* represent the spatial dimension and a channel index, respectively. For simplicity, a model with a one-dimensional detector array is introduced; N_x and N_c are the numbers of detectors and channels, respectively. Extending to higher dimensions can be readily achieved with slight modifications to the model. Let 𝒢 denote the captured data in ℝ^{N_x×1}. 𝒢 can be written as

𝒢(*x*) = Σ_c Σ_{l=0}^{ℒ(c)−1} 𝒲(*l*, *c*) *f*(*x* − 𝒮(*l*, *c*), *c*),   (4)

where 𝒲(*l*, *c*) and 𝒮(*l*, *c*) represent the weights and the shifts of the *l*-th copy in the spatial encoding at the *c*-th channel, respectively, and ℒ(*c*) is the number of copies at the *c*-th channel. 𝒲(*l*, *c*), 𝒮(*l*, *c*), and ℒ(*c*) are designed or determined initially. As indicated in Eq. (4), the image of each channel is copied multiple times, the copies are weighted and spatially shifted, and the resultant copies of all channels are superimposed.

The matrix *A*_{l,c} ∈ ℝ^{N_x×N_x}, which denotes the *l*-th copy in the spatial encoding at the *c*-th channel, is expressed as

*A*_{l,c}(*p*, *q*) = 𝒲(*l*, *c*) if *p* − *q* = 𝒮(*l*, *c*), and 0 otherwise,

where *A*_{l,c}(*p*, *q*) is the (*p*, *q*)-th element in the matrix *A*_{l,c}. The matrix *E*_c ∈ ℝ^{N_x×N_x}, which denotes the spatial encoding at the *c*-th channel, is written as

*E*_c = Σ_{l=0}^{ℒ(c)−1} *A*_{l,c}.

The matrix **T** ∈ ℝ^{(N_x×N_c)×(N_x×N_c)}, which denotes the spatial encoding for the whole object data, is the block-diagonal matrix

**T** = diag(*E*_0, *E*_1, …, *E*_{N_c−1}),

whose off-diagonal blocks are **O** ∈ ℝ^{N_x×N_x}, an N_x × N_x zero matrix. The matrix **M** ∈ ℝ^{N_x×(N_x×N_c)}, which sums all of the channels, is written as

**M** = [**I** **I** ⋯ **I**],

where **I** ∈ ℝ^{N_x×N_x} is an N_x × N_x identity matrix. The object data is spatially encoded using **T**, and the result is multiplexed using **M**. Therefore, the vectorized captured data **g** ∈ ℝ^{N_x×1} can be written as

**g** = **MT** **f** = **Φ** **f**,

where **Φ** = **MT** ∈ ℝ^{N_x×(N_x×N_c)} and **f** ∈ ℝ^{(N_x×N_c)×1} are the system matrix and the vectorized object data, respectively.
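The construction of **Φ** = **MT** can be sketched numerically. The weights and shifts below are invented toy values (not the paper's calibrated ones); the matrix product is checked against a direct evaluation of the weighted-shift sum of Eq. (4).

```python
import numpy as np

Nx, Nc = 32, 2
# Toy encoding: per channel c, a list of (weight W(l,c), integer shift S(l,c)).
copies = [[(1.0, 0), (0.5, 3)],            # channel 0: L(0) = 2 copies
          [(0.8, 5)]]                      # channel 1: L(1) = 1 copy

def shift_matrix(w, s, n):
    """A_{l,c}(p, q) = w where p - q = s, and 0 otherwise."""
    return w * np.eye(n, k=-s)

# E_c: sum of weighted shift matrices; T: block-diagonal over channels.
E = [sum(shift_matrix(w, s, Nx) for w, s in copies[c]) for c in range(Nc)]
T = np.zeros((Nx * Nc, Nx * Nc))
for c in range(Nc):
    T[c * Nx:(c + 1) * Nx, c * Nx:(c + 1) * Nx] = E[c]
M = np.hstack([np.eye(Nx)] * Nc)           # sums (multiplexes) all channels
Phi = M @ T                                # system matrix, Nx x (Nx*Nc)

f = np.random.default_rng(1).random(Nx * Nc)   # vectorized object data
g = Phi @ f                                    # single multiplexed capture

# Direct evaluation of the weighted-shift sum in Eq. (4) for comparison.
direct = np.zeros(Nx)
for c in range(Nc):
    fc = f[c * Nx:(c + 1) * Nx]
    for w, s in copies[c]:
        shifted = np.zeros(Nx)
        shifted[s:] = fc[:Nx - s]
        direct += w * shifted
assert np.allclose(g, direct)
```

The resulting **Φ** has N_x rows and N_x × N_c columns, so the captured data has fewer elements than the object data, which is why the sparsity-constrained estimation of Section 2 is needed for demultiplexing.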

## 4. Simulations

Objects were reconstructed by the two-step iterative shrinkage/thresholding (TwIST) algorithm [22] with a total variation (TV) constraint [23].

𝒮_x(*l*, *c*) and 𝒮_y(*l*, *c*) are the shifts along the *x* and *y* axes; they are random integers whose ranges are shown in the tables. The sparsity *s* in the domain of the object is 2162. The size of the captured data is 128 × 128. White Gaussian noise with a signal-to-noise ratio (SNR) of 40 dB was added to the captured data. The peak signal-to-noise ratios (PSNRs) of the reconstructed objects in the three systems were 27.3 dB, 24.0 dB, and 24.9 dB, respectively. The reconstructed results in systems B and C have some artifacts at the planes on which the phantoms were not located. Figure 2(h) shows the result reconstructed from Fig. 2(b) by the Richardson–Lucy method [24, 25].
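The noise and fidelity figures quoted above follow the conventional definitions of measurement SNR and PSNR, which can be sketched as small helpers (the formulas are standard; the phantoms themselves are not reproduced here).

```python
import numpy as np

def add_awgn(signal, snr_db, rng=None):
    """Add white Gaussian noise at a given measurement SNR (dB)."""
    if rng is None:
        rng = np.random.default_rng(0)
    noise_power = np.mean(signal ** 2) / 10.0 ** (snr_db / 10.0)
    return signal + rng.normal(0.0, np.sqrt(noise_power), signal.shape)

def psnr(ref, est):
    """Peak signal-to-noise ratio (dB), with the peak taken from ref."""
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)
```

For example, on a unit-peak image a uniform error of 0.1 gives `psnr(f, f - 0.1)` = 20 dB.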

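For comparison, the Richardson–Lucy update [24, 25] used for Fig. 2(h) can be sketched in one dimension. The sparse signal and blur kernel below are invented toy data, not the simulated phantoms.

```python
import numpy as np

# Toy sparse signal and a normalized blur kernel (invented, for illustration).
f_true = np.zeros(64)
f_true[[10, 30, 31, 50]] = [1.0, 2.0, 1.5, 0.8]
h = np.array([0.25, 0.5, 0.25])            # kernel sums to 1
g = np.convolve(f_true, h, mode="same")    # blurred (noiseless) measurement

# Richardson-Lucy: multiplicative update f <- f * H^T(g / (H f)),
# where H^T is correlation, i.e. convolution with the flipped kernel.
f = np.ones_like(f_true)                   # flat positive initial estimate
for _ in range(500):
    est = np.convolve(f, h, mode="same")
    f *= np.convolve(g / np.maximum(est, 1e-12), h[::-1], mode="same")
```

The multiplicative form keeps the estimate nonnegative, which suits intensity images, but it enforces no sparsity constraint, which is why it cannot disambiguate the multiplexed channels the way the CS reconstruction can.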

The sparsity *s* in the two-dimensional TV domain is 4321, which is larger than that of the previous simulation. The size of the captured data is 128 × 128. The measurement SNR is 40 dB. The reconstruction PSNR is 23.4 dB. A comparison between Figs. 2(c) and 3(c) shows the effect of the sparsity on the reconstruction fidelity.

A larger number of copies ℒ(*c*) at higher measurement SNRs, and a smaller number of copies ℒ(*c*) at lower measurement SNRs, realize higher reconstruction PSNRs. Also, a smaller sparsity *s* results in a higher reconstruction PSNR.

## 5. Experiments

### 5.1. Wide field-of-view imaging

The pixel size of the detector was 6.45 μm × 6.45 μm. The objects were passively illuminated with incoherent interior lights. The clipped captured data, shown in Fig. 7(a), was 181 × 221 pixels. The reconstructed result, whose size was 181 × 221 × 2 pixels, is shown in Fig. 7(b). The two sub-fields were separated successfully.

The number of channels and the number of copies were N_c = 1 and ℒ(0) = 3 in Eq. (4), respectively. The weights and the shifts in Eq. (4) were measured manually from Fig. 9. The weights and the shifts of the copies are shown in Table 3.

### 5.2. Spectral imaging

## 6. Conclusions


**OCIS Codes**

(110.1758) Imaging systems : Computational imaging

(110.3010) Imaging systems : Image reconstruction techniques

**ToC Category:**

Imaging Systems

**History**

Original Manuscript: July 27, 2010

Revised Manuscript: October 12, 2010

Manuscript Accepted: October 14, 2010

Published: October 18, 2010

**Citation**

Ryoichi Horisaki and Jun Tanida, "Multi-channel data acquisition using multiplexed imaging with spatial encoding," Opt. Express **18**, 23041-23053 (2010)

http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-18-22-23041


### References

- A. Kak, and M. Slaney, Principles of Computerized Tomographic Imaging (IEEE Press, New York, 1988).
- E. R. Dowski, Jr., and W. T. Cathey, "Extended depth of field through wave-front coding," Appl. Opt. 34, 1859-1866 (1995). [CrossRef] [PubMed]
- M. Levoy, "Light fields and computational imaging," IEEE Computer 39, 46-55 (2006).
- S. Hiura, and T. Matsuyama, "Depth measurement by the multi-focus camera," in "CVPR ’98: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition," (IEEE Computer Society, Washington, DC, USA, 1998), p. 953.
- A. Levin, R. Fergus, F. Durand, and W. T. Freeman, "Image and depth from a conventional camera with a coded aperture," in "SIGGRAPH ’07: ACM SIGGRAPH 2007 papers," (ACM, New York, NY, USA, 2007), p. 70. [CrossRef]
- A. Rajagopalan, and S. Chaudhuri, "A variational approach to recovering depth from defocused images," IEEE Trans. Pattern Anal. Mach. Intell. 19, 1158-1164 (1997). [CrossRef]
- Y. Tsaig, and D. L. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory 52, 1289-1306 (2006). [CrossRef]
- E. J. Candes, and M. B. Wakin, "An introduction to compressive sampling," IEEE Signal Process. Mag. 25, 21-30 (2008). [CrossRef]
- R. Baraniuk, "Compressive sensing," IEEE Signal Process. Mag. 24, 118-121 (2007). [CrossRef]
- M. Wakin, J. Laska, M. Duarte, D. Baron, S. Sarvotham, D. Takhar, K. Kelly, and R. Baraniuk, "An architecture for compressive imaging," in "ICIP06," (2006), pp. 1273-1276.
- P. Ye, J. L. Paredes, G. R. Arce, Y. Wu, C. Chen, and D. W. Prather, "Compressive confocal microscopy," in "ICASSP ’09: Proceedings of the 2009 IEEE International Conference on Acoustics, Speech and Signal Processing," (IEEE Computer Society, Washington, DC, USA, 2009), pp. 429-432.
- A. Wagadarikar, R. John, R. Willett, and D. Brady, "Single disperser design for coded aperture snapshot spectral imaging," Appl. Opt. 47, B44-B51 (2008). [CrossRef] [PubMed]
- D. J. Brady, K. Choi, D. L. Marks, R. Horisaki, and S. Lim, "Compressive holography," Opt. Express 17, 13040-13049 (2009). [CrossRef] [PubMed]
- R. Horisaki, K. Choi, J. Hahn, J. Tanida, and D. J. Brady, "Generalized sampling using a compound-eye imaging system for multi-dimensional object acquisition," Opt. Express 18, 19367-19378 (2010). [CrossRef] [PubMed]
- M. D. Stenner, P. Shankar, and M. A. Neifeld, "Wide-field feature-specific imaging," in "Frontiers in Optics," (Optical Society of America, 2007), p. FMJ2.
- R. F. Marcia, C. Kim, J. Kim, D. J. Brady, and R. M. Willett, "Fast disambiguation of superimposed images for increased field of view," in "Proceedings of the IEEE International Conference on Image Processing," (San Diego, CA, 2008), pp. 2620-2623.
- R. F. Marcia, C. Kim, C. Eldeniz, J. Kim, D. J. Brady, and R. M. Willett, "Superimposed video disambiguation for increased field of view," Opt. Express 16, 16352-16363 (2008). [CrossRef] [PubMed]
- V. Treeaporn, A. Ashok, and M. A. Neifeld, "Increased field of view through optical multiplexing," in "Imaging Systems," (Optical Society of America, 2010), p. IMC4.
- M. E. Gehm, R. John, D. J. Brady, R. M. Willett, and T. J. Schulz, "Single-shot compressive spectral imaging with a dual-disperser architecture," Opt. Express 15, 14013-14027 (2007). [CrossRef] [PubMed]
- E. J. Candes, and T. Tao, "Decoding by linear programming," IEEE Trans. Inf. Theory 51, 4203-4215 (2005). [CrossRef]
- R. Gribonval, and M. Nielsen, "Sparse representations in unions of bases," IEEE Trans. Inf. Theory 49, 3320-3325 (2003). [CrossRef]
- J. M. Bioucas-Dias, and M. A. T. Figueiredo, "A new TwIST: Two-step iterative shrinkage/thresholding algorithms for image restoration," IEEE Trans. Image Process. 16, 2992-3004 (2007). [CrossRef]
- L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D 60, 259-268 (1992). [CrossRef]
- W. H. Richardson, "Bayesian-based iterative method of image restoration," J. Opt. Soc. Am. 62, 55-59 (1972). [CrossRef]
- L. B. Lucy, "An iterative technique for the rectification of observed distributions," Astron. J. 79, 745-754 (1974). [CrossRef]
- E. Hecht, Optics (Addison Wesley, 2001), 4th ed.


