## Coded aperture compressive temporal imaging

Optics Express, Vol. 21, Issue 9, pp. 10526-10545 (2013)

http://dx.doi.org/10.1364/OE.21.010526


### Abstract

We use mechanical translation of a coded aperture for code division multiple access compression of video. We discuss the compressed video’s temporal resolution and present experimental results for reconstructions of > 10 frames of temporal data per coded snapshot.

© 2013 OSA

## 1. Introduction

A brightly illuminated scene may deliver on the order of 10^9 photons per second per pixel. At this flux, frame rates approaching 10^6 frames per second may still provide useful information. Unfortunately, frame rate generally falls well below this limit due to read-out electronics. The power necessary to operate an electronic focal plane is proportional to the rate at which pixels are read out [12].

## 2. Theory

CACTI modulates the spatiotemporal scene *f*(*x*, *y*, *t*) ∈ ℝ^3 with a transmission function that shifts in time (Fig. 1). Doing this applies distinct local coding structures to each temporal channel prior to integrating the channels as limited-framerate images *g*(*x*′, *y*′, *t*′) ∈ ℝ^2 on the *N*-pixel detector. An *N_F*-frame, high-speed estimate of *f*(*x*, *y*, *t*) may be reconstructed from each low-speed coded snapshot *g*(*x*′, *y*′, *t*′), with *t*′ < *t*.

Simplifying to one spatial dimension (*x*, *y*) → (*x*) and respectively denoting object- and image-space coordinates with unprimed and primed variables, the sampled data *g*(*x*′, *t*′) consists of discrete samples of a continuous transformation [4] in which *T*(*x* − *s*(*t*)) represents the transmission function of the coded aperture, Δ_x is the detector pixel size, and Δ_t is the temporal integration time. *s*(*t*) describes the coded aperture's spatial position during the camera's integration window.

For linear coded aperture motion during Δ_t such that *s*(*t*) = *νt*, the image's temporal spectrum becomes a convolution of the object and code spectra; here *f̂*(*u*, *v*) is the 2D Fourier transform of the space-time datacube and *T̂*(*w*) is the 1D Fourier transform of the spatial code. Without the use of the coded aperture, *ĝ*(*u*, *v*) = sinc(*u*Δ_x)sinc(*v*Δ_t)*f̂*(*u*, *v*) and the sampled data stream is proportional to the object video low-pass filtered by the pixel sampling functions. Achievable resolution is proportional to Δ_x in *x* and Δ_t in time. The moving code aliases higher-frequency components of the object video into the passband of the detector sampling functions. The support of *T̂*(*w*) extends to some multiple of the code feature size Δ_c (in units of detector pixels), meaning that the effective passband may be increased by a factor proportional to 1/Δ_c in space and *ν*/Δ_c in time. In practice, finite mechanical deceleration times cause *T̂*(*w*) to have significant DC and low-frequency components in addition to the dominant higher-frequency terms.

Over an *N*-pixel active sensing area, the discretized form of the three-dimensional scene is an *NN_F*-voxel spatiotemporal datacube. In CACTI, a time-varying spatial transmission pattern codes the *N_F* temporal channels of **f** prior to integrating them into one detector image **g**. These measurements at spatial indices (*i*, *j*) and temporal index *k* are given by the coded sum of the temporal channels, where *n_{i,j}* represents imaging noise at the (*i*, *j*)th pixel. One may rasterize the discrete object **f** ∈ ℝ^{NN_F×1}, image **g** ∈ ℝ^{N×1}, and noise **n** ∈ ℝ^{N×1} to obtain the linear transformation of Eq. (4), **g** = **Hf** + **n**, where **H** ∈ ℝ^{N×NN_F} is the system's *discrete forward matrix*, which accounts for sampling factors including the optical impulse response, pixel sampling function, and time-varying transmission function. The forward matrix is a 2-dimensional representation of the 3-dimensional transmission function **T**: **H** = [**H**_1, **H**_2, ⋯, **H**_{N_F}], where **H**_k ∈ ℝ^{N×N} is a matrix containing the entries of **T**_k along its diagonal and **H** is a concatenation of all **H**_k, *k* ∈ {1, ..., *N_F*}. Figure 2 underlines the role **H** plays in the linear transformation.

At the *k*th temporal channel, the coded aperture's transmission function **T** is given by Eq. (7), where Rand(*m*, *n*, *p*) denotes a 50%, *m* × *n* random binary matrix shifted vertically by *p* pixels (optimal designs could be considered for this system as well).

The discrete position *s_k* approximates *s*(*t*) at the *k*th temporal channel by Eq. (8), where *C*, the system's compression ratio, is the amplitude traversed by the code in units of detector pixels. The detector integrates a *C*-pixel sweep of the coded aperture on the image plane and detects a linear combination of *C* uniquely-coded temporal channels of *f*.

The same forward matrix **H** may be used to reconstruct any given snapshot **g** within the acquired video while adhering to the hardware's mechanical acceleration limitations (Fig. 3). The discrete motion *s_k* (Eq. (8)) closely approximates the analog triangle waveform supplied by the function generator (Fig. 4).

_{k}*d*represent the number of

*detector*pixels the mask moves between adjacent temporal channels

*s*and

_{k}*s*

_{k}_{+1}.

*N*frames are reconstructed from a single coded snapshot given by thus, altering

_{F}*d*will affect the number of reconstructed frames for given a compression ratio

*C*.

When *d* = 1 (i.e. *N_F* = *C*), the detector pixels that sense the continuous, temporally-modulated object *f* are *critically-encoded*; each pixel integrates a series of nondegenerate mask patterns (Fig. 5(b)) during Δ_t. When *d* < 1 (i.e. *N_F* > *C*), only every (1/*d*)th temporal channel of **H** will contain nondegenerate temporal code information. These channels will reconstruct as if the sensing pixels are critically-encoded. The other temporal slices will interpolate the motion *between* critically-encoded temporal channels (Fig. 5(c)). Generally, this interpolation accurately estimates the direction of the motion between these critically-encoded frames but retains most of the residual motion blur. Although it is difficult to see how this form of interpolation affects the temporal resolution of *g*, one may use it to smooth the reconstructed frames, as evident in the experimental videos presented in Section 5.
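The trade-off between step size and frame count can be illustrated numerically. This sketch assumes the *N_F* = *C*/*d* relation described above (which reduces to *N_F* = *C* at critical encoding, *d* = 1):

```python
# Frames recovered per coded snapshot for a C = 14 pixel code sweep,
# assuming NF = C/d (an assumption consistent with the text's d = 1 case)
C = 14
frames = {d: int(round(C / d)) for d in (1.0, 0.5, 0.1)}
# d = 1.0 -> 14 critically-encoded frames; d = 0.5 -> 28; d = 0.1 -> 140,
# where the extra frames interpolate between critically-encoded channels
```

Only every (1/*d*)th of these frames carries nondegenerate code information; the rest are interpolated, as the text notes.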

## 3. Experimental hardware

The CACTI prototype consists of an objective lens, a coded aperture mounted on a piezoelectric translation stage, an *F*/8 achromatic relay lens (Edmund Optics), and a 640 × 480 FireWire IEEE 1394a monochrome CCD camera (Marlin AVT). The objective lens images the scene *f* onto the piezo-positioned mask. The function generator (Stanford Research Systems DS345) drives the piezo with a 10 V pk-pk, 15 Hz triangle wave to locally code the image plane while the camera integrates. We operate at this low frequency to accommodate the piezo's finite mechanical deceleration time.

*N_F* video frames of the discrete scene **f** are later reconstructed offline from each coded image **g** by the Generalized Alternating Projection (GAP) algorithm [21].

During each integration time Δ_t, the piezo can move over a range of 0–160 μm vertically in the (*x*, *y*) plane. Using 158.4 μm of this stroke moves the coded aperture eight 19.8 μm elements (sixteen 9.9 μm detector pixels) during each camera integration period Δ_t. Using larger strokes for a given modulation frequency is possible and would increase *C*.

The prototype modulates *N* = 65,000 pixels. Importantly, when operating at a given framerate, CACTI's passive coding scheme facilitates scalability without increasing power usage; one may simply use a larger coded aperture to modulate larger values of *N* pixels with negligible additional on-board power overhead. This passive coding scheme holds other advantages, such as compactness and polarization independence, over reflective LCoS-based modulation strategies, whereby the additional bandwidth required to modulate the datacube increases proportionally to *N*.

The piezoelectric stage was preferable for the hardware prototype because of its precision and convenient built-in Matlab interface. However, a low-resistance spring system could, in principle, serve the same purpose while using very little power.

### 3.1. Forward model calibration

To calibrate **H**, we capture images of the coded aperture at each discrete position *s_k* according to Eq. (8). Steps are *d* detector pixels apart over the coded aperture's range of motion (Fig. 7(c), Fig. 5). This accounts for system misalignments and relay-side aberrations. A Matlab routine controls the piezoelectric stage position during calibration. Since Matlab cannot generate a near-analog waveform for continuous motion, we connect the piezoelectric motion controller to the function generator via serial port during experimental capture.

The calibrated patterns enter **H** at positions *s_k* with additional zero-padding. We choose *d* = 0.99 μm; placing mask positions this far apart into **H** results in *N_F* = 160 reconstructed frames.
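The calibration numbers above can be checked directly. The stroke and pixel values come from the text; expressing the sweep and step in detector pixels and assuming *N_F* = *C*/*d* reproduces the reported 160 frames:

```python
pixel = 9.9        # detector pixel pitch, micrometers (from the text)
stroke = 158.4     # piezo stroke used per integration, micrometers
step = 0.99        # calibrated spacing between adjacent mask positions, micrometers

C = stroke / pixel     # sweep amplitude in detector pixels: 16
d = step / pixel       # step size in detector pixels: 0.1
NF = C / d             # frames per snapshot under the NF = C/d assumption: 160
```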

_{F}*s*through iterative ||

_{k}**g**−

**Hf**

_{e}||

_{2}-error reconstruction tests, where

**f**

_{e}is GAP’s

*N*-frame estimate of the continuous motion

_{F}*f*. From these tests, we chose and compared two numbers of frames to reconstruct per measurement,

*N*=

_{F}*C*= 14 and

*N*= 148.

_{F}**H**has dimensions 281

^{2}× (281

^{2}×

*N*) for both of these cases.

_{F}*d*and estimating up to 148 frames from a single exposure

**g**does not significantly reduce the aesthetic quality of the inversion results, nor does it significantly affect the residual error (Fig. 8(b)). The reconstruction time increases approximately linearly with

*N*as shown in Fig. 8(a).

_{F}## 4. Reconstruction algorithms

Because **H** multiplexes many local code patterns of the continuous object to the discrete-time image **g**, inverting Eq. (4) for **f** becomes difficult as *N_F* increases. Least-squares, pseudoinverse, and other linear inversion methods cannot accurately reconstruct such underdetermined systems.

We consider two sparsity-exploiting algorithms, TwIST [22] and GAP [21]. TwIST penalizes candidate solutions **f** that are unlikely or undesirable to occur in the estimate **f**_e, while GAP takes advantage of the structural sparsity of the subframes in transform domains such as wavelets and the discrete cosine transform (DCT).

### 4.1. TwIST

TwIST minimizes an objective of the form ||**g** − **Hf**||^2_2 + *λ*Ω(**f**), where Ω(**f**) and *λ* are the regularizer and regularization weight, respectively [22]. The regularizer penalizes estimates **f** that would result in poor reconstructions. The reconstruction results presented in Section 5 employ a Total Variation (TV) regularizer, which sums the magnitudes of the spatial gradients within each temporal channel and hence penalizes estimates with sharp spatial gradients. Because of this, sparsity in spatial gradients within each temporal channel is enforced through the iterative process of estimating **f**. We choose TV regularization since many natural scenes are well-described by sparse gradients. The regularization weight was chosen via experimental optimization over several test values *λ* ∈ [0.3, 2]. A weight of *λ* = 1 yielded the clearest reconstructions and was used for the estimates obtained by TwIST presented in Section 5.
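A standard anisotropic total-variation penalty of the kind described above can be sketched as follows; this is an illustrative discretization, not necessarily the paper's exact TV definition:

```python
import numpy as np

def tv(f):
    """Sum of absolute spatial gradients within each temporal channel.

    f: datacube of shape (NF, ny, nx); no gradients are taken across time,
    so each channel's spatial sparsity is penalized independently.
    """
    return np.abs(np.diff(f, axis=1)).sum() + np.abs(np.diff(f, axis=2)).sum()
```

A constant frame incurs zero penalty, while an isolated bright pixel is penalized once per spatial axis, so minimizing this term favors piecewise-smooth frames.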

### 4.2. Generalized alternating projection (GAP)

#### 4.2.1. The linear manifold

The linear manifold Π is the set of all **f** consistent with the measurements **g** through the forward model in Eq. (4). In other words, Π is the set of solutions to an underdetermined system of linear equations, which are to be disambiguated by exploiting structural sparsity of **f** in transform domains.

#### 4.2.2. The weighted *ℓ*_{2,1} ball

Let Ψ_1, Ψ_2, Ψ_3 be the matrices of orthonormal transforms along the two spatial coordinates and the temporal coordinate, respectively. The frames **f** are represented in Ψ = (Ψ_1, Ψ_2, Ψ_3) as coefficients **w** = Ψ(**f**). We partition the coefficients {*w_{i,j,k}*} into *m* disjoint subsets, {*w_{i,j,k}* : (*i*, *j*, *k*) ∈ *G_l*}, *l* = 1, ⋯, *m*, and weight each group *G_l* by a positive number *β_l*, where *G* = {*G*_1, *G*_2, ⋯, *G_m*} is a partition of the coefficient indices. The weights define the norm ||**w**||_{Gβ} = Σ_l *β_l* ||**w**_{G_l}||_2, and the weighted *ℓ*_{2,1} ball of size *C* is defined as Λ(*C*) = {**f** : ||Ψ(**f**)||_{Gβ} ≤ *C*}, where || · ||_2 is the standard *ℓ*_2 norm and **w**_{G_l} = [*w_{i,j,k}*]_{(i,j,k)∈G_l} is a subvector of **w** whose elements are indicated by indices in *G_l*. Note that Λ(*C*) is constructed as a weighted *ℓ*_{2,1} ball in the space of transform coefficients **w** = Ψ(**f**), since structural sparsity is desired for the coefficients instead of the voxels. The ball is rotated in voxel space, due to the orthonormal transform Ψ(·).
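The weighted ℓ_{2,1} norm that defines Λ(*C*) can be sketched directly; the group indices and weights below are illustrative, not the paper's wavelet/DCT grouping:

```python
import numpy as np

def weighted_l21(w, groups, beta):
    """Weighted l_{2,1} norm: sum over groups of beta_l * ||w_{G_l}||_2.

    w: 1-D coefficient vector; groups: list of index lists partitioning w;
    beta: one positive weight per group.
    """
    return sum(b * np.linalg.norm(w[np.asarray(g)]) for g, b in zip(groups, beta))
```

For example, with **w** = (3, 4, 0, 0), groups {0, 1} and {2, 3}, and weights (1, 2), the norm is 1·5 + 2·0 = 5, so the all-zero second group contributes nothing regardless of its weight.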

#### 4.2.3. Euclidean projections

The Euclidean projection of any **f̃** ∉ Π onto Π is given component-wise by *P*_Π(**f̃**) = **f̃** + **H**^T(**HH**^T)^{−1}(**g** − **Hf̃**), which reduces to a per-pixel correction because **HH**^T is diagonal for the CACTI forward matrix. The Euclidean projection of any **f** ∉ Λ(*C*) onto Λ(*C*) is a group-wise shrinkage of the coefficient subvectors **w**_{G_l} in the standard Euclidean (*ℓ*_2) norm [21]. We are only interested in *P*_{Λ(C)}(**f**) when *C* takes the special values considered below.
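Because each **H**_k is diagonal, **HH**^T = Σ_k **H**_k^2 is a diagonal matrix, so the projection onto Π needs only a per-pixel division. A sketch, assuming every detector pixel is covered by at least one open code element over the exposure (so the diagonal is invertible):

```python
import numpy as np

def project_onto_pi(f, H, g):
    """Euclidean projection of f onto {f : H f = g}, valid when H H^T is diagonal."""
    hht = np.sum(H * H, axis=1)              # diagonal entries of H H^T
    return f + H.T @ ((g - H @ f) / hht)     # per-pixel residual correction
```

After projection, the estimate reproduces the snapshot exactly: **H** *P*_Π(**f**) = **g**.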

#### 4.2.4. Alternating projection between Π and Λ(*C*) with a systematically changing *C*

GAP alternates Euclidean projections between Π and a weighted *ℓ*_{2,1} ball that undergoes a systematic change in size. Let the projections on Π be denoted by {**f**^{(t)}} and the projections on Λ(*C*^{(t)}) be denoted by {**f̃**^{(t)}}. The GAP algorithm starts with **f̃**^{(0)} = **0** (corresponding to *C*^{(0)} = 0) and iterates between the following two steps until ||**f**^{(t)} − **f̃**^{(t)}||^2 converges in *t*.

**Projection on Π.** Obtain **f**^{(t)} as the Euclidean projection of **f̃**^{(t−1)} onto Π.

**Projection on the weighted *ℓ*_{2,1} ball of changing size.** The projection *θ*^{(t)} of the coefficients **w**^{(t)} = Ψ(**f**^{(t)}) is given component-wise by a group-wise shrinkage [21], in which (*l*_1, *l*_2, ⋯, *l_m*) is a permutation of (1, 2, ⋯, *m*) such that ||**w**^{(t)}_{G_{l_q}}||_2/*β*_{l_q} ≥ ||**w**^{(t)}_{G_{l_{q+1}}}||_2/*β*_{l_{q+1}} holds for any *q* ≤ *m* − 1, and *m*^★ = min{*z* : …} gives the number of groups retained after shrinkage [21]. It is not difficult to verify that the ball size *C*^{(t)} used to derive the solution *θ*^{(t)} depends on the most recent projection on Π.
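The alternation can be sketched end-to-end. This simplified loop uses the identity transform, one group per coefficient, and a heuristic shrinkage threshold in place of the exact *C*^{(t)} schedule of [21], so it illustrates the structure of GAP rather than reproducing it:

```python
import numpy as np

def gap_sketch(H, g, n_iters=50):
    """Alternate per-pixel projection onto Pi with soft-threshold shrinkage."""
    hht = np.sum(H * H, axis=1)                  # diagonal of H H^T (CACTI structure)
    theta = np.zeros(H.shape[1])                 # theta^(0) = 0, i.e. C^(0) = 0
    for _ in range(n_iters):
        # Projection on Pi: data-consistent update of the shrunk estimate
        f = theta + H.T @ ((g - H @ theta) / hht)
        # Shrinkage toward sparsity (heuristic stand-in for the ball projection)
        lam = 0.1 * np.max(np.abs(f))
        theta = np.sign(f) * np.maximum(np.abs(f) - lam, 0.0)
    return f
```

By construction the returned iterate lies on Π, so it reproduces the measurements exactly while the shrinkage steps bias it toward a sparse solution.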

## 5. Results

We compare reconstructions with *N_F* = *C* = 14 and *N_F* = 148. There is little aesthetic difference between these reconstructions. In some cases, as with the lens-and-hand reconstruction, *N_F* = 148 appears to yield additional temporal information over *N_F* = 14. The upper-left images depict the sum of the reconstructed frames, showing the expected time-integrated snapshots acquired with a 30 fps video camera lacking spatiotemporal image plane modulation.

_{F}*C*is 14 rather than 16 because the triangle wave’s peak and trough (

**T**

*s*

_{1}and

**T**

*s*

_{16}) are not accurately characterized by linear motion due to the mechanical deceleration time and were hence not placed into

**H**to reduce model error.

Small features are sensed more completely when the mask moves *C* pixels during the exposure (Figs. 14(b) and 14(d)) than when the mask is held stationary (Figs. 14(a) and 14(c)), thereby improving the reconstruction quality for detailed scenes. The stationary binary coded aperture may completely block small features, rendering the reconstruction difficult and artifact-ridden.

Modulating the image plane *C* times during Δ_t in hardware is otherwise only possible, with adequate fill-factor, by employing a reflective LCoS device to address each pixel *C* times during the integration period. To compare the reconstruction fidelity of the low-bandwidth CACTI transmission function (Eq. (7)) with this modulation strategy, we present simulated PSNR values of videos in Figs. 15(a), 15(b), and 15(c) (see Media 9–Media 11 for the complete videos). For these simulations, reconstructed experimental frames at *N_F* = 14 were summed to emulate a time-integrated image. The high-speed reconstructed frames were used as ground truth. We reapply: 1) the actual mask pattern; 2) a simulated CACTI mask moving with motion *s_k*; and 3) a simulated, re-randomized coding pattern to each high-speed frame used as ground truth. The reconstruction performance difference between translating the same code and re-randomizing the code for each of the *N_F* reconstructed frames is typically within 1 dB.

_{F}*T*produce reconstruction artifacts arising from structured aliasing. In this unlikely case, the shifting mask compression strategy yields poorer spatial reconstruction quality than that obtainable by re-randomizing the mask at each temporal channel of interest. This result may be overcome in future implementations consisting of two-dimensional or adaptive mask motion.

## 6. Discussion and conclusion

Choosing the compression ratio requires considering the information to be *reconstructed* in addition to that transmitted to sufficiently represent the optical datastream. Future work will adapt the compression ratio *C* such that the resulting reconstructed video requires the fewest computations to depict the motion of the scene with high quality.

Scaling CACTI to larger *N* only requires a larger mask and a greater detector sensing area, making it a viable choice for large-scale compressive video implementations. As *N* increases, LCoS-driven temporal compression strategies must modulate *N* pixels *C* times per integration. Conversely, translating a passive transmissive element attains *C*-times finer temporal resolution without utilizing any additional bandwidth relative to conventional low-framerate capture.

Combining the shifting-mask coding strategy with dispersive spectral coding of the sort employed in [20] may enable compressive capture of the four-dimensional datacube **f**(*x*, *y*, *λ*, *t*).


**OCIS Codes**

(100.3010) Image processing : Image reconstruction techniques

(110.1758) Imaging systems : Computational imaging

(110.6915) Imaging systems : Time imaging

**ToC Category:**

Image Processing

**History**

Original Manuscript: January 25, 2013

Revised Manuscript: March 26, 2013

Manuscript Accepted: March 27, 2013

Published: April 23, 2013

**Virtual Issues**

April 26, 2013 *Spotlight on Optics*

**Citation**

Patrick Llull, Xuejun Liao, Xin Yuan, Jianbo Yang, David Kittle, Lawrence Carin, Guillermo Sapiro, and David J. Brady, "Coded aperture compressive temporal imaging," Opt. Express **21**, 10526-10545 (2013)

http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-21-9-10526


### References

- D. J. Brady, M. E. Gehm, R. A. Stack, D. L. Marks, D. S. Kittle, D. R. Golish, E. M. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature **486**(7403), 386–389 (2012). [CrossRef] [PubMed]
- S. Kleinfelder, S. H. Lim, X. Liu, and A. El Gamal, “A 10000 frames/s CMOS digital pixel sensor,” IEEE J. Solid-St. Circ. **36**(12), 2049–2059 (2001). [CrossRef]
- R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Computer Science Technical Report CSTR 2 (2005).
- D. J. Brady, *Optical Imaging and Spectroscopy* (Wiley-Interscience, 2009). [CrossRef]
- D. J. Brady, M. Feldman, N. Pitsianis, J. P. Guo, A. Portnoy, and M. Fiddy, “Compressive optical MONTAGE photography,” Photonic Devices and Algorithms for Computing VII **5907**(1), 590708 (2005). [CrossRef]
- M. Shankar, N. P. Pitsianis, and D. J. Brady, “Compressive video sensors using multichannel imagers,” Appl. Opt. **49**(10), B9–B17 (2010). [CrossRef]
- Y. Hitomi, J. Gu, M. Gupta, T. Mitsunaga, and S. K. Nayar, “Video from a single coded exposure photograph using a learned over-complete dictionary,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2011), pp. 287–294.
- D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk, “A new compressive imaging camera architecture using optical-domain compression,” Proc. SPIE **6065**, 606509 (2006). [CrossRef]
- M. Wakin, J. Laska, M. Duarte, D. Baron, S. Sarvotham, D. Takhar, K. F. Kelly, and R. G. Baraniuk, “Compressive imaging for video representation and coding,” in Proceedings of Picture Coding Symposium (2006).
- Y. Oike and A. El Gamal, “A 256×256 CMOS image sensor with ΔΣ-based single-shot compressed sensing,” in Proceedings of IEEE International Solid-State Circuits Conference Digest of Technical Papers (IEEE, 2012), pp. 386–388.
- M. Zhang and A. Bermak, “CMOS image sensor with on-chip image compression: A review and performance analysis,” J. Sens. **2010**, 1–17 (2010). [CrossRef]
- A. Fish and O. Yadid-Pecht, “Low Power CMOS Imager Circuits,” in *Circuits at the Nanoscale: Communications, Imaging, and Sensing*, K. Iniewski, ed. (CRC Press, Inc., 2008), pp. 457–484. [CrossRef]
- V. Treeaporn, A. Ashok, and M. A. Neifeld, “Space–time compressive imaging,” Appl. Opt. **51**(4), A67–A79 (2012). [CrossRef]
- M. A. Neifeld and P. Shankar, “Feature-specific imaging,” Appl. Opt. **42**(17), 3379–3389 (2003). [CrossRef]
- E. J. Candès and T. Tao, “Reflections on compressed sensing,” IEEE Information Theory Society Newsletter **58**(4), 20–23 (2008).
- D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory **52**(4), 1289–1306 (2006). [CrossRef]
- R. Raskar, A. Agrawal, and J. Tumblin, “Coded exposure photography: motion deblurring using fluttered shutter,” ACM Transactions on Graphics **25**(3), 795–804 (2006). [CrossRef]
- D. Reddy, A. Veeraraghavan, and R. Chellappa, “P2C2: Programmable pixel compressive camera for high speed imaging,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2011), pp. 329–336.
- A. C. Sankaranarayanan, C. Studer, and R. G. Baraniuk, “CS-MUVI: Video compressive sensing for spatial-multiplexing cameras,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2012), pp. 1–10.
- M. E. Gehm, R. John, D. J. Brady, R. M. Willett, and T. J. Schulz, “Single-shot compressive spectral imaging with a dual-disperser architecture,” Opt. Express **15**(21), 14013–14027 (2007). [CrossRef] [PubMed]
- X. Liao, H. Li, and L. Carin, “Generalized alternating projection for weighted-ℓ_{2,1} minimization with applications to model-based compressive sensing,” SIAM Journal on Imaging Sciences (to be published).
- J. M. Bioucas-Dias and M. A. T. Figueiredo, “A new TwIST: Two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Transactions on Image Processing **16**(12), 2992–3014 (2007). [CrossRef] [PubMed]



### Supplementary Material

» Media 1: AVI (156 KB)

» Media 2: AVI (1590 KB)

» Media 3: AVI (359 KB)

» Media 4: AVI (4299 KB)

» Media 5: AVI (418 KB)

» Media 6: AVI (4215 KB)

» Media 7: AVI (330 KB)

» Media 8: AVI (3562 KB)

» Media 9: AVI (3964 KB)

» Media 10: AVI (4078 KB)

» Media 11: AVI (4253 KB)

» Media 12: AVI (4312 KB)

» Media 13: AVI (3944 KB)

» Media 14: AVI (1528 KB)

» Media 15: AVI (1658 KB)

