Optics Express

  • Editor: Michael Duncan
  • Vol. 11, Iss. 18 — Sep. 8, 2003
  • pp: 2142–2152
Coded apertures for efficient pyroelectric motion tracking

U. Gopinathan, D. J. Brady, and N. P. Pitsianis  »View Author Affiliations


Optics Express, Vol. 11, Issue 18, pp. 2142-2152 (2003)
http://dx.doi.org/10.1364/OE.11.002142



Abstract

Coded apertures may be designed to modulate the visibility between source and measurement spaces such that the position of a source among N resolution cells may be discriminated using a number of measurements logarithmic in N. We use coded apertures as reference structures in a pyroelectric motion tracking system. The sensor system is capable of detecting source motion in one of 15 cells uniformly distributed over a 1.6 m × 1.6 m domain using 4 pyroelectric detectors.

© 2003 Optical Society of America

1. Introduction

Sensors for motion tracking applications have been an active area of research and development in the recent past [1–5]. Motion tracking systems may use image sensors [1] or infrared point detectors [2–5]. Video systems are capable of advanced positioning and control but are also associated with high data loads and computational costs. Tracking systems based on infrared motion sensors, on the other hand, achieve low data loads but in general have poor spatial resolution. Design for these motion sensors has focused on shaping the sensor receiver pattern using Fresnel lens arrays [2, 3] and conical mirrors [4].

In the present work, we outline an efficient sensing method and implement a motion sensor based on this method. We define the sensor efficiency as

$\eta = \frac{\log(N)}{m}$
(1)

where N is the number of distinguishable source states and m is the number of measurements required to estimate the source state. We use an optical element, called a reference structure, between the source space and the measurement space. The mapping is chosen to reveal the source state directly, so that source state reconstruction is not computationally intensive. This design approach can discriminate 2^m − 1 source states using m measurements. The motion sensing system we implemented can sense and localize the presence or absence of a source in one of 15 cells, as a single source moves in an area of 1.6 m × 1.6 m, using four detector measurements.
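As a concrete check of Eq. (1), the sketch below (a hypothetical helper, not part of the original system) computes the efficiency for the implemented sensor. The base-2 logarithm is assumed here, consistent with the 2^m − 1 cells obtainable from m binary measurements.

```python
import math

def efficiency(n_states: int, m_measurements: int) -> float:
    """Sensor efficiency eta = log2(N) / m from Eq. (1).

    Base 2 is assumed because m binary measurements can distinguish
    at most 2**m - 1 non-empty source states.
    """
    return math.log2(n_states) / m_measurements

m = 4            # number of pyroelectric detectors
n = 2**m - 1     # distinguishable cells
print(n, round(efficiency(n, m), 3))  # prints: 15 0.977
```

With four detectors the efficiency is close to 1, the maximum for this definition.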

2. System Model

A schematic of the system model is shown in Fig. 1. Let s(r, t) describe the source as a function of position r and time t. In the present work, r is assumed to lie in a two-dimensional plane. In the following discussion, source space refers to the two-dimensional plane in which the source moves, and measurement space refers to the discrete set of points at which measurements are made. A reference structure modulates the visibility of the source space. The visibility of a point r in source space to a point r_i in measurement space is defined by a visibility function v(r_i, r), denoted v_i(r). The visibility function, implemented by the reference structure, is binary valued: 1 if r_i is visible to r and 0 otherwise. The reference structure maps the source motion through the spatio-temporal grid (r, t) to a temporal signal at a point r_i in measurement space

$s_i(t) = \int v(r_i, r)\, s(r, t)\, dr, \qquad i \in \{1, 2, \ldots, m\}.$
(2)

The response m_i(t) of the detector at the point r_i to the stimulus s_i(t), assuming the detector is linear and time-invariant, is given by

$m_i(t) = \int h(t - t')\, s_i(t')\, dt', \qquad i \in \{1, 2, \ldots, m\},$
(3)
Fig. 1. Schematic representation of the system model

where h(t) is the impulse response of the detector. A discrete number of measurements is made in the measurement space. In this work, we implement an efficient sensing method to discriminate 2^m − 1 source states by performing m measurements. For this case the measurement space consists of m discrete measurement points. Every source point r is associated with a binary signature vector χ(r) ∈ {0, 1}^m, whose ith element is v_i(r). The reference structure is designed so that the source space is partitioned into 2^m − 1 regions, henceforth referred to as cells. A cell is the collection of all points with the same non-zero signature vector. Let χ_j denote the non-zero signature vector of the points in the jth cell.

The source parameter s(r, t) at a given time instant t is assumed to have a non-zero value in only one of the 2^m − 1 cells: if s(r, t) = 1 for r ∈ A_j, then s(r, t) = 0 for r ∈ A \ A_j. Here, A denotes the set of all points in source space, A_j denotes the set of all points in the jth cell, and the index j ranges from 1 to 2^m − 1. The source state can be represented as a vector s, whose jth element s_j is 1 when the source is present in cell A_j. The measured signal m_i(t) in Eq. (3) is transformed to an event signal $\tilde{m}_i(t)$ by a transformation f

$\tilde{m}_i(t) = f\{m_i(t)\}.$
(4)

The index i ranges from 1 to m. At a given time instant t, one can define a binary state vector M(t) ∈ {0, 1}^m, the ith element being $\tilde{m}_i(t)$. Under the source constraint above, there is a one-to-one mapping between the 2^m − 1 state vectors and the 2^m − 1 source states. At a given time instant t, one can estimate the source state vector s as follows

$s_j = 1 \quad \text{if} \quad M = \chi_j.$
(5)

In the above equation, χj denotes the signature vector of the points in the jth cell.
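The decoding rule of Eq. (5) is a direct table lookup. The sketch below (hypothetical code, not from the paper) matches a measured state vector against the signature vectors, using a small 2-detector example with 2^2 − 1 = 3 cells:

```python
def decode_state(M, signatures):
    """Return the cell index j whose signature chi_j equals the
    state vector M (Eq. 5), or None when no cell matches."""
    for j, chi in enumerate(signatures):
        if M == chi:
            return j
    return None

# Hypothetical 2-detector layout: the three non-zero signatures.
signatures = [(0, 1), (1, 0), (1, 1)]
print(decode_state((1, 1), signatures))  # prints: 2
```

Because every cell has a distinct non-zero signature, the lookup is unambiguous whenever exactly one cell is occupied.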

Fig. 2. Visibility map for each of the four points in the measurement space. The colored area has a visibility of 1; the other regions have a visibility of 0.

The motion tracking system discussed in this paper localizes the source motion to one of 2^m − 1 cells within a time interval δt as the source moves through the spatio-temporal grid (r, t). The spatial resolution is determined by the size of the cell, which in turn is determined by the reference structure. The temporal resolution δt is limited by the response time of the sensor used for the measurement.

3. System design and implementation

Based on the above model, we implemented a sensor system that can localize source motion to one of 15 cells using four detector measurements. In the following sections, we discuss in greater detail the design aspects of such a system.

3.1. Design of Reference Structure

Table 1. The map showing signature vectors of different cells


3.2. System implementation

The visibility mapping of Fig. 2 is implemented using four binary masks. Each of the four detectors sees the source space through only one of the masks. The masks have regions transparent and opaque to heat radiation at 10–12 µm. The masks are designed in CAD software using code automatically generated from our simulator and are printed on a 3D stereolithography printer. The transparent regions are squares of dimension 2 mm × 2 mm and are positioned to implement the map shown in Table 1. The distance between the detector plane and a binary mask is 2 cm, and the distance between the detector plane and the source plane is 200 cm. Each square region in the mask maps to a region of visibility 1 (colored region in Fig. 2). The four detectors, each with a sensing area of 2 mm × 2 mm, are positioned on the vertices of a square of side 2 cm. The visibility mapping covers an area of 1.6 m × 1.6 m in the source plane. Each cell has an area of 40 cm × 40 cm. The colored square within a cell is of size 20 cm × 20 cm.
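The mask geometry above follows from similar triangles: a hole of width d at distance z_mask from a detector point projects a visibility-1 region of width roughly d · z_source / z_mask onto the source plane. A small sketch (hypothetical helper name) checks the quoted dimensions:

```python
def projected_width_cm(hole_mm: float, mask_cm: float, source_cm: float) -> float:
    """Width (cm) of the visibility-1 region that a mask hole projects
    onto the source plane, by similar triangles from the detector point."""
    return (hole_mm / 10.0) * (source_cm / mask_cm)

# 2 mm hole, mask 2 cm from the detector, source plane 200 cm away:
print(projected_width_cm(2.0, 2.0, 200.0))  # prints: 20.0
```

The result reproduces the stated 20 cm × 20 cm colored square within each 40 cm × 40 cm cell.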

3.3. Pyroelectric detector as sensing element

A single-element pyroelectric detector is used as the sensing element in this work. Pyroelectric detectors are heat-flow sensors and generate an electrical signal proportional to the change in heat flux falling on them [14]. The source is described by the distribution of the flux ϕ(r, t) it radiates (given by the Stefan–Boltzmann law) as it moves through the spatio-temporal grid (r, t). The response of a single-element pyroelectric detector at the point r_i to an impulse at a point (r, t) in the spatio-temporal grid can be described as follows

$h(\theta, t) = f(\theta)\, g(t).$
(6)

The impulse response of the detector is weighted by a factor f(θ), where θ is the angle between the unit vector $(r - r_i)/\|r - r_i\|$ and the normal to the sensor. The factor f(θ) represents the angular response pattern of the detector: it sets the field of view, restricting the part of the source space the detector can see and thereby modulating the visibility of the source space. f(θ) for the detector used in this work is plotted in Fig. 3. The response of the sensor to a step input can be approximated by

$\tilde{S} = \tilde{S}_0 \left(1 - e^{-t/\tau_e}\right) e^{-t/\tau_t}$
(7)
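Equation (7) is the product of an exponential rise and an exponential decay. The sketch below evaluates it numerically; the time constants τ_e and τ_t are illustrative values chosen here, not values reported in the paper:

```python
import math

def step_response(t: float, s0: float = 1.0,
                  tau_e: float = 0.05, tau_t: float = 0.5) -> float:
    """Eq. (7): S(t) = S0 * (1 - exp(-t/tau_e)) * exp(-t/tau_t).

    tau_e and tau_t are assumed illustrative time constants."""
    return s0 * (1.0 - math.exp(-t / tau_e)) * math.exp(-t / tau_t)

# The response starts at zero, rises on the fast time scale tau_e,
# then decays back toward zero on the slower time scale tau_t.
samples = [round(step_response(0.1 * k), 3) for k in range(6)]
```

This rise-then-decay shape is why a moving source produces a transient pulse rather than a sustained level at each detector.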

Fig. 3. The angular response pattern of the detector f(θ) used in this work as a function of θ

3.4. Pyroelectric detector measurements

We used a pyroelectric detector with two pyroelectric sensing elements. One of the elements has an IR filter for the spectral range 10–12 µm. The other element is used to compensate for rapid thermal changes and mechanical stresses resulting from acoustic noise and vibrations. The sensing element has an area of 2 mm × 2 mm. Measurements were made to determine the response pattern f(θ) for two different masks. The first mask has a single square hole of size 2 mm × 2 mm, and the second mask has two such holes separated by 4 mm. A source (a filament) was moved along a radial arc around the detector. At each position of the source, a step in time was input to the detector by opening a shutter mounted between the source and the detector; when closed, the shutter blocks the radiation from the source. The distance between the detector and the source was 40 cm. The step response of the detector was measured for different values of θ, the angle made by the source with respect to the detector surface normal, by moving the source along the radial arc. For each θ, the peak value of the step response, normalized with respect to the value for the source position along the detector surface normal, is plotted in Fig. 4(a) and represents the response pattern of the detector with the mask. A similar measurement was made for the detector without the mask and is shown as a polar plot in Fig. 3. Note that the mask modulates the response pattern of the detector and thereby the visibility between the source space and the sensor space. The angular width of the lobe is 5 degrees. This mask would project a region of visibility 1 of size 20 cm × 20 cm on a plane 2 m below. The detector response when the source moves at an angular velocity of 8 degrees per second is shown in Fig. 4(b). The pyroelectric sensor converts the heat flow in the sensing element into electric charge.
One face of the sensing element is exposed to the heat flux from the source, whereas the opposite face is toward the detector's internal housing, which is at ambient temperature. As the source moves into the field of view of the detector, the heat flux falling on the sensing element increases, producing a heat flow. In response to this heat flow, electric charge builds up on the sensing element by virtue of the pyroelectric effect. The charge drives an electric current, which is converted to a voltage signal by a current-to-voltage converter. As the source moves out of the field of view of the detector, the thermal gradient and the heat flow in the sensing element reverse direction, reversing the polarity of the charge and of the output voltage. Figure 4(c) shows the response pattern of the detector for a mask with two holes, and Fig. 4(d) shows the corresponding detector response when a source moves with an angular velocity of 8 degrees per second. The angular separation of the two lobes in Fig. 4(c) is 10 degrees. This mask would project two visibility-1 regions separated by 40 cm on a plane 2 m away.

Fig. 4. The response pattern f(θ) of the detector as a function of θ for a mask with a single square hole of size 2 mm × 2 mm is shown in (a), and for a mask with two 2 mm × 2 mm holes separated by 4 mm in (c). The corresponding sensor responses when a source moves at an angular velocity of 8 degrees per second at a distance of 40 cm are shown in (b) and (d).

4. Experimental results

The motion sensor was mounted on the ceiling of a room. The source space was an area of 1.6 m × 1.6 m at a distance of 2 m from the sensor. In the first set of experiments, the source was a hot object (temperature 320 K) mounted on a robot. The robot moved through the source space at a constant velocity; its motion was controlled remotely, and there was no human intervention in the experimental space during the experiment. Figure 5 shows the outputs from the four detectors obtained when the source moved at a velocity of 32 cm/s through cells with signature vectors [0 0 1 0], [1 0 1 1], [1 1 0 1], and [1 0 0 1]. The detector output was amplified by a factor of 40000. The detector signal is converted to an event signal by a thresholding operation: if the signal exceeds the threshold, the event value is 1; otherwise it is 0. The threshold is chosen to be above the noise level of the detector signal. The signal-to-noise ratio (SNR), measured as the ratio of the peak value of the signal to the peak value of the noise, is 15; the threshold was chosen as 0.5 times the maximum value of the signal. The event signal corresponding to the detector signal is shown in Fig. 6. The state vector M at a given time instant consists of the 4 values from the 4 event signals. The state vector is compared with each of the 15 signature vectors in Table 1; a match indicates source motion in the corresponding cell. Figure 7 shows the state vector at different time instants as the source moves through the source space. The source position at the time instants 2.6, 4, 5.3, and 6.7 seconds is shown in Fig. 8. Timing errors between the detector signal responses, due to mismatches in the visibility mappings of the different detectors, result in erroneous state vectors at certain time instants. These errors can be largely eliminated by following two rules for state vectors and transitions between them. The first rule is to eliminate transitions that happen in time intervals short compared to the average time interval corresponding to source motion within a cell. The second rule is to allow transitions only between state vectors that correspond to signature vectors of adjacent cells.
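The thresholding step and the two transition rules can be sketched as follows (hypothetical helper names and a toy one-dimensional adjacency map; the paper's actual cell adjacency follows Table 1):

```python
from itertools import groupby

def to_event(signal, threshold):
    """Threshold a detector trace into a binary event signal."""
    return [1 if abs(v) > threshold else 0 for v in signal]

def filter_track(cells, adjacency, min_dwell):
    """Apply the two rules: drop dwell runs shorter than min_dwell
    samples (rule 1) and reject jumps to non-adjacent cells (rule 2)."""
    track = []
    for cell, group in groupby(cells):
        length = len(list(group))
        if length < min_dwell:        # rule 1: transition too brief
            continue
        if track and cell != track[-1] and cell not in adjacency.get(track[-1], ()):
            continue                  # rule 2: non-adjacent jump
        if not track or cell != track[-1]:
            track.append(cell)
    return track

events = to_event([0.1, 0.9, -0.8, 0.2], threshold=0.5)            # [0, 1, 1, 0]
toy_adjacency = {1: {2}, 2: {1, 3}, 3: {2}}
path = filter_track([1, 1, 1, 9, 2, 2, 2, 3, 3], toy_adjacency, min_dwell=2)
# path == [1, 2, 3]: the one-sample glitch "9" is discarded
```

Rule 1 rejects the single-sample glitch, and rule 2 would also reject it as a non-adjacent jump, so the two rules reinforce each other.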

Fig. 5. The response of the four detectors shown in a single plot. The source is a hot object moving at a velocity of 32 cm/s at a distance of 1 m from the sensor. The source moves through cells with signature vectors [0 0 1 0], [1 0 1 1], [1 1 0 1] and [1 0 0 1].
Fig. 6. The event signals of the four detectors derived from the detector signals shown in Fig. 5.
Fig. 7. Plot showing the state vectors at different time instants as the source moves through cells with signature vectors [0 0 1 0], [1 0 1 1], [1 1 0 1] and [1 0 0 1].
Fig. 8. Plot showing the source position at the time instants 2.6, 4, 5.3 and 6.7 seconds. The source is a robot carrying a hot object moving at a velocity of 32 cm/s.

Figure 9 shows the source position at time instants 2.6, 3.8, and 5 seconds when the hot object moves with a velocity of 32 cm/s through cells with signature vectors [1 0 0 0], [0 1 1 1], and [1 0 0 1].

Fig. 9. Plot showing the source position at the time instants 2.6, 3.8, 5 seconds as the source moves through cells with signature vectors [1 0 0 0], [0 1 1 1] and [1 0 0 1]. The source is a robot carrying a hot object.
Fig. 10. Plot showing the source position at the time instants 1.8, 2.6 and 3.4 seconds as a human walks through cells with signature vectors [0 0 0 1], [1 1 0 0] and [1 0 1 0].

4.1. Human motion detection

We tested the capability of the system to detect human motion. Figure 10 shows the source position at time instants 1.8, 2.6, and 3.4 seconds as a person walked through the cells with signature vectors [0 0 0 1], [1 1 0 0], and [1 0 1 0].

The temporal resolution δt of the system is limited by the response time of the detector used; δt was roughly 0.5 seconds for our system. For real-time motion tracking, the processing is done on a window of streaming data. The window size should be larger than δt; the window size for the experiments shown here was 10 s.

5. Conclusion

In conclusion, we outlined an efficient sensing method to discriminate a single source in one of N locations using a number of measurements logarithmic in N. Based on this method, we implemented a motion tracking system capable of detecting human motion in one of 15 cells in an area of 1.6 m × 1.6 m using 4 pyroelectric detectors. An optical element, the reference structure, implements a mapping from the source state to the measurement space. The mapping is chosen so that source state reconstruction is not computationally intensive. We presented results from experiments conducted to test the system.

The source space is assumed to be two-dimensional in this work but could be extended to three dimensions, which would be interesting in applications requiring identification of features of objects in addition to tracking them. Describing the visibility mapping between the source space and the measurement space in terms of boundaries rather than regions may also be an interesting approach for motion sensing applications. These issues need to be explored further, and research in these directions is in progress.

Although it is possible to design an equally efficient sensor system to detect multiple simultaneous sources, this is not required if the sensor sampling rate is high enough to preclude the sensing of sources in simultaneous motion. With an inadequate sensor sampling rate, concurrent motion on a single-sensor system results in the appearance of a ghost: the erroneous reporting of the sum of the location indices where the concurrent motion takes place. Such sensing errors might be corrected by temporal sampling and the principle of continuity of motion. By maintaining an active list of source coordinates in the domain, the source state is updated only when the resulting motion is adjacent to an existing location or to a boundary location where a new entry is permitted. One may also seek to increase the number of spatial samples and to encode signatures for a multiple-source response, designing for space-time efficiency rather than space-state efficiency. Optimal designs for these goals are currently unknown; we leave their analysis to future reports.
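The continuity-of-motion update described above can be sketched as a simple acceptance test (hypothetical code with a toy adjacency map, not the paper's implementation):

```python
def accept_update(candidate, active, adjacency, boundary):
    """Accept a newly decoded cell only if it is adjacent to a tracked
    source location, or is a boundary cell where a new source may enter.
    Non-adjacent reports are treated as likely ghosts and rejected."""
    return (candidate in boundary
            or any(candidate in adjacency.get(a, ()) for a in active))

toy_adjacency = {1: {2}, 2: {1, 3}, 3: {2}}
print(accept_update(3, active={2}, adjacency=toy_adjacency, boundary={1}))    # True
print(accept_update(3, active={1}, adjacency=toy_adjacency, boundary=set()))  # False
```

The second call is rejected because cell 3 is neither adjacent to the tracked cell 1 nor a permitted entry point, which is exactly how a ghost report would be filtered.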

Acknowledgments

The authors wish to thank Mohan Shankar and Andy Portnoy for their help in some of the experiments. This work was supported by the Defense Advanced Research Projects Agency through the grant DAAD 19-01-1-0641.

References and links

1. A. Moini, A. Bouzerdoum, K. Eshraghian, A. Yakovleff, X. T. Guyen, A. Blanskby, R. Beare, D. Abbott, and R. E. Bogner, “An insect vision-based motion detection chip,” IEEE J. Solid-State Circuits 32, 279–284 (1997). [CrossRef]
2. J. R. Baldwin, “Cross-over field-of-view composite Fresnel lens for an infrared detection system,” Hubbell Inc., US Patent 5,442,178 (1995).
3. J. R. Baldwin, “Composite Fresnel lens having array of lens segments providing long narrow detection range,” Hubbell Inc., US Patent 5,877,499 (1999).
4. H. L. Berman, “Infrared intrusion alarm system with temperature responsive threshold level,” Optical Coating Laboratory, Inc., US Patent 4,195,234 (1980).
5. S. D. Feller, E. Cull, D. Kowalski, K. Farlow, J. Burchett, J. Adleman, C. Lin, and D. J. Brady, “Tracking and imaging humans on heterogeneous infrared sensor array for tactical applications,” SPIE AeroSense 2002.
6. W. T. Cathey and E. R. Dowski, “New Paradigm for Imaging Systems,” Appl. Opt. 41, 6080–6092 (2002). [CrossRef] [PubMed]
7. D. J. Brady and Z. U. Rahman, “Integrated analysis and design of analog and digital processing in imaging systems: introduction to the feature issue,” Appl. Opt. 41, 6049 (2002). [CrossRef] [PubMed]
8. D. L. Marks, R. A. Stack, D. J. Brady, D. C. Munson, and R. B. Brady, “Visible cone-beam tomography with a lensless interferometric camera,” Science 284, 2164–2166 (1999). [CrossRef] [PubMed]
9. T. M. Cannon and E. E. Fenimore, “Coded Aperture Imaging - Many holes make light work,” Opt. Eng. 19, 283–289 (1980).
10. G. K. Skinner, “Imaging with Coded-Aperture Masks,” Nucl. Instrum. Methods Phys. Res. A 221, 33–40 (1984). [CrossRef]
11. A. J. Bird and M. R. Merrifield, “X-ray all-sky monitoring and transient detection using a coded sphere telescope,” Astron. Astrophys. Suppl. Ser. 117, 131–136 (1996). [CrossRef]
12. E. E. Fenimore, “Coded aperture imaging: predicted performance of uniform redundant arrays,” Appl. Opt. 17, 3562–3569 (1978). [CrossRef] [PubMed]
13. P. Potuluri, U. Gopinathan, J. R. Adleman, and D. J. Brady, “Lensless sensor system using a reference structure,” Opt. Express 11, 965–974 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-8-965. [CrossRef] [PubMed]
14. J. Fraden, AIP Handbook of Modern Sensors (American Institute of Physics, 1993), Chap. 6.

OCIS Codes
(040.3060) Detectors : Infrared
(110.3080) Imaging systems : Infrared imaging

ToC Category:
Focus Issue: Integrated computational imaging systems

History
Original Manuscript: June 23, 2003
Revised Manuscript: August 1, 2003
Published: September 8, 2003

Citation
U. Gopinathan, D. Brady, and N. Pitsianis, "Coded apertures for efficient pyroelectric motion tracking," Opt. Express 11, 2142-2152 (2003)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-11-18-2142

