Applied Optics

APPLICATIONS-CENTERED RESEARCH IN OPTICS

  • Editor: Joseph N. Mait
  • Vol. 50, Iss. 22 — Aug. 1, 2011
  • pp: D1–D6

Autonomous subpixel satellite track end point determination for space-based images

Lance M. Simms


Applied Optics, Vol. 50, Issue 22, pp. D1-D6 (2011)
http://dx.doi.org/10.1364/AO.50.0000D1


Abstract

An algorithm for determining satellite track end points with subpixel resolution in space-based images is presented. The algorithm allows for significant curvature in the imaged track due to rotation of the spacecraft capturing the image. The motivation behind the subpixel end point determination is first presented, followed by a description of the methodology used. Results from running the algorithm on real ground-based and simulated space-based images are shown to highlight its effectiveness.

© 2011 Optical Society of America

1. Introduction

The issue of autonomously detecting satellite and airplane tracks in images is by no means a new one. For decades, these tracks have been nothing more than a nuisance for astronomers, foreground artifacts to be disposed of during data preprocessing, and several methods for identifying and removing them have been discussed in the literature. For instance, the Recognition by Adaptive Subdivision of Transformation Space algorithm [1] removes satellite streaks directly from images using a geometric approach that assumes the tracks are straight lines, and Storkey et al. [2] use the Random Sampling and Consensus algorithm to allow for postprocessing removal of curved tracks and scratches as well.

While these streaks may be a source of noise in the field of astronomy, for applications such as the Space Surveillance Network (SSN) they are the signal. A track from a satellite or piece of debris, along with time-stamp information, allows the SSN to make an equatorial angles-only determination of its orbit. One can conceive of several ways to obtain the time-stamp information, but the most straightforward approach is to measure the start and end times of an exposure and extract the end points of the imaged track(s).

The precision of such a measurement, of course, depends on how well one can determine the track end points on the detector. As Earl notes, the error in detecting the end points may well dominate the other sources of error in the measurement [3]. Increasing the detector resolution (number of pixels per unit area) to mitigate this error is not a viable option, because doing so decreases the dwell time per pixel of the target, effectively lowering its signal-to-noise ratio (SNR). But even with low-resolution detectors, subpixel information is still available, since the time spent by the satellite "in" the pixel translates to intensity, so all hope is not lost.

In fact, several methods to obtain subpixel track end points are available. For instance, Levesque presents an algorithm for accurate end point detection that has been successfully used on images obtained with the Canadian Automated Small Telescope for Orbital Research system [4]. However, the problem with these methods is that they generally assume that (1) the track is straight and (2) previously obtained orbital information is available to predict the appearance of the streak in the newly acquired image.

The motivation behind the method discussed in this paper is a mission called the Space-based Telescopes for Actionable Refinement of Ephemeris (STARE), for which neither of these assumptions is valid [5]. The purpose of STARE is to refine orbital information for satellites and debris by directly imaging them with CMOS imagers onboard a constellation of cube-satellites (CubeSats). The images acquired by a given sensor will be run through an algorithm in the onboard microprocessor that is tasked with extracting star and track end point coordinates and sending them to the ground (without the accompanying image). Since the attitude of the STARE satellites will not be precisely controlled, the telescopes may be rotating about the pointing axis. Any uncertainty in the initial orbits means the location of the tracks on the imager will not be well known. The STARE algorithm must therefore deliver subpixel end point determination for tracks with arbitrary curvature and location.

It should be emphasized that the algorithm is not concerned with detection of faint streaks, but rather high fidelity end point determination for streaks with ample SNR. Also, to avoid confusion while describing the algorithm in the following Sections, the term satellite will be reserved for the STARE CubeSat. The imaged debris or satellite will be referred to as the target.

2. Curved Target Tracks in STARE Images

During a STARE observation, the satellite will stare at a fixed star background and allow the target to streak across the field of view. Changes in the orientation of the satellite during the observation are unwanted, since they could potentially reduce the dwell time per pixel of the stars and the target (the exception being the case where the motion of the satellite causes inadvertent rate tracking of the target). But rotation of the satellite about the two axes perpendicular to the telescope pointing is of less concern, because it simply adds to the transverse velocity component of the target and causes the stars to streak in a uniform manner across the detector [6]. It will not produce curvature in the streak left by the target.

Rotation about the pointing axis, on the other hand, could potentially induce significant curvature. If the satellite has a rotational velocity θ̇ about the pointing axis, which will be taken as z, and the target has velocity components (v_x, v_y, v_z) and coordinates

x = x_o + v_x t,    y = y_o + v_y t,    z = z_o + v_z t,
(1)

with respect to the satellite center of mass, then the location of the target in the detector coordinate system is given by

x′ = (x_o + v_x t) cos(θ̇t) + (y_o + v_y t) sin(θ̇t),
y′ = −(x_o + v_x t) sin(θ̇t) + (y_o + v_y t) cos(θ̇t),
(2)

where the primes represent the mapping of object space to pixel space and rotation of the satellite about the x and y axes has been folded into the components v_x and v_y.

One can gain an appreciation for the form of Eq. (2) by considering that for the case of x_o = y_o = 0, it is the parametric representation of a spiral. Telescope angular velocities above 0.1°/s are not anticipated, so a spiral pattern should never be observed in STARE images. But θ̇ = 0.1°/s is large enough to make a Hough transform ineffective for basic detection and to create an error as large as two pixels for a track that extends all the way across the image if a global linear fit is used.
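The scale of this effect can be checked numerically. The sketch below (not from the paper) rotates a straight object-frame path into the detector frame as in Eq. (2) and measures how far the imaged track departs from the chord joining its end points; the exposure time, detector geometry, and track placement are assumed values chosen only for illustration.

```python
import math

THETA_DOT = math.radians(0.1)   # rad/s, rotation about the pointing axis
T_EXP = 1.0                     # s, assumed exposure time
X0, Y0 = -512.0, 200.0          # px, assumed track start (object frame)
VX, VY = 1024.0, 0.0            # px/s, assumed target velocity

def detector_coords(t):
    """Map the straight object-frame path into the rotating detector frame."""
    x, y = X0 + VX * t, Y0 + VY * t
    phi = THETA_DOT * t
    return (x * math.cos(phi) + y * math.sin(phi),
            -x * math.sin(phi) + y * math.cos(phi))

def max_chord_deviation(n=200):
    """Maximum perpendicular distance of the imaged track from its chord."""
    (xa, ya), (xb, yb) = detector_coords(0.0), detector_coords(T_EXP)
    length = math.hypot(xb - xa, yb - ya)
    dev = 0.0
    for i in range(n + 1):
        x, y = detector_coords(T_EXP * i / n)
        # point-line distance from (x, y) to the chord through the end points
        d = abs((xb - xa) * (ya - y) - (xa - x) * (yb - ya)) / length
        dev = max(dev, d)
    return dev

if __name__ == "__main__":
    print(f"max deviation from straight chord: {max_chord_deviation():.2f} px")
```

Even for this modest one-second exposure the full-width track bows away from a straight line by a sizable fraction of a pixel, which is why a single global linear fit is inadequate.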

Fortunately, fitting the entire track is not necessary. As long as the parameters θ̇_x, θ̇_y, and θ̇_z are known reasonably well [7], the track end points (x_o, y_o) and (x_f, y_f) are sufficient to refine the orbit of the target. The primary intent of the STARE algorithm is to find these coordinates.

3. STARE End Point Determination Algorithm

The following Subsections follow the numbering in Fig. 1, which gives an overview of the STARE algorithm.

3A. Image Correction

3B. Object Detection

After the image is corrected, it is searched for contiguous sets of pixels that have a value above T. This step is shown in box 2 of Fig. 1. With both real and simulated images, typically T=3.5·RN, where RN is the read noise of the detector, produces good results. If the sky noise or dark current shot noise dominates the read noise, this must be taken into account in setting T.
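The contiguous-pixel search can be sketched as a standard connected-component labeling pass. The code below is an illustrative implementation, not the STARE flight code; the tiny frame and read-noise value are made-up inputs, and 8-connectivity is an assumption.

```python
from collections import deque

def find_objects(image, read_noise, k=3.5):
    """Return lists of (row, col) pixels for each 8-connected group above T = k*RN."""
    threshold = k * read_noise
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    groups = []
    for r in range(rows):
        for c in range(cols):
            if seen[r][c] or image[r][c] <= threshold:
                continue
            # breadth-first flood fill over above-threshold neighbors
            group, queue = [], deque([(r, c)])
            seen[r][c] = True
            while queue:
                pr, pc = queue.popleft()
                group.append((pr, pc))
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        nr, nc = pr + dr, pc + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and not seen[nr][nc]
                                and image[nr][nc] > threshold):
                            seen[nr][nc] = True
                            queue.append((nr, nc))
            groups.append(group)
    return groups

# toy 5x6 frame: a 3-pixel diagonal streak and one isolated hot pixel
frame = [
    [0, 0, 0, 0, 0, 0],
    [0, 40, 0, 0, 0, 0],
    [0, 0, 42, 0, 0, 0],
    [0, 0, 0, 45, 0, 0],
    [0, 0, 0, 0, 0, 38],
]
objects = find_objects(frame, read_noise=5.0)  # T = 17.5 ADU here
```

On the toy frame this yields two groups: the three diagonal pixels (8-connected) and the lone hot pixel.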

Once a contiguous set of pixels has been identified, it is characterized as a star, track, or unknown object (such as a delta ray or Compton-scattered worm) based upon its ellipticity (e) and the number of pixels (N) it contains. These values depend on the optical system and detector used, but for STARE, a cut of e > 0.8 and N > 20 should effectively identify all real tracks. A perfectly straight track should have e = 1; the margin e = 0.8–1.0 allows for curvature and the possibility of overlapping stars or cosmic rays. The chance of a muon hit producing a track longer than 20 pixels is extremely low.
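The paper does not spell out how e is computed; one common choice (assumed here, in the spirit of source-extraction packages) is e = 1 − b/a, with semi-axes a and b taken from the eigenvalues of the intensity-weighted second-moment matrix of the pixel group, so a straight streak gives exactly e = 1 and a round blob gives e ≈ 0.

```python
import math

def ellipticity(pixels):
    """pixels: list of (x, y, intensity); returns e = 1 - b/a in [0, 1]."""
    w = sum(i for _, _, i in pixels)
    xm = sum(x * i for x, _, i in pixels) / w
    ym = sum(y * i for _, y, i in pixels) / w
    # intensity-weighted second central moments
    sxx = sum(i * (x - xm) ** 2 for x, _, i in pixels) / w
    syy = sum(i * (y - ym) ** 2 for _, y, i in pixels) / w
    sxy = sum(i * (x - xm) * (y - ym) for x, y, i in pixels) / w
    # eigenvalues of the 2x2 symmetric moment matrix (closed form)
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    lam_max, lam_min = tr / 2 + disc, tr / 2 - disc
    if lam_max <= 0:
        return 0.0
    return 1.0 - math.sqrt(max(lam_min, 0.0) / lam_max)

# a perfectly straight 21-pixel horizontal streak: e is exactly 1
streak = [(x, 0, 100.0) for x in range(21)]
# a symmetric round blob (star-like): e is 0
blob = [(0, 0, 100.0), (1, 0, 60.0), (-1, 0, 60.0), (0, 1, 60.0), (0, -1, 60.0)]
```

With this definition the e > 0.8 cut cleanly separates the two toy objects above.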

3C. Iterative Local Fitting at Track End Points (Transverse Degree of Freedom)

One might consider using the second derivative as a criterion:

d²y′/dx′² = (ÿ′ẋ′ − ẏ′ẍ′) / ẋ′³,
(3)

where, differentiating Eq. (2) twice,

ẍ′ = −2θ̇[v_x sin(θ̇t) − v_y cos(θ̇t)] − θ̇²[(x_o + v_x t) cos(θ̇t) + (y_o + v_y t) sin(θ̇t)],
ÿ′ = −2θ̇[v_x cos(θ̇t) + v_y sin(θ̇t)] − θ̇²[−(x_o + v_x t) sin(θ̇t) + (y_o + v_y t) cos(θ̇t)]

(note that any change in the angular velocity has been ignored, i.e., θ̈ = 0). But this expression requires accurate knowledge of x_o, y_o, v_x, and v_y, which will not be known.

A solution to the problem is to use an iterative weighted least-squares fit to each track end point until the rms deviation of the distances from the included track pixels to the line is below a certain threshold, σ_Dmax. Starting with all N_pix = N pixels identified in the track, a line is fit using the expressions

m = (N_pix ΣIxy − ΣIx ΣIy) / (N_pix ΣIx² − (ΣIx)²),
b = (ΣIx² ΣIy − ΣIx ΣIxy) / (N_pix ΣIx² − (ΣIx)²),
(4)

where I is the pixel intensity, the sums run over the N_pix pixels, and the indices on x, y, and I have been left out for notational convenience. Then, the distance of the track points to the line is calculated using

D = I(mx − y + b) / (I_max √(m² + 1)),
(5)

where I_max is the maximum pixel intensity of the N_pix pixels used in the fit. If the rms of this value, σ_D, is below the threshold σ_Dmax, the fit is considered valid. If not, n pixels are removed from the end of the track opposite the one being fit and the above procedure is repeated. Thus, at the jth iteration, the track end is fit with N_pix = N − n·j pixels. A minimum number of pixels to be used in the fit, N_pix = N_min, is also incorporated, with its value depending on the maximum curvature expected.

The threshold σ_Dmax and whether intensity weighting is used in Eq. (5) will depend on the potential curvature and the actual PSF of the system. Figure 2 shows results for a simulated track where θ̇ = 1.0°/s and σ_Dmax = 0.50 was used without weighted fitting. The eventual error in end point estimation was less than 0.1 pixels in both x and y. One can imagine extreme cases in which the target traces out a path perfectly centered over the dividing boundary between two rows of pixels, but this will be a very rare occurrence.

3D. Matched Filter at Track End Points (Longitudinal Degree of Freedom)

Once the track has been fit at each end point, the path the target took along the detector near that point is well approximated. What is left is to determine precisely where the target was along this path at the start (or end) of the exposure (step 4). Simply recording the first or last pixel with a value above T will obviously result in errors. Accurately determining the location of the target requires taking into account the PSF of the optical system and the kernel used in the low pass filter of step 1.

To do this, a region of interest (ROI) spanning R×R pixels around the roughly estimated end point is first considered. An example ROI with R = 7 is shown in Fig. 3a. The goal is to reproduce this ROI with a simulated one obtained by convolving a line segment with a filter that matches the PSF and the kernel described above. The form of the line segment is already known from the fit obtained in step 3; its length will indicate exactly where the end point is located.

After dividing each simulated pixel into r subpixels, a line segment of length L = 1/r is created at the edge of the simulated ROI from which the track emerges. The segment is convolved with the filter to produce a track in the simulated ROI, as shown in Fig. 3b. The simulated ROI is then subtracted from the real one and the residual is squared. The length of the line segment is increased by 1/r, and the process is repeated so that after R·r iterations there is a set of R·r residuals. The minimum of these, as shown in Fig. 3d, indicates where the end point is located.
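A simplified one-dimensional version of this matched-filter search, taken along the already-fitted track direction rather than over the full 2-D ROI, is sketched below. The Gaussian PSF model and the values of R, r, and the "true" length are assumptions for illustration, not STARE parameters.

```python
import math

R, SUBPIX = 7, 10                  # ROI size in pixels, subpixels per pixel (r)
N_SUB = R * SUBPIX
PSF_SIGMA = 1.0 * SUBPIX           # assumed PSF sigma of 1 pixel, subpixel units

def psf_kernel():
    """Normalized 1-D Gaussian kernel truncated at 3 sigma."""
    half = int(3 * PSF_SIGMA)
    k = [math.exp(-0.5 * (i / PSF_SIGMA) ** 2) for i in range(-half, half + 1)]
    s = sum(k)
    return [v / s for v in k]

KERNEL = psf_kernel()
HALF = len(KERNEL) // 2

def simulate_profile(n_lit):
    """Unit-intensity segment of n_lit subpixels from the ROI edge,
    convolved with the PSF and binned into R pixel values."""
    sub = [1.0 if i < n_lit else 0.0 for i in range(N_SUB)]
    conv = []
    for i in range(N_SUB):
        acc = 0.0
        for j in range(-HALF, HALF + 1):
            if 0 <= i + j < N_SUB:
                acc += KERNEL[j + HALF] * sub[i + j]
        conv.append(acc)
    return [sum(conv[p * SUBPIX:(p + 1) * SUBPIX]) for p in range(R)]

def estimate_length(real_profile):
    """Segment length (pixels) whose simulated profile minimizes the residual."""
    best_k, best_res = 1, float("inf")
    for k in range(1, N_SUB + 1):
        sim = simulate_profile(k)
        res = sum((a - b) ** 2 for a, b in zip(real_profile, sim))
        if res < best_res:
            best_k, best_res = k, res
    return best_k / SUBPIX

observed = simulate_profile(43)    # "real" ROI with a true length of 4.3 px
```

Because the forward model is iterated in steps of 1/r = 0.1 pixels, the recovered length (and hence the end point) is resolved at the 0.1 pixel level, consistent with the error floor discussed in Section 4.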

4. Results for Simulated and Real Images

The results from testing the STARE algorithm on real images obtained by ground-based telescopes are encouraging. For these images, a median sky frame and bad pixel map could not be obtained, but subtraction of the mode sufficed for image correction. In Fig. 4, tracks found in three separate Oceanit images are shown after being analyzed by the algorithm. The ends of the green line segment indicate where the extracted end points are located. Although there are no official coordinates for these reported in the Oceanit data, inspection by eye shows that they line up well with the locations expected from the 1.9 pixel FWHM PSF.

Extensive testing on simulated tracks and star fields has also been performed. These tests are especially useful because the measured end point can be compared to the true end point to determine the accuracy of the algorithm as a function of track length, orientation, brightness, etc. To comprehensively measure the error in the estimated end points, a 10 h run was performed in which 400 images were generated and analyzed. Real star fields were sampled, and tracks with random orientation and length were then generated in a number of different brightness intervals. As a proxy for brightness, the quantity of photons per micrometer, which is the x axis of Fig. 5, was used. The reason for this is that a track of a given brightness will produce varying SNRs depending on how it is oriented relative to the detector. For instance, if a track is centered over the boundary between two rows of pixels, it will produce roughly half the SNR it would when centered directly over one of the two rows.

On the y axis of Fig. 5 is the total error in the end point estimate, Err = √(x_err² + y_err²), where x_err and y_err are simply the differences between the real and measured coordinates. The plot shows that at a level of about 600 photons per micrometer, the error approaches a near-constant value of Err = 0.14. This is expected from the choice of r = 10 for the simulated grid, which should produce an error of roughly 0.1 pixels in each coordinate (the step in length at each iteration is L = 0.1 pixels). The value of 600 photons per micrometer corresponds to an SNR in the range of 6–12, depending on the track orientation. One can see that at a value of 250 photons per micrometer, which is roughly an SNR of 2–4, the error is slightly larger. But it is still subpixel and will serve well for the purpose of orbital refinement.

5. Discussion

One may point out that a disadvantage of the technique is that it requires the track to be a contiguous set of pixels. A large fraction of satellites oscillate in brightness as they cross the sky, and their signal may fall below the noise threshold as a result. However, as long as the value of N is set low enough, the only implication is that the end points of a number of subtracks will be reported instead of just two (this would also be the case if the track happens to cross an extensive region of bad pixels). If the target happens to reach a minimum in brightness at the start and end of the exposure, there is no hope of accurately measuring the end points anyway.

Another important point to consider is that, although it will not be available in the STARE mission, a priori knowledge of the target could potentially be used to enhance the performance of the algorithm. A rough estimate of the velocity and position at the exposure start can be used in Eq. (3) to help determine the number of pixels used to fit each end of the track directly or help in determining a value for σDmax. And if the information is accurate enough, it may be possible to generate a matched filter for initially finding the track even for the case of high curvature.

6. Conclusion

An algorithm for determining the end points of satellite and debris tracks in space-based images has been presented. The algorithm is capable of delivering subpixel accuracy even for the case of curved tracks resulting from rotation of the imaging spacecraft. The underlying methodology and motivation for the algorithm have been discussed, and results for both real and simulated data showing high-quality performance have been presented. Results from real data obtained by the STARE satellites will soon follow.

Fig. 1 Flow diagram for the various steps used in the STARE end point detection algorithm. Each of the circled numbers corresponds to one of the Subsections in this Section.
Fig. 2 Example of the local fitting at each end point. The left image shows the track fit in red when all pixels were used, the middle when the left 200 pixels were used, and the right when 170 pixels were used.
Fig. 3 Illustration of the matched filter process. (a) Shows an ROI taken from a corrected raw image. (b) Shows a simulated ROI, where a line segment of length L1 has been convolved with a matched filter to attempt to reproduce the real track in (a). In (c) the length has been extended to L2 as part of the iterative process. And in (d), the entire simulated ROI has been spanned to produce a residual at all R·r grid points. The first and last 10 points appear flat because the edges of the ROI are ignored due to convolution edge effects. The real track length Lreal is evident at the minimum of the residual curve.
Fig. 4 End point determination for satellite track detected in three separate Oceanit images. While precise end point coordinates are not available for comparison, as they are in the simulated images, the reported end points match up well with what we expect based on the PSF of the system.
Fig. 5 Plot showing the total end point error from a run of 400 tracks of random lengths, orientation, and brightness. The y axis shows the total end point error and the x axis shows photons per micrometer, both of which are described in the text. At 250 photons per micrometer, the SNR ranges from 2–4. At 600 photons per micrometer, the SNR ranges roughly from 6–12. These values depend on the orientation of the track relative to pixel boundaries.

OCIS Codes
(100.0100) Image processing : Image processing
(280.0280) Remote sensing and sensors : Remote sensing and sensors
(120.6085) Instrumentation, measurement, and metrology : Space instrumentation

History
Original Manuscript: March 14, 2011
Manuscript Accepted: May 13, 2011
Published: June 29, 2011

Citation
Lance M. Simms, "Autonomous subpixel satellite track end point determination for space-based images," Appl. Opt. 50, D1-D6 (2011)
http://www.opticsinfobase.org/ao/abstract.cfm?URI=ao-50-22-D1


References

  1. H. Ali, C. Lampert, and T. Breuel, “Satellite tracks removal in astronomical images,” in Progress in Pattern Recognition, Image Analysis and Applications, Lecture Notes in Computer Science (Springer, 2006), Vol.  4225, pp. 892–901. [CrossRef]
  2. A. J. Storkey, N. C. Hambly, C. K. I. Williams, and R. G. Mann, “Cleaning sky survey databases using Hough transform and renewal string approaches,” Mon. Not. R. Astron. Soc. 347, 36–51 (2004). [CrossRef]
  3. M. A. Earl, “Determining the range of an artificial satellite using its observed trigonometric parallax,” J. R. Astron. Soc. Can. 99, 50–55 (2005).
  4. M. Levesque, “Automatic reacquisition of satellite positions by detecting their expected streaks in astronomical images,” presented at the Advanced Maui Optical and Space Surveillance Technologies Conference, Maui, Hawaii, 1–4 Sept. 2009.
  5. L. Simms, V. Riot, W. De Vries, S. Olivier, A. Pertica, B. Bauman, D. Phillion, and S. Nikolaev, “Optical payload for the STARE mission,” Proc. SPIE 8044-5, 804406 (2010).
  6. Note that a simplification has been made by approximating the path of the target as a straight line during the exposure, which it is not.
  7. This information will be available from calibration data taken before the observation.
  8. AMS Collaboration, “Protons in near earth orbit,” Phys. Lett. B 472, 215–226 (2000). [CrossRef]
  9. D. Shaw and P. Hodge, “Cosmic ray rejection in STIS CCD images,” Instrument Science Rep. STIS 98-22 (Space Telescope Science Institute, 1998).
  10. While the STARE pathfinder satellites will suffer from this problem, the future STARE constellation CubeSats will carry high quality sensors that will not.
