## Modeling a MEMS deformable mirror using non-parametric estimation techniques

Optics Express, Vol. 18, Issue 20, pp. 21356-21369 (2010)

http://dx.doi.org/10.1364/OE.18.021356


### Abstract

Using non-parametric estimation techniques, we have modeled an area of 126 actuators of a micro-electro-mechanical (MEMS) deformable mirror with 1024 actuators. These techniques produce models applicable to open-loop adaptive optics, where the turbulent wavefront is measured before it reaches the deformable mirror. The model’s input is the wavefront correction to apply to the mirror and its output is the set of actuator voltages that shape the mirror accordingly. Our experiments have achieved positioning errors of 3.1% rms of the peak-to-peak wavefront excursion.

© 2010 OSA

## 1. Introduction


## 2. Non-parametric estimation techniques

### 2.1 Multivariate Adaptive Regression Splines (MARS)


The MARS model of a response as a function of the inputs $x$ takes the form

$$\hat{y} = a_0 + \sum_{m=1}^{M} a_m B_m(x)$$

where

- $a_0$ is the coefficient of the constant basis function,
- $B_m(x)$ is the $m$-th basis function, which may be a single spline function or a product (interaction) of two (or more) spline functions,
- $a_m$ is the coefficient of the $m$-th basis function, and
- $M$ is the number of basis functions included in the model.

The spline basis functions are truncated power functions of the form $(x - t)_+^q$, where $t$ is called the knot location, $q$ indicates the power (> 0) to which the spline is raised, and the subscript “+” indicates that the function has been forced to zero for negative arguments.
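As an illustration of the model form above: the paper's models were built in *R*, but the following Python/NumPy sketch (with hypothetical helper names `hinge` and `mars_predict`) shows how a fitted MARS model is evaluated from its knots and coefficients.

```python
import numpy as np

def hinge(x, t, q=1):
    """Truncated power spline (x - t)_+^q: forced to zero for x < t."""
    return np.maximum(x - t, 0.0) ** q

def mars_predict(x, a0, terms):
    """Evaluate a MARS model: a0 + sum_m a_m * B_m(x).

    x     : (n_samples, n_inputs) array.
    terms : list of (a_m, factors), where each factor (i, t, s) is a
            hinge on input i with knot t and orientation s = +1 / -1;
            a basis function B_m is the product of its factors.
    """
    y = np.full(x.shape[0], a0)
    for a_m, factors in terms:
        b = np.ones(x.shape[0])
        for i, t, s in factors:
            b *= hinge(s * (x[:, i] - t), 0.0)
        y += a_m * b
    return y
```

For example, a one-term model with $a_0 = 1$ and $a_1 = 2$ on the hinge $(x - 0.5)_+$ returns $1 + 2\,(x - 0.5)_+$.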

### 2.2 Artificial Neural Networks (ANN)

- (1) Calculate the outputs of all hidden layer neurons using Eq. (5), $net_j = \sum_{i=0}^{n} w_{i,j}\, y_i$ with $z_j = f_H(net_j)$,
- where $net_j$ is the activation value of the $j$-th node, $w_{i,j}$ is the connection weight from input node $i$ to hidden node $j$, $y_i$ is the $i$-th input with $y_0$ being the bias $b_{IH}$ (with weight $w_{0,j} = 1$), $z_j$ is the corresponding output of the $j$-th node in the hidden layer, and $f_H$ is the activation function of a node, which is usually a sigmoid function, written in Eq. (6): $f_H(net_j) = 1/(1 + e^{-net_j})$.

- (2) Calculate the outputs of all output layer neurons using Eq. (7), $o_k = f_O\!\left(\sum_{j=0}^{h} w_{j,k}\, z_j\right)$,
- where $f_O$ is the activation function, usually a linear function, $w_{j,k}$ is the connection weight from hidden node $j$ to output node $k$ (here $k = 1$), and $z_j$ is the corresponding output of the $j$-th node in the hidden layer with $z_0$ being the bias $b_{HO}$ (with weight $w_{0,k} = 1$). All the connection weights and bias values are initially assigned random values, and are then modified according to the MLP-BP training process.
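The two forward-pass steps above can be sketched as follows. `mlp_forward` is a hypothetical name, and for simplicity the biases are carried as separate vectors rather than as the $y_0$/$z_0$ inputs with unit weights.

```python
import numpy as np

def sigmoid(x):
    # Eq. (6): sigmoid activation of the hidden layer.
    return 1.0 / (1.0 + np.exp(-x))

def mlp_forward(y, w_ih, b_ih, w_ho, b_ho):
    """One-hidden-layer MLP forward pass.

    y    : input vector (n_inputs,)
    w_ih : input-to-hidden weights (n_hidden, n_inputs), bias b_ih
    w_ho : hidden-to-output weights (n_outputs, n_hidden), bias b_ho
    """
    net = w_ih @ y + b_ih   # Eq. (5): net_j = sum_i w_{i,j} y_i + bias
    z = sigmoid(net)        # Eq. (6): z_j = f_H(net_j)
    return w_ho @ z + b_ho  # Eq. (7): linear output activation f_O
```

With zero input-to-hidden weights, each hidden output is $\sigma(0) = 0.5$, so the result reduces to the weighted sum of constants plus the output bias.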

### 2.3 ANN models implemented

- **ANNs** (small ANN): MLP with one hidden layer and structure 12 × 16 × 12 neurons
- **ANNb** (big ANN): MLP with one hidden layer and structure 30 × 40 × 30 neurons

The modeled area of 14 × 9 = 126 actuators (fully described in the next section) has been divided into seven sectors of 6 × 5 actuators each, illustrated in different colors (except purple, used to denote the main overlapping regions). Each sector is modeled by an ANNb model (of 6 × 5 = 30 neurons in the first layer). The different sectors must overlap in order to give continuity to the model. The ANNb model is trained using data from the central sector solely (in white in Fig. 2).
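A sketch of the sector decomposition and stitching. The sector offsets below are hypothetical (the paper's exact 7-sector layout of Fig. 2 is not reproduced here), and overlapping predictions are simply averaged; the per-sector models are stand-ins for the trained ANNb networks.

```python
import numpy as np

# Hypothetical offsets of seven 6x5 sectors tiling the 14x9 grid with
# overlap; the actual layout in the paper's Fig. 2 may differ.
SECTOR_OFFSETS = [(0, 0), (0, 4), (4, 0), (4, 2), (4, 4), (8, 0), (8, 4)]

def stitch_sectors(predict_fns, surface):
    """Run one per-sector model on each 6x5 window of the 14x9
    actuator surface and average voltages where sectors overlap."""
    volts = np.zeros(surface.shape)
    counts = np.zeros(surface.shape)
    for fn, (r, c) in zip(predict_fns, SECTOR_OFFSETS):
        win = surface[r:r + 6, c:c + 5]
        volts[r:r + 6, c:c + 5] += fn(win)   # sector's voltage estimate
        counts[r:r + 6, c:c + 5] += 1        # overlap multiplicity
    return volts / counts
```

Averaging in the overlap regions is one simple way to enforce the continuity between sectors that the text requires.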

### 2.4 Training

*R*programming language [10]. The training and estimation processes (illustrated in Fig. 4 ) are fundamentally different in terms of data flow. When training a model, DM surface data as well as actuator voltages are fed to the model’s training algorithm. Once trained, the model is fed with DM surface data, to generate ‘estimated’ or ‘predicted’ actuator voltages.


The ANN models were trained in *R* using the error backpropagation algorithm [20], which updates the weight and bias values according to gradient descent with momentum. Networks trained with the backpropagation algorithm are sensitive to initial conditions and susceptible to local minima in the error surface. Moreover, there may be many parameter sets within a model structure that are equally acceptable as simulators of a dynamical process of interest. Consequently, instead of attempting to find a single best ANN model, we make predictions based on an ensemble of neural networks trained for the same task (see e.g. [21–23]).

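The ensemble approach can be sketched as follows (hypothetical names; `train_fn` stands for any training routine that maps data plus a random seed to a fitted predictor).

```python
import numpy as np

def train_ensemble(train_fn, n_members, X, V, seeds=None):
    """Train several networks from different random initial weights.
    Backpropagation is sensitive to initial conditions, so an ensemble
    is often more robust than any single trained member."""
    seeds = seeds if seeds is not None else range(n_members)
    return [train_fn(X, V, seed=s) for s in seeds]

def ensemble_predict(models, x):
    # Average the voltage predictions of all ensemble members.
    return np.mean([m(x) for m in models], axis=0)
```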

All computations were run in the *R language and environment for statistical computing* [10] on a 58 GFLOPS Cray XD-1 supercomputer. With the machine mostly available to us, training a MARS model takes about 3 hours, while training an ANN model takes around 24 hours.

## 3. Experimental setup and methodology

- There were a few unresponsive actuators in other areas of the DM, which we did not want to incorporate into the modeled area.
- The modeled area is comparable in size to smaller DMs (Boston’s Multi-DM with 12 × 12 actuators), so its size is representative.
- Training time would stay under control, since the experiments performed for this article are intended as a proof of concept of non-parametric estimation techniques.
- We can keep a static ring of the DM surface around the modeled area, where we can sample the long-term drift effects we have seen with our interferometer. This is described in detail in Guzmán et al. [6].

## 4. Figures of merit


### 4.1 Ratio residual – desired peak-to-valley

### 4.2 Ratio residual – desired rms
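Assuming the conventional definitions of peak-to-valley and rms (the paper's exact normalization may differ), these two figures of merit can be sketched as follows, with hypothetical names:

```python
import numpy as np

def rms(x):
    # Root-mean-square about zero.
    return np.sqrt(np.mean(np.square(x)))

def pv_ratio(desired, achieved):
    """Sec. 4.1: peak-to-valley of the residual wavefront divided by
    the peak-to-valley of the desired wavefront."""
    return np.ptp(desired - achieved) / np.ptp(desired)

def rms_ratio(desired, achieved):
    """Sec. 4.2: rms of the residual wavefront divided by the rms of
    the desired wavefront."""
    return rms(desired - achieved) / rms(desired)
```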



## 5. Results

### 5.1 Focus term


### 5.2 Random phases

## 6. Conclusions

## Acknowledgments

## References and links

1. F. Hammer, F. Sayede, E. Gendron, T. Fusco, D. Burgarella, V. Cayatte, J. M. Conan, F. Courbin, H. Flores, I. Guinouard, L. Jocou, A. Lancon, G. Monnet, M. Mouhcine, F. Rigaud, D. Rouan, G. Rousset, V. Buat, and F. Zamkotsian, “The FALCON Concept: Multi-Object Spectroscopy Combined with MCAO in Near-IR,” Proc. ESO Workshop (2002).
2. F. Assémat, E. Gendron, and F. Hammer, “The FALCON concept: multi-object adaptive optics and atmospheric tomography for integral field spectroscopy - principles and performance on an 8-m telescope,” Mon. Not. R. Astron. Soc. **376**(1), 287–312 (2007). [CrossRef]
3. C. Evans, S. Morris, M. Swinbank, J. G. Cuby, M. Lehnert, and M. Puech, “EAGLE: galaxy evolution with the E-ELT,” Astron. Geophys. **51**(2), 2.17–2.21 (2010). [CrossRef]
4. T. Bifano, P. Bierden, H. Zhu, S. Cornelissen, and J. Kim, “Megapixel wavefront correctors,” Proc. SPIE **5490**, 1472–1481 (2004). [CrossRef]
5. J. W. Evans, B. Macintosh, L. Poyneer, K. Morzinski, S. Severson, D. Dillon, D. Gavel, and L. Reza, “Demonstrating sub-nm closed loop MEMS flattening,” Opt. Express **14**(12), 5558–5570 (2006). [CrossRef] [PubMed]
6. D. Guzmán, F. J. Juez, F. S. Lasheras, R. Myers, and L. Young, “Deformable mirror model for open-loop adaptive optics using multivariate adaptive regression splines,” Opt. Express **18**(7), 6492–6505 (2010). [CrossRef] [PubMed]
7. J. B. Stewart, A. Diouf, Y. Zhou, and T. G. Bifano, “Open-loop control of a MEMS deformable mirror for large-amplitude wavefront control,” J. Opt. Soc. Am. A **24**(12), 3827–3833 (2007). [CrossRef]
8. K. Morzinski, K. Harpsoe, D. Gavel, and S. Ammons, “The open-loop control of MEMS: Modeling and experimental results,” Proc. SPIE **6467**, 64670G (2007). [CrossRef]
9. C. Blain, R. Conan, C. Bradley, and O. Guyon, “Open-loop control demonstration of micro-electro-mechanical-system MEMS deformable mirror,” Opt. Express **18**(6), 5433–5448 (2010). [CrossRef] [PubMed]
10. J. Chambers, *Software for Data Analysis: Programming with R* (Springer, 2008).
11. J. Friedman, “Multivariate adaptive regression splines,” Ann. Stat. **19**(1), 1–67 (1991). [CrossRef]
12. S. Sekulic and B. R. Kowalski, “MARS: a tutorial,” J. Chemometr. **6**(4), 199–216 (1992). [CrossRef]
13. K. Hornik, M. Stinchcombe, and H. White, “Multilayer feedforward networks are universal approximators,” Neural Netw. **2**(5), 359–366 (1989). [CrossRef]
14. J. de Villiers and E. Barnard, “Backpropagation neural nets with one and two hidden layers,” IEEE Trans. Neural Netw. **4**(1), 136–141 (1993). [CrossRef] [PubMed]
15. A. W. Minns and M. J. Hall, “Artificial neural networks as rainfall-runoff models / Modelisation pluie-debit par des reseaux neuroneaux artificiels,” Hydrol. Sci. J. **41**(3), 399–417 (1996). [CrossRef]
16. R. J. Abrahart and L. See, “Comparing neural network and autoregressive moving average techniques for the provision of continuous river flow forecasts in two contrasting catchments,” Hydrol. Process. **14**(11-12), 2157–2172 (2000). [CrossRef]
17. A. Y. Shamseldin, “Application of a neural network technique to rainfall-runoff modelling,” J. Hydrol. (Amst.) **199**(3-4), 272–294 (1997). [CrossRef]
18. C. M. Zealand, D. H. Burn, and S. P. Simonovic, “Short term streamflow forecasting using artificial neural networks,” J. Hydrol. (Amst.) **214**(1-4), 32–48 (1999). [CrossRef]
19. C. W. Dawson and R. L. Wilby, “Hydrological modelling using artificial neural networks,” Prog. Phys. Geogr. **25**(1), 80–108 (2001).
20. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature **323**(6088), 533–536 (1986). [CrossRef]
21. A. J. C. Sharkey, “On combining artificial neural nets,” Connect. Sci. **8**(3), 299–314 (1996). [CrossRef]
22. R. D. Braddock, M. L. Kremmer, and L. Sanzogni, “Feed-forward artificial neural network model for forecasting rainfall run-off,” Environmetrics **9**(4), 419–432 (1998). [CrossRef]
23. S.-I. Amari, N. Murata, K.-R. Muller, M. Finke, and H. H. Yang, “Asymptotic statistical theory of overtraining and cross-validation,” IEEE Trans. Neural Netw. **8**(5), 985–996 (1997). [CrossRef] [PubMed]
24. W. Wang, P. H. A. J. M. Van Gelder, and J. K. Vrijling, “Some issues about the generalization of neural networks for time series prediction,” in *Artificial Neural Networks: Formal Models and Their Applications*, W. Duch, ed., Lecture Notes in Computer Science, Vol. 3697 (Springer, 2005), pp. 559–564.

**OCIS Codes**

(010.1080) Atmospheric and oceanic optics : Active or adaptive optics

(010.1330) Atmospheric and oceanic optics : Atmospheric turbulence

(010.1285) Atmospheric and oceanic optics : Atmospheric correction

**ToC Category:**

Adaptive Optics

**History**

Original Manuscript: June 21, 2010

Revised Manuscript: September 16, 2010

Manuscript Accepted: September 17, 2010

Published: September 23, 2010

**Citation**

Dani Guzmán, Francisco Javier de Cos Juez, Richard Myers, Andrés Guesalaga, and Fernando Sánchez Lasheras, "Modeling a MEMS deformable mirror using non-parametric estimation techniques," Opt. Express **18**, 21356-21369 (2010)

http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-18-20-21356


