This high-level course is offered by the Institute of Microtechnology (IMT) of the EPFL at Neuchâtel.
Advancements in silicon manufacturing and the life sciences are enabling the creation of structures on the scale of fractions of micrometers, down to the nanometer scale. Successful applications of these nanotechnologies require high-precision analysis tools for measurement and visual feedback, but many of these structures are too small to be clearly visualized by existing imaging tools. Some tools allow only the sampling of a few points on the surface of a tiny structure, with a signal intensity lower than the noise floor of the system. Quantitative analysis of data from such nano-imaging and nano-machining tools requires techniques based on sophisticated mathematical models to enhance and de-noise image data, and to measure and reconstruct the shape of these nanostructures.
Primarily targeted at students pursuing their Master of Science in Micro and Nanotechnology at the University of Neuchâtel, this two-day course is also open to an industrial audience interested in the state of the art in nanoscale imaging.
This course will teach the basic components of statistical data analysis, with a focus on model-based techniques for the quantitative analysis of nano-scale structures.
Starting from the basics of probabilistic estimation theory, we will compare various approaches to parameter estimation, such as maximum-likelihood estimators and Bayesian techniques.
We will analyze the relationships among these techniques and show how to select the most appropriate estimator based on the mathematical structure of the problem.
In particular, this includes a detailed discussion of the components of parameter estimators, leading to an in-depth analysis of the proper choice of likelihood functions and the incorporation of a priori information.
We will investigate how likelihood functions encode the relationship between the parameters of interest and the data, and how we can derive generative models based on the physical properties of the data acquisition system.
We will further learn how the statistical distribution of noise determines the analytical form of the likelihood function, and how this leads to robust estimators.
Finally, we will analyze how to design data priors based on a priori knowledge of the problem, and how to learn them automatically from data.
Using practical examples throughout the course, we will show how a proper choice of these components can yield precise results, while improper use can lead to strongly biased estimates.
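As a minimal illustration of the point above (a sketch of my own, not taken from the course materials): the assumed noise distribution determines the analytical form of the maximum-likelihood estimator. For a constant signal under i.i.d. Gaussian noise, maximizing the likelihood amounts to minimizing squared error, which gives the sample mean; under heavy-tailed Laplacian noise it amounts to minimizing absolute error, which gives the sample median. A single outlier biases the first estimator strongly but barely affects the second:

```python
import numpy as np

# Hypothetical example: 99 low-noise measurements of a true value of 5.0,
# plus one gross outlier at 50.0.
rng = np.random.default_rng(0)
true_value = 5.0
samples = true_value + rng.normal(0.0, 0.1, size=99)
samples = np.append(samples, 50.0)  # one gross outlier

# MLE under a Gaussian noise model: the sample mean (least squares).
mle_gaussian = samples.mean()       # strongly biased by the outlier

# MLE under a Laplacian noise model: the sample median (robust).
mle_laplacian = np.median(samples)  # barely affected by the outlier

print(mle_gaussian, mle_laplacian)
```

The median here is an instance of the "robust estimators" mentioned above: its influence function is bounded, so a single corrupted sample cannot move the estimate arbitrarily far.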
In summary, the course covers the following topics:
- Problem statement and introduction
- Statistical estimators for image analysis
- Structure preserving de-noising for low-SNR applications
- Removal of systematic artifacts
- Image segmentation
- Feature detection and classification
- 3D measurement and reconstruction
- Sensor fusion
Dr. Horst Haussecker is a Principal Engineer in Intel's Corporate Technology Group in Santa Clara, and manager of the Computational Nano-Vision research project in Intel Research. His research interests include physics-based computer vision, image sequence analysis, infrared thermography, and application of digital image processing as a quantitative instrument in science and technology.