
Facility for Alignment, Assembly, and Integration of the SPO Mirror Modules Onto the ATHENA Telescope

Optics for EUV, X-Ray, and Gamma-Ray Astronomy X (2021)

Media Lario Srl

Abstract
Several hundred Silicon Pore Optics (SPO) mirror modules will be integrated and co-aligned onto the ATHENA (Advanced Telescope for High-ENergy Astrophysics) Mirror Assembly Module (MAM). The selected integration process, developed by Media Lario, exploits a full-size optical bench to capture the focal-plane image of each mirror module when illuminated by a UV plane wavefront at 218 nm. Each mirror module, handled by a manipulator, focuses the collimated beam onto a CCD camera placed at the 12 m focal position of the ATHENA telescope. The image is processed in real time to calculate the centroid position and overlap it with the centroid of the already integrated mirror modules. Media Lario has designed the ATHENA Assembly, Integration, and Testing facility to realize the integration process for the flight telescope and has started its construction. The facility consists of a vertical optical bench installed inside a tower with controlled cleanroom conditions. The MAM axis is aligned along gravity and supported on actuators to compensate for gravity deformations. A robot device above the MAM is used for aligning the SPO mirror modules. The 2.6 m paraboloid mirror that collects the light emitted by a UV source is in final polishing. The alignment system, the cell support, and the metrology system for the UV collimator have been qualified and accepted for installation. Details about the optical bench and the status of the facility construction will be presented.
Key words
X-ray optics, X-ray telescopes, ATHENA, Silicon Pore Optics, Integration, Optical Alignment
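
The real-time centroid step described in the abstract can be illustrated with a minimal sketch. This is not Media Lario's implementation: the function name, background threshold, CCD pixel pitch, and reference centroid below are illustrative assumptions, showing only the intensity-weighted centroid of a focal spot and its offset from the centroid of the already integrated modules.

```python
# Minimal sketch (assumed, not the authors' code) of a centroid-based
# alignment check: compute the intensity-weighted centroid of a CCD
# frame and its offset from a reference centroid.
import numpy as np

def spot_centroid(frame: np.ndarray, threshold: float = 0.0) -> tuple[float, float]:
    """Intensity-weighted centroid (x, y) of a CCD frame, in pixels."""
    img = np.clip(frame.astype(float) - threshold, 0.0, None)  # subtract background
    total = img.sum()
    if total == 0.0:
        raise ValueError("no signal above threshold")
    ys, xs = np.indices(img.shape)                             # pixel coordinate grids
    return (xs * img).sum() / total, (ys * img).sum() / total

# Usage with a synthetic frame: Poisson background plus one bright spot.
rng = np.random.default_rng(0)
frame = rng.poisson(5.0, size=(256, 256)).astype(float)
frame[120:125, 130:135] += 500.0                # synthetic focal spot
cx, cy = spot_centroid(frame, threshold=10.0)
ref_cx, ref_cy = 128.0, 128.0                   # assumed reference centroid (pixels)
pixel_pitch_um = 13.0                           # assumed CCD pixel pitch
dx_um = (cx - ref_cx) * pixel_pitch_um
dy_um = (cy - ref_cy) * pixel_pitch_um
print(f"centroid offset: ({dx_um:.1f}, {dy_um:.1f}) um")
```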