Data Acquisition System of CANDLES Detector for Double Beta Decay Experiment
2011 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC)
Osaka University
Abstract
The observation of neutrino-less double beta decay (0νββ) would prove the existence of a massive Majorana neutrino. For a sensitive measurement of the neutrino mass, we have developed a new detector system, CANDLES, which features CaF2(pure) scintillators. Because 0νββ is an extremely rare decay, the CANDLES system requires a low-background measurement. To achieve this, we introduced flash ADCs with characteristic features into the system. In addition to the flash ADCs, we developed a trigger system for the CaF2(pure) events. Signal processing for the data readout and trigger is implemented in an FPGA. In this paper, we present the data acquisition system of CANDLES, including the characteristic flash ADCs and the trigger system.
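As an illustration of the kind of digital trigger logic the abstract attributes to the FPGA, the sketch below applies a moving-sum threshold to digitized flash-ADC samples. This is a generic trigger primitive, not the paper's actual algorithm; the window length, threshold, and waveform values are hypothetical.

```python
def moving_sum_trigger(samples, window=4, threshold=40):
    """Return the indices where the running sum over `window`
    consecutive ADC samples crosses `threshold` (a simple
    digital trigger primitive; parameters are illustrative)."""
    fires = []
    running = sum(samples[:window])
    if running >= threshold:
        fires.append(0)
    # Slide the window one sample at a time, updating the sum
    # incrementally as an FPGA pipeline typically would.
    for i in range(1, len(samples) - window + 1):
        running += samples[i + window - 1] - samples[i - 1]
        if running >= threshold:
            fires.append(i)
    return fires

# A toy waveform: flat baseline with one scintillation-like pulse.
waveform = [0, 1, 0, 2, 15, 30, 22, 12, 6, 3, 1, 0]
print(moving_sum_trigger(waveform))  # indices where the trigger fires
```

Summing over a short window before comparing against the threshold suppresses single-sample noise spikes, which is one common motivation for putting this logic in the FPGA rather than triggering on raw samples.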
Key words
analogue-digital conversion, data acquisition, double beta decay, liquid scintillation detectors, neutrino mass, nuclear electronics, readout electronics, trigger circuits, CANDLES detector, CaF2 scintillator detector, FPGA, data acquisition system, data readout system, flash ADC characteristics, low background measurement method, massive Majorana neutrino, neutrino mass measurement, neutrino-less double beta decay experiment, signal processing, trigger system