Nuclear Instruments and Methods in Physics Research A (2012)
Abstract
Calibration factors w for the determination of fission rates in metallic foils of natU, 235U, 232Th, natPb and 197Au were determined for foils in contact with synthetic mica track detectors. Proton-induced fission at proton energies of 0.7 GeV and 1.5 GeV was used. Using our experimental results as well as those of other authors, w was determined for different foil–mica systems. Two methods were used to calculate w relative to the calibration factor for the uranium–mica system, which had been obtained in a standard neutron field of energy 14.7 MeV. One method requires knowledge of the mean range of the fission fragments in the foils of interest; the other needs the values of the fission cross-sections at the required energies together with the densities of the tracks recorded in the track detectors in contact with the foil surfaces. The obtained w values were compared with Monte Carlo calculations and good agreement was found. It is shown that a calibration factor obtained with low-energy neutron-induced fission in uranium isotopes deviates by less than 10% from those obtained with relativistic proton-induced fission.
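The abstract does not state the working formulas, but the two relative methods it describes can be sketched as follows. This is a hedged reconstruction from standard track-detector practice, not taken from the paper: the symbols \bar{R} (mean fission-fragment range in the foil), \sigma_f (fission cross-section), n (atom density of fissionable nuclei), and \rho (track density in the mica) are all assumptions. If w converts track density to fission rate, the track yield from a thick foil scales with the fragment range, giving the range method; if all foils see the same proton fluence, the expected fission rate is n \sigma_f \Phi, giving the cross-section method:

\[ w_x = w_U \, \frac{\bar{R}_U}{\bar{R}_x} \qquad \text{(range method, assumed relation)} \]

\[ w_x = w_U \, \frac{n_x \, \sigma_{f,x} \, \rho_U}{n_U \, \sigma_{f,U} \, \rho_x} \qquad \text{(cross-section method, assumed relation)} \]

In both cases the unknown detector efficiency cancels in the ratio to the uranium–mica reference, which is why only the 14.7 MeV neutron calibration of the uranium–mica system is needed as an absolute anchor.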