Characterization of Plasma-Discharge Capillaries for Plasma-based Particle Acceleration
IPAC'23 Proceedings (2024)
INFN Laboratori Nazionali di Frascati
Abstract
Novel particle accelerators based on plasma technology allow a drastic reduction in size, thanks to the high accelerating fields established inside plasmas, which are created and confined by dedicated devices. Plasma Wakefield Acceleration experiments are performed at the SPARC_LAB test facility (Laboratori Nazionali di Frascati - INFN) using gas-filled capillaries, in which the plasma is formed by ionizing hydrogen gas with high-voltage pulses. In this work, the characterization of gas-filled plasma-discharge capillaries is presented. Several geometrical configurations are tested, including capillaries with different channel shapes and different arrangements of the gas-injection inlets. These configurations are designed to enhance the uniformity of the plasma density distribution along the plasma channel, which is necessary to improve particle beam acceleration. The plasma sources are characterized by means of a spectroscopic technique based on Stark broadening, which makes it possible to measure the evolution of the plasma density profile along the channel. In addition, the CFD software OpenFOAM is used to simulate the dynamics of the neutral gas during the filling of the capillary.
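As a point of reference for the Stark-broadening diagnostic mentioned in the abstract, the electron density is typically obtained by inverting a tabulated line-width versus density scaling. The minimal Python sketch below assumes the widely used Gigosos-Cardeñoso scaling for the hydrogen Balmer-beta line; the specific line, tabulation, and fitting procedure adopted in the paper are not given here, so the constants are illustrative only.

```python
import numpy as np

# Stark-broadening density retrieval: a minimal sketch, assuming the
# Gigosos-Cardenoso scaling for the Balmer H-beta line (486.1 nm),
#     FWHA [nm] = 4.8 * (n_e / 1e23 m^-3)**0.68116,
# where FWHA is the full width at half area of the measured line.
# The line, tabulation, and fit procedure used in the paper may differ.

def electron_density_from_hbeta(fwha_nm):
    """Invert the width-density scaling; returns n_e in m^-3."""
    return 1e23 * (np.asarray(fwha_nm) / 4.8) ** (1.0 / 0.68116)

# Hypothetical widths measured at several positions along the channel:
widths_nm = np.array([0.5, 1.0, 2.0])
n_e_m3 = electron_density_from_hbeta(widths_nm)
print(n_e_m3 / 1e6)  # in cm^-3: roughly 3.6e15, 1.0e16, 2.8e16
```

In practice the measured line shape is a convolution of Stark, Doppler, and instrumental broadening, so the Stark width must first be extracted (e.g., by a Voigt fit or deconvolution) before applying such a scaling.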
Keywords
Laser-Plasma Accelerators, Wakefield Acceleration, Particle-in-Cell Simulations, High-Energy Density Plasmas, Plasma Diagnostics