Characteristics of Cluster Mode Particle Number Concentrations in India
Crossref (2023)
Abstract
The dynamics of atmospheric aerosols is governed by the spatio-temporal variability in particle number size distributions. Atmospheric new particle formation begins with the formation of cluster mode (sub-3 nm) particles, followed by their growth to larger sizes in the atmosphere. Here, we used three years (2019-2022) of particle number size distribution measurements in the size range from 1 to 3 nm from a nano Condensation Nucleus Counter (nCNC) in Hyderabad, India. A distinct seasonal variation was observed in size-segregated cluster mode particle number concentrations, with the highest concentrations in spring (March-May) and the lowest in winter (December-February). This seasonal variability is strongly linked to factors affecting cluster mode formation, such as planetary boundary layer evolution, temperature (extent of oxidation), and pre-existing particles (coagulation sink). The calculated sulfuric acid proxy is strongly correlated with cluster mode particle number concentrations and formation rates, indicating the important role of sulfuric acid in aerosol nucleation. The formation rate and growth rate of cluster mode particles were also higher during spring than during winter. Our analysis further revealed that cluster mode number concentrations were highest at low levels of particulate matter smaller than 2.5 µm (PM2.5) and lowest at high PM2.5 levels, indicative of the efficient scavenging of cluster mode particles by larger pre-existing particles. We also used the PARticle Growth And Nucleation (PARGAN) inversion model to estimate the formation rate and growth rate from particle size distribution measurements in the size range from 10 nm to 560 nm. The formation and growth rates estimated by the PARGAN model agreed with those measured by the nCNC within the uncertainty levels. This underlines the applicability of the PARGAN inversion model for estimating cluster mode formation and growth rates where such measurements are not available, particularly in India.
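For context on two quantities central to the abstract: a "sulfuric acid proxy" and a cluster mode "formation rate" are typically computed from standard literature formulations, such as the proxy [H2SO4] ~ k * UVB * [SO2] / CS (Petäjä et al., 2009) and the balance equation J = dN/dt + CoagS * N + (GR / Δdp) * N (Kulmala et al., 2012). The sketch below illustrates these standard forms only; the function names, the coefficient k, and all parameter values are assumed placeholders, not the paper's actual implementation or calibration.

import numpy as np

# Illustrative sketch of two standard aerosol quantities referenced in the
# abstract. All names and default values are assumptions for demonstration,
# not the paper's data or code.

def sulfuric_acid_proxy(uvb, so2, cs, k=1.0e-7):
    # Proxy for gas-phase H2SO4 following the commonly used form
    # [H2SO4] ~ k * UVB * [SO2] / CS (Petäjä et al., 2009).
    #   uvb : UV-B irradiance (W m^-2)
    #   so2 : SO2 concentration (molecules cm^-3)
    #   cs  : condensation sink (s^-1)
    #   k   : empirical scaling coefficient (site-specific; assumed here)
    return k * uvb * so2 / cs

def cluster_formation_rate(n, dt, coag_sink, growth_rate, dp_lo=1.0, dp_hi=3.0):
    # Formation rate J (cm^-3 s^-1) of cluster mode (1-3 nm) particles from
    # the standard balance equation (Kulmala et al., 2012):
    #   J = dN/dt + CoagS * N + (GR / (dp_hi - dp_lo)) * N
    #   n           : time series of 1-3 nm number concentration (cm^-3)
    #   dt          : sampling interval (s)
    #   coag_sink   : coagulation sink of cluster mode particles (s^-1)
    #   growth_rate : growth rate (nm s^-1)
    dndt = np.gradient(n, dt)  # time derivative of N
    return dndt + coag_sink * n + (growth_rate / (dp_hi - dp_lo)) * n

As a usage note, a 1-3 nm concentration series sampled every 300 s with, say, CoagS = 1e-3 s^-1 and GR = 2 nm/h (converted to nm/s) would be passed directly to cluster_formation_rate; these inputs are likewise hypothetical.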