New Particle Tracking Method for 2D Steady-State Groundwater Flow Around Wells: A Modified Algorithm Using the Stream Function
Journal of Hydrology (2024)
China University of Geosciences
Abstract
Particle tracking is an essential process in analyzing groundwater flow and solute transport in aquifers. A drawback of existing particle tracking methods, which rely on approximate estimates of the velocity within each time step, is that errors accumulated over many time steps cause particles to deviate from the exact streamlines, especially when pumping and injection wells are present. In this study, to improve the accuracy of particle tracking for two-dimensional (2D) steady-state flow around wells, we propose a stream-function-based (SFB) particle tracking method in which a particle is traced along the exact streamline corresponding to a given value of the stream function, and the branch-cut effect caused by the multivalued stream function around a well is eliminated. Three synthetic cases demonstrate the advantages of the SFB method over the existing Pollock method and the fourth-order Runge-Kutta method. In the cases of a single pumping well and an injection-pumping well pair, the Pollock and Runge-Kutta methods yield pathways that may deviate far from the exact streamline, whereas the SFB method performs much better. In the case of a pumping well with two injection wells, the SFB method successfully delineates a narrow capture zone of the pumping well, which requires highly accurate particle tracking. Moreover, the accuracy of particle positions obtained by the SFB method is insensitive to the step-travel distance. This study is useful for accurately investigating flow fields in aquifers disturbed by wells.
Key words
Flow net, Streamline, Stream function, Pumping well, Injecting well, Runge-Kutta method
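The abstract's central observation — that the stream function is constant along an exact streamline, so a particle traced step by step can be checked against its initial stream-function value — can be illustrated with a minimal sketch. This is not the authors' SFB implementation; the uniform-flow-plus-pumping-well field, the parameter values, and the fourth-order Runge-Kutta tracer below are assumptions for illustration only.

```python
import math

U, Q = 1.0, 2.0  # assumed uniform flow speed and well pumping rate

def psi(x, y):
    # Stream function of uniform flow plus a pumping well (sink) at the origin.
    # atan2 makes psi multivalued around the well (the branch-cut effect the
    # paper addresses); the path below stays in y > 0, where it is continuous.
    return U * y - (Q / (2 * math.pi)) * math.atan2(y, x)

def velocity(x, y):
    # Velocity field consistent with psi: u = dpsi/dy, v = -dpsi/dx.
    r2 = x * x + y * y
    u = U - (Q / (2 * math.pi)) * x / r2
    v = -(Q / (2 * math.pi)) * y / r2
    return u, v

def rk4_step(x, y, dt):
    # One classical fourth-order Runge-Kutta step of the particle position.
    k1 = velocity(x, y)
    k2 = velocity(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1])
    k3 = velocity(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1])
    k4 = velocity(x + dt * k3[0], y + dt * k3[1])
    x += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
    y += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
    return x, y

# Release a particle upstream of the well and track the stream-function drift,
# i.e. how far the numerical pathline departs from the exact streamline.
x, y = -5.0, 0.5
psi0 = psi(x, y)
for _ in range(200):
    x, y = rk4_step(x, y, 0.02)
drift = abs(psi(x, y) - psi0)
print(f"stream-function drift after 200 RK4 steps: {drift:.2e}")
```

In this smooth field the RK4 drift is tiny, but it grows with step size and with the number of steps, and it is this accumulation that step-wise tracers cannot avoid; an SFB-style tracer instead constrains each new position to satisfy psi(x, y) = psi0 exactly.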