PR-Transformer: Long-term Prediction of Train Wheel Diameters Using Progressive Refinement Transformer
IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT (2024)
Southwest Jiaotong University
Abstract
Massive amounts of wheel diameter data can be collected from the trackside wheel detection subsystem, providing a feasible basis for developing a data-driven model for wheel diameter prediction. To this end, a novel wheel diameter prediction model, named the progressive refinement transformer (PR-Transformer), which combines time embedding and data decomposition, is proposed. Time embedding encodes temporal information into the model, offering precise temporal context for wheel diameter changes and enabling the model to better capture the dependencies between time and wheel wear. Data decomposition partitions the wheel diameter data into seasonal and trend components, simplifying each component and making it more predictable. The PR-Transformer adopts an encoder-decoder architecture and extracts long-term dependencies using scaled dot-product attention (SDPA) and multihead attention (MHA) mechanisms; it progressively refines its predictions by integrating information from both the encoder and the decoder. Results indicate that the PR-Transformer achieves the lowest mean squared error (MSE) and mean absolute error (MAE) across various wheel diameter datasets, surpassing other state-of-the-art models and demonstrating its potential as a robust tool for wheel diameter prediction.
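The abstract mentions two building blocks that can be illustrated compactly: decomposing the wheel diameter series into trend and seasonal components, and scaled dot-product attention. Below is a minimal sketch (not the authors' code); the moving-average decomposition, kernel size, and tensor shapes are assumptions chosen for illustration.

```python
# Sketch of series decomposition (trend + seasonal) and scaled dot-product
# attention, as described in the abstract. Not the paper's implementation;
# kernel_size and shapes are illustrative assumptions.
import torch
import torch.nn.functional as F


def decompose(x: torch.Tensor, kernel_size: int = 25):
    """Split a series x of shape (batch, length, channels) into seasonal and trend parts."""
    # Replicate the end points so the moving average keeps the sequence length.
    pad = (kernel_size - 1) // 2
    front = x[:, :1, :].repeat(1, pad, 1)
    back = x[:, -1:, :].repeat(1, kernel_size - 1 - pad, 1)
    padded = torch.cat([front, x, back], dim=1)
    # Moving average over the time axis captures the slowly varying trend.
    trend = F.avg_pool1d(padded.transpose(1, 2), kernel_size, stride=1).transpose(1, 2)
    seasonal = x - trend  # residual fluctuation around the trend
    return seasonal, trend


def scaled_dot_product_attention(q, k, v):
    """Standard SDPA: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5
    return torch.softmax(scores, dim=-1) @ v


if __name__ == "__main__":
    x = torch.randn(8, 96, 1)           # e.g. 96 past wheel-diameter readings per batch item
    seasonal, trend = decompose(x)
    q = k = v = torch.randn(8, 96, 64)  # toy query/key/value projections
    out = scaled_dot_product_attention(q, k, v)
    print(seasonal.shape, trend.shape, out.shape)
```

In this style of model, the decomposed components are typically fed to the encoder-decoder separately so that attention layers operate on the more predictable seasonal signal while the trend is refined additively; whether PR-Transformer follows exactly this arrangement is detailed in the full paper.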
Key words
Wheels, Predictive models, Data models, Transformers, Numerical models, Market research, Prediction algorithms, Feature extraction, Data mining, Vibrations, Rail vehicle, time-series analysis, time-series decomposition, transformer, wheel diameter prediction