Best Practice Guidelines for Use of Reference Points in Radiation Oncology Information Systems to Aggregate Longitudinal Dosimetric Data
Practical Radiation Oncology (2024)
University of California
Abstract
Purpose/Objectives: Tracking patient dose in radiation oncology is challenging because of disparate electronic systems from various vendors. Treatment planning systems (TPS), radiation oncology information systems (ROIS), and electronic health records (EHR) lack uniformity, complicating dose tracking and reporting. To address this, we examined practices in multiple radiation oncology settings and propose guidelines for current systems.

Materials/Methods: A survey was conducted among members of various professional groups to characterize dose reporting practices in TPS, ROIS, and EHR systems. The aim was to identify consistent components and develop guidelines from them.

Results: We identified six treatment scenarios in which current ROIS defaults fail to accurately represent dose totals. A standardized approach involving three reference point types – Primary Treatment Plan Reference, Dose Check, and Prescription Tracking – was proposed to address these scenarios. Standardizing naming conventions for reference points was also recommended to ease integration with EHRs. The approach requires minimal modification of existing systems and facilitates data transfer to, and display in, EHRs.

Conclusion: Standardizing reference points in commercial TPS and ROIS can bridge infrastructure gaps and improve dose tracking in complex clinical scenarios. This standardization, aligned with AAPM TG-263, paves the way for continued development of automated, standardized, interoperable tools, making reference point information easier to share.
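To make the proposed structure concrete, the sketch below models the three reference point types named in the abstract and sums longitudinal dose across sequential plans. This is a minimal illustrative example, not the paper's implementation: the class names, fields, the TG-263-style label "RxTrack_Brain", and the aggregation logic are all assumptions for demonstration only.

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical data model: the paper proposes three reference point
# types; the identifiers and fields below are illustrative assumptions,
# not the published specification.
class ReferencePointType(Enum):
    PRIMARY_TREATMENT_PLAN = "PrimaryTxPlanRef"
    DOSE_CHECK = "DoseCheck"
    PRESCRIPTION_TRACKING = "RxTracking"

@dataclass
class ReferencePoint:
    name: str                       # standardized, TG-263-style label (assumed format)
    point_type: ReferencePointType
    site: str                       # anatomic site, e.g. "Brain"
    dose_per_fraction_cGy: list[float] = field(default_factory=list)

    @property
    def total_dose_cGy(self) -> float:
        """Aggregate delivered dose across all recorded fractions."""
        return sum(self.dose_per_fraction_cGy)

def aggregate_by_site(points: list[ReferencePoint]) -> dict[str, float]:
    """Sum Prescription Tracking points per site to produce a
    longitudinal course total suitable for display in an EHR."""
    totals: dict[str, float] = {}
    for p in points:
        if p.point_type is ReferencePointType.PRESCRIPTION_TRACKING:
            totals[p.site] = totals.get(p.site, 0.0) + p.total_dose_cGy
    return totals

# Example: an initial plan plus a boost plan tracked at the same site.
initial = ReferencePoint("RxTrack_Brain", ReferencePointType.PRESCRIPTION_TRACKING,
                         "Brain", [200.0] * 25)   # 25 fractions of 2 Gy
boost = ReferencePoint("RxTrack_Brain", ReferencePointType.PRESCRIPTION_TRACKING,
                       "Brain", [200.0] * 5)      # 5-fraction boost
print(aggregate_by_site([initial, boost]))        # {'Brain': 6000.0}
```

A consistent, standardized name shared across courses is what lets a simple sum like this yield a correct cumulative total when plans from different TPS or ROIS vendors are merged in the EHR.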