Development and Implementation of a Digital Quality Measure of Emergency Cancer Diagnosis
Journal of Clinical Oncology (2024)
Michael E. DeBakey VA Medical Center
Abstract
PURPOSE: Missed and delayed cancer diagnoses are common, harmful, and often preventable. Automated measures of the quality of cancer diagnosis are lacking but could identify gaps and guide interventions. We developed and implemented a digital quality measure (dQM) of cancer emergency presentation (EP) using electronic health record databases of two health systems and characterized the measure's association with missed opportunities for diagnosis (MODs) and mortality.
METHODS: On the basis of literature and expert input, we defined EP as a new cancer diagnosis within 30 days after an emergency department or inpatient visit. We identified EPs for lung cancer and colorectal cancer (CRC) in the Department of Veterans Affairs (VA) and Geisinger from 2016 to 2020. We validated measure accuracy and identified preceding MODs through standardized chart review of 100 records per cancer per health system. Using VA's longitudinal encounter and mortality data, we applied logistic regression to assess EP's association with 1-year mortality, adjusting for cancer stage and demographics.
RESULTS: Among 38,565 and 2,914 patients with lung cancer and 14,674 and 1,649 patients with CRC at VA and Geisinger, respectively, our dQM identified EPs in 20.9% and 9.4% of lung cancers and in 22.4% and 7.5% of CRCs. Chart reviews showed high positive predictive values for EPs across sites and cancer types (72%-90%), and a substantial proportion of EPs represented MODs (48.8%-84.9%). EP was associated with significantly higher odds of 1-year mortality for lung cancer and CRC (adjusted odds ratios, 1.78 and 1.83; 95% CI, 1.63 to 1.86 and 1.61 to 2.07, respectively).
CONCLUSION: A dQM for cancer EP was strongly associated with both mortality and MODs. These findings suggest a promising automated approach to measuring the quality of cancer diagnosis in US health systems.
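The measure logic described in METHODS (flagging a new cancer diagnosis that occurs within 30 days after an emergency department or inpatient encounter) can be illustrated with a minimal sketch. The table schema, column names, and the pandas-based approach below are assumptions for illustration only, not the study's actual VA or Geisinger implementation.

```python
# Minimal sketch of the 30-day emergency-presentation (EP) flag, assuming
# hypothetical tables with datetime columns:
#   diagnoses:  [patient_id, cancer_type, diagnosis_date]
#   encounters: [patient_id, encounter_date, setting]  where setting is "ED" or "inpatient"
import pandas as pd

EP_WINDOW_DAYS = 30  # new cancer diagnosis within 30 days after an ED/inpatient visit


def flag_emergency_presentations(diagnoses: pd.DataFrame,
                                 encounters: pd.DataFrame) -> pd.DataFrame:
    """Return one row per incident cancer diagnosis with an emergency_presentation flag."""
    # Keep only acute-care encounters (ED or inpatient).
    acute = encounters[encounters["setting"].isin(["ED", "inpatient"])]

    # Pair each diagnosis with the patient's acute encounters; patients with no
    # acute encounters keep a single row with missing encounter_date.
    merged = diagnoses.merge(acute, on="patient_id", how="left")

    # Days from acute encounter to diagnosis; NaN (no encounter) never qualifies.
    delta = (merged["diagnosis_date"] - merged["encounter_date"]).dt.days
    merged["within_window"] = delta.between(0, EP_WINDOW_DAYS)

    # A diagnosis is an EP if any qualifying encounter falls in the 30-day window.
    ep_flags = (merged.groupby(["patient_id", "cancer_type", "diagnosis_date"])["within_window"]
                      .any()
                      .rename("emergency_presentation")
                      .reset_index())
    return ep_flags
```

In a real EHR pipeline, the acute-care settings would be identified from system-specific encounter codes rather than a simple `setting` label, and the diagnosis table would be restricted to incident lung cancer and CRC cases as defined in the study.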