Time to Throw out the Elephant in the Room: Proper Use of SvO2 in Extracorporeal Life Support
ASAIO Journal (American Society for Artificial Internal Organs), 2024
ECMO Centre Karolinska
Abstract
To the Editor: There is an elephant in the "ECLS room." When new devices for extracorporeal life support (ECLS) are released, despite numerous remarks to industry representatives over the last decade, manufacturers' ignorance and deliberate use of false terminology in ECLS consoles persist. The abbreviation for mixed venous saturation (SvO2) is falsely used for the oxygen saturation of the blood drained to the ECLS circuit, even though the Extracorporeal Life Support Organization (ELSO) has defined a sample taken at this site as the premembrane saturation (SpreO2).1 Indeed, the correct definition of SvO2 is the oxygen saturation in the mix of venous blood returning from all tissues in the body,2 most accurately sampled from the pulmonary artery in patients without extracorporeal membrane oxygenation (ECMO). ChatGPT-3.5 tells you that free of charge (accessed online on May 30, 2024). In inferior vena cava (IVC) drainage, femoral artery return ECMO (Vfivc-Af), John et al.3 recently showed a difference between SpreO2 from the IVC and the oxygen saturation in the main pulmonary artery (SpaO2). The authors also concluded that calculating cardiac output (CO) based on SpreO2 led to inaccurate results. However, the authors did not discuss that, in ECLS, as soon as the extracorporeal blood flow starts, with or without sweep gas, the natural flow streams from the periphery that define SvO2 become disrupted. The fractional balance of returned blood oxygen content between the inferior and superior venae cavae and the coronary sinus becomes offset. The situation does not improve with the sweep gas on. The results of John et al.3 are thus further obscured by the fact that they did not use the true SvO2 as the reference for CO assessment but the SpaO2. Note that the true SvO2 cannot be found anywhere in any patient on ECLS.
Premembrane oxygen saturation may overestimate SvO2 by more than 20% in both veno-venous and veno-arterial ECLS,4,5 and thus the severity of illness might be underestimated when the wrong terminology is used. This is not a comment to question the work of John et al.3 Their detailed work is a good example, and among the first publications discussing the use of SpreO2 as a surrogate for SpaO2.4,5 Using "SvO2" wrongly is unfortunate but, one could say, (unknowingly?) endorsed by device manufacturers and our own ECLS community. A number of ECLS consoles use "SvO2" to denote SpreO2, and still, most of us accept what the industry says and follow these false prophets, despite the definition by ELSO.1 Thus, flawed manuscripts are still published using wrong terminology, definitions, and nomenclature. A series of comments shedding light on the unawareness and ignorance of existing well-defined terminology has been published in the wake of the coronavirus disease 2019 pandemic. Similar deviations triggered the ELSO Maastricht Treaty for Nomenclature, which forms the platform for how we should communicate.1 We should lead and continuously promote the use of a common language for communication within our guild and with other stakeholders. This is key to a seamless chain of development and patient safety, from the drawing board to the bedside product, and from basic physiology to the effect of a new management strategy or drug: yet again, patient safety.
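To make the stakes concrete, a minimal Fick-principle sketch (with entirely hypothetical patient values, not data from John et al.) shows how substituting a circuit-drainage saturation for the true SvO2 distorts a calculated cardiac output. Here an SpreO2 of 0.85 stands in for a true SvO2 of 0.65, i.e., a 20-percentage-point overestimate of the kind cited above; dissolved oxygen is ignored for simplicity.

```python
def o2_content(hb_g_dl, sat):
    """Oxygen content in mL O2/dL blood (1.34 mL O2 per g Hb; dissolved O2 ignored)."""
    return 1.34 * hb_g_dl * sat

def fick_co(vo2_ml_min, hb_g_dl, sao2, venous_sat):
    """Cardiac output (L/min) by the Fick principle: CO = VO2 / (CaO2 - CvO2)."""
    avdo2_ml_dl = o2_content(hb_g_dl, sao2) - o2_content(hb_g_dl, venous_sat)
    return vo2_ml_min / (avdo2_ml_dl * 10.0)  # *10 converts mL/dL to mL/L

# Hypothetical values for illustration only
vo2 = 250.0   # oxygen consumption, mL/min
hb = 12.0     # hemoglobin, g/dL
sao2 = 0.98   # arterial saturation

co_true = fick_co(vo2, hb, sao2, 0.65)   # using the true SvO2
co_wrong = fick_co(vo2, hb, sao2, 0.85)  # using SpreO2 mislabelled as "SvO2"

print(f"CO with true SvO2 = 0.65:          {co_true:.1f} L/min")
print(f"CO with SpreO2 = 0.85 as 'SvO2':   {co_wrong:.1f} L/min")
```

With these numbers the mislabelled saturation narrows the arteriovenous oxygen difference and roughly two-and-a-half-fold inflates the computed CO, while simultaneously suggesting a reassuring "mixed venous" saturation in a patient who may be far sicker.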