
Early Detection of Visual Impairment in Young Children Using a Smartphone-Based Deep Learning System

Nature Medicine (2023)

Sun Yat-sen University

Abstract
Early detection of visual impairment is crucial but is frequently missed in young children, who are capable of only limited cooperation with standard vision tests. Although certain features of visually impaired children, such as facial appearance and ocular movements, can assist ophthalmic practice, applying these features to real-world screening remains challenging. Here, we present a mobile health (mHealth) system, the smartphone-based Apollo Infant Sight (AIS), which identifies visually impaired children with any of 16 ophthalmic disorders by recording and analyzing their gazing behaviors and facial features under visual stimuli. Videos from 3,652 children (≤48 months in age; 54.5% boys) were prospectively collected to develop and validate this system. For detecting visual impairment, AIS achieved an area under the receiver operating characteristic curve (AUC) of 0.940 in an internal validation set and an AUC of 0.843 in an external validation set collected in multiple ophthalmology clinics across China. In a further test of AIS for at-home implementation by untrained parents or caregivers using their smartphones, the system was able to adapt to different testing conditions and achieved an AUC of 0.859. This mHealth system has the potential to be used by healthcare professionals, parents and caregivers for identifying young children with visual impairment across a wide range of ophthalmic disorders.
Key words
Eye manifestations, Machine learning, Paediatrics, Translational research, Biomedicine (general), Cancer Research, Metabolic Diseases, Infectious Diseases, Molecular Medicine, Neurosciences
Chat Paper

Key points: This paper presents a smartphone-based deep learning system, Apollo Infant Sight (AIS), for the early detection of visual impairment in children. By recording and analyzing children's gazing behaviors and facial features under visual stimuli, AIS can identify visually impaired children with any of 16 ophthalmic disorders.

Methods: Videos from 3,652 children (≤48 months in age; 54.5% boys) were collected to develop and validate the system. AIS achieved an area under the receiver operating characteristic curve (AUC) of 0.940 on an internal validation set and an AUC of 0.843 on an external validation set collected in multiple ophthalmology clinics across China. In a test of at-home use by untrained parents or caregivers with their own smartphones, the system adapted to different testing conditions and achieved an AUC of 0.859.

Experiments: The name of the dataset used in this study is not given.
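The AUC figures reported above summarize how well the system's risk scores separate visually impaired from healthy children. A minimal sketch of the rank-based (Mann-Whitney) AUC computation is shown below; the labels and scores are synthetic placeholders, not the study's actual outputs, and this is not the authors' evaluation code.

```python
def auc(labels, scores):
    """Rank-based (Mann-Whitney U) estimate of the area under the ROC curve.

    labels: 1 for positive (e.g., visually impaired), 0 for negative.
    scores: model-predicted risk scores, higher = more likely positive.
    """
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    # Fraction of positive/negative pairs where the positive is ranked
    # higher; ties count as half a win.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Synthetic example: two negatives, two positives.
print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

An AUC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation, so the reported values of 0.843–0.940 indicate strong discrimination across validation settings.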