MP09-05 AUTOMATED PROSTATE GLAND AND PROSTATE ZONES SEGMENTATION USING A NOVEL MRI-BASED MACHINE LEARNING FRAMEWORK AND CREATION OF SOFTWARE INTERFACE FOR USERS ANNOTATION
Journal of Urology (2023)
Abstract
Journal of Urology, CME, 1 Apr 2023

Masatomo Kaneko, Giovanni E. Cacciamani, Yijing Yang, Vasileios Magoulianitis, Jintang Xue, Jiaxin Yang, Jinyuan Liu, Maria Sarah L. Lenon, Passant Mohamed, Darryl H. Hwang, Karan Gill, Manju Aron, Vinay Duddalwar, Suzanne L. Palmer, C.-C. Jay Kuo, Andre Luis Abreu, Inderbir Gill, and Chrysostomos L. Nikias

https://doi.org/10.1097/JU.0000000000003224.05

INTRODUCTION AND OBJECTIVE: To develop an automated machine learning (ML) model that segments the prostate gland, the peripheral zone (PZ), and the transition zone (TZ) on magnetic resonance imaging (MRI), and to create a web-based software interface for annotation.

METHODS: Consecutive men who underwent prostate MRI followed by prostate biopsy (PBx) were identified from our PBx database (IRB# HS-13-00663). The 3T MRI was performed according to the Prostate Imaging-Reporting and Data System (PI-RADS) v2 or v2.1. The T2-weighted (T2W) images were manually segmented into the whole prostate, PZ, and TZ by an experienced radiologist and a urologist. A two-stage automatic segmentation model based on Green Learning (GL), a novel non-deep-learning method, was designed: the first stage segments the prostate gland, and the second stage zooms into the prostate area to delineate the TZ and PZ. Both stages share a lightweight feed-forward encoder-decoder GL system. Included accessions were split for 5-fold cross-validation. Volumes were calculated from the number of pixels/voxels. Model performance for automated prostate segmentation was evaluated with Dice scores and Pearson correlation coefficients. A web-based software interface was designed and implemented for users to interact with the AI annotation model and make necessary adjustments.

RESULTS: A total of 119 patients (19,992 T2W images) met the inclusion criteria (Figure 1). Using the training dataset of 95 MRIs, an ML model for whole-prostate, PZ, and TZ segmentation was constructed. The mean Dice scores for the whole prostate, PZ, and TZ were 0.85, 0.62, and 0.81, respectively. The Pearson correlation coefficients for the volumes of the whole prostate, PZ, and TZ were 0.92 (p<0.01), 0.62 (p<0.01), and 0.93 (p<0.01), respectively. The web-based software interface takes a mean of 90 seconds to segment a study with 168 slices. The platform supports DICOM series upload, image preview, image modification, 3-dimensional preview, and annotation mask export from any device, without migrating data.

CONCLUSIONS: A lightweight feed-forward encoder-decoder model based on Green Learning can precisely segment the whole prostate, PZ, and TZ, and is available through a user-friendly software interface.

Source of Funding: None

© 2023 by American Urological Association Education and Research, Inc. Volume 209, Issue Supplement 4, April 2023, Page e105.
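The evaluation metrics named in METHODS can be sketched in a few lines. This is a minimal illustration, not the authors' code: it assumes binary segmentation masks stored as NumPy arrays, and the voxel spacing used below is a made-up example value, not a parameter reported in the abstract.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks (1 = voxel inside the structure)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

def mask_volume_ml(mask: np.ndarray, voxel_spacing_mm=(0.5, 0.5, 3.0)) -> float:
    """Volume from the voxel count times the per-voxel volume (mm^3 converted to mL).

    voxel_spacing_mm is illustrative; in practice it comes from the DICOM header.
    """
    voxel_mm3 = float(np.prod(voxel_spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0

# Toy example: two overlapping 3-D masks (4 slices of a 10x10 grid)
truth = np.zeros((4, 10, 10), dtype=np.uint8)
truth[:, 2:8, 2:8] = 1
pred = np.zeros_like(truth)
pred[:, 3:8, 2:8] = 1

print(round(dice_score(pred, truth), 3))  # → 0.909

# Pearson correlation between predicted and reference volumes across cases
# (toy numbers, not the study's data)
vols_pred = np.array([30.1, 45.2, 28.7])
vols_true = np.array([31.0, 44.0, 30.2])
r = np.corrcoef(vols_pred, vols_true)[0, 1]
```

In the study, Dice is computed per structure (whole prostate, PZ, TZ) against the manual contours, and the Pearson coefficient compares the resulting volume estimates across all cases in the 5-fold cross-validation.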