Improving CTR with the FastIC ASIC for TOF-PET by Overcoming SiPM Noise with Baseline Correction
IEEE Transactions on Radiation and Plasma Medical Sciences (2025)
Physics Department (I3N)
Abstract
Time resolution in time-of-flight positron emission tomography (TOF-PET) has improved significantly over the last decade thanks to advances in scintillation materials, photodetectors, and readout electronics, increasing the signal-to-noise ratio (SNR) compared with conventional PET. Silicon photomultipliers (SiPMs) in TOF-PET detectors are often operated at high bias voltage to improve timing performance, at the expense of increased signal noise. SiPM noise, both correlated and uncorrelated, can cause baseline fluctuations that lead to time-walk effects when a leading-edge trigger strategy is used, thus limiting timing performance. We examined the effect of SiPM baseline fluctuations using the FastIC ASIC, a scalable multichannel readout for fast-timing applications. We flagged noisy events using a comparator signal triggered by dark counts arriving before the actual scintillation event. We tested different classification and correction methods with scintillating crystals and Cherenkov radiators coupled to analog SiPMs from Broadcom (NUV-MT) and Hamamatsu Photonics. By correcting the time walk of the noisy events, we reduced the coincidence time resolution (CTR) of 2 × 2 × 3 mm³ bismuth germanate (BGO) crystals from 410 ± 10 ps to 388 ± 10 ps FWHM (5%). By filtering out the noisy events, we measured an improvement from 107 ± 2 ps to 93.5 ± 0.6 ps (11%) for 2 × 2 × 3 mm³ LYSO crystals, and a 9% improvement for the EJ232 plastic scintillator, from 82.2 ± 0.5 ps to 75 ± 1 ps. This study presents a scalable method for flagging undesired events in a full TOF-PET system and discusses the impact of SiPM noise on the FastIC readout.
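To make the flagging strategy concrete, the snippet below is a minimal, hypothetical Python sketch of the analysis idea (not the authors' actual pipeline, which is not detailed in the abstract). It assumes that for each coincidence we know the time since the last comparator (dark-count) firing on each channel; events where a dark count fired within a short window before the trigger are flagged as noisy, since the baseline has not yet recovered. The two mitigation strategies from the abstract are then mimicked: filtering flagged events (as for LYSO and EJ232) or correcting their time walk (as for BGO). All names and numerical values (`pre_window`, `walk`, the jitter widths) are illustrative assumptions; the CTR is estimated as the FWHM of the coincidence time-difference distribution under a Gaussian approximation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Time since the last dark-count comparator firing before each
# event trigger, per channel (ns) -- illustrative synthetic data.
gap1 = rng.exponential(2_000.0, n)
gap2 = rng.exponential(2_000.0, n)

pre_window = 200.0            # ns; baseline assumed not yet recovered
noisy1 = gap1 < pre_window    # flag: dark count too close to trigger
noisy2 = gap2 < pre_window

# Leading-edge timestamps (ps): intrinsic jitter plus an early
# threshold crossing (time walk) when the baseline is still elevated.
walk = 60.0                   # ps, illustrative baseline-induced shift
t1 = rng.normal(0.0, 45.0, n) - walk * noisy1
t2 = rng.normal(0.0, 45.0, n) - walk * noisy2

def ctr_fwhm(dt):
    """CTR as the FWHM of the coincidence time-difference
    distribution, assuming it is roughly Gaussian."""
    return 2.355 * np.std(dt)

dt = t1 - t2
noisy = noisy1 | noisy2

# Strategy 1: filter flagged events outright (LYSO, EJ232 case).
print(f"all events : {ctr_fwhm(dt):6.1f} ps FWHM")
print(f"filtered   : {ctr_fwhm(dt[~noisy]):6.1f} ps FWHM")

# Strategy 2: correct the time walk of flagged events instead of
# discarding them (BGO case), which preserves sensitivity.
t1c = t1 + walk * noisy1
t2c = t2 + walk * noisy2
print(f"corrected  : {ctr_fwhm(t1c - t2c):6.1f} ps FWHM")
```

In this toy model the filtering strategy trades events for resolution, while the correction strategy keeps all events; a real implementation would calibrate the correction against the measured baseline shift rather than using a constant offset.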