Latest MAGIC discoveries pushing redshift boundaries in VHE Astrophysics
Semantic Scholar (2016)
Abstract
The search for γ-rays from distant AGNs with Imaging Atmospheric Cherenkov Telescopes (IACTs) is challenging at high redshifts, not only because of the lower flux caused by the distance of the source, but also because of the absorption of γ-rays by the extragalactic background light (EBL). Before the MAGIC discoveries reported in this work, the farthest source ever detected in the VHE domain was the blazar PKS 1424+240, at z > 0.6. MAGIC, a system of two 17 m diameter IACTs located on the Canary island of La Palma, has been able to go beyond that limit and push the boundary for VHE detection to redshifts z ∼ 1. The two sources detected and analyzed, the blazar QSO B0218+357 and the FSRQ PKS 1441+25, are located at redshifts z = 0.944 and z = 0.939, respectively. QSO B0218+357 is also the first gravitationally lensed blazar ever detected at VHE. The activity, triggered by Fermi-LAT in high-energy γ-rays, was followed up by other instruments, such as the KVA telescope in the optical band and Swift-XRT in X-rays. In the present work we show the results of the MAGIC analyses of QSO B0218+357 and PKS 1441+25, together with multiwavelength light curves. The collected dataset allowed us to test, for the first time, the present generation of EBL models at such distances.

Published in the proceedings of the XIV International Conference on Topics in Astroparticle and Underground Physics (TAUP 2015), Journal of Physics: Conference Series 718 (2016) 052022, doi:10.1088/1742-6596/718/5/052022.
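As an illustrative aside (not stated in the abstract itself), the EBL absorption referred to above is conventionally written as an exponential suppression of the intrinsic source flux, governed by an optical depth that grows with both the γ-ray energy E and the source redshift z; testing the τ(E, z) predicted by current EBL models at z ∼ 0.94 is what the dataset described here enables:

F_\mathrm{obs}(E) = F_\mathrm{int}(E)\, e^{-\tau_\mathrm{EBL}(E,\, z)}

where F_\mathrm{int} is the intrinsic (emitted) spectrum, F_\mathrm{obs} the spectrum measured at Earth, and \tau_\mathrm{EBL}(E, z) the model-dependent γ-γ pair-production optical depth on the EBL photon field.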