EPIGENETIC DYSFUNCTION OF TERMINALLY EXHAUSTED TUMOR INFILTRATING T CELLS
Abstract
Background: Diacylglycerol kinase ζ (DGKζ) deficiency facilitates tumor rejection in mice without apparent adverse autoimmune effects. Despite this therapeutic potential, little is known about DGKζ function in human T cells, and no isoform-specific inhibitors targeting this DGK isoform are available.
Methods: We used a human triple parameter reporter (TPR) cell line to examine the consequences of DGKζ depletion on the transcriptional restriction imposed by PD-1 ligation. We also investigated the effect of DGKζ deficiency on the expression dynamics of PD-1, as well as the impact of the absence of this DGK isoform on the in vivo growth of an MC38 adenocarcinoma cell line.
Results: We demonstrate that DGKζ depletion enhances DAG-regulated transcriptional programs, favoring IL-2 production and limiting PD-1 expression. Diminished PD-1 expression and enhanced expansion of cytotoxic CD8+ T cell populations are also observed even in immunosuppressive milieus, and correlate with the failure of MC38 adenocarcinoma cells to form tumors in DGKζ-deficient mice.
Conclusions: Our results support the relevance of DGKζ as a therapeutic target in its own right, as well as a biomarker of CD8+ T cell dysfunctional states.