
High Throughput Application of the NanoBiT Biochemical Assay for the Discovery of Selective Inhibitors of the Interaction of PI3K-p110α with KRAS

SLAS Discovery (2024)

Francis Crick Institute

Abstract
The NanoBiT Biochemical Assay (NBBA) was designed as a biochemical format of the NanoBiT cellular assay, aiming to screen weak protein-protein interactions (PPIs) in mammalian cell lysates. Here we present a High Throughput Screening (HTS) application of the NBBA to screen small molecule and fragment libraries to identify compounds that block the interaction of KRAS-G12D with phosphatidylinositol 3-kinase (PI3K) p110α. This interaction stimulates PI3K activity, promoting cell growth, proliferation and survival, and is required for tumour initiation and growth in mouse lung cancer models, whilst having little effect on the health of normal adult mice, establishing the significance of the p110α/KRAS interaction as an oncology drug target. Despite the weak binding affinity of the p110α/KRAS interaction (KD = 3 μM), the NBBA proved robust and displayed excellent Z'-factor statistics during the HTS primary screening of 726,000 compounds, which led to the identification of 8,000 active compounds. A concentration response screen comparing KRAS/p110α with two closely related PI3K isoforms, p110δ and p110γ, identified selective p110α-specific compounds and enabled derivation of an IC50 for these hits. We identified around 30 compounds showing greater than 20-fold selectivity towards p110α versus p110δ and p110γ with IC50 < 2 μM. Using Differential Scanning Fluorimetry (DSF), we confirmed several compounds that bind directly to purified p110α. The most potent hits will be followed up by co-crystallisation with p110α to aid further elucidation of the nature of the interaction and extended optimisation of these compounds.
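The Z'-factor statistic cited above is the standard HTS assay-quality metric of Zhang et al. (1999), computed from positive and negative control wells. As a minimal sketch (the control readings below are hypothetical illustrative values, not data from this screen):

```python
import statistics

def z_prime(pos, neg):
    """Z'-factor assay-quality metric (Zhang et al., 1999):
    Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|.
    Z' > 0.5 is conventionally taken as an excellent HTS assay."""
    sd_p, sd_n = statistics.stdev(pos), statistics.stdev(neg)
    mu_p, mu_n = statistics.mean(pos), statistics.mean(neg)
    return 1 - 3 * (sd_p + sd_n) / abs(mu_p - mu_n)

# Hypothetical luminescence readings for control wells
positive = [100.0, 98.5, 101.2, 99.8, 100.5]
negative = [10.0, 11.2, 9.5, 10.8, 10.3]
print(round(z_prime(positive, negative), 3))  # → 0.944
```

The metric penalises both the spread of each control population and any shrinkage of the window between them, which is why it is a stricter quality measure than signal-to-background alone.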
Key words
PI 3-kinase, KRAS, NanoBiT assay, Drug discovery, Assay development, High throughput screening