Trait-customized Sampling of Core Collections from a Winter Wheat Genebank Collection Supports Association Studies

Frontiers in Plant Science (2024)

Leibniz Institute of Plant Genetics and Crop Plant Research (IPK) Gatersleben

Abstract
Subsampling a reduced number of accessions from ex situ genebank collections, known as core collections, is a widely applied method for investigating stored genetic diversity and for exploitation in breeding and research. Optimizing core collections for genome-wide association studies could potentially maximize opportunities to discover relevant and rare variation. In the present study, eight strategies to sample core collections were implemented separately for two traits, namely susceptibility to yellow rust and stem lodging, on about 6,300 accessions of winter wheat (Triticum aestivum L.). Each strategy maximized different parameters or emphasized another aspect of the collection; the strategies relied on genomic data, phenotypic data or a combination thereof. The resulting trait-customized core collections of eight different sizes, covering the range between 100 and 800 accession samples, were analyzed based on characteristics such as population stratification, number of duplicate genotypes and genetic diversity. Furthermore, the statistical power for an association study was investigated as a key criterion for comparisons. While sampling extreme phenotypes boosts power, especially for smaller core collections of up to 500 accession samples, maximizing genetic diversity within the core collection minimizes population stratification and avoids the accumulation of less informative duplicate genotypes as the size of a core collection increases. Advantages and limitations of different strategies to create trait-customized core collections are discussed for different scenarios of resource and data availability.
Key words
core collections, genebank genomics, association study, plant genetic resources, wheat
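The abstract contrasts two sampling ideas: selecting accessions with extreme phenotypes versus selecting accessions that maximize genetic diversity. The following Python sketch is a minimal illustration of both on toy data; it is not the authors' implementation, and the function names, the greedy maximin heuristic, and the simulated marker matrix are all assumptions made for demonstration.

```python
import numpy as np

def extreme_phenotype_core(phenotypes, size):
    """Hypothetical extreme-phenotype sampling: take roughly half the
    core from each tail of the trait distribution."""
    order = np.argsort(phenotypes)
    half = size // 2
    return np.concatenate([order[:half], order[-(size - half):]])

def max_diversity_core(genotypes, size):
    """Hypothetical diversity-maximizing sampling (greedy maximin):
    repeatedly add the accession whose Hamming distance to its
    nearest already-selected accession is largest."""
    selected = [0]
    # distance of every accession to the nearest selected accession
    dist = (genotypes != genotypes[0]).sum(axis=1).astype(float)
    while len(selected) < size:
        nxt = int(np.argmax(dist))
        selected.append(nxt)
        dist = np.minimum(dist, (genotypes != genotypes[nxt]).sum(axis=1))
    return np.array(selected)

rng = np.random.default_rng(0)
pheno = rng.normal(size=100)            # e.g. simulated yellow-rust scores
geno = rng.integers(0, 2, (100, 500))   # simulated biallelic marker matrix

core_p = extreme_phenotype_core(pheno, 10)
core_g = max_diversity_core(geno, 10)
```

The first strategy concentrates statistical power on the trait of interest, while the second spreads the core across the genetic space, which (as the abstract notes) reduces population stratification and duplicate genotypes as the core grows.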