
Enhancing Rice Breeding Through Two-Line Hybrids: Integrative Analysis of Combining Ability, Heterosis, MGIDI, and Grain Quality Traits

Euphytica (2025)

Tamil Nadu Agricultural University

Abstract
Two-line hybrid rice breeding using thermo-sensitive genic male sterile (TGMS) lines offers significant advantages over the traditional three-line system by overcoming heterosis limitations and improving seed reproducibility. This study analyzed the genetic parameters and gene actions for 15 yield and grain quality traits in 135 two-line hybrid combinations produced by crossing five TGMS lines with 27 paternal lines in a line × tester design. Superior hybrids were further assessed for 14 grain quality traits to determine their potential. The results indicated that these traits are controlled by both additive and non-additive gene actions, with additive variance making the largest contribution to the genotypic variance. General combining ability (GCA) analysis identified the best parental combiners, while specific combining ability (SCA) analysis highlighted high-yielding hybrids. Dominance variance was greater than additive variance, indicating the influence of non-additive gene effects on trait inheritance. MGIDI-based selection identified top-performing hybrids such as TNAU 45S × PMK-3, TNAU 45S × APO, and TNAU 45S × I 127, which exhibited excellent SCA effects, high heterosis, and superior grain yield, quality, and early maturity. Hybrids TNAU 60S × BPT 3034 and TNAU 116S × W225 showed significant heterosis for grain yield, productive tillers, and spikelet fertility. Additionally, TNAU 60S × Wayrarem achieved the highest head rice recovery rate (72.99%).
Key words
Hybrid rice, Combining ability, GCA, SCA, MGIDI, Two-line hybrid, Heterosis
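The line × tester combining-ability decomposition named in the abstract can be sketched numerically: a line's GCA effect is the deviation of its cross mean from the grand mean, and a cross's SCA effect is its deviation from the additive expectation built from both parents' GCA effects. The matrix of cross means below is hypothetical illustration only, not data from this paper.

```python
# Minimal sketch of line x tester GCA/SCA effect estimation.
# The 3x3 matrix of cross means is made up for illustration;
# the paper's actual design used 5 TGMS lines x 27 testers.
import numpy as np

def line_x_tester(cross_means: np.ndarray):
    """Return (gca_lines, gca_testers, sca) from a lines-by-testers
    matrix of cross means (e.g. mean grain yield per cross)."""
    mu = cross_means.mean()                   # grand mean
    gca_l = cross_means.mean(axis=1) - mu     # line (female) GCA effects
    gca_t = cross_means.mean(axis=0) - mu     # tester (male) GCA effects
    # SCA: deviation of each cross from its additive expectation
    sca = cross_means - mu - gca_l[:, None] - gca_t[None, :]
    return gca_l, gca_t, sca

# hypothetical grain-yield means (g/plant) for 3 TGMS lines x 3 testers
x = np.array([[42.0, 45.5, 40.2],
              [38.7, 44.1, 39.9],
              [41.3, 47.0, 43.8]])
gca_l, gca_t, sca = line_x_tester(x)
```

By construction the GCA effects sum to zero across lines and across testers, and the SCA matrix sums to zero along every row and column, which is a quick sanity check on any implementation.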