Association and Path Coefficient Analysis among Grain Yield and Related Traits in Kharif Maize (Zea mays L.)
Deleted Journal (2021)
Abstract
Thirteen inbred lines, three testers, thirty-nine hybrids, and two checks were evaluated in a randomized block design (RBD) with three replications at the Irrigation Research Station Farm, Araria, Bihar, during kharif 2020. The goal was to assess the direct and indirect effects of component characters on grain yield in maize and to establish the phenotypic and genotypic associations among traits. Character association studies help quantify the link between yield and its components and thereby improve the efficiency of selection. Accordingly, the present study analyzed the correlation coefficients and path coefficients of twelve quantitative traits among 39 F1s, 13 inbred lines, three testers, and two checks of maize. Correlation studies indicated that plant height (cm), ear height (cm), ear length (cm), ear diameter (cm), 1000-kernel weight, kernel rows per ear, and number of kernels per row showed significant positive associations with grain yield (kg/ha), as well as among themselves, at both the phenotypic and genotypic levels. Consequently, selecting for any one of these characters would improve the other characters and increase grain yield (kg/ha). Path coefficient analysis revealed that the highest positive direct effects on grain yield were exhibited by ear length, ear diameter, kernel rows per ear, kernels per row, 1000-kernel weight, ear height, and days to 50% silking. The present study could therefore aid in the reliable selection of parental lines based on these traits and in the development of high-yielding varieties for future breeding programs.
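Path coefficient analysis partitions each trait's correlation with yield into a direct effect and indirect effects routed through the other traits, by solving the trait intercorrelation matrix against the trait-yield correlations. The following is a minimal sketch of that computation on synthetic data; the trait names, sample values, and coefficients are illustrative assumptions, not the measurements reported in this study.

```python
import numpy as np

# Synthetic stand-in for the trial data (hypothetical values, not the
# study's measurements); 39 genotypes mirrors the 39 F1 hybrids.
rng = np.random.default_rng(42)
n_genotypes = 39
traits = ["ear_length", "ear_diameter", "kernels_per_row"]

X = rng.normal(size=(n_genotypes, len(traits)))          # predictor traits
y = X @ np.array([0.6, 0.3, 0.2]) + rng.normal(scale=0.5, size=n_genotypes)

R = np.corrcoef(X, rowvar=False)                         # trait intercorrelations
r_y = np.array([np.corrcoef(X[:, i], y)[0, 1] for i in range(len(traits))])

# Direct effects (path coefficients) P satisfy R @ P = r_y
P = np.linalg.solve(R, r_y)

# Indirect effect of trait i via trait j is r_ij * P_j (diagonal zeroed,
# since the diagonal terms are the direct effects themselves)
indirect = R * P
np.fill_diagonal(indirect, 0.0)

# Residual effect: variation in yield unexplained by the measured traits
residual = np.sqrt(1.0 - P @ r_y)

for name, d in zip(traits, P):
    print(f"{name}: direct effect {d:+.3f}")
print(f"residual effect: {residual:.3f}")
```

Each row of the decomposition reconstructs the total correlation: r_iy equals the direct effect P_i plus the sum of the indirect effects through the remaining traits, which is why the direct effects are obtained by a single linear solve.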