
Molecular Detection and Genotyping of Three Intestinal Protozoa in Captive Animals at Nanchang Zoo

LIANG Weijia,CHEN Xiaoqing

Biological Disaster Science(2023)

Jiangxi Agricultural University

Abstract
[Objective] Cryptosporidium spp., Enterocytozoon bieneusi and Blastocystis sp. are three protozoan parasites that commonly colonize the intestines of humans, domestic animals and wildlife, and are major causes of diarrhea in humans and animals. This study aimed to investigate infection with these three protozoa in selected captive animals at Nanchang Zoo and to identify their genotypes, providing a reference for formulating measures to prevent and control zoonotic transmission in zoos. [Methods] A total of 42 fecal samples were collected from captive animals at Nanchang Zoo, including non-human primates, artiodactyls, perissodactyls, elephants, marsupials, birds and carnivores, and total fecal DNA was extracted. PCR was used to amplify the Cryptosporidium small-subunit ribosomal RNA (SSU rRNA) gene, the E. bieneusi ribosomal RNA internal transcribed spacer (ITS) and the Blastocystis SSU rRNA gene; positive products were sequenced, aligned and subjected to phylogenetic analysis to determine the infection status and genotypes of the three intestinal protozoa. [Results] Of the 42 fecal samples, 17 were positive for at least one protozoan, giving an overall detection rate of 40.5% (17/42); the infection rates of Cryptosporidium, E. bieneusi and Blastocystis were 2.4% (1/42), 14.3% (6/42) and 23.8% (10/42), respectively. The positive samples came from non-human primates (64.7%, 11/17), artiodactyls (23.5%, 4/17), perissodactyls (5.9%, 1/17) and marsupials (5.9%, 1/17), and mixed infection with E. bieneusi and Blastocystis was found in non-human primates. Sequence alignment and phylogenetic analysis identified one Cryptosporidium genotype (Apodemus genotype II) in a miniature horse (perissodactyl), and two E. bieneusi genotypes (D and ALP1) in gibbons, golden snub-nosed monkeys, De Brazza's monkeys, red-tailed monkeys and alpacas. Five Blastocystis genotypes (ST1, ST2, ST5, ST13 and ST14) were identified in squirrel monkeys, black-and-white colobuses, De Brazza's monkeys, François' langurs, golden snub-nosed monkeys, antelopes and kangaroos; of these, Blastocystis ST1 and ST5 and E. bieneusi D and ALP1 are zoonotic genotypes. [Conclusion] Captive animals at Nanchang Zoo are infected with E. bieneusi, Blastocystis and Cryptosporidium, and zoonotic genotypes were identified in E. bieneusi and Blastocystis, indicating a potential risk of zoonotic transmission. The zoo should further strengthen surveillance of intestinal protozoan parasites in captive animals and implement deworming, disinfection and other measures to prevent transmission between humans and animals.
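The genotyping step described in the Methods (sequencing PCR-positive products, then comparing them with reference sequences) can be illustrated with a minimal bioinformatics sketch. The sketch below is not the authors' pipeline; it assumes hypothetical input files samples.fasta (sequenced ITS amplicons) and references.fasta (reference sequences labeled by E. bieneusi genotype, e.g. D and ALP1), and uses Biopython's PairwiseAligner to report the closest reference for each sample. Actual genotype assignment would rely on sequence alignment plus phylogenetic analysis as stated in the abstract.

```python
# Hedged sketch only: approximate genotype assignment by pairwise identity.
# File names and reference labels are hypothetical, not taken from the paper.
from Bio import SeqIO
from Bio.Align import PairwiseAligner

aligner = PairwiseAligner()
aligner.mode = "global"
aligner.match_score = 1.0       # score 1 per identical base
aligner.mismatch_score = 0.0
aligner.open_gap_score = -1.0
aligner.extend_gap_score = -0.5

# Reference ITS sequences, one per known genotype (e.g. "D", "ALP1")
references = {rec.id: rec.seq for rec in SeqIO.parse("references.fasta", "fasta")}

# Sequenced PCR-positive amplicons from the fecal samples
for sample in SeqIO.parse("samples.fasta", "fasta"):
    best_name, best_score = None, float("-inf")
    for name, ref_seq in references.items():
        score = aligner.score(sample.seq, ref_seq)   # alignment score only
        if score > best_score:
            best_name, best_score = name, score
    identity = best_score / max(len(sample.seq), 1)  # rough identity proxy
    print(f"{sample.id}: closest reference genotype {best_name} "
          f"(~{identity:.1%} identity)")
```

A pairwise-identity ranking like this only flags the nearest named reference; confirming a genotype (especially a novel one such as ST13 or ST14) still requires phylogenetic placement against a broader set of published sequences.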
Keywords
captive animal, Enterocytozoon bieneusi, Blastocystis sp., Cryptosporidium spp., molecular detection, genotype