Spatial and Temporal Dynamics of Microbes and Genes in Drinking Water Reservoirs: Distribution and Potential for Taste and Odor Generation

Journal of Hazardous Materials (2024)

Abstract
Numerous reservoirs encounter taste and odor problems, often attributed to odorous compounds such as geosmin (GSM) and 2-methylisoborneol (2-MIB). In this study, two large reservoirs located in northern and southern China were investigated. The Jinpen (JP) reservoir community comprised 45.99% Actinomycetes and 14.82% Cyanobacteria, while the Xikeng (XK) reservoir contained 37.55% Actinomycetes and 48.27% Cyanobacteria. Most of the 2-MIB produced in the surface layers of the two reservoirs in summer originated from Cyanobacteria, whereas most of the 2-MIB produced in winter and in the bottom water originated from Actinomycetes. The mic gene abundance in the XK reservoir reached 5.42×10⁴ copies/L in winter. The abundance of GSM synthase was notably high in the bottom layer and sediment of both reservoirs, while 2-MIB synthase was abundant in the surface layer of the XK reservoir, echoing the patterns observed in mic gene abundance. The abundance of odor-producing enzymes in both reservoirs was inhibited by total nitrogen; temperature significantly influenced Actinomycetes abundance in the JP reservoir, whereas dissolved oxygen had a greater impact in the XK reservoir. Overall, this study elucidates the molecular mechanisms underlying odor compound production, providing essential guidance for water quality management strategies and the improvement of urban drinking water reservoir quality.
Key words
Drinking water safety, Taste and odor, Actinobacteria, Microbial community structure
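The community percentages quoted in the abstract (e.g. 45.99% Actinomycetes in the JP reservoir) are relative abundances. A minimal sketch of how such shares are typically computed from sequencing read counts; the counts below are hypothetical, not the study's data:

```python
def relative_abundance(counts):
    """Return each taxon's share of total reads, as a percentage."""
    total = sum(counts.values())
    return {taxon: round(100 * n / total, 2) for taxon, n in counts.items()}

# Hypothetical read counts for one sample (chosen to mirror the JP figures)
sample = {"Actinomycetes": 4599, "Cyanobacteria": 1482, "Other": 3919}
print(relative_abundance(sample))
```

Shares always sum to ~100%, so a rise in one taxon's percentage can reflect either its growth or another taxon's decline, which is why the study pairs these figures with absolute gene abundances (copies/L).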