Synthesis of Lithium Ion-Imprinted Polymers for Selective Recovery of Lithium Ions from Salt Lake Brines
Industrial & Engineering Chemistry Research (2024)
Beijing University of Chemical Technology
Abstract
In this study, a novel lithium ion-imprinted polymer [Li(I)-IIP] was synthesized through bulk polymerization using Li+ as the template, acrylic acid as the monomer, calix[4]arene as the ligand, ethylene glycol dimethacrylate as the cross-linker, and 2,2'-azobis(isobutyronitrile) as the initiator. The Li(I)-IIPs were characterized by infrared spectroscopy, solid-state 13C nuclear magnetic resonance, X-ray photoelectron spectroscopy, scanning electron microscopy, thermogravimetric analysis, and zeta potential measurements, and were used to selectively adsorb Li+ from aqueous solutions. The Li(I)-IIPs reached adsorption equilibrium within 25 min at pH 10, exhibiting a maximum adsorption capacity of 28.52 mg/g for Li+, whereas the non-imprinted polymers [Li(I)-NIPs] showed a maximum adsorption capacity of 22.82 mg/g. The adsorption isotherms and kinetics were modeled, and the adsorption process of the Li(I)-IIPs was found to follow the Freundlich isotherm model and the pseudo-second-order kinetic model. In addition, the adsorption selectivity for the binary mixtures Li+/Rb+, Li+/Na+, and Li+/K+ and the cyclic stability of the Li(I)-IIPs were investigated. The selectivity coefficients for Li+/Rb+, Li+/Na+, and Li+/K+ were 1.427, 2.476, and 1.582, respectively. After 10 adsorption-desorption cycles, the adsorption capacity of the polymers decreased by 12.14%. The excellent selective adsorption performance and recyclability of the Li(I)-IIPs make them promising for the separation of Li+ from salt lake brines.
Key words
Ion-imprinted polymer, Lithium recovery, Adsorption, Bulk polymerization
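The abstract names the Freundlich isotherm and pseudo-second-order kinetic models but gives no equations or fitting procedure. Below is a minimal Python sketch of how such fits are typically performed with SciPy, together with the distribution-coefficient definition of the selectivity coefficient commonly used in ion-imprinted polymer studies. All numerical data in the sketch are hypothetical placeholders for illustration, not values from the paper.

```python
# Minimal sketch: fitting the Freundlich isotherm and pseudo-second-order
# kinetic model for Li+ adsorption, plus a distribution-coefficient
# selectivity calculation. All data below are hypothetical, not from
# the paper.
import numpy as np
from scipy.optimize import curve_fit

def freundlich(Ce, KF, n):
    """Freundlich isotherm: qe = KF * Ce**(1/n)."""
    return KF * Ce ** (1.0 / n)

def pseudo_second_order(t, qe, k2):
    """Pseudo-second-order kinetics: qt = k2*qe^2*t / (1 + k2*qe*t)."""
    return (k2 * qe ** 2 * t) / (1.0 + k2 * qe * t)

def distribution_coefficient(C0, Ce, V, m):
    """Kd = (C0 - Ce)/Ce * V/m (L/g); selectivity k = Kd(Li+)/Kd(M+).
    Definition assumed from common IIP practice; the abstract does not
    state the paper's exact formula."""
    return (C0 - Ce) / Ce * V / m

# Hypothetical equilibrium data: Ce (mg/L) vs. qe (mg/g)
Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])
qe = np.array([6.1, 9.4, 13.8, 18.9, 24.2, 28.3])
(KF, n), _ = curve_fit(freundlich, Ce, qe, p0=[1.0, 2.0])
print(f"Freundlich: KF = {KF:.2f}, n = {n:.2f}")

# Hypothetical kinetic data: t (min) vs. qt (mg/g), plateau near 25 min
t = np.array([2.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
qt = np.array([9.5, 17.2, 23.1, 25.9, 27.3, 28.1, 28.3])
(qe_fit, k2), _ = curve_fit(pseudo_second_order, t, qt, p0=[28.0, 0.01])
print(f"PSO: qe = {qe_fit:.2f} mg/g, k2 = {k2:.4f} g/(mg*min)")

# Hypothetical binary Li+/Na+ run: 50 mL solution, 0.05 g sorbent
Kd_Li = distribution_coefficient(C0=50.0, Ce=32.0, V=0.05, m=0.05)
Kd_Na = distribution_coefficient(C0=50.0, Ce=41.0, V=0.05, m=0.05)
print(f"Selectivity k(Li+/Na+) = {Kd_Li / Kd_Na:.2f}")
```

The nonlinear forms are fitted directly here; linearized forms (e.g., log qe vs. log Ce for Freundlich, or t/qt vs. t for the pseudo-second-order model) are also common in the adsorption literature and give comparable parameters when the data are well behaved.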