Degradation Alchemy: Self-Supervised Unknown-to-Known Transformation for Blind Hyperspectral Image Fusion

arXiv · Image and Video Processing (2025)

Abstract
Hyperspectral image (HSI) fusion is an efficient technique that combines a low-resolution HSI (LR-HSI) and a high-resolution multispectral image (HR-MSI) to generate a high-resolution HSI (HR-HSI). Existing supervised learning methods (SLMs) can yield promising results when the degradation of the test data matches that of the training data, but they struggle to generalize to unknown degradations. To unleash the potential and generalization ability of SLMs, we propose a novel self-supervised unknown-to-known degradation transformation framework (U2K) for blind HSI fusion, which adaptively transforms unknown degradations into the type of degradation handled by pre-trained SLMs. Specifically, the proposed U2K framework consists of: (1) spatial and spectral Degradation Wrapping (DW) modules that map the HR-HSI to the unknown-degraded HR-MSI and LR-HSI, and (2) Degradation Transformation (DT) modules that convert these wrapped data into predefined degradation patterns. The transformed HR-MSI and LR-HSI pairs are then processed by a pre-trained network to reconstruct the target HR-HSI. We train the U2K framework in a self-supervised manner using a consistency loss and greedy alternating optimization, significantly improving the flexibility of blind HSI fusion. Extensive experiments confirm that the proposed U2K framework boosts the adaptability of five existing SLMs under various degradation settings and surpasses state-of-the-art blind methods.
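The abstract describes the U2K pipeline but gives no implementation details. Below is a minimal PyTorch sketch of the forward path under stated assumptions: a learnable depthwise blur plus downsampling stands in for the spatial DW module, a 1x1 convolution for the spectral DW module (a learned spectral response), small residual networks for the DT modules, and `fusion_net` is a placeholder for any pre-trained SLM. Every name, layer choice, and hyperparameter here is an illustrative assumption, not the authors' architecture.

```python
import torch
import torch.nn as nn

class SpatialDW(nn.Module):
    """Spatial Degradation Wrapping: HR-HSI -> wrapped LR-HSI.
    A depthwise conv acts as a learnable band-wise blur kernel,
    followed by strided downsampling (assumed degradation model)."""
    def __init__(self, bands: int, scale: int = 4):
        super().__init__()
        self.blur = nn.Conv2d(bands, bands, kernel_size=9, padding=4,
                              groups=bands, bias=False)
        self.scale = scale

    def forward(self, hr_hsi: torch.Tensor) -> torch.Tensor:
        return self.blur(hr_hsi)[..., ::self.scale, ::self.scale]

class SpectralDW(nn.Module):
    """Spectral Degradation Wrapping: HR-HSI -> wrapped HR-MSI via a
    learned spectral response (1x1 conv over the band dimension)."""
    def __init__(self, bands: int, msi_bands: int = 3):
        super().__init__()
        self.srf = nn.Conv2d(bands, msi_bands, kernel_size=1, bias=False)

    def forward(self, hr_hsi: torch.Tensor) -> torch.Tensor:
        return self.srf(hr_hsi)

class DT(nn.Module):
    """Degradation Transformation: a small residual network nudging an
    observation with unknown degradation toward the predefined
    degradation the pre-trained fusion model expects."""
    def __init__(self, channels: int, width: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.net(x)

class U2K(nn.Module):
    """Wraps a frozen, pre-trained supervised fusion network with DW/DT modules."""
    def __init__(self, bands: int, msi_bands: int, fusion_net: nn.Module):
        super().__init__()
        self.dw_spa = SpatialDW(bands)
        self.dw_spe = SpectralDW(bands, msi_bands)
        self.dt_spa = DT(bands)
        self.dt_spe = DT(msi_bands)
        self.fusion_net = fusion_net.eval()  # any pre-trained SLM; kept frozen
        for p in self.fusion_net.parameters():
            p.requires_grad_(False)

    def forward(self, lr_hsi: torch.Tensor, hr_msi: torch.Tensor) -> torch.Tensor:
        # Transform both observations toward the known degradation pattern,
        # then reconstruct HR-HSI with the frozen supervised model.
        return self.fusion_net(self.dt_spa(lr_hsi), self.dt_spe(hr_msi))
```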

【Key Points】: Proposes an adaptive unknown-to-known degradation transformation framework (U2K) for blind hyperspectral image fusion that requires no prior degradation information, enhancing the generalization ability of supervised learning methods.

【Method】: Spatial and spectral Degradation Wrapping (DW) modules map the high-resolution hyperspectral image to an unknown-degraded high-resolution multispectral image and low-resolution hyperspectral image; Degradation Transformation (DT) modules then convert these wrapped data into predefined degradation patterns, and a pre-trained network reconstructs the target high-resolution hyperspectral image from the transformed pair.

【Experiments】: The U2K framework is trained in a self-supervised manner with a consistency loss and greedy alternating optimization. Across multiple degradation settings, it is shown to improve the adaptability of five existing supervised learning methods and to outperform existing blind methods. The datasets used are not specified in the abstract.
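A correspondingly hedged sketch of this training procedure, building on the `U2K` sketch above: the consistency loss checks that re-wrapping the reconstruction with the DW modules reproduces the observed LR-HSI and HR-MSI, and the DW and DT parameters are updated on alternating steps as a stand-in for the paper's greedy alternating optimization. The optimizer, learning rate, loss weighting, and schedule are all assumptions; the abstract does not specify them.

```python
import torch
import torch.nn.functional as F

def train_u2k(model, lr_hsi: torch.Tensor, hr_msi: torch.Tensor,
              steps: int = 2000, lr: float = 1e-4) -> torch.Tensor:
    # Separate optimizers for the DW and DT parameter groups.
    opt_dw = torch.optim.Adam(
        list(model.dw_spa.parameters()) + list(model.dw_spe.parameters()), lr=lr)
    opt_dt = torch.optim.Adam(
        list(model.dt_spa.parameters()) + list(model.dt_spe.parameters()), lr=lr)

    for step in range(steps):
        hr_hsi = model(lr_hsi, hr_msi)  # reconstruct with the frozen SLM
        # Consistency loss: wrapping the reconstruction with the DW modules
        # should reproduce the actual observations.
        loss = (F.l1_loss(model.dw_spa(hr_hsi), lr_hsi)
                + F.l1_loss(model.dw_spe(hr_hsi), hr_msi))
        opt_dw.zero_grad(set_to_none=True)
        opt_dt.zero_grad(set_to_none=True)
        loss.backward()
        # Greedy alternation (assumed schedule): update DW and DT
        # parameter groups on alternate steps.
        (opt_dw if step % 2 == 0 else opt_dt).step()

    with torch.no_grad():
        return model(lr_hsi, hr_msi)  # final HR-HSI estimate
```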