Infrared and visible light image fusion papers and code compilation


News

[2022-07-29] Our review paper "A Review of Image Fusion Methods Based on Deep Learning" was officially accepted by the Chinese Journal of Image and Graphics! [Paper download]

This post builds on the reprinted article Summary of image fusion papers and code URLs (2) - Infrared and visible light image fusion and compiles existing infrared and visible image fusion algorithms (papers and code). I hope it makes it more convenient for myself and others to look up the code for papers in this field. Note that there are many papers in this area and this post covers only a portion of them; given the author's limited expertise, the papers themselves are not interpreted here.
Author's QQ: 2458707789. When sending a friend request, please note your name + school for easy identification.

The author has also introduced an image fusion framework driven by high-level vision tasks; for details, see: SeAFusion: The first image fusion framework that combines high-level vision tasks.
The author also has blog posts in the same series:

  1. For the most comprehensive collection of image fusion papers and code, see: The most comprehensive collection of image fusion papers and codes
  2. For image fusion review papers, see: Collection of image fusion review papers
  3. For image fusion evaluation metrics, see: Infrared and visible light image fusion evaluation indicators
  4. For commonly used image fusion datasets, see: Organization of commonly used data sets for image fusion
  5. For general image fusion framework papers and code, see: General image fusion framework papers and code arrangement
  6. For deep-learning-based infrared and visible image fusion papers and code, see: Infrared and visible light image fusion papers and code collection based on deep learning
  7. For more detailed infrared and visible image fusion code, see: Infrared and visible light image fusion papers and code collection
  8. For deep-learning-based multi-exposure image fusion papers and code, see: Multi-exposure image fusion papers and code compilation based on deep learning
  9. For deep-learning-based multi-focus image fusion papers and code, see: Multi-focus image fusion papers and code collection based on deep learning
  10. For deep-learning-based pansharpening papers and code, see: Pan-color image sharpening papers and codes based on deep learning (Pansharpening)
  11. For deep-learning-based medical image fusion papers and code, see: Medical image fusion papers and code compilation based on deep learning
  12. For color image fusion, see: Color Image Fusion
  13. For SeAFusion, the first image fusion framework that combines high-level vision tasks, see: SeAFusion: The first image fusion framework that combines high-level vision tasks

 

Reprinted blog posts from the same series:

Summary of image fusion papers and code URLs (1) - multi-focus image fusion

Summary of image fusion papers and code URLs (2) - Infrared and visible light image fusion

Summary of image fusion papers and code URLs (3) - image fusion algorithms whose fusion type is not specified in the title

Image fusion data set, image fusion database

 


 

【2022】

Article: Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network [Deep Learning] [Driven by High-Level Vision Tasks]

Cite as: Tang, Linfeng, Jiteng Yuan, and Jiayi Ma. “Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network.” Information Fusion 82 (2022): 28-42.

Paper: Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network

Code: https://github.com/Linfeng-Tang/SeAFusion

Interpretation: SeAFusion: The first image fusion framework combining high-level vision tasks

Article: SwinFusion: Cross-domain Long-range Learning for General Image Fusion via Swin Transformer [Transformer] [General Image Fusion Framework]

Cite as: Jiayi Ma, Linfeng Tang, Fan Fan, Jun Huang, Xiaoguang Mei, and Yong Ma. “SwinFusion: Cross-domain Long-range Learning for General Image Fusion via Swin Transformer”, IEEE/CAA Journal of Automatica Sinica, 9 (7), pp. 1200-1217, Jul. 2022.

Paper: SwinFusion: Cross-domain Long-range Learning for General Image Fusion via Swin Transformer

Code:https://github.com/Linfeng-Tang/SwinFusion

Article: PIAFusion: A progressive infrared and visible image fusion network based on illumination aware [deep learning]

Cite as: Tang, Linfeng, Jiteng Yuan, Hao Zhang, Xingyu Jiang, and Jiayi Ma. “PIAFusion: A progressive infrared and visible image fusion network based on illumination aware.” Information Fusion 83 (2022): 79-92.

Paper: PIAFusion: A progressive infrared and visible image fusion network based on illumination aware

Code: https://github.com/Linfeng-Tang/PIAFusion

Article: Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection [Deep Learning] [Driven by High-Level Vision Tasks]

Cite as: Liu, Jinyuan, Xin Fan, Zhanbo Huang, Guanyao Wu, Risheng Liu, Wei Zhong, and Zhongxuan Luo. “Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection.” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5802-5811. 2022.

Paper: Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection

Code:https://github.com/JinyuanLiu-CV/TarDAL

Article: Unsupervised Misaligned Infrared and Visible Image Fusion via Cross-Modality Image Generation and Registration [Deep Learning] [Registration Fusion]

Cite as: Wang, Di, Jinyuan Liu, Xin Fan, and Risheng Liu. “Unsupervised Misaligned Infrared and Visible Image Fusion via Cross-Modality Image Generation and Registration.” arXiv preprint arXiv:2205.11876 (2022).

Paper: Unsupervised Misaligned Infrared and Visible Image Fusion via Cross-Modality Image Generation and Registration

Code: https://github.com/wdhudiekou/UMF-CMGR

【2021】

Article: STDFusionNet: An Infrared and Visible Image Fusion Network Based on Salient Target Detection [Deep Learning] [Salient Target Mask]

Cite as: J. Ma, L. Tang, M. Xu, H. Zhang and G. Xiao, “STDFusionNet: An Infrared and Visible Image Fusion Network Based on Salient Target Detection,” in IEEE Transactions on Instrumentation and Measurement, 2021, 70:1-13.

Paper: STDFusionNet: An Infrared and Visible Image Fusion Network Based on Salient Target Detection

Code: https://github.com/Linfeng-Tang/STDFusionNet

Article: SDNet: A Versatile Squeeze-and-Decomposition Network for Real-Time Image Fusion [Deep Learning] [Universal Image Fusion Framework]

Cite as: Zhang, H. and Ma, J., 2021. SDNet: A versatile squeeze-and-decomposition network for real-time image fusion. International Journal of Computer Vision, 129(10), pp.2761-2785.

Paper: SDNet: A versatile squeeze-and-decomposition network for real-time image fusion

Code:https://github.com/HaoZhang1018/SDNet

Article: Image fusion meets deep learning: A survey and perspective [Deep Learning] [Review]

Cite as:Zhang, Hao, Han Xu, Xin Tian, Junjun Jiang, and Jiayi Ma. “Image fusion meets deep learning: A survey and perspective.” Information Fusion 76 (2021): 323-336.

Paper: Image fusion meets deep learning: A survey and perspective

Article: Classification Saliency-Based Rule for Visible and Infrared Image Fusion [Deep Learning] [Learnable Fusion Rules]

Cite as: Xu, Han, Hao Zhang, and Jiayi Ma. “Classification saliency-based rule for visible and infrared image fusion.” IEEE Transactions on Computational Imaging 7 (2021): 824-836.

Paper: Classification Saliency-Based Rule for Visible and Infrared Image Fusion

Code: https://github.com/hanna-xu/CSF

Article: GANMcC: A Generative Adversarial Network with Multiclassification Constraints for Infrared and Visible Image Fusion [Deep Learning] [GAN]

Cite as: Ma, Jiayi, Hao Zhang, Zhenfeng Shao, Pengwei Liang, and Han Xu. “GANMcC: A generative adversarial network with multiclassification constraints for infrared and visible image fusion.” IEEE Transactions on Instrumentation and Measurement 70 (2020): 1-14.

Paper: GANMcC: A Generative Adversarial Network with Multiclassification Constraints for Infrared and Visible Image Fusion

Code: https://github.com/HaoZhang1018/GANMcC

Article: A Bilevel Integrated Model with Data-Driven Layer Ensemble for Multi-Modality Image Fusion [Deep Learning]

Cite as: Liu, Risheng, Jinyuan Liu, Zhiying Jiang, Xin Fan, and Zhongxuan Luo. “A bilevel integrated model with data-driven layer ensemble for multi-modality image fusion.” IEEE Transactions on Image Processing 30 (2020): 1261 -1274.

Paper: A Bilevel Integrated Model with Data-Driven Layer Ensemble for Multi-Modality Image Fusion

Article: RFN-Nest: An end-to-end residual fusion network for infrared and visible images [Deep Learning] [Multi-scale]

Cite as: Li, Hui, Xiao-Jun Wu, and Josef Kittler. “RFN-Nest: An end-to-end residual fusion network for infrared and visible images.” Information Fusion 73 (2021): 72-86.

Paper: RFN-Nest: An end-to-end residual fusion network for infrared and visible images

Code: https://github.com/hli1221/imagefusion-rfn-nest

Article: DRF: Disentangled Representation for Visible and Infrared Image Fusion [Deep Learning] [Disentangled Learning]

Cite as: Xu, Han, Xinya Wang, and Jiayi Ma. “DRF: Disentangled representation for visible and infrared image fusion.” IEEE Transactions on Instrumentation and Measurement 70 (2021): 1-13.

Paper: DRF: Disentangled Representation for Visible and Infrared Image Fusion

Code: https://github.com/hanna-xu/DRF

Article: RXDNFuse: A aggregated residual dense network for infrared and visible image fusion [Deep Learning]

Cite as: Long, Yongzhi, Haitao Jia, Yida Zhong, Yadong Jiang, and Yuming Jia. “RXDNFuse: a aggregated residual dense network for infrared and visible image fusion.” Information Fusion 69 (2021): 128-141.

Paper: RXDNFuse: a aggregated residual dense network for infrared and visible image fusion

Article: Different Input Resolutions and Arbitrary Output Resolution: A Meta Learning-Based Deep Framework for Infrared and Visible Image Fusion [Deep Learning] [Meta-Learning]

Cite as: Li, Huafeng, Yueliang Cen, Yu Liu, Xun Chen, and Zhengtao Yu. “Different Input Resolutions and Arbitrary Output Resolution: A Meta Learning-Based Deep Framework for Infrared and Visible Image Fusion.” IEEE Transactions on Image Processing 30 (2021): 4070-4083.

Paper: Different Input Resolutions and Arbitrary Output Resolution: A Meta Learning-Based Deep Framework for Infrared and Visible Image Fusion

Article: Infrared and Visible Image Fusion Based on Deep Decomposition Network and Saliency Analysis [Deep Learning] [Deep Decomposition Network]

Cite as: Jian, L., Rayhana, R., Ma, L., Wu, S., Liu, Z. and Jiang, H., 2021. Infrared and Visible Image Fusion Based on Deep Decomposition Network and Saliency Analysis. IEEE Transactions on Multimedia.

Paper: Infrared and Visible Image Fusion Based on Deep Decomposition Network and Saliency Analysis

【2020】

1. Article: U2Fusion: A Unified Unsupervised Image Fusion Network [Deep Learning] [General Image Fusion]

Cite as:Xu H, Ma J, Jiang J, et al. U2Fusion: A Unified Unsupervised Image Fusion Network[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.

Paper: U2Fusion: A Unified Unsupervised Image Fusion Network

Code:https://github.com/jiayi-ma/U2Fusion

2. Article: Deep Convolutional Neural Network for Multi-modal Image Restoration and Fusion [Deep Learning] [Image Decomposition]

Cite as: Deng X, Dragotti P L. Deep convolutional neural network for multi-modal image restoration and fusion[J]. IEEE transactions on pattern analysis and machine intelligence, 2020.

Paper:Deep convolutional neural network for multi-modal image restoration and fusion

Code:

3. Article: FusionDN: A Unified Densely Connected Network for Image Fusion [Deep Learning] [General Image Fusion]

Cite as:Xu H, Ma J, Le Z, et al. FusionDN: A Unified Densely Connected Network for Image Fusion[C]//AAAI. 2020: 12484-12491.

Paper: FusionDN: A Unified Densely Connected Network for Image Fusion

Code:https://github.com/jiayi-ma/FusionDN

4. Article: DDcGAN: A dual-discriminator conditional generative adversarial network for multi-resolution image fusion [Deep Learning]

Cite as:Ma J, Xu H, Jiang J, et al. DDcGAN: A dual-discriminator conditional generative adversarial network for multi-resolution image fusion[J]. IEEE Transactions on Image Processing, 2020, 29: 4980-4995.

Paper:DDcGAN: A dual-discriminator conditional generative adversarial network for multi-resolution image fusion

Code:https://github.com/jiayi-ma/DDcGAN

5. Article: Rethinking the Image Fusion: A Fast Unified Image Fusion Network based on Proportional Maintenance of Gradient and Intensity [Deep Learning] [General Image Fusion]

Cite as:Zhang H, Xu H, Xiao Y, et al. Rethinking the image fusion: A fast unified image fusion network based on proportional maintenance of gradient and intensity[C]//Proc. AAAI Conf. Artif. Intell. 2020.

Paper:Rethinking the image fusion: A fast unified image fusion network based on proportional maintenance of gradient and intensity(PMGI)

Code: https://github.com/HaoZhang1018/PMGI (AAAI 2020)

6. Article: Infrared and visible image fusion based on target-enhanced multiscale transform decomposition [Multi-scale Decomposition]

Cite as:Chen J, Li X, Luo L, et al. Infrared and visible image fusion based on target-enhanced multiscale transform decomposition[J]. Information Sciences, 2020, 508: 64-78.

Paper:Infrared and visible image fusion based on target-enhanced multiscale transform decomposition

Code:https://github.com/jiayi-ma/TE-MST

7. Article: AttentionFGAN: Infrared and Visible Image Fusion using Attention-based Generative Adversarial Networks [Deep Learning]

Cite as: Li J, Huo H, Li C, et al. AttentionFGAN: Infrared and visible image fusion using attention-based generative adversarial networks[J]. IEEE Transactions on Multimedia, 2020.

Paper: AttentionFGAN: Infrared and Visible Image Fusion using Attention-based Generative Adversarial Networks

8. Article: MDLatLRR: A novel decomposition method for infrared and visible image fusion [Multi-scale Decomposition]

Cite as:Li H, Wu X, Kittler J, et al. MDLatLRR: A Novel Decomposition Method for Infrared and Visible Image Fusion[J]. IEEE Transactions on Image Processing, 2020: 4733-4746.

Paper:MDLatLRR: A novel decomposition method for infrared and visible image fusion

Code:https://github.com/hli1221/imagefusion_mdlatlrr

9. Article: NestFuse: An Infrared and Visible Image Fusion Architecture Based on Nest Connection and Spatial/Channel Attention Models [Multi-scale Decomposition]

Cite as:Li H, Wu X J, Durrani T. Nestfuse: An infrared and visible image fusion architecture based on nest connection and spatial/channel attention models[J]. IEEE Transactions on Instrumentation and Measurement, 2020, 69(12): 9645-9656.

Paper:NestFuse: An Infrared and Visible Image Fusion Architecture Based on Nest Connection and Spatial/Channel Attention Models

Code:https://github.com/hli1221/imagefusion-nestfuse

10. Article: IFCNN: A general image fusion framework based on convolutional neural network [Deep Learning] [General Image Fusion]

Cite as:Zhang Y, Liu Y, Sun P, et al. IFCNN: A General Image Fusion Framework Based on Convolutional Neural Network[J]. Information Fusion, 2020: 99-118.

Paper: IFCNN: A General Image Fusion Framework Based on Convolutional Neural Network

Code:https://github.com/uzeful/IFCNN

11. Article: RXDNFuse: A aggregated residual dense network for infrared and visible image fusion [Deep Learning]

Cite as:Long Y, Jia H, Zhong Y, et al. RXDNFuse: A aggregated residual dense network for infrared and visible image fusion[J]. Information Fusion, 69: 128-141.

Paper: RXDNFuse: A aggregated residual dense network for infrared and visible image fusion

12. Article: Infrared and visible image fusion via detail preserving adversarial learning [Deep Learning] [GAN]

Cite as:Ma J, Liang P, Yu W, et al. Infrared and visible image fusion via detail preserving adversarial learning[J]. Information Fusion, 2020, 54: 85-98.

Paper: Infrared and visible image fusion via detail preserving adversarial learning

Code:https://github.com/jiayi-ma/ResNetFusion

13. Article: SEDRFuse: A Symmetric Encoder–Decoder With Residual Block Network for Infrared and Visible Image Fusion [Deep Learning]

Cite as: Jian L, Yang X, Liu Z, et al. SEDRFuse: A Symmetric Encoder–Decoder With Residual Block Network for Infrared and Visible Image Fusion[J]. IEEE Transactions on Instrumentation and Measurement, 2020, 70: 1-15.

Paper: SEDRFuse: A Symmetric Encoder–Decoder With Residual Block Network for Infrared and Visible Image Fusion

Code:https://github.com/jianlihua123/SEDRFuse

14. Article: VIF-Net: an unsupervised framework for infrared and visible image fusion [Deep Learning]

Cite as: Hou R, Zhou D, Nie R, et al. VIF-Net: an unsupervised framework for infrared and visible image fusion[J]. IEEE Transactions on Computational Imaging, 2020, 6: 640-651.

Paper: VIF-Net: an unsupervised framework for infrared and visible image fusion

Code:https://github.com/Laker2423/VIF-NET

15. Article: Infrared and visible image fusion using dual discriminators generative adversarial networks with Wasserstein distance [Deep Learning]

Cite as: Li J, Huo H, Liu K, et al. Infrared and visible image fusion using dual discriminators generative adversarial networks with Wasserstein distance[J]. Information Sciences, 2020, 529: 28-41.

Paper:Infrared and visible image fusion using dual discriminators generative adversarial networks with Wasserstein distance

16. Article: Multigrained Attention Network for Infrared and Visible Image Fusion [Deep Learning]

Cite as: Li J, Huo H, Li C, et al. Multigrained Attention Network for Infrared and Visible Image Fusion[J]. IEEE Transactions on Instrumentation and Measurement, 2020, 70: 1-12.

Paper:Multigrained Attention Network for Infrared and Visible Image Fusion

【2019】

1. Article: FusionGAN: A generative adversarial network for infrared and visible image fusion [Deep Learning]

Cite as:Jiayi Ma, Wei Yu, Pengwei Liang, Chang Li, and Junjun Jiang. FusionGAN: A generative adversarial network for infrared and visible image fusion, Information Fusion, 48, pp. 11-26, Aug. 2019.

Paper:https://doi.org/10.1016/j.inffus.2018.09.004

Code:https://github.com/jiayi-ma/FusionGAN

Authors:

Ma Jiayi, Wuhan University.

Homepage:

http://www.escience.cn/people/jiayima/index.html (eScience researcher homepage)

http://mvp.whu.edu.cn/jiayima/

GitHub: https://github.com/jiayi-ma

Li Chang, lecturer, Hefei University of Technology.

Homepage: http://www.escience.cn/people/lichang/index.html (eScience researcher homepage)

(Worth noting: the Data section on the homepage collects links to hyperspectral image datasets.)

GitHub: https://github.com/Chang-Li-HFUT

Jiang Junjun, professor, Harbin Institute of Technology.

Homepage:

http://www.escience.cn/people/jiangjunjun/index.html (eScience researcher homepage)

https://jiangjunjun.wordpress.com

http://homepage.hit.edu.cn/jiangjunjun (HIT faculty homepage)

https://scholar.google.com/citations?user=WNH2_rgAAAAJ&hl=zh-CN&oi=ao (Google Scholar)

https://github.com/junjun-jiang (GitHub)

 

2. Article: Infrared and visible image fusion methods and applications: A survey [Survey]

Cite as: Jiayi Ma, Yong Ma, and Chang Li. "Infrared and visible image fusion methods and applications: A survey", Information Fusion, 45, pp. 153-178, 2019.

Paper:https://doi.org/10.1016/j.inffus.2018.02.004

Authors: Ma Jiayi, Ma Yong, and Li Chang, Wuhan University.

 

【2018】

1. Article: Infrared and Visible Image Fusion with ResNet and zero-phase component analysis (click to download) [Deep Learning]

Cite as: Li H, Wu X J, Durrani T S. Infrared and Visible Image Fusion with ResNet and zero-phase component analysis[J]. 2018.

Paper:https://arxiv.org/abs/1806.07119

Code:https://github.com/hli1221/imagefusion_resnet50

Authors:

Li Hui, Ph.D. candidate at Jiangnan University (advisor: Wu Xiaojun).

Homepage: https://hli1221.github.io

GitHub:

https://github.com/hli1221 (primary GitHub)

https://github.com/exceptionLi

Wu Xiaojun:

Homepage: http://iot.jiangnan.edu.cn/info/1059/1532.htm (faculty homepage)

https://scholar.google.com/citations?user=5IST34sAAAAJ&hl=zh-CN&oi=sra (Google Scholar)

 

2. Article: DenseFuse: A Fusion Approach to Infrared and Visible Images (click to download) [Deep Learning]

Cite as:

H. Li, X. J. Wu, DenseFuse: A Fusion Approach to Infrared and Visible Images, IEEE Trans. Image Process.(Early Access), pp. 1-1, 2018.

Paper:https://arxiv.org/abs/1804.08361

(DOI: 10.1109/TIP.2018.2887342)

Code:https://github.com/hli1221/imagefusion_densefuse

Another implementation: https://github.com/srinu007/MultiModelImageFusion (the code package also includes MATLAB objective evaluation metric functions for image fusion)

Author: Li Hui, Ph.D. candidate at Jiangnan University (advisor: Wu Xiaojun).
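Several of the repositories above bundle MATLAB objective evaluation metric functions. As a rough illustration only (my own minimal Python sketch, not code from any of the listed repositories), two of the most common metrics, entropy (EN) and mutual information (MI), can be computed from image histograms:

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (EN) of an 8-bit-range image; higher usually means more information."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(a, b, bins=256):
    """Mutual information between two images, estimated from their joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins,
                                 range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

def fusion_mi(ir, vis, fused):
    """MI-based fusion quality: information the fused image shares with both sources."""
    return mutual_information(ir, fused) + mutual_information(vis, fused)
```

A constant image has zero entropy, and an image's MI with itself equals its entropy, which makes the functions easy to sanity-check.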

 

3. Article: Infrared and Visible Image Fusion using a Deep Learning Framework (click to download) [Deep Learning]

Cite as: Li H, Wu X J, Kittler J. Infrared and Visible Image Fusion using a Deep Learning Framework[C]//Pattern Recognition (ICPR), 2018 24th International Conference on. IEEE, 2018: 2705-2710.

Paper:https://arxiv.org/pdf/1804.06992

DOI: 10.1109/ICPR.2018.8546006

Code:https://github.com/hli1221/imagefusion_deeplearning

Author: Li Hui, Ph.D. candidate at Jiangnan University (advisor: Wu Xiaojun).

 

4. Article: Infrared and visible image fusion using Latent Low-Rank Representation [LRR for Image Fusion]

Cite as:Li H, Wu X J. Infrared and visible image fusion using Latent Low-Rank Representation[J]. 2018.

Paper:https://arxiv.org/abs/1804.08992

Code:https://github.com/exceptionLi/imagefusion_Infrared_visible_latlrr

Author: Li Hui, Ph.D. candidate at Jiangnan University (advisor: Wu Xiaojun).

 

5. Article: Infrared and visible image fusion using a novel deep decomposition method [Deep Learning]

Cite as: Li H, Wu X. Infrared and visible image fusion using a novel deep decomposition method[J]. arXiv: Computer Vision and Pattern Recognition, 2018.

Paper:https://arxiv.org/abs/1811.02291

Code:https://github.com/hli1221/imagefusion_deepdecomposition

Author: Li Hui, Ph.D. candidate at Jiangnan University (advisor: Wu Xiaojun).

 

6. Article: Infrared and visual image fusion method based on discrete cosine transform and local spatial frequency in discrete stationary wavelet transform domain

Cite as: Jin X, Jiang Q, Yao S, et al. Infrared and visual image fusion method based on discrete cosine transform and local spatial frequency in discrete stationary wavelet transform domain[J]. Infrared Physics & Technology, 2018: 1-12.

Paper:https://doi.org/10.1016/j.infrared.2017.10.004

Code:https://github.com/jinxinhuo/SWT_DCT_SF-for-image-fusion

https://ww2.mathworks.cn/matlabcentral/fileexchange/68674-infrared-and-visual-image-fusion-method-based-on-swt_dct_sf?s_tid=FX_rc2_behav

Author: Jin Xin, Ph.D. student (class of 2013), Yunnan University.

 

7. Article: Multi-scale decomposition based fusion of infrared and visible image via total variation and saliency analysis

Cite as: Ma T, Ma J, Fang B, et al. Multi-scale decomposition based fusion of infrared and visible image via total variation and saliency analysis[J]. Infrared Physics & Technology, 2018: 154-162.

Paper:https://doi.org/10.1016/j.infrared.2018.06.002

Author: Siwen Quan

Homepage: https://sites.google.com/view/siwenquanshomepage

https://scholar.google.com/citations?user=9CS008EAAAAJ&hl=zh-CN&oi=sra (Google Scholar)

 

8. Article: Visible and infrared image fusion using DTCWT and adaptive combined clustered dictionary

Cite as: Aishwarya N, Thangammal C B. Visible and infrared image fusion using DTCWT and adaptive combined clustered dictionary[J]. Infrared Physics & Technology, 2018: 300-309.

Paper:https://doi.org/10.1016/j.infrared.2018.08.013

 

9. Article: Infrared and visible image fusion based on convolutional neural network model and saliency detection via hybrid l0-l1 layer decomposition [CNN] [Deep Learning] [Saliency Detection]

Cite as: Liu D, Zhou D, Nie R, et al. Infrared and visible image fusion based on convolutional neural network model and saliency detection via hybrid l0-l1 layer decomposition[J]. Journal of Electronic Imaging, 2018, 27(06).

Paper:https://doi.org/10.1117/1.JEI.27.6.063036

Authors:

Zhou Dongming, professor and doctoral advisor, Yunnan University
Nie Rencan, associate professor, Ph.D., and master's advisor, School of Information, Yunnan University

 

【2017】

1. Article: Fusion of visible and infrared images using global entropy and gradient constrained regularization

Paper:https://doi.org/10.1016/j.infrared.2017.01.012

Author: Zhao Jufeng, associate professor and master's advisor, Hangzhou Dianzi University.

Homepage: http://mypage.hdu.edu.cn/zhaojufeng/0.html

 

2. Article: A survey of infrared and visual image fusion methods [Survey]

Paper:https://doi.org/10.1016/j.infrared.2017.07.010

Authors:

Jin Xin, Ph.D. student (class of 2013), Yunnan University
Yao Shaowen, dean of the School of Software, Yunnan University
Zhou Dongming, professor and doctoral advisor, Yunnan University
Nie Rencan, associate professor, Ph.D., and master's advisor, School of Information, Yunnan University

He Kangjian, Ph.D. student (class of 2014), Yunnan University

 

3. Article: Infrared and Visual Image Fusion through Infrared Feature Extraction and Visual Information Preservation

Cite as:

Yu Zhang, Lijia Zhang, Xiangzhi Bai and Li Zhang. Infrared and Visual Image Fusion through Infrared Feature Extraction and Visual Information Preservation, Infrared Physics & Technology 83 (2017) 227-237.

Paper: http://dx.doi.org/10.1016/j.infrared.2017.05.007 (DOI: 10.1016/j.infrared.2017.05.007)

Code:https://github.com/uzeful/Infrared-and-Visual-Image-Fusion-via-Infrared-Feature-Extraction-and-Visual-Information-Preservation

Author: Zhang Yu, Ph.D., Tsinghua University.

Homepage:

https://sites.google.com/site/uze1989/

https://uzeful.github.io/

GitHub: https://github.com/uzeful

 

4. Article: Visible and NIR image fusion using weight-map-guided Laplacian–Gaussian pyramid for improving scene visibility

Cite as:

Vanmali A V , Gadre V M . Visible and NIR image fusion using weight-map-guided Laplacian–Gaussian pyramid for improving scene visibility[J]. Sādhanā, 2017, 42(7):1063-1082.

Paper: (DOI:10.1007/s12046-017-0673-1)

Code:https://drive.google.com/file/d/0B-hGkOHjv3gzVnU5Slg2YWZRWVE/view?usp=sharing

 

5. Article: Infrared and visible image fusion based on visual saliency map and weighted least square optimization

Cite as:

Ma J, Zhou Z, Wang B, et al. Infrared and visible image fusion based on visual saliency map and weighted least square optimization[J]. Infrared Physics & Technology, 2017, 82:8-17.

Paper:https://doi.org/10.1016/j.infrared.2017.02.005(DOI:10.1016/j.infrared.2017.02.005)

Code:https://github.com/JinleiMa/Image-fusion-with-VSM-and-WLS

Author: Ma Jinlei, Beijing Institute of Technology.

GitHub: https://github.com/JinleiMa?utf8=✓
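To give a feel for the saliency-weighted idea behind this kind of method, here is a minimal Python sketch. It is a simplification under my own assumptions (histogram-contrast saliency and a linear weight rule) rather than the paper's method, which additionally refines the result with weighted-least-squares optimization:

```python
import numpy as np

def histogram_saliency(img, bins=256):
    """Per-pixel saliency: for each gray level, the histogram-weighted sum of
    intensity distances to all other levels (rare, high-contrast pixels score high)."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    levels = np.arange(bins)
    # sal_table[l] = sum_j hist[j] * |l - j|
    sal_table = (hist[None, :] * np.abs(levels[:, None] - levels[None, :])).sum(axis=1)
    sal = sal_table[img.astype(int).clip(0, bins - 1)]
    return sal / (sal.max() + 1e-12)   # normalize to [0, 1]

def vsm_weighted_fusion(ir, vis):
    """Fuse with per-pixel weights derived from each source's visual saliency map."""
    s1, s2 = histogram_saliency(ir), histogram_saliency(vis)
    w = 0.5 + 0.5 * (s1 - s2)          # w in [0, 1]; favors the more salient source
    return w * ir + (1 - w) * vis
```

With two constant inputs both saliency maps are zero, so the result degrades gracefully to a plain average; a lone hot spot in the infrared image receives a weight near 1 and dominates the fused pixel.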

 

6. Article: Infrared and visible image fusion method based on saliency detection in sparse domain

Cite as:

Liu C H , Qi Y , Ding W R . Infrared and visible image fusion method based on saliency detection in sparse domain[J]. Infrared Physics & Technology, 2017:S1350449516307150.

Paper:https://doi.org/10.1016/j.infrared.2017.04.018(DOI:10.1016/j.infrared.2017.04.018)

 

7. Article: Infrared and visible image fusion with convolutional neural networks [Deep Learning] [CNN]

Cite as:

Yu Liu, Xun Chen, Juan Cheng, Hu Peng, Zengfu Wang,“Infrared and visible image fusion with convolutional neural networks”, International Journal of Wavelets,Multiresolution and Information Processing, vol. 16, no. 3, 1850018: 1-20, 2018.

Paper:https://www.worldscientific.com/doi/abs/10.1142/S0219691318500182

https://www.researchgate.net/publication/321799375_Infrared_and_visible_image_fusion_with_convolutional_neural_networks

(DOI:10.1142/S0219691318500182)

Code: http://www.escience.cn/people/liuyu1/Codes.html (Liu Yu)

Authors:

Liu Yu

Chen Xun, professor and doctoral advisor

http://staff.ustc.edu.cn/~xunchen/

https://scholar.google.com/citations?user=aBnUWyQAAAAJ&hl=zh-CN&oi=sra (Google Scholar)

Cheng Juan

http://www.escience.cn/people/chengjuanhfut/index.html

https://scholar.google.com/citations?user=fMOOhH8AAAAJ&hl=zh-CN&oi=sra (Google Scholar)

 

8. Article: Infrared and visible image fusion based on total variation and augmented Lagrangian

Paper:https://doi.org/10.1364/JOSAA.34.001961

Authors: Hanqi Guo, Yong Ma, Xiaoguang Mei, and Jiayi Ma, Wuhan University.

 

9. Article: Fusion of infrared-visible images using improved multi-scale top-hat transform and suitable fusion rules

Paper:https://doi.org/10.1016/j.infrared.2017.01.013

 

10. Article: Fusion of infrared and visible images based on nonsubsampled contourlet transform and sparse K-SVD dictionary learning

Paper:(DOI:10.1016/j.infrared.2017.01.026)

Author:

Jiajun Cai, Wuhan University

https://c.glgoo.top/citations?user=1jAmUp0AAAAJ&hl=zh-CN&oi=sra

 

 

【2016】

1. Article: Infrared and visible image fusion via gradient transfer and total variation minimization (click to download)

Cite as:

Jiayi Ma, Chen Chen, Chang Li, and Jun Huang. Infrared and visible image fusion via gradient transfer and total variation minimization, Information Fusion, 31, pp. 100-109, Sept. 2016.

Paper:https://doi.org/10.1016/j.inffus.2016.02.001

Code:https://github.com/jiayi-ma/GTF

(The code package also provides the code for the eight other algorithms used as comparison methods in the paper, as well as MATLAB objective evaluation metric functions for image fusion.)

Author: Ma Jiayi, Wuhan University.

Homepage: http://www.escience.cn/people/jiayima/index.html
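For readers new to GTF, its core can be summarized as a single variational objective. The form below is a paraphrase from memory, so treat the exact norms and weighting as an assumption rather than the paper's verbatim equation; with $u$ the infrared image, $v$ the visible image, and $x$ the fused result:

```latex
\min_{x}\; \lVert x - u \rVert_{1} \;+\; \lambda \,\lVert \nabla x - \nabla v \rVert_{1}
```

The first term keeps the infrared intensity (thermal radiation) distribution, while the second transfers the visible image's gradients (edges and texture) into the fused result; $\lambda$ trades the two off.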

 

2. Article: Multi-window visual saliency extraction for fusion of visible and infrared images

Cite as:

Zhao J , Gao X , Chen Y , et al. Multi-window visual saliency extraction for fusion of visible and infrared images[J]. Infrared Physics & Technology, 2016, 76:295-302.

Paper:https://doi.org/10.1016/j.infrared.2016.01.020

Author: Zhao Jufeng, associate professor and master's advisor, Hangzhou Dianzi University.

Homepage: http://mypage.hdu.edu.cn/zhaojufeng/0.html

 

3. Article: Two-scale image fusion of visible and infrared images using saliency detection

Cite as:

Bavirisetti D P , Dhuli R . Two-scale image fusion of visible and infrared images using saliency detection[J]. Infrared Physics & Technology, 2016, 76:52-64.

Paper:https://doi.org/10.1016/j.infrared.2016.01.009

Code:https://www.mathworks.com/matlabcentral/fileexchange/63571-two-scale-image-fusion-of-visible-and-infrared-images-using-saliency-detection

Author: Durga Prasad Bavirisetti

Homepage: https://sites.google.com/view/durgaprasadbavirisetti/home

(The Datasets link at the top right of the homepage provides various image fusion datasets.)

https://scholar.google.com/citations?user=hc0VdQQAAAAJ&hl=zh-CN&oi=sra (Google Scholar)
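The two-scale idea can be sketched in a few lines of Python. This is a generic simplification, not the paper's method: the paper derives detail-layer weights from saliency maps, whereas this sketch simply averages the base layers and keeps the stronger detail response at each pixel:

```python
import numpy as np

def box_filter(img, r=3):
    """Simple edge-padded mean filter used to extract the base (low-frequency) layer."""
    pad = np.pad(img, r, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(-r, r + 1):           # sum the (2r+1)^2 shifted copies
        for dx in range(-r, r + 1):
            out += pad[r + dy : r + dy + img.shape[0],
                       r + dx : r + dx + img.shape[1]]
    return out / (2 * r + 1) ** 2

def two_scale_fusion(ir, vis, r=3):
    """Two-scale fusion: average the base layers; at each pixel keep the
    detail (high-frequency) response with the larger magnitude."""
    base_ir, base_vis = box_filter(ir, r), box_filter(vis, r)
    det_ir, det_vis = ir - base_ir, vis - base_vis
    base = 0.5 * (base_ir + base_vis)
    detail = np.where(np.abs(det_ir) >= np.abs(det_vis), det_ir, det_vis)
    return base + detail
```

For two constant inputs the detail layers vanish and the result is the plain average of the two images, which is a convenient sanity check.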

 

4. Article: Fusion of Infrared and Visible Sensor Images Based on Anisotropic Diffusion and Karhunen-Loeve Transform

Paper:https://ieeexplore.ieee.org/document/7264981

DOI: 10.1109/JSEN.2015.2478655

Code:https://ww2.mathworks.cn/matlabcentral/fileexchange/63591-fusion-of-infrared-and-visible-sensor-images-based-on-anisotropic-diffusion-and-kl-transform?s_tid=FX_rc2_behav

Author: Durga Prasad Bavirisetti

Homepage: https://sites.google.com/view/durgaprasadbavirisetti/home

(The Datasets link at the top right of the homepage provides various image fusion datasets.)

https://scholar.google.com/citations?user=hc0VdQQAAAAJ&hl=zh-CN&oi=sra (Google Scholar)

 

5. Article: Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with Gaussian and bilateral filters [HMSD]

Cite as:

Zhiqiang Zhou et al. "Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with Gaussian and bilateral filters", Information Fusion, 30, 2016

Paper:https://doi.org/10.1016/j.inffus.2015.11.003

Code:https://github.com/bitzhouzq/Hybrid-MSD-Fusion

Or: https://www.researchgate.net/publication/304246314

Author: Zhou Zhiqiang, associate professor, School of Automation, Beijing Institute of Technology

Homepage: http://ac.bit.edu.cn/szdw/jsdw/mssbyznxtyjs_20150206131517284801/20150206115445413049_20150206131517284801/index.htm

GitHub: https://github.com/bitzhouzq

 

6. Article: Fusion of infrared and visible images for night-vision context enhancement

Paper:https://doi.org/10.1364/AO.55.006480

Code:https://github.com/bitzhouzq/Context-Enhance-via-Fusion

Author: Zhou Zhiqiang, associate professor, School of Automation, Beijing Institute of Technology

Homepage: http://ac.bit.edu.cn/szdw/jsdw/mssbyznxtyjs_20150206131517284801/20150206115445413049_20150206131517284801/index.htm

GitHub: https://github.com/bitzhouzq

 

【2015】

1. Article: Attention-based hierarchical fusion of visible and infrared images

Paper:https://doi.org/10.1016/j.ijleo.2015.08.120

Authors:

Chen Yanfei, associate professor and master's advisor.

Homepage: http://eie.wit.edu.cn/info/1067/1028.htm (faculty homepage)

Sang Nong, professor and doctoral advisor, School of Automation, Huazhong University of Science and Technology

Homepage: http://auto.hust.edu.cn/info/1154/3414.htm (faculty homepage)

 

【2014】

1. Article: Fusion method for infrared and visible images by using non-negative sparse representation [NNSR]

Cite as:

Wang J , Peng J , Feng X , et al. Fusion method for infrared and visible images by using non-negative sparse representation[J]. Infrared Physics & Technology, 2014, 67:477-489.

Paper:https://doi.org/10.1016/j.infrared.2014.09.019

Authors: Wang Jun, Peng Jinye, Feng Xiaoyi, and He Guiqing, Northwestern Polytechnical University

 

2. Article: The infrared and visible image fusion algorithm based on target separation and sparse representation

Cite as:

Lu X , Zhang B , Zhao Y , et al. The infrared and visible image fusion algorithm based on target separation and sparse representation[J]. Infrared Physics & Technology, 2014, 67:397-407.

Paper:https://doi.org/10.1016/j.infrared.2014.09.007

Authors: Lü Xiaoqi, Zhang Baohua, and Zhao Ying, Inner Mongolia University of Science and Technology

Lü Xiaoqi, professor and doctoral advisor, School of Information Engineering, Inner Mongolia University of Science and Technology.

Homepage: http://graduate.imust.cn/info/1063/2860.htm

Zhang Baohua, associate professor and master's advisor, School of Information Engineering, Inner Mongolia University of Science and Technology.

Homepage: http://graduate.imust.cn/info/1063/2331.htm

Zhao Ying, lecturer and master's advisor, School of Information Engineering, Inner Mongolia University of Science and Technology.

Homepage: http://graduate.imust.cn/info/1063/2409.htm

 

=====================================================

PS: Some functions used in older code may have been removed as MATLAB was upgraded, which causes runtime errors. The workaround is to keep an older version of MATLAB on your machine; whenever a removed function is needed, copy it out of the old installation and paste it into the code folder.

Due to the author's limited expertise, some of the latest papers have not been collected here; discussion and exchange are welcome!

For questions, contact: [email protected] (please note your name + school).


Origin blog.csdn.net/fovever_/article/details/106585576