TransMRSR: Transformer-based Self-Distilled Generative Prior for Brain MRI Super-Resolution

Bibliographic Details
Title: TransMRSR: Transformer-based Self-Distilled Generative Prior for Brain MRI Super-Resolution
Authors: Huang, Shan; Liu, Xiaohong; Tan, Tao; Hu, Menghan; Wei, Xiaoer; Chen, Tingli; Sheng, Bin
Publication Year: 2023
Collection: Computer Science
Subject Terms: Electrical Engineering and Systems Science - Image and Video Processing; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Machine Learning
Description: Magnetic resonance images (MRI) are often acquired with low through-plane resolution to save scanning time and cost. The resulting poor resolution along one orientation is insufficient to meet the high-resolution requirements of early brain-disease diagnosis and morphometric studies. Common single-image super-resolution (SISR) solutions face two main challenges: (1) combining local detail with global anatomical structural information; and (2) performing large-scale restoration when reconstructing thick-slice MRI into high-resolution (HR) isotropic data. To address these problems, we propose TransMRSR, a novel two-stage network for brain MRI SR that uses convolutional blocks to extract local information and transformer blocks to capture long-range dependencies. TransMRSR consists of three modules: shallow local feature extraction, deep non-local feature capture, and HR image reconstruction. In the first stage, we perform a generative task to encapsulate diverse priors into a generative adversarial network (GAN), which serves as the decoder sub-module of the deep non-local feature capture module. The pre-trained GAN is then used in the second-stage SR task. We further eliminate the potential latent-space shift caused by the two-stage training strategy through a self-distilled truncation trick. Extensive experiments show that our method achieves performance superior to other SISR methods on both public and private datasets. Code is released at https://github.com/goddesshs/TransMRSR.git.
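To make the described pipeline concrete, the following is a minimal, hypothetical PyTorch sketch of the three-module design outlined in the abstract: shallow convolutional feature extraction, transformer-based deep non-local feature capture with a GAN-style decoder, and pixel-shuffle HR reconstruction. All layer sizes, module internals, and the truncation-style latent blending (a classic StyleGAN-like interpolation toward a mean latent, standing in for the paper's self-distilled truncation trick) are placeholder assumptions based only on the abstract, not the authors' released implementation at the GitHub link above.

```python
import torch
import torch.nn as nn

class TransMRSRSketch(nn.Module):
    """Hypothetical sketch of the three-module TransMRSR pipeline.
    Layer sizes and the transformer/GAN internals are placeholders."""

    def __init__(self, in_ch=1, dim=64, depth=4, heads=4, scale=4):
        super().__init__()
        # (1) Shallow local feature extraction: convolutional blocks.
        self.shallow = nn.Sequential(
            nn.Conv2d(in_ch, dim, 3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(dim, dim, 3, padding=1),
        )
        # (2) Deep non-local feature capture: a generic transformer encoder
        # over flattened spatial tokens stands in for the paper's blocks.
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # Decoder sub-module: in the paper this is a pre-trained GAN prior;
        # here a small conv stack acts as a placeholder for that generator.
        self.gan_decoder = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        )
        # (3) HR image reconstruction: pixel-shuffle upsampling head.
        self.reconstruct = nn.Sequential(
            nn.Conv2d(dim, dim * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),
            nn.Conv2d(dim, in_ch, 3, padding=1),
        )

    def forward(self, lr, w_mean=None, psi=0.7):
        feat = self.shallow(lr)                   # B x C x H x W
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)  # B x HW x C
        latent = self.encoder(tokens)             # long-range dependencies
        # Truncation-style blending toward a mean latent; an assumed
        # stand-in for the self-distilled truncation trick.
        if w_mean is not None:
            latent = w_mean + psi * (latent - w_mean)
        deep = latent.transpose(1, 2).reshape(b, c, h, w)
        deep = self.gan_decoder(deep)
        return self.reconstruct(deep + feat)      # residual connection


lr = torch.randn(1, 1, 32, 32)                    # one low-res slice
sr = TransMRSRSketch()(lr)
print(sr.shape)                                   # torch.Size([1, 1, 128, 128])
```

The residual connection between the shallow features and the decoded deep features mirrors the common SR design of letting the transformer branch model only the non-local correction; the 4x scale factor here is arbitrary.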
Comment: CGI 2023
Document Type: Working Paper
Open Access: http://arxiv.org/abs/2306.06669
Accession Number: edsarx.2306.06669
Database: arXiv