Paper Reading AI Learner

PTNet: A High-Resolution Infant MRI Synthesizer Based on Transformer

2021-05-28 17:20:19
Xuzhe Zhang, Xinzi He, Jia Guo, Nabil Ettehadi, Natalie Aw, David Semanek, Jonathan Posner, Andrew Laine, Yun Wang

Abstract

Magnetic resonance imaging (MRI) noninvasively provides critical information about how human brain structures develop across stages of life. Developmental scientists are particularly interested in the first few years of neurodevelopment. Despite the success of MRI collection and analysis for adults, it is a challenge for researchers to collect high-quality multimodal MRIs from developing infants, mainly because of their irregular sleep patterns, limited attention, inability to follow instructions to stay still, and a lack of analysis approaches. These challenges often lead to a significant reduction of usable data. To address this issue, researchers have explored various solutions to replace corrupted scans by synthesizing realistic MRIs. Among them, convolutional neural network (CNN)-based generative adversarial networks (GANs) have demonstrated promising results and achieved state-of-the-art performance. However, adversarial training is unstable and may require careful tuning of regularization terms to stabilize training. In this study, we introduce a novel MRI synthesis framework, Pyramid Transformer Net (PTNet). PTNet consists of transformer layers, skip connections, and a multi-scale pyramid representation. Compared with the most widely used CNN-based conditional GAN models (namely pix2pix and pix2pixHD), PTNet shows superior performance in terms of synthesis accuracy and model size. Notably, PTNet does not require any type of adversarial training and can be easily trained using a simple mean squared error loss.
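The sketch below illustrates the general idea the abstract describes: a U-Net-style pyramid of transformer blocks with skip connections, trained modality-to-modality with a plain MSE loss and no discriminator. It is a minimal, hypothetical example assuming PyTorch; the class names, dimensions, and layer choices are illustrative and not taken from the authors' implementation.

```python
# Minimal sketch (not the authors' code): pyramid of transformer stages with
# skip connections, trained with MSE only. All names/sizes are assumptions.
import torch
import torch.nn as nn


class TransformerStage(nn.Module):
    """Self-attention over the spatial positions of one pyramid level."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.block = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=dim * 2, batch_first=True)

    def forward(self, x):                      # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)  # (B, H*W, C)
        tokens = self.block(tokens)
        return tokens.transpose(1, 2).reshape(b, c, h, w)


class PyramidTransformerSketch(nn.Module):
    """Encoder-decoder pyramid with skip connections (illustrative only)."""
    def __init__(self, in_ch=1, out_ch=1, dims=(32, 64, 128)):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, dims[0], 3, padding=1)
        self.enc1 = TransformerStage(dims[0])
        self.down1 = nn.Conv2d(dims[0], dims[1], 2, stride=2)
        self.enc2 = TransformerStage(dims[1])
        self.down2 = nn.Conv2d(dims[1], dims[2], 2, stride=2)
        self.bottleneck = TransformerStage(dims[2])
        self.up2 = nn.ConvTranspose2d(dims[2], dims[1], 2, stride=2)
        self.dec2 = TransformerStage(dims[1])
        self.up1 = nn.ConvTranspose2d(dims[1], dims[0], 2, stride=2)
        self.dec1 = TransformerStage(dims[0])
        self.head = nn.Conv2d(dims[0], out_ch, 1)

    def forward(self, x):
        s1 = self.enc1(self.stem(x))
        s2 = self.enc2(self.down1(s1))
        z = self.bottleneck(self.down2(s2))
        d2 = self.dec2(self.up2(z) + s2)   # skip connection
        d1 = self.dec1(self.up1(d2) + s1)  # skip connection
        return self.head(d1)


# Training step: plain MSE between synthesized and target modality, no discriminator.
model = PyramidTransformerSketch()
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)
criterion = nn.MSELoss()

source = torch.randn(2, 1, 64, 64)   # e.g. T1-weighted slices (dummy data)
target = torch.randn(2, 1, 64, 64)   # e.g. corresponding T2-weighted slices

prediction = model(source)
loss = criterion(prediction, target)
loss.backward()
optimizer.step()
```

Because the objective is a single reconstruction loss, there is no generator/discriminator balancing to tune, which is the training-stability advantage the abstract emphasizes over pix2pix-style conditional GANs.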

URL

https://arxiv.org/abs/2105.13993

PDF

https://arxiv.org/pdf/2105.13993.pdf
