Paper Reading AI Learner

FlowerFormer: Empowering Neural Architecture Encoding using a Flow-aware Graph Transformer

2024-03-19 15:21:10
Dongyeong Hwang, Hyunju Kim, Sunwoo Kim, Kijung Shin

Abstract

The success of a specific neural network architecture is closely tied to the dataset and task it tackles; there is no one-size-fits-all solution. Thus, considerable efforts have been made to quickly and accurately estimate the performance of neural architectures, without full training or evaluation, for given tasks and datasets. Neural architecture encoding has played a crucial role in such estimation, and graph-based methods, which treat an architecture as a graph, have shown prominent performance. For enhanced representation learning of neural architectures, we introduce FlowerFormer, a powerful graph transformer that incorporates the information flows within a neural architecture. FlowerFormer consists of two key components: (a) bidirectional asynchronous message passing, inspired by the flows; (b) global attention built on flow-based masking. Our extensive experiments demonstrate the superiority of FlowerFormer over existing neural encoding methods, and its effectiveness extends beyond computer vision models to include graph neural networks and automatic speech recognition models. Our code is available at this http URL.
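
The two components named above can be illustrated on a toy architecture DAG. The sketch below is a minimal, hypothetical illustration, not the authors' implementation (which is in the linked repository): the node features, dimensions, mean-aggregation message functions, and the reachability-based attention mask are all assumptions made for exposition. It shows (a) asynchronous message passing that visits operation nodes in topological order and then in reverse, and (b) self-attention restricted by a mask derived from the information flow.

import torch
import torch.nn as nn
import torch.nn.functional as F


def topological_order(num_nodes, edges):
    # Kahn's algorithm: order the operation nodes of an architecture DAG.
    indeg = [0] * num_nodes
    succ = [[] for _ in range(num_nodes)]
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    order = []
    frontier = [u for u in range(num_nodes) if indeg[u] == 0]
    while frontier:
        u = frontier.pop()
        order.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                frontier.append(v)
    return order, succ


def flow_mask(num_nodes, edges):
    # True where attention is allowed: node pairs connected by a directed path
    # (in either direction), i.e. pairs that actually exchange information.
    reach = torch.eye(num_nodes, dtype=torch.bool)
    for u, v in edges:
        reach[u, v] = True
    for k in range(num_nodes):  # transitive closure
        reach |= reach[:, k:k + 1] & reach[k:k + 1, :]
    return reach | reach.T


class FlowAwareEncoderSketch(nn.Module):
    # (a) asynchronous message passing along and against the flow,
    # (b) self-attention restricted by a flow-based mask.
    def __init__(self, dim):
        super().__init__()
        self.fwd_msg = nn.Linear(dim, dim)
        self.bwd_msg = nn.Linear(dim, dim)
        self.attn = nn.MultiheadAttention(dim, num_heads=2, batch_first=True)

    def forward(self, x, edges):  # x: (num_nodes, dim) operation embeddings
        n = x.size(0)
        order, succ = topological_order(n, edges)
        pred = [[] for _ in range(n)]
        for u, v in edges:
            pred[v].append(u)

        h = list(x)  # one feature vector per node
        for v in order:  # forward pass: each node waits for its predecessors
            if pred[v]:
                agg = torch.stack([h[u] for u in pred[v]]).mean(dim=0)
                h[v] = h[v] + F.relu(self.fwd_msg(agg))
        for v in reversed(order):  # backward pass: against the flow
            if succ[v]:
                agg = torch.stack([h[u] for u in succ[v]]).mean(dim=0)
                h[v] = h[v] + F.relu(self.bwd_msg(agg))

        hs = torch.stack(h).unsqueeze(0)  # (1, num_nodes, dim)
        blocked = ~flow_mask(n, edges)  # True = this pair may NOT attend
        out, _ = self.attn(hs, hs, hs, attn_mask=blocked)
        return out.squeeze(0)


# Toy usage: a 4-node cell (input -> op1 -> op2 -> output, plus a skip edge).
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
feats = torch.randn(4, 16)
print(FlowAwareEncoderSketch(16)(feats, edges).shape)  # torch.Size([4, 16])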

Abstract (translated)

The success of a specific neural network architecture is closely tied to the dataset and task it handles; there is no one-size-fits-all solution. Therefore, considerable effort has been devoted to quickly and accurately estimating the performance of neural architectures on given tasks and datasets. Neural architecture encoding plays a crucial role in this estimation, and graph-based methods, which treat an architecture as a graph, have performed prominently. To enhance representation learning of neural architectures, we introduce FlowerFormer, a powerful graph transformer that incorporates the information flows within a neural architecture. FlowerFormer consists of two key components: (a) bidirectional asynchronous message passing, inspired by the flows; (b) global attention built on flow-based masking. Our extensive experiments demonstrate that FlowerFormer outperforms existing neural encoding methods, and its effectiveness extends beyond computer vision models to graph neural networks and automatic speech recognition models. Our code is available at this http URL.

URL

https://arxiv.org/abs/2403.12821

PDF

https://arxiv.org/pdf/2403.12821.pdf
