Paper Reading AI Learner

Quantum Adjoint Convolutional Layers for Effective Data Representation

2024-04-26 12:52:45
Ren-Xin Zhao, Shi Wang, Yaonan Wang

Abstract

The Quantum Convolutional Layer (QCL) is considered one of the core components of Quantum Convolutional Neural Networks (QCNNs) due to its efficient data feature extraction capability. However, the current principle of the QCL is not as mathematically interpretable as that of the Classical Convolutional Layer (CCL) because of its black-box structure. Moreover, the classical data mapping in many QCLs is inefficient. To this end, firstly, the Quantum Adjoint Convolution Operation (QACO), consisting of a quantum amplitude encoding and its inverse, is theoretically shown to be equivalent to the quantum normalization of the convolution operation based on the Frobenius inner product, while achieving an efficient characterization of the data. Subsequently, QACO is extended into a Quantum Adjoint Convolutional Layer (QACL) via Quantum Phase Estimation (QPE), which computes all Frobenius inner products in parallel. Finally, comparative simulation experiments are carried out on the PennyLane and TensorFlow platforms, mainly for the two cases of fixed and unfixed kernels in QACL. The results demonstrate that QACL, by exploiting special quantum properties of the same images, provides higher training accuracy in MNIST and Fashion-MNIST classification experiments, but sacrifices learning performance to some extent. Predictably, our research lays the foundation for the development of efficient and interpretable quantum convolutional networks and also advances the field of quantum machine vision.
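The equivalence claimed above can be checked classically: amplitude encoding maps a matrix A to the state |A⟩ = vec(A)/‖A‖_F, so the overlap ⟨K|X⟩ of two such states equals the Frobenius inner product ⟨K, X⟩_F divided by ‖K‖_F‖X‖_F, i.e. one normalized convolution output for kernel K at patch X. A minimal NumPy sketch (our illustration, not the paper's code; the 2×2 matrices are arbitrary examples):

```python
import numpy as np

def amplitude_encode(mat):
    """Flatten a matrix and L2-normalize it, as amplitude encoding would."""
    v = np.asarray(mat, dtype=float).ravel()
    return v / np.linalg.norm(v)

# Hypothetical kernel and image patch, chosen only for illustration.
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])
patch = np.array([[2.0, 1.0], [1.0, 3.0]])

# Overlap of the two amplitude-encoded states.
overlap = amplitude_encode(kernel) @ amplitude_encode(patch)

# Classical reference: Frobenius inner product, normalized by both norms.
frobenius = np.sum(kernel * patch)
expected = frobenius / (np.linalg.norm(kernel) * np.linalg.norm(patch))

print(np.isclose(overlap, expected))  # the two quantities coincide
```

QACL then uses QPE to evaluate such overlaps for all kernel positions in parallel, which this classical sketch does not capture.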

URL

https://arxiv.org/abs/2404.17378

PDF

https://arxiv.org/pdf/2404.17378.pdf

