Abstract
The Quantum Convolutional Layer (QCL) is considered one of the core components of Quantum Convolutional Neural Networks (QCNNs) due to its efficient feature-extraction capability. However, the current principle of the QCL is not as mathematically interpretable as that of the Classical Convolutional Layer (CCL) because of its black-box structure. Moreover, the mapping of classical data in many QCLs is inefficient. To this end, we first show theoretically that the Quantum Adjoint Convolution Operation (QACO), consisting of a quantum amplitude encoding and its inverse, is equivalent to the quantum normalization of the convolution operation based on the Frobenius inner product, while achieving an efficient characterization of the data. Subsequently, QACO is extended into a Quantum Adjoint Convolutional Layer (QACL) via Quantum Phase Estimation (QPE) to compute all Frobenius inner products in parallel. Finally, comparative simulation experiments are carried out on the PennyLane and TensorFlow platforms, mainly for the two cases of fixed and trainable kernels in the QACL. The results demonstrate that QACL, which provides the insight of special quantum properties for the same images, achieves higher training accuracy in MNIST and Fashion-MNIST classification experiments, but sacrifices learning performance to some extent. We expect that our research lays the foundation for the development of efficient and interpretable quantum convolutional networks and also advances the field of quantum machine vision.
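The stated equivalence can be illustrated classically: amplitude encoding maps a flattened matrix to an L2-normalized state vector, so the overlap of two encoded states equals the Frobenius inner product of the matrices divided by the product of their Frobenius norms. A minimal NumPy sketch of this idea (the patch and kernel values are illustrative, not from the paper):

```python
import numpy as np

# Illustrative 2x2 image patch and convolution kernel (values are arbitrary)
patch = np.array([[1.0, 2.0], [3.0, 4.0]])
kernel = np.array([[0.5, -1.0], [2.0, 0.0]])

# Amplitude encoding: flatten and L2-normalize into state vectors
psi = patch.flatten() / np.linalg.norm(patch)
phi = kernel.flatten() / np.linalg.norm(kernel)

# The overlap <phi|psi> of the encoded states ...
overlap = phi @ psi

# ... equals the Frobenius inner product <K, P>_F of the matrices,
# normalized by the product of their Frobenius norms
frob = np.sum(kernel * patch) / (np.linalg.norm(kernel) * np.linalg.norm(patch))

assert np.isclose(overlap, frob)
```

This only demonstrates the normalization relation underlying QACO; the paper's contribution is realizing the encoding, its inverse, and the parallel evaluation of all such inner products via QPE on quantum hardware.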
URL
https://arxiv.org/abs/2404.17378