Abstract
The study of variational quantum circuits (VQCs) has received significant attention from the quantum computing community in recent years. These hybrid algorithms, which combine classical and quantum components, are well suited to noisy intermediate-scale quantum (NISQ) devices. Although estimating exact gradients with the parameter-shift rule to optimize VQCs is feasible on NISQ devices, the approach does not scale well to larger problem sizes: the number of circuit evaluations required for gradient estimation grows linearly with the number of parameters in the VQC. Techniques that approximate the gradients of VQCs, such as simultaneous perturbation stochastic approximation (SPSA), have a cost that does not scale with the number of parameters, but they struggle with instability and often attain suboptimal solutions. In this work, we introduce a novel gradient estimation approach called Guided-SPSA, which meaningfully combines the parameter-shift rule with SPSA-based gradient approximation. Guided-SPSA reduces the number of circuit evaluations required during training by 15% to 25% while finding solutions of similar or better optimality than the parameter-shift rule. Guided-SPSA outperforms standard SPSA in all scenarios and outperforms the parameter-shift rule in scenarios such as suboptimal initialization of the parameters. We demonstrate the performance of Guided-SPSA numerically on different paradigms of quantum machine learning, such as regression, classification, and reinforcement learning.
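As background for the cost comparison above, the parameter-shift rule needs two circuit evaluations per parameter (2n evaluations for n parameters), whereas SPSA needs only two evaluations in total, at the price of a noisy estimate. The sketch below illustrates this on a toy analytic stand-in for a VQC cost (a sum of single-qubit rotation expectations, for which the parameter-shift rule is exact); the function `f`, the perturbation size `c`, and the seed are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def parameter_shift_gradient(f, theta):
    """Exact gradient via the parameter-shift rule:
    two evaluations of f per parameter (2n total)."""
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        shift = np.zeros_like(theta)
        shift[i] = np.pi / 2
        grad[i] = (f(theta + shift) - f(theta - shift)) / 2
    return grad

def spsa_gradient(f, theta, c=0.1, rng=None):
    """Approximate gradient via SPSA: two evaluations of f in
    total, independent of the number of parameters. A random
    +/-1 direction delta is sampled, and the symmetric finite
    difference along it estimates all partials at once."""
    rng = rng or np.random.default_rng(0)
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    return (f(theta + c * delta) - f(theta - c * delta)) / (2 * c * delta)

# Toy cost standing in for a VQC expectation value: for a sum of
# sines, the parameter-shift rule recovers cos(theta) exactly.
f = lambda t: float(np.sum(np.sin(t)))
theta = np.array([0.3, -1.2, 2.0])
exact = parameter_shift_gradient(f, theta)   # equals cos(theta)
approx = spsa_gradient(f, theta)             # noisy, but 2 evals only
```

The SPSA estimate is biased along the sampled direction, but averaged over steps it follows the true gradient, which is why it can train large circuits cheaply yet unstably, as the abstract notes.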
URL
https://arxiv.org/abs/2404.15751