Abstract
Non-convex constrained optimization problems are ubiquitous in robotic applications such as multi-agent navigation, UAV trajectory optimization, and soft robot simulation. On this problem class, conventional optimizers suffer from small step sizes and slow convergence. We propose BC-ADMM, a variant of the Alternating Direction Method of Multipliers (ADMM), that can solve a class of non-convex constrained optimizations via biconvex constraint relaxation. Our algorithm allows larger step sizes by breaking the problem into small-scale sub-problems that can be solved in parallel. We show that our method has both a theoretical convergence-speed guarantee and a practical convergence guarantee in the asymptotic sense. Through numerical experiments on four robotic applications, we show that BC-ADMM converges faster than conventional gradient descent and Newton's method in terms of wall-clock time.
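The abstract does not specify the BC-ADMM update rules, but the splitting idea it describes follows the standard ADMM template: introduce a copy of the variable, then alternate between small sub-problems and a dual update. A minimal sketch on a toy convex instance (projecting a point onto a box, with the constraint handled by the auxiliary variable) illustrates the mechanics; the problem, variable names, and penalty value `rho` here are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def admm_box_projection(a, lo, hi, rho=1.0, iters=200):
    """Illustrative ADMM splitting (not the paper's BC-ADMM):
    minimize 0.5*||x - a||^2  subject to  lo <= x <= hi,
    rewritten as f(x) + indicator_C(z) with x = z."""
    x = np.zeros_like(a)
    z = np.zeros_like(a)
    u = np.zeros_like(a)  # scaled dual variable
    for _ in range(iters):
        # x-update: unconstrained quadratic sub-problem, closed form
        x = (a + rho * (z - u)) / (1.0 + rho)
        # z-update: projection onto the constraint set (cheap, parallel per coordinate)
        z = np.clip(x + u, lo, hi)
        # dual update: accumulate the constraint violation x - z
        u = u + x - z
    return z

result = admm_box_projection(np.array([2.0, -3.0, 0.5]), lo=-1.0, hi=1.0)
print(result)  # approaches the projection [1, -1, 0.5]
```

Each iteration only solves sub-problems that are easy in isolation (a quadratic and a projection), which is the property the abstract credits for allowing larger steps and parallel execution.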
URL
https://arxiv.org/abs/2504.05465