Abstract
We present an approach to backpropagating through minimal problem solvers in end-to-end neural network training. Traditional methods relying on manually constructed formulas, finite differences, and autograd are laborious, approximate, and unstable for complex minimal problem solvers. We show that using the implicit function theorem to calculate derivatives for backpropagating through the solution of a minimal problem solver is simple, fast, and stable. We compare our approach to (i) using standard autograd on minimal problem solvers, relating it to existing backpropagation formulas for SVD-based and Eig-based solvers, and (ii) implementing the backpropagation with the existing PyTorch Deep Declarative Networks (DDN) framework. We demonstrate our technique on a toy example of training outlier-rejection weights for 3D point registration and on a real application of training an outlier-rejection and RANSAC sampling network for image matching. Our method provides $100\%$ stability and is 10 times faster than autograd, which is unstable and slow, and than DDN, which is stable but also slow.
Abstract (translated)
We propose a method for backpropagating through minimal problem solvers in end-to-end neural network training. Traditional methods relying on hand-constructed formulas, finite differences, and autograd are laborious, approximate, and unstable. We show that using the implicit function theorem to compute the backpropagation through a minimal problem solver is simple, fast, and stable. We compare our method with (i) using standard autograd on minimal problem solvers, relating it to existing backpropagation formulas for SVD-based and Eig-based solvers, and (ii) implementing the backpropagation with the existing PyTorch Deep Declarative Networks (DDN) framework. We demonstrate our technique on a training example of 3D point registration and on a real application in image matching. Our method provides 100% stability and is 10 times faster than autograd, which is unstable and slow, and than DDN, which is stable but slow.
URL
https://arxiv.org/abs/2404.17993
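
A minimal sketch (not the authors' code) of the idea described in the abstract: backpropagating through a black-box solver via the implicit function theorem inside a custom PyTorch `torch.autograd.Function`. The "minimal problem" here is an illustrative cubic $F(x,\theta)=x^3+\theta_1 x+\theta_0=0$, and the class name `ImplicitRoot` and parameter layout are assumptions for the example; the forward pass calls a non-differentiable numpy root finder, and the backward pass applies the IFT gradient $dx^*/d\theta = -(\partial F/\partial x)^{-1}\,\partial F/\partial\theta$ at the returned solution, so the solver itself is never differentiated.

```python
# Minimal sketch: implicit-function-theorem backprop through a black-box solver.
# Illustrative toy problem, not the paper's minimal problems.
import numpy as np
import torch


class ImplicitRoot(torch.autograd.Function):
    @staticmethod
    def forward(ctx, theta):
        # Black-box solver: a real root of x^3 + b*x + a = 0 (no autograd graph).
        a, b = theta.detach().cpu().numpy()
        roots = np.roots([1.0, 0.0, b, a])
        x_star = float(np.real(roots[np.argmin(np.abs(np.imag(roots)))]))
        x = theta.new_tensor(x_star)
        ctx.save_for_backward(theta, x)
        return x

    @staticmethod
    def backward(ctx, grad_out):
        theta, x = ctx.saved_tensors
        dF_dx = 3.0 * x**2 + theta[1]                     # dF/dx at the solution x*
        dF_dtheta = torch.stack([x.new_tensor(1.0), x])   # [dF/da, dF/db]
        # Implicit function theorem: dx*/dtheta = -(dF/dx)^{-1} dF/dtheta.
        return -grad_out * dF_dtheta / dF_dx


theta = torch.tensor([-2.0, 1.0], requires_grad=True)  # x^3 + x - 2 = 0, root x* = 1
x = ImplicitRoot.apply(theta)
x.backward()
print(x.item(), theta.grad)  # gradient obtained without differentiating the solver
```

In an actual minimal-problem setting the scalar `dF_dx` becomes the Jacobian of the polynomial constraints with respect to the solution, so the division turns into a small linear solve; the structure of forward/backward stays the same.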