Abstract
Human demonstrations of trajectories are an important source of training data for many machine learning problems. However, the difficulty of collecting human demonstration data for complex tasks makes learning efficient representations of those trajectories challenging. For many problems, such as handwriting or quasistatic dexterous manipulation, the exact timings of the trajectories should be factored out from their spatial path characteristics. In this work, we propose TimewarpVAE, a fully differentiable manifold-learning algorithm that incorporates Dynamic Time Warping (DTW) to simultaneously learn both timing variations and latent factors of spatial variation. We show how the TimewarpVAE algorithm learns appropriate time alignments and meaningful representations of spatial variations on small handwriting and fork-manipulation datasets. Our method achieves lower spatial reconstruction test error than baseline approaches, and the learned low-dimensional representations can be used to efficiently generate semantically meaningful novel trajectories.
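To make the role of time alignment concrete, the following is a minimal sketch of the classic Dynamic Time Warping algorithm the abstract refers to, written for 1-D trajectories. This is an illustration of standard DTW only, not the paper's differentiable TimewarpVAE variant; the function name and interface are invented for this example.

```python
# Illustrative sketch: classic DTW distance between two 1-D trajectories.
# Not the paper's differentiable formulation; names are hypothetical.
def dtw_distance(a, b):
    n, m = len(a), len(b)
    inf = float("inf")
    # cost[i][j]: minimal cumulative cost of aligning a[:i] with b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # advance a only
                                 cost[i][j - 1],      # advance b only
                                 cost[i - 1][j - 1])  # advance both
    return cost[n][m]

# Two traversals of the same spatial path at different speeds align
# with zero cost, illustrating how DTW separates timing from shape.
print(dtw_distance([0.0, 1.0, 2.0], [0.0, 0.0, 1.0, 1.0, 2.0]))
```

Because DTW assigns zero cost to pure retimings of the same path, building it into the autoencoder lets the latent space model only spatial variation.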
URL
https://arxiv.org/abs/2310.16027