Abstract
The evaluation and training of autonomous driving systems require diverse and scalable corner cases. However, most existing scene generation methods lack controllability, accuracy, and versatility, leading to unsatisfactory results. To address this problem, we propose Dragtraffic, a generalized, point-based, and controllable traffic scene generation framework built on conditional diffusion. Dragtraffic enables non-experts to generate a variety of realistic driving scenarios for different types of traffic agents through an adaptive mixture-of-experts architecture. A regression model provides a general initial solution, and a refinement process based on a conditional diffusion model ensures diversity. User-customized context is introduced through cross-attention to ensure high controllability. Experiments on a real-world driving dataset show that Dragtraffic outperforms existing methods in terms of authenticity, diversity, and freedom.
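The pipeline the abstract describes — a regression model producing a coarse initial trajectory, then a conditional diffusion model refining it under a user-specified point condition — can be illustrated with a deliberately simplified toy. This is not the paper's actual model: the straight-line "regression" stage, the annealed noise-then-smooth loop standing in for conditional diffusion, and all function names are illustrative assumptions.

```python
import numpy as np

def regression_init(start, goal, steps=10):
    # Hypothetical stand-in for the regression stage: a coarse
    # straight-line initial trajectory between two 2D points.
    return np.linspace(start, goal, steps)

def diffusion_refine(traj, goal, n_iters=50, noise=0.05, seed=0):
    # Toy refinement loop standing in for the conditional diffusion
    # model: add annealed noise (diffusion-style perturbation), then
    # "denoise" by local smoothing, while enforcing the user-dragged
    # end point as the condition at every step.
    rng = np.random.default_rng(seed)
    x = traj.copy()
    for t in range(n_iters):
        scale = noise * (1.0 - t / n_iters)          # annealed noise schedule
        x = x + rng.normal(0.0, scale, x.shape)       # perturb
        x[0], x[-1] = traj[0], goal                   # pin start + condition
        x[1:-1] = 0.5 * x[1:-1] + 0.25 * (x[:-2] + x[2:])  # smoothing "denoise"
    x[0], x[-1] = traj[0], goal
    return x

start, goal = np.array([0.0, 0.0]), np.array([10.0, 4.0])
init = regression_init(start, goal)
refined = diffusion_refine(init, goal)
```

The key structural point the toy preserves is the two-stage design: a cheap deterministic initial solution gives every agent type a reasonable starting point, while the stochastic refinement supplies diversity without violating the user's point constraint.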
URL
https://arxiv.org/abs/2404.12624