
Sim2Real Transfer for Audio-Visual Navigation with Frequency-Adaptive Acoustic Field Prediction

2024-05-05 06:01:31
Changan Chen, Jordi Ramos, Anshul Tomar, Kristen Grauman

Abstract

Sim2real transfer has received increasing attention lately due to the success of learning robotic tasks in simulation end-to-end. While there has been a lot of progress in transferring vision-based navigation policies, the existing sim2real strategy for audio-visual navigation performs data augmentation empirically without measuring the acoustic gap. Sound differs from light in that it spans a much wider range of frequencies and thus requires a different sim2real solution. We propose the first treatment of sim2real for audio-visual navigation by disentangling it into acoustic field prediction (AFP) and waypoint navigation. We first validate our design choice in the SoundSpaces simulator and show improvement on the Continuous AudioGoal navigation benchmark. We then collect real-world data to measure the spectral difference between the simulation and the real world by training AFP models that only take a specific frequency subband as input. We further propose a frequency-adaptive strategy that intelligently selects the best frequency band for prediction based on both the measured spectral difference and the energy distribution of the received audio, which improves the performance on the real data. Lastly, we build a real robot platform and show that the transferred policy can successfully navigate to sounding objects. This work demonstrates the potential of building intelligent agents that can see, hear, and act entirely from simulation, and transferring them to the real world.
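The frequency-adaptive selection described in the abstract can be pictured as a simple per-band scoring rule: prefer sub-bands that carry a large share of the received audio energy and that were measured (offline) to have a small sim-to-real spectral gap. The sketch below is an illustrative assumption only, not the paper's actual formulation; the function name, the linear energy-minus-gap score, and the alpha weight are hypothetical.

```python
import numpy as np

def select_frequency_band(spectrogram, band_edges, sim2real_gap, alpha=0.5):
    """Pick the sub-band to feed to an AFP model (illustrative sketch).

    spectrogram  : (freq_bins, time) magnitude spectrogram of the received audio
    band_edges   : list of (lo_bin, hi_bin) index pairs defining candidate sub-bands
    sim2real_gap : per-band gap scores measured offline (larger = worse transfer)
    alpha        : hypothetical trade-off weight between energy and measured gap
    """
    scores = []
    for (lo, hi), gap in zip(band_edges, sim2real_gap):
        # Fraction of the total received energy that falls inside this sub-band.
        energy_frac = spectrogram[lo:hi].sum() / (spectrogram.sum() + 1e-8)
        # High received energy is rewarded; a large measured sim2real gap is penalized.
        scores.append(energy_frac - alpha * gap)
    return int(np.argmax(scores))

# Example usage with three hypothetical sub-bands over a 257-bin spectrogram
# and gap scores assumed to come from an offline calibration step.
bands = [(0, 64), (64, 160), (160, 257)]
gaps = [0.8, 0.3, 0.5]
spec = np.abs(np.random.randn(257, 100))  # stand-in for a real recording
best_band = select_frequency_band(spec, bands, gaps)
```

In practice the chosen band index would select which band-specific AFP model (or input sub-band) is used for the next prediction; the real system's scoring rule may differ from this linear combination.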


URL

https://arxiv.org/abs/2405.02821

PDF

https://arxiv.org/pdf/2405.02821.pdf

