
Gaze-based, Context-aware Robotic System for Assisted Reaching and Grasping

2019-03-06 11:46:00
Ali Shafti, Pavel Orlov, A. Aldo Faisal

Abstract

Assistive robotic systems endeavour to support those with movement disabilities, enabling them to move again and regain functionality. The main issue with these systems is the complexity of their low-level control, and how to translate this into simpler, higher-level commands that are easy and intuitive for a human user to interact with. We have created a multi-modal system, consisting of different sensing, decision-making and actuating modalities, leading to intuitive, human-in-the-loop assistive robotics. The system takes its cue from the user's gaze to decode their intentions and implement low-level motion actions that achieve high-level tasks. As a result, the user simply has to look at the objects of interest for the robotic system to assist them in reaching for those objects, grasping them, and using them to interact with other objects. We present our method for 3D gaze estimation, and a grammars-based implementation of sequences of actions with the robotic system. The 3D gaze estimation is evaluated with 8 subjects, showing an overall accuracy of $4.68\pm0.14$ cm. The full system is tested with 5 subjects, showing successful implementation of $100\%$ of reach-to-gaze-point actions, and full implementation of pick-and-place tasks in $96\%$ and pick-and-pour tasks in $76\%$ of cases. Finally, we discuss our results and the future work needed to improve the system.
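The abstract does not detail how the 3D gaze point is computed. With a binocular head-mounted eye tracker, one common approach is to triangulate the left- and right-eye gaze rays and take the midpoint of their closest approach (vergence). The sketch below illustrates that geometric idea only; the function name and the midpoint method are assumptions, not the authors' implementation.

```python
# Hypothetical sketch: 3D gaze point from binocular vergence, i.e. the
# midpoint of closest approach between the two eye gaze rays.
# NOT the paper's method; an illustrative, commonly used geometry.
import numpy as np

def gaze_point_3d(o_l, d_l, o_r, d_r, eps=1e-9):
    """o_l, o_r: 3D eye (ray origin) positions; d_l, d_r: gaze directions."""
    d_l = d_l / np.linalg.norm(d_l)
    d_r = d_r / np.linalg.norm(d_r)
    w = o_l - o_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w, d_r @ w
    denom = a * c - b * b
    if abs(denom) < eps:            # near-parallel rays: vergence unreliable
        return None
    s = (b * e - c * d) / denom     # parameter along the left ray
    t = (a * e - b * d) / denom     # parameter along the right ray
    p_l = o_l + s * d_l             # closest point on the left ray
    p_r = o_r + t * d_r             # closest point on the right ray
    return 0.5 * (p_l + p_r)        # midpoint = estimated 3D fixation
```

In practice such an estimate is noisy along the depth axis, which is consistent with the paper reporting accuracy in centimetres rather than millimetres.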

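The "grammars-based implementation of sequences of actions" suggests that each high-level task expands into an ordered sequence of low-level motion primitives. A minimal sketch of that idea follows; the task names and primitives are inferred from the tasks mentioned in the abstract (reach, pick-and-place, pick-and-pour) and are illustrative only.

```python
# Hypothetical sketch of grammar-style task decomposition: a high-level
# task rewrites into an ordered sequence of low-level primitives.
# Task and primitive names are assumptions based on the abstract.
TASK_GRAMMAR = {
    "reach":          ["move_to_gaze_point"],
    "pick_and_place": ["move_to_gaze_point", "grasp",
                       "move_to_gaze_point", "release"],
    "pick_and_pour":  ["move_to_gaze_point", "grasp",
                       "move_to_gaze_point", "pour"],
}

def execute(task, robot):
    """Expand a high-level task and run each primitive on the robot."""
    for primitive in TASK_GRAMMAR[task]:
        getattr(robot, primitive)()   # e.g. robot.grasp()
```

Here "move_to_gaze_point" would consume the current 3D gaze estimate, so the user drives each step of the sequence simply by looking at the next object of interest.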
Abstract (translated)

Assistive robotic systems are dedicated to supporting people with movement disabilities, enabling them to move again and regain functionality. The main problem with these systems is the complexity of their low-level control, and how to translate it into simpler, higher-level commands that human users can interact with easily and intuitively. We have created a multi-modal system composed of different sensing, decision-making and actuating modalities, resulting in intuitive, human-in-the-loop assistive robotics. The system takes its cue from the user's gaze, decoding the user's intentions and executing low-level motion actions to accomplish high-level tasks. As a result, the user only needs to look at the objects of interest for the robotic system to help them reach those objects, grasp them, and use them to interact with other objects. We present our 3D gaze estimation method and a grammars-based implementation of the robot's action sequences. The 3D gaze estimation was evaluated with 8 subjects, with an overall accuracy of $4.68\pm0.14$ cm. The full system was tested with 5 subjects, showing successful implementation of $100\%$ of reach-to-gaze-point actions, full implementation of pick-and-place tasks in $96\%$ of cases, and pick-and-pour tasks in $76\%$ of cases. Finally, we discuss our results and the future work needed to improve the system.

URL

https://arxiv.org/abs/1809.08095

PDF

https://arxiv.org/pdf/1809.08095.pdf

