Paper Reading AI Learner

Feedback and Control of Dynamics and Robotics using Augmented Reality

2023-03-23 03:43:36
Elijah Wyckoff, Ronan Reza, Fernando Moreu

Abstract

Human-machine interaction (HMI) and human-robot interaction (HRI) can assist structural monitoring and structural dynamics testing in the laboratory and field. In vibratory experimentation, one mode of generating vibration is to use electrodynamic exciters. Manual control is a common way for the operator to set the input of the exciter. To measure the structural responses to these generated vibrations, sensors are attached to the structure. These sensors can be deployed by repeatable robots with high endurance, which require on-the-fly control. If the interface between operators and the controls were augmented, operators could visualize the experiments and exciter levels, and define robot input with a better awareness of the area of interest. Robots can provide better aid to humans if intelligent on-the-fly control of the robot is: (1) quantified and presented to the human; (2) conducted in real-time for human feedback informed by data. Operators would use the information provided by the new interface to change the control input based on their understanding of real-time parameters. This research proposes using Augmented Reality (AR) applications to provide humans with sensor feedback and control of actuators and robots. This method improves cognition by allowing the operator to maintain awareness of structures while adjusting conditions accordingly with the assistance of the new real-time interface. One interface application is developed to plot sensor data in addition to voltage, frequency, and duration controls for vibration generation. Two more applications are developed under a similar framework, one to control the position of a mediating robot and one to control the frequency of the robot movement. This paper presents the proposed model for the new control loop and then compares the new approach with a traditional method by measuring time delay in control input and user efficiency.
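The abstract describes an interface that sets voltage, frequency, and duration for the electrodynamic exciter. As a minimal sketch only (the function name, sample rate, and sine-wave drive shape are assumptions for illustration, not details from the paper), the drive signal such controls parameterize might be generated as follows:

```python
import numpy as np

def excitation_signal(voltage, frequency, duration, sample_rate=1000):
    """Illustrative sinusoidal drive signal for an electrodynamic exciter.

    voltage   -- peak amplitude of the drive signal (V)
    frequency -- excitation frequency (Hz)
    duration  -- signal length (s)
    sample_rate -- assumed output rate (samples/s), not specified in the paper
    """
    t = np.arange(0, duration, 1.0 / sample_rate)
    return t, voltage * np.sin(2 * np.pi * frequency * t)

# Example: 2 V peak, 10 Hz sine sustained for 3 s
t, signal = excitation_signal(voltage=2.0, frequency=10.0, duration=3.0)
```

In the paper's proposed loop, the operator would adjust these three parameters from the AR interface while watching the plotted sensor response in real time.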


URL

https://arxiv.org/abs/2303.13016

PDF

https://arxiv.org/pdf/2303.13016.pdf

