Paper Reading AI Learner

Fine Robotic Manipulation without Force/Torque Sensor

2023-01-31 05:06:04
Shilin Shan, Quang-Cuong Pham

Abstract

Force Sensing and Force Control are essential to many industrial applications. Typically, a 6-axis Force/Torque (F/T) sensor is mounted between the robot's wrist and the end-effector in order to measure the forces and torques exerted by the environment onto the robot (the external wrench). Although a typical 6-axis F/T sensor can provide highly accurate measurements, it is expensive and vulnerable to drift and external impacts. Existing methods aiming at estimating the external wrench using only the robot's internal signals are limited in scope: for example, wrench estimation accuracy was mostly validated in free-space motions and simple contacts as opposed to tasks like assembly that require high-precision force control. Here we present a Neural Network based method and argue that by devoting particular attention to the training data structure, it is possible to accurately estimate the external wrench in a wide range of scenarios based solely on internal signals. As an illustration, we demonstrate a pin insertion experiment with 100-micron clearance and a hand-guiding experiment, both performed without external F/T sensors or joint torque sensors. Our result opens the possibility of equipping the existing 2.7 million industrial robots with Force Sensing and Force Control capabilities without any additional hardware.
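The abstract describes estimating the external wrench from the robot's internal signals alone. A common way to frame this (the paper itself uses a neural network; the closed-form relation below is only the standard rigid-body identity, shown as a hypothetical sketch with made-up numbers): the external wrench w maps to joint torques through the end-effector Jacobian as tau_ext = J^T w, so given joint-torque residuals (measured minus model-predicted torque) one can recover w by a least-squares pseudoinverse.

```python
import numpy as np

def external_wrench(jacobian, tau_residual):
    """Least-squares external-wrench estimate from joint-torque residuals.

    jacobian:     (6, n) geometric Jacobian at the end-effector.
    tau_residual: (n,)   measured minus model-predicted joint torques.
    Uses tau_ext = J^T w, hence w = pinv(J^T) @ tau_ext.
    """
    return np.linalg.pinv(jacobian.T) @ tau_residual

# Toy 6-DOF example (all values invented): pick a wrench,
# map it to joint torques, and recover it.
rng = np.random.default_rng(0)
J = rng.standard_normal((6, 6))                       # stand-in Jacobian
w_true = np.array([1.0, 0.0, -2.0, 0.1, 0.0, 0.05])   # [Fx, Fy, Fz, Tx, Ty, Tz]
tau = J.T @ w_true                                    # simulated torque residuals
w_est = external_wrench(J, tau)
print(np.allclose(w_est, w_true))
```

In practice the hard part, and the paper's contribution, is producing accurate torque residuals without F/T or joint-torque sensors, which is where the learned model and the structure of its training data come in.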


URL

https://arxiv.org/abs/2301.13413

PDF

https://arxiv.org/pdf/2301.13413.pdf

