Paper Reading AI Learner

Temporal changes in stimulus perception improve bio-inspired source seeking

2019-03-25 12:53:18
A. Pequeño-Zurro, D. Shaikh, I. Rañó

Abstract

Braitenberg vehicles are well-known qualitative models of sensor-driven animal source seeking (biological taxes) that locally navigate a stimulus function. These models ultimately depend on the perceived stimulus values, although there is biological evidence that animals also use the temporal changes in the stimulus as an information source for taxis behaviour. The time evolution of the stimulus values depends on the agent's (animal or robot) velocity, while simultaneously the velocity is typically the variable to control. This circular dependency appears, for instance, when using optical flow to control the motion of a robot, where it is resolved by fixing the forward speed and controlling only the steering rate. This paper presents a new mathematical model of a bio-inspired source-seeking controller that includes the rate of change of the stimulus in the velocity control mechanism. The above-mentioned circular dependency results in a closed-loop model represented by a set of differential-algebraic equations (DAEs), which can be converted to non-linear ordinary differential equations (ODEs) under some assumptions. Theoretical analysis of the model shows that including a term dependent on the temporal evolution of the stimulus improves the behaviour of the closed-loop system compared to using the stimulus values alone. We illustrate the theoretical results through a set of simulations.
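
A note on the circular dependency mentioned in the abstract: it follows directly from the chain rule. For an agent at position $\mathbf{x}(t)$ moving through a static stimulus field $S(\mathbf{x})$, the perceived rate of change of the stimulus is

$\dot{S}(t) = \nabla S(\mathbf{x}(t)) \cdot \dot{\mathbf{x}}(t)$

so a controller that sets the velocity $\dot{\mathbf{x}}$ as a function of $\dot{S}$ feeds its own output back into its input. With the forward speed $v$ fixed and only the heading $\theta$ steered, this becomes $\dot{S} = v\,\nabla S \cdot (\cos\theta, \sin\theta)$, which depends on the state rather than directly on the control input, breaking the algebraic loop.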

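As an illustration only (a minimal sketch, not the DAE controller analysed in the paper), the Python snippet below simulates a Braitenberg-2b-style unicycle whose steering rate combines the left/right sensed stimulus difference with a finite-difference estimate of its temporal change, while the forward speed is held fixed as described above. The Gaussian stimulus field, the sensor offset, and the gains k_p and k_d are assumptions chosen for the example:

import numpy as np

def stimulus(p):
    # Assumed stimulus field: Gaussian peak at the origin (illustrative only).
    return np.exp(-0.1 * np.dot(p, p))

def simulate(T=60.0, dt=0.01):
    # Unicycle state: position (x, y) and heading theta.
    x = np.array([8.0, -5.0])
    theta = 0.0
    v = 0.5                      # fixed forward speed breaks the circular dependency
    k_p, k_d = 4.0, 2.0          # assumed gains: stimulus difference and its rate
    offset = 0.3                 # assumed lateral sensor offset
    prev_diff = None
    trajectory = [x.copy()]
    for _ in range(int(T / dt)):
        # Left/right sensor positions in the body frame.
        normal = np.array([-np.sin(theta), np.cos(theta)])
        s_left = stimulus(x + offset * normal)
        s_right = stimulus(x - offset * normal)
        diff = s_left - s_right
        # Finite-difference estimate of the temporal change of the perceived difference.
        d_diff = 0.0 if prev_diff is None else (diff - prev_diff) / dt
        prev_diff = diff
        # Steering rate uses both the stimulus values and their temporal change.
        omega = k_p * diff + k_d * d_diff
        # Euler integration of the unicycle kinematics.
        x = x + dt * v * np.array([np.cos(theta), np.sin(theta)])
        theta += dt * omega
        trajectory.append(x.copy())
    return np.array(trajectory)

if __name__ == "__main__":
    traj = simulate()
    print("final distance to source:", np.linalg.norm(traj[-1]))

Setting k_d = 0 recovers a plain stimulus-value controller, so the two variants can be compared directly on the same field, in the spirit of the comparison the abstract describes.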

URL

https://arxiv.org/abs/1903.10279

PDF

https://arxiv.org/pdf/1903.10279.pdf

