Paper Reading AI Learner

Neurofeedback-Driven 6-DOF Robotic Arm: Integration of Brain-Computer Interface with Arduino for Advanced Control

2024-10-29 12:55:04
Ihab A. Satam, Róbert Szabolcsi

Abstract

Brain-computer interface (BCI) applications in robotics are becoming increasingly popular. People with disabilities face real difficulties in performing simple activities such as grasping and handshaking. To help address this problem, using brain signals to control actuators is of great importance. In this project, the Emotive Insight, a Brain-Computer Interface (BCI) device, is used to collect brain signals and transform them into commands for controlling a robotic arm through an Arduino controller. The Emotive Insight captures brain signals, which are subsequently analyzed with the Emotive software and linked to the Arduino code. The HITI Brain software integrates these devices, allowing smooth communication between brain activity and the robotic arm. This system demonstrates how brain impulses can be used to control external devices directly. The results show that the system can be applied efficiently to robotic arms as well as to prosthetic arms with multiple degrees of freedom. In addition, the system can be used for other actuators such as bikes, mobile robots, and wheelchairs.
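To make the control pipeline concrete, the sketch below is a minimal illustration of the Arduino side of such a system: single-character commands arriving over the serial link (as they might be sent by the PC-side BCI software) are mapped to incremental movements of six hobby servos. The pin assignments, the '0'-'5'/'+'/'-' command protocol, and the step size are assumptions made for this example; they are not taken from the paper or from the HITI Brain software.

// Minimal Arduino sketch: drive a 6-DOF arm from single-character serial
// commands. The protocol ('0'..'5' selects a joint, '+'/'-' nudges it) and
// the pin choices are hypothetical, used here only for illustration.
#include <Servo.h>

const int NUM_JOINTS = 6;
const int SERVO_PINS[NUM_JOINTS] = {3, 5, 6, 9, 10, 11};  // assumed PWM pins
const int STEP_DEG = 5;                                    // increment per command

Servo joints[NUM_JOINTS];
int angles[NUM_JOINTS] = {90, 90, 90, 90, 90, 90};         // start all joints centred
int selected = 0;                                          // currently selected joint

void setup() {
  Serial.begin(9600);                       // link to the PC running the BCI software
  for (int i = 0; i < NUM_JOINTS; i++) {
    joints[i].attach(SERVO_PINS[i]);
    joints[i].write(angles[i]);
  }
}

void loop() {
  if (Serial.available() > 0) {
    char cmd = Serial.read();
    if (cmd >= '0' && cmd < '0' + NUM_JOINTS) {
      selected = cmd - '0';                 // choose which joint to move
    } else if (cmd == '+' || cmd == '-') {
      int delta = (cmd == '+') ? STEP_DEG : -STEP_DEG;
      angles[selected] = constrain(angles[selected] + delta, 0, 180);
      joints[selected].write(angles[selected]);
    }
  }
}

Keeping the microcontroller side this simple pushes all signal acquisition, classification, and command mapping onto the PC, so the Arduino only has to react to already-classified mental commands.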

Abstract (translated)

Applications of brain-computer interfaces (BCI) in robotics are receiving increasing attention. People with disabilities face real problems in performing simple activities such as grasping and handshaking; to help solve this problem, using brain signals to control actuators is of great importance. This project uses the Emotive Insight, a brain-computer interface device, to collect brain signals and transform them into commands for operating a robotic arm through an Arduino controller. The brain signals captured by the Emotive Insight are then analyzed with the Emotive software and linked to the Arduino code. The HITI Brain software integrates these devices, enabling smooth communication between brain activity and the robotic arm. The system shows how brain impulses can be used to control external devices directly. The results indicate that the system can be applied efficiently to robotic arms and is equally applicable to prosthetic arms with multiple degrees of freedom. In addition, the system can also be used for other actuators such as bicycles, mobile robots, and wheelchairs.

URL

https://arxiv.org/abs/2410.22008

PDF

https://arxiv.org/pdf/2410.22008.pdf

