Abstract
This paper introduces M2BeamLLM, a novel neural network framework for beam prediction in millimeter-wave (mmWave) massive multiple-input multiple-output (mMIMO) communication systems. M2BeamLLM integrates multi-modal sensor data, including images, radar, LiDAR, and GPS, and leverages the reasoning capabilities of large language models (LLMs) such as GPT-2 for beam prediction. By combining sensing-data encoding, multi-modal alignment and fusion, and supervised fine-tuning (SFT), M2BeamLLM achieves significantly higher beam prediction accuracy and robustness, outperforming traditional deep learning (DL) models in both standard and few-shot scenarios. Furthermore, its prediction performance improves consistently as the diversity of sensing modalities increases. Our study provides an efficient and intelligent beam prediction solution for vehicle-to-infrastructure (V2I) mmWave communication systems.
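To make the described pipeline concrete, the following is a minimal sketch of a multi-modal beam-prediction model in the spirit of M2BeamLLM: per-modality encoders project sensor features into the LLM embedding space, GPT-2 fuses the resulting tokens via self-attention, and a linear head classifies the beam index. All module names, feature dimensions, the codebook size NUM_BEAMS, and the one-token-per-modality fusion strategy are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn
from transformers import GPT2Model

NUM_BEAMS = 64   # assumed size of the mMIMO beam codebook
EMBED_DIM = 768  # hidden size of GPT-2 (small)

class ModalityEncoder(nn.Module):
    """Encodes one sensing modality (image/radar/LiDAR/GPS features)
    into a single token in the LLM embedding space."""
    def __init__(self, in_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(in_dim, EMBED_DIM),
            nn.GELU(),
            nn.LayerNorm(EMBED_DIM),  # aligns modality statistics before fusion
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)

class M2BeamSketch(nn.Module):
    def __init__(self, modality_dims: dict[str, int]):
        super().__init__()
        self.encoders = nn.ModuleDict(
            {name: ModalityEncoder(d) for name, d in modality_dims.items()}
        )
        self.llm = GPT2Model.from_pretrained("gpt2")  # backbone to be SFT-tuned
        self.head = nn.Linear(EMBED_DIM, NUM_BEAMS)   # beam-index classifier

    def forward(self, inputs: dict[str, torch.Tensor]) -> torch.Tensor:
        # One token per modality; fusion happens in GPT-2 self-attention.
        tokens = torch.stack(
            [self.encoders[name](x) for name, x in inputs.items()], dim=1
        )
        hidden = self.llm(inputs_embeds=tokens).last_hidden_state
        return self.head(hidden[:, -1])  # logits over the beam codebook

# Usage with pre-extracted per-modality feature vectors (dims are assumptions):
model = M2BeamSketch({"image": 512, "radar": 256, "lidar": 256, "gps": 2})
batch = {
    "image": torch.randn(4, 512),
    "radar": torch.randn(4, 256),
    "lidar": torch.randn(4, 256),
    "gps": torch.randn(4, 2),
}
logits = model(batch)           # shape: (4, NUM_BEAMS)
pred_beam = logits.argmax(-1)   # predicted beam indices

Training such a sketch with a cross-entropy loss on ground-truth beam indices would correspond to the supervised fine-tuning (SFT) stage mentioned in the abstract; how the paper actually encodes and aligns each modality is not specified here.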
URL
https://arxiv.org/abs/2506.14532