Abstract
Despite the demonstrated effectiveness of transformer models in NLP and in image and video classification, the tools available for extracting features from captured IoT network flow packets capture neither sequential nor spatial patterns, which limits the applicability of transformer models. This work introduces a novel preprocessing method that adapts transformer models, the Vision Transformer (ViT) in particular, for IoT botnet attack detection using network flow packets. The approach extracts features from .pcap files and transforms each instance into a 1-channel 2D image, enabling ViT-based classification. In addition, the ViT model was modified so that any classifier can be used in place of the Multilayer Perceptron (MLP) head deployed in the original ViT paper. Models including a conventional feed-forward Deep Neural Network (DNN), LSTM, and Bidirectional LSTM (BLSTM) achieved competitive precision, recall, and F1-scores for multiclass attack detection when evaluated on two IoT attack datasets.
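As a rough illustration of the two ideas summarized above, the Python/PyTorch sketch below shows (a) how a 1-D flow-feature vector extracted from a .pcap file could be zero-padded and reshaped into a 1-channel 2D image for a ViT-style patch embedding, and (b) how the ViT's MLP head could be swapped for an LSTM classifier over the encoder's patch tokens. All names, the padding scheme, and the dimensions here are assumptions for illustration; they are not taken from the paper.

```python
import math
from typing import Optional

import torch
import torch.nn as nn


def features_to_image(features: torch.Tensor, side: Optional[int] = None) -> torch.Tensor:
    """Pad a 1-D flow-feature vector with zeros and reshape it into a
    1-channel 2D 'image' of shape (channels, height, width).
    The zero-padding and square layout are illustrative assumptions."""
    x = features.flatten().float()
    if side is None:
        side = math.ceil(math.sqrt(x.numel()))
    padded = torch.zeros(side * side)
    padded[: x.numel()] = x
    return padded.view(1, side, side)


class LSTMHead(nn.Module):
    """Hypothetical drop-in replacement for the ViT's MLP classification head:
    runs an LSTM over the encoder's patch-token embeddings and classifies
    from the final hidden state."""

    def __init__(self, embed_dim: int, num_classes: int, hidden_dim: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_patches, embed_dim) from the ViT encoder
        _, (h_n, _) = self.lstm(tokens)
        return self.fc(h_n[-1])  # logits: (batch, num_classes)


# Example: 23 extracted flow features -> a 1x5x5 image;
# fake encoder tokens -> class logits from the replacement head.
img = features_to_image(torch.rand(23))
print(img.shape)                           # torch.Size([1, 5, 5])
head = LSTMHead(embed_dim=64, num_classes=10)
print(head(torch.rand(8, 25, 64)).shape)   # torch.Size([8, 10])
```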
URL
https://arxiv.org/abs/2504.18781