Abstract
The Internet of Things (IoT) faces tremendous security challenges. Machine learning models can be used to tackle the growing number of cyber-attack variations targeting IoT systems, but the increasing threat posed by adversarial attacks reinforces the need for reliable defense strategies. This work describes the types of constraints required for an adversarial cyber-attack example to be realistic and proposes a methodology for a trustworthy adversarial robustness analysis with a realistic adversarial evasion attack vector. The proposed methodology was used to evaluate three supervised algorithms, Random Forest (RF), Extreme Gradient Boosting (XGB), and Light Gradient Boosting Machine (LGBM), and one unsupervised algorithm, Isolation Forest (IFOR). Constrained adversarial examples were generated with the Adaptative Perturbation Pattern Method (A2PM), and evasion attacks were performed against models created with regular and adversarial training. Even though RF was the least affected in binary classification, XGB consistently achieved the highest accuracy in multi-class classification. The obtained results evidence the inherent susceptibility of tree-based algorithms and ensembles to adversarial evasion attacks and demonstrate the benefits of adversarial training and a security-by-design approach for more robust IoT network intrusion detection.
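The evaluation pipeline the abstract describes — train a tree-based model, attack it with constrained adversarial examples, then retrain with adversarial augmentation — can be sketched as below. This is a minimal illustration, not the paper's implementation: it uses synthetic data in place of an IoT intrusion dataset, a Random Forest in place of the full model set, and simple range-clipped noise as a stand-in for the constrained examples A2PM generates.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for IoT network flow features (assumption: the paper
# evaluates on a real intrusion detection dataset, not reproduced here).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Regular training: fit on clean data only.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
clean_acc = model.score(X_te, y_te)

def perturb(X, lo, hi, eps=0.1):
    """Bounded perturbation clipped to observed feature ranges — a crude
    placeholder for the domain constraints a realistic evasion attack must
    respect (A2PM itself adapts patterns per class; not replicated here)."""
    noise = rng.uniform(-eps, eps, size=X.shape)
    return np.clip(X + noise, lo, hi)

lo, hi = X_tr.min(axis=0), X_tr.max(axis=0)
X_adv = perturb(X_te, lo, hi)
adv_acc = model.score(X_adv, y_te)  # accuracy under evasion attack

# Adversarial training: augment the training set with perturbed copies,
# then re-evaluate robustness on the same adversarial test set.
X_aug = np.vstack([X_tr, perturb(X_tr, lo, hi)])
y_aug = np.concatenate([y_tr, y_tr])
robust = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_aug, y_aug)
robust_acc = robust.score(X_adv, y_te)

print(f"clean={clean_acc:.3f} adversarial={adv_acc:.3f} robust={robust_acc:.3f}")
```

Comparing `clean_acc`, `adv_acc`, and `robust_acc` mirrors the paper's comparison of regularly trained and adversarially trained models under attack; with a real attack method the gap between the first two is what quantifies susceptibility.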
URL
https://arxiv.org/abs/2301.13122