Abstract
This paper presents a performance benchmarking study of a Gradient-Optimized Fuzzy Inference System (GF) classifier against several state-of-the-art machine learning models, including Random Forest, XGBoost, Logistic Regression, Support Vector Machines, and Neural Networks. The evaluation was conducted across five datasets from the UCI Machine Learning Repository, chosen for their diversity in input types, class distributions, and classification complexity. Unlike traditional Fuzzy Inference Systems that rely on derivative-free optimization methods, the GF leverages gradient descent to significantly improve training efficiency and predictive performance. Results demonstrate that the GF model achieved competitive, and in several cases superior, classification accuracy while maintaining high precision and exceptionally low training times. In particular, the GF exhibited strong consistency across folds and datasets, underscoring its robustness in handling noisy data and variable feature sets. These findings support the potential of gradient-optimized fuzzy systems as interpretable, efficient, and adaptable alternatives to more complex deep learning models in supervised learning tasks.
URL
https://arxiv.org/abs/2504.16263