Abstract
Feature engineering is crucial for optimizing machine learning model performance, particularly in tabular data classification tasks. Leveraging advancements in natural language processing, this study presents a systematic approach to enrich tabular datasets with features derived from large language model embeddings. Through a comprehensive ablation study on diverse datasets, we assess the impact of RoBERTa and GPT-2 embeddings on ensemble classifiers, including Random Forest, XGBoost, and CatBoost. Results indicate that integrating embeddings with traditional numerical and categorical features often enhances predictive performance, especially on datasets with class imbalance or limited features and samples, such as UCI Adult, Heart Disease, Titanic, and Pima Indian Diabetes, with improvements particularly notable in XGBoost and CatBoost classifiers. Additionally, feature importance analysis reveals that LLM-derived features frequently rank among the most impactful for the predictions. This study provides a structured approach to embedding-based feature enrichment and illustrates its benefits in ensemble learning for tabular data.
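The enrichment pipeline the abstract describes, serializing each tabular row to text, embedding it with a language model, and concatenating the embedding with the original numerical and categorical features before training an ensemble classifier, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code: the `embed` function is a placeholder that returns random vectors standing in for RoBERTa/GPT-2 embeddings, and scikit-learn's `RandomForestClassifier` stands in for the Random Forest/XGBoost/CatBoost ensembles evaluated in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def row_to_text(row, columns):
    # Serialize a tabular row as "column: value" pairs for a language model.
    return "; ".join(f"{c}: {v:.2f}" for c, v in zip(columns, row))

def embed(texts, dim=32):
    # Placeholder: in the paper, a RoBERTa or GPT-2 encoder maps each
    # serialized row to a fixed-size embedding. Random vectors stand in here.
    return rng.normal(size=(len(texts), dim))

# Toy tabular data: 200 rows, 4 numeric features, binary labels.
columns = ["age", "bp", "chol", "bmi"]
X_num = rng.normal(size=(200, 4))
y = (X_num[:, 0] + X_num[:, 2] > 0).astype(int)

# Feature enrichment: concatenate original features with LLM-derived ones.
texts = [row_to_text(r, columns) for r in X_num]
X_emb = embed(texts)
X_full = np.hstack([X_num, X_emb])

X_tr, X_te, y_tr, y_te = train_test_split(X_full, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))

# Feature importance analysis: how do embedding columns rank against the
# original features? (With random placeholder embeddings the original
# features should dominate; the paper reports real LLM features ranking high.)
print("top feature index:", int(np.argmax(clf.feature_importances_)))
```

A real run would replace `embed` with a call to a pretrained encoder (e.g. mean-pooled RoBERTa hidden states over the serialized row text) before concatenation.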
URL
https://arxiv.org/abs/2411.01645