Test-Time Code-Switching for Cross-lingual Aspect Sentiment Triplet Extraction

2025-01-24 00:00:51
Dongming Sheng, Kexin Han, Hao Li, Yan Zhang, Yucheng Huang, Jun Lang, Wenqiang Liu

Abstract

Aspect Sentiment Triplet Extraction (ASTE) is a thriving research area, with impressive results achieved on high-resource languages. However, cross-lingual transfer for the ASTE task remains relatively unexplored, and current code-switching methods still suffer from term boundary detection issues and out-of-dictionary problems. In this study, we introduce a novel Test-Time Code-SWitching (TT-CSW) framework, which bridges the gap between the bilingual training phase and monolingual test-time prediction. During training, a generative model is developed on bilingual code-switched training data and can produce bilingual ASTE triplets for bilingual inputs. At test time, we employ an alignment-based code-switching technique for test-time augmentation. Extensive experiments on cross-lingual ASTE datasets validate the effectiveness of the proposed method. We achieve an average improvement of 3.7% in weighted-averaged F1 across four datasets in different languages. Additionally, we set a benchmark using ChatGPT and GPT-4, and demonstrate that even smaller generative models fine-tuned with our proposed TT-CSW framework surpass ChatGPT and GPT-4 by 14.2% and 5.0%, respectively.
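
To make the two ideas in the abstract concrete, below is a minimal, hypothetical sketch (not the authors' implementation): it illustrates the (aspect, opinion, sentiment) triplet format used in ASTE and a toy word-level, alignment-based code-switching step that swaps aligned terms into the other language. The AsteTriplet class, the code_switch helper, and the English-to-Spanish alignment table are illustrative assumptions, not artifacts from the paper.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class AsteTriplet:
    aspect: str     # aspect term, e.g. "battery life"
    opinion: str    # opinion term, e.g. "great"
    sentiment: str  # polarity label: "POS", "NEG", or "NEU"

def code_switch(tokens: List[str], alignment: Dict[str, str]) -> List[str]:
    """Replace each token that has an aligned counterpart in the other
    language; tokens without an alignment are kept unchanged."""
    return [alignment.get(tok.lower(), tok) for tok in tokens]

# Toy English review sentence and a hypothetical English->Spanish alignment table.
sentence = "The battery life is great".split()
en_es = {"battery": "batería", "life": "duración", "great": "excelente"}

switched = code_switch(sentence, en_es)
print(" ".join(switched))  # -> The batería duración is excelente

# Gold triplet for the original sentence; under the framework described in the
# abstract, the generative model is trained to emit such triplets for
# code-switched (bilingual) inputs as well.
gold = AsteTriplet(aspect="battery life", opinion="great", sentiment="POS")
print(gold)

In the framework the abstract describes, code-switched inputs of this kind are used both to build the bilingual training data and as a test-time augmentation, with the generative model expected to produce triplets whose terms may themselves appear in either language.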

URL

https://arxiv.org/abs/2501.14144

PDF

https://arxiv.org/pdf/2501.14144.pdf

