Paper Reading AI Learner

Adapting Mental Health Prediction Tasks for Cross-lingual Learning via Meta-Training and In-context Learning with Large Language Model

2024-04-13 17:11:35
Zita Lifelo, Huansheng Ning, Sahraoui Dhelim

Abstract

Timely identification is essential for the effective management of mental health conditions such as depression. However, current research fails to adequately address the prediction of mental health conditions from social media data in low-resource African languages such as Swahili. This study introduces two distinct approaches that utilise model-agnostic meta-learning and leverage large language models (LLMs) to address this gap. Experiments are conducted on three datasets translated into the low-resource language and applied to four mental health tasks: stress, depression, depression severity, and suicidal ideation prediction. We first apply a meta-learning model with self-supervision, which yields an improved model initialisation for rapid adaptation and cross-lingual transfer. The results show that the meta-trained model performs significantly better than standard fine-tuning, outperforming baseline fine-tuning in macro F1 score by 18% and 0.8% with XLM-R and mBERT, respectively. In parallel, we use LLMs' in-context learning capabilities to assess their accuracy on the Swahili mental health prediction tasks by analysing different cross-lingual prompting approaches. Our analysis shows that Swahili prompts performed better than cross-lingual prompts but worse than English prompts. Our findings show that in-context learning can be achieved via cross-lingual transfer using carefully crafted prompt templates with examples and instructions.
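The meta-training side of the paper follows the model-agnostic meta-learning (MAML) recipe: an inner loop adapts the model to each task's support set, and an outer loop updates the shared initialisation from query-set losses so it transfers rapidly to new tasks and languages. Below is a minimal PyTorch sketch of one such meta-training step; the model, task batching, and hyperparameters (`inner_lr`, `meta_optimizer`) are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of one MAML-style meta-training step in PyTorch.
# Assumptions (not from the paper): `model` is a text classifier whose
# forward takes a tensor batch, and each task yields
# (support_x, support_y, query_x, query_y) splits.
import torch
import torch.nn.functional as F
from torch.func import functional_call

def maml_step(model, tasks, meta_optimizer, inner_lr=1e-3):
    meta_loss = 0.0
    for support_x, support_y, query_x, query_y in tasks:
        params = dict(model.named_parameters())
        # Inner loop: one gradient step on the task's support set.
        support_loss = F.cross_entropy(
            functional_call(model, params, (support_x,)), support_y)
        grads = torch.autograd.grad(
            support_loss, list(params.values()), create_graph=True)
        adapted = {name: p - inner_lr * g
                   for (name, p), g in zip(params.items(), grads)}
        # Outer loss: evaluate the adapted parameters on the query set.
        meta_loss = meta_loss + F.cross_entropy(
            functional_call(model, adapted, (query_x,)), query_y)
    # Outer loop: update the shared initialisation across all tasks.
    meta_optimizer.zero_grad()
    meta_loss.backward()
    meta_optimizer.step()
    return meta_loss.item()
```

The `create_graph=True` flag keeps the inner-loop gradients differentiable so the outer update can backpropagate through the adaptation step, which is what distinguishes MAML from plain multi-task fine-tuning.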

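For the in-context learning side, the cross-lingual prompting idea can be illustrated with a few-shot template that pairs English task instructions with Swahili labelled examples. The template and the example posts below are invented for illustration; they are not the authors' prompts or data.

```python
# Illustrative few-shot prompt for cross-lingual in-context learning:
# English instructions paired with Swahili examples. The template and
# the example posts are hypothetical, made up for this sketch.
FEW_SHOT_PROMPT = """\
You are a mental health classifier. Read the Swahili social media post
and answer with exactly one label: "depression" or "no depression".

Post: "Siwezi kulala, sina hamu ya chochote, maisha hayana maana."
Label: depression

Post: "Leo nimefurahia sana matembezi na marafiki zangu."
Label: no depression

Post: "{post}"
Label:"""

def build_prompt(post: str) -> str:
    """Fill the template with the Swahili post to classify."""
    return FEW_SHOT_PROMPT.format(post=post)
```

Per the abstract's findings, the instruction language is itself a design choice: all-Swahili prompts outperformed mixed cross-lingual prompts but trailed all-English ones on these tasks.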
URL

https://arxiv.org/abs/2404.09045

PDF

https://arxiv.org/pdf/2404.09045.pdf

