Paper Reading AI Learner

Multi Class Depression Detection Through Tweets using Artificial Intelligence

2024-04-19 12:47:56
Muhammad Osama Nusrat, Waseem Shahzad, Saad Ahmed Jamal

Abstract

Depression is a significant global health issue. According to the World Health Organization (WHO), as of 2023, more than 280 million people worldwide were living with depression; if the problem is not taken seriously, this number will grow rapidly. Roughly 4.89 billion people use social media, expressing their feelings and emotions on platforms such as Twitter, Facebook, Reddit, and Instagram. These platforms therefore contain valuable information for research, and considerable research has been conducted across various social media platforms. However, certain limitations persist in that work. In particular, previous studies focused only on detecting depression and its intensity in tweets, and dataset labeling was often inaccurate. In this research, five types of depression (bipolar, major, psychotic, atypical, and postpartum) were predicted from tweets in the Twitter database, labeled using a lexicon-based approach. Explainable AI was used to provide reasoning by highlighting the parts of tweets that indicate the type of depression. Bidirectional Encoder Representations from Transformers (BERT) was used for feature extraction and training, and machine learning and deep learning methodologies were used to train the models. The BERT model produced the most promising results, achieving an overall accuracy of 0.96.
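The abstract mentions that tweets were assigned to the five depression types via lexicon-based labeling. A minimal sketch of that idea, assuming simple keyword lexicons (the phrases below are hypothetical placeholders, not the authors' actual lexicons):

```python
# Hedged sketch of lexicon-based labeling: assign a tweet the depression
# type whose keyword lexicon matches the most phrases. The lexicons here
# are illustrative placeholders, not the ones used in the paper.

LEXICONS = {
    "bipolar": {"mood swings", "manic episode", "mania"},
    "major": {"hopeless", "worthless", "no energy"},
    "psychotic": {"hearing voices", "hallucination", "paranoid"},
    "atypical": {"oversleeping", "overeating", "heavy limbs"},
    "postpartum": {"after giving birth", "newborn", "baby blues"},
}

def label_tweet(tweet):
    """Return the best-matching depression type, or None if no
    lexicon phrase occurs in the tweet."""
    text = tweet.lower()
    scores = {
        dep_type: sum(phrase in text for phrase in phrases)
        for dep_type, phrases in LEXICONS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

print(label_tweet("I keep hearing voices and feel paranoid all day"))
# -> psychotic
```

In the paper's pipeline, labels produced this way would then supervise the BERT classifier; the lexicon step only bootstraps training data.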


URL

https://arxiv.org/abs/2404.13104

PDF

https://arxiv.org/pdf/2404.13104.pdf

