Paper Reading AI Learner

Two-Stage Stance Labeling: User-Hashtag Heuristics with Graph Neural Networks

2024-04-16 02:18:30
Joshua Melton, Shannon Reid, Gabriel Terejanu, Siddharth Krishnan

Abstract

The high volume and rapid evolution of content on social media present major challenges for studying the stance of social media users. In this work, we develop a two-stage stance labeling method that utilizes the user-hashtag bipartite graph and the user-user interaction graph. In the first stage, a simple and efficient heuristic for stance labeling uses the user-hashtag bipartite graph to iteratively update the stance association of user and hashtag nodes via a label propagation mechanism. This set of soft labels is then integrated with the user-user interaction graph to train a graph neural network (GNN) model using semi-supervised learning. We evaluate this method on two large-scale datasets containing tweets related to climate change from June 2021 to June 2022 and to gun control from January 2022 to January 2023. Experiments demonstrate that both our user-hashtag heuristic and the semi-supervised GNN method outperform zero-shot stance labeling using LLMs such as GPT-4. Further analysis illustrates how the stance labels and the interaction graph can be used to evaluate the polarization of social media interactions on divisive issues such as climate change and gun control.
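
To make the two-stage pipeline concrete, below is a minimal, unofficial sketch (not the authors' released code) of the stage-one heuristic: stance scores are propagated back and forth between user and hashtag nodes of the bipartite graph, starting from a small set of manually seeded hashtags. The seed set, scoring rule, damping factor, and stopping criterion here are illustrative assumptions.

```python
# Minimal sketch of stage one: label propagation on the user-hashtag
# bipartite graph. Seeds, damping, and iteration count are assumptions
# for illustration, not the paper's exact configuration.
import numpy as np
from scipy import sparse

def propagate_stance(B, seed_hashtag_scores, n_iters=10, damping=0.5):
    """Iteratively update user and hashtag stance scores.

    B                   : (n_users, n_hashtags) sparse matrix,
                          B[u, h] = number of times user u used hashtag h
    seed_hashtag_scores : (n_hashtags,) array with +1/-1 for seed hashtags,
                          0 for unlabeled hashtags
    """
    # Row/column-normalize the bipartite adjacency so each update is a
    # weighted average of neighbor scores.
    user_deg = np.maximum(np.asarray(B.sum(axis=1)).ravel(), 1)
    tag_deg = np.maximum(np.asarray(B.sum(axis=0)).ravel(), 1)
    user_norm = sparse.diags(1.0 / user_deg)
    tag_norm = sparse.diags(1.0 / tag_deg)

    tag_scores = seed_hashtag_scores.astype(float).copy()
    user_scores = np.zeros(B.shape[0])
    seed_mask = seed_hashtag_scores != 0

    for _ in range(n_iters):
        # Users inherit the average stance of the hashtags they use.
        user_scores = user_norm @ (B @ tag_scores)
        # Hashtags inherit the average stance of their users, blended
        # with their previous value; seed hashtags stay clamped.
        new_tag = tag_norm @ (B.T @ user_scores)
        tag_scores = damping * tag_scores + (1 - damping) * new_tag
        tag_scores[seed_mask] = seed_hashtag_scores[seed_mask]

    return user_scores, tag_scores
```

Clamping the seed hashtags and damping the hashtag updates keep the propagation anchored to the initial annotations while still letting user behavior refine hashtag stances. The resulting soft user scores can then serve as noisy targets for stage two: a GNN trained with semi-supervised learning on the user-user interaction graph, for example supervising only the high-confidence users from the heuristic. A hedged sketch using PyTorch Geometric (layer sizes and the confidence threshold are again assumptions):

```python
# Sketch of stage two: semi-supervised node classification on the
# user-user interaction graph, supervised by high-confidence soft labels.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class StanceGNN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim=64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, 2)  # two stance classes

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)

def soft_label_loss(logits, heuristic_scores, threshold=0.5):
    """Cross-entropy only on users whose |heuristic score| exceeds the threshold."""
    mask = heuristic_scores.abs() > threshold
    labels = (heuristic_scores > 0).long()
    return F.cross_entropy(logits[mask], labels[mask])
```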

URL

https://arxiv.org/abs/2404.10228

PDF

https://arxiv.org/pdf/2404.10228.pdf

