Paper Reading AI Learner

A Multi-View Joint Learning Framework for Embedding Clinical Codes and Text Using Graph Neural Networks

2023-01-27 09:19:03
Lecheng Kong, Christopher King, Bradley Fritz, Yixin Chen

Abstract

Learning to represent free text is a core task in many clinical machine learning (ML) applications, as clinical text contains observations and plans not otherwise available for inference. State-of-the-art methods use large language models developed with immense computational resources and training data; however, applying these models is challenging because of the highly varying syntax and vocabulary in clinical free text. Structured information such as International Classification of Disease (ICD) codes often succinctly abstracts the most important facts of a clinical encounter and yields good performance, but is often not as available as clinical text in real-world scenarios. We propose a multi-view learning framework that jointly learns from codes and text, combining the availability and forward-looking nature of text with the better predictive performance of ICD codes. The learned text embeddings can be used as inputs to predictive algorithms independent of the ICD codes during inference. Our approach uses a Graph Neural Network (GNN) to process ICD codes and a Bi-LSTM to process text. We apply Deep Canonical Correlation Analysis (DCCA) to enforce the two views to learn a similar representation of each patient. In experiments using planned surgical procedure text, our model outperforms BERT models fine-tuned to clinical data, and in experiments using diverse text in MIMIC-III, our model is competitive with a fine-tuned BERT at a tiny fraction of its computational effort.
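To make the DCCA objective mentioned in the abstract concrete, here is a minimal numpy sketch of the quantity DCCA maximizes: the total canonical correlation between two views' embeddings (the sum of singular values of T = S11^{-1/2} S12 S22^{-1/2}). This is an illustrative re-derivation, not the authors' implementation; the function name, the regularization constant, and the toy data are assumptions.

```python
import numpy as np

def cca_correlation(h1, h2, reg=1e-4):
    """Total canonical correlation between two views (the DCCA training
    signal). h1: (n, d1) embeddings from one view (e.g. the code GNN),
    h2: (n, d2) embeddings from the other (e.g. the text Bi-LSTM).
    `reg` is a small ridge term for numerical stability (an assumption)."""
    n = h1.shape[0]
    # Center each view.
    h1 = h1 - h1.mean(axis=0)
    h2 = h2 - h2.mean(axis=0)
    # Regularized within-view and cross-view covariances.
    s11 = (h1.T @ h1) / (n - 1) + reg * np.eye(h1.shape[1])
    s22 = (h2.T @ h2) / (n - 1) + reg * np.eye(h2.shape[1])
    s12 = (h1.T @ h2) / (n - 1)

    def inv_sqrt(m):
        # Inverse matrix square root via eigendecomposition (m is SPD).
        w, v = np.linalg.eigh(m)
        return v @ np.diag(1.0 / np.sqrt(w)) @ v.T

    t = inv_sqrt(s11) @ s12 @ inv_sqrt(s22)
    # Sum of singular values = total canonical correlation, in [0, min(d1, d2)].
    return np.linalg.svd(t, compute_uv=False).sum()
```

In DCCA training, the negative of this quantity serves as the loss, so gradient descent pushes the two encoders toward maximally correlated per-patient representations; at inference time only the text encoder is needed, which is what lets the model run without ICD codes.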


URL

https://arxiv.org/abs/2301.11608

PDF

https://arxiv.org/pdf/2301.11608.pdf

