Paper Reading AI Learner

Unveiling and Manipulating Prompt Influence in Large Language Models

2024-05-20 09:15:36
Zijian Feng, Hanzhang Zhou, Zixiao Zhu, Junlang Qian, Kezhi Mao

Abstract

Prompts play a crucial role in guiding the responses of Large Language Models (LLMs). However, the intricate role of individual tokens in prompts, known as input saliency, in shaping the responses remains largely underexplored. Existing saliency methods either misalign with LLM generation objectives or rely heavily on linearity assumptions, leading to potential inaccuracies. To address this, we propose Token Distribution Dynamics (TDD), a simple yet effective approach to unveil and manipulate the role of prompts in generating LLM outputs. TDD leverages the robust interpreting capabilities of the language model head (LM head) to assess input saliency. It projects input tokens into the embedding space and then estimates their significance based on distribution dynamics over the vocabulary. We introduce three TDD variants: forward, backward, and bidirectional, each offering unique insights into token relevance. Extensive experiments reveal that TDD surpasses state-of-the-art baselines by a large margin in elucidating the causal relationships between prompts and LLM outputs. Beyond mere interpretation, we apply TDD to two prompt manipulation tasks for controlled text generation: zero-shot toxic language suppression and sentiment steering. Empirical results underscore TDD's proficiency in identifying both toxic and sentimental cues in prompts, subsequently mitigating toxicity or modulating sentiment in the generated content.
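At a high level, the abstract describes reusing the LM head as an interpreter: each input token's representation is projected onto the vocabulary, and saliency is read off from how the resulting token distributions change across positions. The sketch below illustrates this idea in NumPy under assumed details: toy random embeddings stand in for a real model, and KL divergence between successive vocabulary distributions serves as the "dynamics" signal for the forward variant. It is an illustrative simplification, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def tdd_forward_saliency(token_embs, lm_head):
    """Hypothetical sketch of a forward TDD-style saliency score.

    Each input token embedding is projected through the LM head to get a
    distribution over the vocabulary; a token's saliency is taken as the
    KL divergence between its distribution and the previous position's,
    i.e. how much it shifts the token distribution.
    """
    logits = token_embs @ lm_head.T          # (seq_len, vocab_size)
    probs = softmax(logits)
    saliency = np.zeros(len(token_embs))
    for t in range(1, len(token_embs)):
        p, q = probs[t], probs[t - 1]
        saliency[t] = np.sum(p * np.log(p / q))  # KL(p || q) >= 0
    return saliency

# Toy demo with random embeddings (assumed shapes, not from the paper).
rng = np.random.default_rng(0)
seq_len, dim, vocab = 5, 8, 20
embs = rng.normal(size=(seq_len, dim))   # one embedding per prompt token
head = rng.normal(size=(vocab, dim))     # LM head projection matrix
scores = tdd_forward_saliency(embs, head)
print(scores)
```

A backward variant would traverse positions in the opposite direction, and a bidirectional variant could combine both passes; in a real setting the embeddings would come from the model's input layer and `lm_head` from its output projection.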


URL

https://arxiv.org/abs/2405.11891

PDF

https://arxiv.org/pdf/2405.11891.pdf

