
Minding the Politeness Gap in Cross-cultural Communication

2025-06-18 16:52:20
Yuka Machino, Matthias Hofer, Max Siegel, Joshua B. Tenenbaum, Robert D. Hawkins

Abstract

Misunderstandings in cross-cultural communication often arise from subtle differences in interpretation, but it is unclear whether these differences stem from the literal meanings assigned to words or from more general pragmatic factors such as norms around politeness and brevity. In this paper, we report three experiments examining how speakers of British and American English interpret intensifiers like "quite" and "very." To better understand these cross-cultural differences, we developed a computational cognitive model in which listeners recursively reason about speakers who balance informativity, politeness, and utterance cost. Our model comparisons suggested that cross-cultural differences in intensifier interpretation stem from a combination of (1) different literal meanings and (2) different weights on utterance cost. These findings challenge accounts based purely on semantic variation or politeness norms, demonstrating that cross-cultural differences in interpretation emerge from an intricate interplay between the two.
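
The model described in the abstract is a recursive pragmatic-reasoning model in the spirit of Rational Speech Act-style accounts: a pragmatic listener inverts a speaker model that trades off informativity, politeness, and utterance cost. The sketch below is a minimal, illustrative rendering of that idea, not the paper's implementation; the threshold semantics assumed for "quite" and "very", the 1-5 state scale, the social-value term, and all weights (`w_inf`, `w_pol`, `w_cost`, `alpha`, and the cost vector) are assumptions made for this example.

```python
import numpy as np

# Illustrative toy model of recursive pragmatic reasoning about intensifiers.
# All numbers below (thresholds, scale, costs, weights) are assumptions made
# for this sketch; they are not the parameters fitted in the paper.

UTTERANCES = ["plain", "quite", "very"]
STATES = np.array([1, 2, 3, 4, 5], dtype=float)   # e.g., how good a talk was
LITERAL = np.array([
    [1, 1, 1, 1, 1],   # "plain": compatible with every state
    [0, 0, 1, 1, 1],   # "quite": states >= 3 (assumed threshold)
    [0, 0, 0, 1, 1],   # "very":  states >= 4 (assumed threshold)
], dtype=float)
COST = np.array([0.0, 1.0, 1.0])                  # extra word -> extra cost (assumed)
SOCIAL = (STATES - STATES.mean()) / STATES.std()  # nicer-sounding states score higher


def literal_listener(literal):
    """L0: condition a uniform state prior on each utterance's literal meaning."""
    return literal / literal.sum(axis=1, keepdims=True)      # (utterance, state)


def speaker(literal, w_inf=3.0, w_pol=1.0, w_cost=1.0, alpha=1.0):
    """S1: soft-max choice of utterance, trading off informativity,
    politeness (expected social value), and utterance cost."""
    L0 = literal_listener(literal)
    informativity = np.log(L0 + 1e-10).T          # (state, utterance)
    politeness = (L0 @ SOCIAL)[None, :]           # expected social value per utterance
    utility = w_inf * informativity + w_pol * politeness - w_cost * COST
    exp_u = np.exp(alpha * utility)
    return exp_u / exp_u.sum(axis=1, keepdims=True)           # P(utterance | state)


def pragmatic_listener(literal, **speaker_weights):
    """L1: Bayesian inversion of the speaker model under a uniform state prior."""
    S1 = speaker(literal, **speaker_weights)      # (state, utterance)
    posterior = S1.T                              # uniform prior cancels out
    return posterior / posterior.sum(axis=1, keepdims=True)   # P(state | utterance)


# Two hypothetical listener populations that differ only in their cost weight:
# the same utterance (e.g., "quite good") then licenses different inferences.
print(pragmatic_listener(LITERAL, w_cost=0.5))
print(pragmatic_listener(LITERAL, w_cost=2.0))
```

Varying only `w_cost` (or only the literal thresholds in `LITERAL`) shifts the pragmatic listener's posterior over states, which is the kind of manipulation the abstract's model comparison refers to when separating semantic variation from differences in the weight placed on utterance cost.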

URL

https://arxiv.org/abs/2506.15623

PDF

https://arxiv.org/pdf/2506.15623.pdf

