Paper Reading AI Learner

Smartphone monitoring of smiling as a behavioral proxy of well-being in everyday life

2025-12-10 15:56:37
Ming-Zher Poh, Shun Liao, Marco Andreetto, Daniel McDuff, Jonathan Wang, Paolo Di Achille, Jiang Wu, Yun Liu, Lawrence Cai, Eric Teasley, Mark Malhotra, Anupam Pathak, Shwetak Patel

Abstract

Subjective well-being is a cornerstone of individual and societal health, yet its scientific measurement has traditionally relied on self-report methods prone to recall bias and high participant burden. This has left a gap in our understanding of well-being as it is expressed in everyday life. We hypothesized that candid smiles captured during natural smartphone interactions could serve as a scalable, objective behavioral correlate of positive affect. To test this, we analyzed 405,448 video clips passively recorded from 233 consented participants over one week. Using a deep learning model to quantify smile intensity, we identified distinct diurnal and daily patterns. Daily patterns of smile intensity across the week showed a strong correlation with national survey data on happiness (r=0.92), and diurnal rhythms corresponded closely with established results from the day reconstruction method (r=0.80). Higher daily mean smile intensity was significantly associated with more physical activity (beta coefficient = 0.043, 95% CI [0.001, 0.085]) and greater light exposure (beta coefficient = 0.038, 95% CI [0.013, 0.063]), whereas no significant effects were found for smartphone use. These findings suggest that passive smartphone sensing could serve as a powerful, ecologically valid methodology for studying the dynamics of affective behavior, and open the door to understanding this behavior at a population scale.

URL

https://arxiv.org/abs/2512.11905

PDF

https://arxiv.org/pdf/2512.11905.pdf

