Abstract
Technologies for recognizing facial attributes such as race, gender, age, and emotion have many applications, including surveillance, advertising content, sentiment analysis, and the study of demographic trends and social behaviors. Recognizing demographic characteristics and facial expressions from images is challenging because human facial attributes are complex and varied. Traditional approaches have employed convolutional neural networks (CNNs) and other deep learning techniques trained on large collections of labeled images. While these methods have demonstrated effective performance, there remains room for improvement. In this paper, we propose using vision-language models (VLMs) such as Generative Pre-trained Transformer (GPT), Gemini, Large Language and Vision Assistant (LLaVA), PaliGemma, and Microsoft Florence-2 to recognize facial attributes such as race, gender, age, and emotion from images containing human faces. Datasets including FairFace, AffectNet, and UTKFace are used to evaluate these solutions. The results show that VLMs are competitive with, if not superior to, traditional techniques. Additionally, we propose "FaceScanPaliGemma", a fine-tuned PaliGemma model for race, gender, age, and emotion recognition. It achieves accuracies of 81.1%, 95.8%, 80%, and 59.4% on race, gender, age-group, and emotion classification, respectively, outperforming the pre-trained version of PaliGemma, other VLMs, and state-of-the-art (SotA) methods. Finally, we propose "FaceScanGPT", a GPT-4o-based model that recognizes the above attributes when several individuals are present in the image, driven by a prompt engineered for a person with specific facial and/or physical attributes. The results underscore FaceScanGPT's superior multitasking capability, detecting an individual's attributes such as haircut, clothing color, and posture using only a prompt to drive the detection and recognition tasks.
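The prompt-driven recognition described above can be sketched as a small prompt-building and response-parsing routine. This is a minimal illustrative sketch, not the authors' actual pipeline: the prompt wording, the attribute keys, and the JSON reply format are all assumptions, and the actual VLM call (e.g., to GPT-4o via an API) is omitted.

```python
import json

# Illustrative attribute schema (an assumption, not the paper's exact prompt).
ATTRIBUTES = ["race", "gender", "age_group", "emotion"]


def build_prompt(attributes=ATTRIBUTES):
    """Build an instruction asking a VLM to return facial attributes as JSON."""
    fields = ", ".join(f'"{a}"' for a in attributes)
    return (
        "Look at the face in this image and respond with a JSON object "
        f"containing exactly these keys: {fields}. "
        "Use short single-word or short-phrase values."
    )


def parse_response(text, attributes=ATTRIBUTES):
    """Parse the model's JSON reply, keeping only the expected keys."""
    data = json.loads(text)
    return {k: data[k] for k in attributes if k in data}


# Example: parsing a reply a VLM might return for one face.
reply = '{"race": "East Asian", "gender": "female", "age_group": "20-29", "emotion": "happy"}'
print(parse_response(reply))
```

In practice, `build_prompt()` would be sent alongside the image to the chosen VLM, and `parse_response()` would map the free-form reply back onto the fixed label set for evaluation against datasets such as FairFace or AffectNet.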
URL
https://arxiv.org/abs/2410.24148