Physically realistic materials are pivotal for enhancing the realism of 3D assets across diverse applications and lighting conditions. However, existing 3D assets and generative models often lack authentic material properties, and assigning materials manually in graphics software is tedious and time-consuming. In this paper, we exploit advancements in Multimodal Large Language Models (MLLMs), particularly GPT-4V, to present a novel approach, Make-it-Real: 1) We demonstrate that GPT-4V can effectively recognize and describe materials, allowing the construction of a detailed material library. 2) Using a combination of visual cues and hierarchical text prompts, GPT-4V precisely identifies and aligns materials with the corresponding components of 3D objects. 3) The matched materials are then meticulously applied as references for generating new SVBRDF materials from the original diffuse maps, significantly enhancing visual authenticity. Make-it-Real integrates smoothly into the 3D content creation workflow, demonstrating its utility as an essential tool for developers of 3D assets.
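As a loose illustration of the last step, the sketch below modulates a matched reference material's scalar properties with the diffuse map to produce spatially varying SVBRDF maps. This is not the paper's estimator: the map names, the luminance-modulation heuristic, and the reference-entry fields are all assumptions.

```python
import numpy as np

def synthesize_svbrdf(diffuse, ref):
    """Loose sketch: derive per-pixel SVBRDF maps for a matched region by
    modulating a reference material's scalar properties with the diffuse
    map's luminance. `ref` is a hypothetical library entry, e.g.
    {"roughness": 0.4, "metallic": 0.9}; the real pipeline is more involved."""
    # Per-pixel luminance of the diffuse (albedo) map, in [0, 1].
    lum = diffuse @ np.array([0.299, 0.587, 0.114])
    # Spatially varying roughness: the reference value scaled by local
    # shading variation so the map follows the texture's detail.
    roughness = np.clip(ref["roughness"] * (1.5 - lum), 0.0, 1.0)
    metallic = np.full_like(lum, ref["metallic"])
    return {"albedo": diffuse, "roughness": roughness, "metallic": metallic}

maps = synthesize_svbrdf(np.random.rand(256, 256, 3),
                         {"roughness": 0.4, "metallic": 0.9})
print({k: v.shape for k, v in maps.items()})
```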
https://arxiv.org/abs/2404.16829
Our objective is to discover and localize monotonic temporal changes in a sequence of images. To achieve this, we exploit a simple proxy task of ordering a shuffled image sequence, with 'time' serving as a supervisory signal, since only changes that are monotonic with time can give rise to the correct ordering. We also introduce a flexible transformer-based model for general-purpose ordering of image sequences of arbitrary length with built-in attribution maps. After training, the model successfully discovers and localizes monotonic changes while ignoring cyclic and stochastic ones. We demonstrate applications of the model in multiple video settings covering different scene and object types, discovering both object-level and environmental changes in unseen sequences. We also demonstrate that the attention-based attribution maps function as effective prompts for segmenting the changing regions, and that the learned representations can be used for downstream applications. Finally, we show that the model achieves the state of the art on standard benchmarks for ordering a set of images.
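A minimal sketch of the ordering proxy task: a small transformer scores, for each frame of a shuffled sequence, a distribution over positions, and is supervised by the true temporal indices. The feature dimension, depth, and head below are placeholder choices, not the paper's architecture.

```python
import torch
import torch.nn as nn

class OrderingModel(nn.Module):
    # Permutation-prediction head: each shuffled frame feature gets a
    # logit over the possible positions in the sequence.
    def __init__(self, feat_dim=128, max_len=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(feat_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(feat_dim, max_len)

    def forward(self, feats):                   # feats: (batch, seq, feat_dim)
        return self.head(self.encoder(feats))   # (batch, seq, max_len)

model = OrderingModel()
feats = torch.randn(2, 8, 128)                  # stand-in per-frame features
perm = torch.stack([torch.randperm(8) for _ in range(2)])
shuffled = torch.gather(feats, 1, perm.unsqueeze(-1).expand_as(feats))
logits = model(shuffled)
# Supervision is "time" itself: the target for each shuffled slot is its
# original temporal index, so only monotonic changes help the model.
loss = nn.functional.cross_entropy(logits.reshape(-1, 8), perm.reshape(-1))
loss.backward()
```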
https://arxiv.org/abs/2404.16828
In this report, we introduce InternVL 1.5, an open-source multimodal large language model (MLLM), to bridge the capability gap between open-source and proprietary commercial models in multimodal understanding. We introduce three simple improvements: (1) Strong Vision Encoder: we explored a continuous learning strategy for the large-scale vision foundation model InternViT-6B, boosting its visual understanding capabilities and allowing it to be transferred and reused across different LLMs. (2) Dynamic High-Resolution: we divide images into 1 to 40 tiles of 448×448 pixels according to the aspect ratio and resolution of the input, supporting inputs up to 4K resolution. (3) High-Quality Bilingual Dataset: we carefully collected a high-quality bilingual dataset covering common scenes and document images, annotated with English and Chinese question-answer pairs, significantly enhancing performance on OCR- and Chinese-related tasks. We evaluate InternVL 1.5 through a series of benchmarks and comparative studies. Compared to both open-source and proprietary models, InternVL 1.5 shows competitive performance, achieving state-of-the-art results in 8 of 18 benchmarks. Code has been released at this https URL.
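For intuition, the tile-grid selection in (2) can be approximated in a few lines: pick the grid of 448×448 tiles (1 to 40 in total) whose aspect ratio best matches the input. This sketches an assumed nearest-aspect-ratio heuristic, not necessarily InternVL 1.5's exact rule.

```python
def pick_tile_grid(width, height, tile=448, max_tiles=40):
    """Choose a (cols, rows) grid of 448x448 tiles (1..40 tiles total)
    whose aspect ratio is closest to the input image's."""
    aspect = width / height
    candidates = [(c, r) for c in range(1, max_tiles + 1)
                  for r in range(1, max_tiles + 1) if c * r <= max_tiles]
    cols, rows = min(candidates, key=lambda g: abs(g[0] / g[1] - aspect))
    return cols, rows, (cols * tile, rows * tile)   # grid and resize target

print(pick_tile_grid(4032, 3024))   # a 4:3 photo -> a near-4:3 grid, e.g. 4x3
```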
https://arxiv.org/abs/2404.16821
As large language models (LLMs) see increasing adoption across the globe, it is imperative for LLMs to be representative of the linguistic diversity of the world. India is a linguistically diverse country of 1.4 billion people. To facilitate research on multilingual LLM evaluation, we release IndicGenBench -- the largest benchmark for evaluating LLMs on user-facing generation tasks across a diverse set of 29 Indic languages covering 13 scripts and 4 language families. IndicGenBench is composed of diverse generation tasks like cross-lingual summarization, machine translation, and cross-lingual question answering. IndicGenBench extends existing benchmarks to many Indic languages through human curation, providing multi-way parallel evaluation data for many under-represented Indic languages for the first time. We evaluate a wide range of proprietary and open-source LLMs, including GPT-3.5, GPT-4, PaLM-2, mT5, Gemma, BLOOM, and LLaMA, on IndicGenBench in a variety of settings. The largest PaLM-2 model performs best on most tasks; however, there is a significant performance gap in all languages compared to English, showing that further research is needed to develop more inclusive multilingual language models. IndicGenBench is released at this http URL
https://arxiv.org/abs/2404.16816
Recent advances in large pre-trained vision-language models have demonstrated remarkable performance on zero-shot downstream tasks. Building upon this, recent studies such as CoOp and CoCoOp have proposed the use of prompt learning, where the context within a prompt is replaced with learnable vectors, leading to significant improvements over manually crafted prompts. However, the performance improvement for unseen classes is still marginal, and to tackle this problem, data augmentation has frequently been used in traditional zero-shot learning techniques. Through our experiments, we have identified an important issue in CoOp and CoCoOp: the context learned through traditional image augmentation is biased toward seen classes, negatively impacting generalization to unseen classes. To address this problem, we propose adversarial token embedding to disentangle low-level visual augmentation features from high-level class information when inducing bias in learnable prompts. Through our novel mechanism, "Adding Attributes to Prompt Learning" (AAPL), we guide the learnable context to effectively extract text features by focusing on high-level features for unseen classes. We have conducted experiments across 11 datasets, and overall, AAPL performs favorably compared to existing methods on few-shot learning, zero-shot learning, cross-dataset, and domain generalization tasks.
https://arxiv.org/abs/2404.16804
Comprehending text-rich visual content is paramount for the practical application of Multimodal Large Language Models (MLLMs), since text-rich scenarios, characterized by extensive text embedded within images, are ubiquitous in the real world. Recently, the advent of MLLMs with impressive versatility has raised the bar for what we can expect from them. However, their proficiency in text-rich scenarios has yet to be comprehensively and objectively assessed, since current MLLM benchmarks primarily focus on evaluating general visual comprehension. In this work, we introduce SEED-Bench-2-Plus, a benchmark specifically designed for evaluating the text-rich visual comprehension of MLLMs. Our benchmark comprises 2.3K multiple-choice questions with precise human annotations, spanning three broad categories: Charts, Maps, and Webs, each of which covers a wide spectrum of text-rich scenarios in the real world. These categories, due to their inherent complexity and diversity, effectively simulate real-world text-rich environments. We further conduct a thorough evaluation involving 34 prominent MLLMs (including GPT-4V, Gemini-Pro-Vision, and Claude-3-Opus) and emphasize the current limitations of MLLMs in text-rich visual comprehension. We hope that our work can serve as a valuable addition to existing MLLM benchmarks, providing insightful observations and inspiring further research in the area of text-rich visual comprehension with MLLMs. The dataset and evaluation code can be accessed at this https URL.
https://arxiv.org/abs/2404.16790
The recent success of large language models (LLMs) trained on static, pre-collected, general datasets has sparked numerous research directions and applications. One such direction addresses the non-trivial challenge of integrating pre-trained LLMs into dynamic data distributions, task structures, and user preferences. Pre-trained LLMs, when tailored for specific needs, often experience significant performance degradation in previous knowledge domains -- a phenomenon known as "catastrophic forgetting". While this phenomenon has been extensively studied in the continual learning (CL) community, it presents new manifestations in the realm of LLMs. In this survey, we provide a comprehensive overview of the current research progress on LLMs within the context of CL. This survey is structured into four main sections: we first give an overview of continually learning LLMs, consisting of two directions of continuity: vertical continuity (or vertical continual learning), i.e., continual adaptation from general to specific capabilities, and horizontal continuity (or horizontal continual learning), i.e., continual adaptation across time and domains (Section 3). We then summarize three stages of learning LLMs in the context of modern CL: Continual Pre-Training (CPT), Domain-Adaptive Pre-training (DAP), and Continual Fine-Tuning (CFT) (Section 4). Then we provide an overview of evaluation protocols for continual learning with LLMs, along with the currently available data sources (Section 5). Finally, we discuss intriguing questions pertaining to continual learning for LLMs (Section 6). The full list of papers examined in this survey is available at this https URL.
https://arxiv.org/abs/2404.16789
Developing generalist foundation models has recently attracted tremendous attention among researchers in the field of AI for Medicine (AI4Medicine). A pivotal insight in developing these models is their reliance on dataset scaling, which highlights the need for open-source medical image datasets that incorporate diverse supervision signals across various imaging modalities. In this paper, we introduce RadGenome-Chest CT, a comprehensive, large-scale, region-guided 3D chest CT interpretation dataset based on CT-RATE. Specifically, we leverage the latest powerful universal segmentation and large language models to extend the original dataset (over 25,692 non-contrast 3D chest CT volumes and reports from 20,000 patients) in the following aspects: (i) organ-level segmentation masks covering 197 categories, which provide visual clues for intermediate reasoning during interpretation; (ii) 665K multi-granularity grounded reports, where each sentence of a report is linked to the corresponding anatomical region of the CT volume in the form of a segmentation mask; (iii) 1.3M grounded VQA pairs, where questions and answers are all linked with reference segmentation masks, enabling models to associate visual evidence with textual explanations. All grounded reports and VQA pairs in the validation set have undergone manual verification to ensure dataset quality. We believe that RadGenome-Chest CT can significantly advance the development of multimodal medical foundation models by enabling training to generate text based on given segmentation regions, which is unattainable with previous relevant datasets. We will release all segmentation masks, grounded reports, and VQA pairs to facilitate further research and development in this field.
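To make the grounding concrete, a record in such a dataset presumably ties each report sentence and each QA pair to a mask file; a hypothetical record layout (the field names are ours, not the released schema) might look like:

```python
# Hypothetical grounded-record layout; the released schema may differ.
record = {
    "volume_id": "case_00001",
    "grounded_report": [
        {"sentence": "No focal consolidation in the right lower lobe.",
         "region": "right_lower_lobe",                      # one of 197 classes
         "mask_file": "case_00001/right_lower_lobe.nii.gz"},
    ],
    "vqa_pairs": [
        {"question": "Is there pleural effusion?",
         "answer": "No",
         "mask_file": "case_00001/pleura.nii.gz"},          # visual evidence
    ],
}
```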
https://arxiv.org/abs/2404.16754
Vision-language models enable open-world classification of objects without the need for any retraining. While this zero-shot paradigm marks a significant advance, even today's best models exhibit skewed performance when objects are dissimilar from their typical depiction. Real-world objects such as pears appear in a variety of forms -- from diced to whole, on a table or in a bowl -- yet standard VLM classifiers map all instances of a class to a single vector based on the class label. We argue that to represent this rich diversity within a class, zero-shot classification should move beyond a single vector. We propose a method to encode and account for diversity within a class using inferred attributes, still in the zero-shot setting without retraining. We find our method consistently outperforms standard zero-shot classification over a large suite of datasets encompassing hierarchies, diverse object states, and real-world geographic diversity, as well as finer-grained datasets where intra-class diversity may be less prevalent. Importantly, our method is inherently interpretable, offering faithful explanations for each inference to facilitate model debugging and enhance transparency. We also find our method scales efficiently to a large number of attributes to account for diversity -- leading to more accurate predictions for atypical instances. Finally, we characterize a principled trade-off between overall and worst-class accuracy, which can be tuned via a hyperparameter of our method. We hope this work spurs further research into the promise of zero-shot classification beyond a single class vector for capturing diversity in the world, and building transparent AI systems without compromising performance.
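A minimal sketch of the core idea — scoring each class with several attribute-conditioned vectors rather than one label vector — assuming precomputed, normalized VLM embeddings (random stand-ins below); the paper's actual attribute inference and aggregation may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 512
# One prompt embedding per (class, inferred attribute state), e.g.
# "a photo of a pear, diced" / "a photo of a pear, whole, in a bowl".
class_attr_embs = {
    "pear":  rng.normal(size=(4, dim)),   # stand-ins for text embeddings
    "apple": rng.normal(size=(4, dim)),
}

def classify(image_emb):
    image_emb = image_emb / np.linalg.norm(image_emb)
    scores = {}
    for cls, embs in class_attr_embs.items():
        embs = embs / np.linalg.norm(embs, axis=1, keepdims=True)
        # Score a class by its best-matching attribute vector, so an
        # atypical instance (e.g. a diced pear) can still win.
        scores[cls] = float((embs @ image_emb).max())
    return max(scores, key=scores.get), scores

print(classify(rng.normal(size=dim))[0])
```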
https://arxiv.org/abs/2404.16717
We present LayerSkip, an end-to-end solution to speed up inference of large language models (LLMs). First, during training we apply layer dropout, with low dropout rates for earlier layers and higher dropout rates for later layers, and an early exit loss where all transformer layers share the same exit. Second, during inference, we show that this training recipe increases the accuracy of early exit at earlier layers, without adding any auxiliary layers or modules to the model. Third, we present a novel self-speculative decoding solution where we exit at early layers and verify and correct with the remaining layers of the model. Our proposed self-speculative decoding approach has a smaller memory footprint than other speculative decoding approaches and benefits from the shared compute and activations of the draft and verification stages. We run experiments on different Llama model sizes with different types of training: pretraining from scratch, continual pretraining, finetuning on a specific data domain, and finetuning on a specific task. We implement our inference solution and show speedups of up to 2.16x on summarization of CNN/DM documents, 1.82x on coding, and 2.0x on the TOPv2 semantic parsing task.
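A sketch of the training-time layer dropout with a depth-increasing rate and a single shared exit head; the linear schedule and the toy encoder below are illustrative assumptions, not the released recipe.

```python
import torch
import torch.nn as nn

class EarlyExitStack(nn.Module):
    """Sketch: stochastically skip layers during training, with higher
    skip probability at deeper layers, and one exit head shared by all
    depths (so earlier layers learn to produce exit-ready states)."""
    def __init__(self, n_layers=12, d_model=256, vocab=1000, max_rate=0.2):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            for _ in range(n_layers))
        # Assumed schedule: dropout rate grows linearly with depth.
        self.rates = [max_rate * i / (n_layers - 1) for i in range(n_layers)]
        self.exit_head = nn.Linear(d_model, vocab)   # shared exit

    def forward(self, x, exit_layer=None):
        for i, layer in enumerate(self.layers):
            if self.training and torch.rand(()) < self.rates[i]:
                continue                    # layer dropout during training
            x = layer(x)
            if exit_layer is not None and i == exit_layer:
                break                       # early exit at inference
        return self.exit_head(x)

model = EarlyExitStack().eval()
draft = model(torch.randn(1, 16, 256), exit_layer=3)   # fast draft pass
full = model(torch.randn(1, 16, 256))                  # verification pass
```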
https://arxiv.org/abs/2404.16710
We explored the addition bias, a cognitive tendency to prefer adding elements over removing them to alter an initial state or structure, by conducting four preregistered experiments examining the problem-solving behavior of both humans and OpenAI's GPT-4 large language model. The experiments involved 588 participants from the U.S. and 680 iterations of the GPT-4 model. The problem-solving task was either to create symmetry within a grid (Experiments 1 and 3) or to edit a summary (Experiments 2 and 4). As hypothesized, we found that, overall, the addition bias was present. Solution efficiency (Experiments 1 and 2) and valence of the instruction (Experiments 3 and 4) played important roles. Human participants were less likely to use additive strategies when subtraction was relatively more efficient than when addition and subtraction were equally efficient. GPT-4 exhibited the opposite behavior, with a strong addition bias when subtraction was more efficient. In terms of instruction valence, GPT-4 was more likely to add words when asked to "improve" compared to "edit", whereas humans did not show this effect. When we looked at the addition bias under different conditions, we found more biased responses from GPT-4 compared to humans. Our findings highlight the importance of considering comparable and sometimes superior subtractive alternatives, as well as reevaluating one's own and particularly the language models' problem-solving behavior.
https://arxiv.org/abs/2404.16692
Visual Instruction Tuning represents a novel learning paradigm involving the fine-tuning of pre-trained language models using task-specific instructions. This paradigm shows promising zero-shot results in various natural language processing tasks but is still unexplored in vision emotion understanding. In this work, we focus on enhancing the model's proficiency in understanding and adhering to instructions related to emotional contexts. Initially, we identify key visual clues critical to visual emotion recognition. Subsequently, we introduce a novel GPT-assisted pipeline for generating emotion visual instruction data, effectively addressing the scarcity of annotated instruction data in this domain. Expanding on the groundwork established by InstructBLIP, our proposed EmoVIT architecture incorporates emotion-specific instruction data, leveraging the powerful capabilities of Large Language Models to enhance performance. Through extensive experiments, our model showcases its proficiency in emotion classification, adeptness in affective reasoning, and competence in comprehending humor. The comparative analysis provides a robust benchmark for Emotion Visual Instruction Tuning in the era of LLMs, providing valuable insights and opening avenues for future exploration in this domain. Our code is available at this https URL.
https://arxiv.org/abs/2404.16670
Linguistic ambiguity continues to represent a significant challenge for natural language processing (NLP) systems, notwithstanding advancements in architectures such as Transformers and BERT. Inspired by the recent success of instructional models like ChatGPT and Gemini (known as Bard in 2023), this study aims to analyze and discuss linguistic ambiguity within these models, focusing on three types prevalent in Brazilian Portuguese: semantic, syntactic, and lexical ambiguity. We create a corpus comprising 120 sentences, both ambiguous and unambiguous, for classification, explanation, and disambiguation. The models' capability to generate ambiguous sentences was also explored by soliciting sets of sentences for each type of ambiguity. The results underwent qualitative analysis, drawing on recognized linguistic references, and quantitative assessment based on the accuracy of the responses obtained. It was evident that even the most sophisticated models, such as ChatGPT and Gemini, exhibit errors and deficiencies in their responses, with explanations that are often inconsistent. Furthermore, accuracy peaked at 49.58%, indicating the need for descriptive studies for supervised learning.
https://arxiv.org/abs/2404.16653
Large language models (LLMs) have showcased profound capabilities in language understanding and generation, facilitating a wide array of applications. However, there is a notable paucity of detailed, open-sourced methodologies for efficiently scaling LLMs beyond 50 billion parameters with minimal trial-and-error cost and computational resources. In this report, we introduce Tele-FLM (aka FLM-2), a 52B open-sourced multilingual large language model that features a stable, efficient pre-training paradigm and enhanced factual judgment capabilities. Tele-FLM demonstrates superior multilingual language modeling abilities, measured by bits-per-byte (BPB) on textual corpora. Besides, in both English and Chinese foundation model evaluation, it is comparable to strong open-sourced models that involve larger pre-training FLOPs, such as Llama2-70B and DeepSeek-67B. In addition to the model weights, we share the core designs, engineering practices, and training details, which we expect to benefit both the academic and industrial communities.
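Bits-per-byte (BPB) normalizes a model's total cross-entropy by the byte length of the evaluated text, which makes scores comparable across models with different tokenizers. A small helper, assuming the loss is reported in nats per token:

```python
import math

def bits_per_byte(nll_nats_per_token, n_tokens, n_bytes):
    """BPB = total cross-entropy in bits / total UTF-8 bytes of the text.
    Tokenizer-independent, unlike per-token perplexity."""
    total_bits = nll_nats_per_token * n_tokens / math.log(2)
    return total_bits / n_bytes

# e.g. mean loss of 2.0 nats/token over 1,000 tokens of 4,200-byte text
print(bits_per_byte(2.0, 1000, 4200))   # ~0.687 bits per byte
```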
https://arxiv.org/abs/2404.16645
Charts are important for presenting and explaining complex data relationships. Recently, multimodal large language models (MLLMs) have shown remarkable capabilities in various chart understanding tasks. However, the sheer size of these models in terms of parameters and computational requirements limits their use in resource-constrained environments. In this paper, we present TinyChart, an efficient MLLM for chart understanding with only 3B parameters. TinyChart overcomes two key challenges in efficient chart understanding: (1) it reduces the burden of learning numerical computations through a Program-of-Thoughts (PoT) learning strategy, which trains the model to generate Python programs for numerical calculations, and (2) it shortens the lengthy vision feature sequences produced by the vision transformer for high-resolution images through a Vision Token Merging module, which gradually merges the most similar vision tokens. Extensive experiments demonstrate that our 3B TinyChart achieves SOTA performance on a variety of chart understanding benchmarks including ChartQA, Chart-to-Text, Chart-to-Table, OpenCQA, and ChartX. It outperforms several chart understanding MLLMs with up to 13B parameters, such as ChartLlama and ChartAst, as well as the closed-source general-purpose MLLM GPT-4V on ChartQA. It also demonstrates superior efficiency, with higher inference throughput due to its smaller model scale and more efficient vision encoding. Our code and model are available at this https URL.
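The token-merging idea in (2) can be sketched as a greedy procedure that repeatedly averages the most cosine-similar pair of vision tokens; TinyChart's actual module may merge differently, so treat this as an assumed variant.

```python
import torch

def merge_tokens(tokens, n_merge):
    """tokens: (n, d). Greedily average the most cosine-similar pair,
    n_merge times, shortening the sequence for high-res chart images."""
    for _ in range(n_merge):
        x = torch.nn.functional.normalize(tokens, dim=1)
        sim = x @ x.T
        sim.fill_diagonal_(-1.0)                      # ignore self-pairs
        i, j = divmod(int(sim.argmax()), sim.size(1))
        merged = (tokens[i] + tokens[j]) / 2          # fuse the pair
        keep = [k for k in range(tokens.size(0)) if k not in (i, j)]
        tokens = torch.cat([tokens[keep], merged[None]], dim=0)
    return tokens

print(merge_tokens(torch.randn(196, 64), n_merge=96).shape)   # (100, 64)
```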
https://arxiv.org/abs/2404.16635
The integration of Large Language Models (LLMs) into healthcare promises to transform medical diagnostics, research, and patient care. Yet, the progression of medical LLMs faces obstacles such as complex training requirements, rigorous evaluation demands, and the dominance of proprietary models that restrict academic exploration. Transparent, comprehensive access to LLM resources is essential for advancing the field, fostering reproducibility, and encouraging innovation in healthcare AI. We present Hippocrates, an open-source LLM framework specifically developed for the medical domain. In stark contrast to previous efforts, it offers unrestricted access to its training datasets, codebase, checkpoints, and evaluation protocols. This open approach is designed to stimulate collaborative research, allowing the community to build upon, refine, and rigorously evaluate medical LLMs within a transparent ecosystem. We also introduce Hippo, a family of 7B models tailored for the medical domain, fine-tuned from Mistral and LLaMA2 through continual pre-training, instruction tuning, and reinforcement learning from human and AI feedback. Our models outperform existing open medical LLMs by a large margin, even surpassing models with 70B parameters. Through Hippocrates, we aspire to unlock the full potential of LLMs not just to advance medical knowledge and patient care but also to democratize the benefits of AI research in healthcare, making them available across the globe.
https://arxiv.org/abs/2404.16621
This paper presents the multi-speaker, multi-lingual, few-shot voice cloning system developed by the THU-HCSI team for the LIMMITS'24 Challenge. To achieve high speaker similarity and naturalness in both mono-lingual and cross-lingual scenarios, we build the system upon YourTTS and add several enhancements. To further improve speaker similarity and speech quality, we introduce a speaker-aware text encoder and a flow-based decoder with Transformer blocks. In addition, we denoise the few-shot data, mix them with the pre-training data, and adopt a speaker-balanced sampling strategy to guarantee effective fine-tuning for target speakers. The official evaluations in track 1 show that our system achieves the best speaker similarity MOS of 4.25 and obtains a considerable naturalness MOS of 3.97.
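Speaker-balanced sampling is commonly implemented by weighting each utterance inversely to its speaker's utterance count; a sketch with PyTorch's WeightedRandomSampler (the exact weighting used by this system is our assumption):

```python
from collections import Counter

import torch
from torch.utils.data import WeightedRandomSampler

# Per-utterance speaker labels; counts are deliberately imbalanced.
speakers = ["spk_a"] * 500 + ["spk_b"] * 50 + ["spk_c"] * 5
counts = Counter(speakers)

# Weight each utterance by 1 / (# utterances of its speaker): every
# speaker then contributes roughly equally to each batch in expectation.
weights = torch.tensor([1.0 / counts[s] for s in speakers])
sampler = WeightedRandomSampler(weights, num_samples=len(speakers),
                                replacement=True)

drawn = Counter(speakers[i] for i in sampler)
print(drawn)   # roughly balanced across spk_a / spk_b / spk_c
```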
https://arxiv.org/abs/2404.16619
In below-freezing winter conditions, road surface friction can vary greatly based on the mixture of snow, ice, and water on the road. Friction between the road and vehicle tyres is a critical parameter defining vehicle dynamics, and therefore acquiring road surface friction information is essential for several intelligent transportation applications, such as safe control of automated vehicles or alerting drivers to slippery road conditions. This paper explores computer vision-based evaluation of road surface friction from roadside cameras. Previous studies have extensively investigated the application of convolutional neural networks to the task of evaluating road surface condition from images. Here, we propose a hybrid deep learning architecture, WCamNet, consisting of a pretrained vision transformer model and convolutional blocks. The motivation of the architecture is to combine the general visual features provided by the transformer model with the finetuned feature extraction properties of the convolutional blocks. To benchmark the approach, an extensive dataset was gathered from the Finnish national road infrastructure network of roadside cameras and optical road surface friction sensors. The results highlight that the proposed WCamNet outperforms previous approaches in the task of predicting road surface friction from roadside camera images.
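The hybrid design — pretrained transformer features fused with a small finetunable convolutional branch feeding a friction regressor — might look roughly like the following; the backbone choice, layer sizes, and concatenation fusion are assumptions, not the published WCamNet.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

class HybridFrictionNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Pretrained ViT supplies general visual features (downloads
        # weights on first use; pass weights=None to skip).
        self.vit = vit_b_16(weights=ViT_B_16_Weights.DEFAULT)
        self.vit.heads = nn.Identity()          # expose 768-d features
        self.conv = nn.Sequential(              # finetuned local features
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(768 + 64, 1)      # friction coefficient

    def forward(self, x):                       # x: (B, 3, 224, 224)
        fused = torch.cat([self.vit(x), self.conv(x)], dim=1)
        return self.head(fused).squeeze(1)

print(HybridFrictionNet()(torch.randn(2, 3, 224, 224)).shape)   # (2,)
```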
https://arxiv.org/abs/2404.16578
It has been found that Transformer-based language models have the ability to perform basic quantitative reasoning. In this paper, we propose a method for studying how these models internally represent numerical data, and use our proposal to analyze the ALBERT family of language models. Specifically, we extract the learned embeddings these models use to represent tokens that correspond to numbers and ordinals, and subject these embeddings to Principal Component Analysis (PCA). PCA results reveal that ALBERT models of different sizes, trained and initialized separately, consistently learn to use the axes of greatest variation to represent the approximate ordering of various numerical concepts. Numerals and their textual counterparts are represented in separate clusters, but increase along the same direction in 2D space. Our findings illustrate that language models, trained purely to model text, can intuit basic mathematical concepts, opening avenues for NLP applications that intersect with quantitative reasoning.
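The probing recipe is easy to reproduce in spirit with Hugging Face transformers and scikit-learn; a sketch (the paper's exact token selection and preprocessing may differ):

```python
from sklearn.decomposition import PCA
from transformers import AlbertModel, AlbertTokenizer

tok = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertModel.from_pretrained("albert-base-v2")
emb = model.get_input_embeddings().weight.detach()   # static token embeddings

words = ["one", "two", "three", "four", "five",
         "six", "seven", "eight", "nine", "ten"]
ids = [tok.convert_tokens_to_ids(tok.tokenize(w)[0]) for w in words]
vecs = emb[ids].numpy()

coords = PCA(n_components=2).fit_transform(vecs)
# If the finding holds, the first component should roughly order 1..10.
for w, (pc1, _) in zip(words, coords):
    print(f"{w:>6}: PC1={pc1:+.3f}")
```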
https://arxiv.org/abs/2404.16574
Recent advances in Vision and Language Models (VLMs) have improved open-world 3D representation, facilitating zero-shot 3D capability on unseen categories. Existing open-world methods pre-train an extra 3D encoder to align features from 3D data (e.g., depth maps or point clouds) with CAD-rendered images and corresponding texts. However, the limited color and texture variations in CAD images can compromise the alignment robustness. Furthermore, the volume discrepancy between the pre-training datasets of the 3D encoder and the VLM leads to sub-optimal 2D-to-3D knowledge transfer. To overcome these issues, we propose OpenDlign, a novel framework for learning open-world 3D representations that leverages depth-aligned images generated from point-cloud-projected depth maps. Unlike CAD-rendered images, our generated images provide rich, realistic color and texture diversity while preserving geometric and semantic consistency with the depth maps. OpenDlign also optimizes depth map projection and integrates depth-specific text prompts, improving 2D VLM knowledge adaptation for 3D learning with efficient fine-tuning. Experimental results show that OpenDlign significantly outperforms existing benchmarks on zero-shot and few-shot 3D tasks, exceeding prior scores by 8.0% on ModelNet40 and 16.4% on OmniObject3D with just 6 million tuned parameters. Moreover, integrating generated depth-aligned images into existing 3D learning pipelines consistently improves their performance.
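The starting point — projecting a point cloud to a depth map with a pinhole camera and a z-buffer — can be sketched as below; the intrinsics are placeholder values, and OpenDlign additionally optimizes this projection.

```python
import numpy as np

def project_depth(points, f=256.0, size=224):
    """points: (N, 3) in camera coordinates, +z forward. Returns a
    z-buffered depth map; the nearest point wins at each pixel."""
    depth = np.full((size, size), np.inf)
    z = points[:, 2]
    valid = z > 0.1                                  # drop points behind camera
    u = (f * points[valid, 0] / z[valid] + size / 2).astype(int)
    v = (f * points[valid, 1] / z[valid] + size / 2).astype(int)
    inside = (u >= 0) & (u < size) & (v >= 0) & (v < size)
    for ui, vi, zi in zip(u[inside], v[inside], z[valid][inside]):
        depth[vi, ui] = min(depth[vi, ui], zi)       # z-buffer update
    depth[np.isinf(depth)] = 0.0                     # background
    return depth

pts = np.random.rand(5000, 3) * [0.5, 0.5, 1.0] + [0, 0, 1.0]
print(project_depth(pts).shape)   # (224, 224)
```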
https://arxiv.org/abs/2404.16538