Abstract
In this paper, we aim to generate text classification data given arbitrary class definitions (i.e., a user instruction), so that one can train a small text classifier without any human annotation or raw corpus. Compared with pioneering attempts, our proposed Incubator is the first framework that can handle complicated and even mutually dependent classes (e.g., "TED Talk given by Educator" and "Other"). Specifically, Incubator is an LLM first tuned on instruction-to-data mappings that we obtained from classification datasets and their descriptions on HuggingFace, together with in-context augmentation by GPT-4. We then refine Incubator by learning on the cluster centers of semantic text embeddings to emphasize uniformity and semantic diversity in its generations. We compare Incubator on various classification tasks against strong baselines such as direct LLM-based inference and training-data generation by prompt engineering. Experiments show that Incubator is able to (1) perform well on traditional benchmarks, (2) take label dependency and user preference into consideration, and (3) enable logical text mining by incubating multiple classifiers.
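The refinement step above selects cluster centers of semantic embeddings so that the retained generations are uniform and diverse. A minimal sketch of that clustering idea (pure NumPy k-means; random vectors stand in for real sentence embeddings, and the function names are illustrative, not from the paper):

```python
import numpy as np

def kmeans_centers(x, k, iters=50, seed=0):
    """Plain k-means: return k cluster centers of the rows of x."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        # Distance of every point to every center, then hard assignment.
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = x[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def select_representatives(embeddings, texts, k):
    """For each cluster center, keep the generated text whose embedding
    lies closest to it -- a diverse, uniform subset for refinement."""
    centers = kmeans_centers(embeddings, k)
    picks = []
    for c in centers:
        idx = int(np.linalg.norm(embeddings - c, axis=1).argmin())
        picks.append(texts[idx])
    return picks

# Toy demo: 100 fake "generations" with random 16-d embeddings.
rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 16))
texts = [f"generated sample {i}" for i in range(100)]
subset = select_representatives(emb, texts, k=5)
```

This is only a sketch of the uniformity/diversity selection; the paper's actual refinement trains the LLM on the center-aligned examples rather than merely filtering them.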
URL
https://arxiv.org/abs/2404.10877