Abstract
The rapid advancement of large language models (LLMs) has led to significant improvements in their capabilities, but also to increased concerns about their alignment with human values and intentions. Current alignment strategies, including adaptive training and inference-time methods, have demonstrated potential in this area. However, these approaches still struggle to balance deployment complexity with capability across various tasks and difficulty levels. In this work, we introduce the Streaming Distribution Induce Aligner (Stream Aligner), a novel alignment paradigm that combines efficiency with enhanced performance across various tasks throughout the generation process. Stream Aligner achieves dynamic sentence-level correction by using a small model to learn the preferences of the suffix sentence, iteratively correcting the suffix sentence output by the upstream model, and then using the corrected sentence to replace the original in subsequent generations. Compared to Aligner, our experiments demonstrate that Stream Aligner reduces reliance on the capabilities of additional models, enhances the reasoning ability of LLMs, and decreases latency during user interaction. Specifically, the Stream Aligner-2B model achieves an improvement of 76.1% in helpfulness and 36.0% in harmlessness on the tested Llama2-70B-chat model, and Stream Aligner-8B achieves an improvement of 3.5% in math ability on the tested Llama3-70B-Instruct model.
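The sentence-level correction loop described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the callables `upstream_generate` (the large base model, e.g. Llama2-70B-chat) and `aligner_correct` (the small Stream Aligner model), along with the stopping rules and interfaces, are assumptions made for the sketch.

```python
def stream_aligned_generate(prompt: str,
                            upstream_generate,
                            aligner_correct,
                            max_sentences: int = 32) -> str:
    """Generate one sentence at a time, correcting each sentence with the
    small aligner before it becomes context for the next generation step."""
    response = ""  # corrected prefix accumulated so far
    for _ in range(max_sentences):
        # 1. The upstream model proposes the next (suffix) sentence,
        #    conditioned on the prompt plus the already-corrected prefix.
        draft_sentence = upstream_generate(prompt, prefix=response,
                                           stop_at_sentence_end=True)
        if not draft_sentence:  # upstream model signalled end of answer
            break
        # 2. The small aligner rewrites the draft sentence toward the learned
        #    preference (e.g. helpfulness, harmlessness, reasoning quality).
        corrected = aligner_correct(prompt, prefix=response,
                                    draft=draft_sentence)
        # 3. The corrected sentence replaces the draft in all later context.
        response += corrected
    return response
```

Because only the corrected sentence is fed back as context, the small model steers the upstream model's distribution incrementally rather than rewriting the full response after generation, which is what keeps per-interaction latency low.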
URL
https://arxiv.org/abs/2501.05336