Abstract
We propose an efficient knowledge-transfer approach for model-based reinforcement learning, addressing the challenge of deploying large world models in resource-constrained environments. Our method distills a high-capacity multi-task agent (317M parameters) into a compact 1M-parameter model, achieving state-of-the-art performance on the MT30 benchmark with a normalized score of 28.45, a substantial improvement over the original 1M-parameter model's score of 18.93. This demonstrates that our distillation technique consolidates complex multi-task knowledge effectively. Additionally, we apply FP16 post-training quantization, reducing the model size by 50% (2 bytes per parameter instead of 4) while maintaining performance. Our work bridges the gap between the capability of large models and practical deployment constraints, offering a scalable solution for efficient and accessible multi-task reinforcement learning in robotics and other resource-limited domains.
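The abstract names two techniques without implementation detail: teacher-student distillation and FP16 post-training quantization. The PyTorch sketch below is a minimal illustration of both, not the paper's actual method; the models, the `distill_step` helper, and the MSE-on-outputs objective are assumptions for demonstration (the paper's agents would be full world models, not plain MLPs).

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-ins for the large teacher and the compact student;
# in the paper these would be 317M- and 1M-parameter world models.
teacher = torch.nn.Sequential(
    torch.nn.Linear(64, 512), torch.nn.ReLU(), torch.nn.Linear(512, 64))
student = torch.nn.Sequential(
    torch.nn.Linear(64, 128), torch.nn.ReLU(), torch.nn.Linear(128, 64))
teacher.eval()  # teacher stays frozen during distillation

optimizer = torch.optim.Adam(student.parameters(), lr=3e-4)

def distill_step(obs: torch.Tensor) -> float:
    """One distillation update: regress the student's output onto the
    frozen teacher's output (an assumed MSE objective)."""
    with torch.no_grad():
        target = teacher(obs)
    loss = F.mse_loss(student(obs), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# FP16 post-training quantization: casting FP32 weights to FP16 stores
# each parameter in 2 bytes instead of 4, i.e. the ~50% size reduction
# the abstract reports.
student_fp16 = student.half()
torch.save(student_fp16.state_dict(), "student_fp16.pt")
```

In practice the distillation data would come from trajectories on the MT30 tasks, and the half-precision model's normalized score would be re-evaluated to confirm that quantization preserves performance.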
URL
https://arxiv.org/abs/2501.05329