Abstract
Food image classification systems play a crucial role in health monitoring and diet tracking through image-based dietary assessment techniques. However, existing food recognition systems rely on static datasets with a fixed, pre-defined set of food classes. This contrasts sharply with the reality of food consumption, where the data constantly changes. Food image classification systems should therefore adapt to and manage continuously evolving data, which is where continual learning plays an important role. A central challenge in continual learning is catastrophic forgetting, where ML models tend to discard old knowledge upon learning new information. While memory-replay algorithms have shown promise in mitigating this problem by storing old data as exemplars, they are hampered by the limited capacity of memory buffers, leading to an imbalance between new and previously learned data. To address this, our work explores the use of neural image compression to extend the effective buffer size and enhance data diversity. We introduce the concept of continually learning a neural compression model to adaptively improve the quality of compressed data and optimize the bits per pixel (bpp) so that more exemplars can be stored. Our extensive experiments, including evaluations on the food-specific datasets Food-101 and VFN-74 as well as the general dataset ImageNet-100, demonstrate improvements in classification accuracy. This progress is pivotal in advancing more realistic food recognition systems that are capable of adapting to continually evolving data. Moreover, the principles and methodologies we have developed hold promise for broader applications, extending their benefits to other domains of continual machine learning systems.
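The core trade-off the abstract describes can be made concrete with a small back-of-the-envelope sketch. This is not the paper's implementation; the buffer budget, image size, and bpp values below are illustrative assumptions, chosen only to show why lowering the storage rate per exemplar multiplies how many old-class samples a fixed-size replay buffer can hold.

```python
# Illustrative sketch (not the paper's method): a memory-replay buffer
# with a fixed byte budget. Storing exemplars at a lower bits-per-pixel
# (bpp) rate lets more samples fit, increasing the diversity of replayed
# old-class data. All constants here are assumptions for illustration.

def exemplar_capacity(budget_bytes: int, height: int, width: int, bpp: float) -> int:
    """Number of exemplars that fit when each image costs `bpp` bits per pixel."""
    bytes_per_image = height * width * bpp / 8
    return int(budget_bytes // bytes_per_image)

BUDGET = 20 * 1024 * 1024  # assumed 20 MB replay buffer
H = W = 224                # assumed input resolution

# Raw 8-bit RGB storage costs 24 bits per pixel.
raw_count = exemplar_capacity(BUDGET, H, W, 24.0)
# A learned codec operating around 0.5 bpp (a typical low-rate setting).
compressed_count = exemplar_capacity(BUDGET, H, W, 0.5)

print(raw_count, compressed_count)  # 139 6687 — roughly 48x more exemplars
```

Under these assumed numbers, the same 20 MB budget holds about 48 times as many compressed exemplars as raw ones, which is the lever the abstract's rate optimization pulls on.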
URL
https://arxiv.org/abs/2404.07507