Abstract
Traditional methods, such as JPEG, perform image compression by operating on structural information, such as pixel values or frequency content. These methods are effective at bitrates of around one bit per pixel (bpp) and higher at standard image sizes. In contrast, text-based semantic compression directly stores concepts and their relationships using natural language, which has evolved alongside humans to efficiently represent these salient concepts. These methods can operate at extremely low bitrates by disregarding structural information such as location, size, and orientation. In this work, we use GPT-4V and DALL-E3 from OpenAI to explore the quality-compression frontier for image compression and identify the limitations of current technology. We push semantic compression as low as 100 $\mu$bpp (up to $10,000\times$ smaller than JPEG) by introducing an iterative reflection process to improve the decoded image. We further hypothesize that this 100 $\mu$bpp level represents a soft limit on semantic compression at standard image resolutions.
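The scale of the bitrates quoted above can be made concrete with back-of-envelope arithmetic. The sketch below assumes a hypothetical 1024×1024 image (the abstract does not fix a resolution, so the numbers are illustrative only) and compares a 1 bpp JPEG budget against the 100 $\mu$bpp semantic budget:

```python
# Back-of-envelope arithmetic for the bitrates in the abstract.
# The 1024x1024 "standard" image size is an assumption for illustration.

width, height = 1024, 1024
pixels = width * height

jpeg_bpp = 1.0          # lower end of JPEG's effective range (bits/pixel)
semantic_bpp = 100e-6   # 100 micro-bits per pixel

jpeg_bits = pixels * jpeg_bpp          # 1,048,576 bits = 128 KiB
semantic_bits = pixels * semantic_bpp  # ~105 bits, roughly 13 bytes

print(f"JPEG @ 1 bpp:        {jpeg_bits / 8:,.0f} bytes")
print(f"Semantic @ 100 ubpp: {semantic_bits / 8:.1f} bytes")
print(f"Compression factor vs JPEG: {jpeg_bits / semantic_bits:,.0f}x")
```

At 100 $\mu$bpp the entire image budget is on the order of a dozen bytes, i.e. a few words of text, which is why a natural-language description is the natural representation at this operating point, and why the per-pixel ratio to 1 bpp JPEG is exactly the $10{,}000\times$ quoted.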
URL
https://arxiv.org/abs/2402.13536