Abstract
The Information Contrastive (I-Con) framework revealed that over 23 representation learning methods implicitly minimize the KL divergence between a data-derived distribution and a learned distribution, each encoding similarities between data points. However, a KL-based loss may be misaligned with the true objective, and properties of KL divergence such as asymmetry and unboundedness may create optimization challenges. We present Beyond I-Con, a framework that enables systematic discovery of novel loss functions by exploring alternative statistical divergences and similarity kernels. Key findings: (1) on unsupervised clustering of DINO-ViT embeddings, we achieve state-of-the-art results by modifying the PMI clustering algorithm to use total variation (TV) distance in place of KL; (2) on supervised contrastive learning, we outperform the standard approach by using TV and a distance-based similarity kernel instead of KL and an angular kernel; (3) on dimensionality reduction, we obtain qualitatively superior embeddings and better downstream-task performance than SNE by replacing KL with a bounded f-divergence. Our results highlight the importance of the choice of divergence and similarity kernel when optimizing representation learning objectives.
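To make the divergence and kernel swaps concrete, the sketch below implements an I-Con-style objective in PyTorch with a pluggable divergence: a target neighbor distribution is matched to a learned one computed from a distance-based (Gaussian) kernel, and KL can be exchanged for the bounded, symmetric TV distance. This is a minimal illustration under stated assumptions; the function names (`beyond_icon_loss`, `tv_distance`), the Gaussian kernel, and the temperature value are hypothetical choices for this sketch, not the paper's released implementation.

```python
import torch
import torch.nn.functional as F

def kl_divergence(p, q, eps=1e-8):
    # KL(p || q): asymmetric, and unbounded as q_j -> 0 where p_j > 0.
    return (p * (torch.log(p + eps) - torch.log(q + eps))).sum(dim=-1)

def tv_distance(p, q):
    # Total variation distance: symmetric and bounded in [0, 1].
    return 0.5 * (p - q).abs().sum(dim=-1)

def beyond_icon_loss(target_sim, embeddings, divergence=tv_distance,
                     temperature=0.5):
    # Hypothetical I-Con-style objective: match the learned neighbor
    # distribution q to the supervisory distribution p for each point.
    n = embeddings.size(0)
    off_diag = ~torch.eye(n, dtype=torch.bool, device=embeddings.device)

    # Target distribution p: row-normalized similarities, self-pairs excluded.
    p = target_sim.masked_fill(~off_diag, 0.0)
    p = p / p.sum(dim=-1, keepdim=True).clamp_min(1e-8)

    # Learned distribution q from a distance-based (Gaussian) kernel,
    # rather than the angular/cosine kernel of standard contrastive losses.
    sq_dists = torch.cdist(embeddings, embeddings).pow(2)
    logits = (-sq_dists / temperature).masked_fill(~off_diag, float("-inf"))
    q = F.softmax(logits, dim=-1)

    return divergence(p, q).mean()

# Toy usage: same-class pairs define the target similarities.
emb = torch.randn(8, 16, requires_grad=True)
labels = torch.randint(0, 3, (8,))
target = (labels[:, None] == labels[None, :]).float()
beyond_icon_loss(target, emb, divergence=tv_distance).backward()
```

Passing `divergence=kl_divergence` instead recovers a standard KL-based objective under the identical kernel, so the two settings can be compared directly.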
URL
https://arxiv.org/abs/2509.04734