Abstract
A central challenge in representation learning is constructing latent embeddings that are both expressive and efficient. In practice, deep networks often produce redundant latent spaces where multiple coordinates encode overlapping information, reducing effective capacity and hindering generalization. Standard metrics such as accuracy or reconstruction loss provide only indirect evidence of such redundancy and cannot isolate it as a failure mode. We introduce a redundancy index, denoted rho(C), that directly quantifies inter-dimensional dependencies by analyzing coupling matrices derived from latent representations and comparing their off-diagonal statistics against a normal distribution via energy distance. The result is a compact, interpretable, and statistically grounded measure of representational quality. We validate rho(C) across discriminative and generative settings on MNIST variants, Fashion-MNIST, CIFAR-10, and CIFAR-100, spanning multiple architectures and hyperparameter optimization strategies. Empirically, low rho(C) reliably predicts high classification accuracy or low reconstruction error, while elevated redundancy is associated with performance collapse. The reliability of the estimator grows with latent dimension, yielding natural lower bounds on the dimensionality required for trustworthy analysis. We further show that Tree-structured Parzen Estimators (TPE) preferentially explore low-rho regions, suggesting that rho(C) can guide neural architecture search and serve as a redundancy-aware regularization target. By exposing redundancy as a universal bottleneck across models and tasks, rho(C) offers both a theoretical lens and a practical tool for evaluating and improving the efficiency of learned representations.
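The abstract describes rho(C) only at a high level: build a coupling matrix from latent codes, take its off-diagonal entries, and measure how far their distribution is from normal via energy distance. The sketch below is a minimal illustration of that recipe under assumptions not stated in the abstract: the coupling matrix is taken to be a Pearson correlation matrix, off-diagonal entries are standardized, and the energy distance is computed against a sampled standard-normal reference. The function names (`energy_distance_1d`, `redundancy_index`) and all numeric choices are hypothetical, not the paper's actual construction.

```python
import numpy as np

def energy_distance_1d(u, v):
    # Energy distance between two 1-D samples:
    # sqrt(2 E|U-V| - E|U-U'| - E|V-V'|)
    a = np.abs(u[:, None] - v[None, :]).mean()
    b = np.abs(u[:, None] - u[None, :]).mean()
    c = np.abs(v[:, None] - v[None, :]).mean()
    return np.sqrt(max(2.0 * a - b - c, 0.0))

def redundancy_index(z, n_ref=4096, seed=0):
    # Sketch of a rho(C)-style score for latent codes z of shape (n, d):
    # coupling matrix -> standardized off-diagonals -> distance from normal.
    rng = np.random.default_rng(seed)
    C = np.corrcoef(z, rowvar=False)               # d x d coupling matrix (assumed Pearson)
    off = C[~np.eye(C.shape[0], dtype=bool)]       # off-diagonal entries
    off = (off - off.mean()) / (off.std() + 1e-12) # standardize before comparison
    ref = rng.standard_normal(n_ref)               # standard-normal reference sample
    return energy_distance_1d(off, ref)

# Synthetic check: duplicated dimensions should score as more redundant.
rng = np.random.default_rng(1)
z_indep = rng.standard_normal((500, 16))           # independent latent dims
half = rng.standard_normal((500, 8))
z_redund = np.hstack([half, half + 0.01 * rng.standard_normal((500, 8))])

rho_indep = redundancy_index(z_indep)
rho_redund = redundancy_index(z_redund)
```

On the synthetic data, the near-duplicated latent produces off-diagonal correlations clustered near 0 and 1, a strongly non-Gaussian shape, so its score exceeds that of the independent latent, matching the abstract's claim that elevated rho(C) flags redundancy.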
URL
https://arxiv.org/abs/2509.06314