Abstract
Multi-view feature extraction is an efficient approach for alleviating the curse of dimensionality in high-dimensional multi-view data. Contrastive learning (CL), a popular self-supervised learning method, has recently attracted considerable attention. In this study, we propose a novel multi-view feature extraction method based on triple contrastive heads (MFETCH), which combines sample-, recovery-, and feature-level contrastive losses to extract sufficient yet minimal subspace discriminative information in compliance with the information bottleneck principle. In MFETCH, we construct the feature-level contrastive loss, which removes the redundant information in the consistency information to achieve the minimality of the subspace discriminative information. Moreover, the recovery-level contrastive loss is also constructed in MFETCH, which captures the view-specific discriminative information to achieve the sufficiency of the subspace discriminative information. Numerical experiments demonstrate that the proposed method offers a strong advantage for multi-view feature extraction.
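The abstract does not give the exact form of the three losses, but a common instantiation of a sample-level contrastive objective between two views is InfoNCE, and the overall objective would then be a weighted sum of the three terms. The sketch below is an assumption-based illustration, not the paper's actual formulation: `info_nce` and the weights `alpha`, `beta`, `gamma` in `triple_contrastive_loss` are hypothetical names.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.5):
    """Illustrative sample-level contrastive (InfoNCE) loss.

    z1, z2: (N, d) embeddings of the same N samples under two views.
    Row i of z1 and row i of z2 form a positive pair; all other rows
    act as negatives. (Assumed form; not taken from the paper.)
    """
    # L2-normalize each embedding so similarity is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature            # (N, N) similarity matrix
    # numerically stable log-softmax over each row
    logits = logits - logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positives sit on the diagonal; maximize their log-probability
    return -np.mean(np.diag(log_prob))

def triple_contrastive_loss(l_sample, l_recovery, l_feature,
                            alpha=1.0, beta=1.0, gamma=1.0):
    """Hypothetical combination of the three loss terms as a weighted sum."""
    return alpha * l_sample + beta * l_recovery + gamma * l_feature
```

For example, `info_nce(z_view1, z_view2)` would pull matched samples from the two views together while pushing apart mismatched ones; the recovery- and feature-level terms would be computed analogously on reconstructions and on feature dimensions, then combined via `triple_contrastive_loss`.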
URL
https://arxiv.org/abs/2303.12615