Abstract
The standard approach to neural topic modeling uses a variational autoencoder (VAE) framework that jointly minimizes the reconstruction loss and the KL divergence between the estimated posterior and the prior. Because neural topic models are trained by reconstructing individual input documents, they do not explicitly capture the coherence of topic words at the corpus level. In this work, we propose a novel diversity-aware coherence loss that encourages the model to learn corpus-level coherence scores while maintaining high diversity between topics. Experimental results on multiple datasets show that our method significantly improves the performance of neural topic models without requiring any pretraining or additional parameters.
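The abstract does not spell out the loss itself, but the components it names (reconstruction loss, posterior-prior KL, a corpus-level coherence signal, and a diversity constraint) suggest a combined objective along the following lines. The sketch below is an illustration under stated assumptions, not the paper's formulation: the function name `topic_model_loss`, the use of precomputed NPMI co-occurrence scores (`npmi`), the softmax parameterization of the topic-word matrix, and the pairwise-overlap diversity penalty are all assumptions introduced here.

```python
import torch
import torch.nn.functional as F

def topic_model_loss(recon_logits, bow, mu, logvar, beta, npmi,
                     coherence_weight=1.0):
    """Hypothetical combined objective for a VAE-based neural topic model.

    recon_logits: (batch, vocab)  decoder output over the vocabulary
    bow:          (batch, vocab)  bag-of-words vector of each document
    mu, logvar:   (batch, K)      parameters of the estimated posterior
    beta:         (K, vocab)      topic-word weight matrix
    npmi:         (vocab, vocab)  precomputed corpus-level co-occurrence
                                  (e.g., NPMI) scores -- an assumption here
    """
    # Standard ELBO terms: reconstruction loss plus KL divergence between
    # the estimated posterior and a standard normal prior.
    recon = -(bow * F.log_softmax(recon_logits, dim=-1)).sum(-1).mean()
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()

    # Corpus-level coherence (illustrative): the expected co-occurrence
    # score of word pairs under each topic's word distribution.
    beta_p = F.softmax(beta, dim=-1)
    coherence = torch.einsum('kv,vw,kw->', beta_p, npmi, beta_p) / beta.size(0)

    # Diversity term (illustrative): penalize pairwise overlap between
    # topic vectors so maximizing coherence does not collapse all topics
    # onto the same high-scoring words.
    b = F.normalize(beta_p, dim=-1)
    overlap = b @ b.t()
    off_diag = overlap - torch.diag(torch.diag(overlap))
    diversity_penalty = off_diag.mean()

    return recon + kl + coherence_weight * (diversity_penalty - coherence)
```

The key design point this sketch tries to capture is the interaction of the last two terms: a coherence reward alone would push every topic toward the same few strongly co-occurring words, so the diversity penalty is what keeps the topics distinct while coherence is maximized.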
URL
https://arxiv.org/abs/2305.16199