Abstract
Graph-structured data are abundant in the real world. Among different graph types, directed acyclic graphs (DAGs) are of particular interest to machine learning researchers, as many machine learning models are realized as computations on DAGs, including neural networks and Bayesian networks. In this paper, we study deep generative models for DAGs and propose a novel DAG variational autoencoder (D-VAE). To encode DAGs into the latent space, we leverage graph neural networks. Rather than using existing simultaneous message passing schemes, which encode only the graph structure, we propose a DAG-style asynchronous message passing scheme that encodes the computations defined by DAGs. We demonstrate the effectiveness of D-VAE on two tasks: neural architecture search and Bayesian network structure learning. Experiments show that our model not only generates novel and valid DAGs, but also produces a smooth latent space that facilitates searching for better-performing DAGs through Bayesian optimization.
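The asynchronous scheme sketched in the abstract processes nodes in topological order, so each node's hidden state is computed only after all of its predecessors' states are available, mirroring how the DAG's computation itself would execute. Below is a minimal illustrative sketch in plain Python: `update` is a hypothetical placeholder for the learned aggregator (D-VAE uses a GRU-based update), and the node/edge representation is an assumption for illustration, not the paper's actual interface.

```python
from collections import defaultdict

def topological_order(nodes, edges):
    """Kahn's algorithm: return the nodes ordered so every edge points forward."""
    indeg = {v: 0 for v in nodes}
    succ = defaultdict(list)
    for u, v in edges:
        indeg[v] += 1
        succ[u].append(v)
    frontier = [v for v in nodes if indeg[v] == 0]
    order = []
    while frontier:
        u = frontier.pop()
        order.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                frontier.append(v)
    return order

def encode_dag(nodes, edges, features, update):
    """Asynchronous message passing: a node's state is computed only after
    all of its predecessors' states exist (unlike simultaneous schemes,
    where every node updates in parallel each round)."""
    pred = defaultdict(list)
    for u, v in edges:
        pred[v].append(u)
    state = {}
    for v in topological_order(nodes, edges):
        msgs = [state[u] for u in pred[v]]  # predecessor states, all ready
        state[v] = update(features[v], msgs)
    return state
```

With a toy additive `update` (standing in for the learned GRU), encoding the DAG a→c←b with features {a: 1, b: 2, c: 3} gives node c the state 3 + 1 + 2 = 6, since c sees both predecessors' finished states before its own update runs.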
URL
https://arxiv.org/abs/1904.11088