Abstract
Differentiable logics (DLs) have recently been proposed as a method of training neural networks to satisfy logical specifications. A DL consists of a syntax in which specifications are stated and an interpretation function that translates expressions in the syntax into loss functions. These loss functions can then be used during training with standard gradient descent algorithms. The variety of existing DLs and the differing levels of formality with which they are treated make a systematic comparative study of their properties and implementations difficult. This paper remedies this problem by suggesting a meta-language for defining DLs that we call the Logic of Differentiable Logics, or LDL. Syntactically, it generalises the syntax of existing DLs to first-order logic (FOL), and for the first time introduces a formalism for reasoning about vectors and learners. Semantically, it introduces a general interpretation function that can be instantiated to define loss functions arising from different existing DLs. We use LDL to establish several theoretical properties of existing DLs, and to conduct an empirical study of them in neural network verification.
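To make the core idea concrete, the following is a minimal, illustrative sketch (not the paper's LDL formalism) of how a DL-style interpretation function can map a logical specification into a differentiable-in-spirit loss: atomic comparisons become non-negative penalties that are zero exactly when the atom holds, and logical connectives combine those penalties. The function names and the example specification are purely hypothetical.

```python
# Sketch of a DL-style interpretation: formulas -> penalty (loss) values.
# A penalty of 0.0 means the (sub)formula is satisfied; larger values
# measure the degree of violation, so gradient descent can reduce them.

def leq(a, b):
    # Atom "a <= b": zero penalty when true, linear violation otherwise.
    return max(a - b, 0.0)

def and_(p, q):
    # Conjunction: both penalties must vanish, so we sum them.
    return p + q

def or_(p, q):
    # Disjunction: one vanishing penalty suffices, so we multiply.
    return p * q

def spec_loss(y):
    # Example specification: the output y must lie in [0, 1],
    # i.e. (0 <= y) AND (y <= 1), interpreted as a loss term.
    return and_(leq(0.0, y), leq(y, 1.0))
```

With this interpretation, `spec_loss(0.5)` is `0.0` (the spec holds), while `spec_loss(1.5)` is `0.5`, penalising the output in proportion to how far it violates the upper bound. Different DLs correspond to different choices of these combining operators, which is precisely the design space LDL makes explicit.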
URL
https://arxiv.org/abs/2303.10650