Abstract
Knowledge claims are abundant in the literature on large language models (LLMs), but can we say that GPT-4 truly "knows" the Earth is round? To address this question, we review standard definitions of knowledge in epistemology and formalize interpretations applicable to LLMs. In doing so, we identify inconsistencies and gaps in how current NLP research conceptualizes knowledge with respect to epistemological frameworks. Additionally, we conduct a survey of 100 professional philosophers and computer scientists to compare their preferred definitions of knowledge and their views on whether LLMs can really be said to know. Finally, we suggest evaluation protocols for testing knowledge in accordance with the most relevant definitions.
URL
https://arxiv.org/abs/2410.02499