Abstract
We explored the addition bias, a cognitive tendency to prefer adding elements over removing them when altering an initial state or structure, in four preregistered experiments examining the problem-solving behavior of both humans and OpenAI's GPT-4 large language model. The experiments involved 588 participants from the U.S. and 680 iterations of GPT-4. The problem-solving task was either to create symmetry within a grid (Experiments 1 and 3) or to edit a summary (Experiments 2 and 4). As hypothesized, we found that the addition bias was present overall. Solution efficiency (Experiments 1 and 2) and the valence of the instruction (Experiments 3 and 4) played important roles. Human participants were less likely to use additive strategies when subtraction was relatively more efficient than when addition and subtraction were equally efficient. GPT-4 exhibited the opposite behavior, showing a strong addition bias when subtraction was more efficient. In terms of instruction valence, GPT-4 was more likely to add words when asked to "improve" rather than to "edit", whereas humans did not show this effect. Comparing the addition bias across conditions, we found more biased responses for GPT-4 than for humans. Our findings highlight the importance of considering comparable and sometimes superior subtractive alternatives, as well as reevaluating one's own, and particularly the language models', problem-solving behavior.
URL
https://arxiv.org/abs/2404.16692