Is AI bad for critical thinking? It depends on when you use it.

Source: https://www.sciencenews.org/article/ai-timing-critical-thinking-study
Summary:
Does AI hinder critical thinking? A study finds timing is key
A new study suggests that AI's effect on critical thinking is not fixed: it depends heavily on when the AI is used. For tasks such as essay writing, thinking through the problem on your own first and turning to AI later tends to produce better results.
Computer scientist Mina Lee of the University of Chicago and her colleagues reported the finding at the 2026 CHI conference on Human Factors in Computing Systems in Barcelona. The study randomly assigned 393 participants to play the role of city council members and, using a set of provided documents, write an essay justifying a decision on a water-contamination mitigation proposal.
Participants given ample time (30 minutes) outperformed those under time pressure (10 minutes) across the board. Among them, the group with sufficient time that used the AI chatbot (GPT-4o) only late in the process scored highest on argument quality, breadth of textual references, and integration of multiple perspectives, while the group with sufficient time that never used AI retained the source material best.
The study also revealed a real-world dilemma: under tight deadlines, the group that used AI early scored highest on its essays, so AI genuinely can speed up output. The researchers caution, however, that this speed may come at the cost of independent thinking: users can unknowingly adopt the AI's framing, doing less of their own argumentation and deep analysis.
Barbara Oakley, an education expert at Oakland University in Michigan, says this matches research on two modes of learning: "slow" learning relies on deliberate, effortful analysis, while "fast" learning leans on habit and snap judgment. Participants who reasoned on their own before using AI had already gone through the slower, more careful process.
Lee stresses that the public needs to build awareness around AI use, weighing its pros and cons across different scenarios and stages of a task. "Our study uses time constraints as an entry point," she says, "and shows the importance of developing AI literacy and understanding your own thinking patterns. That may be what everyone needs to work on right now."
Key takeaway: AI is neither the enemy of critical thinking nor a universal assistant. A sound strategy is to think independently first when time allows and bring in AI later to broaden perspectives; under deadline pressure AI can help you move faster, but beware of over-relying on its output.
Original article (English):
Is AI bad for critical thinking? It depends on when you use it
If used later in writing an essay, the chatbots can help include more perspectives
Next time you’re about to ask an AI chatbot to help you solve a hard problem, you might want to slow your roll.
People who waited to consult an AI chatbot until they had partially worked through a problem on their own performed better on a critical thinking task than those who used the chatbot from the start, researchers reported April 14 at the 2026 CHI conference on Human Factors in Computing Systems in Barcelona. Under tight deadlines, though, using AI early in the process did provide a boost, highlighting a trade-off between speed and independent reasoning, and raising questions about how and when we should use chatbots.
In the study, computer scientist Mina Lee of the University of Chicago and colleagues randomly assigned 393 people to one of eight categories. First, participants were divided into two large groups: those given sufficient time (30 minutes) or insufficient time (10 minutes). Then, they were divided into smaller groups based on when, or if, they could use OpenAI’s GPT-4o chatbot: early, continuous, late or no access. Each group had roughly 40 to 50 participants.
Next, participants were instructed to play the role of a city council member and decide, using a set of seven documents, whether to accept or reject a company’s proposal to mitigate a water contamination problem. Each participant had to write an essay explaining their decision.
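For concreteness, the 2 × 4 assignment described above can be sketched in Python. The condition labels and the round-robin balancing below are illustrative assumptions, not details taken from the paper:

```python
import random
from collections import Counter
from itertools import product

# Hypothetical condition labels; the study's own terminology may differ.
TIME_CONDITIONS = ["sufficient_30min", "insufficient_10min"]
AI_ACCESS = ["early", "continuous", "late", "none"]

def assign_groups(n_participants=393, seed=0):
    """Shuffle participant IDs, then deal them round-robin into the
    2 x 4 = 8 cells so group sizes stay nearly equal."""
    rng = random.Random(seed)
    ids = list(range(n_participants))
    rng.shuffle(ids)
    cells = list(product(TIME_CONDITIONS, AI_ACCESS))
    return {pid: cells[i % len(cells)] for i, pid in enumerate(ids)}

groups = assign_groups()
sizes = Counter(groups.values())
# 393 participants over 8 cells yields 49 or 50 per group, consistent
# with the reported "roughly 40 to 50 participants" per group.
```

Balanced dealing after a shuffle is one common way to keep cell sizes even; the study may have used a different randomization scheme.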
The researchers scored the essays based, in part, on how many valid arguments and textual references they contained and found that participants given 30 minutes performed better across the board than those given only 10 minutes. The most successful in terms of essay scores were participants who had enough time to complete the task and had access to the chatbot later in the process.
When the researchers looked at how well participants remembered information in the provided documents, the most successful group was the one that had sufficient time and never had access to the chatbot. The researchers also scored myside bias, measuring how many perspectives participants incorporated in their arguments. They found that the group with sufficient time and late chatbot access did best.
The results align with research on two kinds of learning: one based on slow, effortful reasoning and another based on fast, automatic thinking, says Barbara Oakley, a systems engineer and education expert at Oakland University in Rochester Hills, Mich. Slow learning involves carefully building an understanding of the problem and weighing options, while fast learning relies on habits and quick judgments with little reflection. Participants who had time to reason through the material on their own before using AI did best because they had already engaged in that slower, more deliberate learning, she says.
Of course, in the real world, people often have to complete critical thinking tasks under time pressure. In the four groups in the “insufficient time” category, the group with access to the chatbot early on scored the highest on their essays. That doesn’t mean we should rush to use AI, Lee says. “When you are under time pressure and use AI to boost your performance, then you are basically risking [just taking and using the] AI’s framing, and that reduces the kinds of arguments that you make and your engagement with the documents or different pieces of information,” she says. You have to “at least be aware of what you’re signing up [for].”
That awareness is probably what everyone should aim for right now. People will need strong AI literacy and knowledge of their own thinking patterns to weigh the risks and benefits of using chatbots in different scenarios and at different points in problem-solving, Lee says. “I think our work kind of targets time constraints as the first step towards [that] understanding.”
Article link: https://news.qimuai.cn/?post=3814