All Content from Business Insider, 2 hours ago
AI Is Quietly Shifting Knowledge and Critical Thinking: Education Faces the Challenge of Algorithmic Authority

Kimberley Hardcastle, a professor of business and marketing at the UK's Northumbria University, warns that generative AI is quietly shifting the production of knowledge and the exercise of critical thinking from humans to Big Tech's algorithms. Students have begun relying on AI to write, edit, and even solve assignments outright, leading to the "outsourcing of cognitive processes" and an "atrophy of epistemic vigilance." Hardcastle argues that the real risk is that education systems may hand judgment over to algorithms built by tech giants, changing how knowledge is constructed and making society increasingly dependent on algorithms as arbiters of truth. She stresses that educators need to look beyond plagiarism and technical proficiency, think seriously about how to preserve human cognitive agency in the AI era, prevent knowledge authority from being quietly ceded to algorithms, and stay alert to how commercial training data and optimization metrics may shape the way knowledge is generated and validated.

🧠 **Outsourced cognition and shifting knowledge authority**: Generative AI tools such as ChatGPT are prompting students to outsource the thinking process itself, including writing, editing, and problem-solving, to algorithms. This dependence goes beyond completing assignments: it fundamentally changes students' relationship with knowledge, risks eroding their capacity for independent thought and critical evaluation, and quietly transfers knowledge authority to the algorithms of large technology companies.

📉 **Atrophy of epistemic vigilance and its social risks**: What Hardcastle calls the "atrophy of epistemic vigilance" is a decline in the individual's ability to independently verify, challenge, and construct knowledge. As people grow accustomed to relying on AI-generated answers and analyses, society's collective capacity for truth-seeking may weaken, with algorithms increasingly treated as the arbiters of truth, allowing commercial training data and optimization metrics to inadvertently shape how we understand and approach knowledge.

💡 **Structural challenges for education and how to respond**: Education systems must do more than police plagiarism; they need to recognize the structural change AI brings. The real risk is students and educators ceding judgment to algorithms. Educators should move beyond surface-level technical fixes and consider how to consciously shape AI integration in an AI-driven world so as to protect human epistemic agency, the capacity to think, reason, and judge independently, and keep the creation and validation of knowledge from being unduly influenced by algorithms.

🚀 **Preserving human epistemic agency in the AI era**: The key question is whether education can seize this critical inflection point and actively shape AI's role in learning rather than passively accept it. That means rethinking, at a fundamental level, how knowledge authority evolves in AI-assisted settings, and ensuring that students, even with AI's help, can still develop and retain the ability to judge independently and create knowledge, so that technological progress does not lead to a broad decline in human cognitive ability.

Kimberley Hardcastle said generative AI is quietly shifting knowledge and critical thinking from humans to Big Tech's algorithms.

Generative AI isn't just changing how students learn — it's changing who controls knowledge itself.

That's the warning from Kimberley Hardcastle, a business and marketing professor at the UK's Northumbria University.

She told Business Insider that the rise of ChatGPT, Claude, and Gemini in classrooms is shifting education's foundations in ways few institutions are prepared to confront.

While schools and universities focus on plagiarism, grading, and AI literacy, Hardcastle said the real risk lies deeper: in students and educators outsourcing judgment to algorithms built by Big Tech.

Students are outsourcing the thinking process

Data from Anthropic, the company behind Claude, shows just how deeply AI has entered the classroom.

After analyzing about one million student conversations in April, the company found that 39.3% involved creating or polishing educational content, while 33.5% asked the chatbot to solve assignments directly.

However, Hardcastle said this isn't just a case of students "not doing the work." She said it's also about how knowledge itself is constructed.

"When we bypass the cognitive journey of synthesis and critical evaluation, we're not just losing skills," she said. "We're changing our epistemological relationship with knowledge itself."

In other words, students are beginning to rely on AI not just to find answers but also to decide what counts as a good answer.

"This affects job prospects not through reduced ability, but through a shifted cognitive framework where validation and creation of knowledge increasingly depend on AI mediation rather than human judgment," she said.

The 'atrophy of epistemic vigilance'

Hardcastle said her biggest concern is what she called the "atrophy of epistemic vigilance" — the ability to independently verify, challenge, and construct knowledge without the help of algorithms.

As AI becomes more embedded in learning, she said, students risk losing the instinct to question sources, test assumptions, or think critically.

"We're witnessing the first experimental cohort encountering AI mid-stream in their cognitive development, making them AI-displaced rather than AI-native learners," she said.

"We're witnessing a transformation in cognitive practices," she added.

That loss could ripple beyond classrooms. If people stop practicing independent evaluation, society risks becoming dependent on algorithms as the arbiters of truth.

Big Tech's growing control over knowledge

Hardcastle warned that the deeper danger isn't just cognitive but structural.

If AI systems become the primary mediators of knowledge, Big Tech companies effectively control what counts as valid knowledge.

"The issue isn't dramatic control but subtle epistemic drift: when we consistently defer to AI-generated summaries and analyses, we inadvertently allow commercial training data and optimization metrics to shape what questions get asked and which methodologies appear valid," she said.

That drift, she said, risks entrenching corporate influence over how knowledge is created and validated — and quietly shifting authority from human judgment to algorithmic logic.

The stakes for education

Hardcastle said the question isn't whether education will "fight back" against AI, but whether it will consciously shape AI integration to preserve human epistemic agency — the capacity to think, reason, and judge independently.

That requires educators to move beyond compliance and operational fixes, and to start asking fundamental questions about knowledge authority in an AI-mediated world, she said.

"I'm less concerned about cohorts being 'worse off' than about education missing this critical inflection point," she said.

Unless universities act deliberately, she said, AI could erode independent thought — while Big Tech profits from controlling how knowledge itself is created.

Read the original article on Business Insider
