All Content from Business Insider, October 7
Researcher: Universities Should Teach Students to Critique AI, Not Just Copy It


A South African researcher argues that university education is failing to cultivate critical thinking because students increasingly let AI do their thinking for them. Educators, she says, should teach students to evaluate AI-generated content and spot its inaccuracies, biases, or shallowness rather than merely imitate it. The article proposes that universities reform assessment to guide students through multiple levels of thinking with AI assistance, promote ethical and transparent AI use, and strengthen independent judgment through peer review and by rewarding the reflective process, so that graduates emerge with genuine critical-thinking skills.

🎓 **Evaluating AI-generated content:** The researcher stresses that universities should teach students to critically evaluate AI-generated content, identifying potential inaccuracies, biases, or shallowness, so that students develop a discerning attitude toward information rather than accepting it passively.

🔄 **Scaffolding assignments across levels:** Educators are advised to design staged assignments that guide students from basic comprehension through analysis to original creation, preventing them from outsourcing an entire project to AI and ensuring deep engagement throughout the learning process.

💡 **Promoting ethical and transparent AI use:** Students need to understand that responsible AI use means disclosing when, how, and why AI tools were used. This transparency helps frame AI as a learning partner rather than a "secret weapon," and builds academic integrity.

🤝 **Encouraging peer review of AI-assisted work:** Having students review each other's AI-assisted drafts pushes them to evaluate both the technology itself and the human thinking behind it, restoring a sense of dialogue and collaboration that counters pure automation.

📈 **Rewarding reflection, not just results:** Grading should factor in how students used AI, including documenting their process, justifying their choices, and demonstrating learning through comparison with the machine's reasoning, thereby encouraging deep reflection.

A South African researcher says universities are failing to teach critical thinking as students let AI do their thinking for them.

Across university campuses, professors are wrestling with a new kind of plagiarism panic: the fear that students are letting ChatGPT and other generative AI tools do the thinking for them.

But one education researcher said that the real crisis isn't cheating — it's that higher education keeps testing the very skills AI performs best, while neglecting others it can't.

In an essay for The Conversation published on Sunday, Anitia Lubbe, an associate professor at North-West University in South Africa, said universities are "focusing only on policing" AI use instead of asking a more fundamental question: whether students are really learning.

Most assessments, she wrote, still reward memorization and rote learning — "exactly the tasks that AI performs best."

Lubbe warned that unless universities rethink how they teach and assess students, they risk producing graduates who can use AI but not critique its output.

"This should include the ability to evaluate and analyse AI-created text," she wrote. "That's a skill which is essential for critical thinking."

Instead of banning AI, Lubbe said, universities should use it to teach what machines can't do — reflection, judgment, and ethical reasoning.

She proposed five ways educators can fight back:

1. Teach students to evaluate AI output as a skill

She said professors should make students interrogate generative AI tools' output — asking them to identify where an AI-generated answer is inaccurate, biased, or shallow before they can use it in their own work.

That, she said, is how students learn to think critically about information rather than just consume it.

2. Scaffold assignments across multiple levels of thinking

Rather than letting AI handle every stage of a project, she urged teachers to design tasks that guide students through progressively deeper levels of thinking — moving from basic comprehension to analysis and ultimately to original creation — so they can't simply delegate the entire process to a machine.

3. Promote ethical and transparent use of AI

Students, she said, must understand that responsible use begins with disclosure — explaining when, how, and why they've used tools like ChatGPT.

She said that openness not only builds integrity but also helps demystify AI as a learning partner instead of a secret weapon.

4. Encourage peer review of AI-assisted work

When students critique each other's AI-generated drafts, she said, they learn to evaluate both the technology and the human thinking behind it.

That process, in her view, restores a sense of dialogue and collaboration that pure automation erases.

5. Reward reflection, not just results

She said grades should factor in how students used AI — whether they documented their process, justified their choices, or demonstrated learning through comparison with the machine's reasoning.

"But focusing only on policing misses a bigger issue: whether students are really learning," Lubbe wrote.

A wider academic alarm

Lubbe's warning echoes a broader unease among educators that students are quietly outsourcing thinking to AI.

Last week, Kimberley Hardcastle, a business professor at Northumbria University, wrote that AI allows students to "produce sophisticated outputs without the cognitive journey traditionally required to create them," calling it an "intellectual revolution" that risks handing control of knowledge to Big Tech.

While Hardcastle fears AI is hollowing out critical thought, former venture capitalist turned educator Ted Dintersmith warned that schools are already training students to think like machines — a mistake he says will leave them unprepared for a job market where "two or three people who are good at AI will replace 20 or 30 who aren't."

Last week, he told BI that schools are already "training kids to follow distantly in the footsteps of AI," churning out "flawed, expensive versions of ChatGPT" instead of teaching creativity, curiosity, and collaboration — the very skills machines can't replicate.

Read the original article on Business Insider

