MIT Technology Review » Artificial Intelligence · September 10
Therapists Secretly Using ChatGPT Sparks Controversy

Recent reports have revealed that some therapists are secretly using ChatGPT to assist with therapy sessions without informing their patients, sparking an ethical controversy over the use of AI in mental health care. While AI may one day provide mental-health support, general-purpose models such as ChatGPT raise serious questions about confidentiality, accuracy, and suitability for professional psychotherapy. The article argues that therapists should disclose any use of AI openly and transparently in order to preserve trust. Meanwhile, regulators and professional associations are paying close attention and drafting guidelines to govern AI's use in mental health and guard against potential risks.

🤫 **Secret AI use by therapists triggers a trust crisis**: Some therapists used ChatGPT during counseling sessions without telling their patients, and when patients found out, the discovery seriously damaged the trust between them. The article stresses that therapists who use AI tools must fully disclose how and why they are using them; otherwise, the revelation causes discomfort and risks destroying that trust.

🤔 **The potential and limits of AI as an assistive tool**: Research suggests that AI models built specifically for therapy can be effective for standardized treatments such as CBT, but general-purpose models like ChatGPT have clear limitations when it comes to treatment advice. The therapists interviewed were generally wary of feeding sensitive data into these tools and saw consulting colleagues or the case literature as a more reliable form of professional support. AI still cannot replace human therapists in handling complex emotions and offering deep understanding.

⚖️ **The need for regulation and ethical guidelines**: As AI is used more widely in mental health care, ethical questions and potential risks are surfacing. Professional bodies such as the American Counseling Association advise against using AI to diagnose patients, and some states, including Nevada and Illinois, have passed laws prohibiting the use of AI in therapeutic decision-making. Stricter regulations may follow to govern AI's role in psychotherapy and protect patients' interests.

🚀 **Tech companies overpromising on AI's abilities**: The article questions whether tech companies are "overpromising" on AI's therapeutic capabilities. Some people may treat ChatGPT as a form of emotional support, but real therapy goes far beyond that: it involves being challenged, drawn out, and deeply understood. In delivering therapy in any meaningful sense, general-purpose AI models still fall far short of human therapists.

In Silicon Valley’s imagined future, AI models are so empathetic that we’ll use them as therapists. They’ll provide mental-health care for millions, unimpeded by the pesky requirements for human counselors, like the need for graduate degrees, malpractice insurance, and sleep. Down here on Earth, something very different has been happening. 

Last week, we published a story about people finding out that their therapists were secretly using ChatGPT during sessions. In some cases it wasn’t subtle; one therapist accidentally shared his screen during a virtual appointment, allowing the patient to see his own private thoughts being typed into ChatGPT in real time. The model then suggested responses that his therapist parroted. 

It’s my favorite AI story as of late, probably because it captures so well the chaos that can unfold when people actually use AI the way tech companies have all but told them to.

As the writer of the story, Laurie Clarke, points out, it’s not a total pipe dream that AI could be therapeutically useful. Early this year, I wrote about the first clinical trial of an AI bot built specifically for therapy. The results were promising! But the secretive use by therapists of AI models that are not vetted for mental health is something very different. I had a conversation with Clarke to hear more about what she found. 

I have to say, I was really fascinated that people called out their therapists after finding out they were covertly using AI. How did you interpret the reactions of these therapists? Were they trying to hide it?

In all the cases mentioned in the piece, the therapist hadn’t provided prior disclosure of how they were using AI to their patients. So whether or not they were explicitly trying to conceal it, that’s how it ended up looking when it was discovered. I think for this reason, one of my main takeaways from writing the piece was that therapists should absolutely disclose when they’re going to use AI and how (if they plan to use it). If they don’t, it raises all these really uncomfortable questions for patients when it’s uncovered and risks irrevocably damaging the trust that’s been built.

In the examples you’ve come across, are therapists turning to AI simply as a time-saver? Or do they think AI models can genuinely give them a new perspective on what’s bothering someone?

Some see AI as a potential time-saver. I heard from a few therapists that notes are the bane of their lives. So I think there is some interest in AI-powered tools that can support this. Most I spoke to were very skeptical about using AI for advice on how to treat a patient. They said it would be better to consult supervisors or colleagues, or case studies in the literature. They were also understandably very wary of inputting sensitive data into these tools.

There is some evidence AI can deliver more standardized, “manualized” therapies like CBT [cognitive behavioral therapy] reasonably effectively. So it’s possible it could be more useful for that. But that is AI specifically designed for that purpose, not general-purpose tools like ChatGPT.

What happens if this goes awry? What attention is this getting from ethics groups and lawmakers?

At present, professional bodies like the American Counseling Association advise against using AI tools to diagnose patients. There could also be more stringent regulations preventing this in future. Nevada and Illinois, for example, have recently passed laws prohibiting the use of AI in therapeutic decision-making. More states could follow.

OpenAI’s Sam Altman said last month that “a lot of people effectively use ChatGPT as a sort of therapist,” and that to him, that’s a good thing. Do you think tech companies are overpromising on AI’s ability to help us?

I think that tech companies are subtly encouraging this use of AI because clearly it’s a route through which some people are forming an attachment to their products. I think the main issue is that what people are getting from these tools isn’t really “therapy” by any stretch. Good therapy goes far beyond being soothing and validating everything someone says. I’ve never in my life looked forward to a (real, in-person) therapy session. They’re often highly uncomfortable, and even distressing. But that’s part of the point. The therapist should be challenging you and drawing you out and seeking to understand you. ChatGPT doesn’t do any of these things. 

Read the full story from Laurie Clarke

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
