All content from Business Insider, October 4, 17:46
AI in Healthcare: Current Applications and Challenges

This article examines the current state of AI in healthcare, covering general-purpose models like ChatGPT as well as specialized medical AI startups. Although the technology is advancing rapidly, many tools remain at an early stage, and hospitals are subjecting them to rigorous pilots and ethics testing. AI currently stands out at automating repetitive tasks such as clinical note-taking, but it still faces challenges in diagnostic accuracy, data privacy, and user acceptance. The article highlights the potential of "ambient listening" technology to ease physician burnout, and notes that while controversy remains, AI adoption in medicine is steadily spreading; responsible and consistent use is essential.

🩺 **AI adoption in healthcare is widening, but significant challenges remain.** Many medical professionals have begun using AI tools while voicing concerns about accuracy, data privacy, and the risk of "hallucinations." General-purpose models such as ChatGPT are used for a range of tasks, including drafting emails and analyzing data, but their lack of specialization can produce faulty information. Meanwhile, medical AI startups are flourishing and attracting substantial venture capital, though the effectiveness of these new tools still requires extensive field testing and rigorous in-hospital evaluation.

📝 **Automated note-taking has become a key entry point for AI.** Most physicians interviewed said AI provides significant help with repetitive charting work, reducing "cognitive drain" and easing "clinician burnout." Ambient listening devices record doctor-patient conversations and automatically generate summaries, greatly streamlining physicians' workflows and letting them focus on the patient rather than on the keyboard. The technology is seen as having "real transformative power" and may even extend physicians' careers.

💡 **Rolling out AI in medicine demands caution; ethics and accuracy are the key considerations.** Despite AI's large potential efficiency gains, its errors can carry serious consequences. For example, one study found that doctors who regularly used AI assistance for colonoscopies detected fewer adenomas when later working without it. Hospitals adopting AI tools therefore need dual independent reviews assessing both ethical compliance and clinical benefit. Patient acceptance of technologies such as ambient listening also matters, making transparent disclosure and informed consent essential to adoption.

Adam Rodman uses ChatGPT when he's stumped.

The internal medicine physician and Harvard Medical School professor is upfront about it with his patients, and said he avoids putting in private medical information.

"Once I did it, and the patient herself was typing in and giving additional information to the chatbot," Rodman said. "It was a three-way conversation with the two of us in ChatGPT."

Rodman is one of over a dozen healthcare professionals who spoke to Business Insider about the daunting task of choosing which, if any, AI tools to use. Many use general-purpose models like ChatGPT, but those may not be specialized enough — or could hallucinate faulty answers.

Others choose to bring in a medical technology startup. In the first half of 2025, AI-enabled medical startups raised 62% of digital health venture funding, according to Rock Health's data. The period also saw $6.4 billion in venture funding poured into digital health companies, compared to $6 billion in the first half of 2024.

Since generative AI is still young, many of these tools have yet to face significant field testing, leading many hospitals to pilot them extensively in-house. These trials assess medical ethics and whether the tools are helpful in the first place.

Many healthcare professionals still choose not to touch the tech at all. According to an Elsevier Health survey, 48% of doctors reported using an AI tool in their work. That number is rapidly growing, though, having stood at 26% the year prior.

Doctors we spoke to had some common ground, with almost all using an AI device to automate one of the most repetitive parts of their job: taking notes. Beyond that, they varied widely in where they thought AI was helpful — and safe.

Many doctors are using ChatGPT — and not just for email-writing.

The AI giants are positioning themselves in the medical market. At a recent Federal Reserve conference, OpenAI CEO Sam Altman said that ChatGPT was a "better diagnostician than most doctors in the world." Google and Microsoft have specialized medical models; Microsoft's model was 4x more accurate than human diagnosticians in solving case studies, its AI CEO said.

Most doctors who spoke with Business Insider used a general-purpose chatbot in their practice. Some used enterprise-level models that couldn't train on the data, and others said they avoided feeding any sensitive information to the system.

John Brownstein said Boston Children's Hospital, where he is chief innovation officer, launched a HIPAA-compliant ChatGPT that guards protected health information. Almost 30% of employees now use it. He listed some examples: generating letters of reference, quantitative data analysis, or combing through internal documents.

Brownstein also works with AI startups, though his bar for these tools keeps rising.

In the era of AI coding tools, what once took a whole team of engineers can increasingly be done with only a handful, so Brownstein said the hospital can build its own version in-house. Some medical technology startups are also trying to use AI as a "hook," Brownstein said, making it difficult to separate the truly helpful from the bandwagon riders.

Just how many of these AI startups are pitching him? "It's like dozens every week," Brownstein said. "It's relentless."

During our call, Somos Community Care pathologist David Zhang demonstrated ChatGPT's response to a prostate biopsy. The chatbot said that an image of the biopsy was consistent with ductal adenocarcinoma, a form of pancreatic cancer. That diagnosis was "far from accurate," he said.

"That's why we have a secure job," Zhang said. "We can make the foundation model do better."

Doctors looking for more than the average ChatGPT response may seek out more specialized technology. There are hundreds, if not thousands, of options to choose from: the FDA's list of AI-enabled medical devices currently has 1,247 entries.

"AI was supposed to come in and help streamline things," said dentist Divian Patel. "What it became is a thousand portals, a thousand logins, and nobody wants to use it."

Patel is betting that his fellow dentists are willing to pay to simplify. He and Shervin Molayem, another dentist, cofounded Trust AI. The company ostensibly combines many of these AI services on one platform, and has raised $6 million in seed funding, per PitchBook.

Rebecca Mishuris, the chief medical information officer for Mass General Brigham, said that she tries not to have "shiny object syndrome" when judging the generative AI tools that come across her desk.

"They're offering solutions for problems I just don't have," Mishuris said. "Maybe I'll have it a year from now, but I just don't have it today."

Alan Weiss, an SVP of clinical advancement for hospital system Banner Health, said that it was "almost overwhelming" how many of his vendors were pitching new AI tools. He's careful with which new tech he takes on, having two independent groups review tools for both ethical and clinical concerns.

Like many other industries, hospitals are also trying to use AI for administrative work. Doctors who spoke with Business Insider reported using AI for patient check-ins and to process health insurance claims.

These tools don't always live up to their hype. Last year, Rodman said that the industry was buzzing about AI-written messages to patients. Then studies came out demonstrating that some doctors spent more time correcting the messages than hand-writing them, he said.

"Those have all pretty much fizzled out," Rodman said. "That's an area where there was a lot of excitement that hasn't lived up to the promise."

The doctors Business Insider interviewed ranged from excited AI practitioners to skeptics. But most of the doctors agreed on the power of one tool: ambient listening devices.

The AI-powered note-taking devices listen in on conversations between doctors and patients, summarizing insights in one document. Mishuris said the devices have "real transformative power."

Carl Dirks, an internal medicine physician at St. Luke's in Kansas City, called ambient listening a solution to "clinician burnout." Some primary care physicians told him that the tool extended their careers by limiting the "cognitive drain" of fast note-taking.

"We're really trying to restore human-to-human connection," said Philip Payne, Dirks's colleague and chief health AI officer at BJC Healthcare. "How do we get the computer out of the way so that the provider and the patient have a conversation rather than the provider sitting behind a keyboard and typing the whole time?"

There is no official rule or guideline about how to disclose the use of an ambient listening device. However, doctors have a legal obligation to disclose the device's use in states with two-party recording laws. Some hospitals also set disclosure policies for all their providers.

Patients then need to decide if they're comfortable with their chats being recorded and processed by AI. The quality of the disclosure matters here: In a July study of 121 ambient documentation pilots, nearly 75% of patients surveyed were comfortable with the technology's use.

As a psychiatrist, Farhan Hussain's notes are fairly extensive. He loved using an ambient listening device in previous jobs, but no longer has access in his new role at a telehealth firm. He misses it.

"Otherwise, we really are taking notes the whole time," Hussain said. "Like damn, I didn't go to med school to just become a scribe."

Ambient listening is also teeming with venture funding. In July, Ambience Healthcare raised $243 million in Series C funding backed by Oak HC/FT and Andreessen Horowitz. Nabla raised a $70 million Series C in June, led by HV Capital.

The number of well-funded ambient companies leaves doctors with additional choices. Brownstein uses Abridge, Hussain uses Nabla, and Weiss is currently piloting four different ambient companies.

Francisco Lopez-Jimenez believes cardiology is one of the most technology-forward medical disciplines, likely because of his peers' backgrounds in physics and computer science.

"Cardiology has really been at the front of innovation," said Lopez-Jimenez, Mayo Clinic's codirector of AI in cardiology. "It's one of the fields that started using and developing AI the earliest in medicine, no question."

Lopez-Jimenez is quick to admit bias — he is a cardiologist himself, after all. Pierre Elias, medical director of artificial intelligence at New York-Presbyterian Hospital and yet another cardiologist, agreed.

"If you look at FDA clearance for AI technology, it's radiology and cardiology that have led the path," Elias said. "But radiologists don't interface with patients quite as much as cardiologists do."

Among the wide swaths of medicine, a gulf is emerging between the eager and the skeptical. According to Elsevier Health, 48% of clinicians surveyed said they use AI in their practice — but 24% still don't use the technology at all, even in non-work settings.

On the other side of the gulf are the doctors who reject AI outright. Maybe they're worried about ethics, or about losing their touch. Indeed, an August study asked doctors who regularly used AI assistance to perform colonoscopies without it. Before adopting AI, their adenoma detection rate was 28.4%; working unassisted afterward, it declined to 22.4%.

Jonathan Simon, an internal medicine physician at Bayhealth, said there was only "one area" where he used AI: note-taking. He does not touch a general model like ChatGPT or any of the emerging med tech players in his practice, though he's attended some AI research talks that have intrigued him.

Simon worried about efficiency gains. He recognized that faster diagnoses mean a quicker rate of seeing patients — something that could be lucrative for doctors.

"In that desire to throughput as many patients as possible, because that's where the money is, the industry really has to keep an eye on responsible and consistent use of AI," he said.

"Mistakes might be rare, but a rare mistake can destroy someone's life."

Read the original article on Business Insider
