AIhub · 16 September
AI may fuel “pseudoscience” research and undermine scientific credibility

The article examines the risk that artificial intelligence (AI) could fuel the spread of “pseudoscience” research. Citing the case of a pharmaceutical company using ghostwriters to produce misleading scientific reviews, and of soft-drink and meat producers funding biased studies, it argues that AI tools have slashed the cost of such research, allowing a single individual to generate large numbers of plausible-looking papers in a short time. These studies often focus on links between a single factor and a health outcome, making spurious correlations likely. The article then examines the surge in single-factor studies under AI, and notes that UK government guidance encouraging businesses to back product health claims with scientific evidence may further stimulate demand for AI-assisted “scientific evidence”. It closes with proposed reforms: strengthening peer review; requiring authors to disclose research plans, data, code and conflicts of interest; and adopting methods such as specification curve analysis to make findings more robust and preserve the credibility of science.

🤖 AI has lowered the barriers to, and cost of, “pseudoscience” research, potentially unleashing a flood of false or misleading studies. Papers that once took months to produce can now be generated by AI tools in multiples within hours, drastically cutting the cost of bad-faith research and making it far easier for those chasing commercial interests or career advancement to manufacture plausible-looking evidence.

📊 Single-factor studies are surging, and they readily produce spurious associations. When a dataset covers large numbers of individuals and data points, researchers can easily stumble on misleading correlations that arise by chance. The article notes that an average of just four single-factor studies were published per year between 2014 and 2021, yet 190 appeared in the first ten months of 2024 alone – a jump that reflects how AI fuels this kind of research, which can then be used to support product marketing or particular agendas.

🔬 Reforming peer review and improving research transparency are key to meeting the challenge. The article argues for caution towards research that has not passed peer review, and for reforms such as requiring authors to publish a research plan in advance (preregistration), report every research step transparently, and share their data, code and experimental materials. For single-factor papers, methods such as specification curve analysis should be used to test the robustness of results, and authors should be required to disclose all similar secondary analyses and any use of AI.

⚖️ The credibility of science is vital, and misuse of AI could threaten its foundations. Although public trust in science remains broadly high, the spread of AI could further erode science’s role as an “impartial judge”. The article stresses that, to preserve the independence and authority of science, steps must be taken to incentivise meaningful peer review and ensure that research truly serves the truth rather than profit or popular opinion.

Nadia Piet & Archival Images of AI + AIxDESIGN / AI Am Over It / Licenced by CC-BY 4.0

By David Comerford, University of Stirling

Back in the 2000s, the American pharmaceutical firm Wyeth was sued by thousands of women who had developed breast cancer after taking its hormone replacement drugs. Court filings revealed the role of “dozens of ghostwritten reviews and commentaries published in medical journals and supplements being used to promote unproven benefits and downplay harms” related to the drugs.

Wyeth, which was taken over by Pfizer in 2009, had paid a medical communications firm to produce these articles, which were published under the bylines of leading doctors in the field (with their consent). Any medical professionals reading these articles and relying on them for prescription advice would have had no idea that Wyeth was behind them.

The pharmaceutical company insisted that everything written was scientifically accurate and – shockingly – that paying ghostwriters for such services was common in the industry. Pfizer ended up paying out more than US$1 billion (£744 million) in damages over the harms from the drugs.

The articles in question are an excellent example of “resmearch” – bullshit science in the service of corporate interests. While the overwhelming majority of researchers are motivated to uncover the truth and check their findings robustly, resmearch is unconcerned with truth – it seeks only to persuade.

We’ve seen numerous other examples in recent years, such as soft drinks companies and meat producers funding studies that are less likely than independent research to show links between their products and health risks.

A major current worry is that AI tools reduce the costs of producing such evidence to virtually zero. Just a few years ago it took months to produce a single paper. Now a single individual using AI can produce multiple papers that appear valid in a matter of hours.

Already, the public health literature is seeing a slew of papers that draw on datasets optimised for use with AI to report single-factor results. These results link a single factor to some health outcome – for example, tying egg consumption to the development of dementia.

These studies lend themselves to specious results. When datasets span thousands of people and hundreds of pieces of information about them, researchers will inevitably find misleading correlations that occur by chance.
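To see how easily such chance correlations arise, consider the following minimal simulation sketch (mine, not the article’s; the dataset sizes, variable names and significance threshold are all hypothetical). It tests hundreds of pure-noise variables against an unrelated outcome and counts how many clear a conventional significance test:

```python
import numpy as np
from scipy import stats

# Hypothetical illustration of the multiple-comparisons problem:
# with enough unrelated variables, some will correlate with the
# outcome purely by chance.
rng = np.random.default_rng(seed=0)

n_people = 5_000   # individuals in the hypothetical dataset
n_factors = 300    # pieces of information per person (all noise here)

factors = rng.normal(size=(n_people, n_factors))
outcome = rng.normal(size=n_people)  # independent of every factor

# Count factors whose correlation with the outcome has p < 0.05.
false_positives = sum(
    stats.pearsonr(factors[:, j], outcome)[1] < 0.05
    for j in range(n_factors)
)

# At the conventional 5% threshold we expect roughly 0.05 * 300 = 15
# spurious "findings", despite there being no real effect at all.
print(f"{false_positives} of {n_factors} noise factors test as 'significant'")
```

Run enough tests on a rich enough dataset and “significant” single-factor results are guaranteed to surface, whether or not anything real is there.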

A search of the leading academic databases Scopus and PubMed showed that an average of four single-factor studies were published per year between 2014 and 2021. In the first ten months of 2024 alone, a whopping 190 were published.

These weren’t necessarily motivated by corporate interests – some could, for example, be the result of academics looking to publish more material to boost their career prospects. The point is more that with AI facilitating these kinds of studies, they become an added temptation for businesses looking to promote products.

Incidentally, the UK has just given some businesses an additional motivation for producing this material. New government guidance asks baby-food producers to make marketing claims that suggest health benefits only if supported by scientific evidence.

While well-intentioned, this guidance will incentivise firms to find results showing that their products are healthy. That could increase their demand for the sort of AI-assisted “scientific evidence” that is ever more available.

Fixing the problem

One issue is that research does not always go through peer review prior to informing policy. In 2021, for example, US Supreme Court justice Samuel Alito, in an opinion on the right to carry a gun, cited a briefing paper by a Georgetown academic that presented survey data on gun use.

The academic and gun survey were funded by the Constitutional Defence Fund, which the New York Times describes as a “pro-gun nonprofit”.

Since the survey data are not publicly available and the academic has refused to answer questions about this, it is impossible to know whether his results are resmearch. Still, lawyers have referenced his paper in cases across the US to defend gun interests.

One obvious lesson is that anyone relying on research should be wary of any that has not passed peer review. A less obvious lesson is that we will need to reform peer review as well. There has been much discussion in recent years about the explosion in published research and the extent to which reviewers do their jobs properly.

Over the past decade or so, several groups of researchers have made meaningful progress in identifying procedures that reduce the risk of specious findings in published papers. Advances include getting authors to publish a research plan before doing any work (known as preregistration), then transparently reporting all the research steps taken in a study, and making sure reviewers check this is in order.

Also, for single-factor papers, there’s a recent method called specification curve analysis that comprehensively tests the robustness of the claimed relationship against alternative ways of slicing the data.
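As a rough illustration of the idea – my own sketch, not the author’s; the statsmodels library, the invented eggs/dementia variables and the choice of control sets are all assumptions – the code below re-estimates one single-factor effect under every combination of control variables, the core move of a specification curve:

```python
import itertools

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: no true effect of eggs on the outcome is built in.
rng = np.random.default_rng(1)
n = 2_000
df = pd.DataFrame({
    "eggs": rng.poisson(3, n),          # invented exposure variable
    "age": rng.integers(40, 90, n),
    "bmi": rng.normal(26, 4, n),
    "smoker": rng.integers(0, 2, n),
})
df["dementia_score"] = 0.02 * df["age"] + rng.normal(0, 1, n)

# One OLS specification per subset of control variables (2^3 = 8 models).
controls = ["age", "bmi", "smoker"]
results = []
for k in range(len(controls) + 1):
    for subset in itertools.combinations(controls, k):
        formula = "dementia_score ~ eggs"
        if subset:
            formula += " + " + " + ".join(subset)
        fit = smf.ols(formula, data=df).fit()
        results.append((fit.params["eggs"], fit.pvalues["eggs"], formula))

# A robust effect keeps a stable sign and significance across the whole
# curve; a specious one flips with arbitrary analysis choices.
for beta, p, formula in sorted(results):
    print(f"eggs coefficient {beta:+.4f} (p={p:.3f})  [{formula}]")
```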

Journal editors in many fields have adopted these proposals, and updated their rules in other ways too. They often now require authors to publish their data, their code and the survey or materials used in experiments (such as questionnaires, stimuli and so on). Authors also have to disclose conflicts of interest and funding sources.

Some journals have gone further. In response to the finding about AI-optimised datasets, for instance, some now require authors to cite all other published secondary analyses similar to their own, and to disclose how AI was used in their work.

Some fields have definitely been more reformist than others. Psychology journals have, in my experience, gone further to adopt these processes than have economics journals.

For instance, a recent study applied additional robustness checks to analyses published in the top-tier American Economic Review. This suggested that studies published in the journal systematically overstated the strength of evidence contained within the data.

In general, the current system seems ill-equipped to cope with the deluge of papers that AI will precipitate. Reviewers need to invest time, effort and scrupulous attention checking preregistrations, specification curve analyses, data, code and so on.

This requires a peer-review mechanism that rewards reviewers for the quality of their reviews.

Public trust in science remains high worldwide. That is good for society because the scientific method is an impartial judge that promotes what is true and meaningful over what is popular or profitable.

Yet AI threatens to take us further from that ideal than ever. If science is to maintain its credibility, we urgently need to incentivise meaningful peer review.

David Comerford, Professor of Economics and Behavioural Science, University of Stirling

This article is republished from The Conversation under a Creative Commons license. Read the original article.
