AIhub, 7 November, 19:38
REx:为药物再利用提供可解释的AI预测


AI tools for drug repurposing should provide explainable scientific grounding, not just prediction results. This article introduces REx, a reinforcement-learning-based method that explores biomedical knowledge graphs to find paths connecting drugs to diseases. REx attends not only to predictive accuracy but also to the scientific relevance of its explanations, using an information-content measure to assess the biological specificity of paths. The model generates concise, meaningful reasoning chains and presents them as structured subgraphs, giving scientists verifiable hypotheses and advancing the use of AI in scientific research.

💡 REx is a reinforcement learning method designed to address the explainability problem in drug repurposing. Rather than providing only prediction scores, it explores paths in a biomedical knowledge graph to generate scientific explanations connecting a given drug to a disease.

📊 What sets REx apart is its reward mechanism, which combines prediction "fidelity" (whether a path successfully connects the drug and the disease) with explanation "relevance". Relevance is quantified via an information-content measure that favors more specific, more informative biomedical concepts, ensuring the scientific value of the explanations.

🧬 REx trains an agent to explore the knowledge graph step by step, starting from the drug node and searching for a path to the disease node. At each step, the agent weighs the fidelity and relevance of the current path and learns from the resulting reward signal, ultimately producing meaningful reasoning chains.

🖼️ The explanations REx generates are presented as structured subgraphs, grouped and merged by path pattern (e.g., "drug → gene → disease"). It also incorporates ontology terms from NCIT and ChEBI, enriching the explanations with biological context and aligning them with established biomedical concepts.

Drug repurposing often starts as a hypothesis: a known compound might help treat a disease beyond its original indication. A good example is minoxidil: initially prescribed for hypertension, it later proved useful against hair loss. Knowledge graphs are a natural place to look for such hypotheses because they encode biomedical entities (drugs, genes, phenotypes, diseases) and their relations. In KG terms, that repurposing can be framed as a triple (minoxidil, treats, hair loss). However, many link prediction methods trade away interpretability for raw accuracy, making it hard for scientists to see why a suggested drug should work. We argue that for AI to function as a reliable scientific tool, it must deliver scientifically grounded explanations, not just scores. A good explanation connects the dots through established biology (e.g., upregulating VEGF enhances hair follicle survival).
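As a toy illustration of this framing (the tiny graph below is hand-made, not a real biomedical resource), a knowledge graph can be held as a set of (head, relation, tail) triples, and a repurposing hypothesis is simply a candidate triple not yet in the graph:

```python
# A knowledge graph as a set of (head, relation, tail) triples.
kg = {
    ("minoxidil", "treats", "hypertension"),
    ("minoxidil", "upregulates", "VEGF"),
    ("VEGF", "enhances_survival_of", "hair_follicle"),
}

# The repurposing hypothesis, framed as a triple.
hypothesis = ("minoxidil", "treats", "hair_loss")

# Link prediction asks whether this candidate edge should be added.
is_known = hypothesis in kg  # False: not yet an edge in the toy graph
```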

From predictions to explanations

Our work introduces REx, a reinforcement learning approach that not only predicts which drug-disease pairs might hold promise but also explains why. Instead of optimizing purely for accuracy, REx trains an agent to traverse a biomedical knowledge graph while being rewarded for producing paths that are both faithful to the prediction and scientifically relevant.

A path is considered faithful when it successfully connects the drug to the disease under investigation, and relevant when it involves specific, informative biomedical entities rather than generic ones. To measure this relevance, we developed a new metric based on Information Content (IC), which favors more specific biological concepts such as “VEGF signaling pathway” over broad ones like “cancer.”
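The IC idea can be sketched in a few lines: treat a concept's probability as its annotation frequency in some corpus, and score it with IC(c) = −log p(c), so rarer (more specific) concepts score higher. The counts below are invented for illustration, and REx's exact IC formulation may differ from this generic sketch.

```python
import math

# Hypothetical annotation counts: broad concepts occur often, specific ones rarely.
counts = {"cancer": 900, "VEGF signaling pathway": 12}
TOTAL = 1000  # total annotations in the toy corpus

def information_content(concept: str) -> float:
    """IC(c) = -log p(c): higher for rarer, more specific concepts."""
    return -math.log(counts[concept] / TOTAL)

# The specific pathway outscores the generic disease term.
assert information_content("VEGF signaling pathway") > information_content("cancer")
```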

This reward mechanism encourages the model to search for concise and meaningful reasoning chains, similar to how a researcher might connect experimental evidence across different domains. As a result, REx shifts the focus from “Can we predict this link?” to “Can we justify this link scientifically?”

How REx works

REx trains a reinforcement learning agent to explore the biomedical knowledge graph one step at a time, moving from the drug node toward the disease node. At each step, the agent decides whether to follow an outgoing relation (for example, interacts_with or regulates) or to stop if it has reached a meaningful endpoint.
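That step loop can be sketched as a minimal rollout on a hand-made toy graph. Here the "policy" just takes the only available edge; in REx the choice at each step comes from a learned policy.

```python
# Toy graph: node -> list of (relation, neighbor) edges.
graph = {
    "minoxidil": [("upregulates", "VEGF")],
    "VEGF": [("enhances_survival_of", "hair_follicle")],
    "hair_follicle": [("associated_with", "hair_loss")],
}

def rollout(start, target, max_steps=5):
    """Walk from the drug node toward the disease node, one edge at a time."""
    node, path = start, []
    for _ in range(max_steps):
        if node == target:        # meaningful endpoint reached: stop
            break
        actions = graph.get(node, [])
        if not actions:           # dead end: nothing left to follow
            break
        rel, nxt = actions[0]     # REx's learned policy would choose here
        path.append((node, rel, nxt))
        node = nxt
    return path

path = rollout("minoxidil", "hair_loss")  # 3 edges ending at the disease node
```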

To encourage scientifically sound reasoning, the agent’s reward combines two signals:

- Fidelity: whether the path successfully connects the drug under study to the disease.
- Relevance: how specific and informative the biomedical entities along the path are, as measured by Information Content.

By multiplying these two rewards, REx ensures that the highest-scoring explanations are both correct and insightful. The model also includes an early-stopping mechanism: once the disease node is reached, the agent halts instead of wandering into redundant connections.
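The multiplicative combination can be sketched as follows; the binary fidelity term and averaged-IC relevance term are simplifying assumptions for illustration, not REx's published formulas.

```python
def reward(path_entities, reached_disease, ic):
    """Multiplicative reward: zero unless the path is faithful AND relevant."""
    fidelity = 1.0 if reached_disease else 0.0
    relevance = sum(ic[e] for e in path_entities) / len(path_entities)
    return fidelity * relevance

ic = {"VEGF signaling pathway": 4.42, "cancer": 0.11}  # toy IC scores

# A faithful path through a specific concept beats one through a generic concept...
assert reward(["VEGF signaling pathway"], True, ic) > reward(["cancer"], True, ic)
# ...and an unfaithful path earns nothing, however specific its entities.
assert reward(["VEGF signaling pathway"], False, ic) == 0.0
```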

Once relevant paths are found, REx groups them by metapath pattern: their structural type of reasoning (for instance, drug → gene → disease). It then merges the best representatives of each pattern into a compact explanation subgraph. To add biological context, REx enriches this subgraph with ontology terms from the National Cancer Institute Thesaurus (NCIT) and Chemical Entities of Biological Interest (ChEBI), ensuring each explanation aligns with well-defined biomedical concepts.
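A sketch of the grouping-and-merging step, assuming paths arrive as lists of (head, relation, tail) triples together with a hypothetical node-type lookup table:

```python
from collections import defaultdict

# Hypothetical node-type table used to derive a path's metapath.
node_type = {"minoxidil": "drug", "VEGF": "gene", "hair_loss": "disease"}

def metapath(path):
    """Reduce a path of (head, rel, tail) triples to its node-type pattern."""
    return tuple(node_type[h] for h, _, _ in path) + (node_type[path[-1][2]],)

def merge_best(scored_paths, top_k=1):
    """Group paths by metapath, keep the top scorers of each pattern,
    and merge them into one explanation subgraph (a set of edges)."""
    groups = defaultdict(list)
    for path, score in scored_paths:
        groups[metapath(path)].append((score, path))
    subgraph = set()
    for members in groups.values():
        members.sort(key=lambda m: m[0], reverse=True)
        for _, path in members[:top_k]:
            subgraph.update(path)
    return subgraph
```

Grouping by metapath keeps the subgraph compact: many paths share the same reasoning pattern, so only the strongest representative of each pattern needs to be shown.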

Why this matters

REx doesn’t just present predictions, it helps scientists understand them. By rewarding both accuracy and biological relevance, REx finds reasoning chains that mirror scientific thinking. This makes it possible to validate AI-generated hypotheses, not just generate them. In drug repurposing, that distinction is crucial: a prediction is useful only if we can understand why it might hold true.

By turning explainability into a rewardable objective, REx shows that interpretability and performance can reinforce each other, rather than compete.

Future directions

Like most systems built on knowledge graphs, REx’s reach depends on the completeness of available data. As biomedical graphs grow richer, we expect the explanations to become even more detailed and accurate.

We are now extending REx beyond drug repurposing to related areas such as drug recommendation and drug-target interaction prediction. Across all these domains, the goal remains the same: to build AI systems that can reason, and explain their reasoning, to scientists.

Available resources


This work was presented at IJCAI 2025.


Tags: drug repurposing, knowledge graphs, explainable AI, reinforcement learning, biomedicine