AIhub | 23 September
AI development: combining multiple paradigms is key

This article is an interview with Professor Luc De Raedt of KU Leuven, who has long worked on integrating logic, probability, and machine learning, helping to shape the field of neurosymbolic AI. Professor De Raedt shares his views on AI research, in particular the debate between single-paradigm and multi-paradigm approaches. He stresses that the different schools of AI each have strengths and weaknesses, and that real progress lies in combining their advantages, despite the challenges this brings. He also discusses the potential of neurosymbolic AI for explainability, as well as the hype and risks in today's AI field. In his view, the AI community should explore combinations of different learning paradigms more deeply, and ensure that such combinations recover the original paradigms as special cases, rather than letting one side be replaced entirely.

🧠 **The neurosymbolic path to integration**: Professor Luc De Raedt argues that AI development should not be confined to a single paradigm, but should actively combine logic, probability, machine learning, and other fields. Such a multi-paradigm combination can draw on the strengths of each approach and overcome the limitations of any single method, driving more substantial progress in AI. He sees this as an ongoing and exciting challenge for the field.

💡 **Responding to criticism and challenges**: To the criticism that neurosymbolic AI risks losing both the efficiency of neural networks and the rigor of symbolic reasoning, De Raedt's answer is to build specialised systems for specific tasks. He acknowledges that fully combining these paradigms can incur high computational costs, but engineering effort and application-specific development can yield efficient, practical solutions. For example, the availability of GPUs and tools such as TensorFlow propelled neural networks forward; similar engineering effort is just as critical for neurosymbolic AI.

🔍 **Explainability: a distinctive strength of neurosymbolic AI**: De Raedt points out that symbolic methods are inherently explainable, since a proof is itself an explanation. Combining logic with neural networks makes it possible to use such logical derivations as part of an explanation. Although this does not fully expose the inside of a neural network, higher-level concepts and the kinds of explanations offered by graphical models, such as finding the most likely parse tree or inference, can strengthen AI explainability. Moreover, adding constraints during learning guarantees that the model obeys certain regularities, which increases trust.

🚀 **Outlook and the core question**: De Raedt believes the most compelling question for the AI community is how to combine different learning paradigms effectively, rather than rely on a single model. An ideal combination should still allow the original paradigms to be recovered as special cases. Taking large language models (LLMs) as an example, he imagines that if LLMs could be used to build reliable proof systems that perform logical reasoning with genuine guarantees, the power of LLMs could be combined with the rigor of logic, a direction with great potential that has not yet been realised.

Should AI continue to be driven by a single paradigm, or does real progress lie in combining the strengths and weaknesses of many? Professor Luc De Raedt of KU Leuven has spent much of his career persistently addressing this question. Through pioneering work that bridges logic, probability, and machine learning, he has helped shape the field of neurosymbolic AI. In our conversation at IJCAI 2025 in Montreal, he spoke about what continues to fascinate him in this line of research, how he responds to criticisms of neurosymbolic AI, and why reconciling multiple paradigms is such an exciting challenge.

Liliane-Caroline Demers: Hello Professor De Raedt, thank you very much for joining me. Could you start by telling me when was your first IJCAI and what is a core memory related to it?

Luc De Raedt: My first IJCAI was in 1989 in Detroit. It was also my first time in the US. That was quite an experience. It was a big conference, too. Conference sizes have gone up and down; that was one of the bigger ones, then it went down a little, and now it's up again. Reflecting the winters and summers of AI, I guess.

Liliane-Caroline: Since you were program chair for IJCAI 2022, do you have any advice you could give to the program chair of IJCAI 2026?

Luc: IJCAI 2022 was one of the first in-person conferences again. People really enjoyed it, and it was a great location. But, as here, not everybody could show up, because travel was still difficult for some.

One of the real challenges as program chair is keeping the acceptance rates balanced between the different areas. I also saw this when I was chairing ICML in 2005: the year before, in some subfields the acceptance rate went up to 65-70%, while in others it was much lower. That was because these were newer areas in machine learning, and people accepted more, while in other, more established areas the standards were stricter.

You still see this today. For some communities, IJCAI is the main conference, where they send their very best work. For others, larger specialized conferences dominate. So, IJCAI gets a different mix of submissions and striking the right balance is really challenging.

We did a lot of analysis. We looked at the average review scores, which differ between fields. Some communities score lower than others. In computer vision, for example, people seem more relaxed about publishing, while in machine learning, PR or CP, the reviews are stricter. You really have to monitor this carefully, and I tried to approach it scientifically.

Liliane-Caroline: What first drew you to the combination of logic and learning together, and why does it continue to fascinate you?

Luc: Okay, so what drew me to this? I was in a logic group, but I wanted to do machine learning. And so the combination was quite natural and very appealing because I had also done my Master’s on learning.

When I had just finished my PhD, the topic really started to take off, and this idea of combining learning and reasoning felt very natural. I think it’s still one of the main open questions today.

Later on, we added probability. At the time, a lot of people were trying to synthesize programs, and I became a bit pessimistic: I thought it would never work the way we were doing it. And now you see: LLMs can actually synthesize programs very well, though not perfectly yet. We could never have dreamed of that. I never believed it would happen in my lifetime, so it's quite a surprise.

So, yeah, I got in touch with probability, and that was interesting. We developed probabilistic logics, including a probabilistic Prolog called ProbLog. Then we had a visitor working on neural networks and their combination with logic, and I thought, yeah, maybe we can also do something. We started discussing it, and that was really cool.

It’s always about adding something new—new for me, in a sense. And working on new topics—that’s exciting, to broaden my perspective.

There are different schools in machine learning—Pedro Domingos describes five schools, and this quest for the “master algorithm”. And I think ultimately, I’m also fascinated by the idea of combining the best of these worlds. Logic is too limited, probability too, neural networks as well. So, by combining them, hopefully we get their strengths. Of course, you also inherit their weaknesses, and that makes it challenging. But it’s exciting.

And what’s also extremely nice to see is that now the field is really taking off. You know, being on the Gartner hype cycle, with a lot of interest and people—everywhere seeing neurosymbolic AI as the next wave. That’s cool.

Luc De Raedt and Liliane-Caroline Demers at IJCAI 2025.

Liliane-Caroline: Yes, you did mention in your talk this morning that neurosymbolic AI is likely to reach the “top of its hype curve” in the next two to five years. I was wondering what you think will drive that hype, and what risks do you see if expectations rise too fast? And why?

Luc: People use the term in many different meanings, and neurosymbolic is also quite broad. You can interpret it broadly or more narrowly. I usually go for the narrow interpretation, which I find is still the most challenging.

And, yeah, the risk is, as with all hype, that expectations become too high and that it gets viewed as a universal solution. Then people will get disappointed, of course, because the models have to fit the task they address.

Liliane-Caroline: Earlier, you mentioned that combining these paradigms means inheriting both their strengths and weaknesses. How do you respond to the criticism that neurosymbolic AI risks losing both the efficiency of neural networks and the rigor of symbolic reasoning?

Luc: I think that’s indeed a risk. I think the solution to that is to build specific solutions for specific cases. In general, if you look at the full combination of probability, logic and neural networks, neurosymbolic AI has high computational costs. Probabilistic inference itself is hard, so you cannot really avoid that. But if you build more specific systems—like the one that my student presented this morning about a neurosymbolic automaton—then you can build something that’s really efficient.

Another important factor is engineering. Neural networks only became popular after people devoted a lot of time and effort to getting them to run on GPUs, for instance, and to building tools like TensorFlow. That part is also important.

And for these complex combinations, it usually takes a while before they become feasible, and before people in academia or elsewhere have enough people and resources to really achieve it.

Liliane-Caroline: Another challenge that often arises is explainability. Could you expand on the stance that neurosymbolic AI might be uniquely positioned to deliver explainability in a way deep learning can't?

Luc: So, I guess symbolic methods are by nature very explainable because if you prove something, then the proof is, in a sense, the explanation. And if you combine these logics with neural networks, then, yeah, you can also use the proof or the kind of logical explanation as a part of the explanation.

Of course, that doesn’t allow me to look deep inside the neural network, but, still, you have some kind of higher-level concepts that you can use to find explanations. For example, graphical models offer certain types of explanations that can be exploited within neurosymbolic approaches—like finding the most likely parse tree, the most likely proof, MAP inference, all these kind of things.
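The MAP-style queries mentioned here differ from computing a query's probability: instead of summing over possible worlds, you pick the single most likely world consistent with an observation, and that world then serves as the explanation. A toy sketch, with made-up variables and probabilities:

```python
from itertools import product

# Illustrative MAP inference by enumeration. Variables, the rule, and
# the probabilities are invented for this example.
priors = {"rain": 0.3, "sprinkler": 0.5}

def consistent(world):
    # Suppose we observed wet grass, explained by rain or the sprinkler.
    return world["rain"] or world["sprinkler"]

def map_assignment():
    """Return the most probable world consistent with the observation,
    together with its (unnormalised) probability."""
    names = list(priors)
    best, best_p = None, -1.0
    for values in product([True, False], repeat=len(names)):
        world = dict(zip(names, values))
        if not consistent(world):
            continue
        p = 1.0
        for name in names:
            p *= priors[name] if world[name] else 1.0 - priors[name]
        if p > best_p:
            best, best_p = world, p
    return best, best_p

world, p = map_assignment()  # sprinkler alone is the best explanation here
```

The returned assignment plays the role of the "most likely proof": a discrete, human-readable account of why the observation holds.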

One thing that I like about neurosymbolic AI is that you can use these constraints on your learning process and then you know the constraints are going to be satisfied and that already gives you some trust.

It’s not a full explanation, but at least you know that your model will obey certain regularities. Without this kind of constraint-based learning, there’s just very little gain.
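One simple way to read "the constraints are going to be satisfied" is to restrict predictions to assignments that obey a rule. The sketch below uses an invented "exactly one of three labels" constraint, with raw scores standing in for neural-network outputs; it renormalises independent label probabilities over only the valid assignments, so every prediction obeys the constraint by construction:

```python
from itertools import product
import math

def satisfies(assignment):
    # The (made-up) constraint: exactly one label is on.
    return sum(assignment) == 1

def constrained_distribution(scores):
    """Renormalise independent label probabilities over only the
    assignments that satisfy the constraint."""
    probs = [1 / (1 + math.exp(-s)) for s in scores]  # sigmoids
    weights = {}
    for assignment in product([0, 1], repeat=len(probs)):
        if not satisfies(assignment):
            continue
        w = 1.0
        for p, a in zip(probs, assignment):
            w *= p if a else 1 - p
        weights[assignment] = w
    z = sum(weights.values())
    return {a: w / z for a, w in weights.items()}

dist = constrained_distribution([2.0, -1.0, 0.5])
print(max(dist, key=dist.get))  # the single most likely valid labelling
```

In training, the same world-weighted sum can be turned into a differentiable loss term that penalises probability mass on invalid assignments, which is the spirit of the constraint-based learning described above.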

Liliane-Caroline: And, to conclude, what would be one challenging question you would like to invite the AI community to reflect on?

Luc: For me, the most attractive question is still how to combine different learning paradigms. Instead of relying on one learning paradigm, we should really explore the combination of the possibilities.

The way that it should work is that, if you combine different things, you should still be able to recover the originals as special cases. That’s always what I’ve been arguing for. So, if you combine neural networks and logic, then you should have neural networks as a special case, but also logic as a special case. What we often see though is that, in these combinations, typically one of the two paradigms gets lost—everything becomes neural, or everything becomes logical.

Building these deep interfaces between these different paradigms is what I aim for and what I think is interesting. Take LLMs. If you could use them to build reliable proof systems, to do real logical reasoning with guarantees, that would be cool, right? Then, you would have the power of the LLM together with the power of logic. But that’s not happening yet.
*
Professor De Raedt added that while AGI dominates headlines, and is often hyped as imminent, he doesn’t believe it will be solved in the next few years. But not all his students agree, which sparks debates in the lab. “Oh yes,” he said, “it’s fun.”

About Luc De Raedt

Luc De Raedt is Director of Leuven.AI, the KU Leuven Institute for AI, full professor at KU Leuven, and guest professor at Örebro University (Sweden) in the Wallenberg AI, Autonomous Systems and Software Program. He is working on the integration of machine learning and machine reasoning techniques, also known under the term neurosymbolic AI. He has chaired the main European and International Machine Learning and Artificial Intelligence conferences (IJCAI, ECAI, ICML and ECMLPKDD) and is a fellow of EurAI, AAAI and ELLIS. He received ERC Advanced Grants in 2015 and 2023.
