AI News · October 29, 21:17
Counterintuitive: Building “Reasoning-Native Computing” to Escape AI’s “Twin Trap”

AI startup Counterintuitive is working to build “reasoning-native computing,” with the aim of enabling machines to genuinely understand rather than merely mimic. Such a breakthrough could lift AI from pattern recognition to real comprehension, paving the way for systems that can think and make decisions and behave in a more “human-like” way. The company’s first target is what it calls the “twin trap”: the numerical and architectural limitations of today’s AI systems. Specifically, existing AI lacks a reliable numerical foundation and is vulnerable to rounding errors that make results non-deterministic; at the same time, AI models have no memory of their own reasoning, so they cannot retrace or build on it and can only mimic predictively. Counterintuitive is assembling a top-tier team and, backed by more than 80 patents, is developing the first reasoning chip and software stack to deliver the next generation of computing: shifting AI from imitation to understanding while sharply reducing the need for hardware, data centres, and energy.

💡 **Solving AI’s “twin trap”:** Counterintuitive has identified two core problems facing current AI systems, which it calls the “twin trap.” The first trap is a flawed numerical foundation: today’s AI is built on dated mathematics such as floating-point arithmetic, which accumulates rounding errors and produces non-deterministic outputs that are hard to verify and audit. The second trap is architectural: modern AI models have no memory and cannot retrace or build on their own reasoning, so they can only mimic predictively rather than truly understand. The company aims to overcome both bottlenecks to make AI stable, efficient, and genuinely intelligent.

⚙️ **Advancing “reasoning-native computing”:** Counterintuitive’s core goal is to build “reasoning-native computing,” so that machines can genuinely understand and reason rather than simply imitate patterns. It is developing the first reasoning chip, the Artificial Reasoning Unit (ARU), and an accompanying software reasoning stack. The ARU is an entirely new type of compute focused on memory-driven reasoning and able to execute causal logic in silicon, unlike conventional processors such as GPUs. This marks a fundamental shift from probabilistic computation to deterministic reasoning.

🚀 **Defining the next generation of computing:** By integrating memory-driven causal logic into both hardware and software, Counterintuitive expects to build AI systems that are more reliable and easier to audit. This approach moves AI away from traditional, speed-focused, opaque “black-box” probabilistic models towards more transparent and accountable reasoning. The company believes its technology can define a next generation of computing based on reasoning rather than mimicry, powering economically critical sectors while greatly reducing the need for massive hardware, data centre, and energy budgets.

AI startup Counterintuitive has set out to build “reasoning-native computing,” enabling machines to understand rather than simply mimic. Such a breakthrough has the potential to shift AI from pattern recognition to genuine comprehension, paving the way for systems that can think and make decisions – in other words, to be more “human-like.”

Counterintuitive chairman Gerard Rego spoke of what the company terms the ‘twin trap’ facing AI, stating that the company’s first goal is to solve the two problems that prevent even the largest AI systems from being stable, efficient, and genuinely intelligent.

The first trap is that today’s AI systems lack reliable, reproducible numerical foundations, having been built on outdated mathematics such as floating-point arithmetic, which was designed decades ago for speed in tasks like gaming and graphics. Precision and consistency are therefore lacking.

In these numerical systems, each mathematical operation introduces tiny rounding errors that build up over time. Because of this, running the same AI model twice can produce different results, a problem known as non-determinism. Inconsistency of this nature makes it harder to verify, reproduce, or audit AI decisions, particularly in fields like law, finance, and healthcare. If AI outputs cannot be explained or proven clearly, they become ‘hallucinations’ – a term coined for their “lack of provability.”
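The rounding-error point is easy to reproduce in a few lines of Python. The sketch below is purely illustrative and has nothing to do with Counterintuitive’s technology; it simply shows that floating-point addition is not associative, so the order in which identical numbers are accumulated changes the final result.

```python
# Illustrative only: floating-point addition is not associative, so the order
# of accumulation changes the rounded result.
import random

print((0.1 + 0.2) + 0.3)   # 0.6000000000000001
print(0.1 + (0.2 + 0.3))   # 0.6 -- same numbers, different grouping

random.seed(0)
data = [random.uniform(-1.0, 1.0) for _ in range(100_000)]

forward = 0.0
for x in data:
    forward += x            # accumulate in the original order

shuffled = data[:]
random.shuffle(shuffled)    # identical values, different order
reordered = 0.0
for x in shuffled:
    reordered += x

print(forward == reordered)      # almost certainly False
print(abs(forward - reordered))  # a tiny but nonzero discrepancy
```

Parallel hardware makes this visible in practice: a GPU reduction splits a sum across many threads, and the merge order can vary from run to run, which is one common source of the non-determinism described above.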

Modern AI therefore struggles with precision that lacks truth, creating an invisible wall. The flaw has become a rigid limit, hurting performance, increasing costs, and wasting energy on correcting computational noise.

The second trap is architectural. Current AI models have no memory: they predict the next frame or token without retaining the reasoning that led to the prediction. It’s like predictive text on steroids, the company says. Once modern models output something, they don’t retain why they made that decision and cannot revisit or build on their own reasoning. AI may appear to reason, but it is only mimicking reasoning, not truly understanding how conclusions are reached.
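As a rough illustration of that “predictive text on steroids” framing – a toy sketch of our own, not any real model and not Counterintuitive’s work – the loop below samples one token at a time from a hard-coded bigram table. Only the chosen token is kept; nothing about why it was chosen survives to the next step.

```python
# Toy autoregressive sampler: each step emits a token and discards everything else.
import random

# Hypothetical bigram "model" over a tiny vocabulary.
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.2, "ran": 0.8},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def next_token(prev: str) -> str:
    tokens, weights = zip(*BIGRAMS[prev].items())
    return random.choices(tokens, weights=weights)[0]

random.seed(1)
sequence = ["the"]
while sequence[-1] in BIGRAMS:
    # Only the sampled token is appended; the distribution it came from and any
    # intermediate computation are thrown away, so there is nothing to revisit.
    sequence.append(next_token(sequence[-1]))

print(" ".join(sequence))  # e.g. "the cat sat down"
```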

“Counterintuitive is building a world-class team of mathematicians, computer scientists, physicists and engineers who are veterans of leading global research labs and technology companies, and who understand the Twin Trap fundamentally and solve it,” Rego said.

Rego’s team has more than 80 patents pending, spanning deterministic reasoning hardware, causal memory systems, and software frameworks – a portfolio it believes has the potential to “define the next generation of computing based on reasoning – not mimicry.”

Counterintuitive’s reasoning-native computing research aims to produce the first reasoning chip and software reasoning stack that pushes AI beyond its current limits.

The company’s artificial reasoning unit (ARU) is a new type of compute rather than just another processor: it focuses on memory-driven reasoning and executes causal logic in silicon, unlike GPUs. “Our ARU stack is more than a new chip category being developed – it’s a clean break from probabilistic computing,” said Counterintuitive co-founder, Syam Appala.

“The ARU will usher in the next age of computing, redefining intelligence from imitation to understanding and powering the applications that impact the most important sectors of the economy without the need for massive hardware, data centre and energy budgets.”

By integrating memory-driven causal logic into both hardware and software, Counterintuitive aims to develop systems that are more reliable and auditable. It marks a shift from traditional, speed-focused, black-box probabilistic AI models towards more transparent and accountable reasoning.
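Counterintuitive has not published how the ARU achieves this, so the following is only a hypothetical software sketch of what “deterministic and auditable” can mean in practice: exact arithmetic (here, rational numbers instead of floats) gives bit-identical answers on every run, and logging each operation leaves a trail that can be replayed or inspected.

```python
# Hypothetical sketch of deterministic, auditable arithmetic -- not Counterintuitive's
# design. Exact rational values remove rounding error, and every operation is logged.
from fractions import Fraction

audit_log = []

def traced(name, fn, a, b):
    result = fn(a, b)
    audit_log.append((name, a, b, result))   # record inputs and output for later audit
    return result

def add(a, b):
    return traced("add", lambda x, y: x + y, a, b)

def mul(a, b):
    return traced("mul", lambda x, y: x * y, a, b)

# The same computation as (0.1 + 0.2) * 3, but exact and reproducible.
total = mul(add(Fraction(1, 10), Fraction(2, 10)), Fraction(3))

print(total)              # 9/10 exactly, identical on every run and machine
for step in audit_log:    # the full trail of how the answer was reached
    print(step)
```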

(Image source: “Abacus” by blaahhi is licensed under CC BY 2.0.)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Counterintuitive’s new chip aims to escape the AI ‘twin trap’ appeared first on AI News.
