Steampunk AI · September 30, 19:07
AI News Summary

Several notable AI stories broke this week: Google published data on the average energy consumption of a Gemini AI query, showing it is comparable to a search query; OpenAI and others are collaborating to promote an 'Agents.MD' file that adds context for AI coding agents; Anthropic is offering Claude AI to the US government for $1; The Economist predicts that traditional brainstorming may be killed off by ChatGPT; and NVIDIA research argues that small language models are more effective than LLMs at specific tasks.

🔍 Google released data showing that the median Gemini AI query consumes about 0.00024 kWh, comparable to a search query, meaning an AI query is no more energy hungry than a traditional search.

📚 OpenAI and other companies are collaborating to promote an 'Agents.MD' file placed in the root directory of a code repository, adding context that helps AI coding agents better understand build and code instructions.

💰 Anthropic is offering Claude AI to the US government for $1, a significant deal that shows the potential of AI in government applications.

🤔 The Economist predicts that traditional brainstorming may come to an end because of ChatGPT, as people can simply ask ChatGPT to generate options and pick one, dramatically shortening the process.

🔬 NVIDIA research argues that small language models (under 10 billion parameters and able to run on consumer devices) are more effective than large language models at specific tasks, because they are cheaper to run and better suited to operating narrowly on a given topic.

Saturday Links: Agents.MD, $1 Government AI, and SLMs

AI leaders offer government $1 AI, NVIDIA argues for smaller models, and Google shares energy data

Just in case you missed it, ChatGPT got friendlier again this week, but on to the other top bits of news:

    In a first, Google has released data on how much energy an AI prompt uses. Google published figures this week for the power consumption of the median Gemini AI query: 0.00024 kWh. It also released the same data for search queries: 0.0003 kWh. So AI queries are no more power hungry than search (Correction -> I previously said 800x!). I actually thought the gap would be way higher. I'd also expect the power consumption of the largest queries to be many orders of magnitude higher than the median, and the average is likely higher too (see the quick calculation after this list).

    Agents.MD. OpenAI and others have collaborated on a common location for code instructions aimed at AI coding agents. They are promoting the use of an "Agents.MD" markdown file in the root directory of code repositories to add context for AI systems writing code. The idea is to have a unified place for extra build and other instructions, and the files should hopefully help coding agents from many companies, not just OpenAI. This seems like the beginning of documenting code for AI. Slightly wild. (A hypothetical example of such a file appears after this list.)

    Anthropic takes aim at OpenAI, offers Claude to 'all three branches of government' for $1. Both OpenAI and Anthropic now offer AI to the US government at cut-price rates. If you thought that government might be slow to adopt... possibly think again. In the long run, no doubt normal commercial rates will return, but the land grab is on right now.

    The last days of brainstorming. The Economist has a fun piece predicting the end of brainstorming, as people simply use ChatGPT to generate options and then pick one. It seems to me that ChatGPT and other models could be a huge accelerator for brainstorming, but not in the "ask it for the answer" way. Instead, use it not just to generate ideas, but also to do background checks and to assemble a panel of customers and users: what would they think? What other brands might they confuse a name with, and so on?

    Small Language Models are the Future of Agentic AI (NVIDIA Research). In this research, an NVIDIA and Georgia Tech team argue that small language models (defined as <10bn parameters and able to run on a consumer device) will likely be much more effective than LLMs at many specific, well-defined tasks. Not only does their size make them more economical to run, but they are better suited to operating narrowly on the topic at hand, where an LLM may end up being distracted into activating irrelevant outputs. I definitely subscribe to this view; we're really just at the beginning of understanding how to build reliable AI systems, and smaller models will be a big part of that journey. (The full paper is here.)
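On the energy numbers, here is a quick back-of-the-envelope check using only the figures quoted above (Google's published medians, as corrected below). It also shows where the earlier 800x claim presumably came from: the misquoted 0.24 kWh figure.

```python
# Back-of-the-envelope check on the per-query energy figures quoted in this post.
# Values are treated as approximate medians; this is a sketch, not Google's methodology.

GEMINI_KWH_MISQUOTED = 0.24   # the figure originally (incorrectly) quoted, in kWh per query
GEMINI_KWH = 0.00024          # corrected median Gemini query energy, kWh
SEARCH_KWH = 0.0003           # median Google Search query energy, kWh

# Express both in watt-hours for readability: 1 kWh = 1000 Wh.
print(f"Gemini query: {GEMINI_KWH * 1000:.2f} Wh")    # 0.24 Wh
print(f"Search query: {SEARCH_KWH * 1000:.2f} Wh")    # 0.30 Wh

# Ratio of AI query energy to search query energy.
print(f"Corrected ratio: {GEMINI_KWH / SEARCH_KWH:.1f}x")                 # ~0.8x, slightly less than search
print(f"With the misquoted figure: {GEMINI_KWH_MISQUOTED / SEARCH_KWH:.0f}x")  # ~800x
```

With the corrected figure the ratio is roughly 0.8x, i.e. a median Gemini query actually comes in slightly under a median search query.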
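As for what an Agents.MD file looks like in practice: it is just free-form markdown read by coding agents, so the contents below are a hypothetical illustration rather than a required schema. The project layout, commands, and directory names are made up for a fictional repository.

```markdown
# Agents.MD — context for AI coding agents (hypothetical example)

## Project overview
A TypeScript monorepo; the web app lives in `apps/web`, shared libraries in `packages/`.

## Build and test
- Install dependencies with `pnpm install`.
- Run `pnpm build` before running tests.
- Run `pnpm test` for the full suite.

## Conventions
- Use the existing logger in `packages/logging`; do not add `console.log` calls.
- New modules need unit tests alongside them in `__tests__/`.
- Do not edit generated files under `packages/api-client/generated/`.
```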

Wishing you a wonderful weekend.

CORRECTIONS:

    I initially quoted the Google Gemini energy consumption at 0.24 kWh, but it is actually 0.00024 kWh.


Tags: AI News, Google, OpenAI, Anthropic, NVIDIA