VentureBeat
Google Gemma Model Controversy: Developer Test Model Risks and Availability Challenges

Google's Gemma model sparked controversy after being accused of fabricating false news stories about a senator, and was removed from AI Studio to avoid confusion. Although Gemma remains accessible via API, the incident highlights the risks of using developer test models and the uncertainty of model availability. Google stresses that Gemma was designed for the developer and research community, not for consumers or factual queries. Its presence on AI Studio, a relatively accessible platform, nevertheless allowed non-developer users to reach it. This exposes not only the risk of AI models generating inaccurate or even harmful information, but also a project-continuity challenge for enterprise developers, who need to save their projects in advance in case a model is taken offline or removed.

🧪 **Risks of developer test models**: Google's Gemma model was removed after being accused of fabricating false news about a senator, exposing the serious consequences of using insufficiently validated developer test models, including the spread of misinformation and potential defamation. Google says Gemma was built for the developer and research community, not for consumers, and its unintended use on AI Studio amplified the risk.

⚠️ **Uncertainty of model availability**: The incident shows that the availability of AI models, especially those still in testing, can change at any time. Google removed Gemma from AI Studio to avoid confusion, but the move also reminds enterprise developers that projects tied to a specific platform or model carry inherent risk. Once a model is taken offline or removed, early projects that depend on it may be disrupted.

🔒 **Project continuity and the importance of saving data**: Given AI companies' control over their models and the reality that models can be removed, enterprise developers must take steps to ensure project continuity. That means saving local copies while a model is still available, or migrating to a more stable environment, to avoid data loss or stalled projects when a model goes offline. OpenAI's removal of older models underscores the same point.

The recent controversy surrounding Google’s Gemma model has once again highlighted the dangers of using developer test models and the fleeting nature of model availability. 

Google pulled its Gemma 3 model from AI Studio following a statement from Senator Marsha Blackburn (R-Tenn.) alleging that the Gemma model willfully hallucinated falsehoods about her. Blackburn said the model fabricated news stories about her that went beyond "harmless hallucination" and functioned as a defamatory act.

In response, Google posted on X on October 31 that it would remove Gemma from AI Studio, stating that the move was "to prevent confusion." Gemma remains available via API.

It had also been available via AI Studio, which the company described as "a developer tool (in fact, to use it you need to attest you're a developer). We’ve now seen reports of non-developers trying to use Gemma in AI Studio and ask it factual questions. We never intended this to be a consumer tool or model, or to be used this way. To prevent this confusion, access to Gemma is no longer available on AI Studio."
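For teams that still depend on Gemma, the API route remains open. Below is a minimal sketch of that path, assuming the google-genai Python SDK and the "gemma-3-27b-it" model ID (the SDK is Google's published client; the specific model ID is an assumption to verify against the current model list):

```python
# Minimal sketch: calling a Gemma model through the Gemini API.
# Assumes the google-genai SDK (pip install google-genai) and that the
# "gemma-3-27b-it" model ID is still served; check the model list first.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # key issued via AI Studio / Cloud console

response = client.models.generate_content(
    model="gemma-3-27b-it",  # assumed model ID; substitute your checkpoint
    contents="Summarize the trade-offs of small on-device language models.",
)
print(response.text)
```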

To be clear, Google has the right to remove its model from its own platform, especially if users have surfaced hallucinations and falsehoods that could proliferate. But the episode also underscores the danger of relying mainly on experimental models, and why enterprise developers need to save projects before AI models are sunsetted or removed. Technology companies like Google continue to face political controversies, which often influence their deployments.

VentureBeat reached out to Google for additional information and was pointed to its October 31 posts. We also contacted the office of Sen. Blackburn, who reiterated the stance outlined in her statement that AI companies should "shut [models] down until you can control it."

Developer experiments

The Gemma family of models, which includes a 270M parameter version, is best suited for small, quick apps and tasks that can run on devices such as smartphones and laptops. Google said the Gemma models were “built specifically for the developer and research community. They are not meant for factual assistance or for consumers to use.”
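To illustrate the intended developer workflow, here is a minimal sketch of running the small Gemma checkpoint locally, assuming the Hugging Face transformers library and the "google/gemma-3-270m-it" repo ID (the repo ID and license-acceptance step are assumptions to verify on Hugging Face):

```python
# Minimal sketch: running a small Gemma checkpoint on local hardware.
# Assumes transformers is installed and the Gemma license has been
# accepted on Hugging Face; "google/gemma-3-270m-it" is an assumed repo ID.
from transformers import pipeline

generator = pipeline("text-generation", model="google/gemma-3-270m-it")
out = generator("Write a one-line description of a to-do app.", max_new_tokens=40)
print(out[0]["generated_text"])
```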

Nevertheless, non-developers could still access Gemma because it was available on the AI Studio platform, a more beginner-friendly space for experimenting with Google AI models than Vertex AI. So even if Google never intended Gemma and AI Studio to be accessible to, say, Congressional staffers, these situations can still occur.

The episode also shows that even as models continue to improve, they still produce inaccurate and potentially harmful information. Enterprises must continually weigh the benefits of using models like Gemma against their potential inaccuracies.

Project continuity 

Another concern is the control that AI companies have over their models. The adage "you don't own anything on the internet" remains true: without a physical or local copy of software, it is easy to lose access when the company that owns it decides to take it away. Google did not clarify to VentureBeat whether current AI Studio projects powered by Gemma are saved.
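One practical hedge is archiving open-weight checkpoints while they can still be downloaded. A minimal sketch, assuming the huggingface_hub library; the "google/gemma-3-270m" repo ID is an assumption here, so substitute whichever checkpoint a project actually uses:

```python
# Minimal sketch: saving a local copy of an open-weight model so a project
# survives the model being pulled from hosted platforms.
# Assumes huggingface_hub (pip install huggingface_hub) and that the
# assumed "google/gemma-3-270m" repo is accessible under the Gemma license.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="google/gemma-3-270m",
    local_dir="./models/gemma-3-270m",  # weights persist here even if hosted access changes
)
print(f"Model files archived at: {local_dir}")
```

Pinning a specific revision and recording it alongside the project keeps the archive reproducible if the upstream repo is later updated or gated.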

Similarly, OpenAI users were disappointed when the company announced it would remove popular older models from ChatGPT. Even after walking back that decision and reinstating GPT-4o in ChatGPT, OpenAI CEO Sam Altman continues to field questions about keeping and supporting the model.

AI companies can, and should, remove their models if they create harmful outputs. AI models, no matter how mature, remain works in progress, constantly evolving and improving. But because they are experimental in nature, models can easily become tools that technology companies and lawmakers wield as leverage. Enterprise developers must ensure that their work can be saved before models are removed from platforms.

