Deeplinks · September 29
The White House’s AI Action Plan Wages War on “Woke AI”

The White House’s recently-unveiled “AI Action Plan” wages war on so-called “woke AI”—including large language models (LLMs) that provide information inconsistent with the administration’s views on climate change, gender, and other issues. It also targets measures designed to mitigate the generation of racial and gender biased content and even hate speech. The reproduction of this bias is a pernicious problem that AI developers have struggled to solve for over a decade.

A new executive order called “Preventing Woke AI in the Federal Government,” released alongside the AI Action Plan, seeks to strong-arm AI companies into modifying their models to conform with the Trump Administration’s ideological agenda.

The executive order requires AI companies that receive federal contracts to prove that their LLMs are free from purported “ideological biases” like “diversity, equity, and inclusion.” This heavy-handed censorship will not make models more accurate or “trustworthy,” as the Trump Administration claims, but is a blatant attempt to censor the development of LLMs and restrict them as a tool of expression and information access. While the First Amendment permits the government to choose to purchase only services that reflect government viewpoints, the government may not use that power to influence what services and information are available to the public. Lucrative government contracts can push commercial companies to implement features (or biases) that they wouldn't otherwise, and those often roll down to the user. Doing so would impact the 60 percent of Americans who get information from LLMs, and it would force developers to roll back efforts to reduce biases—making the models much less accurate, and far more likely to cause harm, especially in the hands of the government. 

Less Accuracy, More Bias and Discrimination

It’s no secret that AI models—including gen AI—tend to discriminate against racial and gender minorities. AI models use machine learning to identify and reproduce patterns in data that they are “trained” on. If the training data reflects biases against racial, ethnic, and gender minorities—which it often does—then the AI model will “learn” to discriminate against those groups. In other words, garbage in, garbage out. Models also often reflect the biases of the people who train, test, and evaluate them. 
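To make the “garbage in, garbage out” point concrete, here is a minimal, hypothetical sketch in Python (synthetic data, scikit-learn’s LogisticRegression): a classifier trained on historically biased approval decisions learns to score otherwise identical people differently based only on a protected attribute. It illustrates the mechanism, not any real system or dataset discussed in this article.

```python
# Hypothetical illustration of "garbage in, garbage out": a model trained on
# biased historical labels reproduces that bias. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# One legitimate feature (a synthetic "qualification" score) and one protected attribute.
qualification = rng.normal(size=n)
group = rng.integers(0, 2, size=n)  # 0 = majority, 1 = minority (purely synthetic)

# "Historical" labels: equally qualified people in group 1 were approved less often,
# simulating biased past decisions baked into the training data.
p_approved = 1 / (1 + np.exp(-(qualification - 1.5 * group)))
approved = rng.random(n) < p_approved

# Train on the biased history, with the protected attribute available as a feature.
X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, approved)

# The model "learns" the bias: identical qualification, very different predicted
# approval probability depending only on group membership.
same_score = [[0.0, 0], [0.0, 1]]
print(model.predict_proba(same_score)[:, 1])  # roughly [0.5, 0.18]
```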

This is true across different types of AI. For example, “predictive policing” tools trained on arrest data that reflects overpolicing of black neighborhoods frequently recommend heightened levels of policing in those neighborhoods, often based on inaccurate predictions that crime will occur there. Generative AI models are also implicated. LLMs already recommend more criminal convictions, harsher sentences, and less prestigious jobs for people of color. Although people of color account for less than half of the U.S. prison population, 80 percent of Stable Diffusion's AI-generated images of inmates have darker skin. Over 90 percent of AI-generated images of judges were men; in real life, 34 percent of judges are women.
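As a back-of-the-envelope check on how stark those gaps are, the figures quoted above can be set against their rough real-world base rates. The snippet below is a rough illustration, not a rigorous bias audit; the “representation ratio” is just one simple way to express the disparity.

```python
# Figures quoted above, compared with their approximate real-world base rates.
cases = {
    "inmates with darker skin (Stable Diffusion images)": (0.80, 0.50),  # vs. "less than half" of prison pop.
    "judges depicted as men": (0.90, 0.66),                              # vs. 66% of real judges (34% women)
}

for label, (generated_share, real_share) in cases.items():
    ratio = generated_share / real_share
    print(f"{label}: ~{generated_share:.0%} of generated images "
          f"vs. ~{real_share:.0%} in reality (about {ratio:.1f}x overrepresented)")
```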

These models aren’t just biased—they’re fundamentally incorrect. Race and gender aren’t objective criteria for deciding who gets hired or convicted of a crime. Those discriminatory decisions reflected trends in the training data that could be caused by bias or chance—not some “objective” reality. Setting fairness aside, biased models are just worse models: they make more mistakes, more often. Efforts to reduce bias-induced errors will ultimately make models more accurate, not less. 
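One practical way to see that biased models are simply worse models is to report accuracy per subgroup rather than a single overall number. The sketch below is a hypothetical example with toy data; real audits use held-out test sets and more than one metric, but the idea is the same: errors concentrated in one group drag overall accuracy down, so reducing them makes the model more accurate for everyone.

```python
# Hypothetical sketch: break accuracy down by subgroup instead of one overall score.
import numpy as np

def accuracy_by_group(y_true, y_pred, group):
    """Overall accuracy plus accuracy within each subgroup."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    report = {"overall": float((y_true == y_pred).mean())}
    for g in np.unique(group):
        mask = group == g
        report[f"group_{g}"] = float((y_true[mask] == y_pred[mask]).mean())
    return report

# Toy numbers: a model that looks fine "overall" is far less accurate for the
# smaller group; fixing those errors would raise overall accuracy too.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1])
print(accuracy_by_group(y_true, y_pred, group))
# {'overall': 0.7, 'group_0': 0.833..., 'group_1': 0.5}
```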

Biased LLMs Cause Serious Harm—Especially in the Hands of the Government

But inaccuracy is far from the only problem. When government agencies start using biased AI to make decisions, real people suffer. Government officials routinely make decisions that impact people’s personal freedom and access to financial resources, healthcare, housing, and more. The White House’s AI Action Plan calls for a massive increase in agencies’ use of LLMs and other AI—while all but requiring the use of biased models that automate systemic, historical injustice. Using AI simply to entrench the way things have always been done squanders the promise of this new technology.

We need strong safeguards to prevent government agencies from procuring biased, harmful AI tools. In a series of executive orders, as well as his AI Action Plan, the Trump Administration has rolled back the already-feeble Biden-era AI safeguards. This makes AI-enabled civil rights abuses far more likely, putting everyone’s rights at risk. 

And the Administration could easily exploit the new rules to pressure companies to make publicly available models worse, too. Corporations like healthcare companies and landlords increasingly use AI to make high-impact decisions about people, so more biased commercial models would also cause harm. 

We have argued against using machine learning to make predictive policing decisions or other punitive judgments for just these reasons, and will continue to protect your right not to be subject to biased government determinations influenced by machine learning.

