AI Snake Oil

Tech policy engagement: challenges and opportunities

 



Many technologists stay far away from public policy. That’s understandable. In our experience, most of the time when we engage with policymakers there is no discernible impact.[1] But when we do make a difference to public policy, the impact is much bigger than what we can accomplish through academic work. So we find it fruitful to engage even if it feels frustrating on a day-to-day basis.

In this post, we summarize some common reasons why many people are cynical about tech policy and explain why we’re cautiously optimistic. We also announce some recent writings on tech policy as well as an upcoming event for policymakers in Washington D.C., called AI Policy Precepts.

Tech is not exceptional

Some people want more tech regulation and others want less. But both sides seem to mostly agree that policymakers are bad at regulating tech: because they don’t have tech expertise; or because tech moves too rapidly for law to keep up; or because policymakers are bad at anticipating the effects of regulation. 

While these claims have a kernel of truth, they aren’t reasons for defeatism. It's true that most politicians don't have deep technical knowledge. But their job is not to be subject matter experts. The details of legislation are delegated to staffers, many of whom are experts on the subject. Moreover, much of tech policy is handled by agencies such as the Federal Trade Commission (FTC), which do have tech experts on their staff. There aren’t enough, but that’s being addressed in many ways. Finally, while federal legislators and agencies get the most press, a lot happens on the state and local levels.

Besides, policy does not have to move at the speed of tech. Policy is concerned with technology’s effect on people, not the technology itself. And policy has longstanding approaches to protecting humans that can be adapted to address new challenges from tech. For example, the FTC has taken action in response to deceptive claims made by AI companies under its existing authority. Similarly, the answer to AI-enabled discrimination is the enforcement of long-established anti-discrimination law. Of course, there are some areas where technology poses new threats, and that might require changes to laws, but that’s relatively rare.

In short, there is nothing exceptional about tech policy that makes it harder than any other type of policy requiring deep expertise. If we can do health policy or nuclear policy, we can do tech policy. Of course, there are many reasons why all public policy is slow and painstaking, such as partisan gridlock, or the bias towards inaction built into the structure of the government due to checks and balances. But none of these factors are specific to tech policy.

To be clear, we are not saying that all regulations or policies are useful—far from it. In past essays, we have argued against specific proposals for regulating AI. And there’s a lot that can be accomplished without new legislation. The October 2023 Executive Order by the Biden administration tasked over 50 agencies with 150 actions, showing the scope of existing executive authority.

Our work on informing AI policy

We work at Princeton’s Center for Information Technology Policy. CITP is home to interdisciplinary researchers who look at tech policy from different perspectives. We have also begun working closely with the D.C. office of Princeton's School of Public and International Affairs. Recently, we have been involved in a few collaborations on informing tech policy:

Foundation model transparency reports: In a Stanford-MIT-Princeton collaboration, we propose a structured way for AI companies to release key information about their foundation models. We draw inspiration from transparency reporting in social media, financial reporting, and the FDA’s adverse event reporting. We use the set of 100 indicators developed in the 2023 Foundation Model Transparency Index.

We analyze how the 100 indicators align with six existing proposals on AI: Canada's Voluntary Code of Conduct for generative AI, the EU AI Act, the G7 Hiroshima Process Code of Conduct for AI, the U.S. Executive Order on AI, the U.S. Foundation Model Transparency Act, and the U.S. White House voluntary AI commitments. Of the 100 indicators, 43 are required by at least one of these proposals, with the EU AI Act alone requiring 30.

We also found that transparency requirements in government policies can lack specificity: they do not spell out precisely how developers should report quantitative information, establish standards for reporting evaluations, or account for differences across modalities. We provide an example of what Foundation Model Transparency Reports could look like to help sharpen what information AI developers must provide. Read the paper here.
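The alignment analysis above reduces to simple set operations over indicator-policy mappings. Here is a minimal sketch in Python of how such a tally might work; the indicator names and requirement sets below are hypothetical placeholders, not the actual contents of the Foundation Model Transparency Index or the six policy documents:

```python
# Hypothetical sketch: count how many transparency indicators are
# required by at least one policy proposal. All indicator names and
# requirement sets below are illustrative placeholders.

policy_requirements = {
    "EU AI Act": {"training-data-sources", "compute-used", "model-evaluations"},
    "U.S. Executive Order": {"model-evaluations", "red-teaming-results"},
    "G7 Code of Conduct": {"compute-used"},
}

all_indicators = {
    "training-data-sources", "compute-used", "model-evaluations",
    "red-teaming-results", "energy-usage", "labor-practices",
}

# An indicator is "covered" if at least one policy requires it.
covered = set().union(*policy_requirements.values())

print(f"{len(covered)} of {len(all_indicators)} indicators are required "
      "by at least one proposal")
for policy, required in policy_requirements.items():
    print(f"  {policy}: requires {len(required)}")
```

In the paper, the same kind of tally over the real indicator sets yields the 43-of-100 and 30-of-100 figures above.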

New Jersey Assembly hearing on deepfakes: Last month, Sayash testified before the New Jersey Assembly on reducing harm from deepfakes. We were asked to provide our opinion on four bills creating penalties and mitigations for non-consensual deepfakes. The hearing included testimonies from four experts in intellectual property, tech policy, civil rights, and constitutional law. 

We advocated for collecting better evidence on the impact of AI-generated deepfakes, content provenance standards to help prove that a piece of media is human-created (as opposed to watermarking to prove it is AI-generated), and bolstering defenses on downstream surfaces such as social media. We also cautioned against relying too much on the non-proliferation of powerful AI as a solution—as we've argued before, it is likely to be infeasible and ineffective. Read the written testimony here.
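To make the provenance-versus-watermarking distinction concrete: provenance attaches a verifiable attestation at capture time that a piece of media is authentic, rather than marking AI outputs after generation. Below is a minimal sign-and-verify sketch using an Ed25519 key from the pyca/cryptography library; real provenance standards (C2PA is one example) involve far richer metadata and key infrastructure than this:

```python
# Minimal sketch of cryptographic content provenance: a trusted
# capture device signs a hash of the media at creation time, and
# anyone holding the device's public key can verify the attestation.

import hashlib

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice this key would be provisioned in device hardware.
device_key = Ed25519PrivateKey.generate()

media_bytes = b"raw image or video bytes"
digest = hashlib.sha256(media_bytes).digest()

# The signature travels with the media as provenance metadata.
signature = device_key.sign(digest)

# A verifier recomputes the hash and checks the signature;
# verify() raises InvalidSignature if either has been tampered with.
device_key.public_key().verify(signature, digest)
print("provenance attestation verified")
```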

Open models and open research: We submitted a response to the National Telecommunications and Information Administration on its request for comments on openness in AI, in collaboration with various academic and civil society members. Our response built on our paper and policy brief analyzing the societal impact of open foundation models. We were happy to see this paper being cited in responses by several industry and civil society organizations, including the Center for Democracy and Technology, Mozilla, Meta, and Stability AI. Read our response here.

We also contributed to a comment to the copyright office in support of a safe harbor exemption for generative AI research, based on our paper and open letter (signed by over 350 academics, researchers, and civil society members). Read our comment here.

AI safety and existential risk: We’ve analyzed several aspects of AI safety in our recent writing, including the impact of openness, the need for safe harbors, and the pitfalls of model alignment. Another major topic of policy debate is the existential risk posed by AI. We’ve been researching this question for the past year and plan to start writing about it in the next few weeks.

AI Policy Precepts: CITP has launched a non-partisan program to explore the core concepts, opportunities, and risks underlying AI that will shape federal policymaking for the next ten years. The sessions will be facilitated by Arvind alongside CITP colleagues Matthew Salganik and Mihir Kshirsagar. The size is limited to about 18 participants, with policymakers drawn from Congressional offices and Federal agencies. We will explore predictive and generative AI, moving beyond familiar talking points and examining real-world case studies. Participants will come away with frameworks to address future challenges, as well as the opportunity to establish relationships with a cohort of policymakers. See here for more information and here to nominate yourself or a colleague. The deadline for nomination is this Friday, April 5.

We thank Mihir Kshirsagar for feedback on a draft of this post.

[1] Unlike scholarly impact, which can more or less be tracked through citations, policy impact usually happens without any direct attribution. Besides, the timescale for change can be longer. Both of these can make it hard to assess whether one’s policy engagement has any impact.
