AI Snake Oil · September 12
Technologists and Public Policy: Challenges and Opportunities

Many technologists keep their distance from public policy, yet engaging in policymaking can have far greater impact than academic research. This article examines the widespread cynicism about tech policy and explains why the authors remain cautiously optimistic. It argues that technology is not "exceptional": policymakers need not be technical experts themselves, since they can draw on expert staff and existing legal frameworks to address challenges from technology. The article also describes the work of Princeton's Center for Information Technology Policy on informing tech policy, including foundation model transparency reports, recommendations on deepfake legislation, research on open models, and the upcoming "AI Policy Precepts" program, all aimed at helping policymakers better understand and respond to the opportunities and risks of AI.

💡 Engaging with public policy has outsized impact: technologists may stay away because the effect of engagement is rarely visible, but when engagement does shape policy, the impact far exceeds what academic work can achieve, so participation is worthwhile even when the process is frustrating.

⚖️ Tech policy is not exceptional and can build on existing frameworks: the complexity of tech policy is no reason for defeatism. Policymakers need not be technical experts; they can rely on expert staff and existing legal frameworks, for example using the Federal Trade Commission's (FTC) existing authority to address deceptive AI claims and AI-enabled discrimination, showing that tech policy can be managed as effectively as health or nuclear policy.

🚀 Princeton CITP's initiatives on informing AI policy: through foundation model transparency reports, recommendations on deepfake legislation, research on open models, and the upcoming "AI Policy Precepts" program, the center works directly with policymakers to deepen their understanding of AI's core concepts, opportunities, and risks, and to lay groundwork for federal policymaking over the next decade.

📊 Foundation model transparency reports offer concrete guidance for AI regulation: the report proposes a structured way for AI companies to release key information, drawing on transparency reporting in social media, financial reporting, and the FDA's adverse event reporting, and analyzes how existing AI policies cover transparency requirements, with the aim of making reporting more precise and useful.

Many technologists stay far away from public policy. That’s understandable. In our experience, most of the time when we engage with policymakers there is no discernible impact.1 But when we do make a difference to public policy, the impact is much bigger than what we can accomplish through academic work. So we find it fruitful to engage even if it feels frustrating on a day-to-day basis.

In this post, we summarize some common reasons why many people are cynical about tech policy and explain why we’re cautiously optimistic. We also announce some recent writings on tech policy as well as an upcoming event for policymakers in Washington D.C., called AI Policy Precepts.

Tech is not exceptional

Some people want more tech regulation and others want less. But both sides seem to mostly agree that policymakers are bad at regulating tech: because they don’t have tech expertise; or because tech moves too rapidly for law to keep up; or because policymakers are bad at anticipating the effects of regulation. 

While these claims have a kernel of truth, they aren’t reasons for defeatism. It's true that most politicians don't have deep technical knowledge. But their job is not to be subject matter experts. The details of legislation are delegated to staffers, many of whom are experts on the subject. Moreover, much of tech policy is handled by agencies such as the Federal Trade Commission (FTC), which do have tech experts on their staff. There aren’t enough, but that’s being addressed in many ways. Finally, while federal legislators and agencies get the most press, a lot happens on the state and local levels.

Besides, policy does not have to move at the speed of tech. Policy is concerned with technology’s effect on people, not the technology itself. And policy has longstanding approaches to protecting humans that can be adapted to address new challenges from tech. For example, the FTC has taken action in response to deceptive claims made by AI companies under its existing authority. Similarly, the answer to AI-enabled discrimination is the enforcement of long-established anti-discrimination law. Of course, there are some areas where technology poses new threats, and that might require changes to laws, but that’s relatively rare.

In short, there is nothing exceptional about tech policy that makes it harder than any other type of policy requiring deep expertise. If we can do health policy or nuclear policy, we can do tech policy. Of course, there are many reasons why all public policy is slow and painstaking, such as partisan gridlock, or the bias towards inaction built into the structure of the government due to checks and balances. But none of these factors are specific to tech policy.

To be clear, we are not saying that all regulations or policies are useful—far from it. In past essays, we have argued against specific proposals for regulating AI. And there’s a lot that can be accomplished without new legislation. The October 2023 Executive Order by the Biden administration tasked over 50 agencies with 150 actions, showing the scope of existing executive authority.

Our work on informing AI policy

We work at Princeton’s Center for Information Technology Policy. CITP is home to interdisciplinary researchers who look at tech policy from different perspectives. We have also begun working closely with the D.C. office of Princeton's School of Public and International Affairs. Recently, we have been involved in a few collaborations on informing tech policy:

Foundation model transparency reports: In a Stanford-MIT-Princeton collaboration, we propose a structured way for AI companies to release key information about their foundation models. We draw inspiration from transparency reporting in social media, financial reporting, and FDA’s adverse event reporting. We use the set of 100 indicators developed in the 2023 Foundation Model Transparency Index.

We analyze how the 100 indicators align with six existing proposals on AI: Canada's Voluntary Code of Conduct for generative AI, the EU AI Act, the G7 Hiroshima Process Code of Conduct for AI, the U.S. Executive Order on AI, the U.S. Foundation Model Transparency Act, and the U.S. White House voluntary AI commitments. 43 of the 100 indicators in our proposal are required by at least one proposal, with the EU AI Act requiring 30 of the 100 proposed indicators. 

We also found that transparency requirements in government policies can lack specificity: they do not detail how precisely developers should report quantitative information, establish standards for reporting evaluations, or account for differences across modalities. We provide an example of what Foundation Model Transparency Reports could look like to help sharpen what information AI developers must provide. Read the paper here. 
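To make the idea of structured, machine-readable reporting concrete, here is a minimal sketch in Python of how a single transparency indicator and its coverage across policy proposals might be represented. The field names, example indicators, and coverage check are illustrative assumptions for this post, not the schema or indicator set used in the paper.

```python
# Hypothetical sketch of a machine-readable transparency indicator.
# Field names and example values are illustrative, not the paper's actual schema.
from dataclasses import dataclass, field

@dataclass
class Indicator:
    indicator_id: str                  # e.g. "data-1"
    domain: str                        # e.g. "data", "compute", "usage"
    description: str                   # what the developer should disclose
    required_by: set[str] = field(default_factory=set)  # policies that mandate it

indicators = [
    Indicator("data-1", "data", "Size and sources of the training data",
              required_by={"EU AI Act"}),
    Indicator("compute-1", "compute", "Total training compute",
              required_by={"EU AI Act", "U.S. Executive Order on AI"}),
    Indicator("usage-1", "usage", "Permitted and prohibited uses"),
]

# Count how many indicators are mandated by at least one policy proposal,
# analogous to the coverage analysis described above.
covered = sum(1 for ind in indicators if ind.required_by)
print(f"{covered} of {len(indicators)} indicators are required by at least one policy")
```

Representing indicators this way makes it straightforward to ask coverage questions (which indicators no policy requires, which policies overlap), which is the kind of analysis the paper performs across the full set of 100 indicators.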

New Jersey Assembly hearing on deepfakes: Last month, Sayash testified before the New Jersey Assembly on reducing harm from deepfakes. We were asked to provide our opinion on four bills creating penalties and mitigations for non-consensual deepfakes. The hearing included testimonies from four experts in intellectual property, tech policy, civil rights, and constitutional law. 

We advocated for collecting better evidence on the impact of AI-generated deepfakes, content provenance standards to help prove that a piece of media is human-created (as opposed to watermarking to prove it is AI-generated), and bolstering defenses on downstream surfaces such as social media. We also cautioned against relying too much on the non-proliferation of powerful AI as a solution—as we've argued before, it is likely to be infeasible and ineffective. Read the written testimony here.

Open models and open research: We submitted a response to the National Telecommunications and Information Administration on its request for comments on openness in AI, in collaboration with various academic and civil society members. Our response built on our paper and policy brief analyzing the societal impact of open foundation models. We were happy to see this paper being cited in responses by several industry and civil society organizations, including the Center for Democracy and Technology, Mozilla, Meta, and Stability AI. Read our response here.

We also contributed to a comment to the copyright office in support of a safe harbor exemption for generative AI research, based on our paper and open letter (signed by over 350 academics, researchers, and civil society members). Read our comment here.

AI safety and existential risk. We’ve analyzed several aspects of AI safety in our recent writing: the impact of openness, the need for safe harbors, and the pitfalls of model alignment. Another major topic of policy debate is the existential risk posed by AI. We’ve been researching this question for the past year and plan to start writing about it in the next few weeks.

AI policy precepts. CITP has launched a non-partisan program to explore the core concepts, opportunities, and risks underlying AI that will shape federal policy making for the next ten years. The sessions will be facilitated by Arvind alongside CITP colleagues Matthew Salganik and Mihir Kshirsagar. The size is limited to about 18 participants, with policymakers drawn from Congressional offices and Federal agencies. We will explore predictive and generative AI, moving beyond familiar talking points and examining real world case studies. Participants will come away with frameworks to address future challenges, as well as the opportunity to establish relationships with a cohort of policymakers. See here for more information and here to nominate yourself or a colleague. The deadline for nomination is this Friday, April 5.

We thank Mihir Kshirsagar for feedback on a draft of this post.

1. Unlike scholarly impact, which can more or less be tracked through citations, policy impact usually happens without any direct attribution. Besides, the timescale for change can be longer. Both of these can make it hard to assess whether one’s policy engagement has any impact.
