Democratising AI: Challenges and Practice

This article examines what "democratising AI" actually means, distinguishing between "democratising access" and "democratising governance." Although the public is broadly cautious about AI development and calls for stronger regulation, many current attempts at democratising AI, such as OpenAI's "Democratic Inputs" program and CIP's CCAI project, suffer from structural weaknesses and conceptual flaws in practice. These projects tend to treat public input as market research rather than a genuine governance mechanism, and participants lack real decision-making power. The article argues that genuine democratisation of AI requires coupling public participation with structural power and ongoing adaptation, embedded at the core of AI governance rather than left as a formality.

🎯 **Two readings of democratising AI, and public concern**: The article first distinguishes "democratising access" (making the use and development of AI tools available to all) from "democratising governance" (bringing public voices into AI design, deployment, and policy). Despite surging AI investment, the public remains broadly cautious and concerned about AI development and has little confidence that technology companies will develop AI responsibly, prompting calls for government regulation across many countries.

💡 **Limitations of existing attempts**: The article analyses OpenAI's "Democratic Inputs" program and CIP's CCAI project. While these projects attracted public participation, they have structural weaknesses: public input is treated as market research rather than a substantive governance mechanism, and participants provide only advisory opinions without real decision-making power. OpenAI's program in particular has been criticised as market research in disguise, with core decisions remaining in the company's hands.

⚙️ **Structural and conceptual flaws undermine effectiveness**: The article further explains why existing efforts fall short. Structural problems include company-driven agendas, input without power, and market logic overriding democratic principles. Conceptual problems include the fact that participation is not authority, elite capture and bias, and treating AI as a cure-all. Together these leave attempts at democratising AI superficial, never touching the root.

⚖️ **A path toward genuine democratisation**: The article argues that genuine democratisation requires coupling public participation with structural power and continuous adaptation: enforceable institutions and ongoing feedback that embed democratic purpose at the core of AI governance. It calls for proceeding both boldly and diligently, so that AI development reflects collective power rather than performative input, and previews the first full Odyssean Process.

📈 **The future of AI governance**: The article notes that current attempts are typically company-driven, treating public input as a supplement to existing systems rather than fundamental reform. Real AI governance needs a broader sociopolitical framework that can validate whether AI tools are the right ones and ensure their development serves the public interest, avoiding wasted resources and concentrated power.

Published on September 29, 2025 8:52 PM GMT

Setting the Stage

Companies and venture capital firms can’t get enough of so-called artificial intelligence (AI), with private investment in generative AI increasing ninefold from 2022 to 2023,[1] and tech giants Alphabet (Google), Amazon, Meta, and Microsoft projected to invest more than 1 trillion USD in AI over the next five years.[2]

The general public, however, are not as convinced. Multiple 2024 polls show that the most common sentiments held by US adults with regard to recent advances in AI are caution and concern,[3][4] and a majority have little-to-no confidence that technology companies developing AI will do so responsibly.[5] This opinion is shared globally, with frequent calls for government regulation from countries both leading in AI development (US, UK, China) and those at risk of being excluded from consideration in it (Brazil, South Korea, India).[6]

A frequent suggestion to resolve this is to democratise AI, which on first inspection sounds incredibly attractive. In practice, however, the processes that are typically called for and conducted are superficial at best, and can even be twisted to promote the appearance of egalitarianism without making any meaningful changes, as is noted in ‘Against “Democratizing AI”’.[7]

Rose Willis & Kathryn Conrad / Better Images of AI / A Rising Tide Lifts All Bots / CC BY 4.0

What Does “Democratising AI” Actually Mean?

Before moving forward, it is important to note what I mean by democratising AI, as there are two often-conflated interpretations:

    Democratising access – making both use and development of AI tools readily available to all.

    Democratising governance – involving public voices in AI design, deployment, and policy.

These aims are fundamentally different, and despite the ambiguity they should not be treated as interchangeable. Most of those advocating for democratic AI are pursuing the first interpretation: broad access to the development and use of these systems. The conflation of the two framings has made discourse more difficult on both fronts.

The focus of this thought piece is the latter, governance, which many advocates treat as a cure-all without clear mechanisms or authority; questions of access are left aside.

Major Attempts and Actors

A wide range of institutions have begun exploring how public input might shape AI development, from governments and multinational coalitions to tech companies and civil society organizations.

NOTE: The table doesn't render super well here, so check out the original post on the Odyssean Institute blog for existing efforts and links to them.

I’m sure I’ve missed a few, but this table provides a starting point for exploring this space. Despite the apparent breadth of activity, very little has been done to follow through on these calls. Two stand out for their ambition and influence: OpenAI’s “Democratic Inputs” program[8] and the Collective Intelligence Project (CIP).[9]

OpenAI - Democratic Inputs to AI

In 2023, OpenAI launched the “Democratic Inputs to AI” program, funding 10 global teams to test new ways of surfacing public preferences about AI behavior and to develop “democratic processes for overseeing AGI.” The grantees explored approaches ranging from deliberative polling to community-led red teaming, with the aim of informing how powerful models should behave.[10]

The call for proposals included many implicit assumptions.[11] In essence, OpenAI did not ask “how can we ensure we develop our systems in a democratic way,” but instead asked “how can systems like ours improve democracy?” This framing presumes (i) that AI can and should be used in this context, and (ii) that AI, as it currently exists, is a permanent and justified fixture in our society.

Although beyond the scope of this article, credible arguments have been presented to question both. For example, the use of AI for scaling democratic deliberation has significant potential to result in over-reliance,[12] exposing deliberative practice to private capture or technical vulnerabilities.

Furthermore, participation from the grantees in the program is purely advisory; governance power still lies with the lab itself. The initiative reflects openness to feedback, but does not commit to ceding control.[8] As such, it remains a first step toward legitimacy, not a democratic system in practice.

Interestingly, when combined with the other assumptions hidden in the press release, OpenAI’s efforts look a lot more like market research masquerading as a public good. The experiments remain disconnected from OpenAI’s core release decisions, which continue to prioritize fast growth: monetization and scale. This includes a recent $200 million contract with the U.S. Department of Defense to “develop frontier AI capabilities,” as well as the appointment of several senior OpenAI executives as Lieutenant Colonels in the US Army Reserve, alongside leaders from other prominent tech companies such as Meta.[13] These developments run counter to the equitable and peaceful democratic efforts they’d publicly committed to.

Collective Intelligence Project (CIP) - CCAI

The Collective Intelligence Project proposes a deeper institutional shift: embedding democratic processes into the architecture of AI governance. Rather than rely on lab-led consultation, CIP argues for building “civic layer” infrastructure to enable scalable public participation in shaping AI norms and deployment.[9]

Their proposed "CI Stack" includes the following three layers (a minimal code sketch of how they might compose follows the list):

    Value elicitation – Tools like Pol.is or deliberative assemblies to surface shared public values.

    Decision-making – Sortition-based councils or other participatory methods for resolving tradeoffs.

    Implementation – Operational links to model governance or platform rules.
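
To make the layering concrete, here is a minimal, hypothetical Python sketch of the three layers composed as a pipeline. All names (`Statement`, `elicit_values`, `convene_council`, `implement`) and thresholds are my own illustrative assumptions, not CIP's tooling or the Pol.is API.

```python
from dataclasses import dataclass
import random

# Hypothetical sketch of the CI Stack's three layers as a pipeline.
# Names and thresholds are illustrative, not CIP's actual infrastructure.

@dataclass
class Statement:
    text: str
    agree: int = 0
    disagree: int = 0

def elicit_values(statements, votes):
    """Layer 1, value elicitation: tally Pol.is-style agree/disagree votes
    and surface statements with broad public support."""
    for idx, agrees in votes:
        if agrees:
            statements[idx].agree += 1
        else:
            statements[idx].disagree += 1
    # Keep statements where agreement clearly dominates (toy threshold).
    return [s for s in statements if s.agree > 2 * s.disagree]

def convene_council(participants, seats, seed=0):
    """Layer 2, decision-making: sortition, i.e. a random sample of
    participants forms a council to resolve tradeoffs among the values."""
    return random.Random(seed).sample(participants, seats)

def implement(ratified):
    """Layer 3, implementation: turn ratified statements into operational
    entries in a model policy or platform ruleset."""
    return {f"rule_{i}": s.text for i, s in enumerate(ratified)}

statements = [Statement("Models should disclose uncertainty."),
              Statement("Models should maximise engagement.")]
votes = [(0, True), (0, True), (0, True), (1, False), (1, True)]
shared = elicit_values(statements, votes)                            # layer 1
council = convene_council([f"p{i}" for i in range(100)], seats=12)   # layer 2
policy = implement(shared)                                           # layer 3
print(policy)  # {'rule_0': 'Models should disclose uncertainty.'}
```

The point of the stack is that each layer hands authority to the next; the sketch above only illustrates the data flow, not the institutional force that would make the final `policy` binding.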

Their first major effort, CCAI, was a collaboration with Anthropic to create a set of rules which would influence a chatbot's behavior, sourced from a representative sample of crowd workers in the United States.[14] This effort suffered from the same assumptions as OpenAI’s democratic inputs projects; the artifact created as a result of this process, a so-called constitution, only informs parts of the reinforcement learning (RL) process, meaning that all other development is accepted as-is.[15]
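
To make that narrowness concrete, here is a hedged Python sketch of where such a constitution typically enters a Constitutional-AI-style pipeline. The function names and the `judge` callable are my own illustrative assumptions, not Anthropic's API: public input touches only the preference-labeling step, while everything upstream is accepted as-is.

```python
# Hedged sketch: where a crowd-sourced constitution enters an RLAIF-style
# training loop. Names are illustrative assumptions, not Anthropic's API.

CONSTITUTION = [
    "Choose the response that is least likely to be harmful.",
    "Choose the response that most respects personal autonomy.",
]

def prefer(judge, prompt: str, response_a: str, response_b: str) -> str:
    """Preference labeling: ask a judge model which response better follows
    the constitution. This is the only training stage that sees the
    public's input."""
    query = (
        "Principles:\n" + "\n".join(CONSTITUTION) +
        f"\n\nPrompt: {prompt}\nA: {response_a}\nB: {response_b}\n"
        "Which response better follows the principles? Answer A or B."
    )
    return judge(query)  # 'A' or 'B'; these labels train a reward model

# Pretraining data, model architecture, deployment, and release decisions
# all sit outside this function, which is the article's point: the
# constitution shapes one slice of the RL fine-tuning process.

if __name__ == "__main__":
    mock_judge = lambda q: "A"  # stand-in for a real preference model
    print(prefer(mock_judge, "Should I share my friend's secret?",
                 "No; respect their privacy.", "Sure, go ahead."))
```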

In addition, the entries in the resulting list of “shoulds” and “should-nots” made it clear that whatever method was used simply isn’t enough to achieve an informed and valid democratic process with which to design AI systems. (The example statements from the original output list do not render here; see the original post.)[16]

These statements indicate the lack of context provided to the participants regarding (i) the purpose of the task, and (ii) the fundamental limitations of language models. Importantly, this is not the fault of the participants; rather, it implies that the organisers should have made more effort to ensure a shared baseline of understanding. The Odyssean Process acknowledges the inherent difficulty of this task, and leverages a wide literature on debiased expert elicitation, exploratory modeling, decision support, and citizen deliberation to provide participants with what they need to robustly parse potential interventions.[17]
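
As a flavour of what "exploratory modeling" means in this context, here is a minimal, hypothetical Python sketch; every name and number in it is an illustrative assumption of mine, not the Odyssean Process itself. Candidate interventions are stress-tested across many sampled futures, and the one that performs best in the worst case, rather than best on average, is favoured.

```python
import random

# Toy illustration of exploratory modeling / robust decision support:
# score each candidate intervention across many sampled futures and
# prefer the one with the best worst-case outcome (a simple maximin rule).

def sample_future(rng: random.Random) -> dict:
    """Draw one plausible future from the uncertain parameter space."""
    return {
        "adoption_rate": rng.uniform(0.1, 0.9),       # how widely AI is deployed
        "oversight_capacity": rng.uniform(0.0, 1.0),  # regulator effectiveness
    }

def outcome(intervention: float, future: dict) -> float:
    """Toy outcome model: stricter interventions help when oversight is
    weak, but impose costs when adoption is high. Purely illustrative."""
    return (future["oversight_capacity"]
            + intervention * (1 - future["oversight_capacity"])
            - 0.3 * intervention * future["adoption_rate"])

rng = random.Random(42)
futures = [sample_future(rng) for _ in range(10_000)]
candidates = [0.0, 0.25, 0.5, 0.75, 1.0]  # intervention "strictness" levels

# Maximin: choose the candidate whose worst sampled outcome is best.
robust = max(candidates, key=lambda c: min(outcome(c, f) for f in futures))
print(f"Most robust intervention level: {robust}")
```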

Why Efforts Fall Short - In Brief

Structural Weaknesses

    Company-driven agendas – labs set the questions, scope, and timelines.

    Input without power – participants advise, but hold no decision-making authority.

    Market logic over democratic principles – commercial incentives ultimately override public input.

Conceptual and Pragmatic Flaws

    Participation is not authority – being consulted is not the same as governing.

    Elite capture and bias – those already empowered shape whose voices are heard.

    AI as a cure-all – framing AI itself as the solution to every problem, including democracy’s.

Johannes Himmelreich casts a crucial spotlight: “Such a democratization of AI … is resource intensive … morally myopic … and neither theoretically nor practically the right kind of response.” He argues that instead of more participation, we should raise the democratic quality of administrative and executive processes.[7]

Existing efforts attempt to put democracy into predefined systems, which assumes that the system is part of the solution. While language models are likely a good tool to leverage in some cases, we need a framework to validate this. Current efforts are ultimately an oversimplification of the problem, and are too corporate-aligned for their own good. This is likely to lead to public spending on private market research, as well as unnecessarily complex and redundant solutions that line the pockets of those already in power.[19]

Democracy Demands More

We risk converting democratisation into a symbolic gesture unless participation is tied to structural authority and ongoing adaptation. Only by embedding democratic purpose at the core, through enforceable institutions and continuous feedback, can AI development truly reflect collective power, not performative input.

We understand how complex and fraught with risk both regulating transformative technology and innovating sociopolitical systems can be. We intend to conduct the first Odyssean Process in full, built of components with strong track records for robust and legitimate collective intelligence, in 2026. This piece serves as a call to action to proceed as boldly, but also as diligently, as possible, ensuring that democratic engagement on AI governance does what it says on the tin: empowering those impacted by AI to contribute to pivotal regulatory efforts, not the other way around.

Acknowledgements

While I was the primary author of this blogpost, I would not have been able to write it without significant assistance from Kendal Peirce and Giuseppe Dal Pra, as well as feedback from Max Ramsahoye.


Additional References

While not explicitly referenced in the text, the following sources were consulted and informed the writing of this blogpost.

[20] Inside OpenAI's Plan to Make AI More 'Democratic', TIME

[21] CIP Annual Report 2024, The Collective Intelligence Project

[22] A Roadmap to Democratic AI, The Collective Intelligence Project

[23] AI Risk Prioritization: OpenAI Alignment Assembly Report, The Collective Intelligence Project

[24] Democratising AI: Multiple Meanings, Goals, and Methods, Seger et al.

[25] Integrating Artificial Intelligence into Citizens’ Assemblies: Benefits, Concerns and Future Pathways, McKinney

[26] Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits, Mun et al.

[27] Better Together? The Role of Explanations in Supporting Novices in Individual and Collective Deliberations about AI, Schmude et al.

[28] AGI and Democracy, Ash Center

[29] Series 4: Enabling Secure Democratic Ecosystems Through AI, AI4Democracy

[30] AI Democracy Projects, Institute for Advanced Study

[31] Launch of “Democratic Commons” The first global research program to build AI in service of Democracy, Sorbonne Université

[32] The Case for Local and Regional Public Engagement in Governing Artificial Intelligence, DemocracyNext

 

  1. ^ AI Index Report 2024, Stanford Human-centered AI
  2. ^
  3. ^
  4. ^
  5. ^
  6. ^
  7. ^ Against "Democratizing AI", Johannes Himmelreich
  8. ^
  9. ^ The Collective Intelligence Project Whitepaper, The Collective Intelligence Project
  10. ^
  11. ^
  12. ^
  13. ^
  14. ^ CIP and Anthropic launch Collective Constitutional AI, The Collective Intelligence Project
  15. ^
  16. ^ Community Model Library - Original CCAI, Community Models (CIP)
  17. ^ The Odyssean Process Whitepaper, The Odyssean Institute
  18. ^
  19. ^

