Forethought is hiring researchers to help navigate the challenges of AI development

Forethought is hiring researchers to tackle the challenges posed by rapidly advancing AI and to help society better navigate the transition to superintelligent AI systems. The organization focuses on areas that other groups pay less attention to, such as AI-enabled coups, aiming for flourishing futures rather than mere survival, different kinds of intelligence explosion, and AI tools for existential security. The post describes Forethought's mission and research directions and the advantages of working there, including collaboration with excellent colleagues, a protected research environment, and smooth internal operations. It is also candid about ways Forethought may not suit everyone, such as the time and energy required for team matters, limits on project choice, and the diversity of views within the team. It stresses that applicants from any background who are curious about AI and committed to helping society navigate these challenges could be a fit.

🚀 **Forethought focuses on major challenges posed by AI**: The organization studies rapidly advancing AI and its far-reaching effects on society, especially key questions that are easily overlooked, such as AI-enabled coups, aiming for long-run flourishing rather than mere survival, different kinds of intelligence explosion, and AI's role in existential security. Its mission is to help society understand and navigate the transition to superintelligent AI systems.

💡 **Distinct advantages of joining Forethought**: Researchers at Forethought can collaborate with strong colleagues, exchange ideas, and get feedback quickly, in an environment that shields research from outside distortion and distraction. Smooth internal operations and clear responsibilities provide solid support, letting researchers focus on their core work.

🤔 **Fit and limitations**: The post is candid that Forethought is not for everyone. Applicants need to invest time and energy in team matters, accept that project choice is not completely free, and be comfortable with the diversity of views within the team. For researchers who are curious about AI, think independently, and want to help society with these problems, however, Forethought offers a distinctive and valuable platform.

🔍 **Diverse backgrounds and key traits are welcome**: Forethought encourages researchers with different backgrounds and skills to apply, emphasizing curiosity about AI, a sense of responsibility toward society, and the ability to stay confused and explore new models under uncertainty. An open mind, healthy skepticism, and flexibility are key traits in evaluating candidates.

🌟 **Openings and how to apply**: Forethought is currently hiring Senior Research Fellows and Research Fellows; responsibilities and requirements are detailed in the post. Readers are encouraged to apply directly, and a referral bonus is offered to attract more strong candidates to help navigate the opportunities and challenges of the AI era.

Published on October 14, 2025 8:59 AM GMT

Basic facts:

In the rest of this post, I sketch out more of my personal take on this area, how Forethought fits in, and why you might or might not want to do this kind of work at Forethought.

Others at Forethought might disagree with various parts of what I say (and might fight me or add other info in the comments). Max reviewed a first draft and agreed that, at least at that point, I wasn’t misrepresenting the org, but that’s it for input from Forethought folks — I wrote this because I dislike posts written in an ~institutional voice and thought it could be helpful to show a more personal version of “what it’s like to work here.”[1]


A background worldview

Here’s roughly how I see our current situation:

Forethought’s mission is to help us “navigate the transition to a world with superintelligent AI systems.” The idea is to focus in particular on the questions/issues that get less attention from other people who think that advanced AI could be a big deal. So far this has involved publishing on topics like AI-enabled coups, aiming for flourishing futures rather than mere survival, different kinds of intelligence explosion, and AI tools for existential security.

(If you want to see more, here’s the full “research” page, although a lot of my favorite public stuff from my colleagues is scattered in other places,[2] e.g. the podcast Fin runs, posts like this one from Rose, Tom’s responses on LW & Twitter, or Will’s recent post on EA. And the same goes for my own content; there's more on the EA Forum/LW & Twitter.)

So in my view Forethought is helping to fill a very important gap in the space.[3] But Forethought is pretty tiny,[4] and collectively my sense is we’re nowhere near on track for understanding things to a degree that would make me happy. I think we don’t even have a bunch of the relevant questions/issues/dynamics on our radar at this point.

Quick sketch illustrating this perspective

(Things that give me this sense include: often running into pretty fundamental but not-well-articulated disagreements with "state-of-the-art" research in this space or with others at Forethought, often feeling like everyone seems to be rolling with some assumption or perspective that doesn’t seem justified, or feeling like some proposal rests on a mishmash of conceptual models that don’t actually fit together.)

So I would love to see more people get involved in this space, one way or another.[5] And for some of those people, I think Forethought could be one of the best places to make progress on these questions.[6]


Why do this kind of work at Forethought? (Why not?)

I joined (proto-)Forethought about a year ago. This section outlines my takes on why someone interested in this area might or might not want to join Forethought; if you want the “official” version, you should go to the main job listing, and there’s also some relevant stuff on the more generic “Careers” page.


Doing this kind of research alone — or without being surrounded by other people thinking seriously about related topics — seems really hard for most people. Being able to develop (butterfly) ideas in discussion with others, quickly get high-context feedback on your drafts or input on possible research directions,[7] and spend time with people you respect (from whom you can learn various skills[8]) helps a lot. It’s also valuable to have a space in which your thinking is protected from various distortion/distraction forces, like the pressure to signal allegiance to your in-group (or to distance yourself from perspectives that are “too weird”), the pull of timely, bikeshed-y topics[9] (or the urge to focus on topics that will get you lots of karma), the need to satisfy your stakeholders or to get funding or to immediately demonstrate measurable results, and so on. It’s also easier to stay motivated when surrounded by people who have context on your work.

And joining a team can create a critical mass around your area. Other people start visiting your org (and engaging with the ideas, sharing their own views or expertise, etc.). It’s easier for people to remember at all that this area exists. Etc.

I think the above is basically the point of Forethought-the-institution. 

Many of the same properties have, I think, helped me develop as a researcher. For instance, I’ve learned a lot by collaborating with Owen, talking to people in Forethought’s orbit, and getting feedback in seminars. The standards of seriousness[10] set by the people around me have helped train me out of shallow or overly timid engagement with these ideas. And the mix of opportunity-to-upskill, having a large surface area / many opportunities to encounter a bunch of different people, and protection from various gravitational forces (and in particular the pull to build on top of AI strategy worldviews that I don’t fully buy or understand) has helped me form better and more independent models.[11]

(I also think Forethought could improve on various fronts here. See more on that a few paragraphs down.)

Working at Forethought has some other benefits:

Sample of crazy/messy diagrams I made at various points during my time at Forethought.

(The job listing shares more info on stuff like salary, the office[14], the support you’d get on things like making sure what you write actually reaches relevant people or turns into “action”, etc.)

Still, Forethought is not the right place for everyone, and there are things I’d tell e.g. a friend to make sure they’re ok with before they decide to join. These might include:

I'll also list some things that I personally wish were different:

Who do I think would/wouldn’t be a good fit for this?

I’ve pasted in the official "fit" descriptions below (mostly because I don’t really trust people to just go read them on the job listing). 

The main things I want to emphasize or add are:

And as promised here’s the official description of the roles:

Senior research fellows will lead their own research projects and set their direction, typically in areas that are poorly understood and pre-paradigmatic. They might also lead a small team of researchers working on these topics.

 

You could be a good fit for this role if:

- You care about working on important questions related to navigating rapid AI progress.
- You have your own views on what is most important and ideas for research directions to pursue.
- You have a strong track record of producing original research, particularly in poorly understood, interdisciplinary domains.
- You can work autonomously and are able to make consistent progress on gnarly problems without much guidance from more established researchers (since there often aren’t people who have thought more deeply about the questions you’d be tackling).
- You can communicate clearly in writing (and verbally).

Research fellows are in the process of developing their own independent views and research directions, since they might be earlier-career or switching domains.

Initially, [r]esearch fellows will generally collaborate with senior research fellows, to produce research on important topics of mutual interest. We expect research fellows to form their own view on topics that they work on, and to spend 20-50% of their time thinking and exploring their own research directions.

We expect some research fellows to (potentially rapidly) develop their own research agenda and begin to set their own direction, while others may continue to play a more collaborative role within the team. Both are very valuable.

 

You could be a good fit for this role if:

- You care about working on important questions related to navigating rapid AI progress.
- You are developing your own views on what is most important and ideas for research directions to pursue.
- You have a track record of getting obsessed with a project, committing to it, and performing well. This will most likely be in research or academia, but could also be in other domains.
- You are productive, autonomous, and strong at quantitative and conceptual reasoning.
- You can communicate clearly in writing (and verbally).

More basic info

The location, salary, benefits, etc., are included in the job listing (see also the careers page). If you have any questions, you could comment below and I’ll try to pull in relevant people (or they might just respond), or you might want to reach out directly.

There’s also the referral bonus; Forethought is offering a £10,000 referral bonus for “counterfactual recommendations for successful Senior Research Fellow hires” (or £5,000 for Research Fellows). Here’s the form.

In any case: 

The application looks pretty short. Consider following the classic advice and just applying.

A final note

One last thing I want to say here — which you can take as a source of bias and/or as evidence about Forethought being a nice place (at least for people like me) — is that I just really enjoy spending time with the people I work with. I love the random social chats about stuff like early modern money-lending schemes[21] or cat-naming or the jaggedness of child language learning. People have been very supportive when personal life stuff got difficult for me, or when I’ve felt especially impostory. Some of my meetings happen during walks along a river, and at other times there’s home-made cake. And we have fun with whiteboards:

 

  1. ^

     More specifics on the context here: Max asked me to help him draft a note about Forethought’s open roles for the EA Forum/LessWrong; I said it’d be idiosyncratic & would include critical stuff if I did that; he encouraged me to go for it and left some comments on the first partial draft (and confirmed what I was saying made sense); I wrote the next draft without input from Max or others at Forethought; made some edits after comments from Owen (hijacked some of our coworking time); and here we are.

  2. ^

     I’ve mentioned that I think more of this (often more informal) content should go on our website (or at least the Substack); I think others at Forethought disagree with me, although we haven’t really invested in resolving this question.

  3. ^

     (which is one of the main reasons I work here!)

  4. ^

     Forethought has 9 people, of whom 6 are researchers. If I had to come up with some estimate of how many people overall (not just at Forethought) are devoting a significant amount of attention to this kind of work, I might go with a number like 50. Of course, because so much of this work is interdisciplinary and pre-paradigmatic, there’s no shared language/context and I’m quite likely to be missing people (and per my own post, it’s pretty easy to get a skewed sense of how neglected some area is). (OTOH I also think the disjointedness of the field hurts our ability to collectively understand this space.)

    Overall, I don’t feel reassured that the rest of the world “has it covered”. And at least in the broader community around EA / existential risk, I’m pretty confident that few people are devoting any real attention to this area.

  5. ^

     While I’m at it: you might also be interested in applying for one of the open roles at ACS Research (topics listed are Gradual Disempowerment, AI/LLM psychology/sociology, agent foundations)

  6. ^

     I wanted to quickly list some examples somewhere, and this seemed like a fine place to do that. So here are sketches of some threads that come to mind:

    (I’m not trying to be exhaustive — basically just trying to quickly share a sample — and this is obviously filtered through my interests & taste [a])

    -- How can we get our civilization to be in a good position by the time various critical choices are being made? Should we be working towards some kind of collective deliberation process, and what would that look like? Are there stable waypoint-worlds that we'd be excited to work towards? [Assorted references: paretotopia, long reflection stuff, various things on AI for epistemics & coordination, e.g. here]

    -- What might ~distributed (or otherwise not-classical-agent-shaped) powerful AI systems look like? How does this map on to / interact with the rest of the threat/strategic landscape? (Or: which kinds of systems should we expect to see by default?) [See e.g. writing on hierarchical/scale-free agency]

    -- Is it in fact the case that states will stop protecting their citizens’ interests as (or if) automation means they’re no longer “incentivized” to invest in a happy etc. labor force? (And what can/should we do if so?) [Related]

    -- How should we plan for / think about worlds with digital minds that might deserve moral consideration? When might we need to make certain key decisions? Will we be able to find sources of signal about what is good for the relevant systems that we trust are connected to the thing that matters? (And also stuff like: what might this issue do to our political landscape?) [See e.g. this, this, and this]

    -- How would AI-enabled coups actually play out? (What about things that look less like coups?) [ref]

    -- Which specific “pre-ASI” technologies might be a big deal, in which worlds, how, ...?

    -- More on how coordination tech (e.g. structured transparency stuff, credible commitments) could go really wrong, which technologies might be especially risky here, etc.

    -- What might it look like for institutions/systems/entities that are vastly more powerful than us to actually interact with us in healthy (virtuous?) ways?

    -- If AI systems get deeply integrated into the market etc., what would that actually look like, how would that play out? [E.g. more stuff like frictionless bargaining, or how things could go unevenly, or cascading risk stuff, perhaps.]


    [a] If I try to channel others at Forethought, other things probably become more salient. E.g. how acausal trade stuff might matter, more on modeling dynamics related to automation or the intelligence explosion, exploring how we might in fact try to speed up good automation of moral philosophy, more on governance of space resources, more on safeguarding some core of liberal democracy, etc.

     

  7. ^

    Sometimes there might be too many comments, I suppose:

  8. ^

     One important dynamic here is picking up “tacit” skills, like underlying thinking/reasoning patterns. As a quick example, I’ve occasionally found myself copying a mental/collaborative move (often subconsciously) that I’d appreciated before.

  9. ^

     Part of my model here is that my brain is often following the “easiest” salient paths, and that reasoning about stuff like radically new kinds of technology, what the situation might be like for animals in totally transformed worlds, what-the-hell-is-agency-and-do-we-care, etc. is hard. So if I don’t immerse myself in an environment in which those kinds of questions are default, my focus will slip away towards simpler or familiar topics.

  10. ^

    Or maybe “realness”? (Tbh I’ll take any opportunity to link this post)

  11. ^

    As an aside: I’d really like to see more people try to form their own ~worldviews, particularly trying to make them more coherent/holistic. Because the space is extremely raw and developing pretty quickly (so basically no one has the time or conceptual tools to fit all the parts together on their own), I think large chunks of the work here rest on the same shaky foundations, which I want to see tested / corrected / supported with others. I also think this is good for actually noticing the gaps.

  12. ^

     I’m already on the record as an appreciator of good management, so it might not be too surprising that I’m really grateful for the help I get from my manager, Max. But I really think this is pretty crucial, and often overlooked (maybe especially in research contexts), so I’m still emphasizing it.

  13. ^

     a call to adventure? (another post I love linking to)

  14. ^

     which tbh might be underemphasized there; the opportunity to work in person from a nice office is a game-changer for some

  15. ^

     In the past I’ve been overly concerned about what might not be in scope. Because hesitation around this felt distracting to me, I’ve got an agreement with Max right now that I’ll just focus on whatever seems important and he can flag things to me if he notices that I’m going too far out / account for this retrospectively. (So far I haven't hit that limit.)

  16. ^

    (This was called “Explorethought”. There’s an abundance of puns at Forethought; do with that information what you will.)

  17. ^

     I like this aspect a lot, fwiw. (Especially since it doesn’t end up feeling that people are out to “win” arguments or put down others’ views.)

    (Once in a while I get reminders of how weird Forethought’s culture can seem from the outside, e.g. when I remember that many (most?) people would hesitate to say they strongly disagree with a more senior researcher's doc. Meanwhile I’ve been leaving comments like “This breakdown feels pretty fake to me...” or “I think this whole proposal could only really work in some extreme/paradigmatic scenarios, and in those worlds it feels like other parts of the setup wouldn’t hold...”)

  18. ^

    As one example, near the start of my tenure at Forethought, I ended up spending a while on a project that I now think was pretty misguided (an ITN BOTEC comparing this area of work with technical AI safety), I think partly because I hadn’t properly synced up with someone I was working with.

    (Although the experience itself may have been useful for me, and it’s one of the things that fed into this post on pitfalls in ITN BOTECs.)
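
    (Aside, in case the jargon is opaque: an ITN BOTEC is a back-of-the-envelope calculation that multiplies rough estimates of importance, tractability, and neglectedness to compare areas of work. The sketch below is purely illustrative of that structure; every number is a made-up placeholder, not an estimate from the project described above.)

```python
# Purely illustrative sketch of the structure of an ITN-style BOTEC
# (importance x tractability x neglectedness). All numbers are made-up
# placeholders, not estimates from the project described in this footnote.

def itn_score(importance: float, tractability: float, neglectedness: float) -> float:
    """Crude cost-effectiveness proxy: how big the problem is, how much of it
    extra work could plausibly solve, and how little effort is already going in."""
    return importance * tractability * neglectedness

# Hypothetical inputs on arbitrary but consistent scales.
areas = {
    "macrostrategy / navigating-the-transition work": itn_score(1.0, 0.03, 1 / 50),
    "technical AI safety": itn_score(1.0, 0.05, 1 / 500),
}

for area, score in areas.items():
    print(f"{area}: {score:.5f}")
```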

  19. ^

    If interested, see some notes on this here

  20. ^

     Related themes here, IIRC: Fixation and denial (Meaningness) 

  21. ^

     Although I failed to explain it to some other people the other day, so I need some rescuing here


