Published on October 30, 2025 12:00 PM GMT
These days, it's relatively easy to create a digital replica of a person.
You feed the person's writings to a top LLM and, with a clever prompt, the LLM starts thinking like that person (e.g., see our experiments on the topic).
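The persona-prompting idea above can be sketched as follows. This is a minimal illustration, not the authors' actual setup: the prompt wording, the `build_persona_prompt` helper, and the sample writings are all hypothetical.

```python
# Minimal sketch of persona prompting: condition an LLM on a person's
# writings so it answers in their style. The prompt template below is an
# illustrative assumption, not a tested recipe.

def build_persona_prompt(name: str, writings: list[str], question: str) -> str:
    """Assemble a prompt asking an LLM to answer as `name`,
    conditioned on a corpus of that person's writings."""
    corpus = "\n\n---\n\n".join(writings)
    return (
        f"You are a faithful digital replica of {name}. "
        f"Below are samples of {name}'s writings. Adopt their reasoning "
        "style, values, and voice when answering.\n\n"
        f"=== WRITINGS ===\n{corpus}\n=== END WRITINGS ===\n\n"
        f"Question: {question}\n"
        f"Answer as {name}:"
    )

# Hypothetical example corpus and question:
prompt = build_persona_prompt(
    "Alice Example",
    [
        "Alignment is mostly a verification problem.",
        "I distrust proxy metrics.",
    ],
    "How would you stress-test a new model?",
)
print(prompt)
```

The resulting string would be sent as the system or user message of any chat-style LLM API; in practice one would also cap the corpus to fit the model's context window.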
Of course, this is far from proper mind uploading. But even in this limited form, it could be highly useful for AI alignment research:
- accelerate the research by building digital teams of hundreds of virtual alignment researchers
- run smarter alignment benchmarks (e.g. a digital Yudkowsky running millions of clever tests against your new model)
- explore human values, and inner and outer alignment, with the help of digital humans.
Why is no one doing this?
Given the short timelines and the low likelihood of an AI slowdown, this may be the only way to get alignment before AGI: by accelerating alignment research by orders of magnitude.
