It’s starting to feel like artificial intelligence can actually understand us. Maybe we aren’t as complicated as we thought. And Naomi Fry tells us why Ryan Murphy’s “pretty damn bad” new legal drama might be worth a watch. Plus:
James Somers
A writer and a computer programmer
When large language models (L.L.M.s) such as ChatGPT first came on the scene, I grumpily avoided them. I figured they had pilfered much of the world’s creative output, including some of my own, only to produce confident inaccuracy and mediocre poetry. I didn’t quite see how they would fit into my life.
But, then, I started to use A.I. in my work as a programmer, where things were moving fast. L.L.M.s are especially good at writing code, in part because code has more structure than prose, and because you can sometimes verify that code is correct. While the rest of the world was mostly just fooling around with A.I. (or swearing it off), I watched as some of the colleagues I most respect retooled their working lives around it. I got the feeling that if I didn’t retool, too, I might fall behind.
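The point about verification can be made concrete. Prose has no mechanical test for correctness, but a model-suggested function can be run against cases with known answers. (The function and tests below are a hypothetical illustration, not an example from my own work.)

```python
# A hypothetical routine of the sort an L.L.M. might suggest:
# reverse the order of the words in a sentence.
def reverse_words(sentence: str) -> str:
    return " ".join(reversed(sentence.split()))

# Unlike a paragraph of prose, the suggestion can be checked mechanically.
assert reverse_words("hello brave new world") == "world new brave hello"
assert reverse_words("one") == "one"
assert reverse_words("") == ""
```

If the assertions pass, the code is at least correct on those cases—a kind of partial, automatic proofreading that has no real analogue for an essay or a poem.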
I noticed, in my programming work, that, as I asked L.L.M.s to complete increasingly complex tasks, it became harder to defend the notion that they were blindly stitching words together. They seemed to understand what I was asking them to do; they hoovered up not just the sense but the intricate details of my code. And they did it so quickly.
Just yesterday, I used an A.I. model at work to help me get unstuck two or three times; on one of these occasions, I had the computer tackle a problem that I found daunting, letting it have a crack while I did something else—lunch, I think, or a meeting—and, when I came back, it had worked the problem out. This kind of experience is empowering, but also unnerving.
For my piece in this week’s issue, in which I consider the question of whether A.I. might actually be thinking, I spoke with cognitive neuroscientists and A.I. researchers who report having had a similar experience. They are taken aback by these models. A machine that behaves with seeming understanding—and which, because it’s a machine, can be probed and controlled—has become a “model organism” for the study of understanding itself. As one leading neuroscientist told me, “The advances in machine learning have taught us more about the essence of intelligence than anything that neuroscience has discovered in the past hundred years.”
The A.I.s of today are built on simple principles. If they’re capable of something that resembles thinking, it might suggest that the human mind is not as impenetrable a mystery as we once believed. Another neuroscientist I talked to, reckoning with that possibility, said something that really surprised me: he is “worried these days that we might succeed in understanding how the brain works.” Think of that: a neuroscientist who dreads the very discovery his field was founded to make. “Pursuing this question,” he said, “may have been a colossal mistake for humanity.”
Editor’s Pick
Joachim Trier Has Put Oslo on the Cinematic Map
The idea of a place being as much a character in a film as a person is a cliché. But, Margaret Talbot writes, “it’s a cliché that Trier has made his own.” The Norwegian director has set three of his previous six movies (including the 2021 breakout hit “The Worst Person in the World”) in his country’s capital city. His latest, “Sentimental Value,” is no different; it’s another intimate character study set in Oslo. And his approach to directing is as empathic as his films. Read the story »
More Top Stories
How Bad Is It?
“All’s Fair,” Ryan Murphy’s new celebrity-heavy legal drama, starring Kim Kardashian, is now streaming on Hulu.
