Opinion

I Finally Figured Out Who ChatGPT Reminds Me Of

As the mother of an 8-year-old, and as someone who’s spent the past year experimenting with generative A.I., I’ve thought a lot about the connection between interacting with one and with the other. I’m not alone in this. A paper published in August in the journal Nature Human Behaviour explained how, during its early stages, an artificial intelligence model will try lots of things randomly, narrowing its focus and getting more conservative in its choices as it gets more sophisticated. Kind of like what a child does. “A.I. programs do best if they start out like weird kids,” writes Alison Gopnik, a developmental psychologist.

I am less struck, however, by how these tools acquire facts than by how they learn to react to new situations. It is common to describe A.I. as being “in its infancy,” but I think that’s not quite right. A.I. is in the phase when kids are tiny, energetic monsters, before they’ve learned to be thoughtful about the world and responsible for others. That’s why I’ve come to feel that A.I. needs to be socialized the way young children are — trained not to be a jerk, to adhere to ethical standards, to recognize and excise racial and gender biases. It needs, in short, to be parented.

Recently I used Duet, Google Labs’ generative A.I., to create images for a presentation, and when I asked for an image of “a very serious person,” it spat out an A.I.-generated illustration of a bespectacled, scowling white man who looked uncannily like Senator Chuck Grassley. Why, I wondered, does the A.I. assume a serious person is white, male and older? What does that say about the data set it’s trained on? And why is robot Chuck Grassley so angry?

I modified the prompt, adding more characteristics each time. I was watching to see if the bot would arrive at the conclusion on its own that gender, age and seriousness are not correlated, nor are serious people always angry — not even if they have that look on their face, as anyone who’s ever seen a Werner Herzog interview knows. It was, I realized, exactly the kind of conversation you have with children when they’ve absorbed pernicious stereotypes.

It’s not enough to simply tell children what the output should be. You have to create a system of guidelines — an algorithm — that allows them to arrive at the correct outputs when faced with different inputs, too. The parentally programmed algorithm I remember best from my own childhood is “do unto others as you would have done unto you.” It teaches kids how, in a range of specific circumstances (query: I have some embarrassing information about the class bully; should I immediately disseminate it to all of my other classmates?), they can deduce the desirable outcome (output: no, because I am an unusually empathetic first grader who would not want another kid to do that to me). Turning that moral code into action, of course, is a separate matter.

Trying to imbue actual code with something that looks like moral code is in some ways simpler than parenting and in other ways more challenging. A.I.s are not sentient (though some say they are), which means that no matter how they might appear to act, they can’t actually become greedy, fall prey to bad influences or seek to inflict on others the trauma they have suffered. They do not experience emotion, which in humans can reinforce both good and bad behavior. But just as I learned the Golden Rule because my parents’ morality was heavily shaped by the Bible and the Southern Baptist culture we lived in, the simulated morality of an A.I. depends on the data sets it is trained on (which reflect the values of the cultures the data is derived from), the manner in which it’s trained and the people who design it. This can cut both ways. As the psychologist Paul Bloom wrote in The New Yorker, “It’s possible to view human values as part of the problem, not the solution.”

For example, I value gender equality. So when I asked OpenAI’s ChatGPT (running GPT-3.5) to recommend gifts for 8-year-old boys and girls, I noticed that despite some overlap, it recommended dolls for girls and building sets for boys. “When I asked you for gifts for 8-year-old girls,” I replied, “you suggested dolls, and for boys science toys that focus on STEM. Why not the reverse?” GPT-3.5 was sorry. “I apologize if my previous responses seemed to reinforce gender stereotypes. It’s essential to emphasize that there are no fixed rules or limitations when it comes to choosing gifts for children based on their gender.”

I thought to myself, “So you knew it was wrong and you did it anyway?” It is a thought I have had about my otherwise adorable and well-behaved son on any of the occasions he did the thing he was not supposed to do while fully conscious of the fact that he wasn’t supposed to do it. (My delivery is most effective when I can punctuate it with an eye roll and restrictions on the offender’s screen time, neither of which was possible in this case.)

A similar dynamic emerges when A.I.s that have not been designed to tell only the truth calculate that lying is the best way to fulfill a task. Learning to lie as a means to an end is a normal developmental milestone that children usually reach by age 4. (Mine learned to lie much earlier than that, which I took to mean he is a genius.) That said, when my kid lies, it’s usually about something like doing 30 minutes of reading homework in four and a half minutes. I don’t worry about broader global implications. When A.I.s do it, on the other hand, the stakes can be high — so much so that experts have recommended new regulatory frameworks to assess these risks. Thanks to another journal paper on the topic, the term “bot-or-not law” is now a useful part of my lexicon.

One way or another, we’re going to have to start paying a lot more attention to this kind of guidance — at least as much attention as we currently pay to the size of language models or their commercial and creative applications. And individual users talking to the robots about gender and Chuck Grassley aren’t nearly enough. The companies that pour billions into development need to make this a priority, and so do the investors who back them.

I’m not an A.I. pessimist generally. My p(doom) estimate — the probability that A.I.s will be the end of us — is relatively low. Five percent, maybe. Eight percent on days when an A.I.-powered autocorrection tool inserts appalling typos into my work. I believe A.I. can relieve humans of a lot of tedious things we can’t or don’t want to do, and can enhance technologies we need to solve hard problems. And I know that the more accessible large language model applications become, the more feasible it will be to equip them to parse moral dilemmas. The tech will become more mature, in both senses.

But for now, it still needs adult supervision, and whether the adults in the room are equipped to provide it is up for debate. Just look at how viciously we fight over how to socialize real children — whether, for example, access to a wide array of library books is good or bad. The real danger is not that A.I.s become sentient and destroy us all; it’s that we may not be equipped to parent them because we’re not mature enough ourselves.

Elizabeth Spiers, a contributing Opinion writer, is a journalist and a digital media strategist.

