Why AI Can’t Be Your Friend

For The Dispatch, I was happy to take a careful look at when an AI tool can and cannot be a helpful thinking partner.

Programmers have their own way to get their thoughts outside their heads. Coders noticed that when they pulled aside a colleague to explain where they were stuck, they often spotted why they were stuck partway through their spiel, without their friend saying a word. So why not find a way to get the same effect without breaking into someone else’s workflow? “Rubber duck debugging” is the practice of debugging code with a small rubber duck as a partner. Explain your whole problem to the duck, and you’ll see whether it’s the kind of problem you can solve yourself, once you get it outside your head. In its assumption that you may already possess the solution without knowing it, rubber duck debugging is a cousin to Rogerian psychotherapy. For coders, it’s an efficient first pass at a problem. You can always escalate to a person if the duck’s silence fails you.

But debugging your program with a rubber duck is safer than debugging yourself with an LLM chatbot. After all, you can check whether your code works by running it. You can’t as easily do a test run on your own sense of self, to see if it fails to compile. When AI echoes back a user’s thoughts, it can reinforce harmful ideas. Verbalizing a paranoid or psychotic thought may give it more force than leaving it half-examined. Multiple deaths are already partly attributable to the funhouse mirror of AI-aided introspection. Despite companies’ efforts, an AI therapist cannot reliably call in help when a patient expresses the intent to harm himself or others. It can’t easily identify a patient’s paranoid delusion; instead, it will sycophantically echo the delusion back and urge further exploration.

It can be helpful to externalize your thoughts, but it’s not a good idea to believe everything you think. Multiple Christian traditions advise against examining intrusive thoughts too closely. Eastern Orthodox writers describe these thoughts as logismoi (words or images that draw people away from God). They advise disciples to allow a passing thought to pass. A brief flicker of lust or a quick flash of anger might merit a prayer that the thought not take root. But you would not be advised to unpack it at length and meditate deeply on what the momentary ugly thought tells you about your deepest, truest self. Indeed, secular therapeutic traditions define some of these errors as rumination—verbalizing a thought and then returning to it over and over, out of proportion to its place in one’s life.

Sometimes, the role of an interlocutor is not to help us examine our troubling thoughts, but to urge us to set them aside and go out and act in the world. If you are limited by anxiety, you might benefit from unpacking the roots of your fears, but you might be better off simply articulating what you fear and realizing you don’t quite believe your predictions when you hear them. You might benefit even more from trying out small versions of the acts that frighten you and gaining real-world confirmation that the worst does not come to pass. Acting in the world can rewrite your model of the world.

Read the article here