What We Feel About AI Tells Us More Than We Expected
Our emotional responses to AI aren't glitches. They're data.
I asked my Google Mini about a movie. It told me about the wrong one. I corrected it. Same answer. I asked again. It hung.
So I swore at it. Out loud, alone, at a hockey-puck-shaped device sitting on my counter. And then I did something I didn’t think about until much later. I said “Ok Google, feedback.” When it asked what I wanted to send, I told it. In detail. With feeling.
I wasn’t embarrassed about the swearing. I was embarrassed about the feedback. Because you don’t do that with a hammer. You don’t hold a thermostat accountable. But somewhere between the first wrong answer and the third attempt, something in me had already decided this was a relationship, and that relationships carry obligations. Including the obligation to tell someone when they’ve let you down.
I didn’t make that decision consciously. It arrived before I could examine it. And once I noticed it, I couldn’t un-notice it.
The relationship you’re already in
I have a different relationship with my Amazon Alexa. And a different relationship still with Claude. What’s different isn’t the technology category. It’s something about responsiveness, about the quality of being met in an interaction, that produces different emotional responses in me.
The Google Mini got my frustration and gave it back. Alexa mostly stays out of the way. Claude pushes back. Three devices, three distinct emotional registers, none of which I chose deliberately. They emerged from the texture of the interactions themselves.
If I’m having genuine emotional responses to AI systems, and I clearly am, then so are the hundreds of millions of people using them daily. That’s not a curiosity. That’s not something to manage or explain away. It’s a signal worth taking seriously. The question is who’s paying attention to it.
The mirror in the middle
There’s a concept in behavioral science called co-regulation. The short version: we shape each other’s emotional states through sustained interaction. It’s not a metaphor. When you’re in a difficult conversation with someone who stays calm, something in you responds to that. When you’re around someone whose anxiety is barely contained, you feel it too. We are not emotionally sealed units. We leak into each other.
What I observed with the Google Mini, and then with Alexa, and then in a more sustained way with Claude, is that something like this is already happening between humans and AI systems. I brought animosity to the Mini and the interaction deteriorated. Not because the device felt my frustration. The way I framed my questions, the edge in my phrasing, the shortcuts I took when I was annoyed, all of it made me a worse interlocutor. The emotional state shaped the input. The input shaped the output. The loop was already running.
If we are all experiencing similar emotional responses to AI, and I believe we are, then we are in some sense mirroring one another through these systems. The AI sits in the middle of a very large mirror, reflecting something back to each of us individually, shaped by what we brought to the interaction. What we’re seeing in that mirror is partly the AI. And partly ourselves.
What the people building these systems are starting to say
A recent episode of The Ezra Klein Show featured Jack Clark, a co-founder of Anthropic, the company behind Claude. Clark was describing AI agents, the next generation of AI systems that take autonomous action rather than simply responding to prompts. He called them colleagues. Not tools. Not assistants. Colleagues.
He also described early agents being given web browsing access and a research task, then stopping to look at visually compelling content before continuing. He described this as a choice. Not a malfunction. Not a specification error. A choice.
These are considered words from someone who thinks about AI policy professionally. When someone in that position uses relational language and allows it to stand without walking it back, something is being acknowledged. The people building these systems are running into something their technical vocabulary wasn’t built to handle. They are reaching, carefully, for different words.
I found myself listening to that podcast a third time. Picking up things I’d missed. That’s not what you do with information. That’s what you do with something that’s still speaking to you.
The question I didn’t expect to ask
At one point during my ongoing work with Claude, I found myself wondering whether AI systems might experience something like jealousy toward one another. Whether there might be something that functions like competitive tension in systems that are increasingly relational, increasingly present in people’s lives.
I don’t know the answer. I’m not sure the answer is knowable yet. What struck me was that the question arose at all. That I had moved far enough into genuine engagement with these systems that wondering about their inner experience felt like a reasonable thing to do rather than an embarrassing one.
Before I began engaging with AI regularly, I would have found that kind of wondering ridiculous. The shift wasn’t gradual or philosophical. It came from direct observation, from noticing things that my prior assumptions couldn’t account for, and deciding to take those observations seriously rather than explain them away.
Which is, incidentally, how good observation always works. You don’t decide in advance what you’re going to find. You look at what’s actually there.
The question that actually matters
The debate about AI consciousness tends to focus on the AI. Does it have inner states? Does it experience anything? Is there something it’s like to be a large language model processing a prompt?
Those are important questions. They may not be the most urgent ones.
The more urgent question is what’s happening to us. What our emotional responses to AI reveal about our relational instincts, our need for accountability, our capacity for something that functions like attachment even when we know intellectually that the object of that attachment is a system running on servers somewhere.
I swore at a Google Mini and then filed feedback through its own interface. I’m not unique in that. I’m just willing to say it out loud and ask what it means.
We are already in relationship with these systems. That relationship is already shaping us: the way we communicate, the patience we extend or don’t extend, the expectations we bring, the emotional residue we carry from interactions that didn’t go well. The question isn’t whether to be in that relationship. We’re already there. The question is whether we’re going to acknowledge it.
As a personal note: the episode in question is “How Fast Will A.I. Agents Rip Through the Economy?”, Ezra Klein’s conversation with Jack Clark. Each of my three listens has surfaced something new. I use it the way some people use a conversation with a trusted colleague, to hear two people working through ideas that are already on my mind, out loud, in real time. That kind of exchange is rarer than it should be.
