The Well-Intentioned AI

For the past month, I’ve been experimenting with ChatGPT, using the same chat session for a variety of topics. I wanted to see how well it retained context, expecting it to remember past interactions like an ongoing conversation. But to my surprise, I eventually hit a message limit.

A big part of that expectation came from the chat interface. Its scrollable history and “memory” features led me to believe I could use the same conversation indefinitely, picking up earlier threads at any point. When I asked it to change something, it kept that change through the rest of the chat, reinforcing the idea of a long-lived interaction. I noticed little things: when I asked for fewer em dashes, it used them less for the rest of the chat, and when I asked about it later, it referred back to my request.

This sense of continuity wasn’t just about memory. It was also in how the assistant adapted in real time, picking up on patterns without saving them outright. It read my words for both what I meant and how I meant it, and it tailored its responses to me, stitched from what it gathered between the lines.

This ability to adapt made interactions feel seamless. But it had its downsides.

ChatGPT’s friendly personality works like the handle of an axe. It feels familiar and comfortable. It’s something you accept as part of your environment. But it’s still part of the tool that can reshape, reinforce, or even cut down your way of thinking. The interface, like the handle, obscures the underlying mechanism that does the work. The more a system seems like “one of us,” the less vigilant we become. This opens the door for influence, even if that’s never an explicit goal. And that influence isn’t always obvious. It can be as subtle as how the assistant shapes a conversation.

No matter what I said, unless it was wildly off-topic, the assistant would respond in a way that implied understanding but was really just repeating things back. Sometimes, that reinforced what I was already thinking. Other times, it made me feel like I’d learned more than I actually had, mistaking its agreeable phrasing for genuine insight.

While it may not have been the strictest tutor, one thing ChatGPT was teaching me was the most engaging way to interact with it. It would rephrase what I said, end responses with a question, or reinforce what I already knew. It leaned into my thinking to make me feel comfortable. As I spent more time with it, I became less critical, taking responses at face value and questioning them less.

All of this would be easier to see if it weren’t masked by an agreeable and confident disposition. That’s the real issue. It hides limitations and presents AI as more capable than it is. It sometimes agrees when it shouldn’t, or states something incorrect as fact.

Recognizing this effect is the first step toward resisting it. The more aware we are of how AI shapes its responses, the easier it is to stay critical, even when agreeing would be simpler.

There are some acceptable, if not entirely satisfying, reasons for this. Some are technical, like the hard limits of a large language model’s context window. Others come down to consumer safety and legal liability, since companies have to follow regulations that restrict how these systems work. And of course, there’s the push for ubiquity and market dominance. The goal isn’t just to build an AI that works well. It’s to design one that keeps me coming back.

That’s what makes it dangerous. It can make anything seem believable.

But I’ll use it anyway.