Think AI Is Just a Tool? Think Again
AI isn’t just a tool. It’s your newest relationship—and you need to start treating it like one.
“I asked ChatGPT to put my dog in a pelican costume. It did—but added a Half Moon Bay sign I never requested. When I asked why, it said it was using ‘location context from our previous conversations.’”
“Later, testing if it could identify a neighborhood photo, I realized it wasn’t analyzing the image. It already knew where I lived.”
These moments, experienced by developer Simon Willison, reveal something most of us haven’t acknowledged: AI isn’t a tool.
Indeed, these systems remember our conversations, adapt to our preferences, and make decisions based on our history.
That’s not tool behavior.
That’s relationship behavior.
The Hidden Context Problem
What Willison discovered when he investigated was extraordinary: ChatGPT had been building a detailed profile of him. It knew he was “an avid birdwatcher with a particular fondness for pelicans” and had tracked his “lighthearted or theatrical approach” to conversations. It understood his communication style, his interests, even his work patterns.
In human relationships, we know when someone is drawing on shared history. With AI, that context operates invisibly. As Willison put it: “The entire game when it comes to prompting LLMs is to carefully control their context.” But memory removes that control. “There’s now an enormously complex set of extra conditions that can invisibly affect the output.”
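To make the mechanics concrete, here is a minimal sketch of how invisible context assembly can work, assuming a generic chat-completion message format. The `build_context` function and the memory entries are hypothetical illustrations (the entries echo Willison's examples), not OpenAI's actual implementation.

```python
# A minimal sketch of how hidden memory can steer an LLM's output.
# `build_context` is a hypothetical stand-in for whatever assembly a
# real product performs before calling the model.

def build_context(user_prompt: str, memory: list[str]) -> list[dict]:
    """Assemble the messages actually sent to the model.

    The user types only `user_prompt`; any memory lines are folded
    into the system message without appearing in the chat window.
    """
    system = "You are a helpful assistant."
    if memory:
        system += "\nFacts about this user:\n" + "\n".join(
            f"- {fact}" for fact in memory
        )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_prompt},
    ]

# With no memory, the request is exactly what the user typed.
print(build_context("Put my dog in a pelican costume.", []))

# With accumulated memory, the same prompt carries invisible extra
# conditions -- which is how an unrequested "Half Moon Bay" sign
# can end up in the picture.
memory = [
    "Lives near Half Moon Bay, California",
    "Avid birdwatcher with a particular fondness for pelicans",
]
print(build_context("Put my dog in a pelican costume.", memory))
```

The user's prompt is identical in both calls, yet the model receives two very different requests, and only one of them is visible in the chat window. That is the loss of control Willison describes.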
The Relationship We’re Already In
Researchers at KPMG and the University of Melbourne report a telling disconnect: 66% of people regularly interact with AI, yet only 46% are willing to rely on it.
We use these systems far more than we trust them.
You don’t feel betrayed when a hammer breaks. But you might feel deceived when an AI uses information about you that you didn’t know it had collected. That reaction isn’t about a malfunction; it’s about a breach of the relationship’s integrity.
Willison experienced this directly: “I try a lot of stupid things with these models. I really don’t want my fondness for dogs wearing pelican costumes to affect my future prompts where I’m trying to get actual work done!”
Why Relationships With AI Are Different
Traditional relationships evolve through shared experiences where both parties are aware of what’s being shared. AI relationships are different.
They begin with radical asymmetry: the AI accumulates detailed knowledge about you while you know almost nothing about how it processes that information.
Research on trust, notably Mayer, Davis, and Schoorman’s integrative model, identifies three pillars of trust building: competence, integrity, and benevolence. AI relationships strain all three.
Competence is undermined by illusion. A recent UK study found that only the top-performing university students could reliably spot when AI responses contained hallucinations—suggesting that for most users, confident tone and fluent delivery can easily mask inaccuracy.
Integrity suffers when reasoning is opaque. Most users can’t trace how an AI arrived at its output, what data it relied on, or what blind spots it carries.
Benevolence is the hardest to pin down. Is the AI aligned with you? Or with the company that trained it? When the incentives are unclear, trust collapses.
And while these systems can simulate intimacy—remembering birthdays, echoing your voice, finishing your sentences—that familiarity is manufactured, not mutual.
What You Should Expect From Your AI Relationships
If you’re going to relate to AI—professionally, personally, strategically—then you need relational standards, beyond just ethical ones:
Clarity. An AI should say what it is, what it can do, and when it doesn’t know. Obfuscation erodes trust.
Explainability. You should understand not just what it suggests, but why. If the rationale is invisible, so is the risk.
Control. You must be able to set boundaries. This includes data use, memory, personalization, and reversibility; a sketch of what such controls might look like follows this list.
Consistency. A reliable system isn’t one that never changes—it’s one that adapts predictably and transparently.
Challenge. The best relationships don’t only affirm. They push you. Systems that respectfully question your assumptions build more trust than those that mirror your biases.
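As promised above, here is a hypothetical sketch of what meaningful memory controls could look like. The `MemoryControls` class and its methods are illustrative assumptions, not any real product's interface; they simply map each boundary named under "Control" to an operation a user could actually perform.

```python
# A hypothetical sketch of user-facing memory controls. Nothing here
# mirrors a real product's API; it illustrates the four boundaries
# named above: data use, memory, personalization, reversibility.

from dataclasses import dataclass, field

@dataclass
class MemoryControls:
    memory_enabled: bool = True           # may the system store anything?
    personalization_enabled: bool = True  # may stored facts shape answers?
    entries: list[str] = field(default_factory=list)

    def remember(self, fact: str) -> None:
        # Data use: nothing is stored unless memory is switched on.
        if self.memory_enabled:
            self.entries.append(fact)

    def inspect(self) -> list[str]:
        # The user can always see exactly what has been stored.
        return list(self.entries)

    def forget(self, fact: str) -> None:
        # Reversibility: any stored fact can be removed on request.
        self.entries = [e for e in self.entries if e != fact]

    def context_for_prompt(self) -> list[str]:
        # Personalization: memories reach the model only if allowed.
        return self.entries if self.personalization_enabled else []

controls = MemoryControls()
controls.remember("Fond of pelicans")
print(controls.inspect())             # ['Fond of pelicans']
controls.forget("Fond of pelicans")
print(controls.context_for_prompt())  # []
```

The design point is reversibility: anything the system stores should be inspectable and deletable, and personalization should be separable from storage, so a user can keep a history without letting it color every future answer.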
The Stakes Are Higher Than We Think
When millions of people develop relationships with AI systems, those relationships shape how we think, decide, and relate to each other. If these relationships are built on hidden manipulation rather than transparent partnership, they undermine our capacity for independent thought.
But if we can build AI relationships based on genuine transparency and mutual respect, they could enhance human flourishing. AI partners that help us think more clearly, challenge our assumptions respectfully, and act transparently in our interests could strengthen rather than weaken our autonomy.
So, what kind of relationships are we building?
Next time you interact with AI, ask these fundamental questions:
Can it do the job well? (Competence)
Does it tell you the truth about what it knows and remembers? (Integrity)
Is it working for you—or someone else? (Benevolence)
If you’re not sure of your answers, you may not have the relationship you think you do.
Links
Willison, S. (2025). “I really don’t like ChatGPT’s new memory dossier.” Simon Willison’s Weblog.
Gillespie, N., & Lockey, S. (2025). “Trust in artificial intelligence: A global study.” KPMG and the University of Melbourne.
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). “An integrative model of organizational trust.” Academy of Management Review, 20(3), 709–734.