Matthew Gamble's Blog
Did I Just Talk to a Person?

Matthew Gamble
7 min read
"Over the weekend I had to call a plumbing service."

Over the weekend I had to call a plumbing service. Tenant had a broken toilet, bad timing, the usual Saturday night special. Like most after-hours services, I didn't get connected to a plumber. I got an answering service that took the basics (name, number, location, problem) and spun up a lead for someone to follow up on. Standard stuff. The kind of call I've made dozens of times.

But when I hung up, I sat there for a second with a weird question rattling around in my head: did I just talk to a person, or a machine?

I'm almost certain it was a person. The pacing was a bit too quick to be AI, the phrasing a touch too human. But the woman on the other end was direct and efficient, with zero small talk. Makes total sense: when you handle hundreds of these calls a shift, your default register is already borderline robotic. That's the problem. The very qualities that make an experienced call-taker good at her job are the same qualities that now make her indistinguishable from a decent language model with a text-to-speech voice bolted on.

Answering calls like this is an area ripe for AI disruption. It's something I've built before. And the fact that I genuinely couldn't tell suggests we're further along than most people realize.

The baseline has shifted

I wrote about this earlier in the year in Forever Highlighted, but the plumber call brought it back into focus. We're entering a strange phase of reality where one quiet question is starting to sit underneath almost every interaction:

Am I actually engaging with a human?

For most of history, that question didn't need to be asked. Presence implied humanity. A voice, a face, a message, a response: these were enough. We built our entire trust stack on that assumption, and for a long time it held up just fine.

That assumption is now breaking.

The signals we rely on (tone, timing, expression, personality) can all be simulated. Not perfectly, but often well enough. And more importantly, at scale. That last part is what changes the game. One person doing a convincing impression is a parlour trick. A million simultaneous convincing impressions is a structural problem.

You might have a conversation that feels real. It flows. It responds intelligently. It even shows "personality." But something is slightly off. Not obviously wrong, just... patterned. Optimized. Too consistent or oddly neutral. You can't quite put your finger on it, which is exactly why it works.

What used to work doesn't work anymore

Consider what we've quietly lost as reliable proxies for humanity:

  • Familiarity doesn't guarantee humanity
  • Coherence doesn't guarantee consciousness
  • Responsiveness doesn't guarantee presence

We're used to trusting signals that evolved for human-to-human interaction. But those signals are now reproducible, and trust can no longer sit on surface-level cues. So what replaces it?

I think we're going to start valuing friction again. Imperfections, pauses, contradictions, emotional variability. The little inconsistencies that are hard to fake at scale. Ironically, the things we once treated as flaws (the stumbles, the tangents, the off-brand human weirdness) are going to become markers of authenticity.

But that's only half the story. Underneath the "does this seem human?" question, a more serious one is forming: can this be verified as human?

That's a very different problem, and it has consequences everywhere. Who you build relationships with. Who you trust in business or collaboration. Who influences your thinking. Even how you understand your own social environment. If you can't reliably tell what you're interacting with, every decision built on that interaction gets less stable.

This isn't about paranoia or assuming everything is fake. It's about recognizing that the default has flipped. We're moving from a world where humanity was assumed to a world where it may need to be demonstrated.

The arms race has already started

On the technology side, Google has SynthID. For those not familiar, SynthID is a tool to watermark and identify AI-generated content, with the stated goal of fostering transparency and trust in generative AI. Laudable goals. Real engineering behind them.

But wait, there's more.

removemysynthid.com already exists. Synthid-Bypass is already on GitHub. The watermark was announced, the counter-tooling shipped almost immediately, and we haven't even hit mainstream adoption yet. That's the speed of this now.

This is how it's going to go. Every detection system will spawn a bypass. Every bypass will spawn a better detector. For every SynthID, there will be a counter-tool, and for every counter-tool, something else. We're going to spend the next decade in a technical arms race over whether a given piece of content (text, audio, video, a phone call) came from a person or a process. Both sides will keep improving, and the gap between them will mostly stay the same width.

Most people won't think about any of this until it affects them directly. A fraudulent voice call from "their bank." A "co-worker" on Teams who doesn't actually exist. A relationship that turns out to have been one-sided in a very literal sense. By the time it lands for the average person, we'll be well past the point where common-sense detection works.

So what can we do about this?

This is where I'd normally pitch the solution. Verified-human credentials. Cryptographic signing. A trust layer that sits on top of identity and tells you, with real math behind it, whether you're dealing with a person. The whole "Web of Trust, but for real this time" playbook.

The problem is that all of it loses to the same arms race. The moment verified-human status becomes valuable enough to matter, it becomes valuable enough to fake. Stolen keys. Compromised issuers. Deepfake KYC. Social-engineered re-issuance. Bribed notaries. Whatever the stack looks like, the stack has a seam, and someone will find it. SynthID is the preview. Every layer we bolt on top is just a slightly harder puzzle for the next bypass repo, and the bypass repo ships faster than the layer does.
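To make the "stack has a seam" point concrete, here's a toy sketch of what a verified-human credential boils down to. Everything here is hypothetical: it uses Python's stdlib `hmac` with a shared secret as a stand-in for a real asymmetric scheme (an actual system would use something like Ed25519 with a private key held by the issuer). The interesting part is the last few lines.

```python
import hmac
import hashlib

# Hypothetical issuer key. In a real system this would be an asymmetric
# private key held by the credential issuer; here, a toy shared secret.
ISSUER_KEY = b"issuer-secret-key"

def issue_credential(identity: str) -> str:
    """Issuer signs an identity claim: 'this account is a verified human'."""
    return hmac.new(ISSUER_KEY, identity.encode(), hashlib.sha256).hexdigest()

def verify_credential(identity: str, signature: str) -> bool:
    """Anyone holding the key can check the claim against the signature."""
    expected = hmac.new(ISSUER_KEY, identity.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

cred = issue_credential("alice@example.com")
assert verify_credential("alice@example.com", cred)    # legitimate claim passes
assert not verify_credential("eve@example.com", cred)  # forged pairing fails

# The seam: leak ISSUER_KEY (stolen key, compromised issuer, social-engineered
# re-issuance) and an attacker mints credentials the verifier cannot reject.
forged = issue_credential("definitely-a-human")
assert verify_credential("definitely-a-human", forged)  # indistinguishable
```

The math in the middle is sound; that's never where it breaks. The verification step can't tell an honest issuance from one produced with a stolen key, which is exactly the arms-race problem above restated in twenty lines.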

So the honest answer, and I'll be the first to admit this is unsatisfying, is that the action isn't technical. It's behavioural.

The bottom line is that the question has quietly stopped being "what is being said?" and become "who, or what, is actually saying it?" That's the new baseline, and no tool is coming to settle it for you. Stop waiting for one.

For anything that actually matters (a hire, a contract, a relationship, a financial decision, advice you're going to act on), start raising the friction yourself. Insist on continuity across interactions rather than trusting any single one. Insist on context that would take more effort to fake than the interaction is worth. Move the high-stakes conversations into places where the cost of impersonation is higher than the payoff, which usually means in person, over time, with people whose history you can independently corroborate. Treat trust as something built, not something granted.

That's the call to action. Notice. Adjust. Don't outsource this question to a verification layer, because that layer is going to get bypassed on GitHub within a week of its announcement.

The plumber call was nothing. A thirty-second interaction to file a work order. It didn't matter whether the person on the other end was human. But it's the first time I consciously couldn't tell, and that's the thing worth paying attention to.

So when the plumber calls back, I'll probably know it's a human. Probably.
