What Does AI Actually Do to Us? The Questions We're Not Asking Enough

We keep talking about what AI can do. We're less honest about what it's already doing to our relationships, our sense of self, and the social fabric we've spent centuries building.

That's the conversation anthropologist Dr. Lollie Mancey has been pushing for. And frankly, it's the one that matters most right now.

We've Skipped the Hardest Part

The speed of AI development has been dizzying. But speed is also the problem. By the time academic research catches up with what's happening in the real world, the world has moved on. Policies are drafted, models are deployed, and the human consequences trail somewhere far behind.

Dr. Mancey's argument is straightforward: we've been treating AI as a technical challenge when it's fundamentally a social one. This isn't just about algorithms. It's about how people feel when they talk to a chatbot at 2am and, surprisingly, feel heard. It's about why Gen Z users increasingly turn to AI therapy tools, not because they're fooled, but because they feel less judged. It's about what that reveals about how little space we've made for vulnerability in human-to-human connection.

The machine isn't the problem. The emotional hunger it's feeding is what's worth examining.

The Mirror Problem

When AI generates an image of "an Australian woman," it frequently produces someone light-skinned, Western-presenting, and broadly Anglo in appearance. Australia is one of the most multicultural countries on earth. The gap between those two realities isn't accidental. It's a product of what data we train on, whose stories get told online, and whose don't.

This is what bias looks like in practice. Not a villain in a boardroom, but a system quietly reinforcing a narrow version of the world because that's what it learned from. Healthcare decisions, hiring tools, criminal justice applications: the same dynamic plays out, with significantly higher stakes.

Fixing it isn't a technical patch. It requires bringing different people into the room: anthropologists, ethicists, community members, people whose lives will actually be shaped by these systems.

Who's Actually Setting the Rules?

Here's something worth sitting with. A handful of technology companies are currently making decisions about ethics, values, and acceptable behaviour that, historically, were the domain of governments and democratic institutions.

Some are doing it thoughtfully. But the question isn't whether their intentions are good. It's whether corporate frameworks, however carefully designed, are a substitute for democratic accountability. They're not.

Different countries are taking very different approaches. China mandates that AI systems flag users in emotional distress. Europe leans heavily on individual rights and data protection. The US largely follows market logic. None of these approaches are complete, and without international coordination, we end up with a patchwork that serves nobody well.

The conversation about what kind of AI we want needs to involve more than developers and regulators. It needs citizens.

What We Owe Each Other on This

Dr. Mancey makes a case that feels almost old-fashioned in its simplicity: we need to slow down. Not stop. Slow down. Create intentional pauses in organisations, in policy cycles, in our own lives, to ask whether the direction we're heading is actually the one we chose.

AI literacy is part of that. Not everyone needs to understand transformer architecture, but everyone deserves to understand enough about how these systems work to make meaningful choices about them. That starts in schools, and it starts now.

The deeper shift, though, is cultural. As automation reshapes work, the question of human purpose becomes urgent and real. If our identity is tied to what we do for a living, we're going to need new answers. That's not a technology problem. That's a values problem, and it belongs to all of us.

The Honest Takeaway

AI is not coming. It's here, embedded in decisions that affect people's lives every day. The question isn't whether to engage with it. It's whether we engage consciously.

That means demanding transparency from the systems we use and the companies that build them. It means designing for diversity from the start, not as an afterthought. And it means being honest about the emotional and social needs that AI is currently, imperfectly and artificially, trying to fill.

We built these tools. We can decide what they're for.
