Why We Need AI Sociology — Now. Urgently.

TL;DR:
Alert: AI may erode our very ability to think.
When machines predict our emotions, write our messages, curate our playlists, and even tell us when to go for a walk — we must ask:
Are those thoughts and feelings truly our own?
The most catastrophic consequence of AI might not be surveillance or job loss.
It could be the collapse of human agency itself.

In 1935, Walter Benjamin famously argued in “The Work of Art in the Age of Mechanical Reproduction” that new technologies reshape the very structure of human perception. The rise of film, he wrote, stripped traditional theater of its unique “aura” — the irreplaceable presence of the actor, the “here and now” of the performance. Film actors were no longer just creators; they became viewers of their own work through editing and playback. Even more provocatively, Benjamin asserted that cinematic techniques like slow motion and close-ups were altering how humans experience and interpret the world.

Ninety years later, in 2025, we face a technological transformation Benjamin could never have imagined. The change is no longer just about film actors replacing theater actors — it’s about virtual humans replacing actors altogether. Music is increasingly generated by AI trained on massive datasets, reaching a quality level indistinguishable from human-made compositions. We now live in an era where art can be generated effortlessly and in large quantities, matching the quality of well-trained human creators.

Art, once a domain of contemplation and authenticity, has become a cheap, mass-produced commodity — 10-second viral shorts and algorithm-generated reels deliver shallow emotional thrills that demand little reflection, while virtual influencers attract subscribers who engage with avatars instead of real humans. In this context, the role of art shifts from meaningful reflection to manufactured sentiment.

Just as film restructured the perceptual habits of the early 20th century, today’s AI — especially large language models (LLMs) — is reconfiguring how we experience, perceive, and interact with the world. But this time, the shift is not only sensory or aesthetic; it reaches into the very fabric of our cognitive and emotional processes.

We must ask:
How is AI altering the way we experience reality?

Today’s AI systems — still largely controlled by for-profit corporations — are optimized not for collective enlightenment but for engagement (clicks and likes), retention, and profit (targeted ads, and revenue that grows the longer users stay in the product). As portrayed in the Black Mirror episode Common People, such technologies risk drawing us into subscription-based dependencies where existence itself becomes gated behind paywalls. The episode chillingly illustrates how a once-happy couple becomes enslaved to a neural cloud backup service: the wife’s consciousness can no longer function without the premium subscription tier. What starts as convenience turns into entrapment. We should heed the warning.

There is a growing fear of what we might call the vertigo of technological acceleration — the sensation that technology is moving so fast that there is no safe place to step off. We are trapped on a speeding train with no exits, anxious yet unable to stop.

Moreover, there’s an even more insidious risk:
AI may erode our very ability to think.

When AI predicts our needs and emotions, writes our messages, curates our playlists, and even recommends when we should go for a walk, we must ask: are those thoughts and feelings truly our own?

This concern ties directly to Benjamin’s idea of the loss of aura. Today, we face the loss of intellectual and emotional agency. Interfaces that adapt to our moods may start to shape those moods. The personalization becomes performative; the emotion, counterfeit.

As Susan Sontag warned, in a world oversaturated with kitsch, we are too often seduced by harmless, cheap, easily-accessible feelings — and we think too little. In a world dominated by AI, we risk becoming beings who no longer know how to reflect or question.

The most catastrophic consequence of AI might not be economic displacement or surveillance. It could be the loss of the human capacity for independent thought.

What can we do in the face of such a future?

Perhaps we must begin crafting a new kind of sociology, one for artificial intelligence. In Liu Cixin’s Three-Body trilogy, the character Luo Ji becomes the most feared “Wallfacer” because he deduces, from the two axioms of “cosmic sociology,” how to deter an alien civilization. If AI is our version of the Trisolarans — a fundamentally alien intelligence that we neither control nor fully understand — then our responsibility is to invent the intellectual frameworks that can help humanity prepare.

We need an AI Sociology, one that treats AI not merely as a tool but as an autonomous force with the potential to reshape civilization itself. This field must investigate not only the impacts of AI on jobs or privacy but also its influence on identity, perception, agency, and emotion.

I want to continue exploring and developing these ideas on AI sociology in my next write-up.