Ever since I was a kid, I’ve loved science fiction. I was obsessed with time travel and space exploration, but robots always had my heart. One Christmas, I was given a huge (well, huge for me at the time) R2-D2 that beeped and booped as I dragged it around the house. I loved them all, from the clunky, boxy machines to the eerily human-like androids.
Now, as artificial intelligence weaves itself into our daily lives, I find myself thinking about those fictional depictions a lot.
For decades, we’ve been fed stories of talking computers, sentient machines, and AI companions. And today, as people form relationships with chatbots and debate whether large language models are “conscious,” I wonder: have those stories helped us prepare, or have they distorted our expectations? Have they made us too trusting, too fearful – or simply too imaginative – for our own good?
Sci-fi as rehearsal for reality
Fiction has never been just entertainment. Stories bring communities together, serve as cautionary tales, and let us test out “what if” scenarios in safe spaces.
“The stories we tell ourselves about the world have always shaped both it and us,” says Beth Singler, Assistant Professor in Digital Religion(s) at the University of Zurich, who studies the social, ethical, philosophical, and religious implications of advances in AI and robotics.
Science fiction, in particular, often feels like a rehearsal: mental simulations for technologies that don’t exist yet. But how have those stories shaped the way we respond to real, present-day AI?
One major issue is how quickly we anthropomorphize AI. That’s not solely down to fiction – it’s a mix of factors – but decades of seeing robots cast as friends, colleagues, or even lovers have certainly nudged us in that direction.
I also spoke to author L. R. Lam, who writes speculative and science fiction, to hear directly from a creator. “So many depictions of sentient, conscious AI in fiction feed back into people thinking LLMs are more than they are,” she tells me.
Part of the problem is how easily we equate fluent communication with intelligence.
“To suit a story, an AI (whether an android, computer, or robot) must be able to communicate with other characters,” Singler explains. “That reflects and reinforces our existing bias towards communication as a sign of intelligence.”
That bias matters. Not just because some people might fall in love with chatbots – though that happens and is cause for concern – but more simply because everyday users may end up trusting AI’s answers more than they should.
“It risks people putting too much trust in what they spit out without independently verifying the information,” Lam warns.
And the cycle reinforces itself. Many AI personas are modelled on fictional tropes, sometimes even trained on them.
“People feel like they are encountering fully sentient non-human Others, the kinds of artificial beings that we’ve discussed in science fiction for a very long time,” Singler says. “They are also coming into contact with personas that have been designed in the light of those existing AI stories, even trained on them. It’s not surprising that they seem like the fulfilment of those expectations.”
Should sci-fi creators take more responsibility?
I wanted to know how storytellers themselves think about this.
“It’s been odd seeing how things I thought were maybe too much of a stretch in my near-future dystopia Goldilocks, which I wrote in 2019, are starting to come true,” Lam says.
She tells me she’s become more cautious about how she writes AI – and more aware of the stereotypes.
“In one of my works in progress, I am being very mindful about the AI tropes we’ve seen in fiction,” she explains. “There’s an interesting layer of gender roles: how male vs female AIs are coded in literature, or how AI is treated as something between or beyond gender. We sometimes see AI or robots as a temptress (Ex Machina, Her), and other times as an omniscient god.”
Lam also points out that the way AI is created in fiction can give us the wrong impression about power.
“We often show one inventor of an AI, when in reality science is so much more collaborative than that,” she says.
But while it’s important for creators to reflect on these tropes, Singler is wary of putting too much responsibility on fiction itself.
“Science fiction is not to blame for this, not any more than our existing spiritual ideas are to blame for people also finding god through Large Language Models,” she explains. “It is all part of our dual nature as both storytellers and the ones being told the stories.”
Stories, in other words, are part of being human. They guide us, inspire us, and sometimes mislead us. But the responsibility for how we respond lies with us.
The bigger risk may be that today’s tech builders aren’t reflecting on the stories they grew up with, or treating sci-fi as the cautionary material it often is. Too often, they chase the shiny promise of new technology while ignoring its deeper warnings about inequality and power. In doing so, we risk sleepwalking into the very dystopias those stories tried to warn us about.
The choice ahead
Fiction, then, isn’t prophecy. It’s a mirror, reflecting our hopes and fears through artificial beings. Whether that makes us reckless or wiser depends on how much we’re willing to question the stories we’ve been told and act accordingly.
In the end, we get to decide. “We are sold two versions: AI will save us, AI will damn us,” Lam says. “But in reality, it doesn’t need to do either unless we let it. And I think it’s more interesting if we save ourselves, don’t you?”