Here’s what you’ll learn when you read this story:
- It sounds fictional, but the "science fiction science," or "sci-fi-sci," method is meant to predict the effects of real upcoming technologies on society before things take a dystopian turn.
- Researchers are attempting to apply the scientific method to technologies that are still speculative or have not yet made it into the mainstream.
- Having people virtually interact with things such as autonomous cars and advanced forms of AI could reveal social and ethical implications ahead of time.
Social media. AI. Genetic engineering. Self-driving cars. Autonomous robots. What if hindsight were ahead of us, and we had at least some idea of the social, behavioral, and ethical implications of emerging technologies before they even existed?
If it sounds like science fiction, it sort of is. “Science fiction science” or “sci-fi-sci” is an idea put together by researchers Iyad Rahwan (from the Max Planck Institute for Human Development in Germany), Azim Shariff (from the University of British Columbia in Canada), and Jean-Francois Bonnefon (from the Toulouse School of Economics in France). They describe it as a new process that attempts to apply the scientific method to technologies that are either being planned or are in the early phases of development. Such predictions have been made in science fiction before, but outside of the genre, they have never been fully explored from a scientific perspective.
“Predicting the social and behavioral impact of future technologies, before they are achieved, would allow us to guide their development and regulation before these impacts get entrenched,” the researchers said in a study posted to the preprint server arXiv. “[We use] experimental methods to simulate future technologies, and collect quantitative measures of the attitudes and behaviors of participants assigned to controlled variations of the future.”
Rahwan, Shariff, and Bonnefon suggest that using the scientific method to predict the effects of technologies likely to surface in the near future (though what exactly "near future" means can be hazy) will make their potential effects clearer to developers, consumers, and policymakers. This unconventional approach has been met with skepticism, as experimental scientists understandably tend to question its validity. But the trio has pushed on. Using social media as an example, they suggest that running simulations of how the technology might operate, and having participants virtually interact with it, could have predicted the aftermath of its widespread use, from self-esteem problems to serious ethical concerns.
Predicting the effects of social media before everyone was constantly checking socials on their smartphones might have led to a more cautious outlook. The technology taken to extremes becomes a dystopian reality in the Black Mirror episode "Nosedive," where social media not only broadcasts people's lives but also assigns social scores that gauge their popularity and rank them among their peers. Technology much like the scoring system in that episode is now on the verge of being released into society.
Gage is an app that keeps track of how employees are rated by coworkers. Created by founder and CEO Justin Henshaw, it logs a "social credit score," counts the compliments and virtual high-fives a worker receives, and is meant to follow that worker from one job to another. YouTube creator Joshua Fluke criticized Gage as "an algorithmic reputation system" that could be extremely problematic. When employee evaluation relies more heavily on social scores than on the quality of actual work, entire groups could suffer. Neurodivergent people who do not communicate in the expected neurotypical way, for instance, could face difficulty being hired as a result of codified negative social feedback.
The researchers describe social credit systems on an even larger scale than Black Mirror's, and list them among other examples of what they call "nascent or speculative technologies" that could spark policy debates. Hypothetical systems that use AI to monitor every behavior in real time before publicly releasing social credit scores have generated so much controversy that the European Union is leaning toward preemptively banning them.
On the same list are autonomous vehicles, the process of screening embryos for desired traits, and ectogenesis (reminiscent of the artificial gestation in Aldous Huxley’s Brave New World).
“Studying the behavior of future humans interacting with future technology in a future social world raises unusual challenges for behavioral scientists, which call for unconventional methods,” the researchers said.
Will virtual reality experiments that introduce people to technologies that do not yet exist be able to accurately predict their impact on society? For now, that remains in the realm of science fiction.
Elizabeth Rayne is a creature who writes. Her work has appeared in Popular Mechanics, Ars Technica, SYFY WIRE, Space.com, Live Science, Den of Geek, Forbidden Futures and Collective Tales. She lurks right outside New York City with her parrot, Lestat. When not writing, she can be found drawing, playing the piano or shapeshifting.