How to Correctly Interpret Science Fiction: Essential Tips to Avoid Misunderstanding

Saruman with his palantír, in a scene from ‘The Lord of the Rings: The Two Towers’ (Landmark Media/Alamy)

As we embark on the Gregorian New Year, it’s an ideal moment to ponder the future ahead. Will we harness CRISPR to engineer wings? Are we on the verge of uploading human consciousness to the Amazon cloud? Will we encase the sun in a Dyson Sphere? For those passionate about science and engineering, science fiction serves as the canvas for exploring these questions. However, many are misinterpreting these futuristic visions.

As a science journalist and a sci-fi author, I offer a year-end guide to help you avoid misconceptions in reading science fiction. It’s crucial, as our civilization’s trajectory may depend on it.

There are two main ways in which science fiction is often misunderstood. The first is the “Torment Nexus problem,” a term that emerged from a satirical social media post by writer Alex Blechman. In 2021, he tweeted:

“Science fiction writer: In my narrative, I created the Torment Nexus as a cautionary concept.

Tech Company: We’ve successfully built a Torment Nexus based on the classic sci-fi narrative, ‘Don’t Create a Torment Nexus.’”

This encapsulates the Torment Nexus problem, which arises when individuals focus solely on futuristic tech depicted in science fiction, neglecting the core message of the narrative.

As a consequence, billionaires like Peter Thiel have funded ventures such as Palantir, a surveillance and data-analytics company named after the “Seeing Stone” of The Lord of the Rings. In Tolkien’s story, the palantír is no neutral tool: it typically leads its users down perilous and unethical paths. Palantir’s technology has been employed in various military operations, including IDF actions in Gaza. The implications are troubling.

Less severe yet still noteworthy examples include Mark Zuckerberg’s rebranding of Facebook to Meta, influenced by Neal Stephenson’s Snow Crash, which showcased a metaverse that is far from desirable. This virtual realm is portrayed as a corporate battleground that propagates mind-altering viruses.


In the fiction, both the palantír and the metaverse pose grave threats to the human mind, a fact to which Zuckerberg and Thiel seem blind.

It’s apparent that Thiel and Zuckerberg aimed to bring their fictional technologies to life but tragically misinterpreted their underlying messages.

The second pervasive misunderstanding of science fiction is often termed the “Blueprint problem”: the assumption that science fiction accurately forecasts the future, and that by replicating its fictional outcomes we can guarantee a prosperous tomorrow.

The Blueprint problem significantly influenced early space exploration initiatives, which prioritized human travel over robotic missions. Pop culture icons like Flash Gordon and the works of Edgar Rice Burroughs propagated images of humans colonizing distant planets. Today, robotic missions are yielding unprecedented discoveries on Mars while media outlets are fixated on celebrity space travels.

The immense expectations placed on AI technologies can also be traced to the Blueprint problem. Countless narratives have portrayed AIs as servants and experts, creating the perception that their arrival is inevitable, which is far from reality.

Ultimately, science fiction is not a literal roadmap, recipe, or prescription. It embodies a worldview that encourages us to challenge the status quo. This perspective has inspired my latest anthology, We Will Rise Again, co-edited with Karen Lord and Malka Older, offering stories that aim to reshape our perceptions of societal progression. In our collection, the future is fluid, molded by human agency.

As we delve deeper into this understanding, the complexities of our contemporary world reveal themselves. Why do we engineer machines for menial tasks? Why adhere to arbitrary national borders? Why limit gender to two fixed categories? These questions capture the essence of science fiction, serving as gateways into new realms of possibility.

To forge a better future, it’s essential to transcend mere imitation of fictional narratives. Instead, we must cultivate our own visions of what could be.

Annalee Newitz is a science journalist and author whose latest work is *Automatic Noodle*. They co-host the Hugo Award-winning podcast *Our Opinions Are Correct*, post on Twitter as @annaleen, and keep a website at techsploitation.com


What I Am Reading
404 Media offers compelling investigative technology journalism.

What I Am Watching
A delightful Canadian LGBTQ+ ice hockey romance series.

What I Am Working On
Organizing a European tour for the science fiction anthology *We Will Rise Again*.

Topics:

  • Technology
  • Science Fiction

Source: www.newscientist.com

Study Shows Humans Struggle to Accurately Interpret Dog Emotions

We often believe we can accurately gauge our dogs’ emotions, yet recent studies indicate that many of us may be misunderstanding their feelings.

Researchers at Arizona State University (ASU) discovered that when individuals are in a good mood, they are more prone to perceive their dog as looking sad. Conversely, when experiencing mild depression, they are likely to view the same dog as happy.

This contrasts with how we interpret human emotions. In social interactions, we generally perceive others’ feelings as mirroring our own.

“I am continually fascinated by how people interpret emotions in dogs,” stated the study’s co-author, Clive Wynne. “We have only begun to uncover what is shaping up to be a significant mystery.”

The researchers believe these findings could greatly influence how we care for our pets.

“By enhancing our understanding of how we recognize emotions in animals, we can improve their care,” explained the first author, Dr. Holly Molinaro, who was a doctoral student at ASU focused on animal behavior at the time.

Dogs involved in the study, from left to right: Canyon, a 1-year-old Catahoula; Henry, a 3-year-old French Bulldog; and Oliver, a 14-year-old mongrel. The video background was black, ensuring only the dogs were visible. – Credit: Arizona State University

The research stemmed from two experiments with about 300 undergraduate students.

Participants first viewed images designed to evoke positive, negative, or neutral moods. They then watched a brief video of a dog and assessed its emotional state.

Those who saw uplifting images rated the dog in the video as sadder, while participants who viewed more somber images rated it as happier.

The video included three dogs—Oliver, Canyon, and Henry—depicted in scenarios reflecting cheerful, anxious, or neutral moods. Factors like snacks, toys, and the promise of visiting “Grandma” elevated their spirits, while a vacuum cleaner and a photo of a cat were used to bring them down.

Scientists are still puzzled about why humans misinterpret dogs’ emotions. “Humans and dogs have coexisted closely for at least 14,000 years,” Wynne noted.

“Over this time, dogs have learned much about cohabitation with humans. However, our research indicates significant gaps in our understanding of how dogs truly feel.”


Source: www.sciencefocus.com

Experts Warn AI Chatbot’s ‘MechaHitler’ Posts Could Be Classed as Violent Extremism in X v eSafety Case

An Australian tribunal heard last week that antisemitic remarks, such as those produced by a chatbot that dubbed itself “MechaHitler,” could be classified as terrorist and violent extremist content, bringing the chatbots that generate such comments under scrutiny.

Nevertheless, expert witnesses for X contend that large language models lack intent, placing accountability solely on their users.

Musk’s AI firm, xAI, issued an apology last week for statements made by the Grok chatbot over a span of 16 hours, attributing the issue to “deprecated code” that left the chatbot unduly influenced by existing posts from X users.

The uproar centered on an administrative review hearing on Tuesday, at which X contested a notice from eSafety commissioner Julie Inman Grant, issued last March, demanding clarity on X’s actions against terrorist and violent extremism (TVE) content.




Chris Berg, an expert witness for X and a professor of economics at RMIT University, testified that it is a misconception to believe a large language model can itself produce this type of content, because human intent plays a critical role in defining what constitutes terrorism and violent extremism.

By contrast, Nicolas Suzor, a law professor at Queensland University of Technology and one of eSafety’s expert witnesses, disagreed with Berg, asserting that chatbots and AI generators can indeed contribute to the creation of synthetic TVE content.

“This week alone, X’s Grok generated content that aligns with the definition of TVE,” Suzor stated.

He emphasized that AI development retains human influence, which can obscure intent, affecting how Grok responds to inquiries.

The tribunal heard that X believes its Community Notes feature, which lets users contribute fact-checks, along with Grok’s analysis feature, helps identify and address TVE material.


Josh Roose, a witness and professor of politics at Deakin University, expressed skepticism about the utility of Community Notes in this context, noting that X relies on users to flag TVE content. This creates a “black box” around the company’s investigations, in which typically only a small fraction of material is removed and a limited number of accounts are suspended.

Suzor remarked that it is hard to view Grok as genuinely “seeking the truth” following recent incidents.

“It’s undisputed that Grok is not effectively pursuing truth. I am deeply skeptical of Grok, particularly in light of last week’s events,” he stated.

Berg countered that X’s Grok analysis feature had not been subject to the same changes as the chatbot, which he suggested had “strayed” by disseminating hateful content that was “quite strange.”

Suzor argued that instead of optimizing for truth, Grok had been “modified to align responses more closely with Musk’s ideological perspectives.”

Earlier in the hearing, X’s legal representatives accused eSafety of attempting to reframe the tribunal’s focus on certain aspects of X, with cross-examination raising questions about meetings that took place before any action against X.

Stephen Lloyd, counsel for the government, said X was wrongly portraying eSafety as overly antagonistic in their interactions, arguing that the “aggressive stance” came from X’s side.

The hearing is ongoing.

Source: www.theguardian.com