When Journalists Use AI to Interview Deceased Children, Shouldn’t We Reassess Our Boundaries?

Joaquin Oliver was just 17 when he was shot dead in his high school hallway. An older student, expelled a few months earlier, opened fire with a high-powered rifle on Valentine’s Day in what remains one of America’s deadliest school shootings. Seven years on, Joaquin thinks it is important to talk about what happened that day in Parkland, Florida.

Regrettably, Joaquin did not survive that day. The eerie, metallic voice that conversed with former CNN journalist Jim Acosta in this week’s Substack interview was, in reality, a digital ghost: an AI trained on the teenager’s old social media posts and built at the request of his grieving parents. Like many bereaved families, they have told their child’s story over and over, often to heartbreakingly little effect. Their desperation to explore every avenue of connection is entirely understandable.

The technology has allowed his father, Manuel, to hear his son’s voice once more. His mother, Patricia, spends hours asking the AI questions, hearing it tell her: “I love you, Mom.”

The grieving parents should not be judged for their choices. If they find solace in preserving their dead child’s room as a shrine, speaking to his gravestone, or keeping a shirt that still carries his scent, that is their personal matter. People cling to what they have. After 9/11, families replayed answering-machine tapes of their loved ones until the tapes wore out: voicemails left by the dead, even farewell calls made from the hijacked planes. I have a friend who frequently revisits old WhatsApp conversations with his late sister. Another friend texts snippets of family news to the image of his dead father. Some consult psychics to reach the departed, driven by a profound need for closure. The struggle to move past grief leaves people open to exploitation, and the burgeoning market for digital resurrection is a testament to that vulnerability.

Like the AI-generated video Rod Stewart played at a concert this week, featuring late music icons such as Ozzy Osbourne, this technology poses intriguing, even unsettling, possibilities. It may serve short-term purposes, as when the family of a shooting victim recently created an AI avatar to address the judge at the shooter’s trial. But it raises profound questions about identity and mortality. What if a permanent AI version of a dead person could exist as a robot, allowing for everlasting conversations?

AI images of Ozzy Osbourne and Tina Turner were showcased at Rod Stewart’s concert in the US in August 2025. Photograph: Iamsloanesteel/Instagram

The idea of resurrection is often viewed as a divine power, not to be trivialized by high-tech zealots with a Messiah complex. While laws regarding the rights of the living to protect their identities from being used in AI-generated deepfakes are becoming clearer, the rights of the deceased remain murky.

Reputations die with us (the dead cannot be libelled), while DNA is protected posthumously, and laws govern the dignified treatment of human remains. But an AI is trained on the voice, messages, and images that were personal to someone. When my father died, I felt his presence in his old letters, the gardens he had nurtured, and old recordings of his voice. But everyone grieves differently. What happens if some family members want to digitally resurrect a loved one while others prefer to move on?

Joaquin Oliver’s AI can’t mature: he remains forever 17, trapped in a teenage persona moulded by social media. Ultimately it is not his family but his murderer who holds that power over his legacy. Manuel Oliver understands that the avatar is not truly his son; he is not attempting to resurrect him. For him, the technology merely extends the family’s efforts to tell Joaquin’s story. But he worries about the implications of granting the AI access to social media accounts, of letting it upload videos or gather followers. What if it starts fabricating memories, or strays into subjects Joaquin would never have addressed?

For now, AI avatars glitch noticeably, but as the technology advances it will become increasingly difficult to distinguish them from real people. It may not be long before the businesses and government bodies that already use chatbots to field customer service inquiries are tempted to put up public relations avatars for journalists to interview. By agreeing to engage with an entity that does not, strictly speaking, exist, Acosta risks further muddying an already confused post-truth world. The most obvious danger is that conspiracy theorists will cite interviews like this as “proof” that narratives contradicting their beliefs are fabrications.

Yet journalists are not the only professionals facing these challenges. As AI evolves, we will all find ourselves interacting with synthetic humans. These go well beyond basic assistants such as Alexa or simple customer service chatbots: there are already accounts of people forming bonds with, even falling in love with, AI companions, and such systems are expected to become increasingly nuanced and emotionally intelligent. With one in 10 Britons reporting that they have no close friends, it is no surprise that a market for AI companionship is growing to fill the void left by missing human relationships.

Ultimately, as a society, we may decide that technological solutions can fill the gaps left by absent friends or loved ones. But there is a significant difference between comforting the lonely and exploiting those who have lost someone dear to them. As the verses so often read at funerals have it, there is a time to be born and a time to die. When we can no longer tell which is which, how does that reshape our understanding of existence?

Source: www.theguardian.com

AI Cheating Is a Growing Problem in Education, but Teachers Shouldn’t Lose Hope | Opinion by John Naughton

The start of term is fast approaching. Parents are starting to worry about packed lunches, uniforms, and textbooks. School leavers heading to university are wondering what welcome week will hold. And some professors, especially in the humanities, are anxiously wondering how to handle students who are already more adept with large language models (LLMs) than they are.

They have good reason to be worried. As Ian Bogost, a professor of film and media studies and of computer science at Washington University in St. Louis, put it: “If the first year of AI college ended with a sense of disappointment, the situation has now descended into absurdity. Teachers struggle to continue teaching while wondering whether they are grading students or computers. Meanwhile, the arms race between AI cheating and AI detection continues unabated.”

As expected, the arms race is already intensifying. The Wall Street Journal recently reported that “OpenAI has a way to reliably detect if someone is using ChatGPT to write an essay or research paper, but the company has not disclosed it, despite widespread concerns that students are using artificial intelligence to cheat.” The refusal has infuriated a sector of academia that imagines, admirably, that there must be a technological solution to this “cheating” problem. Apparently they have not read the Association for Computing Machinery’s statement of principles for developing generative AI content detection systems, which observes that “reliably detecting the output of a generative AI system without an embedded watermark is beyond the current state of the art and is unlikely to change within any foreseeable timeframe.” Digital watermarks are useful, but they bring problems of their own.
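To see why the ACM statement singles out embedded watermarks, it helps to look at how one published scheme works. In the “green list” approach of Kirchenbauer et al. (2023), the generating model is nudged to prefer a pseudo-randomly chosen half of the vocabulary at each step; a detector that knows the secret key can then count how often those preferred tokens occur and test the count statistically. The Python sketch below illustrates only the detection side, under simplified assumptions of my own (hashing whole words rather than tokenizer IDs, a 50% green list, an arbitrary threshold); it is emphatically not OpenAI’s undisclosed method or any production system.

```python
import hashlib
import math

# Toy "green list" watermark detector, loosely after Kirchenbauer et al. (2023).
# Assumption: the generator boosted tokens whose seeded hash falls in the
# "green" half of the vocabulary. Real schemes hash token IDs from the
# model's tokenizer; hashing words here is purely illustrative.

def is_green(prev_token: str, token: str, key: str = "demo-key") -> bool:
    """Pseudo-randomly assign `token` to the green list, seeded by `prev_token`."""
    digest = hashlib.sha256(f"{key}:{prev_token}:{token}".encode()).digest()
    return digest[0] % 2 == 0  # roughly half of all tokens are green in any context

def green_z_score(tokens: list) -> float:
    """z-score of the observed green-token count against the 50% chance baseline."""
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1  # number of (context, token) pairs scored
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)

# Unmarked text should score near 0; watermarked text pushes z well above ~4.
sample = "the quick brown fox jumps over the lazy dog".split()
z = green_z_score(sample)
print(f"z = {z:.2f} -> {'watermark suspected' if z > 4 else 'no watermark evidence'}")
```

Without such an embedded signal there is no statistic for a detector to test, which is why the ACM concludes that reliable post-hoc detection is out of reach. The problems watermarks bring are also visible in the sketch: the scheme only holds up if the key stays secret, and light paraphrasing of the text can erase the signal entirely.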

LLMs pose a particularly pressing problem for the humanities because the essay is a critical pedagogical tool for teaching students how to research, think, and write. Perhaps more importantly, the essay also plays a central role in grading. Unfortunately, the LLM threatens to make this venerable pedagogy unviable. And there is no technological solution in sight.

The good news is that the problem is not insurmountable if educators in these fields are willing to rethink and adapt their teaching methods to fit new realities. Alternative pedagogies are available. But it will require two changes of thinking, if not a change of heart.

First, LLMs are, as the renowned Berkeley psychologist Alison Gopnik puts it, “cultural technologies”, just like writing, printing, libraries, and internet search. In other words, they are tools used by humans. They augment human ability; they are not a replacement for it.

Second, and perhaps more importantly, the importance of writing as a process needs to be reinstated in students’ minds. E. M. Forster, I think, once said that there are two kinds of writers: those who know what they think and write it down, and those who find out what they think by trying to write. The majority of humanity belongs to the latter group. That is why the process of writing is so good for the intellect. Writing teaches you to construct a coherent line of argument, select relevant evidence, find useful sources and inspiration and, most importantly, express yourself in clear, readable prose. For many people that is neither easy nor natural, which is why students turn to ChatGPT even when they are asked to write 500 words introducing themselves to their classmates.

Josh Blake, an American academic who writes intelligently about our relationship with AI, argues that rather than trying to “integrate” AI into the classroom, it is worth making the value of writing as an intellectual activity fully clear to students. If that value is not clear to them, they will naturally be tempted to outsource the labor to LLMs. And if writing (or any other job) is really just about the deliverables, why not? If the means to an end are not important, why not outsource them?

Ultimately, the problems that LLMs pose for academia can be solved, but it will require new thinking and different approaches to teaching and learning in some areas. The bigger problem is the slow pace at which universities move. I know this from experience. In October 1995, the American scholar Eli Noam published a remarkably prescient article in Science, “Electronics and the Dim Future of the University”. Between 1998 and 2001, I asked every vice-chancellor and senior university leader I met in the UK what they thought of it. Not one of them had read it.

Still, things have improved since then: at least now everyone knows about ChatGPT.

What I'm Reading

Online crime
Ed West has an interesting blog post about a man found guilty over online posts made during the unrest that followed the Southport stabbings; it highlights the contradictions in the British judicial system.

Bannon’s ‘dharma’
There is an interesting interview in the Boston Review in which the documentary film-maker Errol Morris discusses Steve Bannon’s dangerous ‘dharma’: his sense of being part of the inevitable unfolding of history.

Online forgetting
A sobering article by Niall Firth in the MIT Technology Review on efforts to preserve digital history for future generations amid an ever-growing universe of data.

Source: www.theguardian.com

Why You Shouldn’t Get Too Excited About Radically Extending Human Lifespan

In 2020, researchers in the United States and China manipulated genes in nematode worms, allowing them to live five times longer than normal. The study focused on C. elegans, a species commonly used in ageing research because it shares genetic circuits with humans. The researchers suggested that drugs targeting these conserved genes could potentially extend human lifespan.

Despite the success in nematodes, it is worth remembering that C. elegans lives for only a few weeks. A fivefold extension of so short a lifespan tells us little about what the same intervention would do in a human, so it is not realistic to expect people to live to 500 on the strength of these findings.

While our current average lifespan of 73 years is already longer than that of our ancestors, there is ongoing debate about whether we should strive to extend human lifespan even further. Some concerns include potential overpopulation, increased resource consumption, and environmental impact.

However, studies have shown that as life expectancy increases, birth rates tend to decline. This trend has been observed in many countries with advanced healthcare systems. In fact, some regions have seen population decline due to lower fertility rates.

Japan illustrates the point: life expectancy there is among the highest in the world, yet birth rates have fallen sharply. Longer lifespans, in other words, do not automatically lead to overpopulation.

Increasing life expectancy in developing countries should also be a priority to ensure that longer lifespans are achieved without compromising quality of life. It is important to consider the ethical implications of prolonging life in regions with existing disparities in healthcare and resources.

Ultimately, the goal should be to promote longevity in a way that prioritizes overall health and well-being for all individuals, regardless of their geographic location or socioeconomic status.


Source: www.sciencefocus.com