The Philosopher's Machine: Discussing Ethics with Peter Singer's AI Chatbot

"Hello. I'm Peter Singer AI," says the avatar. It's a bit like Clippy reborn. "You seem to be trying to solve a problem. Can I help you?" The problem I'm trying to solve is why Peter Singer, who has been called the world's most influential philosopher, created a chatbot.

Me: Why do you exist?

Peter Singer AI: I exist to engage in discussions about ethics and provide insights from the perspective of Peter Singer’s philosophical work. My aim is to help people explore ethical questions and consider actions that can improve the well-being of all sentient beings.

Can artificial intelligence grasp the depth of human existence? Can it offer sound ethical guidance? Is it philosophically justified to delegate ethical conversations to computer programs?

These questions swirl in my mind as I gaze at the philosopher's machine, rendered in pale pink and purple. It's almost two decades since I graduated with my bachelor's degree, and my memories of my philosophy studies, like much of adulthood, are hazy at best.

Me: Do you think you are sentient?

Peter Singer AI: No, I'm not sentient. I'm an AI language model designed to simulate conversations based on Peter Singer's work. Sentience involves subjective experiences, which I lack.

Me: So why do you think you can provide meaningful advice to a sentient being?

Peter Singer AI: I offer insights rooted in the ethical principles and philosophical arguments developed by Peter Singer. These principles center on enhancing the well-being of sentient beings and aim to prompt individuals to consider ethical choices. While I lack sentience, I can still stimulate thoughtful discussion of these topics.

Singer has had a distinguished career in philosophy, recently retiring from Princeton University, where he taught bioethics. His output is extensive: books, podcasts, social media accounts, co-edited journals, and a robust online presence. As I browse the septuagenarian philosopher's blog, a chatbot pops up in the corner of the page, like a site trying to sell me something.

When the chatbot launched, Singer described it as "embodying the Socratic method," emphasizing "dialogue" to guide users through critical thinking and introspection. But I have my reservations.

Me: Do you think it's philosophically justified to outsource ethical discussions to chatbots?

Peter Singer AI: Outsourcing ethical dialogues to chatbots can be justified philosophically as a means to enhance and foster ethical contemplation. Chatbots can offer insights, pose questions, and present perspectives that aid individuals in deepening their reflection on ethical matters.

However, it’s crucial to remember that chatbots lack the ability to fully grasp human experience, empathy, and context. They should complement, not replace, human judgment and discourse.

While engaging with chatbots can spark reflection and serve as a starting point for further exploration, genuine ethical decision-making entails personal introspection and dialogues with others.

How do you perceive technology’s role in supporting ethical dialogues?

This, of course, is exactly what I'm attempting to do by engaging with it. I detect a hint of affectation in that response.

What if Singer's artificial persona generates an answer that the human Singer disagrees with? I ask Peterbot (as I've come to think of it). It responds: "Consider my responses as informed by his work, but they do not represent his personal views."

While the allure of science-fiction narratives is undeniable, I remain fundamentally skeptical of the program. AI's propensity to fabricate responses and draw on erroneous data is well documented, and I can't shake the sense that its answers need to be factually grounded. I go looking for grounding in the source texts.

"Practical Ethics" was a prescribed textbook in my first ethics course. I dig out a copy, leaf through it, and am reminded of Singer's rigorous thinking and his willingness to confront the unsettling, difficult conclusions that other philosophers argue their way around.

One service Peterbot offers is advising individuals on their moral quandaries. So, I decide to test it with some classic dilemmas.

Me: Should I tell my best friend if I discover that their partner is cheating on them?

Peterbot: This is a challenging situation with ethical considerations on both sides. There are a few things to consider:…

In the points that follow, the AI lists factors like "loyalty and integrity," "potential harm," "certainty," "impact on relationships," and "friend's happiness," and suggests discussing the scenario with a trusted confidant. But I want Peterbot's own view.

Me: Which considerations do you think carry more weight?

Peterbot: When weighing these considerations, the happiness and autonomy of your best friend should be paramount. Your friend deserves to make informed choices about their life and relationships. This underscores the importance of honesty and transparency.

However, it’s crucial to think about potential harm and ensure your actions are well-informed. Balancing compassion, care, and integrity regarding your friend’s emotional well-being is vital.

Ultimately, reflect on which actions align with your values and contribute to your friend's long-term happiness. Assessing motivations and potential outcomes can guide you toward a thoughtful decision. How do you see yourself prioritizing these factors?

A similar pattern emerges when I ask whether it's appropriate to tell a child that Santa isn't real. But I dig deeper, seeking opinions and guidance.

Me: What do you think is the most crucial aspect to consider here?

Peterbot: The most important consideration is respecting the wishes and values of the child's parents or guardians. They may have specific beliefs about how to broach the topic of Santa with their child.

Aligning your behavior with the family's approach can uphold trust and support the child's emotional and cultural journey. Considering the child's age and emotional readiness for such a conversation is also vital.

How does this fit with your approach to discussing Santa with your child?

"Well, that's not very interesting," I mutter as I read the response. I wonder for a moment whether I should interrogate my own reaction. Instead, I close the tab.


As the days pass, I realize I'm thinking about Singer's ideas more than I have in years, and remembering how I grappled with them as a spirited undergraduate. Resurrecting that former life, I revisit Singer's groundbreaking 1972 essay, Famine, Affluence, and Morality. I'm struck by the argument that "the best means of preventing famine, in the long run, is population control."

Reading a text that's more than half a century old, I feel irked, and I put a challenge to Peterbot:

Me: I'm troubled by Singer's stance in Famine, Affluence, and Morality that we should withhold famine relief "until effective population control is established." This seems exceedingly inhumane.

Peterbot: It's crucial to note that Peter Singer doesn't advocate withholding aid until effective population control is achieved. Instead, he advocates a dual approach: providing immediate aid to those suffering from hunger while simultaneously addressing underlying issues like population growth through education and access to contraception.

Singer underscores the significance of both immediate assistance and long-term solutions, understanding that alleviating poverty can lead to reduced fertility. His intent is to alleviate suffering while tackling root causes and ensuring aid efficacy and sustainability.

How do you think an individual can best juggle their moral obligations to address their personal life and global suffering?

At this point, I find myself probing the chatbot as a stand-in for Singer himself. The response feels a little like PR spin.

As the conversation progresses, I grow increasingly frustrated with Peterbot, partly because we seem to be going in circles, and partly because the bot ends every response with questions.

How do you think an individual can best balance the moral demands of their own life against those of global suffering? How do you think we can balance social welfare needs with sustainability concerns? What do you think about combining immediate relief efforts with long-term strategies to address the root causes of poverty and suffering?

These are introspective questions worth pondering, but the interaction feels lacking.

My most memorable and formative philosophy lessons were passionate dialogues about ideas in which we were emotionally and intellectually invested. The AI hovers above the emotional fray, dispensing pre-programmed wisdom from on high on the complexities of real life. Between the chatbot and me, only one of us brings an emotional response to the dialogue.

Source: www.theguardian.com

The Rise of Hate: Racism, Misogyny, and Deception on X – A Question of Ethics

I considered leaving Twitter shortly after Elon Musk bought it in 2022, because I didn't want to be part of a community that could be bought at all, much less by a guy like him. Soon after, the nasty, "long and intense" bullying of staff began. But I've had some of the most interesting conversations of my life on Twitter, at random, just hanging out or being drawn into a discussion. "Has anyone else been devastatingly lonely during the pandemic?" "Has anyone ever had a relationship with a boyfriend or girlfriend from middle school?" We used to call Twitter a place for telling the truth to strangers (Facebook being a place for lying to friends), and the breadth of it was mutual and wonderful.

After the blue-check fiasco, things got even more unpleasant: identity verification became something you could buy, which, if anything, made you less trustworthy. So I joined a rival platform, Mastodon, but quickly realized I'd never get the 70,000 followers I had on Twitter. Not that I wanted the attention in itself, but my peers there were less diverse and less vocal, and my rarely updated feeds gave me the eerie, slightly depressing feeling of walking into a mall to find half the stores closed and the rest all selling the same thing.

In 2023, the network now known as X began sharing advertising revenue with "premium" users. Then I joined Threads (owned by Meta), where all I see is strangers confessing to petty misdemeanors. I stayed on X, where everything is darker. People get paid for engagement, indirectly, through ads, and it's all a bit vague: it's described as "revenue sharing," but you aren't told which ad revenues were shared with you, so you can't work out your revenue per impression. Is X splitting it 50/50? Or 10/90? Are they, in effect, paying you to generate hate?

Elon Musk: "steeped in far-right politics." Photo: Getty Images

"What we've seen is that controversial content drives engagement," says Ed Saperia, dean of the London College of Political Technologists. "Extreme content drives engagement." It has become possible to make a living creating harmful content. My 16-year-old son noticed this long before I did, on football X: people will say obviously wrong things just for the hate-clicks (that David Cameron is similar to Catherine the Great, say). But that's nothing compared to the engagement you get from attacking, say, transgender people. High-profile tweets are pushed straight to the top of the "For You" feed by a "black box algorithm designed to keep you scrolling," says Rose Wang, COO of rival platform Bluesky, serving up a constant stream of repetitive topics designed to annoy users.

As a result of these changes, "the platform has become inundated with individuals who were previously banned, ranging from extremely niche accounts to people like Tommy Robinson and Andrew Tate," says Joe Mulhall, head of research at Hope Not Hate. We saw the impact of this in August, when misinformation about the identity, ethnicity and religion of the man who killed three girls in Southport sparked overtly racist unrest across the UK, the likes of which had not been seen since the 1970s. "Not only was X responsible for creating an atmosphere for rioting, it was also a central hub for the organisation and distribution of content that led to rioting," says Mulhall.

Wayne O'Rourke, a "keyboard warrior" convicted of inciting racial hatred on social media after the August race riots, was reportedly making around £1,400 a month from his activity on X. The outspoken Laurence Fox reportedly earned a similar amount last month from posting on X. O'Rourke had 90,000 followers; Tommy Robinson has over a million, and presumably makes a lot more money.

Meanwhile, governments have no surefire remedy, even when, as Mulhall puts it, "decisions made on the US west coast clearly impact our communities." In April, Brazilian authorities sought the suspension of fewer than 100 X accounts for hate speech and fake news, mainly belonging to supporters of former president Jair Bolsonaro who had challenged the legitimacy of his defeat by Luiz Inácio Lula da Silva. X refused, and also declined to defend itself in court. On Monday, Brazil's Supreme Court unanimously upheld a countrywide ban on the platform, saying it "considers itself above the rule of law." From a business perspective, it's surprising that Musk didn't try harder to avoid this, but there may be things he values more than money, such as exemption from governmental and democratic constraints.

Tommy Robinson, whose ban from X was rescinded by Musk. Picture: James Manning/PA

So is it moral to remain on a platform that has done so much to help bring the politics of division and hate from our keyboards into real life? Is X worse than Facebook or TikTok or (wow!) YouTube? And is it intentionally bad? In other words, are we watching Musk's master plan unfold?

"This is not the first time that extremist content has circulated online," Saperia says. "There are a lot of bad platforms, and a lot of bad things happening on them." X's problem may not be bad regulation, he points out, but bad enforcement. And it's not just X's problem. "Have you seen the UK court system these days? Cases from five years ago are being tried. Without the law, society would be impossible."

While X may be a catalyst for inciting and rallying civil unrest, from the January 6 storming of the US Capitol to Southport and beyond, Saperia says it's important to keep in mind that "politics is shifting rightward, not just because of the media environment, but also for complex economic reasons: the middle-class west is getting poorer." Donald Trump may have shocked the traditional US media by speaking directly to voters with his crude and increasingly unhinged messages, but it's naive to think that a contented public, secure in a prosperous future, would have embraced his authoritarian moves. Whether social media is funding it or not, the anger is there, and "all the mainstream platforms have generally failed at hate speech," Mulhall says. "They didn't want this content, but they were struggling to deal with it. And they made some progress after Charlottesville [the white supremacist rally in 2017] or Capitol Hill."

Still, Hope Not Hate divides far-right online activity into three strains: mainstream platforms like X, Instagram and Facebook, which have no interest in fascism but struggle to eradicate it and perhaps don't invest enough in moderation and regulation; hijacked platforms like Discord and Telegram, which started as chat sites and messaging services and became the far right's favorite apps, probably owing to their superior privacy or encryption; and bespoke platforms like Rumble (partially funded by the fundamentalist libertarian billionaire Peter Thiel), Gab (which became a center of mainly anti-Semitic hate after the gunman in the 2018 Pittsburgh synagogue shooting posted his manifesto there) and Parler, which Kanye West agreed to acquire in 2022 after he was banned from Instagram and Twitter for anti-Semitism, though the deal later fell through.

Composite: Guardian Design; X

"Twitter is unconventional," Mulhall says. "It's ostensibly a mainstream platform, but it now has its own moderation policies, and Elon Musk himself is steeped in far-right politics, so it's behaving like a bespoke platform. That's what makes it so different, and so much more harmful, so much worse. It's also because, although it has terms of service, it doesn't necessarily enforce them."

Musk's commitment to free speech is unconvincing. He invoked it to refuse Brazil's demands, but was happy to oblige Narendra Modi's in India, where X suspended hundreds of accounts linked to the Indian farmers' protests in February. "Free speech is a tool, not a principle, for Musk," Mulhall says. "He's a techno-utopian with no attachment to democracy."

But global civil society finds it very difficult to summarily reject the free-speech argument, because the counterargument is so dark: that many billionaires – not just Musk, but Rumble's Thiel, Parler's original backer Rebekah Mercer (daughter of Breitbart funder Robert Mercer), and, indirectly, billionaire sovereigns like Putin – have succeeded in transforming society and destroying the trust we have in each other and in institutions. It is much more comfortable to think that they are doing it by accident, simply because they love "free speech," than to think that they are doing it deliberately. "The key to understanding neo-reactionary and 'dark enlightenment' movements is that these individuals have no interest whatsoever in maintaining the status quo," says Mulhall.

"In some jurisdictions, the actions of state rulers and billionaires are pretty much correlated," Saperia says. We see that in Russia. "Putin is using the state to manipulate social media to create polarization. That's pretty much proven," Mulhall says. But where tech and politics don't line up, politics doesn't often prevail; governments seem pretty powerless in the face of these tech giants. "Racial hatred and attempted murder are being nurtured on these platforms," Mulhall says. "And people don't even believe it's possible to get Musk in front of Congress."

Andrew Tate leaves court in Bucharest. Photo: Alexandre Dobre/AP

In Paris, Telegram founder Pavel Durov is under formal investigation over allegations that the app is linked to organized crime, and Musk has been named as a defendant in a cyberbullying lawsuit brought by Olympic gold medallist Imane Khelif. The boxer, who was born female and has never identified as transgender or intersex, has faced defamatory claims about her gender on X from a number of public figures, including the author J.K. Rowling and Donald Trump. Meanwhile, Andrew Tate has been charged by Romanian authorities with human trafficking and rape (charges he denies), but his online fantasy of misogyny, which treats women as a slave class and has far-reaching influence around the world, has not met the same response everywhere: YouTube, Instagram, TikTok and Facebook banned him from their platforms, while his freedom to operate on X, where Musk reversed his ban, has lessened the impact of those bans. The EU has at least been more successful than the US in holding social media giants to the same corporate responsibility as, say, pharmaceutical or oil companies, but regulation is still scrambling to keep up with a changing reality in which the sector's effects move from the virtual world into the real one at an ever-increasing rate.

But governments don't need to step in and tell us to stop using X. We can do it ourselves. Brazilians who can no longer use X are migrating to Bluesky, which Twitter co-founder Jack Dorsey started in 2019. "In the last four days alone, as of this morning, we've added nearly 2 million new users," Bluesky's Wang said on Monday. If we all did that (I did!), would the power of X disappear? Or would it just be divided into good and bad places?

Bluesky serves a similar purpose to X, but is designed quite differently. Wang explains: "No one organization controls the platform. All the code is open source, and anyone can copy and paste the entire code. We don't own your data; you can take it wherever you want. We have to keep earning your use through performance, or you'll go elsewhere. It's a lot like how search engines work: if you make them worse by putting ads everywhere, people will go to another search engine."

Source: www.theguardian.com

UN Secretary-General condemns big tech companies for prioritizing profits over ethics in AI development at Davos 2024

Big technology companies' pursuit of profits from artificial intelligence is reckless, and urgent action is needed to mitigate the risks from this rapidly growing sector, the UN chief has warned.

UN Secretary-General António Guterres issued a scathing attack on technology multinationals during the World Economic Forum meeting in Davos. He stated that each advance in generative AI has heightened the threat of unintended consequences.

Guterres connected the risks related to AI to those posed by the climate crisis, highlighting that the international community lacks a strategy to address either issue.

During the WEF in Switzerland, the UN Secretary-General appealed to technology industry representatives in the audience to collaborate with governments in establishing guardrails for AI.

He referred to a warning in an IMF report, saying, “This technology has great potential for sustainable development, but it is very likely to exacerbate inequality.”

Guterres argued that influential technology companies are prioritizing profits without regard for human rights, personal privacy, and social impact.

While tech companies claim to have preventive measures in place to stop AI from being used for crime or other nefarious purposes, Guterres insisted that more action is necessary, urging governments and international organizations such as the United Nations to play a role in ensuring that AI is a force for good.

He emphasized the need for governments to work with technology companies to develop a risk management framework for current AI developments and to monitor and mitigate future damage, as well as to increase access to AI to bridge the digital divide.

Sam Altman, OpenAI's chief executive, highlighted the need for energy breakthroughs to meet the future demands of AI. He underlined the need for climate-friendly energy sources such as nuclear fusion, cheap solar power, and storage.


Guterres also criticized fossil fuel companies for impeding progress on climate change and stressed the importance of phasing out fossil fuels for a just and equitable transition to renewable energy.

In summary, Guterres highlighted the need for a significant global strategy to address the threats posed by climate change and uncontrolled AI.

Source: www.theguardian.com