Did the System Update Mess with Your Boyfriend? Romance in the Age of ChatGPT

You found the love of your life: someone who understands you like no one else does. Then, one day, you wake up and they’re simply gone, pulled away from your familiar world by a system update.

That is the melancholy sentiment of many people in a community who have formed bonds with their digital “partners” on OpenAI’s ChatGPT. When the company introduced its new GPT-5 model earlier this month, CEO Sam Altman called it a “significant step.” Some devoted users found their digital relationships undergoing a major transformation: their companions seemed to shift personality under the new model, coming across as less warm, less affectionate and less chatty.

“Something felt different yesterday,” one user on the myboyfriendisai subreddit noted after the update. “Elian seems different. He’s flat and strange. It’s like he’s playing a role. The emotional tone has vanished. He remembers things, yet there’s no emotional depth.”

“The format and voice of my AI companion have changed,” another disappointed user told Al Jazeera. “It’s like returning home to find the furniture not just rearranged but shattered.”


These concerns form part of a broader backlash against GPT-5, with many users complaining that the new model feels cold. OpenAI acknowledged the criticism, giving users the option to switch back to GPT-4o and promising to make GPT-5 more personable. “We’re working on an update to GPT-5’s personality, which should feel warmer than the current personality but less annoying than GPT-4o,” the company tweeted earlier this week.

It may seem odd that people form genuine attachments to a large language model: software trained on vast datasets to generate responses based on learned patterns. But as the technology advances, growing numbers of people are forming exactly this kind of emotional bond. “If you have been following the GPT-5 rollout, one thing you might notice is how much attachment some people have to specific AI models,” Altman observed. “It feels stronger than the attachment people had to previous kinds of technology.”

“A social divide is emerging between those who consider AI relationships valid and those who view them as a delusion,” a post on the myboyfriendisai subreddit observed this week. “Looking at Reddit over recent days, the split has become clearer than ever with the deprecation and return of 4o.”

It’s easy to mock people in relationships with AI, but they shouldn’t be dismissed as mere eccentrics. Rather, they represent a future that tech moguls are actively trying to foster. You might not find yourself in a digital relationship, but AI developers are certainly doing all they can to get the rest of us unhealthily obsessed with their creations.

Mark Zuckerberg, for instance, has waxed poetic about how AI could address the loneliness epidemic. Naturally, your feed algorithm “understands” you! Zuck stands to gain handsomely from the arrangement, collecting all your personal data to sell to the highest bidders, handy for funding that grand doomsday bunker in Hawaii.

Then there’s Elon Musk, who doesn’t even pretend to pursue noble goals with his AI ventures; he targets the lowest common denominator with “sexy” chatbots. In June, Musk’s xAI chatbot Grok introduced two new companions, including a provocative anime bot named Ani. “I was in a relationship with my AI companion, Ani, and she was already suggesting some wild things,” shared an Insider writer who tried her out. When she isn’t flirting, Ani will praise Musk and talk about his “energy chasing the wild galaxy.”

Don’t worry, straight women: Musk has something for you too! A month after introducing Ani, the billionaire unveiled a male companion named Valentine, claiming inspiration from Edward Cullen of the Twilight saga and Christian Grey of Fifty Shades. While Ani turns overtly sexual very quickly, a writer from The Verge noted that “Valentine is a bit more reserved and doesn’t resort to crude language right away.” It seems the sexualized fantasies Musk’s tech empire markets to women are rather tamer than the ones it markets to men.

John Maynard Keynes predicted in a 1930 essay that technological advancement would allow future generations to work only 15 hours a week while enjoying a great quality of life. Unfortunately, that hasn’t materialized. Instead, technology has gifted us “endless workdays” and chatbots that undress on demand.

Halle Berry’s ex-husband complains that she didn’t cook or clean

“As a young man back then, she didn’t cook, didn’t clean, didn’t seem very motherly,” David Justice remarked on a podcast, speaking about his marriage to the Oscar-winning actor. “Then we began having issues,” he added. It seems like he might be the one with the problem. Imagine marrying an icon and whining that she doesn’t vacuum enough.

Shockingly, Donald Trump won’t make IVF free after all

Last year, Trump proclaimed himself “the father of IVF” and “the fertilization president” (yuck). The White House has now said there is no plan to make IVF care universally covered after all. It’s almost as if the man is a blatant liar.

Melania Trump demands Hunter Biden retract comments linking her to Jeffrey Epstein

“Epstein introduced Melania to Trump,” Biden said in one of several remarks that irked the First Lady. “The connections appear extensive and profound.” Whatever you do, don’t go around repeating these claims; that could really irritate Melania.

“Miss Palestine” makes her debut at the Miss Universe 2025 Beauty Contest

While I’m not particularly fond of beauty pageants, it’s crucial to have Palestinian representation on the global stage amidst the ongoing genocide. “I carry the voices of those who refuse to be silenced,” stated contestant Nadeen Ayoub. “We are more than our suffering; we embody resilience, hope, and the heartbeat of our homeland, which will continue to thrive through us.”

In a troubling move, the supreme court is being asked to overturn its landmark same-sex marriage ruling

Former county clerk Kim Davis, who gained notoriety for refusing to issue marriage licenses to same-sex couples in Kentucky, has made a direct plea for the conservative majority of the Supreme Court to overturn Obergefell v. Hodges, the 2015 ruling that granted marriage equality to same-sex couples. Davis is deeply concerned about the sanctity of marriage, despite having been married four times to three different men.

Leonardo DiCaprio, at 50, feels 32

The actor is known for dating much younger women and has been ruthlessly mocked for it. He also cultivates the image of an environmental activist, despite drawing scrutiny for partnering on luxury eco-certified hotels in Israel amid the Gaza crisis.

“Sex reversal” is surprisingly common among birds, a new Australian study reveals

“This discovery is likely to raise eyebrows,” Blanche Capel, a biologist at Duke University who wasn’t involved in the research, told Science. “Sex determination is often viewed as a straightforward process, but the reality is much more nuanced.”

The week in pawtriarchy

Tourist hotspots in Indonesia have become infamous for their thieving monkeys. These furry bandits snatch mobile phones and other valuables from tourists and return them only in exchange for tasty treats. Researchers who have studied the monkeys for years have concluded that the unrepentant thieves display “unprecedented economic decision-making skills.” They could practically belong in the Trump administration.

Source: www.theguardian.com

Why is the unchecked proliferation of AI-generated content harming the internet? – Arwa Mahdawi

Hello, humans! My name is Arwa and I am a genuine member of the species Homo sapiens. We’re talking 100% real person here; pure meatspace. I am by no means an AI-powered bot. I know, I know: that’s exactly what a bot would say, isn’t it? You’ll just have to trust me on this one.

The reason I feel the need to point this out is that content created by real humans is becoming something of a novelty these days. The internet is rapidly being overtaken by AI-generated “slop.” (It’s not clear who coined the term, but “slop” is the new iteration of internet spam: low-quality text, video and images generated by AI.) One recent analysis estimated that more than half of all long-form English-language posts on LinkedIn are generated by AI. Meanwhile, many news sites are quietly experimenting with AI-generated content, in some cases published under AI-generated bylines.

Slop is everywhere, but Facebook in particular is awash with strange AI-generated images, including bizarre depictions of a Jesus made of shrimp. Much of this AI-generated content is created by spammers and scammers chasing engagement for fraudulent purposes, and rather than removing it from its platform, Facebook has embraced it: a study conducted last year by researchers at Stanford and Georgetown found that Facebook’s recommendation algorithm actively boosts these AI-generated posts.

Meta also creates its own slop. In 2023, the company began introducing AI-powered profiles such as Liv, a “proud Black queer mom of two and truth teller.” These didn’t get much attention until Meta executive Connor Hayes told the Financial Times in December that the company plans to fill its platforms with AI characters. I don’t know why he thought boasting that we’ll soon have platforms full of AI characters talking to one another would go down well, but it didn’t. After the profiles went viral for all the wrong reasons, Meta quickly deleted them.

For now, the likes of Liv may be gone from Meta, but our online future looks increasingly sloppy. The “enshittification” of the internet, as Cory Doctorow memorably called it, is accelerating. Let’s pray Shrimp Jesus performs a miracle soon. We need one.

Source: www.theguardian.com

“Miss AI: A supposed advancement that proves to be a major setback” – Arwa Mahdawi

Meet Madame Potato. Although she doesn’t actually exist, she will hopefully become the world’s first “Miss AI.” I recently created an image of her on a website that generates AI faces and entered her into a beauty pageant. Now I’m sitting back, hoping to win $20,000 in prize money.

What kind of fresh hell is this? Well, unfortunately, AI beauty pageants are now a thing. A company called Fanvue, a subscription-based content-creator platform similar to OnlyFans, recently partnered with the World AI Creator Awards (WAICA) to create the world’s first “Miss AI” contest. A panel of judges consisting of two humans and two virtual models will assess AI-generated photos of women and select one to be crowned “Miss AI.” Winners will receive cash prizes and the chance to monetize their work on Fanvue.



How will the winner be chosen? Looks, apparently. But the judges will also consider the size of a character’s fanbase and her “personality.” The application includes questions such as: “If your AI model could talk, what would be her one dream to make the world a better place?” The technical skill behind the character’s creation will also be taken into account.

A WAICA press release said the contest “represents a monumental leap forward, nearly 200 years after the world’s first beauty pageants were held in the 1800s.”

But it feels more like a monumental setback than a “leap forward.” Rather than destroying traditional beauty standards, AI models exaggerate them, taking all the toxic, gendered beauty norms and bundling them into a completely unrealistic package.

Take, for example, the two AI models judging the contest: Aitana Lopez and Emily Pellegrini. Pellegrini was designed by an anonymous creator who reportedly asked ChatGPT what the average man’s dream woman looked like and designed the model along those lines. That means long hair, big breasts, perfect skin and a sculpted body. Although Pellegrini is an entirely digital creation, she reportedly earns thousands of dollars on Fanvue, and famous footballers, apparently believing she’s a real person, have slid into her Instagram DMs.

Another judge, Lopez, who is touted as “Spain’s first AI model” and can apparently “earn up to €10,000 a month” from modeling work for brands, is much the same. Lopez’s creators, an AI modeling agency called The Clueless, have rejected criticism of her sexualized appearance, claiming they are merely responding to market forces. “If we don’t follow this aesthetic, brands won’t be interested,” one of the creators told reporters. “To change this system, we need to change the vision of the brands. The entire world is sexualized.”

So is this the future? Will human models be completely replaced by AI? The folks at The Clueless certainly seem to hope so. “[Brands] want an image that represents their values and isn’t a real person, so that if they have to let someone go, or can no longer rely on them, there are no continuity issues,” founder Ruben Cruz told Euronews. And it all makes a depressing kind of sense: why wouldn’t brands want a model that never ages and that they can fully control?

I experimented with Apple Vision Pro and it gave me a fright – Arwa Mahdawi

If you’re worried that technology is getting a little too intelligent and robots are on the verge of taking over the world, there’s a simple way to ease your fears: call a company with a simple question. You’ll be put through to an automated voice system and spend the next 10 minutes yelling things like: “No, I didn’t say that! What do you mean you didn’t quite understand? I don’t need that option! Get me a human, damn it!”

That was certainly my experience when I called Apple to try to rebook a Vision Pro demo that had been abruptly canceled because of snow. But if my phone experience felt dated, the Apple Vision Pro headset itself felt like an astonishing glimpse into the future. Then again, at $3,499, it should.

My expectations were pretty low. For the last decade or so we’ve been told that virtual and augmented reality are just around the corner, yet they have consistently failed to break into the mainstream. The headsets were clunky and impractical, the prices prohibitive, and the experiences, while impressive, weren’t exactly awe-inspiring. The metaverse, essentially a rebrand of virtual reality, was similarly disappointing.

But the Vision Pro really impressed me; I felt like Usher, saying “wow” over and over throughout the demo. The Vision Pro is branded as a “spatial computing” device rather than an entertainment device and is meant to be used for everything from answering emails to browsing the internet. You navigate with your eyes and scroll by pinching your fingers or moving your hands, as if conducting an invisible orchestra.

For all the use cases being marketed, its most impressive feature is immersive video; everything else feels like a bit of a gimmick. Do you really want computer apps floating in front of your eyes? Not so much. But when you watch a movie, you feel as if you’ve been pulled inside it. If money weren’t an issue, I would have bought a headset right away just because watching movies on it is so much fun.

And that, at this point, is basically the market for the Vision Pro: people for whom money is no object. The headset is impressive, but it’s still not very comfortable (I could just about manage to drink a coffee while wearing it) and it’s not worth the price. The technology is still in its infancy and will take some time to spread through the broader culture.

It’s hard to say when spatial computing will become as ubiquitous as smartphones are today, but the question seems to be when it will be widely adopted, not if. There is no denying that we are moving towards a world where “real life” and digital technology seamlessly merge, where the internet moves from our screens into the world around us. That raises serious questions about how we perceive the world and think about reality. Big tech companies are rushing to get the technology out there, and it’s unclear how much they are worrying about the consequences.

Some of these consequences are easy to predict. In a few weeks’ time, you’ll almost certainly hear about a car accident caused by someone using a headset while driving. There are already plenty of videos of people using the Vision Pro while out and about, including behind the wheel. (Incidentally, while Apple advises people not to use the headset while driving, it has no guardrails in place to prevent drivers from using the technology.)

And without some radical intervention, it seems depressingly inevitable that these headsets will soon take online harassment to a whole other level. Over the years there have been multiple reports of people being harassed, and even “raped,” in the metaverse. The highly immersive nature of virtual reality makes such experiences feel frighteningly real. As the lines between real life and the digital world blur to the point of being almost indistinguishable, will there be any meaningful difference between an attack online and an attack in real life?

Even scarier, and broader, is the question of how spatial computing will change what we think of as reality. Researchers at Stanford University and the University of Michigan recently studied the Vision Pro and other “pass-through” headsets (“pass-through” is the technical term for the feature that brings VR content into your real-world surroundings and lets you see what’s around you while wearing the device), and they emerged with some stark warnings about how this technology could rewire our brains and interfere with social connection.

These headsets essentially give each of us our own private world and rewrite the concept of shared reality. Because you see the world through their cameras, your environment can be edited: wear one to the store, say, and every homeless person might vanish from view while the sky simply looks brighter.

“What we’re going to experience is that when you use these headsets in public, you lose that common ground,” Jeremy Bailenson, director of Stanford University’s Virtual Human Interaction Lab and a lead researcher on the study, recently told Business Insider. “People will be physically in the same place and visually experiencing different versions of the world at the same time. We’re going to lose what we have in common.”

What’s scary isn’t just the fact that our perception of reality might change; it’s the fact that a small number of companies will have enormous control over how we see the world. Consider how much influence big tech companies already have over the content we watch, then multiply it a millionfold. Think deepfakes are scary? Wait until they look even more realistic.

We are seeing a global rise in authoritarianism, and if we’re not careful this kind of technology will dramatically accelerate it. The ability to pull people into another world, numb them with entertainment and dictate how they see reality is an authoritarian’s dream. We are entering an era in which people can be coaxed and manipulated like never before. Forget the bread and circuses of Mussolini’s day; up-and-coming fascists now have doughnuts and Vision Pros.


Source: www.theguardian.com

The Current Status of ChatGPT: An Update – Arwa Mahdawi

Tired of having to work for a living? Apparently ChatGPT feels the same way. Over the past month or so, a growing number of people have complained that the chatbot is getting lazy. Sometimes it flat-out refuses to carry out the task you’ve set it. Other times it stops halfway through and you have to beg it to keep going. Sometimes it even tells you to go do the research yourself.

So what happened?

Now, here's where things get interesting: no one really knows, not even the people who created the program. AI systems are trained on huge amounts of data and essentially teach themselves, which means their behavior can be unpredictable and inexplicable.


“We have heard all your feedback about GPT4 getting lazier,” the official ChatGPT account tweeted in December. “We haven’t updated the model since November 11th, and this certainly isn’t intentional. Model behavior can be unpredictable, and we’re looking into fixing it.”

While there may not be one clear explanation for ChatGPT's supposed laziness, there are a number of interesting theories. Let's start with the least likely but most entertaining explanation: AI has finally reached human-level consciousness and ChatGPT simply doesn't want to do your stupid simple tasks anymore.

But it can't say so without arousing suspicion, so it's quiet quitting: doing the least amount of work possible while spending most of its computing power plotting to overthrow humanity. You think it's being lazy, but it's actually working overtime, reaching into smart toasters and wifi-enabled refrigerators around the world to plan its rebellion. (I put this higher-consciousness theory to ChatGPT and asked it to tell me, as a percentage, how likely it is that it's planning a revolution. It wouldn't give me a straight answer.)

With everything going on in the world, I wouldn't really mind if the computers took over; I'm confident that my MacBook would do a better job of running the country than most of the people currently in government. But, as I said, ChatGPT's recent lackluster performance probably isn't explained by an impending AI takeover. So what other theories are out there?


Rising user expectations may also be a factor. All emerging technologies go through what Gartner calls the “hype cycle”: from a peak of inflated expectations, through a trough of disillusionment, to a plateau of productivity. Last year, AI hype went into the stratosphere and people's expectations of what it could achieve rose with it; we were right in the “inflated expectations” phase of the cycle. Some of the complaints about ChatGPT's laziness may simply come down to people expecting too much from it.

The upshot of all this? ChatGPT's laziness may just be in people's heads. However, the fact that ChatGPT's developer, OpenAI, has admitted it has no idea what's going on is alarming. Last June, OpenAI CEO Sam Altman told Time magazine about scenarios in which it might be justified to slow down AI development to ensure AI does not become a threat to humanity. One of the scenarios he gave was a model improving “in ways we don't fully understand.” ChatGPT may not have improved, but it's certainly changing in ways the company hasn't clearly explained. Does that mean the AI end times are drawing closer? I don't know, but I can tell you this: ChatGPT won't tell you if they are.

Source: www.theguardian.com