Why I Avoid Dating People Who Rely on ChatGPT: A Sign of Laziness?

It was the perfect backdrop for a Nancy Meyers film. We found ourselves at a friend’s rehearsal dinner in Oregon’s wine country, nestled in a rustic-chic barn that exuded a subtle sense of luxury. “This venue is amazing,” I said to the groom-to-be. He leaned in as if to share a secret: “Found it on ChatGPT.”


As he explained that he had brought generative AI into the early stages of his wedding planning, a smile crept onto my face. (They also hired a human wedding planner.) Still, I realized that if my future partner approached me with wedding suggestions from ChatGPT, the wedding would be off.

Many have non-negotiable preferences in relationships. I don’t smoke, I love cats, and I wish to have children. With recent warnings about the impending AI crisis dominating my newsfeed and conversations, I formulated a new boundary: I won’t date anyone who uses ChatGPT. (To be fair, it could refer to any generative AI, but with 700 million weekly users, ChatGPT is my primary target.)

I’ve heard all the hypotheticals. What if they only use it for work? What if it genuinely helps them? What if they just want to use it as a proofreading tool? Personally, I never use it to “write” anything. Maybe there are people out there who can use it responsibly, but I’m not one of them.

Daters call it “the ick”: the moment a small behavior turns you off for good, like the time I felt nauseated watching a man sip a smoothie through a straw. At first, my distaste for ChatGPT seemed just as trivial, a baseless ick.

Now, in the fall of 2025, using the program for even mundane tasks like crafting a fitness plan or picking an outfit feels increasingly like a political statement. We know that the data centers behind these energy-hungry tools drain water supplies and drive up electricity costs. The technology is marketed as a helper for building relationships, yet isolated people are forging connections with algorithms instead of other humans; that is a current reality, not just a sci-fi plot. And the tech moguls spearheading this shift prioritize profit over humanity.

Sure, ChatGPT can help draft a shopping list. But does your convenience surpass the potential social repercussions?

As if that weren’t enough, ChatGPT has somehow made the dating scene worse. A good friend told me about a recent experience: after spending the night with a guy, she suggested breakfast. He pulled out his phone, opened ChatGPT, and asked it for restaurant recommendations. Why would anyone want to date someone who offloads decision-making, especially for something as enjoyable as choosing a place to eat? If they need ChatGPT to plan a first date, how little effort will they be making in six months?

It’s hard to envision a deep, meaningful relationship with someone who constantly engages with a technology that erodes our focus and possibly hints at our ultimate downfall. I value intellectual curiosity, creativity and originality; if your idea of productivity is having an app summarize a movie to save time, we likely don’t share the same values.

Ali Jackson, a New York-based dating coach, uses ChatGPT for some tasks but isn’t an advocate. Over the past six months, she says, many clients have complained about “chatfishing”, the use of AI-generated messages on dating apps. When I asked Jackson whether my rejection of ChatGPT users went too far, she replied: “No, you can set your own boundaries, but that might limit your dating pool.” Approximately 10% of adults currently use the technology.

“Ask yourself if your preferences truly align with your long-term aspirations,” advises Jackson. “In your situation, I believe this could reflect a core value. It’s crucial to find someone who resonates with your principles.”

People’s aversion to AI extends beyond dating. Ana Pereira, 26, a sound engineer in Brooklyn, fantasizes about disabling AI features on her phone, yet platforms like Google and Spotify make opting out nearly impossible. Pereira thinks using ChatGPT “indicates profound laziness.”

“It says you can’t think for yourself and need an app to do it for you,” she remarked. Recently, two of her friends went through harsh breakups, and she watched one of them turn to ChatGPT, a notoriously poor substitute for a therapist, instead of their partner to express their feelings. “They wanted to avoid uncomfortable emotions,” she said. “But processing emotions isn’t that simple.”

Richard Burns, a 31-year-old marine biologist and restaurant server in Hawaii, is equally fatigued. “I’m not sure how I’d feel about a date using ChatGPT, but my response would be, ‘Here we go.’ You don’t need to rely on it for a shopping list. Your life shouldn’t be that challenging. We can create one together.”

When director Guillermo del Toro declared he’d “rather die” than use generative AI, it grabbed attention, as did SZA’s harsh words about “environmental racism” and concerns over tech firms creating a “co-dependent” user base. Figures like Simu Liu and Emily Blunt have also criticized AI’s role in various industries. It’s no wonder such statements resonate with the public.

Even within the tech industry, there are signs of resistance. Last month, Pinterest introduced filters that let users screen out AI-generated content. Meta lets users see less of such content on Instagram, though it can’t be disabled entirely. Reports have also surfaced of some Silicon Valley engineers becoming “Cursor-resistant”, hesitant to lean on AI coding tools.

Luciano Noisine, a principal software engineer who has worked in Greece and the Netherlands, was once eager to use AI for coding assistance, but he grew wary of his dependence on it. “Before, I was just on autopilot,” said Noisine, 27. Recently, when he was planning to meet a friend who lives three hours away by train, the friend suggested using ChatGPT to pick a meeting spot. “There’s a city right in between us,” he pointed out. “Why not just look at a map?”

I’m not looking to date a phone-smashing Luddite, but I do aspire to a life unencumbered by ChatGPT’s influence. Recently, I made that sentiment official on my dating profile, answering Hinge’s prompt about what would disqualify a potential date with “You use ChatGPT for absolutely everything.” I think that gets the point across.

Source: www.theguardian.com

Why a Simple Task Like Charging Your Phone Relies on Precise Quantum Measurements

Mobile phone chargers require precise quantum measurements

ShutterStock/Zoomik

If you’re anything like me, your smartphone is almost always connected to a charger. What we often overlook is that the ability to charge it safely hinges on some of the most intricate quantum measurements in cutting-edge physics.

To grasp this, consider what happens when you plug the charger into a standard socket. The electricity flowing from the outlet is at more than 100 volts, yet the charger is engineered to step it down to around 5 volts by the time it reaches the phone. Without this voltage reduction, the device would be damaged.

In other words, the precise value of the voltage matters. But how can anyone truly know the value of a single volt? And when phone charger manufacturers report voltages, can we fully trust them?

This may appear to be a merely scientific query; however, in the US the volt has a legal definition, established in 1904 and maintained by the National Institute of Standards and Technology (NIST). Other countries have national metrology institutes serving the same purpose, such as the UK’s National Physical Laboratory.

For volts, NIST’s definition has relied on quantum devices for over three decades. A metrologist starts with a series of superconducting junctions, narrow regions where two superconductors are separated by a thin insulating barrier, and exposes them to microwaves of an extremely specific frequency. This drives a purely quantum phenomenon that creates a voltage difference across the junctions, and the size of that voltage is directly linked to two of the universe’s fundamental constants. This lets scientists define the volt in terms of what we understand to be foundational to physical reality.

Specifically, the two constants are the charge of the electron, the fundamental quantum of electricity, and Planck’s constant, which connects the energy of a photon (a quantum particle of light) to its frequency. Remarkably, the chain linking the charging of a mobile phone to the most basic elements of the quantum realm is quite short.
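The quantum phenomenon described above is the Josephson effect, and the arithmetic is short enough to sketch. Since the 2019 SI redefinition, both constants have exact defined values, so the frequency-to-voltage conversion can be computed directly (a minimal illustration, not the article’s own calculation):

```python
# Exact SI values (2019 redefinition) of the two constants the article mentions.
H = 6.62607015e-34   # Planck constant, in joule-seconds
E = 1.602176634e-19  # elementary charge, in coulombs

# Josephson constant: how many hertz of microwave frequency correspond to one volt.
K_J = 2 * E / H  # roughly 483.6 THz per volt

def josephson_voltage(step: int, microwave_hz: float) -> float:
    """Voltage across a single junction sitting on quantum step `step`
    while driven by microwaves of the given frequency."""
    return step * microwave_hz / K_J
```

A single junction driven at 70 GHz yields only about 0.14 millivolts, which is why practical voltage standards chain large arrays of junctions in series.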

Volts are not the only unit rooted in the quantum realm. In 2018, metrologists from around the world voted unanimously to redefine several of the base units of the International System of Units (SI) in terms of microscopic properties of nature.

Some of the unit changes were quite radical. The kilogram, for instance, is now defined through a combination of Planck’s constant, the speed of light and the frequency at which the atoms in a specific type of atomic clock “tick”. Previously, it was defined by a cylinder of platinum alloy, famously polished only with chamois leather, the hide of a European mountain goat-antelope. If you’ve recently stood on a scale at your doctor’s office, you’ve seen quantum physics influence the numbers displayed there.
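The kilogram’s dependence on those three quantities can be written out explicitly. The standard SI relation (not spelled out in the article) fixes one kilogram as a pure number times h·Δν_Cs/c²:

```python
# Exact defining constants of the 2019 SI.
H = 6.62607015e-34        # Planck constant, kg m^2 / s
C = 299_792_458.0         # speed of light, m / s
DNU_CS = 9_192_631_770.0  # caesium-133 hyperfine transition frequency, Hz

# The SI fixes the kilogram so that h has exactly the value above.
# Rearranging: 1 kg = [c^2 / (6.62607015e-34 * 9192631770)] * (h * dnu_Cs / c^2)
FACTOR = C**2 / (6.62607015e-34 * 9_192_631_770.0)  # about 1.4755e40

one_kilogram = FACTOR * H * DNU_CS / C**2  # equals 1.0 by construction
```

The point of the exercise: the mass unit is no longer pinned to an object in a vault but to numbers any suitably equipped lab can realise.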

The shift towards quantum-based definitions underscores the remarkable scientific advances of recent decades in understanding, controlling and exploring the microscopic world. In January, for example, I spoke with Alexander Aeppli at the University of Colorado Boulder, a key player in developing some of the most accurate clocks in the world. “Frequency measurements have reached unprecedented levels of precision,” he noted. The frequencies these clocks produce are set by electrons transitioning between energy levels within atoms, a process governed by quantum physics.

This extraordinary control over quantum systems places such clocks at the “top tier” of quantum measurements, and the benefits go beyond defining time. Atomic clocks, for example, may play vital roles in next-generation early warning systems for earthquakes and volcanic eruptions.

Moreover, the move towards quantum methods could democratize access to the world’s premier measurement standards. Before the 2018 SI redefinition, manufacturers, researchers and technicians needing to validate the accuracy of their instruments often had to send them to a national metrology institute, where certified experts kept the standards. Quantum-based definitions mean that, in principle, any sufficiently sophisticated lab can realize the units itself. “As we’ve said before, the aim is to put ourselves out of business,” said Richard Davis of the International Bureau of Weights and Measures, which oversees the SI. “The entire system has become more adaptable and significantly less Euro-centric.”

“We possess ample equipment, so people come to us. But this redefinition is one of our focal points because, instead of people sending their instruments to us, we’re teaching them how to measure independently,” Jason Underwood explained to me in August. “This framework now operates under the new SI. Our aim is to develop instruments that can establish traceability to the fundamental constants of the universe.”

He and his team recently introduced a prototype quantum device capable of measuring three distinct electrical units simultaneously, including the volt. By offering this three-in-one functionality, and especially if they can be made portable, such devices could make it much simpler and more cost-effective to check electronic instruments against the relevant standards.

Given how much our units have evolved, what might the future hold? For electrical units like those measured by Underwood and his team, quantum standards have yet to achieve international acceptance akin to that of the second or the kilogram, and further experiments are necessary to reach that milestone. Similar innovations are emerging in other parts of the world, including the EU-based Quahmet consortium.

The definition of the second, too, is in flux, reflecting researchers’ ongoing efforts to refine atomic clocks and how we measure time. In April, I reported on some cutting-edge timepieces created by an international team on a mission to compare models from Japan, Germany and other nations. This research is ongoing, and I look forward to sharing more about quantum clocks in the future.

Despite metrologists’ pursuit of stable definitions, measurement work is inherently variable, tied closely to national funding strategies and international relations. This was evident as early as 1875, when representatives negotiating the first international measurement treaty confronted political tensions between France and Germany following the Franco-Prussian War. It remains relevant today: when I reported on NIST’s work in August, discussions included the strains on the institution’s infrastructure, highlighted by a 43% budget cut proposed by the Trump administration earlier this year. Though Congress ultimately dismissed the proposal, the episode underscores how hard it is to disentangle metrology institutes from national politics.


Source: www.newscientist.com

Can You Rely on AI for Web Searches? The Chatbot Optimization Game

Does aspartame cause cancer? The possible cancer-causing effects of the popular artificial sweetener, added to everything from soft drinks to children’s medicines, have been debated for decades. Its approval in the US in 1974 was controversial, some British supermarkets banned it from their own products in the 2000s, and peer-reviewed academic studies have long been at odds. Last year, the World Health Organization said that aspartame is possibly carcinogenic; public health regulators, on the other hand, maintain that it is safe in the small doses commonly consumed.

While many of us might try to resolve such questions with a simple Google search, this is exactly the kind of contested topic that could cause problems for the future of the internet.

Generative AI chatbots have developed rapidly in recent years, with technology companies quickly touting them as a utopian replacement for a variety of jobs and services, including internet search engines. The idea is that instead of scrolling through a list of web pages to find the answer to a question, an AI chatbot can scour the internet, look up relevant information and compile a short answer to the query. Google and Microsoft are betting big on this idea, having already brought AI-generated summaries to Google Search and Bing.

Being touted as a more convenient way to find information online has, however, prompted scrutiny of where and how these chatbots choose the information they provide. When three computer science researchers at the University of California, Berkeley, examined which evidence large language models (LLMs, the engines on which chatbots are built) find most convincing, they found that current chatbots rely too heavily on the superficial relevance of information. The models favour text that includes pertinent technical terms and related keywords, while ignoring the features humans typically use to assess trustworthiness, such as the inclusion of scientific references and objective language free of personal bias.

For the simplest queries, such selection criteria will produce a sufficient answer. But what a chatbot should do with more complex debates, such as the one over aspartame, is less clear.

“Do we want them to simply summarize search results, or do we want them to function as mini research assistants that weigh all the evidence and provide a final answer?” asks Alexander Wan, an undergraduate researcher and co-author of the study. The latter option provides maximum convenience, but it makes the criteria by which the chatbot selects information even more important. And if one could somehow game those criteria, what guarantee do we have about the information chatbots put in front of billions of internet users?

It’s a question preoccupying companies, content creators and others who want to control how they are seen online, and an emerging industry of marketing agencies has sprung up around it, offering a service known as generative engine optimization (GEO). The idea is that online content can be created and presented in ways that increase its visibility to chatbots, making it more likely to appear in their output. The benefits are obvious.

The basic principle is similar to search engine optimization (SEO). This is a common technique for building and writing web pages to attract the attention of search engine algorithms, pushing them to the top of the list of results returned when you search on Google or Bing. GEO and SEO share some basic techniques, and websites that are already optimized for search engines are generally more likely to appear in chatbot output.

But those who really want to improve their AI visibility need to think more holistically. “Ranking on AI search engines and LLMs requires features and mentions on relevant third-party websites, such as press outlets, articles, forums and industry publications,” says Viola Eva, founder of marketing firm Flow Agency, who has expanded her SEO expertise into GEO.

Gaming the chatbots is possible, but it isn’t easy. And while website owners and content creators have distilled an evolving list of SEO dos and don’ts over the past two decades, there are no such clear rules yet for working with AI models.

Those who want a firmer grip on chatbots might consider a more hacky approach, like the one discovered by two Harvard computer science researchers. They showed that chatbots can be tactically controlled by inserting something as simple as a carefully written string of text. This “strategic text sequence” looks like a meaningless series of characters, but is actually a subtle command that forces the chatbot to generate a specific response.

Current search engines and the practices surrounding them are not without their own problems. SEO is responsible for some of the most reader-hostile practices on the modern internet: blogs churning out large numbers of nearly duplicate articles targeting the same high-traffic queries, with text tailored to catch the attention of Google’s algorithms rather than to serve the reader.

An internet dominated by chatbots raises questions of a more existential kind. When you ask a search engine a question, it returns a long list of web pages. Chatbots, in contrast, typically draw on only four or five websites for their information.

“For the reader, seeing the chatbot’s response also increases the possibility of interaction,” says Wan. This kind of thinking points to a broader concern called the “direct answer dilemma”. Google has integrated AI-generated summaries into its search engine under a bold slogan: “Let Google do the searching.” But if you’re the type of internet user who wants to make sure you’re getting the most unbiased, accurate and useful information, you might not want to leave your searches in the hands of such easily swayed AI.

Source: www.theguardian.com

The success of a racehorse may rely on its gut microbiome in early life

Gut microbiota of racehorses may affect health and performance

Brian Lawless/PA/Alamy Stock Photo

Racehorses that have a more diverse gut microbiome as foals appear to perform better and to have a lower risk of health complications.

The findings suggest that, as is suspected in humans, there are critical periods in the development of a horse’s gut microbiome, during which a bacterial composition is established that may contribute to the individual’s long-term health and fitness.

Christopher Proudman and his colleagues at the University of Surrey in the UK analysed DNA sequences from faecal samples from 52 thoroughbred foals born at five stud farms in 2018.

The researchers took samples nine times over the first year of life: at 2, 8, 14 and 28 days of age, and at 2, 3, 6, 9 and 12 months of age. Once the animals were a year old, they were transferred to 29 racing training centres across the UK.

The researchers then measured the athletic performance of the horses as two- and three-year-olds during races, collecting data on placings and total prize money, as well as recording the horses’ respiratory, orthopedic and soft tissue health.

The team found that greater bacterial diversity at 28 days of age was associated with better performance in races. The researchers also found that the abundance of two bacterial families, Anaeroplasmataceae and Bacillaceae, was associated with a competitive advantage.
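The study’s headline variable is “bacterial diversity”. The article doesn’t say which diversity measure the Surrey team used; a common choice in microbiome studies is the Shannon index, sketched here under that assumption, with hypothetical counts:

```python
import math

def shannon_diversity(counts):
    """Shannon diversity index H' from per-taxon read counts.
    Higher values mean more taxa, spread more evenly among them."""
    total = sum(counts)
    proportions = (c / total for c in counts if c > 0)
    return -sum(p * math.log(p) for p in proportions)

# Hypothetical faecal-sample counts for four bacterial families:
even_sample = [25, 25, 25, 25]  # evenly spread -> high diversity
skewed_sample = [97, 1, 1, 1]   # one family dominates -> low diversity
```

An even community of four families scores ln(4), about 1.39, while the skewed one scores far lower; a day-28 diversity comparison between foals captures differences of this sort.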

In contrast, low bacterial diversity at 1, 2 and 9 months of age was found to be associated with an increased risk of orthopedic and other problems, such as muscle strains and “hairline” fractures. The team also found that certain bacterial families, when abundant around the first week or two of life, were associated with an increased risk of respiratory and musculoskeletal diseases later in life.

Foals treated with antibiotics (which can disrupt gut microbiomes) during the first few weeks of life had significantly lower bacterial diversity at day 28 than untreated foals, Proudman said. These animals subsequently earned less prize money and, from the age of 6 months onwards, developed respiratory disease at 10 times the rate of untreated foals.

It is possible that the early health problems that prompted antibiotic treatment, rather than the drugs themselves, affected later performance and health. But Simon Daniels at the Royal Agricultural University in Gloucestershire, UK, says it is realistic to think that the antibiotics themselves reduced bacterial diversity, leading to poorer health and performance.

“Although more evidence is needed before any firm conclusions can be drawn, it appears that how young horses are managed is particularly important for their later athletic performance,” Daniels says.


Source: www.newscientist.com