Using Profanity in Google Searches Might Make AI Stop Responding – Is It Worth It?

Using explicit language in your Google searches can help reduce the frequency of unwanted AI-generated summaries. Some applications also provide options to disable artificial intelligence features.

You might consider not utilizing ChatGPT, steering clear of AI-integrated software, or avoiding interactions with chatbots altogether. You can disregard Donald Trump’s deepfake posts, and find alternatives to Tilly the AI actor.

As AI becomes more widespread, so do concerns regarding its associated risks and the resistance to its omnipresence.

Dr. Kobi Raines, a specialist in AI management and governance, emphasizes that healthcare professionals often feel compelled to utilize AI.

She mentioned that she preferred not to use AI transcription software for her child’s appointment, but was informed that the specialist required it due to time constraints and suggested she seek services elsewhere if she disagreed.

“There is individual resistance, but there are also institutional barriers. The industry is advocating for the use of these tools in ways that may not be sensible,” she states.


Where is the AI?

AI is deeply embedded in digital frameworks.

It’s integrated into tools like ChatGPT, Google’s AI repository, and Grok, the controversial chatbot developed by Elon Musk. It informs smartphones, social media platforms, and navigation systems.

Additionally, it’s now part of customer service, finance, and online dating, impacting how resumes, job applications, rental requests, and lawsuits are evaluated.

AI is expected to further integrate into the healthcare sector, easing administrative workloads for physicians and aiding in disease diagnoses.

A University of Melbourne Global Studies report released in April noted that half of Australians engage with AI regularly or semi-regularly, yet only 36% express trust in it.

Professor Paul Salmon, deputy director of the Center for Human Factors and Socio-Technical Systems at the University of the Sunshine Coast, highlights that avoiding AI is becoming increasingly challenging.

“In professional environments, there’s often pressure to adopt it,” he shares.

“You either feel excluded or are informed you will be.”


Should we avoid using AI?

Concerns include privacy violations, biases, misinformation, fraudulent use, loss of human agency, and lack of transparency—just a few risks highlighted in MIT’s AI risk database.

It warns about AIs potentially pursuing objectives conflicting with human goals and values, which could lead to hazardous capabilities.

Greg Sadler, CEO of Good Ancestors charity and co-coordinator of Australians for AI Safety, frequently references the database and advises caution, stating, “Never use AI if you don’t trust its output or are apprehensive about it retaining information.”

Additionally, AI has a sizable energy footprint. Google’s emissions rose by over 51%, partly because of the energy demands of its data centers that facilitate AI operations.

The International Energy Agency predicts that electricity consumption by data centers could double from 2022 levels by 2026. Research indicates that by 2030, data centers may consume 4.5% of the world’s total energy production.


How can I avoid using AI?

AI Overview features a “Profanity Trigger.” If you inquire on Google, “What is AI?” its Gemini AI interface may provide a bland or sometimes inaccurate response, acting as an “answer engine” rather than a “search engine.”

However, posing the question, “What exactly is AI?” will yield more targeted search results along with relevant links.

There are a variety of browser extensions capable of blocking AI-related sites, images, and content.

To bypass certain chatbots, you can attempt to engage a human by repeating words like “urgent” and “emergency” or using the term “blancmange,” a popular dessert across Europe.

James Jin Kang, Senior Lecturer in Computer Science at RMIT University, Vietnam, remarked: living without it entails taking a break from much of modern life.

“Why not implement a kill switch?” he questions. The issue, he claims, is that AI is so deeply entrenched in our lives that “it’s no longer something you can easily switch off.”

“As AI continues to seep into every facet of our existence, it’s imperative to ask ourselves: Do we still have the freedom to refuse?”

“The real concern is not whether we can coexist with AI, but whether we possess the right to live without it before it becomes too late to break away.”


What does the future hold for AI?

Globally, including in Australia, governments are grappling with AI, its implications, potential, and governance challenges.

The federal government faces mounting pressure to clarify its regulatory approach as major tech firms seek access to journalism, literature, and other resources necessary for training their AI models.

The discussion includes insights from five experts on the future trajectory of AI.

Notably, three out of five experts believe AI does not present an existential threat.

Among those who express concerns, Aaron J. Snoswell of the Queensland University of Technology opines that the transformative nature of AI is not due to its potential intelligence but rather to “human decisions about how to construct and utilize these tools.”

Sarah Vivian Bentley of CSIRO concurs that the effectiveness of AI is dictated by its operators, while Simon Coghlan of the University of Melbourne argues that despite the worries and hype, evidence remains scant that superintelligent AI capable of global devastation will emerge anytime soon.

Conversely, Nyusha Shafiabadi of Australian Catholic University warns that although current systems possess limited capabilities, they are gradually acquiring features that could facilitate widespread exploitation and present existential risks.

Moreover, Saydari Mirjalili, an AI professor at Torrens University in Australia, expresses greater concern that humans might wield AI destructively—through militarization—rather than AI autonomously taking over.


Raines mentions she employs AI tools judiciously, utilizing them only where they add value.

“I understand the environmental impacts and have a passion for writing. With a PhD, I value the process of writing,” she shares.

“The key is to focus on what is evidence-based and meaningful. Avoid becoming ensnared in the hype or the apocalyptic narratives.

“We believe it’s complex and intelligent enough to accommodate both perspectives, implying these tools can yield both beneficial and detrimental outcomes.”

Source: www.theguardian.com

Astronomers Investigate Methods to Enhance Searches for Alien Technosignatures

A recent study indicates that a group of astronomers in Pennsylvania, along with NASA’s Jet Propulsion Laboratory, can determine when and where human deep space transmissions are most likely to be detected by extraterrestrial observers beyond our solar system. They can use observed patterns to inform searches for alien intelligence.

Analysis conducted on deep spacenetwork uplink transmission logs over the last two decades et al. It was found that these emissions mainly targeted the Sun or various planets. Image credit: Gemini AI.

“Humans primarily communicate with probes sent to explore spacecraft and other planets like Mars,” stated Pinken Hwang, a graduate student in Pennsylvania.

“Nevertheless, planets such as Mars do not obstruct entire transmissions, enabling spacecraft or celestial bodies along these interplanetary communication pathways to potentially detect signals.

“This implies that when searching for extraterrestrial communications, we need to consider planets outside our solar system that might align with our signals.”

“SETI researchers frequently scan the universe for indicators of past or current technology, referred to as Technosignatures, as potential signs of intelligent life.”

“By analyzing the direction and frequency of our most prevalent signals, we shed light on where we should enhance our chances of discovering alien technical stations.”

In this research, scientists scrutinized logs from NASA’s Deep Space Network (DSN), a global facility that enables two-way radio communication with human-made objects in space, serving as a relay to send commands and receive data from spacecraft.

They meticulously aligned the DSN logs with spacecraft location data to pinpoint the timing and direction of radio communications emanating from Earth.

Even though some countries have their own deep space networks, researchers argue that the NASA-operated DSN effectively represents the types of communications coming from Earth, as NASA has spearheaded the most profound space missions to date.

“The DSN establishes crucial connections between Earth and interplanetary missions, such as the NASA New Horizons spacecraft and the NASA/CSA James Webb Space Telescope.”

“It emits some of humanity’s most powerful and sustained radio signals into space, and the public logs of these transmissions have enabled our team to identify temporal and spatial patterns over the past 20 years.”

This study concentrated on transmissions directed into deep space, such as signals sent to interplanetary spacecraft, rather than those intended for low-Earth orbit satellites.

The researchers found that deep-space radio signals primarily targeted spacecraft close to Mars.

Other frequent transmissions were directed at telescopes situated at the Lagrange points near Earth and various planets. These points are areas where the gravitational forces of the Sun and Earth keep the telescope in a relatively fixed position from the perspective of Earth.

“Based on data from the last 20 years, we found that if extraterrestrial intelligence exists where we can observe the alignment of Earth and Mars, there is a 77% chance it falls within our transmission path.

“Furthermore, if they can see consistency with another planet in a solar system, there is a 12% chance they are on that transmission path.”

“However, these opportunities are quite substantial if planetary alignment is not observed.”

The team emphasized the need for humans to search for interplanetary alignments to enhance their quest for Technosignatures.

Astronomers routinely examine exoplanets during alignments with their host stars. In fact, the majority of known exoplanets were discovered by observing a star dimming as a planet passes in front of it.

“We only recently started detecting a significant number of exoplanets in the last 10 to 20 years, so we still lack knowledge about many systems that include more than two transiting exoplanets,” Fan noted.

“With the imminent launch of NASA’s Nancy Grace Roman Space Telescope, we anticipate the detection of 100,000 previously unknown exoplanets, which should significantly expand our search area.”

Our solar system is relatively flat, with most planets orbiting in the same plane, consequently, most DSN transmissions occurred within 5 degrees of Earth’s orbital plane.

If the solar system were metaphorically likened to a dinner plate with planets and objects lying on its surface, human transmissions would predominantly travel along the surface instead of leaping out into space at steep angles.

The authors also calculated that average DSN transmissions can be detected approximately 23 light-years away using telescopes similar to ours.

“Focusing on solar systems within 23 light-years, particularly those aligned in the plane towards Earth, could enhance our search for extraterrestrial intelligence,” they concluded.

The team is currently strategizing on identifying these systems and estimating how often they receive signals from Earth.

“Humanity is still in the early stages of our space exploration journey, and as we extend our missions into the solar system, transmissions to other planets will only increase,” remarked Professor Jason Wright of Penn.

“We have quantified ways to improve future searches for extraterrestrial intelligence by using our deep space communications as a benchmark to target systems with specific orientations and planetary alignments.”

The team’s paper was published online today in the Astrophysics Journal Letters.

____

Ping Chen Fan et al. 2025. Detection of extraterrestrial civilizations employing a global-level deep space network. apjl 990, L1; doi: 10.3847/2041-8213/adf6b0

Source: www.sci.news

Can You Rely on AI for Web Searches?: Chatbot Optimization Game

Does aspartame cause cancer? The possible cancer-causing effects of popular artificial sweeteners, added to everything from soft drinks to pediatric medicines, have been debated for decades. Its approval in the US was controversial in 1974, some British supermarkets banned its use from their products in the 2000s, and peer-reviewed academic studies have long been at odds. Last year, the World Health Organization said that aspartame is possibly carcinogenic. On the other hand, public health regulators suggest that it is safe to take in commonly used small doses.

While many of us may try to resolve our questions with a simple Google search, this is exactly the kind of controversial discussion that could cause problems for the future of the Internet.

Generative AI chatbots have developed rapidly in recent years, with technology companies quickly touting them as a utopian alternative to a variety of jobs and services, including internet search engines. The idea is that instead of scrolling through a list of web pages to find the answer to a question, an AI chatbot can scour the internet, look up relevant information and compile a short answer to the query. Google and Microsoft are betting big on this idea, already bringing AI-generated summaries to Google Search and Bing.

However, being touted as a more convenient way to find information online has prompted scrutiny of where and how these chatbots choose the information they provide. Looking at the evidence that large-scale language models (LLMs, the engines on which chatbots are built) are the most convincing, three computer science researchers at the University of California, Berkeley, say that current chatbots are found to be overly reliant on superficial relevance of information. They ignore text that includes relevant technical terms and related keywords, while ignoring other features they typically use to assess trustworthiness, such as the inclusion of scientific references and objective language free of personal bias.

Online content can be displayed in a way that increases visibility to the chatbot, making it more likely to appear in the chatbot’s output. For the simplest queries, such selection criteria will provide a sufficient answer. But what a chatbot should do in more complex discussions, such as the debate over aspartame, is less clear.

“Do we want them to simply summarize the search results, or do we want them to function as mini-research assistants who weigh all the evidence and provide a final answer?” asks undergraduate researcher and co-investigator Alexander Wang, author of the study. The latter option provides maximum convenience, but the criteria by which the chatbot selects information becomes even more important. And if one could somehow game those standards, can we guarantee the information chatbots put in front of billions of internet users?

It’s a problem plaguing animation companies, content creators, and others who want to control how they are seen online, and an emerging industry of marketing agencies offering a service known as generative engine optimization (GEO) has caused it. The idea is that online content can be created and displayed in a way that increases its visibility to the chatbot, making it more likely to appear in the chatbot’s output. The benefits are obvious.

The basic principle is similar to search engine optimization (SEO). This is a common technique for building and writing web pages to attract the attention of search engine algorithms, pushing them to the top of the list of results returned when you search on Google or Bing. GEO and SEO share some basic techniques, and websites that are already optimized for search engines are generally more likely to appear in chatbot output.

But those who really want to improve their AI visibility need to think more holistically. “Rankings on AI search engines and LLMs require features and mentions on relevant third-party websites, such as press outlets, articles, forums, and industry publications,” says Viola Eva, founder of marketing firm Flow Agency, incorporating her SEO expertise into GEO.

Chatbots for games are possible, but not easy. And while website owners and content creators have derived an evolving list of SEO do’s and don’ts over the past two decades, there are no clearer rules for working with AI models.

Researchers have demonstrated that chatbots can be controlled tactically through carefully written text strings. So if you want to get a better grip on chatbots, you might want to consider a more hacky approach, like the one discovered by two Harvard computer science researchers. They have proven how chatbots can be tactically controlled by introducing something as simple as a carefully written text string. This “strategic text sequence” looks like a meaningless series of characters, but is actually a subtle command that forces the chatbot to generate a specific response.

Current search engines and the practices surrounding them are not without their own problems. SEO involves some of the most hostile practices for readers on the modern internet. Blogs create a large number of nearly duplicate articles targeting the same high traffic queries. Text tailored to get the attention of Google’s algorithms rather than the reader.

An internet dominated by obedient chatbots raises questions of a more existential kind. When you ask a search engine a question, it returns a long list of web pages. In contrast, chatbots only refer to four or five websites for information.

“For the reader, seeing the chatbot’s response also increases the possibility of interaction,” says Wang. This kind of thinking points to a broader concern called the “direct answer dilemma.” For Google, the company integrated AI-generated summaries into its search engine with a bold slogan: “Let Google do the searching.” But if you’re the type of internet user who wants to make sure you’re getting the most unbiased, accurate, and useful information, you might not want to leave your search in the hands of such susceptible AI.

Source: www.theguardian.com