Does aspartame cause cancer? The possible cancer-causing effects of popular artificial sweeteners, added to everything from soft drinks to pediatric medicines, have been debated for decades. Its approval in the US was controversial in 1974, some British supermarkets banned its use from their products in the 2000s, and peer-reviewed academic studies have long been at odds. Last year, the World Health Organization said that aspartame is possibly carcinogenic. On the other hand, public health regulators suggest that it is safe to take in commonly used small doses.
While many of us may try to resolve our questions with a simple Google search, this is exactly the kind of controversial discussion that could cause problems for the future of the Internet.
Generative AI chatbots have developed rapidly in recent years, with technology companies quickly touting them as a utopian alternative to a variety of jobs and services, including internet search engines. The idea is that instead of scrolling through a list of web pages to find the answer to a question, an AI chatbot can scour the internet, look up relevant information and compile a short answer to the query. Google and Microsoft are betting big on this idea, already bringing AI-generated summaries to Google Search and Bing.
However, being touted as a more convenient way to find information online has prompted scrutiny of where and how these chatbots choose the information they provide. Looking at the evidence that large-scale language models (LLMs, the engines on which chatbots are built) are the most convincing, three computer science researchers at the University of California, Berkeley, say that current chatbots are found to be overly reliant on superficial relevance of information. They ignore text that includes relevant technical terms and related keywords, while ignoring other features they typically use to assess trustworthiness, such as the inclusion of scientific references and objective language free of personal bias.
Online content can be displayed in a way that increases visibility to the chatbot, making it more likely to appear in the chatbot’s output. For the simplest queries, such selection criteria will provide a sufficient answer. But what a chatbot should do in more complex discussions, such as the debate over aspartame, is less clear.
“Do we want them to simply summarize the search results, or do we want them to function as mini-research assistants who weigh all the evidence and provide a final answer?” asks undergraduate researcher and co-investigator Alexander Wang, author of the study. The latter option provides maximum convenience, but the criteria by which the chatbot selects information becomes even more important. And if one could somehow game those standards, can we guarantee the information chatbots put in front of billions of internet users?
It’s a problem plaguing animation companies, content creators, and others who want to control how they are seen online, and an emerging industry of marketing agencies offering a service known as generative engine optimization (GEO) has caused it. The idea is that online content can be created and displayed in a way that increases its visibility to the chatbot, making it more likely to appear in the chatbot’s output. The benefits are obvious.
The basic principle is similar to search engine optimization (SEO). This is a common technique for building and writing web pages to attract the attention of search engine algorithms, pushing them to the top of the list of results returned when you search on Google or Bing. GEO and SEO share some basic techniques, and websites that are already optimized for search engines are generally more likely to appear in chatbot output.
But those who really want to improve their AI visibility need to think more holistically. “Rankings on AI search engines and LLMs require features and mentions on relevant third-party websites, such as press outlets, articles, forums, and industry publications,” says Viola Eva, founder of marketing firm Flow Agency, incorporating her SEO expertise into GEO.
Chatbots for games are possible, but not easy. And while website owners and content creators have derived an evolving list of SEO do’s and don’ts over the past two decades, there are no clearer rules for working with AI models.
Researchers have demonstrated that chatbots can be controlled tactically through carefully written text strings. So if you want to get a better grip on chatbots, you might want to consider a more hacky approach, like the one discovered by two Harvard computer science researchers. They have proven how chatbots can be tactically controlled by introducing something as simple as a carefully written text string. This “strategic text sequence” looks like a meaningless series of characters, but is actually a subtle command that forces the chatbot to generate a specific response.
Current search engines and the practices surrounding them are not without their own problems. SEO involves some of the most hostile practices for readers on the modern internet. Blogs create a large number of nearly duplicate articles targeting the same high traffic queries. Text tailored to get the attention of Google’s algorithms rather than the reader.
An internet dominated by obedient chatbots raises questions of a more existential kind. When you ask a search engine a question, it returns a long list of web pages. In contrast, chatbots only refer to four or five websites for information.
“For the reader, seeing the chatbot’s response also increases the possibility of interaction,” says Wang. This kind of thinking points to a broader concern called the “direct answer dilemma.” For Google, the company integrated AI-generated summaries into its search engine with a bold slogan: “Let Google do the searching.” But if you’re the type of internet user who wants to make sure you’re getting the most unbiased, accurate, and useful information, you might not want to leave your search in the hands of such susceptible AI.
Source: www.theguardian.com