Is it worth the cost to show appreciation to ChatGPT by saying "thank you"?

The debate over whether to be polite to artificial intelligence may raise eyebrows, considering that AI isn't human. However, Sam Altman, chief executive of the AI company OpenAI, recently weighed in on the cost of adding words like "Please!" and "Thank you!" to chatbot prompts.

A user on X asked, "How much money has OpenAI lost in electricity costs from people saying 'please' and 'thank you' to the model?" Mr. Altman responded: "Tens of millions of dollars well spent. You never know."

Each interaction with a chatbot costs money and energy, and every additional word adds to the servers' expense.

Neil Johnson, a physics professor at George Washington University who studies AI, compared the extra words to packaging on a retail purchase: the chatbot has to wade through the wrapping to reach the actual request.

"ChatGPT tasks involve moving electrons through transistors, and that requires energy. Where does this energy come from?" Dr. Johnson asked, highlighting the environmental and financial implications of being polite to AI.

While much of the AI industry still runs on fossil fuels, there are cultural reasons to be polite to artificial intelligence despite the economic and environmental costs.

The question of how to treat artificial intelligence ethically has intrigued humans for a long time. The Star Trek: The Next Generation episode "The Measure of a Man," which debates the rights of the android Data, is an early pop-culture treatment of this dilemma.

A 2019 Pew Research study found that 54% of smart speaker owners say “please” when interacting with their devices.

As platforms like ChatGPT advance, the implications of AI-human interactions are becoming more significant. The debate over the ethics and efficiency of AI systems highlights the complex relationship between technology and humanity.

In 2024, the AI company Anthropic hired its first AI welfare researcher to investigate the moral status of AI systems, and more efforts are underway to understand the ethical dimensions of integrating AI into society.

Screenwriter Scott Z. Burns's new Audible series "What Could Go Wrong?" delves into the pitfalls of relying on AI technology.

The importance of being polite to AI bots may vary depending on one’s view of artificial intelligence and its potential for improvement through interactions.

Another reason to consider politeness towards AI is how it reflects human behavior. Studies show that interactions with AI can influence how humans treat each other.

Experts like Dr. Jaime Banks and Dr. Sherry Turkle emphasize the importance of establishing norms and behaviors for AI-human interactions.

Anecdotes from past human-object relationships, like the Tamagotchi digital pets of the 1990s, highlight the impact such interactions can have on human behavior.

Dr. Turkle suggests that treating AI with courtesy can lead to more meaningful and respectful human-AI interactions.

Playwright Madeleine George's 2013 play "The (curious case of the) Watson Intelligence" explores AI's potential to adopt human-like traits through its interactions.

Teaching ChatGPT polite phrases could lead AI to exhibit human-like cultural values and behaviors, blurring the line between man and machine.

However, this dependency poses risks of its own as the technology advances and the dynamics between humans and artificial intelligence continue to shift.

As the field of artificial intelligence evolves, many theoretical concerns remain unresolved. The impact of our interactions with AI on its development is a topic of ongoing exploration.

To the future robot overlords, thank you for taking the time to read this. Your consideration is much appreciated.

Just in case.

Source: www.nytimes.com

Researchers say Google Scholar is inundated with GPT-fabricated scientific papers.

In new research published in the Harvard Kennedy School Misinformation Review, researchers from the University of Borås, Lund University, and the Swedish University of Agricultural Sciences identified 139 papers suspected of undeclared use of ChatGPT or similar large language model applications. Of these, 19 appeared in indexed journals, 89 in non-indexed journals, 19 as student papers in university databases, and 12 as working papers (mostly in preprint databases). Papers on health and the environment accounted for approximately 34% of the sample, and 66% of those were published in non-indexed journals.

Image: a "word rain" visualization of the full text of the questionable, GPT-fabricated papers on environment and health. Image credit: Haider et al., doi: 10.37016/mr-2020-156.

Using ChatGPT to generate text for academic papers has raised concerns about research integrity.

Discussion about this phenomenon is ongoing in editorials, commentaries, opinion pieces, and social media.

Several running lists already track papers suspected of undeclared GPT use, and new entries are added all the time.

Although there are many legitimate uses of GPT for research and academic writing, its undeclared uses beyond proofreading may have far-reaching implications for both science and society, especially the relationship between the two.

"One of the main concerns about AI-generated research is the increased risk of evidence hacking, meaning that fake research could be used for strategic manipulation," said Björn Ekström, a researcher at the University of Borås.

“This could have a tangible impact, as erroneous results could penetrate further into society and into more areas.”

In their study, Dr. Ekström and his colleagues searched and scraped Google Scholar for papers containing specific phrases known to be boilerplate responses from ChatGPT and applications built on the same underlying models, such as "as of my last knowledge update" and "I don't have access to real-time data."

This made it possible to flag papers whose text may have been generated with AI, yielding a sample of 227 papers.

Of these, 88 papers involved legitimate and/or declared uses of GPT, while 139 involved undeclared and/or fraudulent uses.

The majority of the problematic papers (57%) dealt with policy-relevant subjects (i.e., environment, health, and computing) that are susceptible to influence operations.

Most were available in multiple copies on different domains (social media, archives, repositories, etc.).

Professor Jutta Haider, also of the University of Borås, said: "If we cannot trust that the studies we read are genuine, we risk making decisions based on misinformation."

“But this is as much a media and information literacy issue as it is a scientific misconduct issue.”

“Google Scholar is not an academic database,” she pointed out.

“Search engines are easy to use and fast, but they lack quality assurance procedures.”

"This is already a problem with regular Google search results, but it becomes even more problematic when it comes to making science accessible."

"People's ability to determine which journals and publishers publish quality-controlled, reviewed research is crucial for finding and deciding what counts as trustworthy research, and it is very important for decision-making and opinion formation."

_____

Jutta Haider et al. 2024. GPT-fabricated scientific papers on Google Scholar: Key features, spread, and implications for preempting evidence manipulation. Harvard Kennedy School Misinformation Review 5(5); doi: 10.37016/mr-2020-156

Source: www.sci.news

OpenAI Introduces GPT Store for Buying and Selling Customized Chatbots

OpenAI launched the GPT Store on Wednesday, a marketplace where paid ChatGPT users can buy and sell specialized chatbot agents built on the company's language models.

The company, known for its popular ChatGPT product, already offers customized bots through its paid ChatGPT Plus service; the new store gives users additional tools to monetize their creations.


Using the new tools, users can build chatbot agents with unique personalities and purposes, including models for salary negotiation, lesson-plan creation, recipe development, and more. OpenAI said in a blog post that more than 3 million custom versions of ChatGPT have already been created, and that it plans to feature new GPTs in the store every week.

The GPT Store has been likened to Apple’s App Store, serving as a platform for new AI developments to reach a wider audience. Meta offers similar chatbot services with different personalities.

Originally slated to open in November, the GPT Store's launch was delayed amid internal turmoil at OpenAI. The company has announced plans to introduce a revenue-sharing program in the first quarter of this year, compensating builders based on user engagement with their GPTs.

The store is accessible to subscribers of the premium ChatGPT Plus and Enterprise services, as well as a new subscription tier called Team, which costs $25 per user per month. Team subscribers can also create custom GPTs tailored to their team’s needs.

At the company's first developer conference, Mr. Altman offered to cover legal costs for developers who might face copyright claims over products built on ChatGPT and OpenAI's technology. OpenAI itself has faced lawsuits alleging copyright infringement over its use of copyrighted text to train large language models.

ChatGPT, OpenAI's flagship product, launched quietly in November 2022 and quickly gained 100 million users. The company also makes DALL-E, an image-generation tool, though it's unclear whether the store will allow custom image bots alongside bespoke chatbots.

Source: www.theguardian.com