The Most Environmentally Costly ChatGPT Prompts, According to a New Study

Each time I interact with ChatGPT, I consume energy. But what does that really mean? A new study has highlighted the environmental costs of using large language models (LLMs) and offered insights into how users can minimize their carbon footprints.

German researchers evaluated 14 open-source LLMs, ranging from 14 to 72 billion parameters, posing 1,000 benchmark questions to each and measuring the CO2 emissions generated by its answers.

They found that models using internal reasoning to formulate an answer can produce emissions up to 50 times greater than those of a brief, direct response.

Conversely, models with a higher number of parameters—typically more accurate—also emit more carbon.

Nonetheless, the model isn’t the only factor; user interaction plays a significant role as well.

“When people use friendly phrases like ‘please’ and ‘thank you,’ LLMs tend to generate longer answers,” Maximilian Dorner, a researcher at Hochschule München University of Applied Sciences and the study’s lead author, told BBC Science Focus.

“This results in the production of more words, which means longer processing times for the model. The extra words don’t enhance the utility of the answer, yet they significantly increase the environmental impact.”

“Whether the model generates 10,000 words of highly useful content or 10,000 words of gibberish, the emissions remain the same,” said Dorner.

Being polite to an AI platform uses more power – Getty

This indicates that users can help reduce emissions by encouraging succinct responses from AI models, such as asking for bullet points instead of detailed paragraphs. Casual requests for images, jokes, or essays when unnecessary can also contribute to climate costs.

The study revealed that questions demanding more in-depth reasoning—like topics in philosophy or abstract algebra—yield significantly higher emissions compared to simpler subjects like history.

Researchers tested smaller models that could operate locally, yet Dorner noted that larger models like ChatGPT, which possess more than 10 times the parameters, likely exhibit even worse patterns of energy consumption.

“The primary difference between the models I evaluated and those powering Microsoft Copilot or ChatGPT is the parameter count,” Dorner said. These widely used models have roughly ten times the parameters, which he says translates into a roughly tenfold rise in CO2 emissions.

Dorner encourages not only individual users to be mindful but also highlights that organizations behind LLMs have a role to play. For instance, he suggests that they could mitigate unnecessary emissions by creating systems that select the smallest model necessary for accurately answering each question.
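The routing idea Dorner describes can be sketched in a few lines. This is a hypothetical illustration, not the study's method: the model names, tiers, and the keyword heuristic below are all invented for the example; a real system would use a trained difficulty classifier rather than keyword matching.

```python
# Hypothetical sketch of smallest-sufficient-model routing: classify a
# question's likely difficulty, then dispatch it to the smallest model
# tier expected to answer it accurately. All names here are illustrative.

SMALL, MEDIUM, LARGE = "7b-model", "32b-model", "72b-model"

# Topics the study found to drive heavy reasoning (and heavy emissions).
HARD_TOPICS = {"philosophy", "abstract algebra", "prove", "derive"}

def pick_model(question: str) -> str:
    """Return the smallest model tier expected to handle the question."""
    text = question.lower()
    if any(topic in text for topic in HARD_TOPICS):
        return LARGE   # deep reasoning: accept the higher emissions
    if len(question.split()) > 30:
        return MEDIUM  # long, multi-part questions
    return SMALL       # simple factual lookups, e.g. history questions

print(pick_model("When did World War II end?"))            # 7b-model
print(pick_model("Prove this result in abstract algebra"))  # 72b-model
```

In practice the payoff comes from the distribution of queries: if most questions are simple, most traffic never touches the large, emission-heavy model.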

“I’m a big supporter of these tools,” he remarked. “I utilize them daily. The key is to engage with them concisely and understand the implications.”


About our experts

Maximilian Dorner, PhD candidate at Hochschule München University of Applied Sciences.

Source: www.sciencefocus.com

Trump’s encouragement prompts AI companies to push for reduced regulations

Technology leaders in the artificial intelligence sector have been pushing for regulations for over two years. They have expressed concerns about the potential risks of generative AI and its impact on national security, elections, and jobs.

OpenAI CEO Sam Altman testified before Congress in May 2023 that if AI technology goes wrong, it can go “quite wrong.”

However, following Trump’s election, these technology leaders have shifted their stance and are now focused on advancing their products without government interference.

Recently, companies like Meta, Google, and OpenAI have urged the Trump administration to block state AI laws and allow the use of copyrighted material to train AI models. They have also sought incentives such as tax cuts and grants to support their AI development.

This change in approach was influenced by Trump declaring AI as a strategic asset for the country.

Laura Caroli, a senior fellow at the Wadhwani AI Center, noted that concerns about safety and responsible AI have diminished due to the encouragement from the Trump administration.

AI policy experts are concerned about the potential negative consequences of unchecked AI growth, including the spread of disinformation and discrimination in various sectors.

Tech leaders took a different stance in September 2023, supporting AI regulations proposed by Senator Chuck Schumer. Afterward, the Biden administration collaborated with major AI companies to enhance safety standards and security.

(The New York Times has sued OpenAI and Microsoft over copyright infringement claims related to AI content. OpenAI and Microsoft have denied the allegations.)

Following Trump’s election victory, tech companies intensified lobbying efforts. Google, Meta, and Microsoft donated to Trump’s inauguration, and leaders like Mark Zuckerberg and Elon Musk engaged with the president.

Trump embraced AI advancements, welcoming investments from companies like OpenAI, Oracle, and SoftBank. The administration emphasized the importance of AI leadership for the country.

Vice President JD Vance advocated for optimistic AI policies at various summits, highlighting the need for US leadership in AI.

Tech companies are responding to the President’s executive orders on AI, submitting comments and proposals for future AI policies within 180 days.

OpenAI and other companies are advocating for the use of copyrighted materials in AI training, arguing for legal access to such content.

Companies like Meta, Google, and Microsoft support the legal use of copyrighted data for AI development. Some are pushing for open-source AI to accelerate technological progress.

Venture capital firm Andreessen Horowitz is advocating for open-source models in AI development.

Andreessen Horowitz and other tech firms are engaged in debates over AI regulations, emphasizing the need for safety and consumer protection measures.

Civil rights groups are calling for audits to prevent discrimination in AI applications, while artists and publishers demand transparency in the use of copyrighted materials.

Source: www.nytimes.com

Viral Video of Tesla Driver Using VR Headset Prompts US Government Alert

U.S. Transportation Secretary Pete Buttigieg said on Monday that human drivers must always pay attention, after videos surfaced of people driving Teslas while wearing what appears to be Apple’s recently released Vision Pro headset.


Buttigieg responded on Twitter/X to a video, viewed more than 24 million times, that shows a Tesla driver seemingly gesturing with his hands to manipulate a virtual-reality interface.

In posts on social media on Monday, Buttigieg said that despite their names, Tesla’s driver-assist features (Autopilot, Enhanced Autopilot, Full Self-Driving) do not make the vehicle fully self-driving.

“Be careful – all advanced driver assistance systems available today require a human driver to be in control and fully engaged in the driving task at all times,” Buttigieg said.

Apple’s Vision Pro, released last week, blends three-dimensional digital content with views of the outside world. Apple, which says the headset should never be used while operating a moving vehicle, did not respond to a request for comment.

Note: All currently available advanced driver assistance systems require the human driver to be in control and fully engaged in the driving task at all times. pic.twitter.com/OpPy36mOgC

— Secretary Pete Buttigieg (@SecretaryPete) February 5, 2024


Alan Dye, Apple’s vice president of human interface design, said in June that the Vision Pro works as a headset that lets users interact with “apps and experiences” in an augmented-reality (AR) version of their surroundings or in a fully immersive virtual-reality (VR) space.

“Apple Vision Pro relies solely on your eyes, hands, and voice,” Dye said in June. “Browse the system just by looking. App icons come to life when you look at them. Simply tap your fingers together to select, and scroll with a light flick.”

“Apple Vision Pro will change the way we communicate, collaborate, work, and enjoy entertainment,” said Apple CEO Tim Cook. But the company did not intend for the Vision Pro to change the way people commute.

Tesla did not immediately respond to a request for comment.

Buttigieg has previously made similar comments about Tesla’s Autopilot. Tesla says its advanced driver features are intended for use by fully alert drivers who “keep their hands on the wheel and ready to take over at any time.”

Source: www.theguardian.com

Spotify is testing AI playlist feature with prompts

Earlier this fall, it was revealed that Spotify was developing a new feature that would let users of its streaming app create playlists using AI prompts. Now its “AI playlist” feature has been spotted in the wild as part of a test of how users respond to AI-driven playlist creation. The company confirmed the test to TechCrunch but did not provide details about the underlying technology, how it works, or a release date.

The feature surfaced in a TikTok video from user @robdad_, who wrote, “Have you discovered Spotify’s ChatGPT by chance?” According to a screenshot he shared, the AI playlist feature is accessed from the “Your Library” tab in the Spotify app by tapping the plus (+) button in the top-right corner of the screen. A pop-up menu then appears, with “AI playlist” listed as a new option below the existing “Playlist” and “Blend” options.

The feature’s description says, “Using AI to turn your ideas into playlists,” and notes that it’s currently only available in English.


Once the option is selected, users are presented with a screen where they can type a prompt into an AI chatbot-style box or browse a list of suggested prompts to get started. The video shows suggestions such as “Focus on work with instrumental electronica,” “Fill the silence with background cafe music,” “Lift my mood with fun, upbeat, positive songs,” and “Explore a niche genre like witch house.”

When the user chose the last of these prompts, the AI chatbot responded, “Processing your request…” and offered a sample playlist. From this screen, you can swipe left on songs you don’t want, further refining the playlist.

TechCrunch reported in October that references to the new AI feature had been spotted in Spotify’s mobile app by tech veteran Chris Messina, who shared a screenshot of a feature that creates “playlists based on prompts.” At the time, however, Spotify declined to confirm its plans for AI playlists, saying it doesn’t comment on speculation about new features.

The company is still trying to temper user expectations and excitement for the AI playlist feature, confirming only that it is running a limited test.

“We routinely conduct a number of tests. Some of those tests end up paving the way for our broader user experience, while others serve only as an important learning,” a Spotify spokesperson said, adding, “We have nothing further to share at this time.”

While the company isn’t ready to launch AI playlists just yet, the streamer is investing heavily in AI across its app. Earlier this year it launched its AI DJ, which delivers personalized playlists with spoken commentary in an AI voice modeled on Spotify’s head of cultural partnerships, Xavier “X” Jernigan. The feature became available worldwide in August.

Ziad Sultan, Spotify’s head of personalization, told TechCrunch at the DJ’s launch that the company wants to be known for its “AI expertise” across large language models, generative voice, and personalization.

Spotify CEO Daniel Ek has also teased other ways the company could leverage AI, including using generative AI to summarize podcasts and to automatically produce audio ads. He has likewise touted AI’s role in music production, saying he could imagine artists using AI tools when creating new songs. Spotify is also exploring AI-generated, host-read podcast ads that sound like real people, as well as using AI to power its personalization technology. Playlist creation, one of the app’s most popular use cases, is an obvious next place to apply it.

There’s no word yet on when the new AI capabilities will be made publicly available. In the meantime, if the feature shows up in your app, let us know how well it works.

Sarah Perez can be reached at sarahp@techcrunch.com and Signal at (415) 234-3994.

Source: techcrunch.com