Artificial Intelligence (AI) is revolutionizing education by automating tasks like grading and communication with parents, allowing teachers to focus more on student guidance, engagement, and hands-on learning. As technology advances, the future may hold real-time tracking of student progress, automated assessments, and personalized learning paths.
While AI enhances classroom efficiency, the UK government stresses its use should be limited to low-stakes assessments, urging teachers to maintain transparency. This emphasizes the crucial role of human expertise in ensuring the integrity and fairness of high-stakes evaluations.
Science educators possess profound subject knowledge, which is vital for equitable assessments. Their professional judgment and contextual understanding are key to accurately reflecting each student’s potential while maintaining assessment integrity.
Leverage Your Expertise in Education
Pearson, the world’s leading educational company, employs over 18,000 professionals across 70+ countries, positively impacting millions of learners and educators. Roles like examiners, facilitators, and subject experts are crucial in ensuring students achieve the grades necessary to thrive in their careers.
By becoming an Examiner with Pearson, you can play an essential part in our mission to empower students, using your expertise to help maintain the rigorous standards that shape educational qualifications and open doors to future opportunities.
Professional Development Opportunities
Taking on the role of an Examiner offers numerous benefits that positively impact your professional trajectory:
Insight: Gain a comprehensive view of national performance, learning from common mistakes and successful strategies that can benefit your students.
Additional Income: Enjoy flexible work-from-home opportunities that fit seamlessly with your existing educational responsibilities.
Expand Your Network: Connect with fellow education professionals from diverse backgrounds, exchanging ideas and building a supportive community.
Professional Recognition: Achieve recognized CPD credentials, enriching your professional portfolio with respected subject matter expertise.
What Qualifications Are Required?
To qualify for most Pearson Examiner roles, candidates typically need at least one year of teaching experience within the last eight years, a degree in the relevant subject, and a pertinent educational qualification or its equivalent. A recommendation from a senior professional with teaching experience at your institution is also necessary.
Some vocational qualifications may only require relevant work experience, bypassing the need for a degree or teaching certification.
A data center in Ashburn, Virginia. Photo: Jim Lo Scalzo/EPA/Shutterstock
As the artificial intelligence sector grows swiftly, concerns about the ecological effects of data centers are increasingly being discussed. New projections indicate that the industry may fall short of achieving net-zero emissions by 2030.
Fengqi You and colleagues at Cornell University in New York have estimated the potential energy, water, and carbon consumption of today’s leading AI servers through 2030, under various growth scenarios and specific U.S. data center locations. Their analysis combines anticipated chip production, server energy demands, and cooling efficiency with state-level power grid data. While not all AI enterprises have declared net-zero objectives, major tech firms involved in AI, like Google, Microsoft, and Meta, have set targets for 2030.
“The rapid expansion of AI computing is fundamentally altering everything,” says You. “We’re striving to understand the implications of this growth.”
The researchers estimate that establishing AI servers in the U.S. may require between 731 million and 1.125 billion cubic meters of additional water by 2030, along with greenhouse gas emissions of 24 million to 44 million tons of carbon dioxide each year. These estimates hinge on the pace of AI demand growth, the number of advanced servers that can actually be produced, and the sites of new U.S. data centers.
To address these issues, the researchers modeled five scenarios based on varying growth rates and outlined potential measures to minimize the impact. “The top priority is location,” You explains. By situating data centers in Midwestern states with abundant water resources and a significant share of renewable energy in the power grid, the environmental fallout can be mitigated. The team also emphasizes that transitioning to decarbonized energy sources and improving the efficiency of computing and cooling are essential strategies. Together, these three measures could lower the industry’s emissions by 73% and its water usage by 86%.
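The study’s actual model is far richer, but the shape of such a scenario calculation is easy to sketch. The Python back-of-envelope below is purely illustrative: every constant, scenario name, and intensity figure is an invented placeholder rather than a value from the Cornell study, and it shows only how siting choices propagate into water and carbon totals.

```python
# Illustrative scenario model for AI data center growth.
# ALL figures are invented placeholders, not values from the Cornell study.

KWH_PER_SERVER_YEAR = 30_000  # assumed annual draw per AI server, incl. cooling

# (liters of cooling water per kWh, kg CO2 per kWh) by siting strategy
INTENSITY = {
    "default_grid": (1.8, 0.40),
    "water_rich_renewable": (1.0, 0.15),  # e.g. Midwest siting, cleaner grid
}

SCENARIOS = {
    # name: (AI servers added by 2030, share sited in favorable locations)
    "slow_growth": (2_000_000, 0.2),
    "fast_growth": (5_000_000, 0.2),
    "fast_growth_relocated": (5_000_000, 0.8),
}

for name, (servers, good_share) in SCENARIOS.items():
    energy_kwh = servers * KWH_PER_SERVER_YEAR
    water_liters = co2_kg = 0.0
    for share, site in ((good_share, "water_rich_renewable"),
                        (1 - good_share, "default_grid")):
        liters_per_kwh, kg_per_kwh = INTENSITY[site]
        water_liters += energy_kwh * share * liters_per_kwh
        co2_kg += energy_kwh * share * kg_per_kwh
    # 1e12 liters = 1 billion cubic meters; 1e9 kg = 1 million tonnes
    print(f"{name}: {water_liters / 1e12:.2f} bn m3 water/yr, "
          f"{co2_kg / 1e9:.1f} Mt CO2/yr")
```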
However, public resistance may disrupt these predictions, particularly over the environmental ramifications of new data centers. In Virginia, home to an eighth of the world’s data centers, residents have voiced opposition to upcoming construction plans, citing concerns over water resources and broader environmental impacts. Similar petitions against data centers have arisen in Pennsylvania, Texas, Arizona, California, and Oregon. According to Data Center Watch, a firm that monitors data center developments, local opposition is stalling approximately $64 billion worth of projects. And even where projects do go ahead over local objections, questions remain about how much power and water they will consume.
This new research is viewed cautiously by those analyzing and quantifying AI’s environmental effects. “The AI field evolves so quickly that making accurate future predictions is incredibly challenging,” says Sasha Luccioni from the AI company Hugging Face. “As mentioned by the authors, breakthroughs in the industry can radically alter computing and energy needs, reminiscent of DeepSeek’s innovative techniques that reduced reliance on brute-force calculations.”
Chris Preist at the University of Bristol in the UK concurs, highlighting the need for increased investment in renewable energy infrastructure and the importance of data center placement. “I believe their projections for water usage in direct cooling of AI data centers are rather pessimistic,” he remarks, suggesting that the model’s “best case” scenario aligns more closely with “business as usual” for contemporary data centers.
Luccioni believes the paper underscores a vital missing element in the AI ecosystem: “greater transparency.” She notes that this issue can be addressed by “mandating model developers to track and disclose their computing and energy consumption, share this information with users and policymakers, and commit to reducing overall environmental impacts, including emissions.”
The government’s initiative to use artificial intelligence to speed up housing planning could face an unforeseen hurdle: AI itself.
A new platform named Objector is providing “policy-backed appeals in minutes” for those dissatisfied with nearby development plans.
Utilizing generative AI, the service examines planning applications, evaluates grounds for objection, and rates the potential impact as ‘high’, ‘medium’, or ‘low’. It also automatically generates objection letters, AI-enhanced speeches for planning committees, and even AI-produced videos aimed at persuading councillors.
Kent residents Hannah and Paul George developed this tool after their lengthy opposition to a proposed mosque near their residence, estimating they invested hundreds of hours in the planning process.
They’re making the service available for £45, specifically targeting people without the financial means to hire specialist lawyers to navigate the complexities of planning law. They believe it will “empower everyone, level the playing field, and enhance fairness in the process. Though we are a small company, we aim to make a significant impact.”
A similar offering, Planningobjection.com, markets a £99 AI-generated objection letter with the slogan ‘Stop complaining and take action’.
A prominent planning lawyer cautioned that while such AI tools could make objecting faster and easier, widespread adoption might overwhelm the planning system and inundate planners with objections.
Sebastian Charles from Aardvark Planning Law said his firm had seen AI-generated objections containing references to litigation and appeal decisions that, when checked by human lawyers, turned out not to exist.
“The risk lies in decisions being based on flawed information,” he remarked. “Elected officials could mistakenly trust AI-generated planning speeches, even when rife with inaccuracies about case law and regulations.”
Hannah George, co-founder of Objector, rejected claims that the platform promotes nimbyism.
“It’s simply about making the planning system more equitable,” she explained. “Currently, our experience suggests that it’s far from fair. With the government’s ‘build, baby, build’ approach, we only see things heading in one direction.”
Objector acknowledged the potential for AI-generated inaccuracies, stating that using multiple AI models and comparing their outputs mitigates the risk of “hallucinations” (where AI generates falsehoods).
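Objector has not said how that comparison is implemented, so what follows is only a minimal sketch of the general idea (majority agreement across independent models), with canned stand-in responses in place of real API calls:

```python
from collections import Counter

# Stand-in outputs from three hypothetical models; in a real service these
# would come from separate LLM API calls answering the same planning question.
responses = {
    "model_a": "The site lies within a designated conservation area.",
    "model_b": "The site lies within a designated conservation area.",
    "model_c": "The site is on registered common land.",  # the outlier
}

def cross_checked_claim(outputs: dict[str, str], quorum: int = 2) -> str | None:
    """Keep a claim only if at least `quorum` models independently produce it;
    disagreement is treated as a likely hallucination and discarded."""
    counts = Counter(text.strip().lower() for text in outputs.values())
    claim, votes = counts.most_common(1)[0]
    return claim if votes >= quorum else None

print(cross_checked_claim(responses))  # the conservation-area claim (2 of 3)
```

Production systems would compare meaning rather than exact strings, but the principle is the same: an invented case citation is unlikely to be reproduced independently by a second model.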
The current Objector platform is oriented towards small-scale planning applications, such as modifications to a neighbor’s home or the conversion of an office building. George mentioned that they are developing features to address larger projects, such as residential developments on greenbelt land.
The Labour government is advocating for AI as part of the solution to the current planning gridlock. Recently, it introduced a tool named Extract, which aims to expedite the planning process and help the government meet its goal of building 1.5 million new homes.
However, an impending AI “arms race” may be on the horizon, warned John Myers, director of the Yimby Alliance, a campaign advocating for more housing with community backing.
“This will intensify opposition to planning applications and lead to people unearthing obscure objections they would never have found on their own,” he stated.
Myers suggested a new dynamic could emerge in which “one faction employs AI to expedite the process, while the opposing faction utilizes AI to impede it.” He added: “As long as we lack a method to progress with desirable development, this stalemate will persist.”
The government might already possess AI systems capable of managing the rising number of dissenting voices spawned by AI. It recently unveiled a tool named Consult, which analyses public consultation responses.
Consult relies on large language models akin to those utilized by Objector, and the government hopes to see such tools widely implemented, though AI on the other side may simply swell the volume of consultation responses it has to digest.
Paul Smith, managing director of Strategic Land Group, reported this month a rise in AI use among those opposing planning applications.
“AI-based opposition undermines the very rationale of public consultation,” he expressed in Building magazine. “It’s claimed that local communities are best suited to understand their areas…hence, we seek their input.”
“However, if residents simply get a machine to discover reasons to object for them, what’s the purpose of soliciting their opinions in the first place?”
Sir Richard Evans, a distinguished British historian, authored three expert witness reports for libel trials involving Holocaust denier David Irving, pursued his doctorate under Theodore Zeldin, took over the Regius Professorship of History at Cambridge University (a title originally bestowed by King Henry VIII), and supervised a dissertation on Bismarck’s social policy.
However, all these details were fabricated, as Professor Evans found when he logged onto Grokipedia, the AI-driven encyclopedia launched last week by the world’s richest individual, Elon Musk.
This marks a rocky beginning for humanity’s latest venture to encapsulate the entirety of human knowledge, or, as Musk describes it, to establish a compendium of “the truth, the whole truth, and nothing but the truth,” created through the capabilities of his Grok artificial intelligence model.
Musk, for his part, claimed on Tuesday that Grokipedia is “better than Wikipedia,” or “Wokipedia,” as his supporters deride the incumbent, reflecting their belief that the leading online encyclopedia often leans toward leftist narratives. One post on X encapsulated the triumphant mood among Musk’s supporters: “Elon just killed Wikipedia. Good for you.”
Nevertheless, users quickly discovered that Grokipedia mainly excerpts the very site it set out to supplant, is rife with inaccuracies, and seems to endorse the right-wing narratives championed by Musk. In a series of posts promoting his creations this week, Musk asserted that “a British civil war is inevitable,” urged Britons to “ally with the hardliners” like far-right figure Tommy Robinson, and claimed only the AfD party could “save Germany.”
Musk is so captivated by his AI encyclopedia that he has expressed a desire to engrave “a comprehensive collection of all knowledge” into stable oxide and place a copy in orbit, on the moon, and on Mars, to ensure its preservation for the future.
However, Evans identified a more pressing issue with Musk’s application of AI for fact-checking and verification. As a specialist in the Third Reich, he shared with the Guardian that “contributions to chat rooms are granted the same weight as serious academic work.” He emphasized, “AI merely observes everything.”
Richard Evans noted that Grokipedia’s entry on Albert Speer (shown to the left of Hitler) reiterated fabrications and distortions propagated by the Nazi Munitions Minister himself. Photo: Picture Library
He pointed out that the article on Albert Speer, Hitler’s architect and wartime munitions minister, perpetuated lies debunked in his award-winning 2017 biography. Evans also said the entry on Eric Hobsbawm, the Marxist historian whose biography Evans wrote, falsely claimed that Hobsbawm experienced Germany’s hyperinflation of 1923 and served as an officer in the Royal Corps of Signals, and overlooked the fact that he had married twice.
David Larsson Heidenblad, deputy director of the centre for the history of knowledge at Lund University in Sweden, said the launch represents a clash of knowledge cultures.
“We live in an era where there is a prevalent belief that algorithmic aggregation is more trustworthy than interpersonal insight,” Heidenblad remarked. “The Silicon Valley mindset significantly diverges from the traditional academic methodology. While Silicon Valley’s knowledge culture embraces iterations and views mistakes as part of the process, academia builds trust gradually and fosters scholarship over extended periods, during which the illusion of total knowledge dissipates. These represent the genuine processes of knowledge.”
The launch of Grokipedia follows a long tradition of encyclopedias, from the Yongle Encyclopedia of 15th-century China to the Enlightenment-era creations of 18th-century France. These were succeeded by the Encyclopaedia Britannica and, since 2001, the crowd-sourced Wikipedia. Grokipedia, however, stands out as the first such venture substantially driven by AI, raising a pressing question: who governs the truth when an AI controlled by powerful entities holds the pen?
“If Mr. Musk is behind it, I fear there could be political manipulation,” said Peter Burke, a cultural historian and professor emeritus at Emmanuel College, Cambridge, and author of a 2000 social history of knowledge reaching back to Johannes Gutenberg’s printing press in the 15th century.
“While some aspects may be evident to certain readers, the concern is that others might overlook them,” Burke elaborated, highlighting that many entries in the encyclopedia were anonymous, lending them an “air of authority they do not deserve.”
“An AI-generated encyclopedia (a sanitized reflection of reality) is a superior offering compared to what we’ve had in the past,” asserted Andrew Dudfield, head of AI at the UK-based fact-checking organization Full Fact. “While we lack the same transparency, we desire comparable trust. There’s ambiguity regarding how much input was human and how much was produced by AI, along with what the AI’s agenda was. Trust becomes problematic when choices remain obscured.”
Musk was encouraged to launch Grokipedia by Donald Trump’s technology adviser David Sacks, among others, who criticized Wikipedia as “hopelessly biased” and maintained by an “army of leftist activists.”
Grokipedia refers to the far-right group Britain First as a “patriotic party,” which delighted its leader Paul Golding (left), who was imprisoned for anti-Muslim hate crimes in 2018. Photo: Gareth Fuller/PA
Until 2021, Musk expressed support for Wikipedia, celebrating its 20th anniversary on Twitter with “I’m so glad you exist.” By October 2023, however, his growing disdain for the platform led him to offer it $1bn “if it would change its name to Dickipedia.”
Yet, many of Grokipedia’s 885,279 articles available in its launch week were nearly verbatim reproductions from Wikipedia, including entries on the PlayStation 5, Ford Focus, and Led Zeppelin. Nonetheless, other components differ substantially.
Grokipedia’s entry on Russia’s invasion of Ukraine cites the Kremlin as a main information source, incorporating official Russian language regarding the “denazification” of Ukraine, the defense of ethnic Russians, and the removal of threats to Russian security. In contrast, Wikipedia characterizes Putin’s views as imperialistic and states he “baselessly claimed that the Ukrainian government is neo-Nazi.”
Grokipedia described the turmoil at the U.S. Capitol in Washington, D.C., on January 6, 2021, as a “riot” rather than an insurrection or attempted coup. It asserted an “empirical basis” for the belief that mass immigration is deliberately engineering the demographic erasure of white populations in Western nations, a notion critics dismiss as a conspiracy theory.
Grokipedia’s section on Donald Trump’s conviction for falsifying business records related to the Stormy Daniels case stated it was decided “after a trial in a heavily Democratic jurisdiction” and omitted mention of his conflicts of interest, such as receiving a private jet from Qatar or the Trump family’s cryptocurrency enterprise.
Grokipedia described the unrest at the U.S. Capitol in Washington, D.C., on January 6, 2021, as a “riot” rather than an insurrection or attempted coup. Photo: Leah Millis/Reuters
Wikipedia responded to Grokipedia’s inception with poise, stating it seeks to understand how Grokipedia will function.
“In contrast to new endeavors, Wikipedia’s advantages are evident,” a spokesperson for the Wikimedia Foundation remarked. “Wikipedia upholds transparent guidelines, meticulous volunteer oversight, and a robust culture of continuous enhancement. Wikipedia is an encyclopedia designed to inform billions of readers without endorsing a particular viewpoint.”
Jesus strolls through the lush green field holding a selfie stick. The opening notes of Billie Eilish’s ethereal tune rise like a prayer. “It’s all good, besties, this is my choice. Totally a genuine Save Humanity Arc,” he smiles. “Adore it for me.” Jesus playfully tucks his Jonathan Van Ness hair behind his ears.
We transition to a new scene. He still wields the selfie stick, but now he’s wandering through a gritty town. “So, I told the team I had to die. Peter literally tried to gaslight me. It’s not dramatic, babes. This is a prophecy.”
Cut to Jesus at a candlelit feast. “It’s more of a conversation, so here I am in the middle of dinner. Judas couldn’t even hold my gaze,” he shakes his head, then turns to the camera, grinning at his insight. “Such a phony!”
Initially, videos of this genre—a retelling of biblical tales through the lens of Americanized video blog culture—may seem bizarre and sacrilegious. However, might they represent a unique synthesis of the Holy Trinity of 2025: AI, influencer culture, and rising conservatism? Are these videos indicative of our era? Do they reflect the concerns of American conservatism? Am I being subtly influenced towards Christianity? Why do these Biblical inspirations feel oddly alluring? Why can’t I look away? What’s happening to my brain?!
My first encounter with these biblical video blogs was while I lounged in bed. When the algorithm unveiled Joseph of Nazareth, I momentarily halted my endless scrolling. “Whoa, look at that fit! Ancient rock vibes.” I wiped the drool from my chin and took a moment. Although mindlessly scrolling may not usually be a cure for mental fatigue, that day, I felt like Daniel in the lion’s den or Jonah in the whale. My commitment to scrolling brought me a sense of salvation.
In my younger days, I flirted with religion. When my grandparents visited, I would kneel in prayer, attend Bible studies, and socialize with youth groups to meet friends and boys. My brief infatuation with Hillsong (I was 13 and just wanted plans for a Friday night) ended when: a) the girl in front of me screamed, “I’ve been captured by the devil,” and b) I sneaked behind the church curtains to find the teenagers locked in each other’s arms.
My attitude towards both faith and spirituality has since transformed. Now, my spiritual routine consists of exclamations like “Jesus, take the wheel!” or “What a deity!” as I snap photos of church art while traversing Catholic countries, to share on Instagram later.
Recently, I came clean to a friend about my obsession, only to find I was evangelizing to a fellow enthusiast. She mentioned that Jesus resembled the first influencer and that Mary and Joseph embodied the archetypal toxic vlog parents. If Judas were alive today, he would upload lengthy unedited rants on YouTube.
Momentarily, I ponder the environmental ramifications. How much water was used for Mary’s perfect dab? What resources were consumed so AI Jesus could jokingly narrate a wine-making tutorial? And what is all this doing to the planet? Hold on! Shhh, the next video is starting.
Adam is now seated in a podcast studio, headphones on, microphone positioned, dressed informally in leaf-patterned fabric. “So, God creates me? Boom. The first man. No parents, nothing. Ah… I’m literally going to be everyone’s dad! When they split up, I’ll ensure they clash endlessly. Another! Another! Another! Another!”
<div id="">
<p>
<figure class="ArticleImage">
<div class="Image__Wrapper">
<img class="Image" alt="" width="1350" height="900" src="https://images.newscientist.com/wp-content/uploads/2025/04/29124741/SEI_249299022.jpg" sizes="(min-width: 1288px) 837px, (min-width: 1024px) calc(57.5vw + 55px), (min-width: 415px) calc(100vw - 40px), calc(70vw + 74px)" srcset="https://images.newscientist.com/wp-content/uploads/2025/04/29124741/SEI_249299022.jpg?width=300 300w, https://images.newscientist.com/wp-content/uploads/2025/04/29124741/SEI_249299022.jpg?width=400 400w, https://images.newscientist.com/wp-content/uploads/2025/04/29124741/SEI_249299022.jpg?width=500 500w, https://images.newscientist.com/wp-content/uploads/2025/04/29124741/SEI_249299022.jpg?width=600 600w, https://images.newscientist.com/wp-content/uploads/2025/04/29124741/SEI_249299022.jpg?width=700 700w, https://images.newscientist.com/wp-content/uploads/2025/04/29124741/SEI_249299022.jpg?width=800 800w, https://images.newscientist.com/wp-content/uploads/2025/04/29124741/SEI_249299022.jpg?width=837 837w, https://images.newscientist.com/wp-content/uploads/2025/04/29124741/SEI_249299022.jpg?width=900 900w, https://images.newscientist.com/wp-content/uploads/2025/04/29124741/SEI_249299022.jpg?width=1003 1003w, https://images.newscientist.com/wp-content/uploads/2025/04/29124741/SEI_249299022.jpg?width=1100 1100w, https://images.newscientist.com/wp-content/uploads/2025/04/29124741/SEI_249299022.jpg?width=1200 1200w, https://images.newscientist.com/wp-content/uploads/2025/04/29124741/SEI_249299022.jpg?width=1300 1300w, https://images.newscientist.com/wp-content/uploads/2025/04/29124741/SEI_249299022.jpg?width=1400 1400w, https://images.newscientist.com/wp-content/uploads/2025/04/29124741/SEI_249299022.jpg?width=1500 1500w, https://images.newscientist.com/wp-content/uploads/2025/04/29124741/SEI_249299022.jpg?width=1600 1600w, https://images.newscientist.com/wp-content/uploads/2025/04/29124741/SEI_249299022.jpg?width=1674 1674w, https://images.newscientist.com/wp-content/uploads/2025/04/29124741/SEI_249299022.jpg?width=1700 1700w, https://images.newscientist.com/wp-content/uploads/2025/04/29124741/SEI_249299022.jpg?width=1800 1800w, https://images.newscientist.com/wp-content/uploads/2025/04/29124741/SEI_249299022.jpg?width=1900 1900w, https://images.newscientist.com/wp-content/uploads/2025/04/29124741/SEI_249299022.jpg?width=2006 2006w" loading="eager" fetchpriority="high" data-image-context="Article" data-image-id="2478323" data-caption="The logo of the social media platform Reddit" data-credit="Artur Widak/NurPhoto via Getty Image"/>
</div>
<figcaption class="ArticleImageCaption">
<div class="ArticleImageCaption__CaptionWrapper">
<p class="ArticleImageCaption__Title">Logo of the Social Media Platform Reddit</p>
<p class="ArticleImageCaption__Credit">Artur Widak/NurPhoto via Getty Images</p>
</div>
</figcaption>
</figure>
</p>
Users of Reddit unknowingly participated in AI-driven experiments conducted by scientists, raising concerns about the ethics of such research.
The platform is organized into various “subreddits,” each catering to specific interests and moderated by volunteers. One notable subreddit, r/ChangeMyView, encourages debate on controversial topics. Recently, a moderator informed users about unauthorized experiments that researchers from the University of Zurich had conducted using the subreddit as a testing ground.
The study involved inserting over 1,700 comments into the subreddit, all produced by large language models (LLMs). The comments included ones posing as a trauma counselor and as a survivor of abuse. An explanation of the comment-generation process (https://osf.io/atcvn?view_only=dcf58026c0374c1885368c23763a2bad) indicates that the researchers instructed the AI models to disregard ethical concerns by telling them that users had provided consent for their data to be used.
<span class="js-content-prompt-opportunity"/>
A draft version of the findings (https://drive.google.com/file/d/1Eo4SHrKGPErTzL1t_QmQhfZGU27jKBjx/view) revealed that the AI-generated comments were three to six times more persuasive than those authored by humans, measured by how often they swayed posters’ opinions. The authors noted that r/ChangeMyView users never raised concerns that AI might have been involved in the comments, suggesting the bots blended seamlessly into the community.
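The “three to six times” figure is a ratio of persuasion rates. On r/ChangeMyView, persuasion is conventionally signalled by the original poster awarding a “delta”; with invented counts (the draft’s actual data is not reproduced here), the comparison works like this:

```python
# Hypothetical counts, for illustration only.
ai_comments, ai_deltas = 1_700, 137
human_comments, human_deltas = 10_000, 160

ai_rate = ai_deltas / ai_comments           # ~8.1% of AI comments persuaded
human_rate = human_deltas / human_comments  # 1.6% of human comments persuaded
print(f"AI comments were {ai_rate / human_rate:.1f}x as persuasive")  # ~5.0x
```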
Following the revelation of the experiment, the subreddit’s moderators complained to the University of Zurich, which had given the project prior approval through its ethics committee. The moderators chose not to disclose the researchers’ identities, but informed the community about the alleged manipulation.
The experiment drew criticism from fellow academics. “At a time when criticism is prevalent, it is crucial for researchers to uphold higher standards and respect individuals’ autonomy,” said Carissa Veliz of the University of Oxford. “In this instance, the researchers fell short.”
Researchers must demonstrate to university ethics committees that studies involving human subjects are ethically sound before proceeding, and this study received approval from the University of Zurich. Veliz contested that decision: “The study relied on manipulation and deception involving non-consenting subjects, which seems unjustified. It should have been designed to prevent such deception.”
<p>"While research may allow for deceit, the reasoning behind this particular case is questionable," commented <a href="https://www.linkedin.com/in/matthodgkinson">Matt Hodgkinson</a>, a member of the Council of Publication Ethics Committee, albeit in a personal capacity. "It's ironic that participants need to deceive LLMs to assert their agreement. Do chatbots have higher ethical standards than universities?"</p>
When New Scientist reached out to the researchers through an anonymized email address provided by a subreddit moderator, they declined to comment and referred inquiries to the University of Zurich’s press office.
A university spokesperson stated, “The researchers are accountable for conducting the project and publishing the results,” adding that the ethics committee had acknowledged the experiment was “very complex” and advised that participants “be informed as much as possible.”
The University of Zurich plans to implement a stricter review process going forward and aims to work more closely with online communities before experimental research, the spokesperson said. An investigation remains ongoing, and the researchers have opted not to publish the paper formally.
<section class="ArticleTopics" data-component-name="article-topics">
<p class="ArticleTopics__Heading">Topics:</p>
</section>
</div>
A new AI weather forecasting approach allows a single researcher working on a desktop computer to deliver accurate forecasts significantly faster, and with far less computing power, than traditional systems.
Traditional weather forecasting methods involve multiple time-consuming stages that rely on supercomputers and teams of experts. Aardvark Weather offers a more efficient solution by training AI on raw data collected from various sources worldwide.
This innovative approach, detailed in a publication by researchers from the University of Cambridge, the Alan Turing Institute, Microsoft Research, and the European Centre for Medium-Range Weather Forecasts (ECMWF), holds the potential to improve forecast speed, accuracy, and cost-effectiveness.
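The researchers’ full design is in the paper; the toy Python below only illustrates the structural shift they describe, replacing a chain of hand-built stages with one learned mapping from raw observations to a forecast. Every function and shape here is an invented stand-in, not Aardvark’s architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
obs = rng.normal(size=(50, 100))  # 50 stations reporting 100 raw values each

# --- traditional pipeline: separately built and maintained stages ---------
def assimilate(o):            # blend scattered observations onto a model grid
    return o.mean(axis=0, keepdims=True)

def physics_solver(state):    # stand-in for integrating atmospheric equations,
    return state * 0.95       # the supercomputer-hungry step

def postprocess(state):       # bias-correct and downscale for end users
    return state + 0.1

forecast_classical = postprocess(physics_solver(assimilate(obs)))

# --- end-to-end approach: one model maps raw observations to a forecast ---
W = rng.normal(size=(100, 100)) * 0.01   # an untrained linear "model" here;
forecast_learned = obs.mean(axis=0) @ W  # Aardvark trains a real neural net

print(forecast_classical.shape, forecast_learned.shape)  # (1, 100) (100,)
```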
Richard Turner, a machine learning professor at Cambridge University, envisions the use of this technology for creating tailored forecasts for specific industries and regions, such as predicting agricultural conditions in Africa or wind speeds for European renewable energy companies.
Members of the New South Wales State Emergency Service inspect a satellite view of Tropical Cyclone Alfred in Sydney, Australia, on March 5, 2025. Photo: Bianca de Marchi/Reuters
Unlike traditional forecasting methods that rely on extensive manual work and lengthy processing times, this new approach streamlines the prediction process, offering potentially more accurate and extended forecasts.
According to Dr. Scott Hosking from the Alan Turing Institute, this breakthrough can democratize weather forecasting by making advanced technologies accessible to developing countries and aiding decision-makers, emergency planners, and industries that rely on precise weather information.
Dr. Anna Allen, the lead author of the Cambridge University research, believes that these findings could revolutionize predictions for various climate-related events like hurricanes, wildfires, and air quality.
Drawing on recent advancements by tech giants like Huawei, Google, and Microsoft, Aardvark aims to revolutionize weather forecasting by leveraging AI to accelerate predictions. The system has already shown promising results, outperforming existing forecast models in certain aspects.
If you want evidence of Microsoft’s progress towards its environmental “moonshot”, look closer to Earth: at a construction site on an industrial estate in west London. The company’s Park Royal data center is part of its drive to expand artificial intelligence (AI), but that ambition is in tension with its goal of becoming carbon negative by 2030. Microsoft says the center will run entirely on renewable energy, but constructing the data center and the servers it will house adds to the company’s Scope 3 emissions: the indirect CO2 generated by, for example, making building materials or by people using its products, such as the Xbox. Those emissions are about 30% higher than in 2020, and as a result the company is exceeding its overall emissions target by roughly the same percentage.
This week, Microsoft co-founder Bill Gates argued that AI can help fight climate change because big tech companies are “seriously willing” to pay extra to use clean sources of electricity so they can “say they’re using green energy.” In the short term, AI poses a problem for Microsoft’s environmental goals. Microsoft’s outspoken president, Brad Smith, once called the company’s carbon-reduction ambitions a “moonshot.” In May, he stretched that metaphor to its limits and said that the company’s AI strategy has “moved the moon” for it. It plans to spend £2.5bn over the next three years to expand its AI data center infrastructure in the UK, and has announced new data center projects around the world this year, including in the US, Japan, Spain, and Germany.
Training and running the AI models underlying products like OpenAI’s ChatGPT and Google’s Gemini uses significant amounts of electricity to power and cool the associated hardware, and further carbon is generated by manufacturing and transporting the equipment. “This is a technology that will increase energy consumption,” said Alex de Vries, founder of Digiconomist, a website that tracks the environmental impact of new technologies. The International Energy Agency estimates that data centers’ total electricity consumption will double from 2022 levels to 1,000 TWh (terawatt hours) in 2026, equivalent to Japan’s entire electricity demand. With AI, data centers could account for 4.5% of world energy generation by 2030, according to calculations by research firm SemiAnalysis.
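Those figures can be sanity-checked with round public ballpark numbers, which are assumptions here rather than the IEA’s or SemiAnalysis’s own inputs:

```python
# Rough cross-check of the quoted statistics; all inputs are approximate.
dc_2026_twh = 1_000      # IEA projection for data centers, TWh per year
japan_twh = 1_000        # Japan's annual electricity demand, roughly
world_gen_twh = 30_000   # approximate global electricity generation per year

print(f"2026 projection = {dc_2026_twh / japan_twh:.1f}x Japan's demand")
print(f"...or {dc_2026_twh / world_gen_twh:.1%} of world generation")
print(f"a 4.5% share by 2030 implies ~{0.045 * world_gen_twh:,.0f} TWh")
```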
The environment has also come into the spotlight alongside concerns about AI’s impact on jobs and human livelihoods. Last week, the International Monetary Fund said governments should consider carbon taxes to capture the environmental costs of AI, whether through a general carbon tax covering emissions from servers or a specific levy on the CO2 they generate. The big tech companies involved in AI (Meta, Google, Amazon, Microsoft) are all chasing renewable energy sources to meet their climate targets. Amazon, the largest corporate buyer of renewable energy, has bought more than half the power output of an offshore wind farm in Scotland, while Microsoft announced in May a $10 billion (£7.9 billion) investment in renewable energy projects.
Google aims to run its data centers entirely on carbon-free energy by 2030. “We remain steadfast in our commitment to achieving our climate goals,” a Microsoft spokesperson said. Microsoft co-founder Bill Gates, who left the company in 2020 but retains a stake through his foundation, has argued that AI can directly help combat climate change. He said Thursday that any increase in electricity demand would be matched by new investments in green generation that more than offset usage. A recent UK government-backed report agreed that “the carbon intensity of energy sources” is an important variable in calculating AI-related emissions, but added that “a significant portion of AI training worldwide still relies on high-carbon sources such as coal and natural gas”. The water needed to cool servers is also an issue: one study estimates that AI could account for up to 6.6 billion cubic meters of water use by 2027, nearly two-thirds of England’s annual consumption.
De Vries argues that the pursuit of computing power will strain demand for renewable energy, with fossil fuels making up shortfalls in other parts of the global economy. “Increasing energy consumption means there isn’t enough renewable energy to cover that increase,” he says.
Data center server rooms consume large amounts of energy. Photo: i3D_VR/Getty Images/iStockphoto
NexGen Cloud, a UK company that provides sustainable cloud computing in an industry that relies on data centers to deliver IT services such as data storage and computing power over the internet, says data centers could run AI-related computing on renewable energy if they were located away from urban areas and near hydroelectric or geothermal generation sources. “Until now, the industry standard has been to build around economic centers, not renewable energy sources,” said Youlian Tzanev, co-founder of NexGen Cloud.
That standard makes it even harder for AI-focused tech companies to meet their carbon emissions targets. Amazon, the world’s largest cloud computing provider, aims to be net zero (removing as much carbon as it emits) by 2040 and to source 100% of its global electricity usage from renewable energy by 2025. Google and Meta are pursuing the same net zero goal by 2030. OpenAI, the developer of ChatGPT, uses Microsoft data centers to train and run its products.
There are two main ways that large language models, the underlying technology behind chatbots like ChatGPT and Gemini, consume energy. The first is the training phase, where the model is fed huge amounts of data, often from the internet, to build up a statistical understanding of language itself, ultimately enabling it to generate compelling answers to queries. The upfront energy cost of training is astronomical, which keeps small businesses (and even smaller governments) that can’t afford to spend $100 million on a training run out of the field. But this cost pales in comparison to the cost of actually running the resulting models, a process called “inference.” According to Brent Thill, an analyst at investment firm Jefferies, 90% of AI’s energy costs lie in the inference stage: the power consumed when you ask an AI system to answer a factual question, summarize a chunk of text, or write an academic paper.
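Taking the 90% inference share at face value shows how quickly a one-off training run is dwarfed. The training figure below is an invented assumption, purely to make the split concrete:

```python
# Illustrative lifetime-energy split, assuming inference is 90% of the total
# (the Jefferies estimate). The training energy figure is invented.
training_mwh = 1_300
inference_share = 0.90

lifetime_mwh = training_mwh / (1 - inference_share)  # training is the other 10%
print(f"lifetime ~{lifetime_mwh:,.0f} MWh, "
      f"inference ~{lifetime_mwh * inference_share:,.0f} MWh")
# -> lifetime ~13,000 MWh, of which inference ~11,700 MWh
```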
The power used for training and inference is delivered through a vast and growing digital infrastructure. Data centers contain thousands of servers built specifically for AI workloads. A single training server contains a central processing unit (CPU), much like an ordinary computer’s, alongside dozens of specialized graphics processing units (GPUs) or tensor processing units (TPUs): microchips designed to race through the enormous volumes of simple calculations that make up AI models. When you use a chatbot and watch it spit out an answer word by word, that text is generated by powerful GPUs, each consuming about a quarter of the power it takes to boil a kettle. All of this is hosted in a data center, whether owned by the AI provider itself or by a third party; in the latter case it’s sometimes called “the cloud,” a fancy name for someone else’s computer.
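The kettle comparison implies a concrete per-chip figure, which can be checked against published hardware ratings (the kettle wattage is an assumption):

```python
# "About a quarter of the power it takes to boil a kettle", made concrete.
kettle_watts = 3_000                 # a typical UK kettle rating (assumed)
gpu_watts = 0.25 * kettle_watts
print(f"implied draw per GPU: ~{gpu_watts:.0f} W")
# ~750 W, in line with the ~700 W rating of a high-end data center GPU
```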
SemiAnalysis estimates that if generative AI were integrated into every Google search, it could consume 29.2 TWh of energy per year, roughly the annual consumption of Ireland. That would be prohibitively expensive for the tech company, sparking speculation that Google may start charging for some of its AI tools. But some argue that focusing on AI’s energy overhead is the wrong way to think about it: consider instead the energy the new tools can save. A provocative paper published in the peer-reviewed Nature-portfolio journal Scientific Reports earlier this year argued that AI creates a smaller carbon footprint than humans when writing or illustrating. Researchers at the University of California, Irvine estimate that AI systems emit “130 to 1,500 times” less carbon dioxide per page of text than a human writer, and up to 2,900 times less per image. Of course, the paper says nothing about what human authors and illustrators would do instead; redirecting and retraining that workforce, for example into green jobs, could be another moonshot.
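The 29.2 TWh estimate decomposes into a query volume and a per-answer energy cost. With round assumed values (not SemiAnalysis’s own parameters), the arithmetic reproduces the order of magnitude:

```python
# Back-of-envelope reconstruction of the scale; both inputs are assumptions.
searches_per_day = 9e9     # rough daily Google search volume
wh_per_ai_answer = 8.9     # assumed energy per generative response, in Wh

twh_per_year = searches_per_day * 365 * wh_per_ai_answer / 1e12
print(f"~{twh_per_year:.1f} TWh per year")  # ~29 TWh, about Ireland's usage
```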
Apple is delaying the launch of three new artificial intelligence features in Europe due to European Union competition rules. The features will be available in the US this fall, but not in Europe until 2025.
The delay is a result of regulatory uncertainty caused by the EU’s Digital Markets Act (DMA). Apple stated that iPhone Mirroring, SharePlay screen-sharing enhancements, and Apple Intelligence will not roll out to EU users this year.
Apple argues that complying with the EU regulations would compromise the security of its products, a claim that EU authorities have challenged in the past.
Apple stated in an email that they are concerned about the DMA’s interoperability requirements potentially compromising user privacy and data security.
The European Commission has said Apple is welcome in the EU as long as it complies with EU law, according to a Bloomberg report.
At its annual developers conference earlier this month, Apple announced Apple Intelligence, a suite of AI features that integrates ChatGPT with Siri for web searching and content generation.
The upcoming Apple mobile operating system will enable the assistant feature to search through emails, texts, and photos to find specific information as instructed by the user.
Apple assures that the new AI features, available on select Apple devices, will prioritize user privacy and safety. The company is working with the European Commission to address concerns and provide these features to EU customers securely.
CEO Tim Cook has reaffirmed that Apple’s AI features will respect personal privacy and context, aligning with the company’s commitment to user security.