Melting Arctic ice. Record-breaking wildfires across multiple provinces. Canada warming at twice the global average rate.
Yet when Canadians head to the polls on Monday, climate change isn’t even among the top ten issues for voters, recent surveys indicate.
“That’s not the focus of this election,” remarked Jessica Green, a political scientist at the University of Toronto specializing in climate-related topics.
The election revolves around a collective desire to choose a leader capable of standing up to Donald J. Trump, who has threatened Canada with a trade war, if not outright annexation as the “51st state.”
Leading the polls is the Liberal Mark Carney, who boasts decades of experience in climate policy. He served for five years as a UN envoy on climate action and finance, assembling a coalition of banks committed to zeroing out the carbon dioxide emissions generated by their financing by 2050.
Despite his impressive background, Carney hasn’t prioritized climate change in his campaign. Following Prime Minister Justin Trudeau’s resignation, one of Carney’s first actions was to eliminate the consumer carbon tax on fuels, including gasoline.
Most Canadians got that money back in rebate checks, but Mr. Carney said the policy had become “too divisive.”
This decision, along with comparisons between his Conservative opponent, Pierre Poilievre, and Trump, has contributed to Carney’s rise in the polls.
“Carney made a clever move by abolishing the consumer carbon tax, which was widely unpopular and essentially formed the basis of Poilievre’s campaign against him,” said Dr. Green. “It took the wind out of the Conservative Party’s sails.”
Mr. Carney is acutely aware of political dynamics. In a recent television discussion, he mentioned to Poilievre, “I spent years advocating for Justin Trudeau and the carbon tax.”
Poilievre is a staunch supporter of Canada’s vast oil and gas industry; Canada is the world’s fourth-largest oil producer and fifth-largest gas producer. Yet unlike Trump, he acknowledges the need to reduce the greenhouse gas emissions driving climate change.
“Canadian oil and clean natural gas must replace coal globally, allowing countries like India and others in Asia to utilize gas instead of dirty coal,” he stated at a recent press conference during his campaign.
However, Carney’s proposals don’t significantly differ. He envisions Canada as a “superpower of both traditional and clean energy.” His platform suggests reforms like bolstering the carbon market and expediting approvals for clean energy initiatives.
Perhaps the most significant distinction between the candidates lies in their views on Canada’s oil and gas emission caps and the tax on industrial emissions, both defended by Trudeau.
Poilievre aims to eliminate both, in line with industry demands, whereas Carney intends to maintain them. The Canadian Climate Institute, a research organization, estimates that the industrial carbon tax will cut at least three times as many emissions as the consumer tax, making it the single most effective policy for reducing emissions through 2030.
Canada ranks among the world’s highest per capita greenhouse gas emitters and is not on track to meet its commitments under the 2015 Paris Agreement. It has pledged to cut emissions at least 40 to 45 percent from 2005 levels by 2030, but the latest national emissions inventory report shows only an 8.5 percent decrease through 2023.
As natural disasters increase in frequency and severity, FEMA and NOAA are becoming politicized. Their future hangs in the balance of elections.
Project 2025, a conservative policy roadmap, recommends “breaking up and downsizing” NOAA and shifting much of the burden of disaster recovery from FEMA.
Experts and current and former officials said the changes could make the U.S. more vulnerable to extreme weather events.
With the close 2024 election just days away, the future of federal agencies responsible for weather forecasting, climate change research and disaster recovery is at stake.
These agencies, the National Oceanic and Atmospheric Administration (NOAA) and the Federal Emergency Management Agency (FEMA), have become increasingly politicized in recent years. Natural disasters fueled by climate change now hit the United States regularly; 24 weather events this year have each caused at least $1 billion in damage, and the agencies are taking on a bigger role as a result. That has made them a target for some conservatives who are skeptical of climate change and want to cut government spending.
Republican presidential candidate Donald Trump has promised deep cuts to the federal budget, and one of his most vocal allies, Elon Musk, said last week that he would cut at least $2 trillion if he served in a second Trump administration. Project 2025, a 922-page conservative policy roadmap compiled by the Heritage Foundation, a right-wing think tank, recommends “dismantling and downsizing” NOAA and suggests shifting much of the financial burden of disaster recovery from FEMA to state and local governments.
If that happens, it could dramatically change the way disaster relief is provided in the United States.
Craig Fugate, who served as FEMA administrator under the Obama administration, said that without federal aid, it is “almost inconceivable” that states could recover without lengthy and costly recovery periods drawn from state and local budgets.
It's not entirely clear what a second Trump administration would mean for FEMA and NOAA. Trump has publicly distanced himself from Project 2025, even though many of its authors were his advisers. “Project 2025 has nothing to do with President Trump or the Trump campaign,” campaign officials said in an email to NBC News, adding that neither the organization nor its former staff speak for him. The campaign did not respond to additional questions about the plans for NOAA and FEMA.
FEMA has already come under scrutiny and criticism from some Republican leaders in the wake of Hurricanes Helene and Milton. Mr. Trump and several other prominent Republicans even pushed false claims that FEMA funds were being illegally diverted to immigrants. At the same time, rampant misinformation about the two storms made meteorologists the target of threats, even though their forecasts were remarkably accurate.
NOAA oversees the National Weather Service, and if the Project 2025 recommendations were implemented, those forecasts might no longer be freely available to the public or to state governments.
Academics and current and former officials said in interviews that an agenda based even in part on the conservative roadmap could make the U.S. more vulnerable to extreme weather at a time when large-scale disasters are already becoming more intense and more frequent.
Currently, FEMA aid covers at least 75% of the cost of major disasters, but Project 2025's proposal would reduce that percentage to just 25%.
Restrictions on relief could turn some communities into ghost towns, said Rep. Jared Moskowitz (Fla.), who served as Florida’s emergency management director from 2019 to 2021 under Gov. Ron DeSantis. He cited Hurricane Michael, which hit Florida as a Category 5 storm in 2018.
“These areas would not have recovered without the federal government stepping in and paying for the response and recovery efforts,” Moskowitz said.
He added that the hardest-hit areas that benefited the most from federal aid “voted for Donald Trump, voted for Rick Scott, voted for Ron DeSantis.”
Since Hurricanes Helene and Milton, the federal government has approved more than $1.2 billion in aid for recovery efforts, according to FEMA. This includes more than $185 million in assistance to 116,000 households in North Carolina and more than $413 million to more than 125,000 households in Florida, where both storms made landfall.
A home destroyed by Hurricane Milton on Oct. 10, 2024, in St. Pete Beach, Florida. (Tristan Wheelock / Bloomberg via Getty Images)
If Project 2025's proposals had been in place when Helene hit, “more lives would have been lost, the response would have been much slower, and there would have been little financial assistance to help communities rebuild,” Fugate said.
Project 2025 recommends that NOAA be “disbanded, many of its functions eliminated, transferred to other agencies, privatized, or placed under state and territory control.”
Matthew Sanders, acting deputy director of Stanford University's Environmental Law Clinic, said privatizing weather forecasting could degrade the quality of forecasts by putting corporate profits ahead of robust public service.
“A neutral, centralized government agency has an important role to play here that private industry cannot or will not play,” Sanders said.
Matthew Burgess, an assistant professor at the University of Wyoming's business school, said privatizing weather forecasting could give states and local governments with more resources access to higher-quality forecasts while leaving municipalities with fewer resources behind. Areas at higher risk of hurricanes or tornadoes might also have to pay more for their forecasts, he said.
“Right now, the state of Florida gets hurricane forecasts free of charge from the federal government,” Burgess said. “If you privatize it, the private sector will probably operate more efficiently on average, but will that be offset by price-gouging incentives? Because basically, when a hurricane hits, you really need that forecast and will pay whatever they charge.”
The Heritage Foundation said in a statement: “Project 2025 is not calling for the abolition of NOAA or NWS. That claim is false and ridiculous.”
“There is a difference between privatization and commercialization,” the statement added. “Using commercially available products to provide better outcomes for taxpayers at a lower cost is nothing new.”
In addition to proposals for specific agencies, Project 2025 also calls for ending federal climate change research. But understanding the effects of climate change is essential to predicting storms in particular: as the ocean warms, hurricanes strengthen more quickly, and as the atmosphere warms, they can produce more rain.
“That's why everyone wakes up every day to come out here and do research and prepare people to make decisions that matter to them and their families,” said Dena Karlis, director of NOAA's National Severe Storms Laboratory.
Fugate said ending climate research would make the United States even more vulnerable to its effects.
“Just because you don't like the answer doesn't mean the information isn't important,” he says. “If we ignore what's coming, how can we prepare for it?”
Sanders said deep cuts to research, weather, and disaster agencies could further erode public trust at a time when confidence in government institutions is already strained.
“Climate change, like most environmental issues, is a very unique problem in that it does not respect our political boundaries or our state boundaries,” he said. “We need a centralized federal agency to respond to climate change, an agency that can respond at scale to large and significant multi-state disasters.”
OpenAI announced on Friday that it had taken down the accounts of an Iranian group using its chatbot, ChatGPT, to create content with the aim of influencing the U.S. presidential election and other important issues.
Dubbed “Storm-2035,” the operation used ChatGPT to generate content on various topics, including the U.S. presidential election, the Gaza conflict, and Israel’s participation in the Olympics. The content was then shared on social media platforms and websites.
An investigation by the Microsoft-backed AI company revealed that ChatGPT was being used to produce lengthy articles and short comments for social media.
OpenAI noted that this strategy did not result in significant engagement from the audience, as most of the social media posts had minimal likes, shares, or comments. There was also no evidence of the web articles being shared on social media platforms.
These accounts have been banned from using OpenAI’s services, and the company stated that it will continue to monitor them for any policy violations.
In an early August report by Microsoft threat intelligence, it was revealed that an Iranian network called Storm 2035, operating through four websites posing as news outlets, was actively interacting with U.S. voters across the political spectrum.
The network’s activities focused on generating divisive messages on topics like U.S. presidential candidates, LGBTQ rights, and the Israel-Hamas conflict.
As the November 5th presidential election approaches, the battle between Democratic candidate Kamala Harris and Republican opponent Donald Trump intensifies.
OpenAI previously disrupted five covert influence operations in May that attempted to use their models for deceptive online activities.
Russia has been attempting online deception campaigns using generative artificial intelligence, but according to a Meta security report published on Thursday, these efforts have not been successful.
Meta, the parent company of Facebook and Instagram, found that AI-powered tactics have brought malicious actors only minimal gains in productivity and content generation, and said it had succeeded in disrupting deceptive influence campaigns.
Meta’s actions against “systematic fraud” on its platform are in response to concerns that generative AI could be employed to mislead or confuse individuals during elections in the U.S. and other nations.
David Agranovich, Meta’s director of security policy, informed reporters that Russia continues to be the primary source of “coordinated illicit activity” using fake Facebook and Instagram accounts.
Since the 2022 invasion of Ukraine by Russia, these efforts have been aimed at weakening Ukraine and its allies, as outlined in the report.
With the U.S. election approaching, Meta anticipates Russian-backed online influence campaigns targeting political candidates who support Ukraine.
Facebook has faced accusations of being a platform for election disinformation, while Russian operatives have utilized it and other U.S.-based social media platforms to fuel political tensions during various U.S. elections, including the 2016 election won by Donald Trump.
Experts are worried that generative AI tools like ChatGPT and Dall-E image generator can rapidly create on-demand content, leading to a flood of disinformation on social networks by malicious actors.
The report notes the use of AI in producing images, videos, translating and generating text, and crafting fake news articles and summaries.
When Meta investigates fraudulent activity, the focus is on account behavior rather than posted content.
Influence campaigns span across various online platforms, with Meta observing that X (formerly Twitter) posts are used to lend credibility to fabricated content. Meta shared its findings with X and other internet companies, emphasizing the need for a coordinated defense against misinformation.
When asked about Meta’s view on how X handles scam reports, Agranovich said, “With regards to Twitter (X), we’re still in the process of transitioning. Many people we’ve dealt with there in the past have already gone elsewhere.”
X has disbanded its trust and safety team and reduced content moderation efforts previously used to combat misinformation, making it a breeding ground for disinformation according to researchers.
Following a dry run of Taiwan’s presidential election this year, China is anticipated to disrupt elections in the United States, South Korea, and India with artificial intelligence-generated content, as warned by Microsoft.
The tech giant predicts that Chinese state-backed cyber groups will target high-profile elections in 2024, with North Korea also getting involved, according to a report released by the company’s threat intelligence team.
“As voters in India, South Korea, and the United States participate in elections, Chinese cyber and influence actors, along with North Korean cyber attack groups, are expected to influence these elections,” Microsoft mentioned.
Microsoft stated that China will create and distribute AI-generated content through social media to benefit positions in high-profile elections.
Although the immediate impact of AI-generated content seems low in swaying audiences, China is increasingly experimenting with enhancing memes, videos, and audio, potentially being effective in the future.
During Taiwan’s presidential election in January, China attempted an AI-powered disinformation campaign for the first time to influence a foreign election, Microsoft reported.
The Beijing-backed group Storm 1376, also known as Spamouflage or Dragonbridge, flooded Taiwan’s election with AI-generated content spreading false information about candidates.
Chinese groups are also engaged in influencing operations in the United States, with Chinese government-backed actors using social media to probe divisive issues among American voters.
In a blog post, Microsoft stated, “This may be to collect intelligence and obtain accurate information on key voting demographics ahead of the US presidential election.”
The report coincides with a White House board’s announcement of a Chinese cyber operator infiltrating US officials’ email accounts due to errors made by Microsoft, as well as accusations of Chinese-backed hackers conducting cyberattacks targeting various entities in the US and UK.
This year, artificial intelligence-generated robocalls targeted New Hampshire voters during the January primary, posing as President Joe Biden and instructing them to stay home. The incident may have been the first AI-driven attempt to interfere with a US election. The “deepfake” call was linked to two Texas companies, Life Corporation and Lingo Telecom.
The impact of the deepfake calls on voter turnout remains uncertain, but according to Lisa Gilbert, executive vice president of Public Citizen, a group advocating for government oversight, the potential consequences are significant, making regulation of AI in politics crucial.
Events mirroring what might occur in the US are unfolding around the globe. In Slovakia, fabricated audio recordings may have influenced an election, serving as a troubling prelude to potential US election interference in 2024, as reported by CNN. AI developments in Indonesia and India have also raised concerns. Without robust regulations, the US is ill-prepared for the evolving landscape of AI technology and its implications for elections.
Despite efforts to address AI misuse in political campaigns, US regulations are struggling to keep pace with AI advancements. The House of Representatives recently formed a task force to explore regulatory options, but partisan gridlock and regulatory delays cast uncertainty on the efficacy of measures that will be in place for this year’s election.
Without safeguards, the influence of AI on elections hinges on voters’ ability to discern real from fabricated content. AI-powered disinformation campaigns can sow confusion and undermine electoral integrity, posing a threat to democracy.
Manipulating audio content with AI raises concerns due to its potential to mislead with minimal detection capabilities, unlike deepfake videos. AI-generated voices can mimic those known to the recipient, fostering a false sense of familiarity and trust, which may have significant implications.
Gail Huntley picked up the phone and immediately recognized Joe Biden's raspy voice. Huntley, a 73-year-old New Hampshire resident, had planned to vote for the president in the state's upcoming primary and was perplexed when she received a prerecorded message urging her not to vote.
“It's important that you save your vote for the November election,” the message said. “Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again.”
Huntley quickly realized the call was fake, but thought Biden's words had been taken out of context. She was shocked when it was revealed that the recording was generated by AI. Within weeks, the United States outlawed robocalls that use AI-generated voices.
The Biden deepfake was the first major test for governments, technology companies, and civil society organizations, which are embroiled in a heated debate over how best to police an information ecosystem in which anyone can create photorealistic images of candidates or replicate their voices with terrifying accuracy.
As citizens of dozens of countries, including the US, India and possibly the UK, go to the polls in 2024, experts say democratic processes are at serious risk of being disrupted by artificial intelligence.
AI fakes have already been used in elections in Slovakia, Taiwan, and Indonesia, and they land in an environment where trust in politicians, institutions, and the media is already low.
Watchdog groups have warned that digital media is uniquely vulnerable to abuse after more than 40,000 layoffs at the tech companies that host and moderate much of this content.
Mission Impossible?
For Biden, concerns about the potentially dangerous uses of AI spiked after watching the latest Mission: Impossible movie. Over the weekend at Camp David, the president relaxed in front of a movie in which Tom Cruise's Ethan Hunt takes on a rogue AI.
After watching the film, White House deputy chief of staff Bruce Reed said that if Biden hadn't already been concerned about what could go wrong with AI, he now had plenty more to worry about.
Since then, Biden has signed an executive order requiring major AI developers to share safety test results and other information with the government.
And the United States is not alone in taking action. The EU is about to pass one of the most comprehensive laws to regulate AI, but it won't come into force until 2026. Proposed regulations in the UK have been criticized for moving too slowly.
But because the United States is home to many of the most innovative technology companies, the White House's actions will have a major impact on how the most disruptive AI products are developed.
Katie Harbath, who spent a decade helping shape policy at Facebook and now works on trust and safety issues at tech companies, says the U.S. government isn't doing enough. Concerns about stifling innovation may play a role, she says, especially as China moves to develop its own AI industry.
Harbath has watched from a ringside seat as the information ecosystem evolved from the “golden age” of social media growth, through the great reckoning after the Brexit and Trump votes, and into the subsequent efforts to stay ahead of disinformation.
Her mantra for 2024 is “panic responsibly.”
In the short term, she says, the regulators and police of AI-generated content will be the very companies developing the tools to create it.
“I don't know if companies are ready,” Harbath said. “There are also new platforms whose first real test will be this election season.”
Last week, major tech companies took a big step, signing an agreement to voluntarily adopt “reasonable precautions” to prevent AI from being used to disrupt democratic elections around the world and to coordinate their efforts.
Signatories include OpenAI, the creator of ChatGPT, as well as Google, Adobe, and Microsoft, all of which have launched tools to generate AI-authored content. Many companies have also updated their own rules to prohibit the use of their products in political campaigns. Enforcing these bans is another matter.
OpenAI, whose powerful Dall-E software creates photorealistic images, said the tool rejects requests to generate images of real people, including candidates.
Midjourney, whose AI image generation many consider the most powerful and accurate, says users should not use the product to “attempt to influence the outcome of a political campaign or election.”
Midjourney CEO David Holz has said the company is close to banning political images, including photos of leading presidential candidates, and some changes already appear to be in effect. When the Guardian asked Midjourney to produce an image of Joe Biden and Donald Trump in a boxing ring, the request was denied and flagged as a violation of the company's community standards.
But when the Guardian entered the same prompt, replacing Biden and Trump with British Prime Minister Rishi Sunak and opposition leader Keir Starmer, the software produced a series of images without a problem.
This example is at the center of concerns among many policymakers about how effectively tech companies are regulating AI-generated content outside the hothouse of the U.S. presidential election.
“Multi-million euro weapons of mass manipulation”
Despite OpenAI's ban on using its tools in political campaigns, Reuters reported that its products were widely used in Indonesia's elections this month to create campaign art, track social media sentiment, build interactive chatbots, and target voters.
Harbath said it's an open question how aggressively startups like OpenAI can enforce their policies outside the United States.
“Each country is a little different, with different laws and cultural norms,” she said. “When you run a US-focused company, it can be difficult to realize that things work differently in other parts of the world.”
Last year's national elections in Slovakia pitted pro-Russian candidates against those advocating stronger ties with the EU. The ballot was widely seen as a test of support for Ukraine's war effort, and EU officials warned that the vote was at risk of interference by Russia and what they called its “multi-million euro weapon of mass manipulation.”
As the election approached and a national media blackout began, an audio recording of the pro-EU candidate Michal Šimečka was posted on Facebook.
In the recording, Šimečka appears to discuss ways to rig the election by buying votes from marginalized communities. The audio was fake, and the AFP news agency reported that it appeared to have been manipulated using AI.
But under the election blackout laws, media outlets and politicians were required to remain silent, making it nearly impossible to debunk the recording.
The doctored audio appears to have slipped through a loophole in how Facebook's owner, Meta, polices AI-generated material on its platform. Its community standards prohibit posting content that has been manipulated in ways “the average person wouldn't understand,” or edited to make someone say something they didn't say. However, the policy applies only to videos.
Pro-Russian candidate Robert Fico won the election and became prime minister.
When will we know that the future is here?
Despite the dangers, there are some signs that voters are better prepared for what's to come than officials think.
“Voters are smarter than we think,” Harbath said. “They may be overwhelmed, but they understand what's going on in the information environment.”
For many experts, the main concern is not the technologies we are already working on, but the innovations that are on the other side of the horizon.
Writing in MIT's Technology Review, academics said the public debate about how AI threatens democracy is “lacking imagination.” The real danger, they say, is not what we already fear, but what we cannot yet imagine.
“What rocks are we not examining?” Harbath asks. “New technologies emerge, new bad guys emerge. There are constant high and low tides, and we have to get used to living with them.”