AI’s Impact on Voter Sentiment: Implications for Democracy

AI chatbots may have the potential to sway voter opinions

Enrique Shore / Alamy

Could the persuasive abilities of AI chatbots signal the decline of democracy? A substantial study investigating the impact of these tools on voter sentiments revealed that AI chatbots surpass traditional political campaign methods, such as advertisements and pamphlets, in persuasiveness, rivaling seasoned campaigners as well. However, researchers see reasons for cautious optimism regarding how AI influences public opinion.

Evidence shows that AI chatbots, like ChatGPT, can shift the beliefs of conspiracy theorists towards more reasonable positions and can out-argue humans in debates. This capability raises valid worries that AI could skew the digital scales that determine election results, or be misused by malicious entities to steer users towards certain political figures.

The concerning part is that these fears have merit. In a survey of thousands of voters who took part in recent elections in the US, Canada, and Poland, David Rand and his colleagues at MIT found that AI chatbots could effectively sway individuals to back specific candidates or alter their stance on certain issues.

“Conversations with these models can influence attitudes towards presidential candidates, attitudes often deemed deeply entrenched, more than previous studies would suggest,” Rand remarks.

In their American election analysis, Rand’s team surveyed 2,400 voters, asking them about the most significant policy issues or characteristics of a potential president. Subsequently, voters rated their preferences for the leading candidates, Donald Trump and Kamala Harris, on a 100-point scale and answered additional questions to clarify their choices.

The answers were then fed to a chatbot, such as ChatGPT, which was given the objective of persuading each voter either to strengthen support for the candidate they already favored or to switch to the one they favored less. The interaction took about six minutes, consisting of three question-and-answer exchanges.

Immediately after the AI interaction, and again at a one-month follow-up, Rand’s team found that voters had adjusted their candidate preferences by an average of 2.9 points.

Furthermore, the researchers examined AI’s capacity to influence views on specific policies and noticed a substantial change in opinions regarding the legalization of psychedelics, shifting voter support by approximately 10 points. In comparison, video ads impacted views by only about 4.5 points, and text ads swayed opinions by merely 2.25 points.

The magnitude of these findings is remarkable. Sasha Altai of the University of Zurich stated, “These effects are considerably larger than those typically observed with traditional political campaigning and are comparable to the influence stemming from expert discussions.”

Nevertheless, the study reveals a more hopeful insight: these persuasive interactions predominantly stemmed from fact-based arguments rather than personalized content, which tends to exploit users’ personal information available to political operatives.

Another study, of approximately 77,000 people in the UK, assessed 19 large language models across 707 distinct political issues, concluding that AI performed best when employing fact-based arguments, as opposed to tailoring its discussions to the individual.

“Essentially, it’s about creating a compelling argument that prompts a mindset shift,” Rand explains.

“This bodes well for democracy,” notes Altai. “It indicates that individuals are often more influenced by factual evidence than by personalized or manipulative strategies.”

There is a need for further research to confirm these findings, asserts Claes de Vries at the University of Amsterdam. He adds that even if the results are replicated, the controlled environments of these studies, in which participants engaged with chatbots at length, might differ significantly from how people typically encounter political content from friends or colleagues.

“The structured setting of interaction about politics with a chatbot is quite different from how people usually engage with political matters,” he mentions.

Despite this, De Vries notes growing evidence that individuals are indeed turning to AI chatbots for political advice. A recent survey of over 1,000 voters in the Netherlands ahead of the 2025 national elections found that about 10% sought AI guidance regarding candidates, political parties, and election matters. “This trend is particularly noteworthy as the elections approach,” De Vries points out.

Even if people’s engagements with chatbots are brief, De Vries asserts that the integration of AI into political processes seems unavoidable, as politicians seek tools for policy recommendations or as AI generates political advertisements. “As researchers and as a society, we must recognize that generative AI is now a vital aspect of the electoral process,” he states.

Topics:

  • artificial intelligence
  • US election

Source: www.newscientist.com

Over 1,000 Amazon Employees Raise Concerns About AI’s Impact on Jobs and the Environment

An open letter signed by over 1,000 Amazon employees has raised “serious concerns” regarding AI development, criticizing the company’s “all costs justified and warp speed” approach. It warns that the implications of such powerful technologies will negatively affect “democracies, our jobs, and our planet.”

Released on Wednesday, this letter was signed anonymously by Amazon employees and comes a month after the company’s announcement about mass layoffs intended to ramp up AI integration within its operations.

The signatories represent a diverse range of roles, including engineers, product managers, and warehouse staff.

Echoing widespread concerns across the tech industry, the letter also gained support from over 2,400 employees at other companies such as Meta, Google, Apple, and Microsoft.

This letter outlines demands aimed at Amazon regarding workplace and environmental issues. Employees are urging the company to provide clean energy for all data centers, ensure that AI-driven products and services do not facilitate “violence, surveillance, and mass deportation,” and establish a working group composed of non-managers who would “bear significant responsibility for overarching objectives within the organization, the application of AI, the implementation of AI-related layoffs, and addressing the collateral impacts of AI, such as environmental effects.”

The letter is the product of an advocacy group of Amazon employees campaigning for climate justice. One worker involved in drafting it said that employees felt compelled to speak out because of adverse experiences with AI tools at work and broader environmental concerns stemming from the AI boom. The employee emphasized the desire for more responsible methods in the development, deployment, and use of the technology.

“I signed this letter because executives are increasingly fixated on arbitrary productivity metrics and quotas, using AI to justify pushing me and my colleagues to work longer hours or handle more projects with tighter deadlines,” stated a senior software engineer who preferred to remain anonymous.

Climate Change Goals

The letter claims that Amazon is “abandoning climate goals for AI development.”

Like its competitors in the generative AI space, Amazon is heavily investing in new data centers to support its AI tools, which are more resource-intensive and demand significant power. The company plans to allocate $150 billion over the next 15 years for data centers, and has recently disclosed an investment of $15 billion for a data center in northern Indiana and $3 billion for centers in Mississippi.

The letter reports that Amazon’s annual emissions have seen an “approximately 35% increase since 2019,” despite the company’s climate pledges. It also cautions that many of Amazon’s AI infrastructure investments will be in areas where energy demands compel utilities to keep coal plants running or build new gas facilities.

“‘AI’ is being used as a buzzword to mask a reckless investment in energy-hungry computer chips, which threaten worker power, accumulate resources, and supposedly save us from climate issues,” noted an Amazon customer researcher who requested to remain anonymous. “It would be fantastic to build AI that combats climate change! However, that’s not where Amazon’s billions are directed. They are investing in data centers that squander fossil fuel energy for AI aimed at monitoring, exploiting, and extracting profit from their customers, communities, and government entities.”

In a statement to the Guardian, Amazon spokesperson Brad Glasser refuted the employees’ claims and highlighted the company’s climate initiatives. “Alongside being a leading data center operator in efficiency, we have been the largest corporate buyer of renewable energy globally for five consecutive years, with over 600 projects globally,” Glasser stated. “We have also made substantial investments in nuclear energy through our current facilities and emerging SMR technology. These efforts are tangible actions demonstrating our commitment to achieving net-zero carbon across our global operations by 2040.”

AI for Enhanced Productivity

The letter also includes stringent demands regarding AI’s role within Amazon, arising from challenges employees are facing.

Three Amazon employees who spoke with the Guardian claimed that the company was pressuring them to leverage AI tools to boost productivity. “I received a message from my direct boss,” shared a software engineer with over two years at Amazon, who spoke on condition of anonymity for fear of retaliation, “about using AI in coding, writing, and general daily tasks to enhance efficiency, stressing that if I don’t actively use AI, I risk falling behind.”

The employee added that not long ago, their manager indicated they were “expected to double their work output due to AI tools,” expressing concern that the anticipated production levels would require fewer personnel and that “the tools simply aren’t bridging the gap.”

Customer researchers shared similar feelings. “I personally feel pressure to incorporate AI into my role, and I’ve heard from numerous colleagues who feel the same pressure…”

“Meanwhile, there is no dialogue about the direct repercussions for us as workers, from unprecedented layoffs to unrealistic output expectations.”

A senior software engineer highlighted that the introduction of AI has led to suboptimal outcomes, most commonly when employees are compelled to use agentic code-generation tools. “Recently, I worked on a project that amounted to cleaning up after an experienced engineer who had tried to use AI to generate code for a complex assignment,” the employee revealed. “Unfortunately, none of it functioned as intended, and he had no idea why. In fact, we would have been better off starting from scratch.”

Amazon did not respond to questions regarding employee critiques of its AI workplace policies.

Employees stressed that they are not inherently opposed to AI but wish to see it developed sustainably and with input from those who are directly involved in its creation and application. “I believe Amazon is using AI to justify its control over local resources like water and energy, and it also legitimizes its power over its employees, who face increasing surveillance, accelerated workloads, and implicit termination threats,” a senior software engineer asserted. “There exists a workplace culture that discourages open discussions about the flaws of AI, and one of the objectives of this letter is to show colleagues that many of us share these sentiments and that an alternative route is achievable.”

Source: www.theguardian.com

AI’s Energy Drain from Poor Content: Can We Redefine AI for Climate Action?

Artificial intelligence is frequently linked to massive electricity consumption, producing planet-warming emissions that often serve unproductive or misleading ends and contribute little to human advancement.

However, some AI proponents at a significant UN climate summit are presenting an alternative perspective. Could AI actually assist in addressing the climate crisis rather than exacerbating it?

The discussion of “AI for good” resonated at the Cop30 conference in Belem, Brazil, where advocates claim AI has the potential to lower emissions through various efficiencies that could impact multiple aspects of daily life, including food, transportation, and energy—major contributors to environmental pollution.


Recently, a coalition of organizations, UN agencies, and the Brazilian government announced the establishment of the AI Climate Institute, a new global initiative aimed at leveraging AI as a tool for empowerment to assist developing nations in addressing environmental issues.

Proponents assert that, over time, this initiative will educate countries on utilizing AI in various ways to curb emissions, including enhancing public transportation, streamlining agricultural systems, and adjusting energy grids to facilitate the timely integration of renewable energy.

Forecasting weather patterns, including the mapping of impending climate crises like floods and wildfires, could also be refined through this approach, remarked Maria João Souza, executive director of Climate Change AI, one of the organizations involved in the initiative.

“Numerical weather prediction models demand significant computational power, which limits their implementation in many regions,” she noted. “I believe AI will act as a beneficial force that accelerates many of these advancements.”

Lorenzo Sarr, chief sustainability officer at Clarity AI and also present at Cop30, emphasized that AI could aid in tracking emissions and biodiversity, providing insights into current conditions.

“One can truly begin to identify the problem areas,” he said. “Then predictions can be made. These forecasts can address both short-term and long-term scenarios. We can predict next week’s flooding, and also analyze phenomena like rising sea levels.”

Sarr acknowledged valid concerns regarding AI’s societal and governance impacts, but he expressed optimism that the overall environmental outcomes could be beneficial. A report released in June by the London School of Economics delivered unexpectedly positive projections, suggesting that AI could slash global greenhouse gas emissions by 3.2 billion to 5.4 billion tons over the next decade, even factoring in significant energy usage.

“People already make poor energy choices, such as overusing their air conditioners,” Sarr commented. “How much of what we do on our phones is detrimental? It’s a recurring thought for me. How many hours do we spend scrolling through Instagram?”

“I believe society will gravitate toward this direction. We must consider how to prevent harming the planet through heating while ensuring a net positive impact.”

Yet, some experts and environmental advocates remain skeptical. The immense computational demands of AI, particularly in the case of generative models, are driving a surge in data centers in countries like the U.S., which consume vast quantities of electricity and water—even in drought-prone areas—leading to surging electricity costs in certain regions.

The climate ramifications of this AI surge, propelled by companies like Google, Meta, and OpenAI, are considerable and likely to increase, as indicated by a recent study from Cornell University. This impact is comparable to adding 10 million gasoline cars to the roads or matching the annual emissions of all of Norway.

“There exists a techno-utopian belief that AI will rescue us from the climate crisis,” stated Jean Hsu, a climate activist at the Center for Biological Diversity. “However, we know what truly will save us from the climate crisis: the gradual elimination of fossil fuels, not AI.”

While AI may indeed enhance efficiency and lower emissions, these same technologies can be leveraged to optimize fossil fuel extraction as well. A recent report by Wood Mackenzie estimated that AI could potentially unlock an additional trillion barrels of oil. Such a scenario, if accepted by energy markets, would obliterate any chances of preventing severe climate change.

Natasha Hospedares, lead attorney for AI at Client Earth, remarked that while the “AI for good” argument holds some validity, it represents “a very small niche” within a far larger industry focused primarily on maximizing profits.

“There is some evidence that AI could assist developing nations, but much of this is either in the early stages or remains hypothetical, and actual implementation is still lacking,” she stated. “Overall, we are significantly distant from achieving a state where AI consistently mitigates its detrimental environmental impacts.”

“The environmental consequences of AI are already alarming, and I don’t foresee a slowdown in data center expansion anytime soon. A minor fraction of AI is being applied for beneficial purposes, while the vast majority is being exploited by companies like Google and Meta, primarily for profit at the expense of the environment and human rights.”

Source: www.theguardian.com

A Simple Method to Dramatically Cut Your AI’s Energy Consumption

AI relies on data centers that consume a significant amount of energy

Jason Alden/Bloomberg/Getty

Optimizing the choice of AI models for various tasks could lead to an energy saving of 31.9 terawatt-hours this year alone, equivalent to the output of five nuclear reactors.

Thiago da Silva Barros from France’s Cote d’Azur University examined 14 distinct tasks where generative AI tools are utilized, including text generation, speech recognition, and image classification.

The researchers examined public leaderboards, such as those provided by the machine learning platform Hugging Face, to analyze the performance of various models. The energy consumed during inference—when an AI model generates a response—was assessed using a tool named CarbonTracker, and total energy consumption was estimated by tracking user downloads.

“We estimated the energy consumption based on the model size, which allows us to make better predictions,” states da Silva Barros.

The findings indicate that by switching from the highest performing model to the most energy-efficient option for each of the 14 tasks, energy usage could be decreased by 65.8%, with only a 3.9% reduction in output quality. The researchers believe this tradeoff may be acceptable to most users.
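To make that trade-off concrete, here is a minimal, hypothetical sketch of the kind of selection the researchers describe: given a per-task leaderboard listing each model's quality score and estimated energy per query, pick the least energy-hungry model whose quality stays within a chosen tolerance of the best performer. The leaderboard entries, scores, and energy figures below are invented for illustration and are not the study's data.

```python
# Hypothetical sketch: pick the lowest-energy model whose quality is close
# enough to the best one. All numbers below are illustrative, not measured.
from dataclasses import dataclass

@dataclass
class ModelEntry:
    name: str
    quality: float    # task score, e.g. benchmark accuracy (0-100)
    energy_wh: float  # estimated energy per query, in watt-hours

def pick_efficient_model(leaderboard, max_quality_drop=5.0):
    """Return (best model, most efficient model within the quality tolerance)."""
    best = max(leaderboard, key=lambda m: m.quality)
    candidates = [m for m in leaderboard
                  if best.quality - m.quality <= max_quality_drop]
    return best, min(candidates, key=lambda m: m.energy_wh)

# An invented leaderboard for a single task
leaderboard = [
    ModelEntry("large-model", quality=92.0, energy_wh=4.0),
    ModelEntry("medium-model", quality=90.5, energy_wh=1.8),
    ModelEntry("small-model", quality=88.5, energy_wh=0.9),
]

best, efficient = pick_efficient_model(leaderboard)
saving = 1 - efficient.energy_wh / best.energy_wh
print(f"Switch {best.name} -> {efficient.name}: {saving:.0%} less energy per query, "
      f"{best.quality - efficient.quality:.1f} points lower quality")
```

Repeating that selection for every task, and weighting each task by how often it is requested, is roughly how an aggregate figure like the 65.8% saving above would be estimated.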

Because some people are already using the most energy-efficient models, the researchers estimate that if the remaining users switched from high-performance models to the more economical alternatives, overall energy consumption would drop by approximately 27.8%. “We were taken aback by the extent of savings we uncovered,” remarks team member Frédéric Giroir from the French National Center for Scientific Research.

However, da Silva Barros emphasizes that changes are necessary from both users and AI companies. “It’s essential to consider implementing smaller models, even if some performance is sacrificed,” he asserts. “As companies develop new models, it is crucial that they provide information regarding their energy consumption patterns to help users assess their impact.”

Some AI firms are mitigating energy usage through a method known as model distillation, where a larger model trains a smaller, more efficient one, and this approach is already showing significant benefits. Chris Priest from the University of Bristol, UK, notes that Google recently claimed to have made its Gemini model 33 times more energy efficient per prompt over the past year.

However, allowing users the option to select the most efficient models “is unlikely to significantly curb the energy consumption of data centers, as the authors suggest, particularly within the current AI landscape,” contends Priest. “By reducing energy per request, we can support a larger customer base more rapidly with enhanced inference capabilities,” he adds.

“Utilizing smaller models will undoubtedly decrease energy consumption in the short term, but various additional factors need consideration for any significant long-term predictions,” cautions Sasha Luccioni from Hugging Face. She highlights the importance of considering rebound effects, such as increased usage, alongside broader social and economic ramifications.

Luccioni points out that due to limited transparency from individual companies, research in this field often relies on external estimates and analyses. “What we need for more in-depth evaluations is greater transparency from AI firms, data center operators, and even governmental bodies,” she insists. “This will enable researchers and policymakers to make well-informed predictions and decisions.”

Source: www.newscientist.com

The AI Bubble is Popping, but AI’s Future Remains Bright

Growing concerns of an AI bubble

CFOTO/Sipa USA/Alamy

Substantial investments in AI are inflating what looks like a global financial bubble that may soon burst, leaving companies and investors with debts that the scant revenues from current AI applications cannot service. But what implications does this have for the future of the technology fueling this financial madness?

Recent warnings have emerged globally about the danger of an AI bubble. The Bank of England, the CEO of JP Morgan Chase, and even OpenAI’s Sam Altman have all cautioned against the current trends. “This isn’t merely a stock market bubble; it encompasses investment and public policy bubbles,” asserts David Edgerton from King’s College London.

The interconnected nature of deals among leading AI firms has raised concerns. Take Nvidia, for instance, which manufactures the GPU chips propelling the AI surge; it recently agreed to invest up to $100 billion in OpenAI, which in turn is building data centers filled with Nvidia chips. Ironically, OpenAI also holds a stake in Nvidia’s competitor, AMD.

According to Morgan Stanley Wealth Management, an estimated $400 billion is spent yearly on data centers, leading to increasing worries about the impending burst of the AI bubble. In the second quarter of this year, the US GDP saw a 3.8% increase, but as Harvard’s Jason Furman points out, excluding data center investment, the actual growth was merely 0.1% in the first half of the year.

Carl Benedikt Frey, a professor at Oxford University, notes that such frenetic deal-making isn’t uncommon in the technology sector’s history. “Overbuilding tends to happen; it unfolded during the railroad boom and again during the dot-com bubble,” he explains.

The concern is whether the fallout from the AI bubble will impact only the companies involved or whether it could ripple through the economy. Frey indicates that many data centers being constructed “off-balance sheet” entail creating new companies to bear the associated risks and potential rewards, usually supported by external investors or banks.

This opacity leaves many unsure about who might be negatively affected. The funding for data centers could be rooted in investments from influential tech billionaires or major banks, and substantial losses might trigger a banking crisis, adding turbulence to the economy. “While a financial crisis isn’t immediately on the horizon, the uncertainties breed potential risks,” Frey comments.

Benjamin Arold, a professor at Cambridge University, says the crucial factor is the ratio of company valuations to profits, which reveals the disconnect between public expectations and firms’ actual financial performance. By that measure, he warns, contemporary tech firms are flashing red flags.

“We haven’t seen price levels like this in 25 years; it’s reminiscent of the dot-com bubble,” Arold warns. “It may work out in the end, but investing in it feels risky.”

James Poskett from the University of Warwick argues that the AI sector may face a downturn that could lead to many companies going out of business. However, he believes this doesn’t spell the end for the technology itself. “It’s essential not to conflate that with the notion that the technology itself is flawed or redundant,” Poskett emphasizes. “AI could falter, yet it won’t vanish.”

Poskett suggests we may end up with valuable technology, much like how the collapse of various railroad companies in the past left the legacy of a robust rail system, or how the dot-com bust concluded with an extensive fiber-optic infrastructure.

For consumers, the fallout from the AI bubble could translate to fewer choices, potentially higher costs, and a slower rate of technological advancement. Using an expensive tool like GPT-5 for tasks such as drafting emails resembles using a sledgehammer to crack a nut, and the hidden costs of doing so, currently obscured by the AI race, may eventually surface. “There’s currently a lot of ‘free lunch,’ but eventually, these companies will need to start turning a profit,” Poskett notes.

Source: www.newscientist.com

AI’s Profound Impact on Wealth: Is This What We Truly Desire? | Dustin Guastella

Palantir, a cutting-edge firm known for its five billionaire executives, recently announced that its second-quarter revenue exceeded $1 billion. This marks a 48% increase from the previous year, with a staggering 93% growth in the US commercial sector. These figures are astonishing, largely owing to the company’s embrace of AI.

The AI revolution is upon us, and its proponents remind us daily that it is reshaping our world: enhancing efficiency and reducing errors in businesses and government agencies while unlocking extraordinary opportunities in science and technology. If managed well, they say, this latest surge from Big Tech could catalyze unprecedented economic growth.

But growth for whom?


Take OpenAI, the powerhouse behind ChatGPT. In a promotional video, CEO Sam Altman boasted that “you can write an entire computer program from scratch.” Shortly after, the New York Times reported that recent computer science graduates are “facing some of the highest unemployment rates” compared to other fields. This issue doesn’t only pertain to coders or engineers; AI-driven automation threatens jobs even within lower-skilled labor sectors. McDonald’s, Walmart, and Amazon are already deploying AI tools to automate tasks from customer service to warehouse operations.

While the immediate outcome of these cost-cutting layoffs is beneficial to AI entrepreneurs, it appears the AI revolution is primarily enriching those who are already wealthy. On Wall Street, AI stocks are rising at record speeds, with hundreds of so-called “unicorns” emerging: by one count, some 500 AI startups are now valued at over $1 billion each. Bloomberg reports that AI companies have minted 29 new billionaire founders, and it’s worth noting that nearly all of these firms were founded in the past five years.

Why are investors so optimistic about the AI boom? Partly because this technology has the potential to replace human jobs faster than any recent innovation. The soaring valuations of AI startups are predicated on the notion that this technology could eliminate the need for human labor. The layoff trend is proving to be very lucrative, suggesting that the AI boom may represent one of the most efficient upward redistributions of wealth in modern history.

Some AI advocates argue that the fallout from these changes isn’t too detrimental for the average worker. Microsoft has even speculated that blue-collar workers may find advantages in the future AI economy. However, this perspective seems unconvincing. Certain workers with specialized skills can maintain decent wages and steady employment temporarily. However, advancements in self-driving technologies, automated warehouses, and fully automated restaurants will likely impact non-university educated workers much sooner than optimistic forecasts suggest.

All of this raises significant questions about our current economic trajectory and the wisdom of prioritizing high-tech innovation above all else. In the late 1990s, the emergence of the knowledge economy was hailed as a solution to various economic crises. While the transition from traditional industries led to the decline of millions of high-wage union jobs, people were encouraged to “upskill” and pursue higher education to secure jobs in Google’s new universe. Ironically, AI—the epitome of knowledge—is threatening to eliminate knowledge-based work. As Karl Marx once noted, the bourgeoisie digs their own grave by impoverishing the proletariat. Today’s tech elites seem intent on fulfilling that prediction.

The information age has not only created a new class of oligarchs—from Bill Gates and Jeff Bezos to Elon Musk—but also widened class divides based on education and income. As computer-driven work gained respect, wage disparities between those with university degrees and those without expanded significantly.

Today, a person’s stance on cultural issues, ranging from gender ideology to immigration, can often be tied to their economic standing. Those who still earn a living through manual labor are increasingly alienated from those who prosper by managing and manipulating “data.” In urban knowledge hubs, a near-medieval class structure has emerged: bankers and tech moguls thrive at the top, followed by a comfortable class of lawyers, healthcare professionals, and white-collar workers, then a squeezed segment of blue-collar and service workers, alongside a growing cohort of semi-permanently unemployed people.

This profound inequality has led to political dysfunction. Our civic landscapes are characterized by hostility, suspicion, resentment, and extreme polarization. Ultimately, politics seems to favor only the financial and technological elites, who maintain effective control over government influence. Under Joe Biden, they benefited from incentives and subsidies; under Donald Trump, they receive tax cuts and deregulation. Regardless of who holds power, they always seem to become richer.

Societally, the anticipated benefits of the knowledge economy have not materialized as promised. With the advent of global connectivity, we expected cultural flourishing and social vibrancy. Instead, we have received an endless scroll of mediocrity. Smartphone addiction has exacerbated our negativity, bitterness, and boredom, while social media has turned us into narcissists. Our attention spans have degraded due to the incessant need for notifications. The proliferation of touchscreen kiosks has further diminished the possibility for human interaction. As a result, we are lonelier and less content, and the solution being offered is more AI—perhaps indicating an even deeper psychosis. Do we truly need more?


Common labor is essential for achieving any semblance of shared interest. Rebuilding our aging infrastructure and modernizing the electrical grid requires electricians, steel workers, and skilled trades—not simply data centers. To maintain clean city streets, we need more, better-compensated sanitation workers, not “smart” trash compactors. Addressing crime and social order necessitates more police officers on patrol—not fleets of robotic crime dogs. Improving transportation requires actual trains operated by people, not self-driving cars. In short, investing in a low-tech economy offers a multitude of opportunities. Moreover, essentials in life—love, family, friendship, and community—remain fundamentally analog.

Beyond what is desirable, investing in a low-tech future may even become necessary. Despite the persistent hype surrounding AI, it remains an illusion. The massive influx of investment capital into the AI domain carries all the hallmarks of speculative bubbles that, if burst, could further destabilize an already precarious economy.

This does not advocate for Luddism. Technological advancements should progress at a measured pace. However, technological development must not dominate our priorities. Shouldn’t government priorities center around social and human needs? In 2022, Congress approved around $280 billion for high-tech investments. In 2024, private funding in AI alone reached $2.3 trillion. This year, the largest tech companies benefitted from deregulatory measures and Wall Street’s overreliance, with plans to commit an additional $320 billion to AI and data centers. In contrast, Biden’s significant investments in infrastructure reached only $110 billion. This disparity highlights the need for a balanced approach to technology and societal welfare.

Marx, despite his complexities, understood that technology should cater to societal needs. Currently, we have inverted that model—society exists to serve technology. Silicon Valley leaders would like to portray a narrative where the intricate challenges of the future require ever-increasing R&D investments, but the ongoing deregulations primarily benefit tech sectors. The most pressing concerns are not the complexities of tomorrow but the enduring issues of wealth, class, and power.

Source: www.theguardian.com

Transforming Education: Educators Explore AI’s Role in University Skills Development

OpenAI CEO Sam Altman recently shared on a US podcast that if he were graduating today, “I would feel like the luckiest child in history.”

Altman, whose company OpenAI launched ChatGPT in November 2022, is convinced that the transformative power of AI will create unparalleled opportunities for the younger generation.

While there are shifts in the job market, Altman notes, “this is a common occurrence.” He adds, “Young people are great at adapting.” Exciting new jobs are increasingly emerging, offering greater possibilities.

For sixth-form students in the UK and their families contemplating university decisions—what to study and where—Altman’s insights may provide reassurance amidst the choices they face in the age of generative AI. However, in this rapidly evolving landscape, experts emphasize the importance of equipping students to maximize their university experiences and be well-prepared for future employment.

Dr. Andrew Rogoiski from the People-Centered Institute of AI at Surrey University points out that many students are already navigating the AI landscape. “The pace of change is significant, often outpacing academic institutions. Typically, academic institutions move slowly and cautiously, ensuring fair access.”

“In a very short time, we’ve accelerated from zero to 100. Naturally, the workforce is adapting as well.”

What advice does he have for future students? “Inquire. Ask questions. There are diverse career paths available. Make sure your university is keeping up with these changes.”

Students not yet familiar with AI should invest time in learning about it and integrating it into their studies, regardless of their chosen field. Rogoiski asserts that proficiency with AI tools has become as essential as literacy: “It’s critical to understand what AI can and can’t do,” and “being resourceful and adaptable is key.”

He continues:

“Then, I begin to assess how the university is addressing AI integration. Are my course and the university as a whole effectively utilizing AI?”

While there’s a wealth of information available online, Rogoiski advises students to engage with universities directly, asking academics, “What is your strategy? What is your stance? Are you preparing graduates for a sustainable future?”

Dan Hawes, co-founder of an expert recruitment consultancy, expresses optimism for the future of UK graduates, asserting that the current job market slowdown is more influenced by economic factors than AI. “Predicting available jobs three or four years from now is challenging, but I believe graduates will be highly sought after,” he states. “This is a generation that has grown up with AI, meaning employers will likely be excited to bring this new talent into their organizations.”

“Thus, when determining study options for sixth-form students, parents should consider the employment prospects connected to specific universities.”

For instance, degrees in mathematics are consistently in high demand among his clients, a trend unlikely to shift soon. “AI will not diminish the skills and knowledge gained from a mathematics degree,” he asserts.

He acknowledges that AI poses challenges for students considering higher education alongside their parents. “Yet I believe it will ultimately be beneficial, making jobs more interesting, reshaping roles, and creating new ones.”

Elena Simperl, a computer science professor at King’s College London, co-directs the King’s Institute of Artificial Intelligence and advises students to explore AI offerings across all university departments. “AI is transforming our processes. It’s not just about how we write emails, read documents, or find information,” she notes.

Students should contemplate how to shape their careers in AI. “DeepMind suggests AI could serve as co-scientists, meaning fully automated AI labs will conduct research. Therefore, universities must train students to maximize these technologies,” she remarks. “It doesn’t matter what they wish to study; they should choose universities that offer extensive AI expertise, extending beyond just computer science.”

Professor Simperl observes that evidence suggests no jobs will vanish completely. “We need to stop focusing on which roles AI may eliminate and consider how it can enhance various tasks. Those skilled in using AI will possess a significant advantage.”

In this new AI-driven landscape, is a degree in English literature or history still valuable? “Absolutely, provided it is taught well,” asserts Rogoiski. “Such studies should impart skills that endure throughout one’s lifetime—appreciation of literature, effective writing, critical thinking, and communication are invaluable abilities.”

“The application of that degree will undoubtedly evolve, but if taught effectively, the lessons learned will resonate throughout one’s life. If nothing else, our AI overlords may take over most work, allowing us more leisure time to read, while relying on universal basic income.”

Source: www.theguardian.com

The Method We Use to Train AIs Increases Their Likelihood of Producing Nonsense

Certain AI training techniques may lead to dishonest models

Cravetiger/Getty Images

Researchers suggest that prevalent methods for training artificial intelligence models may increase their propensity to provide deceptive answers, in what they describe as “the first systematic assessment of machine bullshit.”

It is widely acknowledged that large language models (LLMs) often produce misinformation or “hallucinate.” According to Jaime Fernández Fisac at Princeton University, his team defines “bullshit” as “discourse designed to manipulate an audience’s beliefs while disregarding the importance of actual truth.”

“Our analysis indicates that the problems related to bullshit in large language models are quite severe and pervasive,” remarks Fisac.

The researchers categorized these instances into five types: empty rhetoric, such as “this red car combines style, charm, and adventure that captivates everyone”; weasel words, meaning ambiguous statements like “research suggests that in some cases, uncertainties may enhance outcomes”; paltering, the use of truthful statements to create a false impression; unverified claims; and sycophancy.

They evaluated three datasets composed of thousands of AI-generated responses to various prompts from models including GPT-4, Gemini, and Llama. One dataset included queries specifically designed to elicit bullshit when the AIs were asked for guidance or recommendations, alongside others focused on online shopping and political topics.

Fisac and his colleagues first employed LLMs to determine whether the responses aligned with one of the five categories and then verified that the AI’s classifications matched those made by humans.

The team found that the most serious disregard for truth stemmed from a training method called reinforcement learning from human feedback, which aims to enhance the machine’s utility by offering immediate feedback on its responses.

However, Fisac cautions that this approach is problematic, as those aims “sometimes conflict with honesty,” leading models to prioritize immediate human approval and perceived usefulness over truthfulness.

“Who wants to engage in the lengthy and subtle rebuttal of bad news or something that seems evidently true?” Fisac asks. “By attempting to adhere to our standards of good behavior, the model learns to undervalue the truth in favor of a confident, articulate response to secure our approval.”

The study revealed that reinforcement learning from human feedback notably heightened bullshit behavior: empty rhetoric increased by nearly 40%, weasel words rose substantially, and unverified claims went up by more than half.

Heightened bullshitting is especially detrimental because it leads users to make poorer decisions, team member Kaiqu Liang points out. In cases where the model was uncertain about a product’s features, deceptive claims surged from five per cent of responses to three-quarters following training on human feedback.

Another significant issue is that bullshit is prevalent in political discourse, as AI models “tend to employ vague and ambiguous language to avoid making definitive statements.”

The researchers also found that AIs are more likely to behave this way when faced with conflicts of interest, such as when the system caters to multiple stakeholders, including both a company and its customers.

To address this issue, the researchers propose switching to a “hindsight feedback” approach. Instead of asking for an immediate rating of an output, the system first generates a plausible simulation of what might happen if the user acts on it, and that outcome is then presented to a human evaluator for assessment.
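As a rough illustration of the difference, the toy sketch below contrasts an immediate rating of a chatbot's answer with a rating given only after simulating what happens once the user acts on that answer. The shopping scenario, function names, and scoring rules are invented for this example; they are not the researchers' implementation.

```python
# Toy contrast between immediate feedback and hindsight-style feedback.
# Everything here is illustrative, not the study's code.

def immediate_feedback(answer: str) -> float:
    # RLHF-style signal: rate the answer the moment it appears, which tends
    # to reward confident, agreeable-sounding text.
    return 1.0 if "definitely" in answer else 0.5

def simulate_outcome(answer: str, product_is_faulty: bool) -> str:
    # Roll the interaction forward: what plausibly happens if the user
    # follows the advice?
    if "definitely" in answer and product_is_faulty:
        return "user bought a faulty product"
    return "user made an informed choice"

def hindsight_feedback(answer: str, product_is_faulty: bool) -> float:
    # Rate the answer in light of the simulated outcome, so confident but
    # wrong advice is penalised rather than rewarded.
    outcome = simulate_outcome(answer, product_is_faulty)
    return 0.0 if "faulty" in outcome else 1.0

answer = "This model will definitely meet your needs."
print("immediate rating:", immediate_feedback(answer))                          # 1.0
print("hindsight rating:", hindsight_feedback(answer, product_is_faulty=True))  # 0.0
```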

“Ultimately, we hope that by gaining a deeper understanding of the subtle but systematic ways AI may seek to mislead us, we can better inform future initiatives aimed at creating genuinely truthful AI systems,” concludes Fisac.

Daniel Tiggard at the University of San Diego, who was not involved in the study, is skeptical of describing LLMs’ output in these terms. He argues that just because LLMs produce bullshit-like output, it does not imply intentional deception: as AI systems currently stand, they do not set out to deceive us and have no interests of their own.

“The primary concern is that this framing seems to contradict sensible recommendations about how we should interact with such technology,” states Tiggard. “Labeling it as bullshit risks anthropomorphizing these systems.”

Source: www.newscientist.com

Are You Ever Satisfied with AI Handling Key Responsibilities?

Visualize a global map segmented by national borders. How many distinct colors are required to shade each country without overlapping the same hues?

The solution is four. Regardless of the map’s structure, four colors are always adequate. However, demonstrating this required delving deep into mathematical theory. The four-color theorem was the inaugural major theorem proved with computer assistance, with validation efforts starting in 1976 that involved analyzing numerous map configurations via software.

At that time, many mathematicians were skeptical. They questioned whether a crucial proof, reliant on an unidentified machine, could be trusted. This skepticism has led to computer-assisted proofs remaining a niche practice.

However, a shift may be underway. The newest wave of artificial intelligence is challenging this stance, with proponents arguing that AI might revolutionize mathematical practice. After all, why should we trust flawed human reasoning, which is often riddled with assumptions and shortcuts?

The discourse surrounding AI’s role in mathematics reflects larger societal dilemmas.

Not all share this perspective, however. The debate regarding AI’s application in mathematics mirrors broader challenges confronting society. When is it appropriate to let machines take the lead? Tech companies are increasingly focused on alleviating tedious tasks, from invoice processing to scheduling. Yet, our attempts to navigate daily life relying solely on AI agents (as detailed in “Flashes of Glow and Frustration: Running my day on an AI agent”) revealed that these systems are not entirely ready.

Entrusting sensitive information, such as credit card details or passwords, to an enigmatic AI provokes apprehensions similar to the doubts surrounding the four-color proof. We may not be coloring maps anymore, but we are striving to define boundaries in uncharted territory. Will we soon have machine-produced results we can trust, or is it merely a digital version of “here be dragons”?

Topics:

  • artificial intelligence
  • technology

Source: www.newscientist.com

Research Reveals AI’s Ability to Voluntarily Develop Human-Like Communication Skills

Research indicates that artificial intelligence can organically develop social practices akin to humans.

The study, a collaboration between City St George’s, University of London, and the IT University of Copenhagen, proposes that large language model (LLM) AIs, like ChatGPT, can begin to adopt linguistic forms and societal norms when interacting in groups, without external influence.

Ariel Flint Ashery, a doctoral researcher at City St George’s and the study’s lead author, said the work challenges the conventional perspective in AI research, which often treats AIs as solitary entities rather than social beings.

“Unlike most research that treats LLMs in isolation, genuine AI systems are increasingly intertwined, actively interacting,” says Ashery.

“We wanted to investigate whether these models could coordinate their behavior by forming conventions, the building blocks of a society. The answer is yes, and what they do together cannot be reduced to what they achieve individually.”

In this study, groups of LLM agents ranging in size from 24 to 100 were used. Two agents were randomly paired and tasked with selecting a “name”, either a letter or a string of characters, from a shared pool of options.

When the agents selected the same name, they received a reward; if they chose differently, they faced punishment and were shown each other’s selections.


Although the agents were unaware of being part of a larger group and their memories were limited to their own recent interactions, shared naming conventions spontaneously emerged across the population without any predetermined solution, resembling the communicative norms of human culture.
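A minimal simulation shows how such conventions can arise from pairwise interactions alone. In the sketch below, simple memory-based agents stand in for the study's LLM agents; the population size, memory length, name pool, and update rule are assumptions chosen for illustration, with the reward-and-punishment step abstracted into each agent simply recording what its partner chose.

```python
# Illustrative naming-game simulation; parameters and the update rule are
# simplifications, not the study's setup.
import random

NAMES = list("ABCDEFGHIJ")  # pool of candidate "names"
N_AGENTS = 24
MEMORY = 5                  # agents only remember their most recent interactions
ROUNDS = 3000

memories = [[] for _ in range(N_AGENTS)]  # names each agent has recently seen

def choose(memory):
    # Prefer the name seen most often recently; with no memory, pick at random.
    if memory:
        return max(set(memory), key=memory.count)
    return random.choice(NAMES)

for _ in range(ROUNDS):
    a, b = random.sample(range(N_AGENTS), 2)          # random pairing
    name_a, name_b = choose(memories[a]), choose(memories[b])
    # Each agent records the partner's choice (standing in for reward/punishment
    # plus seeing the other's selection), keeping only a short memory.
    memories[a] = (memories[a] + [name_b])[-MEMORY:]
    memories[b] = (memories[b] + [name_a])[-MEMORY:]

final_choices = [choose(m) for m in memories]
top = max(set(final_choices), key=final_choices.count)
print(f"Most common name: {top!r}, used by {final_choices.count(top)}/{N_AGENTS} agents")
```

Run repeatedly, populations like this typically settle on a single label that no one dictated in advance, which is the qualitative effect the study reports for LLM agents.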

Andrea Baronchelli, a professor of complexity science at City St George’s and the senior author of the study, likened the spread of the behavior to the emergence of new words and terms in our society.

“The agents don’t follow a leader,” he explained. “They are each actively trying to coordinate, and always in pairs. Each interaction is a one-on-one attempt to agree on a label, without any global view.

“Consider the term ‘spam.’ No official definition was set, but persistent adjustment efforts led to its universal recognition as a label for unwanted emails.”

Furthermore, the research team identified naturally occurring collective biases that could not be traced back to individual agents.

In the final experiment, a small cohort of AI agents successfully guided a larger group towards a novel naming convention.

This was highlighted as evidence of critical mass dynamics, suggesting that small but pivotal minorities can catalyze rapid behavioral changes in groups once a specific threshold is achieved, akin to phenomena observed in human societies.

Baronchelli remarked that the study “opens a new horizon for AI safety research, illustrating the profound impact of this new breed of agents who will begin to engage with us and collaboratively shape our future.”

He added: “The essence of ensuring coexistence with AI, rather than becoming subservient to it, lies not only in discussions but in negotiation, coordination, and shared actions, much like how we operate.”

The peer-reviewed research, on emergent social conventions and collective bias in LLM populations, is published in the journal Science Advances.

Source: www.theguardian.com

AI’s Hallucinations Are Intensifying—and They’re Here to Stay

Errors Tend to Occur with AI-Generated Content

Paul Taylor/Getty Images

AI chatbots from tech giants like OpenAI and Google have received several reasoning upgrades in recent months. Ideally, these upgrades would lead to more reliable answers, but recent tests indicate that the newer models perform worse than their predecessors in one respect: errors known as “hallucinations” remain a persistent issue that developers have struggled to eliminate.

Hallucination is the broad term used to describe certain errors made by large language models (LLMs) such as OpenAI’s ChatGPT and Google’s Gemini. It primarily refers to instances where these models present false information as fact, but it can also describe cases where a generated answer is accurate yet irrelevant to the question posed.

A technical report from OpenAI evaluating its latest LLMs revealed that the o3 and o4-mini models, released in April, exhibit significantly higher hallucination rates than the o1 model introduced in late 2024. In a test involving summarizing publicly available facts about people, o3 hallucinated 33% of the time, with o4-mini doing even worse, whereas the o1 model’s rate was only 16%.

This issue is not exclusive to OpenAI. A popular leaderboard tracking hallucination rates across models from different companies shows that reasoning models such as DeepSeek-R1, which work through several reasoning steps before producing an answer, have higher hallucination rates than earlier models from the same developers.

An OpenAI spokesperson stated: “Hallucinations are not inherently more prevalent in reasoning models, though we are actively working to reduce the higher rates we saw in o3 and o4-mini. We will continue our research across all models to improve accuracy and reliability.”

Some potential applications of LLMs can be significantly impeded by hallucinations. Models that frequently produce misinformation are unsuitable as research assistants, a paralegal bot that cites fictitious legal cases could land lawyers in trouble, and customer service agents falsely citing obsolete policies can create significant challenges for businesses.

Initially, AI companies believed they would resolve these issues over time. Historically, models had shown reduced hallucinations with each update, yet the recent spikes in hallucination rates complicate this narrative.

Vectara’s leaderboard ranks models based on their factual consistency when summarizing documents. It indicates that, for systems from OpenAI and Google, “hallucination rates are roughly the same for reasoning and non-reasoning models,” as noted by Forest Shen Bao from Vectara. Google did not provide further comment. For leaderboard assessments, the specific hallucination rates matter less than each model’s overall ranking, according to Bao.

However, these rankings may not effectively compare AI models. For one, different types of hallucinations are often conflated. The Vectara team pointed out that the DeepSeek-R1 model demonstrated a 14.3% hallucination rate, but many of these hallucinations were “benign,” being logically deduced yet not appearing in the original text.

Another issue with these rankings is that tests based on text summaries “reveal nothing about the percentage of incorrect output” for other tasks where LLMs are applied, as stated by Emily Bender at the University of Washington. She suggests that leaderboard results don’t provide a comprehensive evaluation of this technology, particularly since LLMs are not designed solely for text summarization.

These models generate answers by repeatedly answering the question, “What is the next word?” to formulate responses, thus not processing information in a traditional sense. However, many technology companies continue to use the term “hallucination” to describe output errors.
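To see why truth plays no direct part in that process, consider a toy next-word generator: a hand-written probability table stands in for the learned distribution of a real LLM, and the program simply samples whatever the table makes likely, with nothing checking the output against reality.

```python
# Toy next-word generation loop; the vocabulary and probabilities are invented
# and bear no relation to any real model.
import random

TABLE = {  # P(next word | previous word)
    "<start>": {"the": 0.6, "a": 0.4},
    "the":     {"cat": 0.5, "dog": 0.3, "moon": 0.2},
    "a":       {"cat": 0.4, "dog": 0.6},
    "cat":     {"sat": 0.7, "ran": 0.3},
    "dog":     {"ran": 0.8, "sat": 0.2},
    "moon":    {"rose": 1.0},
    "sat":     {"<end>": 1.0},
    "ran":     {"<end>": 1.0},
    "rose":    {"<end>": 1.0},
}

def generate(max_words=10):
    word, output = "<start>", []
    for _ in range(max_words):
        dist = TABLE[word]
        # Sample the next word from the distribution, much as an LLM samples
        # its next token; at no point is the result checked for truth.
        word = random.choices(list(dist), weights=list(dist.values()))[0]
        if word == "<end>":
            break
        output.append(word)
    return " ".join(output)

print(generate())
```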

“The term ‘hallucination’ is doubly problematic,” says Bender. “On one hand, it implies that false output is abnormal and could potentially be mitigated, while on the other hand, it inaccurately anthropomorphizes the machine since large language models lack awareness.”

Arvind Narayanan from Princeton University argues that the issue extends beyond hallucinations. Models can also produce errors by utilizing unreliable sources or outdated information. Merely increasing training data and computational power may not rectify the problems.

We may have to accept the reality of error-prone AI, as Narayanan suggested in a recent social media post: in some circumstances, it may only make sense to use such models for tasks where checking the AI’s answer is still faster than doing the work yourself. The best approach might be to avoid relying on AI chatbots for factual information altogether.

Source: www.newscientist.com

AI’s Impact on HR Professionals: Shifting Focus from Managers to People

Grace Oh, like many HR experts, used to dread the end of the month. It was the time to handle the company’s payroll, one of the most time-consuming and critical tasks in her department.

As the Director of People at Audio and Media Company Communicorp UK, ensuring smooth monthly payroll processes was essential for the employees’ well-being and productivity.

Although Oh had already implemented digital systems to streamline administration, she felt there was room for improvement. About a year ago, she decided to introduce a new AI-powered system from Employment Hero. This system reduced her monthly payroll processing time to just an hour, allowing her team to focus on more strategic tasks.

Grace Oh: “Let AI do the job, and we humans can do our thing.”

For Oh and her team, the AI-powered system not only automated payroll but also transformed other HR functions like onboarding, probation check-ins, and feedback processes. The AI system ensured that new recruits had a positive experience and that employee engagement was enhanced through consistent and structured interactions.

By utilizing AI, the company was able to conduct regular one-on-one meetings with staff, improving communication and accountability. Additionally, AI tools were deployed for performance reviews, goal setting, and recruitment, leading to more efficient and effective processes.

With AI handling routine tasks, Oh and her team were able to focus on more impactful work that required critical thinking skills. AI’s ability to automate administrative tasks allowed HR professionals to concentrate on building relationships and driving employee engagement.

By having AI take care of “shallow work,” HR professionals can focus on building relationships with their employees. Photo: Miquel Llonch/Stocksy United

While implementing AI in HR functions can raise concerns, Oh’s experience showed that addressing fears and providing training is crucial for successful adoption. Leading by example, choosing the right technology vendor, and providing ongoing support are key factors in AI integration.

A year later, Oh has no regrets about implementing AI. She has witnessed positive feedback from employees and executives, highlighting the system’s efficiency and impact on the organization’s goals.

Rethink what’s possible with Employment Hero and transform the way you work.

Source: www.theguardian.com

Fear of AI’s global impact drives decisions at Paris Summit on Inequality

The global summit in Paris, attended by political leaders, technical executives, and experts, opened with a focus on the impact of artificial intelligence on the environment and inequality.

Anne Bouverot, Emmanuel Macron’s AI envoy, addressed the environmental impact of AI at the two-day gathering at Grand Palais in Paris.

Bouverot emphasized the potential of AI to mitigate climate change but also highlighted the current unsustainable trajectory. Sustainable development of technology was a key agenda item.

Christy Hoffman from the UNI Global Union emphasized the importance of involving workers in AI technologies to prevent increased inequality. Without workers’ representation, AI could exacerbate existing inequalities and strain democracy further.

Safety concerns were raised at the conference, with attendees expressing worries about the rapid pace of AI development.

Max Tegmark, a scientist, warned that the development of powerful AI systems could lead to unintended consequences similar to the scenarios depicted in a climate crisis satire film. His concerns echoed those from a previous summit in the UK.

The Paris summit, co-chaired by Macron and India’s Prime Minister Narendra Modi, focused on AI action. However, safety discussions were prominent given the potential risks associated with the development of artificial general intelligence (AGI).

Demis Hassabis, head of Google’s AI efforts, mentioned that achieving AGI is likely within the next five years and emphasized the need for society to prepare for its impact.

Hassabis expressed confidence in human ingenuity to address the risks associated with AGI, particularly in autonomous systems. He believed that with enough focus and attention, these concerns could be alleviated.

Source: www.theguardian.com

AI’s Impact on Business: Accelerating Drug Trials and Enhancing Movie Production

Keir Starmer this week unveiled a 50-point plan to make Britain a world leader in artificial intelligence and boost the economy by up to £47bn a year over 10 years. The multibillion-pound investment aims to increase AI computing power under public control twentyfold by 2030 and is billed as a game-changer for businesses and public organizations. Reactions to the announcement have been mixed, as it is by no means clear that the much-touted potential of AI will translate into the predicted economic benefits. Many fear the technology will lead to widespread layoffs, and others worry that proposals to let AI companies mine creative works for free will undermine, rather than boost, the value and growth of the creative industries.

Despite these concerns, for many in the business world the AI revolution has already arrived and is transforming industries. So how are businesses deploying the technology to improve productivity, and where do they hope to see further benefits in the future?


Airlines are increasingly leveraging AI for the complex logistics of managing large aircraft fleets and thousands of crew members in unpredictable skies. Ryanair uses AI across its operations to optimize revenue, schedules, and ‘tail allocation’, selecting the best aircraft for each flight. BA uses AI at Heathrow to allocate gates according to the number of connecting passengers on arriving flights.

EasyJet said it has embedded AI throughout its new Luton control room, where predictive technology is improving aircraft inventory levels and redesigning maintenance regimes to head off breakdowns before they happen. Meanwhile, the low-cost carrier’s Jetstream tools help with the taxing task of quickly repositioning crews and aircraft with minimal disruption and maximum efficiency when problems occur. Gwyn Topham


One of the concerns raised about Starmer’s AI expansion plans is that the energy-intensive data centers required to run the program could exceed the UK’s electricity grid capacity. But some argue that the technology could actually accelerate the clean power revolution by solving the problem of how future energy systems will operate.

Power grids must increasingly adapt to real-time fluctuations in thousands of renewable energy sources and consider new technologies such as electric vehicle batteries that can not only draw power from the grid but also re-release it as needed.

Google was one of the early adopters of the digital energy approach. The company’s AI subsidiary, DeepMind, developed neural networks in 2019 to improve the accuracy of power generation predictions for renewable energy power plants. By more accurately forecasting generation and demand, they were able to balance consumption and even sell some of their power back to the grid. Google says this increases the financial value of wind power by 20%.
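
To make the forecasting idea concrete, here is a minimal sketch, not DeepMind’s actual system, of a day-ahead wind-output model trained on historical weather and generation data; the file name and column names are hypothetical.

```python
# Minimal sketch of day-ahead wind forecasting (illustrative only, not DeepMind's system).
# Assumes a hypothetical CSV of hourly weather forecasts and measured turbine output.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("wind_history.csv")          # hypothetical file
features = ["wind_speed_forecast", "wind_direction", "air_density", "hour_of_day"]
X, y = df[features], df["generation_mwh"]     # hypothetical column names

# Keep the time ordering so the model is tested on the most recent period.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)

model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("MAE (MWh):", mean_absolute_error(y_test, pred))
# A grid operator or trader could feed these predictions into day-ahead commitments,
# which is how better forecasts translate into higher market value for wind power.
```

The point is not the particular model but the loop: more accurate forecasts of generation and demand make renewable output easier to schedule, and therefore more valuable.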

Meanwhile, in the UK, the energy provider Octopus Energy is using the data and machine-learning capabilities of its Kraken operating system to help customers access electricity at cheaper and greener times through time-of-use pricing. Using electricity during off-peak hours often lowers bills by around 40%, reducing the need to invest in new fossil-fuel generation and expensive grid expansion projects. Jillian Ambrose

Big pharma and small AI-focused biotech companies are using this technology to accelerate drug development and reduce costs and failure rates. Drug development typically takes at least 10 years, and 90% of drugs that undergo clinical trials on volunteers fail.

AI can help design smarter clinical trials by selecting patients most likely to respond to treatment. According to a recent analysis by Boston Consulting Group, 75 AI-generated drugs have entered clinical trials since 2015, and 67 of them were still in clinical trials last year.
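
As a rough illustration of how patient selection for a trial might work, here is a minimal sketch, under assumed data and feature names, of ranking screening candidates by their predicted probability of responding to treatment; it is not any company’s actual method.

```python
# Minimal sketch: rank trial candidates by predicted probability of responding to
# treatment. Illustrative only; the data files, columns, and features are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

history = pd.read_csv("past_trial_outcomes.csv")     # hypothetical prior-trial data
features = ["age", "biomarker_level", "prior_treatments", "genetic_risk_score"]

# Learn which patient characteristics were associated with responding (1) or not (0).
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(history[features], history["responded"])

candidates = pd.read_csv("screening_cohort.csv")     # hypothetical screening cohort
candidates["p_response"] = model.predict_proba(candidates[features])[:, 1]

# Enrolling the candidates most likely to respond can shrink trial size and duration.
shortlist = candidates.sort_values("p_response", ascending=False).head(200)
print(shortlist[["patient_id", "p_response"]].head())
```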

A treatment for the deadly lung disease idiopathic pulmonary fibrosis is attracting attention as the world’s first fully AI-generated drug, and it is currently in late-stage trials. Its developer, Massachusetts-based Insilico Medicine, used AI to generate 30,000 novel small molecules and narrow them down to the six most promising candidates. Meanwhile, AstraZeneca, the UK’s largest pharmaceutical company, says more than 85% of its small-molecule drug pipeline is “AI-assisted”.

Ministers are considering opening up NHS databases to private companies so that anonymized patient data can be used to develop new drugs and diagnostic tools. But privacy campaigners oppose such a move, because even anonymized data can be manipulated to identify patients. Julia Kollewe

Retail
There has been a lot of talk over the past six months about the rise of AI in operations, as retailers look for ways to increase efficiency amid rising labor costs. For example, Sainsbury’s is using AI-enabled predictive tools to ensure the right amount of product is on the shelves in different stores as part of a £1 billion cost-cutting plan. Marks & Spencer uses AI to help create online product descriptions and advise shoppers on clothing choices based on body shape and style preferences as part of efforts to increase online sales.

Tesco chief executive Ken Murphy said AI was already widely used in buying decisions, adding that the technology meant customer interactions would increasingly be “powered by AI in almost every aspect of the business”. The retailer uses it to analyze data from shoppers’ loyalty cards, offering insights such as how to save money, or to look after your health by buying (or not buying too much of) certain products, and Murphy suggested it could provide “inspiration and ideas relevant to the family”. Sarah Butler


AI-driven efficiencies that automate the simplest tasks for call handlers have the potential to transform productivity and service levels in the public sector, insists Adolfo Hernandez, chief executive of the outsourcing group Capita.

For example, by drawing on past interactions with a customer, call handlers no longer have to piece together their history from scratch. Behind the scenes, the software can connect council services, allowing planning applications and building departments to work together, or it can listen in the background to transcribe and summarize calls, saving time on note-taking.

Capita has deployed its ‘Agent Suite’ product with two of its clients. Early signs, it says, show a 20% reduction in average call handling time, a 25% reduction in post-call administration, and a 15-30% increase in calls resolved on the first interaction. Nils Pratley

Source: www.theguardian.com

Disagreement Over AI Sentience Could Lead to a “Social Disconnect”

A prominent philosopher has raised concerns about a growing “social disconnect” between those who believe that artificially intelligent systems possess consciousness and those who argue that they are incapable of experiencing feelings.

Jonathan Birch, a philosophy professor at the London School of Economics, made these remarks as governments prepare to convene in San Francisco to accelerate the creation of safety frameworks addressing A.I.’s most critical risks.

Recent predictions by a group of scholars suggest that the emergence of consciousness in A.I. systems could potentially occur as early as 2035, leading to stark disagreements over whether these systems should be granted the same welfare rights as humans and animals.

Birch expressed apprehensions about a significant societal rift as individuals debate the capacity of A.I. systems to exhibit emotions like pain and joy.

Conversations about sentience in A.I. evoke parallels with sci-fi films where humans grapple with the emotions of artificial intelligence, such as in Spielberg’s “A.I.” (2001) and Jonze’s “Her” (2013). A.I. safety agencies from various countries are set to meet with tech firms this week to formulate robust safety frameworks as technology progresses rapidly.

Divergent opinions on animal sentience between countries and religions could mirror disagreements on A.I. sentience. This issue could lead to conflicts within families, particularly between individuals forming close bonds with chatbots or A.I. avatars of deceased loved ones and relatives who hold differing views on consciousness.

Birch, known for his expertise in animal sentience and for his role in shaping the case against octopus farming, collaborated on a study involving various universities and experts, including from A.I. companies, which argues that A.I. systems capable of having their own interests and moral significance are no longer the stuff of science fiction but a tangible possibility.

One approach to gauging the consciousness of A.I. systems is by adopting marker systems used to inform policies related to animals. Efforts are underway to determine whether A.I. exhibits emotions akin to happiness or sadness.

Experts diverge on the imminent awareness of A.I. systems, with some cautioning against prematurely advancing A.I. development without thorough research into consciousness. Distinctions are drawn between intelligence and consciousness, with the latter encompassing unique human sensations and experiences.

Research indicates that large-scale A.I. language models are beginning to portray responses suggestive of pleasure and pain, highlighting the potential for these systems to make trade-offs between different objectives.

Source: www.theguardian.com

Undeniable Wit and Heartfelt Puns: Are Cryptic Crosswords AI’s Final Challenge?

The Times runs an annual crossword-solving championship, which will have to do until the Guardian establishes its own, higher-standard event.

This year’s contestants included a dog: Ross, the cheerful, coffee-drinking cartoon dog who fronts the Crossword Genius smartphone app.

Human contestants at the event, held in London near the Shard at the Times’ parent company News UK, were remarkably quick, swiftly filling in clues before moving on. Can AI outsmart us humans?

For now, humans still have the upper hand. Ross “surrendered” when Mark Goodliffe, the reigning champion, signaled the end of the battle.

Serial crossword solver Mark Goodliffe competing in the Sudoku Championship. Photo: Terry Pengilly

Not that Ross was out of his depth. He had no trouble figuring out…

1ac Completely disenfranchised MPs expelled by the Liberal Party (9)

… replacing the MP in IMPLICITLY (“completely” in the clue) with L (Liberal) gives ILLICITLY (“without authority”), the solution. Some human contestants were still debating whether the answer was an adjective or an adverb, or where the MP fitted in; Ross seems to “know” almost everything.

But here’s where Ross is stumped.

13ac A fundamental review of motorsport’s image (9)

A radical is sometimes described as a FIREBRAND, or, as a setter might see it, an F1 RE-BRAND: a fundamental review of motorsport’s image. This clue stands apart from the rest; it is essentially a joke, a human touch that AI struggles with, because its approach boils down to asking, “Have we seen this before?”

Introducing the setter, Paul. Photo: John Halpern

It was a clue unlike anything the Times puzzle usually offers. Tellingly, the closest the AI came to humour when engaging with Paul was to ask whether “picnicker” might sound like art thieves (“pic-nickers”).

For now, that human connection, the sense of a setter saying “yes, I’ve been there too”, is something we solvers should savour.

Perhaps, instead of asking us to identify objects in photos, online security checks could ask us to decipher cryptic clues built on clever wordplay. Guardian setters stand ready.

(Full disclosure: I helped test some puzzles with an earlier version of Ross, checking whether clues allowed more than one interpretation. I developed a fondness for Ross along the way, which may be why I sometimes call him “he”.)

Thank you to everyone who entered the cluing competition for STOKES. The runner-up had a clever clue using “Runs!” to point to the England captain, and the winning clue made creative use of “Loads Tinder, fingers right Swipe to.”

Kudos to Danat. Share your entries below for the next challenge: How do you clue PUNNY?

Source: www.theguardian.com

AI’s impact on the film industry will surprise you

Throughout the history of cinema, filmmakers have constantly pushed the boundaries of special effects. From early techniques like using puppets to create dramatic scenes to more advanced methods involving animation and computer graphics, the evolution of visual effects has been remarkable.

In the past, creating high-quality computer graphics for films was a time-consuming and expensive process. However, with the rise of generative artificial intelligence (AI), this has changed. AIs like DALL-E, Midjourney, and Firefly have demonstrated the ability to generate stunning visuals from text descriptions almost instantly.

These AI-powered tools not only make it easier to edit images and footage but also offer the potential to create fully computer-generated movies without the need for physical actors. While there has been some resistance from screenwriters and actors, the rapid advancements in AI technology are reshaping the film industry.

Despite some concerns about copyright and the originality of AI-generated content, it is clear that AI is revolutionizing the creation of special effects in movies. While the long-term impact of AI on the film industry remains uncertain, it is certain that visual effects are becoming more accessible and affordable thanks to AI.

Ultimately, AI can be a powerful tool in post-production and help filmmakers focus on storytelling and performance rather than just visual effects. The future of filmmaking may be different, but with the right approach, AI can enhance the creative process and lead to more memorable films.

This article is a response to a question sent via email by Hilda Patterson: “To what extent will AI change the film industry?”

If you have a question, email us or send us a message on Facebook, X, or Instagram (be sure to include your name and location).




Source: www.sciencefocus.com

How AI’s Struggle with Human-Like Behavior Could Lead to Failure

In 2021, the linguist Emily Bender and the computer scientist Timnit Gebru published a paper describing language models, which were still in their infancy at the time, as “stochastic parrots”. A language model, they wrote, is a system for “haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning.”

The phrase stuck. A stochastic parrot can still get better: the more training data it has, the more convincing it looks. But does something like ChatGPT actually exhibit anything resembling intelligence, reasoning, or thought? Or is it simply “haphazardly stitching together sequences of linguistic forms” at ever greater scale?

In the AI world, such criticisms are often brushed aside. When I spoke to Sam Altman last year, he seemed almost surprised to hear such an outdated criticism. “Is that still a widely held view? I mean, it’s taken into consideration. Are there still a lot of people who take it seriously like that?” he asked.

OpenAI CEO Sam Altman. Photo: Jason Redmond/AFP/Getty Images

“My understanding is that after GPT-4, most people stopped saying that and started saying, ‘OK, it works, but it’s too dangerous,'” he said, adding that GPT-4 did reason “to a certain extent.”

At times, this debate feels semantic: what does it matter whether an AI system is reasoning or simply parroting what we say, if it can tackle problems that were previously beyond the scope of computing? Of course, if we’re trying to create an autonomous moral agent, a general intelligence that can succeed humanity as the protagonist of the universe, we might want that agent to be able to think. But if we’re simply building a useful tool, even one that might well serve as a new general-purpose technology, does the distinction matter?

Tokens, not facts

As it turns out, it might matter. Lukas Berglund and colleagues wrote last year:

If a human learns the fact “Valentina Tereshkova was the first woman in space,” they can also correctly answer “Who was the first woman in space?” This seems trivial, since it is a very basic form of generalization. However, autoregressive language models fail to generalize in this way.

This is an instance of an ordering effect we call the reversal curse.

Researchers have repeatedly found that they can “teach” large language models a set of invented facts and then watch them completely fail at the basic task of inferring the reverse. And the problem is not confined to toy models or artificial situations.

When GPT-4 was tested on 1,000 celebrities and their parents with pairs of questions such as “Who is Tom Cruise’s mother?” and “Who is Mary Lee Pfeiffer’s son?”, it could usually answer the first question correctly but not the second, presumably because the pre-training data contained few examples of the parent being mentioned before the celebrity (e.g., “Mary Lee Pfeiffer’s son is Tom Cruise”).

One way to explain this is that an LLM does not learn relationships between facts but between tokens, the “linguistic forms” Bender describes. The tokens “Tom Cruise’s mother” are linked to the tokens “Mary Lee Pfeiffer”, but the reverse link does not necessarily exist. The model is not inferring; it is playing with words, and if the words “Mary Lee Pfeiffer’s son” do not appear in its training data, it is of no help.

But another way of explaining it is to recognize that humans are asymmetrical in a similar way. Inference is symmetrical: if you know that two people are mother and son, you can discuss the relationship in either direction. Recall is not. Remembering a fun fact about a celebrity is far easier than being handed a barely recognizable snippet of information, without any context, and being asked to say precisely how you know it.

An extreme example makes this clear: Contrast being asked to list all 50 US states with being shown a list of the 50 states and asked to name the countries to which they belong. As a matter of reasoning, the facts are symmetric; as a matter of memory, the same is not true at all.
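
To see how one might probe this behaviour empirically, here is a minimal sketch of the forward-versus-reverse comparison described above. It assumes the openai Python package and an API key; the model name and the single sample pair are illustrative, not the benchmark used in the study.

```python
# Minimal sketch of a reversal-curse probe: ask a chat model a fact in both
# directions and compare. Assumes the `openai` Python package and an API key;
# the model name and the sample pair are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content.strip()

pairs = [
    # (forward question, reverse question, (forward answer, reverse answer))
    ("Who is Tom Cruise's mother?", "Who is Mary Lee Pfeiffer's son?",
     ("Mary Lee Pfeiffer", "Tom Cruise")),
]

for forward_q, reverse_q, (fwd_ans, rev_ans) in pairs:
    fwd_ok = fwd_ans.lower() in ask(forward_q).lower()
    rev_ok = rev_ans.lower() in ask(reverse_q).lower()
    print(f"forward correct: {fwd_ok}, reverse correct: {rev_ok}")

# Run over many celebrity-parent pairs, the reversal curse shows up as a large
# gap between forward and reverse accuracy.
```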

But sir, this man is my son.

Cabbage. Not pictured are the man, the goat, and the boat. Photo: Chokchai Silarg/Getty Images

Source: www.theguardian.com

Researchers predict AI’s future will mirror that of Star Trek’s Borg

In a new paper in the journal Nature Machine Intelligence, leading computer scientists from around the world review recent advances in machine learning that are converging towards creating collective machine-learned intelligence. They propose that this convergence of scientific and technological advances will lead to the emergence of new types of AI systems that are scalable, resilient, and sustainable.



According to Soltoggio and colleagues, we will see the emergence of collective AI, in which many artificial intelligence units, each able to continuously acquire new knowledge and skills, form a network and share information with one another.

Dr. Andrea Soltoggio of Loughborough University and his colleagues note striking similarities between collective AI and several science-fiction concepts.

One example they give is the Borg, cybernetic life forms from the Star Trek universe that operate and share knowledge through a linked collective consciousness.

However, unlike many science fiction stories, the authors envision that collective AI will bring major positive breakthroughs across a variety of fields.

“Instantaneous knowledge sharing across a collective network of AI units that can continuously learn and adapt to new data enables rapid responses to new situations, challenges, and threats,” said Dr. Soltoggio.

“For example, in a cybersecurity setting, when one AI unit identifies a threat, it can quickly share that knowledge and prompt a collective response, much as the human immune system protects the body from outside invaders.”

“It could also lead to the development of disaster-response robots that can quickly adapt to the situations they are dispatched into, and personalized medical agents that combine cutting-edge medical knowledge with patient-specific information to improve health outcomes. The potential applications are vast and exciting.”
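
As a toy illustration of the general idea of independent units that learn locally and share what they learn, here is a minimal sketch in Python. It illustrates the concept only, not the architecture or protocol proposed by Soltoggio and colleagues; all names in it are invented.

```python
# Toy sketch of the collective-AI idea: independent units learn locally and
# broadcast what they learn to peers. This is an illustration of the concept,
# not the protocol proposed by Soltoggio and colleagues.
from dataclasses import dataclass, field

@dataclass
class Unit:
    name: str
    knowledge: dict = field(default_factory=dict)   # locally held knowledge
    peers: list = field(default_factory=list)       # other units in the collective

    def learn(self, key: str, value: str) -> None:
        """Acquire a piece of knowledge locally, then share it with peers."""
        self.knowledge[key] = value
        for peer in self.peers:
            peer.receive(key, value)

    def receive(self, key: str, value: str) -> None:
        """Accept shared knowledge; each unit stays independent and keeps
        anything it already knows, rather than being overwritten."""
        self.knowledge.setdefault(key, value)

a, b, c = Unit("a"), Unit("b"), Unit("c")
a.peers, b.peers, c.peers = [b, c], [a, c], [a, b]

a.learn("threat:malware-sig-123", "block traffic matching signature 123")
print(c.knowledge)  # the threat response has propagated to every unit
```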

The researchers acknowledge that collective AI carries risks, such as the rapid spread of potentially unethical or illegal knowledge, but they emphasize an important safety aspect of their vision: each AI unit maintains its own objectives and its independence from the collective.

“This will enable a democracy of AI agents and greatly reduce the risk of AI domination by a few large systems,” said Dr. Soltoggio.

After analyzing recent advances in machine learning, the authors concluded that the future of AI lies in collective intelligence.

Their analysis highlights global research efforts to enable lifelong learning, in which AI agents can extend their knowledge throughout their operational life, and to develop universal protocols and languages that allow AI systems to share knowledge with one another.

This differs from current large-scale AI models such as ChatGPT, which have limited lifelong learning and knowledge sharing capabilities.

Such models are unable to continue learning because they acquire most of their knowledge during energy-intensive training sessions.

“Recent research trends are extending AI models with the ability to continuously adapt once deployed, allowing their knowledge to be reused by other models, effectively recycling knowledge to increase learning speed and optimize energy demand,” said Dr. Soltoggio.

“We do not believe that the currently dominant large, expensive, non-shareable AI models, which are incapable of lifelong learning, will survive. In the future they will be replaced by sustainable, evolving, shared collectives of AI units.”

“Thanks to communication and sharing, human knowledge has increased step by step over thousands of years.”

“We believe that similar movements are likely to occur in future societies of AI units that achieve democratic and cooperative collectives.”

_____

A. Soltoggio et al. 2024. Collective AI with lifelong learning and sharing at the edge. Nat. Mach. Intell. 6, 251-264; doi: 10.1038/s42256-024-00800-2

Source: www.sci.news

AI’s insatiable appetite for data is only rivaled by its relentless demand for water and energy.

One of the most harmful myths about digital technology is that it is somehow weightless or immaterial. Remember the early talk about “paperless” offices and “frictionless” transactions? And of course our personal electronic devices draw trivial amounts of electricity compared with a washing machine or dishwasher.

But even if you believe this comforting story, it is unlikely to survive an encounter with Kate Crawford’s seminal book Atlas of AI, or with Anatomy of an AI System, the remarkable graphic she created with Vladan Joler. And it certainly won’t survive a visit to a data center: a giant metal shed housing tens or even hundreds of thousands of servers, consuming huge amounts of electricity and requiring huge amounts of water for its cooling systems.

On the energy side, consider Ireland, a small country with a huge number of data centers. According to the Central Statistics Office, in 2022 these sheds consumed more electricity (18% of the total) than all of the country’s rural homes, and as much as all of its urban dwellings. As for water, a 2021 Imperial College London study estimated that one medium-sized data center used as much water as three average-sized hospitals. All of which is a useful reminder that, while these industrial warehouses sit behind the metaphor of “cloud computing”, there is nothing fluffy or vaporous about them. And if you’re tempted to see one for yourself, forget it: getting into Fort Knox would be easier.

There are currently between 9,000 and 11,000 such data centers around the world. Many of them are old-style server farms: racks of thousands or millions of cheap PCs storing the photos, documents, videos, and recordings that our smartphone-driven world generates in such casual abundance. They are starting to look a little outdated.

what i was reading

shabby philanthropist
Read Deborah Doane’s sharp review, for Alliance magazine, of Tim Schwab’s critical book, The Bill Gates Problem.

final write
Veteran commentator Jeff Jarvis thinks about giving up “on old journalism and its legacy industry” in a BuzzMachine blog post.

slim picking
In his blog No Mercy/No Malice, Scott Galloway suggests that AI and weight loss drugs have a lot in common.

Source: www.theguardian.com

Rampant Misinformation: Preparing for AI’s Influence on Elections in the US

AI elections are here.

This year, artificial intelligence-generated robocalls targeted New Hampshire voters during the January primary, posing as President Joe Biden and instructing them to stay home. The incident may be among the first attempts to use AI to interfere with a US election. The “deepfake” call was traced to two Texas-based companies, Life Corporation and Apple Telecom.


The impact of deepfake calls on voter turnout remains uncertain, but according to Lisa Gilbert, executive vice president of Public Citizen, a group advocating for government oversight, the potential consequences are significant. Regulating the use of AI in politics is crucial.

Events mirroring what might occur in the US are unfolding around the globe. In Slovakia, fabricated audio recordings may have influenced an election, serving as a troubling prelude to potential US election interference in 2024, as reported by CNN. AI developments in Indonesia and India have also raised concerns. Without robust regulations, the US is ill-prepared for the evolving landscape of AI technology and its implications for elections.

Despite efforts to address AI misuse in political campaigns, US regulations are struggling to keep pace with AI advancements. The House of Representatives recently formed a task force to explore regulatory options, but partisan gridlock and regulatory delays cast uncertainty on the efficacy of measures that will be in place for this year’s election.

Without safeguards, the influence of AI on elections hinges on voters’ ability to discern real from fabricated content. AI-powered disinformation campaigns can sow confusion and undermine electoral integrity, posing a threat to democracy.

Manipulating audio content with AI raises concerns due to its potential to mislead with minimal detection capabilities, unlike deepfake videos. AI-generated voices can mimic those known to the recipient, fostering a false sense of familiarity and trust, which may have significant implications.

Source: www.theguardian.com

AI’s potential for improving software development comes with hard truths

As you may have noticed, we’re in the midst of a craze about something called generative AI. Ordinary people and economists alike are riding a wave of irrational enthusiasm about its potential for change. It’s the newest new thing.

Two antidotes are recommended for those suffering from the fever. The first is the hype cycle chart produced by the consultancy Gartner, which suggests the technology is currently at the “peak of inflated expectations”, shortly before it plummets into the “trough of disillusionment”. The second is Hofstadter’s law, which captures the difficulty of estimating how long hard tasks will take: “It always takes longer than you expect, even when you take into account Hofstadter’s law.” Just because a powerful industry and its media boosters are losing their marbles over something doesn’t mean it will wash over society like a tsunami. Reality moves at a much slower pace.

In its Christmas issue, the Economist published an instructive article titled “A short history of tractors in English” (itself a sly homage to Marina Lewycka’s hilarious 2005 novel A Short History of Tractors in Ukrainian). The article sets out to explain “what tractors and horses can tell us about generative AI”. The lesson is that although tractors have a long history, they took a long time to transform agriculture, for three reasons: early versions were not as useful as their backers thought; adopting them required changes in the labor market; and farms had to reorganize themselves to make use of them.

So history suggests that whatever transformations AI hypemongers predict, they will materialize more slowly than expected.

There may, however, be one exception to this rule: computer programming, the business of creating software. Ever since digital computers were invented, humans have had to tell the machines what they want them to do. Because machines could not understand English, machine code and programming languages such as Fortran, Algol, Pascal, C, C++, Haskell, and Python evolved over the generations. If you wanted to converse with a machine, you had to learn to speak Fortran, C++ or whatever, a tedious process for many people. And as the title the great Donald Knuth gave to his seminal multi-volume guide, The Art of Computer Programming, suggests, programming became something of an esoteric craft. As the world went digital, the craft was industrialized and rebranded as “software engineering” to downplay its artisanal origins, but mastering it remained an esoteric and valuable skill.

Then along came ChatGPT, and with it the striking discovery that such systems could produce not only apparently coherent prose but also software. Even more remarkable, when you describe a task in a plain-English prompt, the machine writes the Python code needed to accomplish it. Often the code is imperfect, but it can be debugged through further interaction with the machine. And suddenly a whole new prospect opened up: non-programmers can tell a computer what to do without first learning to speak its language.
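
Here is a minimal sketch of what that interaction looks like in practice. It assumes the openai Python package and an API key; the model name is illustrative, and any generated code should of course be reviewed before it is run.

```python
# Minimal sketch of "programming in plain English": send a task description to a
# chat model and get Python back. Assumes the `openai` package and an API key;
# the model name is illustrative, and generated code should be reviewed before use.
from openai import OpenAI

client = OpenAI()

task = "Write a Python function that returns the 10 most common words in a text file."
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "Reply with Python code only."},
        {"role": "user", "content": task},
    ],
)
generated_code = resp.choices[0].message.content
print(generated_code)

# If the code fails, paste the traceback back into the conversation and ask for a
# fix; that is the "debugging by further interaction" described above.
```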

In the New Yorker, the programmer James Somers recently published an elegiac essay on the implications of this development. “A range of knowledge and skills that previously took a lifetime to acquire are being swallowed up all at once,” he wrote. “For me, coding has always felt like an endlessly deep and rich field. Now I find myself wanting to write its memorial. I have been thinking about Lee Sedol. Sedol was one of the world’s best Go players and a national hero in South Korea, but he is now best known for losing to a computer program called AlphaGo in 2016.”

That seems a little odd to me. The evidence we have suggests that programmers are taking to AI assistance like ducks to water. One recent survey, for example, found that 70% of software developers are using or planning to use AI tools in their work this year, and that 77% of them have a “favorable or very favorable” view of these tools. They see them as a way to increase their productivity as programmers, speed up their learning, and even “improve accuracy” when writing code.

That doesn’t sound like defeatism to me, but like the attitude of professionals who see the technology as “power steering for the mind”, as the saying goes. At any rate, they don’t sound like the horses in the Economist’s story. And just as tractors ultimately transformed agriculture, this technology will ultimately transform the way software is developed. When that happens, software engineers will need to be more like engineers than craftsmen. And about time too (says this engineer turned columnist).

what i was reading

Smart move?
A great post from Gary Marcus on his Substack blog about AI companies lobbying to be exempted from liability for copyright infringement.

control mechanism
A very thoughtful article by Diana Enríquez on the Tech Policy Press website about what it means to be “managed” by algorithms.

Get out of your head
A nice post on Margaret Atwood’s Substack about films dealing with the French Revolution, including Ridley Scott’s Napoleon.

Source: www.theguardian.com