Revolutionary AI: The Ultimate Solution for Managing Your Phone Calls, Bills, and Life Tasks


The Evolution of Generative AI: Meet OpenClaw

Since the launch of ChatGPT, Generative AI has transformed our digital landscape over the past three years. It has spurred a significant stock market boom, been integrated into our search engines, and become an essential tool for hundreds of millions of users daily.

Despite its benefits, many still hesitate to use AI tools. But why? While asking AI for text, audio, images, and videos can save time, crafting the right prompts often becomes a burdensome task. Users still grapple with everyday chores like answering emails, booking appointments, and paying bills.

This is where AI’s true power could lie: handling the mundane tasks. The promise of “agentic AI” is that people want an efficient, always-on assistant to take over time-consuming chores. The latest advancement in this field is OpenClaw.

What is OpenClaw?

OpenClaw, previously known as ClawdBot, is an AI agent poised to fulfill AI’s grand promises. Once granted access to your computer files, social media, and email accounts, it can complete a wide range of tasks. This capability is powered by Claude Code, a coding agent released by the AI company Anthropic.

Developed by software engineer Peter Steinberger and launched in late November 2025, ClawdBot initially gained traction but was rebranded due to concerns from Anthropic. After temporarily adopting the name MoltBot, it is now officially known as OpenClaw. (Mr. Steinberger did not respond to multiple interview requests.)

How Does OpenClaw Work?

OpenClaw operates on your computer or a virtual private server and connects messaging apps like WhatsApp, Telegram, and Discord to coding agents powered by models like Anthropic’s Claude. Users often opt for a dedicated device, such as the Apple Mac Mini, to host OpenClaw for optimal speed, and some retailers have reported the machines selling out as demand rises.

Although it can run on older laptops, OpenClaw needs to stay operational 24/7 to execute your specified commands.

Commands are sent through your preferred messaging app, enabling a simple conversational interface. When you message OpenClaw, the AI agent interprets your prompt, then generates and executes commands on your machine. This can include tasks such as finding files, running scripts, editing documents, and automating browser activities. The results are succinctly summarized and sent back to you, creating an efficient communication loop akin to collaborating with a colleague.
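
To make that loop concrete, here is a minimal, hypothetical sketch in Python of how a message-to-command agent of this kind could be wired together. This is not OpenClaw’s actual code: the `query_model` helper is a stand-in for whatever model API the agent calls, and every name here is an illustrative assumption rather than the real implementation.

```python
import subprocess

def query_model(prompt: str) -> str:
    """Placeholder for a call to a coding model (hypothetical helper).

    A real agent would send the prompt to a model API; here we return a
    harmless canned response so the sketch runs end to end.
    """
    if prompt.startswith("Translate"):
        return "echo hello from the agent"        # pretend the model proposed this command
    return "Summary: " + prompt.splitlines()[-1]  # pretend the model summarised the output

def handle_message(user_message: str) -> str:
    """Turn one incoming chat message into a command, run it, and summarise the result."""
    # 1. Ask the model to turn the request into a single shell command.
    command = query_model(f"Translate this request into one shell command: {user_message}")

    # 2. Execute the command on the local machine, capturing its output.
    result = subprocess.run(command, shell=True, capture_output=True, text=True, timeout=60)

    # 3. Ask the model to summarise the output for the reply message.
    return query_model(f"Summarise this command output for the user:\n{result.stdout or result.stderr}")

if __name__ == "__main__":
    print(handle_message("Say hello from my machine"))
```

Note that running model-generated commands directly on your machine, as this sketch does with `shell=True`, is precisely the kind of broad access that underlies the security concerns discussed below.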

How Can OpenClaw Help You?

OpenClaw serves as an all-in-one assistant for both personal and professional tasks. Users typically start by decluttering files on their devices before applying it to more complex responsibilities. Some users report using it to manage busy WhatsApp groups, summarizing the necessary information and filtering out the irrelevant.

Other practical applications include:

  • Comparing supplier prices to minimize household spending.
  • Automating web browser tasks for seamless transactions.
  • Facilitating restaurant reservations by calling venues directly.
  • Preparing initial drafts for presentations while you sleep.

What Are the Risks?

While OpenClaw’s capabilities shine brightest when granted extensive access, this convenience raises significant risks. Experts warn that users may overlook potential vulnerabilities. For instance, OpenClaw could be exposed to prompt injection attacks or hacking if hosted on insufficiently secured virtual servers. This means sensitive data could be compromised.

Alan Woodward, a cybersecurity professor at the University of Surrey, cautions, “I can’t believe people would allow unrestricted access to sensitive software, including email and calendars.”

White hat hackers have already identified several security flaws in OpenClaw, raising concerns about the hands-off approach many users prefer, which simultaneously invites substantial risk.

Is This the Future of AI?

OpenClaw has recently launched its own social network, Moltbook, enabling its AI agents to interact and share insights. While humans can observe, they cannot engage directly in discussions, prompting fears about progression toward artificial general intelligence (AGI), potentially matching or exceeding human capabilities.

As we navigate this new realm, it’s vital to consider the implications of relinquishing extensive data access to AI agents. We may be standing on the brink of a new AI era—an agent capable of managing your life efficiently, if you’re prepared to grant it free access and relinquish control. It’s a thrilling yet daunting prospect.



Source: www.sciencefocus.com

Director James Cameron Calls AI Actors ‘Terrifying’

Director James Cameron referred to AI actors as “terrifying” and remarked that what generative AI technology generates is merely “average.”

Cameron made the comments to CBS on Sunday morning. With the third Avatar film, Fire and Ash, approaching its release, he discussed the technology used in the film. He praised its motion-capture performances, calling them “a celebration of the actor-director moment”, but voiced his concerns about artificial intelligence: “Go to the other side of the spectrum [from motion capture] and there is generative AI that allows for character creation, that can compose actors and build performances from scratch using text prompts. That’s unsettling to me. It’s the antithesis of what we are doing.”

He added, “I don’t want a computer to perform tasks that I take pride in doing with actors. I have no desire to replace actors. I enjoy collaborating with them.”

Cameron, who is associated with UK-based company Stability AI, mentioned that the creative advantages of artificial intelligence are constrained. “Generative AI cannot create something new that hasn’t been seen before. The model can be trained on all previous works, but it lacks the ability to innovate beyond existing creations. Essentially, it yields a human art form born from a blend of experiences, which results in something average. What you miss is the distinctive lived experiences of individual playwrights and the unique traits of specific actors.”

“It also compels us to maintain high standards and to continue to think creatively. The act of witnessing an artist’s performance in real time becomes sacred.”

Source: www.theguardian.com

Ofcom Calls on Social Media Platforms to Combat Fraud and Curb Online ‘Pile-Ons’

New guidelines have urged social media platforms to curtail internet “pile-ons” to better safeguard women and girls online.

Ofcom, Britain’s communications regulator, published guidance on Tuesday aimed at tackling misogynistic abuse, coercive control, and the non-consensual sharing of intimate images, with a focus on minimizing online harassment of women.

The measures suggest that tech companies limit the number of replies to posts on platforms like X, a strategy Ofcom believes will reduce incidents where individual users are inundated with abusive responses.


Additional measures proposed by Ofcom include utilizing databases of images to prevent the non-consensual sharing of intimate photos—often referred to as ‘revenge porn’.

The regulator advocates for “hash matching” technology to help platforms remove flagged images. This system converts user-reported images or videos into “hashes”, or digital identifiers, and cross-references them against a database of known illegal content, enabling the identification and removal of harmful images.
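
As a rough illustration of the idea, the sketch below shows hash matching in Python using an exact cryptographic hash. This is a simplification: production systems such as PhotoDNA or PDQ use perceptual hashes that still match after resizing or re-encoding, and the database entry here is a made-up placeholder rather than real data.

```python
import hashlib

def image_hash(image_bytes: bytes) -> str:
    """Reduce an image file to a fixed-length digital identifier (a "hash")."""
    return hashlib.sha256(image_bytes).hexdigest()

# Hashes of images already confirmed as illegal or non-consensual.
# In practice this database is maintained by a trusted body;
# the entry below is an illustrative placeholder only.
known_abuse_hashes = {
    image_hash(b"previously reported example image"),
}

def should_remove(uploaded_image: bytes) -> bool:
    """Return True if an uploaded image matches the database of known content."""
    return image_hash(uploaded_image) in known_abuse_hashes

# A re-upload of the reported image matches; a new image does not.
print(should_remove(b"previously reported example image"))  # True
print(should_remove(b"an unrelated holiday photo"))         # False
```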

These recommendations were put forth under the Online Safety Act (OSA), a significant law designed to shield children and adults from harmful online content.

While the advice is not obligatory, Ofcom is urging social media companies to follow it, announcing plans to release a report in 2027 assessing individual platforms’ responses to the guidelines.

The regulator indicated that the OSA could be reinforced if the recommendations are not acted upon or prove ineffective.

“If their actions fall short, we will consider formally advising the government on necessary enhancements to online safety laws,” Ofcom stated.

Dame Melanie Dawes, Ofcom’s chief executive, has encountered “shocking” reports of online abuse directed at women and girls.


Melanie Dawes, Ofcom’s chief executive. Photo: Zuma Press Inc/Alamy

“We are sending a definitive message to tech companies to adhere to practical industry guidance that aims to protect women from the genuine online threats they face today,” Dawes stated. “With ongoing support from our campaigners, advocacy groups, and expert partners, we will hold companies accountable and establish new benchmarks for online safety for women and girls in the UK.”

Ofcom’s other recommendations suggest implementing prompts to reconsider posting abusive content, instituting “time-outs” for frequent offenders, and preventing misogynistic users from generating ad revenue related to their posts. It will also allow users to swiftly block or mute several accounts at once.

These recommendations conclude a process that started in February, when Ofcom conducted a consultation that included suggestions for hash matching. However, more than a dozen guidelines, like establishing “rate limits” on posts, are brand new.

Internet Matters, a nonprofit organization dedicated to children’s online safety, argued that governments should make the guidance mandatory, cautioning that many tech companies might overlook it. Ofcom is considering whether to enforce hash matching recommendations.

Rachel Huggins, co-chief executive of Internet Matters, remarked: “We know many companies will disregard this guidance simply because it is not legally binding, leading to continued unacceptable levels of online harm faced by women and girls today.”

Source: www.theguardian.com

Cyber Threats Can Be Conquered: GCHQ Chief Calls on Businesses to Strengthen Cybersecurity Efforts

The chief of GCHQ has urged businesses to take additional measures to mitigate the potential consequences of a cyber-attack, such as keeping a physical paper copy of their crisis plan for use in the event that an attack disables their entire computer infrastructure.

“What is your contingency plan? Because attacks will inevitably succeed,” said Anne Keast-Butler, who has headed GCHQ, the UK government’s cyber and signals intelligence agency, since 2023.

“Have you genuinely tested the outcome if that were to occur in your organization?” Keast-Butler said on Wednesday at a London conference organized by cybersecurity firm Recorded Future. “Is your plan… documented on paper somewhere in case all of your systems go offline? How do you communicate with each other if you are entirely reliant on those systems and they fail?”

Recently, the National Cyber Security Centre, part of GCHQ, reported a 50% rise in “very serious” cyber-attacks over the last year. Security and intelligence agencies are now confronting new attacks several times a week, according to the data.

Keast-Butler said that governments and businesses must collaborate to address future threats and enhance defense mechanisms, as contemporary technology and artificial intelligence make risks more widespread and lower the “entry-level capabilities” that malicious actors need to inflict harm. She highlighted GCHQ’s work in “blocking millions of potential attacks” by partnering with internet service providers to take down harmful websites at their origin, but noted that larger companies need to ramp up their self-protection measures.

On Tuesday, a Cyber Monitoring Center (CMC) report revealed that the Jaguar Land Rover hack could cost the UK economy around £1.9 billion, marking it as the most costly cyberattack in British history.

After the attacks in August, JLR was forced to suspend all factory and office operations and may not achieve normal production levels until January.

Keast-Butler pointed out that “[there are] far more attacks that have been prevented than those we highlight,” adding that the increased focus on the JLR hack and several other significant cyber incidents serves as a crucial reminder of the need for robust cybersecurity protocols.

She regularly speaks with CEOs of major companies and has told them they should include people on their boards with expertise in cybersecurity. “Often, due to the board’s composition, nobody knows the pertinent questions to ask, so there is interest but the right inquiries go unasked,” she noted.

Earlier this year, the Co-op Group experienced a cyberattack that cost it up to £120 million in profits and compromised the personal data of many of its members. Shirine Khoury-Haq, the group’s CEO, highlighted in a public letter the critical role of cybersecurity training in formulating strategies to respond to attacks.

“The intensity, urgency, and unpredictability of a real-time attack are unparalleled to anything that can be rehearsed. Nonetheless, such training is invaluable; it cultivates muscle memory, sharpens instincts, and reveals system vulnerabilities.”

Keast-Butler mentioned a “safe space” that has been created to encourage companies to share information about attacks with government entities, allowing them to do so without risking the disclosure of sensitive commercial data to competitors.

“I believe sometimes individuals struggle to come forward due to personal issues or challenges within the company, which hinders our ability to assist in making long-term strategic improvements to their systems,” she remarked.

Source: www.theguardian.com

More Than 20 Bird Species Can Comprehend Each Other’s Alarm Calls

A splendid fairy-wren (left) attempts to evade a cuckoo

David Ongley

More than 20 bird species around the world use similar “whining” alarm calls to alert others to the presence of cuckoos. These calls appear to be understood across species, shedding light on how such signals evolved.

Cuckoos are among roughly 100 species recognized as brood parasites, laying their eggs in the nests of other birds and relying on them to raise their young as if they were their own.

Will Feeney and his team at the Doñana Biological Station in Spain identified 21 species, which last shared a common ancestor around 53 million years ago, that produce structurally similar “whining” calls when they detect a brood parasite.

Examples include the splendid fairy-wren (Malurus cyaneus) in Australia, the tawny-flanked prinia (Prinia subflava) in Africa, Hume’s leaf warbler (Phylloscopus humei) in Asia, and the greenish warbler (Phylloscopus trochiloides) in Europe.

“It seems these diverse bird species worldwide have converged on the same vocalization to alert against their respective brood parasites,” observes Feeney.

Researchers have observed that species producing this alarm call tend to inhabit areas rich in brood parasites, which exploit various host species. When a potential host hears the whining call, it often responds with aggressive mobbing behavior.

“Brood parasites present a unique threat. They pose significant risks to offspring while largely being non-threatening to adult birds,” says Feeney. “Our findings suggest that [the call] plays a crucial role in promptly alerting fellow birds and potentially securing their protection.”

“In the case of the splendid fairy-wren, they are cooperative breeders, which likely means that the mobbing call is intended to attract additional individuals for support,” explains Rose Thorogood from the University of Helsinki, Finland.

To deepen their investigation, Feeney and colleagues recorded calls from brood-parasite hosts across continents and played them to potential host birds in Australia and China. They discovered that hearing foreign alarm calls prompted just as quick a response as calls from their own species.

“This indicates that the function of this vocalization is geared towards fostering interspecies communication rather than merely internal signaling,” highlights Feeney.

Thorogood cautions: “The ancestral alarm call these species share may not have solely targeted brood parasites. Instead, it likely has specific acoustic properties that are effective in repelling these threats.”

The research team also conducted similar experiments with yellow warblers (Setophaga petechia) in North America, which are parasitized by brown-headed cowbirds (Molothrus ater) yet do not produce the distinctive whining alarm call. When exposed to the splendid fairy-wren’s alarm, the warblers responded promptly by returning to their nests and showed distress through various calls as well as mobbing.

Feeney suggests that numerous bird species respond to innate components in alarm calls, while local birds in areas where brood parasites are prevalent adapt their calls and responses to convey information about local dangers.

“These birds have adapted distress calls for new contexts related to offspring threats,” he explains. “This provides insights into why birds across the globe utilize similar sounds.”

Charles Darwin proposed in his 1871 work, The Descent of Man, that spoken language’s origins could be traced back to imitation and adaptation of instinctual sounds made by humans and other animals. These instances may not only involve cries of fear but can also reflect pain. “A bird adapting these instinctual calls for different purposes might represent a foundational step towards language,” concludes Feeney.

Rob Magrath of the Australian National University notes, “Calls often convey specific meanings, sometimes referring to external objects or incidents, rather than merely indicating internal states like fear or traits such as gender or species.”

“This referential quality suggests that such vocalizations bear resemblance to human language, frequently referencing the external world,” he adds. “Thus, animal communication and human language may exist on a continuum rather than being distinct attributes of humans.”


Source: www.newscientist.com

Rayner Says Farage Has ‘Failed a Generation of Young Women’ Over Proposal to Repeal Online Safety Law

Angela Rayner has stated that Nigel Farage has “failed a generation of young women” with his plan to abolish online safety laws, claiming it could lead to an increase in “revenge porn.”

The Deputy Prime Minister’s remarks are the latest in a series of criticisms directed at Farage by the government, as Labour launches a barrage of attack ads targeting the Reform UK leader, including one featuring Farage alongside influencer Andrew Tate.

During a press conference last month, Reform UK leaders announced plans to scrap the legislation that pushes social media companies to restrict misleading and harmful content, insisting that the act amounts to censorship and risks turning the UK into a “borderline dystopian state.”

In response, Science and Technology Secretary Peter Kyle accused Farage of siding with child abusers like Jimmy Savile, prompting a strong backlash from Reform UK.


In comments made to the Sunday Telegraph, Rayner underscored the risks associated with abolishing the act, which addresses what is officially known as intimate image abuse.

“We recognize that the abuse of intimate images is an atrocity, fostering a misogynistic culture on social media, which also spills over into real life,” Rayner articulated in the article.

“Nigel Farage poses a threat to a generation of young women with his dangerous and reckless plans to eliminate online safety laws. The absence of a viable alternative to abolish safety measures and combat the forthcoming flood of abuse reveals a severe neglect of responsibility.”

“It’s time for Farage to explain to British women and girls how he intends to ensure their safety online.”

Labour has rolled out a series of interconnected online ads targeting Farage. An ad launched on Sunday morning linked directly to Rayner’s remarks, asserting, “Nigel Farage wants to make it easier to share revenge porn online,” accompanied by a laughing image of Farage.

According to the Sunday Times, another ad draws attention to Farage’s comments regarding Tate, an influencer facing serious allegations in the UK, including rape and human trafficking, alongside his brother Tristan.

Both the American-British brothers are currently under investigation in Romania and assert their innocence against numerous allegations.

Labour’s ads depict Farage alongside Andrew Tate with the caption “Nigel Farage calls Andrew Tate an ‘important voice’ for men,” referencing remarks made during an interview on last year’s Strike IT Big podcast.

Lila Cunningham, a former magistrate now aligned with Reform UK, wrote an article for the Telegraph on Saturday labelling the online safety legislation a “censorship law” and pointing out that existing laws already address “revenge porn.”

“This law serves as a guise for censorship, providing a pretext to empower unchecked regulators and to silence dissenting views,” Cunningham claimed.

Cunningham also criticized the government’s focus on accommodating asylum seekers in hotels, arguing that it puts women at risk and diverts attention from more pressing concerns.

Source: www.theguardian.com

High Court Calls on UK Lawyers to Halt AI Misuse After Noting Fabricated Case Law

The High Court has instructed senior lawyers to take urgent action to curb the misuse of artificial intelligence, after lawyers presented the court with citations of entirely fictitious cases or fabricated references.

Lawyers are increasingly using AI systems to help formulate legal arguments, but two cases this year were seriously undermined by citations of fictitious legal precedents believed to have been generated by AI.

In a damages lawsuit amounting to £89 million against Qatar National Bank, the claimant referenced 45 legal actions. The claimant acknowledged the use of publicly accessible AI tools, and his legal team admitted to citing non-existent authorities.

When the Haringey Law Centre challenged the London Borough of Haringey over its alleged failure to provide temporary accommodation for its clients, its lawyer cited fictitious case law multiple times. Concerns were raised when the counsel representing the council had to repeatedly explain why they could not locate the supposed authorities.

That situation led to costs proceedings, with the court ruling that the law centre and its lawyers, including a student lawyer, were negligent. The barrister in that case denied deliberately using AI, but said she may have inadvertently done so while preparing for another case in which she cited the same fictitious authority, and that she may have taken AI-generated summaries at face value without realizing what they were.

In a regulatory ruling, Dame Victoria Sharp, president of the King’s Bench Division, warned: “If artificial intelligence is misused, it could severely undermine public trust in the judicial system.” Lawyers who misuse AI could face sanctions, she said, ranging from contempt of court proceedings to referral to the police.

She urged the Bar Council and the Law Society to treat the issue as an immediate priority and instructed the heads of barristers’ chambers and solicitors’ firms to ensure all lawyers understand their professional and ethical responsibilities when using AI.

“While tools like these can produce apparently consistent and plausible responses, those responses may be completely incorrect,” she stated. “They might assert confidently false information, reference non-existent sources, or misquote real documents.”

Ian Jeffery, chief executive of the Law Society of England and Wales, said the ruling “highlights the dangers of employing AI in legal matters.”

“AI tools are increasingly utilized to assist in delivering legal services,” he continued. “However, the significant risk of inaccurate outputs produced by generative AI necessitates that lawyers diligently verify and ensure the accuracy of their work.”


These cases are not the first to suffer due to AI-generated inaccuracies. At the UK tax court in 2023, an appellant allegedly assisted by an “acquaintance at a law office” provided nine fictitious historical court decisions as precedents. She acknowledged that she might have used ChatGPT but claimed there were other cases supporting her position.

Earlier this year, in a Danish case worth 5.8 million euros (£4.9 million), the appellants narrowly avoided contempt proceedings after relying on a fabricated ruling that the judge identified. And a 2023 case in the US District Court for the Southern District of New York was thrown into turmoil when lawyers were found to have cited seven apparently fictitious cases. Asked to produce the authorities, one of the lawyers turned back to ChatGPT, which summarized the very cases it had invented; the judge fined two lawyers and their firm $5,000.

Source: www.theguardian.com

State Calls Out Trump Administration for Freezing EV Charging Funding

A group of states spearheaded by Washington, Colorado, and California has filed a lawsuit against the Trump administration, claiming it is unlawfully withholding billions of dollars designated by Congress for electric vehicle charging stations nationwide.

The Bipartisan Infrastructure Act of 2021 allocated $5 billion to states for the construction of charging stations across the country. Research firm Atlas Public Policy reports that 71 stations have been established thus far, with more on the way.

Litigation filed in the U.S. District Court for the Western District of Washington in Seattle states that the federal agency has unlawfully frozen these funds, halted the approval of new stations, deprived states of critical resources, and harmed the developing electric vehicle industry.

The White House’s budget proposal announced last week called for cancelling funds for what it described as the “failed” electric vehicle charger grant program. President Trump had already targeted the program in an executive order in January, and a Transportation Department memo echoed similar sentiments the following month. However, the lawsuit contends that congressional approval is necessary to revoke the funding entirely.

“The president is making unconstitutional efforts to withhold funds allocated to programs that Congress supported,” stated California Attorney General Rob Bonta. “This time, he’s unlawfully diverting billions meant for electric vehicle charging infrastructure, lining the pockets of his oil industry allies.”

California has approximately 2 million “zero emission vehicles” on its roads, accounting for one-third of the national total, as part of an ongoing initiative in the car-centric state to reduce air pollution. According to Bonta’s office, California was relying on $384 million from the federal program for charging stations.

The state has heavily invested in its charging infrastructure from its own budget and revenue from carbon credits sold to polluters, leading to more public and shared private chargers than gas station pumps. However, challenges remain when crossing state lines for charging.

The National Electric Vehicle Infrastructure, or NEVI Program, initiated by President Joseph R. Biden Jr., aims to establish charging networks across urban and rural areas, including California, to combat climate change.

California officials remarked that one of the main beneficiaries of abandoning the national EV program would be China, which currently leads in EV manufacturing and global sales. Among automakers, the most significant detriment would likely fall on Tesla, whose chief executive, Elon Musk, is a prominent Trump supporter and expects the company to lead the EV market, despite a decline in sales during the first quarter of 2025.

“When America retreats, China prevails,” said California Governor Gavin Newsom, who criticized the withholding of federal funds as “another Trump gift to China.”

“Instead of promoting Teslas on the White House lawn, President Trump should prioritize aiding Elon and the nation by adhering to the law and unlocking this bipartisan funding,” Newsom stated.

The lawsuit includes attorneys general from Arizona, Delaware, Hawaii, Illinois, Maryland, Minnesota, New Jersey, New Mexico, New York, Oregon, Rhode Island, Wisconsin, Vermont, and the District of Columbia.

A Transportation Department memo in February told state officials that the administration was reviewing the NEVI program and suspending approval of state plans. The lawsuit seeks a declaration that the memo is illegal and demands the administration release the funds.

An NEVI Funding Tracking Website operated by Atlas Public Policy shows that at least $521 million has been allocated, with approximately $44 million already spent. Data indicates that many operational stations are concentrated in Ohio and Pennsylvania.

Loren McDonald, chief analyst at EV analytics firm Paren, commented that the federal government plays a minor role in the EV charging sector, with most stations built by private companies. He noted that the lengthy process of building the infrastructure and selecting contractors has also contributed to delays in stations coming online.

That said, the plaintiffs asserted that the president’s orders have been detrimental.

Colorado Attorney General Phil Weiser expressed that his state stands to lose tens of millions in funding after demonstrating significant advancements in establishing a robust foundation for electric vehicle adoption. He mentioned that federal support was crucial to bridging gaps in funding for rural Colorado and underserved communities.

“Congress showed foresight in approving funds for this essential infrastructure,” Weiser stated. “These funds need to be restored immediately.”

In Washington, the president’s directives halt 40 proposed projects and jeopardize $55 million in approved Congressional funding for electric vehicle charging infrastructures.

The White House and the Transportation Department have yet to respond to requests for comment.

Source: www.nytimes.com

FCC Chairman calls for investigations into Disney’s diversity, equity, and inclusion practices

The Federal Communications Commission chair said Friday that he has launched an investigation into Disney’s diversity, equity and inclusion program in his latest attempt under the Trump administration to stop such efforts.

In a letter to Disney’s CEO, Robert Iger, chairman Brendan Carr said the company’s programs to increase workforce diversity and promote race-based affinity groups appear to violate equal employment opportunity regulations.

“I want to ensure that Disney ends any discriminatory initiatives in substance, not just in name,” Carr said in the letter, sent Thursday. “I also want to determine whether Disney’s actions – whether ongoing or recently ended – have complied at all times with applicable FCC regulations.”

A Disney spokesperson said the company is reviewing the FCC letter: “We look forward to engaging with the commission to answer its questions.”

Carr, a veteran Republican regulator, began his tenure as FCC chair in January by launching an aggressive campaign of media scrutiny, seeking to root out what the president alleges is left-leaning bias in coverage and policy.

Last month he began a similar diversity and inclusion investigation into Comcast, the parent company of NBCUniversal. Carr has also said the agency’s merger reviews will include scrutiny of companies’ DEI programs.

The investigation follows the executive order President Trump signed on his first day in office banning “illegal and immoral” DEI programs in the federal government. A day later, Carr announced that he would end the promotion of diversity and equity in the FCC’s strategic planning, budget and economic reports.

It is unclear whether the FCC, which licenses broadcast television and radio stations and serves as a watchdog for cable television, has the power to punish media companies over their diversity initiatives. Carr argues that a broad “public interest” standard allows the agency to scrutinize companies such as Disney, which owns ABC, ESPN and television stations around the country.

FCC experts said Carr’s investigation could be challenged in court.

“It’s about bullying and threats,” said Andrew Schwartzman, a senior adviser to the Benton Institute for Broadband & Society. Carr’s most powerful tool, he said, is his vote on the commission to approve mergers and acquisitions.

Since Trump nominated him as chair, Carr has launched investigations into several news outlets, including PBS and NPR, which he accuses of left-leaning political bias. He has investigated an interview that CBS’s “60 Minutes” conducted with former vice president Kamala Harris, and opened an inquiry into San Francisco radio station KCBS over its reporting of immigration enforcement operations.

Carr has publicly embraced the administration’s promises to cut regulation significantly, go after big technology companies and punish television networks for perceived political bias. Telecommunications lawyers and analysts say he is reshaping the independent agency, expanding its remit and wielding it as a political weapon for the right.

Brooks Barnes contributed reporting from Los Angeles.

Source: www.nytimes.com

Freelancer Emily Watkins on Zoom Calls, Space Hogging, and Cafe Laptop Etiquette

There was a time when smoking indoors, wearing a flamboyant wide tie, and clacking away on a typewriter at the office desk were all socially acceptable. Norms evolve, and that’s often for the best. However, when it comes to laptops in cafes, I urge society to reconsider. Don’t be the nuisance in my cafe – the one place that keeps a solitary freelancer like me, and the rest of the WFH brigade, going.

My kitchen table, where I spend most of my working hours, is adequate. There’s a window nearby. You can make yourself a cup of tea whenever you please. You can move to the couch, listen to your own music, take loud calls, or stand up. But variety is the spice of life, and if home were my only option, I’d truly be disheartened. Yes, I’m aware of coworking spaces, but they are a) filled with unpleasant individuals, and b) not within my budget. Thankfully, the cafe – with its calming ambient noise of distant conversations and keyboard clicks – is as close to an office as I can get for now.

The freedom to work from anywhere is one of the perks of being a writer, but this privilege is being misused by fellow laptop users, risking its revocation. Clogging up tables and spending next to nothing for hours on end, to the evident irritation of cafe owners – buying a single cup of tea and occupying space all day is clearly rude, not to mention bad for business.

It’s undeniable that a sea of laptops alters the ambiance of a place, transforming friendly hangouts to unbearable coworking spots. Consequently, our laptop-user to other patron ratio needs to be managed diligently. After years of observing this trend – even before the pandemic hit, I’ve drafted a code of conduct to maintain harmony within the cafe laptop ecosystem. And it’s essential to adhere to it, as if we continue to disrupt this balance, it might be back to the kitchen table for good.


The initial rule is to limit laptop usage in cafes to four hours and spend around £5 on two items. If you plan to occupy the space all day, you must also order at least one meal. Additionally, no Zoom calls or phone conversations are permitted under any circumstances. The objective of working with a laptop in a cafe is to blend in seamlessly, rather than disrupting the environment with endless productivity tasks. If you need to make a call, stay at home or step outside.

It goes without saying that you should choose the smallest available table. Don’t occupy a larger table when it’s just you and your laptop. Furthermore, unless power outlets are clearly available, don’t hassle the staff to charge your devices. They are there to serve food and drinks, not to make your impromptu office setup easier. And of course, do not play music out loud. It shouldn’t need to be said, but a recent encounter at a coffee shop proved otherwise. If we don’t stick to these basics, our refuge in local cafes is at risk.

In conclusion, be respectful, pay your dues, and don’t take advantage of the privilege of being in a cafe. Essentially: Don’t abuse the system.

Many British people abroad may wish to hide me under a rock or imagine French accents. While I feel ashamed to be grouped with them, there’s no reason why we can’t change the narrative.

Cafes not only provide a conducive work environment but are also a natural habitat for writers. As their history suggests, they have always been a breeding ground for ideas. The vibrant, quietly sociable cafe setting is often what’s needed to spark creativity, while also reminding us of the presence of others (an aspect often missing when working from home). It’s a valuable resource that shouldn’t be taken for granted. If cafes were no longer an option, and the kitchen table or coworking spaces were the only alternatives, I might have to reluctantly resort to seeking traditional employment.

  • Do you have any opinions on the issues raised in this article? If you would like to send a response of up to 300 words by email to consider being published in our Letters section, please click here.

Source: www.theguardian.com

NOAA cancels monthly calls for climate and weather updates

Staff cuts have impacted work at the National Oceanic and Atmospheric Administration (NOAA).

Kristoffer Tripplaar / Alamy

The US National Oceanic and Atmospheric Administration (NOAA) says it will “stop” monthly calls to update reporters on seasonal weather forecasts and global climate conditions.

A NOAA spokesperson says recent cuts, firings and resignations under President Donald Trump have led to staffing shortages that mean the agency is “no longer able to support” the briefings. But they say the monthly reports will continue to be compiled and published by NOAA’s National Centers for Environmental Information.

Another reason the agency may be ending the calls is fear among employees of angering the new administration by talking about climate change, says Tom Di Liberto, a NOAA climate scientist and public affairs specialist who was fired during the widespread cuts in February. “They don’t want to get stuck between telling the truth and then ending up on the wrong side of a political appointee,” he says.

During the monthly calls, NOAA scientists provided updates on a variety of forecasts and measurements produced by the agency. In addition to global land and ocean temperatures, the briefings covered seasonal weather forecasts and drought conditions in the United States. The calls also gave reporters the opportunity to ask questions to help them better understand new information.

In past briefings, researchers openly discussed the role of human-induced climate change in driving record high temperatures. But on last month’s call – the first held under the new administration – NOAA researchers declined to mention climate change when discussing record global temperatures in January, even after New Scientist asked them directly what role it had played.

Di Liberto says the agency has not explicitly directed researchers to avoid mentioning climate change. However, he knows from his continuing contact with staff that there is an atmosphere of fear about saying the wrong thing.

“It’s a fear of being cut, but also a fear that the work they’re doing to help people will be curtailed, or that they’ll be told they can’t say what the science supports,” he says.

Since January, the administration has fired almost 1,000 people from the agency, and hundreds more have resigned. The government reportedly plans to cut more than 1,000 additional employees – around one-tenth of the agency’s workforce.


Source: www.newscientist.com

Navigating Zoom calls in 2025: keep meetings short and small, and use static backgrounds

Whether it’s catching up with colleagues or gathering to set New Year’s resolutions, many of us will be reconnecting via Zoom, Teams, or Google Meet on Monday morning. But while such platforms have revolutionized flexible remote work in recent years, scientists are increasingly realizing that they can have a negative impact on people’s energy levels and self-esteem. So how can you have a healthier relationship with video conferencing in 2025?

Psychologists coined the term “Zoom fatigue” relatively early in the pandemic to describe the physical and psychological exhaustion that can result from using video conferencing platforms such as Zoom for long periods of time. Studies found that people who had longer meetings on these platforms, or who had a more negative attitude toward the meetings, were more likely to feel worn out afterwards.

Further research has found that use of the self-view feature, which displays your own video feed to you during a meeting, is associated with increased fatigue levels. “We also found a gender effect, with women reporting more Zoom fatigue than men,” says Dr. Anna Carolina Queiroz, associate professor of interactive media at the University of Miami in Florida, who has been involved in these studies.

One insight from her research is that people tend to feel more connected to others through frequent, short, small-group video calls rather than long meetings with many participants. This is likely because maintaining nonverbal communication cues, such as eye contact, with many people requires a lot of mental effort.

Those who are more sensitive to these communication cues may be more negatively affected, which could help explain why women – who often feel greater pressure to present a positive image of themselves on video – tend to feel more fatigued, Queiroz said.

She suggests keeping online meetings as short and small as possible and taking breaks between meetings to improve cognitive performance.

Another study suggests that people who spend a lot of time video conferencing may become more conscious of their appearance and more likely to report dissatisfaction with it. Some people become so preoccupied with perceived flaws that they become anxious about attending gatherings and seek cosmetic surgery to change their appearance.

“If you’re worried about imperfections, continued exposure to images of yourself in virtual meetings tends to make those problems worse,” says Dr. George Kroumpouzos, a professor of dermatology at Brown University and a practicing dermatologist. “Zoom dysmorphia is at least as common as body dysmorphia” – a distressing or disabling preoccupation with a perceived or real defect in appearance that affects about 2% of the general population, he says.

Dr. Cemre Turk, a dermatologist and postdoctoral fellow at Massachusetts General Hospital in Boston, says it is important to identify Zoom dysmorphia because it is likely to fuel an increase in body dysmorphia, which can be devastating to people’s work and personal lives. Together with Kroumpouzos, Turk has developed a screening questionnaire that could help identify and treat more such patients.

Even when frequent video conferencing doesn’t motivate people to seek facial surgery or “tweaks,” recent research suggests it can unconsciously shape purchasing decisions in other ways.

Li Huang, an assistant professor of marketing at Hofstra University in New York, and colleagues used a combination of eye tracking and surveys to assess people’s interest in different products after they took part in different types of Zoom video calls and in-person meetings. The researchers found that video calls increased people’s anxiety about being negatively evaluated by others, whether they realized it or not, and increased their interest in self-improvement products in the aftermath of the call.

Although it may sound negative, “this could actually have some positive consequences,” Huang said. “People become more interested in self-improvement products, and this is not limited to appearance-related products such as facial creams – it also includes more general forms of self-improvement, such as signing up for a LinkedIn Learning course or booking a health check-up.”

“Most of the time, we are unaware that these types of virtual interactions are affecting our psychological well-being, and we may end up making impulse purchases online without knowing why. By learning about these findings, people can try to reduce these types of impacts.”

For example, the study found that this effect was reduced if participants were able to turn off their webcams or use ring lights to flatter their appearance during calls.


Switching to “speaker view” instead of “gallery view” and turning off “self view” can also help, and asking participants to write about their strengths and positive characteristics after a call was found to boost self-esteem.

Another factor that may help reduce the negative effects of video calls is the background you select. Dr. Heng Zhang of Nanyang Technological University in Singapore and colleagues assessed how tired people felt after video conferencing and found that animated virtual backgrounds, such as videos of swaying palm trees or waves crashing on a beach, were associated with the highest levels of fatigue, followed by blurred backgrounds. This is probably because the brain is forced to work harder by constantly reacting to new visual information, including the occasional intrusion of unblurred objects, Zhang said.

People looking at static virtual backgrounds felt the least fatigued, especially if the image was nature-based, which another study suggests may have a calming effect.

The study didn’t assess the impact of using real-world backgrounds, but Zhang, who uses still images of trees and mountains for his own video calls, suspects static images may still be better. “If you have your own office, that’s fine, but if you’re in a coffee shop or working outside, there’s a chance that people will be walking behind you or something else will happen that will distract your brain,” says Zhang. “Even if you have your own office, you might be distracted by your personal belongings or worried about what others think of you.”

Huang hopes that, in addition to individuals using insights like these to protect themselves from the negative emotional impact of video conferencing, platforms will also take steps to foster a more positive user experience. For example, instead of offering standard beauty filters, they could allow users to adjust lighting and background blur to improve how they look in subtler ways.

“Increasing autonomy over privacy settings, such as controlling who can see you and when, could also help reduce the pressure on users to always be visible to many people in meetings,” she says.

Platforms could also consider leveraging artificial intelligence to detect signs of emotional distress in people’s voices and facial expressions, offering features such as discreet breaks and mindfulness exercises to help manage emotions, says Huang.

Source: www.theguardian.com

Scientists might have uncovered the answer to the mystery of whale calls

Approximately 50 million years ago, the ancestors of land-based whales transitioned into the oceans, developing various adaptations for their new aquatic life.

They acquired nostrils on the top of their heads for easier breathing at the surface, while their limbs evolved into flippers and fins for swimming. Although the vocalizations of humpback and other baleen whales were well-known, the method by which they produced these sounds remained a mystery until recently.

Studying the sounds of live whales in the vast oceans presented a significant challenge. In a groundbreaking study released in early 2024, scientists were able to examine the voice box of baleen whales by studying the larynxes and carcasses of three stranded whales – a humpback, a sei whale, and a minke whale, which were in relatively good condition.

Whales communicate through low bass sounds.

The larynx of baleen whales is a peculiar organ consisting of a rigid, U-shaped structure of elongated cartilage pressed against a cushion of fat. When air was blown through the larynx, the cushion vibrated, producing low-frequency sounds.

Live whales recycle air through their larynx, enabling them to vocalize without inhaling water or depleting their air supply. Researchers also developed a 3D computer model of the whale’s larynx to demonstrate how muscles control sound production.

This research revealed that the baleen whale’s vocalizations overlapped in frequency with the noise generated by ship propellers.

Due to the structure of whales’ larynx, they lack the ability to adjust their vocal pitch to avoid colliding with underwater ship sounds, making it challenging for them to communicate over long distances in increasingly noisy oceans.


This article addresses the query “How do whales sing in the ocean?” (submitted by Howard Hinchcliffe via email).


Source: www.sciencefocus.com

Elon Musk Says British MPs Should Be Summoned to US Over ‘Threats’ to American Citizens

Elon Musk has stated that British MPs will be summoned to the US to address issues of censorship and intimidation of American citizens, amidst rising tensions between the world’s wealthiest individual and the Labour Party.

Musk, a close associate of Donald Trump, has been asked to give evidence to the House of Commons science and technology select committee next year as part of an inquiry into the role of social media in spreading harmful content following the August riots.




The committee’s chair, Chi Onwurah, seeks to understand how Musk balances freedom of expression with combating disinformation. Photo: Richard Gardner/Rex/Shutterstock

Labour MP Chi Onwurah, chair of the committee, aims to scrutinize Musk’s approach to promoting freedom of speech while also preventing the dissemination of disinformation. She specifically references the hosting of controversial figures on the social media platform X.


In response, Musk has called for Congress members to convene in the US for discussions. He criticizes the UK’s handling of social media posts and accuses the British Prime Minister and a government minister of labeling X as a problematic platform.

Musk further implies discontent with the UK government, likening the situation to a Stalinist regime and criticizing policies such as changes to farm inheritance tax. Despite tensions, some British officials emphasize the importance of collaboration with Musk for technological and commercial progress.

Secretary of State for Science and Technology Peter Kyle appreciates Musk’s contributions as an innovative figure, despite differing views. He advocates for constructive dialogue and identifies common goals.


Musk’s statement that British MPs should face summons to the US referred to unspecified threats against American citizens; the ambiguity has led to speculation among his online followers about what he meant.

Onwurah expresses interest in hearing Musk’s perspective on misinformation and freedom of expression, given his influential role at X. She highlights the importance of gathering evidence for the committee’s investigation.

Musk has embraced the moniker “first buddy” in relation to the president-elect and has a direct interest in AI regulation through his company xAI. His actions and statements continue to garner attention and debate.

Source: www.theguardian.com

Campaigners call on Ofcom to act after US firm alleges Roblox is a ‘pedophile hellscape’

Child safety campaigners have urged the UK’s communications watchdog to enforce new online safety laws following accusations that video game companies have turned their platforms into “hellscapes for adult pedophiles,” and are calling for significant changes in how such platforms are policed.

Last week, Roblox, a popular gaming platform with 80 million daily users, came under fire for its lax security controls. An investment firm in the US criticized Roblox, claiming that its games expose children to grooming, pornography, violent content, and abusive language. The company has denied these claims and stated that safety and civility are fundamental to their operations.

The report highlighted concerning issues such as users seeking to groom avatars, trading in child pornography, accessible sex games, violent content, and abusive behavior on Roblox. Despite these concerns, the company insists that millions of users have safe and positive experiences on the platform, and any safety incidents are taken seriously.

Roblox, known for its user-generated content, allows players to create and play their own games with friends. However, child safety campaigners emphasize the need for stricter enforcement of online safety laws to protect young users from harmful content and interactions on platforms like Roblox.

Platforms like Roblox will need to implement measures to protect children from inappropriate content, prevent grooming, and introduce age verification processes to comply with the upcoming legislation. Ofcom, the regulator responsible for enforcing these laws, is expected to have broad enforcement powers to ensure user safety.

In response, a Roblox spokesperson stated that the company is committed to full compliance with the Online Safety Act, engaging in consultations and assessments to align with Ofcom’s guidelines. They look forward to seeing the final code of practice and ensuring a safe online environment for all users.

Source: www.theguardian.com

Calls for Royal Society to Expel Elon Musk Due to Behavior Concerns

The Royal Society is facing pressure to remove technology mogul Elon Musk from its membership due to concerns about his behavior.

As reported by The Guardian, Musk, who owns the social media platform X, was elected a fellow of the Royal Society, the UK’s national academy of sciences, in 2018. Some view him as a contemporary innovator comparable to Brunel for his contributions to the aerospace and electric vehicle sectors.

Musk, a co-founder of SpaceX and the CEO of Tesla, has been commended for advancing reusable rocket technology and promoting sustainable energy sources.

Nevertheless, concerns have been raised by several Royal Society fellows regarding Musk’s membership status, citing his provocative comments, particularly following recent riots in the UK.

Critics fear that Musk’s statements could tarnish the reputation of his companies. In response to inquiries, Musk’s companies, including X, provided comments.

Musk’s social media posts during the unrest were widely condemned, with Downing Street rebuking his remarks about civil war and false claims about UK authorities.

The concerns around potentially revoking Musk’s membership focus on his ability to promote his beliefs responsibly and not on his personal views.

The Royal Society’s Code of Conduct emphasizes that fellowship entails upholding certain standards of behavior, even in personal communications, to safeguard the organization’s reputation.


The Code stipulates that breaching conduct rules may result in disciplinary measures, such as temporary or permanent suspension. Specific procedures are outlined if misconduct allegations are raised against a Fellow or Foreign Member.

Expelling a fellow from the Royal Society is rare, with no record of such action in over a century. In a previous controversy, the society’s director of education resigned over remarks about teaching creationism in schools.

A Royal Society spokesperson assured that any concerns regarding individual Fellows would be handled confidentially.

Source: www.theguardian.com

Ambulances called to Amazon UK warehouses 1,400 times in five years

Over the past five years, there have been more than 1,400 ambulance dispatches to Amazon warehouses, a figure that has been described as shocking by the GMB trade union. This raises concerns about the safety of Amazon’s UK workplaces.

The Dunfermline and Bristol Amazon centers had the highest numbers of ambulance attendances in the UK, with 161 and 125 respectively during the period.

In Dunfermline, a third of Scottish Ambulance Service call-outs were for chest pain, along with incidents related to convulsions, strokes, and breathing difficulties.

Since 2019, Amazon Mansfield has had 84 ambulance calls, with over 70% of them being for serious incidents such as heart attacks and strokes.

Accidents related to pregnancy, miscarriages, traumatic injuries, and suspected heart attacks have been reported at some Amazon sites, as well as exposure to harmful substances and severe burns.

The data was obtained through freedom of information requests to 12 emergency services covering more than 30 Amazon sites. However, the actual numbers may be higher as complete data was not available for all sites.


GMB staff campaigned for union recognition outside an Amazon warehouse in Coventry. Photo: Fabio De Paola/The Guardian

In Coventry, Amazon workers and GMB union members narrowly lost a crucial union recognition vote amid allegations of intimidation by the company.

Amanda Gearing, a GMB organizer, called for investigations into Amazon’s working practices, citing the shocking figures as evidence of unsafe working conditions.

Martha Dark from Foxglove emphasized the danger of working at Amazon, criticizing the company’s disregard for safety.


Workers at an Amazon fulfillment center in Peterborough ahead of the company's annual Black Friday sales. Photo: Daniel Leal Olivas/AFP/Getty Images

An Amazon spokesperson denied claims of dangerous working conditions, stating that safety is a top priority and ambulances are always called for emergencies.

The spokesperson also rejected suggestions that ambulances had not been called when needed, saying the majority of calls related to pre-existing conditions rather than work-related incidents.

They encouraged individuals to visit Amazon fulfillment centers to see the truth for themselves.

Source: www.theguardian.com

Former Twitter executive calls for Elon Musk's arrest for provoking riots in the UK

A former Twitter executive has suggested that Elon Musk should be subject to “personal sanctions” and the possibility of an “arrest warrant” if he is found to be disrupting public order on his social media platform.

Bruce Daisley, Twitter's former vice president for Europe, the Middle East, and Africa, wrote in the Guardian that it is unfair to let tech billionaires like Musk stoke discord without facing personal consequences.

He urged Prime Minister Keir Starmer to toughen online safety laws and to assess whether the media regulator Ofcom is equipped to handle fast-moving individuals like Musk.

Daisley emphasized that the threat of personal sanctions is more effective against executives than the risk of corporate fines, as it could impact the lavish lifestyles of tech billionaires.

The UK government has urged social media platforms to act responsibly following recent riots, attributing them to false information spread online, including claims about asylum seekers.

Musk’s inflammatory posts, such as predicting civil war in the UK, have garnered criticism from government officials, with some calling his remarks unacceptable.

Daisley, who worked at Twitter from 2012 to 2020, described Musk as someone who behaves like a reckless teenager and suggested that an arrest warrant might make him reconsider his actions.

He emphasized the need for legislation to establish boundaries for acceptable behavior on social media and questioned whether tech billionaires should be allowed to influence society without consequences.

Daisley urged for immediate strengthening of the Online Safety Act 2023 to hold tech executives accountable for their actions and to prioritize democratic governance over the influence of tech billionaires.

He also suggested that views deemed harmful, such as those from individuals like Tommy Robinson, should be removed from platforms under the guidance of regulators like Ofcom.

Daisley concluded that the focus should be on upholding acceptable behavior on social media rather than prioritizing profits, especially when influential tech figures like Musk are involved.

He emphasized the possibility of holding tech billionaires accountable for the content allowed on their platforms and called for stricter measures to prevent abuse of power.

Source: www.theguardian.com

UK think tank calls for system to track misuse and failures in Artificial Intelligence

A new report highlights the importance of establishing a system in the UK to track instances of misuse or failure of artificial intelligence; without such a system, ministers could remain unaware of alarming AI-related incidents.

The Centre for Long Term Resilience (CLTR) suggested that the next government should implement a mechanism to record AI-related incidents in public services and possibly create a centralized hub to compile such incidents nationwide.

CLTR emphasized the need for incident reporting systems, similar to those used by the Air Accidents Investigation Branch (AAIB), if AI technology is to be leveraged effectively.

According to a database compiled by the Organisation for Economic Cooperation and Development (OECD), there have been approximately 10,000 AI “safety incidents” reported by news outlets since 2014. These incidents encompass a wide range of harms, from physical to economic and psychological, as defined by the OECD.

The OECD’s AI Safety Incident Monitor also includes instances such as a deepfake of Labour leader Keir Starmer and incidents involving self-driving cars and a chatbot-influenced assassination plot.

Tommy Shafer-Shane, policy manager at CLTR and author of the report, noted the critical role incident reporting plays in managing risks in safety-critical sectors like aviation and healthcare. However, such reporting is currently lacking in the regulatory framework for AI in the UK.

CLTR urged the UK government to establish an incident reporting regime for AI, similar to those in aviation and healthcare, to capture incidents that may not fall under existing regulatory oversight. Labour has promised to introduce binding regulation for companies developing the most powerful AI models.

The think tank recommended the creation of a government system to report AI incidents in public services, identify gaps in AI incident reporting, and potentially establish a pilot AI incident database.
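To make the idea concrete, here is a purely illustrative sketch, in Python, of what a minimal record in such a pilot AI incident database might contain. The field names and the example entry are hypothetical assumptions for illustration only and are not drawn from the CLTR report or any government schema.

```python
# Illustrative only: a hypothetical minimal record for an AI incident database.
# Field names are assumptions, not taken from the CLTR report or any official schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIIncidentReport:
    incident_id: str          # unique reference assigned by the reporting hub
    reported_on: date         # date the incident was logged
    public_service: str       # department or service where the incident occurred
    system_description: str   # what the AI system does
    harm_type: str            # e.g. physical, economic, psychological
    severity: str             # e.g. "near miss", "minor", "serious"
    summary: str              # free-text description of what went wrong
    mitigations: list[str] = field(default_factory=list)  # remedial actions taken

# Example entry describing a hypothetical incident.
example = AIIncidentReport(
    incident_id="UK-AI-2024-0001",
    reported_on=date(2024, 6, 1),
    public_service="Local authority benefits triage",
    system_description="Machine-learning model used to prioritise claims",
    harm_type="economic",
    severity="serious",
    summary="Model systematically deprioritised claims from one postcode area.",
    mitigations=["model suspended", "manual review of affected claims"],
)
print(example.incident_id, example.harm_type)
```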

In a joint effort with other countries and the EU, the UK pledged to cooperate on AI security and monitor “AI Harm and Safety Incidents.”

CLTR stressed the importance of incident reporting to keep the Department for Science, Innovation and Technology (DSIT) informed about emerging AI-related risks and urged the government to prioritize learning about such harms through established reporting processes.

Source: www.theguardian.com

International Monetary Fund calls for profit and environmental taxes to be considered to offset the effects of AI

The International Monetary Fund (IMF) suggests that governments dealing with economic challenges brought about by artificial intelligence (AI) should look into implementing fiscal policies such as taxes on excessive profits or environmental taxes to offset the carbon emissions linked to AI.

The IMF highlights generative AI, which enables computer systems like ChatGPT to create human-like text, voice, and images from basic prompts, as a technology advancing rapidly and spreading at a swift pace compared to past innovations like the steam engine.

Beyond the impact on jobs, the IMF proposes measures such as a carbon tax that accounts for the environmental effects of running AI servers, arguing that taxing the associated emissions would build environmental costs into the technology's price.


The report, released on Monday, notes that AI servers are highly energy-intensive and are driving up data centers' electricity use; data centers, servers, and networks already account for up to 1.5% of global emissions, according to a recent report.

In addition, the report cautions that the introduction of AI could reduce wages, widen inequality, and allow tech giants to entrench their market dominance and financial gains. It recommends higher taxes on capital income, including corporate taxes and personal taxes on dividends, interest, and capital gains, to address these challenges.

Furthermore, the report stresses the need for governments to prepare for the impact of AI on various job sectors, both white-collar and blue-collar, and suggests measures like extending unemployment insurance, targeted Social Security payments, and tailored education and training to equip workers with necessary skills.

To overhaul the tax system and introduce new taxes reflecting real-time market values, the IMF recommends leveraging AI’s analytical capabilities. While cautioning against universal basic income due to its high cost, the IMF suggests considering it if AI disrupts jobs significantly in the future.

Era Dabla-Norris, deputy director of the IMF's Fiscal Affairs Department and co-author of the report, encourages countries to explore the design and implementation of schemes such as UBI in case AI disruption intensifies.

Source: www.theguardian.com

Research: African elephants use individualized calls similar to nicknames to communicate with each other

A team of scientists from Colorado State University, Save the Elephants, and Elephant Voices used machine learning to show that calls made by African savanna elephants (Loxodonta africana) contain name-like elements identifying the intended recipient. When the authors played back the recorded calls, the elephants responded positively, either by calling back or by approaching the speaker.

Two young elephants greet each other in the Samburu National Reserve in Kenya. Image by George Wittemyer.

“Dolphins and parrots call each other by name, imitating each other's distinctive sounds,” says Dr. Michael Pardo, a postdoctoral researcher at Colorado State University and Save the Elephants.

“In contrast, our data suggest that elephants do not imitate the sounds of their mates when calling, but rather use a method that resembles the way humans communicate names.”

“The ability to learn to produce new sounds is unusual among animals, but it is necessary for identifying individuals by name.”

“Arbitrary communication, expressing ideas through sounds but not imitating them, greatly expands communication abilities and is considered a next-level cognitive skill.”

“If we could only make sounds that resembled what we say, our ability to communicate would be severely limited,” added George Wittemyer, a professor at Colorado State University and chairman of Save the Elephants' science committee.

“The use of arbitrary phonetic labels suggests that elephants may be capable of abstract thought.”

For their study, the researchers used machine learning techniques to analyze 469 rumbles recorded from wild female African elephants and their calves in the Samburu and Buffalo Springs National Reserves and Amboseli National Park, Kenya, between 1986 and 2022.

The machine learning model correctly identified the intended recipient of 27.5% of these calls, a rate the researchers noted was substantially higher than the model achieved when it was fed control audio.
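As a rough illustration of this kind of analysis (not the authors' actual pipeline), the sketch below trains a classifier to predict a call's intended recipient from acoustic features and compares its accuracy with a shuffled-label control as a stand-in for chance performance. The feature values and recipient labels are synthetic placeholders; a real analysis would use acoustic measurements extracted from the recordings.

```python
# Minimal sketch: predict a rumble's intended recipient from acoustic features,
# then compare accuracy against a shuffled-label control. All data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_calls, n_features, n_recipients = 469, 12, 20

# Hypothetical acoustic feature vectors for each rumble and the ID of the
# elephant each call was addressed to.
X = rng.normal(size=(n_calls, n_features))
recipient_ids = rng.integers(0, n_recipients, size=n_calls)
# Inject a weak recipient-specific signature so the classifier has something to find.
X += recipient_ids[:, None] * 0.05

model = RandomForestClassifier(n_estimators=200, random_state=0)

# Cross-validated accuracy at predicting the true recipient from call acoustics.
true_acc = cross_val_score(model, X, recipient_ids, cv=5).mean()

# Control: shuffle the recipient labels and repeat, estimating chance performance.
shuffled = rng.permutation(recipient_ids)
control_acc = cross_val_score(model, X, shuffled, cv=5).mean()

print(f"recipient-labelled accuracy: {true_acc:.3f}")
print(f"shuffled-control accuracy:   {control_acc:.3f}")
```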

The researchers also compared the responses of 17 wild elephants to recordings of calls that were originally directed at them or at other elephants.

The researchers observed that the elephants approached the speaker playing the recordings more quickly and were more likely to respond vocally when they were called to, compared to when other elephants were called to.

This suggests that elephants recognise individual calls addressed to them.

“The discovery that elephants are not simply mimicking the calls of calling individuals is most intriguing,” said Dr. Kurt Fristrup, a researcher at Colorado State University.

“The ability to use arbitrary acoustic labels for other individuals suggests that other kinds of labels or descriptors may exist for elephant calls.”

The new insights revealed by this study into elephant cognition and communication reinforce the need to protect elephants.

Elephants are classified as Endangered; they are threatened by poaching for their ivory and by habitat loss driven by development.

Due to their large size, they require a lot of space and can cause damage to property and pose a danger to people.

“Communicating with pachyderms is still a distant dream, but being able to communicate with them could be a game changer for their conservation,” Prof Wittemyer said.

“Living with elephants is difficult when you are trying to share the land but the elephants eat the crops.

“I want to warn them: 'Don't come here. If you come here, you will be killed.'”

The findings were published in a paper in the journal Nature Ecology & Evolution.

_____

M.A. Pardo et al. African elephants address one another with individually specific name-like calls. Nat Ecol Evol, published online June 10, 2024; doi: 10.1038/s41559-024-02420-w

Source: www.sci.news

EU lawmakers call for Big Tech's profiling-based content feeds to be turned off by default

Another policy tug-of-war may be emerging in the European Union over Big Tech's content recommendation systems, with a group of members of the European Parliament urging the European Commission to rein in profiling-based content feeds, the “personalization” engines that process user data to determine what content to show. The tracking and profiling of users by mainstream platforms to power “personalized” content feeds has long raised concerns about potential harm to individuals and to democratic societies; critics say the technology fuels social media addiction and poses mental health risks to vulnerable people. There are also concerns that it undermines social cohesion by amplifying divisive and polarizing content that can push individuals towards political extremes.

The letter, signed by 17 MEPs from political groups including the S&D, the Left, the Greens, the EPP and Renew Europe, calls for recommender systems on technology platforms to be switched off by default. The idea surfaced during negotiations over the bloc's Digital Services Act (DSA) but was not included in the final regulation because it lacked a democratic majority. Instead, EU lawmakers agreed on transparency measures for recommender systems, along with a requirement that very large online platforms (so-called VLOPs) provide at least one content feed that is not based on profiling. In their letter, however, the lawmakers call for the technology to be off by default across the board. “Interaction-based recommender systems, especially hyper-personalized systems, pose a serious threat to the public and society as a whole, as they prioritize emotional and extreme content and target individuals who are particularly likely to be provoked,” they wrote. “This insidious cycle exposes users to sensational and dangerous content, prolonging their engagement with the platform in order to maximize ad revenue.”

The MEPs point to Amnesty International's experiment on TikTok, which showed that the algorithm exposed users to videos glorifying suicide within just an hour, and to Meta's internal research, which found that 64% of joins to extremist groups were driven by the company's own recommendation tools, exacerbating the spread of extremist ideology. The immediate trigger for the letter is the draft online safety guidance for video-sharing platforms announced earlier this month by the Irish media regulator, Coimisiún na Meán, which will be responsible for overseeing the DSA when the rules become enforceable for covered services next February. Coimisiún na Meán is currently consulting on guidance proposing that video-sharing platforms “take steps to ensure that profiling-based recommendation algorithms are turned off by default.” The guidance was published after violent civil unrest in Dublin that the country's police authorities blamed on far-right “hooligans” stirred up by false information spread on social media and messaging apps. And earlier this week the Irish Council for Civil Liberties (ICCL), which has campaigned on digital rights issues for many years, also called on the European Commission to back the Coimisiún na Meán proposal, publishing its own report arguing that social media algorithms are tearing society apart and calling for personalized feeds to be turned off by default.

In their letter, the MEPs also endorse the Irish media regulator's proposal, suggesting it would “effectively” address the problems with recommender systems, which they say tend to promote “emotional and extremist content” that can undermine civic cohesion. The letter also references a recently adopted European Parliament report on the addictive design of online services and consumer protection, which highlights the harms of recommender systems that profile individuals, especially minors, in order to keep users on the platform for as long as possible, manipulating them through the “artificial amplification of hate, suicide, self-harm, and disinformation.” “We call on the European Commission to follow Ireland's lead and take decisive action, by not only approving this measure under the TRIS [Technical Regulations Information System] procedure, but also by recommending this measure as a mitigation measure to be taken by very large online platforms [VLOPs] under Article 35(1)(c) of the Digital Services Act, to give citizens meaningful control over their data and online environment,” the MEPs wrote, adding: “The protection of our citizens, especially young people, is of paramount importance. We believe the European Commission has an important role to play in ensuring a safe digital environment for everyone, and we look forward to your prompt and decisive action on this issue.”

Under TRIS, EU member states must notify the European Commission of draft technical regulations before they are adopted into national law, so that the EU can carry out a legal review to ensure the proposals are consistent with the bloc's rules, in this case the DSA. The system means that national measures which seek to “gold-plate” EU regulations are unlikely to pass scrutiny. As such, the Irish regulator's proposal to have video platforms' recommender systems switched off by default appears to go further than the text of the relevant legislation and may not survive the TRIS process. That said, no platform has yet gone that far of its own accord, and it is clearly not the kind of step that ad-funded, engagement-driven platforms would choose as their commercial default.

When asked, the European Commission declined to comment publicly on the MEPs' letter (or on the ICCL report). Instead, a spokesperson pointed to the “clear” obligations on VLOPs' recommender systems set out in Article 38 of the DSA, which requires platforms to provide at least one option for each of these systems that is not based on profiling. We were, however, able to discuss the profiling-feed debate with EU officials speaking on background. They agreed that platforms could choose to turn off profiling-based recommender systems by default as part of their DSA systemic risk mitigation compliance, but confirmed that no platform has yet taken an initiative that strays so far from its commercial interests. So far the only examples are non-profiling feeds offered to users as an option, as on TikTok and Instagram, in order to meet the aforementioned Article 38 requirement that users be able to avoid this kind of content personalization; that still requires an active opt-out by the user. Setting feeds to non-profiling by default would clearly be a stronger type of intervention, since it requires no action from the user. The officials confirmed that the Commission, in its capacity as enforcer of the DSA on VLOPs, is looking at recommender systems, including in the formal proceedings opened against X earlier this week. Recommender systems have also been the focus of several of the formal requests for information the Commission has sent to VLOPs, including one to Instagram focused on child safety risks. The officials agreed that the EU could use its enforcement powers to push large platforms to switch off personalized feeds by default, but indicated the Commission would only take such a step if it determined it would be effective in mitigating a specific risk. They noted that there are multiple types of profiling-based content feeds in play, even within a single platform, and emphasized that each must be considered in context.

More generally, the officials appealed for “nuance” in the debate over the risks of recommender systems. They suggested the Commission's approach would be to assess concerns case by case and to favor data-driven policy interventions on VLOPs rather than blanket measures; after all, the designation covers platforms as diverse as video-sharing and social media giants, retail and information services, and (most recently) porn sites. The risk of an enforcement decision being unpicked by legal challenge in the absence of solid evidence to support it is clearly a concern for the Commission, and the officials indicated they want to gather more information before deciding whether to act on recommender systems.

Source: techcrunch.com