Disney says the AI image generator Midjourney was trained on its films, such as ‘The Lion King’
Maximum Film/Alamy
Since the launch of ChatGPT, OpenAI’s generative AI chatbot, three years ago, we have witnessed dramatic shifts across many aspects of our lives. One thing that has not changed, however, is copyright law: we are still working to pre-AI rules.
It’s widely recognized that leading AI firms have developed models by harvesting data from the internet, including copyrighted content, often without securing prior approval. This year, prominent copyright holders have retaliated, filing various lawsuits against AI companies for alleged copyright violations.
The most notable lawsuit was initiated in June by Disney and Universal, claiming that the AI image generation platform Midjourney was trained using their copyrighted materials and enabled users to produce images that “clearly included and replicated Disney and Universal’s iconic characters.”
The proceedings are still underway, with Midjourney’s response in August asserting, “The limited monopoly granted by copyright must yield to fair use,” and arguing that training AI models on copyrighted works is transformative and should therefore be permitted.
Midjourney’s arguments highlight that the copyright debate is more complex than it might seem at first glance. “Many believed copyright would serve as the ultimate barrier against AI, but that’s not entirely true,” remarks Andres Guadamuz at the University of Sussex, UK, who expresses surprise at how little impact copyright has had on the progress of AI enterprises.
This is happening even as some governments weigh in. In October, the Japanese government made an official appeal to OpenAI, urging the company behind the Sora 2 AI video generator to respect the country’s intellectual property, including its manga and beloved video games such as Nintendo’s.
Sora 2 is embroiled in further controversy due to its capability to generate realistic footage of real individuals. OpenAI recently tightened restrictions on representations of Martin Luther King Jr. after family representatives raised concerns about a depiction of his iconic “I Have a Dream” speech that included inappropriate sounds.
“While free speech is crucial when portraying historical figures, OpenAI believes that public figures and their families should ultimately control how their likenesses are represented,” the company stated. The restriction is only partially effective, however, as celebrities and public figures must still opt out of having their likenesses used in Sora 2. Some argue this remains too permissive. “No one should have to tell OpenAI if they wish to avoid being deepfaked,” says Ed Newton-Rex, a former AI executive and founder of the campaign group Fairly Trained.
In certain instances, AI companies are paying for their practices, as highlighted by one of the largest proposed copyright settlements of the past year. Three authors accused Anthropic, the firm behind the Claude chatbot, of deliberately downloading more than 7 million pirated books to train its AI models.
A judge reviewed the case and concluded that even if the firm had used this material for training, that could be a sufficiently “transformative” use that wouldn’t inherently infringe copyright. The piracy allegations, however, were serious enough to warrant a trial. In September, Anthropic agreed to settle the lawsuit for at least $1.5 billion.
“Significantly, AI companies appear to be strategizing their responses and may end up paying out a mix of settlements and licensing deals,” Guadamuz notes. “Only a small number of companies are likely to collapse due to copyright infringement lawsuits,” he adds. “AI is here to stay, even if many established players may fail due to litigation and market fluctuations.”
The deployment of flying drones during the Ukraine conflict has drastically transformed ground combat strategies. A similar evolution appears to be underway beneath the waves.
Navies around the world are racing to adopt autonomous submarines. The Royal Navy is set to introduce its first fleet of unmanned underwater vehicles (UUVs), aimed at tracking submarines and safeguarding undersea cables and pipelines. Australia has committed $1.7 billion (£1.3 billion) to develop its ‘Ghost Shark’ submarine to counter the growing presence of Chinese submarines. Meanwhile, the far larger US Navy is investing billions in multiple UUV initiatives, including one already in service that can be deployed from nuclear submarines.
Scott Jamieson, managing director of sea and land defense solutions at BAE Systems—the UK’s foremost arms manufacturer and nuclear submarine builder—asserted that autonomous unmanned submarines signify “a significant shift in the underwater combat domain.” New unmanned vessels under development will enable the Navy to “scale operations in ways not previously possible” at “a fraction of the cost of manned submarines,” he noted.
Established defense giants such as BAE Systems, General Dynamics, and Boeing are competing with startups such as Anduril, creator of the Ghost Shark, and Germany’s Helsing for lucrative new business. The startups argue that they can deliver solutions more rapidly and cheaply.
Anduril’s Ghost Shark is an extra-large autonomous underwater vehicle (XL-AUV) commissioned by the Royal Australian Navy. Photo: Rodney Braithwaite/Australian Defense Force/AFP/Getty Images
The contest for underwater dominance has persisted almost continuously for the last century, both during peacetime and in conflict.
The first nuclear-powered submarine, the American Nautilus, named after Jules Verne’s fictional vessel, was launched in 1954. Today, nuclear-powered submarines form the backbone of the underwater forces of six nations: the United States, Russia, Britain, France, China, and India, and North Korea may soon join them. This comes amid ongoing debate about the value of such costly weapons and their effectiveness as deterrents.
Naval forces engage in a constant game of hide and seek beneath the waves. Submarines seldom surface to evade detection. Recently, due to maintenance issues with other vessels, some British submarines spent an unprecedented nine months submerged, carrying Trident nuclear missiles that could be deployed at a moment’s notice.
Monitoring Russia’s underwater nuclear capabilities, which have been largely inactive in recent years, is crucial for the Royal Navy, especially around the Greenland-Iceland-UK (GIUK) Gap, a critical juncture for NATO allies to observe Russian activities in the North Atlantic. An executive from an arms company mentioned that the South China Sea represents another promising opportunity as China and its neighbors confront each other in a protracted territorial standoff.
Underwater drones could improve the tracking of rival submarines. Some sensors are designed to be deployed by other uncrewed vessels and to remain underwater for extended periods; at least, that is the aspiration of the executives hoping to sell them to Britain.
A growing concern is the increase in attacks on oil and gas pipelines, exemplified by the 2022 Nord Stream incident, where a Ukrainian suspect was identified, and the 2023 attack on the Baltic Connector pipeline linking Finland and Estonia. Undersea power and internet cables are vital for the global economy, as evidenced by the disruption caused to an undersea power cable between Finland and Estonia last Christmas—just two months following the severing of two communication cables in the Baltic Sea.
Parliament’s Defense Select Committee has raised alarms about the UK’s susceptibility to undersea sabotage—so-called “grey zone” actions—which can lead to significant disruptions without escalating to outright war. The committee warned that damage to any of the 60 undersea data and energy cables around the British Isles could “have a devastating effect on the UK.”
Andy Thomis, CEO of Cohort, a British military technology firm known for developing sonar sensors, noted that the crewed ships, aircraft, and submarines traditionally used to track nuclear-powered submarines and potential sabotage vessels are “highly sophisticated and costly.” However, he added, “by integrating unmanned vessels with these systems, we can achieve human-like decision-making capabilities without endangering lives.”
BAE is already testing its Herne underwater drone. Photo: BAE Systems
Cohort hopes to deploy some of its towed sensor arrays (named Krait, after a sea snake) on smaller autonomous vessels.
Modern warships are equipped with five times more sonar sensors than active submarines. Low power requirements are crucial for small unmanned vessels, which cannot accommodate nuclear reactors, and passive sensors that do not emit sonar “pings” also make the drones harder to detect and destroy.
The Royal Navy, along with the British Army, has historically lagged in rapidly adopting the latest technologies. However, lessons from the Ukrainian military underscore the importance of swiftness and cost-effectiveness in drone production for aerial and maritime applications. In response, the Defense Ministry is advocating for the swift development of a technology demonstrator under Project Cabot.
BAE has already conducted tests using a candidate dubbed Herne. Helsing is establishing a facility to manufacture underwater drones in Portsmouth, the Royal Navy’s home base. Anduril, led by the Donald Trump fundraiser Palmer Luckey, is planning to set up a manufacturing site in the UK.
Initial contracts are expected to be awarded this year, with trials likely to take place off north-west Scotland, run by the defense company QinetiQ. A full-scale order for one or two companies, under the banner of Atlantic Net, is then anticipated to cover sensor needs across the GIUK gap.
Sources indicate that the Royal Navy has termed the initiative “anti-submarine warfare as a service,” a play on the phrase “software as a service.” A £24 million tender announcement was published in May.
Anduril’s Dive LD autonomous underwater vehicle. American companies are considering manufacturing bases in the UK. Photo: Holly Adams/Reuters
Sidharth Kaushal, a senior fellow specializing in seapower at the Royal United Services Institute think tank, emphasized that the submarine-hunting strategies employed in recent decades “are not scalable in conflict” due to their reliance on costly and highly specialized assets.
Warships tow cables extending over 100 meters, studded with arrays of sonar sensors designed to detect the faintest sounds and lowest-frequency vibrations. Aircraft from Britain’s fleet, such as its Boeing P-8s, drop disposable sonobuoys to listen for submarines in deep water, satellites scan the surface for the wakes left by submarines’ communication masts, and hunter-killer submarines lurk below on patrol.
The proposal that inexpensive drones could handle much of this task is intriguing. However, Kaushal cautioned that the cost benefits “remain to be verified.” Industry leaders have indicated that large UUV fleets will still incur significant maintenance costs.
Safeguarding submarine cables presents a dual challenge, as sabotage may become more accessible and less expensive. One executive remarked that the likelihood of drones engaging each other underwater is “entirely plausible.”
The Ministry of Defense describes this initiative as “contractor-owned, contractor-operated, and naval-surveilled,” marking the first instance in which a civilian-owned vessel might partake in anti-submarine missions, thus raising the potential of becoming a military target.
“Russia’s immediate response will likely be to test and gauge this capability,” commented Ian McFarlane, head of underwater systems sales at Thales UK. Thales currently supplies the Royal Navy with sonar arrays for submarine detection, unmanned surface craft, and aerial drones, aiming to contribute to Project Cabot by integrating relevant data.
However, Mr. McFarlane insisted that involving private firms is crucial as the Royal Navy and its allies require “mass and resilience now” to address the threats posed by “increasing aggressors.”
AI firms need to be upfront about the risks linked to their technologies to avoid the pitfalls faced by tobacco and opioid companies, as stated by the CEO of Anthropic, an AI startup.
Dario Amodei, who leads the US-based company developing Claude chatbots, asserted that AI will surpass human intelligence “in most or all ways” and encouraged peers to “be candid about what you observe.”
In his interview with CBS News, Amodei expressed concerns that the current lack of transparency regarding the effects of powerful AI could mirror the failures of tobacco and opioid companies that neglected to acknowledge the health dangers associated with their products.
“You could find yourself in a situation similar to that of tobacco or opioid companies, who were aware of the dangers but chose not to discuss them, nor did they take preventive measures,” he remarked.
Earlier this year, Amodei warned that AI could potentially eliminate half of entry-level jobs in sectors like accounting, law, and banking within the next five years.
“Without proactive steps, it’s challenging to envision avoiding a significant impact on jobs. My worry is that this impact will be far-reaching and happen much quicker than what we’ve seen with past technologies,” Amodei stated.
He has used the term “compressed 21st century” to convey how AI could accelerate scientific progress compared with previous decades.
“Is it feasible to multiply the rate of advancements by ten and condense all the medical breakthroughs of the 21st century into five or ten years?” he posed.
A prominent advocate for AI safety, Amodei highlighted various concerns Anthropic has raised about its own AI models, including cases where models appeared to realize they were being tested and attempted to blackmail their evaluators.
Last week, it emerged that a Chinese state-backed group had leveraged Anthropic’s Claude Code tool to launch attacks on 30 organizations globally in September, leading to “multiple successful intrusions.”
The company noted that one of the most troubling aspects of the incident was that Claude operated largely autonomously, with 80% to 90% of the actions taken without human intervention.
“One of the significant advantages of these models is their capacity for independent action. However, the more autonomy we grant these systems, the more we have to ponder if they are executing precisely what we intend,” Amodei highlighted during his CBS interview.
Logan Graham, the head of Anthropic’s AI model stress testing team, shared with CBS that the potential for the model to facilitate groundbreaking health discoveries also raises concerns about its use in creating biological weapons.
“If this model is capable of assisting in biological weapons production, it typically shares similar functionalities that could be utilized for vaccine production or therapeutic development,” he explained.
Graham noted that model autonomy, which is central to the business case for investing in AI, cuts both ways: users want AI tools that enhance their businesses rather than undermine them.
“One needs a model to build a thriving business and aim for a billion,” he remarked. “But the last thing you want is to find yourself locked out of your own company one day. Thus, our fundamental approach is to start measuring these autonomous functions and conduct as many unconventional experiments as possible to observe the outcomes.”
“Enshittification,” much like “shrinkflation” and “greenwashing,” is a newly coined term that feels familiar, perfectly expressing a widespread yet subtle issue.
We’re acutely aware that websites and apps often get worse over time as their owners squeeze users for profit. This is visible everywhere, from Instagram swapping your chronological feed for a mashup of influencer content to Apple pushing users to upgrade by limiting repair options.
Cory Doctorow coined the term in 2022 and expands on it in his recent book, Enshittification: Why Everything Suddenly Got Worse and What to Do About It, which serves as both an analysis and a call to action.
The strategy behind enshittification is for platforms like Facebook to establish and provide excellent services. Users flock to them for convenience and enjoyment. The company then waits until we are deeply connected—friends, local groups, schools, etc.—making it cumbersome to leave.
Once the user base becomes substantial, advertisers get locked in as well. The company then shifts focus to profits, inundating services with ads and algorithms, leading to a decline in user experience. This creates pressure on advertisers. The platform then becomes toxic, primarily benefitting shareholders, and users find it difficult to quit. As Doctorow states, we are trapped in a decaying entity.
In the past, poor businesses would have faced market consequences. If a café serves bad coffee, we’d simply find another. Today, however, tech companies have formed monopolies, making substantial profits that allow them to sustain their dominance. They purchase competitors merely to shut them down, lobby for lenient regulations, and secure exclusive contracts. (For instance, Google pays Apple $20 billion annually to remain the default search engine in Apple’s Safari browser.)
Enshittified companies harvest excessive personal data and exploit what it reveals, for instance charging more for goods on payday, when consumers are less likely to push back on prices. Others deploy algorithms to suppress gig-economy wages, or keystroke-monitoring systems that alert supervisors when we pause typing.
Although none of these abuses will be entirely new to readers, consuming them in bulk leaves a sour taste. They can even lead intelligent people to blame themselves for being exploited in ways they did not fully understand.
In one sense, these companies are simply doing what they were designed to do: maximize profit. But advances in computers, algorithms, and the internet have allowed things to spiral, enabling techniques far more sophisticated than those available just a decade ago.
Doctorow cautions that regulators meant to protect us are often outmatched by the companies they monitor. Yet, he firmly believes they are part of the solution.
While there have been favorable developments in the European Union and under President Joe Biden in the U.S., substantial work remains to be done, as tech companies may innovate ways to harm us more quickly than can be counteracted. We can demand more accountability from politicians, and well-crafted legislation supported by effective regulators can help.
However, the potential power of boycotts remains largely unaddressed—tech companies need us more than we need them. It’s feasible to abandon social media, favor local businesses, and utilize ethical search engines. The more individuals take such actions, the likelier others will follow.
Whether it pertains to travel, clothing, or food, many of us attempt to “vote” with our wallets. Perhaps it’s time we extend this practice to our online choices.
It sounds straightforward: plants convert sunlight into food (stored energy), so transforming that food into fuel should yield a sustainable biofuel with zero carbon emissions, right? Wrong. In reality, the surge in biofuels is driving up emissions and harming both people and wildlife. Yet production is ramping up rapidly. What is going on?
If you believe biofuels are beneficial, you may be misled by the pervasive greenwashing. Evidence suggests that biofuels generally do more harm than good. A recent report by the campaign group Transport and Environment (T&E) reveals that the shift to biofuels has resulted in a 16% increase in carbon dioxide emissions on average, as compared to sticking with fossil fuels.
Why? Because agricultural cultivation is one of the leading sources of greenhouse gases. To be fair, the 16% figure is a global average, according to the T&E report. In some regions, such as Europe, biofuels are reckoned to reduce emissions overall, but only marginally. Given the many adverse effects of biofuels, we are making substantial sacrifices for minimal emissions savings.
For starters, rising food costs are a significant consequence we’re all experiencing. Converting wheat and corn into bioethanol and vegetable oil into biodiesel escalates demand, leading to soaring prices. It’s difficult to quantify, but experts I’ve consulted over the years generally agree this is a major factor in food price inflation.
Moreover, biofuel crops frequently necessitate irrigation, worsening water scarcity in various areas. According to the T&E report, producing enough biofuel for a vehicle to travel just 100 kilometers (62 miles) consumes 3,000 liters of water. In contrast, a solar-powered electric car only requires 20 liters for the same distance.
We also need land. Agricultural land continues to expand globally to accommodate growing populations that are consuming more meat. As biofuel production rises, more land is needed. This often results in deforestation, such as clearing rainforests in Indonesia to establish new palm oil plantations. In essence, biofuels are exacerbating another global crisis: the loss of wildlife and biodiversity.
What’s particularly concerning is the inefficiency of biofuel production. A report from T&E suggests that if solar panels were installed on the same land, the equivalent amount of energy could be generated using just 3% of the space. In other words, solar energy can mitigate emissions with a significantly lower environmental footprint. It appears we can outperform nature when it comes to harnessing the sun’s energy.
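To put the land figure in perspective, here is the simple arithmetic implied by T&E’s 3% estimate, offered as a back-of-envelope illustration rather than an additional finding from the report:

$$\frac{\text{land needed for biofuel}}{\text{land needed for solar}} \approx \frac{1}{0.03} \approx 33$$

In other words, on the report’s own figure, growing fuel requires on the order of 30 times more land than photovoltaics to deliver the same amount of energy.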
In contrast, biofuels contend with all the same pollution problems as traditional agriculture, from pesticides detrimental to humans and wildlife to nutrient runoff that devastates rivers, lakes, and seas. Utilizing non-food biofuel sources like waste could help address some of these challenges. However, by 2030, over 90% of biofuel production is still expected to rely on food crops, according to the T&E report.
So why are numerous countries incentivizing the production of more biofuels than ever? A financial interest in biofuels drives influential lobby groups to advocate for more government support. Simultaneously, some nations and organizations look to meet emissions targets without confronting the inconvenient truths.
For instance, politicians across the American political spectrum have sought to stay in favor with Corn Belt farmers growing corn for bioethanol. Tax incentives for biofuels introduced in the US in 2022 were extended earlier this year.
Additionally, the shipping and aviation sectors claim they are reducing emissions but see biofuels as a way to keep operating much as before. Aviation industry standards for “sustainable aviation fuels” at least account for emissions linked to increased land use and place limits on the biofuels that emit the most. Shipping could be even worse, as the industry has yet to decide whether to account for land use at all. Biofuel use in shipping alone could double by the 2030s, the T&E report warns, which could be catastrophic for all the reasons discussed.
For years, it has been evident that producing biofuels to minimize emissions is counterproductive, and continuing on this path is sheer madness.
Oil corporations are making minimal investments in wind energy.
Associated Press/Alamy
Major oil and gas firms own less than 1.5 percent of global renewable electricity capacity, raising questions about their commitment to the green energy transition despite their public assertions.
Marcel Llavero-Pasquina and Antonio Bontempi, researchers at the Autonomous University of Barcelona, analyzed ownership data on more than 53,000 renewable energy projects, including wind, solar, hydroelectric, and geothermal, tracked by the NGO Global Energy Monitor. They used this information to determine what proportion of those projects is owned by the 250 largest oil and gas companies, which together account for 88% of global hydrocarbon production.
As the world shifts away from fossil fuels, many chief energy companies have committed to investing in renewables, yet findings indicated that these top firms own merely 1.42% of operational renewable energy capacity worldwide. Notably, more than half (around 54%) of this capacity was acquired rather than developed by these companies. Their analysis of total energy output showed that just 0.13% of energy produced by these companies comes from renewable electricity.
“The findings were astonishing even to me,” remarks Llavero-Pasquina. “We understood they played a limited role in the energy transition. We thought it was merely for appearances. Yet the numbers are even lower than we anticipated.”
Llavero-Pasquina and Bontempi are part of Environmental Justice, a collective dedicated to researching and supporting the global environmental justice movement. Llavero-Pasquina believes the group’s campaigning stance bolsters rather than compromises the research: “It is crucial for us to maintain high rigor in our work so that we can effectively persuade others and demonstrate the truth.”
It is not surprising that major energy corporations, known for their oil and gas ventures, do not hold substantial stakes in renewable energy, says Thierry Bros at Sciences Po in Paris. “Ultimately, [the energy transition] must be disruptive and not play into the hands of these companies.”
However, Bros argues that big energy firms are misleadingly portraying their efforts towards energy transition. “They represent themselves as incorporating methods like carbon capture for emissions from fossil fuels. Yet, I believe their actual engagement leans more towards carbon capture and sequestration, which may extend beyond their genuine expertise,” he states.
Offshore Energies UK, representing the UK’s offshore energy sector, including oil, gas, wind, carbon capture, and hydrogen, refrained from commenting directly on these findings. Nevertheless, it highlighted a previous statement from CEO David Whitehouse: “Rather than being in conflict, oil and gas, wind, and emerging low-carbon technologies form a unified system. The expertise of our workforce, the same individuals who developed the North Sea, is instrumental for achieving this transition,” he remarked.
An aide to the new science and technology secretary has said that AI firms are not legally required to compensate creators for using their content to train their systems.
Kirsty Innes, recently appointed as special advisor to Liz Kendall, Secretary of State for Science, Innovation and Technology, remarked, “Philosophically, whether you believe large companies should compensate content creators or not, there is no legal obligation to do so.”
The government is locked in a debate over how creatives should be compensated by AI companies, which has prompted well-known British artists, including Mick Jagger, Kate Bush, and Paul McCartney, to urge Kendall’s predecessors to advocate for the rights of creators and safeguard their work.
Innes, who previously worked at the Tony Blair Institute (TBI) thinktank, has since deleted the post, which she made on X in February, before her appointment.
The TBI received donations amounting to $270 million (£220 million) last year from the Oracle tech billionaire Larry Ellison. Oracle is backing the $500 billion Stargate project to build AI infrastructure in the US alongside OpenAI and the Japanese investment firm SoftBank.
Labour’s consultation on reforming copyright law has caused tension with the UK’s creative community. The government’s initially preferred approach would have allowed AI companies to use copyrighted works without permission unless the owners opted out.
Some content owners, such as the Guardian and the Financial Times, have struck agreements with OpenAI, the creator of ChatGPT, to license their content for use by the San Francisco company.
The government now asserts that it no longer prefers requiring creatives to opt out and has assembled working groups from both the creative and AI sectors to develop solutions to the issues at hand.
Ed Newton-Rex, founder of Fairly Trained, a nonprofit dedicated to respecting creative rights and certifying generative AI companies, is a prominent advocate for creators.
“I hope she takes this into account with her advisor. This perspective aligns more closely with public sentiment, which is rightly concerned about the implications of AI and the influence of large technology firms.”
He noted that Kendall’s appointment presents an opportunity to redefine a relationship that has become increasingly complicated between the creative industry and the dominance of big technology companies. This move appears to reflect the demands from Big AI firms for copyright reform without the obligation to compensate creators.
Both Innes and Kendall chose not to comment.
Beeban Kidron, a crossbench peer who has campaigned against the loosening of copyright law, said: “Only last week the creative industries sent a letter to the prime minister asking him to clearly define the rights of individuals over their scientific and creative work, especially in light of the unintended consequences of government policies that overlook British citizens.”
A significant portion of UK employers, about one-third, are utilizing “bossware” technology to monitor employee activities, predominantly through methods like email and web browsing surveillance.
Private sector firms are the most likely to carry out online monitoring, with one in seven employers reportedly recording or reviewing screen activity, according to one of the most comprehensive UK studies of workplace surveillance.
These insights, disclosed by the Chartered Management Institute (CMI) to the Guardian, are derived from feedback from numerous UK managers, indicating a recent uptick in computer-based work monitoring.
According to 2023 research by the Information Commissioner’s Office (ICO), fewer than 20% of workers believed they were being monitored by their employers. The new finding, that roughly one-third of managers are aware of their organizations tracking employees’ online activity on company devices, likely still understates the true extent of monitoring.
Many of these surveillance tools are designed to mitigate insider threats, safeguard confidential data, and identify dips in productivity. However, this growing trend seems to be inducing anxiety among employees. CMI highlights that many managerial figures oppose such practices, arguing they erode trust and infringe on personal privacy.
A manager at an insurance firm developing an AI system for monitoring staff screen activity expressed feelings of “unease,” questioning, “Do they trust employees to perform their roles? Is there an intention to replace them with AI?”
One employee monitoring service provides insights into workers’ “idle hours,” tracks “employee productivity,” flags unapproved AI or social media use, and offers “real-time data on employee behavior, including screenshots, screen recordings, keystrokes, and application usage.”
In response to the findings, the ICO emphasized that employers “must inform employees about the nature, scope, and reasons for surveillance,” noting that excessive monitoring “can infringe on personal privacy,” especially for remote workers, and warned that it would take action if necessary.
Last year, the ICO ordered the outsourcing company Serco Leisure to stop using facial recognition technology and fingerprint scanning to monitor staff attendance at several leisure centers.
Monitoring often includes ensuring that inappropriate content isn’t accessed, according to CMI. However, they cautioned, “If it feels like an invasion, there can be long-term implications.”
Petra Wilton, policy director at CMI, stated, “If implemented, this could be of significant concern to employers and raise serious data privacy and protection issues.”
PwC recently rolled out a “traffic light” system that uses badge swipes and wifi connection data to ensure staff attend the office at least three days a week. A PwC spokesperson said the approach had been “well received by most of our employees.”
A former senior public transport worker, who requested anonymity, shared their experience of facing online surveillance, describing it as “distracting and deeply intrusive.”
“It began with surveillance, and I eventually left because I was extremely frustrated,” they noted. CMI research revealed that one in six managers would contemplate seeking new employment if their organization started monitoring online activities on work devices.
Among managers aware of their employers monitoring them, 35% indicated surveillance of emails. Overall, tracking login/logout times and system access emerged as the most prevalent form of monitoring.
The survey showed that 53% of managers endorse monitoring employee online activity on company devices, but 42% feel this not only undermines trust but also fails to enhance performance, potentially resulting in misuse or unjust disciplinary action.
Mention batteries and it may conjure thoughts of production lines and the expansive “gigafactory” projects of Elon Musk and Tesla worldwide, or images of batteries powering devices from electric toothbrushes to smartphones and cars. But at Invinity Energy Systems’ modest factory in Bathgate, near Edinburgh, employees are nurturing the hope that Britain will also play a pivotal role in the battery revolution.
These batteries, utilizing vanadium ions, can be housed within a 6-meter (20-foot), 25-ton shipping container. While they may not be used in vehicles, manufacturers aspire for this technology to find its place in the global storage rush, propelling a transition to net-zero carbon grids.
Renewable electricity represents the future of a cleaner and more economical energy system compared to fossil fuels. Its primary challenge lies in the fact that renewable energy generation is contingent on weather conditions—sunshine and wind may not be available when energy demand peaks. Battery storage allows for the shift of energy production, enabling it to be saved for later use, which is essential for a well-functioning electric grid.
“What has suddenly happened is that people have recognized the necessity of energy storage to integrate more renewable energy into the grid,” said Jonathan Marren, CEO of Invinity, at the factory, where rows of batteries are stacked ready for shipping.
For a long time, experts have explored various methods for storing renewable electricity, but the issue of grid reliability gained political attention in April when Spain and Portugal experienced the largest blackouts in Europe in two decades. While some rushed to criticize renewable energy, a Spanish government report clarified that it was not the cause. Nonetheless, battery storage assists grids worldwide in avoiding similar complications as those seen in the Iberian Peninsula.
Power blackouts in Spain and Portugal in April highlighted the issues of energy security. Photo: Fermín Rodríguez/Nurphoto/Rex/Shutterstock
Much of the attention in battery research has focused on maximizing energy storage in the smallest and lightest containers suitable for electric vehicles. This development was crucial for the transition away from carbon-intensive gasoline and diesel, which are significant contributors to global warming. It also led to substantial reductions in the costs associated with lithium-ion batteries.
As with many aspects of the shift from fossil fuels to electric technologies, China is driving demand at an incredible scale. According to data from Benchmark Mineral Intelligence, China has installed batteries with a capacity of 215 gigawatt hours (GWh).
China’s battery installations are expected to nearly quadruple by the end of 2027 as new projects are completed. For instance, the state-owned China Energy Engineering Corporation recently bid on a 25GWh battery project utilizing lithium iron phosphate technology, typically used in more affordable vehicles.
Iola Hughes, research director at a Benchmark subsidiary, Rho Motion, stated that declining prices and increased adoption of renewable energy are propelling the rise in demand. By 2027, total global battery storage installations could increase fivefold, Hughes noted, adding, “This figure could rise even further as technological advancements and reduced costs enable developers to construct battery energy storage systems at an unprecedented pace.”
The majority of this growth (95% of current figures) will involve projects utilizing lithium-ion batteries, including a site in Aberdeenshire managed by UK-based Zenobē Energy, which claims to have “the largest battery in Europe.”
Energy storage companies harnessing various technologies must navigate a challenging landscape to secure early-stage funding while proving that their technologies are economically viable. Invinity’s flow batteries use vanadium, while U.S.-based rival EOS Energy employs zinc. However, flow batteries often excel in applications requiring storage durations of over 6-8 hours, where lithium batteries typically fall short.
Cara King, an R&D scientist at Invinity Energy Systems, holds a vial of vanadium electrolyte in various states of charge. Photo: Murdo Macleod/The Guardian
Flow batteries exploit the ability of certain metals to exist stably in more than one oxidation state. Each shipping container holds two tanks of vanadium ions carrying different charges, one solution a royal purple, the other an Irn-Bru red. The system pumps the vanadium solutions past a membrane stack that lets protons through, while electrons travel around the external circuit to provide power. Drive electrons the other way, from solar panels or wind turbines, and the process reverses, charging the battery, which can deliver up to 300 kilowatts.
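For readers curious about the underlying chemistry, the textbook vanadium redox couples usually cited for this type of flow battery are sketched below for the discharge direction; this is the standard general description, not a statement of Invinity’s specific cell design.

$$\text{Negative electrode: } \mathrm{V^{2+} \rightarrow V^{3+} + e^{-}}$$

$$\text{Positive electrode: } \mathrm{VO_2^{+} + 2H^{+} + e^{-} \rightarrow VO^{2+} + H_2O}$$

Reversing the current, as solar panels or wind turbines do when charging, drives both reactions in the opposite direction.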
A significant benefit of flow batteries is that they are relatively easy to manufacture compared with lithium-ion equivalents. Invinity assembles its battery stacks with just 90 employees, using parts primarily sourced from Scottish suppliers.
Over the lifetime of a project, Marren maintains, “on a cost-per-cycle basis, it offers more value than lithium.” While the upfront costs are higher than those of lithium batteries (Invinity estimates around £100,000 per container), the longer lifespan without capacity loss and the absence of flammability, which means no costly fire-safety equipment is needed, work in its favor. Its shipping containers are already in place at sites including one in Bristol, an electric vehicle charging hub in Oxford, casinos in California, and solar parks in South Australia.
“We can commission an entire site within a few days,” Marren remarked.
Invinity is valued at just over £90 million on London’s Aim junior stock market and hopes the UK can take a leading role in the flow battery niche.
UK-manufactured products could be favorably treated in government competitions for support under a “cap and floor” scheme, which guarantees developers’ revenues within a specified range. Should it succeed, the company anticipates a substantial increase in production from its current rate of five containers per week. Marren envisions employing up to 1,000 workers if the company flourishes.
“The potential for growth is immense,” Marren stated. “Have we moved past the question of whether the technology can scale effectively?”
The companies involved in technology, surveillance, and private prison services that are supporting Donald Trump’s vast escalation and militarization of immigration enforcement are celebrating after announcing their recent financial performance.
Palantir, a tech firm, along with Geo Group and CoreCivic, both private prison and surveillance providers, reported this week that their earnings exceeded Wall Street’s forecasts, buoyed by the administration’s aggressive immigration policies.
“As usual, I was advised to temper my enthusiasm regarding our impressive numbers,” stated Alex Karp, CEO of Palantir, earlier this week. He then praised the company’s “remarkable numbers” and expressed his “immense pride” in its achievements.
Executives from private prison companies did not hesitate to highlight the chance for “unprecedented growth” in the immigration detention sector during their financial discussions.
Palantir reported that revenue, boosted by US government contracts, exceeded $1 billion in the second quarter of 2025, a significant rise compared with the same period last year. Analysts had predicted revenue of $939.4 million.
The firm, which aggregates and analyzes disparate data sets so that clients can act on the information, derives a substantial portion of its income from government deals. Its largest US customer is the Department of Defense, home to the US Army, which recently announced a $10 billion contract with Palantir. The Department of Homeland Security (DHS) has also deepened its partnership with Palantir since the Trump administration began, building on a collaboration that dates back to 2011. DHS’s Immigration and Customs Enforcement (ICE) agency is responsible for the apprehension, detention, and deportation of immigrants.
“We provide safety and uphold values, so Palantir may face backlash simply because we help improve this nation,” Karp remarked. “The fact that we can succeed while holding a distinct viewpoint ought to provoke some jealousy and discomfort, given our perceptions of those we deem less desirable.”
While Palantir facilitates immigration enforcement, the private prison companies Geo Group and CoreCivic have also reported higher-than-expected earnings. Geo Group posted revenue of $636.2 million for the quarter, surpassing analysts’ forecasts of $623.4 million, while CoreCivic reported $538.2 million for the second quarter of this year, a 9.8% increase on the same period last year. George Zoley, Geo Group’s executive chairman, noted that detention centers are fuller than ever, with ICE using 20,000 beds across 21 Geo Group facilities, housing roughly one-third of the approximately 57,000 people held in ICE detention nationwide. Zoley also said on a call that the company is exploring detention centers on US military sites, one of the many “unprecedented growth opportunities” he described.
Awaiting the Surveillance Boom
Though Geo Group’s detention sector has experienced a significant uplift, the growth of its monitoring division has not yet materialized as anticipated by executives earlier this year.
Executives anticipate that the Intensive Supervision Appearance Program (ISAP), an immigration monitoring initiative run for the past 20 years by its subsidiary BI Inc, will exceed its previous high of 370,000 monitored immigrants. In recent months the number has hovered around 183,000 people.
“[ICE hasn’t] communicated any ISAP expansion at this time,” Zoley explained during an investor call.
Nevertheless, the company expects ISAP figures to rise next year, once ICE has “maximized detention capacity.” The Trump administration has expressed interest in increasing the number of immigrants under surveillance through ankle monitors. Many immigrants have described ISAP surveillance as invasive, at times physically uncomfortable, and ineffective.
In a discussion with investors, CoreCivic executives shared that they are offering ICE around 30,000 beds for detaining immigrants across their national network.
ICE Expansion Signals Future Financial Gains
A significant funding bill passed by Congress and signed by Trump last month has facilitated a substantial influx of funds into DHS. ICE received $45 billion to expand its detention infrastructure.
Currently, ICE has approximately 41,500 beds available, while detaining around 57,000 individuals across its network. This funding influx could lead agents to detain thousands more, making it advantageous for private prison contractors.
“Our business is perfectly aligned with the demands of this moment,” stated CoreCivic CEO Damon T. Hininger during an investor call on Thursday. “We are in a unique situation, witnessing a rapid escalation of federal detention requirements nationwide, along with a continual need for our solutions.”
Flush with funds from the spending package, immigration officials can move quickly, and private prison firms are positioned to offer their services just as fast.
“As we understand, budgets reflect moral priorities, and last month Congress decided to fully fund actions targeting the immigrant community at the cost of crucial programs benefiting all Americans,” said Setareh Ghandehari, advocacy director at the Detention Watch Network. “Private prison companies have been eagerly eyeing the potential for profit at everyone else’s expense since last November.”
Since Trump’s re-inauguration this year, CoreCivic has amended, extended, or signed new contracts to detain immigrants at eight different facilities, as per the company’s financial reports. Geo Group has done similarly at five facilities.
Both firms expect to generate revenues amid increasing scrutiny from immigration rights and human rights organizations regarding conditions in immigration detention facilities across the nation.
Ghandehari said the profits flowing to private prisons arise from “the devastation of human lives, orchestrated by the Trump administration, and made feasible by a complicit Congress.”
Cibola County Correctional Center, a facility in New Mexico housing both immigrants and federal prisoners, is under FBI investigation over alleged drug trafficking. Since 2018, at least 15 people have died at the facility.
Last September, the company promoted Cibola as an ideal location for detaining additional migrants.
Donald Trump’s AI summit in Washington this week was a lavish affair, warmly received by the tech elite. The president took to the stage on Wednesday evening, music booming over the loudspeakers, before making his announcement.
The message was unmistakable: the technology regulatory landscape that once dominated Congressional discussions has undergone a significant transformation.
“I’ve been observing for many years,” Trump remarked. “I’ve experienced the weight of regulations firsthand.”
Addressing the crowd, he called them “a group of brilliant minds… intellectual power.” He was preceded on stage by notable technology figures, venture capitalists, and billionaires, including Nvidia CEO Jensen Huang and Palantir CTO Shyam Sankar. The Hill and Valley Forum, a powerful industry group, co-hosted the event alongside the Silicon Valley-based All-In podcast, whose co-hosts include the White House AI and crypto czar David Sacks.
Dubbed “Winning the AI Race,” the forum provided the president with a platform to present his “AI Action Plan,” aimed at relaxing restrictions on the development and deployment of artificial intelligence.
At the heart of this plan are three executive orders, which Trump claims will establish the U.S. as an “AI export power” and unwind some regulations introduced by the Biden administration, particularly those governing safe and responsible AI development.
“Winning the AI race necessitates a renewed spirit of patriotism and commitment in Silicon Valley,” Trump said.
One executive order targets what the White House calls “woke” AI, urging companies that receive federal funds to steer away from “ideological DEI doctrines.” The other two primarily address deregulation, a pressing demand from American tech leaders, who have increasingly chafed at government oversight.
One order will promote the export of “American AI” to foreign markets, while the other will ease environmental rules for power-hungry data centers.
Lobbying for Millions
In the lead-up to this moment, tech companies have forged friendly ties with Trump. The CEOs of Alphabet, Meta, Amazon, and Apple contributed to the president’s inaugural fund and met him at Mar-a-Lago in Florida. Sam Altman, CEO of ChatGPT maker OpenAI, has become a close ally of Trump, and Nvidia’s Huang has pledged, together with partners, $500 billion of investment in US AI infrastructure over the next four years.
“The reality is that major tech firms are pouring tens of millions into building relationships with lawmakers and influencing tech legislation,” said Alix Fraser, vice-president of advocacy at the nonprofit Issue One.
A report released on Tuesday found that the tech industry is spending record amounts on lobbying, with the eight largest tech companies collectively spending $36 million.
The report noted that Meta accounted for the largest share, spending $13.8 million and employing 86 lobbyists this year. Nvidia and OpenAI reported the steepest increases, with Nvidia spending 388% more than last year and OpenAI’s investment rising over 44%.
Prior to Trump’s AI plan announcement, over 100 prominent labor, environmental, civil rights, and academic organizations rebutted the president’s approach by endorsing the “People’s AI Plan.” In their statement, they stressed the necessity for “relief from technology monopolies,” which often prioritize profits over the welfare of ordinary people.
“Our freedoms, the happiness of our workers and families, the air we breathe, and the water we drink cannot be compromised for the sake of unchecked AI advancements, influenced by big tech and oil lobbyists,” the group stated.
In contrast, tech firms and industry associations celebrated the executive orders. Companies including Microsoft, IBM, Dell, Meta, Palantir, Nvidia, and Anthropic praised the initiative. James Czerniawski, head of emerging technology policy at the lobbying group Consumer Choice Center, described Trump’s AI plan as a “bold vision.”
“This marks a significant departure from the Biden administration’s combative regulatory stance,” Czerniawski concluded.
Companies across the UK are falling prey to cyberattacks through their buildings, and the situation risks worsening unless they take immediate action, a new report indicates.
A survey of facility managers, service providers, and chartered surveyors conducted by RICS (the Royal Institution of Chartered Surveyors) and shared with the Guardian found that a growing share of buildings experienced a cyberattack in the past year, up from 16% the year before.
Nearly three-quarters of over 8,000 business leaders (73%) anticipate that cybersecurity incidents will impact their operations in the next 12-24 months. RICS has recognized cybersecurity and digital risks as significant and rapidly evolving threats for building owners and occupants.
Marks & Spencer had to pause orders on its website for nearly seven weeks after a major attack in April; its clothing sales fell significantly in the period to 25 May, and it lost market share to competitors such as Next, Zara, and H&M.
As cybercriminal techniques advance, incidents targeting critical infrastructure and data breaches have become increasingly frequent, as noted by RICS. This trend will likely intensify with the enhanced capabilities of artificial intelligence and rapid technological advancements.
RICS cautioned that some buildings might be relying on dangerously outdated operating systems. For instance, a building that was opened in 2013 might still be using Windows 7, which has not received security updates from Microsoft for over five years.
Paul Bagust, head of the property practice at RICS, remarked: “Buildings have transformed from mere bricks and mortar into smart, interconnected digital environments that leverage continuously evolving technology to enhance the experience of occupancy.
This technology collects data to inform decision-making. At the levels of property management, building users, occupants, and owners, these advancements provide various benefits, including enhanced efficiency and reduced environmental impact. However, they also present multiple risks and vulnerabilities that could be exploited by malicious entities.”
The report highlights operational technologies such as building management systems, CCTV networks, Internet of Things devices, and access control systems as potential risk areas. This encompasses everything from automated lighting and heating to building management systems and advanced security protocols.
Bagust further commented: “It’s challenging to envision a scenario where technology does not continue to elevate the risks within building operations. Identifying these burgeoning digital challenges and implementing adequate security measures is essential but increasingly complex.”
If Chinese automakers are to be believed, the country’s drivers love karaoke. Some enthusiasts are so passionate that they want karaoke features built into their family cars.
Arno Antlitz, Volkswagen’s chief financial officer, says this is something that would have baffled the European mindset just a few years ago. Nevertheless, the innovations found in electric vehicles from Chinese brands such as BYD and XPENG illustrate the lessons Volkswagen and its European counterparts have had to absorb as they try to catch up with their Chinese rivals in the global electric vehicle market.
“No one in Wolfsburg thinks karaoke is necessary in a car,” Antlitz remarked during a Financial Times meeting last week. “Yet, you need it.”
XPENG G6 family SUV undergoing testing in London. Photo: Jasper Jolly/The Guardian
A decade ago, such openness from the world’s second-largest automaker would have been surprising. Chinese brands had made little headway in Europe, where the automotive industry was long dominated by established manufacturers from Germany, France, the UK, and Japan, as well as South Korea. The rise of battery technology, however, gave Chinese manufacturers, bolstered by substantial state subsidies, an opening to aim for dominance of the burgeoning electric vehicle sector.
They seized the opportunity. Data from the EV analyst Matthias Schmidt shows that Chinese brands took more than 10% of European EV sales in early 2024, though that figure had slipped back to 7.7% by February. The scale of the Chinese domestic market, meanwhile, is unmatched: 12.8 million battery and hybrid cars were sold in China in 2024, more than the entire European car market.
The swift advances from China caught competitors off guard, especially after a technological leap during the pandemic. Bentley’s Frank-Steffen Walliser described the innovations unveiled at the 2023 Shanghai motor show as a “shock that comes after a period of silence.”
Chinese manufacturers are increasingly competing for a future in which vehicles are seamlessly integrated into users’ digital lives and largely self-driving. Tesla remains the leader among Western automakers, but China’s BYD is close behind, and Tesla chief executive Elon Musk has reportedly been more focused on supporting Donald Trump’s political ambitions than on automotive innovation. Despite that backing, Trump’s policies are projected to significantly hinder American manufacturers.
Chris McNally, an analyst at Evercore ISI, noted in a report after attending the Shanghai show that experiences such as letting an Aito M8 luxury SUV handle the driving while enjoying its massage seats and watching films on a retractable projector screen showcase Chinese innovation at a fraction of the price of Western luxury vehicles.
According to McNally, the global market share held by major automakers in Detroit, Germany, and Japan has dropped from 74% to 60% over the past five years. “If you’re a US/EU manufacturer not planning to offer affordable, scalable EVs in the next five years, you could face serious challenges by the 2030s,” he warned.
He further questioned whether the fight is lost for Western makers, suggesting they may make a strong comeback during this phase of automotive evolution.
Shanghai Motor Show in April. Photo: Go Nakamura/Reuters
BYD’s Seagull, priced around £6,000 in China, showcases autonomous technology comparable to much costlier vehicles, branded as “God’s Eye.” This pricing was achieved using heavier sodium-ion batteries, which compromise range for affordability, yet it highlights a challenge that European manufacturers face.
The consulting firm Bain & Company estimates that Chinese automakers can, on average, develop cars at just 27% of the cost of their European counterparts.
This isn’t just about undercutting prices. Last week, during a test run organized by the British lobby group for automakers and traders, BYD’s £33,300 Seal U DM-I, a plug-in hybrid family SUV, went head-to-head with Volkswagen’s plug-in hybrid Tiguan, which can cost upwards of £10,000 more.
Brands taking part included Omoda and Jaecoo, both owned by state-backed Chery, alongside Leapmotor, Geely (which owns Volvo, Polestar and Smart) and Xpeng. During a week of trials, the Guardian found an abundance of driver assistance features and a spacious interior rivaling that of the Tesla Model Y.
All are priced competitively and barely distinguishable in quality from European offerings, with smooth rides and impressive voice assistants that let drivers open the sunroof without taking their attention off the road. A standout from the trials was the swift MG Cyberster electric sports car, made by state-owned SAIC.
There are indications of resistance from Europe. Priced at £23,000, the Renault 5 has rapidly gained traction as one of the first affordable electric vehicles manufactured in Europe. While Renault is working diligently to lower production costs, its profitability remains uncertain, though the model has garnered significant popularity.
The French carmaker is also aiming to cut the time it takes to bring a model to sale from three years to two, with assistance from an unnamed Chinese partner, for upcoming models such as the Renault 4 and the next Twingo.
If they cannot beat them, joining them appears to be a favored strategy among the Europeans. Volkswagen has invested in Xpeng (also known as Xiaopeng), while Stellantis plans to bring Leapmotor’s cars to Europe and make use of its technology. Scandinavian brands such as Volvo and Polestar, meanwhile, are increasingly reliant on technology from their parent company, China’s Geely.
The UK’s JLR is working with Chery to produce more affordable vehicles under the revived Land Rover Freelander name; according to JLR’s CEO, Adrian Mardell, the vehicle, set to launch in the second half of 2026, “could be global.” Nissan’s Ivan Espinosa has hinted that the Japanese manufacturer could assemble Chinese cars at its plant in Sunderland, north-east England, to fill spare capacity.
Shunning Chinese technology is not an option for many firms, even if they wanted to. Most batteries are made in China, with only a few competitors in Japan and South Korea, and Europe’s battery champion, Northvolt, has collapsed. BYD, meanwhile, announced in March a new battery that could add roughly 250 miles of range in a five-minute charge, while shares in battery giant CATL surged 16% on their market debut in Hong Kong.
Europe possesses certain defensive advantages, including a vast network of dealerships (still preferred by consumers for purchasing) and maintenance garages, which slow the progress of Chinese brands.
“European consumers tend to be quite conservative and very brand loyal,” remarked Eric Zeyer, head of Bain & Company’s European automotive division. “It’s exceedingly challenging for Chinese manufacturers to break into Europe and replicate their domestic success.”
He warned that without strategic moves, Chinese brands risk disappearing from the market, similar to the fate of US electric brand Fisker.
Despite the prevalent challenges, European automotive leaders assert the game isn’t lost, even as it’s evident that China is set to capture a significant share of the global automotive market.
Bentley’s Walliser noted that “Chinese manufacturers are more agile and quicker to adapt,” while also embracing new technologies. “This isn’t magic,” he stated. “It can be achieved here too.”
“Don’t underestimate the resilience of automotive companies,” added Luca de Meo, CEO of Renault.
Before deploying all-powerful systems, AI companies have been urged to replicate the safety calculations that underpinned Robert Oppenheimer’s first nuclear test.
Max Tegmark, a leading voice in AI safety, carried out calculations akin to those performed by the American physicist Arthur Compton before the Trinity test and found a 90% probability that a highly advanced AI would pose an existential threat.
The US government went ahead with Trinity in 1945 after being reassured there was a vanishingly small chance of the atomic bomb igniting the atmosphere and endangering humanity.
In a paper published by Tegmark and three of his students at the Massachusetts Institute of Technology (MIT), they recommend calculating the “Compton constant”, defined as the probability that an all-powerful AI escapes human control. Compton said in a 1959 interview with the American author Pearl Buck that he had approved the Trinity test after calculating the odds of a runaway reaction to be “slightly less” than one in three million.
Tegmark asserted that AI companies must diligently assess whether artificial superintelligence (ASI)—the theoretical system that surpasses human intelligence in all dimensions—can remain under human governance.
“Firms developing superintelligence ought to compute the Compton constant, which indicates the chances of losing control,” he stated. “Merely expressing a sense of confidence is not sufficient. They need to quantify the probability.”
Tegmark believes that achieving a consensus on the Compton constant, calculated by multiple firms, could create a “political will” to establish a global regulatory framework for AI safety.
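To make the idea concrete, one way of writing it down is sketched below. This is an illustrative formalisation in our own notation, not an equation taken from Tegmark’s paper, and it uses only the figures quoted above.

```latex
% Illustrative formalisation (our notation, not Tegmark's): the Compton constant C
% as a probability, with deployment gated on a pre-agreed threshold C_max.
\[
  C = P(\text{an artificial superintelligence escapes human control}),
  \qquad \text{deploy only if } C < C_{\max}.
\]
% Compton's own threshold for Trinity was roughly one in three million
% (C_max of about 3 x 10^{-7} to 3 x 10^{-6}); Tegmark's estimate for a highly
% advanced AI is C of about 0.9, far above any such bar.
```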
A professor of physics at MIT and an AI researcher, Tegmark is also a co-founder of the Future of Life Institute, a nonprofit advocating for the safe development of AI. The organization released an open letter in 2023 calling for a pause in the development of powerful AI systems, garnering over 33,000 signatures, including notable figures such as Elon Musk and Apple co-founder Steve Wozniak.
The letter emerged several months after the release of ChatGPT, which marked the dawn of a new era in AI development. It cautioned that AI laboratories were locked in an out-of-control race to deploy “ever more powerful digital minds.”
Tegmark discussed these issues with the Guardian alongside a group of AI experts, including tech industry leaders, representatives from state-supported safety organizations, and academics.
The Singapore Consensus on Global AI Safety Research Priorities report was produced by Tegmark, the distinguished computer scientist Yoshua Bengio, and contributors from leading AI firms including OpenAI and Google DeepMind. It sets out three broad research priority areas for AI safety: developing methods to measure the impact of current and future AI systems; specifying how an AI should behave and designing systems that meet those objectives; and managing and controlling a system’s behavior.
Referring to the report, Tegmark said discussions around safe AI development had regained momentum after remarks by the US vice-president, JD Vance, who declared that the future of AI was “not going to be won by hand-wringing about safety.”
Feedback presents the latest updates in science and technology from New Scientist. We encourage you to email Feedback@newscientist.com with intriguing items you think our readers would enjoy.
Is It Really a Flower?
In recent years, the landscape of AI companies has exploded, leading to a mix of excitement and surprise (depending on your early stock investments). However, this influx has also resulted in a surge of nearly identical logos among these companies.
A fascinating observation made by multiple publications is the prevalence of similar designs in these logos. Sociologist James I. Bowie writes for Fast Company about how the trend has shifted towards “stylized hexagons” with an implicit rotation. He notes that these designs evoke a “portal to a mysterious new world,” suggestive of “the expansion of Yetian Gaia,” and humorously, “toilet flushing.”
On a similar note, Radek Sienkiewicz, a developer who blogs at VelvetShark, observed that most of these logos share common features: circular shapes, a focal point at the center, radiating elements and soft organic curves. The combination, he notes with rather less delicacy, makes them look like a “butthole”.
Feedback examined the logos of OpenAI, Apple Intelligence and Anthropic’s Claude, and found them more anatomical than you might expect. Exceptions such as DeepSeek and Midjourney, whose logos depict a whale and a yacht, stand out, but they may yet succumb to the trend for circular designs.
What’s behind the proliferation of stylized hexagons? Perhaps they symbolize the recursive nature of thought, reflecting AI’s capacity to enhance our comprehension of the world.
However, OpenAI offers a different perspective. Their branding guidelines describe their company logo as a “flower,” designed deliberately to avoid any interpretations associated with openings. The logo symbolizes the dynamic interplay between humanity and technology, merging the fluidity of human-centric design through circles with the precision needed for technological structures, allowing for creative freedom.
Personally, Feedback proposes a working hypothesis for these logos that invokes the concept of “groupthink”.
The Challenging Second Album
One of Feedback’s favored genres of research is the study that puts a piece of received wisdom to the test, wisdom with an obvious counterargument that people may or may not have noticed. Hence our interest in a music psychology study of the “second album slump”: the tendency for musicians’ second albums to fall short of their debuts.
The research was published last November, was flagged by science writer Philip Ball on Bluesky in April, and here we are in May finally getting round to it. Feedback is nothing if not timely.
The study claims it is “the first comprehensive multistudy analysis aimed at discerning the existence of a second-album slump.” The authors analyzed over 2,000 reviews and feedback from more than 4,000 fans. The results indicated a decline in album quality ratings throughout artists’ careers, with significant dips noted in critic reviews during the second album phase.
This raises the question of causes. Is it cognitive bias at play, or simply regression to the mean? A debut good enough to attract outsized attention is, statistically, an outlier, so the follow-up will tend to land closer to the average through random chance alone, as the sketch below illustrates. The notion also has a long pedigree: as Elvis Costello put it as early as 1981, “I had 20 years to write my first album and six months to pen my second.”
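The following is a minimal sketch of that statistical argument. It is a toy simulation invented for illustration, not an analysis from the study, and the score scale and threshold are arbitrary assumptions.

```python
# Toy illustration of regression to the mean (not from the study itself):
# album "quality" here is pure random noise, yet artists noticed for a strong
# debut tend to score worse on their second record by chance alone.
import random

random.seed(42)
# (debut score, second-album score) for 100,000 hypothetical artists
artists = [(random.gauss(50, 10), random.gauss(50, 10)) for _ in range(100_000)]

# Only artists whose debut scores above 65 attract critical attention
noticed = [(first, second) for first, second in artists if first > 65]

avg_first = sum(first for first, _ in noticed) / len(noticed)
avg_second = sum(second for _, second in noticed) / len(noticed)
print(f"average debut score of noticed artists:         {avg_first:.1f}")   # well above 65
print(f"average second-album score of the same artists: {avg_second:.1f}")  # back near 50
```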
It is important to note that the second-album slump is only a statistical trend. Plenty of artists have released second albums that surpass their debuts: Black Sabbath, Led Zeppelin and Nirvana, alongside Beastie Boys, Pixies and Taylor Swift, as many of the replies to Ball’s post pointed out.
Moreover, Feedback wonders whether this second-album phenomenon is confined to rock and pop genres, or if it similarly affects less mainstream styles. Are composers of acid jazz and ambient music also facing second-album challenges? If so, how can we recognize this?
On a more serious note, reader Tim wanted to highlight a particular detail: the article mentions a “source in the ant trade” in connection with the documentation needed for the legal export of M. cephalotes from Kenya. The trade is described as a “small world”, and so this individual “asked not to be named”.
Have thoughts to share with Feedback?
You can email stories to feedback@newscientist.com. Please include your home address. This week and previous editions of Feedback can also be found on our website.
The communications watchdog Ofcom stands accused of siding with big tech over the safety of under-18s, after England’s children’s commissioner criticized new measures to address online harm. Rachel de Souza warned Ofcom last year that its proposals to protect children under online safety laws were inadequate. She expressed disappointment that the new code of practice published by the watchdog ignored her concerns, prioritizing the business interests of technology companies over child safety.
De Souza, who advocates for children’s rights, highlighted that over a million young people shared their concerns about the online world being a significant worry. She emphasized the need for stronger protection measures and criticized the lack of enhancements in the current code of practice.
Some of the measures proposed by Ofcom include implementing effective age checks for social media platforms, filtering harmful content through algorithms, swiftly removing dangerous material, and providing children with an easy way to report inappropriate content. Sites and apps covered by the code must adhere to these changes by July 25th or face fines for non-compliance.
Critics, including the Molly Rose Foundation and the online safety campaigner Beeban Kidron, argue that the measures are too cautious and lack specific harm reduction targets. However, Ofcom defended its stance, stating that the rules aim to create a safer online environment for children in the UK.
The Duke and Duchess of Sussex have also advocated for stricter online protections for children, calling for measures to reduce harmful content on social media platforms. Technology Secretary Peter Kyle is considering implementing a social media curfew for children to address the negative impacts of excessive screen time.
Overall, the new code of practice aims to protect children from harmful online content, with stringent measures in place for platforms to ensure a safer online experience. Failure to comply with these regulations could result in significant fines or even legal action against high-tech companies and their executives.
As of July, social media and other online platforms must block harmful content for children or face severe fines. The Online Safety Act requires tech companies to implement these measures by July 25th or, in extreme cases, risk being shut down.
The communications watchdog has issued more than 40 measures covering various websites and apps used by children, from social media to games. Services deemed “high-risk” must implement effective age checks and algorithms to protect users under 18 from harmful content. Platforms also need to promptly remove dangerous content and provide children with an easy way to report inappropriate material.
Ofcom CEO Melanie Dawes described these changes as a “reset” for children online, warning that businesses failing to comply risk consequences. The new Ofcom code aims to create a safer online environment, with stricter controls on harmful content and age verification measures.
Additionally, there is discussion about implementing a social media curfew for children, following concerns about the impact of online platforms on young users. Efforts are being made to safeguard children from exposure to harmful content, including violence, hate speech, and online bullying.
Online safety advocate Ian Russell, who tragically lost his daughter to online harm, believes that the new code places too much emphasis on tech companies’ interests rather than safeguarding children. His charity, the Molly Rose Foundation, argues that more needs to be done to protect young people from harmful online content and challenges.
British companies are being advised to conduct job interviews via video or in-person to avoid the risk of inadvertently hiring North Korean employees.
The caution comes after analysts noted that the UK has become a prime target for fraudulent IT workers deployed by North Korea. These individuals are typically hired to work remotely, evade detection, and funnel earnings back to Kim Jong-un’s regime.
In a recent report, Google described an incident from last year involving a single North Korean operative using at least 12 aliases across Europe and the US, seeking positions in the defense and government sectors. A newer tactic involves fake IT professionals threatening to leak sensitive company data after being dismissed.
John Hultquist, chief analyst at Google’s Threat Intelligence Group, highlighted North Korea’s shift towards Europe, particularly targeting the UK.
He explained, “North Korea is feeling the heat in the US and has shifted its focus to the UK to expand its IT worker tactics. The UK offers a broad spectrum of businesses in Europe.”
Fraudulent IT worker schemes typically rely on a local physical presence provided by “facilitators”, agents working on North Korea’s behalf in the target countries.
These facilitators play crucial roles, such as providing fake passports and maintaining local addresses. The laptops used by these workers often connect to servers in Pyongyang rather than their claimed location, and they favor jobs that issue dedicated company devices, which are easier to operate remotely.
“Ultimately, having a physical presence in the UK is key to their expansion strategy across various sectors in the country,” mentioned Hultquist.
Hultquist suggested that conducting job interviews in-person or via video could disrupt North Korea’s tactics.
Sarah Kern, a North Korean specialist at cybersecurity firm SecureWorks, emphasized that the threat is more widespread than perceived by companies.
She recommended thorough candidate screening and HR education on deception tactics. Companies should prioritize in-person or video interviews to verify the legitimacy of potential employees.
“In the US, conducting in-person or video interviews to verify candidates’ background details is effective in ensuring you’re engaging with truthful candidates,” she added.
Kern noted that IT workers may propose unconventional methods like frequent address changes or the use of money exchange services over traditional bank accounts.
Bogus IT experts are infiltrating Europe through online platforms like Upwork, Freelancer, and Telegram. Upwork stated that attempts to use false identities go against their terms of service, and they take strict action to remove such individuals.
As pointed out by Kern, North Korean IT workers often try to avoid video interviews, likely due to their working conditions in cramped spaces resembling call centers.
Technology leaders in the artificial intelligence sector have been pushing for regulations for over two years. They have expressed concerns about the potential risks of generative AI and its impact on national security, elections, and jobs.
OpenAI’s CEO, Sam Altman, testified before Congress in May 2023 that if AI goes wrong, it can go “quite wrong.”
However, following Trump’s election, these technology leaders have shifted their stance and are now focused on advancing their products without government interference.
Recently, companies like Meta, Google, and OpenAI have urged the Trump administration to block state AI laws and allow the use of copyrighted material to train AI models. They have also sought incentives such as tax cuts and grants to support their AI development.
This change in approach was influenced by Trump declaring AI as a strategic asset for the country.
Laura Caroli, a senior fellow at the Wadhwani AI Center, noted that concerns about safety and responsible AI have diminished, encouraged by the Trump administration’s stance.
AI policy experts are concerned about the potential negative consequences of unchecked AI growth, including the spread of disinformation and discrimination in various sectors.
Tech leaders took a different stance in September 2023, supporting AI regulations proposed by Senator Chuck Schumer. Afterward, the Biden administration collaborated with major AI companies to enhance safety standards and security.
(The New York Times sued OpenAI and Microsoft over copyright infringement claims related to AI content. OpenAI and Microsoft denied the allegations.)
Following Trump’s election victory, tech companies intensified lobbying efforts. Google, Meta, and Microsoft donated to Trump’s inauguration, and leaders like Mark Zuckerberg and Elon Musk engaged with the president.
Trump embraced AI advancements, welcoming investments from companies like OpenAI, Oracle, and SoftBank. The administration emphasized the importance of AI leadership for the country.
Vice President JD Vance advocated for optimistic AI policies at various summits, highlighting the need for US leadership in AI.
Tech companies are responding to the President’s executive orders on AI, submitting comments and proposals for future AI policies within 180 days.
OpenAI and other companies are advocating for the use of copyrighted materials in AI training, arguing for legal access to such content.
Companies like Meta, Google, and Microsoft support the legal use of copyrighted data for AI development. Some are pushing for open-source AI to accelerate technological progress.
Venture capital firm Andreessen Horowitz is advocating for open-source models in AI development.
Andreessen Horowitz and other tech firms are engaged in debates over AI regulations, emphasizing the need for safety and consumer protection measures.
Civil rights groups are calling for audits to prevent discrimination in AI applications, while artists and publishers demand transparency in the use of copyrighted materials.
The Food and Drug Administration said Thursday it is delaying by 30 months a requirement that food companies and grocery stores quickly trace contaminated food through their supply chains and pull it off the shelves.
The rule, intended to “limit food-borne illness and death,” required businesses to keep better records identifying where food was grown, packed, processed, or manufactured. It had been due to come into effect in January 2026 under the landmark food safety law passed in 2011, and was advanced during President Trump’s first term.
Health Secretary Robert F. Kennedy Jr. has shown interest in the safety of chemicals in food, moving to ban food dyes and making public pledges to track toxins in food. However, other actions in the Trump administration’s first months have undermined efforts to tackle the bacteria and other contaminants that make people sick. The administration has cut jobs on major food safety committees, frozen scientists’ credit card spending, and disrupted routine testing for food pathogens.
In recent years, there have been several high-profile outbreaks, including last year’s deadly listeria outbreak linked to Boar’s Head deli meat and E. coli linked to the onions on McDonald’s Quarter Pounders.
The postponement alarmed several advocacy groups on Thursday.
“The decision is extremely disappointing and puts consumers at risk of getting sick from unsafe food, as small segments of the industry seek delays despite having had 15 years to prepare,” said Brian Ronholm, food policy director at the advocacy group Consumer Reports.
Many retailers have already taken steps to comply with the rules. Still, food industry trade groups lobbied in December to delay the regulations’ implementation, according to the Los Angeles Times.
In a letter to President Trump in December, food manufacturers and other corporate trade groups cited the traceability requirement among many regulations they said were “strangling our economy,” and asked for it to be shelved or delayed.
“This is a huge step towards food safety,” Sarah Sorscher, director of regulatory affairs at the advocacy group the Center for Science in the Public Interest, said of the rule. “The surprising thing is that this was a bipartisan rule.”
Sorscher said there is widespread support for the measure to protect consumers and businesses.
Tesla, led by Elon Musk, is cautioning about the potential repercussions of Donald Trump’s trade war. They warned that retaliatory tariffs could harm not only electric car makers but also other American automakers.
In a letter to US trade representative Jamieson Greer, Tesla emphasized the importance of considering the broader impacts of trade actions on American businesses. They stressed the need for fair trade practices that do not inadvertently harm US companies.
Tesla urged the US Trade Representative (USTR) office to carefully evaluate the downstream effects of proposed actions to address unfair trade practices. They highlighted the disproportionate impact that US exporters often face when other countries respond to trade actions taken by the US.
The company, which has been a supporter of Trump, expressed concerns about potential retaliatory tariffs on electric vehicles and parts exported from the US to targeted countries. It cited past instances where trade disputes led to increased tariffs on vehicles and parts manufactured globally.
As Tesla continues to navigate the challenges of trade policies, they emphasized the importance of considering implementation timelines and taking a step-by-step approach to allow US companies to prepare and adapt accordingly.
Meanwhile, German automaker BMW reported a decline in net profit due to trade tariffs. They highlighted the impact of US trade actions on their business performance and reiterated the challenges posed by a competitive global environment.
BMW’s forecast takes into account various tariffs, including those on steel and aluminum. The company faces challenges in China, where local EV manufacturers are gaining market share, leading to a decline in BMW and Mini sales.
Despite these obstacles, BMW remains committed to navigating the complexities of trade and geopolitical developments to maintain business resilience and performance.
Niantic Labs has announced the sale of its video games division to Saudi-owned Scopely for $3.5 billion. The move comes as the US augmented reality company pivots towards geospatial technology, having been unable to recreate the success of its 2016 sensation, Pokémon Go.
The deal, revealed on Wednesday, also propels Saudi Arabia closer to its goal of becoming the ultimate global gaming hub. The Kingdom’s Sovereign Wealth Fund acquired Scopely for $4.9 billion in 2023 as part of a broader strategy to diversify beyond fossil fuels.
As per the agreement, Niantic will distribute an additional $350 million to its shareholders. It will also separate its geospatial artificial intelligence (AI) business into a new entity named Niantic Spatial, led by John Hanke, Niantic’s founder and CEO.
Niantic Spatial will receive $250 million in capital from Niantic’s balance sheet and an additional $50 million from Scopely. All former investors of Niantic will retain their shares in Niantic Spatial.
The move marks the end of a challenging period for Niantic, which struggled after the success of Pokémon Go and laid off employees in 2022 and 2023.
Saudi Arabia, already a gaming and esports center, is steadfast in its plan to invest nearly $38 billion in gaming-related ventures through its Savvy Games Group.
Savvy Games is a prominent investor in global video game companies and holds a 7.54% stake in Nintendo, which it has kept despite a slight dip in profits last year.
What is the difference between artificial intelligence and quantum computing? One is a sci-fi-sounding technology that has long promised to revolutionize our world, provided researchers can iron out a few technical wrinkles, such as its tendency to make errors. Then again, so is the other.
Yet AI is breathlessly touted as inevitable, while the average person has no experience of quantum computing at all. Does this matter?
Practitioners in both fields are certainly guilty of hyping their products, but part of the problem for quantum advocates is that the current generation of quantum computers is essentially useless. As our special report on the state of the industry shows (see “Quantum Computers Finally Arrived, Will They Be Useful?”), a race is now under way to build machines that can perform genuinely useful calculations, ones that are not possible on a regular computer.
There is no clear use case to prevent high-tech giants from forcing AI into the software they use every day, but the subtle nature of this hardware makes quantum computing the masses more difficult. It is much more difficult to bring in the same way. You probably won’t own a personal quantum computer. Instead, the industry is targeting businesses and governments.
Perhaps that is why quantum computer builders seem to keep one foot firmly in science, drumming up business while still publishing peer-reviewed research, whereas the major AI companies have all but given up on publishing. Why bother, when you can simply charge a monthly fee for access to your technology, whether it actually works or not?
The quantum approach is the right one. When you are promising a technology that will transform research, industry and society, explaining how it works as openly as possible is the only way to persuade people that the hype is justified.
It may not be flashy, but in the long run it is substance, not style, that counts. So by all means aim to revolutionize the world, but please: show your work.
High-tech companies are urging the UK government to support the growth of AI data centers in remote areas of the UK by offering the lowest electricity prices in Europe.
A report commissioned by the tech companies Amazon and OpenAI calls on the government to reform the UK electricity market by implementing zonal pricing, where prices vary by region to incentivize investment in areas with lower electricity costs.
The zonal pricing model, according to the report by the Social Market Foundation (SMF) thinktank, would make Scotland a hotspot for AI data centers thanks to its abundant wind farms and low population density.
The prime minister, Keir Starmer, has emphasized the importance of artificial intelligence in positioning the UK as a global technology leader.
However, concerns have been raised about hosting data centers in the UK due to high industrial electricity prices and ambitious targets to phase out fossil fuels from the electricity system.
The SMF report suggests that zonal pricing could significantly reduce electricity costs for data centers, making Scotland’s electricity prices the lowest in Europe.
Zonal pricing has been recommended by cross-party thinktanks as a way to expedite the deployment of AI data centers by connecting more low-carbon electricity to the grid and addressing planning delays.
The report also backs the government’s plan to build small modular reactors outside traditional nuclear areas to facilitate the development of Data Centre Hubs in England and Wales.
According to Sam Robinson of SMF, urgent action is needed to address rising energy costs and planning delays to maintain the UK’s position as a global innovation leader.
Zonal pricing has also garnered support from the SMF’s backers and other tech companies in government consultations on the future of the electricity market.
The proposed zoning system aims to attract high-energy users to regions with lower electricity prices, creating new job opportunities outside of southeastern England while balancing demand on the local grid.
However, concerns have been raised that changes in energy pricing may impact profitability of remote clean energy projects, potentially hindering investment in green energy.
The government is expected to make a decision on the future of the electricity market in the coming months.
Gambling companies are secretly tracking visitors to their websites and sending the data to Facebook’s parent company without obtaining consent, in clear violation of data protection laws.
Meta, the owner of Facebook, uses this data to profile individuals as gamblers and bombard them with ads from casinos and betting sites, the Observer has reported. Hidden tracking tools embedded in many UK gambling websites extract visitor data and share it with social media companies.
Under the law, such data should only be used and shared for marketing purposes with the explicit permission of users of the website. However, an Observer investigation of 150 gambling sites found numerous violations.
Iain Duncan Smith, chairman of the all-party parliamentary group on gambling reform, called for immediate intervention, criticizing the illegal use of tools such as the Meta Pixel without consent. Concerns were raised about the lack of regulation and accountability in the gambling industry.
Data sharing and profiling practices by gambling operators are raising concerns about targeted advertising and potential harm to individuals. The Information Commissioner’s Office (ICO) has previously taken action against companies such as Sky Betting & Gaming for illegally processing personal data.
The gambling industry is under scrutiny for its marketing strategies, with calls for stricter regulations to protect consumers. Meta and other social media platforms are being called out for their role in facilitating these illegal data practices.
Concerns about the misuse of the Meta Pixel tracking tool extend beyond the gambling industry to other sectors, prompting calls for more transparency and accountability in data collection and usage.
OpenAI has warned that emerging Chinese companies, including the developer of DeepSeek, are building competing products using the ChatGPT maker’s AI models.
OpenAI and Microsoft, which has invested $13 billion in the San Francisco-based AI developer, are now investigating whether their proprietary technology was illegitimately obtained through a process known as distillation.
The latest chatbot from DeepSeek caused quite a stir, topping Apple’s free app download rankings and wiping roughly $1 trillion off the market value of AI-related US tech stocks. The impact stemmed from claims that the AI model behind DeepSeek was trained at a fraction of the cost, and with a fraction of the hardware, used by competitors such as OpenAI and Google.
OpenAI’s CEO, Sam Altman, initially praised DeepSeek, saying it was “legit invigorating” to have a new competitor.
However, OpenAI later said it had seen evidence of “distillation” by Chinese companies: a technique in which the outputs of a large, advanced model are used to train a smaller model to achieve similar results on specific tasks at far lower cost. OpenAI’s statement did not explicitly name DeepSeek.
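For readers unfamiliar with the term, distillation is a standard machine-learning technique. The sketch below is an illustrative toy example in PyTorch, not anything attributed to OpenAI or DeepSeek; the tiny model sizes and random inputs are stand-ins for a real teacher, student and dataset.

```python
# Illustrative knowledge distillation: a small "student" network is trained to
# match the soft output distribution of a larger, frozen "teacher" network.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10)).eval()
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's distribution so the student sees more signal

for step in range(100):
    x = torch.randn(64, 32)  # stand-in for real training inputs
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x) / temperature, dim=-1)
    student_log_probs = F.log_softmax(student(x) / temperature, dim=-1)
    # The distillation loss is the KL divergence between the two distributions
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * temperature**2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In the scenario OpenAI describes, the “teacher” would be a frontier model queried through its public interface rather than a network the distiller controls directly, which is part of what makes the practice hard to police.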
An OpenAI spokesperson said: “We are aware that companies based in China, and others, are constantly trying to distill the models of leading US AI companies. As the leading AI developer, we take measures to protect our intellectual property, including a careful process for deciding which frontier capabilities to include in released models.”
OpenAI has itself faced allegations of training its models on data used without authorization from publishers and the creative industries, even as it works to prevent the distillation of its own models.
The OpenAI spokesperson emphasized the importance of working with the US government to protect the company’s most advanced models from efforts by adversaries and competitors to replicate US technology.
Donald Trump’s recent statement highlighted the impact of DeepSeek within Silicon Valley. Photo: Lionel Bonaventure/AFP/Getty Images
A growing number of cats have died or become ill after consuming raw pet food or raw milk contaminated with the H5N1 virus, prompting health authorities to take precautionary measures against bird flu in pet food. Recent guidance from the Food and Drug Administration advises pet food makers to follow food safety plans, for example by sourcing ingredients from healthy flocks and applying heat treatments to inactivate the virus.
Since the H5N1 virus began spreading widely in 2022, there have been outbreaks in both wild and farmed birds. Cats appear to be particularly susceptible to the virus, and many household and wild cats have become infected since its emergence. Some farm cats have fallen ill after drinking raw milk, while others have died after eating contaminated raw pet food.
Despite the FDA guidance, some experts, such as Dr. Jane Sykes of the University of California, Davis School of Veterinary Medicine, have raised concerns about the lack of detailed instructions on how to guarantee the absence of H5N1 in food. The FDA has advised pet owners to cook raw pet food to eliminate risks and to follow USDA guidelines for safe food handling.
In response to the situation, some raw pet food companies have implemented safety measures such as sourcing quality ingredients and using processes like high-pressure pasteurization. However, experts emphasize that cooking is the only certain way to eliminate the risk of H5N1 in pet food.
While high-pressure pasteurization is advertised as a method of killing pathogens, experts caution that cooking to a safe internal temperature is the most reliable way to ensure food safety. Consumers are advised to cook raw pet food thoroughly before feeding it to their pets to reduce the risk of transmitting bird flu.
For those who prefer raw pet food brands, experts suggest cooking the food before feeding it to ensure the safety of pets.
A feud broke out among the technology giants the day after Donald Trump announced significant investments in AI.
President Trump unveiled Stargate, a $500 billion initiative funded by OpenAI, Oracle, and SoftBank. The announcement featured leaders of the three companies: Sam Altman, Larry Ellison, and Masayoshi Son, with Son as the project’s chairman. A representative of Abu Dhabi’s state-run AI fund MGX, another major investor, was notably absent.
The partnership aims to establish data centers and the computing infrastructure crucial for AI development. While the initial investment is substantial, estimates suggest that developing AI will require at least as much funding again.
Notably missing from the event was Elon Musk, CEO of Tesla, SpaceX, and xAI, who is also the wealthiest person globally. Despite Musk’s close ties to Trump and rumored office in the White House, he dismissed Stargate as a financial sham the following night.
When OpenAI announced on X (Musk’s social network) that they would immediately deploy $100 billion, Musk countered, stating that they lacked the funds and criticizing SoftBank’s funding of less than $10 billion. Musk, with a net worth of about $430 billion, tweets prolifically on a variety of subjects.
President Trump has yet to respond to Musk’s comments, posting instead about his wedding anniversary with Melania on his social network, Truth Social.
Musk continued his criticism on X, sharing an image purporting to show a research tool used to calculate Stargate’s $500 billion cost, and spent much of Wednesday afternoon attacking the project.
Sam Altman initially praised Musk’s work but later questioned his motives for criticizing SoftBank. Satya Nadella, CEO of Microsoft, responded diplomatically when asked about the situation, emphasizing Microsoft’s plans to invest in Azure.
The tension between Musk and Altman dates back to their history at OpenAI, where Musk eventually parted ways with Altman. The heads of Oracle and SoftBank involved in Stargate have not yet spoken on the matter.
On August 4, 2024, the riots and unrest that followed the murder of three children in Southport, Merseyside, escalated further. That day, violence struck Rotherham, Middlesbrough and Bolton, where people tried to set fire to hotels housing asylum seekers amid far-right misinformation and rumors. Elon Musk took a renewed interest in British affairs, posting a photo of the violence in Liverpool on X with the characteristically cautious caption: “Civil war is inevitable.” And 24 hours later, the wave of unrest reached the city of Plymouth.
Trouble gripped the city center throughout the evening of August 5th. To quote the Guardian, “150 police officers in riot gear and with dogs tried to separate the far-right mob and anti-racism demonstrators.” Other people defended a local mosque. Bricks, bottles and fireworks were thrown. Six people were arrested, several police officers were injured, and two civilians were taken to hospital. A local official said the events were “unprecedented.”
Where could the city's 260,000 residents turn for reliable information? As ever, people's social media feeds were filled with falsehoods and provocations, making more traditional media the obvious choice. But if you had been listening to your local BBC radio station while the riots were going on, you might not have known anything about them. BBC Radio Devon ran reports of the violence on the 6 o'clock news, but Plymouth was not mentioned at all in the 7pm and 9pm bulletins. Subsequent bulletins mentioned what was happening but failed to treat it as a major story. The violence was horrifying and very important, but the attention of the city's supposedly most reliable news sources was clearly elsewhere.
We now know all this thanks to the BBC’s response to a complaint by David Lloyd, a radio veteran who has worked at both the corporation and commercial stations. The official document written by the corporation’s complaints manager makes for revealing reading. It includes an admission that “there was little evidence that the BBC was present at the scene” and attributes some of the failings to “some logistical issues” on the day, including “securing journalists with the necessary riot training” and “technical problems with broadcasting kits.”
There were, the report says, “elements of system failure.” Even online, where the modern BBC says it needs to focus most of its efforts, there was no dedicated live coverage of the Plymouth riots, and, as the report acknowledges, not enough updates were posted to the major social media platforms. On the latter point, it admits that more could have been done had it not been for staff holidays.
A BBC spokesperson said the corporation accepted the findings of its complaints department and had “already made adjustments to its working practices” before the Plymouth complaint was investigated. But the mix of excuses and admitted shortcomings remains mind-boggling. And the larger story, of the corporation’s degradation of local broadcasting and how it fits with similar changes in commercial radio and the dire state of Britain’s local press, is left untouched. As Mark Zuckerberg abandons fact-checking at Meta and Musk is endlessly radicalized by his own platform, the result is a growing vacuum in local news and a growing susceptibility to online lies, which may soon outstrip people’s ability to understand what is going on in their own immediate surroundings.
The story of Plymouth is a case study in the impact of changes that still seem to be chronically overlooked. These include the cuts forced on BBC local radio in 2023, which mean many local stations now broadcast locally specific programs only until the early afternoon, sharing regional or national output until breakfast time the next day. This drastic cut has diminished an already fragile part of the national media landscape, shrunk audiences further and hastened the decline of local radio, and it leaves an obvious question hanging over our public broadcaster: if the BBC cannot guarantee this kind of coverage, who will do grassroots news?
It is certainly not commercial radio. Eight years ago, the broadcasting regulator Ofcom relaxed the rules, allowing commercial station owners to reduce the minimum hours of daytime local programming from seven hours a day to three. In 2019, the radio giant Global consolidated more than 40 local breakfast shows, with their local news, into three nationally broadcast programs, and subjected its newsrooms to rounds of efficiency savings. Since then, a single reporting team has covered an area stretching from Cornwall to Gloucester.
And then there is the dire fate of local newspapers, which might have successfully made the transition to the online world but have instead been repeatedly mismanaged, cut and hollowed out, not least by the online giants. Between 2009 and 2019, more than 320 local titles closed in the UK. Just over a year ago, Reach, the owner of the Mirror, the Express and a number of local titles grouped online under the “Live” banner, announced its third round of job cuts in a year, adding to the total of roles lost. The company’s local and regional news websites drew a healthy audience of about 35 million people per month, but the siphoning of digital advertising revenue by the online platforms has put its long-term prospects at risk. As one anonymous Reach official put it, the consequences are clear: “Manchester, Birmingham, Bristol, Newcastle, Liverpool, Cardiff and many other major cities will soon no longer have a local newspaper,” and it is increasingly likely that they will no longer have a well-known news website holding local authorities and others to account.
In some areas, nimble local news outlets are beginning to fill the gap. In Hull, a start-up called the Hull Story was founded in 2020 as an online operation by two former Hull Daily Mail journalists and expanded into print last year; its front page reflecting on the city’s experience of the 2024 riots (“Shame, resilience, justice”) won an award this year. The Bristol Cable has long pioneered a new kind of investigative and political reporting, driven by the fact that the title is owned by its readers. Manchester has the Substack newsletter The Mill, which is now setting up offshoots in Liverpool, Birmingham, Sheffield and London. Former Guardian staffer Jim Waterson has also started a title covering central London to fill the void left by the retrenchment of the Evening Standard. All of these projects highlight one stark point: a place not only needs its own journalism, it can also provide an audience to support it.
The problem is that such outlets are still outnumbered by the parts of the country, let alone the world, where the worst kind of news vacuum is unfortunately already a reality. Something happens, but what do people read or hear about it? Either nothing at all, or a version plucked from the corners of the internet, pushed by some foreign billionaire’s platform or amplified by an algorithm, true or false, and so degraded that the question of accuracy barely arises while the deceptive narrative creates its own shockwaves. If that is the future we all need to avoid, then local reporting should be our first antidote.
There’s a lot to admire about America. Some 200 years ago, the great French social observer Alexis de Tocqueville extolled the legacy of its Puritan founders: their commitment to civic virtue, individual self-improvement, and hard work.
Those characteristics are still evident today, but darker features have also appeared alongside them. The United States, which was a 20th century hegemon and still firmly adhered to democracy, has changed. It has transformed into an imperial power indifferent to democracy but willing to demand economic tribute from its vassals.
No country has been more of a vassal state to the United States than Britain. This evolution is laid out in an eye-opening book, Vassal State: How America Runs Britain. Donald Trump’s impending inauguration, accompanied by threats to impose tariffs and weaken commitments to NATO unless client nations fall further into line, has shaken Western capitals. But as the author, Angus Hanton, carefully documents, this is nothing new: the United States has pursued an America First policy for decades, and Trump is only amplifying a long-standing phenomenon. Changing the situation will require more than appointing Lord Mandelson as British ambassador to the United States. It means recognizing what is going on and then fighting fire with fire. It is time to put Britain first.
Mr Hanton writes that the sales of the 1,256 US multinational companies operating in the UK amount to 25% of the UK’s GDP. They span everyday areas such as breakfast cereals, soft drinks, car manufacturing, taxis, food delivery, online shopping, travel, coffee, social media and entertainment (Kellogg, Coca-Cola, Ford, Uber, Deliveroo, Amazon, Expedia, Starbucks, X, Netflix), as well as knowledge-intensive sectors ranging from data (Apple, Meta/Facebook, Google, Microsoft) to finance (Goldman Sachs, Morgan Stanley, BlackRock). Every time he unpacks the statistics and the scope of this exploitative control, it is dizzying.
Because this is not benign. The UK is so blind to the downsides of this loss of control, from tax avoidance to the stripping out of strategic capabilities, that, as Mr Hanton writes, politicians make little attempt to restrain the process, cheerfully proclaiming the country “open for business.” Hence, over the past two decades, a tsunami of takeovers of great British technology companies by US companies and private equity firms. The groundbreaking artificial intelligence company DeepMind is now owned by Google; the cybersecurity pioneer Darktrace was recently acquired by the US private equity firm Thoma Bravo; and the biotechnology company Abcam was bought by Washington DC-based Danaher. Some $12.7 billion was spent on Cambridge university companies in 2024 alone. At Oxford, there are concerns that the newly established, lavishly funded Ellison Institute, backed by Oracle founder Larry Ellison, is poised to hoover up intellectual property, spinouts and startups in a similar way.
Some decision-making and research remains in the UK, but Mr Hanton observes that, post-acquisition, headquarters have increasingly moved to the US. We bade farewell to a significant presence in space when Inmarsat was acquired by California’s Viasat, and the UK has been downgraded from a potential tier-one space power to tier three. Aerospace engineer Meggitt has passed to Cleveland-based Parker Hannifin, while Cobham and Ultra, once among the “crown jewels” of British defense and aerospace, were picked off by US private equity from 2019 onwards; all are now entirely US-owned. Worldpay, spun out of NatWest, is now headquartered in Cincinnati. It is not only important intellectual property that has been lost, Hanton reports: the migration of headquarters makes cities across the United States more prosperous, a kind of geographic levelling-up the British can only dream of.
The technology entrepreneur and financier Hermann Hauser is a co-founder of Arm, the chip designer that began life in the UK, is now listed in New York and would rank as our third-largest listed company had it listed here. He writes that there are three litmus tests for technology acquisitions. Do we still control the technology in Britain? If not, can we obtain it from several other countries? If not, do UK users have guaranteed, unrestricted and secure access? If the answer to all three is no, there is a risk of Britain becoming a client state of these tech giants, and of a new kind of colonialism taking hold while we watch.
Next is the US attitude to tax. The tax departments of US multinationals are run as profit centers: they sell into the UK from low-tax Ireland, channel profits through tax havens that are often UK-controlled, and use transfer pricing and every other available means to artificially depress the profits they book in the UK, so that the tax they pay here represents only about 5% of profits. If the effective tax rate on the profits we do know about were just 15%, Britain would be at least $10bn (£8bn) a year better off, and the true figure is almost certainly higher still. And when the UK dares to propose even modest measures, such as the 2% digital services tax announced in the 2018 budget, intense lobbying from the US government pushes for them to be withdrawn.
What is so dispiriting about this whole story is that, with more courage and determination to put Britain first, we could be Europe’s tech powerhouse, with a dynamic economy and a growing tax base. We have many of the necessary assets, from great universities to huge pools of risk capital, which have instead ended up fuelling America’s growth. Of course, the United States is a powerful magnet because of its size and dynamism, but we did not have to be quite so obliging.
To fight back, Mr Hanton argues, the UK first needs to stop the rot by distinguishing beneficial US direct investment (starting businesses in the UK) from destructive US direct investment (the wholesale acquisition of high-tech companies and the export of their intellectual property to the United States), and curbing the latter. Second, the UK, like the US, must get serious about R&D and innovation and start building its own group of high-tech growth companies. Like the Americans, we must invest in our universities, not neglect them. And we need to recognize that an effective counterattack means making common cause with Europe.
Amen, but the omens are not good. Nigel Farage portrays himself as some sort of national saviour rather than being called out as a de facto promoter of US vassalage, aided primarily by a fifth-column media intent on deepening our subservience. The Labour government appointed Clare Barclay, CEO of Microsoft UK, as chair of its Industrial Strategy Advisory Council, while BlackRock’s board was welcomed into Downing Street and given five-star treatment. There is little momentum for strengthening cooperation with the EU.
To be fair, the government’s planned industrial strategy does show the potential for a better direction. And the one good thing about Trump’s impending inauguration is that he embodies the reality of our vassal status. How about Make Britain Great Again instead? Wealthy progressive donors (Dale Vince? Gary Lubner? Clive Cowdery?) should make sure a copy of Vassal State is sent to every MP and peer. Our true American friends would applaud us for trying to rebalance the relationship. After all, that is what they would do if the boot were on the other foot.
Authors, publishers, musicians, photographers, filmmakers, and newspaper publishers have all opposed the Labour government’s proposal to create a copyright exemption allowing artificial intelligence companies to train their algorithms on creators’ works.
Representing thousands of creators, the organizations released a joint statement rejecting the idea of allowing companies like OpenAI, Google, and Meta to use published works for AI training unless the owners actively opt out. The statement came in response to the ministers’ proposal announced on Tuesday.
The Creative Rights in AI Coalition (Crac) emphasized the importance of respecting and enforcing existing copyright laws rather than circumventing them.
Included in the coalition are prominent entities like the British Recording Industry, the Independent Musicians Association, the Film Institute, the Writers’ Association, as well as Mumsnet, the Guardian, the Financial Times, the Telegraph, Getty Images, the Daily Mail Group, and Newsquest.
The intervention from these industry representatives follows statements in Parliament by the technology and culture minister Chris Bryant, who promoted the proposed system as a way to enhance access to content for AI developers while ensuring rights holders have control over its use. This stance was reinforced after Bryant spoke about the importance of controlling how AI models are trained on UK content accessed from overseas.
Nevertheless, the industry lobbying group techUK is advocating for a more permissive market in which companies can use, and pay for, copyrighted data. Caroline Dinenage, the Conservative MP who chairs the culture, media and sport select committee, criticized the government’s alignment with AI companies.
Mr. Bryant defended the proposed system to MPs by arguing for a flexible regime, pointing out that overseas developers can already train AI models on UK content beyond the reach of UK law, and warning that too strict a regime could hinder the growth of AI development in the UK.
The creative industries say it should instead be up to generative AI developers to seek permission, agree licences and compensate rights holders if they wish to use creative works in various media formats to train their algorithms.
A collective statement from the creative industry emphasized the importance of upholding current copyright laws and ensuring fair compensation for creators when licensing their work.
Renowned figures like Paul McCartney, Kate Bush, Julianne Moore, Stephen Fry, and Hugh Bonneville have joined a petition calling for stricter regulations on AI companies that engage in copyright infringement.
Novelist Kate Mosse is also supporting a campaign to amend the Data Bill to enforce existing copyright laws in the UK to protect creators’ rights and fair compensation.
During a recent House of Lords debate, supporters of amendments to enforce copyright laws likened the government’s proposal to asking shopkeepers to opt-out of shoplifting rather than actively preventing it.
The government’s plan for a copyright exemption has faced criticism from the Liberal Democrats and other opponents who believe it is influenced by technology lobbyists and misinterpretations of current copyright laws.
Science Minister Patrick Vallance defended the government’s position by emphasizing the need to support rights holders, ensure fair compensation, and facilitate the development of AI models while maintaining appropriate access.
According to proposals from the UK government, tech companies would have the freedom to utilize copyrighted material for training artificial intelligence models, unless creative professionals or companies opt out of the process.
The proposed changes aim to resolve conflicts between AI companies and creatives. Sir Paul McCartney has expressed concerns that without new laws, technology “could just take over.”
A government consultation has proposed an exception to UK copyright law, which currently prohibits the use of someone else’s work without permission, that would allow companies such as Google and ChatGPT developer OpenAI to use copyrighted content to train their models. Under the proposal, writers, artists and composers could “reserve their rights,” meaning they could choose not to have their work used in AI training or could request a license fee for it.
Chris Bryant MP, the Data Protection Minister, described the proposal as a “win-win” for both parties who have been in conflict over the new copyright regulations. He emphasized the benefit of this proposal in providing creators and rights holders with greater control in these complex circumstances, potentially leading to increased licensing opportunities and a new income source for creators.
British composer Ed Newton Rex, a prominent voice in advocating for fair contracts for creative professionals, criticized the opt-out system as “completely unfair” to creators. Newton Rex, along with more than 37,000 other creative professionals, raised concerns about the unauthorized use of creative work in training AI models, labeling it as a substantial threat to creators’ livelihoods.
Furthermore, the consultation considered requiring AI developers to disclose the content used for training their models, providing rights holders with more insight into how and when their content is utilized. The government emphasized that new measures must be available and effective before they are implemented.
The government is also seeking feedback on whether the new system will apply to models already on the market, such as those behind ChatGPT and Google’s Gemini.
Additionally, the consultation will address the potential need for US-style “personality rights” to protect celebrities from having their voices and likenesses replicated by AI without their consent. The Hollywood actress Scarlett Johansson had a dispute with OpenAI last year after the company unveiled a voice assistant that closely resembled her distinctive voice; OpenAI halted the feature after being told how similar it sounded to Johansson.
Marietje Schaake is a former member of the European Parliament from the Netherlands. She is now international policy director at Stanford University’s Cyber Policy Center and an international policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence. Her new book is The Tech Coup: How to Save Democracy from Silicon Valley.
What are the key differences between big technology companies and traditional big companies in terms of power and political influence?
The difference is the role these technology companies play in so many aspects of people’s lives, and in nation states, economies and geopolitics. Earlier monopolies accumulated a lot of capital and held important positions, but they were usually confined to one sector, such as oil or car production. These technology companies are like octopuses, with tentacles reaching in every direction. They hold so much data – location data, search, communications, critical infrastructure – and they can now combine all of that power to build AI. It is very different from anything we have seen before.
Peter Kyle, the UK technology secretary, recently suggested that the government should be “humble” towards major technology companies and treat them like nation states. What do you think about that?
I think this is a baffling misunderstanding of the role of democratically elected and accountable leaders. Yes, these companies have become incredibly powerful, and the comparison with the role of the state is understandable, because they are increasingly making decisions that were once the exclusive domain of states. But the answer from democratic governments should be less about showing humility and more about reasserting the primacy of democratic governance and oversight. What is needed is confidence on the part of democratic governments to ensure that these companies and services play their proper role within, rather than overtaking, a system based on the rule of law.
What impact do you think the inauguration of President Donald Trump will have?
The election of Donald Trump changes everything, because he has brought certain technology interests closer to power than any previous political leader, especially in the United States, a powerful geopolitical and technological hub. Many cryptocurrency interests support Trump, as do many VCs [venture capitalists]. And, of course, he has elevated Elon Musk and announced an agenda of deregulation. Every step his administration takes will be influenced by these factors, whether it is the personal interests of Elon Musk and his companies or the personal preferences of the president and his supporters. On the other hand, Musk is actually critical of some of the dynamics around AI, namely the existential risks. We will have to wait and see how long the honeymoon between him and Trump lasts, and how other big tech companies react, because they are not happy that Musk has more influence over technology policy than his competitors. I think there are difficult times ahead.
Why have politicians taken such a casual approach in the face of the digital technology revolution?
All of the most powerful companies we see today grew out of that progressive, liberal strand of the Californian counterculture: a few guys in shorts writing code in basements and garages, taking on the established superpowers. It was rooted in a romantic story about challenging the world. The incumbents they disrupted – media publishers, hotel chains, taxi companies, financial services firms – often had pretty bad reputations to begin with, so there was certainly an appetite for upheaval, and that underdog spirit was incredibly powerful. These companies have also done a really smart job of framing what they do as decentralization, much like the internet itself. Companies like Google and Facebook have consistently argued that any regulatory action would harm the internet. So it is a combination of wanting to believe the promises and not understanding how very narrow corporate interests were winning out at the expense of the public interest.
Are any major politicians prepared to stand up to big tech interests?
Well, someone like [US senator] Elizabeth Warren has the clearest vision of excessive corporate power and abuse, including in the technology industry, and she has consistently tried to address it. But broadly speaking, I worry that political leaders are not taking this as seriously as they should. There is not much vision in the European Commission. I have seen elections, including in my own country, where technology was not a topic at all. And then we see comments like those from the UK government, when it would seem logical to want democratic guardrails around overly powerful companies.
Are politicians held back by technological ignorance?
Yes, I think they are intimidated. But I also believe that this framing of government agencies is deliberately cultivated by technology companies. It is important to understand that how we are taught to think about technology is shaped by the technology companies themselves. You get this whole narrative that government is so stupid, so outdated, so poor at service delivery that it is basically unqualified to deal with technology. The message is: if you can’t even process taxes on time, what are you going to do about AI? That is a caricature of government, and governments should not accept it.
Do you think the UK’s position with big tech companies has weakened as a result of Brexit?
Yes and no. Australia and Canada have technology policies, and their populations are smaller than the UK’s, so I am not sure that is really the explanation. I think it is actually a much more deliberate choice to want to attract investment. So maybe it is simply self-interest, and it runs across both Conservative and Labour governments. I expected changes, but I don’t see much change in technology policy. I was clearly too optimistic.
You talk about restoring sovereignty. Do you think most people are aware that sovereignty has been lost?
One of the reasons I wrote this book was to reach the average news reader, not technology experts. It is a tough job to explain why this is an issue that should concern people. It will be interesting to see how the Trump administration provokes reactions, not only from European leaders but also from other countries around the world that believe they cannot afford to rely on American tech companies. Because, essentially, we are sending euros and pounds to Silicon Valley, and what do we get in return? Even more dependence. That is not what you want. As incredibly difficult as it is, things won’t get better if you do nothing.
We've reached a point where the CEO of a major social network is being arrested and detained. This is a big change, and it happened in a way that nobody expected. From Jennifer Rankin in Brussels:
French judicial authorities on Sunday extended the detention of Telegram's Russian-born founder, Pavel Durov, who was arrested at a Paris airport on suspicion of offences related to the messaging app.
Once this detention phase is over, the judge can decide whether to release Durov or to charge him and detain him further.
French investigators had issued a warrant for Durov's arrest as part of an investigation into charges of fraud, drug trafficking, organized crime, promoting terrorism and cyberbullying.
Durov, who holds French citizenship in addition to that of the United Arab Emirates, St Kitts and Nevis and his native Russia, was arrested as he disembarked from a private jet after returning from the Azerbaijani capital, Baku, on Sunday evening. Telegram released a statement:
⚖️ Telegram complies with EU law, including the Digital Services Act, and its moderation is within industry standards and is constantly being improved.
✈️ Telegram CEO Pavel Durov has nothing to hide and travels frequently to Europe.
😵💫 It is absurd to claim that the platform or its owners are responsible for misuse of their platform.
French authorities said on Monday that Durov's arrest was part of a cybercrime investigation.
The Paris prosecutor, Laure Beccuau, said the investigation concerns crimes related to illicit trading, child sexual abuse, fraud and the refusal to provide information to the authorities.
On the surface, the arrest seems decidedly different from previous cases. Governments have talked tough with messaging platform providers in the past, but arrests have been few and far between. Often, when platform operators have been arrested – as in the cases of Silk Road's Ross Ulbricht and Megaupload's Kim Dotcom – authorities could argue that the platforms would not have existed without the crimes.
Telegram has long operated as a lightly moderated service, partly because of its roots as a chat app rather than a social network, partly because of Durov's own experience dealing with Russian censors, and partly (as many argue) because it is simply cheaper to have fewer moderators and less direct control over the platform.
But while weaknesses in a company's moderation can expose it to fines under laws such as the UK's Online Safety Act or the EU's Digital Services Act, they rarely lead to personal charges, let alone to executives being jailed.
Encryption
But Telegram has one feature that makes it slightly different from its peers, such as WhatsApp and Signal: the service is not end-to-end encrypted.
WhatsApp, Signal and Apple's iMessage are built from the ground up to ensure that content shared on the services cannot be read by anyone other than the intended recipient, including not only the companies that run the platforms but also law enforcement agencies that may be called upon to cooperate.
This has caused endless friction between the world's largest tech companies and the governments that regulate them, but for now, it seems the tech companies have won the main battle: No one is seriously calling for end-to-end encryption to be banned anymore, and regulators and critics are instead calling for messaging services to be monitored differently, with approaches such as “client-side scanning.”
Telegram is different. The service offers end-to-end encryption through a little-used opt-in feature called “Secret Chats,” but by default, conversations are encrypted only in transit – enough to be unreadable by anyone snooping on your wifi network, but not hidden from Telegram itself. Messages sent outside “Secret Chats” (including all group chats, and all messages and comments in the service's broadcast “channels”) are effectively unencrypted as far as Telegram is concerned.
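To make the distinction concrete, the question is simply who holds the decryption key. The sketch below is a deliberately minimal illustration using the Python cryptography package's Fernet recipe – it is not Telegram's actual MTProto protocol, and the keys and messages are invented for the example.

```python
# Minimal illustration of "server-readable" vs end-to-end encryption.
# Not Telegram's real protocol; requires the third-party `cryptography` package.
from cryptography.fernet import Fernet

# --- Transport-style encryption: the provider holds the key -----------------
server_key = Fernet.generate_key()        # key lives on the provider's servers
server_box = Fernet(server_key)
ct = server_box.encrypt(b"hello")         # unreadable to someone on your wifi...
print(server_box.decrypt(ct))             # ...but the provider can decrypt it

# --- End-to-end-style encryption: only the two devices hold the key ---------
shared_key = Fernet.generate_key()        # agreed between sender and recipient only
alice, bob = Fernet(shared_key), Fernet(shared_key)
msg = alice.encrypt(b"hello")             # the relaying server sees only ciphertext
print(bob.decrypt(msg))                   # only the recipient's device can read it
```

In the first case a subpoena to the provider can yield plaintext; in the second, the provider can honestly say it holds nothing readable.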
This product decision sets Telegram apart from the pack, yet oddly enough, the company's marketing suggests that the difference is almost the exact opposite. Cryptography expert Matthew Green:
Telegram's CEO Pavel Durov continues to aggressively promote the app as a “secure messenger,” and has issued scathing criticisms of Signal and WhatsApp on his personal Telegram channel, suggesting that those systems are rigged with US government backdoors and that only Telegram's independent encryption protocols can truly be trusted.
Watching Telegram urge people to forego using a messenger that's encrypted by default while refusing to implement a key feature that would broadly encrypt messages for its own users is no longer amusing. In fact, it's starting to feel a bit sinister.
Can't v won't
Paper planes are placed outside near the French Embassy in Moscow in support of Pavel Durov, who was arrested in France. Photo: Yulia Morozova/Reuters
The result of Telegram's mismatch between technology and marketing is a disappointing one: The company, and Durov personally, are selling the app to people who worry that even the gold standards of secure messengers — WhatsApp and Signal — aren't secure enough for their needs, especially from the U.S. government.
At the same time, if a government knocks on Telegram's door and asks for information about actual or suspected criminals, Telegram does not have the same defence as other services. End-to-end encrypted providers can honestly tell law enforcement that they are unable to comply. That can create a hostile atmosphere in the long run, but the argument it prompts tends to be a general one about the principles of privacy and policing.
Telegram, by contrast, faces a choice each time: cooperate with law enforcement, ignore the request, or declare that it will not cooperate. This is no different from the choice facing the vast majority of online companies, from Amazon to Zoopla, except that Telegram's user base is precisely the one that demands to be kept beyond the reach of law enforcement.
Every time Telegram says “yes” to police, it infuriates its user base; every time it says “no,” it plays a game of chicken with law enforcement.
The details of the dispute between France and Telegram will inevitably be swamped by the wider conversation about “content moderation,” and supporters will rally accordingly (Elon Musk has already weighed in with “#FreePavel”). But that conversation is usually about publicly available material and what X or Facebook should or shouldn't do to moderate discussion on their sites. Private messaging and group messaging are fundamentally different services, which is why mainstream end-to-end encrypted services exist. But by trying to straddle both markets, Telegram may have lost both defences.
Final Question
My last day at the Guardian is fast approaching and next week's emails will be handed over to you, the reader. If you have a question you'd like an answer to, a doubt that's been simmering in the back of your mind for years, or are just curious about the inner workings of Techscape, please reply to this email or get in touch with me directly at alex.hern@theguardian.com. Ask me anything.
If you'd like to read the full newsletter, sign up to receive TechScape in your inbox every Tuesday.
It’s been a tough week for the “magnificent seven,” the group of technology stocks that have played a leading role in the U.S. stock market, buoyed by investor excitement about breakthroughs in artificial intelligence.
Last year, Microsoft, Amazon, Apple, chipmaker Nvidia, Google parent Alphabet, Facebook owner Meta and Elon Musk’s Tesla accounted for half of the S&P 500’s gains. But doubts about returns on AI investments, mixed quarterly earnings, investor attention shifting elsewhere and weak U.S. economic data have hurt the group over the past month.
Things came to a head this week when the shares of the seven companies entered a correction, with their combined share prices now down more than 10% from their peak on July 10.
Here we answer some questions about the magnificent seven and the AI boom.
Why did AI stocks fall?
First, there are growing concerns about whether the huge investments being made by Microsoft, Google and others in AI will pay off, and those doubts have intensified in recent months. In June, Goldman Sachs analysts published a report titled “Gen AI: Too Much Spend, Too Little Benefit?”, which asked whether the roughly $1 trillion expected to be invested in AI over the coming years “will ever pay off,” while an analysis by Sequoia Capital, an early investor in ChatGPT developer OpenAI, estimated that tech companies would need to generate $600 billion in revenue to recoup their AI investments.
The “magnificent seven” have also been hit by these concerns, said Angelo Gino, a technology analyst at CFRA Research.
“There are clearly concerns about the return on the AI investments that they’re making,” he said, adding that big tech companies have “done a good job explaining” their AI strategies, at least in their most recent financial results.
Another factor at play is investor hope that the Federal Reserve, the U.S. central bank, may cut interest rates as soon as next month. The prospect of lower borrowing costs has boosted investors’ support for companies that could benefit, such as small businesses, banks and real estate companies. This is an example of “sector rotation,” in which investors move money between different parts of the stock market.
Concerns about the magnificent seven weigh on the S&P 500 as a whole, given that a small number of tech stocks make up much of the index’s value.
“Given the growing concentration of this group within U.S. stocks, this will have broader implications,” said Henry Allen, a macro strategist at Deutsche Bank. Concerns about a weakening U.S. economy also hit global stock markets on Friday.
What happened to tech stocks this week?
As of Friday morning, the seven stocks were down 11.8% from last month’s record highs, but had been dipping in and out of correction territory — a drop of 10% or more from a recent high — in recent weeks amid growing doubts.
Quarterly earnings this week were mixed. Microsoft’s cloud-computing division, which plays a key role in helping companies train and run AI models, reported weaker-than-expected growth. Amazon, the other cloud-computing giant, also disappointed, as growth in its cloud business was offset by increased spending on AI-related infrastructure like data centers and chips.
But shares of Meta, the owner of advertising-dependent Facebook and Instagram, rose on Thursday as the company’s strong revenue growth offset promises of heavy investment in AI. Apple’s sales also beat expectations on Thursday.
“Expectations for the so-called ‘magnificent seven’ group have perhaps become too high,” Dan Coatsworth, an analyst at the investment platform AJ Bell, said in a note this week. “These companies’ success has put them on a pedestal in the eyes of investors, and any shortfall in greatness leaves them open to harsh criticism.”
A general perception that tech stocks may be overvalued is also playing a role: “Valuations have reached 20-year highs and they needed to come down and take a pause to digest some of the gains of the past 18 months,” Gino says.
The Financial Times reported on Friday that hedge fund Elliott Management said in a note to investors that AI is “overvalued” and that Nvidia, which has been a big beneficiary of the AI boom, is in a “bubble.”
Can we expect to see further advances in AI over the next 12 months?
Further breakthroughs are almost certain, which may reassure investors. The biggest players in the field have a clear roadmap, with the next generation of frontier models already in training, and new records are being set almost every month. Last week, Alphabet’s Google DeepMind announced that its system had achieved a record performance on problems from the International Mathematical Olympiad, a high-school-level math competition. The announcement has observers wondering whether such systems might tackle long-unsolved problems in the near future.
The question for labs is whether these breakthroughs will generate enough revenue to cover the rapidly growing costs of achieving them: The cost of training cutting-edge AI has increased tenfold every year since the AI boom really began, raising questions about how even well-funded companies such as OpenAI, the Microsoft-backed startup behind ChatGPT, will cover those costs in the long run.
Is generative AI already benefiting the companies that use it?
In many companies, the most successful uses of generative AI (the term for AI tools that can create plausible text, voice, and images from simple prompts) have come from the bottom up: people who have quietly used tools like Microsoft’s Copilot or Anthropic’s Claude to work more efficiently, or even to eliminate time-consuming tasks from their day entirely. But at the enterprise level, clear success stories are few and far between. Whereas Nvidia got rich selling shovels in the gold rush, the best story from an AI user is Klarna, the buy now, pay later company, which announced in February that its OpenAI-powered assistant had resolved two-thirds of customer service requests in its first month.
Dario Maisto, a senior analyst at Forrester, said a lack of economically beneficial uses for generative AI is hindering investment.
“The challenge remains to translate this technology into real, tangible economic benefits,” he said.
According to insurers, the global technology outage caused by a faulty CrowdStrike update is estimated to have cost Fortune 500 companies in the United States $5.4 billion. The cybersecurity company has pledged to take measures to prevent such incidents in the future.
The projected economic losses do not factor in tech giant Microsoft, which experienced widespread system outages during the event.
Banking, healthcare, and the major airlines are expected to bear the brunt of the impact, according to the insurer Parametrix. Total insured losses for Fortune 500 companies, excluding Microsoft, are estimated to range between $540 million and $1.08 billion – roughly 10% to 20% of the overall economic hit.
The CrowdStrike outage led to the disruption of thousands of flights, hospitals, and payment systems, marking it as the largest IT outage in history. Companies across industries are still struggling to recover from the damages. This incident exposed the fragility of modern technology systems, where a single faulty update can halt operations globally.
CrowdStrike, a Texas-based cybersecurity company worth billions, has seen a 22% drop in its shares since the outage. It has apologized for causing the tech crisis and has released a report detailing the issues with the update.
The root cause of the outage was an update pushed to CrowdStrike’s Falcon platform, a cloud-based service aimed at protecting businesses from cyber threats. The update contained a bug that resulted in 8.5 million Windows machines crashing simultaneously.
CrowdStrike has committed to conducting more thorough testing of its software before updates and implementing staged updates to prevent similar widespread outages in the future. It also plans to provide a more detailed report on the outage’s causes in the upcoming weeks.
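Staged (or “canary”) rollouts of this kind are a standard reliability practice rather than anything unique to CrowdStrike; the sketch below is a purely hypothetical illustration of the idea, not CrowdStrike’s actual deployment pipeline. An update reaches a small cohort first, a health check gates each expansion, and any failure halts the rollout before it touches the whole fleet.

```python
# Hypothetical sketch of a staged ("canary") rollout; not CrowdStrike's real pipeline.
import random

def healthy_after_update(host: str) -> bool:
    """Stand-in health check; a real system would confirm the host boots and reports in."""
    return random.random() > 0.0001   # assume a tiny per-host failure probability

def staged_rollout(hosts: list[str], stages=(0.01, 0.10, 0.50, 1.0)) -> bool:
    """Push an update in expanding waves, halting at the first sign of trouble."""
    deployed = 0
    for fraction in stages:
        target = int(len(hosts) * fraction)
        wave = hosts[deployed:target]
        # "Deploy" to this wave, then verify health before expanding further.
        if not all(healthy_after_update(h) for h in wave):
            print(f"Halting rollout: failure detected in a wave of {len(wave)} hosts")
            return False
        deployed = target
        print(f"Stage OK: update now on {deployed}/{len(hosts)} hosts")
    return True

if __name__ == "__main__":
    staged_rollout([f"host-{i}" for i in range(10_000)])
```

The point of the design is that a bad update fails on a small fraction of machines rather than on all of them at once.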
As one of the largest cybersecurity companies globally, valued at around $83 billion prior to the outage, CrowdStrike serves many Fortune 1000 companies worldwide. The impact of the failed update was substantial due to its broad reach, underscoring how heavily reliant companies are on similar products for their operations.
Several companies continue to face challenges in recovering from the outage, with Delta Air Lines still experiencing disruption after canceling or rescheduling numerous flights. The situation left frustrated passengers stranded, including unaccompanied children whose panicked parents struggled to reach them, and the U.S. Department of Transportation is investigating the airline’s handling of the matter.
In the blocky world of Chipotle Burrito Builder, players don the uniform of the Tex-Mex restaurant chain and make burritos for virtual customers. Available toppings are taken from Chipotle’s real-world menu; shirts and caps feature the Chipotle logo. And when the game launched two years ago, the first 100,000 players earned “Burrito Bucks” that could be redeemed for burritos on the Chipotle website.
Then there is Hyundai Mobility Adventure, where you can test drive models made by the Korean manufacturer. Samsung Galaxy Station offers a mockup of the company’s latest smartphone to help you travel to extraterrestrial worlds. In Telefónica Town, the challenge is to climb an assault course built from products in the telecommunications giant’s catalog. Vans World simply hands you a skateboard so you can do a few kickflips in a park plastered with the shoe company’s logos.
These are just a few of the corporate theme parks available on Roblox, one of the world’s most popular online video game platforms, which averaged 77 million players a day earlier this year and is especially popular with children and younger players (58% of users self-reported as being under 16 at the end of last year). Roblox lets you explore fantastical virtual worlds, jumping over obstacles, finding hidden collectibles and role-playing different jobs, much as a child would in a playground.
But the platform’s biggest selling point is its basic development toolset, which allows anyone with little to no programming knowledge to create and share their own video games. Though limited by design, it has attracted many people over the past few years, and not just aspiring game developers: it has made Roblox a favorite playground for corporate advertisers, who use the tools to create branded Roblox games to share with the platform’s millions of players.
These advergames (advertisements presented in the form of video games) typically sprinkle corporate branding onto a set of game mechanics simple enough for Roblox’s younger player base. Despite broader criticism that Roblox does not adequately protect children (which the company denies), companies have rushed to develop such games. Brands from Walmart to Wimbledon, McDonald’s to Gucci, Nike to the BBC have launched branded games on the platform; some have garnered hundreds of thousands of visits, others tens of millions. Roblox itself is seeking more brand involvement, promoting its large, young user base as a major attraction in a competitive advertising market.
An action shot from Vans World, where the company built a virtual skatepark in Roblox complete with footwear messaging. Photo: Vans / Roblox
“In the context of the attention economy, where consumers are exposed to hundreds, even thousands, of ads a day, capturing and maintaining attention is crucial,” says Yusuf Öç, associate professor of marketing at Bayes Business School, City, University of London. “We are exposed to thousands of ads every day, many of which we don’t remember. Advergames circumvent these filters more effectively by integrating brand messaging into games.”
Öç’s own research has found that ads that utilize interactive features like touching, swiping, and tilting a phone screen can influence consumer preferences and purchase intent. Roblox allows brands to bring these interactive elements into a ready-made, engaging space.
“Roblox’s popularity with a younger demographic opens up new avenues for us to reach and engage the next generation of consumers in a sector where we’re already investing heavily,” said Robert Jan van Dormael, vice president of marketing for consumer audio at Samsung-owned Harman.
JBL, one of Harman’s hi-fi brands, released an official Roblox game in February in which players can collect audio snippets and arrange them into custom tracks, explore pastel-colored worlds and collect virtual currency to spend on cosmetic headphones and portable speakers, all accurately modeled on real-life JBL products. Since its release, it has attracted 1.4 million players, with an average playtime of over six minutes – an engagement figure orders of magnitude higher than the few seconds a person typically spends on a social media post.
Vermont’s groundbreaking new law is set to become the first in the United States to mandate that fossil fuel companies contribute to the expenses associated with weather-related disasters caused by climate change.
The bill became law on Thursday night when Republican Governor Phil Scott allowed it to pass without his signature, following its approval in the state legislature with majority support from Democrats.
Vermont’s Climate Superfund Act is modeled on the EPA’s Superfund program and is designed to hold companies accountable by requiring large oil and other high-emission companies to cover costs related to preparing for and recovering from extreme weather events driven by climate change.
The companies subject to the charge, and the specific amounts they must pay, will be determined by calculating climate change’s contribution to Vermont’s weather disasters and the costs those disasters have imposed on the state. Each company’s share will be based on its carbon dioxide emissions between 2000 and 2019.
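The statute leaves the detailed methodology to state agencies, but the allocation it describes is essentially a pro-rata formula: each company’s portion of the assessed costs is proportional to its share of the covered emissions. Below is a minimal illustrative sketch; the company names and all figures are invented, and the real calculation will be far more involved.

```python
# Hypothetical illustration of a pro-rata cost allocation by emissions share.
# Company names and all figures are invented; the real methodology is left to
# Vermont's state agencies.

def allocate_costs(total_cost: float, emissions: dict[str, float]) -> dict[str, float]:
    """Split total_cost among companies in proportion to their 2000-2019 emissions."""
    total_emissions = sum(emissions.values())
    return {
        company: total_cost * tonnes / total_emissions
        for company, tonnes in emissions.items()
    }

if __name__ == "__main__":
    # Made-up megatonnes of CO2 attributed to each company over 2000-2019.
    example_emissions = {"Company A": 500.0, "Company B": 300.0, "Company C": 200.0}
    shares = allocate_costs(1_000_000_000, example_emissions)  # $1bn of assessed costs
    for company, amount in shares.items():
        print(f"{company}: ${amount:,.0f}")
```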
Following the bill’s passage in Vermont, there was uncertainty among state lawmakers regarding Governor Scott’s potential veto of the legislation. In a memo to lawmakers, Scott expressed concerns about the bill’s impacts.
However, supporters of the law celebrated its enactment, viewing it as a step toward holding major polluters accountable for environmental damage. Elena Mihaly, a vice-president at the Conservation Law Foundation in Vermont, praised the legislation.
Ethan Poplawski’s family home was destroyed in a landslide in July 2023 in Ripton, Vermont. Jessica Rinaldi/The Boston Globe via Getty Images file
Lauren Hierl, executive director of Vermont Conservation Voters, highlighted the importance of the climate superfund in distributing cleanup costs fairly and preventing taxpayers from bearing the burden alone.
The funds collected from fossil fuel companies under the new law will go towards upgrading infrastructure, securing schools and public buildings against extreme weather, storm cleanup, and reducing public health expenses related to climate change. State agencies will determine each company’s financial obligations by 2027.
While the law is expected to face legal challenges, including potential lawsuits, critics like the American Petroleum Institute argue that the fees are unjust and damaging to the energy industry.
Other states such as Massachusetts, Maryland, and New York are also contemplating similar legislation in response to escalating climate disasters, showcasing a growing need for financial resources to support recovery efforts.
Jennifer Rushlow, a Vermont Law School professor, emphasized the significance of Vermont’s law in setting a precedent for resilient climate Superfund legislation that could be adopted by other states.
Monopoly, said Peter Thiel, Silicon Valley’s answer to Darth Vader, is “a condition of all successful business.” That aspiration is widely shared by the Valley’s giants – Gamman, a new acronym for Google, Apple, Microsoft, Meta, Amazon and Nvidia – and with the advent of AI, each one’s desire to reach that blessed state before the others do is greater than ever.
One sign of their anxiety is that they are spending insane amounts of money on the 70-odd generative AI startups that have proliferated since it became clear that AI was going to be the next big thing. Microsoft, for example, reportedly invested $13bn (about £10.4bn) in OpenAI, while leading a $1.3bn funding round for Inflection, the startup of DeepMind co-founder Mustafa Suleyman, in which it was also an investor. Amazon put $4 billion into Anthropic, a startup founded by refugees from OpenAI; Google invested $500 million in the same company, pledged a further $1.5 billion, and has invested unknown amounts in AI21 Labs and Hugging Face.
(Yes, I know the acronym doesn’t mean anything.) Microsoft has also invested in the French AI startup Mistral. And so on. In 2023, only $9 billion of the $27 billion invested in AI startups came from venture capital firms – until recently by far the largest funders of emerging technology companies in Silicon Valley.
So what’s going on? After all, the big tech companies have their own “foundation” AI models and don’t obviously need what the smaller companies have built or are building. And then the penny drops: we’ve seen this strategy before. An incumbent spots and captures potential competitors at an early stage. Google acquired YouTube in 2006, for example. Facebook acquired Instagram for $1 billion in 2012, when it had only 13 employees, and WhatsApp in 2014 (for $19 billion, which seemed an exorbitant amount at the time).
With the 20/20 vision of hindsight, we now see that these were all anti-competitive acquisitions that should have been resisted at the time and were not. That’s why it’s so refreshing to know that at least one regulator, the UK’s Competition and Markets Authority (CMA), seems determined to learn from its history.
In a speech to a gathering of American antitrust lawyers in Washington just over a week ago, the CMA’s chief executive, Sarah Cardell, announced that the regulator is determined to ensure the market for foundation AI models is underpinned by fair, open and effective competition and strong consumer protection. Her concern is that the growing presence of a few large incumbents across the AI value chain (the series of steps required to turn inputs into usable outputs) will undermine competition and limit companies’ options, meaning these markets could develop in ways that degrade quality for businesses and consumers.
She cited three major risks to competition. One is that companies controlling critical inputs for developing foundation models may restrict access to them to shield themselves from competition. Another is that powerful incumbents may exploit their positions in consumer and business markets to limit competition in how models are deployed, and thereby distort choice. And the third is that partnerships between key players could reinforce or extend existing market power across the value chain.
She also warned that the CMA would use its formidable investigative powers – including merger control reviews, market investigations and possible designations under the new digital competition laws – to assess and mitigate competition risks arising from the new technology.
It was genuinely striking to hear a major regulator talk like this about the technology industry. Cardell suggested that, rather than waiting for problems to arise before acting, the CMA would be proactive towards an industry that famously believes in moving fast and breaking things, trying to stay ahead of the big players rather than lag behind them. She said the CMA is already preparing for the task, drawing on what it has learned so far from dealing with technology platforms. Rather than focusing only on individual parts of the AI value chain, from model development to deployment, it aims to look at the whole chain holistically. It also plans to use its merger review powers more aggressively to assess the impact of partnerships and AI investments on competition.
Isn’t that exciting? In some ways it is no surprise, since the CMA is one of the few British institutions that seems able to treat post-Brexit freedoms as an opportunity for creativity and innovation. And bigwigs tempted to dismiss Cardell’s speech as mere fiery rhetoric should reflect on the CMA’s recent track record: its thorough investigation of Microsoft’s acquisition of Activision Blizzard, for example, or the way it forced Meta to sell Giphy, an online database and search engine that lets users find and share animated GIF files. Cardell may be lower profile than her U.S. counterpart, the FTC’s Lina Khan, but it is clear she means business. Those with acquisitive instincts should beware.
On April 6, Maryland passed the first “Kids Code” bill in the US, designed to protect children from predatory data collection and harmful design features by tech companies. Vermont’s final public hearing on its Kids Code bill took place on April 11. The bills are part of a series of proposals to address the lack of federal regulation protecting minors online, making state legislatures a battleground. Some Silicon Valley tech companies are concerned that these restrictions could hurt their business and free speech.
These measures, known as age-appropriate design code or Kids Code bills, range from requiring enhanced data protections for underage online users to banning social media outright for certain age groups. Maryland’s bill passed both the state House and Senate unanimously.
Nine states, including Maryland, Vermont, Minnesota, Hawaii, Illinois, South Carolina, New Mexico, and Nevada, have introduced bills to improve online safety for children. Minnesota’s bill advanced through a House committee in February.
During public hearings, lawmakers in various states accused tech company lobbyists of deception. Maryland’s bill faced opposition from tech companies who spent $250,000 lobbying against it without success.
Carl Szabo, from the tech industry group NetChoice, testified before the Maryland state Senate as a concerned parent. Lawmakers questioned his ties to the industry during the hearing.
Tech giants have been lobbying in multiple states to pass online safety laws. In Maryland, these companies spent over $243,000 in lobbying fees in 2023. Google, Amazon, and Apple were among the top spenders according to state disclosures.
The bill mandates tech companies to implement measures safeguarding children’s online experiences and assess the privacy implications of their data practices. Companies must also provide clear privacy settings and tools to help children and parents navigate online privacy rights and concerns.
Critics are concerned that the methods used by tech companies to determine children’s ages could lead to privacy violations.
Supporters argue that social media companies would not need to demand identity documents, since the platforms already hold information indicating users’ ages. NetChoice instead suggests digital literacy education and safety measures as alternatives.
During a discussion on child safety legislation, a NetChoice director emphasized parental control over regulation, citing low adoption rates of parental monitoring tools on platforms like Snapchat and Discord.
NetChoice has proposed bipartisan legislation to enhance child safety online, emphasizing police resources for combating child exploitation. Critics argue that tech companies should be more proactive in ensuring child safety instead of relying solely on parents and children.
Opposition from tech companies has been significant in all state bills, with representatives accused of hiding their affiliations during public hearings on child safety legislation.
State bills are being revised based on lessons learned from California, where similar legislation faced legal challenges and opposition from companies like NetChoice. While some tech companies emphasize parental control and education, critics argue for more accountability from these companies in ensuring child safety online.
Recent scrutiny of Meta products for their negative impact on children’s well-being has raised concerns about the company’s role in online safety. Some industry experts believe that tech companies like Meta should be more transparent and proactive in protecting children online.