EU Fines Elon Musk’s X €120 Million in First Enforcement of New Digital Law

Elon Musk’s social media platform X has received a €120m (£105m) fine from the European Commission after being found in violation of new EU digital laws. The high-profile ruling is expected to cause friction with Donald Trump’s US administration.

The violations include what the EU described as “misleading” blue checkmark verification badges given to users, as well as insufficient transparency in advertising practices, which have been under investigation for two years.

The EU’s regulations require tech companies to maintain public lists of advertisers, structured to help prevent fraud, false advertising, and coordinated manipulation of political campaigns.

Additionally, the EU found that X had not granted researchers sufficient access to the public data they rely on to study critical topics such as political content.

This significant ruling marks the conclusion of an inquiry that started two years ago.

On Friday, the commission announced that X had failed to meet its transparency responsibilities under the Digital Services Act (DSA), marking the first judgment against the platform since the enforcement of regulations on social media and major tech platforms began in 2023.

In December 2023, the Commission began formal proceedings to determine if X violated the DSA regarding illegal content distribution and the effectiveness of measures to address information manipulation, with the investigation ongoing.

Under the DSA, X could face fines of up to 6% of its global revenue, which is projected to be between $2.5bn and $2.7bn (£1.9bn and £2bn) in 2024.

There are still three additional investigations underway, two of which examine alterations to content and algorithms implemented after Musk’s acquisition of Twitter in October 2022, when it was rebranded to “X.”

The commission is also exploring whether laws against inciting violence or terrorism have been violated.

The commission is also examining the platform’s system for letting users report and flag content they suspect to be illegal.

The fine, divided into three components, includes a €45 million penalty for the introduction of a blue “verification” checkmark that users could purchase, which, according to senior officials, obscured whether account holders were genuine.

Prior to Musk’s takeover, blue checkmarks were awarded only to verified account holders, including politicians, celebrities, public bodies, and established journalists, as well as figures from newer media such as bloggers and YouTubers. Since the acquisition, any user subscribing to X Premium can obtain blue-check status.

“With the DSA’s first compliance decision, we aim to hold X accountable for infringing on users’ rights and evading responsibility,” stated Henna Virkkunen, the European Commission’s executive vice-president overseeing technology regulation.

“Deceiving users with blue checkmarks, obscuring information in advertisements, or restricting access for researchers is unacceptable online within the EU.”

X was also fined €35 million for advertising violations and €40 million for failures related to data access for researchers.


This ruling could provoke backlash from the Trump administration. US commerce secretary Howard Lutnick recently suggested that the EU would have to put its tech regulations on the table if it wanted steel tariffs cut by 50%.

His statement was labeled “blackmail” by the Spanish commissioner Teresa Ribera.

EU officials asserted that the ruling stands independent of allegations brought forth by a US delegation meeting with trade ministers in Brussels last week. The EU emphasized its right to regulate US tech firms, noting that 25 companies, including non-US entities like TikTok, must adhere to the DSA.

Musk, who is on the path to becoming the first trillionaire, has 90 days to draft an “action plan” to address the fine, though he remains free to contest the EU’s decision, similar to appeals made by other corporations like Apple to the European Court of Justice.

In contrast, the EU announced it had secured a commitment from TikTok to establish an advertising repository, addressing transparency concerns raised with the European Commission earlier this year.

The DSA mandates that platforms maintain accessible and searchable ad repositories to enable researchers and civil society representatives to detect fraudulent, illegal, or age-inappropriate advertisements.

Officials said that the growing problem of fraudulent political ads, and of ads featuring impersonated celebrities, cannot be adequately analysed without cooperation from the social media companies.

X has been contacted for comment. The EU confirmed that the company has been made aware of the decision.

Source: www.theguardian.com

Physicists Discover Universal Law Governing How Objects Fracture


How many pieces can a dropped vase break into?

Imaginechina Limited / Alamy

The physics of a dropped plate, a crushed sugar cube, and a shattered glass shows striking similarities in how many pieces result when each object breaks.

For decades, researchers have recognized a universal behavior in fragmentation, where objects break apart upon falling or colliding: count the fragments of varying sizes and plot their distribution, and a consistent shape emerges regardless of the object being broken. Emmanuel Villermaux of Aix-Marseille University in France has now formulated equations that describe these shapes, thereby establishing universal laws of fragmentation.

Instead of concentrating on the appearance of cracks leading to an object’s breakup, Villermaux employed a broader approach. He considered all potential fragment configurations that could result in shattering. Some configurations produce precise outcomes, such as a vase breaking into four equal parts; however, he focused on capturing the most probable set that represents chaotic breakage, namely the one with the highest entropy. This mirrors methods used to derive laws concerning large aggregates of particles in the 19th century, he notes. Villermaux also applied the principles of physics that govern changes in fragment density during shattering, knowledge previously uncovered by him and his colleagues.

By integrating these two elements, Villermaux derived a straightforward equation that predicts the size distribution of fragments in a broken object. To verify its accuracy, he compared it against a number of earlier experiments involving glass rods, dry spaghetti, plates, ceramic tubes, and even plastic fragments submerged in water and waves crashing during storms. Overall, the fragmentation patterns observed in each of these experiments conformed to his novel law and reproduced the universal distribution shapes previously noted by researchers.
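The flavour of the maximum-entropy argument can be illustrated with a toy model (a sketch only, not Villermaux’s actual derivation): break a one-dimensional rod at uniformly random points, and the fragment sizes approach an exponential distribution, the maximum-entropy distribution for a fixed mean fragment size. All parameters below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def fragment_rod(n_cuts: int) -> np.ndarray:
    """Break a unit-length rod at n_cuts uniformly random points."""
    cuts = np.sort(rng.uniform(0.0, 1.0, n_cuts))
    edges = np.concatenate(([0.0], cuts, [1.0]))
    return np.diff(edges)  # fragment lengths

# Pool fragments from many independent breakage events.
sizes = np.concatenate([fragment_rod(200) for _ in range(500)])
mean = sizes.mean()

# Empirical fragment-size density vs. the maximum-entropy (exponential)
# prediction p(s) = exp(-s/m) / m for a fixed mean fragment size m.
hist, bins = np.histogram(sizes, bins=50, density=True)
centers = 0.5 * (bins[:-1] + bins[1:])
predicted = np.exp(-centers / mean) / mean
print(f"max deviation from exponential: {np.abs(hist - predicted).max():.3f}")
```

Real three-dimensional breakage is far richer than this one-dimensional caricature, which is precisely why a general derivation of the distribution’s shape was a long-standing open problem.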

He also ran experiments of his own, crushing sugar cubes with objects dropped from varying heights. “This was a summer endeavor with my daughters. I had done it a long time ago when they were young, and later revisited the data to further illustrate my concept,” Villermaux explains. He observes that the equation fails to hold when randomness is absent or the fragmentation process is overly uniform, as when a liquid stream divides into uniform droplets following the deterministic rules of fluid dynamics, or when fragments interact with one another during breakup.

Ferenc Kun and his colleagues at the University of Debrecen in Hungary argue that the pattern highlighted in Villermaux’s analysis is so fundamentally universal that it may derive from an even broader principle. At the same time, they express surprise at how widely applicable it is, and at its adaptability to specific variations, such as plastics in which cracks can be “healed.”

Fragmentation is not merely a captivating challenge in physics; a deeper understanding could significantly cut energy expenditure in mining operations or guide preparations for increasing rockfalls in mountainous areas as global temperatures rise, Kun remarks.

Looking ahead, it may prove beneficial to explore not only the sizes of fragments but also the distribution of their shapes, Kun suggests. Additionally, identifying the smallest conceivable size of a fragment remains an unresolved issue, according to Villermaux.


Source: www.newscientist.com

Living Systems Might Require a Fourth Law of Thermodynamics

HeLa Cell in Telophase with Separated Chromosomes

Dr. Matthew Daniels/Science Photo Library

The principles of thermodynamics, particularly concepts like heat and entropy, provide valuable tools for assessing how far a system of ideal particles is from equilibrium. Nevertheless, it is uncertain whether the existing thermodynamic laws adequately apply to living organisms, whose cells are intricately interconnected. Recent experiments involving human cells might pave the way for the formulation of new principles.

Thermodynamics plays a crucial role in living beings, as their deviations from equilibrium are critical characteristics. Cells, filled with energetic molecules, behave differently than simple structures like beads in a liquid. For instance, living cells maintain a “set point,” operating like an internal thermostat with feedback mechanisms that adjust to keep functions within optimal ranges. Such behaviors may not be effectively described by classical thermodynamics.

N. Narinder and Elisabeth Fischer-Friedrich from the Technical University of Dresden aimed to comprehend how the disequilibrium in living systems diverges from that in non-living ones. They carried out their research using HeLa cells, a line of cancer cells derived from Henrietta Lacks in the 1950s without her consent.

Initially, the scientists employed chemicals to halt cell division, then analyzed the outer membranes of the cells using an atomic force microscope. This highly precise instrument can engage with structures just nanometers in size, enabling researchers to measure how much the membranes fluctuated and how these variations were affected by interference with cell processes, such as hindering the development of certain molecules or the movement of proteins.

The findings showed that conventional thermodynamic models used for non-living systems did not fully apply to living cells. Notably, the concept of “effective temperature” was found to be misleading, as it fails to account for the unique behaviors of living systems.

Instead, the researchers emphasized the significance of “time reversal asymmetry.” This concept examines how the distinctions in biological events (like molecules repeatedly joining to form larger structures only to break apart again) differ when observed forwards versus backwards in time. These asymmetries are directly linked to the functional purposes of biological processes, such as survival and reproduction, according to Fischer-Friedrich.
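To make the idea concrete, here is a minimal sketch of one standard way such asymmetry is quantified in stochastic thermodynamics: an entropy-production estimate comparing forward and reverse transition statistics of a discretized signal. This is a generic illustration, not the estimator used in the study; the discretization scheme and the test signals are assumptions made for the example.

```python
import numpy as np

def entropy_production(signal: np.ndarray, n_states: int = 8) -> float:
    """Estimate sum_ij p_ij * log(p_ij / p_ji) from a 1D time series.

    The result is ~0 for a time-reversible (equilibrium-like) signal and
    positive when forward and reversed transition statistics differ.
    Pairs observed in only one direction are dropped (plug-in truncation)."""
    # Discretize into equal-occupancy states, then count transitions.
    edges = np.quantile(signal, np.linspace(0, 1, n_states + 1)[1:-1])
    states = np.digitize(signal, edges)
    counts = np.zeros((n_states, n_states))
    np.add.at(counts, (states[:-1], states[1:]), 1)
    p = counts / counts.sum()
    mask = (p > 0) & (p.T > 0)
    return float(np.sum(p[mask] * np.log(p[mask] / p.T[mask])))

rng = np.random.default_rng(1)
reversible = rng.normal(size=100_000)  # white noise looks the same reversed
t = np.arange(100_000)
sawtooth = (t % 200) / 200.0 + 0.05 * rng.normal(size=t.size)  # slow rise, abrupt reset
print(entropy_production(reversible))  # close to zero
print(entropy_production(sawtooth))    # clearly positive: time-asymmetric
```

A driven, dissipative process such as the sawtooth looks different when run backwards, and the measure picks that up; fluctuations of a living cell’s membrane can be analysed in the same spirit.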

“In biology, numerous processes rely on a system being out of equilibrium. Understanding how far the system deviates is crucial,” says Chase Broedersz of Vrije Universiteit Amsterdam. The new findings offer a promising metric for assessing that deviation.

This development marks a significant stride toward understanding active biological systems, says Yair Shokef at Tel Aviv University. He notes the novelty and utility of the team measuring time-reversal asymmetry alongside other indicators of non-equilibrium simultaneously.

However, to understand life through the lens of thermodynamic principles, further advancements are necessary. Fischer-Friedrich and her team aspire to formulate a concept akin to the fourth law of thermodynamics, specifically applicable to organisms with defined processes. They are actively investigating physiological observables—key parameters measurable within cells—from which such laws could potentially be derived.


Source: www.newscientist.com

German Court Rules ChatGPT Violates Copyright Law by ‘Learning’ from Song Lyrics

A court in Munich has determined that OpenAI’s ChatGPT breached German copyright laws by utilizing popular songs from renowned artists to train its language model, which advocates for the creative industry have labeled a pivotal ruling for Europe.

The Munich regional court sided with the German music copyright association GEMA, finding that ChatGPT had harvested protected lyrics from well-known musicians in order to “learn” them.

GEMA, an organization that oversees the rights of composers, lyricists, and music publishers with around 100,000 members, initiated legal action against OpenAI in November 2024.

This case was perceived as a significant test for Europe in its efforts to prevent AI from harvesting creative works. OpenAI has the option to appeal the verdict.


ChatGPT lets users pose inquiries and issue commands to a chatbot, which replies with text that mimics human language patterns. The foundational model of ChatGPT is trained on widely accessible data.

The lawsuit focused on nine of the most iconic German hits from recent decades, which ChatGPT employed to refine its language skills.

These included Herbert Grönemeyer’s 1984 synthpop hit Männer (Men) and Helene Fischer’s Atemlos durch die Nacht (Breathless Through the Night), which became the unofficial anthem of the German team during the 2014 World Cup.

The judge ruled that OpenAI must pay undisclosed damages for unauthorized use of copyrighted materials.

Kai Welp, GEMA’s general counsel, mentioned that GEMA is now looking to negotiate with OpenAI about compensating rights holders.

According to the Munich court, the San Francisco-based company, co-founded by Sam Altman and Elon Musk, argued that its language model draws on the training set as a whole rather than retaining or copying specific songs.

OpenAI contended that since the outputs are created in response to user prompts, the users bear legal responsibility, an argument the court dismissed.

GEMA celebrated the ruling as “Europe’s first groundbreaking AI decision,” indicating that it might have ramifications for other creative works.

Tobias Holzmüller, GEMA’s chief executive, remarked that the verdict demonstrates that “the internet is not a self-service store, and human creative output is not a free template.”

“Today, we have established a precedent to safeguard and clarify the rights of authors. Even AI tool operators like ChatGPT are required to comply with copyright laws. We have successfully defended the livelihood of music creators today.”

The Berlin law firm Raue, representing GEMA, stated that the court’s ruling “creates a significant precedent for the protection of creative works and conveys a clear message to the global tech industry,” while providing “legal certainty for creators, music publishers, and platforms across Europe.”


The ruling is expected to have ramifications extending beyond Germany as a legal precedent.

The German Journalists Association also praised the decision as a “historic triumph for copyright law.”

OpenAI responded that it would contemplate an appeal. “We disagree with the ruling and are evaluating our next actions.” The statement continued, “This ruling pertains to a limited set of lyrics and does not affect the millions of users, companies, and developers in Germany who utilize our technology every day.”

Furthermore, “We respect the rights of creators and content owners and are engaged in constructive discussions with various organizations globally that can also take advantage of this technology.”

OpenAI is currently facing lawsuits in the U.S. from authors and media organizations alleging that ChatGPT was trained on their copyrighted materials without consent.

Source: www.theguardian.com

Meta Found in Violation of EU Law Due to ‘Ineffective’ Illegal Content Complaint System

The European Commission has stated that Instagram and Facebook failed to comply with EU regulations by not offering users a straightforward method to report illegal content, such as child sexual abuse and terrorism.

According to the EU enforcement agency’s initial findings released on Friday, Meta, the California-based company valued at $1.8 trillion (approximately £1.4 trillion) that operates both platforms, has implemented unnecessary hurdles for users attempting to submit reports.

The report indicated that both platforms employ misleading designs, referred to as “dark patterns,” in their reporting features, which can lead to confusion and discourage users from taking action.

The commission concluded that this behavior constitutes a violation of the company’s obligations under the EU-wide Digital Services Act (DSA), suggesting that “Meta’s systems for reporting and addressing illegal content may not be effective.” Meta has denied any wrongdoing.

The commission remarked, “In the case of Meta, neither Facebook nor Instagram seems to provide user-friendly and easily accessible ‘notification and action’ systems for users to report illegal content like child sexual abuse or terrorist content.”

A senior EU official emphasized that the matter goes beyond illegal content, touching on issues of free speech and “overmoderation.” Facebook has previously faced accusations of “shadowbanning” users regarding sensitive topics such as Palestine.

The existing reporting system is deemed not only ineffective but also “too complex for users to navigate,” ultimately discouraging them from reaching out, the official noted.

Advocates continue to raise concerns about inherent safety issues in some of Meta’s offerings. Recent research released by Meta whistleblower Arturo Bejar revealed that newly introduced safety features on Instagram are largely ineffective and pose a risk to children under 13.

Meta has refuted the report’s implications, asserting that parents have powerful tools at their disposal. The company made teen accounts mandatory on Instagram in September 2024 and recently announced plans to adopt a version of the PG-13 film rating system to give parents more control over their teenagers’ social media use.

The commission also pointed out that Meta complicates matters for users whose content has been blocked or accounts suspended. The report indicated that the appeal mechanism does not allow users to present explanations or evidence in support of their case, which undermines its efficacy.

The commission said that streamlining the reporting system could also help platforms combat misinformation, citing the example of an Irish deepfake video in which the leading presidential candidate Catherine Connolly appeared to say she would withdraw from Friday’s election.

This ongoing investigation has been conducted in partnership with Coimisiún na Meán, Ireland’s Digital Services Coordinator, which oversees platform regulations from its EU headquarters in Dublin.

The commission also made preliminary findings indicating that TikTok and Meta are not fulfilling their obligation to provide researchers with adequate access to public data necessary for examining the extent of minors’ exposure to illegal or harmful content. Researchers often encounter incomplete or unreliable data.

The commission emphasized that “granting researchers access to platform data is a crucial transparency obligation under the DSA, as it allows for public oversight regarding the potential effects these platforms have on our physical and mental well-being.”

These initial findings will allow the platforms time to address the commission’s requests. Non-compliance may result in fines of up to 6% of their global annual revenue, along with periodic penalties imposed to ensure adherence.


“Our democracy relies on trust, which means platforms must empower their users, respect their rights, and allow for oversight of their systems,” stated Henna Virkkunen, the commission’s executive vice-president for technology sovereignty, security, and democracy.

“The DSA has made this a requirement rather than a choice. With today’s action, we are sharing preliminary findings on data access by researchers regarding four platforms. We affirm that platforms are accountable for their services to users and society, as mandated by EU law.”


A spokesperson for Meta stated: “We disagree with any suggestions that we have violated the DSA and are actively engaging with the European Commission on these matters. Since the DSA was implemented, we have made changes to reporting options, appeal processes, and data access tools in the EU, and we are confident that these measures meet EU legal requirements.”

TikTok mentioned that fully sharing data about its platform with researchers is challenging due to restrictions imposed by GDPR data protection regulations.

“TikTok values transparency and appreciates the contributions of researchers to our platform and the industry at large,” a spokesperson elaborated. “We have invested significantly in data sharing, and presently, nearly 1,000 research teams have accessed their data through our research tools.

“While we assess the European Commission’s findings, we observe a direct conflict between DSA requirements and GDPR data protection standards.” The company has urged regulators to “clarify how these obligations should be reconciled.”

Source: www.theguardian.com

Record-Breaking Chip Defies Moore’s Law by Expanding Vertically

Stacking semiconductor transistors could help chipmakers get past the limits of Moore’s law

KAUST

As semiconductor manufacturers make their products smaller, they encounter limitations on the computing power that can be integrated into a single chip. A groundbreaking chip may offer a solution to this dilemma and advance the creation of sustainable electronics.

Since the 1960s, enhancing electronic capabilities has revolved around miniaturizing their fundamental components, transistors, and packing them more densely onto chips. The trend was encapsulated by Moore’s law, the observation that the number of components on a microchip doubles roughly every two years. However, this scaling began to falter around 2010. Li Xiaohan and colleagues at Saudi Arabia’s King Abdullah University of Science and Technology suggest that the answer to this challenge might be to build upwards instead of shrinking further.

They engineered a chip featuring 41 vertical layers of two distinct semiconductor types, separated by insulating material. This stack of transistors is approximately ten times taller than any previously created. To evaluate its efficiency, the team produced 600 duplicates, all demonstrating consistent performance. Some of these stacked chips were utilized to execute various fundamental operations required by computers or sensing devices, showing performance levels comparable to traditional non-stacked counterparts.

Li says that producing these stacks requires a manufacturing method that uses less energy than standard chip production. Team member Thomas Anthopoulos at the University of Manchester, UK, says that while the new chip may not lead to advanced supercomputers, its use in everyday devices such as smart home gadgets and wearable health monitors could significantly lower the carbon footprint of the electronics industry while adding functionality with each extra layer.

How high will the stack rise? “The possibilities are endless; we can keep pushing the limits. It’s just a journey of determination,” Anthopoulos states.

However, he notes that engineering hurdles remain over how much heat the chip can tolerate before failing. Muhammad Alam at Purdue University in Indiana likens the stack to layering on multiple hoodies: each additional layer traps more heat. Alam says the chip’s current thermal threshold of 50 degrees Celsius would need to rise by more than 30 degrees Celsius for real-world applications. Nonetheless, he believes that for electronics to progress in the near future, building vertically is the only viable strategy.


Source: www.newscientist.com

Apple Calls for Repeal of EU Digital Markets Act, Threatens to Withhold Products from the EU

Apple is asking the European Commission to repeal its digital markets legislation, cautioning that if changes are not made, it may halt the rollout of certain products and services in the 27-member bloc.

In its latest dispute with Brussels, the iPhone manufacturer argued that the digital market regulations have resulted in poorer experiences for Apple users, increased security risks, and disrupted the integration of Apple products.

The Silicon Valley company has come under scrutiny under the three-year-old Digital Markets Act (DMA), legislation aimed at curbing the dominance of major digital companies, including search engines, app stores, and messaging platforms.

It claimed that the legislation’s demands for interoperability with non-Apple products have already delayed the introduction of features in the EU, such as live translation via AirPods and screen mirroring from iPhones to laptops.

“The DMA implies that the list of features delayed for EU users will likely grow, leading to further delays in their experience with Apple products,” the company stated. It also noted that Brussels is fostering unfair competition, as the same rules don’t apply to Samsung, the leading smartphone vendor in the EU.

Some DMA requirements necessitate that Apple ensures headphones from other brands operate on iPhones. Apple expressed that this is a barrier preventing the rollout of live translation services in the EU, as competing companies could access conversation data, raising privacy concerns.

Apple argued that the DMA should be repealed or at least replaced with more suitable regulation. While it did not specify which future products might not go on sale in the EU, it said the Apple Watch, first introduced a decade ago, would not be able to launch there today.

This marks another confrontation between the California-based firm and the European Commission. Earlier this year, Apple appealed a €500 million fine levied by the EU for allegedly hindering app developers from exploring cheaper alternatives outside the app store.

In August, the US president, Donald Trump, threatened tariffs on unspecified nations in retaliation for regulations affecting US tech companies.

In a post on Truth Social, he remarked: “I stand against a country that attacks our incredible American tech companies. Digital taxes, digital service laws, and digital market regulations are all aimed at harming or discriminating against American technology.”

“They also provide the largest high-tech firms with an outrageous advantage, effectively giving a free pass to China. This needs to end, and it needs to end now!”

Referring to the DMA, Apple stated: “Rather than competing through innovation, already successful companies are twisting these laws to further their agendas to collect more data from EU citizens or to gain access to Apple’s technology without cost.”

It emphasized that the law’s rules change how users access apps, noting that certain adult apps, which Apple bars from its own App Store partly because of the risks they pose to children, are now available on iPhones in the EU through other marketplaces.

The European Commission has been approached for comment.

Source: www.theguardian.com

Rayner Says Farage Has “Failed a Generation of Young Women” Over Proposal to Repeal Online Safety Law

Angela Rayner has stated that Nigel Farage has “failed a generation of young women” with his plan to abolish online safety laws, claiming it could lead to an increase in “revenge porn.”

The deputy prime minister’s remarks are the latest in a series of government criticisms of Farage, as Labour launches a barrage of attack ads targeting the Reform UK leader, including one featuring Farage alongside the influencer Andrew Tate.

During a press conference last month, Reform leaders announced plans to scrap the laws that push social media companies to restrict misleading and harmful content, vowing to end what they called censorship that risks turning the UK into a “borderline dystopian state.”

In response, the science and technology secretary, Peter Kyle, accused Farage of siding with child abusers like Jimmy Savile, prompting a strong backlash from Reform leaders.


In comments made to the Sunday Telegraph, Rayner underscored the risks associated with abolishing the act, which addresses what is officially known as intimate image abuse.

“We recognize that the abuse of intimate images is an atrocity, fostering a misogynistic culture on social media, which also spills over into real life,” Rayner articulated in the article.

“Nigel Farage poses a threat to a generation of young women with his dangerous and reckless plans to eliminate online safety laws. Abolishing these safety measures with no viable alternative to combat the ensuing flood of abuse reveals a severe neglect of responsibility.”

“It’s time for Farage to explain to British women and girls how he intends to ensure their safety online.”

Labour has rolled out a series of interconnected online ads targeting Farage. An ad launched on Sunday morning linked directly to Rayner’s remarks, asserting, “Nigel Farage wants to make it easier to share revenge porn online,” accompanied by an image of Farage laughing.

According to the Sunday Times, another ad draws attention to Farage’s comments regarding Tate, an influencer facing serious allegations in the UK, including rape and human trafficking, alongside his brother Tristan.

The British-American brothers are currently under investigation in Romania and assert their innocence against the numerous allegations.

Labour’s ads depict Farage alongside Andrew Tate with the caption “Nigel Farage calls Andrew Tate an ‘important voice’ for men,” referencing remarks made during an interview on last year’s Strike IT Big podcast.

Laila Cunningham, a Reform UK councillor and former magistrate, wrote an article for the Telegraph on Saturday labeling the online safety law a “censorship law” and pointing out that existing laws already address “revenge porn.”

“This law serves as a guise for censorship, providing a pretext to empower unchecked regulators and to silence dissenting views,” Cunningham claimed.

Cunningham also criticized the government’s focus on accommodating asylum seekers in hotels, arguing that it puts women at risk and diverts attention from more pressing concerns.

Source: www.theguardian.com

X Warns UK Online Safety Law Poses a Threat to Free Speech

Elon Musk’s platform, X, has warned that the UK’s Online Safety Act (OSA) may “seriously infringe” on free speech due to its measures aimed at shielding children from harmful content.

The social media company said the law’s ostensibly protective aims are being undermined by the aggressive enforcement tactics of the communications watchdog Ofcom.

In a statement shared on its platform, X remarked: “Many individuals are worried that initiatives designed to safeguard children could lead to significant violations of their freedom of expression.”

It further stated that the UK government was likely aware of the risks, having made “conscious decisions” to enhance censorship under the guise of “online safety.”

“It is reasonable to question if British citizens are also aware of the trade-offs being made,” the statement added.

The law, a point of political contention on both sides of the Atlantic, is facing renewed scrutiny following the implementation of new restrictions on July 25th governing under-18s’ access to pornography and content deemed harmful to minors.

Musk, who owns X, described the law as “suppression of the people” shortly after the new rules came into force. He also reposted a petition advocating the repeal of the law, which has garnered over 450,000 signatures.

X found itself compelled to impose age restrictions on certain content. Reform UK joined the outcry, pledging to abolish the act, a commitment that led the British technology secretary Peter Kyle to accuse Nigel Farage of aligning himself with the pedophile Jimmy Savile; Farage described the comments as “below the belt” and deserving of an apology.

X claimed that Ofcom is taking a “heavy-handed” approach to implementing the act, characterized by “a rapid increase in enforcement resources” and “additional layers of bureaucratic surveillance.”

The statement warned: “The commendable intentions of this law risk being overshadowed by the expansiveness of its regulatory scope. A more balanced and collaborative approach is essential to prevent undermining free speech.”

While X aims to comply with the law, it said the threat of enforcement and penalties, which can reach 10% of global revenue for social media platforms like X, could lead to over-censorship of legitimate content to avoid repercussions.

The statement also referred to plans for a National Internet Intelligence Research Team intended to monitor social media for indications of anti-migrant sentiments. While X suggested the proposal could be framed as a safety measure, it asserted that it “clearly extends far beyond that intention.”


“This development has raised alarms among free speech advocates, who characterize it as excessively restrictive. A balanced approach is essential for safeguarding individual freedoms, fostering innovation, and protecting children.”

A representative from Ofcom stated that the OSA includes provisions to uphold free speech.

They asserted: “Technology companies must address criminal content and ensure children do not access defined types of harmful material without needing to restrict legal content for adult users.”

The UK Department of Science, Innovation and Technology has been approached for comment.

Source: www.theguardian.com

UK Online Safety Law Prompts 5 Million Extra Daily Age Checks on Porn Sites

Recent statistics indicate that since the implementation of age verification for pornographic websites, the UK is conducting an additional five million online age checks daily.

The Association of Age Verification Providers (AVPA) reported a significant increase in age checks across the UK since Friday, coinciding with the enforcement of mandatory age verification under the Online Safety Act.

Iain Corby, executive director of the AVPA, said the body’s members were now carrying out roughly five million additional age checks a day in the UK.

In the UK, the use of virtual private networks (VPNs), which mask users’ actual locations and allow them to bypass restrictions on blocked sites, is rising rapidly. Four of the top five free apps in the UK Apple App Store are VPNs, with the popular provider Proton reporting a 1,800% surge in downloads.

Last week, Ofcom, the UK communications regulator, indicated it may open formal investigations into services reported this week to have inadequate age checks. Ofcom said it will actively monitor compliance with the age verification requirements and may investigate specific services as needed.

AVPA, the industry association representing UK age verification companies, has been tallying the checks performed for UK porn providers, which were required to implement “highly effective” age verification by July 25th.

Member companies were asked to report the number of checks they conduct each day for services subject to the highly effective age assurance requirement.

While the AVPA said it could not provide a baseline for comparison, it noted that effective age verification is new to dedicated UK porn sites, which previously required only a click-through confirmation of age.

An Ofcom spokesperson said: “Until now, children could easily stumble upon pornographic and other online content without seeking it out. Age checks are essential to prevent that. We must ensure platforms are adhering to these requirements and anticipate enforcement actions against non-compliant companies.”

Ofcom stresses that service providers should not promote the use of VPNs to circumvent age management.

Penalties for breaching the online safety rules, including operating insufficient age verification, can reach 10% of global revenue or, in the most severe cases, a block on access to the site.

Age verification methods endorsed by Ofcom and used by AVPA members include facial age estimation, which estimates a person’s age from live photos and video; verification through credit card providers, banks, or mobile network operators; photo ID matching, in which a user’s ID is compared with a selfie; and a “digital identity wallet” containing proof of age.

Prominent pornographic platforms, including Pornhub, the UK’s leading porn site, have pledged to adopt the stringent age verification measures mandated by the Act.

The law compels sites and applications to protect children from various kinds of harmful content, specifically material that encourages suicide, self-harm, and eating disorders. The largest platforms must also take action to prevent the dissemination of abusive content targeting individuals with characteristics protected under equality laws, such as age, race, and gender.

Free speech advocates argue that the child safety restrictions have caused material to be age-restricted unnecessarily on X, along with several Reddit forums dedicated to discussions around alcohol abuse.

Reddit and X have been approached for comment.

Source: www.theguardian.com

High Court Calls on UK Lawyers to Halt AI Misuse After Noting Fabricated Case Law

The High Court has instructed senior members of the legal profession to take urgent action to curb the misuse of artificial intelligence, after a string of fake case-law citations, some entirely fictitious and others containing invented passages, were put before the courts.

Lawyers are increasingly using AI systems to help formulate legal arguments, but two cases this year were seriously marred by citations of fictitious precedents believed to have been generated by AI.

In an £89 million damages lawsuit against Qatar National Bank, the claimant’s submissions cited 45 authorities. The claimant acknowledged using publicly accessible AI tools, and his legal team admitted citing non-existent authorities.

When Haringey Law Centre challenged the London Borough of Haringey over its alleged failure to provide temporary accommodation for its clients, its lawyer cited fictitious case law multiple times. Suspicions were raised when the counsel representing the council repeatedly had to explain that they could not locate the supposed authorities.

The episode led to wasted-costs proceedings, with the court finding the law centre and its lawyers, including a pupil barrister, negligent. The barrister denied deliberately using AI but said she may have done so inadvertently while preparing for another case in which she cited the same fictitious authority, perhaps taking an AI-generated summary at face value without realizing what it was.

In a regulatory judgment, Dame Victoria Sharp, president of the King’s Bench Division, warned: “If artificial intelligence is misused, it could severely undermine public trust in the judicial system. Lawyers who misuse AI could face disciplinary action, including contempt-of-court sanctions and referral to the police.”

She urged the Bar Council and the Law Society to treat the issue as an immediate priority, and instructed the heads of barristers’ chambers and the managing partners of solicitors’ firms to ensure all lawyers understand their professional and ethical responsibilities when using AI.

“While tools like these can produce apparently coherent and plausible responses, those responses may be completely incorrect,” she said. “They might confidently assert false information, cite non-existent sources, or misquote real documents.”

Ian Jeffery, chief executive of the Law Society of England and Wales, remarked that the ruling “highlights the dangers of employing AI in legal matters.”

“AI tools are increasingly utilized to assist in delivering legal services,” he continued. “However, the significant risk of inaccurate outputs produced by generative AI necessitates that lawyers diligently verify and ensure the accuracy of their work.”


These cases are not the first to suffer due to AI-generated inaccuracies. At the UK tax court in 2023, an appellant allegedly assisted by an “acquaintance at a law office” provided nine fictitious historical court decisions as precedents. She acknowledged that she might have used ChatGPT but claimed there were other cases supporting her position.

Earlier this year, in a Danish case valued at 5.8 million euros (£4.9 million), the appellant narrowly avoided sanction after relying on a fabricated ruling that the judge identified. And a 2023 case in the US District Court for the Southern District of New York was thrown into turmoil when the court was shown seven clearly fictitious cases cited by the attorneys. When queried, ChatGPT produced summaries of the cases it had itself invented, compounding the problem; the judge, expressing his concern, fined two lawyers and their firm $5,000.

Source: www.theguardian.com

Pornhub Owners Suspend French Sites in Protest of New Age Verification Law

Visitors from France accessing adult sites like PornHub, YouPorn, and RedTube will encounter a message that criticizes the nation’s age verification laws, as announced by the company on Tuesday.

A spokesperson said the sites’ parent company, Aylo, was responding to French legislation mandating that adult sites implement stricter measures to verify that their users are 18 or older.

“It’s clear that Aylo has made the tough choice to restrict access for French users on platforms like Pornhub, YouPorn, and RedTube. Tomorrow, we will utilize these platforms to directly engage with the French public,” stated a Pornhub representative on Tuesday.

Instead of providing a vast array of adult content on Pornhub, Aylo aims to “directly communicate with the French populace about the dangers to privacy and the ineffectiveness of the French law,” said Solomon Friedman, a partner at Ethical Capital Partners, which owns Aylo, during a video call with reporters on Tuesday.

This year, France will gradually implement new requirements for all adult sites, enabling users to verify their ages using personal information such as credit cards and identification documents.

To safeguard privacy, operators are required to offer third-party “double-blind” options that prevent the sites from accessing users’ identities.
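One way to picture a “double-blind” check (a plausible sketch only; the schemes actually certified under the French framework may differ) is a token flow: an independent verifier checks the user’s age against their documents and issues a signed claim that says nothing but “over 18,” which the adult site validates without ever learning who the user is. The key handling below is simplified for illustration; a real deployment would use public-key signatures rather than a shared secret.

```python
import hashlib
import hmac
import secrets

# Key held by the third-party age verifier (illustrative stand-in for a
# public/private key pair in a real system).
VERIFIER_KEY = secrets.token_bytes(32)

def issue_token() -> tuple[bytes, bytes]:
    """Verifier side: after checking the user's documents, emit a claim
    carrying only an age assertion and a random nonce -- no identity."""
    claim = b"over18:" + secrets.token_bytes(16)
    signature = hmac.new(VERIFIER_KEY, claim, hashlib.sha256).digest()
    return claim, signature

def site_accepts(claim: bytes, signature: bytes) -> bool:
    """Site side: check the signature; the site never sees the user's
    name or documents, only a proof that some verifier vouched for the age."""
    expected = hmac.new(VERIFIER_KEY, claim, hashlib.sha256).digest()
    return claim.startswith(b"over18:") and hmac.compare_digest(signature, expected)

claim, signature = issue_token()
print(site_accepts(claim, signature))  # True -- age proven, identity withheld
```

The “double-blind” label refers to the further property that the verifier also never learns which site the token is ultimately presented to.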

However, Aylo contends that this approach is flawed, exposing user data to potential threats, hacks, and leaks.

The company argues that France should focus on the developers of operating systems like Microsoft’s Windows, Apple’s iOS, and Google’s Android, rather than targeting pornographic platforms.

“Aylo takes age verification seriously,” said Alex Kekesi, an executive at the company, during a media call.

She emphasized that carrying out age verification on individual platforms poses a “significant risk” to privacy rights.

Friedman from ECP stated, “Google, Apple, and Microsoft have integrated features within their operating systems to verify a user’s age at the device level.”

The capacity to “supply age signals to any site or application” would enable control over access to adult content while keeping sensitive information private, offering a viable solution, he argued.

“We recognize that these three companies are powerful, but that doesn’t excuse France’s actions,” he added.

Aylo’s message to adult content viewers includes imagery promoting freedom, inspired by Eugène Delacroix’s renowned painting Liberty Leading the People.

The culture minister Aurore Bergé said that if adult sites chose to block French users rather than comply with the law, it would be “very positive.”

“Minors in France will have less access to violent, degrading, and humiliating content,” she remarked.

“If Aylo prefers to withdraw from France rather than comply with our regulations, they are free to do so,” stated Clara Chappaz, the French minister for artificial intelligence and digital technology, on X.

According to Arcom, 2.3 million minors visit porn sites each month, even though they are legally prohibited from doing so.

Elsewhere in the European Union, adult content platforms face increased scrutiny. EU regulators announced last month that several sites, including Pornhub, are under investigation for failing to uphold child protection regulations.

Source: www.theguardian.com

Alabama Paid Millions to Law Firms to Defend Its Prison System: AI-Generated Fake Citations Uncovered

Frankie Johnson, an inmate at William E. Donaldson Prison near Birmingham, Alabama, reports being stabbed approximately 20 times within a year and a half.

In December 2019, Johnson claimed he was stabbed “at least nine times” in his housing unit. Then, in March 2020, after a group therapy session, officers handcuffed him to a desk and exited the unit. Shortly afterward, another inmate came in and stabbed him five times.

In November that same year, Johnson alleged that an officer handcuffed him and transported him to the prison yard, where another prisoner assaulted him with an ice pick and stabbed him “five or six times,” all while two corrections officers looked on. Johnson contended that one officer even encouraged the attack as retaliation for a prior conflict between him and the staff.

In 2021, Johnson filed a lawsuit against Alabama prison officials, citing unsafe conditions marked by violence, understaffing, overcrowding, and significant corruption within the state’s prison system. To defend the lawsuit, the Alabama attorney general’s office has engaged law firms, including Butler Snow, that have received substantial payments from the state to defend its troubled prison system.

State officials have praised Butler Snow for its experience in defending prison-related cases, particularly William Lunsford, the head of its constitutional and civil rights litigation group. However, the firm is now facing sanctions from the federal judge overseeing Johnson’s case, after its lawyers cited cases fabricated by artificial intelligence.

This is just one of several cases reflecting the issue of attorneys using AI-generated information in formal legal documents. A database that tracks such occurrences has noted 106 identified instances globally, where courts have encountered “AI hallucinations” in submitted materials.

Last year, lawyers were handed one-year suspensions from practicing in a federal district court in Florida after it was found that they had cited cases fabricated by AI. Earlier this month, a federal judge in California ordered a firm to pay over $30,000 in legal fees for filings that included false AI-generated research.

During a hearing in Birmingham on Wednesday regarding Johnson’s case, U.S. District Judge Anna Manasco mentioned that she was contemplating various sanctions, such as fines, mandatory legal education, referrals to licensing bodies, and temporary suspensions.

She noted that existing disciplinary measures across the country have often been insufficient. “This case demonstrates that current sanctions are inadequate,” she remarked to Johnson’s attorney. “If they were sufficient, we wouldn’t be here.”

During the hearing, attorneys from Butler Snow expressed their apologies and stated they would accept any sanctions deemed appropriate by Manasco. They also highlighted their firm policy that mandates attorneys seek approval before employing AI tools for legal research.

Reeves, the attorney responsible, took full responsibility for the lapses.

“I was aware of the restrictions concerning [AI] usage, and in these two instances, I failed to adhere to the policy,” Reeves stated.

Butler Snow’s lawyers were appointed by the Alabama Attorney General’s Office and work on behalf of the state to defend ex-commissioner Jefferson Dunn of the Alabama Department of Corrections.

Lunsford, who is contracted for the case, said the firm has begun a review of all previous submissions to ensure no additional erroneous citations exist.

“This situation is still very new and raw,” Lansford conveyed to Manasco. “We are still working to perfect our response.”

Manasco indicated that Butler Snow would have 10 days to file a motion outlining their approach to resolving this issue before she decides on sanctions.

The fictitious AI citations arose in a dispute over the scheduling of Johnson’s deposition.

Lawyers from Butler Snow reached out to Johnson’s attorneys to arrange a deposition for Johnson while he remains incarcerated. However, Johnson’s lawyers objected to the proposed timeline, citing outstanding documents that Johnson deemed necessary before he could proceed.

In a court filing dated May 7, Butler Snow countered that case law necessitates a rapid deposition for Johnson. “The 11th Circuit and the District Court typically allow depositions for imprisoned plaintiffs when relevant to their claims or defenses, irrespective of other discovery disputes,” they asserted.

The lawyers listed four cases that superficially supported their arguments, but all turned out to be fabricated.

While some of the case titles were reminiscent of real cases, none were relevant to the matter at hand. One, cited as a 2021 case titled Kelly v. Birmingham, matched at best an unrelated case, Kelly v. City of Birmingham, the only similarly named case Johnson’s attorneys could locate.

Earlier this week, Johnson’s lawyers filed a motion highlighting the fabrications, asserting they were creations of “generative artificial intelligence.” They also identified another clearly fictitious citation in prior submissions related to the discovery dispute.

The following day, Manasco scheduled a hearing on whether Butler Snow’s lawyers should be sanctioned. “Given the severity of the allegations, the court conducted an independent review of each citation submitted, but found nothing to support them,” she wrote.

In his declaration to the court, Reeves indicated he was reviewing filings drafted by junior colleagues and included a citation he presumed was a well-established point of law.

“I was generally familiar with ChatGPT,” Reeves mentioned, explaining that he sought assistance to bolster the legal arguments needed for the motion. However, he admitted he “rushed to finalize and submit the motions” and “did not independently verify the case citations provided by ChatGPT through Westlaw or PACER before their inclusion.”

“I truly regret this lapse in judgment and diligence,” Reeves expressed. “I accept full responsibility.”

Damien Charlotin, a Paris-based legal researcher and academic who maintains a database tracking such cases, notes that incidents of false AI content entering legal filings are on the rise.

“We’re witnessing a rapid increase,” he stated. “The number of cases over the past weeks and months has spiked compared to earlier periods.”

Thus far, the judicial response to this issue has been quite lenient, according to Charlotin. More severe repercussions, including substantial fines and suspensions, typically arise when lawyers fail to take responsibility for their mistakes.

“I don’t believe this will continue indefinitely,” Charlotin predicted. “Eventually, everyone will be held accountable.”

In addition to the Johnson case, Lunsford and Butler Snow hold contracts with the Alabama Department of Corrections to handle several large civil rights lawsuits, including a case brought by the Justice Department in 2020, during Donald Trump’s first presidency.

The contract for that matter was valued at $15 million over two years.

Some Alabama legislators have questioned the significant amount of state funds allocated to law firms for defending these cases. However, this week’s missteps do not appear to have diminished the attorney general’s confidence in Lunsford or Butler Snow to continue their work.

On Wednesday, Manasco asked the attorney from the attorney general’s office who was present at the hearing whether the state still wished to keep the firm on the case.

“Mr. Lunsford remains the attorney general’s preferred counsel,” he replied.

Source: www.theguardian.com

Face ID: A Useful Resource or a Source of Concern? The Subtle Integration of Facial Recognition in Law Enforcement

The future is arriving ahead of schedule in Croydon. It may not look like the UK’s vanguard: North End is a pedestrianised high street filled with typical pawnbrokers, fast-food restaurants, and a blend of chain clothing stores. Yet this area is expected to host some of the UK’s first permanent fixed facial recognition cameras.

Digital images of passersby will be captured discreetly and processed to derive biometric data, which includes facial measurements. This data will be rapidly compared against a watchlist via artificial intelligence, and a match will trigger an alert that might lead to an arrest.
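In outline (a simplified sketch of how such systems generally work, not the Met’s specific implementation), the matching step reduces each face to an embedding vector and compares it against watchlist embeddings, alerting when a similarity score crosses a configured threshold. The embeddings below are random stand-ins for the output of a face-recognition model.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_watchlist(probe: np.ndarray, watchlist: dict, threshold: float = 0.6):
    """Return (identity, score) if the best watchlist match clears the
    threshold, else (None, score). Where the threshold sits trades false
    alerts against missed matches, which is why its setting is contentious."""
    best_id, best_score = None, -1.0
    for person_id, embedding in watchlist.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_id, best_score = person_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

rng = np.random.default_rng(0)
watchlist = {f"person_{i}": rng.normal(size=128) for i in range(3)}
probe = watchlist["person_1"] + 0.1 * rng.normal(size=128)  # noisy re-capture
print(match_watchlist(probe, watchlist))  # flags person_1 with a high score
```

The threshold in this sketch foreshadows a dispute that recurs later in the piece: where that score cut-off sits, and who decides it, largely determines how often passers-by are wrongly stopped.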

According to the south London borough’s latest violence reduction strategy, North End and its adjacent streets are “major crime hotspots.” However, they do not rank among the most dangerous places in the capital.

Crime here is the 20th worst among the 32 London boroughs, excluding the City of London, and the plan to run permanent cameras in a trial phase later this summer is not an emergency measure. Even so, North End and nearby London Road may soon see more surveillance.

When approached about the surveillance initiative, most shopkeepers and visitors in the North End were unaware of the police’s plans or the underlying technology.

For many, the cameras appear to be just another piece of street furniture alongside signs promoting safe cycling. While some express concern, others point to research suggesting that a public weary of rising crime is broadly receptive.

Police began experimenting with facial recognition cameras in England and Wales in 2016. Documents recently released under the Freedom of Information Act (FOI) and police statistics shared with the Guardian reveal substantial growth in usage over the last year: the technology is evolving from a niche tool into a regular component of police strategy.

Last year, police scanned almost 4.7 million faces using live facial recognition cameras, deploying the vans at least 256 times in 2024, up from 63 the previous year.

A further fleet of 10 mobile live facial recognition vans is expected to be available to operate throughout the country.

Meanwhile, civil servants are collaborating with law enforcement to develop a new national facial recognition system, known as the strategic facial matcher. The platform will be able to search multiple databases, including custody images and immigration files.

“The implementation of this technology could become a common sight in city centres and transit hubs across England and Wales,” states one funding document submitted by the South Wales police to the Home Department and released by Metropolitan Police under FOI.

Activists warn that this technology may disrupt everyday public life by subjecting individuals to impromptu identity checks facilitated by extensive facial recognition systems. Advocates of the technology acknowledge its risks but emphasize its importance for safety.

Recently, David Cheneler, a 73-year-old registered sex offender from Lewisham who had served nine years for 21 offences, was sentenced to two years in prison for breaching the conditions of his release.

Officers were alerted by live facial recognition cameras to Cheneler walking alone with a six-year-old girl.

“He was on the watchlist due to his compliance conditions,” said Lindsay Chiswick, the Met’s director of intelligence and the national police lead on facial recognition.

Cheneler had built a relationship with the girl’s mother over time and had begun picking the child up from school, Chiswick said. No one knows what might have happened that day had he not been stopped; without facial recognition, spotting him would have been a real challenge.

Many see this as a compelling argument, but critics raise concerns about the unanticipated ramifications as law enforcement adopts technology without legislative guidance.

Madeline Stone from the NGO Big Brother Watch, who has observed mobile camera deployments, reported witnessing misidentifications of schoolchildren in uniforms undergoing “long, humiliating, and unnecessary police stops,” where they were compelled to verify their identities and provide fingerprints.

In these instances, those affected were young Black boys, who were left frightened and distressed, she noted.


“The effectiveness diminishes as the threshold rises,” Stone added. “The police might prefer not to employ it that way in certain environments, and there are no legal mandates requiring them to. The notion that police can unilaterally create their own guidelines for its use is truly alarming.”

Londoner Shaun Thompson, with backing from Big Brother Watch, has launched a judicial review after the technology wrongly identified him as a person of interest and he was detained for 30 minutes while returning from a volunteer shift with the anti-knife initiative Street Fathers.

Additionally, Dr Daragh Murray, who was tasked with an independent evaluation of the Met’s trials in 2019, highlights the potential “chilling” effect this technology might have on society, arguing that the considerations must go beyond the technology’s implementation alone.

“It’s akin to police tailing you, recording who you talk to, where you go, how often, and for how long,” he remarked. “I believe most people would be uncomfortable with that reality. Democracy thrives on dissent and discourse; if surveillance stifles them, it risks entrenching the status quo and limiting future possibilities.”

Live facial recognition is being used to apprehend individuals for traffic violations, growing cannabis, and breaching community orders. Is this truly justified?

Fraser Sampson, who served as biometrics and surveillance camera commissioner for England and Wales until the post was abolished in October 2023, now serves as a non-executive director of Facewatch, the leading UK supplier of facial recognition security systems used by retailers to prevent shoplifting.

While he acknowledges the technology’s potential, he expresses concern that independent regulations concerning surveillance haven’t kept pace with its deployment by the state.

Sampson commented: “There’s an abundance of information about what the technology can do, yet on the practical side – how it is applied, why it is used, and what avenues exist for challenge or complaint – that clarity seems to be lacking.”

Chiswick said she understood the concerns while also recognizing the potential advantages of regulation. The Met is cautiously taking “small strides” that are continually reviewed, she stated. With limited resources, law enforcement needs to adapt and capitalize on the possibilities brought by AI. The force is conscious of potential “chilling effects” on society and has made clear that cameras will not be deployed at protests.

“Will this become common? I cannot say,” Chiswick remarked. “We need to approach that assumption with caution. There are numerous possible scenarios. Areas like the West End? It’s conceivable that, instead of the static trials we’re conducting in Croydon, we could use it there. However, that’s not our current plan.”

She added: “I believe the use of technology, data, and AI will continue to grow in the coming years because, personally, I think that’s how we improve our operations.”

Source: www.theguardian.com

Trump’s latest method of eliminating regulations: My word is law

This week, President Trump directed 10 federal agencies, including the Environmental Protection Agency, the Energy Department and the Nuclear Regulatory Commission, to implement a new procedure for discarding a wide array of longstanding energy and environmental regulations.

He told the agencies, which oversee everything from gas pipelines to power plants, to insert “sunset” provisions into their rules so that the rules automatically expire by October 2026. If an agency wanted to keep a rule, it could extend it for no more than five years at a time.

Experts say the directive faces major legal hurdles. But it was one of three executive orders Trump issued on Wednesday declaring that he was pursuing new shortcuts to weaken or eliminate regulations.

In another order, he directed a rollback of federal regulations that restrict the water flow of shower heads, offering a highly unusual legal justification.

“Notice and comment is unnecessary because I am ordering the repeal,” Trump’s order said.

Legal experts called the sentence surprising, saying it flouted decades of federal law. The Administrative Procedure Act of 1946 requires federal agencies to go through a lengthy “notice and comment” process when issuing, amending or repealing major rules, and agencies that skip these procedures generally find their actions blocked by the courts.

“In that respect, this is all patently illegal,” said Jody Freeman, director of the Harvard Law School Environmental and Energy Law Program and a former White House official under President Barack Obama. “It makes you wonder whether the real lawyers have left the building; they just want to throw all of this out and see if the courts bite.”

The regulatory process has often been criticized as cumbersome and time-consuming, and the idea of having all government regulations periodically expire has been promoted in conservative circles for many years. It is known as zero-based regulatory budgeting, a twist on zero-based financial budgeting, a system in which budgets are built from scratch each year instead of carrying over historical spending amounts.

The idea may have received a recent boost from Elon Musk, the billionaire adviser to Trump. “Regulations, basically, should be default gone,” Musk said during a public call on his social media site X in February. “Not default there, default gone. And if it turns out that removing a regulation missed the mark, you can always add it back.”

“We have to do a wholesale clean-up of regulations and get the government off the backs of everyday Americans so that people can get things done,” Musk added.

It is unclear how many rules the sunset order will affect. Legal experts said the executive order does not apply to permitting regimes or to regulations that are required by statute.

Michael Gerrard, director of the Sabin Center for Climate Change Law at Columbia University, noted that most environmental regulations appear to fall into that category, since they are mandated by laws passed by Congress.

White House spokeswoman Taylor Rogers said in a statement that the president was right to ensure Americans are not burdened by regulations that are unconstitutional, restrain American energy and competitiveness, or are inconsistent with federal law.

In another order, titled “Directing the Repeal of Unlawful Regulations,” Trump gave agency heads 60 days to identify federal rules they deem unlawful and to plan their repeal. The order said agency heads can bypass the notice-and-comment process by invoking exceptions that experts say are usually reserved for emergencies.

However, legal experts said the laws written by Congress, which govern the way federal agencies remove regulations, are extremely strict.

Typically, when a federal agency such as the EPA issues or changes a regulation, it first publishes the proposed rule and allows time for public comment. Agency officials then read and respond to the comments, providing detailed evidence in support of the changes they want to make and showing that they have addressed public concerns. The agency then publishes the final rule.

“The Administrative Procedure Act is a boring-sounding law that no one cares about, but in the legal profession we treat it as foundational,” Freeman said. “It tells the federal government that it must act purposefully, take public input and justify its actions rationally. It’s a promise that the government will not be arbitrary.”

There are specific conditions under which an agency can bypass certain steps – for example, if emergency regulations on aircraft safety need to be issued.

However, the Trump administration appears to be using this so-called good-cause exception to push for revoking much broader federal rules.

In the past, courts have had little patience when federal agencies tried to circumvent the regulatory process. During Trump’s first term, officials sometimes announced they had taken sweeping measures wiping out regulations, only to be reversed by the courts. According to a database maintained by the Institute for Policy Integrity at New York University, the administration lost 76% of the cases in which its environmental policies were challenged, a much higher loss rate than previous administrations.

This time, Trump administration officials may be hoping the courts will be more sympathetic. With three Supreme Court justices appointed by Trump, the court now has a large conservative majority that has expressed deep skepticism about environmental regulations.

In some cases, the administration’s actions may be legally defensible. For example, in moving to abolish shower water-flow restrictions, Trump called for redefining “shower head.” In that case, the White House could try to argue that it is repealing what is called an interpretive rule rather than a substantive regulation, and so need not go through the same legal process. But experts said an agency cannot claim it is allowed to skip those steps simply because Trump said so.

“It may be that no notice and comment is necessary,” said Jonathan Adler, a conservative legal scholar at Case Western Reserve University, “not because Trump ordered the repeal, but because the only thing being repealed is a definition, and then there’s a question of whether that is an interpretive rule.”

Some say Trump’s plan to let regulations expire every five years could make it difficult for businesses to plan for the future.

For example, the Federal Energy Regulatory Commission regulates everything from power lines to utility accounting, said Ari Peskoe, director of Harvard Law School’s Electricity Law Initiative. Under the new order, all of those rules would in theory have to expire regularly.

“The first section of that order talks about how businesses need certainty,” says Lisa Heinzerling, a law professor at Georgetown University. “But the whole order is a recipe for perpetual uncertainty.”

Source: www.nytimes.com

Paul McCartney warns that AI copyright law changes could rip off artists

In a recent statement, Sir Paul McCartney cautioned that artificial intelligence could be used to rip off artists if copyright laws were revised.

Speaking to the BBC, he expressed concerns that such a proposal might diminish the incentives for writers and artists, ultimately stifling creativity.


The issue of using copyrighted materials to train AI models is currently a topic of discussion in government talks.

As a member of the Beatles, McCartney emphasized the importance of copyright protection, stating that anyone could potentially exploit creative works without proper compensation.

He raised concerns about the financial ramifications of unauthorized use of copyrighted materials for AI training, urging the need for fair compensation for creators.

While the debate continues within the creative industry over the usage of copyrighted materials, some organizations have entered into licensing agreements with AI companies for model training.

McCartney has previously voiced apprehensions about the impact of AI on art, co-signing a petition alongside other prominent figures to address concerns about the unauthorized use of creative works for AI training.

In light of these developments, the government is conducting consultations to address the balance between AI innovation and protecting creators’ rights.

McCartney urged the government to prioritize the protection of creative thinkers and artists in any legislative updates, emphasizing the need for a fair and equitable system for all parties involved.

The intersection of AI technology and creative industries remains a complex and evolving space, with stakeholders advocating for clarity and fairness in policy making.

Source: www.theguardian.com

The arrest of Telegram CEO proves tech giants are not exempt from the law

On August 24, when the Russian tech tycoon’s private jet landed at Le Bourget airport northeast of Paris, officers from the French judicial police were waiting for him. He was duly arrested and taken in for questioning. Four days later, he was indicted on 12 charges, including distribution of child exploitation material and complicity in drug trafficking, banned from leaving France, placed under “judicial supervision,” and required to report to the gendarmes twice a week until further notice.

The tycoon in question, Pavel Durov, is a tech entrepreneur who collects nationalities the way others collect airline miles. His French citizenship was generously bestowed by President Emmanuel Macron in 2021. Durov also appears to be a fitness fanatic with a strict daily routine: according to a Financial Times report, after a recorded eight hours of sleep, “without exception, he starts his days with 200 push-ups, 100 sit-ups and an ice bath. He doesn’t drink alcohol, smoke, eat sugar or meat, and takes time to meditate.” When not engaged in these demanding activities, he has also found time to be a sperm donor, father over 100 children, and rival Elon Musk as a free speech extremist.

Durov’s media profiles recall Churchill’s famous description of Russia as “a riddle wrapped in a mystery inside an enigma.” Durov left Russia after the Facebook clone he co-founded with his brother Nikolai in 2006 brought him into conflict with the Kremlin. He eventually emigrated to the United Arab Emirates, where he launched Telegram, a private social media platform as mysterious as its founder.

Telegram has around 950 million regular users. It is a messaging system like WhatsApp, but it allows groups of up to 200,000 people, whereas WhatsApp caps them at 1,024; in that sense it is also a broadcasting system like X. One-to-one communication is only end-to-end encrypted if the user selects the “Secret Chat” option, and since many internet users never change default settings, in effect, according to one security expert, “the vast majority of Telegram one-to-one conversations, and literally all group chats, are likely viewable on Telegram’s servers.”

Given that, it’s puzzling why there are so many bad actors on the platform. After all, rats generally hate sunlight. As one critic puts it: “Telegram is the closest thing to a widespread dark web. Nearly a billion ordinary people are in contact with criminals, hackers, terrorists and child abusers. Despite the lack of technical security and privacy, the platform is a honeypot for people operating in the shadows.” The reason they stay may be that Durov doesn’t believe in content moderation. In fact, he sometimes boasts about how lean his operation is. Like Musk, he doesn’t believe in expensive moderation teams. And one of the reasons France is believed to have prosecuted him is his company’s refusal to cooperate with law enforcement agencies investigating criminal activity on the platform.

Telegram’s finances are also shrouded in mystery. A detailed Financial Times examination of the company’s 2023 business plan reveals a loss of $173 million for that year. The company’s business model is vague, consisting of basic advertising, subscriptions and (wait for it!) the Toncoin cryptocurrency. There was talk of an IPO before Durov’s arrest, but that now seems like a pipe dream.

But all this is just noise obscuring the landmark importance of Durov’s arrest in a broader context. For the past 30 years, the democratic world has brooded over two challenges posed by technology and its corporate controllers. The first is the immunity given to tech tycoons by Section 230 of the Communications Decency Act of 1996, which absolved them of responsibility for the content displayed on their platforms. The second is the conflict between local laws and global technology that transcends borders.

Now, just as Durov’s plane was landing at Le Bourget, a US district court judge issued a landmark ruling signalling that the free ride given to companies by Section 230 may be coming to an end. French law officials, meanwhile, have signalled to tech moguls that while they may think they rule the world, France controls its own airspace. That’s why Musk might have to think twice about flying over Europe in the future. Long live France!

What I’m Reading

Hold that thought
A lovely, quirky essay by Joseph Epstein in the London Review of Books on the art of hard thinking.


Authority
The dangers of state power – a transcript of a wonderful interview that Yascha Mounk conducted with the late, great anthropologist James C. Scott.

Black Book
Roland Allen’s entertaining essay in the Walrus, “Moleskine Mania: How a Notebook Conquered the Digital Era,” turns its eye to the strange persistence of the black notebook.

  • Do you have an opinion on any issue raised in this article? If you would like to submit a letter of 250 words or less for consideration for publication, please email it to observer.letters@observer.co.uk

Source: www.theguardian.com

Limitations of Social Media Law Exposed by Musk’s Incitement: A TechScape Analysis

What can the UK government do about Twitter? What should it do about Twitter? And what does Elon Musk want?

The billionaire proprietor of the social network, now officially referred to as X, has had an eventful week causing disruptions on his platform. Besides his own posts, which included low-quality memes sourced from 8chan and reposted false alarms from far-right figures, the platform as a whole, along with the other two of the three “T”s, TikTok and Telegram, briefly played a significant role in orchestrating the chaos.

There is a consensus that action needs to be taken: Bruce Daisley, former VP EMEA at Twitter, proposes individual accountability.

In the near term, Musk and other executives should be reminded of their legal liability under current laws. The UK’s Online Safety Act 2023 should be promptly bolstered. Prime Minister Keir Starmer and his team should carefully consider whether Ofcom, the media regulator frequently criticized over its handling of broadcasters like GB News, can keep pace with the fast-moving behavior of someone like Musk. In my view, the threat of personal consequences weighs far more heavily on corporate executives than the prospect of a corporate fine. If Musk continues to incite unrest, an arrest warrant might produce sparks from his fingertips, but for a jet-setting personality it could be a compelling deterrent.

Last week, London Mayor Sadiq Khan presented his own suggestion.

“The government swiftly realized the need to reform the online safety law,” Khan told the Guardian in an interview. “I believe that the government must ensure that this law is suitable immediately. I don’t think it currently is.”

“Responsible social media platforms can take action,” Khan remarked, but added that “if they fail to address their own issues, regulation will be enforced.”

When I spoke on Monday to Ewan McGaughey, a law professor at King’s College London, he offered more precise recommendations on what the government could do. He noted that the Communications Act 2003 underpins many of Ofcom’s powers and is used to regulate broadcast television and radio, but its reach extends beyond those media.

Because section 232 specifies that “television licensable content services” include distribution “by any means involving the use of an electronic communications network,” the Act already arguably empowers Ofcom to regulate online media content. While Ofcom could exercise this power, it is highly improbable that it will, since it anticipates challenges from tech companies, including those whose platforms fuel riots and conspiracy theories.

Even if the regulator or the government were reluctant to reinterpret the old law, minor amendments could subject Twitter to stricter broadcast-style regulatory oversight, he added.

For instance, there is no distinction between Elon Musk posting a video on X about (so-called) two-tier policing, discussing “detention camps” or asserting “civil war is inevitable”, and ITV, Sky, or the BBC broadcasting the same… The Online Safety Act is grossly insufficient, as its constraints merely target “illegal” content and do not inherently address false or dangerous speech.

The law of keeping promises


Police in Middlesbrough responded to a mob spurred by social media posts this month. Photo: Gary Culton/Observer

It may seem peculiar to feel sympathy for an inanimate object, but the Online Safety Act has likely been treated quite harshly given its minimal enforcement. A comprehensive law encompassing over 200 individual clauses, it was enacted in 2023, but most of its modifications will only take effect once Ofcom has completed the extensive consultation process and established a code of practice.

The law introduces a few new offenses, such as bans on cyber-flashing and upskirt photography. Sections of the old law, referred to as malicious communications, have been substituted with new, more precise laws like threatening and false communications, with two of the new offenses going into effect for the first time this week.

But what if this had all happened earlier and Ofcom was operational? Would the outcome have been different?

The Online Safety Act is a peculiar piece of legislation: an effort to curb the worst impulses on the internet, drafted by a government taking a stance in favor of free speech amidst a growing culture war and enforced by regulators staunchly unwilling to pass judgment on individual social media posts.

What transpired was either a skillful act of navigating a tricky situation or a clumsy mishap, depending on who you ask. The Online Safety Act does not outright criminalize everything on the web; instead, it mandates social media companies to establish specific codes of conduct and consistently enforce them. For certain forms of harm like incitement to self-harm, racism, and racial hatred, major services must at least provide adults with the option to opt out of such content and completely block it from children. For illegal content ranging from child abuse imagery to threats and false communications, it requires new risk assessments to aid companies in proactively addressing these issues.

It’s understandable why this legislation faced significant backlash upon its passage: its main consequence was a mountain of new paperwork in which social networks had to demonstrate adherence to what they had always purportedly done: attempting to mitigate racist abuse, addressing child abuse imagery, enforcing their terms of use, and so forth.

Advocates of the law argue that it serves less to force companies to change their behavior than to give Ofcom a way to hold them to their own promises. The easiest way to incur a penalty under the Online Safety Act – potentially amounting to 10% of global turnover, echoing the GDPR – is to announce loudly to customers that steps are being taken to tackle issues on the platform, and then do nothing.

One could envision a scenario where the CEO of a tech company, the key antagonist in this play, stands before an inquiry, solemnly asserting that the reprehensible behavior they witness violates their terms of service, then returning to their office and taking no action.

The challenge for Ofcom lies in the fact that multinational social networks are not governed by cartoonish villains who flout legal departments, defy moderators, and whimsically enforce one set of terms of service on allies and a different one on adversaries.

Except for one.

Do as I say, not as I do

Elon Musk’s Twitter has emerged as a prime test case for online safety laws. On the surface, the social network appears relatively ordinary: its terms of service prohibit the dissemination of much of the same content as other major networks, with a slightly more lenient stance on pornographic material. Twitter maintains a moderation team that employs both automated and human moderation to remove objectionable content, an appeals process for individuals alleging unfair treatment, and progressive penalties that could ultimately lead to account suspensions for violations.

However, there’s an additional layer to how Twitter operates: whatever Elon Musk says, goes. For instance, last summer a prominent right-wing influencer shared child abuse imagery documenting crimes whose perpetrator had received a 129-year prison sentence. The motive remains unclear, but the account was swiftly suspended. Musk then intervened:

The only people who have seen these photos are members of the CSE team. At this time, we will remove these posts and reinstate your account.

— Elon Musk (@elonmusk) July 26, 2023


While Twitter’s terms of service theoretically prohibit many of the egregious posts related to the UK riots, such as “hateful conduct” and “inciting, glorifying, or expressing a desire for violence,” they do not seem to be consistently enforced. This is where Ofcom may potentially take aggressive actions against Musk and his affiliated companies.

If you wish to read the entire newsletter, subscribe to receive TechScape in your inbox every Tuesday.

Source: www.theguardian.com

EU accuses Meta of breaking digital law by charging for ad-free social network

According to the European Commission, Meta, led by Mark Zuckerberg, has breached the EU’s new digital law with its advertising strategy. This model involved charging users for access to ad-free versions of Facebook and Instagram.

Last year, Meta introduced a “pay or consent” system to comply with EU data privacy regulations. Under this model, users can pay a monthly fee to use Facebook and Instagram without ads and without their personal data being used for advertising. Non-paying users agree during sign-up to have their data used for personalized ads.

The European Commission, the executive body of the EU, stated that this model does not align with the Digital Markets Act (DMA) created to regulate big tech companies. The Commission’s initial findings of the “Pay or Consent” investigation revealed that this model coerces users into consenting to data collection across various platforms. Additionally, users are not given the option to choose services that use less data but are similar to the ad-supported versions of Facebook and Instagram.

The Commission expressed that this alternative does not offer users a comparable less personalized version of the Meta network, forcing them to agree to data integration. To comply with the DMA, Meta would need to launch a version of Facebook or Instagram using less user data.

In response, a Meta spokesperson mentioned that the new model was designed to adhere to regulatory requirements such as the DMA. They highlighted that subscriptions as an alternative to advertising are a common business model and were implemented to address various obligations.

Skip Newsletter Promotions

The European Commission is required to complete its investigation by the end of March next year. Meta could face fines of up to 10% of its global turnover, which would amount to about $13.5 billion (£10.5 billion). The Commission recently found Apple guilty of violating the DMA by impeding competition in its app store.

Source: www.theguardian.com

Impact of the EU’s Proposed AI Regulation Law on Consumers

The European Parliament has approved the EU’s proposed AI law, marking a significant step in regulating the technology. The next step is formal approval by EU member states’ ministers.

The law will take effect in stages over the next three years, addressing consumer concerns about AI technology.

Guillaume Couneson, a partner at law firm Linklaters, emphasized the importance of users being able to trust that the AI tools they can access have been vetted and are safe, much as they trust that their banking apps are secure.

The bill’s impact extends beyond the EU as it sets a standard for global AI regulation, similar to the GDPR’s influence on data management.

The bill’s definition of AI includes machine-based systems with varying autonomy levels, such as ChatGPT tools, and emphasizes post-deployment adaptability.

Certain risky AI systems are prohibited, including those manipulating individuals or using biometric data for discriminatory purposes. Law enforcement exceptions allow for facial recognition use in certain situations.

High-risk AI systems in critical sectors will be closely monitored, ensuring accuracy, human oversight, and explanation for decisions affecting EU citizens.

Generative AI systems are subject to copyright laws and must comply with reporting requirements for incidents and adversarial testing.

Deepfakes must be disclosed as artificially generated or manipulated, with appropriate labeling for public understanding.

AI and tech companies have varied reactions to the bill, with concerns about limits on computing power and potential impacts on innovation and competition.

Penalties under the law range from fines for false information provision to hefty fines for breaching transparency obligations or developing prohibited AI tools.

The law’s enforcement timeline and establishment of a European AI Office will ensure compliance and regulation of AI technologies.

Source: www.theguardian.com

The Future of Communication: What Changes with Britain’s New Snooper’s Charter Law | John Naughton

Back in 2000, the Blair government introduced the Regulation of Investigatory Powers Bill, which enshrined formidable surveillance powers into law. Long before Edward Snowden revealed his secrets, it was clear to those paying attention that the British deep state was gearing up for the digital age. The powers implicit in the bill were so broad that one might have expected it to face a stormy passage through the House.

However, the majority of MPs didn’t seem interested in the bill. Only a handful of the 659 elected members seemed at all concerned about what was being proposed. Most of the work to improve the bill as it passed through parliament was done by a small number of members of the House of Lords – some of them hereditary peers – rather than by elected members. It was eventually revised and became law (nicknamed Ripa) in July 2000.

In 2014, the government commissioned David Anderson QC (now KC) to review the law’s operation; he recommended that new legislation be enacted to clarify the questions Ripa raised. Home Secretary Theresa May introduced a new investigatory powers bill in the House of Commons in 2015, which was scrutinized by a joint committee of the Lords and the Commons. It became the Investigatory Powers Act (or “snooper’s charter”) in November 2016. The following month, the European Court of Justice ruled that the general retention of data legalized by the act was unlawful.

In 2022, the Home Office conducted a review of how the act worked. It concluded that the law had “largely achieved its objectives” but that further significant reforms were needed “to take into account advances in technology and the evolving demands of protecting national security and tackling serious crime.” Spies needed legislative support and more formally sanctioned wiggle room.

The Investigatory Powers (Amendment) Bill is currently before parliament at Westminster. “The world has changed,” the blurb says. “Technology is advancing rapidly and the types of threats the UK faces continue to evolve.” It aims to enable the security and intelligence agencies to respond to a range of evolving threats. And of course, this is global Britain, so “world-leading safeguards within the IPA will be maintained and strengthened”.

Upon closer inspection, the bill would give the security services more latitude in building and exploiting so-called bulk personal datasets and in collecting and using CCTV footage and facial images. It also allows for the “collection and processing of internet connection records” for generalized mass surveillance.

The bill will force technology companies, including those based overseas, to inform the UK government of any plans to improve security or privacy measures on their platforms before those changes take effect. Apple, for instance, views this as an “unprecedented overreach by the government” that could see the UK “covertly veto new user protections globally and prevent us from delivering them to our customers”.

A hat-trick, at least for global Britain.

What I’m Reading

Gut level
Cory Doctorow’s Marshall McLuhan lecture on enshittification, or the way digital platforms tend to deteriorate. A record of an event you’ll never forget.

X factor
A great blog post by Charles Arthur, former technology editor of the Guardian. Summary: think before you tweet. Or maybe you should just quit.

Apocalypse again
A sombre Politico column by Jack Shafer on the recent wave of layoffs in American news organizations.

Source: www.theguardian.com

India’s New Telecommunications Law raises Privacy Concerns as it Clears the Way for Musk’s Starlink

With more than 1.17 billion phone connections and 881 million internet subscribers, India aims to modernize connectivity and introduce new services such as satellite broadband just months before general elections. Its parliament has passed a telecommunications bill that replaces rules more than a century old.

India’s upper house of parliament on Thursday approved the Telecommunications Bill 2023 by voice vote, with many opposition leaders absent due to suspension, just a day after the bill was passed by the lower house. The bill repeals rules dating back to the telegraph era of 1885, giving Prime Minister Narendra Modi’s government a mandate to manage telecommunications services and networks in the interest of national security, the authority to monitor data, and a legal basis to intercept communications.

The newly passed bill also allows spectrum for satellite-based services to be allocated without an auction, a move that favors companies such as OneWeb, Starlink, and Amazon’s Kuiper, which want to launch satellite broadband in the world’s most populous country and have long demanded an administrative process for spectrum allocation rather than auctions. India’s Jio is trying to compete with the three global companies with its homegrown satellite broadband service, but it has relatively limited resources and had previously backed a different spectrum allocation model.

The bill also requires biometric authentication for subscribers to limit fraud and limits the number of SIM cards each subscriber may use. Additionally, it includes provisions for civil monetary penalties of up to $12,000 for violations of certain provisions and up to $600,400 for violations of conditions established by law.

The bill also amends the Telecom Regulatory Authority of India Act, 1997, as the government seeks to attract foreign investors by increasing private participation. The amendments allow private-sector executives with more than 30 years of professional experience to be appointed as the regulator’s chairperson, and those with 25 years or more to serve as members. Previously, only retired civil servants could serve as the regulator’s chairpersons and commissioners.

“This is a very comprehensive and very large-scale structural reform born out of the vision of Prime Minister Shri Narendra Modi Ji. The telecom sector will shed the legacy of old scams, and this bill will pave the way for it to become a sunrise sector,” said Ashwini Vaishnaw, India’s telecom minister, while introducing the bill in parliament.

Interestingly, the final Telecommunications Bill drops the term “OTT,” which the first draft last year used to bring over-the-top messaging apps such as WhatsApp, Signal, and Telegram under regulation. Industry groups such as the Internet and Mobile Association of India, whose members include Google and Meta, have praised the change. However, the scope of the regulation is not clearly defined throughout the document. Shivnath Thukral, head of India public policy at Meta, warned in an internal email that the government may in future have the power to classify OTT apps as telecommunications services and subject them to licensing regimes, according to a report by Indian outlet Moneycontrol.

Digital rights activists and privacy advocates have also raised concerns about the ambiguity surrounding the regulations and the lack of public consultation on the final version of the bill.

Apar Gupta, founding director of the digital rights group Internet Freedom Foundation, said at a public event earlier this week that the bill lacks safeguards for those targeted by surveillance.

“The Department of Telecommunications still refuses to create a central repository of internet shutdowns, thereby reducing transparency. It completely ignores the core reforms that telecommunications rules require,” he emphasized.

Digital rights group Access Now called for the bill to be withdrawn and a new draft to be drafted through consultation.

“This bill is regressive because it entrenches colonial-era powers to intercept communications and shut down the internet, and it undermines end-to-end encryption, which is critical to privacy,” said Namrata Maheshwari, Asia-Pacific policy adviser at Access Now, in a prepared statement.

The bill is currently awaiting approval from the President of India to become an official law.

Source: techcrunch.com

Apple to require court orders before disclosing customer push notification data to law enforcement

WASHINGTON, Dec. 12 (Reuters) – Apple (AAPL.O) says it now requires a judge’s order before turning over information about its customers’ push notifications to law enforcement, bringing the iPhone maker’s policy in line with rival Google’s and raising the hurdles authorities must clear to obtain app data about users.

The new policy was not officially announced but appeared in law enforcement guidelines published by Apple within the past few days. It follows revelations by Oregon Sen. Ron Wyden that officials had requested such data not only from Apple but also from Alphabet Inc.’s Google (GOOGL.O), which makes the operating system for Android phones.

Apps of all kinds rely on push notifications to alert smartphone users to incoming messages, breaking news, and other updates. These are the audible “dings” or visual indicators users receive when an email arrives or a sports team wins a game. What users often do not realize is that almost all such notifications travel through Google’s and Apple’s servers.

In the letter, first reported by Reuters last week, Wyden said the practice gives the two companies unique insight into the traffic flowing from these apps to users, putting them “in a unique position to facilitate government surveillance of how users are using particular apps.”



Apple and Google both acknowledged receiving such requests. Apple’s guidelines previously stated that such data could be obtained “with a subpoena or greater legal process.” That text has now been updated to require the more stringent standard of a court order or search warrant.

Apple has not released an official statement. Google did not immediately respond to a request for comment.

Wyden said in a statement that Apple is “doing the right thing by aligning with Google in seeking a court order to turn over data related to push notifications.”

Source: nypost.com

Understanding the Rule of X: A Guide for Cloud Leaders on Balancing Growth and Profits

As interest rates return to historical norms, the world has refocused on the cost of capital and free cash flow generation. Companies are working hard to adhere to traditional heuristics like the Rule of 40 (the idea that the sum of revenue growth rate and profit margin should equal 40% or more, a metric Bessemer helped popularize). Executives at both private and public cloud companies often assume that free cash flow (FCF) margins are just as important as growth, if not more so, and that the trade-off between the two is 1:1. Many finance executives love the Rule of 40 for its clarity, but placing equal emphasis on growth and profitability in late-stage businesses is flawed and leads to bad business decisions.

our view

For companies with adequate FCF margins, growth must remain the top priority. There are good reasons to emphasize efficiency, but the traditional Rule of 40 math is simply wrong once a company approaches break-even and generates positive free cash flow.

The world has hyper-rotated to an FCF-margin mindset instead of a growth mindset, which runs counter to efficient business growth. Long-term models show that growth should be valued at least two to three times more than FCF margin, even in tight markets.

Equivalent emphasis on growth and profitability in late-stage businesses is flawed and leads to bad business decisions.

why?

An increase in margin has a linear effect on value, but an increase in growth rate can have a compound effect on value. We provide detailed calculations below, but backtesting the relative importance of growth and FCF margin against public market valuations confirms it. The actual ratio varies widely in the short term (ranging from about 2x to about 9x over the past few years), but over the long term the market typically values growth at two to three times profitability.

Even the most conservative financial planners can safely weight growth at up to 2x profitability for late-stage private companies. Publicly traded companies with a low cost of capital can use multiples of up to 2-3x (as long as the growth is efficient).

Image credits: Bessemer Venture Partners

Source: techcrunch.com