On Tuesday, Donald Trump’s media company announced that institutional investors have agreed to acquire $2.5 billion in stock, with plans to build a bitcoin reserve from the proceeds.
Around 50 institutional investors are expected to put $1.5 billion into a private placement for Trump Media and Technology Group, the company behind Truth Social, alongside a $1 billion conversion of senior notes into common stock, according to the company’s statement.
Trump Media plans to use the proceeds to establish a “bitcoin treasury,” an initiative that mirrors the president’s own push to develop a “strategic bitcoin reserve” for the U.S. government.
Devin Nunes, the former congressman who is now Trump Media’s CEO and chairman, said in a press release: “We view Bitcoin as an apex instrument of financial freedom.” The purchases would make cryptocurrency a significant portion of Trump Media’s assets. Nunes added that holding a substantial amount of bitcoin would support subscription payments and a planned Truth Social “utility token,” a form of cryptocurrency used for purchases within apps on a designated blockchain.
Trump, who during his first term described cryptocurrency as “not money” and critiqued its value as “based on thin air,” has since shifted his perspective on the technology. He was the first major candidate to accept cryptocurrency donations during a campaign, and since assuming office he has introduced his own cryptocurrency.
Just last week, Trump hosted about 220 top buyers of another cryptocurrency venture, his memecoin, at a lavish dinner at a luxury golf club in Northern Virginia, prompting allegations that he has blurred the lines between his responsibilities as president and his personal financial interests.
At an event at his Mar-a-Lago club in Florida in May 2024, during the presidential campaign, Trump received assurances that backers from the cryptocurrency sector would significantly fund his re-election effort. He went on to address major bitcoin events, and Vice President JD Vance is scheduled to speak at one such gathering this week.
A pioneering industrialist decides the world needs to hear his views and uses his fortune to acquire a media outlet. His tirades embolden anti-democratic forces around the world, tapping into humanity’s darker instincts.
That description may evoke Elon Musk’s social media platform X in 2025, yet it also describes Henry Ford and his publication, the Dearborn Independent, in the 1920s. Ford, creator of the Model T, bought the suburban Detroit weekly to propagate antisemitic narratives. The Dearborn Independent published a notorious series titled “The International Jew,” blaming Jewish communities for a host of societal ills and disseminating the fabricated “Protocols of the Elders of Zion.” The Nazis even awarded Ford a medal for his fervent beliefs.
Ford epitomizes a longstanding pattern of influential figures buying media platforms to push controversial views. Such figures have reached vast audiences through the dominant technology of their times, whether rapid newspaper distribution or Ford’s extensive network of automobile dealers.
Buy a new Model T in that era and you would find a copy of the Dearborn Independent waiting on the seat. Newspapers were powerful institutions then, and the Dearborn Independent became one of the country’s highest-circulation papers, topping 750,000 copies per issue at its peak.
After acquiring the Dearborn Independent, Henry Ford launched a long-running series entitled “The International Jew.” Credit… Library of Congress
Unlike Ford, later media moguls such as Rupert Murdoch generally employed like-minded editors and anchors to express their views. The Dearborn Independent billed itself as the “Ford International Weekly” and even carried a full-page editorial signed by Ford himself.
Musk’s behavior mirrors Ford’s hands-on approach. The billionaire behind Tesla and SpaceX has eagerly shared, reposted, and endorsed false or sensational claims on X, such as the assertion that Social Security is riddled with fraud.
There are many precedents for what Musk has done with X, but he has taken the practice to an unprecedented level. The platform says he has 220 million followers, a staggering if hard-to-verify number. Even if only a fraction of that figure is accurate, X is optimized to amplify his posts as widely as possible, ensuring they are seen and discussed.
When Musk acquired Twitter in 2022 for $44 billion, the price seemed absurd to many. Initially dismissed as a billionaire’s toy, the platform became a tool of political leverage during last year’s elections, as Musk allied himself with Donald J. Trump and used it to target adversarial voices within the government.
The repercussions are still unfolding. For Musk, the maneuver was a significant triumph: under the guise of improving government efficiency, he dismissed regulators poised to oversee his vast empire, and he now wields far more authority over his vehicle and rocket ventures. (X representatives declined to comment.)
“This is something we’ve never encountered before,” said Rick Perlstein, author of a comprehensive history of modern American conservatism. Noting Musk’s frequent use of memes and visuals, he observed: “It’s the politics of the nervous system rather than higher cognitive functions. There’s no rational discourse; it’s simply fear-based.”
Moguls in both the U.S. and Britain have used media ownership to wield influence since modern newspapers emerged in the late 19th century. During World War I, England’s Viscount Northcliffe controlled about 40% of morning and 45% of evening newspaper circulation. His publications included the Daily Mail, popular among the working class, and The Times, which appealed to the elite.
Viscount Northcliffe, born Alfred Harmsworth, played a pivotal role in forcing Prime Minister Herbert Asquith to resign in December 1916. His sway was so significant that in 1917 German forces reportedly tried to kill him, shelling his coastal residence.
In the U.S., media ownership often had a local flavor. In West Texas in the early 1960s, the ultra-conservative Whittenburg family owned the Amarillo Daily News and controlled the dominant NBC TV and radio stations, facing little opposition.
“If you feed a population a steady far-right media diet, you can push them toward extreme right-wing views,” said the historian Jeff Roche, author of “The Conservative Frontier.” “Amarillo consequently became America’s most conservative city.”
Simon Potter, a professor of contemporary history at the University of Bristol who specializes in mass media, remarked: “From the inception of the newspaper industry, media ownership and political influence have been intertwined. Concerns about this close relationship with politics have persisted ever since. Does it truly serve the public interest?”
Central to that inquiry is a question: do these megaphones genuinely empower their owners, or are they merely shouting into the void? Musk’s American forerunner, William Randolph Hearst, offers insight. Hearst, owner of the New York Journal, dispatched correspondents to Cuba in 1897 to cover the conflict with Spain, driven less by humanitarian concern than by sensationalism.
The March 25, 1898, edition of the New York Journal showcased Hearst’s war coverage from correspondents in Cuba. Credit… Library of Congress
One version of the story casts Hearst as an all-powerful media magnate.
The Journal’s men on the scene found no conflict. “Everything is quiet,” the artist Frederic Remington reported. “There will be no war.” He wished to return home.
Hearst’s reputed reply: “Please remain. You furnish the pictures, and I’ll furnish the war.” In this telling, his newspaper’s drumbeat whipped up the tumult that pushed President William McKinley into a rapidly escalating conflict, culminating in the liberation of Cuba and the acquisition of a vital segment of the Spanish Empire.
The tale, first told by Hearst’s colleague James Creelman, later echoed in Orson Welles’s “Citizen Kane.” Though thoroughly debunked, the anecdote persists because it illustrates a powerful figure seemingly capable of conjuring a war from nothing.
Hearst’s own ambitions faltered, however, when he tried to convert his wartime momentum into a political career. He won a House seat in 1902 but was then defeated in two New York mayoral bids, and his 1906 campaign for governor of New York also ended in failure.
David Nasaw, author of “The Chief: The Life of William Randolph Hearst,” believes Musk uses X to consolidate support.
“I’ve never seen anything like Twitter as a platform for MAGA supporters,” he said.
Nasaw further suggests that Hearst reflected readers’ sentiments more than he shaped them. But he acknowledged that something unique is happening with Musk. Hearst, Ford, Viscount Northcliffe, and even the later lords of British media all shared one thing.
“They were outside the rooms of power, shouting to be heard,” Nasaw remarked. “Musk’s relationship with Twitter was crucial, but it served as a conduit, his ticket into political power itself. That dynamic is unprecedented.”
With Tesla’s sales plummeting, both Hearst and Ford could serve as cautionary tales for Musk: courting controversy through hateful rhetoric can severely damage one’s reputation, and it typically harms business.
Ford faced a libel suit stemming from the Dearborn Independent, and a boycott against him followed. He ultimately shuttered the publication in 1927, though he never renounced his ideology. The remnants of that controversy still echo.
In the 1930s, Hearst confronted President Franklin D. Roosevelt, running anti-Roosevelt broadsides on his publications’ front pages. As the editorials grew harsher, readers were forced to choose between the president and the publisher.
“They chose Roosevelt,” Nasaw recalled. “This choice led to Hearst’s ultimate self-destruction and the downfall of his newspaper.”
The media landscape is experiencing a significant transformation, with numerous traditional publications fading away, while various YouTube channels assert their influence rivals that of conventional television networks.
Piers Morgan, a former newspaper editor and current presenter who is raising funds to expand his YouTube venture, anticipates that more prominent media figures will migrate to the increasingly influential platform as viewer habits continue to shift.
“It’s similar to the shift from vinyl to digital music,” he noted. “People believed it would take ages, but the change happened swiftly.”
“In the UK, certain newspapers are dying. Which will still have a print edition in a decade? Look at younger demographics: those under 45 rarely buy print newspapers.”
Morgan owns the rights to his YouTube channel, *Piers Morgan Uncensored*, having acquired them from Rupert Murdoch’s empire after his previous agreement with News UK, reportedly worth £50 million over three years, ended. Now in his 60s, he acknowledges the transition is a “learning curve,” yet he champions YouTube for its flexibility and low cost.
He said his decision to fully embrace streaming was influenced by his four children. “All of them watch YouTube,” he remarked. “I rarely watch traditional TV, aside from live sports. Until last year, I was part of the outdated, rigidly scheduled 8pm live news format.”
While Morgan is known for his sharp commentary, his shift to YouTube reflects a broader trend in which media personalities, especially in the U.S. conservative landscape, amass millions of subscribers. Morgan aims to replicate the success of The Daily Wire, the conservative American outlet co-founded by the commentator Ben Shapiro, whose roster includes the Canadian psychologist Jordan Peterson.
YouTube now wields significant influence in the media sphere, drawing audiences once held by networks like ITV and Channel 4, while podcasters continue to build their presence and considerable financial strength. In the first quarter of 2025 alone, YouTube’s ad revenue exceeded $8.9 billion (£6.6 billion), up more than 10% year on year. By comparison, Channel 4’s total revenue for all of 2023, the most recent year available, was around £1 billion.
Morgan cited last year’s U.S. election, noting that YouTube reported over 45 million views of election-related content on Election Day, while 42.3 million viewers tuned into 18 cable and broadcast networks that night. The figures aren’t directly comparable, but Morgan argued they signal a shift:
“Prominent journalists have reached out to me about moving to my platform. I think legacy media companies need to ask themselves why people like me are venturing into this realm,” he said. “More will follow my lead, and I’m receiving intriguing inquiries from journalists.”
Morgan plans to emulate Gary Lineker’s Goalhanger Productions, which has produced hit podcasts in the UK. He envisions channels under the “Uncensored” brand covering genres including true crime, history, and sport, aimed directly at U.S. and global audiences rather than just the UK.
“Look at what Gary Lineker has achieved; he’s a close friend, and he’d be the first to credit Goalhanger’s success in the UK. In revenue terms, it’s substantial in America, and that’s just the beginning,” he stated. “It’s not solely about football; it’s about history too. They travel to America and stage large live shows, which are massively successful there.”
“I seldom cover British news. We didn’t even discuss the final election results, because my scope is broader: Is this of interest to viewers in the Middle East? What about in Australia?”
Morgan shared his vision of decreasing reliance on his brand, aspiring to build something sustainable and independent. Though he considers it an “early era,” he is optimistic about attracting investors, as his venture is already profitable.
“We don’t require funding,” he stated. “With nearly 4 million subscribers, my question to investors isn’t ‘just give me your money.’ It’s ‘what value do you bring to the table?’ ”
The Federal Trade Commission on Monday accused Meta of maintaining an illegal monopoly by buying up-and-coming startups, launching a groundbreaking antitrust trial that could dismantle a social media empire that changed the way the world connects online.
In a packed courtroom in Washington, D.C., the FTC opened its first antitrust trial of the Trump administration by claiming that Meta illegally cemented its social networking monopoly by acquiring Instagram and WhatsApp when they were small startups. These acquisitions, the FTC said, were part of a “buy-or-bury strategy.”
Ultimately, the purchases cemented Meta’s power, robbing consumers of alternative social networking options and removing competition, the government said.
“For more than 100 years, American public policy has insisted that firms must compete if they want to succeed,” Daniel Matheson, the FTC’s lead litigator in the case, said in his opening remarks. “The reason we are here is that Meta broke the deal.”
“They decided that it was too difficult to compete and it would be easier to buy a rival than to compete with them,” he added.
The trial, Federal Trade Commission v. Meta Platforms, poses the most consequential threat yet to the business empire of the company’s co-founder Mark Zuckerberg. If the government succeeds, the FTC is expected to ask Meta to sell off Instagram and WhatsApp, a remedy that could shift how Silicon Valley does business and upend the long pattern of big tech companies snapping up their younger rivals.
Still, legal experts warned that the FTC’s case may be difficult to win, because the government must prove a counterfactual: that Meta, previously known as Facebook, would not have achieved the same success without the acquisitions. It is also very rare, they said, to unwind a merger that was approved years ago.
“One of the hardest problems in antitrust is when industry leaders buy small potential competitors,” said Gene Kimmelman, a former senior official in the Obama administration’s Justice Department. Meta will argue, he said, that “they bought a lot of things that didn’t pan out or got integrated. How are Instagram and WhatsApp different?”
The case continues a long-standing bipartisan effort to rein in the vast power a small number of tech companies hold over commerce, the exchange of ideas, entertainment, and political discourse. Despite tech executives’ overtures to President Trump, his antitrust appointees have signaled they will stay the course.
The FTC’s case against Meta is the third major tech antitrust lawsuit to go to trial in the last two years. Last year, the Justice Department won its antitrust case against Google over monopolizing internet search; a federal judge will hear arguments over remedies, including a potential breakup, next week. The department also completed another trial against Google over monopolizing ad technology, a case still awaiting a federal judge’s decision.
Robert W. McChesney, an influential left-leaning media critic who argued that corporate ownership was bad for American journalism and that the Silicon Valley billionaires who dominate online information are a threat to democracy, died on March 25 at his home in Madison, Wisconsin.
The cause was glioblastoma, an aggressive brain tumor, said his wife, Inger Stole.
Professor McChesney was grounded in both academia and ink-on-paper journalism. He held a Ph.D. and taught communication at the university level, and he was a founder of The Rocket, the Seattle music magazine that reviewed Nirvana’s first single.
His main thesis, expressed in more than a dozen books and numerous articles and interviews, was that the corporate-owned news media is overly compliant with certain political forces, limiting the views Americans are exposed to. He further argued that the internet, once promising a wild-west marketplace of opinion, was being squeezed by a few huge owners of online platforms.
An early book, “Rich Media, Poor Democracy” (1999), warned that the consolidation of journalism undermines democratic norms. In perhaps his most famous work, “Digital Disconnect: How Capitalism Is Turning the Internet Against Democracy” (2013), he rejected the utopian view that the digital revolution would deliver a populist cornucopia of sources and invigorate democracy.
Instead, he showed how the internet was destroying the business model of newspapers, hollowing out civic-minded coverage of local government in favor of lowest-common-denominator fluff: celebrity gossip, cat videos, and personal navel-gazing.
Professor McChesney laid the blame on capitalism.
“The profit motive, commercialism, public relations, marketing, and advertising, all critical features of modern corporate capitalism, are foundational to any assessment of how the internet has developed and is likely to develop,” he wrote.
Elon Musk’s xAI artificial intelligence company has acquired X, the social media platform formerly known as Twitter and also owned by Musk, in an all-stock deal valuing X at $33 billion, showcasing the billionaire’s rapid consolidation of his ventures.
The deal, announced on Friday, merges two of the companies in Musk’s sprawling portfolio, which also includes Tesla and SpaceX, and could aid Musk in training his AI model, Grok.
In a post on X, Musk declared: “The futures of xAI and X are intertwined. Today, we officially take the step to combine the data, models, compute, distribution and talent.”
Representatives for X and xAI did not immediately respond to requests for comment. Many details of the transaction remain unknown, including how investors will be compensated, how X’s leadership will be integrated into the new company, and whether the deal will face regulatory scrutiny.
Paolo Pescatore, an analyst at PP Foresight, described the development as “surprising and somewhat unexpected.” He added: “To some extent, it draws a line under a tumultuous chapter for X.”
Gil Luria, an analyst at D.A. Davidson & Co, noted: “The $45 billion price tag is no coincidence; it exceeds Twitter’s 2022 take-private transaction by $1 billion. It allows xAI investors to share in the value of the business with X’s co-investors.”
Musk, the world’s wealthiest individual, has amassed significant power in Washington, D.C., overseeing government efficiency and cost-cutting efforts in the Trump administration through Doge. That position could allow him to influence the institutions that oversee his business dealings.
xAI investors, now part of the combined entity, expressed no surprise over the deal, viewing it as a merger of leadership and management teams within Musk’s own orbit.
While Musk did not seek investor approval, the two companies have been working closely together to deepen X’s integration with Grok.
According to reports, Musk founded the xAI startup two years ago, and it has since secured $10 billion in funding at a valuation of $75 billion.
In February, a Musk-led consortium made a $97.4 billion bid for OpenAI, the maker of ChatGPT, which was rejected. Musk co-founded OpenAI in 2015 with its CEO, Sam Altman.
Musk is in direct competition with OpenAI and has sued in California federal court to prevent the rival from converting from a nonprofit into a commercial entity. A judge recently denied his request for a preliminary injunction to block the conversion.
The widespread adoption of AI software has sparked increased investment and competition in Silicon Valley. Companies are seeking ways to integrate software across various business functions for improved efficiency.
As AI competition intensifies, xAI is expanding its data centers to train more advanced models. Its supercomputer cluster Colossus, located in Memphis, Tennessee, is touted as the world’s largest.
In February, xAI released Grok-3, the latest iteration of its chatbot, positioned to compete with China’s DeepSeek and Microsoft-backed OpenAI. The X platform can help distribute xAI’s products and provide real-time user feedback.
Musk acquired Twitter for $44 billion in 2022, taking the platform private after its 2013 IPO and declaring that “the bird is freed.”
Following the acquisition, Musk restructured the company and alienated advertisers, leading to a significant revenue decline. As Musk’s political influence grew, however, brands eventually returned to X.
Sources familiar with the transaction said the seven banks that lent Musk money for the X acquisition had held the debt on their books for two years; heightened investor appetite for exposure to AI companies and improved operating performance at X recently allowed them to sell it off.
Investors who bought that debt from the banks stand to profit after the merger, according to Espen Robak, founder of Pluris Valuation Advisors. “Even if it’s not fully repaid, the debt is worth more now,” he said.
Separately, a US judge rejected Musk’s attempt to dismiss a lawsuit alleging he misled former Twitter shareholders by delaying disclosure of his initial investment in the company.
A study of more than 1,500 children suggests that smartphones may benefit kids’ mental and social well-being, unless they post frequently on social media.
Justin Martin at the University of South Florida and his colleagues surveyed children ages 11 to 13 in the state as part of a planned 25-year national study exploring the link between digital media and happiness.
The researchers found that 78% of the 1,510 children surveyed owned smartphones, and 21% of these reported symptoms of depression and anxiety. Children with phones were also more likely to report spending time in person with friends.
“We expected to find that smartphone ownership was related to negative outcomes or negative measures,” Martin says. “But it wasn’t.”
The researchers also found that children of lower-income parents were more likely to own smartphones than children of wealthier parents. Ownership was highest, at 87%, among children in households earning between $50,000 and $90,000, while only 67% of children in households earning over $150,000 had a smartphone.
Martin suggests this may reflect the policies of the schools the children attended, or wealthier parents’ greater awareness of negative headlines about the supposed risks of social media to children’s mental health.
But such bans (Florida became the first US state to introduce one, in 2023) may rest on shaky scientific ground, Martin says. “We were careful to emphasize associations rather than causality, but children with smartphones probably use them for social purposes, much like many adults,” he says.
Not all smartphone use appears beneficial, however. The researchers also found that children who said they often posted on social media were twice as likely to report sleep problems or symptoms of depression or anxiety as those who never used these platforms. That said, the study could not determine whether heavy social media use leads to mental health and sleep problems, or whether the reverse is true, says Martin.
“We recommend that parents and caregivers consider keeping children off the social platforms where they post frequently, or encourage them to avoid posting,” says Martin. “Of course, that’s a hard thing to tell your kids: ‘You can use Instagram, you can use TikTok, but don’t post.’ ”
The children surveyed were evenly divided on the merits of social media: 34% agreed that social media does more harm than good, 33% disagreed, and the rest were undecided.
“This is a compelling study that makes an important distinction between smartphones and social media,” says Jess Maddox at the University of Alabama. “The two are often treated as synonymous, but this study shows they are not actually the same.”
“These are really nuanced findings, and we hope they will encourage parents, educators and politicians to think less about bans and more about educating children on smartphones and social media,” she says.
David Ellis at the University of Bath, UK, says the work confirms similar findings from previous studies, but that more analysis is needed to understand what the data are telling us before deciding what to do about children’s smartphone use; without it, he says, conclusions strong enough to justify policy changes are difficult to draw.
Skeletons are at the center of Joyce Carol Oates’s online infamy. In 2021, the award-winning novelist delivered perhaps her most important contribution to the form: a diabolical tweet ruminating on the existentialism of Halloween.
For the uninitiated: the 86-year-old, five-time Pulitzer finalist used X (formerly Twitter) to share a photo of an impressively American-style Halloween display featuring dozens of plastic skeletons climbing the front of a house. “You can always recognize a place where no one has experienced much grief for a lost loved one,” she wrote; to such people, a body broken down into bones is just a joke.
This singular take on a Halloween tradition was met with a kind of bewildered online glee. In an increasingly polarized political world, we rarely encounter an opinion so unprompted and so strongly held. No culture war, no trending discourse, just a thought, sprung fully formed from Zeus’s forehead like Athena. Or like a ghost popping out at you in a haunted house, which, presumably, you also shouldn’t joke about.
This was no one-off, either. It is one example in a bewildering array of classic Joyce Carol Oates tweets, which together make a compelling case for another addition to her trophy shelf: social media’s poet laureate. I’m not being mean here. I genuinely love her tweets; they are one of the few things that still bring me joy on a platform increasingly overrun by Nazis. As the Twitter user Kaitlin Ruiz puts it, “She doesn’t need to explain herself. Oracles aren’t consistent.”
It is important to understand that Joyce Carol Oates has definitive opinions on trans rights (good!), dinosaur poaching (bad!), and skeletons (worrisome!). She doesn’t need to explain herself. Oracles are not consistent.
Oates is prolific in every part of her life. She has published 58 novels and, more importantly, some 170,000 tweets. Her targets are wide-ranging and abundant. There are political views, with dozens of tweets a day on Harris v Trump and Israel’s war in Gaza. She has been outspoken about trans rights and has written about JD Vance’s small eyes. But she is also unafraid of whimsy: her cat, for instance, pondering the trolley problem.
The point is that she posts frequently, with no agenda and nothing to sell, and with a seemingly unembarrassed honesty. In an interview, she dismissed the medium as “short-lived and quickly forgotten.” And yet her relationship with the platform feels pure. This is how to use Twitter.
The unexamined premise of the “trolley problem” is that the individuals involved, as far as we know, do not have subjective preferences or unconscious motives, including catnip inebriation, which the philosophers of the past never considered. You kill the poor… https://t.co/gze1kcjzyw

— Joyce Carol Oates (@joycecaroloates) May 27, 2024
All of it comes directly from her enormous creative brain, farm-to-table. “All we hear of ISIS is puritanical and punitive. Is there nothing celebratory and joyous?” she asked in 2015. “Are there any examples of women becoming obsessed with historical events?” she mused in 2023. And then there was the time she posted a genuinely distressing photo of her infected foot.
When Congress adjourned for the holidays in December, it left behind a groundbreaking bill aimed at overhauling how technology companies protect their youngest users. The Kids Online Safety Act (KOSA), introduced in 2022, was meant to be a massive reckoning for Big Tech. Instead, the bill languished and died in the House despite sailing through the Senate in July on a 91-3 vote.
KOSA was passionately defended by families who say their children fell victim to the harmful policies of social media platforms, and by advocates who say a bill curbing the unchecked power of Big Tech is long overdue. They are deeply disappointed that a strong chance to rein in Big Tech failed amid congressional indifference. Human rights groups, however, argued that the law could have had unintended consequences for freedom of speech online.
What is the Kids Online Safety Act?
KOSA was introduced nearly three years ago in the aftermath of bombshell revelations by the former Facebook employee Frances Haugen about the extent of social media platforms’ impact on younger users. It would have required platforms like Instagram and TikTok to address online risks to children through design changes and to allow younger users to opt out of algorithmic recommendations.
“This is a basic product safety bill,” said Alix Fraser, a director at the Council for Responsible Social Media at Issue One. “It’s complicated because the internet is complicated and social media is complicated, but essentially it’s just an effort to create basic product safety standards for these companies.”
The central and most controversial element of the bill was its “duty of care” clause, which declared that companies have an obligation to act in the best interests of minors using their platforms, language left open to interpretation by regulators. Platforms would also have been required to implement measures to reduce harm by establishing “safeguards for minors.”
Critics argued that the lack of clear guidance on what constitutes harmful content would encourage businesses to filter content more aggressively, with unintended consequences for free speech. Sensitive but important topics such as gun violence and racial justice could be deemed potentially harmful and subsequently filtered out by the companies themselves. These censorship concerns were particularly prominent in the LGBTQ+ community; opponents of KOSA said it could be disproportionately affected by conservative regulators, reducing access to critical resources.
“With KOSA, we see a well-intentioned but ultimately vague bill that requires online services to take unspecified actions to keep children safe,” said Bhatia, a policy analyst at the Center for Democracy and Technology, which opposes the law and receives funding from technology donors such as Amazon, Google, and Microsoft.
The complicated history of KOSA
When the bill was first introduced, more than 90 human rights groups signed letters opposing it, highlighting these and other concerns. In response to such criticism, the bill’s authors published a revised version in February 2024. Most notably, it shifted enforcement of the “duty of care” provision from state attorneys general to the Federal Trade Commission. Following these changes, many organizations, including GLAAD, the Human Rights Campaign, and the Trevor Project, withdrew their opposition, saying the amendments “significantly mitigate the risk of [KOSA] being misused to suppress LGBTQ+ resources or stifle young people’s access to online communities.”
However, other civil rights groups maintained their opposition, including the Electronic Frontier Foundation (EFF), the ACLU, and Fight for the Future, calling KOSA a “censorship bill” that would harm vulnerable users and freedom of speech. They argued that the duty-of-care provision could easily be weaponized against LGBTQ+ youth by a conservative FTC chair, as well as by state attorneys general. These concerns were borne out by the appointment of Republican Andrew Ferguson as Trump’s FTC chair, who said in a leaked statement that he planned to use his role to “fight the trans agenda.”
Concerns about how Ferguson will manage online content are “exactly what LGBTQ+ youth have written and called Congress about hundreds of times over the past few years of this fight,” says Sarah Philips of Fight for the Future. “The situation they were afraid of has come to fruition. Anyone who ignores it is really just putting their head in the sand.”
Opponents say that even though KOSA didn’t pass, it has already had a chilling effect on content available on certain platforms. A recent report by User Mag found that hashtags for LGBTQ+-related topics were classified as “sensitive content” and restricted from search. Laws like KOSA don’t take into account the complexity of the online landscape, said Bhatia of the Center for Democracy and Technology, and platforms are likely to engage in preemptive censorship to avoid litigation.
“Children's safety holds an interesting and paradoxical position in technology policy, where children benefit greatly from the internet, as well as vulnerable actors,” she said. . “Using policy blunt instruments to protect them can often lead to consequences that don't really take this into consideration.”
Supporters attribute the backlash against KOSA to aggressive lobbying from the tech industry, but Fight for the Future and the EFF, two of its top opponents, are not backed by big tech donors. Meanwhile, big tech companies were split on KOSA, with X, Snap, Microsoft, and Pinterest quietly supporting the bill while Meta and Google opposed it.
“KOSA was a very robust law, but what’s more robust is the power of big tech,” said Fraser of Issue One. “They hired all the lobbyists in town to take it down, and they succeeded.”
Fraser added that supporters are disappointed that KOSA didn’t pass, but “will not rest until federal law is passed to protect children online.”
The potential revival of KOSA
Beyond Ferguson as FTC chair, it is unclear what the new Trump administration and the changing composition of Congress will mean for the future of KOSA. Trump has not directly expressed his views on the bill, but some in his inner circle revealed their support after last-minute amendments to the 2024 bill promoted by Elon Musk’s X.
KOSA’s death in Congress may seem like the end of a winding and controversial path, but advocates on both sides of the fight say it’s too early to write the legislation’s obituary.
“We shouldn't expect the Kosa to go quietly,” said Prem Trivedi, policy director at the Institute for Open Technology, which opposes Kosa. “Whether it's being reintroduced or seeing if a different incarnation is introduced, it will continue to focus more broadly on online safety for children.”
Senator Richard Blumenthal, who co-authored the bill with Senator Marsha Blackburn, has promised to reintroduce it in a future legislative session, and other defenders of the bill say they won’t give up.
“I’ve worked with a lot of parents who are willing to talk about the worst days of their lives over and over again, in front of lawmakers, in front of staff, in front of the press, in the hope that something will change,” Fraser said. “They don’t intend to stop.”
Social media has always served as a funhouse mirror for society as a whole. The algorithms and amplification of our always-on online presence highlight the worst parts of our lives while obscuring the best. This is part of why we are so polarized today, two tribes screaming at each other on social media across a gaping chasm of despair.
This is what makes a statement released by one of the tech giants this week so alarming. Abandon hope, all ye who enter here. With less than two weeks until Donald Trump returns to the White House for a second term following the US presidential election, Meta, the parent company of Facebook, WhatsApp, Instagram, and Threads, announced major changes to its content moderation. In doing so, it has aligned itself with the president-elect’s views.
Meta CEO Mark Zuckerberg announced in a bizarre video message posted to his Facebook page on Tuesday that the platform would be eliminating fact-checkers. Their replacement? Mob rule.
Zuckerberg said the platform, whose apps are used by more than 3 billion people around the world every day, plans to adopt an Elon Musk-style community notes format to police what is and isn’t acceptable speech. Starting in the United States, the company plans to dramatically shift the Overton window toward those who can shout the loudest.
Meta’s CEO all but acknowledged that the move was politically motivated. “It’s time to get back to our roots around free expression,” he said, adding that restrictions on topics like immigration and gender were “out of touch with mainstream discourse.” He acknowledged past “censorship mistakes,” by which he likely meant the past four years of moderating political speech during the Democratic president’s tenure, and added that Meta would “work with President Trump to push back on foreign governments going after American companies and pushing to censor more.”
The most dog-whistle moment was a throwaway line: Meta’s remaining trust and safety and content moderation teams will be relocated out of liberal California, with its U.S. content moderation arm now based in solidly Republican Texas. The only thing missing from the video was Zuckerberg wearing a MAGA hat and carrying a shotgun.
Let me be clear: all business leaders make pragmatic decisions based on political circumstances. And few storms are as violent as Hurricane Trump as it approaches the United States. But few people’s decisions carry as much weight as Mark Zuckerberg’s.
Over the past 21 years, Meta’s CEO has become a central figure in society. Initially he oversaw a website used by college students; now billions of people from all walks of life use it. What began in the early 2000s as an eccentric pursuit of online fun is now the de facto public town square, in the words of Elon Musk. Where Meta goes, the world follows, online and offline. And Meta just executed a dramatic handbrake turn to the right.
Don’t just take my word for it; take the watchdog’s. “Today’s Meta announcement is a retreat from a sane and safe approach to content moderation,” the Real Facebook Oversight Board, an independent group that positions itself as an arbiter of Meta’s moves, said in a statement.
They say that because if there’s one thing we’ve learned from a decade of social media polarization, it’s that the angriest person wins the argument. Anger and lies spread readily on social media, contained only in part by the platforms’ willingness to intervene when things get out of hand. (Recall that exactly four years ago, Meta suspended Donald Trump from Facebook and Instagram for two years for inciting the violence that stormed the Capitol on January 6, 2021.)
Social networks have always struggled to control speech on their platforms. Whatever the outcome of any given debate, they are sure to annoy half the population. And these platforms have chronically underinvested in moderation while growing their businesses at all costs. They have long argued that effective moderation is a problem of scale, but it is a problem they created by pursuing scale at all costs.
To be sure, policing online speech is difficult, and the scale at which companies like Meta operate doesn’t make it any easier. But abandoning moderation entirely in favor of community notes is not the answer. Suggesting that this is a rational, evidence-based decision masks the reality: it’s a politically expedient move by someone who this week accepted the resignation of self-proclaimed “radical centrist” Nick Clegg as head of global policy, replacing him with a figure who leans toward the Republican Party, and who appointed Dana White, CEO of Ultimate Fighting Championship and a close Trump ally, to Meta’s board of directors.
In many ways, you can't blame Zuckerberg for bending the knee to Donald Trump. The problem is that his decisions have a huge impact.
This is an extinction event for the idea of objective truth on social media. That idea was already on life support, but one reason it was hanging on is that Meta had decided to fund independent fact-checking organizations in an effort to keep some element of social media tethered to accuracy and free from political bias. Night is day. Up is down. Meta is X. Mark Zuckerberg is Elon Musk. Strap in for four tumultuous, bitter, fact-free years online.
Britain’s Culture Secretary Lisa Nandy has reached out to video-sharing platforms like YouTube and TikTok, urging them to prioritize the promotion of high-quality educational content for children.
Recent data indicates a substantial shift in children’s viewing habits, with a significant decrease in TV consumption over the past decade. Instead, children, aged between 4 and 8, are increasingly turning to platforms like YouTube and TikTok for entertainment, according to Nandy.
During an interview on BBC Radio 4’s Today program, Nandy mentioned the government’s intention to engage in dialogue with these platforms initially, but warned of potential interventions if they do not respond positively.
She emphasized the importance of the high-quality educational content produced in the UK, which plays a crucial role in informing children about the world, supporting their mental well-being and development, and providing entertainment. However, she expressed concerns about the lack of similar quality in content on video-sharing platforms compared to traditional broadcasters.
Former BBC presenter Floella Benjamin, acting as a guest editor on the show, described these platforms as a “wild west” filled with inappropriate content.
Nandy highlighted the government’s efforts to remove harmful content for children and stressed the need to address deeper issues related to the quality of content children consume.
She acknowledged the democratic nature of platforms like YouTube, where individuals can build careers from home, but also emphasized the responsibility to ensure the content is appropriate for young viewers.
Regarding the decrease in funding for children’s television, Nandy mentioned the Young Audiences Content Fund as a positive initiative to boost production. She believed that increasing investment might not be the solution, as the focus should be on reaching all children, including those who do not watch traditional TV.
Despite concerns raised by Benjamin about a crisis in children’s television, Nandy praised the sector as a valuable asset for Britain, from networks like CBeebies to beloved shows like Peppa Pig. She emphasized the government’s role in supporting and nurturing this content, even if it may not be highly profitable.
Nandy admitted the challenges of monitoring her own son’s online activities but commended the platform’s filtering mechanisms and highlighted the positive influence of educational content like news programs.
Nandy confirmed contacting Ofcom to elevate the importance of children’s television in their regulatory considerations and urged a review of public broadcasting, anticipated in the summer.
She stressed the necessity of balancing the influx of investment from platforms like Netflix and Disney with preserving and promoting uniquely British content without overshadowing it.
This involves forming partnerships with public broadcasters to expand online content availability and ensure adequate recognition and support for their contributions, as per Nandy’s statements.
On TikTok, people claim that pouring castor oil on their belly buttons can cure endometriosis, aid in weight loss, improve complexion, and promote healthy hair. However, it’s important to question the scientific basis behind this viral trend. Castor oil is known for its stimulant and laxative effects, which can be beneficial for treating constipation and inducing labor, although there are more commonly used medications for these purposes.
In addition to its medicinal uses, castor oil is also utilized in cosmetics like lip balms and moisturizers due to its moisturizing and antibacterial properties. Nevertheless, there is a lack of research supporting or refuting the health benefits of applying castor oil to the belly button.
This practice may not make sense from a physiological standpoint, as the belly button served as a connection to the placenta during fetal development, providing oxygen and removing waste products. However, this connection is severed at birth, and oil does not enter the body through the belly button.
While massaging castor oil into the skin may offer temporary relief for certain conditions, such as menstrual cramps, it has not been proven effective for weight loss or pain relief, whether taken orally or applied topically. Scented essential oils have been shown to be more effective for aromatherapy purposes than unscented oils like castor oil.
Overall, while abdominal massage with castor oil may provide some relief for symptoms like constipation, it is not a substitute for proper medical treatment. It’s important to approach health trends with caution and rely on scientifically proven methods for healthcare.
Authors, publishers, musicians, photographers, filmmakers, and newspaper publishers have all opposed the Labor government’s proposal to create a copyright exemption for training algorithms by artificial intelligence companies.
Representing thousands of creators, various organizations released a joint statement rejecting the idea of allowing companies like OpenAI, Google, and Meta to use public works for AI training unless owners actively opt out. This was in response to the ministers’ proposal announced on Tuesday.
The Creative Rights in AI Coalition (Crac) emphasized the importance of respecting and enforcing existing copyright laws rather than circumventing them.
Included in the coalition are prominent entities like the British Recording Industry, the Independent Musicians Association, the Film Institute, the Writers’ Association, as well as Mumsnet, the Guardian, the Financial Times, the Telegraph, Getty Images, the Daily Mail Group, and Newsquest.
The intervention from these industry representatives follows statements by technology and culture minister Chris Bryant in Parliament, where he promoted the proposed system as a way to enhance access to content for AI developers while ensuring rights holders have control over its use. This stance was reinforced after Bryant mentioned the importance of controlling the training of AI models using UK content accessed from overseas.
Nevertheless, industry lobbying group Tech UK is advocating for a more permissive market that allows companies to utilize and pay for copyrighted data. Caroline Dinenage, chair of the Conservative Party’s culture, media, and sport select committee, criticized the government’s alignment with AI companies.
Mr. Bryant defended the proposed system to MPs by highlighting the need for a flexible regime that allows for overseas developers to train AI models with UK content. He warned that a strict regime could hinder the growth of AI development in the UK.
Creatives in the industry are urged to seek permission from generative AI developers, obtain licenses, and compensate rights holders if they wish to create or train algorithms for various media formats.
A collective statement from the creative industry emphasized the importance of upholding current copyright laws and ensuring fair compensation for creators when licensing their work.
Renowned figures like Paul McCartney, Kate Bush, Julianne Moore, Stephen Fry, and Hugh Bonneville have joined a petition calling for stricter regulations on AI companies that engage in copyright infringement.
Novelist Kate Mosse is also supporting a campaign to amend the Data Bill to enforce existing copyright laws in the UK to protect creators’ rights and fair compensation.
During a recent House of Lords debate, supporters of amendments to enforce copyright laws likened the government’s proposal to asking shopkeepers to opt-out of shoplifting rather than actively preventing it.
The government’s plan for a copyright exemption has faced criticism from the Liberal Democrats and other opponents who believe it is influenced by technology lobbyists and misinterpretations of current copyright laws.
Science Minister Patrick Vallance defended the government’s position by emphasizing the need to support rights holders, ensure fair compensation, and facilitate the development of AI models while maintaining appropriate access.
Over 140 Facebook content moderators have been diagnosed with severe post-traumatic stress disorder as a result of being exposed to distressing social media content, including violent acts, suicides, child abuse, and terrorism.
Dr. Ian Kananya, head of mental health services at Kenyatta National Hospital in Nairobi, diagnosed the moderators, who worked long hours at a facility in Kenya contracted by social media companies, with PTSD, generalized anxiety disorder (GAD), and major depressive disorder (MDD).
A lawsuit filed against Meta, Facebook’s parent company, and the outsourcing company Samasource Kenya, which employed moderators from across Africa, brought to light the distressing experiences faced by these employees.
Images and videos depicting disturbing content caused some moderators to have physical and emotional reactions such as fainting, vomiting, screaming, and leaving their workstations.
The lawsuit sheds light on the toll that moderating such content takes on individuals in regions where social media usage is on the rise, often in impoverished areas.
Many of the moderators in question turned to substance abuse, experienced relationship breakdowns, and felt disconnected from their families, due to the nature of their work.
Facebook and other tech giants use content moderators to enforce community standards and train AI systems to do the same, outsourcing this work to countries like Kenya.
A medical report submitted to the court depicted a bleak working environment where moderators were constantly exposed to distressing images in a cold, brightly lit setting.
The majority of the affected moderators suffered from PTSD, GAD, or MDD, with severe symptoms affecting a significant portion of them, even after leaving their roles.
Meta and Samasource declined to comment on the allegations due to the ongoing litigation.
Foxglove, a nonprofit supporting the lawsuit, highlighted the lifelong impact that this work has had on the mental health of the moderators.
The lawsuit aims to hold the companies accountable for the traumatic experiences endured by the moderators in the course of their duties.
Content moderation tasks, though often overlooked, can have significant long-term effects on the mental health of those involved, as seen in this case.
Meta stresses the importance of supporting its content moderators through counseling, training, on-site support, and access to healthcare, while implementing measures to reduce exposure to graphic material.
Social media platforms are required to take action to comply with UK online safety laws, but they have not yet implemented all the necessary measures to protect children and adults from harmful content, according to the regulator.
Ofcom has issued a code of conduct and guidance for tech companies to adhere to in order to comply with the law, which includes the possibility of hefty fines and site closures for non-compliance.
Regulators have pointed out that many of the recommended actions have not been taken by the largest and most high-risk platforms.
John Higham, Director of Online Safety Policy at Ofcom, stated, “We believe that no company has fully implemented all necessary measures. There is still a lot of work to be done.”
All websites and apps covered by the law, including Facebook, Google, Reddit, and OnlyFans, have three months to assess the risk of illegal content appearing on their platforms. Safety measures must then be implemented to address these risks starting on March 17, with Ofcom monitoring progress.
The law applies to sites and apps that allow user-generated content, as well as to large search engines, covering more than 100,000 online services in total. It lists 130 “priority offences,” including child sexual abuse, terrorism, and fraud, which tech companies must address by implementing moderation systems.
The new regulations and guidelines are considered the most significant changes to online safety policy in history according to Technology Secretary Peter Kyle. Tech companies will now be required to proactively remove illegal content, with the risk of heavy fines and potential site blocking in the UK for non-compliance.
Ofcom’s code and guidance include designating a senior executive responsible for compliance, maintaining a well-staffed moderation team to swiftly remove illegal content, and improving algorithms to prevent the spread of harmful material.
Platforms are also expected to provide easy-to-find tools for reporting content, with a confirmation of receipt and timeline for addressing complaints. They should offer users the ability to block accounts, disable comments, and implement automated systems to detect child sexual abuse material.
Child safety campaigners have expressed concerns that the measures outlined by Ofcom do not go far enough, particularly on suicide-related content and on encrypted platforms like WhatsApp, where removing illegal content is technically difficult.
In addition to addressing fraud on social media, platforms will need to establish channels for reporting instances of fraud to law enforcement agencies. They will also work on developing crisis response procedures for events like the summer riots that followed the Southport murders.
An exodus from Elon Musk’s X has cost the platform around 2.7 million active Apple and Android users in the U.S. over a span of two months. Bluesky, a competing social media platform, meanwhile gained nearly 2.5 million users during the same time frame.
This exodus coincided with the exit of several prominent figures, including directors Guillermo del Toro and Mike Flanagan, and actors Quinta Brunson and Mark Hamill. Some, like Alexandria Ocasio-Cortez, still have an X account but now use Bluesky more frequently.
According to digital market intelligence firm Similarweb, the number of daily active users on X in the U.S. has dropped by 8.4% since early October, from 32.3 million to 29.6 million.
On the other hand, Bluesky has experienced a significant increase of 1,064% since October 6, growing from 254,500 to approximately 2.7 million users. This surge began when Musk started using the @america X handle to promote his pro-Donald Trump super PAC and began posting in support of the former president.
Following Trump’s election victory, this trend accelerated further. Within a week of November 5, Bluesky’s total user count doubled from 743,900 to 1.4 million. A week later, the number doubled again to 2.8 million. Since Musk formed his super PAC, X’s U.S. active user numbers have fallen significantly while Bluesky’s have risen.
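The Similarweb figures above are simple percentage changes. As a quick sanity check, a minimal Python sketch (the helper name and rounding are my own, not Similarweb’s):

```python
def pct_change(old: float, new: float) -> float:
    """Percentage change from an old value to a new one."""
    return (new - old) / old * 100

# X's U.S. daily active users (millions), per the Similarweb figures above
print(round(pct_change(32.3, 29.6), 1))  # -8.4, matching the reported 8.4% drop
```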
Bruce Daisley, a former vice-president at Twitter, attributed the shift away from X to how far the platform has drifted from Musk’s professed ideal of a “digital town square.” He expressed concerns about the rise of extreme views on X under Musk’s leadership.
French journalist Salome Sake, who had a significant following on X, deactivated her account due to harassment and misinformation on the platform. She found Bluesky to be a healthier space and shifted her focus there.
Despite finding a new platform for journalism, Salome believes that the exodus of users from X enables those who spread hate, propaganda, and misinformation online. She emphasized the importance of diverse opinions and critical thinking.
Notable exits from X also include the German football clubs St. Pauli and Werder Bremen, which cited the platform’s radicalization and departure from their values. Werder Bremen chose to leave X because of its stance against hate speech and discrimination.
Christoph Pieper, the director of communications at Werder Bremen, highlighted the club’s moral values and commitment to fighting against discrimination. Despite the potential economic impact, the club prioritized its principles over online visibility on X.
Pieper expressed uncertainty about the club’s future on Bluesky but firmly stated that any platform allowing hate speech is not suitable for Werder Bremen. Many other clubs are also considering a transition to Bluesky, signaling a shift away from X.
When the baby parrots were delivered to Alice Soares de Oliveira’s desk, they had no feathers and could barely open their eyes. The pair, housed in a dirty cardboard box, were just a month old and showed signs of not having fed well.
The parrots, along with two young toucans that arrived just under a month later, were victims of wildlife traffickers. All had been put up for sale on social media, probably after being snatched from their nests by poachers.
They were taken to Soares de Oliveira, a veterinarian at CeMaCAS, a wildlife conservation center in a forest on the outskirts of São Paulo, Brazil’s largest city, after being rescued by police monitoring trafficking networks on platforms such as Facebook and WhatsApp.
Illegal advertising of snakes for sale online in Brazil. Photo: Provided by RENCTAS
Social media has become an important tool for wildlife traffickers, experts say. For example, more people are using Facebook to promote the sale of endangered animals and their byproducts, often switching to messaging apps like WhatsApp to complete the sale.
A report published in October by the Global Initiative Against Transnational Organized Crime flagged 477 advertisements for 18 protected animal species in Brazil and South Africa alone over three months this year; 78% of them were on social media.
The illegally traded parrots arrived at the CeMaCAS conservation center in poor condition after being rescued by the police. Photograph: courtesy of CeMaCAS
Simone Haytham, director of environmental crime at the Global Initiative, said traders moved online after authorities cracked down on street markets. “The online space now provides a means for many of the world’s most endangered and most highly protected species to find consumers,” she says. “There’s a huge treasure trove of endangered species available for purchase online, and finding them is no difficult feat.”
Crawford Allan, vice president of nature crime at the World Wildlife Fund, said the pandemic has “systemized” wildlife crime online. “A lot of the public markets were closing down,” he says. “People couldn’t move, a lot of things went online, and it became the norm.”
Laws regarding the sale of wild animals vary by jurisdiction and species, so social media companies face a difficult situation in determining whether such ads are illegal. Nevertheless, experts say tech companies need to do more to determine when posts are risky.
The Global Initiative combines AI technology and human analysis to detect suspicious ads online. Its reporting system, part of a project called Eco-Solve, covers Brazil, South Africa, and Thailand, and will soon be expanded to India, Indonesia, and the UAE.
Richard Scobie, executive director of TRAFFIC, an organization focused on wildlife trafficking, said advertising on social media often allows sellers to “circumvent” the law and sell goods without telling buyers where they come from.
“Companies need to allocate far more resources to regulating how users illegally trade wildlife parts and derivatives on their platforms,” he says. “Social media companies are working to combat illegal transactions on their platforms…but there is much more they can do.”
Some tech companies are taking steps to combat the problem. In 2020, Facebook introduced warnings on some search terms to alert users to the dangers of wildlife trafficking, and Meta removed 7.6 million posts in 2023, according to the Coalition Against Online Wildlife Trafficking.
The coalition is a voluntary association that includes most of the major social media companies in the United States and China.
It announced that in 2021, 11.6 million posts were blocked or deleted by members.
Parrots recovered at CeMaCAS after being illegally traded. Photograph: courtesy of CeMaCAS
WWF’s Allan was a founding member of the coalition and continues to oversee its activities. He said tech companies have been receptive to activists’ efforts to clamp down on trafficking, but job cuts in the industry are hurting progress.
“As a conservation organization, we always feel that people need to do more, but we also understand that they are dealing with terrorism, child safety and all the evil in the world that flows through social media channels. They have bigger and scarier problems to deal with,” he added.
“I feel that some companies have found a balance. Others haven’t; they’re not working hard enough, or they’re inactive for some reason, and they need to step up and do more.”
A spokesperson for Meta, which owns Facebook and WhatsApp, said: “We do not allow activities related to the purchase, sale, lottery, gifting, transfer or trading of endangered or protected species on our services.
“We use a combination of technology, team reviews, and user reports to identify behavior that violates our Terms of Service and respond to valid requests from law enforcement.”
Wildlife trafficking threatens biodiversity and can lead to the extinction of certain species. According to a 2023 Forensic Science International article, approximately 5,209 animal species are endangered or near-endangered because of “use and trade.”
Illegal online advertising of macaws for sale in Brazil. Photograph: courtesy of RENCTAS
Mr Haytham said the species [being advertised for sale online] “are protected as they are on the verge of extinction. They are protected because trade poses a major threat to their survival.”
Soares de Oliveira of São Paulo believes the birds in his care have a bright future. Veterinarians at CeMaCAS care for hundreds of birds and animals at a time. She is confident that the parrot and toucan will make a full recovery and be released back into the wild.
“They are in the middle of rehabilitation. They are still young so we are monitoring them, but I think they will be able to live a free life in three months,” she says.
Find more age of extinction coverage here, and follow biodiversity reporters Phoebe Weston and Patrick Greenfield in the Guardian app for more nature coverage.
Hello. Welcome to TechScape. Happy belated Thanksgiving to all my American readers. I hope you all enjoy a fun holiday party this weekend. I’m looking forward to baking bundts for the Feast of St Nicholas. This week in tech: Australia causes a panic, Bluesky raises the question of custom feeds, and the online things that brought me joy over the holidays.
Australia on Thursday passed a law banning children under 16 from using social networks.
My colleague Helen Sullivan reports from Sydney: the Online Safety Amendment (Social Media Minimum Age) bill prohibits social media platforms from allowing users under the age of 16 to access their services, with fines of up to A$50m (about US$32m) for failure to comply. However, it contains no details about how the ban will work, only that companies are expected to take reasonable steps to ensure users are over 16. Further details will come once the age assurance technology trials conclude in mid-2025, and the law will not take effect for another 12 months.
The bill also does not specify which companies would be subject to the ban, but communications minister Michelle Rowland has said that Snapchat, TikTok, X, Instagram, Reddit, and Facebook are likely to be covered. YouTube is excluded because of its “significant” educational purpose, she said.
The new law was drafted after Labor Prime Minister Anthony Albanese said there was “a clear causal link between the rise of social media and the harm it causes to Australian youth mental health.”
TikTok, Facebook, Snapchat, Instagram, and X are angry. Following the bill’s passage, Meta said the process had been “fast-tracked” and had failed to consider the evidence: the views of young people, the steps the tech industry has already taken to protect them, and existing research on the impact of social media use.
Australian children are not a significant user base for these companies. According to Unicef, in 2023 there were 5.7 million people under the age of 18 living in Australia. Facebook reported more than 3 billion monthly users in May 2023, roughly 370 million of them in India alone. Even if all Australian children were to leave social media, which is unlikely, the number of users would not decline significantly.
Of concern to tech companies is the precedent set by the new law. They also fiercely opposed measures in Australia and Canada that required them to pay for news content. The issue was not the amount demanded but what might follow: if countries around the world required platforms to pay for news, the financial burden on Facebook and others would be enormous, as would the responsibility of determining what counts as news. Likewise, if countries around the world turn their young people away from social media, social media companies face an uncertain future: the pipeline of incoming users will dry up.
What tech companies want in Australia is a measure requiring parental consent, a vaguer standard that would split responsibility between companies and users. Meta and others opposed a 2023 French law requiring parents to approve accounts for children under 15 with far less vigor than Australia’s new law. In an ominous sign for Australia’s measure, however, local French media report that technical challenges mean the under-15 rule has yet to be implemented. And do parental consent features even work? Data from several European countries suggests not: Meta’s Nick Clegg has said the company’s data shows parents do not use parental control tools on social networks.
Australia’s law shows that one country’s rules can reshape the global governance of social networks; it has happened before. In the United States, a children’s privacy law passed in 2000 imposed a de facto minimum age of 13 for social media users, a threshold that still appears in social networks’ privacy policies worldwide.
Just by clicking on the “shiny babe” filter, the teenager’s face was subtly elongated, her nose streamlined, and her cheeks sprinkled with freckles. Then, the Glow Makeup filter removed blemishes from her skin, made her lips look like rosebuds, and extended her eyelashes in a way that makeup can’t. On the third click, her face returned to reality.
Today, hundreds of millions of people use beauty filters to change the way they look on apps like Snapchat, Instagram, and TikTok. This week TikTok announced new global restrictions on children’s access to filters that mimic the effects of cosmetic surgery.
The announcement came after the company canvassed the views of around 200 teenagers and their parents in the UK, US, and several other countries and found that girls in particular reported “feelings of low self-esteem” as a result of their online experiences.
There are growing concerns about the impact of rapidly advancing technology on health, with generative artificial intelligence enabling what has been called a new generation of “micropersonality cults.” This is no small thing. TikTok has around 1 billion users.
Forthcoming research by Sonia Livingstone, professor of social psychology at the London School of Economics, will argue that the pressures and social comparisons driven by increasingly image-manipulated social media can be more psychologically traumatic than viewing violence, with major implications for health.
TikTok effect filters (left to right): original image without filter, Bold Glamour, BW x Drama Rush by jrm, and Roblox Face Makeup. Composite: TikTok
Hundreds of millions of people use alternate reality filters on social media every day, from cartoon dog ears to beauty filters that change the shape of your nose, whiten your teeth, and enlarge your eyes.
Dr Claire Pescott, an educationist at the University of South Wales who has studied children aged 10 and 11, agreed that the impact of online social comparison is being underestimated. In one study, children dissatisfied with their appearance said things like: “I wish I had a filter on right now.”
“There is a lot of education about internet safety, about protecting yourself from pedophiles and catfishing [using a fake online persona to enable romance or fraud],” she said. “But in reality the dangers are subtler: comparing yourself to others has more of an emotional impact.”
But some people resist restrictions on the influence they feel is a fundamental part of their online identity. Olga Isupova, a Russian digital artist living in Greece who designs beauty filters, called such a move “ridiculous.” She added that having an adapted face is a necessary part of being “multiple people” in the digital age.
“People live normal lives, but it’s not the same as their online lives,” she said. “That’s why you need a retouched face for your social media life. For many people, [life online] is a very competitive field; it’s about Darwinism. Many people use social media not just for fun, but also as a place to make money and improve their lives and futures.”
In any case, age restrictions on some of TikTok’s filters are unlikely to solve the problem anytime soon. One in five eight- to 16-year-olds lies about being over 18 on social media apps, Britain’s communications regulator Ofcom has found, and rules tightening age verification will not come into force until next year.
A growing body of research suggests that some beauty filters are harmful for teenagers. Last month, a small survey of female students in Delhi who use Snapchat found that most reported “lower self-esteem and feelings of inadequacy when juxtaposing their natural appearance with filtered images.” A 2022 study of more than 300 Belgian adolescents found that use of face filters was associated with greater acceptance of the idea of cosmetic surgery.
“Kids who are more resilient look at these images and say, oh, this is a filter, but kids who are more vulnerable tend to feel bad when they see it,” Livingstone said. “There is growing evidence that teenage girls feel vulnerable about their appearance.”
When TikTok’s research partner Internet Matters asked a 17-year-old in Sweden about beauty filters, she replied that the effects “should be more similar” to how people really look.
Jeremy Bailenson, founding director of Stanford University’s Virtual Human Interaction Laboratory, said more experimental research is needed into the social and psychological effects of the most extreme beauty filters.
In 2007, he helped coin the term “Proteus effect”, which describes how people’s behavior changes to match their online avatar: in his research, people given more attractive virtual selves disclosed more about themselves than those given less attractive ones.
“We need to strike a careful balance between regulation and welfare concerns,” he said. “Small changes to our virtual selves can quickly become tools we rely on, such as the ‘touch-up’ feature in Zoom and other video conferencing platforms.”
In response, Snapchat said it doesn’t typically receive feedback about the negative impact its “beauty lenses” have on self-esteem.
Meta, the company behind Instagram, said it walks a fine line between safety and expression through augmented reality effects. The company said it consulted with mental health experts and banned filters that directly encourage cosmetic surgery, such as mapping surgical lines on a user’s face or promoting the procedure.
TikTok said it draws a clear distinction between effects such as animal-ear filters and those designed to change one’s physical appearance, which teens and parents have voiced concerns about. In addition to the restrictions, it said it would raise awareness among filter creators about “some of the unintended consequences that certain effects can cause.”
According to the communications watchdog, Reddit, the US online discussion platform, has surpassed X to become the fifth most popular social media platform in the UK. In May of this year, 22.9 million UK adults visited Reddit, compared to 22.1 million on X, as reported by Ofcom.
Reddit, known for its topic-based communities where users engage in discussion threads, experienced a 47% growth in the UK compared to the same period in 2023, making it the fastest-growing large-scale social media platform. This growth led Reddit to overtake LinkedIn and X, claiming the fifth spot in the UK social media platform ranking, with YouTube surpassing Facebook as the top platform with over 44 million adult users.
The increase in organic search traffic to Reddit was attributed to Google’s algorithm updates in the first half of 2024, according to Farhad Divecha, managing director of AccuraCast. Ofcom suggested that the rise in Reddit’s popularity may also stem from changes to third-party apps’ access to content, which pushed users to the Reddit site itself, and noted that the company’s stock market flotation this year raised its profile.
X, on the other hand, has seen a decline in popularity, with an 8% drop in reach since May last year. Criticism of X’s content moderation standards has been ongoing since Elon Musk acquired the platform in 2022, and competition from Threads, the rival platform introduced by Mark Zuckerberg’s Meta, has added pressure.
Ofcom’s annual report on digital habits highlighted the prevalence of misinformation and deepfakes online, with four in 10 UK adults encountering such content. One-third of UK adults lack confidence in distinguishing AI-generated images, audio, or videos.
UNESCO has issued a warning that social media influencers urgently need help in fact-checking before sharing information with their followers to prevent the spread of misinformation online.
A report by UNESCO revealed that two-thirds of content creators fail to verify the accuracy of their material, leaving both them and their followers susceptible to misinformation.
The report emphasized the importance of media and literacy education to assist influencers in shaping their work based on accurate information.
Creators’ susceptibility to misinformation due to low fact-checking practices can have significant implications for public discourse and trust in the media, according to UNESCO.
While many creators do not verify information before sharing it, they often rely on personal experiences, research, and conversations with knowledgeable individuals as their primary sources.
UNESCO’s study revealed that the popularity of online sources, measured by likes and views, plays a significant role in creators’ trust, highlighting the need for improved media literacy skills.
To address this issue, UNESCO is collaborating with the Knight Center for Journalism of the Americas to offer an online course on becoming a trusted voice online, focusing on fact-checking and creating content during elections or crises.
Media literacy expert Adeline Hulin noted that many influencers do not perceive their work as journalism, highlighting the need for a deeper understanding of journalistic practices and their impact.
Additionally, UNESCO’s findings indicated a lack of awareness among creators regarding legal regulations, with only half of them disclosing sponsors and funding sources to their audience, as required in some countries.
The survey, involving 500 content creators from various countries, revealed that most influencers are nano-influencers under 35 years old, primarily using Instagram and Facebook, with up to 100,000 followers.
It took around 90 seconds for Liana Montag to witness the violence on her X account. The altercation in the restaurant escalated into a full-fledged brawl, with chairs being smashed over heads and bodies strewn across the floor.
The “Gang_Hits” account features numerous similar clips, including shootings, beatings, and individuals being run over by cars. This falls into a brutal genre of content that is frequently promoted by algorithms and appears on young people’s social media feeds without their consent.
Liana Montag: “It’s normal to see violence.” Photo: Martin Godwin/The Guardian
Montag, an 18-year-old from Birmingham, also active on Instagram and Snapchat, has connected with several other teenagers at the Bring Hope charity in Handsworth. She shared, “If someone mentions they were stabbed recently, you don’t react as strongly anymore. It’s become a normal sight.”
In many cases, the violent content is close to home. Iniko St Clair Hughes, 19, cited the example of gangs filming chases and posting them on Instagram.
“Now everyone has seen him flee, and his pride will likely push him to seek revenge,” he explained. “It spreads in group chats, and everyone knows about the escape, so they feel the need to prove themselves the next time they step out. That’s how it goes. The retaliation gets filmed, sometimes.”
Jamil Charles, 18, admitted to appearing in such video clips. He mentioned that footage of him in fights had been circulating on social media.
“Things can escalate quickly on social media as people glamorize different aspects,” he commented. “Fights can start between two individuals, and they can be resolved. But when the video goes viral, it may portray me in a negative light, leading to a blow to my pride, which might drive me to seek revenge and assert myself.”
All this had a worrying impact, as St. Clair-Hughes pointed out.
“When fear is instilled through social media, you’re placed in a fight-or-flight mode, unsure of how to proceed when leaving your house – it’s either being ahead of the game or lagging behind. You feel prepared for anything… It’s subliminal; no one is explicitly telling you to resort to violence, but the exposure to it intensifies the urge.”
Leanna Reed, 18, told of a friend who started carrying a knife after an argument on Snapchat. While those involved were mostly boys, she also knew a girl who carried a weapon.
“It’s no longer a topic of discussion,” she noted. “He who emerges victorious with his weapon is deemed the winner. It’s about pride.”
Is there a solution? St. Clair Hughes expressed pessimism.
“People tend to veer towards negativity… [Social media companies] want us using their platforms, so I doubt they’ll steer towards a more positive direction.”
Reed mentioned hearing about TikTok being more regulated and education-focused in China, leading her to ponder different approaches taken by various countries on the same platform.
O’Shaun Henry, 19, had a candid message for social media companies: use their power, particularly AI, to make positive changes. Limits need to be set, given the influence these platforms have on young people, and it is time for the companies to look at themselves, do the research, and make improvements.
Ministers have said a social media ban for under-16s is not currently under consideration, after teenagers urged a rethink of proposals, inspired by Australia’s example, to restrict access to platforms such as TikTok, Instagram, and Snapchat.
Peter Kyle, Secretary of State for Science and Technology, issued a warning to social media platforms about potential fines and prison sentences for breaching online safety laws coming into effect next year. Efforts are being made to increase prevention of online harm.
During a meeting with teenagers at NSPCC headquarters, Mr. Kyle emphasized that there are no immediate plans to ban children from using smartphones, as it is not his preferred choice.
Teenagers expressed concerns about platform addiction and difficulties in seeking help for hacked accounts or offensive content, but did not call for a ban. They highlighted the importance of social connections, support, and safety.
Mr. Kyle’s initial comments about considering a ban caused worry among teenagers, but he clarified that a ban could be a possibility depending on evidence of its effectiveness, especially in light of similar legislation in Australia.
The main focus remains on preventing child fatalities linked to social media activity, with Mr. Kyle citing instances of tragic outcomes. Efforts are ongoing to enhance age verification software to protect children from inappropriate online content.
MPs in a parliamentary inquiry into the UK riots and the proliferation of false and harmful AI content are set to call on Elon Musk to testify about X’s role in spreading disinformation, as reported by The Guardian.
Additionally, senior executives from Meta, the company behind Facebook and Instagram, and from TikTok are expected to be summoned for questioning as part of the Commons Science and Technology Select Committee’s social media inquiry.
The first public hearing is scheduled for the new year, amidst concerns that current online safety laws in Britain are at risk of being outpaced by advancing technology and the politicization of platforms like X.
Images shared on Facebook and X were reportedly used to incite Islamophobic protests following the tragic deaths of three schoolgirls in Southport in August. The inquiry aims to investigate the impact of generative AI and examine Silicon Valley’s business models that facilitate the spread of misleading and potentially harmful content.
The committee’s chair, Labour MP Chi Onwurah, expressed interest in questioning Musk about his stance on freedom of expression and disinformation. Musk, the owner of X, has been critical of the UK government and was not invited to an international investment summit in September.
Former Labour cabinet minister Peter Mandelson has called for an end to Musk’s feud with the British government, emphasizing that Musk’s influence in the technological and commercial sphere should not be overlooked.
Despite speculation, it remains uncertain whether Musk will testify in the UK, as he is reportedly gearing up for a senior role in President Trump’s White House. Amidst these developments, millions of X users are said to have migrated to a new platform called Bluesky, raising concerns about misinformation and the presence of previously banned users.
The investigation also aims to explore the connection between social media algorithms, generative AI, and the dissemination of false or harmful content. Additionally, the use of AI to complement search engines, such as Google, will be scrutinized in light of recent instances of false and racist claims propagated on online platforms.
In response to the spread of misinformation and incitement after the Southport killings, Ofcom, the UK’s communications regulator, has highlighted the need for social media companies to address activity that incites violence or spreads falsehoods. New rules under the Online Safety Act will require companies to act to prevent the spread of illegal content and to minimize safety risks.
As a technology reporter, I like to think of myself as an early adopter. I first signed up for the social network Bluesky about 18 months ago, when the platform saw a small spike in users dissatisfied with Elon Musk’s approach to what was then still called Twitter.
It didn’t stick. Like many people, I found Twitter too tempting and deleted my Bluesky account, but I have returned in recent weeks. I’m not alone. The “Xodus” began as Musk continued transforming his social platform, now called X, while taking on a role in President-elect Donald Trump’s incoming administration. Bluesky has gained 12 million users in two months and is approaching 20 million. This time I’m going to stay, and I think others will too.
The main reason is that I want a social media experience without being bombarded with hate speech, gore, and porn videos, all of which X users have complained about in recent months. But I am also watching Bluesky because I think it signals a more fundamental change in how social media works.
Social media algorithms, the computer code that determines what each user sees, have long been a source of controversy. Fears of disappearing down the “rabbit hole” of radicalization, or of becoming trapped in an “echo chamber” of consensual and sometimes conspiratorial viewpoints, have dominated the scientific literature.
Displaying information from followers in chronological order creates a confusing quagmire for the average user to process, so using algorithms to filter information has become the norm. Sorting and filtering what’s important or what’s likely to keep users interested has been key to the success of platforms like Facebook, X, and Instagram.
But whoever controls these algorithms has a huge say in what people read. One of the problems many users have with X is its “For you” algorithm: under Musk, comments by and about him appear to be pushed into users’ timelines, even if they don’t follow him.
Bluesky’s approach is not to do away with algorithms, but to have more of them than the average social network. In a 2023 blog post, Bluesky CEO Jay Graber outlined the ethos of the platform: Bluesky is promoting a “marketplace of algorithms” rather than a single “master algorithm”, she wrote.
In practice, this means users see posts from the accounts they follow in Bluesky’s default standard view. But they can also choose other feeds: one that surfaces what is popular among your friends, selecting posts your peers have enjoyed via an algorithm; a feed exclusively for scientists, curated by people who work in the field; or one that promotes Black voices often buried by algorithmic filtering.
One feed specifically promotes “quiet posters”: users who post infrequently and whose opinions are otherwise drowned out by users who share every thought with their followers.
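Graber’s “marketplace of algorithms” can be sketched in a few lines of code: every feed is just a different ranking function applied to the same pool of posts, and the user picks which one runs. This is an illustrative sketch only, not Bluesky’s actual implementation; the feed names, scoring rules, and data fields below are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    timestamp: int          # seconds since some epoch
    likes: int
    author_post_count: int  # how prolific the author is

# Each "feed" is simply a ranking function over the same pool of posts.
def chronological(posts):
    # The default "following" view: newest posts first.
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def popular(posts):
    # An engagement-style feed: most-liked posts first.
    return sorted(posts, key=lambda p: p.likes, reverse=True)

def quiet_posters(posts):
    # Surface infrequent posters ahead of prolific ones.
    return sorted(posts, key=lambda p: p.author_post_count)

# The "marketplace": a menu of interchangeable ranking algorithms.
FEEDS = {"following": chronological, "popular": popular, "quiet": quiet_posters}

def render(feed_name, posts):
    """Return the authors of posts in the order the chosen feed ranks them."""
    return [p.author for p in FEEDS[feed_name](posts)]

posts = [
    Post("prolific", "post #500", timestamp=300, likes=9, author_post_count=500),
    Post("quiet", "rare post", timestamp=100, likes=2, author_post_count=3),
    Post("mid", "hello", timestamp=200, likes=40, author_post_count=60),
]

print(render("following", posts))  # ['prolific', 'mid', 'quiet']
print(render("quiet", posts))      # ['quiet', 'mid', 'prolific']
```

The design point is that the platform supplies the post pool while ranking logic stays pluggable, so a third party can add a new feed just by registering another function.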
This menu of options allows Bluesky to bridge the past and future eras of social media. With enough users, the platform could function as a “de facto public town square”, as Musk dubbed Twitter before he bought it. Given that X has steered away many mainstream voices, and competitors like Threads have chosen not to promote politics and current events, Bluesky is perhaps the only candidate left for such a forum.
Beyond feeds, Bluesky also lets you tailor the app to your needs through other features, such as “starter packs” of recommended users to jump-start your niche and blocking tools to silence unruly voices.
No doubt, there are still problems. Finding the right feed can be difficult, and creating your own is more complicated still, requiring third-party tools. But it is exciting to be able to see the big picture of public conversation and delve into smaller debates within society’s wider clusters and communities. This is a new social media model in which users, rather than large corporations or mysterious individuals, control what they see. And if Bluesky continues to add users, it could become the norm. Come join me: I’m @stokel.bsky.social.
Chris Stokel-Walker is a freelance technology journalist.
A parliamentary committee investigating the impact of social media on Australian society has recommended empowering users to change, reset, and disable algorithms, as well as enhancing privacy protections. However, it stopped short of endorsing the government’s proposed ban on social media for under-16s, making no final recommendation on age-based access.
The inquiry focused primarily on the influence of social media on young people. Both the opposition Coalition and the federal government have announced plans to restrict social media for under-16s, with legislation to be introduced in parliament by the end of the year.
One of the 12 recommendations in the final report is to enable the government to enforce laws on digital platforms more effectively, create a duty of care for platforms, and require platforms to give researchers and public interest groups access to data. The report also suggests that users should have more control over their online experiences and the algorithms that shape them, that digital literacy education should be enhanced, and that the results of age assurance technology trials should be submitted to parliament.
Although there’s bipartisan support for banning social media access for those under 16, the study suggests that ensuring children’s safety may not necessarily involve outright bans until they reach an appropriate age. It emphasizes the need for collaborative efforts with young people in designing regulatory frameworks impacting them.
The Commission highlights the importance of evidence-based decisions regarding age restrictions and the necessity of involving young people in the policymaking process.
The committee suggests that a blanket ban on social media for certain age groups may not be the optimal solution and underscores the need for comprehensive digital reforms to tackle harmful online practices.
Chairperson Labor MP Sharon Claydon emphasizes the complexity of the issue and the necessity for immediate action to safeguard Australian users.
The Greens propose bringing forward the review of online safety laws, banning the data-mining of young people’s information, providing more education, and considering a digital services tax on platforms.
“What a privilege to be able to run in the rain. What a privilege to have a house to clean.” Social media is often criticized for its toxicity, but a new trend is emerging that embraces gratitude.
Posts titled “What a privilege” feature images of everyday activities such as cozy beds (being tired after a long day), travel videos (carrying heavy luggage), and even mundane tasks like cooking dinner. This trend has gained attention on platforms like Instagram and TikTok.
Screenshot from @tanyaloucas Photo: TikTok
While not as widespread as previous trends, Gratitude 2.0 is gaining popularity with some posts receiving over 200,000 likes. This trend celebrates both simple and luxurious experiences, from commuting to shopping for designer items.
According to lexicographer Tony Thorne, the trend originated with American evangelicals and lifestyle influencers expressing gratitude. It may come across as self-satisfied and humble-bragging, but it aims to ground people in reality, away from the virtual world created by social media.
Rukiat Ashawe, a junior strategist at Digital Fairy, believes that highlighting ordinary aspects of life resonates well with audiences online. By showcasing the everyday, this trend aims to shift focus from idealized virtual realities to genuine experiences.
Is the internet reshaping the concept of privilege? According to Thorne, platforms like TikTok add nuance to the word and turn it into a powerful symbol that taps into specific moods and attitudes.
Norway has set a strict minimum age limit of 15 for social media in its efforts to combat tech companies that are deemed harmful to young children’s mental development.
Prime Minister Jonas Gahr Støre of Norway acknowledged the challenges ahead but emphasized the need for politicians to intervene to shield children from the influence of algorithms.
The industry’s use of algorithms on social media platforms has been criticized for leaving users hooked and anxious.
Despite Scandinavian countries already having a minimum age limit of 13, a significant percentage of younger children still access social media, as highlighted by a survey by the Norwegian Media Authority.
The government has pledged to implement additional safeguards to prevent children from bypassing age restrictions, including revisions to personal data laws mandating a minimum age of 15 for consenting to personal data processing on social media platforms and the development of age verification barriers.
Emphasizing the need for protection of children from harmful content on social media, the prime minister spoke of the powerful impact that tech companies can have on young minds. He acknowledged the formidable challenge ahead but stressed the essential role of politics in addressing this issue.
While recognizing that social media can foster community for isolated children, he cautioned against leaving self-expression to algorithms, citing the risk of children becoming hooked and withdrawn.
Minister for Children and Families Kjersti Toppe met parents in Stavanger to advocate stricter online regulations for children as a means of backing parents’ decisions to safeguard their children’s online activities.
The government is exploring ways to enforce the restrictions without infringing human rights, such as potentially using bank-based electronic identification for age verification.
Australia has also proposed a social media ban for children and teenagers, with the age limit likely to fall between 14 and 16.
France is currently testing a ban on mobile phone usage in schools for students up to 15 years old, with plans for potential nationwide implementation from January pending the trial’s success.
Legislation supported by Labour, the Conservative party, and child protection experts would require social media companies to exclude under-16s from algorithms designed to keep them hooked on content. The new Safer Phones Bill, introduced by a Labour MP, also prioritizes a review of mobile phone sales to teenagers and potential additional safeguards for under-16s. Health Secretary Wes Streeting voiced support for the bill, citing the negative impact of smartphone addiction on children’s mental health.
The bill, championed by Labour MP Josh MacAlister, is receiving positive feedback from ministers, although there is hesitation about banning mobile phone sales to teens. With backing from former Conservative education secretary Kit Malthouse and education select committee chair Helen Hayes, the bill aims to address concerns about children’s excessive screen time and exposure to harmful content.
Mr. MacAlister’s bill, which focuses on protecting children from online dangers, will be debated in parliament this week. The bill includes measures to raise the internet age of majority to 16 and give Ofcom regulatory powers over children’s online safety. The proposed legislation has garnered support from various stakeholders including former children’s minister Claire Coutinho and children’s charities.
Concerns about the impact of smartphones on children’s well-being have prompted calls for stricter regulations on access to addictive online content. While Prime Minister Keir Starmer is against a blanket ban on mobile phones for under-16s, there are ongoing discussions about how to ensure children’s safety online without restricting necessary access to technology.
The bill aims to regulate online platforms and mobile phone sales to protect young people from harmful content and addiction. Mr. MacAlister’s efforts to promote children’s digital well-being have garnered significant support from policymakers and child welfare advocates.
As the government considers the implications of the bill and the Online Safety Act, which is currently pending full implementation, efforts to protect children from online risks continue to gain momentum. It remains crucial to strike a balance between enabling technology access and safeguarding children from potential online harms.
We've reached a point where the CEO of a major social network is being arrested and detained. This is a big change, and it happened in a way that nobody expected. From Jennifer Rankin in Brussels:
French judicial authorities on Sunday extended the detention of Telegram's Russian-born founder, Pavel Durov, who was arrested at a Paris airport on suspicion of offenses related to the messaging app.
Once this detention phase is over, the judge can decide whether to release the defendant or to charge him or her and detain him further.
French investigators had issued a warrant for Durov's arrest as part of an investigation into charges of fraud, drug trafficking, organized crime, promoting terrorism and cyberbullying.
Durov, who holds French citizenship alongside that of the United Arab Emirates, St. Kitts and Nevis and his native Russia, was arrested as he disembarked from a private jet on Sunday evening after returning from the Azerbaijani capital, Baku. Telegram released a statement:
⚖️ Telegram complies with EU law, including the Digital Services Act, and its moderation is within industry standards and is constantly being improved.
✈️ Telegram CEO Pavel Durov has nothing to hide and travels frequently to Europe.
😵💫 It is absurd to claim that the platform or its owners are responsible for misuse of their platform.
French authorities said on Monday that Durov's arrest was part of a cybercrime investigation.
Paris prosecutor Laure Beccuau said the investigation concerns crimes related to illegal trading, child sexual abuse, fraud and refusal to provide information to authorities.
On the surface, the arrest seems decidedly different from previous cases. Governments have had tough talk with messaging platform providers in the past, but arrests have been few and far between. Often, when platform operators are arrested, as in the cases of Silk Road's Ross Ulbricht and Megaupload's Kim Dotcom, authorities can argue that the platforms would not have existed without the crimes.
Telegram has long operated as a lightly moderated service, partly because of its roots as a chat app rather than a social network, partly because of Durov's own experience dealing with Russian censors, and partly (as many argue) because it is simply cheaper to have fewer moderators and less direct control over the platform.
But even if weaknesses in a company's moderation can expose it to fines under laws such as the UK's Online Safety Act or the EU's Digital Services Act, they rarely lead to personal charges, and even less often to executives being jailed.
Encryption
But Telegram has one feature that makes it slightly different from its peers, such as WhatsApp and Signal: the service is not end-to-end encrypted.
WhatsApp, Signal and Apple's iMessage are built from the ground up to ensure that content shared on the services cannot be read by anyone other than the intended recipient, including not only the companies that run the platforms but also law enforcement agencies that may be called upon to cooperate.
This has caused endless friction between the world's largest tech companies and the governments that regulate them, but for now, it seems the tech companies have won the main battle: No one is seriously calling for end-to-end encryption to be banned anymore, and regulators and critics are instead calling for messaging services to be monitored differently, with approaches such as “client-side scanning.”
Telegram is different. The service offers end-to-end encryption through a little-used opt-in feature called “Secret Chats,” but by default, conversations are encrypted only enough to be unreadable by anyone connected to your Wi-Fi network. To Telegram itself, messages sent outside of “Secret Chats” (including all group chats, and all messages and comments in one of the service's broadcast “channels”) are effectively unencrypted.
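The practical difference between those two models can be sketched in a few lines of code. This is a toy illustration, not real cryptography: the XOR "cipher" and every name in it are assumptions made for clarity, and a real messenger would use a vetted protocol such as Signal's rather than anything resembling this.

```python
import secrets

def toy_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher (XOR with a repeating key) -- illustration only."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"meet at noon"

# Server-side model (Telegram's default "cloud chats"):
# the operator holds the key, so it CAN decrypt stored messages on demand.
server_key = secrets.token_bytes(16)
stored_ciphertext = toy_cipher(message, server_key)
assert toy_cipher(stored_ciphertext, server_key) == message  # operator reads it

# End-to-end model (WhatsApp, Signal, Telegram's opt-in "Secret Chats"):
# only the two devices share the key; the relay never sees it.
device_key = secrets.token_bytes(16)
relayed_ciphertext = toy_cipher(message, device_key)
# The relay holds relayed_ciphertext but not device_key, so it has
# nothing useful to hand over if asked.
```

The consequence is the one at issue here: an end-to-end encrypted operator can truthfully say it is unable to read users' messages, while a server-side operator must choose whether to.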
This product decision sets Telegram apart from the pack, yet oddly enough, the company's marketing suggests that the difference is almost the exact opposite. Cryptography expert Matthew Green:
Telegram CEO Pavel Durov continues to aggressively promote the app as a “secure messenger.” He recently issued a scathing criticism of Signal and WhatsApp in his personal Telegram channel, suggesting that those systems were riddled with US government backdoors and that only Telegram's independent encryption protocol could truly be trusted.
Watching Telegram urge people to forego using a messenger that's encrypted by default while refusing to implement a key feature that would broadly encrypt messages for its own users is no longer amusing. In fact, it's starting to feel a bit sinister.
Can't v won't
Paper planes are placed outside near the French Embassy in Moscow in support of Pavel Durov, who was arrested in France. Photo: Yulia Morozova/Reuters
The result of Telegram's mismatch between technology and marketing is a disappointing one: The company, and Durov personally, are selling the app to people who worry that even the gold standards of secure messengers — WhatsApp and Signal — aren't secure enough for their needs, especially from the U.S. government.
At the same time, if a government were to knock on Telegram's door and ask for information about actual or suspected criminals, Telegram would not have the same cover as other services. End-to-end encrypted services can honestly tell law enforcement that they are unable to comply. In the short run that can create a rather hostile atmosphere, but the conversation can then move on to general principles of privacy and policing.
Telegram, by contrast, is faced with a choice: cooperate with law enforcement, ignore it, or declare that it will not cooperate. This is no different from the choice facing the vast majority of online companies, from Amazon to Zoopla, except that Telegram is alone in having a user base that demands it resist law enforcement.
Every time Telegram says “yes” to police, it infuriates its user base; every time it says “no,” it plays a game of chicken with law enforcement.
The contours of the dispute between France and Telegram will inevitably be swamped in conversations about “content moderation,” and supporters will rally accordingly (Elon Musk has already weighed in with “#FreePavel”). But those conversations are usually about publicly available material and what X or Facebook should or shouldn't do to moderate discussion on their sites. Private and group messaging services are fundamentally different, which is why mainstream end-to-end encrypted services exist. By trying to straddle both markets, Telegram may have lost both defenses.
Final Question
My last day at the Guardian is fast approaching and next week's emails will be handed over to you, the reader. If you have a question you'd like an answer to, a doubt that's been simmering in the back of your mind for years, or are just curious about the inner workings of Techscape, please reply to this email or get in touch with me directly at alex.hern@theguardian.com. Ask me anything.
If you'd like to read the full newsletter, sign up to receive TechScape in your inbox every Tuesday.
Russian-born tech entrepreneur Pavel Durov founded wildly popular social networks and cryptocurrencies, amassed a multi-billion dollar fortune, and found himself at odds with authorities in Russia and around the world.
The man, who is just a few months away from his 40th birthday and has been nicknamed “Russia’s Zuckerberg” after Facebook founder Mark Zuckerberg, has now been arrested in France after being detained at a Paris airport this weekend.
The St. Petersburg native rose to fame in Russia in his 20s when he founded VKontakte (VK), a social network that catered to the needs of Russian-speaking users and surpassed Facebook across the former Soviet Union.
After disputes with Russian authorities and an ownership battle, he sold VKontakte and founded a new messaging service called Telegram, which quickly became popular but also became controversial after being criticized for its lack of control over extremist content.
As this drama raged, Durov remained a mercurial and at times enigmatic figure, rarely giving interviews and limiting himself to the occasional cryptic statement on Telegram.
A self-described libertarian, Durov has promoted internet secrecy and message encryption.
He has steadfastly refused to allow moderation of messages on Telegram, where users can post videos, photos, and comments to “channels” that anyone can follow.
Durov, 39, was the subject of a French arrest warrant over alleged criminal activity conducted on Telegram, including fraud, drug trafficking, cyberbullying, organized crime and the promotion of terrorism.
The investigation has been entrusted to the French national police’s cyber unit and the national anti-fraud office. The suspect was still in police custody on Sunday, according to two sources familiar with the case. He has not been charged with any crime.
In 2006, Durov, a graduate of St. Petersburg University, founded VK, which captivated users despite its mysterious founder.
In an act that epitomized his unpredictable behavior, Durov in 2012 hurled banknotes folded into paper planes at passersby from VK’s headquarters, a historic building housing a famous bookstore on Nevsky Prospect in St. Petersburg.
Chinese scientists have made a groundbreaking discovery: a method of producing large amounts of water from lunar soil collected during a 2020 mission, state-run CCTV reported on Thursday.
The Chang’e-5 mission in 2020 marked a significant milestone in collecting lunar samples after a 44-year hiatus. Scientists from the Chinese Academy of Sciences found high amounts of hydrogen in minerals present in the lunar soil. When heated to extreme temperatures, this hydrogen reacts with other elements to generate water vapor, according to China Central Television.
CCTV reported, “After extensive research and verification over three years, a new method has been identified for producing significant quantities of water from lunar soil. This discovery is anticipated to play a crucial role in designing future lunar research and space stations.”
This finding could have significant implications for China’s long-standing ambition to establish a permanent lunar base, amid the race between the United States and China to explore and exploit lunar resources.
On August 26, 2021, a small vial containing lunar soil brought back by China’s lunar probe Chang’e-5 was displayed in Beijing. Photo: Ren Hui/VCG via Getty Images
NASA Administrator Bill Nelson has expressed concerns about China’s rapid progress in space exploration and the potential risk of Beijing controlling valuable lunar resources.
According to state media, the new technique can yield approximately 51-76 kilograms of water from one ton of lunar soil, enough to fill over 100 500ml bottles or sustain the daily water needs of 50 individuals.
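The reported figures are easy to sanity-check, assuming one kilogram of water is roughly one litre and taking the "daily needs" figure as drinking water only (both assumptions ours, not state media's):

```python
LOW_KG, HIGH_KG = 51, 76      # reported water yield per tonne of lunar soil
BOTTLE_ML = 500
PEOPLE = 50

litres_low = LOW_KG * 1.0                   # ~1 kg of water ~= 1 litre
bottles = litres_low * 1000 / BOTTLE_ML     # bottles at the low end
litres_per_person = litres_low / PEOPLE     # share for 50 people

print(f"{bottles:.0f} bottles, {litres_per_person:.2f} L per person")
# -> 102 bottles, 1.02 L per person
```

Even at the low end of the range, the yield works out to just over 100 half-litre bottles and about a litre per person per day for 50 people, consistent with the claim.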
China aims for its recent and upcoming lunar missions to establish a basis for constructing the International Lunar Research Station (ILRS), a collaborative project with Russia.
The Chinese space agency’s plan includes establishing a lunar “base station” at the moon’s south pole by 2035, followed by a lunar orbiting space station by 2045.
This discovery coincides with ongoing experiments by Chinese scientists on lunar samples obtained from the Chang’e-6 probe in June.
While the Chang’e-5 mission collected samples from the moon’s near side, Chang’e-6 gathered lunar soil from the far side, perpetually hidden from Earth.
The significance of lunar water goes beyond sustaining human settlement: NASA’s Nelson told NPR in May that moon water could be used to produce hydrogen fuel for rockets, potentially powering missions to Mars and beyond.
“This week has felt like sitting on a half-empty train early in the morning as gradually more people board with horror stories of how awful the service is on the other line,” actor David Harewood wrote on Threads, Meta’s Twitter/X rival, which, judging by the number of “Hey, how does this work?” questions from newcomers, seems to be seeing an influx of new users, at least in the UK, following last week’s far-right riots.
Newcomers to Threads might be wondering why it took so long. To say Elon Musk’s tenure as owner of the social network formerly known as Twitter and now renamed X has been outrageous would be a criminal understatement. Recent highlights include the unbanning of numerous far-right and extremist accounts, as well as his own misinformation campaign around the far-right anti-immigrant riots in the UK.
Before Musk bought the company in 2022, few alternatives to Twitter existed, but several have emerged in the past few years. Today there are the generally left- and liberal-leaning Bluesky and Mastodon, the right-leaning Gab, and Donald Trump’s Truth Social.
But perhaps the biggest threat to X is Threads, in part because it was launched by Meta, the giant behind Facebook, Instagram and WhatsApp. But a simple question remains: is Threads any good?
For Satnam Sanghera, an author and journalist, the reason for the move is simple: “This place is corroding the very fabric of British society so I am trying to avoid it as much as possible and hoping it will be regulated,” he explained in a direct message on X. “Systemic abuse has been an issue for me, and for many people of colour, for years.”
But the force behind the switch is not so much the pull of Threads as the push away from X. “Threads has some great things, especially the fact that it links with Instagram, which is probably the most convenient social media platform,” Sanghera says. “But a lot of my loved ones aren’t on it. I’m hoping that will change, or maybe it’s just time to quit social media altogether.”
The integration with Instagram allows Insta users to open a Threads account with just a few clicks, which seems to have really accelerated Threads’ growth. Threads hit the milestone of 200 million active users earlier this month, just one year after its initial release. In comparison, Bluesky has just 6 million registered accounts and 1.1 million active users, while Mastodon has 15 million registered users, but no public data on active users.
Bluesky is one of the current alternatives to X. Photo: Jaap Arriens/NurPhoto/Shutterstock
“Threads has one big advantage,” says Emily Bell, director of the Tow Center for Digital Journalism at Columbia University in New York. “It has a built-in user base of celebrities and athletes. If you really want to kick everyone off Twitter, you can have Taylor Swift, Chappell Roan, [Italian sports journalist] Fabrizio Romano.”
Bell believes that because all of these users are already on Instagram, it may be easier to attract them to Threads than to convince them to start from scratch with an entirely new social network.
But she says this is a shame, and thinks Threads is a terrible product. “To me, Threads is a platform designed to compete with Twitter, and it feels like it was designed by a company that hates everything about Twitter,” she says. “Threads is boring as hell – presentation, participation, everything.”
From my personal experience trying out Threads for this article, it doesn’t seem like Meta treats Threads as a huge, exciting new product it wants new users to adopt. Having around 88,000 followers on X has always made me hesitant to invest in other social networks, which is why I’ve never had an Instagram account.
To join Threads, I had to join Instagram first, which took about 24-36 hours because I got some weird error messages while signing up. I finally managed to create a Threads account, but after following five accounts I was limited. A few hours later the limit was lifted, I was able to follow three more accounts, and then I was limited again. I quickly gave up.
Those who found it easy to join the site say that once they were on it, it was more comfortable than X, but that’s mainly for the simple reason that it still has moderation staff and doesn’t actively try to attract the far right.
“Threads has a different vibe, largely because of who has chosen to be there,” says misinformation researcher Nina Jankowicz. “Its users usually want something different from Twitter/X. It definitely helps that the site is actively moderated and that its leadership is not actively promoting conspiracy theories.”
X’s potential rivals are all keen to differentiate themselves from the original. Meta has said it doesn’t want Threads to focus on news and current events the way X does. Mastodon is perhaps the most consciously “woke” of the alternatives, with very different norms around content warnings and sharing. Bluesky, meanwhile, offers the closest experience to the “rebellious” and playful “old Twitter” that many still miss.
Even some of Threads’ early successes are sceptical about its actual value: Stella Creasy, the Labour MP for Walthamstow, has more than 20,000 followers on Threads (166,300 on X), but she confesses that she never actually posts there.
“I just cross-post from Instagram,” she says, sounding a little guilty. “So I follow nothing, nothing happens, and there is no engagement whatsoever.”
That’s not to say Creasy has shunned social media: she still posts on X, and is now in a local WhatsApp group with up to 700 members, where her supporters can interact with her directly. While she says she “doesn’t understand” TikTok (“I don’t feel like dancing in public”), she created an account there because “local Asian moms told me that’s where it’s at.”
Creasy noted that this fragmentation of social media has made her job as an MP more difficult during the recent turmoil: trying to connect with an audience and provide accurate information is harder across six platforms than on one.
Threads’ success may come down to the ease of joining by default: if you use Instagram, it’s the easiest thing in the world to join, and once you’re there, it’s… fine. But if many of its users seem to be operating on autopilot, that’s probably because they are.
“It’s a little bit overwhelming, you’re everywhere in the media and you don’t know what to do,” Creasy says, “and ironically, that’s why I don’t do Threads. I know where I get my momentum, and Threads is somewhere I’m not doing anything.”
What can the UK government do about Twitter? What should it do? And what does Elon Musk want?
The billionaire proprietor of the social network, still officially referred to as X, has had an eventful week causing disruptions on his platform. Besides his own posts, which include low-quality memes sourced from 8chan and reposted fake concerns from far-right figures, the platform as a whole, along with the other two of the three “T’s,” TikTok and Telegram, briefly played a significant role in orchestrating this chaos.
There is a consensus that action needs to be taken: Bruce Daisley, former VP EMEA at Twitter, proposes individual accountability.
In the near term, Musk and other executives should be reminded of their legal liability under current laws. The UK’s Online Safety Act 2023 should be promptly bolstered. Prime Minister Keir Starmer and his team should carefully consider whether Ofcom, the media regulator frequently criticized over its handling of organizations like GB News, can keep up with the rapid-fire behavior of someone like Musk. In my view, the threat of personal consequences is far more impactful on corporate executives than the prospect of a corporate fine. An arrest warrant is unlikely to see Musk in handcuffs, but for a jet-setting personality it could be a compelling deterrent.
Last week, London Mayor Sadiq Khan presented his own suggestion.
“The government swiftly realized the need to reform the online safety law,” Khan told the Guardian in an interview. “I believe that the government must ensure that this law is suitable immediately. I don’t think it currently is.”
“Responsible social media platforms can take action,” Khan remarked, but added that “if they fail to address their own issues, regulation will be enforced.”
When I spoke on Monday to Ewan McGaughey, a law professor at King’s College London, he offered more precise recommendations on what the government could do. He noted that the Communications Act 2003 underlies many of Ofcom’s powers and is used to regulate broadcast television and radio, but its reach extends beyond those media.
Because section 232 specifies that “television licensable content services” include distribution “by any means involving the use of an electronic communications network,” the Act arguably empowers Ofcom to regulate online media content. Ofcom could in principle exercise this power, but it is highly unlikely to, since it would anticipate legal challenges from tech companies, including those accused of fueling riots and conspiracy theories.
Even if Ofcom or the government were reluctant to reinterpret the old law, minor modifications could subject Twitter to stricter broadcast-style regulatory oversight, he added.
For instance, there would be no distinction between Elon Musk posting a video on X about (so-called) two-tier policing, discussing “detention camps” or asserting that “civil war is inevitable,” and ITV, Sky or the BBC broadcasting the same material… The Online Safety Act is grossly insufficient, as its constraints merely aim to prevent “illegal” content and do not inherently address false or dangerous speech.
The law of keeping promises
Police in Middlesbrough responded to a mob spurred by social media posts this month. Photo: Gary Calton/Observer
It may seem peculiar to feel sympathy for an inanimate object, but the Online Safety Act has likely been treated quite harshly given its minimal enforcement. A comprehensive law encompassing over 200 individual clauses, it was enacted in 2023, but most of its modifications will only take effect once Ofcom has completed the extensive consultation process and established a code of practice.
The law introduces a few new offenses, such as bans on cyber-flashing and upskirt photography. Sections of the old law, referred to as malicious communications, have been substituted with new, more precise laws like threatening and false communications, with two of the new offenses going into effect for the first time this week.
But what if this had all happened earlier and Ofcom was operational? Would the outcome have been different?
The Online Safety Act is a peculiar piece of legislation: an effort to curb the worst impulses on the internet, drafted by a government taking a stance in favor of free speech amidst a growing culture war and enforced by regulators staunchly unwilling to pass judgment on individual social media posts.
What transpired was either a skillful act of navigating a tricky situation or a clumsy mishap, depending on who you ask. The Online Safety Act does not outright criminalize everything on the web; instead, it mandates social media companies to establish specific codes of conduct and consistently enforce them. For certain forms of harm like incitement to self-harm, racism, and racial hatred, major services must at least provide adults with the option to opt out of such content and completely block it from children. For illegal content ranging from child abuse imagery to threats and false communications, it requires new risk assessments to aid companies in proactively addressing these issues.
It’s understandable why this legislation faced significant backlash upon its passage: its main consequence was a mountain of new paperwork in which social networks had to demonstrate adherence to what they had always purportedly done: attempting to mitigate racist abuse, addressing child abuse imagery, enforcing their terms of use, and so forth.
Advocates of the law argue that it serves more as a means for Ofcom to hold companies to their promises than to force them to change their behavior. The easiest way to incur a penalty under the Online Safety Act – potentially amounting to 10% of global turnover, on the GDPR model – is to loudly announce to customers that steps are being taken to tackle issues on the platform, and then do nothing.
One could envision a scenario where the CEO of a tech company, the key antagonist in this play, stands before an inquiry, solemnly asserting that the reprehensible behavior they witness violates their terms of service, then returning to their office and taking no action.
The challenge for Ofcom lies in the fact that multinational social networks are not governed by cartoonish villains who flout legal departments, defy moderators, and whimsically enforce one set of terms of service on allies and a different one on adversaries.
Except for one.
Do as I say, don’t do as I do
Elon Musk’s Twitter has emerged as a prime test case for online safety laws. On the surface, the social network appears relatively ordinary: its terms of service prohibit the dissemination of much of the same content as other major networks, with a slightly more lenient stance on pornographic material. Twitter maintains a moderation team that employs both automated and human moderation to remove objectionable content, an appeals process for individuals alleging unfair treatment, and progressive penalties that could ultimately lead to account suspensions for violations.
However, there’s an additional layer to how Twitter operates: whatever Elon Musk says goes. For instance, last summer a prominent right-wing influencer shared child abuse images from a case whose perpetrator had received a 129-year prison sentence. The motive remains unclear, but the account was swiftly suspended. Musk then intervened:
The only people who have seen these photos are members of the CSE team. At this time, we will remove these posts and reinstate your account.
While Twitter’s terms of service theoretically prohibit many of the egregious posts related to the UK riots, such as “hateful conduct” and “inciting, glorifying, or expressing a desire for violence,” they do not seem to be consistently enforced. This is where Ofcom may potentially take aggressive actions against Musk and his affiliated companies.
If you wish to read the entire newsletter, subscribe to receive TechScape in your inbox every Tuesday.
Among those quickly convicted and sentenced recently for their involvement in racially charged riots was Bobby Shirbon. Shirbon left his 18th birthday celebration at a bingo hall in Hartlepool to join a group roaming the town’s streets, targeting residences they believed housed asylum seekers. He was apprehended for vandalizing property and assaulting law enforcement officers, resulting in a 20-month prison term.
While in custody, Shirbon justified his actions by asserting their commonality: “It’s fine,” he reassured officers. “Everyone else is doing it too.” This rationale, although a common defense among individuals caught up in mob violence, now resonates with the hundreds facing severe sentences.
His birthday festivities were interrupted by social media alerts, potentially containing misinformation about events in Southport. Embedded in these alerts were snippets and videos that swiftly fueled a surge in violence without context.
Bobby Shirbon left a birthday party in Hartlepool and headed to the riots after receiving a social media alert.
Picture: Cleveland Police/PA
Mobile phone users likely witnessed distressing scenes last week: racists setting up checkpoints in Middlesbrough, a black man being assaulted in a Manchester park, and confrontations outside a Birmingham pub. The graphic violence, normalized in real time, incited some to take to the streets, embodying the sentiment that “everyone’s doing it.” In essence, a trigger for mob violence now sits in our pockets.
A vintage document from the BBC, the “Guidelines Regarding Violence Depiction,” serves as a reminder of what is deemed suitable for national broadcasters. Striking a balance between accuracy and potential distress is emphasized when airing real-life violence. Specific editorial precautions are outlined for violence incidents that may resonate with personal experiences or can be imitated by children.
Social media lacks these regulatory measures, with an overflow of explicit content that tends to prioritize sensationalism over accuracy, drawing attention through harm and misinformation.
During Keir Starmer’s 2020 Labour leader campaign, his team debated the idea of him leaving Twitter altogether.
Many of Starmer’s close associates wanted to change the party’s direction following a tough election and divisive social media campaigning.
Before Elon Musk took over Twitter, rebranded it as X, and allowed far-right figures back on the platform, there was a noticeable increase in misinformation. The aggressive nature of the platform seemed to fuel a darker side of politics.
Starmer himself has always been wary of Twitter’s usefulness, especially when dealing with difficulties faced by his own MPs. However, the plan to boycott the platform never materialized due to the challenges of being in opposition.
Currently, politicians like Starmer heavily rely on X for communication purposes. Despite criticism, X remains a key platform for making important announcements.
While Labour has a “tweet first” strategy, there are concerns within the government about the sustainability of this approach. Musk recently mocked Starmer on X, spreading misinformation to his large following.
Although government ministers do not explicitly mention X, they acknowledge the problem of misinformation on various platforms including X, Facebook, YouTube, TikTok, and WhatsApp.
Recognizing X’s unique position as a platform used by politicians and journalists, concerns about accuracy and the platform’s owners’ influence in spreading misinformation are growing.
Elon Musk may soon shift focus back to the US presidential election. Photo: David Swanson/Reuters
A spokesperson for Starmer condemned Musk’s inflammatory comments and actions on X, emphasizing the need for responsible behavior on the platform.
While Musk may eventually move on from provoking Starmer, the situation poses a challenge for the government. Efforts to work closely with social media companies continue, but further actions under the online safety law may be considered.
As some organizations and MPs reconsider their use of X, the dilemma of balancing the platform’s benefits with its drawbacks persists. The instant access to influential individuals and breaking news sets X apart, making it a difficult platform to abandon.
Despite criticisms and concerns, the importance of X in the political landscape remains undeniable, making it an indispensable tool for communication and information dissemination.
The 1996 Dunblane massacre and the protests that followed were a textbook example of how an act of mass violence mobilized a nation to demand effective gun control.
The atrocity, in which 16 children and a teacher were killed, triggered a wave of nationwide backlash, and within weeks 750,000 people had signed a petition calling for legal reform. Within a year and a half, new laws were in place making it illegal to own handguns.
Nearly three decades later, the horrific violence at a Southport dance studio has provoked a starkly different response. It shocked many in the UK this week, but experts on domestic extremism, particularly those who study the intersection of violence and technology, say it is all too common and, in this new age of algorithmic rage, sadly inevitable.
“Radicalization has always happened, but before, leaders were the bridge-builders that brought people together,” said Maria Ressa, a Filipino journalist and sharp-tongued technology critic who won the 2021 Nobel Peace Prize. “That’s no longer possible, because what once radicalized extremists and terrorists now radicalizes the general public, because that’s how the information ecosystem is designed.”
For Ressa, all of the violence that erupted on the streets of Southport, and then in towns across the country, fuelled by wild rumours and anti-immigrant rhetoric on social media, felt all too familiar. “Propaganda has always been there, violence has always been there, it’s social media that has made violence mainstream. [The US Capitol attack on] January 6th is a perfect example. Without social media to bring people together, isolate them, and incite them even more, people would never have been able to find each other.”
The biggest difference between the Dunblane massacre in 1996 and today is that the way we communicate has fundamentally changed. In our instant information environment, informed by algorithms that spread the most shocking, outrageous or emotional comments, social media is designed to do the exact opposite of bringing unity: it has become an engine of polarization.
“It seemed like it was just a matter of time before something like this happened in the UK,” says Julia Ebner, head of the Violent Extremism Lab at the Oxford University Centre for Social Cohesion Research. “This alternative information ecosystem is fuelling these narratives. We saw that in the Chemnitz riots in Germany in 2018, which this strongly reminded me of, and in the January 6th riots in the United States.
“You see this chain reaction with these alternative news channels. Misinformation can spread very quickly and mobilize people into the streets. And then, of course, people tend to turn to violence because it amplifies anger and deep emotions. And then it travels from these alternative media to X and mainstream social media platforms.”
This “alternative information ecosystem” includes platforms such as Telegram, BitChute, Parler and Gab, and often operates unseen behind the scenes of mainstream and social media. It has proven to be a breeding ground for the far right, conspiracy theories and extremist ideology, which collided this week and mobilized people onto the streets.
“Politicians need to stop contrasting ‘the real world’ with ‘the online world,'” Ressa said. “How many times do I have to say it? They are the same thing.”
A burnt-out car has been removed after a night of violent anti-immigration protests in Sunderland. Photo: Holly Adams/Reuters
For Jacob Davey, director of counter-hate policy and research at the Institute for Strategic Dialogue in London, it was a “catastrophe”: recent mass protests in the UK have emboldened the far right, far-right figures such as Tommy Robinson have been “replatformed” on X, and measures to curb hate are being rolled back.
The problem is that even though academics, researchers and policymakers are increasingly understanding the issue, very little is being done to solve it.
“And every year that goes by without this issue being addressed and without real legislation on social media, it’s going to get significantly worse,” Ressa said. “As [Soviet leader] Yuri Andropov said, dezinformatsiya [disinformation] is like cocaine. Once or twice it’s okay, but if you take it all the time it becomes addictive. It changes you as a person.”
However, while UK authorities are aware of these threats in theory (in 2021, MI5 director general Ken McCallum said far-right extremism was the biggest domestic terrorism threat facing the UK), the underlying technical problems remain unresolved.
It is seven years since the FBI and US Congress launched investigations into the weaponisation of social media by the Russian government. While much of the UK’s right-wing media has ignored or mocked those investigations, the Daily Mail this week ran a striking headline about one suspicious account on X. That account may be based in Russia and may be spreading false information, but it is likely only a small part of the picture.
And there is still little recognition that what we are witnessing is part of a global phenomenon — a rise in populism and authoritarianism underpinned by deeper structural changes in communication — or, according to Ebner, the extent to which the parallels with what is happening in other countries run deep.
“The rise of far-right politics is very similar across the world and in different countries. No other movement has been able to amplify their ideology in the same way. The far right is tapping into the emotions that algorithms amplify most powerfully: anger, indignation, fear, surprise.”
“And really what we’re seeing is a sense of collective learning within far-right communities in many different countries. And a lot of it has to do with building these alternative information ecosystems and using them to be able to react or respond to something immediately.”
The question is, what will Keir Starmer do? Ebner points out that this is no longer a problem in dark corners of the internet. Politicians are also part of the radicalised population. “They are now saying things they would not have said before, they are blowing dog whistles to the far right, they are playing with conspiracy theories that were once promoted by far-right extremists.”
And human rights groups such as Big Brother Watch fear that some of Starmer’s solutions – including a pledge to increase facial recognition systems – could lead to further harm from the technology.
Ravi Naik, of AWO, a law firm specialising in cases against technology companies, said there were a number of steps that could be taken, including the Information Commissioner’s Office enforcing data restrictions and police action against incitement to violence.
“But these actions are reactive,” Naik said. “The problem is too big to be addressed at the whim of a new prime minister. It is a deep-rooted issue of power, and it cannot be solved in the middle of a crisis or by impulsive reactions. We need a real adult conversation about digital technology and the future we all want.”
Meta maintains its stance against paying media companies for news in Australia, arguing that it does not address the issue of misinformation and disinformation on Facebook and Instagram.
In March, Meta announced that it would not enter new agreements to pay media organizations for news after the expiration of contracts signed in 2021 under the Morrison government’s media bargaining code.
Deputy Treasurer Stephen Jones is exploring the possibility of the Albanese government using powers under the News Media Bargaining Code Act to “designate” Meta under the code. If designated, the tech company would be compelled to negotiate payments with news providers or face a fine of 10% of its revenue in Australia.
The Treasury Department is also exploring other options, such as mandating the company to distribute news or leveraging taxation to influence the company. The government is concerned that designating Meta under the code could prompt the company to ban news in Australia, as it has done in Canada since August last year.
Experts in Canada have noted that where news content has disappeared, it has been replaced by misleading viral content.
In a submission to a federal parliamentary inquiry on social media and Australian society, Meta stated that they are “unaware of any evidence” supporting claims that misinformation has increased on their Canadian platforms due to the news ban, and that they have never viewed news as a tool to combat misinformation and disinformation on their platform.
“We are committed to removing harmful misinformation and reducing the distribution of fact-checked misinformation, regardless of whether it is news content. By addressing this harmful content, we aim to maintain the integrity of information on our platform,” stated the submission.
“Canadians can still access trusted information from various sources using our services, including government agencies, political parties, and non-government organizations, which have always shared engaging information with their audiences, along with news content links.”
“Put that phone away!” Most parents have yelled something similar to this at their children, usually resulting in a shocked look on the child’s face.
In recent years, the spread of smartphones and social media has led us to spend more time in front of screens. Children are no exception. The COVID-19 pandemic has led to a significant increase in children’s screen time due to lockdowns and school closures.
There are many frightening claims about excessive screen time for children and teens: that it harms their mental health, leading to depression, eating disorders and even suicide; that it cuts into time they could be spending on socializing and exercise, making them feel lonely and less physically fit; and more. In short, the fear is that spending too much time on digital devices is ruining our children’s lives, with the tech companies who design the apps that keep us hooked being complicit. It’s no wonder that governments around the world are considering restricting screen time for under-18s.
Yet a closer look at the evidence does not support this overwhelmingly negative view. That does not mean the tech giants are harmless or that further regulation is unnecessary. But it does mean we need to think more carefully about what healthy screen time looks like for young people, and how we can make the online world work as well as possible for them. So here is your guide to what we actually know about the impact of screens and social media.
One thing is clear in this complex field: children and young people, like the rest of us, spend a lot of time in front of screens.
I don’t usually believe in life hacks. I’d like to imagine that one simple adjustment could resurface my life like a cracked tennis court, but time and experience have shown that positive change usually happens slowly and gradually.
But there is one hack that I truly believe in. It’s fast, free, and will instantly change your life for the better. Just mute the annoying people on social media.
The process varies by platform. Typically, you go to the offending poster’s profile page or one of their posts and tap “Mute,” “Snooze,” or “Unfollow.” That’s it. With that bit of digital dusting, your social media feed becomes cleaner, or at least less dirty than before. They will disappear from your timeline, along with the various little annoyances they caused. And unlike unfollowing or blocking someone, the muted party won’t know they’ve been silenced, so there’s no risk of awkwardness or drama.
I have muted several people. Some are people I don’t want to unfollow; others I have unfollowed but muted anyway, because someone else might repost them and pollute my pristine timeline. One is a semi-celebrity who was rude to me about work many years ago. Another was rude to my friend. There are also ex-lovers, and chronic humblebraggers who make you want to bang your head against something hard.
These people brought out the worst in me. When I saw their posts, I felt angry, mean, and small. I wondered how much it would cost to buy billboards along major highways with bullet points detailing just how awful they really are.
Luckily, I rarely think about these people anymore because I’ve muted them on all platforms. I usually forget they exist unless someone brings them up in conversation. They have been weeded from the lush garden of my brain.
Bailey Parnell, founder and president of the Center for Digital Wellbeing, says that muting accounts that repeatedly make you uncomfortable is a way of setting digital boundaries to create a healthier online environment. It lets you avoid upsetting content without fully disconnecting: a solution, she says, to the complicated situation where a relationship with someone matters to you despite their annoying online presence.
“This allows you to maintain your social and professional networks while also maintaining your mental health,” she says.
This may seem like obvious advice. Still, it can be difficult to follow. The frustration you feel when you see someone’s bad posts can come with a sense of satisfaction, a kind of “Look! How annoying!” thrill.
“There can be a dopamine rush at the end of a big emotion,” says Monica Amorosi, a licensed trauma therapist in New York City. We may begin to crave the adrenaline spike that comes with content that makes us feel shocked, angry, or disgusted.
“If we lead a mundane life, lack stimulation, or are bored or overwhelmed, consuming this kind of content can be a form of entertainment or distraction,” says Amorosi.
Amorosi emphasizes the importance of not creating a “space of ignorance” in your feed by avoiding different perspectives on current events and alarming news. But this does not mean that social media should only be used to access upsetting information. Our feeds can be used for “healthy, positive education, connecting with like-minded people, understanding the nuances and diversity of the world, fact-checking information, and learning new hobbies and ideas,” she says.
So muting is probably most effectively applied against people who annoy you in a bland, everyday way, such as an arrogant colleague. Not seeing humble braggarts pretending to be ashamed of their professional successes does not limit my worldview. Instead, I get back the 5-10 minutes I might have wasted taking screenshots of posts and complaining to friends about them.
Frankly, I haven’t done anything special with the time I’ve gained by not stewing over the people I’ve muted. But how nice to have even five extra comfortable minutes in a day.
So mute freely and often. And if you disagree with me? Mute me. I’ll never know!