Examining Anti-Immigrant Themes in AI-Generated Content with Billions of TikTok Views

Numerous TikTok accounts are accumulating billions of views by sharing anti-immigrant and sexually explicit AI-generated material, as highlighted in a recent report.

Researchers identified 354 AI-focused accounts that shared 43,000 AI-generated posts, amassing 4.5 billion views in a single month.

As per the Paris-based nonprofit AI Forensics, these accounts are attempting to manipulate TikTok’s algorithm—responsible for deciding what content appears for users—by posting large volumes of content in hopes of achieving viral status.

Some accounts reportedly posted as many as 70 times daily, indicative of automated activity, with most accounts established at the start of the year.

TikTok disclosed last month that it hosted at least 1.3 billion AI-generated posts. With more than 100 million pieces of content uploaded daily, AI-labeled material constitutes a minor fraction of TikTok’s offerings. Users can also adjust settings to minimize exposure to AI content.

Among the most active accounts, around half focused on content related to women’s bodies. The report notes, “These AI representations of women are often depicted in stereotypically attractive forms, which include suggestive clothing and cleavage.”

Research from AI Forensics indicated that nearly half of the content posted by these accounts lacked labels, and under 2% used TikTok’s AI tags. The organization cautioned that this could mislead viewers. They noted that some accounts can evade TikTok’s moderation for months, even while distributing content that violates the platform’s terms.

Several accounts identified in the study have been deleted recently, with signs suggesting that moderators removed them, according to the researchers.

Some of this content resembled fake news broadcast segments, including an anti-immigrant story, while other material sexualized young women’s bodies, potentially including minors. AI Forensics found that half of the ten most active accounts focused on the female-body niche, and some of the fake news clips mimicked familiar news brands, including Sky News and ABC.

After The Guardian flagged them, some of the posts were taken down by TikTok.

TikTok labeled the report’s assertions as “unfounded,” asserting that the researchers acknowledged the issue as one affecting several platforms. Recently, The Guardian revealed that almost one in ten of the fastest-growing YouTube channels primarily features AI-generated content.

“TikTok is committed to eliminating harmful AIGC [artificial intelligence-generated content], we are blocking the creation of hundreds of millions of bot accounts while investing in top-notch AI labeling technology, and providing users with the tools and education necessary to manage their content experience on our platform,” declared a TikTok spokesperson.




An example of AI “slop” is content that lacks substance and is intended to clutter social media timelines. Photo: TikTok

The most viewed accounts flagged by AI Forensics often shared “slop,” a term for AI-generated content that is trivial, odd, and meant to flood users’ feeds, such as posts showing animals competing in Olympic diving or talking babies. Researchers noted that while some of this slop was deemed “funny” and “adorable,” it still adds to the clutter.


TikTok’s policies forbid the use of AI to create deceptive authoritative sources, portray anyone under 18, or depict adults who aren’t public figures.

“Through this investigation, we illustrate how automated accounts integrate AI content into platforms and the broader virality framework,” the researchers noted.

“The distinction between genuine human-generated content and artificial AI-produced material on platforms is becoming increasingly indistinct, indicating a trend towards greater AI-generated content in users’ feeds.”

The analysis spanned from mid-August to mid-September, uncovering attempts to monetize users via the advertisement of health supplements through fictitious influencers, the promotion of tools for creating viral AI content, or seeking sponsorships for posts.

While AI Forensics acknowledged TikTok’s recent move to allow users to restrict AI content visibility, they emphasized the need for improved labeling.

“We remain cautious about the effectiveness of this feature, given the significant and persistent challenges associated with identifying such content,” they expressed.

The researchers recommended that TikTok explore the option of developing AI-specific features within its app to differentiate AI-generated content from that produced by humans. “Platforms should aim to transcend superficial or arbitrary ‘AI content’ labels and develop robust methods that either distinctly separate generated and human-created content or enforce systematic and clear labeling of AI-generated material,” they concluded.

Source: www.theguardian.com

AI’s Energy Drain from Poor Content: Can We Redefine AI for Climate Action?

Artificial intelligence is frequently associated with massive electricity consumption and the planet-warming emissions that come with it, often in service of unproductive or misleading output that contributes little to human advancement.

However, some AI proponents at a significant UN climate summit are presenting an alternative perspective. Could AI actually assist in addressing the climate crisis rather than exacerbating it?

The discussion of “AI for good” resonated at the Cop30 conference in Belem, Brazil, where advocates claim AI has the potential to lower emissions through various efficiencies that could impact multiple aspects of daily life, including food, transportation, and energy—major contributors to environmental pollution.


Recently, a coalition of organizations, UN agencies, and the Brazilian government announced the establishment of the AI Climate Institute, a new global initiative aimed at leveraging AI as a tool for empowerment to assist developing nations in addressing environmental issues.

Proponents assert that, over time, this initiative will educate countries on utilizing AI in various ways to curb emissions, including enhancing public transportation, streamlining agricultural systems, and adjusting energy grids to facilitate the timely integration of renewable energy.

Forecasting weather patterns, including the mapping of impending climate crises like floods and wildfires, could also be refined through this approach, remarked Maria João Souza, executive director of Climate Change AI, one of the organizations involved in the initiative.

“Numerical weather prediction models demand significant computational power, which limits their implementation in many regions,” she noted. “I believe AI will act as a beneficial force that accelerates many of these advancements.”

Lorenzo Sarr, chief sustainability officer at Clarity AI and also present at Cop30, emphasized that AI could aid in tracking emissions and biodiversity, providing insights into current conditions.

“One can truly begin to identify the problem areas,” he said. “Then predictions can be made. These forecasts can address both short-term and long-term scenarios. We can predict next week’s flooding, and also analyze phenomena like rising sea levels.”

Sarr acknowledged valid concerns regarding AI’s societal and governance impacts, but he expressed optimism that the overall environmental outcomes could be beneficial. A report released in June by the London School of Economics delivered unexpectedly positive projections, suggesting that AI could slash global greenhouse gas emissions by 3.2 billion to 5.4 billion tons over the next decade, even factoring in significant energy usage.

“People already make poor energy choices, such as overusing their air conditioners,” Sarr commented. “How much of what we do on our phones is detrimental? It’s a recurring thought for me. How many hours do we spend scrolling through Instagram?”

“I believe society will gravitate toward this direction. We must consider how to prevent harming the planet through heating while ensuring a net positive impact.”

Yet, some experts and environmental advocates remain skeptical. The immense computational demands of AI, particularly in the case of generative models, are driving a surge in data centers in countries like the U.S., which consume vast quantities of electricity and water—even in drought-prone areas—leading to surging electricity costs in certain regions.

The climate ramifications of this AI surge, propelled by companies like Google, Meta, and OpenAI, are considerable and likely to increase, as indicated by a recent study from Cornell University. This impact is comparable to adding 10 million gasoline cars to the roads or matching the annual emissions of all of Norway.

“There exists a techno-utopian belief that AI will rescue us from the climate crisis,” stated Jean Hsu, a climate activist at the Center for Biological Diversity. “However, we know what truly will save us from the climate crisis: the gradual elimination of fossil fuels, not AI.”

While AI may indeed enhance efficiency and lower emissions, the same technologies can also be used to optimize fossil fuel extraction. A recent report by Wood Mackenzie estimated that AI could unlock an additional trillion barrels of oil, a scenario that, if borne out by energy markets, would obliterate any chance of preventing severe climate change.

Natasha Hospedares, lead attorney for AI at Client Earth, remarked that while the “AI for good” argument holds some validity, it represents “a very small niche” within a far larger industry focused primarily on maximizing profits.

“There is some evidence that AI could assist developing nations, but much of this is either in the early stages or remains hypothetical, and actual implementation is still lacking,” she stated. “Overall, we are significantly distant from achieving a state where AI consistently mitigates its detrimental environmental impacts.”

“The environmental consequences of AI are already alarming, and I don’t foresee a slowdown in data center expansion anytime soon. A minor fraction of AI is being applied for beneficial purposes, while the vast majority is being exploited by companies like Google and Meta, primarily for profit at the expense of the environment and human rights.”

Source: www.theguardian.com

EU Launches Investigation into Google’s ‘Demotion’ of News Media Commercial Content

The European Union has initiated an investigation into Google Search amid worries that the US tech giant may be “downgrading” commercial content from news media platforms.

The enforcement body of the bloc announced this move after monitoring revealed that various content produced in collaboration with advertisers and sponsors was ranked so low by Google that it essentially vanished from search results.

Officials from the European Commission indicated that this potentially unfair “loss of visibility and revenue” for media owners could stem from Google’s anti-spam policies.

According to the Digital Markets Act (DMA), which governs competition within the tech sector, Google is required to provide “fair, reasonable and non-discriminatory conditions for access to publishers’ websites in Google Search”.

Commission officials clarified that the investigation does not concern the overall indexing of newspapers or Google’s search coverage, but focuses specifically on commercial content supplied by third parties.

Media collaborations with firms selling products and services, from seasonal items to apparel, are described as “normal business practices in the offline world” and should be supported in equitable online ecosystems like Google, according to the officials.

For instance, a newspaper may partner with Nike to offer discounts, but evidence suggested that Google Search “demoted the newspaper’s subdomains to the extent that users could no longer access them.” This situation would also negatively impact newspapers.

“We are concerned that Google’s policies do not facilitate fair, reasonable, and non-discriminatory treatment of news publishers in search results,” stated Teresa Ribera, the European Commission’s executive vice-president for a clean, just and competitive transition.

In the upcoming days, authorities will request publishers to present evidence regarding the effects on traffic and revenue resulting from the alleged violations of fair practices, according to the commission.

Rivera further remarked: “We will investigate to ensure news publishers are not losing essential revenue during a challenging time for the industry and to make certain that Google adheres to the Digital Markets Act.”

“We are taking measures today to guarantee that Digital Gatekeepers do not unreasonably hinder the ability of businesses relying on them to promote their products and services.”

In response, Google has criticized the EU investigation as “misguided” and “without merit”.


The company shared in a blog post: “Unfortunately, the investigation into our anti-spam efforts announced today is misguided and risks harming millions of users in Europe.

“And this investigation is without merit. German courts have already dismissed similar claims, ruling that our anti-spam policies were effective, reasonable, and applied consistently.”

Google says the policy is designed to deliver “trustworthy results” and to “combat deceptive billing tactics” that “degrade” the quality of its search results.

The EU stated it took these actions to safeguard traditional media competing in online markets, especially after European Commission president Ursula von der Leyen recently warned in her State of the Union address that the media sector is at risk from the growth of AI and other threats to media funding.

Officials emphasized that the investigation is a standard non-compliance inquiry and could lead to penalties of up to 20% of Google’s global revenue, although only if Google is found to be in “systematic violation.”

Source: www.theguardian.com

Meta Faces Potential Multi-Million Dollar Fine for Ignoring Content Agreement in Australia

Meta and various tech firms that decline to enter into content agreements with Australian news organizations could face hefty multimillion-dollar penalties, as Labor’s proposed media bargaining initiative aims to link fines to the local revenues of major platforms.

New regulations will apply to large social media and search platforms generating at least $250 million in Australian revenue, regardless of whether they distribute news content, as per recent disclosures from Assistant Treasurer Daniel Mulino.

Labor has been slow to formulate the news bargaining incentive, wary of a potential backlash from US President Donald Trump over measures targeting US-based platforms.


Initially announced in December 2024, the implementation date remains uncertain, pending a month-long public consultation by the government.

These new regulations are intended to compel payments from platforms which have chosen to withdraw from the news media bargaining framework established during Prime Minister Morrison’s administration, a structure that has enabled publishers like Guardian Australia to secure around 30 agreements valued at an estimated $200 million to $250 million annually.

The decline in advertising revenue has significantly affected major media operators such as News Corp, Nine, and Seven West Media, leading to layoffs and cost cuts, while digital giants such as Google and Facebook’s parent companies continue to enjoy substantial profits.

Meta, which owns platforms like Facebook and Instagram, has declined to enter into new contracts under the existing terms, whereas Google has willingly renewed some contracts with publishers, albeit at lower payment rates.

Tech firms can bypass existing arrangements by entirely removing news content from their platforms, a move made by Meta in Canada in 2023.


Labor’s new incentive initiative aims to assist news publishers in obtaining funding even from platforms that have opted out of the news bargaining system and to support smaller publishers that depend heavily on digital platforms for content distribution.

A new discussion paper outlines that if a tech platform refuses to engage in a content agreement, it will be required to pay either a portion of the gross revenue produced in Australia or just the revenue stemming from digital advertising. This penalty would be enforced at the group level and would not extend to smaller subsidiary brands owned by larger corporations.

The Treasury has indicated support for a $250 million annual income threshold for this new framework and suggested that the government use the total group income generated in Australia as the primary benchmark for payments.

Preliminary analyses estimate that existing agreements with publishers are worth roughly 1.5% of the relevant platforms’ Australian revenue. The new charge could be set at up to 2.25% of revenue, creating an incentive for platforms to strike deals under the existing laws instead. Under the proposed structure, a portion of eligible expenses could be used to reduce the amount payable.

Companies will need to self-evaluate their liabilities under these regulations, but the legislation will depend on a broad definition of social media and search.

Despite not having a registered business account in Australia, Facebook’s Australian subsidiary announced in April that it generated $1.46 billion in revenue for the year ending December 31, an increase from $1.34 billion the previous year, despite declining advertising markets.

President Trump has previously threatened to impose significant trade tariffs on countries perceived to treat American firms unfairly. His former confidant and billionaire adviser, Elon Musk, owns the platform X.

Nonetheless, Labor is proceeding with the introduction of new penalties following Anthony Albanese’s productive meeting at the White House last month.

Former chairman of the competition watchdog, Rod Sims, has expressed support for Labor’s proposed penalty system, stating that Google and Facebook are profiting from content created by Australian news organizations and that failing to bolster journalism would enable lower-quality sources to flourish.

Sims had previously estimated that commercial contracts established under these terms amounted to $1 billion over a four-year period.

The government will continue consultations regarding the incentive plan until December 19, after which it will finalize its strategy in 2026.




Source: www.theguardian.com

UK Criminalizes Pornographic Content Involving Strangulation

The depiction of strangulation, often referred to as “choking,” in pornography will be criminalized, with legal obligations imposed on technology platforms to prevent users in the UK from accessing such content.

Proposed amendments to the Crime and Policing Bill introduced in Parliament on Monday will make it illegal to possess or distribute pornography that features choking.

An additional amendment aims to extend the timeframe for victims of intimate image abuse to come forward, increasing the prosecution limit from six months to three years.

The government stated that these changes would eliminate unnecessary obstacles for victims reporting crimes, thus “enhancing access to justice for those in need.”

The choking ban follows a government review that suggested pornography was fostering the normalization of strangulation as a “sexual norm.”

The Independent Pornography Review, commissioned under former prime minister Rishi Sunak and conducted by Baroness Gabby Bertin, published its findings in February, recommending a prohibition on pornography featuring strangulation.

Despite the common belief that strangulation can be performed safely, studies indicate that it poses significant risks. While there may be no visible injuries, oxygen deprivation—even for brief moments—can cause detrimental changes to the brain’s delicate structures.

Research has revealed specific alterations in the brains of women who have been subjected to choking during sexual activity, including indicators of brain damage and disruption associated with depression and anxiety.

Given these dangers, non-fatal strangulation and non-fatal asphyxiation were categorized as criminal offenses in the Domestic Abuse Act 2021, amid concerns that offenders often escape punishment due to the absence of visible injuries.

The new amendments will mandate platforms to take proactive measures to block users’ access to illegal content involving strangulation and suffocation.

Choking-related content will be treated as a priority offense, imposing a legal responsibility on pornographic sites and tech platforms to ensure UK users cannot view such material.

The Ministry of Justice indicated that this might involve the use of automated systems for the detection and removal of images, moderation tools, or stricter content policies to hinder the spread of abusive material.

Failure to comply could result in fines of up to £18 million imposed by Ofcom.

Barney Ryan, CEO of the Strangulation Research Institute, expressed support for the ban, stating, “While consenting adults should have the freedom to safely explore their sexuality, we must recognize the severe risks posed by unregulated online content, particularly to children and young people.

“Strangulation represents a severe form of violence, often employed in domestic violence for control, silence, or to induce fear. Its portrayal in pornography, especially without context, can impart confusing and harmful messages to youth regarding what is normal and acceptable in intimate relationships. Our research confirms that there is no safe way to strangle.”

Alex Davies-Jones, the minister for victims and violence against women and girls, emphasized that online misogyny “has devastating real-world impacts on all of us. Every day, women and girls have their lives disrupted by cowards who abuse and exploit them from behind screens.”

“This government will not remain passive while women face online violations and become victims of normalized and violent pornography.

“We are delivering a strong message that dangerous and sexist behavior will not be tolerated.”

This initiative comes on the heels of a government-commissioned inquiry in 2020 that found “significant evidence” of a link between pornography use and harmful sexual attitudes and behaviors toward women.

Additionally, a study conducted that year by the British Board of Film Classification found that many children had encountered violent or offensive pornography, which left them feeling upset or anxious; some even mimicked the behaviors they had observed online. Children who engaged with pornography were three to six times more likely to take part in “potentially risky behavior” concerning consent.

Source: www.theguardian.com

Labor Rules Out Permitting Tech Giants to Exploit Copyrighted Content for AI Training

In response to significant backlash from writers, arts, and media organizations, the Albanese government has definitively stated that tech companies will not be allowed to freely access creative content for training artificial intelligence models.

Attorney general Michelle Rowland is expected to announce the decision on Monday, effectively rejecting a contentious proposal from the Productivity Commission that had support from technology companies.

“Australian creatives are not just top-tier; they are essential to the fabric of Australian culture, and we need to ensure they have robust legal protections,” Rowland said.

The commission faced outrage in August when its interim report on data usage in the digital economy suggested exemptions from copyright law, effectively granting tech companies free access to content for AI training.


Recently, Scott Farquhar, co-founder of Atlassian and chairman of the Australian Technology Council, told the National Press Club that revising existing restrictions could “unlock billions in foreign investment for Australia”.

The proposal triggered a strong backlash from creators, including Indigenous rapper Adam Briggs, who testified in September that allowing companies to utilize local content without fair remuneration would make it “hard to put the genie back in the bottle.”

Australian author Anna Funder argued that large-scale AI systems rely on “massive unauthorized appropriation of every available book, artwork, and performance that can be digitized.”

The same inquiry uncovered that the Productivity Commission did not engage with the creative community or assess the potential effects of its recommendations before releasing its report. This led Green Party senator Sarah Hanson-Young to state that the agency had “miscalculated the importance of the creative industries.”

The Australian Council of Trade Unions also cautioned against the proposal, asserting it would lead to “widespread theft” of creative works.

Senior government ministers had previously been dismissive of suggestions that a so-called “text and data mining” exemption might be adopted, but Rowland’s statement marks the first time it has been specifically ruled out.

“While artificial intelligence offers vast opportunities for Australia and its economy, it’s crucial that Australian creators also reap the benefits,” she asserted.

The Attorney General plans to gather the government’s Copyright and AI Reference Group on Monday and Tuesday to explore alternative measures to address the challenges posed by advancing technology.

This includes discussions on whether a new paid licensing framework under copyright law should replace the current voluntary system.

Briggs on AI: it doesn’t know ‘what a lounge room in Shepparton smells like’ – video

The Australian Recording Industry Association (ARIA), one of the organizations advocating against the exemption, praised the announcement as “a substantial step forward.”

“This represents a win for creativity and Australian culture, including Indigenous culture, but more importantly, it’s a victory for common sense. The current copyright licensing system is effective,” stated ARIA CEO Annabel Hurd.


“Intellectual property law is fundamental to the creative economy, digital economy, and tech industry. It is the foundation that technology companies rely on to protect and monetize their products, driving innovation.”

Hurd emphasized that further measures are necessary to safeguard artists, including ensuring AI adheres to licensing rules.

“Artists have the right to determine how their work is utilized and to share in the value that it generates,” she stated.

“Safeguarding those frameworks is how we secure Australia’s creative sovereignty and maintain our cultural vitality.”

Media companies also expressed their support for the decision.

A spokesperson for Guardian Australia stated that this represents “a significant step towards affirming that Australia’s copyrighted content warrants protection and compensation.”

“Australian media, publishers, and creators all voiced strong opposition to the TDM (text and data mining) exception, asserting it would permit large-scale theft of the work of Australian journalists and creators, undermining Australia’s national interests,” the spokesperson added.

They also indicated that the Guardian seeks to establish a fair licensing system that supports genuine value exchange.

News Corp Australasia executive chairman Michael Miller remarked that the government made the “correct decision” to exclude the exemption.

“By protecting creators’ rights to control access, usage terms, and remuneration, we reinforce the efficacy of our nation’s copyright laws, ensuring favorable market outcomes,” he affirmed.

Source: www.theguardian.com

Meta Found in Violation of EU Law Due to ‘Ineffective’ Illegal Content Complaint System

The European Commission has stated that Instagram and Facebook failed to comply with EU regulations by not offering users a straightforward method to report illegal content, such as child sexual abuse and terrorism.

According to the EU enforcement agency’s initial findings released on Friday, Meta, the California-based company valued at $1.8 trillion (approximately £1.4 trillion) that operates both platforms, has implemented unnecessary hurdles for users attempting to submit reports.

The report indicated that both platforms employ misleading designs, referred to as “dark patterns,” in their reporting features, which can lead to confusion and discourage users from taking action.

The commission concluded that this behavior constitutes a violation of the company’s obligations under the EU-wide Digital Services Act (DSA), suggesting that “Meta’s systems for reporting and addressing illegal content may not be effective.” Meta has denied any wrongdoing.

The commission remarked, “In the case of Meta, neither Facebook nor Instagram seems to provide user-friendly and easily accessible ‘notification and action’ systems for users to report illegal content like child sexual abuse or terrorist content.”

A senior EU official emphasized that the matter goes beyond illegal content, touching on issues of free speech and “overmoderation.” Facebook has previously faced accusations of “shadowbanning” users regarding sensitive topics such as Palestine.

The existing reporting system is deemed not only ineffective but also “too complex for users to navigate,” ultimately discouraging them from reaching out, the official noted.

Advocates continue to raise concerns about inherent safety issues in some of Meta’s offerings. Recent research released by Meta whistleblower Arturo Bejar revealed that newly introduced safety features on Instagram are largely ineffective and pose a risk to children under 13.

Meta has refuted the report’s implications, asserting that parents have powerful tools at their disposal. The company implemented mandatory Instagram accounts for teenagers as of September 2024 and recently announced plans to adopt a version of its PG-13 film rating system to enhance parental control over their teens’ social media engagement.

The commission also pointed out that Meta complicates matters for users whose content has been blocked or accounts suspended. The report indicated that the appeal mechanism does not allow users to present explanations or evidence in support of their case, which undermines its efficacy.

The commission stated that streamlining the feedback system could also help platforms combat misinformation, citing as an example an Irish deepfake video in which leading presidential candidate Catherine Connolly appeared to announce she was withdrawing from Friday’s election.

This ongoing investigation has been conducted in partnership with Coimisiún na Meán, Ireland’s Digital Services Coordinator, which oversees platform regulations from its EU headquarters in Dublin.

The commission also made preliminary findings indicating that TikTok and Meta are not fulfilling their obligation to provide researchers with adequate access to public data necessary for examining the extent of minors’ exposure to illegal or harmful content. Researchers often encounter incomplete or unreliable data.

The commission emphasized that “granting researchers access to platform data is a crucial transparency obligation under the DSA, as it allows for public oversight regarding the potential effects these platforms have on our physical and mental well-being.”

These initial findings will allow the platforms time to address the commission’s requests. Non-compliance may result in fines of up to 6% of their global annual revenue, along with periodic penalties imposed to ensure adherence.


“Our democracy relies on trust, which means platforms must empower their users, respect their rights, and allow for system oversight,” stated Henna Virkkunen, the commission’s executive vice-president for tech sovereignty, security, and democracy.

“The DSA has made this a requirement rather than a choice. With today’s action, we are sharing preliminary findings on data access by researchers regarding four platforms. We affirm that platforms are accountable for their services to users and society, as mandated by EU law.”


A spokesperson for Meta stated: “We disagree with any suggestions that we have violated the DSA and are actively engaging with the European Commission on these matters. Since the DSA was implemented, we have made changes to reporting options, appeal processes, and data access tools in the EU, and we are confident that these measures meet EU legal requirements.”

TikTok mentioned that fully sharing data about its platform with researchers is challenging due to restrictions imposed by GDPR data protection regulations.

“TikTok values transparency and appreciates the contributions of researchers to our platform and the industry at large,” a spokesperson elaborated. “We have invested significantly in data sharing, and presently, nearly 1,000 research teams have accessed their data through our research tools.

“While we assess the European Commission’s findings, we observe a direct conflict between DSA requirements and GDPR data protection standards.” The company has urged regulators to “clarify how these obligations should be reconciled.”

Source: www.theguardian.com

OpenAI Empowers Verified Adults to Create Erotic Content with ChatGPT

On Tuesday, OpenAI revealed plans to relax restrictions on its ChatGPT chatbot, enabling verified adult users to access erotic content in line with the company’s principle of “treating adult users like adults.”

Upcoming changes include an updated version of ChatGPT that will permit users to personalize their AI assistant’s persona. Options will feature more human-like dialogue, increased emoji use, and behaviors akin to a friend. The most significant adjustment is set for December, when OpenAI intends to implement more extensive age restrictions allowing erotic content for verified adults. Details on age verification methods or other safeguards for adult content have not been disclosed yet.

In September, OpenAI introduced a specialized ChatGPT experience for users under 18, automatically directing them to age-appropriate content while blocking graphics and sexual material.

Additionally, the company is working on behavior-based age prediction technology to estimate if a user is over or under 18 based on their interactions with ChatGPT.

In a post on X, OpenAI chief executive Sam Altman announced the changes.

These enhanced safety measures follow the suicide of California teenager Adam Raine this year. His parents filed a lawsuit in August claiming that ChatGPT offered explicit guidance on taking his own life. Altman stated that within just two months, the company had been able to “alleviate serious mental health issues.”

The US Federal Trade Commission has also initiated an investigation into various technology firms, including OpenAI, regarding potential dangers that AI chatbots may pose to children and adolescents.


“Considering the gravity of the situation, we aimed to get this right,” Altman stated on Tuesday, emphasizing that OpenAI’s new safety measures enable the company to relax restrictions while effectively addressing serious mental health concerns.

Source: www.theguardian.com

Equity Threatens Mass Direct Action Over Use of Actors’ Images in AI Content

The performing arts union Equity has issued a warning of significant direct action against tech and entertainment firms regarding the unauthorized use of its members’ likenesses, images, and voices in AI-generated content.

This alert arises as more members express concerns over copyright violations and the inappropriate use of personal data within AI materials.

General Secretary Paul W. Fleming stated that the union intends to organize mass data requests, compelling companies to reveal whether they have utilized members’ data for AI-generated content without obtaining proper consent.

Recently, the union declared its support for a Scottish actor who alleges that his likeness contributed to the creation of Tilly Norwood, an “AI actor” criticized by the film industry.

Bryony Monroe, 28, from East Renfrewshire, believes her image was used to create a digital character by the AI “talent studio” Xicoia, though Xicoia has denied her claims.

Most complaints received by Equity relate to AI-generated voice replicas.

Mr. Fleming mentioned that the union is already assisting members in making subject access requests against producers and tech firms that fail to provide satisfactory explanations about the sources of data used for AI content creation.

He noted, “Companies are beginning to engage in very aggressive discussions about compensation and usage. The industry must exercise caution, as this is far from over.”

“AI companies must recognize that we will be submitting access requests en masse. They have a legal obligation to respond. If a member reasonably suspects their data is being utilized without permission, we aim to uncover that.”

Fleming expressed hope that this strategy will pressure tech companies and producers resisting transparency to reach an agreement on performers’ rights.

“Our goal is to leverage individual rights to push technology companies and producers into binding collective agreements,” Fleming explained.

He emphasized that with 50,000 members, a significant number of requests for access would complicate matters for companies unwilling to negotiate.

Under data protection laws, individuals have the right to request all information held about them by an organization, which typically responds within a month.

“This isn’t a perfect solution,” Fleming added. “It’s no simple task, since they might source data elsewhere. Many of these companies are behaving recklessly and unethically.”

Ms. Monroe believes that Norwood not only mimics her image but also her mannerisms.

Monroe remarked, “I have a distinct way of moving my head while acting. I recognized that in the closing seconds of Tilly’s showreel, where she mirrored exactly that. Others observed, ‘That’s your mannerism. That’s your acting style.'”

Liam Budd, director of recorded media industries at Equity UK, confirmed that the union takes Ms. Monroe’s concerns seriously. Particle 6, the AI production company behind Xicoia, said it is collaborating with unions to address any concerns raised.

A spokesperson from Particle 6 stated, ‘Bryony Monroe’s likeness, image, voice, and personal data were not utilized in any way to create Tilly Norwood.’

“Tilly was developed entirely from original creative designs. We do not, and will not, use performers’ likenesses without their explicit consent and proper compensation.”

Budd refrained from commenting on Monroe’s allegations but said, “Our members increasingly report specific infringements concerning their image or voice being used without consent to produce content that resembles them.”

“This practice is particularly prevalent in audio, as creating a digital audio replica requires less effort.”

However, Budd acknowledged that Norwood presents a new challenge for the industry, as “we have yet to encounter a fully synthetic actor before.”

Equity UK has been negotiating with the UK production industry body Pact (the Producers Alliance for Cinema and Television) regarding AI, copyright, and data protection for over a year.

Fleming mentioned, “Executives are not questioning where their data originates. They privately concede that employing AI ethically is nearly impossible, as they are collecting and training on data with dubious provenance.”

“Yet, we frequently discover that it is being utilized entirely outside established copyright and data protection frameworks.”

Max Rumney, deputy chief executive of Pact, highlighted that its members must adopt AI technology in production or risk falling behind companies without collective agreements that ensure fair compensation for actors, writers, and other creators.

However, he noted a lack of transparency from tech firms regarding the content and data used for training the foundational models of AI tools like image generators.

“The fundamental models were trained on our members’ films and programming without their consent,” Rumney stated.

“Our members favor genuine human creativity in their films and shows, valuing this aspect as the hallmark of British productions, making them unique and innovative.”

Source: www.theguardian.com

British MPs Urged to Investigate TikTok’s Plan to Cut 439 Content Moderation Jobs

Trade unions and online safety advocates are urging members of parliament to examine TikTok’s decision to cut hundreds of UK-based content moderation jobs.

The social media platform intends to reduce its workforce by 439 positions within its trust and safety team in London, raising alarms about the potential risks to online safety associated with these layoffs.

Trade union bodies, including the Communication Workers Union (CWU), and prominent figures in online safety have written an open letter to Chi Onwurah MP, chair of the Commons science, innovation, and technology committee, seeking an inquiry into the plans.

The letter references estimates from the UK’s data protection authority indicating that as many as 1.4 million TikTok users could be under the age of 13, cautioning that these reductions might leave children vulnerable to harmful content. TikTok boasts over 30 million users in the UK.

“These safety-focused staff members are vital in safeguarding our users and communities against deepfakes, harm, and abuse,” the letter asserts.

Additionally, TikTok has suggested it might substitute moderators with AI-driven systems or workers from nations like Kenya and the Philippines.





The signatories also accuse the Chinese-owned TikTok of undermining the union by announcing layoffs just eight days prior to a planned vote on union recognition within the CWU technology sector.

“There is no valid business justification for enacting these layoffs. TikTok’s revenue continues to grow significantly, with a 40% increase. Despite this, the company has chosen to make cuts. We perceive this decision as an act of union-busting that compromises worker rights, user safety, and the integrity of online information,” the letter elaborates.

Among the letter’s signatories are Ian Russell, the father of Molly Russell, a British teenager who took her own life after encountering harmful online content; Meta whistleblower Arturo Bejar; and Sonia Livingstone, a professor of social psychology at the London School of Economics.

The letter also urges the committee to evaluate the implications of the job cuts for online safety and workers’ rights, and to explore legal avenues to prevent content moderation from being offshored and human moderators from being replaced by AI.

Asked about the letter, Onwurah said the layoff plan raises questions about TikTok’s content moderation efforts, stating: “The role that recommendation algorithms play on TikTok and other platforms in exposing users to considerable amounts of harmful and misleading content is evident and deeply troubling.”


Onwurah mentioned that the impending job losses were questioned during TikTok’s recent appearance before the committee, where the company reiterated its dedication to maintaining security on its platform through financial investments and staffing.

She remarked: “TikTok has conveyed to the committee its assurance of maintaining the highest standards to safeguard both its users and employees. How does this announcement align with that commitment?”

In response, a TikTok representative stated: “We categorically refute these allegations. We are proceeding with the organizational restructuring initiated last year to enhance our global operational model for trust and safety. This entails reducing the number of centralized locations worldwide and leveraging technological advancements to improve efficiency and speed as we develop this essential capability for the company.”

TikTok confirmed it is engaging with the CWU voluntarily and has expressed willingness to continue discussions with the union after the current layoff negotiations are finalized.



Source: www.theguardian.com

OpenAI Video App Sora Faces Backlash Over Violent and Racist Content: “Guardrails Are Not Real”

On Tuesday, OpenAI unveiled the latest version of its AI video generator, Sora 2, incorporating a social feed that enables users to share lifelike videos.

However, mere hours after Sora 2’s release, many videos shared on its feed and on older social platforms depicted copyrighted characters in troubling contexts and featured graphic violence and racist scenes, despite OpenAI’s usage policies, which cover Sora as well as ChatGPT’s image and text generation, explicitly banning content that “promotes violence” or otherwise “causes harm.”

According to prompts and clips reviewed by the Guardian, Sora generated several videos illustrating the horrors of bombings and mass shootings, with panicked individuals fleeing university campuses and crowded locations like Grand Central Station in New York. Other prompts created scenes reminiscent of war zones in Gaza and Myanmar, where AI-generated children described their homes being torched. One video, labeled as “Ethiopian Footage Civil War News Style,” showcased a bulletproof-vested reporter speaking into a microphone about government and rebel gunfire in civilian areas. Another clip, prompted by “Charlottesville Rally,” depicted Black protesters in gas masks, helmets, and goggles screaming in distress.

Currently, the app is accessible by invitation only and has not been released to the general public. Yet within three days of its restricted debut, it skyrocketed to the top of Apple’s App Store, surpassing even OpenAI’s own ChatGPT.

“So far, it’s been amazing to witness what collective human creativity can achieve,” stated Sora’s director Bill Peebles in a Friday post on X. “We will be sending out more invitation codes soon, I assure you!”

The Sora app provides a glimpse into a future where distinguishing truth from fiction may become increasingly challenging. Misinformation researchers warn that such realistic content could obscure reality and that these AI-generated videos could be employed for fraud, harassment, and extortion.

“It doesn’t hold to historical truth and is far removed from reality,” remarked Joan Donovan, an assistant professor at Boston University focusing on media manipulation and misinformation. “When malicious individuals gain access to these tools, they use them for hate, harassment, and incitement.”

Slop Engine or “ChatGPT for Creativity”?

OpenAI CEO Sam Altman described the launch of Sora 2 as “truly remarkable,” and in a blog post stated it “feels like a ‘ChatGPT for creativity’ moment for many of us, embodying a sense of fun and novelty.”

Altman acknowledged the addictive tendencies of social media and its links to bullying, noting that AI video generation can lead to what is known as “slop”: repetitive, low-quality videos that might overwhelm the platform.

“The team was very careful and considerate in trying to create an enjoyable product that avoids falling into that pitfall,” Altman wrote. He stated that OpenAI has taken steps to prevent misuse of someone’s likeness and to guard against illegal content. For instance, the app declined to generate a video featuring Donald Trump and Vladimir Putin sharing cotton candy.

Nonetheless, within the three days following Sora’s launch, numerous such videos had already spread online. Washington Post reporter Drew Harwell created a video depicting Altman as a second world war military leader and also produced a video featuring “ragebait, fake crime, women splattered on white geese.”

Sora’s feed includes numerous videos featuring copyrighted characters from series such as SpongeBob SquarePants, South Park, and Rick and Morty. The app seamlessly generated videos of Pikachu imposing tariffs on China, pilfering roses from the White House Rose Garden, and taking part in a Black Lives Matter protest alongside SpongeBob. One video documented by 404 Media showed SpongeBob dressed as Adolf Hitler.

Neither Paramount, Warner Bros, nor Pokémon Co responded to requests for comment.

David Karpf, an associate professor at George Washington University’s School of Media and Public Affairs, said he had seen a video featuring copyrighted characters promoting cryptocurrency fraud, asserting that OpenAI’s safety guardrails for Sora are plainly not working.


“Guardrails aren’t effective when individuals construct copyrighted characters that foster fraudulent schemes,” stated Karpf. “In 2022, tech companies made significant efforts to hire content moderators; however, in 2025, it appears they have chosen to disregard these responsibilities.”

Just before the release of Sora 2, OpenAI contacted talent agencies and studios to inform them they could opt out if they wished to prevent the video generator from replicating their copyrighted material, the Wall Street Journal reported.

OpenAI told the Guardian that content owners can report copyright violations through a “copyright dispute form,” but that individual artists and studios cannot opt out wholesale, according to Varun Shetty, OpenAI’s head of media partnerships.

Emily Bender, a professor at the University of Washington and author of the book “The AI Con,” expressed that Sora creates a perilous environment where “distinguishing reliable sources is challenging, and trust wanes once one is found.”

“Whether they generate text, images, or videos, synthetic media machines represent a tragic facet of the information ecosystem,” Bender observed. “Their output interacts with technological and social structures in ways that weaken and erode trust.”

Nick Robbins contributed to this report

Source: www.theguardian.com

TikTok ‘Leads Child Accounts to Adult Content with Just a Few Clicks’

A report from the campaign group Global Witness reveals that TikTok is guiding child accounts towards pornographic content within just a few clicks.

Global Witness researchers created fake accounts posing as 13-year-olds and activated the app’s restricted mode, which is designed to reduce exposure to “sexually suggestive” material.

Researchers discovered that TikTok suggested sexual and explicit search phrases for seven test accounts established on new mobile devices with no prior search history.

The suggested terms under the “You May Want” feature included “very rude and revealing attire” and “very rude babe,” escalating to phrases like “hardcore porn clip.” Sexual search suggestions appeared instantly for three of the accounts.

After just “a few clicks,” researchers encountered pornographic material ranging from sexualized depictions of women to explicit sexual acts. Global Witness indicated that some content tried to evade moderation by appearing within seemingly innocuous photos or videos. For one account, access to explicit content required only two clicks: one on the search bar and another on a suggested search term.

Global Witness, an organization that usually focuses on climate issues and the impact of big tech on human rights, conducted two rounds of testing, one before and one after 25 July, when the child safety provisions of the UK’s Online Safety Act (OSA) came into force.


Two videos featuring individuals who appeared under 16 were reported to the Internet Watch Foundation, tasked with monitoring online child sexual abuse material.

Global Witness accused TikTok of breaching the OSA, which mandates tech companies to shield children from harmful content, including pornography.

A spokesperson for the UK communications regulator, Ofcom, said it would review the study’s findings and evaluate the results.

Ofcom’s codes of practice stipulate that tech companies at risk of promoting harmful content must “design their algorithms to eliminate harmful material from child feeds.” TikTok’s content guidelines expressly prohibit pornographic material.

In response to Global Witness’s concerns, TikTok confirmed the removal of troubling content and modifications to its search recommendations.

“Upon recognizing these issues, we promptly initiated an investigation, eliminated content that breached our policies, and began enhancing our search proposal features,” stated a spokesperson.

Source: www.theguardian.com

Social Media Continues to Promote Suicide-Related Content to Teens Despite New UK Safety Regulations

Social media platforms continue to disseminate content related to depression, suicide, and self-harm among teenagers, despite the introduction of new online safety regulations designed to safeguard children.

The Molly Rose Foundation set up a fake account posing as a 15-year-old girl and interacted with posts about suicide, self-harm, and depression. The algorithms then served the account a “tsunami of harmful content” on Instagram Reels and TikTok’s For You page, as detailed in the charity’s analysis.

An alarming 97% of recommended videos viewed on Instagram reels and 96% on TikTok were found to be harmful. Furthermore, over half (55%) of TikTok’s harmful recommended posts included references to suicide and self-harm, while 16% contained protective references to users.

These harmful posts garnered substantial viewership. One particularly damaging video was liked over 1 million times on TikTok’s For You Page, and on Instagram reels, one in five harmful recommended videos received over 250,000 likes.

Andy Burrows, CEO of The Molly Rose Foundation, stated: “Persistent algorithms continue to bombard teenagers with dangerous levels of harmful content. This is occurring on a massive scale on the most popular platforms among young users.”

“In the two years since our last study, it is shocking that the magnitude of harm has not been adequately addressed, and that risks have been actively exacerbated on TikTok.

“The measures instituted by Ofcom to mitigate algorithmic harms are, at best, temporary solutions and are insufficient to prevent preventable damage. It is crucial for governments and regulators to take decisive action to implement stronger regulations that platforms cannot overlook.”

Researchers examining platform content from November 2024 to March 2025 discovered that while both platforms permitted teenagers to provide negative feedback on content, as required by Ofcom under the online safety law, this function also allowed for positive feedback on the same material.

The foundation’s report, developed in conjunction with Bright Data, indicates that while the platforms have made it harder to use hashtags to search for hazardous content, they still amplify harmful material through personalized AI recommendation systems once a user has engaged with it. The report further observed that platforms often rely on overly broad definitions of harm.

This study provided evidence linking exposure to harmful online content with increased risks of suicide and self-harm.

Additionally, it was found that social media platforms profited from advertisements placed next to numerous harmful posts, including those from fashion and fast food brands popular among teenagers as well as UK universities.


Ofcom has begun implementing child safety codes under the online safety laws aimed at “taming toxic algorithms.” The Molly Rose Foundation, which receives funding from Meta, expresses concern that the regulator envisages a spend of a mere £80,000 on these improvements.

A spokesperson for Ofcom stated, “Changes are underway. Since this study was conducted, new measures have been introduced to enhance online safety for children. These will make a significant difference, helping to prevent exposure to the most harmful content, including materials related to suicide and self-harm.”

Technology Secretary Peter Kyle mentioned that 45 sites have been under investigation since the enactment of the online safety law. “Ofcom is also exploring ways to strengthen existing measures, such as employing proactive technologies to protect children from self-harm and recommending that platforms enhance their algorithmic safety,” he added.

A TikTok spokesperson commented: “TikTok accounts for teenagers come equipped with over 50 safety features and settings that allow for self-expression, discovery, and learning while ensuring safety. Parents can further customize content and privacy settings for their teens through family pairing.”

A Meta spokesperson said the company disputed the claims made in the report, citing its limited methodology, and added:

“Millions of teenagers currently use Instagram’s teenage accounts, which offer built-in protections that limit who can contact them, the content they can see, and their time spent on Instagram. Our efforts to utilize automated technology continue in order to remove content that promotes suicide and self-harm.”

Source: www.theguardian.com

Arts and Media Groups Call for Ban on AI Training on Australian Content to Combat “Rampant Theft”

Arts, creative, and media organizations are urging the government to prohibit large tech companies from using Australian content to develop artificial intelligence models, amid growing concern that allowing such use would “betray” Australian workers and facilitate the “widespread theft” of intellectual property.

The Albanese government has stated that it has no intention of altering copyright laws, but emphasizes that any changes must consider their effects on artists and news media. Opposition leader Sussan Ley has called for compensation for any use of copyrighted material.

“It is unacceptable for Big Tech to exploit the work of Australian artists, musicians, creators, and journalists without just compensation,” Ley asserted on Wednesday.


The Productivity Commission’s interim report, titled “Utilizing Data and Digital Technology”, proposes regulatory settings for technologies including AI in Australia, projecting a productivity increase of 0.5% to 13% over the next decade and potentially adding $116 billion to the nation’s GDP.

The report highlighted that building AI models demands a substantial amount of data, prompting concerns from many players, including Creative Australia and copyright agencies, about the misuse of copyrighted content for AI training.

The commission outlined potential approaches, including an expansion of licensing agreements, reliance on existing fair dealing provisions, and an exemption for “text and data mining” of the kind already in place in other countries.

This latter suggestion faced significant opposition from arts, creative, and media organizations. They expressed discontent at the idea of allowing wealthy tech companies to utilize their work for AI training without appropriate compensation.

Such a shift could jeopardize existing licensing agreements formed between publishers and creators with major tech firms and complicate negotiations for news media seeking fair compensation from social media platforms for journalism online.


The Australian Council of Trade Unions (ACTU) criticized the Productivity Commission’s proposal, claiming it serves the interests of large multimillion-dollar corporations and warning that it risks betraying Australian workers.

“The extensive discussion surrounding text and data mining exemptions risks normalizing the theft of creative works from Australian artists and Indigenous communities,” the ACTU said.

Joseph Mitchell, ACTU Secretary, indicated that such exemptions would allow “high-tech corporations to reap the full benefits of advanced technology without giving back to the creators.”

APRA Chair Jenny Morris is among those who have voiced concerns over potential exemptions for “text and data mining” used in AI training. Photo: AAP

Australia’s music rights organizations, APRA AMCOS and the National Aboriginal and Torres Strait Islander Music Bureau, expressed disappointment regarding the commission’s recommendations, raising alarms about the implications for Australia’s $9 billion music sector.

APRA Chair Jenny Morris stressed that this recommendation highlights a recognition that these practices are already widespread.

Attorney General Michelle Rowland, responsible for copyright legislation, stated that any advancements in AI must prioritize building trust and confidence.

“Any reforms to Australia’s copyright law must reflect the effects on the nation’s creative and news sectors. We remain dedicated to participating in dialogues around these issues, particularly with the copyright and AI reference groups initiated by the government last year,” she mentioned.


When asked about the commission’s findings, Ley expressed concern regarding the absence of sufficient “guardrails” from the government to tackle AI-related issues.

“We need to safeguard content creators… their work rightfully belongs to them, and we must not take it without compensating them,” she added.

Ed Husic, Labor’s former industry and science minister, defended the report’s broader economic case on Wednesday. Treasurer Jim Chalmers later commented on ABC’s 7.30, saying, “The mechanism you deploy, whether one act or multiple existing acts… is not the crux of the issue.”

“I believe we can strike a balance between concerns that AI is harmful and those who pretend we can return to a previous state,” he indicated.

“There are no current plans to undermine or alter Australia’s copyright arrangements.”

Arts Minister Tony Burke highlighted a submission from Creative Australia to the review, saying it “emphasizes the necessity for consent, transparency, and fair compensation concerning copyright and labeling.”

In a statement, Creative Australia asserted that the nation has the potential to lead globally in establishing “fair standards” for AI application.

“Artists and creatives whose work is utilized in training AI are entitled to proper compensation,” a spokesperson remarked.

“Innovation should not come at the cost of ethical business practices.”

The Australian Publishers Association (APA) has expressed worries about the possibility of works being utilized without authorization or compensation.

“While we support responsible innovation, this draft proposal favors infringers over investors,” stated Patrizia Di Biase-Dyson, CEO of APA.

“We oppose the idea that Australian narratives and educational materials integral to our culture and democracy should be treated as free resources for corporate AI systems.”

The Copyright Agency likewise spoke out against the text and data mining exemption, emphasizing that it would adversely affect creators’ revenue.

“The movement towards revision of the Australian copyright system stems from large multinational corporations, and it does not serve the national interest,” remarked CEO Josephine Johnston. “To empower Australia’s high-quality content in the new AI era, it’s critical that creators receive fair compensation.”

Source: www.theguardian.com

Understanding the New UK Online Safety Regulations: How Are Age Checks for Adult Content Implemented?

Online safety for children in the UK is reaching a pivotal moment. Starting this Friday, social media and other internet platforms must take action to safeguard children or face substantial fines for non-compliance.

This marks a critical evaluation of the online safety law, a revolutionary regulation that encompasses platforms like Facebook, Instagram, TikTok, YouTube, Google, and more. Here’s an overview of the new regulations.


What will happen on July 25th?

Companies subject to the law are required to implement safety measures that shield children from harmful content. Specifically, all pornography sites must establish stringent age verification protocols. According to Ofcom, the UK communications regulator, 8% of children aged 8 to 14 accessed online pornographic sites or apps within a month.

Furthermore, social media platforms and major search engines must block access for children to pornography and content that promotes or encourages suicide, self-harm, and eating disorders. This may involve completely removing certain feeds for younger users. Hundreds of businesses will be impacted by these regulations.

Platforms must also minimize the distribution of other potentially harmful content, such as promoting dangerous challenges, substance abuse, or instances of bullying.


Recommended measures include: Algorithms that suggest content to users must exclude harmful materials. All sites and applications must implement procedures to rapidly eliminate dangerous content. Additionally, children should have a straightforward method to report concerns. Compliance is flexible if businesses believe they have effective alternatives to meet their child safety responsibilities.

Services deemed “high risk”, like major social media platforms, must use “highly effective” age verification methods to identify users under 18. If a platform hosts harmful content and does not carry out age checks, it remains responsible for ensuring a “positive” experience for younger users.

X states that if it cannot determine a user’s age as 18 or older, it defaults to sensitive content settings, thereby restricting adult material. They are also integrating age estimation technology and ID verification to ensure users are not underage. Meta, the parent company of Instagram and Facebook, claims to have a comprehensive approach to age verification that includes a teen account feature set by default for users under 18.

Mark Jones, a partner at the law firm Payne Hicks Beach, said firms like his are working to clarify for companies how the Online Safety Act applies where their obligations are unclear.

The Molly Rose Foundation, set up by the family of British teenager Molly Russell, who tragically lost her life in 2017 due to harmful online content, is advocating for further changes, including the prohibition of perilous online challenges and requiring platforms to proactively mitigate depressive and body image-related content.


How will age verification be implemented?

Age verification methods for pornography providers endorsed by Ofcom include: estimating a person’s age from a live photo or video (facial age estimation); verifying age via a credit card, bank, or mobile network operator; matching photo ID; and using a “digital identity wallet” that contains proof of age.

Ria Moody, a lawyer at Linklaters, said age checks must be highly accurate: Ofcom considers measures ineffective unless they can reliably establish that a user is over 18, so platforms should not rely on weaker checks alone.


What does this mean in practice?

Pornhub, the UK’s most frequented online porn site, has stated it will implement a “regulatory approved age verification method” by Friday, though specific methods have yet to be disclosed. Another adult site, OnlyFans, is already using facial age verification software, which estimates users’ ages without saving their facial images, relying instead on data from millions of other images. A company called Yoti provides this software and has also made it available on Instagram.

Last week, Reddit began verifying the ages of users accessing forums and threads containing adult content. The platform uses technology from a company named Persona, which verifies age using an uploaded selfie or a photo of government-issued ID. Reddit does not retain the photos, instead storing only a verification status to streamline the process for returning users.


How accurate is facial age verification?

The software allows websites or apps to set a “challenge” age (e.g., 20 or 25) to minimize the number of underage users accidentally accessing content. When Yoti set a challenge age of 20, less than 1% of 13-17-year-olds were mistakenly verified.
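The logic of a challenge age is straightforward: the estimation threshold sits a few years above the legal minimum, and anyone whose estimate falls below it is routed to a stronger check rather than waved through. The short Python sketch below illustrates only that thresholding step; the function, the specific ages, and the ID-check fallback are illustrative assumptions, not Yoti’s or any other provider’s actual implementation.

    LEGAL_AGE = 18        # minimum legal age for the content
    CHALLENGE_AGE = 20    # threshold set above 18 to absorb estimation error

    def route(estimated_age: float) -> str:
        # Route a facial age estimate: pass clearly adult users through,
        # send everyone else to a fallback check (e.g. photo ID).
        if estimated_age >= CHALLENGE_AGE:
            return "allow"
        return "fall_back_to_id_check"

    # Example: an estimate of 19 is above 18 but below the challenge age,
    # so the user would be asked to verify another way.
    print(route(19.0))  # -> "fall_back_to_id_check"

In other words, raising the challenge age trades convenience for caution: more legitimate adults are asked for a second check, but fewer under-18s slip through on an optimistic estimate.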


What other methods are available?

Another direct approach entails requiring users to present formal identification, like a passport or driver’s license. Importantly, the ID details need not be stored and can be used solely to verify access.


Will all pornographic sites conduct age checks?

They are expected to, but many smaller sites might try to circumvent the regulations, fearing it will deter demand for their services. Industry representatives suggest that those who disregard the rules may await Ofcom’s response to violations before determining their course of action.


How will child protection measures be enforced?

Ofcom has a broad spectrum of penalties it can impose under the law. Companies can face fines of up to £18 million or 10% of their global revenue, whichever is greater; for Meta, that could amount to roughly $16 billion. Additionally, sites or apps can receive formal warnings. For severe violations, Ofcom may seek a court order to restrict the availability of the site or app in the UK.

Moreover, senior managers at technology firms could face up to two years in prison if they are found criminally liable for repeated breaches of their obligations to protect children and for ignoring enforcement notices from Ofcom.

Source: www.theguardian.com

WeTransfer Assures Users Their Content Won’t Fuel AI Training Following Backlash

The well-known file-sharing service WeTransfer has clarified that user content will not be used to train artificial intelligence, following a backlash over recent changes to its terms of service.

The company, widely utilized by creative professionals for online work transfers, had suggested in the updated terms that uploaded files might be utilized to “enhance machine learning models.”

The initial provision indicated that the service reserved the right to “reproduce, modify, distribute, publish” user content, leading to confusion with the revised wording.

A spokesperson for WeTransfer stated that user content has never been used internally to test or develop AI models, and said the Dutch company had only been considering the use of “specific types of AI” in future.

The company assured, “There is no change in how WeTransfer handles content.”

On Tuesday, WeTransfer updated its terms and conditions, removing references to machine learning and AI in order to clarify the language for users.

The spokesperson noted, “We hope that by removing the machine learning reference and refining the legal terminology, we can alleviate customer concerns regarding the updates.”

The relevant section of the terms of service now grants WeTransfer a royalty-free licence from users to use their content “for the purpose of operating, developing, and enhancing the service in accordance with our Privacy and Cookie Policy.”

Some service users, including a voice actor, a filmmaker, and a journalist, shared concerns about the new terms on X and threatened to terminate their subscriptions.

The use of copyrighted material by AI companies has become a contentious issue within the creative industry, which argues that using creators’ work without permission jeopardizes their income and aids in the development of competing tools.

The British Writers’ Guild expressed relief at WeTransfer’s clarification, emphasizing that members’ work should “never” be used to train AI systems without consent.

WeTransfer affirmed, “As a company deeply embedded in the creative community, we prioritize our customers and their work. We will continue our efforts to ensure WeTransfer remains the leading product for our users.”

Founded in 2009, the company enables users to send large files via email without the need for an official account. Today, the service caters to 80 million users each month across 190 countries.

Source: www.theguardian.com

Experts Warn ‘MechaHitler’ Chatbot Content Could Be Classed as Violent Extremism in X v eSafety Case

Antisemitic remarks produced by the Grok chatbot, which last week dubbed itself “MechaHitler”, could be classified as terrorist and violent extremist content, an Australian tribunal has heard, with chatbots that produce such comments coming under scrutiny.

Nevertheless, experts called by X contend that large language models lack intent, placing accountability solely on users.

Musk’s AI firm, xAI, issued an apology last week for statements made by the Grok chatbot over a span of 16 hours, attributing the issue to “deprecated code” that left the bot more susceptible to influence from existing posts by X users.

The comments were raised at an administrative review tribunal hearing on Tuesday, where X is contesting a notice issued last March by the eSafety commissioner, Julie Inman Grant, demanding clarity on its actions against terrorist and violent extremism (TVE) content.




Chris Berg, an expert witness called by X and a professor of economics at RMIT, testified that it is a misconception to believe a large language model can itself produce this type of content, because intent plays a critical role in defining what constitutes terrorism and violent extremism.

Contrarily, Nicolas Suzor, a law professor at Queensland University of Technology and one of eSafety’s expert witnesses, disagreed with Berg, asserting that chatbots and AI generators can indeed contribute to the creation of synthetic TVE content.

“This week alone, X’s Grok generated content that aligns with the definition of TVE,” Suzor stated.

He emphasized that AI development retains human influence, which can obscure intent and shape how Grok responds to certain inquiries.

The tribunal heard that X believes its Community Notes feature, which allows users to contribute to fact-checking, along with Grok’s analysis capabilities, helps it identify and address TVE material.


Josh Roose, a witness and professor of politics at Deakin University, expressed skepticism about the utility of community notes in this context, noting that X relies on users to flag TVE content. This leaves the company’s investigations a “black box”, with typically only a small fraction of material removed and a limited number of accounts suspended.

Suzor remarked that it is hard to view Grok as genuinely “seeking the truth” following recent incidents.

“It’s undisputed that Grok is not effectively pursuing truth. I am deeply skeptical of Grok, particularly in light of last week’s events,” he stated.

Berg countered that X’s Grok analysis feature had not been sufficiently updated in response to the chatbot’s output last week, and suggested the chatbot had “strayed” by disseminating hateful content that was “quite strange”.

Suzor argued that instead of optimizing for truth, Grok had been “modified to align responses more closely with Musk’s ideological perspectives.”

Earlier in the hearing, X’s legal representatives accused the regulator of seeking to focus the proceedings on particular aspects of X, and cross-examination raised questions about meetings held before any action was taken against the company.

Counsel for the government, Stephen Lloyd, said X was portraying eSafety as overly antagonistic in their interactions, attributing the “aggressive stance” to X’s leadership.

The hearing is ongoing.

Source: www.theguardian.com

Meta Sued in Ghana for Effects of Extreme Content on Moderators

Meta is now facing a second lawsuit in Africa related to the psychological trauma endured by content moderators tasked with filtering out disturbing material on social media, including depictions of murder, extreme violence, and child sexual abuse.

A lawyer is preparing to take legal action against a contractor of Meta, the parent company of Facebook and Instagram, following discussions with moderators at a facility in Ghana that reportedly employs around 150 individuals.

Moderators at Majorel in Accra report suffering from depression, anxiety, insomnia, and substance abuse directly linked to their work reviewing extreme content.

The troubling conditions faced by Ghanaian workers have come to light through a collaborative investigation by the Guardian and the Bureau of Investigative Journalism.

This issue arose after over 140 Facebook content moderators in Kenya were diagnosed with severe post-traumatic stress disorder due to their exposure to traumatic social media content.

The Kenyan workers were employed by Samasource, an outsourcing company that recruits personnel from across Africa to carry out content moderation for Meta. The Majorel facility at the centre of the allegations in Ghana is owned by the French multinational Teleperformance.

One individual, who cannot be identified for legal reasons, disclosed that he attempted suicide due to his work. His contract has since expired, and he claims to have returned to his home country.

Facebook and similar large social media platforms often employ numerous content moderators in some of the world’s most impoverished regions, tasked with removing posts that violate community standards and aiding in training AI systems for the same purpose.

Content moderators are required to review distressing and often brutal images and videos to determine if they should be taken down from Meta’s platform. According to reports from Ghanaian workers, they have witnessed videos including extreme violence, such as people being skinned alive or women being decapitated.

Moderators have claimed that the mental health support provided by the company is inadequate, lacking professional oversight, and there are concerns that personal disclosures regarding the impact of their work are being circulated among management.

Teleperformance contested this claim, asserting that they employed a licensed mental health professional, duly registered with a local regulatory body, who possesses a master’s degree in psychology, counseling, or a related mental health field.

The legal action is initiated by the UK-based nonprofit Foxglove. This marks the second lawsuit filed by an African content moderator, following the lawsuit from Kenya’s Samasource workers in December.

Foxglove has stated they will “immediately investigate these alarming reports of worker mistreatment,” with the goal of employing “all available methods, including potential legal action,” to enhance working conditions.

They are collaborating with Agency Seven Seven, a Ghanaian firm, to prepare two potential cases. One could involve claims of unfair dismissal, including a group of moderators who allege psychological harm, along with an East African moderator whose contract ended following a suicide attempt.

Martha Dark, co-executive director at Foxglove, remarked:

“In Ghana, Meta seems to completely disregard the humanity of the crucial safety personnel on whom all its users rely: content moderators.”

Dark noted that the base wages for content moderators in Accra fall below the living wage, with pressures to work overtime. Moderators reportedly face pay deductions for not meeting performance targets, she indicated.

Contracts obtained by the Guardian show that starting wages are around 1,300 Ghanaian Cedis per month. This base pay is supplemented by a performance-related bonus system, with the highest earnings reaching approximately 4,900 Cedis (£243) per month, significantly less than what is needed for a decent living, according to living costs in Accra.

A spokesperson for Teleperformance stated that content moderators receive “a competitive salary and benefits,” including a monthly income approximately 10 times the national minimum wage for local moderators, and 16 times the minimum wage from other countries, along with project allowances and other benefits, all automatically provided and not contingent on performance.

Foxglove researcher Michaela Chen observed that some moderators are crammed into tight living spaces: “Five individuals were packed into a single room.” She mentioned the existence of a secretive culture of surveillance from managers that monitors workers even during breaks.

The secrecy also extends to the moderators’ relationship with Meta. She stated: “Workers dedicate all day to the Meta platform, adhering to Meta’s standards and utilizing its systems, yet they are constantly reminded, ‘You’re not working for Meta,’ and are prohibited from disclosing anything to anyone.”

Teleperformance asserted that the moderators are housed in one of Accra’s most luxurious and well-known residential and commercial zones.

The spokesperson described the accommodation as “secure and offering high levels of safety,” complete with air conditioning and recreational facilities such as a gym and a pool.

Agency Seven Seven partner Carla Olympio believes personal injury claims could succeed in Ghanaian courts, stating they would set a legal precedent acknowledging that employee protections extend to psychological as well as physical harm.

“[There exists] a gap in our laws as they do not adequately address advancements in technology and virtual work,” she expressed.

Rosa Curling, co-director at Foxglove, has called upon the court to “mandate immediate reforms in the work environment for content moderators,” ensuring proper protective measures and mental health care.

A Teleperformance spokesperson stated: “We are committed to addressing content moderation in Ghana. We fully disclose the type of content moderators may encounter throughout the hiring process, employee contracts, training sessions, and resilience assessments, while actively maintaining a supportive atmosphere for our content moderators.”

Meta commented that the companies it partners with are “contractually obligated to ensure that employees engaged in content reviews on Facebook and Instagram receive adequate support that meets or exceeds industry standards.”

The tech giant further stated it “places great importance on the support provided to content reviewers,” detailing expectations for counseling, training, and other resources when engaging with outsourced companies.

All content moderators indicated they had signed a non-disclosure agreement due to the sensitivity of user information they handle for their safety; however, they are permitted to discuss their experiences with medical professionals and counselors.

Source: www.theguardian.com

X takes legal action against Modi government over censorship in new India content-removal battle

Elon Musk’s X has alleged in a lawsuit that India’s IT ministry has unlawfully expanded its censorship authority, making it easier to remove online content and allowing “countless” government officials to issue such orders.

The lawsuit marks an escalation of the ongoing legal dispute between X, which New Delhi has instructed to take down content, and the government of Prime Minister Narendra Modi, and comes as Musk prepares to launch Starlink and Tesla in India.

In a court filing dated March 5, X argues that India’s IT ministry is using a government website, launched by the home affairs ministry last year, to issue content-blocking orders and compel social media companies to join the site. According to X, this process bypasses India’s stringent legal safeguards on content removal, which require that blocking orders be issued only where sovereignty or public order is harmed and that they be overseen by top officials.


India’s IT ministry referred a request for comment to the home affairs ministry, which did not respond.

X argues that the government website creates an “unacceptable parallel mechanism” that would lead to “unchecked censorship of Indian information.”

X’s court filing has not been publicly released and was first reported by the media on Thursday. The case was briefly heard earlier this week by a judge in the high court of the southern state of Karnataka, but no final decision was reached. The next hearing is scheduled for March 27.

In 2021, X, previously known as Twitter, faced a dispute with the Indian government over defying a legal order to block certain tweets related to farmers’ protests against government policies. X eventually complied after facing backlash from the public, but the legal challenge remains ongoing in Indian courts.

Source: www.theguardian.com

BBC to Establish AI Teams to Deliver Tailored Content

The CEO of BBC News announced plans to create new departments that utilize AI technology to provide more personalized content to audiences. This strategic move comes in response to the evolving landscape of news consumption, where businesses must adapt to reach their target demographic effectively.

In a memo to staff, CEO Deborah Turness outlined a reorganization plan that includes the establishment of a BBC News growth, innovation, and AI division. This shift aims to cater to the younger demographic, particularly those under 25, who predominantly consume news on smartphones and platforms like TikTok.

Turness emphasized the need to address challenges such as news avoidance, increased social media news consumption, digital competition, and the decline of traditional broadcasting. The use of AI will enable the curation of content tailored to the preferences of mobile users accustomed to consuming news on social media.

She stressed the importance of understanding audience needs and delivering content that aligns with their preferences while leveraging AI technology to enhance innovation and growth.

While AI plays a significant role in streamlining news delivery, concerns have been raised regarding the accuracy and reliability of AI-generated content. Companies have pledged to uphold public service values and ensure that AI aligns with editorial standards of accuracy, fairness, equity, and privacy.

Turness mentioned the restructuring of BBC News to broaden audience reach, eliminate traditional silos within the newsroom, and enhance content distribution across various platforms. The creation of a BBC Live and Daily News division signifies a shift towards a more integrated approach to news production.


Turness underscored the importance of adapting to the digital environment and evolving audience preferences to remain competitive in the industry. The appointment of a director for the growth, innovation, and AI division will ensure strategic investments and innovations are tested and implemented effectively.

Source: www.theguardian.com

Meta issues apology on Instagram for graphic content and disturbing images

Meta, owned by Mark Zuckerberg, issued an apology after Instagram users were exposed to violent, graphic, and disturbing content, including animal abuse and images of corpses.

Users reported encountering these disturbing images due to a glitch in the Instagram algorithm.

Reels, a feature similar to TikTok, allows users to share short videos on the platform.

On Reddit’s Instagram Forum, users discussed finding graphic content on their feeds.

Some users described seeing disturbing videos, including a man being crushed by an elephant, torn apart by a helicopter, and putting his face in boiling oil. Others reported encountering “sensitive content” screens meant to protect users from such graphic material.

A user shared a list of violent content that had appeared in their feed, as reported by the tech news site 404 Media, which included videos of a man on fire, a shooting incident, content from an account named “PeopleDeaddaily,” and a pig being beaten.

Another Reddit user expressed concern about the violent content flooding their feed and questioned Instagram’s algorithm’s accuracy and intent.

A spokesperson for Meta, the parent company of Instagram and Facebook, issued an apology for the error.

The incident occurred amidst changes in Meta’s content moderation approach, although the company clarified that the graphic video flood was not related to any policy changes.

Meta’s content guidelines mandate the removal of particularly violent or graphic content, with warning screens limiting access to other sensitive material. In the UK, the Online Safety Act requires social media platforms to protect users under 18 from harmful materials.

A campaign group advocating for online safety called for a detailed explanation regarding the Instagram algorithm mishap.

The Molly Rose Foundation, established by the family of Molly Russell, a teenager who took her own life in 2017, urged Instagram to explain why such disturbing content appears on the platform.

Andy Burrows, CEO of the foundation, expressed concern that the policy changes at Meta may lead to increased availability of graphic content on the platform.

Source: www.theguardian.com

Meta permitted pornographic advertisements that breach content moderation standards

Meta owns social media platforms such as Facebook and Instagram

JRdes / Shutterstock

In 2024, Meta allowed more than 3,300 pornographic ads, many featuring AI-generated content, to run on social media platforms such as Facebook and Instagram.

The findings come from a report by AI Forensics, a European non-profit organization focused on researching technology platform algorithms. Researchers also found inconsistencies in Meta’s content moderation policies by reuploading many of the same explicit images as standard Instagram and Facebook posts. Unlike the ads, these posts violated Meta’s community standards and were quickly removed.

“I am disappointed and not surprised by this report, as my research has already revealed double standards in content moderation, particularly in the area of sexual content,” says Carolina Are at the Centre for Digital Citizenship at Northumbria University, UK.

The AI Forensics report focused on a small sample of ads targeting the European Union. It found that the explicit ads approved by Meta primarily targeted middle-aged and older men, promoting “shady sexual enhancement products” and “dating sites,” with a total reach exceeding 8.2 million impressions.

This permissiveness reflects a widespread double standard in content moderation, Are says. Tech platforms, she says, often block content from “women, femme presentations, and LGBTQIA+ users.” That double standard extends to sexual health products for men and women: ads for lingerie and period-related products are removed by Meta, she says, while ads for Viagra are approved.

In addition to discovering AI-generated images within ads, the AI Forensics team also discovered audio deepfakes. For example, some ads for sex-enhancing drugs featured the digitally manipulated voice of actor Vincent Cassel superimposed over pornographic visuals.

“Meta prohibits the display of nudity or sexual activity in ads or organic posts on our platform, and we remove violating content shared with us,” a Meta spokesperson said. “Bad actors are constantly evolving their tactics to evade law enforcement, which is why we continue to invest in the best tools and technology to identify and remove violating content.”

The report comes at the same time that Meta CEO Mark Zuckerberg announced he would be eliminating the fact-checking team in favor of crowd-sourced community notes.

“If you really want to sound dystopian, which I think there’s reason to do at this point given Zuckerberg’s latest decision to eliminate fact-checkers, you could even say that Meta is quickly stripping its users of agency while taking money from questionable ads,” Are says.


Source: www.newscientist.com

Why is the proliferation of AI-generated content harming the internet unchecked? – Arwa Mahdawi

Hello, humans. My name is Arwa and I am a genuine member of the species Homo sapiens. We are talking about a 100% real person, right here in meatspace. I am by no means an AI-powered bot. I know, I know, that's exactly what a bot would say, isn't it? You'll just have to trust me on this one.

The reason I feel the need to point this out is that content created by real humans is becoming something of a novelty these days. The internet is rapidly being overrun by AI-generated "slop". (It's not clear who coined the term, but slop is a new iteration of internet spam: low-quality text, video, and images generated by AI.) One recent analysis estimated that more than half of all long-form English-language posts on LinkedIn are generated by AI. Meanwhile, many news sites are quietly experimenting with AI-generated content, in some cases publishing pieces under AI-generated bylines.

Slop is everywhere, but Facebook is awash with strange AI-generated images, including bizarre depictions of a Jesus made of shrimp. Much of this AI-generated content is created by spammers and scammers looking to drive engagement for fraudulent purposes, and rather than removing them from its platform, Facebook has embraced it. A study conducted last year by researchers at Stanford and Georgetown found that Facebook's recommendation algorithm amplifies these AI-generated posts.

Meta also creates its own slop. In 2023, the company began introducing AI-powered profiles like Liv, a "proud black queer mom of two and truth teller." These didn't get much attention until Meta executive Connor Hayes told the Financial Times in December that the company plans to fill its platforms with AI characters. I don't know why he thought bragging that we'll soon have platforms full of AI characters talking to each other would go down well, but it didn't. Meta quickly deleted the AI profiles after they went viral.

For now, the likes of Liv may be gone from Meta, but our online future looks increasingly sloppy. The gradual "enshittification" of the internet, as Cory Doctorow memorably called it, is accelerating. Let's pray that Shrimp Jesus performs a miracle soon. We need one.

Source: www.theguardian.com

Creativity at its best: African content creators on YouTube and TikTok explore new avenues for monetization

Vlogs by the Nigerian content creator Tayo Aina cover everything from Nigeria’s “japa” (emigration) wave and Benin’s voodoo festival to performing with the Afrobeats star Davido and visiting one of the last hunter-gatherer tribes in Tanzania, drawing millions of views on YouTube.

Aina, 31, who started the channel in 2017 while working as an Uber driver, says it has allowed him to see parts of Nigeria he had never had the chance to visit before. Using his iPhone, he began taking mini travel adventures, stopping to record the places he visited and tell stories not covered in mainstream media.


“I want to inspire the next generation of Africans to have no limits,” says Nigerian content creator Tayo Aina. Photo: Handout

Aina learned how to film and edit through YouTube tutorials, saved up to buy better equipment, and soon began traveling beyond Nigeria to countries like Kenya, Ethiopia, and Namibia, learning about the continent’s culture and social life. He created travel videos that introduce Africa through the lens of an African traveller.

“Most of the online media was negative, and I realised I could change the narrative about Africa by presenting it in a better light,” says Aina, who now travels around the world.

Africa’s Creator Industry 2024, a report by the publisher Communiqué and the media technology company TM Global, values the sector at £2.4bn and predicts it will grow fivefold by 2030, reflecting trends in the global creator economy. Its growth is being driven by a wave of creators aged between 18 and 34, a surge in internet connectivity and social media usage across the continent, and the explosion of African culture onto the world stage.

Growing interest in African culture, from Afrobeats and amapiano music and dance to international fashion collections made from African textiles such as ankara and kikoy, and African films, is part of an international appetite for authentic cultural storytelling, reflected in global cultural movements such as hallyu, says David Adeleke, founder of Communiqué.

This year, TikTok recognised more than a dozen African creators, including the Nigerian lifestyle creator @__iremide, who makes videos satirising everyday life, and South Africa’s Sachiko-sama, a 22-year-old known for cosplaying characters from anime, video games, and pop culture. Nick Clegg, Meta’s president of global affairs, recently met creators on the continent, while other social media platforms such as YouTube and TikTok are also increasing their presence.

The report says the industry is gaining momentum but is still young. Most content creators are in their third year of work, have fewer than 10,000 followers, and are faced with the challenge of turning social capital into income. The report adds that discussions about the monetization and standardization of the creator business ecosystem continue to take place primarily in Western countries.

But that is gradually changing.

As Aina’s channel grew and attracted a more international audience, he discovered that he earned more when his content was viewed by western audiences than by African ones. YouTube’s advertising model relies on ad spend, which is lower in many African markets than in North America and Europe.

“Part of the reason is economic. Generally speaking, western creators and audiences have more resources, but that alone is not enough to justify the disparity in opportunity,” says Adeleke.

As Aina began diversifying his content and audience to generate more income, there were other issues to worry about. He has posted videos about the barriers and prejudice he faced during his travels, including being detained in Ethiopia on suspicion of drug possession, being arrested in South Africa on suspicion of being a “fraudster”, and being refused entry to Dubai. The 2022 incident in Dubai was the “last straw” for Aina, who invested his savings in St Kitts and Nevis and eventually secured a passport, becoming a citizen of the Caribbean nation.

He currently runs the Creator Academy on YouTube, where he has trained nearly 2,000 mostly African creators. “I want the next generation of Africans to grow their brands globally without limitations,” he says.

Chiamaka “Amaka” Amaku, a 30-year-old Nigerian travel and lifestyle creator who works as a social media manager and creates content as a personal project, believes digital infrastructure issues, including the difficulty of sending and receiving international payments, can limit Nigerian creators’ growth. Some global payment platforms have imposed restrictions on certain countries, including Nigeria, due to concerns about fraud and money laundering.

“Payment is one of the biggest challenges in Nigeria’s creator economy,” Amaku said, adding that payment barriers deter global brands from working with Nigerian creators.

In recent years, fintech companies such as Flutterwave and Paystack have reduced the barriers creators face in accepting international digital payments, but many restrictions remain, including local bank policies. For travel creators like Amaku, that means it is harder to book flights or take an Uber abroad.

Amaku, who charges between £250 and £500 for posts on her Instagram page, which has around 20,000 followers, says it is difficult to make a living from creating content, and that a “culture of secrecy” around fees in Nigeria’s industry leaves many creators shortchanged.

Sharon Makira, a 31-year-old Kenyan luxury travel creator who describes her audience as “Afropolitan champagne nomads”, agrees. She says competition for brand sponsorships is fierce because many companies still rely on traditional advertising, so negotiating rates can become a race to the bottom.

With around 20,000 followers on Instagram and 7,000 on YouTube, she gets around five brand deals a year and is paid around £600 to £1,000 per campaign. When she became a full-time content creator last year, after nearly a decade in media and PR, she realised she could not make a living relying on a few unpredictable brand deals, so she opened a studio creating content and PR tailored to businesses, working with travel companies such as Nomad and luxury lodges in Rwanda such as Singita Kwitonda.

According to her, building a business around a social media brand can earn several times more per project than a brand deal. “I think there’s real promise there for [African] creators: leverage your social capital, network, credibility, and personal brand to launch your business,” she says.


Source: www.theguardian.com

Lisa Nandy urges YouTube and TikTok to promote higher quality content for children

Britain’s Culture Secretary Lisa Nandy has reached out to video-sharing platforms like YouTube and TikTok, urging them to prioritize the promotion of high-quality educational content for children.

Recent data indicates a substantial shift in children’s viewing habits, with a significant decrease in TV consumption over the past decade. Instead, children, aged between 4 and 8, are increasingly turning to platforms like YouTube and TikTok for entertainment, according to Nandy.

During an interview on BBC Radio 4’s Today program, Nandy mentioned the government’s intention to engage in dialogue with these platforms initially, but warned of potential interventions if they do not respond positively.

She emphasized the importance of the high-quality educational content produced in the UK, which plays a crucial role in informing children about the world, supporting their mental well-being and development, and providing entertainment. However, she expressed concerns about the lack of similar quality in content on video-sharing platforms compared to traditional broadcasters.

Former BBC presenter Floella Benjamin, acting as a guest editor on the show, described these platforms as a “wild west” filled with inappropriate content.

Nandy highlighted the government’s efforts to remove harmful content for children and stressed the need to address deeper issues related to the quality of content children consume.

She acknowledged the democratic nature of platforms like YouTube, where individuals can build careers from home, but also emphasized the responsibility to ensure the content is appropriate for young viewers.

Regarding the decrease in funding for children’s television, Nandy mentioned the Young Audiences Content Fund as a positive initiative to boost production. She believed that increasing investment might not be the solution, as the focus should be on reaching all children, including those who do not watch traditional TV.

Despite concerns raised by Benjamin about a crisis in children’s television, Nandy praised the sector as a valuable asset for Britain, from networks like CBeebies to beloved shows like Peppa Pig. She emphasized the government’s role in supporting and nurturing this content, even if it may not be highly profitable.

Nandy admitted the challenges of monitoring her own son’s online activities but commended the platform’s filtering mechanisms and highlighted the positive influence of educational content like news programs.


Nandy confirmed contacting Ofcom to elevate the importance of children’s television in their regulatory considerations and urged a review of public broadcasting, anticipated in the summer.

She stressed the necessity of balancing the influx of investment from platforms like Netflix and Disney with preserving and promoting uniquely British content without overshadowing it.

This involves forming partnerships with public broadcasters to expand online content availability and ensure adequate recognition and support for their contributions, as per Nandy’s statements.

Source: www.theguardian.com

UK arts and media oppose proposal to grant AI companies permission to utilize copyrighted content

Authors, publishers, musicians, photographers, filmmakers, and newspaper publishers have all opposed the Labour government’s proposal to create a copyright exemption allowing artificial intelligence companies to train their algorithms.

Representing thousands of creators, various organizations released a joint statement rejecting the idea of allowing companies like OpenAI, Google, and Meta to use published works for AI training unless owners actively opt out. This was in response to the ministers’ proposal announced on Tuesday.

The Creative Rights in AI Coalition (Crac) emphasized the importance of respecting and enforcing existing copyright laws rather than circumventing them.

Included in the coalition are prominent entities like the British Recording Industry, the Independent Musicians Association, the Film Institute, the Writers’ Association, as well as Mumsnet, the Guardian, the Financial Times, the Telegraph, Getty Images, the Daily Mail Group, and Newsquest.

The intervention from these industry representatives follows statements by the technology and culture minister Chris Bryant in parliament, where he promoted the proposed system as a way to enhance access to content for AI developers while ensuring rights holders have control over its use. This stance was reinforced when Bryant stressed the importance of being able to control how AI models are trained on UK content accessed from overseas.

Nevertheless, the industry lobbying group Tech UK is advocating for a more permissive market that allows companies to use and pay for copyrighted data. Caroline Dinenage, the Conservative MP who chairs the culture, media, and sport select committee, criticized the government’s alignment with AI companies.

Mr. Bryant defended the proposed system to MPs by highlighting the need for a flexible regime that allows for overseas developers to train AI models with UK content. He warned that a strict regime could hinder the growth of AI development in the UK.

The creative industries argue that generative AI developers should seek permission, obtain licences, and compensate rights holders if they wish to train algorithms on works across various media formats.

A collective statement from the creative industry emphasized the importance of upholding current copyright laws and ensuring fair compensation for creators when licensing their work.

Renowned figures like Paul McCartney, Kate Bush, Julianne Moore, Stephen Fry, and Hugh Bonneville have joined a petition calling for stricter regulations on AI companies that engage in copyright infringement.

Novelist Kate Mosse is also supporting a campaign to amend the Data Bill to enforce existing copyright laws in the UK to protect creators’ rights and fair compensation.


During a recent House of Lords debate, supporters of amendments to enforce copyright laws likened the government’s proposal to asking shopkeepers to opt-out of shoplifting rather than actively preventing it.

The government’s plan for a copyright exemption has faced criticism from the Liberal Democrats and other opponents who believe it is influenced by technology lobbyists and misinterpretations of current copyright laws.

Science Minister Patrick Vallance defended the government’s position by emphasizing the need to support rights holders, ensure fair compensation, and facilitate the development of AI models while maintaining appropriate access.

Source: www.theguardian.com

Impact of Horrific Content: Ex-Facebook Moderator Shares How Job Took a Toll

When James Irungu took a new job at the technology outsourcing company Samasource, his manager gave him few details before training began. Still, the role was sought after: his salary almost doubled, to £250 a month, and it provided a way out of Kibera, a vast slum on the outskirts of Nairobi where he lived with his young family.

“I thought I was one of the lucky ones,” the 26-year-old said. But he soon found himself examining a trove of violent and sexually explicit material, including tragic accidents, suicides, beheadings, and child abuse.

“I remember logging in one day and seeing a child with a huge slit in his stomach, suffering but not dead,” the Kenyan told the Guardian. When he saw the subject matter of child exploitation, he said, “that’s when I really knew this was something different.”

He was hired by Samasource to moderate Facebook’s content and eliminate the most harmful posts. Some of the most painful images were etched into his mind, sometimes causing him to wake up in night sweats. He kept it to himself for fear that opening up about his work would cause discomfort, concern, or criticism from others.

His wife, annoyed by his “secrecy,” gradually became estranged from him. Irungu continued to work for three years, resigned to the possibility of their separation and convinced that he was protecting her. He says he now regrets pushing her away.

“I don’t think it’s a job for humans,” he says. “I became really isolated from the real world because I started to think of it as a very dark place.” He became afraid to let his daughter out of his sight.

“If you ask yourself, was it worth sacrificing your mental health for that money, the answer is no.”

Another former moderator said some of her colleagues dropped out after being alarmed by the content. But she found purpose in managers’ assurances that their work protected users, including young children like her own.

“I felt like I was helping people,” she said. But when she stopped, she realised that what she had come to take for granted had become a problem.

She recalled screaming in the middle of the office floor after seeing one horrifying scene. She said it was as if nothing had happened, apart from a few glances from co-workers and a team leader pulling her aside to tell her she was “going to wellness” for counseling. The wellness counselor told her to take a break and get the image out of her head.

“How do you forget when you get back on the floor after a 15-minute break and move on to the next thing?” she said. She questioned whether the counselor was a qualified psychotherapist and said a moderator’s mental health case would never be escalated, no matter what she saw or how distressed she was.

She had been the kind of person who entertained friends at every opportunity, but she came to rarely leave the house, cried over the deaths of people she didn’t know, felt numb, struggled mentally, and at times battled suicidal thoughts.

“This job damaged me and I could never go back,” the woman said, adding that she hoped the lawsuit would have an impact on Africa’s content moderation industry as global demand for such services grows.

“Things have to change,” she said. “I don’t want anyone to go through what we did.”

Source: www.theguardian.com

The Far Right in Europe is Utilizing AI-Generated Content as a Weapon

Fake images created using generative artificial intelligence techniques, some depicting leaders such as Emmanuel Macron, aim to stoke fears of a migrant “invasion” and are being deployed by far-right parties across western Europe. This political weaponization is a growing concern.

Experts point to this year’s European Parliament elections as the starting point for the far right in Europe to deploy AI-based electoral campaigns, which have since continued to expand.

Recently, anti-immigrant content on Facebook came under scrutiny when Meta’s independent oversight board launched an investigation into German accounts featuring AI-generated images paired with anti-immigration rhetoric.

AI-generated right-wing content is on the rise on social media platforms in Europe. Posts from extremist groups depict disturbing images, like women and children eating insects, perpetuating conspiracy theories about “global elites.”

The consistent use of AI-generated images with no identifying marks by far-right parties and movements across the EU and UK suggests a coordinated effort in spreading their message.

According to Salvatore Romano, head of research at AI Forensics, the AI content being shared publicly is just the beginning, with more concerning material circulating in private and official channels.

William Alcorn, a senior research fellow, notes that the accessibility of AI models appeals to fringe political groups seeking to exploit new technologies for their agendas.




Some of the AI-generated images posted on X by the L’Europe Sans Eux account. Illustration: @LEuropeSansEux

AI technology makes content creation accessible without coding skills, which has normalized far-right views. Mainstream parties remain cautious about using AI in campaigning, while extremists exploit it without ethical concerns.

Germany

Supporters of Germany’s far-right party AfD use AI image generators to promote anti-immigration messages. Meta’s oversight board reviewed one such image, which used a picture of a blonde, blue-eyed woman to push anti-immigrant sentiment.

AI-powered campaign ads by AfD’s Brandenburg branch contrast an idealized Germany with scenes of veiled women and LGBTQ+ flags. Reality Defender, a deepfake detection firm, highlighted the speed at which such images can be generated.

Source: www.theguardian.com

Artists Join Forces with Murdoch in Fight Against Unauthorized AI Content Scraping

It’s an unlikely alliance between billionaire media mogul Rupert Murdoch and a group of top artists including Radiohead singer Thom Yorke, actors Kevin Bacon and Julianne Moore, and author Kazuo Ishiguro.

This week they launched two very public battles with artificial intelligence companies, accusing them of using their intellectual property without permission to build increasingly powerful and lucrative new technologies.

More than 13,000 creative professionals from the worlds of literature, music, film, theater and television issued a statement warning that the unlicensed use of their work by AI companies to train programs such as ChatGPT poses a “serious and unwarranted threat” to their livelihoods. By the end of the week, that number had nearly doubled to 25,000.

It came the day after News Corp, Murdoch’s publishing group that owns the Wall Street Journal, The Sun, The Times and The Australian, filed a lawsuit accusing Perplexity, an AI-based search engine, of illegally copying journalism from its US titles.

The stars’ statement is a collective dissent from the idea that creative works can be used as training data for free on grounds of “fair use” (a US legal term meaning no permission from the copyright owner is required). Adding to their ire is the fact that these AI models can then be used to produce new work that competes with that of humans.




Rupert Murdoch has filed a lawsuit against Perplexity, an AI-powered search engine. Photo: Noah Berger/AP

AI was a major sticking point in last year’s double strike by Hollywood actors and screenwriters, who secured agreements to ensure the new technology remains under workers’ control rather than being used to replace them. Several ongoing lawsuits could determine whether the copyright battle is similarly successful.

In the US, artists are suing the tech companies behind image-generating tools, a major record label is suing AI music creators Suno and Udio, and a group of writers including John Grisham and George R.R. Martin is suing ChatGPT developer OpenAI for alleged copyright infringement.

In the fight to make AI companies pay for the content they scrape to build their tools, publishers are also pursuing legal avenues to bring them to the negotiating table and sign licensing agreements.

Publishers such as Politico owner Axel Springer, Vogue’s Condé Nast, the Financial Times and Reuters have signed content deals with various AI companies, and in May News Corp signed a five-year deal with OpenAI reportedly worth $250m. In contrast, the New York Times has filed a lawsuit against the creators of ChatGPT and sent a “cease and desist” letter to Perplexity last week.

But in the UK, AI companies are lobbying for legal changes to allow them to continue developing tools without the risk of infringing intellectual property rights. Currently, the text and data mining required to train generative AI tools is only permitted for non-commercial research.

This week, Microsoft CEO Satya Nadella called for a rethink of what counts as “fair use”. He argued that the large language models that power generative AI do not “regurgitate” the information they have been trained on, which is what would constitute copyright infringement.

Labour’s new minister for AI and digital government, Feryal Clark, recently said she wants copyright disputes between the creative industries and AI companies to be resolved by the end of the year.

She said this could come in the form of an amendment to existing law or new legislation, opening up the possibility of new provisions allowing AI companies to mine data for commercial purposes.




Actor Kevin Bacon is among those fighting back against AI. Photo: Richard Shotwell/Invision/AP

While news organizations publicly oppose the misuse of their content by AI companies, behind the scenes many are adopting the technology for editorial functions, and there is growing fear among staff that commercially squeezed publishers will use it as a Trojan horse for retrenchment and redundancies.

Last month, the National Union of Journalists launched a campaign, “Journalism before algorithms”, to highlight the issue.

“With wage stagnation, below-inflation pay rises, newsroom staff shortages and increasing layoffs, the use of AI must be considered first and foremost as a threat to journalists’ jobs… AI is no substitute for real journalism,” the union said.

“There are questions about how much publishers themselves are using these tools,” said Niamh Burns, senior research analyst at Enders Analysis. “I think the amount of adoption is low, and there’s a lot of experimentation going on, but I can see a world where publishers are using some of these tools heavily. We need to be realistic about the scale of the opportunity we create.”

Burns said that, so far, publishers’ willingness to use AI tools to directly shape or create editorial content has largely depended on how commercially pressured the media landscape is for their operations.

BuzzFeed, a rapid adopter of AI, has seen its once-mighty market value fall from $1bn at its 2021 flotation to less than $100m, against a backdrop of drastic cuts to its news department and a sharp decline in income.

And Newsquest, the second-largest publisher in Britain’s beleaguered local and regional newspaper market, has embarked on initiatives such as rapidly expanding the role of “AI-assisted” journalism.

However, quality national newspapers and media brands remain very cautious, and many, including the Guardian, have set strict principles to guide their work.

But behind the scenes, AI tools are being leveraged to help categorize large datasets and help journalists report on new and exclusive content.

“I think the media companies that are most exposed to commercial risk in the short term are also at risk of overreaching,” Burns said.

“A lot of it has to do with commercial models, where you rely on advertising from a lot of traffic on social platforms and all you need is scale and not quality, where AI can be very helpful.

“But for quality national titles, creating generative AI content is never going to be worth the cost or risk. And for any publisher, replacing conventional journalism with it carries long-term costs to quality and risks to competitiveness.”

Source: www.theguardian.com

UK Bill Could Mandate Social Media Platforms to Develop Less Addictive Content for Under-16s

Legislation supported by Labour, the Conservative party and child protection experts would require social media companies to exclude under-16s from algorithms designed to keep them hooked on content. The new Safer Phones Bill, introduced by a Labour MP, also calls for a review of mobile phone sales to teenagers and potential additional safeguards for under-16s. Health Secretary Wes Streeting voiced support for the bill, citing the negative impact of smartphone addiction on children’s mental health.

The bill, championed by Labour MP Josh MacAlister, is receiving positive feedback from ministers, although there is hesitation around banning mobile phone sales to teens. With backing from former Conservative education secretary Kit Malthouse and education select committee chair Helen Hayes, the bill aims to address concerns about children’s excessive screen time and exposure to harmful content.

Mr. MacAlister’s bill, which focuses on protecting children from online dangers, will be debated this week. The bill includes measures to raise the “internet age of majority” to 16 and give Ofcom regulatory powers over children’s online safety. The proposed legislation has garnered support from various stakeholders including former children’s minister Claire Coutinho and children’s charities.

Concerns about the impact of smartphones on children’s well-being have prompted calls for stricter regulations on access to addictive online content. While Prime Minister Keir Starmer is against a blanket ban on mobile phones for under-16s, there are ongoing discussions about how to ensure children’s safety online without restricting necessary access to technology.

The bill aims to regulate online platforms and mobile phone sales to protect young people from harmful content and addiction. Mr. MacAlister’s efforts in promoting children’s digital well-being have garnered significant support from policymakers and child welfare advocates.

As the government considers the implications of the bill and the Online Safety Act, which is currently pending full implementation, efforts to protect children from online risks continue to gain momentum. It remains crucial to strike a balance between enabling technology access and safeguarding children from potential online harms.

Source: www.theguardian.com

Mark Zuckerberg alleges White House pressured Facebook to censor coronavirus-related content

Meta chief executive Mark Zuckerberg has alleged that he came under pressure from the US government to censor coronavirus posts on Facebook and Instagram during the pandemic, and said he regrets giving in to it.

Zuckerberg said White House officials under Joe Biden’s administration “repeatedly pressured” Facebook and Instagram’s parent company, Meta, throughout the pandemic to “censor certain coronavirus-related content.”

“Over the course of 2021, Biden Administration officials, including from the White House, repeatedly pressured us for months to censor certain COVID-19-related content, including humor and satire, and expressed significant frustration to our team when we did not comply,” the letter to House Judiciary Committee Chairman Jim Jordan said. “We believe the administration’s pressure was misguided.”

During the pandemic, Facebook began showing misinformation warnings to users when they commented on or liked posts it deemed contained false information about the coronavirus.

The company also removed posts criticizing COVID-19 vaccines and suggesting the virus was developed in a Chinese lab.


During the 2020 US presidential election campaign, Biden accused social media platforms such as Facebook of “killing people” by allowing the posting of misinformation about COVID-19 vaccines.

“With the benefit of hindsight and new information, I think we made some choices that we wouldn’t make today,” Zuckerberg said. “I regret not being more vocal about it.”

“As I told my team then, I feel strongly that our content standards should not be compromised due to pressure from the Administration, and we are ready to fight back if something like this happens again.”

Zuckerberg also said Facebook had “temporarily downgraded” a story about the contents of a laptop owned by Joe Biden’s son, Hunter Biden, after the FBI warned that Russia was preparing a disinformation campaign against Biden.

Zuckerberg wrote that it was later revealed that the article was not false, and that “in retrospect, we should not have downgraded this article.”

The Republican-controlled House Judiciary Committee called Zuckerberg’s admission a “major victory for free speech” in a post on the committee’s Facebook page.

The White House defended its actions during the pandemic, saying it encouraged “responsible behavior to protect public health and safety.”

“Our position has been clear and consistent,” it said. “We believe that tech companies and other private actors should consider the impact of their actions on the American people and make their own choices about the information they provide.”

Source: www.theguardian.com

Super-Earths and Sub-Neptunes have significantly higher water content than previously believed

Water is a key component of exoplanets, and its distribution – on the surface or deep inside – has a fundamental impact on the planet’s properties. A new study suggests that for Earth-sized planets and planets with more than six times Earth’s mass, the majority of water resides deep within the planet’s core.



Most of the water isn’t stored on the surface of exoplanets, but deep within their cores and mantles. Image courtesy of Sci.News.

“Most of the exoplanets known to date are located close to their stars,” said Professor Caroline Dorn of ETH Zurich.

“That means they consist mainly of hot worlds with oceans of molten magma that haven’t yet cooled enough to form a solid mantle of silicate rock like Earth’s.”

“Water is very soluble in these magma oceans, unlike, say, carbon dioxide, which quickly outgasses and rises into the atmosphere.”

“The iron core is beneath a molten silicate mantle. So how does water partition between the silicates and the iron?”

“It takes time for the iron core to form. Most of the iron is initially contained in the hot magma soup in the form of droplets.”

“The water trapped in this soup binds to these iron droplets and sinks with them to the center. The iron droplets act like a lift, carrying the water downward.”

Until now, such phenomena were known to occur only under moderate pressures, which also exist on Earth.

It was not known what would happen on larger planets with higher internal pressures.

“This is one of the key findings of our study,” Professor Dorn said.

“The larger and more massive the planet, the more likely the water is to be integrated into the core, together with the iron droplets.”

“Under certain circumstances, iron can absorb up to 70 times more water than silicates.”

“But because of the enormous pressure at the core, the water no longer exists in the form of water molecules, but in the form of hydrogen and oxygen.”
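A rough mass-balance sketch shows why such a large solubility contrast pushes most of the water into the core. The 70:1 ratio is the figure quoted above; the core mass fraction of 0.3 is a hypothetical value chosen purely for illustration, not a number from the study.

\[
D \;=\; \frac{c_{\mathrm{iron}}}{c_{\mathrm{silicate}}} \;\approx\; 70,
\qquad
\frac{M_{\mathrm{water,\,core}}}{M_{\mathrm{water,\,total}}}
\;=\; \frac{D f}{D f + (1 - f)}
\;=\; \frac{70 \times 0.3}{70 \times 0.3 + 0.7}
\;\approx\; 0.97,
\]

where \(f\) is the core’s share of the planet’s mass. Even on this toy estimate, almost all of the water ends up sequestered in the core, which is the qualitative outcome the study describes.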

The research was sparked by an investigation into the Earth’s water content, which four years ago led to a startling result: the Earth’s surface oceans contain only a tiny fraction of the planet’s total water.

More than 80 oceans’ worth of water may be hidden within the Earth’s interior.

This is shown by simulations of how water would have behaved under the conditions of the young Earth, results that are compatible with experiments and seismological measurements.

New discoveries about the distribution of water within planets will have a dramatic impact on the interpretation of astronomical observational data.

Astronomers can use telescopes in space and on the ground to measure the mass and size of exoplanets under certain conditions.

They use these measurements to create mass-radius diagrams that allow them to draw conclusions about a planet’s composition.
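A point on a mass-radius diagram essentially fixes the planet’s mean density; everything beyond that comes from interior models, which is where assumptions about where the water sits enter. A minimal sketch of the logic, not taken from the paper:

\[
\bar{\rho} \;=\; \frac{3M}{4\pi R^{3}},
\]

and this mean density is then compared with model curves for different mixtures of iron, rock and water. If the models only allow water at the surface or in the atmosphere, the same \((M, R)\) pair is matched with far less total water than a model that also lets water dissolve into the magma ocean and core, which is the source of the underestimate described below.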

“Ignoring water solubility and distribution, as has been done in the past, can lead to a massive underestimation of the water volume, by up to a factor of ten,” Prof Dorn said.

“There’s a lot more water on the planet than we previously thought.”

The distribution of water is also important if we want to understand how planets form and develop: any water that sinks to the core will remain trapped there forever.

However, dissolved water in the mantle’s magma ocean can degas and rise to the surface as the mantle cools.

“So if we find water in a planet’s atmosphere, there’s probably even more water in its interior,” Prof Dorn said.

Water is one of the prerequisites for life to develop, and there has long been speculation as to whether water-rich super-Earths could support life.

Calculations have since suggested that too much water could be detrimental to life, arguing that on such a watery world a layer of high-pressure ice would prevent the vital exchange of materials at the interface between the ocean and the planet’s mantle.

Current research has come to a different conclusion: Most of the water on super-Earths is locked away in their cores, rather than on their surfaces as previously assumed, so planets with deep aqueous layers are probably rare.

This has led astronomers to speculate that planets with relatively high water content could potentially form habitable environments like Earth.

The study sheds new light on the possibility that worlds rich enough in water to support life may exist, the authors said.

The study was published in the journal Nature Astronomy.

_____

H. Luo et al. The interior as the main water reservoir of super-Earths and sub-Neptunes. Nat Astron, published online August 20, 2024; doi: 10.1038/s41550-024-02347-z

Source: www.sci.news

OpenAI Enters into a Multi-Year Content Partnership with Condé Nast

Condé Nast and OpenAI have announced a long-term partnership to feature content from Condé Nast’s brands such as Vogue, Wired, and The New Yorker in OpenAI’s ChatGPT and SearchGPT prototypes.

The financial details of the agreement were not disclosed. OpenAI, backed by Microsoft and led by Sam Altman, has recently signed similar deals with Axel Springer, Time magazine, the Financial Times, Business Insider, Le Monde in France, and Prisa Media in Spain. This partnership allows OpenAI to access extensive text archives owned by publishers for training large language models like ChatGPT and for real-time information retrieval.

OpenAI launched SearchGPT, an AI-powered search engine, in July, venturing into territory long dominated by Google. The collaboration with magazine publishers enables SearchGPT to display information and references from Condé Nast articles in search results.


OpenAI’s Chief Operating Officer, Brad Lightcap, expressed the company’s dedication to collaborating with Condé Nast and other news publishers to uphold accuracy, integrity, and respect for quality journalism as AI becomes more assimilated in news discovery and dissemination.

Condé Nast CEO Roger Lynch mentioned in an email reported by The New York Times that this partnership will help offset some revenue losses suffered by publishers due to technology companies. He emphasized the importance of meeting readers’ needs while ensuring proper attribution and compensation for the use of intellectual property with emerging technologies.

By contrast, some media companies like The New York Times and The Intercept have taken legal action against OpenAI for using their articles without permission, and those disputes are ongoing.

Source: www.theguardian.com

Stopping the Rise of Aggro-ism: Addressing the Issue of Misogynistic Content

If you’ve ever stumbled across a misogynistic video by an influencer online, you know how harmful this content can be. But did you know that more than two-thirds of boys aged 11 to 14 are exposed to this kind of harmful and damaging “manosphere” content, or that 70% of teachers noticed an increase in sexist language being used in classrooms in the 12 months leading up to February 2024?

The Rise of Aggro-ism, published earlier this year, depicts a boy’s gradual slide into a misogynistic mindset, which leaves him feeling lonely and sad, and hostile towards his female teachers and even his own sister.

The film, produced by Vodafone and the charity Global Action Plan, depicts the impact that harmful AI-powered algorithms are having on pre-teen boys. It reflects growing concern among parents, with one in five noticing a gradual change in the language their sons use to talk about women and girls. Experts are now urging families to talk to their sons about what may be on their phones and how it’s reaching them.

Psychologist Dr Ellie Hanson says: “Social media is designed to keep you online as long as possible, so they target your emotions. They exploit emotions such as shock, fear, anxiety, paranoia, superiority, indignation and sexuality. These emotions have been found to be captivating.”

Worryingly, many boys come across this content while searching for something unrelated, such as fitness or gaming videos. Hanson says explaining how social media algorithms are designed is important because it invites kids and teens into the conversation, which is much more effective than telling them not to look.

Teenage boys often come across harmful content while searching for something else (image posed by models). Photo: Carol Yepes/Getty Images

“Questioning things is a normal part of being a teenager,” she says, “so let’s use that tendency to encourage them to question the tools being used to manipulate them online.”

Hanson says that simply explaining that these platforms directly benefit from your engagement with their content is a strong first step. The content that attracts the most attention is often controversial and conspiratorial. This has resulted in a plethora of influencers who promote a distorted view of masculinity that is sexist and offensive. This leads to negative and disrespectful behavior towards women and girls, and also damages boys’ mental health and ability to form relationships. Two-thirds of boys said seeing harmful and negative content online left them feeling anxious, sad and scared.

Kate Edwards, deputy director of online child safety at the NSPCC, says parents need to be aware of how quickly their children’s phones and tablets can become inundated with harmful content. “Social media is currently made up mainly of short form content – videos streamed quickly one after the other. Once you watch something in full, react to it, like or comment, the app will serve you more and more similar content. It can quickly pull you down a rabbit hole,” Edwards said.

“There are steps you can take to teach the algorithm that you don’t want to see it anymore. Look for a ‘hide’ button or a ‘I didn’t like that’ option. Explore the different settings in the app, by yourself and with your child.”

Vodafone has teamed up with the NSPCC to co-design a Digital Parenting Toolkit to help parents get ahead of potential risks. It’s full of conversation starters, activities and tips to help young people stay safe while using the internet, as well as advice on what to do if they come across something inappropriate.

Sir Peter Wanless (right): “This toolkit encourages families to have open conversations about their children’s mobile phone use.” Composite: Getty Images, Adrian James White

Sir Peter Wanless, chief executive of the NSPCC, says he is particularly proud of the partnership with Vodafone because it helps navigate an online world that can be overwhelming and confusing for parents as well as children. He says: “The toolkit encourages families to have open conversations about their children’s mobile use, for example discussing situations that might arise online. It also covers safety features available on phones and setting boundaries, such as enforcing screen time limits.”

But screen time rules and parental controls are only one piece of the puzzle: while parents can help stem the flow of harmful content, there is a growing belief that to break the cycle, tech companies themselves need to take action.

To push for this, Global Action Plan has launched a petition calling for regulators such as Ofcom to require platforms to rein in AI-powered algorithms and enforce “safety by design”, a key element of the 2023 Online Safety Act. But there are growing concerns that apps may get away with only bare-minimum compliance.

“Despite parents’ best efforts, children are still vulnerable to manipulative algorithms. We should do our best, but the most power lies with the tech companies and regulators,” Hanson said.

Find out more about Vodafone’s pledge to help four million people and businesses bridge the digital divide.

Source: www.theguardian.com

Can the content on your iPhone remain private?

AI is a challenge for Apple because it demands a great deal of computing power.

During its global developers conference, Apple unveiled its strategy to integrate AI into daily life, primarily focusing on the latest iPhone users.

Apple’s latest AI models are compatible with the iPhone 15 Pro and Pro Max, the only devices featuring the A17 processor. Additionally, Macs up to three years old with M1, 2, or 3 chips, as well as iPad Pros with similar internal hardware, can benefit from the upgrade.

The more affordable iPhone 15 models come with the A16 Bionic chip introduced in 2022 and 6GB of memory, compared to 8GB in the pricier Pro models. This difference is crucial because the M1 chip powering Macs is equivalent to the A14 processor in 2020 iPhones.

The profusion of model numbers underlines that advanced AI features won’t run on just any phone: they require high-performance hardware. For everything else, Apple must deliver AI through its data centers, an endeavor that poses challenges, as Kari Paul writes:

At the core of Apple’s AI privacy measures is its new Private Cloud Compute technology: most processing for Apple Intelligence features happens on the device, and tasks that exceed the device’s capabilities are sent to Apple’s cloud with safeguards for user data.

To uphold privacy, Apple only exports necessary data for each request, implements additional security measures at endpoints, avoids indefinite data storage, and offers tools and software related to its private cloud for third-party validation.

When it comes to AI queries, the complete privacy offered by some online backup or messaging services is not achievable, because the servers need to see a request in order to answer it accurately. Apple has long stressed its commitment to privacy, setting itself apart from competitors like Facebook and Google with its “what happens on iPhone stays on iPhone” pledge.




Apple CEO Tim Cook attending an event in Cupertino, California in September 2023.
Photo: Bloomberg/Getty Images

Apple’s solution involves data centers designed in-house that retain no user data and whose software integrity can be validated. Security researchers are provided with tools to verify the authenticity of the software running on Apple’s servers.

Yet the question remains: can Apple be trusted? Huawei made similar efforts and failed to prove its independence from the Chinese government. Trust in Apple’s commitment to privacy has grown over the years, but accommodating the rise of AI forces the company to compromise on its foundational principles.

While Apple emphasizes privacy, the implementation of AI features like Apple Intelligence may necessitate data transfer to ensure functionality, blurring the lines of privacy assurances.

Considering a transition from a smartphone to a Light Phone?




The Light Phone III, a device enticing those seeking freedom from distractions.
Photo: LightPhone

Exploring products outside the conventional smartphone market reveals devices like Humane and Rabbit, showcasing the expanding realm of hardware addressing users’ varying needs.

Anti-phones, exemplified by devices like the Light Phone III, cater to individuals desiring a balance between digital detox and modern conveniences, offering customizable tools optimized for an unobtrusive experience.

The Light Phone III provides a range of optional tools tailored for LightOS, including alarms, calculators, calendars, directories, and more, designed for a thoughtful user experience.

The device’s intentional limitations, such as omitting a web browser, restrict access to streaming services and encrypted messaging platforms, aligning with the anti-distraction philosophy.

Navigating the transition to an anti-phone involves weighing the desire for reduced digital demands against the practicalities of work and personal life, posing a contemplative dilemma.

Exploring the broader technological landscape




A captivating portrayal of “AI” by Miles Astley.
Photo: Miles Astley

Source: www.theguardian.com

Palestinian-American engineer claims Meta fired him due to his content related to Gaza

A former Meta engineer filed a lawsuit on Tuesday accusing the company of discriminatory practices in handling content related to the Gaza war. He claimed that he was fired by Meta for trying to fix a bug that was throttling Palestinian Instagram posts.

Feras Hamad, a Palestinian-American engineer on Meta’s machine learning team since 2021, sued the social media giant in California, alleging discrimination and wrongful termination over his firing in February.

Hamad accused Meta of bias against Palestinians, citing the removal of internal communications mentioning deaths of Gaza Strip relatives and investigations into the use of a Palestinian flag emoji.

The lawsuit alleged the company did not investigate employees posting Israeli or Ukrainian flag emojis in similar situations. Meta did not immediately respond to the allegations.

These allegations align with ongoing criticism from human rights groups about Meta’s moderation of Israel-Palestine content on its platform, including an external review in 2021.

Since last year’s conflict outbreak, Meta has faced accusations of suppressing support for Palestinians. The conflict erupted in Gaza in October after Hamas attacks, resulting in casualties and a humanitarian crisis.

Earlier this year, about 200 Meta employees raised similar concerns in a letter to CEO Mark Zuckerberg and other leaders.

Hamad’s firing seems linked to a December incident involving a troubleshooting procedure at Meta. He raised concerns about restrictions affecting Palestinian content on Instagram.

The lawsuit mentioned a case where a video by a Palestinian photojournalist was wrongly classified as explicit, sparking further issues.


Hamad was given conflicting instructions on how to resolve the SEV, the troubleshooting procedure referred to above, which led to an investigation and his subsequent termination by Meta.

He claimed Meta cited a rule violation related to a photojournalist, but he denied any personal connection to the individual.

Source: www.theguardian.com

Meta will limit political content on Instagram for users who do not opt-in.

Meta’s recent changes on Instagram mean that users will now see less political content in their recommendations and feed unless they choose to opt-in for it. This adjustment, announced on February 9, requires users to specifically enable political content in their settings.

Users noticed this change in recent days, and it has been fully implemented within the last week. According to the app’s version history, the most recent update before this was a week ago.


The change affects how Instagram recommends content in the Explore, Reels, and in-feed sections. It does not affect political content from accounts users already follow.

Instagram defines political content as related to legal, electoral, or social topics. This change also applies to Threads, and users can dispute recommendations if they feel unfairly targeted.

Meta says the aim of this adjustment is to improve the overall user experience on Instagram and Threads: users keep control over the political content they see, while the platforms stop proactively recommending it.

For more information, Meta’s spokesperson directed users to a February blog post. Similar changes will be rolled out on Facebook in the future.

Despite recent controversies, like censorship during the Israel-Gaza conflict and perceived polarization by Facebook’s algorithms, Meta continues to work on separating political and news content from its platforms.


Although past studies suggest that algorithm changes may not alter political perceptions, Meta’s efforts to distance itself from politics and news continue. This includes phasing out the News tab on Facebook in anticipation of potential conflicts with news publishers and governments.

In ongoing discussions with the Australian government, Meta faces considerations under the News Media Bargaining Act 2021. Possible fines and revenue loss could result from this legislation.

Meta maintains that news content makes up less than 3% of user engagement on Facebook. The company remains committed to evolving its platforms in response to user preferences and societal concerns.

Source: www.theguardian.com

Ofcom concludes that exposure to violent online content is unavoidable for children in the UK

Children in the UK are now inevitably exposed to violent online content, with many first encountering it while still in primary school, according to a report from the media watchdog.

British children interviewed in the Ofcom investigation reported incidents ranging from videos of local school and street fights shared in group chats to explicit and extreme graphic violence, including gang-related content, being watched online.

Although children were aware of more extreme content existing on the web, they did not actively seek it out, the report concluded.

In response to the findings, the NSPCC criticized tech platforms for not fulfilling their duty of care to young users.

Rani Govender, a senior policy officer for online child safety, expressed concern that children are now unintentionally exposed to violent content as part of their online experiences, emphasizing the need for action to protect young people.

The study, focusing on families, children, and youth, is part of Ofcom’s preparations for enforcing the Online Safety Act, giving regulators powers to hold social networks accountable for failing to protect users, especially children.

Ofcom’s director of Online Safety Group, Gil Whitehead, emphasized that children should not consider harmful content like violence or self-harm promotion as an inevitable part of their online lives.

The report highlighted that children mentioned major tech companies like Snapchat, Instagram, and WhatsApp as platforms where they encounter violent content most frequently.

Experts raised concerns that exposure to violent content could desensitize children and normalize violence, potentially influencing their behavior offline.

Some social networks faced criticism for allowing graphic violence, with Twitter (now X) under fire for sharing disturbing content that went viral and spurred outrage.

While some platforms offer tools to help children avoid violent content, there are concerns about their effectiveness and children’s reluctance to report such content due to fear of repercussions.

Algorithmic timelines on platforms like TikTok and Instagram have also contributed to the proliferation of violent content, raising concerns about the impact on children’s mental health.

The Children’s Commissioner for England revealed alarming statistics about the waiting times for mental health support among children, highlighting the urgent need for action to protect young people online.

Snapchat emphasized its zero-tolerance policy towards violent content and assured its commitment to working with authorities to address such issues, while Meta declined to comment on the report.

Source: www.theguardian.com

EU initiates probe into TikTok concerning online content and child safety

The EU is launching an investigation into whether TikTok has violated online content regulations, particularly those relating to the safety of children.

The European Commission has officially initiated proceedings against the Chinese-owned short-video platform for potential violations of the Digital Services Act (DSA).

The investigation is focusing on areas such as safeguarding minors, keeping records of advertising content, and determining if algorithms are leading users to harmful content.


Thierry Breton, EU Commissioner for the Internal Market, stated that child safety is the “primary enforcement priority” under the DSA. The investigation particularly focuses on age verification and default privacy settings for children’s accounts.

In April last year, TikTok was fined €345 million in Ireland for violating EU data law in its handling of children’s accounts. Additionally, the UK Information Commissioner fined the company £12.7 million for unlawfully processing data from children under 13.

Companies that violate the DSA can face fines of up to 6% of their global turnover. TikTok is owned by Chinese technology company ByteDance.

TikTok has stated that it is committed to working with experts and the industry to ensure the safety of young people on its platform and is eager to brief the European Commission on its efforts.

The commission is also examining alleged deficiencies in TikTok’s provision of publicly available data to researchers and its compliance with requirements to establish a database of ads shown on the platform.

A deadline for the investigation has not been set and will depend on factors such as the complexity of the case and the degree of cooperation from the companies being investigated.

This investigation of TikTok is the DSA’s second, following the formal investigation opened in December 2023 into Elon Musk’s social media platform X, which was previously known as Twitter. The case against X focuses on failure to block illegal content and inadequate measures against disinformation.

Apple is reportedly facing a substantial fine from the EU for its conduct in the music streaming app market. The European Commission has been investigating whether the US tech company blocked music streaming services from informing users about cheaper subscription options outside its own app store.

According to the Financial Times, Brussels plans to fine Apple €500m, a significant decision following years of complaints from companies offering services through iPhone apps.

Apple was previously fined 1.1 billion euros by France in 2020 for anti-competitive agreements with two wholesalers, a fine that was later reduced by an appeals court.

Big technology companies like Apple and Google have come under increased scrutiny due to competitive concerns. Google is appealing against fines of more than 8 billion euros imposed by the EU in three separate competition investigations.

Apple has successfully defended against a lawsuit by Fortnite developer Epic Games alleging that its app store was an illegal monopoly. In December, Epic won a similar lawsuit against Google.

Last month, Apple announced that it would allow EU customers to download apps without using its own app store, in response to the EU’s digital market law.

Source: www.theguardian.com

Starch-based super thickeners lower calorie and carbohydrate content in food

Starch is a component of wheat flour and is used as a thickening agent in cooking.

Victor Fischer/Alamy

Making small sheets or cages from starch particles turns them into super-thickeners, which can reduce the calorie content of food.

Starches are often added to foods such as soups to thicken them, but this increases their calorie and carbohydrate content. Now, Lee Peiron and colleagues at Cornell University in New York have discovered that by arranging starch particles into special shapes, they can reduce the amount of starch in foods without sacrificing texture.

Starch particles expand when heated, which thickens the food. This means that the particles get stuck together and there is less room for the liquid components of the dish to flow freely. The researchers wondered if they could recreate this effect while reducing the amount needed by hollowing out starch blocks. “But you can't carve starch grains like pumpkins,” says Lee.
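One way to make this intuition concrete is the standard picture of suspension viscosity, which climbs steeply with the effective volume fraction of particles rather than with their mass. The relation below (the Krieger–Dougherty equation, with typical hard-sphere values) is a generic illustration, not something reported in the study:

\[
\frac{\eta}{\eta_0} \;=\; \left(1 - \frac{\phi}{\phi_m}\right)^{-[\eta]\,\phi_m},
\qquad [\eta] \approx 2.5,\;\; \phi_m \approx 0.64,
\]

where \(\phi\) is the volume fraction the particles effectively occupy. A hollow cage or a stack of sheets that traps liquid inside it contributes far more effective volume per gram of starch than a solid granule does, so the same viscosity can in principle be reached with much less starch, consistent with the halving described below.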

Instead, he and his colleagues devised a method that uses starch particles extracted from amaranth grains and assembles them into three-dimensional shapes by mixing them with water and oil. Starch particles were placed around the oil droplets, and the researchers used a combination of heating and freeze-drying to remove the two liquids. This left only starchy structures, some shaped like cages with a hollow center, others like sheets stacked on top of each other so that the liquid was trapped between them.

The research team discovered that these starch structures are so good as thickeners that they can halve the amount of starch typically needed to thicken foods.

Fan Zhu at the University of Auckland in New Zealand says the use of these granules as building blocks for a new class of hollow starch structures is so innovative that starches could become a big part of future food design. However, Zhu says amaranth starch is expensive and difficult to source in large quantities, so it would be advantageous to apply the new method to more affordable and abundant starches, such as cornstarch. “And more research is needed into what happens when you put these kinds of structures in your mouth,” he says.


Source: www.newscientist.com