Concerns Raised Over Potential Further Censorship of Pro-Palestinian Content in Meta’s Hate Speech Policy Review

The Guardian has confirmed that Meta is considering expanding, and "reconsidering," its hate speech policy around the term "Zionist." On Friday, the company contacted and met with more than a dozen Arab, Muslim, and pro-Palestinian groups to discuss plans to review its policies so that "Zionist" is not used as a proxy for "Jewish" or "Israeli," according to an email seen by the Guardian.

According to an email sent by Meta representatives to the invited groups, the current policy allows "Zionist" to be used in political discussion as long as it does not refer to Jewish people in a dehumanizing or violent way; posts are removed when the term is used explicitly as a stand-in for "Jewish" or "Israeli" in such attacks. The company says it is considering the review in response to recent posts flagged by users and "stakeholders," as reported by The Intercept.


Another organization received an email from a Meta representative stating that the company's current policy does not allow users to attack others on the basis of protected characteristics, and that Meta needs an up-to-date understanding of how people use language to refer to one another. The email also noted that "Zionist" often refers to an ideology, which is not a protected characteristic, but can also be used to refer to Jews and Israelis, who are. The organizations taking part in the discussions expressed concern that the changes could lead to further censorship of pro-Palestinian voices.

In addition, Meta gave examples of posts that would be removed, including one calling Zionists rats. The company has previously been criticized for unfairly censoring Palestine-related content, which heightens concerns about how these policies would be enforced.

In response to a request for comment, Meta spokesperson Corey Chambliss shared a previous statement referring to "increasingly polarized public debate," and added that Meta is considering whether and how it can expand its nuanced handling of such language and will continue consulting stakeholders as it refines the policy. The discussions come during a high-stakes period of conflict, when the accuracy and flow of information can have far-reaching effects.

More than 25,000 Palestinians have been killed since the assault on Gaza began in October 2023. An official from the American-Arab Anti-Discrimination Committee said that implementing a policy like this in the midst of a genocide is extremely problematic and could harm the community.

Source: www.theguardian.com

“Curb Your Enthusiasm” Stars Dive into 120 Episodes of Cringe-Worthy Content in This Week’s Top Podcasts

This week's picks

Late Fragment
Widely available, episodes weekly

This introspective and thoughtful show interviews people in their 80s about politics, religion, sex and money. Its outstanding line-up includes Neil Kinnock, Miriam Margolyes and Proulis. The first episode of the latest series is a wide-ranging conversation with humanitarian Terry Waite: a thoughtful look at homelessness, money, and what it was like to spend five years chained up and in total solitary confinement. Alexi Duggins

Drink Champs
Widely available, episodes weekly
If you're after a quick listen, the latest episode of this loud, booze-soaked series isn't for you. But if you have more than three hours to spend in the company of hip-hop hosts N.O.R.E. and DJ EFN as they talk to the likes of Grandmaster Flash and Ludacris (below), it's a lively, laughter-filled journey into the golden age of hip-hop.

Ludacris, a guest on Drink Champs. Photograph: Mario Anzuoni/Reuters

Black History, For Real
Widely available, episodes weekly
Francesca Ramsey and Conscious Lee shed light on the lesser-known figures who have shaped Black culture beyond Martin Luther King Jr, in a series of fascinating discussions. The excellent first episode focuses on the women of the Black Panther Party, including Assata Shakur, a fugitive targeted by the FBI who maintains her innocence. Hannah Verdier

Hidden 20%
Widely available, episodes weekly
A neurodivergent mind can lead to great creativity, as evidenced by Seedlip founder Ben Branson, who was diagnosed with autism and ADHD as an adult. He now hosts a podcast aiming to change perceptions of the 20% of people who don't fit the neurotypical mould. Guests including actor Kit Harington, vocal coach Carrie Grant and athlete Adelle Tracey bring their insights. HV

The History of Curb Your Enthusiasm
Widely available, episodes weekly
After 23 years, the final series of Curb has just begun, and two of its stars, Jeff Garlin and Susie Essman, are celebrating with a rewatch podcast that rewinds it all the way to the beginning. In the first episode, Larry David talks about the show's pre-pilot development. A must-listen for avid fans. Hollie Richardson

There's a podcast for that

Mary Robinson, host of Mothers of Invention. Photo: Murdo MacLeod/The Guardian

This week, Nyima Jobe shares our picks for the five best podcasts on the climate crisis, from the positive changes we can make as individuals to combat it, to its impact on Indigenous communities.

Drilled
Award-winning investigative journalist Amy Westervelt's podcast delves into the most pressing issues surrounding the climate crisis, from a season on Namibia's growing oil reserves to Guyana's oil boom, which is creating more economic insecurity for ordinary people (not to mention rising sea levels). Westervelt explores the complexities that arise when a country faces climate change and poverty at the same time.

Mothers of Invention
In this fascinating podcast, Mary Robinson (above), Ireland's first female president, shares the microphone with comedian Maeve Higgins and series producer Thimali Kodikara. The all-female cast leaves no room for debate as to whether men are primarily responsible for the climate crisis. Each episode spotlights heroic Black, brown and Indigenous women taking on the challenges facing our planet, and the trio also give airtime to young people's concerns about how the climate crisis will affect their future prospects. The show features a wide range of guests, from climate activists such as Daiara Tukano to US senator Bernie Sanders.

Climate Curious
If you're feeling confused and ill-equipped to discuss the climate crisis and its potential impact on your life, this TEDxLondon podcast hosted by Maryam Pasha and Ben Hurst is the learning tool for you. The show demystifies unfamiliar climate terminology and dissects climate issues through expert interviews, and in a celebration of Pride it explores queer ecology, shedding light on the world of intersex birds and sex-changing fish.

Climate of Change
Climate of Change doesn't have a huge back catalogue, but its six episodes make for short and sweet listening. Hollywood veteran Cate Blanchett and clean-energy entrepreneur Danny Kennedy are joined by guests including Prince William, fashion activist Livia Firth and Don't Look Up director Adam McKay. Despite highlighting the dire challenges facing our planet, the podcast maintains an optimistic tone while providing insight into the important work being done.


Good Together
Hosted by sustainability expert Laura Alexander Wittig, this podcast gives listeners the tools to make a difference in mitigating the climate crisis. Each weekly episode unpacks terms such as "circular economy" and offers practical tips for incorporating eco-friendly habits into daily life, covering a wide range of topics from sustainable spring cleaning to the environmental impact of streaming services. If you want to contribute to positive change, this is the perfect podcast to inspire you to channel your inner climate hero.

For more Guardian reporting on the environment and the climate crisis, sign up here to receive the Down to Earth newsletter every Thursday.

Why not try it…

  • Collection of Memories. This production takes you on a journey across Canada, from a Viking-era Norse settlement in Newfoundland to the ruins of a sacred Haida village in Gwaii Haanas. Each episode explores new locations and stories that help us understand our complicated past.

  • In Pod of Wales, comedians Kiri Pritchard-McLean and Eshild Sears travel across Wales, sampling local food and drink, visiting famous landmarks and talking to local characters.

  • In No Small Endeavor, theologian and professor Lee C. Camp, along with guests including actor Martin Sheen, examines what makes a good life possible.

If you want to read the full newsletter, subscribe to receive Listen Here in your inbox every Thursday.

Source: www.theguardian.com

Meta cracks down on deceptive content by pushing for labeling of all AI images on Instagram and Facebook

Meta is working to identify and label AI-generated images on Facebook, Instagram, and Threads as it strives to expose "people and organizations that actively seek to deceive the public."

Images created using Meta's own AI tools are already labeled as AI-generated, but Nick Clegg, the company's president of global affairs, said in a blog post on Tuesday that Meta will start labeling AI-generated images made with competing services as well.

Meta's AI images already carry metadata and an invisible watermark indicating that the image was created by AI, and the company has partnered with Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock, which are all working on AI image generators, according to Clegg.
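For readers curious how such embedded signals can be inspected, here is a minimal sketch, assuming the provenance hints live in standard EXIF or XMP metadata fields that Pillow can read. The keyword list is an illustrative guess, not Meta's or its partners' documented schema, and the invisible watermark itself would need a separate, proprietary detector.

```python
# A minimal sketch, assuming AI-provenance hints sit in ordinary EXIF/XMP fields.
# The keywords below are illustrative guesses, not Meta's documented schema.
from PIL import Image, ExifTags

AI_HINTS = ("trainedalgorithmicmedia", "c2pa", "generative ai", "ai generated")

def looks_ai_labeled(path: str) -> bool:
    """Return True if the image's readable metadata mentions an AI-provenance marker."""
    img = Image.open(path)

    # EXIF: scan free-text fields such as ImageDescription or Software.
    for tag_id, value in img.getexif().items():
        name = ExifTags.TAGS.get(tag_id, str(tag_id))
        if isinstance(value, str) and any(h in value.lower() for h in AI_HINTS):
            print(f"EXIF hint found in {name}: {value!r}")
            return True

    # XMP: Pillow (>= 8.2, with defusedxml installed) can parse the XMP packet.
    try:
        xmp = img.getxmp()
    except Exception:
        xmp = {}
    if any(h in str(xmp).lower() for h in AI_HINTS):
        print("XMP packet mentions an AI-provenance marker")
        return True

    return False

if __name__ == "__main__":
    print(looks_ai_labeled("example.jpg"))  # hypothetical file path
```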

Clegg said, “As the line between human content and synthetic content becomes blurred, people want to know where the line is.”

He added: "People are often encountering AI-generated content for the first time, and our users have told us they appreciate transparency around this new technology, so it's important to let people know when the content they are seeing was created using AI."

A surfing llama or an AI? Image labels for AI-generated content on Facebook.

Clegg said the labeling feature is still being developed and will be rolled out in all languages in the coming months.

He also stated that the company will add more prominent labels on images, videos, or audio that are “digitally created or altered” and “have a particularly high risk of materially misleading the public.”

Additionally, the company is developing technology to automatically detect AI-generated content even when invisible markers are missing or have been removed.

“This work is particularly important because the online space is likely to become increasingly hostile in the coming years,” Mr Clegg said.

He concluded, “People and organizations actively trying to deceive people with AI-generated content will find ways to circumvent the safeguards in place to detect it. Our industry and society as a whole must continue to find ways to stay ahead of the curve.”

AI deepfakes have already become an issue in the US presidential election cycle, with examples of AI-generated deepfakes used to dissuade voters in the New Hampshire Democratic primary.

Australia's Nine News also faced criticism after it broadcast an altered image of Victorian Animal Justice Party MP Georgie Purcell on the evening news that exposed her midriff and changed her chest, with the broadcaster blaming Adobe's AI image tools.

Source: www.theguardian.com

Big Tech's 'Personalization' Engines Prompt New EU Calls to Turn Off Profiling-Based Content Feeds by Default

Another policy tug-of-war may be shaping up in the European Union over Big Tech's content recommendation systems, with a number of MEPs calling for profiling-based content feeds, the "personalization" engines that process user data to determine what content to display, to be turned off by default. The tracking and profiling of users by mainstream platforms to power "personalized" content feeds has long raised concerns about potential harm to individuals and democratic societies, with critics arguing the technology fuels social media addiction and poses mental health risks to vulnerable people. There are also concerns that it undermines social cohesion by amplifying divisive and polarizing content that can push individuals toward political extremes.

The letter, signed by 17 MEPs from political groups including the S&D, the Left, the Greens, the EPP and Renew Europe, calls for recommender systems on technology platforms to be switched off by default. The idea was floated during negotiations over the bloc's Digital Services Act (DSA) but did not make it into the final regulation because it lacked a democratic majority. Instead, EU lawmakers agreed on transparency measures for recommender systems, along with a requirement that very large platforms (so-called VLOPs) must offer at least one content feed that is not based on profiling. In the letter, however, the lawmakers call for the technology to be switched off by default across the board. "Interaction-based recommender systems, especially hyper-personalized systems, pose a serious threat to the public and society as a whole, as they prioritize emotional and extreme content and target individuals who are particularly likely to be provoked," they wrote. "This insidious cycle exposes users to sensationalized and dangerous content, prolonging their engagement with the platform in order to maximize ad revenue."

The MEPs point to an Amnesty International experiment on TikTok, which showed users being exposed to videos glorifying suicide by the algorithm within just an hour, and to Meta's internal research, which found that 64% of joins to extremist groups were driven by its recommendation tools, exacerbating the spread of extremist ideology. Their call follows draft online safety guidelines for video sharing platforms announced earlier this month by the Irish media regulator, Coimisiún na Meán, which will be responsible for overseeing the DSA locally once the regulation becomes enforceable on in-scope services next February. Coimisiún na Meán is currently consulting on guidance that proposes video sharing platforms "take steps to ensure that profiling-based recommendation algorithms are turned off by default." The guidance was published in the wake of violent civil unrest in Dublin, which the country's police authorities suggested was whipped up by far-right "hooligans" spreading disinformation on social media and messaging apps. And earlier this week the Irish Council for Civil Liberties (ICCL), which has campaigned on digital rights issues for many years, also called on the European Commission to back Coimisiún na Meán's proposal, publishing its own report arguing that social media algorithms are tearing society apart and calling for personalized feeds to be turned off by default.

In their letter, the MEPs endorse the Irish regulator's proposal, suggesting it would "effectively" address the problems with recommender systems, which they say tend to promote "emotional and extremist content" that can undermine civic cohesion. The letter also references a recently adopted European Parliament report on the addictive design of online services and consumer protection, which highlights the negative impact of recommender systems that rely on profiling individuals, especially minors, with the intention of keeping users on the platform as long as possible, thus manipulating them through the "artificial amplification of hatred, suicide, self-harm, and disinformation." "We call on the European Commission to follow Ireland's lead and take decisive action, by not only approving this measure under TRIS [the Technical Regulations Information System] but also by recommending this measure as a mitigation measure to be taken by very large online platforms [VLOPs] under Article 35(1)(c) of the Digital Services Act, to give citizens meaningful control over their data and online environment," the MEPs wrote, adding: "The protection of our citizens, especially young people, is of paramount importance, and we believe that the European Commission has an important role to play in ensuring a safe digital environment for everyone. We look forward to your prompt and decisive action on this issue."

Under TRIS, EU member states must notify the European Commission of draft technical regulations before they are adopted into national law, so that the EU can carry out a legal review to ensure the proposals are consistent with the bloc's rules, in this case the DSA. The system means national laws that seek to "gold-plate" EU regulations are unlikely to pass scrutiny. As such, the Irish regulator's proposal to have video platforms' recommender systems off by default appears to go further than the text of the relevant law and may not survive the TRIS process. That said, nothing prevents platforms from going that far voluntarily; none has yet, and it is clearly not the kind of step that ad-funded, engagement-driven platforms would choose as their commercial default.

When asked, the European Commission declined to comment publicly on the MEPs' letter (or the ICCL report). Instead, a spokesperson pointed to the "clear" obligations for VLOPs' recommender systems set out in Article 38 of the DSA, which requires platforms to provide at least one option for each of these systems that is not based on profiling. We were, however, able to discuss the debate over profiling-based feeds with EU officials speaking on background. They agreed that platforms could choose to turn off profiling-based recommender systems by default as part of their DSA systemic risk mitigation compliance, but confirmed that none has yet strayed that far from business as usual. So far, the only examples are non-profiling feeds offered to users as an option, such as on TikTok and Instagram, to meet the aforementioned (Article 38) requirement to give users a way to avoid this kind of content personalization; that still demands an active opt-out by the user, whereas defaulting feeds to non-profiling would clearly be a stronger form of content regulation, since it requires no user action at all. The officials confirmed that the Commission, in its capacity as enforcer of the DSA on VLOPs, is looking at recommender systems, including via the formal proceeding opened in relation to X earlier this week. Recommender systems have also been the focus of some of the formal requests for information the Commission has sent to VLOPs, including one to Instagram concerning child safety risks, they said. And they agreed that the EU could use its enforcement powers to force large platforms to turn off personalized feeds by default, but indicated the Commission would only take such a step if it determined it would be effective in mitigating a specific risk. The officials noted that there are multiple types of profiling-based content feeds in play, even per platform, and stressed that each must be considered in context.

More generally, the officials appealed for "nuance" in the debate over the risks of recommender systems, suggesting the Commission's approach will be to assess concerns case by case and to push for data-driven policy interventions on VLOPs rather than blanket measures. These are, after all, a diverse collection of platforms, spanning video-sharing and social media giants as well as retail and information services and, most recently, porn sites. The risk of an enforcement decision being picked apart by legal challenge in the absence of solid evidence to support it is clearly a concern, and the Commission appears to want to gather more information before deciding whether to recommend such a step.

Source: techcrunch.com

EU Identifies Three Porn Sites Subject to Stricter Online Content Regulations

Age verification technology could be heading to adult content sites after three such sites were added to the list of platforms subject to the strictest level of regulation under the European Union's Digital Services Act (DSA).

Back in April, the EU announced an initial list of 17 so-called Very Large Online Platforms (VLOPs) and two Very Large Online Search Engines (VLOSEs) designated under the DSA. The initial list did not include adult content sites. The addition of the three platforms specified today changes that.

According to Wikipedia — which, ironically, was itself named a VLOP in the first wave of Commission designations — XVideos and Pornhub are the world's No. 1 and No. 2 most-visited adult content sites. Stripchat, meanwhile, is an adult webcam platform that live-streams nude performers.

None of the three services currently requires visitors to undergo a robust age check (i.e. age verification rather than self-declaration) before accessing content, but as a result of the designations, that looks set to change.

The pan-EU regulation imposes additional obligations on designated (larger) platforms with more than 45 million average monthly users in the region, including obligations to protect minors. As the EU put it in a press release today [emphasis ours]: "VLOPs must design their services, including interfaces, recommender systems, and terms of use, to address and prevent risks to the wellbeing of children. Mitigation measures to protect the rights of the child, and to prevent minors from accessing pornographic content online, such as age verification tools, must be put in place."

The European Commission, which is responsible for overseeing VLOPs’ compliance with the DSA, today reiterated that creating a safer online environment for children is an enforcement priority.

Other DSA obligations on VLOPs include a requirement to produce a risk assessment report covering any "specific systemic risks" their services may pose in relation to the dissemination of illegal content and content that threatens fundamental rights. The report must first be shared with the Commission and then published.

They must also apply mitigation measures to address risks linked to the online spread of illegal content, such as child sexual abuse material (CSAM), and of content that affects fundamental rights such as human dignity and private life, for example the non-consensual sharing of intimate images or deepfake pornography.

“These measures may include, among other things, adaptations to terms of use, interfaces, moderation processes, algorithms, etc.,” the Commission notes.

The three adult platforms designated as VLOPs have four months to bring their services into compliance with the additional DSA requirements. That means they have until late April to make the necessary changes, such as rolling out age verification technology.

"The European Commission's services will closely monitor compliance with the DSA obligations by these platforms, in particular with regard to measures to protect minors from harmful content and to combat the spread of illegal content," the EU said, adding that it will work closely with the newly designated platforms to ensure these issues are appropriately addressed.

The DSA also contains a set of general obligations that apply more broadly, covering smaller digital services as well as VLOPs. These include ensuring that systems are designed to deliver a high level of privacy, safety and protection for children, and promptly notifying law enforcement authorities on becoming aware of information that gives rise to suspicion of a criminal offence involving a threat to the life or safety of a person, including in cases of child sexual abuse. Compliance with these general requirements kicks in slightly earlier, on February 17, 2024.

The DSA applies across the EU and the EEA (European Economic Area), which post-Brexit does not include the UK. However, this autumn the UK government passed its own Online Safety Act (OSA), establishing communications regulator Ofcom as the country's internet content watchdog and introducing a harsher penalty regime than the EU's: OSA fines can reach up to 10% of global annual turnover, versus up to 6% under the EU's DSA.

UK law also focuses on child protection. Recent Ofcom guidance for porn sites, aimed at helping them comply with new legal obligations to prevent minors from encountering adult content online, states that "highly effective" age checks must be carried out, and specifies that such checks cannot rely on age gates that simply ask users to self-declare that they are 18 or over.

Ofcom's list of acceptable age verification methods includes asking porn site users to upload a copy of their passport to verify their age, showing their face to a webcam for an AI age assessment, or signing in via open banking to prove they are not a minor.

Source: techcrunch.com

VTuber Ironmouse, known for her animated persona, wins Content Creator of the Year at Game Awards

At this year’s Game Awards, fan-favorite VTuber Ironmouse won the coveted Content Creator of the Year award. This is the first time an animated character has won this award, and it shows just how expansive the world of streaming can be.

"VTuber" is short for "virtual YouTuber," a movement that originated in Japan, though the genre has since spread to other streaming sites such as Twitch, where Ironmouse has 1.8 million followers and is the most-subscribed female streamer. VTubers often resemble anime characters, with creators building virtual personas by fleshing out their avatars using motion capture or AR facial tracking technology. VTubers have been around for about a decade, but their popularity rose during the early days of the pandemic, when VTuber agency Hololive launched an English division to cater to its growing Western audience. The streaming genre will only continue to grow as the technology to create VTubers becomes more accessible.

Although the VTuber phenomenon is already widespread and beloved, the Game Awards win by Ironmouse, a fun-loving demon, further legitimizes the genre.

In announcing Ironmouse's victory, the show's host said: "Ironmouse couldn't be here tonight because she's animated. Unfortunately, we're not in the Matrix yet."

Ironmouse's intrigue doesn't end with her innovative persona. The identity of the creator behind Ironmouse is unknown, but she has revealed that she is from Puerto Rico and lives with common variable immunodeficiency (CVID), a chronic illness that includes lung disease. Her condition sometimes left her bedridden, she told the Washington Post, but being a VTuber gave her access to a rich online world where she could be anyone she wanted to be, even a pastel pink-clad gamer demon from hell. Last year, she streamed for 31 consecutive days as part of an annual "subathon," in which viewers pledge money to keep her online, and this year she took on the marathon streaming challenge again, raising money for the Immune Deficiency Foundation.

"I have no words to describe how I feel right now," Ironmouse wrote on X after her victory was announced. "I'm in complete shock. Thank you so much to everyone who changed my life."

Source: techcrunch.com

Civitai, a generative AI content marketplace with millions of users, receives backing from Andreessen Horowitz

Stable Diffusion has a large and devoted community, particularly among those experimenting with new AI technologies and creating their own models. Civitai is a startup platform that allows members to share their AI image models and content with other enthusiasts; the name is a play on "civitas," meaning community. Civitai CEO Justin Maier recognized the need for such a platform after working on web development projects at Microsoft: he saw people creating AI-generated images and posting them on platforms like Reddit and Discord, but felt a centralized community was needed to share and discover AI image models.

Civitai became the go-to place for sharing AI models and images in 2023 and has since grown to more than 3 million registered users, drawing roughly 12-13 million unique visitors every month. The company raised $5.1 million in funding from Andreessen Horowitz (a16z) in June 2023.

The platform allows users to upload images and train their own AI image models, and each generated image carries metadata that includes details such as the prompts and resources used. Concerns have been raised about artists' work being used without consent to train AI models, which Civitai is working to address by allowing artists to flag resources they believe were built on their work. There are also concerns about non-consensual pornographic AI images being shared on the platform, which the company says it is working to address.
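As a rough illustration of how this kind of generation metadata is commonly stored and read, the sketch below pulls a prompt out of a PNG's text chunks with Pillow. The "parameters" key follows a convention popularized by Stable Diffusion web UIs and is an assumption here, not Civitai's documented format.

```python
# A minimal sketch, assuming generation details are embedded as PNG text chunks.
# The "parameters" key is a common Stable Diffusion web-UI convention, assumed
# here for illustration; it is not a documented Civitai format.
from PIL import Image

def read_generation_metadata(path: str) -> dict:
    img = Image.open(path)
    # For PNGs, tEXt/iTXt chunks surface as string entries in the .info dict.
    chunks = {k: v for k, v in img.info.items() if isinstance(v, str)}
    blob = chunks.get("parameters", "")
    # The prompt conventionally precedes a "Negative prompt:" section.
    return {"prompt": blob.split("Negative prompt:")[0].strip(), "raw_chunks": chunks}

if __name__ == "__main__":
    meta = read_generation_metadata("generated_image.png")  # hypothetical file
    print(meta["prompt"] or "No prompt metadata found")
```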

In the future, Civitai aims to expand beyond AI image models to other modalities and to build a consumer-facing mobile app that acts as a repository for AI images. The company also plans to let users monetize their creations, but for now everything on the site is free to use.

Source: techcrunch.com

Instagram is experimenting with a dedicated feed showing only content from Meta Verified users

Instagram chief Adam Mosseri said the company is testing a feed that only shows posts from Meta Verified users. The new toggle appears under the "Following" and "Favorites" options that show up when you tap the Instagram logo in the app.

"We're testing a way for people to explore their Instagram feed and Reels by switching to only Meta Verified accounts," Mosseri said on Instagram's broadcast channel. "We're exploring this as a new control for people and a way for businesses and creators to be discovered."

The official announcement comes two months after Instagram told TechCrunch that it was not testing such a feature, after reverse engineer Alessandro Paluzzi spotted a new feed filter for Meta Verified subscribers in the code of both the iOS and Android Instagram apps.

Instagram appears to see the new feed as a way to drive Meta Verified subscriptions and get people interested in subscribing, since it offers an opportunity to raise awareness of the program. Meta Verified costs $11.99 per month on the web and $14.99 per month on mobile, and gives users a blue checkmark, enhanced customer support, increased visibility and reach in search and comments, exclusive stickers, and more.

Meta is clearly borrowing from Elon Musk's playbook with the new verified-only feed, and with Meta Verified in general. After the Tesla CEO acquired Twitter (now X) in the fall of 2022, the social network launched paid verification through its revamped X Premium subscription at $8 per month. Subscribers get access to various features, including a "Verified" tab in their notifications, an edit button, and support for longer posts.

Source: techcrunch.com