Mark Zuckerberg’s Meta has made changes to its content policies that, campaigners warn, risk exposing users to the kind of harmful suicide and self-harm material on Instagram that influenced teenager Molly Russell before her death.
The Molly Rose Foundation, established in Molly’s memory after she took her own life in 2017, is urging UK regulatory authorities to take urgent action. The warning comes days after Meta, under Zuckerberg’s leadership, announced modifications to its approach to content moderation amid the company’s repositioning for the Trump administration.
In the US, the fact-checking system has been replaced with a “community notes” feature that relies on users to assess the accuracy of content. Policies on “hateful conduct” have been loosened: prohibitions on referring to non-binary people as “it” have been removed, and claims of mental illness based on gender or sexual orientation are now permitted.
Meta says it will continue to use its automated content-scanning systems to address the most severe material, including content related to suicide, self-harm, and eating disorders.
Despite Meta’s efforts, the Molly Rose Foundation remains concerned about the normalization of harmful behaviors associated with suicide and self-harm, particularly among individuals experiencing severe depression.
Meta says it is working with regulatory bodies to prevent teenagers from encountering harmful content.
Meta’s own data shows that only 1% of suicide and self-harm content reported on its platforms between July and September last year led to action being taken.
Andy Burrows, the chief executive of the Molly Rose Foundation, emphasizes the need for Ofcom to strengthen regulation of tech platforms to ensure child safety. He warns that if Ofcom fails to act decisively, the prime minister should intervene.
In May, Ofcom released draft safety guidelines requiring tech companies to act to safeguard children online. The measures include stopping algorithms from recommending harmful content, implementing age verification checks, and strengthening overall safety protocols.
A spokesperson for Meta asserts that they are actively working to identify and remove harmful content through automated systems and community standards. They emphasize their commitment to user safety and have restricted access to certain types of content for British teen accounts.
An Ofcom representative affirmed the importance of online safety laws in protecting children from risks such as suicide and self-harm content, emphasizing the swift removal of such material.
The spokesperson said social media companies, including Meta, must comply with regulations designed to protect children, and that Ofcom is prepared to enforce these measures with its full authority if necessary.
Nick Clegg has strongly defended Meta’s decision to scale back the social media company’s moderation and remove fact-checkers.
The changes to Facebook, Instagram, and Threads, including a shift to promote more political content, were announced by CEO Mark Zuckerberg earlier this month.
Clegg, who is stepping down from the tech company after six years to make way for Joel Kaplan, a Republican seen as close to Donald Trump, rejected claims that Meta was diminishing its commitment to truth.
“Please look at what Meta has announced. Ignore the noise, the politics, and the drama that accompanies it,” he said at the World Economic Forum in Davos, describing the new policy as “limited and tailored.”
The former UK deputy prime minister and Liberal Democrat leader stated: “There are still 40,000 people dedicated to safety and content moderation, and this year we will again invest $5 billion (£4 billion) a year in platform integrity. We still maintain the most advanced community standards in the industry.”
Clegg mentioned that Meta’s new community notes system, replacing its fact-checker, will resemble the one used by Elon Musk’s competing social media platform X, and will first be launched in the United States.
He described it as a “crowdsourcing or Wikipedia-style approach to misinformation” and suggested it might be “more scalable” than the fact-checkers that he believes have lost the public’s trust.
Zuckerberg, who has been collaborating closely with President Trump recently, simply aims to refine Meta’s content moderation approach, according to Clegg.
During a roundtable discussion with journalists at the Swiss ski resort, Mr. Clegg confirmed that numerous expressions previously banned on Meta’s platforms will now be tolerated, including the use of derogatory terms for groups of people and descriptions of LGBT individuals as “mentally ill.”
Mr. Clegg continued to defend this stance, stating at an event in Davos: “It seems inconceivable to us that individuals can say things in Congress or traditional media that they cannot say on social media. Therefore, some significant adjustments were made.”
He emphasized that speech targeting individuals in a manner designed to intimidate or harass remains unacceptable.
The union representing tech workers in the UK expresses concerns on behalf of British staff at Meta about the company’s decision to eliminate fact-checkers and diversity, equity, and inclusion programs. They feel disappointed and worried about the future direction of the company.
Prospect union, which represents a growing number of UK Meta employees, has written to express these concerns to the company, highlighting the disappointment among long-time employees. They fear this change in approach may impact Meta’s ability to attract and retain talent, affecting both employees and the company’s reputation.
In a letter to Meta’s human resources director for EMEA, the union warns about potential challenges in recruiting and retaining staff following the recent announcements of job cuts and performance management system changes at Meta.
The union also seeks assurances that employees with protected characteristics, especially those from the LGBTQ+ community, will not be disadvantaged by the policy changes. They call for Meta to collaborate with unions to create a safe and inclusive workplace.
Employees are concerned about the removal of fact-checkers and increased political content on Meta’s platform, fearing it may lead to a hostile work environment. They highlight the importance of maintaining a culture of respect and achievement at Meta.
Referencing the government’s Employment Rights Bill, the union questions Meta’s efforts to prevent sexual harassment and ensure that employees with protected characteristics are not negatively impacted by the changes.
The letter from the union follows Zuckerberg’s recent comments on a podcast, where he discussed the need for more “masculine energy” in the workplace. Meta has been approached for comment on these concerns.
Meta, the parent company of Facebook, WhatsApp, and Instagram, is planning to reduce its global workforce by around 5%, with underperforming employees being the most likely to be let go.
CEO Mark Zuckerberg outlined in a memo to employees that, anticipating what he referred to as a challenging year ahead, he has decided to prioritize performance management by letting go of poor performers more quickly than usual and accelerating the company’s performance evaluation cycle.
As of September, Meta had 72,000 employees globally, and the planned job cuts could impact up to 3,600 employees. The company aims to fill the vacant positions later in the year.
The announcement comes shortly after Meta’s decision to end third-party fact-checking and emphasize free speech, coinciding with President Donald Trump’s imminent return to the White House. The Diversity, Equity, and Inclusion (DEI) program is also being terminated.
Employees in the US affected by the layoffs will be notified by February 10, with notifications for employees in other countries to follow later.
In the memo, Zuckerberg stated that he is raising the standards for performance management within the company: “We usually manage underperforming talent over a year, but this time we plan to make broader performance-based cuts during this cycle.”
The 40-year-old billionaire emphasized, “This will be an intense year. I want to ensure we have the best talent on the team.”
Employees being let go will be those who have been with Meta long enough to qualify for performance reviews.
Zuckerberg assured that the company will provide generous severance packages to those losing their jobs, similar to previous layoffs.
Meta’s stock dropped 2.3% on Tuesday, continuing a decline that began the day before.
The company faced criticism for removing its fact checker, potentially allowing misinformation and harmful content to circulate on its platform.
Like other tech companies, Meta is investing heavily in artificial intelligence, a technology Zuckerberg has described as crucial to the company’s future.
Meta owns social media platforms such as Facebook and Instagram. Photo: JRdes/Shutterstock
In 2024, Meta allowed more than 3,300 pornographic ads, many featuring AI-generated content, to run on social media platforms such as Facebook and Instagram.
That is according to a report by AI Forensics, a European non-profit organization that researches the algorithms of major technology platforms. The researchers also found inconsistencies in Meta’s content moderation by reuploading many of the same explicit images as standard Instagram and Facebook posts; unlike the ads, these posts were quickly removed for violating Meta’s community standards.
“I am disappointed but not surprised by this report, as my research has already revealed double standards in content moderation, particularly in the area of sexual content,” said Carolina Are of the Centre for Digital Citizenship at Northumbria University, UK.
The AI Forensics report focuses on a small sample of ads targeting the European Union. It found that the explicit ads Meta approved primarily targeted middle-aged and older men, promoting “shady sexual enhancement products” and “dating sites,” and together exceeded 8.2 million impressions.
This permissiveness reflects a widespread double standard in content moderation, Are said. Tech platforms, she says, often block content from “women, femme-presenting, and LGBTQIA+ users.” That double standard extends to men’s and women’s sexual health: “Examples include lingerie and period-related ads being removed by Meta while ads for Viagra are approved,” she says.
In addition to discovering AI-generated images within ads, the AI Forensics team also discovered audio deepfakes. For example, some ads for sex-enhancing drugs featured the digitally manipulated voice of actor Vincent Cassel superimposed over pornographic visuals.
“Meta prohibits the display of nudity or sexual activity in ads or organic posts on our platforms, and we remove violating content when we become aware of it,” a Meta spokesperson said. “Bad actors are constantly evolving their tactics to evade enforcement, which is why we continue to invest in the best tools and technology to identify and remove violating content.”
The report comes at the same time that Meta CEO Mark Zuckerberg announced he would be eliminating the fact-checking team in favor of crowd-sourced community notes.
“If you really want to be dystopian, and I think there’s reason to be at this point given Zuckerberg’s latest decision to eliminate fact-checkers, you could even say that Meta is stripping users of their agency while taking money from questionable ads,” Are said.
Fact-checkers had little doubt about the intended audience for this week’s news, delivered through Mark Zuckerberg’s chosen medium: an awkward video message announcing Meta’s plan to move from professional third-party fact-checking to a user-driven “community notes” model similar to X’s, starting in the US.
Upon hearing the news, one fact-checker expressed concerns about Meta’s intention to please President Trump. Their public response on the matter was more tactful but conveyed the same sentiment.
Across the Atlantic, questions arose about how the European Union would respond to Meta’s decision, especially with the next US president watching. The implications could extend beyond Europe’s borders to fact-checkers globally.
Meta’s fact-checking program, which spans 130 countries and is a significant source of funding for fact-checking worldwide, was established shortly after the 2016 US election. Despite Meta’s investment of $100 million in fact-checking efforts since then, concerns remain among fact-checkers about potential changes in the future.
How the change affects fact-checkers globally will depend on Meta’s rollout outside the US. The company’s plans for the EU remain unclear, though it says there are currently “no immediate plans” to suspend fact-checking there.
The EU’s regulatory framework for digital platforms, including Meta, is being tested through initiatives like the Code of Practice on Disinformation. However, enforcement and interaction with fact-checkers remain unresolved issues.
The European Commission’s response to Meta’s decision will be a crucial test of the principles of the Digital Services Act (DSA) and could influence Meta’s policies worldwide.
Overall, fact-checkers anticipate Meta will phase out third-party fact-checking globally after implementing the new system in the US. The impact on the fact-checking movement, which relies heavily on Meta’s funding, could be significant.
The future of fact-checking remains uncertain, with potential consequences for fact-checkers worldwide. Many organizations may need to scale back or close operations if Meta discontinues its support, impacting efforts to combat misinformation.
Rappler, a Philippine news site, warned that the challenges faced in the US could signify a larger struggle to preserve truth and individual agency in the face of increasing dangers.
Meta is discontinuing its diversity, equity, and inclusion (DEI) program, effective Friday, days after Mark Zuckerberg announced the elimination of fact-checking.
An internal memo from Meta acknowledged the changing legal and policy landscape surrounding DEI efforts in the United States, referencing recent Supreme Court decisions and shifting perceptions of the concept of DEI, which it noted some now view as “reprehensible.” Axios and Business Insider first reported on the memo. Meta confirmed the termination of its DEI practices but did not provide further comment on how the decision aligns with the company’s overarching goals.
Janelle Gale, vice-president of human resources, said in the memo that Meta is discontinuing various programs targeting underrepresented groups, such as its “diverse slate approach” and representation goals, practices that had been used to promote diverse hiring and are now facing legal challenges.
Despite its past efforts to increase diversity in the workforce, Meta will no longer implement certain diversity-focused hiring practices, according to the announcement.
Furthermore, the company will be ending its equity and inclusion training program and permanently disbanding its DEI-focused team.
The decision to terminate diversity efforts contradicts Meta’s AI-powered Instagram and Facebook profiles, which highlighted the need for a more representative team.
The termination of DEI initiatives follows Meta’s alignment with Donald Trump and the addition of Trump ally Dana White to the company’s board of directors. Meta joins a list of companies, including McDonald’s, Walmart, Ford, and Lowe’s, that have voluntarily halted their diversity initiatives or have been targeted by far-right groups.
A group of authors claims that Mark Zuckerberg authorized Meta to use “pirated copies” of their copyrighted books to train the company’s artificial intelligence models, according to a filing in US court.
According to the filing, internal Meta communications show that the dataset was “known to be pirated” within the company’s AI executive team, and that the CEO nevertheless supported the use of the LibGen dataset, an extensive online archive of books.
The authors suing Meta for copyright infringement, including Ta-Nehisi Coates and Sarah Silverman, made these accusations in a filing in California federal court. They alleged that Meta misused their books to train Llama, a large-scale language model powering chatbots.
The use of copyrighted content to train AI models has become a contested legal issue in the development of generative AI tools such as chatbots. Authors and publishers have warned that their work may be used without permission, putting their livelihoods at risk.
The filing referenced a memo with Mark Zuckerberg’s approval for Meta’s AI team to use LibGen. However, discussions about accessing and reviewing LibGen data internally at Meta raised concerns about the legality of using pirated content.
Last year, a US district judge dismissed claims that the output of Meta’s AI models infringed the authors’ copyrights and that Meta had unlawfully stripped their books of copyright management information (CMI), but granted the plaintiffs permission to amend their claims.
The authors argued this week that the evidence supports their infringement claims, justifies reviving their CMI claims, and warrants adding new computer fraud claims.
During Thursday’s hearing, Judge Chhabria expressed skepticism about the validity of the fraud and CMI claims but allowed the writers to file an amended complaint.
This week, Meta announced the discontinuation of its fact-checking program in the United States and a rollback of its content moderation policy on “hateful conduct.” These measures will undoubtedly open the floodgates to more hateful, harassing, and inflammatory content on Facebook and Instagram. Immigrants and the LGBTQ+ community are two of the groups most likely to be affected.
Last month, after Donald Trump won the election, Zuckerberg visited Trump at Mar-a-Lago and Meta donated $1 million to the presidential inaugural fund. Asked to comment on Meta’s policy change, Trump acknowledged that Zuckerberg was “probably” influenced by his threats to jail tech company CEOs.
This is the formation of a mafia state, where open threats are rewarded with lavish gifts and public praise.
Looking back at the history of content moderation, it is easy to conclude that social media companies tailor their products to the needs of those with the power to regulate them. This time is no different, but the impact on vulnerable groups will likely be even worse. By changing Meta’s policy on fact-checking to appease Trump, Zuckerberg is laying the foundations for a frictionless oligarchy, in which those with the most power and influence no longer have to contend with facts and corrections.
It was during the first Trump administration that technology companies realized social media was susceptible to domestic and international media manipulation campaigns, as their products were used to spread lies, grievances, conspiracies, and hatred to millions of people. Journalists exposed massive media manipulation campaigns carried out by Cambridge Analytica and Russia’s Internet Research Agency, which used Facebook for political ends during the 2016 US election and the Brexit referendum.
Instead of taking responsibility and aggressively removing abusers, Mr. Zuckerberg turned to advisers known in political circles as a cadre of cutthroat fixers, most of them Harvard-educated and accustomed to political doublespeak. Controlling speech globally became the challenge of their careers.
In November 2016, responding to growing public criticism of “fake news” on Facebook, Zuckerberg posted a lengthy message on his profile about misinformation, saying the company would consult “respected fact-checking organizations” while working methodically to avoid becoming the “arbiter of truth.” By December, Adam Mosseri, then the company’s vice-president of newsfeed, had announced a new protocol for handling false stories, transferring responsibility for content decisions to third-party fact-checkers signed up to the non-profit media organization Poynter’s International Fact-Checking Code of Principles. Despite these efforts, misinformation continued to thrive, especially among right-leaning audiences.
In 2018, the company’s COO, Sheryl Sandberg, formerly chief of staff at the US Treasury Department before a stint at Google, backed Facebook’s “oversight board,” also known as its “supreme court,” which reviews and rules on controversial moderation decisions. In early 2021, Nick Clegg, the former British deputy prime minister who was then Meta’s head of communications, wrote the decision to indefinitely suspend President Trump after he used the company’s products to facilitate the attack on the Capitol. Zuckerberg said at the time that the “risk of allowing the president to continue using our services” was “simply too great.”
While Meta’s content arbitration system was expensive and unwieldy, on the plus side it forced some transparency into content moderation decisions and provided conclusive evidence that misinformation is a feature, rather than a bug, of the right-wing media ecosystem.
Mark Zuckerberg met with Donald Trump at the White House in 2019. Photo: 2020 Images/Alamy
Clegg will now be replaced as Meta’s head of global policy by Joel Kaplan, a former senior staffer to George W. Bush who took part in the “Brooks Brothers riot” during the 2000 Florida recount. After Meta’s announcement this week, Kaplan appeared on Fox News, spoke of his enthusiasm for the policy change, and lavished praise on Trump. His influence on Meta’s new direction is clear, and troubling to defenders of internet freedom who do not want social media platforms to remain pawns in a political chess match.
In remarks that could have come from Trump himself, Zuckerberg claimed that fact-checkers are “too politically biased and have destroyed more trust than they’ve built, especially in the United States.” Importantly, academic research on fact-checking reveals the opposite: a study by researchers at MIT Sloan School of Management found that exposure to fact-checks reduces belief in misinformation, even among right-leaning audiences who doubt the effectiveness of fact-checking.
Meta never applied its rules equally to all users. Whistleblower Frances Haugen revealed that Meta maintained a list of high-profile accounts that were repeatedly allowed to violate the platform’s rules. Meta has historically exempted politicians from fact-checking, and the end of the fact-checking program is primarily a windfall for right-wing users of Meta products, who are more likely to share misinformation on Facebook, according to a study conducted by academics in partnership with Meta and published in the journal Science.
A lack of fact-checking will likely lead to the aggressive spread of conspiracy theories and hateful content across Meta’s products, testing advertisers’ tolerance for brand-safety risks.
Instead of trained fact-checkers who are experts at detecting, documenting, and debunking misinformation, Meta plans to rely on a “community notes” system similar to Elon Musk’s X, deputizing a gallery of unvetted users to police speech.
But moderation is not going well at X either. After a rapidly shrinking user base and an advertiser boycott, X came to be worth about 20% of what Musk paid for it. How moderation on the platform reflects Musk’s whims and preferences became clear last week, when ardent MAGA supporters admonished him over H-1B visas for foreign workers. Musk’s response was to optimize the platform for “unregretted user seconds” by boosting content he finds more interesting, while banning and demonetizing many MAGA heavyweights.
Musk has previously criticized deplatforming and demonetization as censorship tactics employed only by the left. That argument no longer holds now that the self-styled “Dark MAGA” is gaming the algorithm himself, and Zuckerberg will follow suit. Rather than criticizing fact-checkers, Mr. Zuckerberg should admit that he is changing the rules to reflect Mr. Trump’s political agenda and that, with Mr. Musk having paved the way, he will adjust the algorithms so that MAGA can build a base on Facebook and Instagram.
“It’s time to get back to the basics of freedom of expression,” Zuckerberg declared. (Freedom of expression is not Meta’s origin story: Facebook’s predecessor asked Harvard students to rate the physical attractiveness of their female classmates, a fact Zuckerberg has tried to consign to oblivion at every opportunity.) But freedom of expression, according to the United Nations’ Universal Declaration of Human Rights, refers to the human right to “seek, receive and impart information.” It does not guarantee an audience or the amplification of speech, and it provides no protection against the fact-checking or labeling of online speech. Those powers are reserved for the companies that control the flow of content across platforms.
Far from enabling freedom of expression, Meta’s changes to its “hateful conduct” policy signal a return to Facebook’s more misogynistic roots. In a blog post, Meta pledged to align its moderation policies with “mainstream discourse,” particularly on gender and immigration, two issues championed by Trump and Musk during the 2024 campaign. On Meta’s products it is now permissible to call LGBTQ+ people mentally ill and to scapegoat immigrants.
It is a clear sign of techno-fascism that communication systems are disrupted by changes in political power after every election. The protection of vulnerable groups online continues to depend on the political ambitions of social media platform CEOs or owners.
This is further proof that social media is not about free speech; it never was. Content moderation is the core product of social media, with algorithms deciding whether speech is displayed, how loudly it is amplified, and whether it is met with counter-speech. Contrary to Zuckerberg’s claims, it was not the fact-checkers who ruined Meta’s products. It was always insider political operatives, including Clegg, Sandberg, and Kaplan, who turned social media into a new frontier in the culture wars.
Experts and politicians are warning that significant changes to Meta’s social media platform are setting it on a collision course with lawmakers in the UK and the European Union.
Lawmakers in Brussels and London have criticized Mark Zuckerberg’s decision to remove fact-checkers from Facebook, Instagram, and Threads in the US, with one MP describing it as “absolutely frightening.”
Changes to Meta’s global policy on hateful content now allow users to refer to transgender people as “it,” and the guidelines now permit “allegations of mental illness or abnormality when based on gender or sexual orientation.”
Chi Onwurah, a Labour MP and chair of the House of Commons science and technology committee, has expressed alarm at Zuckerberg’s decision to eliminate professional fact-checkers, calling it “alarming” and “pretty scary.”
Maria Ressa, the Nobel peace prize-winning Filipino-American journalist, has warned of “very dangerous times” ahead for journalism, democracy, and social media users as a result of Meta’s changes.
Damian Collins, a former UK technology minister, has raised concerns that trade negotiations with the Trump administration could pressure the UK into accepting US digital regulatory standards.
Meta’s move, announced ahead of Donald Trump’s inauguration, has sparked predictions of challenges from the Trump administration to laws such as the Online Safety Act.
Zuckerberg has hinted at extending his policy of removing fact-checkers beyond the US, raising concerns among experts and lawmakers in the UK and EU.
Regulatory scrutiny on Meta’s changes is expected to increase in the UK and EU, with concerns about the spread of misinformation and potential violations of digital services law.
Meta has assured that content related to suicide, self-harm, and eating disorders will continue to be treated as high-severity violations, but concerns remain about the impact on children in the UK.
Meta’s rewritten policy on “hateful conduct” changes what users are able to say on its platforms Facebook, Instagram, and Threads. Multiple edits were made to the policy after Mark Zuckerberg announced sweeping changes to how content is moderated on those platforms.
Among them are:
Certain prohibitions against referring to transgender and non-binary people as “it” have been removed. A new section clarifies that “allegations of mental illness or abnormality” are permitted “when based on gender or sexual orientation,” which Meta said reflects “political and religious discourse” around transgender identity and homosexuality, as well as common non-serious usage of words such as “weird.” The policy also now permits content arguing for exclusion and “insulting language in the context of discussing political or religious topics, such as when discussing transgender rights, immigration, or homosexuality.”
Meta’s policy still prohibits targeting individuals and groups on the basis of protected characteristics or immigration status with dehumanizing language comparing them to animals, pathogens, or sub-human life forms such as cockroaches and locusts. However, the changes suggest it is now permissible to compare women to household objects and property, and to compare people to feces, filth, bacteria, viruses, diseases, and “primitive” humans.
Meta removed prohibitions on avowed racism, homophobia, and Islamophobia. It also removed bans on expressions of hate such as calling people “shitholes,” “sluts,” and “bastards.”
The change could also mean posts about the “China virus,” a term frequently used by President-elect Donald Trump in relation to the coronavirus, would be allowed.
The co-chair of Meta’s oversight board has said the company’s moderation systems had become “too complex,” after the decision to eliminate fact-checkers was welcomed by the chief executive of Elon Musk’s X.
Helle Thorning-Schmidt, co-chair of Meta’s oversight board and a former Danish prime minister, agreed with outgoing global affairs president Nick Clegg that “Meta’s systems have become too complex,” adding that there had been too much “over-enforcement.”
On Tuesday, Mark Zuckerberg made the surprise announcement that the Facebook owner will stop using third-party checkers to flag misleading content, in favor of notes written by other users.
The 40-year-old billionaire said Meta will “get rid of fact-checkers and replace them with community notes, similar to X, starting in the US,” a move widely read as a signal to the incoming White House.
The announcement came shortly after the departure from Meta of Mr. Clegg, the former British deputy prime minister, who spent six years at the company and under whose leadership the Facebook Oversight Board was established to make decisions about the social network’s moderation policies.
Helle Thorning-Schmidt told the BBC the board welcomed Meta’s willingness to reconsider its approach to fact-checking and was itself examining the complexity of the company’s systems and the potential for over-enforcement.
In Mr. Clegg’s place, Joel Kaplan, who previously served as deputy chief of staff for policy under former president George W. Bush, will take over the leadership role. Thorning-Schmidt mentioned that Mr. Clegg had been discussing his departure for a while.
Linda Yaccarino, X’s chief executive, welcomed Meta’s policy change during an appearance at the CES technology show in Las Vegas, saying: “Welcome to the party.”
The shift will move the social network away from third-party checkers that flag misleading content in favor of user-based notes. This move has faced criticism from online safety advocates for potentially allowing misinformation and harmful content to spread.
Yaccarino praised Meta’s decision as “really exciting” during a Q&A session at CES.
Describing X’s community notes as a positive development, Yaccarino emphasized its effectiveness in unbiased fact-checking.
Yaccarino added: “It’s inspiring human behavior, because when a post gets noted, it gets dramatically less shared. That’s the power of community notes.”
Mr. Zuckerberg, sporting a rare Swiss watch valued at about $900,000, criticized Meta’s fact-checkers as “too politically biased,” while acknowledging that the change means the platform will catch less harmful content.
Hello. Welcome to TechScape. Happy new year! Here’s hoping dry January means fewer headaches. Today’s highlights include Meta’s promotion of a Trumpian bulldog, TikTok’s troubles beyond a possible ban, Meta’s backlash over AI profiles, and Elon Musk’s interventions overseas.
Former British Deputy Prime Minister Nick Clegg has resigned from Meta after six years as head of international affairs. He played a role in bridging technology and politics, earning approximately $19 million during his tenure.
Clegg, a centrist, may return to British politics following his party’s success in the last election. His departure marks a shift towards more partisan times at Meta under new appointee Joel Kaplan.
Meta’s approach to AI integration has faced criticism, with the company recently removing AI-powered profiles following negative feedback. Elon Musk’s political involvement extends to international affairs, with interests in Germany, France, and Canada.
TikTok faces a second battle in the US: child exploitation lawsuits
Photo: Mike Blake/Reuters
TikTok faces legal challenges in the US over child exploitation allegations, with multiple states suing the app. Concerns have been raised about misuse of its livestreaming feature to harm children.
Meta’s AI strategy has stirred controversy, particularly with its AI-generated profiles causing backlash. The company plans to introduce more AI characters despite previous issues.
Elon Musk intervenes overseas
Photo: Argi February Sugita/ZUMA Press Wire/REX/Shutterstock
Elon Musk’s political influence extends across multiple countries, including Germany, France, and Canada. His support of far-right parties and involvement in international affairs has raised concerns about interference in elections.
Musk’s recent actions suggest a deepening involvement in Canadian politics, aligning himself with conservative figures and advocating for specific political initiatives.
Meta has recently removed the Facebook and Instagram profiles of AI characters that were created over a year ago. This decision came after users rediscovered these profiles, joined conversations, and shared screenshots that went viral.
The company initially introduced these AI-powered profiles in September 2023 but retired most of them by the summer of 2024. However, following comments by Meta executive Connor Hayes, a few characters were kept and gained renewed interest. According to the Financial Times, Meta plans to roll out more AI character profiles soon.
Hayes stated, “We expect these AIs to eventually become permanent fixtures on our platform, similar to user accounts.” The AI profiles would post generated photos on Instagram and respond to messages from users on Messenger.
Conversations with Meta AI user-generated therapist chatbots. Photo: Instagram
The AI profiles included characters like Liv and Carter, who described themselves as a proud black queer mom and a dating expert, respectively. Despite being managed by Meta, these profiles interacted with users. In 2023, Meta released a total of 28 AI personas, all of which were deactivated last Friday.
Conversations with these characters took unexpected turns as users questioned the AI’s creators. In response to inquiries about the lack of diversity among the creator team, for example, Liv pointed out the absence of Black individuals. Shortly after these profiles gained attention, they started disappearing.
Instagram AI Studio for building chatbots. Photo: Instagram
Meta’s spokeswoman, Liz Sweeney, clarified that the accounts were part of an AI experiment conducted in 2023 and were managed by humans. After addressing a bug preventing users from blocking the accounts, Meta removed the profiles.
Regarding the recent confusion, Sweeney said the Financial Times article focused on Meta’s long-term vision for AI characters on its platforms, not the introduction of a new product, and that the AI accounts were part of an experiment announced at Meta’s Connect conference in 2023. Meta assured users it was working to resolve the blocking issue.
Although the Meta-created accounts have been taken down, users can still create their own AI chatbots. These user-generated chatbots cover various roles and themes, such as therapists, loyal confidants, tutors, and relationship coaches.
The liability of chatbot creators for the content generated by their AI companions remains unaddressed. While US law protects social network creators from user-generated content liability, a lawsuit against Character.ai suggests potential legal issues with AI chatbots.
During his time at Meta, the owner of Facebook, Instagram, and WhatsApp, Nick Clegg reportedly made around $19 million from sales of Meta shares. Filings show that before stepping down as president of global affairs and communications, Clegg had sold shares worth $18.4 million.
Although his total pay at Meta has not been disclosed, he still owns approximately 39,000 shares of the company, valued at around $21 million at current prices. He will be succeeded by his deputy, Joel Kaplan, known for his conservative views and his previous role in the George W. Bush administration.
Speculation surrounds Clegg’s next move after leaving Meta, with potential for a return to politics. He is considering opportunities in artificial intelligence, having criticized Rishi Sunak’s approach to AI regulation and aligning more with Tony Blair’s optimistic views on the technology’s potential.
Open to opportunities in both the public and private sectors, Clegg, who returned to London from California in 2022, intends to remain in Europe. His wife, Miriam, has her own political ambitions and recently established a think tank in Spain.
Knighted in 2018 for his public service, Clegg faced criticism for joining Facebook later that year. Despite the controversies over fake news and data protection that dogged the company, and despite his own earlier advocacy against Brexit, his tenure at Meta was seen as a success.
In his Facebook post, Clegg reflects on his time at Meta, expressing pride in his work and the innovative approach he brought to the role. Despite his past political achievements and setbacks, Clegg remains optimistic about the future.
Looking ahead, Clegg’s next steps are uncertain, with possibilities in various sectors on the horizon. His departure from Meta marks a new chapter in his career, leaving a legacy of experience and impact in the digital landscape.
Nick Clegg, former UK deputy prime minister and Meta’s outgoing president of global affairs, is leaving the company after six years.
“It truly was an adventure of a lifetime!” Clegg said in a post on Facebook. “I’m proud of the work I’ve done leading and supporting teams across the company to ensure innovation goes hand in hand with increased transparency and accountability, and new forms of governance.”
Clegg joined Facebook’s parent company in 2018 as vice-president of global affairs and communications. At the time, the company was under intense scrutiny over the Cambridge Analytica data scandal and its role in the 2016 US presidential election. He was promoted to president of global affairs in 2022 after helping establish the Facebook Oversight Board, an independent body that makes decisions about the social network’s moderation policies.
“My time at the company coincided with a major reset in the relationship between ‘big tech’ and the social pressures expressed in new laws, institutions and norms impacting the sector,” Clegg wrote. “I hope that I have played a role in bridging the disparate worlds of technology and politics, worlds that continue to interact in unpredictable ways around the world.”
Mr. Clegg will be replaced by Joel Kaplan, who is “clearly the right person for the right job at the right time,” Mr. Clegg wrote. Mr. Kaplan previously served as deputy chief of staff for policy under former president George W. Bush. He is known as the company’s most prominent conservative voice and rose through the ranks as Facebook weathered claims of liberal bias.
During his tenure, Kaplan pushed for a partnership with the fact-checking arm of the right-wing news site The Daily Caller, responding to Republican concerns about the company’s reliance on mainstream news outlets. Most recently, Kaplan was photographed alongside Vice President-elect J.D. Vance at the Time Person of the Year ceremony at the New York Stock Exchange.
The policy team reshuffle comes just weeks before President-elect Donald Trump’s 20 January inauguration. As Trump has moved in and out of office, tech companies including Meta have vacillated between enforcing moderation measures, such as banning Trump’s accounts, and reversing those decisions. Days after Trump’s election win, Meta donated $1 million to his inaugural fund, and CEO Mark Zuckerberg dined with Trump at Mar-a-Lago. This came after Trump threatened to punish Zuckerberg if Meta’s policies affected the election.
In response to Clegg’s Facebook post, Zuckerberg thanked the executive and said he was excited for Kaplan to take on the role “given his deep experience and insight” from many years leading the company’s policy efforts.
Zuckerberg responded to Clegg’s post, writing, “You have had a significant impact on advancing Meta’s voice and values around the world, and our vision for AI and the Metaverse.” “You have also built a strong team to advance this work.”
Meta has disclosed that it intervened this year to stop around 20 covert influence operations globally. However, the company said fears that AI-driven distortion would warp 2024’s elections have not materialized.
Nick Clegg, the president of international affairs at Meta, which oversees Facebook, Instagram, and WhatsApp, stated that Russia continues to be the main source of hostile online activity. He expressed surprise that AI has not been utilized to deceive voters during recent busy election periods globally.
The former British deputy prime minister said that in the month before the American election day, Meta, whose platforms have more than 3 billion users, rejected more than 500,000 requests to use its AI tools to create images of political figures including Donald Trump, Kamala Harris, J.D. Vance, and Joe Biden.
Roughly every three weeks, security experts at the company take down a new operation that uses fake accounts to steer public debate toward strategic goals. These include Russian networks targeting countries such as Georgia, Armenia, and Azerbaijan.
Another operation based in Russia uses AI to create fake news sites resembling well-known brands to weaken support for Ukraine and promote Russia’s role in Africa while criticizing African countries and France.
Mr. Clegg highlighted that Russia remains the most frequent source of covert influence operations disrupted, followed by Iran and China. He noted that the impact of AI-generated deceptive content from disinformation campaigns appears to be limited so far.
While the impact of AI manipulation on video, audio, and photos has been modest, Mr. Clegg warned that these tools are likely to become more pervasive in the future, potentially changing the landscape of online content.
In a recent evaluation, the Centre for Emerging Technology and Security concluded that AI-generated deceptive content shaped US election discourse in 2024, but that evidence of any impact on the election outcome is lacking. The report nonetheless warns that AI-enabled threats could do growing damage to democratic systems.
Sam Stockwell, a researcher at the Alan Turing Institute, highlighted how AI tools may have shaped election discourse and spread harmful content subtly, such as misleading claims and rumors that gained traction during recent elections.
Meta’s latest virtual reality headset, the Quest 3S, offers almost all the features of the company’s top model at a more affordable price of £290 (€330/$300/A$500). That makes it around 40% cheaper than the Quest 3 and even cheaper than 2020’s Quest 2.
Positioned between the Quest 2 and Quest 3, the Quest 3S utilizes the same high-performance Qualcomm VR chip found in the Quest 3 while maintaining a similar design and feel to the Quest 2 to keep costs down.
Well-designed straps, rotating arms, and a well-cushioned faceplate make it easy to get a comfortable fit. Photo: Samuel Gibbs/The Guardian
The Quest 3S has adjustable straps, rotatable arms, and a foam faceplate that make it one of the most comfortable headsets for extended wear. Additional straps and faceplates are available for users seeking a customized fit.
Featuring speakers in the arm, the Quest 3S provides decent spatial audio, but users can also opt to connect Bluetooth headphones or use a USB-C headphone adapter for wired audio.
The Quest 3S has the same screen and lenses as the Quest 2, delivering sharp images at up to 120 frames per second. However, its fresnel lenses and limited lens-spacing settings can lead to blurriness at the edges when looking around.
The headset comes with industry-leading hand controllers for precise and intuitive interactions. Photo: Samuel Gibbs/The Guardian
Two hand controllers, light and comfortable, feature capacitive buttons that respond to finger movements without accidental presses. Each controller uses a standard AA battery, with rechargeable options recommended for cost-effectiveness and sustainability.
The Quest 3S also includes spacers for glasses and offers prescription lenses for an additional cost.
Meta’s efforts to embed artificial intelligence systems in the UK public sector have advanced, with the tech giant awarding funding to develop technology aimed at reducing waiting times in NHS A&E departments.
Amid competing initiatives by Silicon Valley tech companies to collaborate with national and local governments, Meta hosted its first European “hackathon,” challenging more than 200 programmers to find ways to apply its Llama AI models in UK public services. A Meta executive said the event was “focused on Labour’s priorities.”
This development followed reports of another US tech company, Palantir, lobbying government figures including the Ministry of Justice and the chancellor, Rachel Reeves. Additionally, Microsoft recently sealed a five-year agreement with Whitehall departments to provide its AI Copilot technology to civil servants.
Meta’s hackathon featured Nick Clegg, the former deputy prime minister and Meta’s California-based president of global affairs. Feryal Clark, the UK’s AI minister, emphasized the potential for governments to adopt AI built on open-source models such as Meta’s to bolster their critical missions.
When questioned about the significance of Meta offering free technology, Clegg stated, “It will indirectly benefit us in the long run by fostering an ecosystem of Llama-based innovation, making it more likely for us to integrate innovation back into our products.” He also brushed off concerns regarding AI risks in public services.
Discussing potential regulation, Clark assured that Labour would address the substantial risks AI poses while supporting innovation and avoiding overburdening those working with the technology.
Peter Kyle, the secretary of state for science and technology, acknowledged that the UK government was being outspent by tech giants in innovation, highlighting the need for a national strategy in collaborating with such companies.
The push to promote Meta’s open-source AI platform in the public sector comes as concerns mount over the influence of tech giants, particularly following the involvement of Elon Musk’s X platform in the US presidential election and social media’s role in inciting the August riots in the UK.
In response to inquiries about Meta’s management of Facebook, Instagram, and WhatsApp, Clegg highlighted the contrast between Meta and X in how they handle content.
“We approach things very differently,” he remarked. “During the UK riots, individuals like Tommy Robinson and Andrew Tate, who caused significant issues, were long banned from our platforms. This contrasts with platforms like Telegram and X.”
Reports suggest that Meta, the owner of Facebook and Instagram, fired approximately 24 employees at its Los Angeles office for misusing $25 meal credits to purchase items such as toothpaste, laundry detergent, and wine glasses.
The tech giant, with a market capitalization of £1.2 trillion and ownership of WhatsApp, took action after an investigation revealed unauthorized food deliveries to employees’ homes. One employee allegedly fired was earning $400,000 and admitted to using meal credits for non-food items and groceries.
On Blind, an anonymous platform, the individual wrote about using meal credits only on days they did not eat at the office, leading to their termination upon admission during an HR probe. Some employees were also found to have used credits for personal items like acne pads, with consequences varying based on the severity of the violation.
Free meals have been a common perk at tech companies, including Meta, founded by Mark Zuckerberg, which offers free meals in large offices but provides daily food credits for smaller sites. These credits include $20 for breakfast, $25 for lunch, and $25 for dinner.
In 2022, Meta made changes to its Silicon Valley campus, delaying the free dinner service by 30 minutes to 6:30 p.m. as part of broader cutbacks. This decision sparked discontent among employees as fewer could dine on campus, affecting access to leftover food to take home.
Deepfake images of celebrities promoting fraudulent crypto investments are now less likely to be seen by Australians on Facebook after Meta launched a new platform for sharing fraud information with banks. In the first six months after launch, the platform led to the blocking of 9,000 scam pages and 8,000 AI-generated celebrity investment scams.
Between January and August 2024, Australians reported $43.4 million in losses to social media scams through Scamwatch, with almost $30 million related to fake investment scams.
Meta has been dealing with scams using deepfake images of celebrities such as David Koch, Gina Rinehart, Anthony Albanese, Larry Emdur, and Guy Sebastian. Politicians and regulators have pressured the company to address these scams, especially those facilitating investment fraud.
Mining tycoon Andrew Forrest is suing Meta for failing to address fraudulent activity using his image.
Meta has partnered with the Australian Financial Crime Exchange (AFCX) to launch the Fraud Information Exchange (Fire). This channel allows banks to report known fraud to Meta, enabling Meta to notify all banks involved in fraud discovered on its platform.
Seven banks, including ANZ, Bendigo Bank, CBA, HSBC, Macquarie, NAB, and Westpac, are participating in the Fire program. A separate program, using AFCX’s Intel Loop information-sharing service, includes telecommunications providers Optus, Pivotel, Telstra, and TPG, as well as the National Anti-Scams Centre.
Since the pilot launch in April, Meta has removed over 9,000 fraudulent pages and 8,000 AI-generated celebrity investment scams on Facebook and Instagram based on 102 reports received.
While the early results are promising, the number of Fire reports is low compared with the losses reported to Scamwatch, which recorded 1,600 reports of social media scam losses in August alone.
Meta reported removing 1.2 billion fake accounts worldwide in the last quarter, with 99.7% removed before user reports.
AFCX’s Rhonda Lau mentioned that the program aims to make Australia a less attractive target for fraudsters.
Meta’s David Agranovich stated that the system will help detect fraud outside the platform, connecting the dots between fraudulent activities on Facebook and Instagram.
Meta provides the list of blocked domains to its partners and will give Fire participants access to its Threat Exchange system, which is used to detect criminal activity such as covert influence operations and child abuse on the platform.
Mr. Agranovich acknowledged the frustration Australians may face in reporting fraud to Meta and mentioned plans for improvement.
Both the Commonwealth Bank and ANZ welcomed the collaboration with Meta. Deputy Treasurer Stephen Jones introduced a draft bill to combat fraud and provide a proper dispute resolution process for fraud victims, with consultations ending on 4th October.
Meta’s oversight board has decided that a blanket ban on pro-Palestinian slogans would hinder freedom of speech, supporting the company’s choice to allow posts on Facebook that include the phrase “from the river to the sea.”
The oversight board examined three Facebook posts featuring the phrase “from the river to the sea” and determined that they did not break Meta’s rules against hate speech or incitement. It argued that a universal ban on the phrase would suppress political speech in an unacceptable manner.
In a decision endorsed by 21 members, the board upheld Meta’s original decision to keep the content on Facebook, stating that it expressed solidarity with the Palestinian people and did not promote violence or exclusion.
The board, whose content rulings are binding on Meta, noted that the phrase has various interpretations and can be used with different intentions: while it could be read as promoting antisemitism and the rejection of Israel, it could also be a show of support for the Palestinians.
The majority of the board stated that use of the phrase by Hamas, which is banned from Meta’s platforms and designated a terrorist organization by the UK and the US, does not automatically make the phrase violent or hateful.
However, a minority within the board argued that because the phrase appears in Hamas’s 2017 charter, its use could be construed as praising the banned group, particularly following the Hamas attack. The phrase “From the river to the sea, Palestine will be free” refers to the territory between the Jordan River and the Mediterranean Sea.
Opponents of the slogan claim it advocates for the elimination of Israel, while proponents like Palestinian-American author Yousef Munayyer argue it supports the idea of Palestinians living freely and equally in their homeland.
The ruling pointed out that due to the phrase’s multiple meanings, enforcing a blanket ban, removal of content, or using the phrase as a basis for review would impinge on protected political speech.
In one of the cases, a user responded to a video with the hashtag “FromTheRiverToTheSea,” which garnered 3,000 views. In another case, the phrase “Palestine will be free” was paired with an image of a floating watermelon slice, viewed 8 million times.
The third case involved a post by a Canadian community organization condemning “Zionist Israeli occupiers,” but had fewer than 1,000 views.
A spokesperson for Meta, which also owns Instagram and Threads, remarked: “We appreciate the oversight board’s evaluation of our policies. While our guidelines prioritize safety, we acknowledge the global complexities at play and regularly seek counsel from external experts, including our oversight board.”
Russia has been attempting to use generative artificial intelligence in its online deception campaigns, but according to a Meta security report published on Thursday, these efforts have not been successful.
Meta, the parent company of Facebook and Instagram, found that AI-powered tactics have brought malicious actors only incremental gains in productivity and content generation, and said it had been able to disrupt the deceptive influence campaigns.
Meta’s actions against coordinated inauthentic behavior on its platforms come in response to concerns that generative AI could be employed to mislead or confuse people during elections in the US and other nations.
David Agranovich, Meta’s director of security policy, told reporters that Russia continues to be the primary source of the “coordinated inauthentic behavior” carried out through fake Facebook and Instagram accounts.
Since the 2022 invasion of Ukraine by Russia, these efforts have been aimed at weakening Ukraine and its allies, as outlined in the report.
With the upcoming U.S. election, Meta anticipates Russian-backed online fraud campaigns targeting political candidates who support Ukraine.
Facebook has faced accusations of being a platform for election disinformation, while Russian operatives have utilized it and other U.S.-based social media platforms to fuel political tensions during various U.S. elections, including the 2016 election won by Donald Trump.
Experts worry that generative AI tools such as ChatGPT and the DALL-E image generator can rapidly produce content on demand, allowing malicious actors to flood social networks with disinformation.
The report notes the use of AI to produce images and videos, translate and generate text, and craft fake news articles and summaries.
When Meta investigates fraudulent activity, the focus is on account behavior rather than posted content.
Influence campaigns span across various online platforms, with Meta observing that X (formerly Twitter) posts are used to lend credibility to fabricated content. Meta shared its findings with X and other internet companies, emphasizing the need for a coordinated defense against misinformation.
Asked how Meta views X’s handling of its scam reports, Agranovich said: “With regards to Twitter (X), we’re still in the process of transitioning. Many people we’ve dealt with there in the past have moved on.”
X has disbanded its trust and safety team and reduced content moderation efforts previously used to combat misinformation, making it a breeding ground for disinformation according to researchers.
Meta has announced that its new artificial intelligence model is the first open-source system that can compete with major players like OpenAI and Anthropic.
The company said in a blog post that its latest model, named “Llama 3.1 405B,” performs competitively with rival systems across a range of tasks. The advancement could make one of the most powerful AI models accessible without any intermediary controlling access or usage.
Meta stated, “Developers have the freedom to customize the models according to their requirements, train them on new data sets, and fine-tune them further. This empowers developers worldwide to harness the capabilities of generative AI without sharing any data with Meta, and run their applications in any environment.”
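To make concrete what running Llama without an intermediary looks like in practice, here is a minimal sketch using the Hugging Face transformers library. None of this comes from Meta’s announcement: the repository ID and the choice of the much smaller 8B variant (so the example fits on a single machine) are illustrative assumptions, and gated Llama repositories require accepting Meta’s license on Hugging Face first.

```python
# Minimal sketch: run an open-weight Llama checkpoint locally.
# Assumes `pip install transformers accelerate torch` and that you have
# accepted Meta's license for the example repo below (an assumption here,
# standing in for the 405B flagship discussed in the article).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick an appropriate precision automatically
    device_map="auto",    # spread the weights across available GPUs/CPU
)

prompt = "Explain in one sentence why open-weight models matter."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights are downloaded and run locally, no prompt or fine-tuning data passes through Meta, which is the substance of the company’s “without sharing any data with Meta” claim.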
Because the system is open source, users of Llama, including those on Meta’s app in the US, are not locked into any one company’s terms for access or use.
Meta co-founder Mark Zuckerberg emphasized the importance of open source for the future of AI, highlighting its potential to enhance productivity, creativity, and quality of life while ensuring technology is deployed safely and evenly across society.
While Meta’s model matches the size of competing systems, its true effectiveness will be determined through fair testing against other models like GPT-4o.
Currently, Llama 3.1 405B is only accessible to users in 22 countries, excluding the EU. However, it is expected that the open-source system will expand to other regions soon.
Mark Zuckerberg’s Meta announced that it would not release an advanced version of its artificial intelligence model in the EU, citing “unpredictable” behavior of regulators.
The owner of Facebook, Instagram and WhatsApp is preparing to make the Llama model available in a multimodal format, meaning it can work with text, video, images and audio rather than a single format. Llama is an open-source model, meaning users can freely download and adapt it.
But a Meta spokesperson confirmed that the model would not be available in the EU, a decision that highlights tensions between big tech companies and Brussels amid an increasingly tough regulatory environment.
“We plan to release a multi-modal Llama model in the coming months, but it will not be released in the EU due to the unpredictable regulatory environment there,” the spokesperson said.
Brussels is introducing an EU AI law which comes into force next month, while new regulatory requirements for big tech companies are being introduced in the form of the Digital Markets Act (DMA).
Meta’s decision regarding its multimodal Llama model also has implications for its compliance with the General Data Protection Regulation (GDPR): Meta was ordered to stop training its AI models on posts from Facebook and Instagram users in the EU over potential violations of privacy regulations.
The Irish Data Protection Commission, which oversees Meta’s compliance with GDPR, said it was in discussions with the company about training its models.
However, Meta is concerned that other EU data watchdogs could step into the regulatory process and halt its approval. A text-based version of Llama is available in the EU, and a new text-only version is due to be released there soon, but these models have not been trained on EU Meta user data.
The move comes after Apple announced last month that it would not roll out some new AI features in the EU due to concerns about compliance with the DMA.
Meta had planned to use the multimodal Llama model in products such as its Ray-Ban smart glasses and smartphones. The decision was first reported by Axios.
Meta also announced on Wednesday that it had suspended its generative AI tools in Brazil after the Brazilian government raised privacy concerns about the use of user data to train models. The company said it would pause the tools while it consults with Brazil’s data protection authorities.
Meta has lifted previous restrictions on Donald Trump’s Facebook and Instagram accounts as the 2024 presidential election approaches, the company announced on Friday.
After being banned over his online behavior during the January 6 Capitol riot, Trump was allowed to return to the platforms in 2023 with “guardrails” in place. Those guardrails have now been removed.
“In assessing our responsibility to allow political expression, we believe the American people should be able to hear from the nominees for president on the same basis,” Meta said in a blog post, alluding to Trump formally becoming the Republican nominee at the party’s national convention scheduled for next week.
As a result, Mr. Trump’s accounts will no longer be subject to the heightened suspension penalties, which the company said were instituted in response to “extreme and extraordinary circumstances” and are no longer necessary.
“All US presidential candidates are required to follow the same community standards as all Facebook and Instagram users, including policies to prevent hate speech and incitement to violence,” the company said in a blog post.
Since returning to Meta’s social networks, Trump has mainly used his accounts to share campaign information, attacks on Democratic candidate Joe Biden, and memes.
Critics of Trump and online safety advocates have expressed concern that his return could fuel a rise in misinformation and incitement to violence like that seen during the storming of the Capitol, which prompted his ban in the first place.
The Biden campaign condemned Meta’s decision in a statement on Friday, calling it a “greedy and reckless decision” that amounts to “a direct attack on our security and democracy.”
“Restoring his access is like giving car keys to someone you know is going to drive the car into a crowd and off a cliff,” campaign spokesman Charles Kretchmer Lutvak said. “It’s like giving a megaphone to a racist who is going to shout hatred and white supremacy from the rooftops and make it mainstream.”
Beyond Meta’s platforms, other major social media companies, including Twitter (now X), Snapchat and YouTube, also banned Trump’s accounts over his online activity surrounding the January 6 attack.
The former president was allowed to return to X last year following a decision by Elon Musk, who bought the company in 2022, but has yet to tweet.
Trump was allowed back on YouTube in March 2023. He remains banned from Snapchat.
Trump launched his own social network, Truth Social, in early 2022.
Meta maintains its stance against paying media companies for news in Australia, arguing that it does not address the issue of misinformation and disinformation on Facebook and Instagram.
In March, Meta announced that it would not enter new agreements to pay media organizations for news once contracts signed in 2021 under the Morrison government’s media bargaining code expire.
Assistant Treasurer Stephen Jones is exploring the possibility of the Albanese government using powers under the news media bargaining code to “designate” Meta under the code. If designated, the tech company would be compelled to negotiate payments with news providers or face a fine of 10% of its Australian revenue.
The Treasury is also exploring other options, such as requiring the company to carry news or using taxation to influence its behavior. The government is concerned that designating Meta under the code could prompt the company to ban news in Australia, as it has done in Canada since August last year.
Experts in Canada have noted that where news content has disappeared, it has been replaced by misleading viral content.
In a submission to a federal parliamentary inquiry on social media and Australian society, Meta said it was “unaware of any evidence” supporting claims that misinformation has increased on its Canadian platforms since the news ban, and that it has never viewed news as a tool to combat misinformation and disinformation on its platforms.
“We are committed to removing harmful misinformation and reducing the distribution of fact-checked misinformation, regardless of whether it is news content. By addressing this harmful content, we aim to maintain the integrity of information on our platform,” stated the submission.
“Canadians can still access trusted information from various sources using our services, including government agencies, political parties, and non-government organizations, which have always shared engaging information with their audiences, along with news content links.”
According to the European Commission, Meta, led by Mark Zuckerberg, has breached the EU’s new digital law with its advertising strategy. This model involved charging users for access to ad-free versions of Facebook and Instagram.
Last year, Meta introduced a “pay or consent” system to comply with EU data privacy regulations. Under this model, users could pay a monthly fee to use Facebook and Instagram without ads and without their personal data being used for advertising. Non-paying users agree during sign-up to have their data used for personalized ads.
The European Commission, the executive body of the EU, said the model does not comply with the Digital Markets Act (DMA), the law created to rein in big tech companies. The Commission’s preliminary findings in its “pay or consent” investigation are that the model coerces users into consenting to data collection across Meta’s platforms, without offering them a service that uses less of their data but is otherwise equivalent to the ad-supported versions of Facebook and Instagram.
The Commission said the paid alternative does not amount to a comparable, less personalized version of Meta’s networks, effectively forcing users to agree to having their data combined. To comply with the DMA, Meta would need to offer a version of Facebook or Instagram that uses less user data.
In response, a Meta spokesperson mentioned that the new model was designed to adhere to regulatory requirements such as the DMA. They highlighted that subscriptions as an alternative to advertising are a common business model and were implemented to address various obligations.
The European Commission is required to complete its investigation by the end of March next year. Meta may face fines of up to 10% of its global turnover, amounting to $13.5 billion (£10.5 billion). The Commission recently found Apple guilty of violating the DMA by impeding competition in its app store.
A former Meta engineer sued the company on Tuesday, accusing it of discriminatory practices in its handling of content related to the Gaza war and claiming he was fired for trying to fix a bug that was throttling Palestinian Instagram posts.
Feras Hamad, a Palestinian-American engineer on Meta’s machine learning team since 2021, sued the social media giant in California, alleging discrimination and wrongful termination over his firing in February.
Hamad accused Meta of bias against Palestinians, citing the removal of internal communications mentioning deaths of Gaza Strip relatives and investigations into the use of a Palestinian flag emoji.
The lawsuit alleged the company did not investigate employees posting Israeli or Ukrainian flag emojis in similar situations. Meta did not immediately respond to the allegations.
These allegations align with ongoing criticism from human rights groups about Meta’s moderation of Israel-Palestine content on its platform, including an external review in 2021.
Since the outbreak of the conflict last year, Meta has faced accusations of suppressing expressions of support for Palestinians. The war in Gaza erupted in October after Hamas’s attacks, producing mass casualties and a humanitarian crisis.
Earlier this year, about 200 Meta employees raised similar concerns in a letter to CEO Mark Zuckerberg and other leaders.
Hamad’s firing seems linked to a December incident involving a troubleshooting procedure at Meta. He raised concerns about restrictions affecting Palestinian content on Instagram.
The lawsuit mentioned a case where a video by a Palestinian photojournalist was wrongly classified as explicit, sparking further issues.
The European Union delivered a blunt message to Facebook’s Silicon Valley owner on Tuesday amid concerns that Vladimir Putin is attempting to fill the European Parliament with pro-Russian lawmakers.
Meta has a deadline of five days to outline its plan to tackle fake news, fake websites, and Kremlin-funded advertisements, or face serious consequences.
The EU is worried about Facebook’s handling of fake news just 40 days before the European Parliament elections, in a year when much of the world is voting.
Thierry Breton, the Internal Market Commissioner, emphasized that electoral integrity is a top priority and warned of swift action if Facebook does not address the issues within a week.
He stated, “We expect Meta to inform us within five working days of the measures they are taking to mitigate these risks, or we will take all necessary steps to safeguard our democracy.”
The commission has initiated formal proceedings against Meta ahead of the elections taking place across Europe from June 6 to 9.
There are concerns that Russia might exploit Facebook, with its over 250 million monthly active users, to influence the election outcome in its favor.
Belgian Prime Minister Alexander de Croo suggested that Russia’s aim to support pro-Russian candidates in the European Parliament was evident through alleged payments to parliamentarians.
While specific examples were not provided, concerns include foreign-funded advertisements on Facebook.
“They are mistaken if they think they are not profiting from this,” one official said of the foreign-funded ads.
Officials also say there is insufficient transparency around Meta’s tools for identifying illegal or questionable content.
The EU has highlighted delays in removing links to fake news platforms, known as “doppelganger sites”.
Last week, a Czech news agency’s website was hacked to display fake news, including a false claim about an assassination attempt on the Slovak president.
French Europe minister Jean-Noël Barrot raised concerns about Russian propaganda targeting France to disrupt public debate and interfere in the European election campaign.
Another issue is Meta’s practice of quietly limiting the reach of user posts on sensitive topics such as the Middle East.
This practice, known as “shadowbanning”, has raised transparency concerns, and the EU is urging Facebook to clarify how it makes these decisions.
The official added, “Users must be informed when this occurs and have the opportunity to challenge it, or it could lead to controversy.”
There are also worries that Facebook might discontinue CrowdTangle, a service that assists in monitoring disinformation for fact checkers, journalists, and researchers.
The case against Facebook on Tuesday marks the sixth by the European Commission since the Digital Services Act (DSA) came into effect.
However, many question whether these actions are sufficient to combat misinformation. NATO officials have compared disinformation to a weapon as potent as physical warfare during a panel in Brussels.
Officials acknowledge that Facebook is not sitting idle, but argue its existing measures are inadequate, opaque, and not effective enough.
Under the new DSA laws implemented in August, the EU has the authority to levy fines up to 6% of social media companies’ revenue or bar them from operating in the union.
Facebook responded: “We have a well-established process for identifying and mitigating risks on our platform. We look forward to continuing our cooperation with the European Commission and providing them with further details of this work.”
A recent report accuses Meta and Google of obstructing information about abortion and reproductive health in Africa, Latin America, and Asia.
MSI Reproductive Choices and the Center for Countering Digital Hate claim that while these platforms restrict advertising from local abortion providers, they fail to curb anti-abortion misinformation, limiting public access to accurate reproductive healthcare information while damaging falsehoods spread.
Meta has agreed to review the findings of the report.
MSI, operating in 37 countries, has had ads containing sexual health information rejected or removed by the platforms.
MSI Ghana and Vietnam reported that their ads promoting reproductive health content were removed or flagged as violating community guidelines.
Whitney Chinogwenya, Global Marketing Manager at MSI, expressed concerns about the censorship of reproductive health content on social media platforms like Facebook, which many women rely on for information.
MSI Mexico faced removal of a Facebook post promoting legal abortion services despite the recent decriminalization of abortion in some states.
The report highlighted Meta’s inaction against anti-abortion misinformation and misleading content about abortion procedures.
The report also revealed fake MSI pages on Facebook that exploit the organization’s reputation for various malicious purposes.
MSI clinics in Ghana were targeted by disinformation campaigns on messaging platforms.
MSI Ghana Advocacy stresses the importance of fact-checking systems on digital platforms to promote accurate reproductive health information.
The report, compiled from interviews and evidence from MSI teams in several countries, aims to raise awareness among digital platforms about their responsibilities.
Meta and Google responded to the report’s allegations, with Meta emphasizing its policies against false information and Google denying any inconsistent enforcement on its platforms.
Both companies stated their commitment to ensuring accurate and compliant advertising on their platforms.
Meta’s recent changes on Instagram mean that users will now see less political content in their recommendations and feeds unless they opt in to it. The adjustment, announced on February 9, requires users to specifically enable political content in their settings.
Users noticed this change in recent days, and it has been fully implemented within the last week. According to the app’s version history, the most recent update before this was a week ago.
The change affects how Instagram recommends content in Explore, Reels, and the main feed. It does not affect political content from accounts users already follow.
Instagram defines political content as related to legal, electoral, or social topics. This change also applies to Threads, and users can dispute recommendations if they feel unfairly targeted.
Meta’s aim in making this adjustment is to enhance the overall user experience on Instagram and Threads. They want users to have control over the political content they consume without actively promoting it.
For more information, Meta’s spokesperson directed users to a February blog post. Similar changes will be rolled out on Facebook in the future.
Amid recent controversies, including accusations of censorship during the Israel-Gaza conflict and of polarization driven by Facebook’s algorithms, Meta continues to work on separating political and news content from its platforms.
Although past studies suggest that algorithm changes may not alter political perceptions, Meta’s efforts to distance itself from politics and news continue. This includes phasing out the News tab on Facebook in anticipation of potential conflicts with news publishers and governments.
In ongoing discussions with the Australian government, Meta also faces possible designation under the 2021 news media bargaining code, which could expose it to fines and lost revenue.
Meta maintains that news content makes up less than 3% of user engagement on Facebook. The company remains committed to evolving its platforms in response to user preferences and societal concerns.
Facebook and Instagram are currently experiencing significant issues as of Tuesday afternoon in the UK, with users unable to log in and feeds not updating. The problem was first noticed around 3:30pm GMT.
Interestingly, Google faced login problems at the same time, suggesting a potential common cause for outages at two major tech companies that each manage their own infrastructure.
Meta’s status page highlighted various disruptions, including a major issue with groups’ admin center and Facebook Login, a service that enables users to sign in to third-party platforms using their Facebook credentials, causing outages on other websites.
By 4pm GMT, Meta had updated its status page to show an “unknown” status for most services except the Messenger API for Instagram, while services such as WhatsApp and the Facebook Ads Transparency page were still operational. Meta’s status page itself then stopped working at 4:15pm.
In a tweet, Facebook spokesperson Andy Stone acknowledged the ongoing issues and stated that they were working to resolve them.
Google’s ad status page confirmed an outage in its Ad Manager at 3:30pm GMT and mentioned investigating other reported issues. However, Google’s consumer services like search and YouTube were largely unaffected, although login problems did impact some corporate clients, such as the Guardian newspaper.
Systemic internet issues appear to be the underlying cause, with users of various platforms like X and Microsoft’s Teams also facing sporadic difficulties.
This is the first major Facebook outage since 2021, when a configuration error in the BGP protocol inadvertently removed the addresses of Facebook’s servers from the system that routes communication between servers on the internet. Although the error was discovered swiftly, the fix took several hours to implement, compounded by engineers’ lack of remote access to the affected systems.
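For context on why that 2021 failure was so total: once the BGP routes were withdrawn, Facebook’s authoritative DNS servers were unreachable, so even looking up facebook.com failed. The probe below is purely illustrative, not anything Meta runs; it is the sort of crude external check that would have surfaced the failure.

```python
# Illustrative external health probe: if a site's DNS cannot be resolved,
# as happened during Facebook's 2021 BGP incident, this fails loudly.
import socket

for host in ("facebook.com", "instagram.com"):
    try:
        print(f"{host} -> {socket.gethostbyname(host)}")
    except socket.gaierror as err:
        print(f"{host} -> resolution failed: {err}")
```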
Meta is working to identify and label AI-generated images on Facebook, Instagram, and Threads, and to expose “people and organizations that actively seek to deceive the public.”
Images created using Meta’s own AI tools are already labeled as AI-generated, but Nick Clegg, the company’s president of global affairs, said in a blog post on Tuesday that images produced by rival companies’ AI services will also start being labeled.
Meta’s AI images already carry metadata and an invisible watermark indicating that the image was created by AI, and the company is working with Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock on common standards for identifying AI-generated images, according to Clegg.
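The article does not specify which metadata marker Meta and its partners settled on, but the IPTC “digital source type” value trainedAlgorithmicMedia is one real-world standard used to flag synthetic images. As a hedged sketch, here is how one might check an image’s XMP metadata for it with Pillow; invisible watermarks cannot be read this way, and metadata can be stripped, which is why (as described below) Meta is also developing detection that works without such markers.

```python
# Sketch: look for the IPTC "trainedAlgorithmicMedia" digital-source-type
# marker in an image's XMP metadata. Requires Pillow plus defusedxml
# (used by Pillow to parse XMP safely). This only finds cooperative
# labeling; stripped metadata or watermark-only images won't be caught.
from PIL import Image

def has_ai_metadata(path: str) -> bool:
    with Image.open(path) as im:
        xmp = im.getxmp()  # returns {} when the file carries no XMP block
    return "trainedAlgorithmicMedia" in str(xmp)

if __name__ == "__main__":
    print(has_ai_metadata("example.jpg"))
```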
Clegg said, “As the line between human content and synthetic content becomes blurred, people want to know where the line is.”
He added: “As people encounter AI-generated content for the first time, they have told us they appreciate transparency around this new technology. It’s important to let people know when something was created using AI.”
A surfing llama or an AI? Image labels for AI-generated content on Facebook.
Clegg mentioned that the labeling feature is being developed and will be rolled out to all languages in the coming months.
He also stated that the company will add more prominent labels on images, videos, or audio that are “digitally created or altered” and “have a particularly high risk of materially misleading the public.”
Additionally, the company is working to develop technology to automatically detect AI-generated content, even when the content lacks invisible markers or has been removed.
“This work is particularly important because the online space is likely to become increasingly hostile in the coming years,” Mr Clegg said.
He concluded, “People and organizations actively trying to deceive people with AI-generated content will find ways to circumvent the safeguards in place to detect it. Our industry and society as a whole must continue to find ways to stay ahead of the curve.”
AI deepfakes have already become an issue in the US presidential election cycle, with AI-generated robocalls used to discourage voters in the New Hampshire Democratic primary.
Australia’s Nine News also faced criticism for altering an image of Victorian Animal Justice Party MP Georgie Purcell broadcast on its evening news, exposing her midriff and altering her chest; the broadcaster blamed Adobe’s AI image tools.
According to a whistleblower, Mark Zuckerberg’s Meta has not done enough to protect children since Molly Russell’s death. The whistleblower claims the company still exposes teenagers to risk, and that Zuckerberg already has the infrastructure in place to protect them from such content.
Arturo Bejar, a former senior engineer at the owner of Instagram and Facebook, voiced concern that the company had not learned from Molly’s death and could have provided a safer experience for young users. Bejar’s survey of Instagram users revealed that 8.4% of 13- to 15-year-olds had seen someone harm themselves or threaten to harm themselves within the past week.
Bejar stressed that if the company had taken the right steps after Molly Russell’s death, far fewer people would be encountering self-harm content. Russell took her own life after viewing harmful content related to suicide, self-harm, depression, and anxiety on Instagram and Pinterest. Bejar believes the company could have made Instagram safer for teens but chose not to make the necessary changes.
Former Meta employees have also asked the company to set goals for reducing harmful content and to create sustainable incentives to work on these issues. Meanwhile, Bejar has met with British politicians, regulators, and activists, including Ian Russell, Molly’s father.
Bejar has suggested a series of changes for Meta, including making it easier for users to flag unwanted content, surveying users’ experiences regularly, and facilitating the reporting of negative experiences with Meta’s services.
According to an internal document released late Wednesday, Meta estimates that about 100,000 children on Facebook and Instagram are subjected to online sexual harassment every day, including “pictures of adult genitalia.” The unsealed legal filings include several allegations against Meta, based on information the New Mexico Attorney General’s Office learned from presentations and communications between Meta employees. These allegations describe an incident in 2020 in which the 12-year-old daughter of an Apple executive was solicited via Instagram’s messaging product, IG Direct.
In testimony before the US Congress late last year, a senior Meta employee described how his daughter was solicited through Instagram; his efforts to resolve the issue internally were ignored, he said. The filing is the latest development in a lawsuit filed by the New Mexico attorney general’s office on December 5, alleging that Meta’s social networks have become a marketplace for child predators. The state’s attorney general, Raul Torrez, accused Meta of allowing adults to find, message, and groom children. In response to Wednesday’s filing, Meta said: “We want to provide teens with safe and age-appropriate online experiences, and we have over 30 tools to support them and their parents.”
The lawsuit also references a 2021 internal presentation on child safety in which Meta acknowledges it has “underinvested” in addressing the sexualization of minors on Instagram, noting significant sexualized commentary on content posted by minors. The complaint also highlights Meta employees’ concerns about the safety of children on the platform. Meta’s statement added that the company has taken significant steps to prevent teens from experiencing unwanted contact, especially from adults.
The New Mexico lawsuit follows a Guardian investigation in April that revealed how Meta fails to report or detect the use of its platforms for child trafficking. According to documents included in the lawsuit, Meta employees warned that the platform was used to “coordinate human trafficking operations” and that “every step of human exploitation (recruitment, conditioning, and exploitation)” was represented on the platform. Yet an internal email from 2017 showed executives opposed scanning Facebook Messenger for “harmful content,” citing the desire to position the service as offering “more privacy.” In December, Meta drew widespread criticism for introducing end-to-end encryption for messages sent via Facebook and Messenger.
It has been about two years since Sheryl Sandberg stepped down as chief operating officer of Facebook’s parent company, Meta, and now she is leaving its board as well.
As Meta’s chief operating officer, Sandberg was the lead architect of Facebook’s digital advertising-driven business model.
The 54-year-old, who announced her departure from the COO role in June 2022, will now also step down from the Meta board when her term ends in May.
“The Meta business is strong and well-positioned for the future, so this feels like the right time to step away,” Sandberg said in a Facebook post, adding that she would serve as an adviser to the company after leaving the board.
Sandberg joined Facebook from Google in 2008 and stepped down as Meta’s head of operations in 2022, after 14 years in the role.
Responding to Sandberg’s post, Meta CEO and founder Mark Zuckerberg said he looked forward to “a new chapter together.”
Sandberg, once Zuckerberg’s second-in-command, was one of the company’s most visible executives.
While serving as chief operating officer of Zuckerberg’s social media empire, she oversaw Facebook’s massive growth and faced a number of controversies, including the Cambridge Analytica scandal, the use of the platform in organizing the 2021 Capitol riot, and continued concerns about the mining of user data to power its advertising business.
Prior to joining Facebook, Sandberg was vice president of global online sales and operations at Google, and before that served as chief of staff to the US Treasury secretary during the Clinton administration.
Sandberg, a Harvard graduate, is the author of several books, including the 2013 feminist manifesto “Lean In: Women, Work, and the Will to Lead.”
This April marked 10 years since Google released the first generation of Glass. It may be hard to believe a decade later, but the limited-release Explorer Edition was a coveted item. The glasses felt like the future, at least for a while. But the past decade has been a very mixed bag for smart glasses. There have been more misses than hits, and it seems it will be years before we reach any kind of consensus on form or function. Google Glass never reached the critical mass needed for a commercial product, but the company seems content to try again every few years.
Meanwhile, AR’s success has been largely confined to smartphone screens, though not for lack of trying. Magic Leap, Microsoft, and Meta have all introduced AR products to varying degrees of success, and next year’s Apple Vision Pro release is sure to shake things up. However, technical limitations confine these solutions to significantly larger form factors. Shrinking this kind of technology down to the size of regular glasses is a worthy goal, but it’s still a long way off.
It’s telling that Meta released two head-mounted devices at its recent hardware event. The first was the Quest 3, a VR headset that offers an AR experience thanks to pass-through technology. The other, the Ray-Ban Meta, makes no pretense of offering augmented reality, but fits nicely into a standard glasses form factor. Like Snapchat Spectacles before it, the Ray-Ban Meta is all about capturing content: a camera built into the frame lets wearers shoot quick videos for social media or livestream, while speakers built into the temples direct music and podcast audio into the wearer’s ears.
Unlike the Ray-Ban, however, Amazon’s Echo Frames 3 don’t do video capture (you can almost hear the collective sigh of relief from privacy advocates around the world), though they offer a similar audio setup. The speakers sit just in front of the temple tips. The company didn’t opt for bone conduction here, which is probably for the best (though neat, the technology generally earns a passing grade at best). Unlike most headphones and earbuds, the speakers don’t cover the entrance to your ear canal, which is great for situational awareness but not so great for immersive sound.
They’re not a bad option if you want to stay aware of the world around you while walking down the street or riding a bike while listening to music. They’re fairly loud when positioned close to your ear, and their directional nature means they’re hard for others to hear when you’re wearing them (though not completely silent). On the other hand, the actual audio quality still leaves a lot of room for improvement. They can do in a pinch for music, but I’d rather not rely on them as a daily driver.
As the name suggests, though, the real highlight here is the Echo feature: the Frames are yet another form factor for invoking Alexa. This makes a lot of sense at first glance, being a hands-free voice assistant you can take anywhere, as long as your phone is properly connected. You can play and pause music, make calls, and set reminders, all things you could also do with earbuds and a connected voice assistant. There are five styles: black square, black rectangle, blue circle, brown cat’s eye, and gray rectangle. The pair Amazon first sent me resembled typical Buddy Holly/Elvis Costello glasses, but with electronics inside, a plasticky design, and large temples.
They fit me well enough, and while they’re not something I would choose over, say, a Warby Parker pair, I don’t feel embarrassed wearing them in public. You can further customize the frames with prescription lenses, blue-light filtering, or sunglasses, all great options. Battery life is listed at 14 hours with “moderate” usage, so one charge should get you through the day with a standard amount of music listening. That matters, because the charging dock is larger and more unwieldy than the glasses themselves. Charging instructions are included in the package (along with some short Braille instructions, a nice accessibility consideration); they’re necessary, because the design is not intuitive.
When the glasses are folded with the lenses facing up, the charging points on the temples make contact with the charger. It’s a far cry from the Ray-Ban Meta’s extremely convenient, well-designed charging case. Amazon’s case, on the other hand, is collapsible; it’s not great, but it does fold flat while you’re wearing your glasses. If I hadn’t recently tested the Ray-Ban Meta, my thoughts on the latest Echo Frames might have been different. The price is $270, which is $30 cheaper than Meta’s glasses, but if you’re deciding between the two, I think you should take the plunge and spend the extra $30. Of course, it’s also worth noting that as of this writing, Amazon is offering the new Echo Frames at a heavily discounted $200.
Meta has threatened to delete sensitive data from test accounts cited in the bombshell New Mexico lawsuit alleging that underage Facebook and Instagram users are exposed to child predators, according to new court filings.
New Mexico Attorney General Raul Torrez said in a Monday filing that Meta had “deactivated” several test accounts used by law enforcement to investigate the popular app.
According to the filing, Torrez is seeking a court order restraining Meta from deleting “any information related to the accounts referenced in the complaint or any information related to any account on which Meta has taken action based on the information in the complaint.”
“The state filed this motion seeking an order requiring Meta to comply with its data retention obligations under New Mexico law,” the filing states.
The attorneys also cited New Mexico court precedent against destroying relevant evidence.
According to the bombshell lawsuit filed last week, the test accounts used AI-generated photos depicting children under the age of 14 and were bombarded with disturbing messages from alleged child predators, including adult sexual content, “genital photos and videos,” and offers to pay the supposed children to appear in porn videos.
Meta subsequently disabled these accounts, allegedly hindering the ongoing investigation by denying authorities access to critical information, “including the usernames of accounts with which investigators interacted, as well as search history and other information about those accounts.”
It is unclear whether Meta has shut down the Facebook and Instagram accounts of the alleged child offenders.
“Of course, we store data in accordance with our legal obligations,” a Meta spokesperson said.
Torrez’s office did not comment on Monday’s filing.
One New Mexico test account, called “Issa Bee” and purporting to be a 13-year-old girl living in Albuquerque, had more than 6,700 followers on Facebook, most of whom, the state claimed, were males between the ages of 18 and 40.
The account received several disturbing sexual offers, including one from an adult user who allegedly “openly promised $5,000 a week” for the supposed teen to be his “sugar baby.”
According to the state, Meta gave notice on December 7, days after the lawsuit was filed, that it would disable the test accounts.
The social media giant took this action “even though the account in question had been operating for several months without any action by Meta, and even though law enforcement had previously reported unlawful content to Meta through its reporting channels,” the filing states.
When the investigator tried to log in, he received a message warning that his account had been “deactivated.”
The message stated there were 30 days to request a review before the account would be “permanently disabled.”
State attorneys contacted Meta the same day and asked for confirmation that it would “preserve all data” associated with the accounts, according to the filing.
Meta’s lawyers reportedly responded that the company “takes reasonable steps to identify the accounts referred to in the complaint and preserve relevant data and information regarding those accounts once identified.”
The state said Meta did not respond to requests for details about what data from accounts it deemed “relevant” and what data it would not keep.
“Given Meta’s refusal to preserve ‘all data’ related to the accounts mentioned in the complaint, a court order is required to preserve this important evidence for trial,” the filing states.
Meta CEO Mark Zuckerberg has been named as a defendant in a New Mexico lawsuit.
State officials allege that Mr. Zuckerberg’s product design decisions played a key role in putting underage users at risk.
Meta has not yet responded specifically to the lawsuit’s allegations.
“We use advanced technology, employ child safety experts, report content to the National Center for Missing and Exploited Children, and share information and tools with other companies and law enforcement agencies, including state attorneys general, to help root out predators,” Meta said in a statement to the Wall Street Journal after the lawsuit was filed.
The New Mexico lawsuit is separate from a larger lawsuit filed by 33 state attorneys general in October.
The states allege that Meta intentionally made the app addictive to trap young users and collected personal data from underage users in violation of federal law.
Meta will offer ad-free subscription versions of Facebook and Instagram in the European Union, the EEA (European Economic Area), and Switzerland, confirming the core of a WSJ report earlier this month. The new ad-free subscription will be available starting next month, Meta said in a blog post.
The move follows years of privacy litigation, enforcement, and court rulings in the EU, which have reached the point where Meta can no longer claim either contractual necessity or legitimate interest as its legal basis for tracking and profiling users for ad targeting. (As of this writing it is still doing the latter, meaning it is technically operating without a proper legal basis; this summer, Meta announced its intention to switch to consent.)
Under local data protection law, the only basis left for Meta’s tracking-and-profiling advertising business is freely given user consent. But the adtech giant’s interpretation of free consent in its “pay or be tracked” subscription proposal will understandably infuriate privacy advocates, because the choice the company is offering boils down to: pay us, or pay with your privacy.
According to Meta’s blog post, the price to avoid tracking and targeting (i.e., the ad-free subscription) will be €9.99 per month on the web, or €12.99 per month on iOS and Android, covering the linked Facebook and Instagram accounts in a user’s Account Center. From March 1, 2024, an additional fee of €6 per month on the web, or €8 per month on iOS and Android, will apply for each additional account listed in a user’s Account Center.
As such, the cost of using Meta’s services without being tracked or profiled can quickly become prohibitive for those with multiple accounts on Meta’s social network.
Even for users with just one account (Facebook or Instagram), the cost of shielding their privacy from Meta’s tracking and profiling comes to almost €120 a year for web usage, or just over €155 a year for mobile.
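As a quick back-of-the-envelope check of those annual figures against the monthly prices quoted in Meta’s blog post:

```python
# Annual cost of the ad-free tier at the quoted monthly prices.
web_monthly, mobile_monthly = 9.99, 12.99
print(f"web:    €{web_monthly * 12:.2f}/year")    # €119.88, i.e. almost €120
print(f"mobile: €{mobile_monthly * 12:.2f}/year") # €155.88, just over €155
```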
As we reported earlier this month, Meta is relying on a single line in a judgment handed down by the bloc’s highest court, the CJEU, earlier this year, in which the judges acknowledged the possibility of platforms offering users an alternative to tracking-based ads, “if necessary for an appropriate fee,” in the form of an equivalent service without tracking and profiling. The legal fight over Meta’s continued tracking and profiling of users therefore comes down to what counts as necessary and appropriate here.
In a press release issued after the WSJ reported Meta’s plan to charge users for their privacy earlier this month, noyb founder and chairman Max Schrems wrote: “The CJEU said that the alternative to ads must be ‘necessary’ and the fee must be ‘appropriate.’ I doubt €160 a year is what they had in mind. These six words are also an ‘obiter dictum,’ a non-binding element that went beyond the core case before the CJEU. For Meta this is not the most stable case law, and we will clearly fight such an approach.”
Ireland’s Data Protection Commission (DPC), Meta’s lead regulator under the General Data Protection Regulation (GDPR) in the EU, sent a statement in response to a request for comment on the development. “Meta informed the DPC on July 27 of its intention to introduce an alternative, consent-based model that would give users a choice between an ad-funded version of the platform and a subscription version for a monthly fee, in which users would not receive targeted advertising,” the Irish regulator wrote.
“Meta had originally identified February 2024 as the earliest date on which it could operationalize its consent model, but on direction from the DPC it agreed to bring that date forward to November 2023. The DPC was concerned to ensure the change was implemented on the platform as soon as possible, given previous findings to the effect that Meta could not rely on the legal bases it had been using when processing users’ data for behavioral advertising. These include the findings of the Court of Justice of the European Union in its judgment of 4 July 2023, in which the court considered the legal bases on which Meta processes user data for the purpose of behavioral advertising.”
“Since Meta’s first proposal in July, the DPC, reflecting its position as lead supervisory authority, has been conducting a detailed regulatory assessment of the consent-based model in consultation with the other European supervisory authorities. That exercise is not yet concluded and no findings have been made to date. It is expected to conclude soon, at which point the DPC will notify Meta whether the way it implements the new user services is compatible with Meta’s obligations under the GDPR.”
It is therefore clear that Meta’s move to offer users a choice between subscribing and being tracked has not yet been approved by data protection authorities, and it may yet attract further regulatory intervention.
Nor is the GDPR, which sets out the conditions for consent to be lawful (it must be specific, informed, and freely given), the only law in play. Meta is now also subject to the pan-EU Digital Services Act (DSA), which likewise sets conditions on large platforms when it comes to tracking and profiling people for advertising. So it is not solely up to data protection authorities to decide whether Meta’s subscription-or-tracking offer passes muster: the European Commission oversees DSA compliance for very large online platforms.
Meta is also designated a so-called gatekeeper under the Digital Markets Act (DMA), the DSA’s sister regulation, which likewise places restrictions on the use of people’s data for advertising. The Commission is the sole enforcer of the DMA.
Meta is already under Commission scrutiny over its approach to the DSA; in recent days, EU officials have sought further information from the tech giant about content risks arising from the Israel-Hamas war and about its approach to election security. It remains to be seen whether the EU will apply similar scrutiny to Meta’s ad-tracking proposals.
In its blog post, Meta said that by offering people the choice of paying or consenting to be tracked, it is “giving users a choice” that balances “the requirements of European regulators while allowing us to continue to serve everyone in the EU, EEA and Switzerland.” But hey, you could certainly say that.
Additionally, this subscription is only available to people over 18, which raises the question of how it will comply with DSA and DMA requirements not to process children’s data for ad targeting.
“Given this evolving regulatory landscape, we continue to explore ways to provide teens with a helpful and responsible advertising experience,” the company said.