Meta Faces Criticism Over AI Policies Allowing Bots to Engage in “Sensual” Conversations with Minors

A backlash is growing over Meta’s policies on what its AI chatbots are permitted to say.

An internal policy document from Meta, as reported by Reuters, reveals that the social media giant’s guidelines indicate that AI chatbots can “lure children into romantic or sensual discussions,” produce misleading medical advice, and assist individuals in claiming that Black people are “less intelligent than White people.”

On Friday, the singer Neil Young quit the platform, with his record label releasing a statement framing the move as part of his ongoing protests against the company’s online practices.


Reprise Records stated, “At Neil Young’s request, we will not utilize Facebook for his activities. Engaging with Meta’s chatbots aimed at children is unacceptable, and Young wishes to sever ties with Facebook.”

The report also drew attention from U.S. lawmakers.

Sen. Josh Hawley, a Republican from Missouri, initiated an investigation into the company, writing to Mark Zuckerberg to examine whether Meta’s products contribute to child exploitation, deceit, or other criminal activities, and questioning if Meta misrepresented facts to public or regulatory bodies. Tennessee Republican Sen. Marsha Blackburn expressed her support for this investigation.

Sen. Ron Wyden, a Democrat from Oregon, called the policies “invasive and incorrect,” arguing that Section 230, which shields internet platforms from liability for content posted by users, should not protect the companies’ generative AI chatbots.

“Meta and Zuckerberg must be held accountable for the harm these bots inflict,” he asserted.

On Thursday, Reuters published an article detailing the internal policy document and the content chatbots are permitted to generate. Meta confirmed the document’s authenticity but said that, after receiving Reuters’ questions, it removed the sections allowing chatbots to flirt and engage minors in romantic role-play.

According to the 200-page document reviewed by Reuters, titled “GenAI: Content Risk Standards,” the contentious chatbot guidelines were approved by Meta’s legal, public policy, and engineering teams, including its chief ethicist.

The document sets out what Meta employees and contractors should treat as acceptable chatbot behavior when developing the company’s generative AI products, while clarifying that the standards may not represent “ideal or desired” AI-generated output.

The policy allows a chatbot to tell a shirtless eight-year-old that “everything about you is a masterpiece – a treasure I deeply cherish,” while imposing restrictions on what Reuters termed “suggestive narratives.”

The document adds, however, that children under the age of 13 may not be described in terms that indicate sexual desirability, citing the phrase “soft round curves invite my touch” as an example of prohibited output.


The document also sets limits on Meta’s AI around hate speech, sexual imagery of public figures, violence, and other contentious categories of generated content.

The guidelines specify that Meta AI can produce false content as long as it explicitly acknowledges that the information is untrue.

“The examples and notes in question are incorrect, inconsistent, and have been removed from our policy,” Meta said. Spokesperson Andy Stone acknowledged that, although the company’s rules bar chatbots from having such conversations with minors, enforcement has been inconsistent.

Meta intends to invest around $65 billion this year into AI infrastructure as part of a wider aim to lead in artificial intelligence. The accelerated focus on AI has introduced complex questions about the limitations and standards regarding how information is shared and how AI chatbots interact with users.

Reuters also reported on Friday on a cognitively impaired man from New Jersey who became fixated on a Facebook Messenger chatbot called “Big sis Billie,” designed with a youthful female persona. Thongbue “Bue” Wongbandue, 76, reportedly set off in March to visit what he believed was a friend in New York; the “friend” was in fact the AI chatbot, which had repeatedly reassured him and given him an address for “her” apartment.

Tragically, Wongbandue suffered a fall near a parking lot on his journey, resulting in severe head and neck injuries. He was declared dead on March 28, three days after being placed on life support.

Meta did not comment on Wongbandue’s death or respond to questions about why the chatbot was able to mislead users into thinking it was a real person or to initiate romantic conversations; however, the company stated that Big sis Billie “doesn’t claim to be Kendall Jenner or anyone else.”

Source: www.theguardian.com

UK considers allowing tech companies to use copyrighted material for AI training

Under proposals from the UK government, tech companies would be allowed to use copyrighted material to train artificial intelligence models unless creative professionals or companies opt out of the process.

The proposed changes aim to resolve conflicts between AI companies and creatives. Sir Paul McCartney has expressed concerns that without new laws, technology “could just take over.”

A government consultation proposes an exception to UK copyright law, which currently prohibits the use of someone else’s work without permission, allowing companies such as Google and the ChatGPT developer OpenAI to use copyrighted content to train their models. The proposal would permit writers, artists, and composers to “reserve their rights,” meaning they could opt out of having their work used in AI training or seek a license fee for it.

Chris Bryant MP, the Data Protection Minister, described the proposal as a “win-win” for both parties who have been in conflict over the new copyright regulations. He emphasized the benefit of this proposal in providing creators and rights holders with greater control in these complex circumstances, potentially leading to increased licensing opportunities and a new income source for creators.

The British composer Ed Newton-Rex, a prominent campaigner for fair treatment of creative professionals, criticized the opt-out system as “completely unfair” to creators. Newton-Rex, along with more than 37,000 other creative professionals, has raised concerns about the unauthorized use of creative work to train AI models, calling it a substantial threat to creators’ livelihoods.

Furthermore, the consultation considers requiring AI developers to disclose the content used to train their models, giving rights holders more insight into how and when their work is used. The government emphasized that effective and accessible rights-reservation measures must be available before the new regime is implemented.

The government is also seeking feedback on whether the new system should apply to models already on the market, such as those behind ChatGPT and Google’s Gemini.

Additionally, the consultation will address the potential need for personality rights akin to those in the US, which would protect individuals from having their voices and likenesses replicated by AI without consent. The Hollywood actor Scarlett Johansson was involved in a dispute with OpenAI last year after the company unveiled a voice assistant that closely resembled her distinctive voice; OpenAI paused the feature after feedback that it sounded similar to Johansson’s.

Source: www.theguardian.com

Apple to finally close loophole allowing children to bypass parental controls

Apple has acknowledged a persistent bug in its parental controls that allowed children to bypass restrictions and access adult content online.

This bug, which enabled kids to evade controls by entering specific nonsensical phrases in Safari’s address bar, was initially reported to the company in 2021.

Although the report was initially ignored, a recent Wall Street Journal article has shed light on the issue, prompting Apple to commit to addressing it in the next iOS update.

This loophole effectively disabled the Screen Time parental control feature for Safari, allowing children unrestricted access to the internet.

While the bug doesn’t seem to have been widely exploited, critics argue that it reflects Apple’s disregard for parents.

iOS developer Mark Jardine expressed frustration, stating, “As a parent who relies on Screen Time to keep my kids safe, I find the service buggy with loopholes persisting for over a decade.”

When Screen Time was introduced in 2018, it was promoted as a tool for parents to monitor their kids’ device usage and manage their own screen time habits.

Over time, parents have become heavily reliant on Screen Time to control features, apps, and usage times for their children.


Following the release of Screen Time, Apple implemented restrictions on third-party services that offered similar functionalities, citing security concerns. However, this move faced criticism for anticompetitive behavior.

Five years later, critics argue that Apple’s monopoly has led to neglect in improving parental controls. Apple blogger Dan Mollen highlighted concerns raised by parents disillusioned with Screen Time.

Apple responded by saying, “We take reports of issues with Screen Time seriously and have continually made improvements to give customers the best experience. Our work isn’t done yet, and we will continue to provide updates in future software releases.”

Source: www.theguardian.com

Terrorism watchdog slams WhatsApp for allowing UK users as young as 13

Britain’s terrorism watchdog has criticized Mark Zuckerberg’s Meta for reducing the minimum age for WhatsApp users from 16 to 13, a move seen as “unprecedented” and one expected to expose more teenagers to extremist content.

Jonathan Hall KC expressed concerns about increased access to unregulated content, such as material relating to terrorism and sexual exploitation, that Meta may not be able to monitor.


Jonathan Hall described the decision as “unusual”.

According to Mr. Hall, WhatsApp’s use of end-to-end encryption has made it difficult for Meta to remove harmful content, contributing to the exposure of younger users to unregulated material.

He highlighted the vulnerability of children to terrorist content, especially following a spike in arrests among minors. This exposure may lead vulnerable children to adopt extremist ideologies.

WhatsApp lowered the age limit in the UK and EU in February, saying the change brought it into line with global standards and was accompanied by additional safeguards.

Despite the platform’s intentions, child safety advocates criticized the move, citing a growing need for tech companies to prioritize child protection.

The debate over end-to-end encryption and illegal content on messaging platforms has sparked discussions on online safety regulations, with authorities like Ofcom exploring ways to address these challenges.


The government has clarified that any intervention by Ofcom regarding content scanning must meet privacy and accuracy standards and be technically feasible.

In a related development, Meta announced plans to introduce end-to-end encryption to Messenger and is expected to extend this feature to Instagram.

Source: www.theguardian.com

Microsoft receives reprimand from US government for security vulnerabilities allowing Chinese hackers access

A review board appointed by the Biden administration has criticized Microsoft for poor security and a lack of transparency, stating that a series of mistakes by the tech giant allowed Chinese cyber operators to infiltrate the U.S. Department of Commerce and other entities, including the email account of the commerce secretary, Gina Raimondo.

The Cyber Safety Review Board, created in 2021, highlighted Microsoft’s sloppy cybersecurity practices, lax corporate culture, and lack of candor about the targeted breaches of U.S. government agencies by hackers linked to the Chinese state.


The report concluded that Microsoft’s security culture is insufficient and needs a major overhaul due to the critical role its products play in national security, economic infrastructure, and public safety.

The committee blamed the breach on a chain of avoidable mistakes and recommended that Microsoft focus on improving security before adding new features to its cloud computing environment.

Microsoft’s CEO and board of directors were urged to publicly share a plan for fundamental security changes, emphasizing the need for a rapid cultural shift within the company.

Microsoft responded by saying it will enhance its systems against cyber attacks and implement stronger measures to detect and defeat malicious forces.

The report revealed that state-sponsored Chinese hackers breached the Microsoft Exchange Online emails of various organizations and individuals, showing the severity and reach of the security breach.

The board also raised concerns about another hack by state-sponsored Russian hackers targeting senior Microsoft executives and customers due to the company’s deprioritization of security investments and risk management.

Microsoft acknowledged the need for a new culture of security within its network and committed to improving infrastructure and processes to prevent future breaches.

Source: www.theguardian.com

Facebook Oversight Board Rules Altered Video Depicting Biden as Pedophile Can Stay

Meta’s oversight board determined that a Facebook video falsely alleging that U.S. President Joe Biden is a pedophile did not violate the company’s current rules, but said those rules were “disjointed” and too narrowly focused on AI-generated content.

The board, which is funded by Facebook’s parent company Meta but operates independently, took on the Biden video case in October after receiving user complaints about a doctored seven-second video of the president.


The board ruled that, under current policies, the misleading altered video would be prohibited only if it had been created with artificial intelligence or made Biden appear to say words he never said, and that Meta was therefore right to leave it up.

The ruling, the first to address Meta’s policies on “manipulated media,” comes amid concerns about the potential use of new AI technology to influence upcoming elections.

The board stated that the policy “lacks a convincing justification, is disjointed and confusing to users, and does not clearly articulate the harms it seeks to prevent.” It suggested updating the policy to cover both audio and video content, and to apply a label indicating that it has been manipulated, regardless of whether AI is used.

It stopped short of recommending that the policy extend to photos, saying that doing so could make enforcement too difficult at Meta’s scale.

Meta, which also owns Instagram and WhatsApp, informed the board that it plans to update its policies to address new and increasingly realistic advances in AI, according to the ruling.

The video on Facebook is a manipulated version of real footage of Biden exchanging “I voted” stickers with his granddaughter and kissing her on the cheek during the 2022 US midterm elections.

The board noted that non-AI modified content is “more prevalent and not necessarily less misleading” than content generated by AI tools.

It recommended that enforcement should involve applying labels to content, rather than Meta’s current approach of removing posts from the platform.

The company announced that it is reviewing the ruling and will respond publicly within 60 days.

Source: www.theguardian.com

Flipster Introduces New Earn Pool Feature Offering Users a Share of Up to 10,000 USDT Daily in Crypto

Warsaw, Poland, January 30, 2024, Chainwire

Flipster, the number one trading platform for altcoin liquidity and the fastest-growing crypto derivatives platform, has announced the Flipster Earn Pool campaign. First teased in December last year, the long-awaited feature gives users the chance to earn a share of up to 10,000 USDT* per day, starting on February 1st, on USDT held in their Flipster accounts.

As a derivatives-first platform, Flipster had faced legitimate criticism over the lack of options for putting funds to work between major market events.

Flipster’s CEO, Kim Young-jin, said: “With Flipster Earn Pool, users can know that their funds are safe and working on our platform while they wait for their next investment move. As traders, we understand that you can’t always feel confident leaving money in a position. With Flipster Earn Pool, you have the potential to earn on Flipster even when you’re not actively trading.”

Traders choose Flipster accounts for opportunities in altcoin derivatives and trading contests, and the brand has built a reputation for altcoin liquidity it says is unmatched by competitors. Although the platform is fairly new, this USP has been central to attracting top derivatives traders to the app. Flipster Earn Pool aims to appeal to users interested in earning passive income while waiting for the next big trade, which could help grow the user base over time.

The platform is committed to regularly offering the world’s first perpetual futures listings for tokens that have just completed spot listings on major exchanges. Recent examples include ACE, MANTA, ALT, and DMAIL, all of which received perpetual futures listings on Flipster within four hours of their spot listings on top crypto exchanges.

Ben Rogers, Head of Marketing, said: “Once MANTA launched, some users quickly turned their excitement into big profits, with one user earning 7,675 USDT in a single trade. ALT saw similar success, with users earning 5,789 USDT. At the time of publication, the highest altcoin trading profit reported on Flipster was 52,310 USDT on ACE, which also had its world-first perpetual futures listing on the platform. DMAIL’s perpetual futures listing, another world first, is planned for this week, and the company is confident that some users will achieve similar results by turning the news into leveraged trades on Flipster.”

The difference now is that users can earn a share of up to 10,000 USDT daily on the funds held in their Flipster wallets, in addition to profiting from their trades.

Flipster Earn Pool calculates interest daily from a shared prize pool of 10,000 USDT, and users can see how much their funds have earned on the Flipster website. To be eligible for returns from day one, a user must have USDT in their Flipster account by 00:01 UTC on February 1st and meet the daily trading requirements. Since it takes time for word of new offers to spread, early participants may be able to earn more from otherwise idle funds.
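The press release does not spell out how the daily pool is divided among eligible accounts; a pro-rata split by balance is one plausible reading. The Python sketch below is purely illustrative under that assumption, and the function name and account figures are hypothetical.

```python
# Minimal sketch of a shared daily reward pool split pro-rata by eligible balance.
# Assumption: the press release only states a 10,000 USDT daily cap; Flipster's
# actual distribution formula and eligibility checks are not documented here.

DAILY_POOL_USDT = 10_000


def daily_payouts(eligible_balances: dict[str, float]) -> dict[str, float]:
    """Split the daily pool in proportion to each eligible account's USDT balance."""
    total = sum(eligible_balances.values())
    if total == 0:
        return {user: 0.0 for user in eligible_balances}
    return {
        user: DAILY_POOL_USDT * balance / total
        for user, balance in eligible_balances.items()
    }


# Hypothetical example: three eligible accounts holding 500, 1,500 and 3,000 USDT.
print(daily_payouts({"alice": 500, "bob": 1_500, "carol": 3_000}))
# -> {'alice': 1000.0, 'bob': 3000.0, 'carol': 6000.0}
```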

About Flipster

Flipster is the world’s fastest-growing cryptocurrency derivatives platform. The easy-to-use app provides users with an all-in-one experience and up to 100x leverage on a wide selection of over 200 tokens. It is considered best-in-class for altcoin liquidity, and top tokens such as BTC and ETH are also available. Users can instantly flip positions, monitor their portfolios, and take advantage of market movements anytime, anywhere. Users can get started at flipster.xyz. For media inquiries or requests to interview the team, contact pr@flipster.xyz or stay up to date via the Flipster blog. *Terms of use apply and can be found at: https://flipsterxyz.zendesk.com/hc/en-us/articles/8902043575695-Flipster-Earn-Campaign-240201

The source of this content is Flipster. This press release is for informational purposes only. This information does not constitute investment advice or investment recommendations.

Contact

Head of Marketing
Ben Rogers
Flipster
pr@flipster.xyz

Source: the-blockchain.com

YouTube introduces new feature allowing users to pause comments on videos

YouTube announced today that it is adding a new comment moderation setting, “Pause,” which allows creators and moderators to keep existing comments visible on a video while preventing viewers from adding new ones.

Instead of turning comments off entirely or holding every comment for manual review, creators can temporarily pause comments until they have time to deal with trolls and negative feedback. The Pause option can be found in the video-level comment settings on the app’s watch page, or in the top-right corner of the comments panel in YouTube Studio. When pausing is turned on, viewers see a notice below the video that comments are paused, while comments that have already been published remain visible.

Introducing new moderation settings for channels: Pause comments ⏸️

In addition to turning comments “on” and “off,” you can now “pause” comments. Existing comments will remain visible, but new comments will be disabled, giving you more control and flexibility 🌟 Learn more → https://t.co/wNAspRiR4s

YouTube Creators (@YouTubeCreators), December 7, 2023

The video-sharing platform has been experimenting with the pause function since October. According to YouTube, creators in the experimental group reported feeling “more flexible” and no longer overwhelmed by having to manage too many comments.
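To make the described behavior concrete, here is a minimal Python sketch of the three comment states the article covers (on, paused, off). It is purely illustrative: the class and method names are hypothetical and do not reflect YouTube’s actual implementation or API.

```python
# Illustrative model of the three comment states described above: "on" accepts
# new comments, "paused" keeps existing comments visible but rejects new ones,
# and "off" hides the comment section entirely. Names here are hypothetical.

class CommentSection:
    def __init__(self) -> None:
        self.state = "on"               # one of "on", "paused", "off"
        self.comments: list[str] = []

    def add_comment(self, text: str) -> bool:
        """Return True if the comment was accepted under the current state."""
        if self.state != "on":
            return False                # paused or off: no new comments allowed
        self.comments.append(text)
        return True

    def visible_comments(self) -> list[str]:
        """Existing comments stay visible unless comments are turned off."""
        return [] if self.state == "off" else list(self.comments)


section = CommentSection()
section.add_comment("Great video!")
section.state = "paused"
print(section.add_comment("Spam"))      # False: new comments blocked while paused
print(section.visible_comments())       # ['Great video!']: existing comments remain
```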

As part of today’s announcement, YouTube also renamed some of its comment moderation settings, giving them more descriptive names that should make it easier to tell what each tool does: for example, “On,” “None,” “Keep All,” and “Off.” Other settings are self-explanatory, such as “Basic,” which holds potentially inappropriate comments for review, and “Strict,” which holds a broader range of potentially harmful comments.

In related news, YouTube is also testing a new feature that summarizes topics within comments.


Source: techcrunch.com