Nick Clegg justifies Meta’s decision to remove fact checkers from Facebook and Instagram

Nick Clegg has strongly defended Meta’s decision to scale back moderation across its social media platforms and remove fact-checkers.

The changes to Facebook, Instagram, and Threads, including a shift to promote more political content, were announced by CEO Mark Zuckerberg earlier this month.

Clegg, who is stepping down from the tech company after six years to make way for Joel Kaplan, who is seen as closer to Donald Trump, rejected claims that Meta was diminishing its commitment to truth.


“Please look at what Meta has announced. Ignore the noise, the politics, and the drama that accompanies it,” he said at the World Economic Forum in Davos, describing the new policy as “limited and tailored.”

The former UK deputy prime minister and Liberal Democrat leader stated: “There are still 40,000 people dedicated to safety and content moderation, and this year we will again invest $5 billion (£4 billion) a year in platform integrity. We still maintain the most advanced community standards in the industry.”

Clegg mentioned that Meta’s new community notes system, which replaces its fact-checkers, will resemble the one used by Elon Musk’s competing social media platform X, and will first be launched in the United States.

He described it as a “crowdsourcing or Wikipedia-style approach to misinformation” and suggested it might be “more scalable” than the fact-checkers that he believes have lost the public’s trust.

According to Clegg, Zuckerberg, who has recently drawn closer to President Trump, simply aims to refine Meta’s approach to content moderation.

During a roundtable discussion with journalists at a ski resort in Switzerland, Mr. Clegg was challenged over the fact that numerous expressions previously forbidden on Meta’s platforms, including some derogatory terms for groups of people and describing LGBT individuals as “mentally ill,” will now be tolerated.

Mr. Clegg continued to defend this stance, stating at an event in Davos: “It seems inconceivable to us that individuals can say things in Congress or traditional media that they cannot say on social media. Therefore, some significant adjustments were made.”

He emphasized that speech targeting individuals in a manner designed to intimidate or harass remains unacceptable.

Source: www.theguardian.com

Meta UK Staff Express Concerns Over Abolishing Fact Checkers and DEI Programs

The union representing tech workers in the UK has raised concerns on behalf of British staff at Meta about the company’s decision to eliminate fact-checkers and diversity, equity, and inclusion programs, saying employees feel disappointed and worried about the future direction of the company.

Prospect union, which represents a growing number of UK Meta employees, has written to the company to express these concerns, highlighting the disappointment among long-time employees. It fears this change in approach may impact Meta’s ability to attract and retain talent, affecting both employees and the company’s reputation.

In a letter to Meta’s human resources director for EMEA, the union warns about potential challenges in recruiting and retaining staff following the recent announcements of job cuts and performance management system changes at Meta.

The union also seeks assurances that employees with protected characteristics, especially those from the LGBTQ+ community, will not be disadvantaged by the policy changes. They call for Meta to collaborate with unions to create a safe and inclusive workplace.

Employees are concerned about the removal of fact-checkers and increased political content on Meta’s platform, fearing it may lead to a hostile work environment. They highlight the importance of maintaining a culture of respect and achievement at Meta.

Referencing the government’s Employment Rights Bill, the union questions Meta’s efforts to prevent sexual harassment and ensure that employees with protected characteristics are not negatively impacted by the changes.

The letter from the union follows Zuckerberg’s recent comments on a podcast, where he discussed the need for more “masculine energy” in the workplace. Meta has been approached for comment on these concerns.

Source: www.theguardian.com

Meta’s moderation system had become “too complex,” says oversight board co-chair

The co-chairs of Meta’s oversight board said the company’s moderation systems had become “too complex” after Meta decided to eliminate fact-checkers, a decision welcomed by the chief executive of Elon Musk’s X.

Helle Thorning-Schmidt, co-chair of Meta’s oversight board and former Danish prime minister, agreed with the outgoing global affairs chief, Nick Clegg, that “Meta’s system is too complex,” and said there had been “over-enforcement.”

On Tuesday, Mark Zuckerberg made the surprise announcement that Facebook’s owner will stop using third-party fact-checkers to flag misleading content in favor of notes from other users.

The 40-year-old billionaire said Meta would “get rid of fact-checkers and replace them with community notes similar to X, starting in the US.”

The announcement came shortly after Mr. Clegg, the former British deputy prime minister who had been with the company for six years, announced his departure from Meta. The Facebook oversight board, established under his leadership, makes decisions about the social network’s moderation policies.

Helle Thorning-Schmidt told the BBC: “We appreciate that fact-checking is being reconsidered. We welcome that message and are examining the complexity and potential over-enforcement.”

Joel Kaplan, who previously served as deputy chief of staff for policy under former President George W. Bush, will take over the role from Mr. Clegg. Thorning-Schmidt mentioned that Mr. Clegg had been discussing his departure for a while.

Linda Yaccarino, the chief executive of X, expressed her approval of Meta’s policy change during an appearance at the CES technology show in Las Vegas, saying: “Welcome to the party.”

The shift will move the social network away from third-party checkers that flag misleading content in favor of user-based notes. This move has faced criticism from online safety advocates for potentially allowing misinformation and harmful content to spread.


Yaccarino praised Meta’s decision as “really exciting” during a Q&A session at CES.

Describing X’s community notes as a positive development, Yaccarino emphasized its effectiveness in unbiased fact-checking.

Yaccarino added: “The effect on human behavior is inspiring, because when a post gets a note, it is shared dramatically less. That’s the power of community notes.”

Mr. Zuckerberg, sporting a rare Swiss watch valued at about $900,000, criticized Meta’s current moderation system as “too politically biased” while acknowledging that the change could mean less malicious content is caught.

Source: www.theguardian.com

DeepMind AI integrates fact checker to make groundbreaking mathematical findings

DeepMind’s FunSearch AI can tackle mathematical problems


Google DeepMind claims to have made the first ever scientific discovery with an AI chatbot, by building a fact-checker that filters out useless output and leaves behind only reliable solutions to mathematical or computing problems.

DeepMind’s previous achievements, such as using AI to predict the weather or the shape of proteins, relied on models created specifically for the task at hand and trained on accurate, specific data. Large language models (LLMs), such as GPT-4 and Google’s Gemini, are instead trained on vast amounts of disparate data, yielding a wide range of capabilities. However, this approach is also susceptible to “hallucinations,” the term researchers use for models producing erroneous output.

Gemini, released earlier this month, has already shown a tendency to hallucinate, getting even simple facts such as this year’s Oscar winners wrong, while Google’s previous AI-powered search engine contained errors in its own launch advertising materials.

One common fix for this phenomenon is to add a layer on top of the AI that validates the accuracy of the output before passing it on to the user. However, given the wide range of topics that chatbots may be asked about, creating a comprehensive safety net is a very difficult task.

Al-Hussein Fawzi at Google DeepMind and his colleagues created FunSearch, a general-purpose tool based on Google’s PaLM 2 LLM with a fact-checking layer they call an “evaluator.” The model is constrained to producing computer code that solves problems in mathematics and computer science, but DeepMind says the work is still significant because such solutions are inherently and quickly verifiable, which makes checking them a much more manageable task.

The underlying AI may still hallucinate and provide inaccurate or misleading results, but the evaluator filters out erroneous outputs, leaving only reliable and potentially useful concepts.

“We believe that probably 90% of what the LLM outputs is useless,” Fawzi says. “Given a potential solution, it’s very easy to tell whether it is actually correct and to evaluate it, but it’s very difficult to actually come up with a solution. So mathematics and computer science are a particularly good fit.”

DeepMind claims the model can generate new scientific knowledge and ideas, something no LLM has ever done before.

First, FunSearch is given a problem and a very basic solution in source code as input, and it then generates a database of new solutions that are checked for accuracy by the evaluator. The best of the reliable solutions are fed back to the LLM as inputs, with a prompt asking it to improve on the idea. According to DeepMind, the system generates millions of potential solutions and eventually converges on an efficient result, sometimes even surpassing the best known solution.
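As a rough illustration of that generate-evaluate-reprompt loop, the sketch below shows the general shape of such a system in Python. It is not DeepMind’s implementation: `llm_propose` is a hypothetical stand-in for the call to the underlying language model, and `evaluate` stands for the problem-specific evaluator.

```python
def funsearch_style_loop(problem, seed_program, evaluate, llm_propose, rounds=1000):
    """Minimal sketch of a generate-evaluate-reprompt loop.

    evaluate(program) -> numeric score, or None if the program is broken or
    incorrect; this is the "evaluator" layer that discards hallucinations.
    llm_propose(problem, examples) -> new candidate program (source code),
    a stand-in for the underlying LLM call.
    """
    database = {seed_program: evaluate(seed_program)}  # program -> score

    for _ in range(rounds):
        # Show the LLM a few of the best-scoring programs found so far
        # and ask it to improve on them.
        best = sorted(database, key=database.get, reverse=True)[:2]
        candidate = llm_propose(problem, best)

        # Keep only candidates the evaluator can verify; everything else
        # (broken code, wrong answers) is filtered out.
        score = evaluate(candidate)
        if score is not None:
            database[candidate] = score

    # Return the highest-scoring program in the database.
    return max(database, key=database.get)
```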

For mathematical problems, the model writes a computer program that can find a solution, rather than trying to solve the problem directly.

Fawzi and his colleagues challenged FunSearch to find solutions to the cap set problem, which involves finding the largest set of points in a grid such that no three of them form a straight line. As the number of dimensions increases, the computational complexity of the problem grows rapidly. The AI discovered a cap set consisting of 512 points in eight dimensions, larger than any previously known.
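This is also where the “easy to verify, hard to find” asymmetry is concrete. In the standard formulation over the grid (Z_3)^n, three distinct points lie on a line exactly when their coordinate-wise sum is zero modulo 3, so checking a claimed cap set is a simple scan over triples. A small illustrative checker (a sketch for this article, not DeepMind’s code) could look like this:

```python
from itertools import combinations

def is_cap_set(points, dim):
    """Verify a claimed cap set in the grid (Z_3)^dim.

    Three distinct points in (Z_3)^dim are collinear exactly when their
    coordinate-wise sum is 0 modulo 3, so verification is just a scan over
    all triples -- cheap compared with finding a large cap set.
    """
    pts = [tuple(p) for p in points]
    if len(set(pts)) != len(pts) or any(len(p) != dim for p in pts):
        return False
    for a, b, c in combinations(pts, 3):
        if all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c)):
            return False  # found three points on a line
    return True

# Example: these four points in (Z_3)^2 contain no line.
print(is_cap_set([(0, 0), (0, 1), (1, 0), (1, 1)], dim=2))  # True
```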

When tackling the bin packing problem, where the goal is to efficiently place objects of different sizes into containers, FunSearch discovered a heuristic that outperformed commonly used algorithms, a result that could be applied immediately by transportation and logistics companies. DeepMind says FunSearch could lead to improvements in more math and computing problems.
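For context, the commonly used baselines in online bin packing are simple rules such as first fit, which places each item into the first open bin with enough remaining space; the heuristics FunSearch evolved make better placement decisions than rules of this kind on benchmark instances. A first-fit sketch for comparison (illustrative only, not the FunSearch-discovered heuristic):

```python
def first_fit(items, bin_capacity):
    """Classic first-fit heuristic: place each item into the first open bin
    that still has room, opening a new bin only when none fits."""
    bins = []  # remaining capacity of each open bin
    for item in items:
        for i, remaining in enumerate(bins):
            if item <= remaining:
                bins[i] -= item
                break
        else:
            bins.append(bin_capacity - item)  # open a new bin
    return len(bins)

# 20 units of items fit into two bins of capacity 10 here.
print(first_fit([4, 8, 1, 4, 2, 1], bin_capacity=10))  # 2
```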

The next breakthrough in AI will come not from scaling LLMs up to ever-larger sizes, but from adding layers that ensure accuracy, as DeepMind has done with FunSearch, says Mark Lee, a researcher at the University of Birmingham, UK.

“The strength of language models is their ability to imagine things, but the problem is hallucinations,” Lee says. “And this study reins that in and checks the facts. It’s a nice idea.”

Lee says AI should not be criticized for producing large amounts of inaccurate or useless output. This is similar to how human mathematicians and scientists work: brainstorm ideas, test them, and follow up on the best while discarding the worst.


Source: www.newscientist.com