Meta shuts down fact-checkers as oversight board co-chair calls its systems “too complex”.

A co-chair of Meta’s oversight board said the company’s systems had become “too complex” after Meta decided to eliminate fact-checkers, a decision welcomed by the chief executive of Elon Musk’s X.

Helle Thorning-Schmidt, co-chair of Meta’s oversight board and former Danish prime minister, agreed with Nick Clegg, the company’s outgoing head of global affairs, saying that Meta’s system had become “too complex”. She added that there had also been over-enforcement.

On Tuesday, Mark Zuckerberg made the surprise announcement that the owner of Facebook will stop using third-party fact-checkers to flag misleading content, in favor of notes from other users.

The 40-year-old billionaire said Meta would eliminate fact-checkers “and replace them with community notes, similar to X”.

The announcement came shortly after the departure from Meta of Mr. Clegg, the former British deputy prime minister, who had been with the company for six years. The Facebook oversight board, which makes decisions about the social network’s moderation policies, was established under his leadership.

Helle Thorning-Schmidt told the BBC that the board welcomed Meta’s review of fact-checking, and that it was examining the complexity of the company’s systems and the potential for over-enforcement.

Mr. Clegg will be replaced by Joel Kaplan, who previously served as deputy chief of staff for policy under former president George W. Bush. Thorning-Schmidt said Mr. Clegg had been discussing his departure for some time.

Linda Yaccarino, X’s chief executive, welcomed Meta’s policy change during an appearance at the CES technology show in Las Vegas, telling attendees: “Welcome to the party.”

The shift will move the social network away from third-party checkers that flag misleading content in favor of user-based notes. This move has faced criticism from online safety advocates for potentially allowing misinformation and harmful content to spread.


Yaccarino praised Meta’s decision as “really exciting” during a Q&A session at CES.

Describing X’s community notes as a positive development, Yaccarino emphasized their effectiveness as an unbiased form of fact-checking.

Yaccarino added that watching the change in behavior was inspiring: once a post receives a note, it is shared dramatically less. “That’s the power of community notes,” she said.

Mr. Zuckerberg, who was sporting a rare Swiss watch valued at about $900,000, criticized Meta’s current moderation system as “too politically biased”, while acknowledging that the change could mean catching less malicious content.

Source: www.theguardian.com

Elon Musk ridicules Microsoft Word’s progressive ‘inclusive language checker’

Elon Musk criticized a Microsoft Word feature known as the “inclusivity checker”, claiming he was “reprimanded” for typing the word “insane”.

The billionaire owner of Tesla posted a screenshot of a Microsoft Word document that discussed Tesla’s new Cybertruck and touted the new electric vehicle’s “insane stability”.
The term was flagged by Word’s software, which identifies words and phrases considered politically incorrect and suggests alternative wording.
“Microsoft Word now scolds you for using words that are not ‘inclusive’,” wrote the world’s richest man on his social media platform.
Musk also posted a screenshot showing an attempt to type “11,000 pounds,” though it’s unclear why that term would be considered non-inclusive.
The prompt in Microsoft Word says, “Think about it from a different perspective,” and suggests alternatives such as “11,000 pounds” or “11,000 pounds (about twice the weight of an elephant).”

Elon Musk has mocked Microsoft Word’s “inclusivity checker,” which flags terms and phrases deemed politically incorrect. Reuters

The Post has reached out to Microsoft for comment.
Other social media users posted screenshots of attempts to use the terms flagged by the software’s “inclusivity checker.”
One user wrote in a Word document: “Hello, could you please man the booth this afternoon?”

The checker, which is only available to customers on the Windows maker’s $7-a-month Microsoft 365 subscription plan, flags the phrase “man the booth” as not being gender-neutral and suggests “staff” and “control” as alternatives.
Other terms flagged by the “inclusivity checker” include “postman” (suggested substitute: “postal worker”) and “master” (“expert”).
GitHub, the Microsoft-owned code-hosting site, banned use of the terms “master” and “slave” in response to the killing of George Floyd in 2020, deeming them racially insensitive.

Microsoft Word’s “inclusiveness checker” flagged the use of the term “insane.”

Since 2020, updated versions of Microsoft Word have included a built-in feature that flags language reflecting age bias, gender bias, sexual-orientation bias, racial bias, and cultural slurs, and prompts users to reconsider it.
Users must manually enable the feature by opening a Word document, clicking the “Editor” button, and then selecting “Proofing” in the settings section.

The Microsoft Word Inclusiveness Checker is only available to Microsoft 365 subscribers.

Near the “Writing Style” option there is a drop-down menu for “Grammar and Refinements”. Clicking the “Settings” button opens a list in which the user can tick the boxes under the “Inclusiveness” category.
When the “inclusivity checker” is activated, the software flags terms that do not appear on its lists of approved and allowed wording.

Microsoft removed terms such as “slave” and “master” from its GitHub site in response to the 2020 killing of George Floyd. AFP (via Getty Images)

When a user types the word “mankind”, for example, the software flags the term and suggests alternatives such as “humanity” or “humankind”. Users can also simply ignore the prompt and keep the original term.

Source: nypost.com

DeepMind AI integrates fact checker to make groundbreaking mathematical findings

DeepMind’s FunSearch AI can tackle mathematical problems

Arengo/Getty Images

Google DeepMind claims to have made the first ever scientific discovery with an AI chatbot, by building a fact checker that filters out useless output, leaving behind only reliable solutions to mathematical or computing problems.

DeepMind’s previous achievements, such as using AI to predict the weather or the shape of proteins, relied on models created specifically for the task at hand and trained on accurate, specific data. Large language models (LLMs), such as GPT-4 and Google’s Gemini, are instead trained on vast amounts of disparate data, yielding a wide range of capabilities. However, the approach is also susceptible to “hallucinations”, the term researchers use for models producing false or erroneous output.

Gemini, released earlier this month, has already shown a propensity to hallucinate, getting simple facts such as this year’s Oscar winners wrong. Google’s previous AI-powered search tool even contained errors in its own launch advertising materials.

One common fix for this phenomenon is to add a layer on top of the AI that validates the accuracy of its output before passing it on to the user. However, given the wide range of topics a chatbot may be asked about, creating a comprehensive safety net of this kind is a very difficult task.
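In a narrow, checkable domain, though, the idea is simple. Below is a minimal Python sketch of such a validation layer, with a random guesser standing in for the unreliable model and integer factoring as the checkable task; it is an illustration of the layered approach under those assumptions, not DeepMind’s code:

```python
import random

def unreliable_generator(n: int, tries: int = 20000) -> list[tuple[int, int]]:
    """Stand-in for a hallucination-prone model: random guesses at factor pairs of n."""
    return [(random.randint(2, n // 2), random.randint(2, n // 2)) for _ in range(tries)]

def verifier(n: int, guess: tuple[int, int]) -> bool:
    """The safety-net layer: checking a claimed answer is cheap and exact."""
    a, b = guess
    return a * b == n

n = 91  # = 7 * 13
trusted = {g for g in unreliable_generator(n) if verifier(n, g)}
print(trusted)  # only correct factor pairs survive, e.g. {(7, 13), (13, 7)}
```

The generator may be wrong almost all of the time, but nothing unverified ever reaches the user.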

Al-Hussein Fawzi of Google DeepMind and his colleagues created FunSearch, which pairs a general-purpose LLM based on Google’s PaLM 2 model with a fact-checking layer they call an “evaluator”. The model is constrained to producing computer code that solves problems in mathematics and computer science, which DeepMind says makes the work tractable: new ideas and solutions in these fields are inherently quick to verify, a much more manageable task.

The underlying AI may still hallucinate and provide inaccurate or misleading results, but the evaluator filters out erroneous outputs, leaving only reliable and potentially useful concepts.

“We believe that probably 90% of what the LLM outputs is useless,” says Fawzi. “Given a potential solution, it’s very easy to evaluate it and tell whether it’s actually correct, but it’s very hard to come up with a solution in the first place. That makes mathematics and computer science a particularly good fit.”

DeepMind claims the model can generate new scientific knowledge and ideas, something no LLM has ever done before.

First, FunSearch is given a problem and a very basic solution in source code as input, then it generates a database of new solutions that are checked for accuracy by the evaluator. The best of the reliable solutions are fed back to the LLM as inputs, with a prompt asking it to improve on the idea. According to DeepMind, the system generates millions of potential solutions and eventually converges on an efficient result, sometimes surpassing the best known solution.

For mathematical problems, the model writes a computer program that can find a solution, rather than trying to solve the problem directly.
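Put together, the loop described above looks roughly like the Python sketch below. It is a schematic, not DeepMind’s code: `llm_propose` stands in for the PaLM 2 call that is asked to improve on the best programs so far, and `evaluate` for the problem-specific evaluator; both are toy numerical stand-ins here so the loop actually runs.

```python
import random

def evaluate(candidate: float):
    """Evaluator: score a candidate, or return None for unusable output."""
    if not 0 <= candidate <= 100:
        return None                      # reject invalid output outright
    return -abs(candidate - 42)          # higher score = better (toy objective)

def llm_propose(best: list[float]) -> list[float]:
    """Stand-in for the LLM: asked to 'improve on' the best candidates so far."""
    return [b + random.uniform(-5, 5) for b in best for _ in range(10)]

database = [(evaluate(0.0), 0.0)]        # the problem seeded with a basic solution
for _ in range(50):
    best = [cand for _, cand in sorted(database, reverse=True)[:5]]
    for cand in llm_propose(best):
        score = evaluate(cand)
        if score is not None:            # the evaluator filters out bad output
            database.append((score, cand))

print(max(database))                     # best verified (score, candidate) found
```

In the real system the database holds programs rather than numbers, and the evaluator scores each program by running it.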

Fawzi and his colleagues challenged FunSearch to find a solution to the cap set problem, which involves finding the largest pattern of points in which no three points form a straight line. As the number of points increases, the computational complexity of the problem rises rapidly. The AI discovered a cap set of 512 points in eight dimensions, larger than any previously known.
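This is the “quickly verifiable” property in action: in the standard formulation the points have coordinates in {0, 1, 2}, and three distinct points lie on a line exactly when their coordinates sum to a multiple of three in every dimension. A minimal verifier along those lines (an illustrative sketch, not DeepMind’s evaluator) fits in a few lines of Python:

```python
from itertools import combinations

def is_cap_set(points: list[tuple[int, ...]]) -> bool:
    """Check that no three distinct points lie on a line in the grid {0,1,2}^n."""
    pts = set(points)
    if len(pts) != len(points):
        return False                              # duplicate points not allowed
    for a, b, c in combinations(pts, 3):
        if all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c)):
            return False                          # a, b and c are collinear
    return True

# In two dimensions, four points can avoid all lines...
print(is_cap_set([(0, 0), (0, 1), (1, 0), (1, 1)]))   # True
# ...but adding the line through (0,0), (1,1) and (2,2) breaks the property.
print(is_cap_set([(0, 0), (1, 1), (2, 2), (0, 1)]))   # False
```

Even for 512 points in eight dimensions, checking every triple is a routine computation; finding such a set is the hard part.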

When tackling the bin-packing problem, where the goal is to efficiently place objects of different sizes into containers, FunSearch discovered a solution that outperformed commonly used algorithms, a result that could be applied immediately by transportation and logistics companies. DeepMind says FunSearch could lead to improvements on more problems in mathematics and computing.
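For a sense of the baseline involved, first-fit is one of the commonly used heuristics of this kind. The sketch below shows that classic baseline (not the algorithm FunSearch discovered) to make the task concrete:

```python
def first_fit(items: list[float], bin_capacity: float = 1.0) -> list[list[float]]:
    """Place each item into the first bin with room, opening new bins as needed."""
    bins: list[list[float]] = []
    for item in items:
        for b in bins:
            if sum(b) + item <= bin_capacity:   # first bin with enough space
                b.append(item)
                break
        else:
            bins.append([item])                 # no bin fits: open a new one
    return bins

items = [0.42, 0.25, 0.7, 0.1, 0.61, 0.33, 0.5]
print(first_fit(items))   # fewer bins used = better packing
```

A packing heuristic is easy to score (count the bins it uses), which is exactly what makes it a good target for FunSearch’s generate-and-verify loop.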

Mark Lee, a researcher at the University of Birmingham, UK, says the next breakthrough in AI will come not from scaling LLMs up to ever-larger sizes, but from adding layers that ensure accuracy, as DeepMind has done with FunSearch.

“The strength of language models is their ability to imagine things, but the problem is hallucinations,” Lee says. “And this study reins that in and checks the facts. It’s a nice idea.”

Lee says the AI should not be criticized for producing large amounts of inaccurate or useless output, because this is similar to how human mathematicians and scientists work: brainstorming ideas, testing them, and following up on the best while discarding the worst.


Source: www.newscientist.com