Scholars Assess the Trustworthiness of Elon Musk’s AI-Driven Encyclopedia, Grokipedia

Sir Richard Evans, a distinguished British historian, authored three expert witness reports for libel trials involving Holocaust denier David Irving, pursued his doctorate under Theodore Zeldin, took over the Regius Professorship of History at Cambridge University (a title originally bestowed by King Henry VIII), and supervised a dissertation on Bismarck’s social policy.

However, all these details were fabricated, as Professor Evans found when he logged onto Grokipedia, the AI-driven encyclopedia launched last week by the world’s richest individual, Elon Musk.

This marks a rocky beginning for humanity’s latest venture to encapsulate the entirety of human knowledge, or, as Musk describes it, to establish a compendium of “the truth, the whole truth, and nothing but the truth,” created through the capabilities of his Grok artificial intelligence model.

Musk claimed this Tuesday that Grokipedia is already “better than Wikipedia” – or “Wokipedia,” as its detractors call it, reflecting the belief that the leading online encyclopedia often leans toward leftist narratives. One post on X encapsulated the victorious sentiment among Musk’s supporters: “Elon just killed Wikipedia. Good for you.”

Nevertheless, users quickly discovered that Grokipedia largely excerpts from the very encyclopedia it aimed to supplant, is rife with inaccuracies, and seems to endorse right-wing narratives championed by Musk. In a series of posts promoting his creation this week, Musk asserted that “a British civil war is inevitable,” urged Britons to “ally with the hardliners” like far-right figure Tommy Robinson, and claimed only the AfD party could “save Germany.”

Musk is so captivated by his AI encyclopedia that he has expressed a desire to engrave “a comprehensive collection of all knowledge” into stable oxide and place a copy in orbit, on the moon, and on Mars, to ensure its preservation for the future.

However, Evans identified a more pressing issue with Musk’s application of AI for fact-checking and verification. As a specialist in the Third Reich, he shared with the Guardian that “contributions to chat rooms are granted the same weight as serious academic work.” He emphasized, “AI merely observes everything.”

Richard Evans noted that Grokipedia’s entry on Albert Speer (shown to the left of Hitler) reiterated fabrications and distortions propagated by the Nazi Munitions Minister himself. Photo: Picture Library

He pointed out that the Grokipedia article on Albert Speer, Hitler’s architect and wartime munitions minister, perpetuated lies already debunked in his award-winning 2017 biography. Evans also said the entry on Eric Hobsbawm, the Marxist historian whose biography he wrote, falsely claimed that Hobsbawm experienced Germany’s hyperinflation of 1923 and served as an officer in the Royal Corps of Signals, and overlooked the fact that he married twice.

David Larsson Heidenblad, deputy director of the Lund Centre for the History of Knowledge in Sweden, commented on the clash of knowledge cultures now emerging.

“We live in an era where there is a prevalent belief that algorithmic aggregation is more trustworthy than interpersonal insight,” Heidenblad remarked. “The Silicon Valley mindset significantly diverges from the traditional academic methodology. While Silicon Valley’s knowledge culture embraces iterations and views mistakes as part of the process, academia builds trust gradually and fosters scholarship over extended periods, during which the illusion of total knowledge dissipates. These represent the genuine processes of knowledge.”

The launch of Grokipedia follows a long tradition of encyclopedias, from the Yongle Encyclopedia of 15th-century China to the Encyclopédie of Enlightenment-era France. These were succeeded by the English-language Encyclopaedia Britannica and, since 2001, the crowd-sourced Wikipedia. Grokipedia, however, stands out as the first such service substantially driven by AI, raising a pressing question: who governs the truth when an AI controlled by powerful interests holds the pen?

“If Mr. Musk is behind it, I fear there could be political manipulation,” said Peter Burke, a cultural historian and professor emeritus at Emmanuel College, Cambridge, and author of a 2000 book on the social history of knowledge from Johannes Gutenberg’s 15th-century printing press onward.

“While some aspects may be evident to certain readers, the concern is that others might overlook them,” Burke elaborated, highlighting that many entries in the encyclopedia were anonymous, lending them an “air of authority they do not deserve.”

“An AI-generated encyclopedia is a sanitized reflection of reality that asks for the same trust as what came before, but without the same transparency,” said Andrew Dudfield, head of AI at the UK-based fact-checking organization Full Fact. “There’s ambiguity regarding how much of the input was human and how much was produced by AI, along with what the AI’s agenda was. Trust becomes problematic when those choices remain obscured.”


Musk was encouraged to launch Grokipedia by, among others, Donald Trump’s technology adviser David Sacks, who criticized Wikipedia as “hopelessly biased” and maintained by an “army of leftist activists.”

Grokipedia refers to the far-right group Britain First as a “patriotic party,” which delighted its leader Paul Golding (left), who was imprisoned for anti-Muslim hate crimes in 2018. Photo: Gareth Fuller/PA

Until 2021, Musk expressed support for Wikipedia, celebrating its 20th anniversary on Twitter with “I’m so glad you exist.” By October 2023, however, his growing disdain for the platform led him to offer it $1bn “if it would change its name to Dickipedia.”

Yet, many of Grokipedia’s 885,279 articles available in its launch week were nearly verbatim reproductions from Wikipedia, including entries on the PlayStation 5, Ford Focus, and Led Zeppelin. Nonetheless, other components differ substantially.

  • Grokipedia’s entry on Russia’s invasion of Ukraine cites the Kremlin as a main information source, incorporating official Russian language regarding the “denazification” of Ukraine, the defense of ethnic Russians, and the removal of threats to Russian security. In contrast, Wikipedia characterizes Putin’s views as imperialistic and states he “baselessly claimed that the Ukrainian government is neo-Nazi.”

  • Grokipedia refers to far-right group Britain First as a “patriotic party”, which pleased its leader, Paul Golding, who was jailed for anti-Muslim hate crimes in 2018. Conversely, Wikipedia identifies it as a “neo-fascist” and “hate group.”

  • Grokipedia labeled the turmoil at the U.S. Capitol in Washington, D.C., on January 6, 2021, an “insurrection” rather than an attempted coup. It also asserted there was an “empirical basis” for the belief that mass immigration is driving the deliberate demographic displacement of white populations in Western nations, a notion critics dismiss as a conspiracy theory.

  • Grokipedia’s section on Donald Trump’s conviction for falsifying business records related to the Stormy Daniels case stated it was decided “after a trial in a heavily Democratic jurisdiction” and omitted mention of his conflicts of interest, such as receiving a private jet from Qatar or the Trump family’s cryptocurrency enterprise.

Grokipedia categorized the unrest at the U.S. Capitol in Washington, D.C., on January 6, 2021, as an “insurrection” rather than an attempted coup. Photo: Leah Millis/Reuters

Wikipedia responded to Grokipedia’s inception with poise, stating it seeks to understand how Grokipedia will function.

“In contrast to new endeavors, Wikipedia’s advantages are evident,” a spokesperson for the Wikimedia Foundation remarked. “Wikipedia upholds transparent guidelines, meticulous volunteer oversight, and a robust culture of continuous enhancement. Wikipedia is an encyclopedia designed to inform billions of readers without endorsing a particular viewpoint.”

xAI did not respond to requests for comment.

Source: www.theguardian.com

UK Border Officials Utilize AI to Assess Ages of Child Asylum Seekers

Officials will employ artificial intelligence to assist in estimating the age of asylum seekers who claim to be minors.

Immigration Minister Angela Eagle stated on Tuesday that the government will pilot technology designed to assess a person’s age based on facial characteristics.

The initiative is the latest in a series of efforts by the Labour government to use AI to tackle public service problems without incurring significant expense.

The announcement coincided with the publication of a highly critical report by David Bolt, the Chief Inspector of Borders and Immigration, on efforts to estimate the age of new arrivals.

Eagle mentioned in a formal statement to Parliament: “We believe the most economically feasible approach is likely to involve estimating age based on facial analysis. This technology can provide age estimates with known accuracy for individuals whose age is disputed or uncertain, drawing from millions of verifiable images.”

“In cases where it is doubtful whether an individual claiming to be under 18 is in fact an adult, facial age estimation offers a potentially swift and straightforward way to test judgments against the technology’s estimates.”

Eagle is launching a pilot program to evaluate the technology, aiming for its integration into official age verification processes by next year.

John Lewis announced earlier this year that it will be the first UK retailer to facilitate online knife sales using facial age estimation technology.

The Home Office has previously utilized AI in other sectors, such as identifying fraudulent marriages. However, this tool has faced criticism for disproportionately targeting specific nationalities.

Although there are concerns that AI tools may intensify biases in governmental decision-making, the minister is exploring additional applications. Science and Technology Secretary Peter Kyle announced a partnership with OpenAI, the organization behind ChatGPT, to investigate AI deployment in areas like justice, safety, and education.

Bolt said the mental health of young asylum seekers has deteriorated due to failings in the age-assessment system, especially at Dover, where small boat arrivals are processed.


“Many concerns raised over the past decade regarding policy and practices remain unresolved,” Bolt cautioned, emphasizing that the challenging conditions at the Dover processing facility could hinder accurate age assessments.

He added: “I have heard accounts of young individuals who felt distrustful and disheartened in their encounters with Home Office officials, where hope has faded and their mental well-being is suffering.”

His remarks echo a report from the Refugee Council, indicating that at least 1,300 children have been mistakenly identified as adults over an 18-month period.

Last month, scholars from the London School of Economics and the University of Bedfordshire argued that the Home Office should be stripped of its authority to make age decisions about lone child asylum seekers.

Source: www.theguardian.com

AI Companies Urged to Calculate the Risks of Superintelligence or Face Losing Human Control

Before deploying all-powerful systems, AI companies are being urged to replicate the safety calculations that underpinned Robert Oppenheimer’s first nuclear test.

Max Tegmark, a leading voice in AI safety, carried out calculations akin to those performed by the American physicist Arthur Compton before the Trinity test and found a 90% probability that a highly advanced AI would pose an existential threat.

The US government went ahead with Trinity in 1945, after providing assurances that there was minimal risk of the atomic bomb igniting the atmosphere and endangering humanity.

In a paper published by Tegmark and three of his students at the Massachusetts Institute of Technology (MIT), they recommend calculating the “Compton constant,” defined as the probability that an all-powerful AI escapes human control. In a 1959 interview with the American author Pearl Buck, Compton said he had approved the Trinity test after calculating the odds of a runaway reaction to be “slightly less” than one in three million.

Tegmark asserted that AI companies must diligently assess whether artificial superintelligence (ASI)—the theoretical system that surpasses human intelligence in all dimensions—can remain under human governance.

“Firms developing superintelligence ought to compute the Compton constant, which indicates the chances of losing control,” he stated. “Merely expressing a sense of confidence is not sufficient. They need to quantify the probability.”

Tegmark believes that achieving a consensus on the Compton constant, calculated by multiple firms, could create a “political will” to establish a global regulatory framework for AI safety.

A professor of physics and AI researcher at MIT, Tegmark is also a co-founder of the Future of Life Institute, a nonprofit advocating for the safe development of AI. The organization released an open letter in 2023 calling for a pause in the development of powerful AI systems, garnering over 33,000 signatures, including those of Elon Musk and Apple co-founder Steve Wozniak.

The letter came several months after the release of ChatGPT marked the dawn of a new era in AI development. It cautioned that AI laboratories were locked in an “out-of-control race” to deploy “ever more powerful digital minds.”

Tegmark discussed these issues with the Guardian alongside a group of AI experts, including tech industry leaders, representatives from state-supported safety organizations, and academics.

The Singapore Consensus on Global AI Safety Research Priorities report was produced by Tegmark and the distinguished computer scientist Yoshua Bengio, with contributions from leading AI firms including OpenAI and Google DeepMind. It sets out three broad research priorities for AI safety: developing methods to measure the impact of existing and future AI systems; specifying how an AI should behave and designing systems to meet that specification; and managing and controlling a system’s behavior.

Referring to the report, Tegmark noted that discussions about safe AI development have regained momentum following remarks by US Vice President JD Vance asserting that the future of AI will not be won by hand-wringing over safety.


Source: www.theguardian.com

Advancements in Dementia Research: Science can now accurately assess the “biological age” of your brain

If you’re like Khloe Kardashian, who recently turned 40, you may have considered testing your biological age to see if you are younger than your years. But while these tests can tell you a lot about how your body is aging, they often overlook the aging of your brain. Researchers have now developed a new method to determine how quickly your brain is aging, which could help in predicting and preventing dementia.

Unlike your chronological age, which is based on the number of years since you were born, your biological age is determined by how well your body functions and how your cells age. This new method uses MRI scans and artificial intelligence to estimate the biological age of your brain, providing valuable insights for brain health tracking in research labs and clinics.

Traditional methods of measuring biological age, such as DNA methylation, do not work well for the brain due to the blood-brain barrier, which prevents blood cells from crossing into the brain. The new non-invasive method developed at the University of Southern California combines MRI scans and AI to accurately assess brain aging.

Using AI to analyze MRI brain scans, researchers can now predict how quickly the brain is aging and identify areas of the brain that are aging faster. This new model, known as a 3D Convolutional Neural Network, has shown promising results in predicting cognitive decline and Alzheimer’s disease risk based on brain aging rates.

Researchers believe that this innovative approach can revolutionize the field of brain health and provide valuable insights into the impact of genetics, environment, and lifestyle on brain aging. By accurately estimating the risk of Alzheimer’s disease, this method could potentially lead to the development of new prevention strategies and treatments.

Overall, this new method offers a powerful tool for tracking brain aging and predicting cognitive decline, bringing us closer to a future where personalized brain health assessments can help prevent and treat neurodegenerative diseases.

For more information, visit Professor Andrei Irimia’s profile.

Using AI to analyze MRI brain scans, you can see how quickly your brain is aging.

Source: www.sciencefocus.com

Fish use mirrors to assess their size and determine their likelihood of winning a confrontation

A bluestreak cleaner wrasse checking itself out in the mirror

Osaka Metropolitan University

Before deciding whether to fight another fish, wrasse look at their own reflection in the mirror to gauge their size.

Bluestreak cleaner wrasse (Labroides dimidiatus) are astonishingly bright. These finger-sized coral reef fish were the first fish to pass the mirror test, a common assessment of whether an animal can recognize its own body in a mirror rather than mistaking it for another animal. Researchers have now discovered that the wrasses use their reflection to build an image of their own body size and compare it with that of others.

To begin, Taiga Kobayashi and colleagues at Osaka Metropolitan University in Japan conducted an experiment to see whether the fish were willing to attack. They held up images of different wrasses, each 10 percent larger or smaller than the real fish, against the glass wall of an aquarium. Regardless of the size of the fish in the photo, the territorial wrasses put up a fight.

The researchers then repeated the test after adding mirrors so the fish could see their own reflections. This time, when the researchers held up pictures of larger or smaller wrasses against the glass, the fish chose to fight only the smaller rivals.

“This was unexpected, as this fish has always been known to be aggressive towards rivals, regardless of its size,” Kobayashi says.

Because the tanks are partitioned, the wrasses can't see both themselves and pictures of rival fish at the same time, so the scientists think the fish must be comparing the pictures to a mental approximation of their own size.

How did wrasses develop this ability, given that they evolved in an environment without mirrors? In both the lab and in the wild, it's advantageous for fish to know the size of their opponent before fighting, Kobayashi says. In other words, the fish were smart enough to use the mirror as a decision-making tool.


Source: www.newscientist.com

New ways to assess hurricanes may be necessary as their strength increases

Satellite image of Typhoon Surigae over the Pacific Ocean in 2021

European Union/Copernicus Sentinel-3 images

In the past decade, five tropical cyclones have recorded wind speeds strong enough to be classified as Category 6 storms, were such a category to exist. Analysis suggests hurricane rating scales may need to be updated as rising temperatures strengthen storms.

If carbon emissions continue at their current pace, a “Category 7” storm is even possible. “It’s certainly possible in theory if we keep warming the planet,” says climate scientist James Kossin at the First Street Foundation, a nonprofit research organization in New York.

Officially, there is no such thing as a Category 6 or Category 7 hurricane. According to the Saffir-Simpson hurricane scale used by the National Hurricane Center (NHC) in the United States, storms with sustained wind speeds of 252 kilometers per hour or higher are categorized as Category 5.

But as the wind speeds of the strongest storms increase, using this scale becomes increasingly problematic, say Kossin and his colleague Michael Wehner at California’s Lawrence Berkeley National Laboratory, because it does not convey the increased risk posed by increasingly severe storms.

“The situation is bad and it's getting worse,” Kossin said. “As the climate changes, these storms will become stronger.”

They say there are three pieces of evidence that global warming is causing the wind speeds of the strongest storms to increase. First, the basic theory of hurricanes as a type of heat engine says that a hotter world should produce stronger storms.

Second, high-resolution climate models produce storms with faster winds as the Earth's temperature rises.

And finally, real-world storms are getting stronger. Of the 197 Category 5 tropical cyclones between 1980 and 2021, half occurred in the final 17 years of that period, with the five fastest occurring in its last nine years.

If the Saffir-Simpson hurricane scale were expanded to rank storms with wind speeds over 309 km/h as Category 6, all five of these storms would fall into that category. The five are Typhoon Haiyan in 2013, Hurricane Patricia in 2015, Typhoon Meranti in 2016, Typhoon Goni in 2020, and Typhoon Surigae in 2021.

However, Kossin and Wehner are not suggesting that the NHC formally adopt a Category 6. Kossin says a scale based on wind speed alone is fundamentally flawed, given that flooding and storm surge can pose a greater threat to life and buildings.

Instead, they believe the NHC needs to implement an entirely new system to better communicate the overall risk posed by the storm. For example, 2008's Hurricane Ike was a massive storm that caused massive flooding and damage, but Kossin said it was only a Category 1 or 2 storm when it made landfall in the United States.

Kerry Emanuel at the Massachusetts Institute of Technology agrees that a new scale is needed. “While I think it’s important to recognize that hurricane intensity is increasing, we should also point out that most of the damage, injuries, and loss of life from hurricanes comes from water, not wind,” he says.

“I have been an advocate of replacing the venerable but outdated Saffir-Simpson scale with a new scale that reflects the totality of risk from a particular storm,” Emanuel says.

Another hurricane expert, Jeff Masters, now semi-retired, doesn’t think the NHC intends to, or should, change the Saffir-Simpson scale. “But it’s important to understand how devastating these new superstorms could be, so talking about a hypothetical Category 6 storm is a valuable communication strategy for policymakers and the public,” he says.

Masters says wind damage rises steeply with wind speed: a hypothetical Category 6 storm with winds of 314 km/h could cause four times more damage than a Category 5 storm with winds of 257 km/h.
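As a back-of-envelope check, the two figures quoted above are consistent with damage growing as a steep power of wind speed. The sketch below is an illustrative assumption rather than a model from the article: it treats damage as proportional to wind speed raised to some exponent k and solves for the exponent implied by the quoted numbers.

```python
import math

# Quoted figures: a 314 km/h storm causes ~4x the damage of a 257 km/h storm.
v5 = 257.0          # km/h, Category 5 example
v6 = 314.0          # km/h, hypothetical Category 6 example
damage_ratio = 4.0  # "four times more damage"

# Assume a power law, damage ∝ v**k, and solve damage_ratio = (v6/v5)**k for k.
k = math.log(damage_ratio) / math.log(v6 / v5)
print(f"implied exponent k ≈ {k:.1f}")
```

The implied exponent comes out near 7, far steeper than the roughly cubic growth of a storm’s kinetic power, which is why a modest increase in wind speed translates into a dramatic increase in damage.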


Source: www.newscientist.com