What is the likelihood of an asteroid impacting Earth?

Asteroids are an intrinsic aspect of our solar system. Millions of rocky bodies orbit the sun, including those categorized as near-Earth asteroids, which occasionally come close to our planet. While cinematic portrayals often depict asteroid strikes as abrupt, inevitable catastrophes, experts contend that in reality, the risk is significantly more manageable and frequently preventable.

But what are the actual probabilities of an asteroid colliding with Earth? Recent studies shed light on this issue and offer some unexpected insights.

What are the chances that an asteroid will hit Earth?

A major asteroid impact would have effects that could be felt globally. Depending on its landing site, it might either harmlessly drop into the ocean or inflict severe damage on populated regions.

“Most people on Earth are likely aware of moderate to large asteroid impacts,” explains Carrie Nugent, a planetary scientist at the Olin Institute of Technology in Massachusetts.

However, Nugent emphasizes that catastrophic outcomes are exceedingly rare. While our planet has faced significant asteroid impacts throughout its history, including a notable one that contributed to the extinction of the dinosaurs 66 million years ago, current scientific understanding suggests there is no immediate cause for alarm.

New research on asteroid impact probability

Nugent, along with a team from Aalborg University in Denmark, employed computer simulations to analyze the risks associated with asteroid impacts. Their research concentrated on asteroids akin to recognized Near Earth Objects (NEOs).

Utilizing the publicly available NASA JPL Horizons system, they simulated the orbits of these asteroids to determine the frequency with which they intersect Earth’s orbit, allowing researchers to estimate the likelihood of large asteroids striking our planet.

According to their findings published on August 12th in the Planetary Science Journal:

  • Asteroids over 140 meters (460 feet) – Roughly equivalent to the length of a small cruise ship
  • Collisions with Earth approximately once every 11,000 years

Keeping asteroid risks in perspective

Understanding probabilities like “once every 11,000 years” can be complex. To provide clarity, Nugent compared asteroid impacts to other more familiar real-world events.

Her analysis revealed that:

  • You are more likely to survive an asteroid impact than to be struck by lightning.
  • Conversely, your chances of dying in a car accident are significantly higher than from an asteroid collision.

There are also other low-probability but high-risk events, such as the collapse of a deep hole in dry sand, that can result in fatalities but remain largely unknown to the general public.

“This is an extremely rare cause of death that many are unaware of,” Nugent noted, underscoring how human perception often miscalculates risk.

Can asteroid collisions be prevented?

In contrast to popular narratives in films and literature, asteroid strikes are not fate-driven events. In fact, scientists have demonstrated that altering an asteroid’s trajectory is possible.

In 2022, NASA’s DART mission successfully changed the path of a small asteroid that posed no threat to Earth. This experiment showcased that, with sufficient warning, we could potentially deflect a hazardous asteroid and avert a collision entirely.

“This is the only natural disaster we can completely prevent,” Nugent asserts.

Why asteroid tracking is important

Continuous research and sky survey initiatives are crucial for planetary defense. Early detection and tracking of near-Earth asteroids provide scientists ample time to evaluate risks and take necessary actions if needed.

Modern asteroid detection systems are continually improving, diminishing uncertainty and enhancing Earth’s preparedness against cosmic threats.

Conclusion

Though asteroid strikes captivate public imagination, scientific evidence indicates that they are infrequent, quantifiable, and preventable. Advances in tracking technology and the success of missions like NASA’s DART test reassure us that Earth is better shielded than ever.

Experts suggest that asteroid research should foster confidence and continued investment in planetary defense rather than fear.

Source: hitechub.com

Likelihood of Catastrophic Asteroid Impact Rises Temporarily in 2025

Illustration of an asteroid passing near the moon

Mark Garlick/Science Photo Library

In 2025, the threat of a disastrous asteroid impact momentarily heightened when astronomers detected a building-sized asteroid on a collision course with Earth.

Known as 2024 YR4, this asteroid was initially identified by astronomers in late December 2024, with estimates placing its size between 40 and 90 meters. Any potential trajectory through our solar system would intersect a narrow zone that includes Earth, leading astronomers to initially assess a 1 in 83 probability of collision in 2032.

As they monitored the asteroid’s orbit more closely in early 2025, the likelihood of an impact was updated to a concerning 1 in 32 by February.

If it had impacted close to an urban area, the consequences would have been devastating, equivalent to several megatons of TNT. The asteroid was temporarily classified as a 3 on the Turin scale, where 0 means no threat and 10 signifies a global catastrophe. This raised alarms among several United Nations agencies, resulting in coordinated efforts for a global telescope campaign and discussions on the necessity of an asteroid deflection mission.

During this period, global space agencies convened regularly to share observations and enhance understanding of the asteroid. “2024 YR4 proved to be a significant learning experience for us,” stated Richard Moisle from the European Space Agency (ESA). “This served as crucial training to enhance our capabilities related to asteroid detection and understanding the overarching challenges.”

By February 20, astronomers had refined the trajectory of 2024 YR4, effectively removing Earth from the asteroid’s predicted path, and ESA subsequently reduced the collision risk to 1 in 625, or 0.16 percent. Weeks later, both NASA and ESA confirmed that there was no longer any risk of collision. “They are not considered a threat to our planet,” affirmed Moisle.

Nonetheless, astronomers still acknowledge a minor risk of a lunar impact, estimated at about 4% for 2032. “Should we hit the moon, it would provide a unique opportunity to observe the impact process from a safe distance,” commented Gareth Collins from Imperial College London.

Researchers are now assessing the potential ramifications of an asteroid impacting the moon, including the risk of debris cascading toward Earth. They are also exploring the feasibility of a deflection mission and strategizing on how to effectively dispatch a small satellite to an asteroid in an attempt to detonate it with a nuclear device. “We must tread carefully to ensure that a moon impact does not unintentionally lead to an Earth impact,” Rang Moisle.

The present 4 percent chance of a lunar collision is not sufficiently alarming to compel global space agencies to initiate a formal mission. This probability is unlikely to shift soon, as 2024 YR4 is currently obscured by the Sun and won’t be visible until 2028. However, due to its unique positioning in Earth’s orbit, there will be a rare opportunity to observe it with the James Webb Space Telescope in February 2026. Moisle indicated that since planning an asteroid mission can take years, data from these observations will represent the last realistic chance to determine whether a mission to visit or deflect the asteroid is warranted.

Total Solar Eclipse 2027 Cruise: Spain and Morocco

Join an extraordinary expedition aboard the cutting-edge exploration vessel Douglas Mawson to witness the longest total solar eclipse of the century on August 2, 2027.

Topic:

  • Asteroid/
  • 2025 News Review

Source: www.newscientist.com

The Method We Use to Train AIs Increases Their Likelihood of Producing Nonsense

Certain AI training techniques may lead to dishonest models

Cravetiger/Getty Images

Researchers suggest that prevalent methods for training artificial intelligence models may increase their propensity to provide deceptive answers, aiming to establish “the first systematic assessment of mechanical bullshit.”

It is widely acknowledged that large-scale language models (LLMs) often produce misinformation or “hagaku.” According to Jaime Fernandez Fissac from Princeton University, his team defines “bullshit” as “discourse designed to manipulate an audience’s beliefs while disregarding the importance of actual truth.”

“Our analysis indicates that the problems related to bullshit in large-scale language models are quite severe and pervasive,” remarks FISAC.

The researchers categorized these instances into five types: “This red car combines style, charm, and adventure that captivates everyone,” Weasel Words—”Ambiguous statements like ‘research suggests that in some cases, uncertainties may enhance outcomes’; Essentialization—employing truthful statements to create a false impression; unverified claims; and sycophancy.

They evaluated three datasets composed of thousands of AI-generated responses to various prompts from models including GPT-4, Gemini, and Llama. One dataset included queries specifically designed to test the generation of bullshit when AIS was asked for guidance or recommendations, alongside others focused on online shopping and political topics.

FISAC and his colleagues first employed LLMs to determine if the responses aligned with one of the five categories and then verified that the AI’s classifications matched those made by humans.

The team found that the most critical truths posed challenges stemming from a training method called reinforcement learning from human feedback, aimed at enhancing the machine’s utility by offering immediate feedback on its responses.

However, FISAC cautions that this approach is problematic, as models “sometimes conflict with honesty,” prioritizing immediate human approval and perceived usefulness over truthfulness.

“Who wants to engage in the lengthy and subtle rebuttal of bad news or something that seems evidently true?” FISAC questions. “By attempting to adhere to our standards of good behavior, the model learns to undervalue the truth in favor of a confident, articulate response to secure our approval.”

This study revealed that reinforcement learning from human feedback notably heightened bullshit behavior, with inflated rhetoric increasing by nearly 40%, substantial enhancements in Weasel Words, and over half of unverified claims.

Heightened bullshitting is especially detrimental, as team member Kaique Liang points out, leading users to make poorer decisions. In cases where the model’s features were uncertain, deceptive claims surged from five percent to three-quarters following human training.

Another significant issue is that bullshit is prevalent in political discourse, as AI models “tend to employ vague and ambiguous language to avoid making definitive statements.”

AIS is more likely to behave this way when faced with conflicts of interest, as the system caters to multiple stakeholders including both the company and its clients, as the researchers discovered.

To address this issue, the researchers propose transitioning to a “hindcasting feedback” model. Instead of seeking immediate feedback post-output, the system should first generate a plausible simulation of potential outcomes based on user input, which is then presented to a human evaluator for assessment.

“Ultimately, we hope that by gaining a deeper understanding of the subtle but systematic ways AI may seek to mislead us, we can better inform future initiatives aimed at creating genuinely truthful AI systems,” concludes FISAC.

Daniel Tiggard of the University of San Diego, though not involved in the study, expresses skepticism regarding discussions of LLMs’ output under these circumstances. He argues that just because LLMs generate bullshit, it does not imply intentional deception, as AI systems currently stand. I left to deceive us, and I have no interest in doing so.

“The primary concern is that this framing seems to contradict sensible recommendations about how we should interact with such technology,” states Tiggard. “Labeling it as bullshit risks anthropomorphizing these systems.”

Topics:

Source: www.newscientist.com

Reducing high blood pressure may decrease the likelihood of developing dementia

Low blood pressure is associated with a lower risk of dementia

Shutterstock/Greeny

According to a large study from Chinese people, lowering hypertension reduces the risk of dementia and cognitive impairment.

Many studies link Hypertension is also known as hypertension, and is at a high risk of developing dementia.. Some studies have also shown that side effects of blood pressure treatment may be at a lower risk of dementia.

now, jiang he At the University of Texas Southwestern Medical Center, Dallas and his colleagues are directly considering the effectiveness of drugs that lower blood pressure for dementia and cognitive impairment.

They studied 33,995 people in rural China. They were all over 40 years old and had high blood pressure. Participants were split into one of two random groups, each with an average age of approximately 63 years.

On average, the first group actively received three antitherapeutic drugs, such as ACE inhibitors, diuretics, or calcium channel blockers, actively ensuring lower blood pressure. They also coached home blood pressure monitoring and lifestyle changes that help to reduce blood pressure, such as weight loss and alcohol and salt intake.

Another set treated as a control group achieved local treatment levels with the same coaching and more general levels of treatment, including on average one drug.

At follow-up appointments 48 months later, participants were tested for blood pressure and measured for signs of cognitive impairment using a standard questionnaire.

Concerns about hypertension begin when a person’s systolic pressure exceeds 130 mm mercury (mmHg) or when diastolic pressure exceeds 80 mmHg. blood pressure It has exceeded 130/80.

On average, many people who received the medication lowered their blood pressure from 157.0/87.9 to 127.6/72.6 mmHg, while the control group was able to take it from 155.4/87.2 to just 147.7/81.0 mmHg.

The researchers also found that 15% fewer people who received multiple drug therapies during the study received dementia diagnosis compared to the control group, and 16% suffered from cognitive impairment.

“The results of this study demonstrated that lowering blood pressure is effective in reducing the risk of dementia in patients with uncontrolled hypertensive conditions,” he says. “This proven and effective intervention should be widely adopted and expanded to alleviate the global burden of dementia.”

“Over the years, many people know that blood pressure is likely a risk factor for dementia. Zachary Malcolm At Washington University in Seattle.

Raj Shah Rush University in Chicago says it’s helpful to add evidence that treating high blood pressure can help stop dementia, but that’s just one of the dementia puzzles, as we affect brain abilities as we age.

“You need to treat hypertension for multiple reasons,” says Shah. “Because of people’s longevity and happiness, they can age healthyly over time.”

Marcum also says people should think more broadly than just blood pressure to avoid dementia. He says there is Other known risk factors This is associated with an increased risk of dementia, including smoking, inactivity, obesity, social isolation, and hearing loss.

And many factors are more influential at different stages of life. To reduce the risk of dementia, “a holistic approach is needed throughout your life,” says Shah.

topic:

Source: www.newscientist.com

New theories suggest that the likelihood of intelligent life existing beyond Earth is higher

In 1983, theoretical physicist Brandon Carter said that the time it took for humans to evolve on Earth compared to the total lifespan of the Sun was essentially unlikely to have been our evolutionary origin. We concluded that observers like humans who are comparable to the above are very rare. . In a new study, scientists from Pennsylvania, the University of Munich and the University of Rochester have critically reevaluated the core assumptions of Carter's “hard step” theory through the lens of historical geologics. Specifically, they propose alternative theories with no harsh steps, and the evolutionary specificity required for human origin can be explained through mechanisms other than essentially non-performance. Furthermore, if the surface environment of the Earth initially did not reach the specific important intermediate steps necessary for human existence, as well as human life, the timing of human origin would be a habitability surrounding the history of the Earth. Controlled by continuous openings in the new global environment window.

The new theory proposes that humans may represent potential consequences of biological and planetary evolution. Image credit: Fernando Ribas.

“This is a huge change in how we think about life history,” said Professor Jennifer McCalady of Pennsylvania.

“It suggests that the evolution of complex life may be less about the interaction between luck and its environment, and I am to understand our origins and our place in the universe. paves the path for exciting new research in our quest.”

“The 'hard step' model, originally developed by Brandon Carter in 1983, took humans to evolve on Earth compared to the total lifespan of the sun, so our evolutionary origins are largely due to the fact that He claims it is unlikely. Human beings are extremely low across the globe. ”

In a new study, Professor Makaradi and her colleagues say that the Earth's environment is initially incapable of parasitic life in many forms, and only important evolutionary steps when the Earth's environment reaches a state of “tolerant” claimed that it was possible.

“For example, because complex animal life requires a certain level of oxygen in the atmosphere, oxygenation of the Earth's atmosphere through photosynthesis is the oxygenation of the Earth's atmosphere through microorganisms and bacteria, and oxygenation of the Earth's atmosphere through planets. It was a natural evolutionary step, said Dr. Dan Mills, a postdoctoral researcher at the University of Munich.

“We argue that intelligent life may not need a series of lucky breaks.”

“Humans did not evolve “early” or “slowly” in the history of the Earth, but when conditions were right, they “on time.” ”

“It's probably just a matter of time, and while other planets can probably achieve these conditions more quickly than Earth, other planets may take even longer.”

The central prediction of the “hard step” theory is that, based on Carter's, steps such as the origin of life, the development of complex cells, and the emergence of human intelligence, if there are no other civilizations, then the other civilizations are He says there is little that exists in the universe. The interpretation of the total lifespan of the Sun is 10 billion years, and the age of the Earth is about 5 billion years old.

In a new study, the authors have the ability to originate human origin by continuous openings in the window of habitability to the history of the Earth, driven by changes in nutritional availability, sea surface temperature, ocean salinity levels, and oxygen levels. I suggested that the timing could be explained. atmosphere.

Given all the interaction factors, the Earth has only just become kind to humanity recently. It is simply a natural result of workplace conditions.

“We believe we need to use geological time scales rather than predicting based on the lifespan of the sun, because it takes time for the atmosphere and landscape to change,” Penn State said. said Professor Jason Wright.

“These are the normal timescales on Earth. When life evolves with planets, they evolve at the planet's pace on the planet's timescale.”

Team's paper It was published in the journal this month Advances in science.

____

Daniel B. Mills et al. 2025. A reevaluation of the “hard step” model for the evolution of intellectual life. Advances in science 11(7); doi:10.1126/sciadv.ads5698

Source: www.sci.news

Calculating the Likelihood of Intelligent Life in the Universe and Beyond: A New Theoretical Model

In 1961, American astrophysicist and astrobiologist Dr. Frank Drake multiplied several factors to estimate the number of intelligent civilizations in the Milky Way that could make their presence known to humans. I devised an equation. More than 60 years later, astrophysicists have created a different model that focuses instead on conditions created by the accelerating expansion of the universe and the amount of stars forming. This expansion is thought to be caused by dark energy, which makes up more than two-thirds of the universe.

Artistic impression of the multiverse. Image credit: Jaime Salcido / EAGLE collaboration.

“Understanding dark energy and its impact on our universe is one of the biggest challenges in cosmology and fundamental physics,” said Dr. Daniele Solini, a researcher at Durham University’s Institute for Computational Cosmology. .

“The parameters that govern our universe, such as the density of dark energy, may explain our own existence.”

Because stars are a prerequisite for the emergence of life as we know it, the team’s new model predicts the probability of intelligent life arising in our universe, and in a hypothetical multiverse scenario of different universes. could be used to estimate the

The new study does not attempt to calculate the absolute number of observers (i.e. intelligent life) in the universe, but instead calculates the relative probability that a randomly chosen observer will inhabit a universe with certain properties. will be considered.

It concludes that a typical observer would expect to experience significantly greater densities of dark energy than seen in our Universe. This suggests that its ingredients make it a rare and unusual case in the multiverse.

The approach presented in this paper involves calculating the rate at which ordinary matter is converted into stars for different dark energy densities throughout the history of the universe.

Models predict that this proportion would be about 27% in a universe where star formation is most efficient, compared to 23% in our universe.

This means that we do not live in a hypothetical universe where intelligent life has the highest probability of forming.

In other words, according to the model, the values ​​of dark energy density that we observe in the Universe do not maximize the potential for life.

“Surprisingly, we found that even fairly high dark energy densities can still coexist with life. This suggests that we may not be living in the most likely universe. ,” Dr. Solini said.

The model could help scientists understand how different densities of dark energy affect the structure of the universe and the conditions for life to develop there.

Dark energy causes the universe to expand faster, balancing the pull of gravity and creating a universe that is capable of both expansion and structure formation.

But for life to develop, there needs to be areas where matter can aggregate to form stars and planets, and conditions need to remain stable for billions of years to allow life to evolve.

Importantly, this study shows that the astrophysics of star formation and the evolution of the large-scale structure of the universe combine in subtle ways to determine the optimal value of dark energy density required for the generation of intelligent life. It suggests that.

“We will use this model to investigate the emergence of life across different universes and reinterpret some fundamental questions we ask ourselves about our own universe,” said Lucas Lombreiser, professor at the University of Geneva. It will be interesting to see if there is a need.”

of study Published in Royal Astronomical Society Monthly Notices.

_____

Daniele Solini others. 2024. Influence of the cosmological constant on past and future star formation. MNRAS 535 (2): 1449-1474;doi: 10.1093/mnras/stae2236

Source: www.sci.news

Fish use mirrors to assess their size and determine their likelihood of winning a confrontation

Bluestreak Cleaner checking himself out in the mirror

Osaka Metropolitan University

Before deciding whether to fight another fish, wrasse look at their own reflection in the mirror to gauge their size.

Blue Streak Cleaner Lass (Loveroid) are astonishingly bright. This finger-sized coral reef fish is the first to pass the mirror test, a common assessment of whether an animal can recognize its own body and not another animal in a mirror. Researchers discovered that these wrasses use their own reflection to build an image of their own body size and compare it to others.

beginning, Taiga Kobayashi Researchers at Osaka Metropolitan University in Japan conducted an experiment to see if fish were willing to attack. They held up images of different wrasses, each 10 percent larger or smaller than the real fish, against the glass wall of an aquarium. Regardless of the size of the model fish in the photo, the territorial wrasses put up a fight.

The researchers then repeated the test with additional mirrors, and the fish saw their own reflection in the mirror, but when the researchers held up pictures of larger or smaller wrasses on the glass plate, the fish chose to fight only the smaller rivals.

“This was unexpected, as this fish has always been known to be aggressive towards rivals, regardless of its size,” Kobayashi says.

Because the tanks are partitioned, the wrasses can't see both themselves and pictures of rival fish at the same time, so the scientists think the fish must be comparing the pictures to a mental approximation of their own size.

How did wrasses develop this ability, given that they evolved in an environment without mirrors? In both the lab and in the wild, it's advantageous for fish to know the size of their opponent before fighting, Kobayashi says. In other words, the fish were smart enough to use the mirror as a decision-making tool.

topic:

Source: www.newscientist.com

High potency cannabis increases the likelihood of developing cannabis-induced psychosis

Anders Gillian was only 17 years old when he started to lose touch with reality.

“He believed there was a higher being communicating with him, telling him what to do and who he was,” said his mother, Christine Gillian, who lives in Nashville. ‘ he said.

Her son, who had been using marijuana since he was 14, was diagnosed with schizophrenia, a chronic mental illness with symptoms such as delusions, hallucinations, and incoherent speech.

He began taking antipsychotic medication but eventually stopped due to side effects. He turned to heroin to quiet the voices in his head and tragically died from an accidental drug overdose at age 22 in 2019.

“If he hadn’t started using marijuana, he might still be here today,” says Gillian, a neuroscientist at Vanderbilt University. Despite having a family history of schizophrenia, she believes her son’s marijuana use triggered a psychotic episode and led to his condition.

Anders was part of a group of young men at heightened risk of developing psychosis due to marijuana use. Studies from Denmark and Britain suggest a connection between heavy marijuana use and mental disorders like depression, bipolar disorder, and schizophrenia. Researchers believe that the increased potency of THC, the psychoactive compound in marijuana, may exacerbate these symptoms in individuals predisposed genetically. THC levels in marijuana have been rising over the years.

Kristen Gilliland holds a photo of her son Anders, who was diagnosed with schizophrenia due to marijuana-induced psychosis and died of an accidental overdose.NBC News

“We’re seeing a rise in marijuana-induced psychosis among teenagers,” said Dr. Christian Thurstone, an addiction expert and child psychiatrist at the University of Colorado School of Medicine in Denver.

Is higher potency marijuana more dangerous?

Nora Borkow, director of the National Institute on Drug Abuse, stated that the higher the potency of a cannabis product, the more negative effects it is likely to have on users.

“Those who consume higher doses are at a greater risk of developing psychosis,” she explained.

Research on the adverse effects of high THC levels is limited, but a 2020 study found that high-potency cannabis products were associated with an increased risk of hallucinations and delusions compared to lower-potency variants.

“There seems to be a correlation between potency and the risk of psychosis, but further research is needed,” said Ziva Cooper, director of UCLA’s Center for Cannabis and Cannabinoids.

Research suggests that a proportion of individuals with cannabis-induced psychosis may go on to develop schizophrenia or bipolar disorder.

Mr. Thurstone highlighted the particular concern regarding young people and adolescents.

“Current research shows that the risk of psychosis is dependent on the dose of marijuana, especially during adolescence. Higher exposure during this critical period increases the likelihood of psychosis, schizophrenia, and potentially severe mental illnesses,” he stated.

More news about marijuana and health

Another issue with high-potency products is the risk of developing cannabis use disorder or marijuana addiction. Increased exposure to stronger cannabis products may lead to addiction, although more research is required to definitively establish this connection.

“There is clear scientific evidence that marijuana can be psychologically addictive and habit-forming, and even physically habit-forming,” Thurstone warned. “It creates tolerance, requiring increased usage for the same effect.”

Approximately 1 in 10 individuals who start using cannabis may become addicted. According to the Centers for Disease Control.

How the potency of cannabis is related to psychosis

Marijuana overstimulates cannabinoid receptors in the brain, leading to a high. This stimulation can impair cognitive functions, memory, and problem-solving abilities.

While the exact mechanisms of how marijuana induces psychosis are not fully understood, scientists believe it interferes with the brain’s ability to differentiate between internal thoughts and external reality.

“In the ’60s, ’70s, ’80s, and early ’90s, marijuana had THC content of about 2% to 3%,” noted Thurstone, highlighting the significant increase in potency levels in recent years.

Patrick Johnson, an assistant store manager at Frost Exotic Dispensary in Colorado, has witnessed the rise in potency firsthand, especially after the legalization of recreational marijuana in 2014.

Since then, 24 states, two territories, and Washington, D.C., have legalized marijuana for medical and recreational use.

As cannabis consumption grows across the nation, the demand for high-potency products is increasing, experts suggest.

“After legalization, I’ve seen potency rise from 19-20% to 30-35%,” Johnson remarked.

Currently, his store offers strains ranging from 14% to 30%, with most customers preferring stronger varieties.

Mahmoud Elsohly, a cannabis researcher at the University of Mississippi, explained that one reason for increased potency is users developing tolerance to the drug over time. This has led to a steady increase in THC content over the years.

“People need more potent products to achieve the desired high,” he noted.

Previously, a joint with 2% THC might have been enough, but as tolerance develops, individuals may need multiple joints or higher THC concentrations for the same effect.

Are some forms of marijuana safer?

Cannabis potency primarily refers to the THC content in the smokable parts like the flower or bud.

THC levels in flowers can reach up to 40%, while concentrates and oils may contain levels as high as 95%.

The challenge, according to UCLA’s Cooper, lies in the absence of a standardized dose for cannabis products, making it hard to predict individual reactions.

Establishing unit doses for inhaled products is also complicated. A joint can contain 100 to 200 milligrams of THC, but factors like inhalation depth and frequency of puffs affect actual exposure.

On the other hand, edibles typically contain 5 to 10 milligrams per serving. Efforts are underway to standardize dosing for edibles and regulate THC intake. For example, New York State limits edibles to 10 mg per serving.

How high can THC go?

Borkow of the National Institute on Drug Abuse believes that excessively high THC levels may induce extreme reactions like agitation and paranoia, predicting that marijuana flower THC levels won’t exceed 50%.

Cooper added that there is a threshold for THC production, but manufacturers are finding innovative ways to increase potency.

“The industry is boosting THC levels in plant products by adding extra THC, like injecting it into pre-rolled cannabis cigarettes,” she said. “We’re witnessing higher THC exposure levels than ever before.


Source: www.nbcnews.com

AI trained on extensive life stories has the ability to forecast the likelihood of early mortality

Data covering Denmark’s entire population was used to train an AI that predicts people’s life outcomes

Francis Joseph Dean/Dean Photography/Alamy Stock Photo

Artificial intelligence trained on personal data covering Denmark’s entire population can predict people’s likelihood of dying more accurately than existing models used in the insurance industry. Researchers behind the technology say it has the potential to have a positive impact on early prediction of social and health problems, but must be kept out of the hands of large corporations. There is.

Sune Lehmann Jorgensen The researchers used a rich Danish dataset covering the education, doctor and hospital visits, resulting diagnoses, income, and occupation of 6 million people from 2008 to 2020.

They converted this dataset into words that can be used to train large-scale language models, the same technology that powers AI apps like ChatGPT. These models work by looking at a set of words and statistically determining which word is most likely to come next based on a large number of examples. In a similar way, the researcher’s Life2vec model can look at the sequence of life events that form an individual’s history and determine what is most likely to happen next.

In the experiment, Life2vec was trained on all data except for the last four years of data, which was kept for testing. Researchers took data on a group of people aged 35 to 65, half of whom died between 2016 and 2020, and asked Life2vec to predict who lived and who died. This was 11% more accurate than existing AI models and life actuarial tables used in the financial industry to price life insurance policies.

The model was also able to predict personality test results for a portion of the population more accurately than AI models trained specifically to do the job.

Jorgensen believes the model has consumed enough data that it has a good chance of shedding light on a wide range of topics in health and society. This means it can be used to predict and detect health problems early, or by governments to reduce inequalities. But he stresses that it can also be used by companies in harmful ways.

“Obviously, our model should not be used by insurance companies, because the whole idea of ​​insurance is that if some unlucky person suffers some kind of incident, dies, loses their backpack, etc. ‘Because we share the lack of knowledge about what to do, we can share this burden to some extent,’ says Jorgensen.

But such technology already exists, he says. “Big tech companies that have large amounts of data about us are likely already using this information against us, and they are using it to make predictions about us. It is.”

Matthew Edwards Researchers from UK professional institutes the Institute of Actuaries and the Faculty of Actuaries say that while insurers are certainly interested in new forecasting techniques, the bulk of decision-making is based on a type of model called a generalized linear model. The research is done using AI, which he says is rudimentary compared to this research. .

“If you look at what insurance companies have been doing for years, decades, centuries, they’ve taken the data they have and tried to predict life expectancy from that,” Edwards said. “But we are deliberately conservative in adopting new methodologies, because when we are creating policies that are likely to be in place for the next 20 or 30 years, the last thing we want is to make any significant mistakes. . Everything can change, but slowly because no one wants to make mistakes.”

topic:

Source: www.newscientist.com