Remarkable Advances in Developing Practical Quantum Computers

Alexander Yakimov / Alamy

The quantum computing industry is concluding the year with renewed hope, despite the absence of fully operational quantum systems. At December’s Q2B Silicon Valley Conference, industry leaders and scientists expressed optimism regarding the future of quantum computing.

“We believe that it’s highly likely that someone, or perhaps several entities, will develop a genuinely industrially viable quantum computer, but we didn’t anticipate this outcome until the end of 2025,” stated Joe Altepeter, program manager for the Defense Advanced Research Projects Agency’s Quantum Benchmarking Initiative (QBI). The QBI aims to evaluate which of the competing quantum computing approaches can yield practical devices capable of self-correction or fault tolerance.

This initiative will extend over several years, involving hundreds of professional evaluators. Reflecting on the program’s initial six months, Altepeter noted that while “major roadblocks” were identified in each approach, none disqualified any team from the pursuit of practical quantum devices.

“By late 2025, I sense we will have all major hardware components in place with adequate fidelity; the remaining challenges will be primarily engineering-focused,” said Scott Aaronson, a computer scientist at the University of Texas at Austin, during his presentation. He acknowledged the ongoing challenge of discovering algorithms for practical quantum applications, but highlighted significant progress in hardware.

Though quantum computing hardware advancements are encouraging, application development is lagging, according to Ryan Babbush from Google. During the conference, Google Quantum AI and its partners unveiled the finalists for an XPRIZE competition aimed at accelerating application development.

The research by the seven finalists spans simulations of biomolecules crucial for human health, algorithms enhancing classical simulations for clean energy materials, and calculations that could impact the diagnosis and treatment of complex health issues.

“A few years back, I was skeptical about running applications on quantum computers, but now my interest has significantly increased,” remarked John Preskill, a pivotal voice in quantum computing at Caltech, advocating for the near-term application of quantum systems in scientific discovery.

Over the past year, numerous quantum computers have been employed for calculations in areas including the physics of materials and high-energy particles, in some cases rivalling or surpassing traditional computational methods.

While certain applications are deemed particularly suitable for quantum systems, challenges remain. For instance, Pranav Gokhale at Infleqtion, a company building quantum devices from ultracold atoms, is implementing Shor’s algorithm, a quantum algorithm capable in principle of breaking much of the encryption used by banks today. However, this initial implementation still lacks the computational power necessary to decrypt real-world encrypted information, illustrating that significant enhancements in both hardware and software are still needed.
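
For the curious, the classical half of Shor's algorithm can be sketched in a few lines. This toy version brute-forces the period finding that a real quantum computer would perform with the quantum Fourier transform, so it only illustrates the number theory behind the factoring step, not the quantum speedup:

```python
from math import gcd

# Classical sketch of Shor's post-processing. find_period is brute force here;
# on a quantum computer, that step is the part done efficiently.
def find_period(a, n):
    """Smallest r > 0 with a**r == 1 (mod n)."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n, a):
    """Try to split n using the period of a modulo n; None means retry with another a."""
    r = find_period(a, n)
    if r % 2:
        return None                  # need an even period
    root = pow(a, r // 2, n)
    if root == n - 1:
        return None                  # trivial square root, try another a
    return gcd(root - 1, n), gcd(root + 1, n)

print(shor_factor(15, 7))  # (3, 5)
```

For n = 15 and a = 7, the period is 4, and the greatest common divisors recover the factors 3 and 5; scaling this to bank-grade key sizes is exactly what still demands far more capable hardware.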

Dutch startup QuantWare has proposed a solution to the industry’s central hardware challenge: scaling up quantum computers to boost computational capacity while maintaining reliability. Its quantum processor design aims for 10,000 qubits, roughly 100 times the capacity of most current superconducting quantum computers. According to QuantWare’s Matthijs Rijlaarsdam, the company anticipates having its first such device operational within two and a half years. Other firms, such as IBM and Quantinuum, are working toward similarly large-scale quantum systems, while QuEra aims to build a 10,000-qubit machine from ultracold atoms within a year, intensifying the competition.

Moreover, the quantum computing industry is projected to expand significantly, with global investments expected to rise from $1.07 billion in 2024 to approximately $2.2 billion by 2027, as noted in a Quantum Computing Industry Survey by Hyperion Research.

“More individuals than ever can now access quantum computers, and I believe they will accomplish things we can scarcely imagine,” said Jamie Garcia from IBM.


Source: www.newscientist.com

Perfect Timing for Firefox: Developing an AI Browser and the Future of the Web

Need an assistant for your online activities? Several major artificial intelligence companies are moving beyond standalone chatbots like ChatGPT and are now focusing on browsers with deep AI integration. These could take the form of agents that shop for you, or ever-present chatbots that follow you across the web, summarizing what you are looking at, looking up related information, and answering questions.

In the last week alone, OpenAI released the ChatGPT Atlas browser, while Microsoft showcased Edge’s new Copilot mode, both heavily utilizing chatbots. In early October, Perplexity made its Comet browser available for free. Mid-September saw Google rolling out Chrome with Gemini, integrating its AI assistant into the world’s most popular browser.

Following these releases, I spoke with Firefox General Manager Anthony Enzor-DeMeo to discuss whether AI-first browsers will gain traction, if Firefox will evolve to be fully AI-driven, and how user privacy expectations may change in this new era of personalized, agent-driven browsing.

Guardian: Have you tried ChatGPT Atlas or other AI browsers? I’m curious what you think about them.

Anthony Enzor-DeMeo: Yes, I’ve tried Atlas, Comet, and other competing products. What do I think about them? It’s a fascinating question: What do users want to see? Today, users typically go to Google, perform a search, and view various results. Atlas seems to be transitioning towards providing direct answers.

Guardian: Would you want that as a user?

Enzor-DeMeo: I prefer knowing where the AI derives its answers. References are important, and Perplexity’s Comet provides them. I believe that’s a positive development for the internet.

Guardian: How do you envision the future of the web? Is search evolving into a chat interface instead of relying solely on links?

Enzor-DeMeo: I’m concerned that access to content on the web may become more expensive. The internet has traditionally been free, mostly supported by advertising, though some sites have subscriptions. I’m particularly watching whether more content retreats behind paywalls, even as we aim for a free and open internet. AI may not be immediately profitable, yet we have to guard against a shift towards a more closed internet.

Guardian: Do you anticipate Firefox releasing an AI-integrated or agent-like browser similar to Perplexity Comet or Atlas?

Enzor-DeMeo: Our focus remains on being the best browser available. With 200 million users, we need to encourage people to choose us over default options. We closely monitor user preferences regarding AI features, which are gradually introduced. Importantly, users retain control; they can disable features they do not wish to use.

Guardian: Do you think AI browsers will become popular or remain niche tools?

Enzor-DeMeo: Currently, paid AI usage is about 3% globally, so it’s premature to deem it fully mainstream. However, I believe AI is here to stay. The forthcoming years will likely see greater distribution and trial and error as we discover effective revenue models that users are willing to pay for. This varies widely by country and region, so the next phase of the internet presents uncertainties.

Guardian: What AI partnerships is Firefox considering?

Enzor-DeMeo: We recently added Perplexity in what is akin to a search partnership agreement. While Google is our default search engine, users have access to 50 other search engines, giving them options.

Guardian: Given your valuable partnership with Google, what financial significance does the Perplexity partnership hold?

Enzor-DeMeo: I’m unable to share specific details.

Guardian: Firefox has established its reputation on user privacy. How do you reconcile increasing demands for personalization, which requires more data, with AI-assisted browsing?

Enzor-DeMeo: Browsers inherently have a lot of user context. Companies are developing AI browsers to leverage this data for enhanced personalization and targeted ads. Mozilla will continue to honor users’ choices. If you prefer not to store data, that’s entirely valid. Users aren’t required to log in and can enjoy completely private browsing. If it results in less personalized AI, that’s acceptable. Ultimately, the choice lies with users.

Guardian: Do you think users anticipate sacrificing privacy for personalization?

Enzor-DeMeo: We’ve observed a generational divide. Younger cohorts prioritize value exchange—will sharing more information lead to a more tailored experience? In a landscape with numerous apps and social media, this expectation has emerged. However, perspectives vary between generations; Millennials often value choice, while Gen Xers prioritize privacy. Many Gen Z users emphasize personalization and choice.

Guardian: What are your thoughts on the recent court decision regarding Google’s monopoly?

Enzor-DeMeo: The judge acknowledged the influx of competition entering the market. He deliberately avoided delving into the browser engine domain. We support search competition but not at the cost of independent browsers. The ruling allows us to keep receiving compensation while monitoring market evolution over the next few years. The intersection of search and AI remains uncertain, and a prudent stance is to observe how these developments unfold.

Guardian: Firefox’s market share has been steadily declining over the past decade; what are your realistic goals for user growth in the coming years?

Enzor-DeMeo: Every user must decide to download and use Firefox. We’re proud to serve 200 million users. I believe that AI presents us with significant growth opportunities. We want to provide choices rather than lock users into a single solution, fostering diverse growth possibilities for us.

Source: www.theguardian.com

Should We Be Concerned About AI Developing Lethal Biological Weapons? Not Now, But Eventually.

AI can be used to redesign the toxin ricin, which is sourced from castor beans found in many gardens.

American Photo Archives/Alamy

Artificial intelligence holds the potential to revolutionize biology, enhancing the development of advanced drugs, vaccines, and even synthetic organisms that can, for instance, consume waste plastic. Nonetheless, there are concerns about its potential misuse in creating biological weapons that might evade traditional detection methods until it is too late. So, what level of concern is warranted?

“AI advancements are catalyzing breakthroughs in biology and medicine,” states Eric Horvitz, Chief Science Officer at Microsoft. “With these new capabilities comes the responsibility to remain vigilant.”

His research team explored whether AI could be used to design proteins that mimic the functions of known hazardous proteins while being distinct enough to avoid detection as dangerous. Some details of the research were withheld, including which specific proteins they attempted to redesign, but the hazardous proteins studied included toxins such as ricin, infamous for its role in a 1978 assassination, and botulinum toxin, a potent neurotoxin marketed in dilute form as Botox.

Creating any such protein, botulinum toxin included, requires a blueprint: the DNA that encodes it. Typically, if biologists need a specific DNA sequence, they order it from specialized companies.

This raises the worry that would-be bioterrorists could simply order the recipes for biological weapons. Because of such anxieties, some DNA synthesis companies have voluntarily implemented screening processes to detect potentially hazardous orders. Proteins are essentially sequences of amino acids, and the screening examines whether an ordered sequence corresponds to a “sequence of concern,” meaning a known biological threat.


However, AI theoretically enables the design of protein variants with altered amino acid sequences that still perform the same functions. Horvitz and his colleagues applied this approach to 72 potentially hazardous proteins and found that existing screening methods frequently overlooked these alternative variants.
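
A toy sketch shows why sequence-based screening can be evaded in principle. The "sequences of concern" and the redesigned variant below are made-up strings, not real protein sequences, and the matching rule is a deliberately naive stand-in for real screening software:

```python
# Hypothetical watchlist of amino acid sequences (invented for illustration).
CONCERN_LIST = {"MKTAYIAKQR", "GAVLIMFWPS"}

def naive_screen(seq, threshold=0.8):
    """Flag an order if it exactly or closely matches a listed sequence."""
    for bad in CONCERN_LIST:
        matches = sum(a == b for a, b in zip(seq, bad))
        if matches / max(len(seq), len(bad)) >= threshold:
            return True
    return False

original = "MKTAYIAKQR"
variant  = "MRSGYVGKHE"   # hypothetical redesign: same function, different letters

print(naive_screen(original), naive_screen(variant))  # True False
```

The original sequence is flagged, while a variant that keeps the function but changes most of the letters slips through, which is essentially the gap the Microsoft team probed.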

This is less alarming than it may sound. For a variety of reasons, the team did not physically create the redesigned proteins. And in a previous study earlier this year, they tested a redesigned version of a non-toxic protein and found that it did not function as intended.

Moreover, while bioterrorist attacks have occurred, they are rare, and there is little reason to attribute that rarity to the voluntary screening system. Numerous ways to circumvent the rules exist without resorting to AI redesign: ricin, for example, can be harvested from castor oil plants found in many gardens. The study is a reminder that great sophistication is not required to exploit gaps in security; there is no need for Mission Impossible-style acrobatics when the vault door is left wide open.

Lastly, outside of state-sponsored programs, historical records show that bioterrorists have rarely used protein-based biological weapons effectively. The Aum Shinrikyo cult attempted to deploy botulinum toxin for mass harm but ultimately relied on chemical agents. Letters laced with ricin sent to the White House caused no fatalities. Judged by casualty statistics, firearms and explosives pose far greater risks than biological toxins.

Does this imply we should cease our concerns over AI-generated biological weapons? Not at all. While Horvitz’s research focused strictly on proteins, viruses present a substantial threat. AI is already being leveraged to redesign entire viruses.

Recently, a team from Stanford University unveiled their attempt to redesign a virus that infects bacteria like E. coli. Consistent with findings from the protein redesign efforts, the results were underwhelming with respect to E. coli, but this is merely the beginning.

In discussions of AI-created viruses, James Diggans of DNA manufacturer Twist Bioscience, a member of Horvitz’s team, remarked that detecting DNA that encodes viruses is generally easier than detecting proteins of concern. “Synthesis screening works best with abundant data. Therefore, at the genomic level, it proves exceedingly beneficial.”

Nevertheless, not all DNA manufacturers conduct such screening, and desktop DNA synthesizers are now accessible to the public. There are accounts of AI developers refusing requests to design harmful viruses or attempting to discern malicious intent, yet people have found numerous ways to circumvent safeguards against creating bioweapons.

To be clear, history indicates that the threat posed by “wild” viruses is far greater than that of bioterrorism. Contrary to assertions from the current US administration, the evidence suggests that SARS-CoV-2 emerged from a bat virus that crossed into humans via other wildlife.

Moreover, a would-be bioterrorist could inflict massive damage simply by releasing known, naturally occurring pathogens. There are substantial gaps in bioweapon control efforts, reducing any need to rely on advanced AI techniques.

For all of these reasons, the risk of AI-engineered viruses being deployed is likely minimal at present. However, this risk increases as various technologies continue to improve. The COVID-19 pandemic has illustrated the chaos a new virus can unleash, even when it is not particularly harmful. Thus, there are justified reasons for concern.


Source: www.newscientist.com

AI Firms “Unprepared” for Risks of Developing Human-Level Systems, Report Warns

A prominent AI safety group has warned that artificial intelligence firms are “fundamentally unprepared” for the consequences of developing systems with human-level cognitive abilities.

The Future of Life Institute (FLI) said that none of the companies it assessed scored above a D for existential safety planning on its AI Safety Index.

The five reviewers of the FLI report focused on the companies’ pursuit of artificial general intelligence (AGI); none of the examined companies presented “a coherent, actionable plan” to ensure the systems remain safe and controllable.

AGI denotes a theoretical phase of AI evolution where a system can perform cognitive tasks at a level akin to humans. OpenAI, the creator of ChatGPT, emphasizes that AGI should aim to “benefit all of humanity.” Safety advocates caution that AGIs might pose existential risks by eluding human oversight and triggering disastrous scenarios.

The FLI report indicated: “The industry is fundamentally unprepared for its own aspirations. While companies claim they will achieve AGI within a decade, their existential safety plans score no higher than a D.”

The index assesses seven AI developers, Google DeepMind, OpenAI, Anthropic, Meta, xAI, Zhipu AI, and DeepSeek, across six categories, including “current harms” and “existential safety.”

Anthropic received the top overall safety grade of C+, followed by OpenAI with a C-, and Google DeepMind with a D.

FLI is a US-based nonprofit that advocates for the safer development of advanced technologies; its funders include crypto entrepreneur Vitalik Buterin, who has given “unconditional” donations.

SaferAI, another nonprofit focused on AI safety, also released a report on Thursday, raising alarms that advanced AI companies exhibit “weak to very weak risk management practices” and deeming current strategies “unacceptable.”

FLI’s safety evaluations were conducted by a panel of AI experts, including the UK computer scientist Stuart Russell and Sneha Revanur, founder of the AI regulation campaign group Encode Justice.

Max Tegmark, co-founder of FLI and a professor at MIT, said it was deeply concerning that leading AI firms aim to build superintelligent systems without publishing plans for managing the consequences.

Tegmark said the technology is advancing rapidly, countering earlier beliefs that experts would have decades to tackle the challenges of AGI. “Now, companies themselves assert it’s just a few years away,” he said.

He pointed out that new AI models have consistently outperformed previous generations. Since the AI Action Summit in Paris in February, new models such as xAI’s Grok 4, Google’s Gemini 2.5, and its video generator Veo 3 have demonstrated significant improvements over their predecessors.

A spokesperson for Google DeepMind asserted that the report overlooks “the entirety of Google DeepMind’s AI safety initiatives,” adding, “Our comprehensive approach to safety and security far exceeds what’s captured in the report.”

OpenAI, Anthropic, Meta, xAI, Zhipu AI, and DeepSeek were also contacted for comment.

Source: www.theguardian.com

Reducing high blood pressure may decrease the likelihood of developing dementia

Lower blood pressure is associated with a lower risk of dementia

Shutterstock/Greeny

According to a large study in China, lowering high blood pressure reduces the risk of dementia and cognitive impairment.

Many studies have linked hypertension, also known as high blood pressure, with a higher risk of developing dementia. Some have also suggested that people taking blood-pressure-lowering treatment may be at a lower risk of dementia.

Now, Jiang He at the University of Texas Southwestern Medical Center in Dallas and his colleagues have directly tested whether drugs that lower blood pressure reduce dementia and cognitive impairment.

They studied 33,995 people in rural China, all over 40 years old and with high blood pressure, who were randomly assigned to one of two groups. The average age in each group was approximately 63.

The first group received intensive treatment with antihypertensive drugs, such as ACE inhibitors, diuretics, or calcium channel blockers, on average three per person, to actively drive down blood pressure. Participants were also coached on home blood pressure monitoring and on lifestyle changes that help reduce blood pressure, such as losing weight and cutting alcohol and salt intake.

The second group, treated as a control, received usual care at local treatment levels, without the coaching and with less intensive medication, on average one drug per person.

At follow-up appointments 48 months later, participants had their blood pressure measured and were assessed for signs of cognitive impairment using a standard questionnaire.

Hypertension is generally diagnosed when a person’s systolic pressure exceeds 130 millimetres of mercury (mmHg) or their diastolic pressure exceeds 80 mmHg, that is, when their blood pressure exceeds 130/80.

On average, those in the intensive treatment group lowered their blood pressure from 157.0/87.9 mmHg to 127.6/72.6 mmHg, while the control group went from 155.4/87.2 mmHg to just 147.7/81.0 mmHg.

The researchers also found that 15 per cent fewer people in the intensive treatment group received a dementia diagnosis during the study compared with the control group, and 16 per cent fewer developed cognitive impairment.
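
For readers who want the arithmetic, here is a minimal sketch of how a relative risk reduction like the 15 per cent figure is computed. The participant counts below are invented for illustration and are not taken from the study:

```python
def relative_risk_reduction(events_treat, n_treat, events_ctrl, n_ctrl):
    """Relative risk reduction: 1 - (risk in treatment group / risk in control group)."""
    risk_treat = events_treat / n_treat
    risk_ctrl = events_ctrl / n_ctrl
    return 1 - risk_treat / risk_ctrl

# Hypothetical: 85 dementia diagnoses per 1,000 treated vs 100 per 1,000 controls.
rrr = relative_risk_reduction(85, 1000, 100, 1000)
print(f"{rrr:.0%}")  # 15%
```

Note this is a relative reduction: the absolute difference between the groups depends on how common dementia diagnoses were overall.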

“The results of this study demonstrate that lowering blood pressure is effective in reducing the risk of dementia in patients with uncontrolled hypertension,” says He. “This proven and effective intervention should be widely adopted and scaled up to alleviate the global burden of dementia.”

“For many years, we have known that high blood pressure is likely a risk factor for dementia,” says Zachary Marcum at the University of Washington in Seattle.

Raj Shah at Rush University in Chicago says it is helpful to have evidence that treating high blood pressure can help stave off dementia, but that this is just one piece of the dementia puzzle, since many factors affect our brain abilities as we age.

“You need to treat hypertension for multiple reasons,” says Shah, for people’s longevity and well-being, so that they can age healthily over time.

Marcum also says people should think more broadly than just blood pressure to avoid dementia. Other known risk factors are associated with an increased risk of dementia, he says, including smoking, inactivity, obesity, social isolation, and hearing loss.

And different factors are more influential at different stages of life. To reduce the risk of dementia, “a holistic approach is needed throughout your life,” says Shah.


Source: www.newscientist.com

Research shows that the shingles vaccine can lower the chances of developing dementia

Getting vaccinated against shingles reduces the risk of developing dementia, a large new study finds.

The result provides some of the most powerful evidence yet that certain viral infections can affect brain function years later, and that preventing them can protect cognition.

The study, published in Nature on Wednesday, found that those who received the shingles vaccine were 20% less likely to develop dementia over the following seven years than those who were not vaccinated.

“If you are reducing your risk of dementia by 20%, that’s very important in the public health context, given that there isn’t much at this point that slows the onset of dementia,” said Dr. Paul Harrison, a psychiatry professor at the University of Oxford, who was not involved in the new study but whose other studies have also linked the shingles vaccine to a lower risk of dementia.

Whether the protection lasts beyond seven years can only be determined by further research. However, with few effective treatments or preventions currently available, Dr. Harrison said the shingles vaccine appears to have some of the most potent protective effects against dementia that are actually available in practice.

Shingles is caused by varicella-zoster, the virus behind childhood chickenpox. As people age and their immune systems weaken, the virus can reactivate and cause shingles, with symptoms such as burning, tingling, painful blisters, and numbness. The nerve pain can become chronic and debilitating.

In the US, one in three people will develop at least one case of shingles, also known as herpes zoster, in their lifetime, according to the Centers for Disease Control and Prevention. Approximately one-third of eligible adults have received the vaccine in recent years, the CDC says.

While several previous studies suggested that shingles vaccination may reduce the risk of dementia, most could not rule out the possibility that vaccinated individuals have other dementia-protective traits, such as healthier lifestyles, better diets, or more education.

The new research ruled out many of these factors.

“That’s very strong evidence,” said Dr. Anupam Jena, a health economist and physician at Harvard Medical School.

The study took advantage of an unusual feature of the shingles vaccine rollout in Wales, which began on September 1, 2013. Welsh officials set strict age requirements: on that date, people who were 79 became eligible for the vaccine for one year, while those 80 and over were never eligible. As younger people turned 79, they in turn became eligible for a year.

Dr. Pascal Geldsetzer, an assistant professor of medicine at Stanford University and senior author of the study, said the age cutoff was imposed because supply was limited and the vaccine was deemed less effective in people over 80.

This allowed scientists to compare two essentially similar groups: people just eligible for the vaccine and people who just missed out. “If you take 1,000 people born one week and 1,000 people born a week later, there shouldn’t be any difference between them, except for the big difference in vaccination,” Dr. Geldsetzer said.

Researchers tracked the health records of around 280,000 people, ages 71 to 88, who did not have dementia when the rollout began. Over seven years, almost half of those eligible for the vaccine received it, while only a small number in the ineligible group did, providing a sharp natural comparison.

To limit the likelihood of differences between the groups, the researchers used statistical analysis focused on people born within one week of either side of the eligibility cutoff.
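
This design, comparing people just on either side of an eligibility cutoff, is a natural experiment often analyzed as a regression discontinuity. A toy simulation, with invented cohort sizes and effect sizes rather than the Welsh figures, shows why the comparison isolates the vaccine's effect:

```python
import random

random.seed(0)

# Toy regression-discontinuity sketch. The 20% background dementia risk,
# 50% uptake, and assumed 20% risk reduction are all invented for illustration.
def simulate_person(eligible):
    vaccinated = eligible and random.random() < 0.5   # about half of eligible vaccinate
    risk = 0.20 * (0.8 if vaccinated else 1.0)        # assumed 20% risk reduction
    return eligible, random.random() < risk           # (eligibility, developed dementia)

cohort = [simulate_person(e) for e in (False, True) for _ in range(50_000)]

def dementia_rate(eligible):
    group = [dementia for elig, dementia in cohort if elig == eligible]
    return sum(group) / len(group)

print(f"just too old: {dementia_rate(False):.3f}, just eligible: {dementia_rate(True):.3f}")
```

Because birth date is effectively random with respect to everything else, the gap between the two rates can be attributed to vaccine access rather than to lifestyle or education.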

They also combed medical records for other possible differences between the vaccinated and unvaccinated groups, such as whether one group received more dementia diagnoses, or took more medications linked to dementia risk, simply because they visited their doctors more frequently.

“They do a pretty good job with that,” said Dr. Jena, who wrote a commentary accompanying the Nature study. “They looked at nearly 200 drugs that have been shown to be linked to an increased risk of Alzheimer’s disease.”

He added, “They make a lot of effort to understand whether there may be other things coinciding with that age cutoff, other health policy changes, but that doesn’t seem to be the case.”

The study involved Zostavax, an older form of the shingles vaccine containing a weakened version of the live virus. It has since been discontinued in the US and several other countries because its protection against shingles waned over time. Shingrix, a newer vaccine containing an inactivated portion of the virus, is more effective and longer-lasting, research shows.

In research last year, Dr. Harrison and his colleagues suggested that Shingrix may be more protective against dementia than the older vaccine. Based on another natural experiment, the shift from Zostavax to Shingrix in the US in 2017, they found that over six years, people who received the new vaccine had fewer dementia diagnoses than those who received the old one. Among those diagnosed with dementia, recipients of the new vaccine went nearly six months longer before developing the condition.

There are various theories as to why the shingles vaccine protects against dementia. One possibility is that by preventing shingles, the vaccine reduces neuroinflammation caused by virus reactivation, Dr. Geldsetzer said. “Inflammation is a bad thing for many chronic diseases, including dementia,” he said.

Both the new study and the Shingrix research support that theory.

Another possibility is that the vaccine revs up the immune system more broadly. The new research provides some evidence for that theory, too. Dr. Geldsetzer’s team found that women, who tend to have more reactive immune systems and greater antibody responses to vaccination than men, experienced greater protection against dementia than men did. The vaccine also appeared more protective against dementia among people with autoimmune conditions and allergies.

Dr. Maria Nagel, a professor of neurology at the University of Colorado School of Medicine who was not involved in the study, said both theories could be true. “There is evidence of direct and indirect effects,” said Dr. Nagel, who consults for Shingrix’s manufacturer, GSK.

She said that while several studies have found that other vaccines, including flu vaccines, produce general neuroprotective effects, it makes sense that the shingles vaccine would be especially protective against cognitive impairment, since the shingles virus lies dormant in the nerves.

Although this study did not distinguish between types of dementia, other recent research on vaccines and dementia suggests that the shingles vaccine’s effect against Alzheimer’s disease is much more pronounced than against other dementias, she said, adding that some cases of Alzheimer’s are linked to compromised immunity.

The Welsh population in the study was mostly white, Dr. Geldsetzer said, but the team found a similar protective effect when analyzing UK death certificates for deaths caused by dementia. His team also replicated the results in Australia, New Zealand, and Canada.

Dr. Jena said the connection needs further study, noting that reducing the risk of dementia is not the same as preventing all cases. Still, he said, the evidence suggests that “something about exposure or access to the vaccine” affects the risk of dementia years later.

Source: www.nytimes.com

Can your diet impact your risk of developing dementia?

As we age, our memory naturally declines. However, dementia, a condition that usually develops later in life, can cause far more severe memory loss. Dementia can affect our quality of life by making it difficult to remember important information, such as our age, phone number, home address, and the names of loved ones. Although there is no cure for dementia, researchers have investigated how different lifestyle choices affect the risk of developing it.

A team of researchers recently analyzed the effects of diet on individuals who are biologically sensitive to the onset of dementia and depression. These researchers previously found that both dementia and depression are associated with the formation of new brain cells in the hippocampus, an area of the brain that creates new memories. This process is known as hippocampal neurogenesis, and problems with it, such as cells dying at increasingly high rates, can exacerbate the risk of dementia and depression. The researchers described the genetic predisposition of people with impaired hippocampal neurogenesis using a term they coined: neurogenesis-centered biological sensitivity.

The researchers wanted to determine whether diet affects neurogenesis in the hippocampus, looking for either an increased or a reduced risk of dementia and depression depending on what participants ate. Other dementia researchers have focused primarily on whether the Mediterranean diet reduces the risk of dementia. In contrast, this team examined the relationship between several vitamins and food groups and conditions including Alzheimer's disease, vascular and other types of dementia, depression, and general cognitive decline, with a focus on neurogenesis sensitivity.

The team worked with 371 people without dementia, with an average age of 76 at the start of the study. First, the researchers took blood samples from each participant to assess nutrient levels. Information from the blood samples was then used to identify who did and did not meet the criteria for neurogenesis-centered biological sensitivity. Finally, they recorded the participants' medical histories and noted their medications.

After gathering this initial information, the researchers met with participants every two years for 12 years. They interviewed them about their diet at the first follow-up visit, and monitored their mental abilities and emotional states at each visit. Over the course of the 12 years, 21% of participants developed dementia and 29% experienced symptoms related to depression.

After the 12-year study, the researchers assessed how each participant's diet affected the risk of developing Alzheimer's disease, vascular dementia, or depression. They expressed risk as odds ratios: values above 1 mean that individuals are at a higher risk of developing these conditions, and values below 1 mean a lower risk. They found that sensitive participants who reported eating more poultry, such as chicken or turkey, had an odds ratio of 0.9, indicating a lower risk of Alzheimer's disease. On the other hand, those who reported a diet containing large amounts of red meat, such as beef and pork, had an odds ratio of 1.1, indicating an increased risk of Alzheimer's disease.
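As a concrete illustration of how an odds ratio is computed from a 2×2 table, here is a minimal Python sketch. The counts below are invented for illustration only and are not from the study.

```python
# Odds ratio: (cases / non-cases among the exposed) divided by
# (cases / non-cases among the unexposed).
# All counts here are hypothetical, NOT data from the study.
def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    return (exposed_cases / exposed_noncases) / (unexposed_cases / unexposed_noncases)

print(round(odds_ratio(18, 100, 50, 203), 2))  # 0.73 -> below 1, lower odds
print(odds_ratio(10, 90, 10, 90))              # 1.0  -> no association
```

A value of exactly 1 means the exposure is associated with neither higher nor lower odds of the outcome.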

The scientists also found that sensitive participants who consumed large amounts of vitamin D from fatty fish, fortified milk, and grains had an increased odds ratio, indicating a higher risk of vascular dementia. They also found that susceptible individuals who consumed more γ-tocopherol, a form of vitamin E found in whole grains, leafy greens, and nuts, showed an increased odds ratio, or risk, for depression. However, the researchers noted that diet did not affect whether an individual experienced natural cognitive decline, and did not affect the risk of dementia in people who are not sensitive to it.

The scientists concluded that eating more poultry than red meat could reduce the risk of Alzheimer's disease in individuals with neurogenesis-centered biological sensitivity. However, they did not expect vitamins D and E to increase the risk of dementia and depression, respectively, since these vitamins are generally considered beneficial to human health. Regardless of these nuances, the researchers suggested that understanding the relationship between meat consumption and Alzheimer's disease could improve the later-life health of people with this predisposition.



Source: sciworthy.com

Study finds that consuming more dark chocolate, instead of milk chocolate, lowers risk of developing type 2 diabetes

A long-term US study found that consuming at least 5 servings of dark chocolate per week (1 serving equals a standard chocolate bar/pack or 1 oz) was associated with lower risk of type 2 diabetes compared to infrequent consumption. However, increased milk chocolate intake was associated with increased weight gain.

Consuming dark chocolate instead of milk chocolate may lower your risk of type 2 diabetes. Image credit: Sci.News.

The global prevalence of type 2 diabetes has increased significantly over the past few decades, with an estimated 463 million people affected worldwide in 2019, projected to rise to 700 million by 2045.

Type 2 diabetes is a multifactorial disease characterized by insulin resistance and impaired insulin secretion, which can lead to a number of serious complications, including cardiovascular disease, kidney failure, and vision loss.

A series of studies has highlighted the importance of lifestyle factors, such as a healthy diet, in the prevention and management of type 2 diabetes.

Higher intake of total dietary flavonoids, as well as of specific flavonoid subclasses, is associated with a lower risk of type 2 diabetes.

Randomized controlled trials have shown that these flavonoids exert antioxidant, anti-inflammatory, and vasodilatory effects that may benefit cardiometabolic health and reduce the risk of type 2 diabetes, but the data have been inconsistent.

Chocolate, made from the beans of the cacao tree (Theobroma cacao), is one of the foods with the highest flavanol content and is a popular snack around the world.

However, the association between chocolate intake and risk of type 2 diabetes remains controversial due to inconsistent results obtained in observational studies.

For the new research, Binkai Liu and colleagues at Harvard University's T.H. Chan School of Public Health combined data from three long-term U.S. observational studies of female nurses and male health professionals who had no history of diabetes, heart disease, or cancer at the time of recruitment.

Using food-frequency questionnaires completed every four years over an average 25-year follow-up period, they analyzed the relationship between type 2 diabetes and total chocolate intake in 192,208 participants, and between type 2 diabetes and chocolate subtype (dark and milk) intake in 111,654 participants.

Because weight change strongly predicts type 2 diabetes risk, the researchers also used these food questionnaires to assess participants' total energy intake.

In the overall chocolate analysis, 18,862 people developed type 2 diabetes. After adjusting for personal, lifestyle, and dietary risk factors, the authors found that people who ate any type of chocolate at least five times a week had a significantly lower incidence of type 2 diabetes, by 10%, than those who ate little or no chocolate.

In the chocolate subtype analysis, 4,771 people developed type 2 diabetes. After adjusting for the same risk factors, those who ate dark chocolate at least five times a week had a significantly lower risk of type 2 diabetes, by 21%, while no significant association was found with milk chocolate intake.

The researchers also found that each additional weekly serving of dark chocolate reduced the risk of type 2 diabetes by 3%, a dose-response effect.
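If that 3%-per-serving estimate is read multiplicatively (an assumption for illustration; the study's own subgroup figures come from its adjusted models, not from this kind of naive compounding), the implied relative risk for a few extra weekly servings can be sketched as:

```python
# Sketch: implied relative risk when each additional weekly serving is assumed
# to scale risk by (1 - 0.03). This multiplicative reading is an illustration,
# not a result reported by the paper.
def relative_risk(extra_servings_per_week, per_serving_reduction=0.03):
    return (1 - per_serving_reduction) ** extra_servings_per_week

for n in (1, 2, 5):
    print(n, round(relative_risk(n), 3))
```

Under this reading, five extra weekly servings imply roughly a 14% risk reduction, which differs from the 21% reported for the five-servings-a-week subgroup because the two figures answer different questions.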

Increased milk chocolate intake was associated with long-term weight gain, but dark chocolate intake was not.

Dark chocolate has similar levels of energy and saturated fat to milk chocolate, but the high levels of flavanols it contains may offset the effects of saturated fat and sugar on weight gain and other cardiometabolic risks such as diabetes.

“Increased consumption of dark chocolate, but not milk chocolate, was associated with a lower risk of type 2 diabetes,” the scientists said.

“Increased milk chocolate intake was associated with long-term weight gain, but dark chocolate intake was not.”

“Further randomized controlled trials are needed to replicate these findings and further investigate the mechanisms.”

The study was published this week in the BMJ.

_____

Binkai Liu et al. 2024. Chocolate intake and risk of type 2 diabetes: a prospective cohort study. BMJ 387: e078386; doi: 10.1136/bmj-2023-078386

Source: www.sci.news

The impact of cholesterol levels on the risk of developing dementia

Recent research has found a significant connection between cholesterol levels and the risk of developing dementia. It is not just high cholesterol levels that are concerning, but also the fluctuations in levels over time. A study of 10,000 individuals suggests that these fluctuations could increase the chances of developing dementia by up to 60 percent.

The study also indicates that large variations in cholesterol levels, from high to low, are linked to a higher risk of general cognitive decline, regardless of dementia. Dr. Jen Zhou, a researcher at Monash University in Australia, emphasized the importance of closely monitoring and actively intervening to prevent such fluctuations.

The research focused on two main types of cholesterol – “bad cholesterol” or LDL and “good cholesterol” or HDL. Large fluctuations in LDL levels were found to accelerate cognitive decline, while fluctuations in HDL levels did not impact cognitive decline risk significantly.

The study highlighted the potential adverse effects of LDL cholesterol levels above 130 mg per deciliter and the role of LDL fluctuations in destabilizing atherosclerotic plaques in arteries, potentially leading to impaired blood flow to the brain.

The study involved individuals in their 70s from Australia and the United States who did not have dementia at the start of the observation period. By the end of the study, a portion of participants developed dementia while others experienced cognitive decline. Those with stable cholesterol levels had a lower risk of neurological symptoms.

Globally, high levels of bad cholesterol contributed to millions of deaths in 2021. To manage cholesterol levels, individuals are advised to undergo regular medical check-ups and make lifestyle changes such as increasing physical activity, quitting smoking, and consuming a healthy diet.

According to Emily McGrath from the British Heart Foundation, lowering cholesterol can be achieved through various lifestyle adjustments, including reducing saturated fats and opting for foods rich in unsaturated fats like olive oil, nuts, seeds, and oily fish.


Source: www.sciencefocus.com

High potency cannabis increases the likelihood of developing cannabis-induced psychosis

Anders Gilliland was only 17 years old when he started to lose touch with reality.

“He believed there was a higher being communicating with him, telling him what to do and who he was,” said his mother, Kristen Gilliland, who lives in Nashville.

Her son, who had been using marijuana since he was 14, was diagnosed with schizophrenia, a chronic mental illness with symptoms such as delusions, hallucinations, and incoherent speech.

He began taking antipsychotic medication but eventually stopped due to side effects. He turned to heroin to quiet the voices in his head and tragically died from an accidental drug overdose at age 22 in 2019.

“If he hadn’t started using marijuana, he might still be here today,” said Gilliland, a neuroscientist at Vanderbilt University. Despite having a family history of schizophrenia, she believes her son’s marijuana use triggered a psychotic episode and led to his condition.

Anders was part of a group of young men at heightened risk of developing psychosis due to marijuana use. Studies from Denmark and Britain suggest a connection between heavy marijuana use and mental disorders like depression, bipolar disorder, and schizophrenia. Researchers believe that the increased potency of THC, the psychoactive compound in marijuana, may exacerbate these symptoms in individuals predisposed genetically. THC levels in marijuana have been rising over the years.

Kristen Gilliland holds a photo of her son Anders, who was diagnosed with schizophrenia due to marijuana-induced psychosis and died of an accidental overdose. NBC News

“We’re seeing a rise in marijuana-induced psychosis among teenagers,” said Dr. Christian Thurstone, an addiction expert and child psychiatrist at the University of Colorado School of Medicine in Denver.

Is higher potency marijuana more dangerous?

Nora Volkow, director of the National Institute on Drug Abuse, stated that the higher the potency of a cannabis product, the more negative effects it is likely to have on users.

“Those who consume higher doses are at a greater risk of developing psychosis,” she explained.

Research on the adverse effects of high THC levels is limited, but a 2020 study found that high-potency cannabis products were associated with an increased risk of hallucinations and delusions compared to lower-potency variants.

“There seems to be a correlation between potency and the risk of psychosis, but further research is needed,” said Ziva Cooper, director of UCLA’s Center for Cannabis and Cannabinoids.

Research suggests that a proportion of individuals with cannabis-induced psychosis may go on to develop schizophrenia or bipolar disorder.

Mr. Thurstone highlighted the particular concern regarding young people and adolescents.

“Current research shows that the risk of psychosis is dependent on the dose of marijuana, especially during adolescence. Higher exposure during this critical period increases the likelihood of psychosis, schizophrenia, and potentially severe mental illnesses,” he stated.


Another issue with high-potency products is the risk of developing cannabis use disorder or marijuana addiction. Increased exposure to stronger cannabis products may lead to addiction, although more research is required to definitively establish this connection.

“There is clear scientific evidence that marijuana can be psychologically addictive and habit-forming, and even physically habit-forming,” Thurstone warned. “It creates tolerance, requiring increased usage for the same effect.”

Approximately 1 in 10 individuals who start using cannabis may become addicted, according to the Centers for Disease Control and Prevention.

How the potency of cannabis is related to psychosis

Marijuana overstimulates cannabinoid receptors in the brain, leading to a high. This stimulation can impair cognitive functions, memory, and problem-solving abilities.

While the exact mechanisms of how marijuana induces psychosis are not fully understood, scientists believe it interferes with the brain’s ability to differentiate between internal thoughts and external reality.

“In the ’60s, ’70s, ’80s, and early ’90s, marijuana had THC content of about 2% to 3%,” noted Thurstone, highlighting the significant increase in potency levels in recent years.

Patrick Johnson, an assistant store manager at Frost Exotic Dispensary in Colorado, has witnessed the rise in potency firsthand, especially after the legalization of recreational marijuana in 2014.

Since then, 24 states, two territories, and Washington, D.C., have legalized marijuana for medical and recreational use.

As cannabis consumption grows across the nation, the demand for high-potency products is increasing, experts suggest.

“After legalization, I’ve seen potency rise from 19-20% to 30-35%,” Johnson remarked.

Currently, his store offers strains ranging from 14% to 30%, with most customers preferring stronger varieties.

Mahmoud Elsohly, a cannabis researcher at the University of Mississippi, explained that one reason for increased potency is users developing tolerance to the drug over time. This has led to a steady increase in THC content over the years.

“People need more potent products to achieve the desired high,” he noted.

Previously, a joint with 2% THC might have been enough, but as tolerance develops, individuals may need multiple joints or higher THC concentrations for the same effect.

Are some forms of marijuana safer?

Cannabis potency primarily refers to the THC content in the smokable parts like the flower or bud.

THC levels in flowers can reach up to 40%, while concentrates and oils may contain levels as high as 95%.

The challenge, according to UCLA’s Cooper, lies in the absence of a standardized dose for cannabis products, making it hard to predict individual reactions.

Establishing unit doses for inhaled products is also complicated. A joint can contain 100 to 200 milligrams of THC, but factors like inhalation depth and frequency of puffs affect actual exposure.

On the other hand, edibles typically contain 5 to 10 milligrams per serving. Efforts are underway to standardize dosing for edibles and regulate THC intake. For example, New York State limits edibles to 10 mg per serving.
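The relationship between flower weight, potency, and total THC is simple arithmetic. A short sketch (assuming typical joint weights of 0.5 to 1 gram, which the article does not state) shows how the 100 to 200 milligram range for a joint arises:

```python
# Total THC (mg) = flower weight (g) x potency (%) x 10,
# since 1 g = 1000 mg and potency% / 100 gives the THC fraction.
def thc_mg(flower_grams, potency_pct):
    return flower_grams * potency_pct * 10

# Joint weights of 0.5-1 g are an assumption for illustration:
print(thc_mg(0.5, 20))  # 100.0 mg
print(thc_mg(1.0, 20))  # 200.0 mg
print(thc_mg(1.0, 2))   # 20.0 mg -- a 2% joint from earlier decades
```

By comparison, a single edible serving at 5 to 10 milligrams delivers an order of magnitude less THC than a modern joint.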

How high can THC go?

Volkow of the National Institute on Drug Abuse believes that excessively high THC levels may induce extreme reactions like agitation and paranoia, predicting that marijuana flower THC levels won’t exceed 50%.

Cooper added that there is a threshold for THC production, but manufacturers are finding innovative ways to increase potency.

“The industry is boosting THC levels in plant products by adding extra THC, like injecting it into pre-rolled cannabis cigarettes,” she said. “We’re witnessing higher THC exposure levels than ever before.”


Source: www.nbcnews.com

The Potential of Minecraft for Developing Adaptive AI

Minecraft is a game for humans, but it can also be useful for AI


Minecraft is not only the best-selling video game of all time; it could also be the key to creating adaptive artificial intelligence models that can handle a variety of tasks, just like humans.

Stephen James and colleagues at the University of the Witwatersrand in South Africa developed a benchmark test that uses Minecraft to measure the general intelligence of AI models. The benchmark, MinePlanner, evaluates an AI's ability to ignore unimportant details when solving complex, multi-step problems.

According to James, much AI training is “cheating” by giving the model all the data it needs to learn how to do a job, and nothing irrelevant. While this is a useful approach if you're writing software to perform a specific task, such as predicting the weather or folding proteins, it's not useful if you're trying to create artificial general intelligence (AGI).

James says that future AI models will need to tackle such wicked problems, and he hopes MinePlanner will guide that research. An AI working on in-game problems encounters scenery, extraneous objects, and other details that are not needed to solve the problem and should be ignored; it must investigate its surroundings and decide for itself what is necessary and what is not.

MinePlanner consists of 15 construction problems, each with easy, medium, and hard settings, for a total of 45 tasks. The AI may need to perform intermediate steps to complete each task, such as building a series of stairs to place blocks at a certain height. This requires the AI to break the problem down and plan ahead to achieve the goal.
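The benchmark's task grid can be enumerated in a few lines; the problem names below are invented placeholders, not MinePlanner's actual task names, but the 15 × 3 structure matches the description above:

```python
# Hypothetical enumeration of MinePlanner's task grid; the problem names are
# invented placeholders, but the 15 x 3 = 45 structure matches the article.
from itertools import product

problems = [f"build-{i:02d}" for i in range(1, 16)]   # 15 construction problems
difficulties = ["easy", "medium", "hard"]             # 3 settings each

tasks = [f"{p}-{d}" for p, d in product(problems, difficulties)]
print(len(tasks))  # 45
```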

Experiments with the state-of-the-art planning AI models ENHSP and Fast Downward, open-source programs designed to sequence operations in pursuit of an overall goal, showed that neither model could complete any of the hard problems. Fast Downward completed only one medium problem and five easy problems, while ENHSP achieved slightly better results, completing all but one of the easy problems and all but two of the medium problems.

“You can't step in and tell a human designer exactly what to care about and what not to care about for every task that an AI needs to solve,” James said. “That's the problem we're trying to address.”


Source: www.newscientist.com

HyperVerse cryptocurrency targeted developing countries before collapsing, leading to investor ‘suicides’

The HyperVerse cryptocurrency scheme targeted investors in developing countries in Asia, Africa, and the Pacific until it eventually collapsed, leaving many people unable to access their funds.

One investor said that in Nepal, some people who took out bank loans to buy HyperVerse packages felt suicidal when they could not withdraw their money, and in some cases even self-harmed.

A promoter of UK-based HyperVerse, who toured five African countries in 2022, told a Ghanaian radio station that millions of people around the world had benefited from blockchain technology “without really understanding it.”

HyperVerse, which was linked to a previous scheme known as HyperFund, was launched by Australian blockchain entrepreneur Sam Lee and his business partner Ryan Xu, two of the founders of the collapsed Australian company Blockchain Global.

Despite one overseas regulator warning that it could be a “scam” and another calling HyperVerse a “suspected pyramid scheme”, a Guardian Australia investigation revealed widespread losses from a scheme that escaped regulatory warnings in Australia.

The push to expand the scheme, which rewarded existing members financially for bringing in new members, appears to have spread it into hitherto untapped markets, including developing countries.

In January 2022, the Central Bank of Nepal issued a public warning naming HyperFund and several other unrelated schemes, saying people were being tempted to participate in such cryptocurrency products by the promise of “high returns in a short period of time”.

In a February 2023 Zoom meeting between Nepali Hyper members and Lee, the members said people were angry because they could not withdraw funds from the platform.

One member told Mr Lee that he was “sad and grumpy” and was fielding requests from people who could not access the funds they had put into the scheme.

“We really need to do something fast. You may be somewhere far away and not under direct pressure, but people like us live in the neighborhood. Our relationships have deteriorated, and whenever something happens, it’s people like us who wake up in the morning with people at the door.”

Q&A

How did the HyperVerse investment scheme work?


Investors were offered “membership” to HyperVerse, a “blockchain community” where members could “explore the HyperVerse ecosystem.”

The minimum membership cost was US$300, which was converted into HyperUnits after investment.

The scheme promised a minimum return of 0.5% per day, reaching a 300% return in 600 days.

Members were encouraged to “reinvest” their earnings and were given more HyperUnits if they did not withdraw funds once they became available.

Members were also paid in HyperUnits for recruiting new members, with a referral fee on a sliding scale based on the number of people recruited. Additional commissions were paid based on the number of people these recruits subsequently recruited, down to the 20th level.

HyperUnits were linked to various crypto tokens and, once matured, could be withdrawn and converted into other cryptocurrencies.

While early investors were able to make profits and withdraw money, this system has left many investors unable to access their funds.
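The advertised return is simple, non-compounding arithmetic: 0.5% per day for 600 days is exactly 300% of the amount paid in. A minimal sketch, taking the scheme's own figures at face value:

```python
# HyperVerse's advertised reward: 0.5% of the amount invested per day,
# accruing as simple (non-compounding) interest over 600 days.
def promised_payout(principal, daily_rate=0.005, days=600):
    return principal * daily_rate * days

print(promised_payout(300))  # 900.0 -> 3x the $300 minimum membership
print(0.005 * 600)           # 3.0  -> the advertised 300% return
```

A guaranteed fixed return of this size, independent of any underlying business, can only be paid out of new members' deposits, which is the hallmark of the pyramid dynamics described above.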



A Nepali man living in the UK told Lee that some people in his home country who had taken out bank loans to buy HyperVerse packages had felt suicidal, and that one of his acquaintances had self-harmed.

“There have also been instances where people borrowed money to buy this company’s packages because they were presented in such a favorable way. The benefits seemed to outweigh the risks, so people took out loans from banks and bought into this project,” the Nepali man said.

“I don’t want to name names, but there was a case of self-harm in my hometown [in Nepal]. We have received several SOS calls. People in this situation feel it is better to take a suicidal step than to wait for this company to come up with a repayment plan.”

In response, Mr Lee said on a Zoom call that he hoped vulnerable people would be prioritized in recovering their initial investment, but denied he was responsible.

“I don't want to say anything about these individual incidents because I'm not in a position to empathize with them. But, you know, we just have to recognize that many other industries have been misunderstood, and this is just the newest industry to be misunderstood,” Lee said.

“And the way to prevent something like this from happening again is that we need to increase everyone's literacy about technology and how these opportunities work.”

Sam Lee, one of the founders of the failed Australian cryptocurrency company Blockchain Global. Photo: Blockchain Global/Facebook

Lee blamed the situation on the “corporate” team behind HyperVerse.

Despite speaking at HyperVerse's official launch, he denied any involvement in HyperVerse, saying he was only involved in the fund management side through his role at HyperTech Group, of which he is chairman.

Another person who attended the February 2023 meeting challenged Mr. Lee on this claim.

“Community leaders have always projected you as a Midas-esque figure – HyperTech, HyperVerse, HyperFund, whatever, it’s Sam Lee, it’s Sam Lee, it’s Sam Lee. That’s what we’ve been told every day,” they said.

In response, Lee acknowledged that he could not completely dissociate himself from HyperVerse, even though he said he was not involved.

“The company put out misleading information, which of course management used to drive sales, so ultimately the fault lies with the company. But I am not 100% blameless, because if things were misunderstood, I could have always issued a press release or a statement to clarify,” he said.

Source: www.theguardian.com

Mark Zuckerberg commits to developing advanced AI to address concerns

Mark Zuckerberg has faced accusations of being irresponsible in his approach to artificial intelligence after working to develop AI systems as powerful as human intelligence. The Facebook founder has also raised the possibility of making it available to the public for free.

Meta’s CEO announced that the company intends to build an artificial general intelligence (AGI) system and plans to open source it, making it accessible to outside developers. He emphasized that the system should be “responsibly made as widely available as possible.”

In a Facebook post, Zuckerberg stated that the next generation of technology services requires the creation of complete general-purpose intelligence.

Although the term AGI is not strictly defined, it generally refers to a theoretical AI system capable of performing a range of tasks at a level of intelligence equal to or exceeding that of humans. The potential emergence of AGI has raised concerns among experts and politicians worldwide that such a system, or a combination of multiple AGI systems, could evade human control and pose a threat to humanity.

Zuckerberg expressed that Meta would consider open sourcing its AGI or making it freely available for developers and the public to use and adapt, similar to the company’s Llama 2 AI model.

Dame Wendy Hall, a professor of computer science at the University of Southampton and a member of the United Nations advisory body on AI, expressed concern about the potential for open source AGI, calling it “really, very scary” and labeling Zuckerberg’s approach as irresponsible.

According to Hall, “Thankfully, I think it will still be many years before those aspirations become a reality.” She stressed the need to establish a regulatory system for AGI to ensure public safety.

Last year, Meta participated in the Global AI Safety Summit in the UK and committed to help governments scrutinize artificial intelligence tools before and after their release.

Another UK-based expert emphasized that decisions about open sourcing AGI systems should not be made by technology companies alone but should involve international consensus.

In an interview with tech news website The Verge, Zuckerberg indicated that Meta would lean toward open sourcing AGI as long as it is safe and responsible.

Meta’s decision to open source Llama 2 last year drew criticism, with some experts likening it to “giving people a template to build a nuclear bomb.”

OpenAI, the developer of ChatGPT, defines AGI as “an AI system that is generally smarter than humans.” Meanwhile, Google DeepMind’s head, Demis Hassabis, suggested that AGI may be further out than some predict.

OpenAI CEO Sam Altman warned at the World Economic Forum in Davos, Switzerland, that further advances in AI will be impossible without energy supply breakthroughs, such as nuclear fusion.

Zuckerberg pointed out that Meta has built an “absolutely huge amount of infrastructure” to develop the new AI system, but did not specify a development timeline. He also mentioned that a successor to Llama 2 is in the works.

Source: www.theguardian.com

Intrinsic, supported by Y Combinator, is developing essential infrastructure for trust and safety teams

Karine Mellata and Michael Lin met several years ago while working on Apple’s Fraud Engineering and Algorithmic Risk team. Both Mellata and Lin were involved in addressing online fraud issues such as spam, bots, account security, and developer fraud among Apple’s growing customer base.

Despite their efforts to develop new models to respond to evolving patterns of abuse, Mellata and Lin felt they were falling behind, stuck rebuilding core elements of their trust and safety infrastructure.

“As regulation brings increased scrutiny to teams whose trust and safety responses have been somewhat ad hoc, we saw a real opportunity to help modernize this industry and build a safer internet for everyone,” Mellata told TechCrunch in an email interview. “We dreamed of a system that could magically adapt as quickly as the abuse itself.”

And so Mellata and Lin co-founded Intrinsic, a startup that aims to give safety teams the tools they need to detect and prevent abuse on their products. Intrinsic recently raised $3.1 million in a seed round with participation from Urban Innovation Fund, Y Combinator, 645 Ventures, and Okta.

Intrinsic’s platform is designed to moderate both user-generated and AI-generated content, providing the infrastructure customers (primarily social media companies and e-commerce marketplaces) need to detect and take action on content that violates their policies. Intrinsic focuses on integrating safety products and automatically orchestrates tasks like banning users and flagging content for review.
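As a rough illustration of the kind of policy-to-action orchestration described, here is a minimal sketch. This is not Intrinsic's actual API; all names, labels, and thresholds are hypothetical.

```python
# Hypothetical sketch of routing a classifier's decision to a moderation
# action. None of these names or thresholds come from Intrinsic's product.
from dataclasses import dataclass

@dataclass
class Decision:
    label: str        # e.g. "prohibited_listing", "financial_advice"
    confidence: float  # classifier confidence in [0, 1]

def route(decision, ban_threshold=0.95, review_threshold=0.7):
    """Map a classifier decision to one of three moderation actions."""
    if decision.confidence >= ban_threshold:
        return "remove_and_sanction_user"
    if decision.confidence >= review_threshold:
        return "flag_for_human_review"
    return "allow"

print(route(Decision("prohibited_listing", 0.98)))  # remove_and_sanction_user
print(route(Decision("financial_advice", 0.75)))    # flag_for_human_review
```

The human-review middle band is the design point the article emphasizes: automation handles the clear-cut cases, while borderline content is escalated to the manual review and labeling tools.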

“Intrinsic is a fully customizable AI content moderation platform,” said Mellata. “For example, Intrinsic can help publishers creating marketing materials avoid giving financial advice that carries legal liability. We can also help marketplaces discover listings such as:

Mellata notes that there are no off-the-shelf classifiers for such sensitive categories, and claims that even for a well-resourced trust and safety team, adding a new automated detection category in-house can take weeks of engineering, or in some cases several months.

Asked about rival platforms such as Spectrum Labs, Azure, and Cinder (an almost direct competitor), Mellata said he sees Intrinsic as superior in terms of (1) explainability and (2) significantly expanded tooling. He explained that Intrinsic allows customers to “ask questions” about mistakes in content moderation decisions and get an explanation of why they were made. The platform also hosts manual review and labeling tools that allow customers to fine-tune moderation models on their own data.

“Most traditional trust and safety solutions were inflexible and not built to evolve with exploits,” Mellata said. “Now more than ever, resource-constrained trust and safety teams are looking to vendors to help them reduce moderation costs while maintaining high safety standards.”

Without third-party auditing, it is difficult to determine how accurate a particular vendor’s moderation models are, or whether they are susceptible to the kinds of bias that plague content moderation models elsewhere. Either way, Intrinsic appears to be gaining traction thanks to its “large and established” enterprise customers, who are signing deals in the “six-figure” range on average.

Intrinsic’s near-term plans include increasing the size of its three-person team and expanding its moderation technology to cover not just text and images, but also video and audio.

“The widespread slowdown in the technology industry has increased interest in automation for trust and safety, and this puts Intrinsic in a unique position,” Mellata said. “COOs care about reducing costs. Chief compliance officers care about mitigating risk. Intrinsic helps with both, and catches more abuse.”

Source: techcrunch.com