The Unsettling Reality of Medical Cannabis and Its Impact on Mental Health

In 2018, the legalization of medical cannabis in the UK marked a pivotal change, driven by campaigns advocating for children with treatment-resistant epilepsy.

The legal reforms permit specialist medical consultants to prescribe cannabis-based products for medicinal use (CBPMs) for a variety of conditions where they judge it to be in the patient’s best interest.

Despite this legalization, the possession and use of cannabis (classified as a class B drug) without a valid prescription continues to be illegal in the UK.

Most cannabis products available are unlicensed, lacking endorsement from the Medicines and Healthcare products Regulatory Agency (MHRA), resulting in limited prescriptions through the National Health Service (NHS). This gap has inadvertently triggered a burgeoning private market.

Currently, more than 30 specialist cannabis clinics are registered with the Care Quality Commission, and an estimated 80,000 patients hold prescriptions for cannabis products. Conditions treated range from chronic pain and anxiety to ADHD.

Data reveals that 42% of patients were prescribed medical cannabis for mental health issues such as anxiety, depression, PTSD, and OCD, aligning with trends observed in Australia and the US.

The UK stands as a major producer of medical cannabis. Photo courtesy of Getty.

However, a recent review published in The Lancet Psychiatry assessed over 50 randomized controlled trials (RCTs) and found “no evidence” supporting the efficacy of cannabinoids for treating conditions such as anxiety, PTSD, substance use disorders, ADHD, bipolar disorder, psychotic disorders, or anorexia.

While some efficacy was noted for cannabis use disorder, insomnia, Tourette syndrome, and autism spectrum disorder, these findings were categorized as “low quality.”

The Advisory Council on the Misuse of Drugs (ACMD) is conducting a review of medical cannabis prescribing in the UK, focusing on any “unintended consequences” of the recent legal changes.

Professor Owen Bowden-Jones, a former ACMD chair, said the results suggest the benefits of medical cannabis may have been “overestimated” for numerous conditions, and that these products “should not be administered for psychiatric conditions lacking supportive evidence.”

“We must focus on reducing barriers to facilitate superior research that further explores cannabis product effects,” he added.

The review asserts that routine cannabinoid use for mental health conditions is “seldom justified,” raising a critical question: why is cannabis being prescribed despite limited evidence of its effectiveness?

Treatment Options

As the saying goes, “absence of evidence is not evidence of absence.” Dr. Niraj Singh, a consultant psychiatrist in the UK, has been prescribing medical cannabis for over six years.

“Numerous patients have reported that this treatment effectively addresses a range of conditions, and most use it responsibly. In my experience, it has yielded positive results, enabling patients to lead happy, fulfilling lives,” Singh remarked.

Many patients seeking treatment at cannabis clinics have reportedly exhausted all traditional options or lack access to adequate mental health support. As of January 2026, 1.5 million adults were in contact with NHS mental health services, and 8.7 million people in the UK were prescribed antidepressants in 2023–24.

In a survey by the United Patient Alliance, a patient dealing with anxiety, depression, and PTSD expressed feeling “seen and supported” after receiving effective treatment without harmful side effects associated with previous prescriptions.

“In instances where individuals have plateaued in treatment options, medical cannabis is making a significant difference,” Singh expressed.

Peer-reviewed observational studies link cannabis to improved symptoms and quality of life in conditions such as PTSD, OCD, and insomnia. However, observational studies were excluded from the aforementioned review because their susceptibility to bias means they cannot establish causality.

Despite the need for more robust clinical trials, Professor David Nutt, former chair of ACMD and founder of the independent charity Drug Science, argues that RCTs alone do not offer sufficient data on a drug’s effectiveness.

This sentiment is echoed by Sir Michael Rawlins, former chair of the MHRA and of the National Institute for Health and Care Excellence (NICE). In a speech at the Royal College of Physicians, he emphasized the need for real-world evidence that could yield “better clinical data and statistical power.”

According to Nutt, “Placebo-controlled trials are costly and involve highly selective patient populations, limiting their generalizability.” He also highlighted that cannabis’s numerous active compounds, which vary vastly in dosage and formulation, pose significant challenges for double-blind, placebo-controlled studies. Professor Mike Barnes, president of the Association of Medical Cannabis Clinicians, emphasized the need for a more nuanced approach to understanding mental health prescribing.

Clinical Supervision

Medical cannabis can induce side effects, including heightened anxiety and paranoia, making it unsuitable for individuals with a history of psychosis.

According to a survey published in BMJ Mental Health, those using cannabis for self-medication tend to use it more frequently and consume higher levels of tetrahydrocannabinol (THC), resulting in increased paranoia.

“Cannabis is not devoid of side effects,” said Marta Di Forti, Professor of Drug Use, Genetics and Psychosis at King’s College London, who runs a London clinic for people with cannabis-related mental health problems.

She recounted cases where patients developed complications after being prescribed products containing high THC levels, leading to hospitalizations for psychotic symptoms. Yet, much of our understanding in this area remains anecdotal.

“There is valid reasoning for prescribing cannabis as medication,” she noted. “However, there must be comprehensive evidence and proper oversight, which is currently lacking.”

The Association of Medical Cannabis Clinicians recommends a peer-panel review for prescriptions exceeding 60 grams per month or containing over 25% THC. Like other controlled substances, CBPMs require diligent clinical oversight, thorough evaluation, and ongoing monitoring, especially in complex cases with significant mental health histories.

While Singh noted that side effects are relatively rare, he expressed concern about the rising availability of high-THC products. “Checks and balances are imperative,” he insisted, “as adjustments to THC concentrations must be carefully monitored.”

Prescribers maintain that a strong clinical oversight process is in place, stating they’ve never felt pressure to prescribe. Eligibility for medical cannabis entails having undergone at least two previous treatments, receiving an evaluation from a psychiatrist, and being reviewed by a multidisciplinary team.

Nonetheless, some critics argue that clinics should enhance support and training for prescribers and have a responsibility to foster research that substantiates their claims. “The industry has not adequately collected and analyzed patient outcomes,” Barnes stated. “Clinics have a moral obligation to gather and share data whenever possible.”

In 2018, cannabis became legal for medical use in the UK with a prescription. Use without a prescription remains illegal. Photo credit: Getty.

Evidence Gap

There is a shared consensus on the urgent need to develop a robust evidence base. However, finding common ground proves challenging. Some advocate for cannabis’s efficacy, while others dispute it, with a lack of substantial research to confirm either stance.

Nutt emphasized that the current clinical research system is inadequate for medical cannabis. “In 2018, the Health Ministry pledged to conduct efficacy trials for children with epilepsy, but no progress has been made. This reflects a lack of interest from pharmaceutical companies, since plant medicines cannot be patented.”

This challenge cannot be solved solely by a call for further research, he noted, but requires prioritizing real-world data and practical experience to support cannabis in clinical settings.

Meanwhile, patients express fears of being pushed back into the illegal market, where they have no access to medical oversight or regulated products, which is widely viewed as more dangerous.

Denying access to medical cannabis on the basis of “incomplete evidence” not only misrepresents the science but also inflicts harm on patients who rely on it, according to the United Patient Alliance.

“Real-world evidence studies, patient-reported outcomes, and research focusing on treatment-resistant populations are critically needed,” they added. “We do not ask for science to be ignored; we urge it to catch up with patient experiences.”


Source: www.sciencefocus.com

Exploring the Implications of an Extra Dimension in the Universe: What It Means for Science and Reality

Extra dimensions allow for even more complex shapes

Vitalij Chalupnik / Alamy and NASA, ESA, and K. Stapelfeldt (JPL)

One of the most striking interviews of my career began with me sitting at my desk, head in my hands, discussing extra dimensions with a physicist over the phone. I sought to grasp the implications of dimensions being “small.” Amidst the conversation, I tuned out the laughter of a colleague and asked, “They’re not as small as jellybeans, are they?” The answer? It’s a complex one.

While extra dimensions are routinely referenced in physics, their true significance is often overlooked. They frequently arise in discussions of string theory, a revolutionary concept proposing that everything stems from minuscule, vibrating strings whose vibrations give rise to particles, from electrons to quarks. My skepticism about string theory stems from its ideas ranging from the profoundly challenging to the outright untestable. These theories also usually depend on extra dimensions to conceal the curled-up strings, a notion I find difficult to wrap my head around.

Some established explanations, like the Flatland novella, provide entertaining yet enlightening allegories, helping us understand what it would be like to encounter another dimension while accustomed to four. However, most discussions devolve into ambiguity before we move on.

If extra dimensions are indeed real, they could resolve significant issues in both physics and cosmology, making it imperative to explore them. A notable challenge is gravity: it is puzzlingly weaker than the other fundamental forces. This anomaly might arise because gravity “leaks” into other dimensions, diluting its strength in our observable universe. Recent hypotheses suggest that dark energy might similarly diminish over time due to an evolving extra dimension, affecting the energy balance of our familiar four-dimensional setup: three spatial dimensions and one of time.

Moreover, this concept is captivating, even as I grapple with the likelihood of extra dimensions existing alongside our own.

One of the most comprehensible kinds of additional dimensions can be found in Flatland, a narrative about geometric entities inhabiting a two-dimensional realm. They navigate a flat surface, much like a puck on ice, and perceive other shapes merely as lines from their limited viewpoint.

Conversely, beings with access to an additional dimension (humans, for example) see these entities from above or below, recognizing them as shapes rather than mere lines. In our three-dimensional world, we can lift shapes out of the plane and rotate them. To the forms remaining in Flatland, a lifted shape no longer appears as a stable line; instead they see an intriguing cross-section wherever the shape intersects their dimension.

When applied to our universe — with three spatial dimensions and one temporal — even higher-dimensional entities could peer within our world, potentially drawing us into their dimensional space. Observers left behind would witness shifting cross-sections of our likenesses as we traverse this five-dimensional reality.

A variation of this scenario is the brane-world hypothesis, suggesting that our universe exists as the boundary of a higher-dimensional space. Originally proposed in 1999, this concept has recently gained traction as a feasible integration of our universe with the principles of string theory.

In one interpretation, our universe resides at the precipice between a higher-dimensional construct known as hyperspace and the void. Essentially, we occupy the very edge of existence, intriguingly termed the End-of-the-World Brane. The fundamental particles we recognize correspond to the endpoints of five-dimensional strings within hyperspace; yet, like the shapes in Flatland, we can never perceive the entirety of these strings.

This theory introduces five dimensions, but there could be countless others, most not resembling our universe at all. Imagine time not merely progressing forward and backward but also moving sideways (details omitted). Some dimensions could be the size of a jellybean, or smaller still.

Are extra dimensions like nesting dolls?

Lars Ruecker/Getty Images

Consider the analogy of a set of glass matryoshka dolls, each nestled within a larger one: which doll you occupy corresponds to the dimensional level you inhabit (for us, likely four), with the inner dolls standing in for the inner dimensions. Dimensions the size of a jellybean may seem physically minute, but they could represent expansive realities, akin to bubbles trapped in glass. Each bubble encapsulates a small realm, a kind of pocket universe.

Wondering about entry into this pocket world? These dimensions are often extremely diminutive, making it improbable for anything larger than a jellybean, or perhaps a photon, to encounter them. Their minuscule nature is partly why they remain elusive; more sizeable dimensions would certainly attract attention. However, discovering smaller dimensions is not entirely out of the question. Think of light passing through a glass matryoshka doll: air bubbles distort and reflect the light. A parallel phenomenon would occur with real additional dimensions.

Imagine a gravitational wave traversing one of our universe’s bubbles. It could emerge distorted, and with a potent enough detector, such distortions could be measured. Other investigative methods might include subtle quantum effects and exotic particles believed to originate exclusively from extra dimensions.

Researchers utilizing gravitational wave detectors, particle colliders, and traditional telescopes are diligently searching for these faint signs. However, no concrete evidence has been unearthed yet. Nonetheless, the very endeavor of seeking out extra dimensions could undermine my initial assertion that string theory lacks testable predictions. Should we eventually uncover such dimensions, it could significantly reshape my perspective on string theory — and our overarching understanding of the universe.


Source: www.newscientist.com

Unraveling the Peculiar Rules of Reality: Are They Starting to Shift?

There is a concerning issue with the “Higgs field”, the pivotal energy field responsible for giving particles mass. Recent studies indicate it may be dangerously close to becoming inherently unstable. In the absence of particles, the Higgs field exhibits a non-zero background “vacuum energy.”

However, scientists suggest that this could merely represent a “trough” of energy, rather than the absolute minimum energy of the Higgs field.

An analogy for this scenario is a ball rolling down a hill, getting stuck in a crater. The ball remains stable in the crater, yet it hasn’t reached the lowest energy point possible.

Physicists describe this condition as a “metastable” state, with the resulting background energy referred to as a “false vacuum.” Current measurements indicate we exist in a universe characterized by this false vacuum.

But what if the Higgs field unexpectedly transitioned to a lower energy state? Such an event, termed a “vacuum collapse,” could spell disaster for our universe. The constants of nature would alter, resulting in a completely different realm of physics, chemistry, and biology.

This event could annihilate and recreate the universe in a massive release of energy; the nature of the new universe that would emerge remains unknown.

The Higgs field is an invisible energy field that permeates the entire universe – Photo courtesy of Getty

How probable is this scenario? For a vacuum collapse to happen, a significant concentration of energy in a minuscule volume is required. Yet calculations reveal no known process can achieve this.

Nonetheless, the Higgs field adheres to the principles of quantum physics. A phenomenon called “tunneling” permits the Higgs field to spontaneously shift to another energy state, akin to our ball tunneling through the crater wall and escaping to lower ground.

Fortunately, calculations indicate this occurrence is exceedingly rare, estimated at about once in 10^100 years (a 1 followed by 100 zeros). However, just because an event is unlikely doesn’t mean it is impossible.

In fact, vacuum collapse might have already commenced somewhere in the universe, racing through space at the speed of light. This catastrophic event would obliterate everything in its path, with no warning before it arrives. But there’s no need to panic.


This article addresses a question posed by Kirill Jerdev via email: “Is it possible for the universe to explode?”

To submit your questions, please email questions@sciencefocus.com or connect with us on Facebook, Twitter, or Instagram (please include your name and location).





Source: www.sciencefocus.com

Discover Our Innovative Approach to Understanding the Nature of Reality

Canal Reflection

We Can Usually Agree on How Objects Look, But Why?

Martin Bond / Alamy

Although our world seems inherently ambiguous at the quantum level, this is not the experience we face in daily life. Researchers have now established a methodology to measure the speed at which objective reality emerges from this quantum ambiguity, lending credibility to the notion that an evolutionary framework can elucidate this emergence.

In the quantum domain, each entity, such as a single atom, exists within a spectrum of potential states and only assumes a definitive, “classical” state upon measurement or observation. Yet, we perceive strictly classical objects devoid of existential ambiguities, and the processes enabling this have challenged physicists for years.

Prominent physicist Wojciech Zurek of Los Alamos National Laboratory in New Mexico introduced the concept of “quantum Darwinism,” suggesting that a process akin to natural selection picks out the “fittest” state among numerous potential forms, which then proliferates through repeated copying into the environment until it reaches the observer. When observers with access to only fragments of that environment converge on the same objective observation, it indicates they are each witnessing one of these identical copies.

Researchers at University College Dublin, led by Steve Campbell, have shown that differing observers can still arrive at a consensus on objective reality, even if their observational methods lack sophistication or precision.

“Observers can capture a fragment and make any measurements they desire. If I capture a different fragment, I too can make arbitrary measurements. The question becomes: how does classical objectivity arise?” he explains.

The research team has reframed the emergence of objectivity as a quantum sensing problem. If the objective fact in question is, for instance, the frequency of light emitted by an object, the observer must acquire accurate data about that frequency, much as a light sensor in a computer does. In optimal conditions, this measurement is ultra-precise, quickly leading to a definitive conclusion about the light’s frequency. This ideal scenario is quantified using the quantum Fisher information (QFI), a mathematical benchmark against which less accurate observational techniques can be compared, explains Gabriel Landi at the University of Rochester, a co-author of the study.

Remarkably, their calculations indicate that for significantly large fragments of reality, even observers employing imperfect measurements can ultimately gather enough data to reach the same conclusions about objectivity as those derived from the ideal QFI standard.

“Surprisingly, simplistic measurements can be just as effective as more advanced ones,” Landi states. “This illustrates how classicality emerges: as fragments grow larger, observers tend to agree even on basic measurements.” Thus, this research contributes further to our understanding of why, when observing the macroscopic world, we concur about its physical attributes, such as the color of a coffee cup.

“This study underscores that we do not require flawless, ideal measurements,” adds Diego Wisniacki from the University of Buenos Aires, Argentina. He notes that while QFI is foundational in quantum information theory, its application to quantum Darwinism has been sparse, presenting pathways to bridge theoretical frameworks with established experimental methodologies, like quantum devices utilizing light-based or superconducting qubits.

“This research serves as a foundational ‘brick’ in our comprehension of quantum Darwinism,” states G. Massimo Palma from the University of Palermo, Italy. “It more closely aligns with the experimental descriptions of laboratory observations.”

Palma elaborates that the simplicity of the model used in this study could facilitate new experimental pursuits; however, complex system calculations will be essential to solidify quantum Darwinism’s foundation. “Advancing beyond rudimentary models would mark a significant progression,” Palma asserts.

Landi said the researchers are eager to turn these theoretical findings into experimental validations. For instance, qubits formed from trapped ions could be employed to evaluate how the timescale on which objectivity emerges compares with the durations over which those qubits retain their quantum characteristics.


Source: www.newscientist.com

How Virtual Reality Farming Will Transform the Future of Food Supply

Agriculture has long been a skilled and high-pressure profession, but modern farmers encounter challenges that even our grandparents could not have imagined.

In the UK, extreme weather is severely impacting agricultural lands. A recent survey revealed that 84% of farmers have witnessed a drop in crop yields or livestock production. This decline stems from a mix of heavy rain, drought, and extreme heat. Coupled with labor shortages, escalating machinery costs, and the demand to produce more food with fewer resources, the outlook for British agriculture appears increasingly uncertain.

As these issues escalate, innovations have surged. One of the most surprising solutions isn’t a cutting-edge tractor, miracle fertilizer, or genetically enhanced supercrops. Instead, it’s virtual reality (VR). This immersive technology, typically associated with gaming, is gradually becoming essential for the agricultural sector.

Here are five ways VR can pave the way for resilient farms and safeguard the food supply for an expanding population.

Life-saving VR Simulator

Operating a tractor is a daily task on the farm, but it can be daunting for new drivers. Tractors may be slow, but they can pose serious risks.

Rural roadways are infamous for narrow lanes, mud, hidden ditches, overgrown hedges, and blind turns, all of which can lead to serious accidents, and accident statistics reflect this elevated risk.

To combat this, researchers at Nottingham Trent University have developed a tractor-specific VR hazard perception test. Utilizing 360-degree footage from a tractor’s perspective, learners can experience real-life scenarios. Farmers report these situations as highly dangerous: hidden bikes, potholes, tight corners, and vehicles that regard 14-ton tractors as mere obstacles.

In trials with over 100 drivers, many, particularly those with past accidents, struggled to recognize hazards in time. It’s evident that traditional training doesn’t suffice, as tractors have distinct turning radii, slower speeds, and unique blind spots compared to cars.

There’s hope that this VR training could become a standard educational tool in universities and young farmers’ clubs, ensuring safer driving practices before they venture onto the roads.

Hone Your Skills in VR

VR is also training the next generation of vineyard workers safely, minimizing the risk of harming the vines. The Maara Tech project in New Zealand has created a system enabling trainees to practice vine cutting indoors, even on rainy days. Pruning in wet conditions carries significant risks, exposing fresh cuts to moisture, which can lead to fungal diseases.

Researchers at Eurecat, a European R&D center collaborating with several universities on agricultural innovations, have advanced this concept further. They’ve developed VR pruning shears equipped with sensors that guide users on the correct pressure, angle, and technique. It’s not just about speed; precision is crucial.

Accurate cuts result in healthier grapes, leading to superior quality and fewer errors. Since this training is virtual, new workers can build their confidence and help alleviate seasonal labor shortages.

Mindfulness with VR Headsets

Agriculture is not just physically demanding; it’s also mentally taxing. When adverse weather ruins planting schedules, drought devastates fields, and costs soar, even the most resilient farmers can reach their breaking point.

It’s perhaps unsurprising that 95% of farmers under 40 believe that mental health issues are the biggest hidden struggle they face in agriculture.

In response, researchers at the University of East Anglia have initiated the Rural Mind Project, employing a 360-degree VR experience to immerse healthcare professionals, policymakers, and support workers in real farming scenarios—addressing issues like isolation, anxiety due to weather, and financial pressures.

This initiative goes beyond fostering empathy; it aims to facilitate tangible change. VR training is equipping practitioners to recognize rural-specific stressors, find effective support strategies, and dismantle the stigma associated with seeking help.

Unlike conventional therapy, where the presence of a psychiatrist may induce anxiety, farmers can practice coping methods in a tranquil virtual setting designed for rural challenges. Initial feedback suggests VR may reach individuals who would typically avoid seeking assistance.

While it’s not a complete solution, it’s a promising step towards making mental health care as accessible as checking the weather forecast.

Learn the Ropes Without the Mess

Not only does VR help in understanding farm life, but it also provides the younger generation a head start without the mess, fertilizers, or early wake-ups.

Through the DIVE4Ag project at Oregon State University, schoolchildren can embark on virtual field trips via their gadgets, exploring dairy farms, urban gardens, and aquaculture facilities.

Meanwhile, at Lala Lajpat Rai University of Veterinary and Animal Science in India, the AR/VR Experience Center offers agricultural students interactive lessons on crop cultivation, animal care, and modern production methods.

As immersive VR education gains traction, it sparks excitement and confidence, motivating the upcoming generation to consider agricultural careers long before stepping onto a physical farm.

Stepping into the Metaverse

If VR can train farmers effectively, support their mental well-being, and educate them about agriculture, why not extend these benefits to animals? In Turkey, one adventurous dairy farmer has started using VR goggles on his cows while they are comfortably housed in a barn, allowing them to view lush pastures accompanied by soft classical music.

The goal was to create a serene atmosphere to reduce stress and potentially enhance milk output. Early results have been remarkable, as average production climbed from 22 to 27 liters per cow per day.

This approach might seem quirky, but managing cows indoors during extreme climates allows for better control over their feeding, milking, and overall health, suggesting that the future of farming may indeed lie where livestock engage with the metaverse.

From safer tractor operations to calming cows using VR, this technology is demonstrating its value beyond mere gaming. It offers a glimpse into the future of agriculture. EIT Food showcases these innovations, merging visionary concepts with practical solutions to illustrate how immersive technology can make agriculture smarter, safer, and more sustainable for all.

Source: www.sciencefocus.com

Is “Brain Rot” a Reality? Researchers Highlight Emerging Risks Linked to Short-Form Videos

Short-form videos are dominating social media, prompting researchers to explore their impact on engagement and cognitive function. Your brain may even be changing.

From TikTok to Instagram Reels to YouTube Shorts, short-form video has become ubiquitous, spreading even to platforms like LinkedIn and Substack. However, emerging research indicates a link between heavy short-form video consumption and issues with concentration and self-control.

The initial findings resonate with concerns about “brain rot,” defined by Oxford University Press as “the perceived deterioration of a person’s mental or intellectual condition.” The term has become so popular that Oxford named it word of the year for 2024.

In September, a review of 71 studies involving nearly 100,000 participants found that extensive short-form video use was correlated with cognitive decline, especially in attention span and impulse control. Published in the American Psychological Association’s journal Psychological Bulletin, the review also connected heavy consumption to heightened symptoms of depression, anxiety, stress, and loneliness.

Similarly, a paper released in October summarized 14 studies that indicated frequent consumption of short-form videos is linked to shorter attention spans and poorer academic performance. Despite rising concerns, some researchers caution that the long-term effects remain unclear.

James Jackson, a neuropsychologist at Vanderbilt University Medical Center, noted that fear of new technologies is longstanding, whether the target was video games or rock concerts. He acknowledges there are legitimate concerns but warns against overreacting. “It’s naive to dismiss worries as just grumpy complaints,” he said.

Jackson emphasized that research indicates extensive short-form video consumption could adversely affect brain function, yet further studies are needed to identify who is most at risk, the long-lasting impact, and the specific harmful mechanisms involved.

ADHD diagnoses in the U.S. are on the rise, with about 1 in 9 children having received a diagnosis as of 2022, according to the CDC. Keith Robert Head, a doctoral student at Capella University, suggests that the overlap between ADHD symptoms and the risks associated with short videos deserves attention. “Are these ADHD diagnoses truly ADHD, or merely effects of short video use?” he asked.

Three experts noted that research on the long-term effects of excessive short-form video use is in its early stages, with international studies revealing links to attention deficits, memory issues, and cognitive fatigue. However, these studies do not establish causation, often capturing only a snapshot in time.

Dr. Nidhi Gupta, a pediatric endocrinologist focused on the effects of screen time, argues that more research is necessary, particularly concerning older adults, who may be more vulnerable. Gupta cautions that the cognitive changes associated with short-form media may amount to a new addiction, likening it to “video games and TV on steroids.” She speculated that, just as it took decades for the science around alcohol and drugs to mature, a much clearer picture of short-form video’s effects could emerge within the next 5 to 10 years.

Nevertheless, Jackson contends that short-form videos can be beneficial for online learning and community engagement: “The key is balance. If this engagement detracts from healthier practices or fosters isolation, then that becomes a problem.”

Source: www.nbcnews.com

The Trump Administration is Distorting Reality.

Tom Williams/CQ-Roll Call, Inc (via Getty Images)

Peek-a-boo is an entertaining game for young children. Because babies have only a limited grasp of object permanence, hiding your face from them brings joyful smiles as they try to work out what is happening in the world around them.

Playing this game as the world’s wealthiest and most powerful nation is rather less amusing, but the Trump administration has certainly given it a shot.

For years, U.S. federal agencies carried out extensive public health research to guide policies addressing issues like drug addiction and food insecurity. However, these invaluable data collection efforts have now been significantly reduced or entirely scrapped (see, US public health system is flying blind after deep cuts).

By figuratively covering its eyes, the U.S. government seems to be wishing these challenges will vanish, when, in reality, the opposite is likely to occur.

As we learned during the peak of the COVID-19 pandemic, data, monitoring, and preparedness are crucial for preventing disasters. Statistical agencies and data collectors aren’t just collecting data; they’re our frontline defense against uncertainty.



The U.S. isn’t alone in this forgetfulness. The UK’s Office for National Statistics, once regarded as exemplary, has declined in recent years, dogged by poor-quality data and inaccurate statistics due in part to a lack of funding.

A significant part of the issue is the perception of this kind of work as dull. No politician ever won votes by promising to push a survey through every letterbox, and statisticians rarely become celebrities.

However, this needs to change. Not all heroes wear capes, but some strive to craft them from spreadsheets. This vital data-driven work deserves recognition and reinforcement. Governance without object permanence is ill-advised, and sadly, the United States is on the brink of discovering this reality.

Source: www.newscientist.com

Quantum Computers Confirm the Reality of Wave Functions


The wave function of a quantum object might extend beyond mere mathematical representation

Povitov/Getty Images

Does quantum mechanics accurately depict reality, or is it merely our flawed method of interpreting the peculiar characteristics of minuscule entities? A notable experiment aimed at addressing this inquiry has been conducted using quantum computers, yielding unexpectedly solid results. Quantum mechanics genuinely represents reality, at least in the context of small quantum systems. These findings could lead to the development of more efficient and dependable quantum devices.

Since the discovery of quantum mechanics over a hundred years ago, its uncertain and probabilistic traits have confounded scientists. For instance, take superposition. Are particles truly existing in multiple locations simultaneously, or do the calculations of their positions merely provide varying probabilities of their actual whereabouts? If it’s the latter, then there are hidden aspects of reality within quantum mechanics that may be restricting our certainty. These elusive aspects are termed “hidden variables,” and theories based on this premise are classified as hidden variable theories.

In the 1960s, physicist John Bell devised an experiment intended to disprove such theories. The Bell test explores quantum mechanics by evaluating the connections, or entanglement, between distant quantum particles. If these particles exhibit quantum qualities surpassing a certain threshold, indicating that their entanglement is nonlocal and spans any distance, hidden variable theories can be dismissed. The Bell test has since been performed on various quantum systems, consistently affirming the intrinsic nonlocality of the quantum realm.

In 2012, physicists Matthew Pusey, Jonathan Barrett, and Terry Rudolph developed a more comprehensive test (dubbed PBR in their honor) that enables researchers to differentiate between interpretations of quantum systems. Among these is the ontic perspective, which asserts that a quantum system’s wavefunction (a mathematical representation of a quantum state) corresponds to something real. Conversely, the epistemic view suggests that the wavefunction merely reflects our limited knowledge, concealing a richer reality beneath.

If we assume that quantum systems possess no additional hidden features beyond the wave function, the mathematics of PBR indicates we ought to interpret phenomena ontically. This implies that quantum behavior is genuine, no matter how peculiar it appears. PBR tests work by comparing different quantum elements, such as qubits in a quantum computer, and assessing how frequently they register consistent values for specific properties, like spin. If the epistemic perspective is accurate, the qubits will report identical values more often than quantum mechanics predicts, implying that additional factors are at play.

Yang Songqinghao and his colleagues at the University of Cambridge have created a method to perform PBR tests on a functioning IBM Heron quantum computer. The findings reveal that if the number of qubits is minimal, it’s possible to assert that a quantum system is ontic. In essence, quantum mechanics appears to operate as anticipated, as consistently demonstrated by the Bell test.

Yang and his team executed this validation by evaluating the overall output of pairs or groups of five qubits, such as a sequence of 1s and 0s, and determining how often this outcome aligned with predictions about the quantum system’s behavior, factoring in inherent errors.

“Currently, all quantum hardware is noisy and every operation introduces errors, so if we add this noise to the PBR threshold, what is the interpretation [of our system]?” remarks Yang. “We discovered that if we conduct the experiment on a small scale, we can fulfill the original PBR test and eliminate the epistemic interpretation.” Hidden variables, in other words, are ruled out.

While they successfully demonstrated this for a limited number of qubits, they encountered difficulties replicating the same results for a larger set of qubits on a 156-qubit IBM machine. The error or noise present in the system becomes excessive, preventing researchers from distinguishing between the two scenarios in a PBR test.

This implies that the test cannot definitively determine whether the world is entirely quantum. At certain scales, the ontic view may dominate, yet at larger scales, the precise actions of quantum effects remain obscured.

Utilizing this test to validate the “quantum nature” of quantum computers could provide assurance that these machines not only function as intended but also enhance their potential for achieving quantum advantage: the capability to carry out tasks that would be impractically time-consuming for classical computers. “To obtain a quantum advantage, you must have quantum characteristics within your quantum computer. If not, you can discover a corresponding classical algorithm,” asserts team member Haom Yuan from Cambridge University.

“The concept of employing PBR as a benchmark for device efficacy is captivating,” says Matthew Pusey, now at the University of York, UK, and one of the original PBR authors. However, Pusey remains uncertain about its implications for reality. “The primary purpose of conducting experiments rather than relying solely on theory is to ascertain whether quantum theory can be erroneous. Yet, if quantum theory is indeed flawed, what questions does that raise? The entire framework of ontic and epistemic states presupposes quantum theory.”

Understanding Reality

To successfully conduct a PBR test, it is essential to devise a way of performing it without presuming that quantum theory is accurate. “A minority of individuals contend that quantum physics fundamentally fails at mesoscopic scales,” states Terry Rudolph, one of the PBR test’s founders, at Imperial College London. “This experiment may not be relevant to ruling out such proposals, but let me be frank: I am not sure! Probing fundamental aspects of quantum theory in progressively larger systems will always help refine the search for alternative theories.”

Reference: arXiv DOI: arxiv.org/abs/2510.11213


Source: www.newscientist.com

Unveiling the Reality Behind F1’s New ‘Sustainable’ Fuel and Its Impact on Future Cars

In the upcoming year, Formula 1 (F1) is set to undertake one of its most ambitious transformations yet, shifting from fossil fuels to a fully sustainable fuel mixture. This initiative is part of a broader strategy to adhere to new environmental regulations and demonstrate that the sport can, as F1 puts it, “continue without the need for new car production”.

Nonetheless, skepticism remains. With the fuel burned by the cars accounting for only around 1% of F1’s total carbon footprint, experts argue that there are far more significant environmental issues F1 must address. What are these challenges, and how can they be overcome?

Switch Gears

In 2020, F1’s governing body, the Fédération Internationale de l’Automobile (FIA), established a timeline for race car engines to transition to 100% sustainable fuel by 2026 and achieve carbon neutrality by 2030.

In 2023 and 2024, Formula 2 and Formula 3, F1’s supporting race series, began using 55% ‘sustainable bio-based fuels’, before transitioning to 100% ‘advanced sustainable fuels’ in 2025.

F1 has developed its own ‘sustainable’ fuel for 2026, designed specifically for the hybrid engines currently used in F1 cars, which consist of both an internal combustion engine (ICE) and two electric motor generators.

Images from the Japanese Grand Prix, which was rescheduled from autumn to spring to minimize carbon emissions from transporting equipment between races. Source: Formula 1

According to F1, the new fuel will not raise the overall carbon levels in the atmosphere. The carbon used in these new fuels will be sourced from existing materials, such as household waste and non-food biomass, or it will be captured directly from atmospheric carbon dioxide.

This will enable the production of synthetic fuels, which are man-made fuels aimed at replacing the fossil fuel-based gasoline currently in use. In the long term, the FIA asserts that F1, 2, and 3 will all eventually adopt this “fully synthetic hybrid fuel”.

Moreover, this new fuel will be classified as “drop-in”, meaning it will be compatible with existing internal combustion engines as well as the current fuel distribution infrastructure. In other words, the fuel powering F1 cars in 2026 could, in principle, also power the car you drive today.

Is it Truly Sustainable?

However, as the term “sustainable” has gained popularity, experts have started to challenge F1’s assertions.

Dr. Paula Pérez-López, an expert in environmental and social sustainability at the MINES ParisTech Center for Observation, Impacts, and Energy (OIE), articulates that for a product to qualify as “sustainable”, it must fulfill certain environmental, social, and economic criteria, with each segment of the supply chain considering these factors.

“The term ‘sustainable’ should not be confused with ‘low carbon’. A product or process may have low carbon emissions but still produce high levels of other pollutants, thus rendering it ‘unsustainable’.”

The FIA’s collaboration with the Zemo partnership, a UK-based nonprofit organization, has led to the introduction of the Sustainable Racing Fuel Assurance Scheme (SRFAS). This third-party initiative ensures that sustainable racing fuels comply with FIA regulations.

The certification mandates that the fuel comprise “at least 99 percent Advanced Sustainable Components (ASC)”, certified as being derived from renewable fuels of non-biological origin (RFNBO), municipal waste, or non-food biomass.

Essentially, this means that the new fuel must be synthetic, produced from waste, or derived from materials not intended for human or animal consumption, such as specially engineered algae.

New fuels must also adhere to criteria such as the EU Renewable Energy Directive III (RED III) along with EU Delegated Law.

Fraser Browning, the founder of Curve Carbon, which advises companies on minimizing their environmental footprints, indicates that these new fuels can indeed facilitate genuine decarbonization efforts if managed appropriately.

“The overarching question pertains to F1’s complete impact,” he notes. “Is F1 pursuing synthetic fuels as a vital component of their sustainability goals, or is it merely a procedural formality?”

Browning emphasizes that advancements in motorsport have historically contributed to significant innovations in sustainable transportation. For instance, in 2020, Mercedes announced that hybrid technology would be utilized in road cars. Earlier this year, they also revealed a new battery technology capable of extending the range of electric vehicles by 25 percent.

“Without the innovations deriving from motorsport, hybrid vehicles wouldn’t have evolved at the present speed,” he contends. “However, this needs to be executed transparently and responsibly.”

Cutting Carbon

Beyond the transition to synthetic fuels, F1 is also making strides to reduce carbon emissions in other areas. Travel and logistics account for roughly two-thirds of F1’s carbon emissions, as teams, heavy machinery, and fans travel considerable distances between races each year.

To mitigate this, adjustments have been made to the F1 calendar for 2024 to lessen freight distances between events, as stated in F1’s latest Impact Report. For example, the Japanese Grand Prix has been synchronized with other Asia-Pacific races and moved to April.

Formula 1 has revealed that DHL’s new fleet of biofuel-powered trucks cut carbon dioxide emissions by an average of 83% compared to traditional fuel-powered trucks during the European segment of the 2023 season. Source: Formula 1

Additionally, F1 has broadened the adoption of biofuels for the trucks used to transport equipment throughout Europe, resulting in a 9% reduction in logistical carbon emissions.

By the conclusion of 2024, total carbon emissions are projected to decrease by 26% from 2018 levels, although F1 acknowledges there remain “key milestones to achieve, including further investments in alternative fuels and updates to our logistics system to enhance efficiency”.

Synthetic Fuels vs. Electric Vehicles

What does it mean when F1 claims that its new synthetic fuel is a drop-in solution suitable for everyday vehicles? Could it serve as a more sustainable alternative to electric vehicles (EVs)?

Critics warn that producing synthetic fuels for internal combustion engines (ICE) is energy-intensive, costly, and may require five times the renewable electricity compared to operating a battery-powered electric vehicle.

At present, 96% of hydrogen used for these fuels within the EU is derived from natural gas, a process that releases significant amounts of CO₂. Currently, renewable hydrogen is more costly than fossil-based hydrogen.

“Obtaining pure and concentrated CO₂ poses a considerable challenge,” states Gonzalo Amarante Guimarantes Pereira, a professor at the State University of Campinas in São Paulo, Brazil, and co-author of a study comparing biofuels with pure electric vehicles.

“There is a technology known as direct air capture that can achieve this, but attaining 100% concentration comes with substantial energy costs. The estimated expense ranges from $500 to $1,200 (approximately £375 to £895) per tonne, rendering e-fuels at least four to eight times more costly than operating an electric vehicle.”

Browning concurs that EVs represent a more favorable low-carbon choice for the future. “Their emissions during use and maintenance are significantly lower,” he states.

“While synthetic fuels might yield a lesser overall impact if managed wisely, we still lack a comprehensive lifecycle assessment across multiple sustainability metrics to definitively address this issue.”

In simpler terms, as long as the entire system producing synthetic fuels cannot be reliably demonstrated to have a positive environmental impact, the jury remains out on the actual extent of their effects.


Source: www.sciencefocus.com

From Lab to Reality: Is the Graphene Revolution Finally Within Reach?

Since graphene was first isolated at the University of Manchester in 2004, it has been recognized as a remarkable material, stronger than steel yet lighter than paper. Fast forward 20 years, and not all UK graphene enterprises have been able to harness its full capabilities. Some view the future with optimism, while others face significant challenges.

Derived from graphite, the same substance used in pencils, graphene consists of a lattice-like sheet of carbon just one atom thick, boasting impressive conductivity for both heat and electricity. Presently, China is the leading global producer, leveraging this to secure an edge in the race for microchip production and construction applications.

In the UK, graphene-enhanced low-carbon concrete, developed by the Graphene Engineering Innovation Center (GEIC) at the University of Manchester in collaboration with Cemex UK, was installed at a Northumbrian Water site in July.

“The material had an overwhelming amount of hype as it came out of academia… the real challenge lies in transitioning it from the lab to actual production,” explains Ben Jensen, CEO of 2D Photonics, a startup that originated from the University of Cambridge, specializing in graphene-based photonics technology for data centers.

Jensen was also behind the invention of Vantablack, a coating made from carbon nanotubes (rolled graphene sheets) renowned as the “blackest black” for its ability to absorb 99.96% of light. He founded Surrey NanoSystems in 2007; the company sold exclusive artistic rights to the material to sculptor Anish Kapoor, and six years ago it was used to coat a BMW X6 coupé to achieve the “blackest black” effect.

Anish Kapoor’s untitled Vantablack piece was displayed in Venice in 2022. Photo: David Levin/The Guardian

“Shifting to new materials to replace existing technologies presents a significant challenge,” Jensen states. “The value proposition must be compelling, while also ensuring that the material can be manufactured efficiently at scale and priced competitively, otherwise, there’s little point in offering something ten times more costly than existing products.”

German company Bayer attempted to produce large quantities of carbon nanotube items but shuttered its pilot plant over a decade ago when a surge in demand failed to materialize. Currently, this material finds its primary use as a filler to enhance the strength of plastic products. Bayer has referred to the potential applications for nanotubes as “fragmentary.”

More promising is a graphene-based optical microchip created by CamGraPhIC, a branch of 2D Photonics, stemming from research at the University of Cambridge and CNIT in Italy.

Silicon photonics microchips currently translate electrical data into optical signals for transmission through fiber optic cables. The company claims its graphene-based chips can transmit more data in less time and at significantly lower costs.

Graphene single crystal. Photo: 2D Photonics

These chips consume 80% less energy and are capable of functioning across a broader temperature range, minimizing the requirement for costly water and energy-intensive cooling systems in AI data centers.

Transmitting data through silicon often leads to delays. Jensen compares this issue to a 16-lane highway unexpectedly narrowing down to one lane due to construction, slowing down traffic significantly. He argues that graphene photonics functions like an expansive highway with hundreds of lanes.

“Our breakthrough lies in the capability to cultivate stable, ultra-high performance graphene and effectively integrate it into devices,” he asserts. “Keep in mind, this material is only one atom thick, which makes the process particularly challenging.”

Ben Jensen, CEO of 2D Photonics. Photo: Ermanno Fissole

CamGraPhIC was established in 2018 by Professor Andrea Ferrari, a Cambridge Nanotechnology professor, who also heads the Cambridge Graphene Center, alongside Marco Romagnoli, head of advanced photonics at CNIT in Pisa and the startup’s chief scientific officer.

The parent company, 2D Photonics, recently secured £25m in funding from a diverse group of investors, including Italy’s sovereign wealth fund, NATO, the Sony Innovation Fund, Bosch Ventures, and the UK’s Frontier IP Group. The firm will be based in the former Pirelli photonics research facility in Pisa and aims to launch a pilot manufacturing site in the Milan region designed for large-scale production of 200mm wafers; it expects an additional €317m (£276m) in funding by year-end.


Aside from data centers, the company’s chips have potential uses in high-performance computing, 5G and 6G mobile systems, aviation technologies, autonomous vehicles, advanced digital radar, non-satellite space communications, and beyond.

Paragraf, a spin-out from the University of Cambridge located in the nearby village of Somersham, has thrived over the past decade with backing from the UK Treasury. The firm creates graphene-based electronic devices, including sensors designed for electric vehicles and biosensors for early disease detection, with further applications in medicine and agriculture. It recently secured $55 million (£41 million) from a group of investors, including a sovereign wealth fund from the United Arab Emirates, which acquired a 12.8% stake in Paragraf.

Graphene Innovations Manchester, a fledgling company started by Vivek Konchery in 2021, finalized a deal with Saudi Arabia in December for the first commercial production of graphene-enhanced carbon fiber. This material will be utilized in constructing roofs, facades, and light poles. Production has begun in Tabuk with local partners, with an expected output of 3,000 tons by 2026.

2D photonics cleanroom at the Pisa development facility. Photo: 2D Photonics

Conversely, other companies are facing harsher realities. One of the pioneering firms in this domain, Applied Graphene Materials, was launched in 2010 by Professor Karl Coleman as a spin-out from Durham University. It introduced various products, such as anti-corrosion primers and bike protection sprays, which became available in Halfords stores. However, the struggling company declared bankruptcy in 2023, and its main operations were acquired by Canada’s Universal Matter.

Ron Mertens, the owner of Graphene-Info, remarked, “As is often true in the broader materials industry, the path to market can be lengthy. Many graphene producers and developers have yet to generate substantial revenue or profit.”

Versarien, located in Gloucestershire, grew from a garage startup with support from the government agency Innovate UK. It developed graphene powder and other products for use in sensors, low-carbon concrete, paints, electronic inks, textiles, and more, including running gear and prototype stealth technologies for the US military.

The AIM-listed firm sought to establish operations in Spain and South Korea but ran into financial trouble, and several subsidiaries entered administration or voluntary liquidation in July. Versarien is now looking to sell off assets, such as its patent portfolio, and currently has only enough funds to last until the end of October.

Depending on the nature of the upcoming transactions, the company may face liquidation or administration. Its investment agreement with a Chinese partner collapsed after the British government intervened to block the technological collaboration, a somber potential finale for what was once a promising graphene venture.

Source: www.theguardian.com

Unveiling the Reality of Borneo’s “Vampire Squirrel” and Its Enormous Tail

Ever find yourself gazing at adorable things until they start to seem a bit creepy? Think of garden gnomes, baby dolls, kids dressed as princesses, and all cats. Well, there’s one more addition to this peculiar list.

The tufted ground squirrel (Rheithrosciurus macrotis) may appear cute with its bright eyes and bushy tail, but the Dayak hunters of Borneo view it as a cold-blooded killer.

This ruthless rodent, nicknamed the “vampire squirrel”, is infamous for allegedly flipping deer onto their backs and using its razor-sharp teeth to sever their jugular veins, leaving the animals to bleed out.

Hunters who discover the remains of a deer in the woods suspect that the squirrel returns to the scene to feast on the deer’s heart, liver, and stomach.

In villages bordering the forest, tufted ground squirrels are also known to prey on domestic chickens and consume their hearts and livers.

The squirrel gained notoriety in 2014 thanks to a paper co-written by 15-year-old Emily Meijaard, which documented folk tales about the animal’s bloodthirsty reputation.

The paper was published in Taprobanica: Journal of Biodiversity in Asia and has since made these once-overlooked creatures go viral. Articles have been written, videos shared, perhaps making Beatrix Potter reconsider her legacy.

In 2015, camera-trap footage of one went viral for the first time; however, it did not capture the squirrel in any carnivorous acts.

Instead, the squirrels were seen foraging in Gunung Palung National Park in West Kalimantan, where evidence of the killer critter’s alleged attacks remained elusive, but new revelations emerged.

The tufted ground squirrel shares its native Borneo habitat with Prevost’s squirrel, a fluffy creature with a black, reddish-brown, and white coat that prefers life among the trees. – Credit: Richard McManus via Getty

In 2020, researchers discovered that the unusual teeth of these squirrels—long incisors with intricate ridges—are adapted for cracking open tough nuts.

Tufted ground squirrels are highly specialized seed predators, with a strong preference for canarium tree nuts.

It turns out the perception of tufted ground squirrels as fearsome creatures is a misconception. They truly have bright eyes and fluffy tails.

In fact, their bushy tails are among the largest proportionally of any mammal, being 30% larger than their bodies.

The reason for this unusual trait remains uncertain. Since they spend most of their time on the forest floor seeking food, it’s not for warmth, as it rarely gets cold in Borneo.

This could be related to attracting mates, deterring predators, or perhaps serving a mysterious form of camouflage. Their tail, which features a charcoal hue with frosty accents, helps them blend into the forest floor.

Regardless, I’ve stopped disparaging tufted ground squirrels and have learned to appreciate them as genuinely fascinating creatures.




Source: www.sciencefocus.com

Meet Your Descendants and Your Future Self! A Trip to Extended Reality Island at the Venice Film Festival

Guests converge in the largest cinema at the Venice Film Festival for the premiere of Frankenstein, Guillermo del Toro’s stunning portrait of the creator who played God and crafted a monster. When the young scientist resurrects a body before his peers, some see it as a deception, while others react with anger. “It’s hateful and obscene,” shouts an appalled elder, and his concern is partially warranted. Every technological advance unseals a Pandora’s box. No one can be certain what it will unleash or where it will lead us.

Behind the main festival venue lies Lazzaretto Vecchio, a small, forsaken island. Since 2017, it has hosted Venice Immersive, an innovative section dedicated to showcasing and promoting XR (extended reality) storytelling. Previously, the island served as a storage facility, and before that, as a plague quarantine zone. One of this year’s judges, Eliza McNitt, recalls a time when construction halted because human bones were uncovered. “There’s something unforgettable about presenting this new form of film at the world’s oldest film festival,” she remarks. “We are delving into the medium of the future while conversing with ghosts.”

This year, the island is home to 69 distinct creations, ranging from expansive walk-through installations to intricate virtual realms accessible via headsets. Naturally, Frankenstein’s anxieties about makers and their monsters resonate here, and McNitt acknowledges similar worries surround immersive art, which is often conflated with the runaway technology that many feel threatens us all, most frequently AI.

“Immersive storytelling is a fundamentally different discussion than AI,” she states. “Yet, there’s a palpable anxiety regarding what AI signifies for the film industry. It largely stems from the false belief that a mere prompt can conjure something magical. The reality is that utilizing AI tools to cultivate something personal and unique is a collaborative effort involving large teams of dedicated artists. AI is not a substitute for humans,” she emphasizes, “because AI lacks taste.”




“Each experience requires a leap of faith” … Xan Brooks, left, trying one of the installations. Photo: Venice Immersive

McNitt embraced AI tools early on and recently employed them in the autobiographical film Ancestra, released in 2025. She suspects that other filmmakers are not far behind. “I believe this experience here is merely the beginning of experimenting with these tools,” she says. “But next year, we will likely see deeper involvement in all aspects of these projects.”

The immersive storytelling segment at the Venice Film Festival sits comfortably alongside the films themselves, encouraging attendees to view it as a natural progression of, or heir to, traditional cinema. Various mainstream Hollywood directors have already explored this avenue. For instance, Asteroids, a high-stakes space thriller about a disastrous mining expedition, is led by Doug Liman, the Swingers director. His production partner, Julina Tatlock, says the interactive short film effectively brought Liman back to his independent roots, allowing him to conceive and create projects free from studio constraints. Asteroids is a labor of love, entwining elements of a larger narrative that could still become a conventional feature film. “Doug is fascinated by space,” she adds.

The Clouds Are Two Thousand Meters Up has a similarly cinematic quality. This passionate arthouse drama depicts the grieving pursuing the spirits of their dead wives through the pages of unfinished novels. Taiwanese director Singing Chen, adept in both traditional film and VR, believes each medium possesses unique strengths. “Immersive art was a pathway to film,” she remarks. “Even with the arrival of film, still images retained their potency and significance; films did not overshadow photographs. They affect us in ways distinct from moving images.”

Films in the Venice lineup are largely familiar. We often recognize the actors and directors, allowing for intuitive engagement with the storylines. In contrast, the artwork on the island spans a vast range, from immersive videos and installations to interactive adventures and virtual worlds. Elsewhere, visitors can play an interactive arcade game featuring the faces of Samantha Gorman and Danny Cannizzaro, or take a whistle-stop tour of Singapore’s cultural history. Every experience demands a leap of faith and hinges on a willingness to get lost. You might stumble, but you may also soar.




Visitors often meander through a dazzling … dark room. Photo: Venice Immersive

Three projects stand out from this year’s Venice showcase. The Ancestors, by Steye Hallema, is a lively ensemble interactive in which visitors first form pairs, then expand into large families, viewing photos of their ancestors on synchronized smartphones. The experience is unique in its pure focus on community: joyful yet slightly chaotic, like a good family. If The Ancestors is about the significance of relationships, its form and content are beautifully synchronized.

The extraordinary Blur, by Craig Quintero and Phoebe Greenberg (likely the most sought-after ticket on the island), explores themes of cloning and identity, genesis and extinction, staged as a kind of impromptu immersive theater. It shifts perspectives, creating a bizarre, provocative, and enticing experience. As it concludes, users face a chilling VR representation of aging, a messenger from the future. The eerie, decrepit figure approaching me made me feel a year or two older than I actually am.

If there’s a real-world parallel to the Frankenstein scene in which enraged villagers scream “hateful” and “obscene”, it occurs on the ferry to the island, when a middle-aged Italian man picks a dispute with the producers of a sensory installation dubbed The Dark Room. He accuses them of being Satanists. They assure him that is not the case. “Maybe it’s not,” he responds. “But you did Satan’s bidding.” In truth, The Dark Room is splendid and not at all demonic. Co-directed by Mads Damsbo, Laurits Flensted-Jensen, and Anne Sofie Steen Sverdrup, this vivid ritual tale immerses participants in a dynamic, intense journey through various corners of queer subculture, nightclubs, and backrooms, ultimately leading them out across the sea. It’s captivating, disquieting, and profoundly moving.

Initially, many stories at Venice oversimplified the experiences to comfort newcomers intimidated by technology. However, the medium is now gaining assurance. It has matured from its infancy to adolescence. This art form has evolved to become more robust, daring, and psychologically intricate. It’s no coincidence that many immersive experiences at Venice explore themes of ancestors and descendants, examining the connections between both. Moreover, numerous experiences unfold in mobile environments, fragile bridges, and open elevators. The medium reveals its current state—somewhere between stages of transit, perpetually evolving. It journeys between worlds, fervently seeking its future trajectory.

Source: www.theguardian.com

Reevaluating Reality: How Google’s AI Transformation is Reshaping the Online News Landscape

The chief executive of the Financial Times suggested this summer at a media conference that competing publishers might explore a “NATO” alliance to bolster negotiations with artificial intelligence firms.

Nevertheless, Jon Slade’s announcement of a “pretty sudden, sustained” drop in traffic from readers arriving via search engines quickly highlighted the grave threat posed by the AI revolution.

Queries submitted on platforms like Google, which dominate over 90% of the search market, have been central to online journalism since its inception, with news outlets optimizing their headlines and content to secure high rankings and lucrative clicks.

Currently, Google’s AI summary appears at the top of the results page, presenting answers directly and reducing the need for users to click through to the original content. The introduction of the AI mode tab, which responds to queries in a chatbot format, has sparked fears of a future dominated by “Google Zero,” where referral traffic dwindles.

“This is the most significant change in search I’ve witnessed in decades,” states a senior editorial tech executive. “Google has historically been a reliable partner for publishers. Now, certain aspects of digital publishing are evolving in ways that could fundamentally alter the landscape.”

Last week, the owner of the Daily Mail revealed, in a submission to a competition market review of Google’s search services, that AI summaries had sharply reduced click-through traffic to its sites.

DMG Media and other major news organizations, including Guardian Media Group and the magazine trade body, the PPA, have urged the competition watchdog to require more transparency from Google regarding AI summaries and the traffic metrics provided to publishers, as part of its investigation into the tech company’s search monopoly.

Publishers are already experiencing financial strain from rising costs, declining advertising revenue, reduced print circulation, and changing readership trends. Google, they say, insists that they accept its terms on how their content is used in AI features or face removal from search results altogether.

Besides the funding threat, concerns about AI’s impact on accuracy persist. Early iterations of the summaries infamously advised users to eat rocks and add glue to pizza, and although Google has since improved them, the issue of “hallucinations” — where AI presents inaccurate or fabricated information as truth — remains, alongside inherent biases when machines, not humans, interpret sources.




For some publishers, Google Discover has supplanted search as the primary source of traffic clicks. Photo: Samuel Gibbs/The Guardian

In January, Apple pledged to update its AI feature after it produced false summaries of BBC News alerts, displayed under the broadcaster’s logo on the latest iPhone models. One alert misleadingly stated that a man accused of murdering a US insurance executive had taken his own life; another falsely claimed that tennis star Rafael Nadal had come out as gay.

Last month, in a blog post, Liz Reid, Google’s head of search, claimed that AI in search was “driving more queries and quality clicks”.

“This data contradicts third-party reports that inaccurately suggest a drastic reduction in overall traffic,” she stated. “[These reports] are often based on flawed methodologies, isolated instances, or traffic alterations that occurred prior to the deployment of AI functionalities during searches.”

She also mentioned that overall traffic to websites remains “relatively stable”, though shifting user trends mean traffic is being redirected to different sites.

Recently, Google Discover, which delivers articles and videos tailored to user behavior, has taken precedence over search as the main source of traffic.

However, David Buttle, founder of the consultancy DJB Strategy, stated that the service, unlike search, does not supply the quality traffic most publishers require to support their long-term strategies.

“Google Discover holds no product significance for Google,” he explained. “It lets Google direct some traffic back to publishers as traffic from general search diminishes. Publishers are left with little choice but to comply, and unlike organic search it often rewards clickbaity content.”

Simultaneously, publishers are engaged in a broader struggle against AI companies looking to exploit content to train extensive language models.

The creative sector is rigorously lobbying the government to prevent AI firms from using copyrighted materials without authorization, and is urging legislation to that effect.




The February Make It Fair campaign highlighted threats to the creative sector posed by Generative AI. Photo: Geoffrey Swaine/Rex

Some publishers have reached bilateral licensing agreements with AI companies, including the Financial Times, German media group Axel Springer, the Guardian, and Nordic publisher Schibsted. Others, like the BBC, have initiated legal action against AI companies for alleged copyright infringement.

“It’s a double-edged attack on publishers, almost a ‘pincer movement’,” remarks Chris Duncan, a former senior executive at News UK and Bauer Media who now leads the consultancy Seadelta. “Content is vanishing into AI products without appropriate compensation, while AI summaries are embedded within products, negating the need for clicks and effectively draining revenue from both ends. It’s an existential crisis.”

Publishers are pursuing various courses of action, from negotiations and litigation to regulatory lobbying, while also integrating AI tools into their newsrooms: the Washington Post and the Financial Times, for example, have launched AI-powered chatbots, including a tool for answering readers’ climate questions.

Christoph Zimmer, chief product officer at Germany’s Der Spiegel, notes that while current traffic remains steady, he anticipates a decline in referrals from all platforms.

“This is part of a longstanding trend,” he states. “However, it has affected brands that haven’t prioritized direct audience relationships or subscription growth in recent years, instead depending on broad content reach.”

“What has always been true remains valid. Prioritizing quality and diverse content is essential; it’s about connecting with people, not merely chasing algorithms.”

Publishing industry leaders emphasize that negotiations over deals allowing AI models to aggregate and summarize news archives are rapidly giving way to negotiations over models’ use of live news updates.

“The initial focus was on licensing arrangements for AI training to ‘speak English,’ but that will become less relevant over time,” asserts an executive. “We’re transitioning towards providing news directly. To achieve this, we require precise, live sources — a potentially lucrative market publishers are keen to explore next.”

PPA CEO Saj Merali emphasizes the need for a fair equilibrium between technology-induced changes in consumer digital behavior and the just compensation for trustworthy news.

“What remains at the core is something consumers require,” she explains. “AI needs credible content. There’s a shift in how consumers prefer to access information, but they must have confidence in what they read.”

“The industry has historically shown resilience through significant digital and technological transitions, yet it is crucial to ensure pathways that sustain business models. At this point, the AI and tech sectors have shown no commitment to support publishers’ revenue.”

Source: www.theguardian.com

Lab-Grown Hexagonal Diamonds Now a Reality


Crystal structure of hexagonal diamond

ogwen/shutterstock

A form of diamond that has eluded scientists for years can now be synthesized in the lab, potentially enabling the production of exceptionally hard cutting and drilling tools.

Diamonds are known for their cubic atomic structure, yet for over 60 years researchers have known of a hexagonal form of diamond predicted to be even harder.

Natural hexagonal diamonds, known by the mineral name lonsdaleite, are found in certain meteorites, but only intermixed with cubic diamonds. Earlier efforts to synthesize hexagonal diamonds yielded only minute quantities of impure material.

Recently, Ho-Kwang Mao and his team at the Center for High Pressure Science and Technology Advanced Research in Beijing successfully produced relatively large hexagonal diamond samples measuring 1 mm in diameter and 70 micrometers thick.

Researchers have been synthesizing regular cubic diamonds for decades, but the hexagonal form required finding the right conditions. “We explored various pressures and temperatures to identify optimal conditions for producing hexagonal diamonds,” the team states. Those conditions: 1400°C at a pressure of 20 gigapascals, about 200,000 times Earth’s atmospheric pressure.
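The pressure figure above is easy to verify; a minimal sanity check, using the standard value of 101,325 Pa per atmosphere:

```python
# Sanity-check the quoted pressure: 20 gigapascals in standard atmospheres.
STANDARD_ATMOSPHERE_PA = 101_325  # pascals per atmosphere (defined value)

pressure_pa = 20e9  # 20 GPa
atmospheres = pressure_pa / STANDARD_ATMOSPHERE_PA
print(f"20 GPa ≈ {atmospheres:,.0f} atm")  # ≈ 197,000, i.e. roughly 200,000 atmospheres
```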

As these materials are unprecedented, Mao indicated a comprehensive investigation is necessary to ascertain their properties. “It’s extremely valuable,” he explains. “However, once the synthesis process is understood, anyone can replicate it. Thus, securing a patent and discovering ways to reduce production costs are critical.”

Predictions based on its structure suggest hexagonal diamond might be around 60% stiffer than conventional diamond. Cubic diamond has a hardness of about 115 gigapascals as measured by the Vickers hardness test; the hexagonal diamonds synthesized by Mao’s group measure 120 gigapascals, a figure the team believes could improve with further refinement of their techniques.
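For scale, the quoted hardness values can be compared directly (note the predicted 60% gain concerns stiffness, a different property, so the measured hardness gap need not match it):

```python
# Compare the quoted Vickers hardness values (in gigapascals).
cubic_hv = 115.0      # conventional cubic diamond, as quoted above
hexagonal_hv = 120.0  # Mao's group's synthesized hexagonal diamond

gain_percent = (hexagonal_hv - cubic_hv) / cubic_hv * 100
print(f"Hardness gain so far: {gain_percent:.1f}%")  # ≈ 4.3%
```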

If hexagonal diamonds can be fabricated at sufficient thickness, they could be used to make more robust and resilient industrial tools for applications like geothermal energy drilling, according to James Elliott at the University of Cambridge. “Naturally, as you drill deeper, temperatures rise,” he says, so tougher tools may enable exploration at greater depths.

Topics:

  • Diamonds
  • Materials Science

Source: www.newscientist.com

The Incredible Reality Behind the 10,000-Step Myth—and What You Should Strive for Instead

Walking 7,000 steps daily can significantly enhance your overall health.

A recent research review indicated that individuals who walk at least 7,000 steps each day nearly halve their risk of death from all causes over a given timeframe.

Walking just 4,000 steps daily has been shown to considerably lower the risk of cardiovascular disease, cancer, diabetes, dementia, depression, and falls.

Improvements continue with increased step counts, but the benefits start to taper off after reaching 7,000 steps. This makes 7,000 steps a more realistic goal for those aiming to boost their health, compared to the commonly recommended 10,000 steps.

It’s well-known that increasing physical activity offers substantial health benefits; however, our increasingly sedentary lifestyles mean that one-third of the global population is considered insufficiently active.

Counting daily steps is a popular method for tracking activity levels. The often cited target of 10,000 steps is frequently viewed as the benchmark to achieve, but this number lacks solid scientific backing.

A recent review published in Lancet Public Health examined 57 studies to clarify what step count should be targeted for health benefits.

The review started with a baseline of 2,000 steps per day, finding that health benefits increased with every additional 1,000 steps.

However, the pace of improvement began to level off after 7,000 steps.

For the average person, 7,000 steps equate to roughly 3-3.5 miles, depending on stride length – Credit: Getty Images
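The caption’s distance figure follows from a simple steps × stride calculation; a short sketch, assuming typical adult stride lengths of roughly 0.7–0.8 m (an illustrative assumption, not a figure from the study):

```python
# Convert a daily step count to approximate miles walked.
METERS_PER_MILE = 1609.344

def steps_to_miles(steps: int, stride_m: float) -> float:
    """Approximate distance in miles for a given step count and stride length."""
    return steps * stride_m / METERS_PER_MILE

for stride in (0.7, 0.8):
    print(f"7,000 steps at {stride} m/stride ≈ {steps_to_miles(7000, stride):.1f} miles")
```

This lands in the 3–3.5 mile range quoted above.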

At the 7,000-step mark, the results showed a dramatic impact: all-cause mortality decreased by 47%, the risk of dementia dropped by 38%, and cardiovascular disease risk reduced by 25%. There were also significant reductions in the risk of depression, type 2 diabetes, and cancer.

Even a modest increase, from 2,000 to 4,000 steps a day, was associated with a 36% reduction in all-cause mortality.

Despite the rising interest in using step counts as a metric for tracking activity levels, public health officials have previously lacked enough evidence to establish scientifically backed targets.

The unofficial 10,000-step target originated from a pedometer marketing campaign in Japan around the time of the 1964 Tokyo Olympics, rather than from health evidence; the Japanese character for 10,000 (万) resembles a person walking.


Source: www.sciencefocus.com

How Metaphysics Uncovers Hidden Assumptions to Comprehend Reality

Metaphysics often faces undue criticism. “Many people consider it a waste of time,” states philosopher Stephen Mumford from Durham University, UK, and author of Metaphysics: A Very Short Introduction. “Are they simply arguing over trivial matters, like how many angels can dance on the head of a pin?”

This viewpoint is understandable. Classical metaphysics—whose name comes from the Greek “meta”, meaning “after” or “beyond”—has often grappled with peculiar questions. For instance, what constitutes a table? What shape does color assume? It uses logical tools like “reductio ad absurdum” to derive conclusions solely from inference. This method seeks to demonstrate the validity of a claim by highlighting absurdities within its negation, quite different from the empirical observations that characterize scientific inquiry.

This article is part of our concept special, exploring how experts view some of the most intriguing scientific ideas. Click here for more information.

Nonetheless, the notion that metaphysics is merely an abstract discipline disconnected from reality is one that Mumford rebuts.

Indeed, modern science has encroached upon areas once deemed exclusive to metaphysics, including the nature of consciousness and the implications of quantum mechanics. It’s becoming increasingly evident that both domains are interconnected.

To understand this interplay, one must recognize that everyone inherently possesses metaphysical beliefs, asserts Vanessa Seifert, a philosopher of science at the University of Bristol, England. For instance, many believe in the existence of objects even when they are not being observed, despite the absence of robust empirical evidence to support this claim.

Moreover, “naturalized metaphysics” emerges from this discussion. Unlike traditional metaphysics, which remains speculative, this version is grounded in scientific understanding, according to Seifert. “We observe what science reveals about our universe and consider whether we can accept it as literal truth.”

This contemporary metaphysics serves a crucial role for science, as it probes the foundational assumptions behind our understanding of the universe. “In many instances, metaphysical beliefs form the basis upon which empirical knowledge is constructed,” explains Mumford.

Causality—the principle that every effect has a cause—is a prime example. Despite the fact that causality itself is not directly observable, it is a belief we universally hold. “Essentially, the entirety of science operates on this metaphysical premise of causality,” he remarks.

These days, scientists routinely engage with deeply metaphysical concepts, ranging from chemical elements to space and time, as well as the very laws of nature, thereby intensifying the scrutiny of these ideas.

“We can critically evaluate our metaphysical assumptions, or we can choose to overlook them,” says Mumford. “However, ignoring them means we make unexamined assumptions.”

One notable intersection of science and metaphysics exists in quantum mechanics, which delves into the atomic and subatomic realm. While it stands as a highly successful scientific framework, addressing its implications requires physicists to confront metaphysical queries, such as the interpretation of quantum superpositions.

In this realm, competing interpretations of reality exist without being testable through conventional experiments. It’s increasingly clear that scientific advancement hinges on confronting these hidden assumptions. In response, some researchers are revitalizing the notion of “experimental metaphysics,” aiming to assess the consistency of metaphysical beliefs that prioritize various interpretations of quantum theory.

“Ultimately, you cannot engage in physics without also grappling with metaphysical inquiries,” states Eric Cavalcanti, a prominent proponent of this perspective at Griffith University in Brisbane, Australia. “Both aspects must be addressed simultaneously.”


Source: www.newscientist.com

Quasiparticles: Profound Insights into the Nature of Reality

koto_feja/Getty Images


Traditionally, we envision particles as tangible objects—tiny, point-like entities with specific properties like position and velocity. In reality, however, particles are energetic fluctuations within an underlying field that fills the universe, and they cannot be directly observed. This concept can be quite perplexing.

This article is part of our special focus on concepts, examining how experts interpret some of the most astonishing ideas in science. Click here for more information.

Furthermore, there exists a further layer of complexity due to quasiparticles, which arise from intricate interactions among the “fundamental” particles found in solids, liquids, and plasmas. These quasiparticles possess fascinating properties of their own, suggesting the potential for exotic new materials and technologies, and challenging our established notions of particles.

“When discussing what particles are, the topic can become quite convoluted,” states Douglas Natelson from Rice University in Houston, Texas. He describes quasiparticles as “excitations in a material that exhibit many characteristics associated with particles.” They can have relatively well-defined positions and velocities and can carry charge and energy. So why aren’t they considered actual particles?

The answer lies in how they exist. Natelson likens this to fans performing “waves” in a stadium. “We can observe the waves and think, ‘Look! There’s a wave, it’s of a certain size, moving at a specific speed.’ But those waves are essentially a collective phenomenon, resulting from the actions of all the fans present.”

To create quasiparticles, physicists often manipulate materials, subjecting them to extreme temperatures, pressures, or magnetic fields, and then study the collective behavior of the constituent particles.

One intriguing phenomenon, recognized in the 1940s, is the “hole”: the absence of an electron where one would normally be present. By analyzing these holes as if they were independent, positively charged entities, researchers were able to develop the semiconductors that power modern laptops and smartphones.

“Essentially, modern electronics hinge on both electrons and holes,” remarks Leon Balents from the University of California, Santa Barbara. “We continuously utilize these quasiparticles.”

Over the years, physicists have uncovered an entire spectrum of exotic quasiparticles. Magnons are the quanta of spin waves, collective oscillations of spin, the fundamental quantum property underlying magnetism. Cooper pairs, which form at low temperatures, carry charge without resistance through superconductors. The list keeps growing as physicists predict and observe peculiar new types with strange names, such as pi-tons, fractons, and even wrinklons.

Among the more thrilling discoveries is the non-Abelian anyon. Unlike typical particles, these quasiparticles retain a memory of how they have been exchanged, or “braided”, with one another.

The practicality of these quasiparticles remains uncertain, according to Balents. Nonetheless, major companies like Microsoft have invested heavily in non-Abelian anyons as a possible basis for topological quantum computing.

The ongoing investigation raises fundamental questions about particle nature itself. If quasiparticles exhibit particle-like characteristics, one must consider whether the “fundamental” particles (e.g., electrons, photons, quarks) might emerge from a more profound underlying framework.

“Are what we classify as fundamental particles truly elementary, or could they be quasiparticles arising from more basic fundamental theories?” ponders Natelson. “An eternally looming question.”


Source: www.newscientist.com

How Our Brains Distinguish Between Imagination and Reality

Overlap of Brain Regions in Imagination and Reality Perception

Naeblys/Alamy

How can we differentiate between what we perceive as real and what we imagine? Recent findings have uncovered brain pathways that may assist in this distinction, potentially enhancing treatments for hallucinations associated with conditions like Parkinson’s disease.

It’s already established that the brain regions activated during imagination closely resemble those engaged when perceiving real visual stimuli; however, the mechanism distinguishing them remains elusive. “What allows our brains to discern between these signals of imagination and reality?” asks Nadine Dijkstra from University College London.

To explore this, Dijkstra and her team observed 26 participants performing visual tasks while their brain activity was monitored via MRI scans. Patches of grey visual static were displayed on a screen for 2 seconds each, repeated over 100 times. Participants were prompted to imagine diagonal lines within each patch; half of the patches contained real diagonal lines.

Subsequently, participants rated the vividness of the lines they perceived on a scale of 1-4 and indicated whether the lines were real or imagined.

Through the analysis of brain activity, researchers found that when participants viewed the lines more vividly, the fusiform gyrus, a specific brain area, was more active, irrespective of the line’s actual presence.

“Prior research indicated that this area is engaged in both perception and imagination, but this study reveals its role in tracking the vividness of visual experiences,” notes Dijkstra.

Crucially, when activity in the fusiform gyrus rose above a certain threshold, activity also increased in an associated area known as the anterior insula, and participants perceived the lines as real. “This additional area connects to the fusiform gyrus, possibly aiding decision-making by processing and re-evaluating its signals,” she adds.

While it’s likely that these brain regions are not the sole players in discerning reality from imagination, further investigation into these pathways could refine our understanding of treating visual hallucinations linked to disorders such as schizophrenia and Parkinson’s disease.

“Individuals experiencing visual hallucinations might exhibit heightened activity when visualizing their imagined scenarios, or the monitoring of their signals could be inadequate,” Dijkstra suggests.

“I believe this research could be pivotal for clinical cases,” says Adam Zeman, from the University of Exeter, UK. “However, distinguishing whether minor shifts in sensory experiences are driven by real-world events, discerning fully formed hallucinations, and determining the duration of beliefs remains a significant challenge,” he explains.

To address this knowledge gap, Dijkstra’s team is currently studying the brain pathways of individuals with Parkinson’s disease.

Topics:

Source: www.newscientist.com

Emerging Theories May Finally Bring “Quantum Gravity” to Reality

Researchers might be on the brink of solving one of the most significant challenges in physics, potentially laying the groundwork for groundbreaking theories.

At present, two distinct theories—quantum mechanics and gravity—are employed to elucidate various facets of the universe. Numerous attempts have been made to fuse these theories into a cohesive framework, but a compelling unification remains elusive.

“Integrating gravity with quantum theory into a single framework is one of the primary objectives of contemporary theoretical physics,” states Dr Mikko Partanen, lead author of the research, recently published in Reports on Progress in Physics. Speaking to BBC Science Focus, he describes such a unification as “the holy grail of physics”.

The challenge of formulating a theory of “quantum gravity” arises from the fact that these two concepts operate on entirely different scales.

Quantum mechanics investigates the smallest scales of subatomic particles and underpins the standard model of particle physics. This model describes three fundamental forces: the electromagnetic force, the strong force (which binds protons and neutrons), and the weak force (responsible for radioactive decay).

The fourth fundamental force, gravity, is articulated by Albert Einstein’s general theory of relativity, which portrays gravity as a curvature of spacetime. Massive objects and high-energy entities distort spacetime, influencing surrounding objects and governing the domain of planets, stars, and galaxies. Yet, gravity seems resistant to aligning with quantum mechanics.

The Duality of Theories

A significant issue is that gravity is rooted in a “deterministic classical” framework, meaning the laws predict specific outcomes. For instance, if you drop a ball, gravity guarantees it will fall.

In contrast, quantum theory is inherently probabilistic, offering only the likelihood of an event rather than a definitive outcome.

“These are challenging to merge,” Partanen comments. “Attempts to apply quantum theory within gravitational contexts have yielded numerous nonsensical results.”

For example, when physicists calculate quantities such as the electron’s mass using quantum field theory, the raw equations spiral into infinities, which must be tamed by renormalization, a procedure that fails for gravity. Similarly, in extreme conditions, such as at the center of a black hole, Einstein’s equations break down.

Even general relativity fails to explain phenomena within a black hole – Credit: NASA

“While intriguing approaches like string theory [which substitutes particles with vibrating energy strings] exist, we currently lack unique, testable predictions to differentiate these theories from standard models or general relativity,” notes Partanen.

Instead of crafting an entirely new theory for unification, Partanen and his colleague, Professor Jukka Tulkki, approached gravity through the lens of quantum mechanics by reformulating the gravitational equations using fields.

Fields represent how quantum theory elucidates the variation of physical quantities over space and time. You may already be acquainted with electric and magnetic fields.

This novel perspective allowed them to replicate the principles of general relativity in a format that combines effortlessly with quantum mechanics.

Testing the Theories

A particularly promising aspect of this new theory is that it does not require the introduction of exotic new particles or altered physical laws, meaning physicists already possess the necessary tools for its verification.

According to Partanen, the new theory generates equations that account for phenomena like the bending of light around massive galaxies and redshifts, the stretching of light’s wavelength as objects recede in the expanding universe.

This new theory aligns with predictions from general relativity. – Credits: ESA/Hubble & NASA, D. Thilker

While this validates the theory, it does not confirm its correctness.

To establish this, experiments must be conducted in extreme gravitational environments where general relativity falters.

If quantum gravity can make superior predictions in such scenarios, it would serve as a crucial step towards validating this new theory and suggesting that Einstein’s framework might be incomplete.

However, this is challenging due to the minimal differences between the two theories.

For instance, when observing how the sun’s mass bends light from a distant star, the predictive discrepancy is a mere 0.0001%. Current astronomical tools are insufficient for precise measurements.

Fortunately, larger celestial bodies can amplify these differences dramatically.

“For neutron stars with intense gravitational fields, relative differences can reach a few percent,” Partanen observes. While no observatory currently exists to make such observations, advancements in technology could soon enable this.

The theory remains in its nascent stages, with the team embarking on a mission to finalize mathematical proofs to ensure the theory avoids diverging into infinities or other complications.

If progress remains encouraging, they will then apply the theory to extreme situations, such as the singularity of a black hole.

“Our theory represents a novel endeavor to unify all four fundamental forces of nature within one coherent framework, and thorough investigation may unveil phenomena beyond our current understanding,” concludes Partanen.


About Our Experts

Mikko Partanen is a postdoctoral researcher in the Department of Physics and Nanoengineering at Aalto University in Espoo, Finland. He specializes in studying light and its quantum properties, with his research appearing in journals such as Physics Chronicles, New Journal of Physics, and Scientific Reports.

Source: www.sciencefocus.com

The reality of your risk for digital dementia

Technology can actually offer some amazing benefits in slowing cognitive decline as we age, according to new research published in the journal Nature Human Behaviour. Professor Michael Scullin, co-author of the study, says that while the idea of “digital dementia” is concerning, the study’s results were surprising.

The study, conducted by Professor Jared Benge and his colleagues, compiled data from 57 scientific studies involving approximately 410,000 middle-aged or elderly participants. The results showed that technology use was associated with better cognitive outcomes and a reduced risk of cognitive impairment.

Despite concerns about excessive technology use, the study found that technology could actually benefit brain health by providing mental stimulation. This includes learning new things and engaging in mentally stimulating behaviors using computers, the internet, and smartphones.

The study also highlighted how technology can help older individuals maintain independence and cognitive function through tools like GPS devices and digital calendars. These compensatory behaviors can offset age-related declines in memory and attention.

How to Use Technology Responsibly

The key takeaway from the study is that technology can be a valuable tool for maintaining cognitive health in older adults. By introducing older individuals to digital devices and patiently teaching them how to use them, we can help them benefit from the positive aspects of technology.

For older adults who may be hesitant to adopt technology, it’s important to encourage them to give it a try and provide support throughout the learning process. By making technology use more accessible and engaging, we can help older individuals experience the benefits of digital tools.

In conclusion, while there is still ongoing research on the impact of technology on cognitive aging, the study provides a hopeful message that technology use can have positive effects on brain health. By focusing on the beneficial aspects of technology and finding ways to integrate it into daily life, older adults can potentially slow down cognitive decline and improve overall cognitive function.

About the Experts

Michael Scullin: Professor of Psychology and Neuroscience at Baylor University, specializes in sleep physiology and memory. He explores how memory can be used to fulfill daily intentions and investigates the impact of technological solutions on memory difficulties in older adults.

Jared Benge: Clinical Neuropsychologist and Associate Professor at the Dell Medical School, University of Texas at Austin. His research focuses on cognitive impairment, early detection of cognitive decline, and real-world function in older adults with neurodegenerative diseases.

Source: www.sciencefocus.com

Could our universe be a membrane on the edge of a strange, far-off reality?

Getty Images/Shutterstock

String theory is our best candidate for a theory of everything. Bend to its rules and the various tangled theories of conventional physics emerge as parts of a sublime, higher-dimensional tapestry. It can unify all four fundamental forces of nature, including the most troublesome of them all: gravity. With luck, it might even tame big bangs and black holes without losing the thread.

There's only one catch. String theory cannot explain the universe like ours. That mathematics can explain billions of different possible universes, but not expanding at speeds of acceleration, it's exactly what we see. Certainly, no one knows that this acceleration is driving. Mystical “dark energy” is the usual placeholder. According to theory, it probably shouldn't happen at all.

For 25 years, this has been a big problem, but now we may have found a way past it. On the surface, the answer won’t shock anyone used to the extravagance of modern physics: we need to rethink our universe as part of a much larger whole. Do that, and the accelerating expansion seems to emerge naturally. Yet this new scheme may be the wildest one ever, with our familiar space delicately poised between higher-dimensional hyperspace and total nothingness. “Our proposal says that our existence is like a shadow: a projection onto a wall at the end of the world,” says Antonio Padilla, a physicist at the University of Nottingham in the UK.

For all its present grandeur, string…

Source: www.newscientist.com

How Virtual Reality Goggles Contributed to my Journey to Physical Rehabilitation

If you had asked me a month or two ago whether I had ever had a spatially immersive experience, or whether, at 60, I might turn out to be an early adopter of virtual reality goggles, I’d have said it was about as likely as a Silicon Valley tech giant being appointed to “disrupt” the US federal government.

Let me explain the series of events that led me to the latest technology.

Over the years I have had to contort myself into acrobatic positions that would qualify me for Cirque du Soleil just to avoid discomfort while working at my computer. Despite multiple rounds of standing desks and tedious physiotherapy, I can no longer use my right arm.

This past year in particular has felt like a tortured battle with cognitive decline and brain fog, the result of a steady diet of largely ineffective anti-inflammatories. I have never done so little work in my life, nor had so much time on my hands. I am now armed with an MRI scan showing a ruptured shoulder tendon, a date for surgery to deal with the three herniated neck discs compressing the nerves that run down my arm, and a nagging sense of guilt.

To cope, I have learned to cultivate curiosity, a great source of distraction. I leave the house open to wherever adventure might take me… because you never know. Last week, I spent an afternoon shuffle-dancing to a great DJ at the Camden Assembly pub.

Two months ago I set out for what I thought would be a stop at a museum, but instead found myself in a store looking for a charger for my iPhone. While standing there, I explained to an empathetic young sales assistant that I was a benched writer whose right arm was temporarily out of action, and jokingly asked if he had a gadget that could get food into my mouth with my left hand without my stabbing myself in the eye with a fork.

I must have looked blank when he asked if I wanted to try out a “mixed reality headset”. He explained that it is used for multimedia experiences such as watching films and playing games on virtual screens, and suggested that eye tracking, coupled with the voice control in the accessibility features, might make it possible for me to work.

The next moment I was sitting in the demonstration area wearing thick, heavy glass goggles. After a quick setup, a little green dot floated in the air. Tapping my thumb and finger together brought up a group of familiar app icons in a transparent visual overlay; the graphics were so sharp that the icons looked clearer than the chair in front of me. The eye tracking was the most surprising thing: you simply look at the app you want, then gently tap finger and thumb together to open it. You can move multiple screens closer together or further apart, like furniture in a room.

I try other programs, opening photos and expanding them to life size. I watch immersive videos so realistic that dinosaurs loom out at me – terrifying, yet somehow adorable, eyeing me the same way I eye them. It reminds me of a parenting moment when my son was a toddler and struggled to work out whether a stuffed animal was real. He was relieved to be told there was no such thing as a gruffalo, despite the stuffed replica. Once he realised it was an optical trick, he never needed to ask again about the difference between the real and the pretend.

Navigating my way around the various programs is a bit like learning to balance on a bike. At first I am confused; the speed of movement demands constant adaptation to spatial and visual cues. Then I start to relax. I reach out and interact with digital objects – butterflies land on my finger – and find myself reacting with the same wonder I experience in the real world.

And then, the denouement. A small dial on the side of the goggles lets you control how deeply you immerse yourself in a chosen reality. The actual room vanishes, replaced by a mountain scene. I gasp in surprise. The spatial depth, the light and shadow, make the scene so vivid that I can feel the space around me. I know it isn’t real, and the distinction stays clear, yet I experience a mood shift as if it were. It’s like getting on the tube at Piccadilly Circus and surfacing at the next stop on a beach in the Bahamas.

If the possibilities of these immersive spaces are slightly frightening, consider how, as a species, we have always adapted our neurocognition and spatial awareness in response to cultural and scientific advances. Think of how physics evolved because we could imagine the invisible workings of the universe, unavailable to the human eye, or of the fundamental shift in spatial perspective when painting moved from two dimensions to three.

Within a few minutes I was moving easily in and out of programs using my eyes and hands, my arms relaxed, without any firing of nerve pain. The assistant showed me software demonstrating breakthroughs in medical training, and immersive experiences for education, art, architecture and design. Seeing this, I found myself overwhelmed by tears at the thought that I might be able to work in the months leading up to my surgery and during rehabilitation.

Until my thoughts moved to my next dilemma: how to break the news to my husband. In 30 years of marriage the rule has always been that we consult each other on purchases over £100. How could I explain to him, over the phone, the difference in my mood, the new vision I had of the year ahead of my life? It felt as though someone had given me a smart drug, a magical cure for brain fog. In truth, I would swap painkillers for goggles any day.

There was only one answer: I had to bring them home so he could try them for himself. I took a picture of the purchase and texted it to him with the message “No heart attacks, I can return them.” He immediately texted back: “I’m having a heart attack.” I left the store deep in creative thought, carrying the bag with my new virtual reality goggles in my left arm. I took a bus going in the wrong direction and, failing to pick up a single visual clue, travelled another 10 stops.

Once I got home, negotiations continued all night. I’ll spare you the blow-by-blow. He conceded the consensus that it is about the best VR headset there is, but insisted that it is still just that: virtual reality. I argued that this is like claiming smartphones are nothing more than mobile phones. He pointed out that even influencers and early adopters are predicting market failure because the price is so prohibitive (from £3,499). Why not wait for prices to drop? I pointed out that waiting would defeat the purpose: it’s about letting me do my work and helping me survive mentally through the next year. That settled it. He got it, and was genuinely happy for me – even moved. The goggles stay.

A few days later, during coaching on the accessibility features, I learned to block gestures from my right arm, forcing the part of my brain that wants to steer with my right side to take a break – which has also accelerated my learning to go hands-free on other devices. And that’s just as well, because I can only use the goggles for a few hours a day before their weight gives me neck cramp. But I’ve learned a hack for that: lying down, using myself as a table of sorts.

I’m not about to go out in public with the bug-eyed goggles on, but having got past the disorientation and mild panic I felt after taking them off – and my wariness of their seductive charm – I am beginning to settle into my new hip identity.

This has won round friends and family, despite an enormous amount of ridicule and sceptical concern. I haven’t taken this much stick since being arrested for trying a joint as a teenager. Am I at risk of giving up the struggle of being a human in the real world? Watch this space. This article was dictated hands-free.

Debora Harding’s Dancing with the Octopus is published by Profile Books and Bloomsbury USA. Buy it for £9.99 at guardianbookshop.com

Source: www.theguardian.com

Online virtual reality tool offers free help for public speaking anxiety

A new online platform has been launched to help speakers practice in front of virtual audiences, easing the anxiety many face in public speaking situations.

Dr. Chris McDonald, founder of Cambridge University’s Immersive Technology Lab and creator of the platform, aims to eliminate the long waits and high costs associated with seeking help for speech anxiety.

“Most people experience speech anxiety but don’t have access to treatment. This project aims to break down those barriers,” he explained.

The virtual reality public speaking platform uses exposure therapy, combining breathing exercises and eye movements to reduce the heart rate and fear response.

Users can practice public speaking in various virtual reality settings, from empty classrooms to large stadiums with thousands of people. McDonald refers to the latter as “overexposure therapy.”

McDonald said that the platform, compatible with Android and iOS, offers scenarios such as job interviews, along with study materials and feedback mechanisms, and is accessible via laptop, VR headset, or a smartphone with a cheap mount.

In a recent study published in Frontiers in Virtual Reality, 29 Chinese adolescents showed significant improvement in public speaking confidence and enjoyment after using the platform.

Further research is planned, but McDonald revealed that tens of thousands have already used the platform during development. He emphasized the importance of creating an effective and accessible tool for users.

Psychologist Dr. Matteo Cella from King’s College London’s Virtual Reality Lab acknowledged the platform’s potential benefits but stressed the need for robust trials to evaluate its efficacy.

Dr. Kim Smallman of Cardiff University highlighted the importance of assessing the impact and effectiveness of new technologies like VR in addressing mental health challenges.

Source: www.theguardian.com

Uncovering the Shocking Reality of TikTok’s “Brain Rot” from a Neuroscientist’s Perspective

“Brain rot” was named Oxford’s word of the year for 2024. It is defined as the “deterioration of a person’s mental or intellectual state” that results from consuming “trivial” content online, such as TikTok videos.

It’s a term that is often used in jest, but what if there is a grain of truth to it? That is the seemingly scary implication of a new study published by a large team of brain scientists based at Tianjin Normal University in China.

What did this study find?

They scanned the brains of over 100 undergraduates, who also completed a survey on their habits of watching short online videos. The survey asked how much they agreed with statements such as “My life would feel empty without short videos” and “Not being able to watch short videos would be as painful as losing a friend.”

Interestingly, the researchers found that those who felt most hooked on short videos showed significant differences in brain structure. These participants had more grey matter in the orbitofrontal cortex (OFC), an area near the front of the brain involved in decision-making and emotional regulation. They also had more grey matter in the cerebellum, the small cauliflower-shaped structure at the back of the brain that plays a role in movement and emotion.

The team concluded that this is bad news for TikTok enthusiasts: an oversized OFC, they wrote, could be a sign of “an increased sensitivity to rewards and stimuli associated with short video content”. They speculated that watching too many TikTok videos could have caused this neural enlargement.

Similarly, they suggested that an enlarged cerebellum could help the brain process short video content more efficiently – perhaps the result of frequent binges. This could create a reinforcement cycle, in which watching more videos strengthens these brain pathways and the habit becomes ever more ingrained.

Over 23 million videos are uploaded to TikTok every day – Photo credit: Getty

But that’s not all. The team also performed a second brain scan to track participants’ brain activity while they rested with their eyes closed.

They found a greater synchronization of activity within multiple regions of the brain. These include the dorsal prefrontal cortex (areas involved in self-control), the posterior cingulate cortex (areas involved in thinking about oneself), the thalamus (a type of relay station for brain signals), and the cerebellum.

The researchers suggested that these functional brain differences could reflect a variety of issues among the “addicted” participants, including a tendency toward excessive social comparison and trouble tearing themselves away from the videos.

They also asked participants to fill out a survey on envious temperament, a trait measured by agreement with statements such as “I strive to reach other people’s outstanding results.”

Interestingly, the scientists found that many of the links between video addiction and brain differences were also linked to higher levels of envy. This suggests that feelings of envy may make someone more likely to watch short videos – and that, over time, the habit could lead to potentially harmful changes in the brain.

Does TikTok cause brain rot?

If you are an avid consumer of fun online videos, or a concerned parent, the idea that viewing habits can reshape brain structure may come as no surprise.

However, it is important to see this study in a broader historical context, in which new technologies and media have long prompted exaggerated neurological claims. It is also important to understand the study’s deep limitations.

It’s been nearly 20 years since The Atlantic magazine ran a cover story asking, “Is Google Making Us Stupid?” And, in a nutshell, the answer it asserted was “Yes!” The author, Nicholas Carr, lamented that he was once “a scuba diver in the sea of words” but now, thanks to Google, he zips “along the surface like a guy on a Jet Ski.”

Countless brain imaging studies of questionable quality were published in the same era, many aiming to demonstrate the disastrous effects of the World Wide Web on our vulnerable minds.

A few years later, the neuroscientist Baroness Susan Greenfield launched a media campaign claiming that “mind change” – the impact of the internet and video games on the brain – was as serious a threat to humanity as climate change.

She even wrote a dystopian novel about the dehumanising effects of the internet, to mixed reviews (one critic questioned whether it was one of the worst science fiction books ever written).

Scientists still don’t know how much TikTok affects the brains of young people, but research is underway – Photo credit: Getty

Almost 20 years later, we seem to be fine – at least, I don’t think our brains have turned to mush. But of course, those earlier scares came before the arrival of TikTok. Perhaps there is something uniquely damaging about the short, scrollable, mindless content available today.

Is that plausible? I asked Professor Pete Etchells, an expert on the psychological impact of digital technology at Bath Spa University. “As far as I know, there is no good science to support the idea that short videos are tangibly or uniquely bad in terms of their impact on the brain,” he says.


What is wrong with this research?

Let’s take a look at some of the study’s limitations. If the goal was to prove that watching TikTok harms the brain, a more effective approach would have been to scan participants’ brains before and after having them consume different amounts of the supposedly harmful content.

However, this study was entirely cross-sectional, meaning it captured only a single snapshot in time. There was no before-and-after comparison from which cause and effect could be inferred.

Or, as Etchells says: “[From this study] we can’t say anything about whether watching short videos causes brain changes, or whether certain types of brain structure precede certain types of video consumption.”

“This research doesn’t really add anything that will help us understand how digital technology affects us.”

Even if we accept the researchers’ speculative leap that TikTok videos may have caused the brain changes they observed, there are still some issues to consider.

First, the researchers searched the entire brain for differences that correlated with scores on the video addiction scale. This approach is a well-known problem in brain imaging studies because it increases the risk of false positives: the more comparisons you make, the more likely you are to stumble on random differences that look important but are actually just coincidence.
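To illustrate the statistical point (a generic sketch, not an analysis of the study’s actual data): if each of m independent tests has a 5 per cent chance of a spurious “significant” result, the chance of at least one false positive grows rapidly with m, which is why whole-brain searches need correction for multiple comparisons.

```python
def familywise_error_rate(m: int, alpha: float = 0.05) -> float:
    """Probability of at least one false positive across m independent
    tests, each run at significance level alpha."""
    return 1 - (1 - alpha) ** m

def bonferroni_alpha(alpha: float, m: int) -> float:
    """The classic Bonferroni correction: shrink the per-test threshold
    so the familywise error rate stays near alpha."""
    return alpha / m

# With 100 independent tests, a spurious "finding" is almost guaranteed.
for m in (1, 10, 100):
    print(m, round(familywise_error_rate(m), 3))
```

Running this shows the familywise error rate climbing from 5 per cent for a single test to well over 99 per cent for 100 tests, which is the scale a whole-brain search can easily reach.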

Second, even if we accept that the observed brain differences are real and were caused by watching TikTok, interpreting them involves a lot of speculation. The researchers observed an increase in brain synchronisation, known as regional homogeneity (ReHo). But ReHo is not inherently a good or bad thing: other studies have associated increased ReHo in certain brain regions with positive outcomes, such as those observed during meditation training.

Perhaps the biggest flaw is that the study relies on a questionable survey-based measure of short video addiction that lacks strong scientific validity.

As Etchells put it, “Short video addiction is essentially an invented term, not a formal diagnostic clinical disorder.”

Taken together, these issues suggest we should not be overly concerned that TikTok is fundamentally reshaping young people’s brains in harmful ways.

That said, the excessive amount of time spent watching frivolous videos can still be a problem for some. However, it is more productive to focus on developing healthy media habits rather than worrying about brain changes or addiction.

“In many cases, when research like this hits the news, it’s a good opportunity to pause and reflect on whether we’re happy with the use of the technology,” says Etchells.

“If there are concerns there, it’s worth thinking about what you can do to address them, while knowing that you can also benefit a lot from these technologies.”


About our expert, Professor Pete Etchells

Pete is a professor of psychology at Bath Spa University. His research focuses on how playing video games and using social media affect our mood and behaviour. He is the author of Lost in a Good Game and is currently investigating whether game mechanics can promote gambling-like behaviour in other parts of our lives.


Source: www.sciencefocus.com

Experience Tasting Cakes in Virtual Reality with an Electronic Tongue

Taste hydrogels are administered to the mouth via small tubes

Shryn Chen

Electronic tongues that can replicate flavours like cake and fish soup could help recreate food in virtual reality, but they still can’t simulate the other sensations that shape taste, such as smell.

Yizhen Jia at Ohio State University and his colleagues have developed a system called e-Taste, which can sample a food and partially reproduce its flavour in someone’s mouth.

It works by using chemicals that correspond to the five basic tastes: sodium chloride for saltiness, citric acid for sourness, glucose for sweetness, magnesium chloride for bitterness and glutamic acid for umami. “These five flavours already cover a very large spectrum of the food we have every day,” says Jia.

The system uses sensors to detect the levels of these chemicals in a food and converts them into digital measurements. These values are sent to a pump, which pushes small amounts of hydrogels containing the different flavour chemicals through a small tube under a person’s tongue.
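As a rough sketch of that signal path (all names, scales and dose values here are hypothetical illustrations, not the actual e-Taste firmware or calibration), a sensed chemical reading could be digitised per taste channel and turned into a per-channel hydrogel dose:

```python
# Hypothetical five-channel mapping, mirroring the chemicals described
# in the article.
TASTE_CHANNELS = {
    "salty": "sodium chloride",
    "sour": "citric acid",
    "sweet": "glucose",
    "bitter": "magnesium chloride",
    "umami": "glutamic acid",
}

def encode_reading(concentrations_mM, full_scale_mM=100.0):
    """Normalise sensed concentrations (millimolar) to 0-255 digital
    values, one per taste channel; missing channels read as zero."""
    return {t: min(255, round(255 * concentrations_mM.get(t, 0.0) / full_scale_mM))
            for t in TASTE_CHANNELS}

def pump_schedule(digital_values, max_dose_ul=50.0):
    """Convert digital values into microlitres of each flavoured
    hydrogel for the pump to release."""
    return {t: max_dose_ul * v / 255 for t, v in digital_values.items()}

# e.g. a lemonade-like sample: mostly sour with some sweetness
reading = {"sour": 40.0, "sweet": 25.0}
doses = pump_schedule(encode_reading(reading))
```

The point of the sketch is the two-stage design the article describes: sensing is decoupled from actuation by an intermediate digital representation, so a flavour reading captured in one place can be replayed elsewhere.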

First, the researchers tested the system with a single taste, asking participants to rate how well the device reproduced sourness on a 5-point scale compared with a real sour sample. Participants gave the reproduction the same rating as the real thing 70 per cent of the time.

The team then tested whether the system could replicate more complex flavours such as lemonade, cake, fried eggs, fish soup and coffee, and asked a group of six people whether they could tell them apart. They could do so more than 80 per cent of the time.

However, focusing on these basic tastes alone isn’t very useful, says Alan Chalmers at the University of Warwick in the UK, because other sensations are also involved in how we experience flavour. “Next time you have a strawberry, block your nose and close your eyes. Strawberries are very sour, but are perceived as sweet because of their aroma and red colour. So if your device sends just sourness, you’d never know it actually came from a strawberry.”

“This kind of electronic tongue can extract the amount of sweetness [and] sourness, but that isn’t taste as a human tongue experiences it,” he says.


Source: www.newscientist.com

Finally Got My Virtual Reality Setup Working: A Week of Work, Exercise, and Relaxation

I’m writing this from a room slowly orbiting the Earth. Behind a screen that floats in front of me, through a huge opening where a wall should be, a planet slowly rotates and appears close enough to take up most of my field of vision. To my right it’s morning in Australia. The first vestiges of India and Europe are illuminated and dotted to my left. The soft drone of the air circulation system hums quietly behind me.

I spent a week using a virtual reality headset to do everything I could: work, exercise, compose music. This was the year VR threatened to go mainstream, as prices became more affordable and Apple entered the market. So I wanted to see how far VR has come since I first tried it in the mid-2010s, when the main experience on offer was a nauseating rollercoaster simulator. I used Meta’s latest model, the Quest 3, and the conclusion was clear: it works now. It still feels a little unfinished, but we’ve finally reached the point where VR is genuinely useful.

The biggest surprise was working in VR. When you put on the headset, you can summon multiple screens, all connected to your computer, make them as large as you want and place them anywhere in your environment. “Passthrough” – the ability to see digital objects superimposed on the real world, enabled by cameras built into the front of the headset – means you can cut a window out of the virtual environment to see your keyboard. You can also choose any number of work environments, from minimalist cafes to mountain huts, and switch between them at will. I quickly reached the point where, if I was working alone, I would rather work in virtual reality than in real life.

The main problem is the overall lack of polish. The headset doesn’t feel like a finished product. It’s probably 10% too heavy, like a lab prototype that hasn’t been improved yet. The battery alone won’t last the entire day. Sometimes the controller disconnects without explanation. I brought it on a plane to do some work, but the challenge of connecting to my laptop using the onboard Wi-Fi proved insurmountable.

But watching movies in VR while flying was something else. Yes, I felt the need to apologise to my neighbour in a very British way – wearing a headset in public has not yet become socially acceptable. But as soon as I pressed play, I realised I would never be able to go back to in-flight entertainment. I was sitting in a movie theater with the lights dimmed and several rows of seats between me and a giant screen on a virtual wall. In long stretches without turbulence, I genuinely forgot I was flying. The one downside is that I was so engrossed I almost missed the breakfast cart passing by.


“Today, the key to getting the most out of VR is to use it for activities you do yourself, even if you’re not a gamer.” Photo: Marissa Leshnoff/The Guardian

Of course, the movie theater was empty except for me – by design. Other apps are sparsely populated unintentionally. I downloaded one that promised live virtual concerts. On entering the virtual lobby, I discovered there was no concert going on and no sign of one being scheduled. No problem – it also provided a space for people to mingle when no acts were performing. I loaded it. It was a beautifully designed virtual world, all domes and arches and curved slopes. But it was a ghost town; I was the only one there. And this is considered one of the most popular live-music apps on the internet.

When most people think of VR, they often think of Ready Player One, the science fiction novel and film about a world where people spend most of their time in a shared virtual reality, gathering as avatars to interact, talk, and watch sports and music together. That feels a long way off. There are games that hint at this group experience, such as Gorilla Tag, where children gather after school to play tag as gorillas, talking to each other and moving around by waving their arms. But VR adoption is not yet widespread enough to make Ready Player One’s vision a reality. For now, the key to getting the most out of VR is to use it for activities you do by yourself, even if you’re not a gamer.

For at least some types of knowledge worker, work is one such activity – someone closely involved in the industry recently told me it is considered the fastest-growing use case. I can see why productivity improves in VR. Gone is the office clutter, replaced by a calming environment that matches your mood for the day. Monitors that would cost thousands of dollars in the real world appear in front of you on demand. A virtual forest in the mountains is a far better place to sit down at my keyboard and write music than the grey walls of my study. All distractions disappear from view.

Another is exercise. I did a personal training session in my garden, with a virtual trainer floating in the air right in front of me. Passthrough, only recently added to the Quest, is important here because it means you can use weights – not a smart idea with previous models, which completely obscured the real world. For the many people who have joined a gym only to lapse, it’s not unreasonable to hope that on-demand personal training at home might help them get back to exercising regularly.


Apple’s Vision Pro headset, launched earlier this year, was meant to be the starting gun for VR. It wasn’t. It’s a marvel of engineering with moments of magic, but it is still short of compelling apps, and its £3,500 price tag rules it out for most people. Stories of headsets gathering dust or being returned have led some to conclude that VR is nothing more than a hype bubble created by a tech industry desperate to find the next big thing.

But VR isn’t all hype. Sure, there are kinks to be smoothed out. But I think we’ve reached a tipping point: for solo activities that you don’t need to do in public – work, entertainment, exercise – VR is already genuinely useful. We no longer have to rely on tiny rectangular screens as the way humans communicate with machines.

  • Ed Newton-Rex is the founder of Fairly Trained, a nonprofit organization that certifies generative AI companies that respect the rights of creators, and a visiting scholar at Stanford University.

Source: www.theguardian.com

‘Quantum teleportation defies expectations: It’s a reality now’

Scientists have made a groundbreaking achievement in communication with quantum teleportation. This technology is not meant for teleporting people or objects, but rather for teleporting information.

The scientists have found a way to teleport quantum information through ordinary fiber optic cables, without the need for dedicated new infrastructure, and believe quantum teleportation over existing networks is feasible, as discussed in a study published in Optica.

Professor Prem Kumar from Northwestern University led the research and expressed excitement about the possibilities this breakthrough opens up for quantum and classical networks. This advancement could revolutionize quantum communications and make them more efficient.

Optical communications, which involve transmitting information as light signals, underpin most telecommunications systems. The recent study proposes that quantum teleportation could enhance the security and speed of these communications, limited only by the speed of light.

An Innovative Breakthrough

Quantum teleportation harnesses quantum entanglement, a link that connects particles regardless of the distance between them. Instead of using millions of light particles, as classical communication does, quantum communication relies on pairs of single photons.

A team at Northwestern University, funded by the U.S. Department of Energy, discovered a method to guide these delicate photons through fiber optic cables more efficiently. By identifying specific wavelengths that minimize interference from other signals and implementing special filters, they successfully transmitted quantum information alongside regular internet traffic.

This success could pave the way for secure and rapid quantum communications, aligning with the goals of the International Year of Quantum Technology designated by the United Nations in 2025.

Future Applications

With this breakthrough, existing fiber optic networks could integrate quantum teleportation, eliminating the need for specialized infrastructure. This advancement holds promise for applications like quantum cryptography, sensing, computing, and potentially a new quantum internet.

Professor Kumar aims to test quantum teleportation over longer distances and explore entanglement swapping to enhance communication quality and security. Once proven effective on real underground cables, this technology could be fully integrated into communication networks.

Meet the Experts

Jim Al-Khalili CBE FRS, a theoretical physicist and Emeritus Professor of Physics at the University of Surrey, is a prominent figure in the field. He has made significant contributions to science communication through his books and media appearances.

For more information:

Source: www.sciencefocus.com

Brain implants for treatment of epilepsy, arthritis, and incontinence: A closer reality than you think | Healthcare

Oran Knowlson, a British teenager with a severe form of epilepsy called Lennox-Gastaut syndrome, became the first person to try the new brain implant last October, with astonishing results: his daytime seizures fell by 80 percent.

“The device has had a huge impact on my son's life as he no longer falls and injures himself like he used to,” said his mother, adding that there has been a huge improvement in his quality of life as well as his cognitive abilities: “He is more alert and outgoing.” The device was implanted by Martin Tisdall, a consultant paediatric neurosurgeon at Great Ormond Street Hospital in London (Gosh).

Oran's neurostimulator is implanted under the skull and sends constant electrical signals deep into the brain with the aim of blocking the abnormal impulses that cause seizures. The implant, called Picostim, is about the size of a cell phone battery, is charged through headphones and works differently during the day and at night.

“The device has the ability to record from the brain, to measure brain activity, and we can use that information to think about how to improve the effectiveness of the stimulation that children are receiving,” says Tisdall. “What we'd really like to do is to make this treatment available on the NHS.”

As part of the trial, three children with Lennox-Gastaut syndrome will be fitted with the implant in the coming weeks, with a full trial planned for 22 children early next year. If the trial is successful, academic sponsors Gosh and University College London plan to apply for regulatory approval.

Tim Denison, a professor of engineering science at the University of Oxford and co-founder and chief engineer at Amber Therapeutics, a London-based company that developed the implant in collaboration with the university, hopes that the device will be available on the NHS and around the world within the next four to five years.

The technology is one of a number of neural implants being developed to treat a range of conditions, including brain tumors, chronic pain, rheumatoid arthritis, Parkinson's disease, incontinence and tinnitus. These devices are more sophisticated than traditional implants in that they not only decode the brain's electrical activity but also control it, and this is where Europe is racing against the US to develop life-changing technology.

The latest generation of brain implants can not only detect brain activity but also control it. Photo: UCL

Amber isn't the only company working on brain implants to treat epilepsy. California-based Neuropace has developed a device that responds to abnormal brain activity and has been cleared by US regulators for use by people aged 18 and over. But the battery is not rechargeable and must be surgically replaced after a few years. Other devices are implanted in the chest with wires running to the brain that must be reinserted as the child grows.

When most people think of brain chips, they think of Neuralink, Elon Musk's California-based startup, which recently implanted a brain chip in a second patient with a spinal cord injury. The device uses tiny wires thinner than a human hair to capture signals from the brain and translate them into actions.

The first recipient, Noland Arbaugh, who is paralyzed from the neck down, received his implant in January. Some of the wires later shifted and the implant needed to be adjusted. It allows Arbaugh to control a mouse cursor on a computer screen with his mind, like a Jedi in Star Wars “using the Force.”

Other US companies, such as Synchron, backed by Bill Gates and Jeff Bezos, have also recently implanted brain-computer interfaces (BCIs) in people who cannot move or speak.

But scientists say these implants simply decode electrical signals. In contrast, a number of companies in the U.S., Britain and Europe, like Amber, are working on so-called “BCI therapy,” or modulating signals in deep brain stimulation to treat disease. Amber's implants are also being used in academic trials for Parkinson's disease, chronic pain and multiple system atrophy, a condition that gradually damages nerve cells in the brain. The company is also sponsoring an early trial in Belgium to treat incontinence, with promising results.

Professor Martin Tisdall led the team that gave Oran Knowlson, who suffers from severe epilepsy, the implant last October. Photo: UCL

A different kind of technology will be tested in humans in clinical trials starting in a few weeks, using the first brain implant made from graphene, a “miracle material” discovered 20 years ago at the University of Manchester.

Medical teams at Salford Royal Infirmary will implant a device with 64 graphene electrodes into the brains of patients with glioblastoma, a fast-growing form of brain cancer. The device will stimulate and read neural activity with high precision, to spare other parts of the brain while removing the cancer. The implant will be removed after surgery.

“We use this interface to map out where the glioblastoma is and then remove it [cut it out] without affecting areas of function such as language or cognition,” says Carolina Aguilar, co-founder and CEO of InBrain Neuroelectronics, the Barcelona-based company that developed the implant in collaboration with the Catalan Institute of Nanoscience and Nanotechnology and the University of Manchester.

Traditionally, implants have used platinum and iridium, but graphene, a form of carbon, is ultra-thin, harmless to human tissue, and can record and modulate neural signals very selectively.

InBrain plans to conduct clinical trials of similar artificial intelligence-powered implants in people with speech disorders caused by Parkinson's disease, epilepsy and stroke.


Professor Kostas Kostarelos, head of nanomedicine at the University of Manchester, co-founder of InBrain and principal investigator on the glioblastoma trial, says the company's goal is to “develop more intelligent implantable systems”.

Equipped with AI, the device, with 1,024 electrical contacts, “will help provide optimal treatment for each patient without the neurologist having to program all those contacts individually, as they do today,” he says.

InBrain has partnered with German pharmaceutical company Merck to use its graphene device to stimulate the vagus nerve, which controls many bodily functions including digestion, heart rate and breathing, to treat severe chronic inflammatory, metabolic and endocrine diseases such as rheumatoid arthritis.

Galvani Bioelectronics, founded in 2016 by the UK's second-largest pharmaceutical company GSK and Alphabet's Verily Life Sciences, has developed a pioneering treatment for rheumatoid arthritis that works by stimulating the splenic nerve. Galvani has begun clinical trials with patients in the UK, US and the Netherlands, with first results expected within the next 6-12 months.

Bioelectronics, which combines biological science and electrical engineering, is a market worth $8.7 billion today and, according to Verified Market Research, is predicted to reach more than $20 billion (£15 billion) by 2031. The field focuses on the peripheral nervous system, which carries signals between the brain and the organs. When brain-focused neuromodulation and BCIs are added, Aguilar believes the overall market could be worth more than $25 billion.

While U.S. neuromodulation companies are making waves with devices targeting chronic pain and sleep apnea, a growing number of European startups are also working on the technology. MintNeuro, a spinout from Imperial College London, is developing next-generation chips that can be combined into smaller implants, and has partnered with Amber. With the support of an Innovate UK grant, its first project will be an implant to treat mixed urinary incontinence.

Geneva-based Neurosoft has developed a device that uses a thin metal film attached to stretchy silicone – soft enough to put less pressure on the brain and blood vessels – to target severe tinnitus, which affects 120 million people worldwide.

“Tinnitus begins with ear damage, typically caused by loud noise, but it can also cause changes in the wiring of the brain, making it effectively a neurological disorder,” said Nicholas Batsikouras, the company's chief executive officer.

Founded in 2009 by 13 neurosurgeons, neurologists, engineers and other scientists from the Policlinico Research Center and the University of Milan, Neuronica has developed a rechargeable deep brain neurostimulator that can be used to treat Parkinson's disease. The device provides closed-loop stimulation and adapts moment-to-moment to the patient's condition, and is currently being tested on patients.

“Europe and the UK can compete head-to-head with the US when it comes to getting treatments onto the NHS and distributing them around the world,” Denison said. “It's a fair competition and we're going to give it our all.”

Source: www.theguardian.com

Do I truly need to save all of my text and photos? | In reality

A few years ago, I encountered an unexpected problem: New York City had very few reliable phone repair shops, and even fewer that would repair a 2010 BlackBerry. No one seemed to understand my urgency to get the broken phone working again. It held text messages from my high school days; it was a significant part of my life.

For a brief moment, my BlackBerry actually turned on. I scrolled through my long-lost inbox, hoping to find some forgotten treasure: a written account of teenage heartbreak, memories of excitement, or moments shared with friends. However, my search yielded little. Most were emails about schoolwork.

I could never manage to get the device working again. This felt like a crisis, even if it was a personal and self-centered one. It felt tragic that all these materials — records of my feelings, communication, and my friends’ conversations during my teenage years — were stuck in a broken device.

Over time, the sadness faded, but my digital footprint continued to expand. Each day I generate more content that I’ll want to revisit in the future: numerous text messages (an average of 75 exchanges per day), photos, videos, emails, social media likes, metadata from countless Google searches, group-chat memes, “be there in 5 minutes” texts, my last message from my grandmother, and the complete story of a now-ended long-distance relationship.

I learned from my BlackBerry mishap. Instead of relying on a device destined to become outdated, I now invest in a cloud service that stores everything in a vast, overwhelming digital repository. For just $2.99 a month, I have over 200GB of digital storage, including 16,000 photos, eight years’ worth of Gmail, and 44GB of iMessages exchanged since I set my iPhone to “Don’t Delete” in 2017.

In the physical world, I feel little impulse to discard old, irrelevant items. I am sentimental, and tend to engage in what experts label “digital hoarding”: accumulating excessive digital content to the point of stress and anxiety.

Even with a more moderate approach, one’s digital footprint remains vast, dispersed, disorganized, and controlled by technology companies at their discretion. Experts estimate that the amount of data each of us sends across the internet daily has surged many times over compared with a decade ago. The average American holds about 500GB of stored data, including social media, and the figure keeps growing amid escalating demand: some 328.77 million terabytes of new data are generated worldwide every day.

Our digital storage is expanding, becoming more costly, and taking a toll on the environment: the internet and digital industry’s annual emissions are equivalent to those of the aviation industry. As we descend into cloud-storage hell and bump up against storage limits, data-storage experts and cash-strapped journalists alike recommend a digital spring clean: eliminating duplicate photos as you would declutter old wardrobe items.

For many, including myself, the link between mobile phones and the cloud remains unclear and under-researched. Dr. Liz Silens, a psychology professor at Northumbria University and one of the few researchers to study personal digital data storage, has found that most people don’t know where to begin with their data. “Is it genuinely mine? Is it stored in the cloud? Even if I delete content from my device, does it persist? Do I require additional backups if I can’t trust them? This exacerbates the data issue,” she remarked.

The topic of data makes me anxious as well, because I’m not well versed in technology and lack organizational skills. Data storage, like money, isn’t something I enjoy contemplating; if it’s accessible and usable, that suffices. Periodically, I attempt to transfer my data out of the cloud in a casual, DIY manner, such as copying and pasting all my Facebook messages with my best friend at 16 into a Word document. I quickly become overwhelmed by the technical terminology and multi-step processes recommended in various Reddit threads, populated by people who, like me, fear losing remnants of their past or the digital legacy of a loved one.

One holiday season, my sister gifted me a subscription to iMazing, a service that backs up your iPhone and converts your iMessages into easily readable PDFs. However, after numerous failed attempts and frustration due to inadequate storage space on my 2017 MacBook, I abandoned the endeavor. For months, I manually removed photos from texts to address the memory shortage on my phone. Subsequently, rather than risking unintentional deletion from the cloud, I opted to purchase a new phone.

Archivist Margot Note highlighted a growing trend of private clients seeking to preserve caches of digital treasures, particularly text messages documenting “everyday history and significant moments.” Analogous to physical letters, they reveal the evolution of relationships over time, she mentioned.

The desire to safeguard such content stems from curiosity: What conversations did my best friend and I have in 2018, fresh out of college, full of vigor, and continents apart? How did my former partner indicate our relationship exceeded friendship? When did our bond begin to unravel?

The predominant emotion driving this preservation effort is anxiety. Losing these emails would mean forfeiting evidence of myself and my connections. It would signify losing one of the few constants after a loved one’s passing: their voice, its evolution over time, and their unique tone addressing me. Reflecting on her diary in Ongoingness, writer Sarah Manguso articulated the wish to shield “against awakening at the end of one’s life and realizing you’ve missed it.”


“Just the thought of data triggers anxiety because of its enigmatic nature. It can be overwhelming,” Silens remarked. “Anxiety serves as a significant barrier to addressing the reorganization and management of one’s digital information.”

Engagement with social media introduces its own set of risks. In her book The End of Forgetting: Growing Up with Social Media, cultural and media scholar Kate Eichhorn contends that the internet’s ability to swiftly transport us back in time undermines our capacity to develop adult identities, evolve, and mature. “There’s a risk in the fact that anything can resurface in your life,” she noted. “We haven’t fully grasped the psychological repercussions of that yet.”


Whenever I delve into my 44GB repository of texts, I emerge feeling overwhelmed by information, nostalgic for the past, and acutely aware of the relentless march of time. Memory’s fallibility becomes apparent, as the records don’t always align with my idealized view of history. These texts aren’t my memories but fragments of experiences frozen in time. What’s the harm in forgetting? What do we truly gain from revisiting the past?

Both Eichhorn and Silens question the necessity of retaining such copious digital content. Eichhorn highlights the incessant accumulation of data. “Is this an archive? Or is it simply another form of clandestine, socially acceptable storage?” Silens proposes that tidying up the cloud could evolve into a routine, akin to filing taxes: “Review your day’s photos and only delete those you know won’t be needed in the future.”

I appreciate the notion of being more discerning. We can begin to be deliberate about our digital archives: organizing them and discarding unnecessary items. “Second brain” apps serve as external memory for everything from notes to tasks. Note, the archivist, reassured me that my struggle to organize my digital repository isn’t foolish; there currently isn’t an optimal solution. While institutions possess robust preservation mechanisms, “it requires significant effort and resources,” she noted. “This hasn’t trickled down to personal digital archives yet. It’s likely to happen eventually, but the necessary solutions remain largely unknown to the public.”

Hence, I’ll likely procrastinate until my cloud storage reaches capacity before making a decision. At that point, I’ll likely purchase additional storage. My cloud storage operates quietly in the background, easy to delay, always present but forgotten. Similar to the old BlackBerry tucked away in a desk drawer, never to be used again but comforting in its mere existence.

Source: www.theguardian.com

How tractor beams could soon become a reality: A breakdown of how they’ll work

A beam is a stream of particles moving from a source to a target, exerting a pushing force rather than a pulling force on the target.

On Earth, we can use a vacuum cleaner to pull something towards us, but in reality, we are creating a pressure difference that causes the remaining air molecules to push the object.

This method does not count as a beam, and it would not work in the vacuum of space, where there are no air molecules to push.


However, in space, objects can be moved without using beams. The “gravity tractor” is a concept for a spacecraft that would maneuver near an asteroid and utilize mutual gravity to alter its trajectory.

The spacecraft uses ion thrusters to counteract the gravitational pull of the asteroid, effectively pulling it forward at a controlled pace.

Although gravity is a universal force present between all objects with mass, it is relatively weak.
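To see just how weak, here is a rough, back-of-the-envelope estimate. The spacecraft mass, asteroid mass, and hover distance below are hypothetical illustrative values, not figures from any actual mission study:

```python
# Rough, illustrative estimate of a "gravity tractor" pull.
# All scenario numbers are hypothetical assumptions for the example.
G = 6.674e-11  # gravitational constant, N*m^2/kg^2

def gravity_force(m1_kg, m2_kg, r_m):
    """Newton's law of gravitation: F = G * m1 * m2 / r^2."""
    return G * m1_kg * m2_kg / r_m**2

m_craft = 2e4       # a 20-tonne spacecraft
m_asteroid = 3e10   # a small asteroid, in kg
r = 200.0           # hover distance, in metres

force = gravity_force(m_craft, m_asteroid, r)
accel = force / m_asteroid  # acceleration imparted to the asteroid

print(f"force = {force:.2f} N, asteroid acceleration = {accel:.1e} m/s^2")
```

Even a pull of about one newton accelerates this asteroid by only a few times 10⁻¹¹ m/s², which is why a gravity tractor would have to hover for years to meaningfully nudge a trajectory.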

As an alternative, the European Space Agency (ESA) has explored using electrostatic attraction, a much stronger force. The catch is that most objects are electrically neutral: their positive and negative charges cancel out.

The ESA study discussed ways to charge an asteroid, such as bombarding it with electrons; the charged asteroid could then be attracted by a spacecraft held at around 20,000 volts, acting as a type of tractor beam.

While this method is slower than a science fiction tractor beam, it demonstrates a potential approach to manipulating objects in space.

This article was written in response to a question from Alexandra Rowland about the feasibility of a Star Trek-style tractor beam.


Source: www.sciencefocus.com

How social media and screen time impact young people: The reality

“Put that phone away!” Most parents have yelled something similar to this at their children, usually resulting in a shocked look on the child’s face.

In recent years, the spread of smartphones and social media has led us to spend more time in front of screens. Children are no exception. The COVID-19 pandemic has led to a significant increase in children’s screen time due to lockdowns and school closures.

There are many frightening claims about excessive screen time for children and teens: that it harms their mental health, leading to depression, eating disorders and even suicide; that it cuts into time they could be spending on socializing and exercise, making them feel lonely and less physically fit; and more. In short, the fear is that spending too much time on digital devices is ruining our children’s lives, with the tech companies who design the apps that keep us hooked being complicit. It’s no wonder that governments around the world are considering restricting screen time for under-18s.

Yet a closer look at the evidence does not support this overwhelmingly negative view. That does not mean the tech giants are harmless, or that further regulation is unnecessary. But it does mean we need to think more carefully about what healthy screen time looks like for young people, and how we can make the online world work better for them. So here is your guide to what we actually know about the impact of screens and social media.

One thing is clear in this complex field: children and young people, like the rest of us, spend a lot of time in front of screens.

Source: www.newscientist.com

Science debunks 7 common myths about your reality

Our perception of reality is quite limited because we evolved on the African plains 3 million years ago. Our senses were shaped to help us survive in that environment, with eyes that can detect approaching predators and ears that can hear the rustling of grass.

Although our senses have given us a basic understanding of the world, they also deceive us at times. The majority of nature remains hidden from us, and things are not always as they appear.

Here are a few examples of things that seem obvious but are not necessarily true:

1. The Earth is flat

Many ancient peoples believed the Earth was a disk. – Photo credit: Alamy

While the Earth may appear flat, evidence such as ships disappearing over the horizon and the curved shadow of the Earth on the Moon during a lunar eclipse point to its spherical nature. Observations like the first circumnavigation of the globe also support the round Earth theory.

Proving the Earth’s size involved measurements and calculations, with early estimates by Eratosthenes aligning closely with modern figures.

2. The stars revolve around the Earth

It may seem logical that the stars move around a stationary Earth, but evidence such as the deflection of long-range artillery shells and the Foucault pendulum disproves this. Foucault's pendulum, in particular, provided direct physical proof of the Earth's rotation.

3. Living things are designed to suit their habitats

The apparent design in nature is explained by random mutation and natural selection rather than intentional design. DNA plays a crucial role in how organisms adapt to their environments.

4. Your time is the same as everyone else’s

Speeds close to the speed of light and strong gravitational fields (such as near a black hole) distort time. – Photo credit: Science Photo Library

The concept of time is influenced by speed and gravity, as demonstrated by Einstein’s theories. Time dilation occurs in different gravitational fields, impacting the flow of time.
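The velocity part of this effect can be sketched with the Lorentz factor from special relativity; the 87%-of-light-speed figure below is simply an illustrative choice that makes the factor come out near 2:

```python
import math

def lorentz_factor(beta):
    """Time dilation factor gamma = 1 / sqrt(1 - v^2/c^2), with beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

# At about 87% of the speed of light, gamma is roughly 2:
# a moving clock ticks at about half the rate of one at rest.
gamma = lorentz_factor(0.87)
print(f"gamma = {gamma:.2f}")
```

At everyday speeds, beta is tiny and gamma is indistinguishable from 1, which is why we never notice time dilation in daily life.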

5. The moon won’t fall

Newton’s insights about gravity and orbital mechanics explain why the moon stays in orbit rather than falling to the Earth. Objects in free fall experience weightlessness due to the effect of gravity.
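Newton's reasoning can be checked with a one-line calculation: for a circular orbit, gravity supplies exactly the centripetal force, and the resulting speed at the Moon's distance comes out close to the Moon's actual orbital speed of roughly 1 km/s. The constants below are standard textbook values:

```python
import math

G = 6.674e-11       # gravitational constant, N*m^2/kg^2
M_EARTH = 5.972e24  # mass of the Earth, kg
R_MOON = 3.844e8    # mean Earth-Moon distance, m

# Circular orbit: G*M/r^2 = v^2/r, so v = sqrt(G*M/r).
v = math.sqrt(G * M_EARTH / R_MOON)
print(f"orbital speed = {v:.0f} m/s")
```

In other words, the Moon is perpetually falling toward the Earth, but it is moving sideways fast enough that it keeps missing.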

6. Stars are tiny dots on the celestial sphere

The Milky Way galaxy contains over 100 billion stars. – Photo credit: Getty

The apparent size of stars is deceiving, with parallax observations revealing their true distance and magnitude. Spectral analysis further confirms the nature of stars as distant suns.

7. We can know what the universe is like “now”

The concept of “now” is complex in a universe so vast that light takes years, or even billions of years, to cross it. Observations of distant objects show them as they were in the past, allowing us to study the history of the universe but not its current state.

Source: www.sciencefocus.com

Podcast reveals how reality show deceived women into believing fake Prince Harry was real

A new retrospective podcast series has emerged, delving into the gritty and boundary-pushing world of early 2000s reality TV.

One shocking example featured on the podcast is “There’s Something About Miriam,” where six men unknowingly went on a date with a transgender woman, sparking controversy and discussion. This series gained renewed attention following the tragic death of star Miriam Rivera a decade after filming.

Pandora Sykes and Shirin Kale’s investigative series “Unreal” sheds light on the ethics and exploitation behind era-defining reality shows like Big Brother, The X Factor, The Swan, and Love Island. Similarly, Jack Peretti’s exploration of shows like “The Bachelor” and “Married at First Sight” delves into the questionable practices within the genre.

Another standout from the early 2000s, “I Want to Marry Harry,” featured single American women vying for the affection of a man they believed to be Prince Harry, but turned out to be an imposter named Matt with dyed ginger hair.

In “The Bachelor of Buckingham Palace,” TV expert Scott Bryan interviews former contestants to reveal how easily they were taken in by the show’s absurd premise.

The podcast also features insights into the competitive world of educational scholarships and a scripted drama about AI and grief from Idris and Sabrina Elba.

Holly Richardson
Television Editor Assistant

This week’s picks

Sir Lenny Henry, star of Halfway. Photo: David Bintiner/Guardian

Competition
All episodes available on Wondery+ starting Monday
Sima Oriei’s journey for a high-paying scholarship in Mobile, Alabama, is revisited, showcasing a grueling competition where one girl is crowned America’s Outstanding Young Woman and wins a $40,000 education.

Letter: Ripple Effect
Weekly episodes available
Amy Donaldson’s true crime podcast explores the mysterious murder of a young father in Utah in 1982, delving into the impact on loved ones and the quest for answers.

Incomplete
Audible, all episodes now available
Idris and Sabrina Elba’s scripted podcast raises ethical questions about AI and grief, featuring a stellar cast led by Lenny Henry.

The Long Shadow: In the Guns We Trust
Weekly episodes available
Garrett Graf’s exploration of the right to bear arms in the US, 25 years after the Columbine shooting, sheds light on the voices of gun violence survivors.

Bachelor of Buckingham Palace
Wondery+, all episodes now available
Scott Bryan’s in-depth interviews with former contestants from “I Want to Marry Harry” reveal the surprising reality behind the show’s deceptive premise.

There’s a podcast for that

Dua Lipa, host of “At Your Service.” Photo: JMEternational/Getty Images

Hannah Verdier curates the five best podcasts hosted by pop stars, from Tim Burgess’s listening party to Sam Smith’s poignant exploration of HIV history.

Source: www.theguardian.com

Understanding the Most Disturbing Theory of Reality: A Guide

Are there, in countless parallel worlds, near-duplicates of you reading near-duplicates of this article? Is consciousness a fundamental property of all matter? Is reality a computer simulation? Dear reader, I can hear you groaning from right here in California.

We tend to reject ideas like this because they sound ridiculous. But some of the world's leading scientists and philosophers support them. Why? And, assuming you are not an expert, how should you react to this kind of hypothesis?

Things quickly get strange when we confront fundamental questions about the nature of reality. As a philosopher specializing in metaphysics, I argue that strangeness is inevitable: something fundamentally strange will turn out to be true.

That doesn't mean all weird hypotheses are created equal. On the contrary, some strange possibilities are worth taking more seriously than others. The idea of Zorg the Destroyer hiding at the center of the galaxy, tugging protons on invisible threads, would of course be laughed off as an explanation. But even in the absence of direct empirical tests, we can carefully evaluate which seemingly absurd ideas deserve serious consideration.

The key is to become comfortable weighing competing kinds of weirdness against one another. Anyone can try this, as long as they don't expect everyone to come to the same conclusion.

First, let me clarify that we are talking here about a tremendously big and scary problem: the foundations of reality, and the foundations of our understanding of those foundations. What is the underlying structure of it all?

Source: www.newscientist.com

Chinese Hackers for Hire Exposed in Major Cybersecurity Breach | The Dark Reality of Cybercrime

A recent data breach at a Chinese cybersecurity company has revealed national security agencies paying substantial sums to collect information on a variety of targets, including foreign governments, while the company's hackers gather vast amounts of data on individuals and organizations that might interest potential customers.

A set of over 500 leaked files from the Chinese company, I-Soon, was posted on the developer platform GitHub, and cybersecurity experts have confirmed their authenticity. The targets discussed in the leaked files include NATO and the UK Foreign Office.

The leak provides an unprecedented glimpse into the world of Chinese-employed hackers, with Britain’s security chief describing it as a “significant” challenge for the country. The leaked files consist of chat logs, company prospectuses, and data samples, revealing the scope of China’s intelligence-gathering operations and highlighting the market pressures faced by Chinese commercial hackers in a sluggish economy.

I-Soon is believed to have collaborated with another Chinese hacking organization, Chengdu 404, which has been indicted by the US Department of Justice for cyberattacks on companies in the United States and elsewhere, as well as on pro-democracy activists in Hong Kong.

Other targets discussed in the I-Soon leak include the British think tank Chatham House, public health agencies of Asean countries, and foreign ministries. The leak also indicates that certain data has been collected according to specifications, while in other cases special agreements have been made with the Chinese Public Security Bureau to collect specific types of data.

Chatham House has expressed concern over the leaked data, emphasizing the importance of safeguarding their data and information. Similarly, NATO has acknowledged the persistent cyber threats and stated that it is investing in large-scale cyber defense. However, the British Foreign Office declined to comment.

I-Soon’s services range from gaining access to email inboxes to hacking accounts, obtaining personal information from social media platforms, retrieving data from internal databases, and compromising various operating systems. The leaked files also suggest that the Chinese state is collecting as much data as possible.

I-Soon’s office building in Chengdu, Sichuan province, south-west China. Photo: Dake Kang/AP

The leaked documents further reveal that I-Soon pitched “anti-terrorism” support and claimed to have obtained data from various organizations. The files also contain discussions of the company’s sales practices and internal affairs.

The leaked data also includes screenshots and chat logs where employees discuss the company’s operations and the impact of the COVID-19 pandemic on their business. The company’s CEO expressed concerns about the loss of core staff, the subsequent impact on customer confidence, and the loss of business.

Source: www.theguardian.com

The unsettling reality of cannibalism in human history

Archaeologists have discovered the remains of at least six people at Gough's Cave in Cheddar Gorge in south-west England. Many of the bones were intentionally broken, and the fragments are covered in cut marks left by people using stone tools to separate the bones and strip away the flesh. In addition, 42 percent of the bone fragments bear traces of human teeth. There is little doubt that the people who lived in this cave 14,700 years ago practiced cannibalism.

Today, cannibalism is considered taboo in many societies. We treat it as an aberration, as evidenced by films like The Texas Chainsaw Massacre. We associate it with zombies, psychopaths, and serial killers like the fictional Hannibal Lecter. There are very few positive stories about cannibals. But despite our preconceptions, evidence is accumulating that cannibalism was once a common human behavior, so perhaps it's time to reconsider.

Our ancestors have been eating each other for over a million years. In fact, it seems that about one-fifth of societies have practiced cannibalism at some point. While some of it may have been done simply to survive, in many cases the motives appear more complex. At places like Gough's Cave, for example, eating the bodies of the dead appears to have been part of a funerary ritual. Some archaeologists argue that cannibalism may have been a way of showing respect and love for the dead, rather than a horrific insult to nature.

Stories of cannibals can be found throughout human history. In Homer's Odyssey, …

Source: www.newscientist.com

Enhancing Virtual Reality with Artificial Touch Technology for a More Immersive Experience

When you open the door, the heat hits you and warmth spreads over your skin. Fighting the smoke and heat, you brace yourself and head inside. As you walk through the burning building, flames flicker around you. You find what you came for and run. Outside, it is so cold that you start shivering and your hands and feet go numb.

But when you remove the headset, everything stops. An incredibly realistic training exercise is complete. All of these sensations felt real, yet they were not caused by changes in your surroundings. Instead, carefully selected chemicals applied to the skin mimicked the different sensations.

Such stimuli have long helped us understand the most complex of the human senses: touch. In the 1990s, research into capsaicin, an extract from chili peppers, and menthol, found in peppermint, helped determine how our bodies respond to heat and cold. Now, Jasmine Lu and her colleagues at the University of Chicago are using this knowledge to create chemically induced sensations that make virtual environments feel incredibly realistic.

With a technique called chemical haptics, they have built a wearable device that, placed on the skin, can make the wearer experience different sensations, such as heat or cold, numbness or tingling, as required. Its uses could include creating highly realistic virtual worlds for gamers to explore, training firefighters, and more. But will we ever be able to fully recreate the experience of touching the real thing? And if we can't, what might we stand to lose?

Source: www.newscientist.com

Google cuts hundreds of jobs in hardware, augmented reality, and Assistant divisions

Google has laid off hundreds of employees across its hardware, voice assistant, and engineering teams as part of its cost-cutting measures.

Google said in a statement that the job cuts are aimed at “responsibly investing in our biggest priorities and important opportunities for the future.”

“Some teams are continuing to make these kinds of organizational changes, including the elimination of some roles globally,” the statement said.

Google previously announced it would eliminate hundreds of roles across its engineering, hardware, and Assistant teams, with most of the impact hitting the company's augmented reality hardware division. The job cuts follow pledges by executives at Google and its parent company Alphabet to cut costs. A year ago, Google announced it would lay off 12,000 people, or about 6% of its workforce.

On the same day that news of the layoffs broke, Google announced it was deprecating 17 “underutilized” Google Assistant features, such as using voice commands to play an audiobook, send an email, or start a meditation session in Calm.

In a post on X (formerly known as Twitter), the Alphabet Workers Union described the job cuts as “another unnecessary layoff.”

“Our members and teammates work hard every day to build great products for our users, and the company cannot continue to lay off our colleagues while making billions of dollars every quarter,” the union wrote. “We will not stop fighting until our jobs are safe!”

Google achieved record growth in the early days of the coronavirus pandemic, but its expansion has slowed over the past year, forcing it to adjust its business forecasts.

It's not the only technology company in this boat. Meta, the parent company of Facebook, Instagram, and WhatsApp, has cut more than 20,000 jobs. In December, Spotify announced it would lay off 17% of its global workforce, the music streaming service's third round of layoffs in 2023, in a bid to cut costs and improve profitability.


Earlier this week, Amazon laid off hundreds of employees in its Prime Video and Studios divisions. The company also plans to lay off about 500 employees who work at live streaming platform Twitch. Amazon has cut thousands of jobs following a surge in hiring during the pandemic. In March, the company announced plans to lay off 9,000 employees, in addition to the 18,000 employees it announced in January 2023.

Google is currently in fierce competition with Microsoft, with both companies vying for the lead in artificial intelligence. The office software giant is ramping up its artificial intelligence offerings to rival Google's. In September, Microsoft introduced its Copilot feature for business customers, integrating artificial intelligence into products such as the search engine Bing, the Edge browser, and Windows.

Source: www.theguardian.com

The reality of your “dessert stomach” and why there’s no need to feel guilty about it

I’m currently sitting in a trendy pub in a small village on the outskirts of Cambridge. It’s a Thursday night in early December, so it’s dark and freezing outside. But in here there’s a warm, cozy fireplace, and the whole place is decked out in festive decorations. Michael Bublé is singing Christmas songs on the radio and I’m holding a big glass of Malbec. Life is wonderful.

It’s been a long day (a long week, actually), and believe me, this is definitely the place to be: one of those “gastropubs” that serve lovely food, where I’m enjoying a weekday date night with my wife, Jane.

For dinner we both had salted trout to start, then as main courses Jane had hake and I had burger and chips. The portion sizes were healthy and we were both pretty full by the time we finished eating.

You know what happened next. The waiter came over with the dessert menu and asked, “Are you tempted?” We certainly were. Even though we were full, and even though it was a weeknight, we both ordered dessert. I had sticky toffee pudding with ice cream and Jane had a slice of tarte au citron with crème fraîche. Like clockwork, the dessert stomach struck again.


Now the question arises: why is it so specific to dessert? Could I have eaten another burger? Could Jane have managed more hake? Absolutely not. So what’s so special about dessert?

To answer this question, we need to look to evolution. Flash back to the Serengeti 50,000 years ago, and picture your ancestors dragging an antelope back to their village. Let’s say that, metabolically speaking, they spent 2,000 calories stalking, chasing, and bringing it down.

Clearly, once back at the village, they need to eat at least 2,000 calories to recoup that expenditure; otherwise, the whole exercise isn’t sustainable. But there is no guarantee the next antelope hunt will succeed, which means that if they ate only enough to meet their immediate metabolic needs, they wouldn’t live very long.

This is where the pleasure centers of the brain kick in. They govern the sense of reward we all get from eating, driving us to eat more than we strictly need. But how do you overcome the mechanical problem of already having 2,000 calories of food sitting in your stomach?

Well, our brains can be very picky. They begin to crave more calorie-dense foods, those containing more calories per gram. That way, every remaining inch of stomach space delivers the maximum energy.

So which foods pack the most calories? Those high in free sugars and fats. And which foods are high in sugar and fat? Dessert.

In other words, your dessert stomach is actually an evolutionary holdover from those days on the Serengeti. It’s there to make sure that, even when you’re full, you crave the right types of food to maximize your calorie intake at every meal. After all, there was no guarantee of when the next meal would arrive.

You’ve probably noticed an obvious problem here. This mechanism served us well when we lived in periodic cycles of feast and famine, but many people today live in cycles of feast followed by more feast. I definitely didn’t need that sticky toffee pudding (though I thoroughly enjoyed it and didn’t regret it for a minute!).

By the way, the “dessert stomach” is not just a strange human phenomenon. Now, I fully accept that a lion isn’t going to top off a freshly killed antelope with a crème brûlée and a glass of chilled muscat. But consider a grizzly bear during a salmon run in the Pacific Northwest of the United States.

Grizzlies arrive at the all-you-can-eat buffet of the autumn salmon run with one aim: to store as much fat as possible for the coming hibernation.

At first, a bear eats each fish almost whole, down to the bones. But as the bears become fuller and fuller, having stored more and more fat, they switch to eating only the salmon’s skin and the thin layer of fat beneath it. Why? Because this is the most calorie-dense part of the fish. They change what they eat to maximize their energy reserves.

So, while dessert is clearly a human cultural construct, the drive to crave calorie-dense foods when we are already full has been conserved through evolution. It’s not your fault that you find room for dessert even after a satisfying meal.


Source: www.sciencefocus.com

Unveiling the Reality of Sleep Disorders: When a Night Shift Becomes a Nightmare

A new study has investigated the relationship between shift work patterns, sociodemographic factors, and sleep disorders. The researchers found that shift work, especially night shifts, significantly disrupts sleep, with about a third of all participants reporting at least one sleep disorder. The study also found that demographic factors such as gender, age, and education level influence sleep health.

A new study shows that working night shifts increases the incidence of sleep disorders, especially in young people with low levels of education.

Sleep is important not only for physical and mental health, but also for daytime and neurocognitive function. When people work in shifts (21% of workers in the European Union worked shifts in 2015), their circadian sleep-wake rhythms are often disrupted. Now, Dutch researchers have investigated the relationship between different shift work patterns, sociodemographic factors, and sleep disorders.

“Compared with working regular daytime shifts, working other shift types is associated with a higher incidence of sleep disturbances, especially for those working rotating or regular night shifts,” said Dr. Marike Lancel, a researcher at the GGZ Drenthe Mental Health Institute and lead author of the study, published in Frontiers in Psychiatry. “Notably, 51% of those working night shifts tested positive for at least one sleep disorder.”

Asking about sleep

“There is a lot of evidence that shift work reduces sleep quality, but we know very little about how different types of shifts affect the prevalence of different sleep disorders, or how this varies with demographic characteristics,” Lancel continued.

To fill these gaps, the researchers recruited more than 37,000 participants, who provided demographic information and reported their shift work pattern (regular morning, evening, or night shifts, or rotating between shifts).

The participants also completed a questionnaire screening for six common categories of sleep disorder: insomnia, hypersomnia, parasomnias, sleep-related breathing disorders, sleep-related movement disorders, and circadian rhythm sleep-wake disorders.

Responses suggested that regular night shifts are the most damaging to sleep. Half of night shift workers reported sleeping less than six hours in a 24-hour period, 51% reported at least one sleep disorder, and 26% reported two or more.

In the overall study population, approximately one-third tested positive for at least one sleep disorder and 12.6% tested positive for two or more sleep disorders.

Demographic factors and sleep health

The researchers also investigated whether demographic factors such as gender, age, and highest level of education influenced sleep health, and considered whether participants lived alone, with a partner or children, or with others, such as friends or parents.

The results showed that although men slept less than women, sleep problems were more common in women. Age also affected sleep health. Although older participants tended to sleep less, most sleep disorders and their comorbidities were found to be more prevalent in the youngest participant group, those under 30 years of age.

Researchers found a correlation between education level and the likelihood of having disrupted sleep. “The effects of shift work on sleep are most pronounced among young people with low levels of education,” Lancel said. This group had shorter sleep duration and significantly higher prevalence of sleep disorders and their comorbidities.

Night shifts and sleep challenges

The researchers found that some night shift workers may experience fewer sleep-related problems than others, but said that the average night shift worker is much more likely to struggle to sleep healthily because of the irregular work pattern. “It is unlikely that anyone working night shifts is completely immune to the negative effects, since they remain day-oriented and live out of sync with their environment,” Lancel explained.

The researchers also noted that their study had certain limitations; for example, people with sleep disorders may be more likely than good sleepers to take part in research focused on sleep. Nevertheless, the authors said their findings could provide important information for employers in occupations where shift work is common, and could inform strategies for mitigating the effects of night work on sleep.

Reference: “Shift work is associated with widespread sleep disturbances, especially when working at night” by GJ Boersma, T. Mijnster, P. Vantyghem, GA Kerkhof, and Marike Lancel, 17 October 2023, Frontiers in Psychiatry.
DOI: 10.3389/fpsyt.2023.1233640

Source: scitechdaily.com