Revolutionizing Temperature Measurement: A Quantum Device Approach to Defining Temperature

Cooling and trapping rubidium atoms

Key Components of a New Rubidium Atom Cooling Setup

Tomasz Kawalec CC BY-SA 4.0

A groundbreaking quantum device utilizing giant rubidium atoms may redefine temperature measurement.

While some nations utilize Celsius or Fahrenheit to measure temperature, physicists universally rely on Kelvin. This unit signifies “absolute temperature,” where 0 Kelvin represents the lowest temperature permitted by physical laws. However, confirming the accuracy of a 1 Kelvin measurement is a meticulous endeavor.

“When making absolute temperature measurements, one typically purchases a temperature sensor calibrated against another sensor, and the chain continues. Ultimately, one of those sensors was at some point sent to a national standards institute,” explains Noah Schlossberger of the US National Institute of Standards and Technology (NIST) in Colorado.

Schlossberger and his team have developed an innovative device leveraging quantum mechanics to directly measure Kelvin, eliminating the need for extensive sensor calibrations.

This device, a compact metal and glass structure housing trapped rubidium atoms, employs lasers to displace outer electrons far from the atomic nucleus, resulting in significantly enlarged atoms. Subsequently, the researchers cool these atoms to roughly 0.5 millikelvin—about 600,000 times cooler than room temperature—using lasers and electromagnetic fields.

Consequently, the outer electrons of the rubidium atoms become exquisitely sensitive to their surroundings. Heat radiating from the environment makes these electrons “jump” between quantum states, so the device can function as a temperature sensor: well-established physics relates the rate of such jumps to the absolute temperature, allowing the kelvin to be read out directly.
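The readout relies on the fact that the rate of those jumps, driven by blackbody radiation from the surroundings, is governed by the Planck law, which can be inverted to give a temperature in kelvin. A minimal sketch of that inversion (the 100 GHz transition frequency is an illustrative assumption, not a figure from the article):

```python
import math

H = 6.62607015e-34   # Planck constant, J s (exact in the 2019 SI)
KB = 1.380649e-23    # Boltzmann constant, J/K (exact in the 2019 SI)

def mean_photon_number(freq_hz: float, temp_k: float) -> float:
    """Planck occupation of a blackbody mode at the given frequency."""
    return 1.0 / math.expm1(H * freq_hz / (KB * temp_k))

def temperature_from_occupation(freq_hz: float, n_bar: float) -> float:
    """Invert the Planck law: recover temperature from a measured occupation."""
    return H * freq_hz / (KB * math.log1p(1.0 / n_bar))

# Round trip at a microwave frequency typical of transitions between
# highly excited atomic states (100 GHz chosen only for illustration):
f = 100e9
n = mean_photon_number(f, 300.0)
print(temperature_from_occupation(f, n))  # ≈ 300.0
```

In a real device the measured quantity is a transition rate rather than the photon occupation itself, but the two are proportional, so the same inversion applies.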

The International Bureau of Weights and Measures already defines the kelvin through a fixed value of a quantum constant, the Boltzmann constant. Yet institutions like NIST still resort to non-quantum devices for calibration. The new quantum device aims to deliver a calibration-free realization of the kelvin.

According to Schlossberger, “Every rubidium atom behaves identically in the same conditions. You can replicate a device anywhere in the world, and it will perform the same way.” This uniformity is crucial for maintaining high-precision instruments, such as atomic clocks, which require operation at very low Kelvin temperatures.

However, the prototype still faces challenges: it struggles with accurately detecting quantum states and is currently too cumbersome for practical use. Researchers are actively refining the design for enhanced practicality and precision.

Schlossberger presented this groundbreaking research at the American Physical Society Global Physics Summit in Colorado on March 16th.


Source: www.newscientist.com

Revolutionizing Cryonics: We’re Closer to Reviving Life from Cryogenic Freezing

Recent research findings suggest that long-term cryo-sleep and revival may no longer be purely science fiction. A study published in PNAS reveals intriguing advancements.

Scientists from Friedrich-Alexander-University Erlangen-Nuremberg (FAU) and Erlangen University Hospital successfully froze mouse brain tissue and restored its functionality upon thawing.

Although only a fraction of the brain tissue was revitalized, the neurons retained the ability to transmit electrical signals, sustaining complex processes essential for memory and learning.

“Before conducting the experiment, we weren’t sure it would succeed,” stated Dr. Alexander German, first author of the study from the Department of Molecular Neurology at Erlangen University Hospital, as reported by BBC Science Focus.

“Public focus is likely to transition from ‘pure science fiction’ to ‘serious scientific and technological challenges.’”

Nature’s Cryo-Sleep Solutions

Interestingly, nature already exhibits cryo-sleep capabilities. Siberian salamanders can endure temperatures as low as -50°C (-58°F), remaining in a dormant state for years in permafrost until conditions are favorable for revival.

This remarkable resilience is attributed to their liver, which produces glycerol—a natural antifreeze that inhibits the formation of ice crystals within cells.

Ice formation has historically obstructed human cryopreservation efforts, as crystals damage the intricate nanostructures of living tissues.

Current cryoprotective agents have their own drawbacks; many are toxic to sensitive cells, and fluctuations in their concentrations can disrupt fluid balance in tissues.

The Siberian salamander, the coldest amphibian on Earth, employs an extraordinary evolutionary strategy to freeze and thaw safely – Photo credit: Getty

The research team employed a technique known as vitrification. This process replaces much of the tissue fluid with a blend of cryoprotective agents, cooling the molecules rapidly enough to stabilize them in a glass-like state. While both ice and glass are hard solids, glass’s random structure prevents crystallization and subsequent mechanical damage.

German and his team utilized a custom solution called V3, meticulously optimized to reduce toxicity while inhibiting ice formation.

Focusing on the hippocampus—a brain region crucial for memory and learning—the researchers processed slices of mouse hippocampus, approximately three times thicker than a human hair, through increasingly concentrated V3 solutions. They then rapidly cooled the slices to -196°C (-321°F) on a copper cylinder chilled with liquid nitrogen and stored them at -150°C (-238°F) for anywhere from 10 minutes to 7 days.

Upon thawing, the structural integrity of the neurons was preserved, and electrical recordings confirmed that the neurons were active and communicating within hippocampal circuits.

The breakthrough was evidenced by the survival of long-term potentiation (LTP), a vital process that strengthens connections between frequently used neurons and serves as the cellular foundation for learning and memory. After thawing, LTP continued to function effectively.

This was a significant finding for German, as LTP is a rigorous measure of brain function, dependent on a complex interplay of cellular mechanisms, including signaling chemicals, receptor activation, calcium ion processing, and a cascade of molecular events that fortify neuronal connections.

The successful maintenance of these processes post-vitrification indicates that the tissue emerged in remarkably good condition.

“This result demonstrates that the synaptic machinery remains sufficiently intact to support de novo plasticity after complete cryoarrest,” German stated.

Bridging Science Fiction and Reality

The immediate applications are terrestrial rather than interstellar. Surgeons who excise brain tissue during epilepsy surgeries often need to analyze it rapidly. With effective vitrification techniques, these samples could be preserved for re-examination years later.

German’s spin-off company, Hiber, is actively working on reliable technology for preserving human neural tissue, aimed at advancing drug discovery and disease research.

German also noted that the physics underlying long-term storage is surprisingly encouraging. When tissue drops below its glass transition temperature, molecular movement and chemical degradation essentially halt.

However, he mentioned that radiation could pose more significant challenges, especially if this technology is utilized in future long-distance space missions.

The vitrified tissue on the left remains intact, while the tissue on the right is compromised by crystallization and cracking – Photo credit: Alexander German

Expanding from Tissues to Organisms

Scaling up from thin tissue slices to entire organs—or even whole organisms—poses considerably different challenges.

In thin slices, antifreeze can diffuse from all surfaces effectively. In intact organs, however, delivery and removal through blood vessels becomes complex due to the blood-brain barrier.

If thawing occurs unevenly, the tissue risks cracking or partial recrystallization, jeopardizing the structure that vitrification aims to protect.

“Our PNAS study serves as proof of principle for neural cryobiology, rather than demonstrating cryostasis for complete organisms,” German emphasized.

“This study shows that adult mammalian brain tissue can recover near-physiological circuit function after being completely stopped in cryogenic glass without ice. This point addresses the concern that adult brain tissue is too fragile for cryopreservation.”

For German, the significance of this research is less about cinematic science-fiction narratives and more about tangible scientific advancements. “The cold version of the science fiction concept isn’t solely about interstellar travel; it’s about gaining time,” he explained.

“If medicine can develop more effective methods to preserve tissues, organs, and potentially patients, we may pave the way for better treatment options in the future.”


Source: www.sciencefocus.com

Revolutionizing Mathematics: The Biggest Changes in History

Old textured vintage paper page featuring higher math calculations

Will the Era of Handwritten Mathematics End?

Credit: Laborant / Alamy

In March 2025, mathematician Daniel Litt made a bet about the impact of artificial intelligence on mathematics, putting the chance that AI could produce mathematical papers comparable to those of top human mathematicians by 2030 at just 25 percent. Barely a year later, he has changed his mind: “I now expect to lose this bet,” he wrote on his blog.

The rapid advance of AI’s problem-solving capabilities has left mathematicians astounded. “Only a few years ago, AI struggled with even simple high school math problems, but now it can tackle real challenges faced by mathematicians,” says Litt, who is at the University of Toronto.

This acceleration in AI development is unprecedented, and mathematicians are voicing concern about the rate at which their field is changing. “There’s no place to hide,” warns Jeremy Avigad of Carnegie Mellon University in Pennsylvania in an essay. “We must confront the reality that AI will soon outperform us in theorem proving.”

This shift is not due to a singular event but to AI’s cumulative progress in mathematics. Last year, companies like OpenAI and Google DeepMind achieved unprecedented results at the International Mathematical Olympiad—an elite competition once deemed too difficult for AI tools. In January, mathematicians began leveraging AI to address longstanding questions posed by Hungarian mathematician Paul Erdős.

AI is now addressing more intricate mathematical challenges, tackling real-world research problems and assisting in the automatic verification of complex proofs that traditionally required extensive collaboration among mathematicians.


In February, Nikhil Srivastava from the University of California, Berkeley, launched the First Proof project to establish realistic benchmarks for evaluating AI’s mathematical capabilities. The initial phase consisted of ten problems drawn from various mathematical areas that researchers regularly encounter.

Evidence of AI Progress

Once the challenge was publicized, solutions began to pour in. Researchers from technology giants like OpenAI and Google DeepMind participated in solving the First Proof challenge. OpenAI reported that it correctly answered half of the questions based on “expert feedback,” while Google DeepMind achieved success on six of ten questions, according to consulted mathematicians.

“Everything changed rapidly,” reflects Thang Luong of Google DeepMind. “AI has become a legitimate research collaborator, capable of yielding significant research results, as demonstrated by First Proof.”

Google’s AI mathematics tool, Aletheia, combines a compute-intensive version of the Gemini AI chatbot with validation algorithms to identify flaws in proposed solutions. The iterative nature allows researchers to refine their answers continuously. While Google has not disclosed the number of iterations taken to solve problems, mathematicians remain impressed.

Not all proposed solutions received unanimous approval. For example, in geometry, out of seven experts consulted, only five agreed on the correctness of one solution. Ivan Smith, a professor at the University of Cambridge not involved with Google’s team, noted that AI is approaching problems sensibly and showing promise. “If this were a PhD student presenting ideas, it would encourage confidence that the results are valid,” Smith states.

This situation highlights the complications associated with AI-generated proofs. The challenge lies in the verification process. The speed at which AI generates proofs may outpace human verification capabilities. If an AI produces a theorem but no one is available to verify it, has it truly been proven? AI may assist in this area as well.

Technology is rapidly advancing, converting handwritten proofs expressed in natural language, like those posed in the First Proof challenge, into formats that computers can validate through a process called formalization.
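As a taste of what formalization produces, here is a toy statement in Lean, one of the proof assistants used for machine-checked mathematics; once it compiles, the kernel has verified every step mechanically, with no human review needed (the theorem itself is a textbook triviality chosen only for illustration):

```lean
-- Commutativity of natural-number addition: the Lean kernel checks
-- this proof line by line, rather than relying on human referees.
theorem add_comm_toy (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Formalizing genuine research mathematics means producing hundreds of thousands of lines of this kind, which is why automating the translation is such a significant step.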

Recently, Math, Inc. surprised mathematicians by announcing that its AI tool, Gauss, had successfully formalized and verified an award-winning proof. The proof concerns how densely spheres can be packed in space, the subject central to Maryna Viazovska’s 2022 Fields Medal, the mathematics equivalent of the Nobel Prize.

A separate effort to formalize Viazovska’s work by hand, independent of Math, Inc.’s initiative, had begun in late 2024, starting with her eight-dimensional sphere-packing solution. As that team made steady strides, Math, Inc. unexpectedly declared it had already obtained a complete formal proof, along with a broader version of the result in 24 dimensions.

Bhavik Mehta and his team at Imperial College London initially outlined a framework for formalizing the research and identifying essential mathematical definitions. Without this groundwork, Mehta notes, the AI tools would have been unable to complete the proof.

“I compiled all the components but didn’t provide instructions on assembling them,” states Chris Birkbeck, a mathematician at the University of East Anglia who is part of the team.

A New Era of Mathematicians

The final proof consists of around 200,000 lines of code, representing about ten percent of all formalized mathematics to date. Although this is perhaps ten times longer than a human-written formalization would be, it marks a significant achievement, according to Johan Commelin from Utrecht University. “This is groundbreaking work that is effectively being formalized,” he affirms.

Similar initiatives could emerge across various fields, transforming traditional mathematical practices. “The future we envision is a tool that automates the formalization of new research and mathematical papers, while also flagging potential errors,” Commelin emphasizes. “This would greatly influence peer review processes and evaluations.”

Faced with a future where AI completes a significant portion of mathematical tasks, some mathematicians, like Avigad, are raising concerns about the ramifications on our ability to innovate and engage with new mathematics.

Engaging with tools to solve problems presented in First Proof can yield concrete proofs, notes Anna Marie Bowman. However, she emphasizes that we’re losing valuable “learning opportunities.” The process of generating and formulating new ideas and confronting complex problems is vital for consolidating knowledge for both learners and practitioners.

Similarly, Tony Feng, a member of the Google DeepMind Aletheia team, expresses hesitance toward the tool’s use. “I often believe in doing one’s own work and fostering personal intuition,” he states.

Mehta adds that merely formalizing the proof provides crucial insights, and now he and his colleagues must meticulously sift through the 200,000 lines of AI-generated proof to extract useful components for future projects.

However, mathematicians remain optimistic about their role in an increasingly AI-driven environment. Reflecting on historical parallels, Commelin notes that manual computations once formed the backbone of mathematical work but have since been handed over to machines. “I believe we are on a similar track; this will revolutionize our field. Yet, even in 10 or 20 years, we’ll still possess a unique identity in mathematics.”

Topics:

  • Artificial Intelligence/
  • Mathematics

Source: www.newscientist.com

Revolutionizing Energy Storage: How Old EV Batteries Can Fulfill China’s Energy Demands

Automotive Battery Factory in Guangxi, China

Costfoto/NurPhoto via Getty Images

Used electric vehicle (EV) batteries have the potential to fulfill two-thirds of China’s grid storage requirements by storing energy when renewable sources are plentiful and delivering power during peak demand periods.

During times when the wind isn’t blowing and the sun isn’t shining, the generation of renewable energy may decline, risking supply shortages, particularly during peak demand times in the mornings and evenings, as well as in winter. Typically, natural gas and coal plants compensate for this gap. Countries like China, the USA, the UK, and Australia are constructing large-scale battery-based grid storage solutions to harness renewable energy for later use.

As electric vehicle adoption rises, experts like Ma Ruifei from Tsinghua University argue that repurposed EV batteries can be integrated into the power grid, accelerating the transition to a carbon-neutral power system more affordably. Their research indicates that used batteries could meet 67% of China’s power grid storage needs by 2050, while simultaneously reducing costs by 2.5%.

EV batteries naturally degrade with repeated charging and discharging cycles and are often retired once they fall to about 80% of their original capacity. Although this degradation impacts a vehicle’s range and acceleration, it has minimal effect on grid storage applications, where large numbers of batteries share the load and are charged and discharged slowly over extended periods.

“It still retains ample power, and when utilized for storage, its degradation is relatively slow,” says Gil Lacey from Teesside University, UK.

“Materials that are costly to mine and process for batteries should not be wasted when the cells still have 80% usable capacity,” asserts Rhodri Jarvis from University College London. “There’s significant interest in utilizing second-life battery packs, not only for cost reduction but also for enhancing sustainability.”

Previous studies have drawn differing conclusions on whether energy storage built from used batteries is more cost-effective than new lithium-ion batteries, whose prices are steadily decreasing.

However, with the increasing popularity of electric vehicles, used batteries may become the more economical option. Over 17 million electric vehicles were sold in 2024, accounting for about 20% of global car sales, with nearly two-thirds of those purchases in China.

The study projects that in a scenario where batteries of various chemistries are procured across China and used down to 40% of their original capacity, second-life grid storage will grow significantly after 2030, as the demand for new batteries stabilizes. By 2050, total capacity is anticipated to reach 2 terawatts.

In a contrasting scenario that relies solely on new batteries and pumped hydro storage (where water is pumped into a reservoir and released to drive turbines), the total capacity would only reach about half of this figure.

Second-life battery storage remains largely untested; however, US startup Redwood Materials has implemented a 63-megawatt-hour project using 10-year-old car batteries to power a data center in Nevada. The company claims its system is priced under $150 per kilowatt-hour and can deliver power for over 24 hours, exceeding the capabilities of new lithium-ion batteries.

Nonetheless, used batteries must first be sorted into groups of similar capacity. Otherwise, the management system has to bypass individual batteries, because a group stops charging once its weakest battery is full.
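A toy calculation (all capacities hypothetical) illustrates why sorting matters: in a series-connected group with no per-battery bypass, charging halts when the weakest battery is full, so that battery caps the usable capacity of the entire group.

```python
def usable_group_capacity(capacities_kwh: list[float]) -> float:
    """Usable capacity of a series group without per-battery bypass:
    charging halts when the weakest battery is full, so every battery
    is limited to the smallest capacity in the group."""
    return min(capacities_kwh) * len(capacities_kwh)

# Hypothetical second-life packs with mismatched remaining capacities:
mixed = [40.0, 52.0, 55.0, 58.0]          # 205 kWh installed
sorted_alike = [52.0, 55.0, 55.0, 58.0]   # 220 kWh installed, graded by capacity

print(usable_group_capacity(mixed))        # → 160.0 kWh usable
print(usable_group_capacity(sorted_alike)) # → 208.0 kWh usable
```

Grouping batteries of similar capacity, as in the second list, recovers most of the installed capacity; mixing one weak pack in, as in the first, strands a fifth of it.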

Furthermore, damaged batteries need to be identified, and every several hundred cells must be equipped with temperature and voltage sensors. Overheating can result in significant fire hazards.

“The risks are obviously elevated, so ensuring safety, isolation, balance, and implementing robust risk-reduction measures is crucial,” Lacey emphasizes.


Source: www.newscientist.com

CRISPR: Revolutionizing Genetic Code Editing – The Most Innovative Idea of the Century


“The pain was like being struck by lightning and being hit by a freight train at the same time,” shared Victoria Gray. Reflecting on her journey now, she says: “Everything has changed for me now.”

Gray once endured debilitating symptoms of sickle cell disease, but in 2019, she found hope through CRISPR gene editing, a pioneering technology enabling precise modifications of DNA. By 2023, this groundbreaking treatment was officially recognized as the first approved CRISPR therapy.

Currently, hundreds of clinical trials are exploring CRISPR-based therapies. Discover the ongoing trials that signify just the beginning of CRISPR’s potential. This revolutionary tool is poised to treat a wide range of diseases beyond just genetic disorders. For example, a single CRISPR dose may drastically lower cholesterol levels, significantly reducing heart attack and stroke risk.

While still in its infancy regarding safety, there’s optimism that CRISPR could eventually be routinely employed to modify children’s genomes, potentially reducing their risk of common diseases.

Additionally, CRISPR is set to revolutionize agriculture, facilitating the creation of crops and livestock that resist diseases, thrive in warmer climates, and are optimized for human consumption.

Given its transformative capabilities, CRISPR is arguably one of the most groundbreaking innovations of the 21st century. Its strength lies in correcting genetic “misspellings.” This involves precisely positioning the gene-editing tool within the genome, akin to placing a cursor in a lengthy document, before making modifications.

Microbes use this genetic editing mechanism as a defense against viruses. Before 2012, researchers had identified various gene-editing proteins, each limited to targeting a single location in the genome. Altering the target sequence required redesigning the protein’s DNA-binding section, a time-consuming process.

However, scientists discovered that bacteria have developed a diverse range of gene-editing proteins that bind to RNA—a close relative of DNA—allowing faster sequence matching. Producing RNA takes mere days instead of years.

In 2012, Jennifer Doudna and her team at the University of California, Berkeley, along with Emmanuelle Charpentier from the Max Planck Institute for Infection Biology, revealed the mechanics of one such gene-editing protein, CRISPR-Cas9. By simply adding a “guide RNA” in a specific format, they could target any desired sequence.

Today, thousands of variants of CRISPR are in use for diverse applications, all relying on guide RNA targeting. This paradigm-shifting technology earned Doudna and Charpentier the Nobel Prize in 2020.
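The “cursor placement” described above can be sketched as a string search. In the widely used Cas9 system, the guide RNA must match a roughly 20-letter DNA sequence sitting immediately next to an “NGG” motif (the PAM) that Cas9 requires. The sequences below are invented for illustration, and real design tools also score near-matches elsewhere in the genome:

```python
import re

def find_cas9_sites(genome: str, guide: str) -> list[int]:
    """Return start positions where the 20-nt guide matches the genome
    exactly and is immediately followed by an NGG PAM (toy exact-match
    search; any letter, then two Gs)."""
    hits = []
    for m in re.finditer(guide, genome):
        pam = genome[m.end():m.end() + 3]
        if len(pam) == 3 and pam[1:] == "GG":
            hits.append(m.start())
    return hits

# Made-up sequences: the guide occurs twice, but only the first copy
# is followed by a valid PAM ("TGG"), so only one site is editable.
guide = "GATTACAGATTACAGATTAC"
genome = "CC" + guide + "TGGAAAA" + guide + "TTTAAA"
print(find_cas9_sites(genome, guide))  # → [2]
```

The PAM requirement is why not every 20-letter match in a genome is a usable edit site, and why guide design is a search problem rather than a simple lookup.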


Source: www.newscientist.com

How Cows Using Tools is Revolutionizing Our Perception of Livestock

Veronica the cow demonstrating tool use

Veronica the cow: A groundbreaking example of non-primate mammal tool use

Antonio J. Osuna Mascaro

Recently, while riding in a taxi, the driver shared a transformative experience involving a pig. My childhood with dogs shaped my expectations of animals, but my encounter with pigs was eye-opening.

The driver explained how he constructed a bell-and-string system that allowed the animals to signal when they wanted to go outside. Interestingly, both dogs and pigs learned this cue, but the pigs took it further by ringing the bell to inform their humans about the dogs waiting outside. The driver spoke of these moments with affection and pride. Remarkably, I later learned that this had changed his dietary choices—he no longer eats pork.

This narrative reflects a broader trend in research on animal cognition. Historically, scientists focused primarily on non-human primates and, later, on birds such as parrots and crows, which are often dubbed the “feathered apes.” Recently, however, studies have expanded to include a variety of species, such as honey bees, octopuses, and crocodiles.

In line with this expanded focus, new research conducted by Antonio Osuna Mascaro and Alice Auersperg at the University of Veterinary Medicine in Vienna investigates the cognitive abilities of cows, an often-overlooked species. Veronica, a pet cow (Bos taurus), displays remarkable innovation by using a broom to scratch her body. She employs the bristles for her back and flips it over for her more sensitive areas.

This observation marks the first documented instance of flexible tool use in cattle. What does Veronica’s tool use reveal about her cognition, and might it change how we view and treat cows?

Tool use, in broad terms, is defined as the manipulation of an object to achieve a specific goal. This definition excludes behaviors like nest-building or hiding, where actions serve static ends. Instead, true tool use involves active manipulation, such as using a stone to crack nuts or a stick to extract termites.

For many years, tool use was considered a trait unique to humans. This notion changed when Jane Goodall observed a chimpanzee named David Greybeard creating and utilizing tools to fish for termites. Subsequent discoveries revealed tool use in unexpected corners of the animal kingdom. For instance, antlion larvae throw sand at prey, while certain digger wasp species employ pebbles in their burrows. Such specialized behaviors evolved over millions of years, contrasting with the flexible tool use demonstrated by animals like Veronica.

Veronica cleverly uses different broom sides for various scratches

Antonio J. Osuna Mascaro

Remarkably, Veronica learned to use tools independently, progressing from twigs to the intelligent use of a broom without any direct teaching.

This behavior suggests that Veronica possesses the cognitive traits that psychologists, notably Josep Call, ascribe to creative tool users. Three key elements define such a user. First, the ability to gather and learn about the physical properties of objects. Second, combining this knowledge to navigate challenges, such as understanding that a hard object can provide relief for an itch. Third, the willingness to manipulate objects creatively, as mere physical capability is insufficient: while both squirrel monkeys and capuchin monkeys possess similar hands, only capuchins tend to exhibit this kind of object manipulation.

This insight into cow cognition may change how we treat farm animals. Research indicates a correlation between perceived intelligence and whether we consider animals worthy of ethical treatment. In one study, participants rated animals described as less intelligent as more edible, while animals assigned higher intelligence were judged less edible. Participants introduced to the Bennett’s tree kangaroo as a food animal also perceived it as less sentient.

Our treatment of animals correlates significantly with our perception of their intellect. Veronica’s story is likely the first of many that will challenge our views of “simple” domestic animals. For this knowledge to reshape our practices, we must confront our cognitive dissonance. Denial of animal consciousness allows us to overlook the ethical implications of our treatment. It requires courage to acknowledge their sentience instead of ignoring it.

Marta Halina, Professor of Philosophy of Science at Cambridge University


Source: www.newscientist.com

Home Care Chatbots in Australian Health Systems: AI Tools Revolutionizing Patient Support

Petalol looked forward to Aida’s call each morning at 10 AM.

While daily check-in calls from an AI voice bot weren’t part of the service package she expected when she enrolled in St. Vincent’s home care, the 79-year-old agreed four months ago to take part in the trial to help with the initiative. Realistically, though, her expectations were modest.

Yet, when the call comes in, she remarks: “I was taken aback by how responsive she is. It’s impressive for a robot.”

“She always asks, ‘How are you today?’ allowing you to express if you’re feeling unwell.”

“She then follows up with, ‘Did you get a chance to go outside today?’

Aida also asks what tasks she has planned for the day.

“If I say I’m going shopping, she’ll ask whether it’s for groceries or something else. I found that fascinating.”

Bots that alleviate administrative pressure

Currently, the trial, which is nearing the end of its initial phase, exemplifies how advancements in artificial intelligence are impacting healthcare.

The digital health company partnered with St. Vincent’s Health to trial its generative AI technology, which is aimed at enhancing social connection and enabling home care clients to flag health concerns for follow-up by staff.

Dean Jones, the national director at St. Vincent’s, emphasizes that this service is not intended to replace face-to-face interactions.

“Clients still have weekly in-person meetings, but between those sessions the AI system facilitates daily check-ins and highlights potential issues to the team or the client’s family,” Jones explains.


Dr. Tina Campbell, the company’s managing director, says no negative incidents have been reported from the St. Vincent’s trial.

The company employs OpenAI models “with clearly defined guardrails and prompts” to ensure conversations remain safe and that serious health concerns are promptly escalated, according to Campbell. For instance, if a client reports chest pain, the care team is alerted and the call is ended so the individual can call emergency services.

Campbell believes that AI is pivotal in addressing significant workforce challenges within the healthcare sector.

“With this technology, we can lessen the burden on workforce management, allowing qualified health professionals to focus on their duties,” she states.

AI isn’t as novel as you think

Professor Enrico Coiera, founder of the Australian Alliance for Artificial Intelligence in Healthcare, notes that older AI systems have long been integral to healthcare “back-office” services, including the interpretation of medical imaging and pathology reports.

Coiera, who directs the Centre for Health Informatics at Macquarie University, explains:

“In departments like Imaging and Radiology, machines already perform these tasks.”

Over the past decade, a newer AI method called “deep learning” has been employed to analyze medical images and enhance diagnoses, Coiera adds.

In November, New South Wales became the first Australian state to implement AI-based image-reading technology in population-based screening programs to aid radiologists with the interpretation of mammograms.

These tools remain specialized and require expert interpretation, and ultimately, responsibility for medical decisions rests with practitioners, Coiera stresses.

The role of AI in early disease identification

The Murdoch Children’s Research Institute in Melbourne, in partnership with researchers at University College London, has developed an AI method to identify brain abnormalities in epilepsy, specifically focal cortical dysplasia in MRI scans.

These lesions can cause seizures that are resistant to medication, making surgery the only treatment option. However, successful surgery depends on the ability to identify the abnormal tissue.

In a study published this week in Epilepsia, a team led by neurologist Emma Macdonald-Laurs demonstrated that “AI epilepsy detectors” can identify lesions in up to 94% of MRI and PET scans, including a subtype of lesion that is missed more than 60% of the time.

This AI was trained using scans from 54 patients and was tested on 17 children and 12 adults. Of the 17 children, 12 underwent surgery, and 11 are currently seizure-free.

Like breast cancer screening tools, this one employs a neural network classifier to highlight abnormalities that experts still need to review, offering a much faster path to diagnosis.

She underlines that researchers remain in the “early stages” of development, and further study is necessary to advance the technology for clinical use.

Professor Mark Cook, a neurologist not associated with the research, states that MRI scans yield vast amounts of high-resolution data that are challenging for humans to analyze. Thus, locating these lesions is akin to “finding needles in a haystack.”

“This exemplifies how AI can assist clinicians by providing quicker and more precise diagnoses, potentially enhancing surgical access and outcomes for children with otherwise severe epilepsy,” Cook affirms.

Prospects for disease detection

Dr. Stefan Buttigieg, vice-president of the Digital Health and Artificial Intelligence section at the European Public Health Association, notes that deep neural networks are integral to monitoring and forecasting disease outbreaks.

At the Australian Public Health Conference in Wollongong last month, Buttigieg cited the early detection of the Covid-19 outbreak by BlueDot, a firm established by infectious disease specialists.

Generative AI is a subset of deep learning that allows technology to create new content based on its training data. Applications in healthcare include programs like Healthily’s AI voice bot and AI scribes for doctors.

Dr. Michael Wright, president of the Royal Australian College of General Practitioners, says GPs are embracing AI scribes, which turn consultations into notes for patient records.

Wright highlights that the primary benefit of scribes is to enhance the quality of interactions between physicians and patients.

Dr. Danielle McMullen, president of the Australian Medical Association, concurs, saying that scribes help doctors optimize their time and that AI could help prevent redundant testing for patients, though the long-promised digitization of health records remains a challenge.

Buttigieg argues that one of AI’s greatest potentials lies in delivering increasingly personalized healthcare.

“For years, healthcare has relied on generic tools and solutions. Now we are moving towards a future of more sophisticated, personalized solutions, with AI making that possible,” Buttigieg concludes.

Researchers can utilize AI to analyze MRI data to aid in identifying brain lesions. Photo: Karly Earl/Guardian

Source: www.theguardian.com

Transform Your Filmmaking: How New AI Tools Are Revolutionizing the Industry

A US stealth bomber glides through the darkened skies en route to Iran. In Tehran, a solitary woman tends to a stray cat amidst the remains of a recent Israeli airstrike.

For novice viewers, this could easily be mistaken for a cinematic representation of the geopolitical turmoil that has unfolded recently.

Yet, despite its high-quality production, the scene was not filmed in any real location, and the woman feeding the cat is not an actress—she is a fictional character.


Midnight Drop, an AI film about the US bombing of Iran

The captivating visuals originate from “Rough Cut,” a 12-minute short film about last month’s US attack on Iranian nuclear sites, crafted entirely with artificial intelligence by directors Samir Mallal and Bouha Kazmi.

This clip is rooted in the details gathered from news reports surrounding the US bombings. The woman seen traversing the empty streets of Tehran is the same one feeding the stray cat. Armed with pertinent information, the creators produced sequences resembling those directed by Hollywood’s finest.

The remarkable speed with which the film was made, and the unease it provokes in some, has not gone unnoticed by broadcasting experts.

Recently, television producer and bestselling author Richard Osman remarked that a new era is dawning in the entertainment industry, signaling the close of one chapter and the beginning of another.


Still from Midnight Drop showing a woman feeding a stray cat in Tehran at night. Photo: Oneday Studios

“I saw this and thought, ‘This marks the end of the beginning of something new,’” he said on The Rest Is Entertainment podcast.


For Mallal, a London-based documentary filmmaker known for creating advertisements for Samsung and Coca-Cola, AI has ushered in a novel genre of “Cinematic News.”

The Tehran-set film, titled Midnight Drop, serves as a sequel to Spiders in the Sky, a recreation of June’s Ukrainian drone strikes on Russian bombers.

In a matter of weeks, Mallal, who also directed Spiders in the Sky, managed to create a film depicting the Ukrainian attack—a project that would typically take millions and at least two years to develop.

“It should be feasible to utilize AI to create something unprecedented,” he remarked. “I’ve never encountered a news-reel film produced in a fortnight, nor a thriller based on current events completed in two weeks.”

Spiders in the Sky primarily utilized VEO3, a video generation model developed by Google alongside various other AI tools. ChatGPT assisted Mallal in streamlining the lengthy interview with the drone operator, which became the backbone of the film’s narrative; however, the voiceover, script, and music were not AI-generated.


Filmmakers recreate Ukrainian drone attacks against Russia using AI in Spiders in the Sky

Google’s filmmaking tool, Flow, is equipped with VEO3, enabling users to generate audio, sound effects, and background noise. Since its debut in May, its impact on YouTube and social media has been widely remarked upon. As Osman’s podcast partner Marina Hyde put it last week, “The expansion is astonishing.”

Much of what is emerging is “nonsense,” she noted, referring to AI-generated clips, such as dogs performing Olympic diving, that nonetheless have an undeniable appeal.

Mallal and Kazmi aim to complete their film depicting the stealth bomber mission over Iran by August, at a runtime six times longer than Spiders in the Sky, using models including VEO3, OpenAI’s Sora, and Midjourney.

“I seek to demonstrate a key point,” states Mallal. “It shows that you can produce high-quality content rapidly, keeping pace with cultural developments, especially since Hollywood operates at a notably slower rate.”


Spiders in the Sky, an AI film directed by Samir Mallal, tells the story of a Ukrainian drone attack on a Russian airfield. Photo: Oneday Studios

He adds: “The creative journey often involves generating poor ideas to eventually unearth the good ones. With AI, we can now expedite this process, allowing for a greater volume of ‘bad ideas.’”

Recently, Mallal and Kazmi produced Atlas, Interrupted, a short film centered on the comet 3I/ATLAS, a recent news event featured on the BBC.

David Jones, CEO of BrandTech Group, an advertising startup utilizing generative AI (a term encompassing tools like chatbots and video generators) for marketing campaigns, remarks:

“Currently, less than 1% of branded content is created entirely with generative AI; however, virtually all of it now involves generative AI at some stage, fully or partially,” he explains.

Last week, Netflix disclosed its initial use of AI on one of its television productions.


A Ukrainian drone closes in on its target in Spiders in the Sky. Photo: Oneday Studios

However, this surge in AI-driven creativity raises concerns about copyright. In the UK, the creative sector is outraged by the government’s proposal to train AI models on copyrighted material without the owners’ consent, unless they explicitly opt out.

Mallal advocates for “an easily accessible and user-friendly program that ensures artists are compensated for their creations.”

Beeban Kidron, a crossbench peer and prominent campaigner against the government’s proposal, acknowledges that AI filmmaking tools are “remarkable,” but questions the extent to which they rely on creators’ works. She emphasizes: “Creators require fairness in this new system, or invaluable assets will be lost.”

YouTube’s terms allow Google to use creators’ works for training AI models, though Google denies using the entire YouTube catalog for this purpose.

Mallal advocates “promptcraft,” the skill of steering AI systems through carefully written prompts. He reveals that during the production of the Ukrainian film, he was astonished by how swiftly he could adjust camera angles and lighting with a few keystrokes.

“I’ve deeply engaged with AI, learning how to collaborate with engineers, and how to translate my directorial skills into prompts. Yet, I had never produced any creative outcome until VEO3 emerged.”

Source: www.theguardian.com

Reducing Bias, Improving Recruitment: How AI is Revolutionizing Hiring for Small Businesses

Artificial intelligence is trained on human-created content, the product of actual intelligence. To train AI to write fiction, novels are used; to train it to write job specifications, job descriptions are used. However, a problem arises from this approach: despite efforts to eliminate them, humans inherently possess biases, and AI trained on human-created content may adopt those biases. Overcoming bias is a significant challenge for AI.

“Bias is prevalent in hiring and stems from the existing biases in most human-run recruitment processes,” explains Kevin Fitzgerald, managing director of UK-based employment management platform Employment Hero. The platform utilizes AI to streamline recruitment processes and minimize bias. “The biases present in the recruitment team are embedded in the process itself.”

One way AI addresses bias is through tools like SmartMatch offered by Employment Hero. By focusing on candidates’ skills and abilities while omitting demographic information such as gender and age, biases can be reduced. This contrasts with traditional methods like LinkedIn and CVs, which may unintentionally reveal personal details.
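SmartMatch’s internals are proprietary, so purely as an illustration of the blind-screening idea described above, here is a minimal sketch: demographic fields are stripped from a candidate record before any scoring takes place. All field names and the toy scoring rule are hypothetical, not Employment Hero’s actual implementation.

```python
# Hypothetical sketch of "blind" candidate screening: demographic
# attributes are removed before any match scoring takes place.

DEMOGRAPHIC_FIELDS = {"name", "gender", "age", "date_of_birth", "photo_url"}

def redact(candidate: dict) -> dict:
    """Return a copy of the candidate record without demographic fields."""
    return {k: v for k, v in candidate.items() if k not in DEMOGRAPHIC_FIELDS}

def skill_overlap_score(candidate: dict, required_skills: set) -> float:
    """Toy scoring rule: fraction of required skills the candidate lists."""
    skills = set(candidate.get("skills", []))
    return len(skills & required_skills) / len(required_skills)

candidate = {
    "name": "Jane Doe",
    "age": 52,
    "gender": "female",
    "skills": ["payroll", "bookkeeping", "xero"],
}

blind = redact(candidate)
print(sorted(blind))                                    # demographic keys are gone
print(skill_overlap_score(blind, {"payroll", "xero"}))  # scored on skills alone
```

The point of the sketch is only that the scoring function never sees age, gender, or name, so those attributes cannot influence the match.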

AI helps businesses tackle bias when screening for CVs. Photo: Fiordaliso/Getty Images

Another concern is how AI processes information compared to humans. While humans can understand nuances and subtleties, AI may lack this capability and rely on keyword matching. To address this, tools like SmartMatch evaluate a candidate’s entire profile to provide a holistic view and avoid missed opportunities due to lack of nuance.

SmartMatch not only assists in matching candidates with suitable roles but also helps small businesses understand their specific hiring needs. By analyzing previous hires and predicting future staffing requirements, SmartMatch offers a comprehensive approach to recruitment.

Understanding SME needs and employment history allows SmartMatch to introduce you to suitable candidates. Photo: Westend61/Getty Images

By offering candidates the ability to maintain an employment passport, Employment Hero empowers both job seekers and employers. This comprehensive approach to recruitment ensures that both parties benefit from accurate and efficient matches.

For small and medium-sized businesses, the impact of poor hiring decisions can be significant. By utilizing advanced tools like SmartMatch, these businesses can access sophisticated recruitment solutions previously available only to larger companies.

Discover how Employment Hero can revolutionize your recruitment process.

Source: www.theguardian.com

Black Ops 6: Omni Movement – Revolutionizing Gameplay

Here’s a fact I’m not entirely proud of: I’ve played every Call of Duty game since the series launched in 2003. I’ve experienced the very good (Call of Duty 4) and the very not so good (Call of Duty: Roads to Victory). There have been times when I was put off by narrative decisions, the mindless bigotry pervasive in online multiplayer servers, and the series-wide “America is the best!” mentality, but I’ve always come back to the games.

In that time, I’ve seen a lot of attempts to tweak the core feel of the game, from perks to jetpacks (thanks, Advanced Warfare!), but after spending a weekend testing the multiplayer beta for Call of Duty: Black Ops 6, I think developer Treyarch may have stumbled upon their best thing yet: something called Omni-Movement.

In essence, this seemingly minor addition allows players to sprint and dive in any direction, not just forward, and also allows for a degree of aftertouch, so you can glide around corners and change direction in the air. Being able to run sideways and dive backwards over couches might not sound all that important, but it really changes the game. The beta test features only three of the full version’s 16 online multiplayer maps and a small selection of online game modes, but it’s already ridiculously fun.

People are always flying around during a match. In the Skyline map, players dive through windows, run across hallways, and leap off the balconies of a ridiculously luxurious modern penthouse. In the Rewind map, they slide on their backs across the polished floors of a video rental store, pounce on each other from various heights, and dodge gunfire and remote-controlled bomb cars at the last moment. At critical moments, it feels like a giant John Woo shootout, with equal parts balletic choreography and bloodshed.

But rather than feeling chaotic and unbalanced like jetpack-era titles Advanced Warfare and Infinite Warfare, it actually seems to bring more depth and variety to the moment-to-moment experience. The ability to slip under gunfire gives you a way out of encounters that were previously deadly, and it also lets you move very quickly to different cover positions, which is extremely useful in modes like Domination and Hardpoint, where you have to capture and defend specific areas. I like the longer durations between spawns, which allows you to think in more spatially interesting ways.


Why did it take so long? In a recent interview with gaming site VGC, Treyarch associate design director Matt Scronce and production director Yale Miller said the game’s unusual four-year development cycle (CoD games typically get two years at most) allowed the team to experiment with fundamental elements and refine new features. Omni-Movement was born out of that process; the team even read a white paper from the Air Force Academy about how fast a human can run backwards.

Otherwise, the game feels more solid than innovative. Skyline is the most fun map, with sleek multi-storey interiors and hidden ventilation ducts, while Scud is a standard Middle Eastern CoD map with sandy trenches, caves, and a destroyed radar station. Rewind is a deserted shopping mall with store interiors, fast food joints, parking lots, and extremely long sightlines along the storefronts that could be called Sniper’s Avenue. The new game mode, Kill Order, is a twist on a familiar old-school FPS staple: one player on each team is designated as a high-value target, and the opposing team must eliminate that target to score. This leads to very dense skirmishes and a ton of chases around the map, with HVTs trying to hide in little nooks and crannies. It’s like a Benny Hill sketch, but with high-end military weaponry.

It’s like a Benny Hill sketch, but with high-end military weaponry… Call of Duty: Black Ops 6. Photo: Activision

There are also some new weapons, such as the Ames 85, a fully automatic assault rifle similar to the M16, and the Jackal PDW, a small Scorpion-esque machine pistol like the ones Arnie used in 1980s action movies. The latter has an incredible rate of fire but is also highly accurate at long range, making it a devastating force in beta matches; it will most likely be significantly nerfed before the game is released. Perhaps the most controversial addition is the body shield, a new ability that lets you sneak up behind an enemy player and take them hostage by double-tapping the melee attack button. The victim can then be used as a human shield for a few seconds, and Treyarch says you’ll be able to actually talk to the hostage via the headset’s microphone. This will inevitably lead to the most offensive homophobic trolling imaginable. It’s exactly what Call of Duty needs.

Black Ops 6 looks set to be a strong addition to the series, at least in terms of multiplayer. I’m not proud of the fact that I spent an entire weekend happily recreating my favorite scenes from Hard Boiled, darting sideways through modern interiors and firing shiny fetish rifles at strangers. But I’ve been doing this for 20 years, and for some reason, I have no plans to stop just yet.

Call of Duty: Black Ops 6 will be available on October 25th for PC, PS4/5, Xbox One, and Xbox Series X/S.

Source: www.theguardian.com

A Virtual Assistant Revolutionizing Cancer Research Through Interactivity

Imagine asking your virtual assistant, “Hey Google/Alexa, tell me the lyrics to ‘Beautiful People’ by Ed Sheeran.” Thanks to a voice user interface, you receive the information you need within seconds. Cancer doctors and researchers face the challenge of exploring and interpreting cancer genomic data, which resembles a huge library with billions of pieces in different categories. What if you had an Alexa-like tool that could answer questions about that data within seconds?

Traditionally, researchers have used computer programming and interactive websites with point-and-click capabilities to analyze cancer genomic data. Researchers agree that these methods are not only time-consuming, but also often require advanced technical knowledge that not all clinicians and researchers possess. Scientists from Singapore and the United States have collaborated to develop a conversational virtual assistant to navigate the vast library of cancer genomes. They named this assistant Melvin. Their goal was to make relevant information quickly available to all users, regardless of technical expertise.

The scientists described Melvin as a software tool that lets users interact with cancer genomic data through simple conversations with Amazon Alexa. It incorporates familiar Alexa features, such as the ability to understand and speak everyday English and to start a conversation when a researcher says the name “Alexa.” The scientists also built in a knowledge base containing genomic data for 33 types of cancer from a global cancer database, The Cancer Genome Atlas. It contains a variety of data, including gene expression data and mutations known to increase the risk of developing cancer. It also incorporates secondary information from other databases, such as the definitions and locations of human genes, protein information, and records of anti-cancer drug efficacy, to help users interpret the results effectively.

The scientists collected nearly 24,000 pronunciation samples for cancer genes, cancer types, mutations, types of genomic data, and synonyms of all terms in these categories from nine cancer experts at the Cancer Science Institute of Singapore. These experts were from Singapore, Indonesia, Sri Lanka, the United States, and India, which was needed to increase the diversity of Melvin’s accents. The scientists said that due to the lengthy data collection time, the pronunciations did not cover all known cancer genes and traits.

The scientists explained that a voice user interface works well only if it correctly hears and understands the user, including the context of the conversation. Because cancer terms differ from regular English vocabulary, the researchers trained Melvin to learn cancer vocabulary using a machine learning process that assigns meaning to previously unknown words, a design they call the out-of-vocabulary mapper service.
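Melvin’s out-of-vocabulary mapper is a trained machine-learning service, but the core idea, snapping a misheard term onto the nearest entry in a fixed cancer vocabulary, can be illustrated with plain string similarity. This sketch uses Python’s standard-library difflib as a stand-in; the vocabulary and cutoff are invented for illustration and are not Melvin’s actual parameters.

```python
import difflib

# A tiny stand-in for Melvin's cancer-gene vocabulary.
GENE_VOCABULARY = ["TP53", "KRAS", "EGFR", "BRCA1", "BRCA2", "PIK3CA"]

def map_out_of_vocabulary(heard: str, vocabulary=GENE_VOCABULARY, cutoff=0.6):
    """Map a (possibly misheard) spoken term to the closest known gene name.

    Returns None when nothing in the vocabulary is similar enough,
    mimicking the case where a user must teach the system a new term.
    """
    matches = difflib.get_close_matches(heard.upper(), vocabulary, n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(map_out_of_vocabulary("egfrr"))      # close mis-transcription -> EGFR
print(map_out_of_vocabulary("xylophone"))  # nothing close enough -> None
```

A real system would learn phonetic, not just orthographic, similarity, which is why the researchers collected thousands of pronunciation samples rather than relying on spelling alone.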

Additionally, the researchers developed a web portal where users can submit pronunciations of cancer terms that Melvin may not initially recognize, so that Melvin learns what a user means when it hears those words. To address potential security concerns about the recordings, the researchers noted that users can avoid data storage by deleting the recordings, following the instructions in their Amazon Alexa account. The researchers also discussed opportunities to expand Melvin’s capabilities by crowdsourcing pronunciation improvements, which they hope will provide more data to match regional and national accents so that Melvin can better understand and speak with diverse users.

The scientists say Melvin works with any device that supports Alexa and can answer questions that reference a gene by name, such as “What percentage of lung cancer patients have a mutation in that gene?” They reported that Melvin processes these questions within seconds and returns responses in audio and visual form.

They also reported being able to ask follow-up questions based on previous conversations. They described the difficulty of getting valuable information from a single question and highlighted the value of Melvin’s ability to maintain context through incremental questioning. The scientists asserted that this design makes it easy for users to explore multiple relevant questions in a single conversation. They also demonstrated that Melvin performs advanced analytical tasks, such as comparing mutations of specific genes across different cancer types and analyzing how gene expression changes.
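The context-keeping behavior described above, where a follow-up like “that gene” resolves against an earlier turn, is a standard pattern in dialogue systems. Here is a minimal sketch of the idea; the class, slot names, and mutation figures are all invented for illustration and are not Melvin’s actual code or data.

```python
# Hypothetical sketch of follow-up-question handling: a session keeps
# the entities mentioned so far, so references like "that gene" resolve
# against earlier turns. All data values below are made up.

MUTATION_RATES = {("EGFR", "lung"): 0.14, ("TP53", "lung"): 0.51}  # invented numbers

class Session:
    def __init__(self):
        self.context = {}  # slots carried across turns, e.g. the last gene named

    def ask_about_gene(self, gene: str) -> str:
        self.context["gene"] = gene
        return f"{gene} noted. What would you like to know?"

    def ask_mutation_rate(self, cancer: str) -> str:
        gene = self.context.get("gene")  # resolves "that gene" from a prior turn
        if gene is None:
            return "Which gene do you mean?"
        rate = MUTATION_RATES.get((gene, cancer))
        if rate is None:
            return f"I have no data for {gene} in {cancer} cancer."
        return f"{rate:.0%} of {cancer} cancer patients have a mutation in {gene}."

s = Session()
print(s.ask_about_gene("EGFR"))
print(s.ask_mutation_rate("lung"))  # uses the gene from the previous turn
```

Because the session carries the slot forward, each follow-up question can be short and incremental, which is exactly the conversational style the researchers highlight.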

The scientists concluded that Melvin can accelerate scientific discoveries in cancer research and help translate research results into solutions that clinicians can apply to patients. They acknowledged that while Melvin’s framework is currently centered on cancer genes, it can be expanded to support more characteristics of cancer. The team plans to enhance Melvin by adding more valuable datasets and features based on user feedback.



Source: sciworthy.com

Introducing Aethir Edge Devices: Powered by Qualcomm, Revolutionizing Distributed Edge Computing for the Future

Singapore, Singapore, April 18, 2024, Chainwire

  • At a Dubai press conference, Aethir Edge debuted as a pioneering edge computing device and the first licensed mining machine from Aethir, one of the industry's leading distributed cloud computing infrastructure providers, developed alongside Qualcomm. The device allows users to mine from the 23% of Aethir's native token $ATH supply reserved for mining. Integrated with a decentralized cloud network to overcome the barriers of centralization, Aethir Edge combines unparalleled edge computing capabilities, decentralized access, and exclusive benefits.

The future of distributed edge computing is here. Aethir debuted Aethir Edge, powered by Qualcomm technology, at an official press conference in Dubai during Token2049. Aethir Edge spearheads the evolution to decentralized edge computing as the first sanctioned mining device integrated with decentralized cloud infrastructure, delivering elite GPU performance, access to 23% of Aethir's native token $ATH supply, and equitable access, all in one device.

Enter the multi-trillion computing market

The edge computing sector is rapidly evolving into a multi-trillion dollar industry, but for too long edge capacity has been siloed in centralized data centers. Aethir Edge breaks through these barriers with a breakthrough architecture that interconnects high-performance edge AI devices into a distributed cloud network. By pooling localized resources, Aethir Edge brings elite computing power home and makes it accessible to everyone.

Computing power holds immense potential as an energy source for the digital realm. Aethir Edge, with support from Aethir and Qualcomm, leverages this power and takes it to the next level. Aethir Edge's vision is to fundamentally transform how users access, contribute to, and own a future that transcends the constraints of centralized networks and unleashes the full potential of edge AI technologies. Aethir Edge represents the beginning of this user-driven decentralized evolution.

The first and only certified mining device by Aethir

Aethir Edge, Aethir's only whitelisted mining product, allows users around the world to take advantage of exclusive benefits and earn income by sharing their spare bandwidth, IP addresses, and computing power. With its authorized status, Aethir Edge reserves up to 23% of the total supply of its native token $ATH for mining rewards.

“We are excited to support this innovative convergence of decentralized cloud, edge infrastructure, and fair incentives,” said Mark Rydon, co-founder of Aethir. “Aethir Edge is pioneering community-powered edge computing technology through rugged hardware, proprietary mining, and Aethir’s decentralized cloud network.”

When unparalleled edge computing power meets open accessibility

Powered by the Qualcomm® Snapdragon™ 865 chip, Aethir Edge delivers superior performance for data-intensive workloads. 12GB of LPDDR5 memory and 256GB of UFS 3.1 storage ensure ample resources for smooth parallel processing. The distributed architecture ensures reliability and uptime by spreading capacity across peer nodes, overcoming the vulnerabilities of centralized networks.

“I am very pleased to congratulate the Aethir team on the launch of their next-generation products targeted at distributed edge computing use cases and, more importantly, powered by Qualcomm processors,” said Qualcomm's vice president and head of enterprise development and industrial automation. “We are very proud to work with partners like Aethir to advance edge capabilities.”

Aethir Edge seamlessly interoperates with a variety of applications and delivers ultra-low latency through localized processing. Users around the world can access optimized experiences regardless of their location.

The backbone of innovation in the decentralized cloud ecosystem

As a core component of Aethir's decentralized cloud, Aethir Edge powers innovative new products such as the APhone, the first decentralized cloud smartphone. Localized edge capabilities enable implementation and operation across gaming, AI, VR/AR, real-time streaming, and many other applications.

“Aethir Edge perfectly complements APhone's mission to make Web3 available to everyone. APhone brings high-performance gaming, AI, graphics rendering, and more to every smartphone user around the world through a virtual OS.” – William Peckham, APhone Chief Business Officer.

Democratize access to the future of edge computing

Aethir Edge spearheads a decentralized infrastructure that is owned and managed by users, rather than a centralized organization. This makes high-performance computing available as an elegant, easy-to-use product that is integrated with profitability. Featuring superior enterprise-grade hardware and distributed cloud infrastructure, Aethir Edge leads the transition from centralized data monopoly to the unbiased edge environment of the future.

Aethir Edge is currently building partnerships with distributors around the world, including crypto mining companies and hardware vendors. Interested parties can fill out the Aethir Edge sales agent application form, so that teams can explore win-win opportunities to distribute products together and shape tomorrow's landscape through community power.

Users can visit www.myedge.io to be among the first to unlock distributed edge computing power.

About Aethir Edge

Aethir Edge is an enterprise-grade edge computing device integrated with Aethir's distributed GPU cloud infrastructure, ushering in a new era of edge computing. As Aethir’s first and only licensed mining device, it combines powerful computing, exclusive revenue, and decentralized access in one device, unlocking the true potential of DePIN.

Website | documentation | twitter

About Aethir

Aethir is a cloud computing infrastructure platform that revolutionizes the ownership, distribution, and usage paradigm of enterprise-grade graphics processing units (GPUs). By moving away from traditional centralized models, Aethir has deployed a scalable and competitive framework for sharing distributed computing resources to serve enterprise applications and customers across various industries and geographies.

Aethir is revolutionizing DePIN with its highly distributed, enterprise-grade, GPU-based computing infrastructure customized for AI and gaming. The project has raised over $130 million in ecosystem funding, backed by major Web3 investors including Framework Ventures, Merit Circle, Hashkey, Animoca Brands, Sanctor Capital, and Infinity Ventures Crypto (IVC). Aethir is paving the way for a Web3 future built on distributed computing.

Website | documentation | twitter | discord | telegram | linkedin

contact

Marketing Lead
Diksha
Aethir
diksha@aethir.com

Source: the-blockchain.com

Quantum Batteries: Revolutionizing Power Source Technology

Quantum batteries, with their innovative charging methods, are a revolutionary development in battery technology and offer potential for greater efficiency and a broader range of uses in sustainable energy solutions. These batteries use quantum phenomena to capture, distribute, and store power, surpassing the capabilities of traditional chemical batteries in certain low-power applications. A counterintuitive quantum process known as “indefinite causal order” is being used to improve the performance of these quantum batteries, bringing this futuristic technology closer to reality.

Although quantum batteries are still mostly limited to laboratory experiments, researchers are working on various aspects of them with the hope of integrating them into practical applications. Researchers including Yuanbo Chen and associate professor Yoshihiko Hasegawa of the University of Tokyo are focused on finding the most efficient way to charge quantum batteries.

Using the counterintuitive quantum effect of “indefinite causal order,” the research team found that the way quantum batteries are charged can have a significant impact on their performance. The effect also produces a surprising reversal of the relationship between charger power and battery charge, enabling a battery to reach higher energy using a significantly less powerful charger. Furthermore, the fundamental principles uncovered through this research could improve performance in other thermodynamic and heat-transfer processes, such as solar panels.
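For context, and as a textbook sketch rather than the paper’s own derivation: indefinite causal order is typically realized with the “quantum SWITCH,” in which a control qubit determines the order in which two charging channels $\mathcal{A}$ (Kraus operators $\{A_i\}$) and $\mathcal{B}$ (Kraus operators $\{B_j\}$) act on the battery. With the control in superposition, the battery experiences both orders at once:

```latex
% Quantum SWITCH of two channels A and B, controlled by a qubit
% (standard Ebler-Salek-Chiribella form of the combined Kraus operators):
W_{ij} \;=\; A_i B_j \otimes |0\rangle\langle 0|_c \;+\; B_j A_i \otimes |1\rangle\langle 1|_c

% With the control prepared in |+> = (|0> + |1>)/sqrt(2), the battery
% state rho evolves under a superposition of the two causal orders:
S\bigl(\rho \otimes |+\rangle\langle +|_c\bigr)
  \;=\; \sum_{ij} W_{ij}\,\bigl(\rho \otimes |+\rangle\langle +|_c\bigr)\,W_{ij}^{\dagger}
```

Measuring the control qubit in the $\{|+\rangle, |-\rangle\}$ basis then yields interference terms that have no analogue when the channels are applied in a fixed order, and it is these terms that such charging protocols exploit.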

The research paper, titled “Charging Quantum Batteries via Indefinite Causal Order: Theory and Experiment,” provides further details on this work and its potential applications in sustainable energy solutions.

Source: scitechdaily.com

Miniature VR goggles revolutionizing brain research

This diagram shows a VR setup with an “overhead threat” projected into the top of the field of view. Credit: Dom Pinke/Northwestern University
For the first time, the goggles allow researchers to study responses to overhead threats. Northwestern University
Researchers have developed a new virtual reality (VR) goggle for mice. These tiny goggles aren’t just cute, they offer a more immersive experience for lab mice. By more faithfully simulating natural environments, researchers can more accurately and precisely study the neural circuits underlying behavior. A leap forward in VR goggles The new goggles represent a breakthrough compared to current state-of-the-art systems that simply surround a mouse with a computer or projection screen. Current systems allow the mouse to see the laboratory environment peeking out from behind the screen, but the flat nature of the screen prevents it from conveying three-dimensional (3D) depth. Another drawback was that the researchers couldn’t easily attach a screen above the mice’s heads to simulate overhead threats, such as looming birds of prey. New VR goggles avoid all of these problems. And as VR grows in popularity, the goggles could also help researchers gain new insights into how the human brain adapts and responds to repeated VR exposure. . This area is currently poorly understood. The study was published in the journal Dec. 8. neuron. This is the first time researchers have used a VR system to simulate overhead threats. A view through new miniature VR goggles.Credit: Dom Pinke/Northwestern University “For the past 15 years, we’ve been using VR systems on mice,” said Daniel Dombeck of Northwestern University, lead author of the study. “Traditionally, labs have used large computers and projection screens to surround the animals. For humans, this is like watching TV in the living room. You can still see the couch and walls. You There are cues around it that let you know you’re not in the scene. Next, consider wearing VR goggles, like the Oculus Rift, that occupy your entire field of vision, except the projected scene. 
You can’t see anything except the projected scene, and each eye receives a slightly different image to create depth information, which the mice previously lacked.”

Dombeck is a professor of neurobiology at Northwestern University’s Weinberg College of Arts and Sciences. His laboratory is a leader in developing VR-based systems and high-resolution laser-based imaging systems for animal research.

The value of VR

Although researchers can observe animals in nature, it is extremely difficult to image patterns of brain activity in real time while animals interact with the real world. To overcome this challenge, researchers bring VR into the laboratory. In these experimental setups, an animal walks on a treadmill to move through a scene, such as a virtual maze, projected onto screens around it. Because the mouse stays in place on the treadmill rather than running through a natural environment or a physical maze, neurobiologists can use imaging tools to observe and map its brain. Ultimately, this helps researchers understand the general principles of how neural circuits activated during different behaviors encode information.

“VR essentially recreates a real-life environment,” Dombeck said. “We’ve had a lot of success with these VR systems, but the animals may not be as immersed as they would be in a real environment. It takes a lot of training just to get a mouse to pay attention to the screens and ignore the surrounding lab.”

Introducing iMRSIV

Recent advances in hardware miniaturization led Dombeck and his team to wonder whether they could develop VR goggles that more closely replicate a real-world environment. Using custom-designed lenses and miniature organic light-emitting diode (OLED) displays, they created compact goggles. The system, called Miniature Rodent Stereo Illumination VR (iMRSIV), consists of two lenses and two screens, one on each side of the head, that illuminate each eye separately for 3D vision.
This provides each eye with a 180-degree field of view that fully immerses the mouse and excludes the surrounding environment.

An artist’s cartoon interpretation of a mouse wearing VR goggles. Credit: @rita
Unlike VR goggles for humans, the iMRSIV (pronounced “immersive”) system does not wrap around the mouse’s head. Instead, the goggles are attached to experimental equipment and sit snugly right in front of the mouse’s face. Since the mouse runs in place on the treadmill, the goggles still cover the mouse’s field of view.
“We designed and built a custom holder for the goggles,” said John Issa, a postdoctoral fellow in Dombeck’s lab and co-first author of the study. “The entire optical display, the screens and the lenses, goes all the way around the mouse.”

Enhancing learning and engagement

By mapping the brains of mice, Dombeck and his team found that the brains of goggle-wearing mice activated in a manner very similar to that of freely moving animals. And in a side-by-side comparison, the researchers found that mice with goggles became immersed in a scene much faster than mice using traditional VR systems.

“We went through the same kind of training paradigm that we’ve used in the past, but the mice with the goggles learned faster,” Dombeck said. “After the first session they could already complete the task. They knew where to run and looked in the right place for the reward. We think they may not actually need as much training because they can engage with the environment in a more natural way.”

Simulating overhead threats for the first time

Next, the researchers used the goggles to simulate overhead threats, something that was not possible with existing systems: because the imaging hardware already sits above the mouse, there is no room to mount a screen there. Yet the sky above a mouse is often where the animal looks for important, sometimes life-or-death information.

“The top of a mouse’s field of view is very sensitive to detecting predators from above, such as birds,” said co-first author Dom Pinke, a research specialist in Dombeck’s lab. “It’s not a learned behavior; it’s an imprinted behavior. It’s hardwired into the mouse’s brain.”

To create a looming threat, the researchers projected a dark, expanding disk onto the top of the goggles, in the upper part of the mouse’s field of view. In the experiments, mice either ran faster or froze when they noticed the disk, both common responses to overhead threats.
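The paper does not spell out the stimulus parameters, but a looming stimulus like the dark expanding disk is conventionally parameterized by the visual angle of an object approaching the eye at constant speed: the angle grows slowly at first and then explosively just before collision. A minimal sketch, assuming a hypothetical object radius, speed, and time-to-collision:

```python
import math

def looming_angle_deg(t, radius_m=0.05, speed_mps=1.0, t_collision=2.0):
    """Visual angle (degrees) subtended by an object of physical radius
    `radius_m` approaching the eye at `speed_mps`, arriving at
    `t_collision` seconds. All parameters here are illustrative, not
    taken from the study."""
    distance = speed_mps * (t_collision - t)  # remaining distance to the eye
    if distance <= 0:
        return 180.0  # at or past collision: the disk fills the visual field
    return 2 * math.degrees(math.atan(radius_m / distance))

# Sample the expansion over the approach: slow growth, then a rapid bloom
for t in (0.0, 1.0, 1.5, 1.9):
    print(f"t={t:.1f}s  angle={looming_angle_deg(t):.1f} deg")
```

Rendering such an angle profile as a growing dark disk on the goggles’ upper display is what produces the characteristic “looming” percept that triggers innate escape or freezing responses.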
The researchers were able to record neural activity to study these responses in detail.

“In the future, we would like to investigate situations in which mice are predators rather than prey,” Issa said. “For example, we could watch brain activity while a mouse chases a fly, behavior that involves a lot of depth perception and distance estimation. Those are things we can now start to capture.”

Accessibility in neurobiological research

Dombeck hopes the goggles will open the door not only to further research but also to new researchers. Because they are relatively inexpensive and require less intensive laboratory setup, he believes the goggles could make neurobiology research more accessible.

“Traditional VR systems are very complicated,” Dombeck said. “They’re expensive and big, so you need a large lab with plenty of space. And because it takes so long to train a mouse to do a task, that limits how many experiments you can run. We’re still working on improvements, but our goggles are small, relatively inexpensive, and very easy to use. This could make VR technology available to other labs.”

Reference: “Full-field virtual reality goggles for mice” by Domonkos Pinke, John B. Issa, Gabriel A. Dara, Gergely Dobos and Daniel A. Dombeck, 8 December 2023, Neuron. DOI: 10.1016/j.neuron.2023.11.019

The research was supported by the National Institutes of Health (award R01-MH101297), the National Science Foundation (award ECCS-1835389), the Hartwell Foundation, and the Brain and Behavioral Research Foundation.

Source: scitechdaily.com