“The pain was like being struck by lightning and hit by a freight train at the same time,” Victoria Gray said. Today, she tells New Scientist: “Everything has changed for me now.”
Gray once endured debilitating symptoms of sickle cell disease, but in 2019, she found hope through CRISPR gene editing, a pioneering technology enabling precise modifications of DNA. By 2023, this groundbreaking treatment was officially recognized as the first approved CRISPR therapy.
Currently, hundreds of clinical trials are exploring CRISPR-based therapies, and they signify just the beginning of CRISPR’s potential. The tool is poised to treat a wide range of conditions beyond genetic disorders. For example, a single CRISPR dose may drastically lower cholesterol levels, significantly reducing the risk of heart attack and stroke.
While still in its infancy regarding safety, there’s optimism that CRISPR could eventually be routinely employed to modify children’s genomes, potentially reducing their risk of common diseases.
Additionally, CRISPR is set to revolutionize agriculture, facilitating the creation of crops and livestock that resist diseases, thrive in warmer climates, and are optimized for human consumption.
Given its transformative capabilities, CRISPR is arguably one of the most groundbreaking innovations of the 21st century. Its strength lies in correcting genetic “misspellings.” This involves precisely positioning the gene-editing tool within the genome, akin to placing a cursor in a lengthy document, before making modifications.
Microbes utilize this genetic editing mechanism in their defense against other microbes. Before 2012, researchers identified various natural gene-editing proteins, each limited to targeting a single location in the genome. Altering the target sequence required redesigning the protein’s DNA-binding section, a process that was time-consuming.
However, scientists discovered that bacteria have developed a diverse range of gene-editing proteins that bind to RNA—a close relative of DNA—allowing faster sequence matching. Producing RNA takes mere days instead of years.
In 2012, Jennifer Doudna and her team at the University of California, Berkeley, along with Emmanuelle Charpentier of the Max Planck Institute for Infection Biology, revealed the mechanics of one such gene-editing protein, CRISPR-Cas9. By simply supplying a “guide RNA” in a specific format, they could target any desired sequence.
Today, thousands of variants of CRISPR are in use for diverse applications, all relying on guide RNA targeting. This paradigm-shifting technology earned Doudna and Charpentier the Nobel Prize in 2020.
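The guide-RNA targeting described above, "placing a cursor" at a matching stretch of DNA, can be illustrated with a short sketch. This is a toy model under stated assumptions (exact matching and the common "NGG" PAM rule for Cas9), not a bioinformatics tool: real guides are about 20 nucleotides long, and real searches tolerate mismatches and scan both DNA strands.

```python
# Toy illustration of CRISPR-Cas9 target selection: Cas9 is steered by a
# guide RNA and cuts only where the matching DNA sequence sits immediately
# upstream of an "NGG" PAM motif (any base, then two guanines).

def find_cas9_sites(genome: str, guide: str) -> list[int]:
    """Return positions where `guide` matches and is followed by an NGG PAM."""
    sites = []
    n = len(guide)
    for i in range(len(genome) - n - 2):
        if genome[i:i + n] == guide:
            pam = genome[i + n:i + n + 3]
            if pam[0] in "ACGT" and pam[1:] == "GG":  # the NGG rule
                sites.append(i)
    return sites

# Made-up sequence: the match at index 2 sits next to a TGG PAM, while a
# second match near the end lacks a PAM and is therefore ignored.
genome = "TTACGTACGTGGCCACGTACGAAA"
guide = "ACGTACG"  # real guides are ~20 nt; shortened here for clarity
print(find_cas9_sites(genome, guide))  # [2]
```

Changing the target is then just a matter of supplying a different guide string, which is the practical advance the article describes: days of RNA synthesis instead of years of protein redesign.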
Veronica the cow: A groundbreaking example of non-primate mammal tool use
Antonio J. Osuna Mascaro
Recently, during a taxi ride, the driver shared with me a transformative experience involving a pig. A childhood with dogs had shaped his expectations of animals, but his encounter with pigs was eye-opening.
The driver explained how he constructed a bell-and-string system that allowed the animals to signal when they wanted to go outside. Interestingly, both dogs and pigs learned this cue, but the pigs took it further by ringing the bell to inform their humans about the dogs waiting outside. The driver spoke of these moments with affection and pride. Remarkably, I later learned that this had changed his dietary choices—he no longer eats pork.
This narrative reflects a broader trend in research on animal cognition. Historically, scientists focused primarily on non-human primates and on birds sometimes dubbed “feathered apes,” such as parrots and crows. Recently, however, studies have expanded to a wider variety of species, including honey bees, octopuses, and crocodiles.
In line with this expanded focus, new research conducted by Antonio Osuna Mascaro and Alice Auersperg at the University of Veterinary Medicine in Vienna investigates the cognitive abilities of cows, an often-overlooked species. Veronica, a pet cow (Bos taurus), displays remarkable innovation by using a broom to scratch her body. She employs the bristles for her back and flips it over for her more sensitive areas.
This observation marks the first documented instance of flexible tool use among non-primate mammals. What does Veronica’s tool use reveal about her cognition, and might it change how we view and treat cows?
Tool use, in broad terms, is defined as the manipulation of an object to achieve a specific goal. This definition excludes behaviors like nest-building or hiding, where actions serve static ends. Instead, true tool use involves active manipulation, such as using a stone to crack nuts or a stick to extract termites.
For many years, tool use was considered a trait unique to humans. This notion changed when Jane Goodall observed a chimpanzee named David Greybeard creating and utilizing tools to fish for termites. Subsequent discoveries revealed tool use in unexpected corners of the animal kingdom. For instance, antlion larvae throw sand at prey, while certain digger wasp species employ pebbles in their burrows. Such specialized behaviors evolved over millions of years, contrasting with the flexible tool use demonstrated by animals like Veronica.
Veronica cleverly uses different broom sides for various scratches
Antonio J. Osuna Mascaro
Remarkably, Veronica learned to use tools independently, progressing from twigs to the intelligent use of a broom without any direct teaching.
This behavior suggests that Veronica possesses the cognitive traits psychologists ascribe to creative tool users, notably those identified by Josep Call. Three elements define a creative tool user: first, the ability to explore objects and learn their physical properties; second, the ability to combine that knowledge to solve a problem, such as understanding that a hard object can relieve an itch; and third, the willingness to manipulate objects creatively, since mere physical capability is not enough. For example, while squirrel monkeys and capuchin monkeys have similar hands, only capuchins tend to manipulate objects.
This insight into cow cognition may change how we treat farm animals. Research indicates a correlation between perceived intelligence and how worthy of ethical treatment we consider animals to be. In one study, participants rated animals they believed to be less intelligent as more edible, while animals assigned higher intelligence were perceived as less edible. Participants introduced to the Bennett’s tree kangaroo perceived it as less sentient when it was presented as food.
Our treatment of animals correlates significantly with our perception of their intellect. Veronica’s story is likely the first of many that will challenge our views of “simple” domestic animals. For this knowledge to reshape our practices, we must confront our cognitive dissonance. Denial of animal consciousness allows us to overlook the ethical implications of our treatment. It requires courage to acknowledge their sentience instead of ignoring it.
Marta Halina, Professor of Philosophy of Science at Cambridge University
Petalol looked forward to Aida’s call each morning at 10 AM.
Daily check-in calls from an AI voice bot weren’t part of the service package she expected when she enrolled in St. Vincent’s home care, but the 79-year-old agreed four months ago to take part in the trial to help the initiative along. Realistically, though, her expectations were modest.
Yet, when the call comes in, she remarks: “I was taken aback by how responsive she is. It’s impressive for a robot.”
“She always asks, ‘How are you today?’ allowing you to express if you’re feeling unwell.”
“She then follows up with, ‘Did you get a chance to go outside today?’”
Aida also asks what tasks she has planned for the day. “I tell her I’ll manage it well.”
“If I say I’m going shopping, she’ll ask whether it’s for groceries or something else. I found that fascinating.”
Bots that alleviate administrative pressure
The trial, now nearing the end of its initial phase, exemplifies how advances in artificial intelligence are reshaping healthcare.
The digital health company partnered with St. Vincent’s Health to trial its generative AI technology, which is aimed at enhancing social interaction and enables home care clients to follow up with staff about any health concerns.
Dean Jones, the national director at St. Vincent’s, emphasizes that this service is not intended to replace face-to-face interactions.
“Clients still have weekly in-person meetings, but between those sessions the [AI] system facilitates daily check-ins and flags potential issues to the team or the client’s family,” Jones explains.
Dr. Tina Campbell, the health company’s managing director, says no adverse incidents have been reported from the St. Vincent’s trial.
The company uses OpenAI models “with clearly defined guardrails and prompts” to ensure conversations remain safe and that serious health concerns are promptly addressed, according to Campbell. For instance, if a client reports chest pain, the care team is alerted and the call is terminated, allowing the individual to call emergency services.
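The guardrail behaviour Campbell describes, where an urgent symptom triggers an alert to the care team and ends the call, can be sketched as simple triage logic. This is an illustrative assumption of how such a rule might look, not the company's actual implementation; in production a language model operating under prompt-level guardrails, rather than a keyword list, would do the classifying.

```python
# Illustrative escalation sketch: classify a client's reply during a check-in
# call and decide whether to continue chatting or escalate to the care team.
# The symptom list, function name, and reply wording are all assumptions.

URGENT_SYMPTOMS = {"chest pain", "can't breathe", "stroke", "collapsed"}

def triage_reply(client_reply: str) -> dict:
    """Decide what the bot should do next based on the client's reply."""
    text = client_reply.lower()
    if any(symptom in text for symptom in URGENT_SYMPTOMS):
        # Escalate: notify staff, then end the call so the line is free.
        return {
            "action": "end_call",
            "alert_care_team": True,
            "message": ("I'm alerting your care team now. Please hang up "
                        "and call emergency services."),
        }
    return {
        "action": "continue",
        "alert_care_team": False,
        "message": "Thanks for letting me know. Did you get outside today?",
    }

print(triage_reply("I've had chest pain since breakfast")["action"])  # end_call
```

The key design point mirrored here is that the bot never tries to treat the symptom itself: it hands off to humans and gets out of the way.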
Campbell believes that AI is pivotal in addressing significant workforce challenges within the healthcare sector.
“With this technology, we can lessen the burden on workforce management, allowing qualified health professionals to focus on their duties,” she states.
AI isn’t as novel as you think
Professor Enrico Coiera, founder of the Australian Alliance for Artificial Intelligence in Healthcare, notes that older AI systems have long been integral to healthcare in “back-office services,” including the interpretation of medical imaging and pathology reports.
Coiera, who directs the Centre for Health Informatics at Macquarie University, explains:
“In departments like Imaging and Radiology, machines already perform these tasks.”
Over the past decade, a newer AI method called “deep learning” has been used to analyze medical images and enhance diagnoses, Coiera adds.
These tools remain specialized and require expert interpretation; ultimately, responsibility for medical decisions rests with practitioners, Coiera stresses.
Some brain lesions cause seizures that are resistant to medication, making surgery the only treatment option. Successful surgery, however, depends on the ability to identify the abnormal tissue.
In a study published this week in Epilepsia, a team led by neurologist Emma McDonald Rouse demonstrated that “AI epilepsy detectors” can identify lesions in up to 94% of MRI and PET scans, including a subtype of lesion that human readers miss more than 60% of the time.
This AI was trained using scans from 54 patients and was tested on 17 children and 12 adults. Of the 17 children, 12 underwent surgery, and 11 are currently seizure-free.
The tool uses a neural-network classifier, similar to those used in breast cancer screening, to highlight abnormalities for expert review, offering a much faster path to diagnosis.
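The flag-for-review pattern described above, where a classifier scores regions of a scan and clinicians review whatever it surfaces, can be sketched in a few lines. The region names, scores, and threshold below are invented stand-ins for real model output; the point is that the classifier proposes candidates rather than making the diagnosis.

```python
# Illustrative sketch of "flag for expert review": a model assigns each scan
# region a lesion probability, and only high-scoring regions are surfaced to
# clinicians. All values here are hypothetical.

def flag_regions(scores: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Return region IDs whose lesion probability meets the review threshold."""
    return sorted(region for region, p in scores.items() if p >= threshold)

# Hypothetical per-region probabilities for one scan
scan_scores = {"frontal_L": 0.12, "temporal_R": 0.91, "parietal_L": 0.85}
print(flag_regions(scan_scores))  # ['parietal_L', 'temporal_R']
```

Lowering the threshold trades more false alarms for fewer missed lesions, a trade-off that in practice is tuned with clinicians, since the expert, not the model, makes the final call.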
McDonald Rouse underlines that researchers remain in the “early stages” of development, and that further study is necessary before the technology is ready for clinical use.
Professor Mark Cook, a neurologist not associated with the research, states that MRI scans yield vast amounts of high-resolution data that are challenging for humans to analyze. Thus, locating these lesions is akin to “finding needles in a haystack.”
“This exemplifies how AI can assist clinicians by providing quicker and more precise diagnoses, potentially enhancing surgical access and outcomes for children with otherwise severe epilepsy,” Cook affirms.
Prospects for disease detection
Dr. Stefan Buttigieg, vice-president of the Digital Health and Artificial Intelligence section at the European Association of Public Health, notes that deep neural networks are integral to monitoring and forecasting disease outbreaks.
At the Australian Public Health Conference in Wollongong last month, Buttigieg cited the early detection of the Covid-19 outbreak by BlueDot, a firm established by infectious disease specialists.
Generative AI represents a subset of deep learning, allowing technology to create new content based on its training data. Applications in healthcare include programs like Healthyly’s AI Voice Bot and AI Scribes for doctors.
Dr. Michael Wright, president of the Royal Australian College of General Practitioners, says GPs are embracing AI scribes, which transform consultations into notes for patient records.
Wright highlights that the primary benefit of scribes is to enhance the quality of interactions between physicians and patients.
Dr. Danielle McMullen, president of the Australian Medical Association, concurs, stating that scribes help doctors optimize their time and that AI could help prevent redundant testing for patients. The long-promised digitization of health records, however, remains a challenge.
Buttigieg argues that one of AI’s greatest promises is delivering increasingly personalized healthcare.
“For years, healthcare has relied on generic tools and solutions. Now, we are moving towards a future with more sophisticated solutions, where AI fulfills the same roles,” Buttigieg concludes.
Researchers can utilize AI to analyze MRI data to aid in identifying brain lesions. Photo: Karly Earl/Guardian
A US stealth bomber glides through the darkened skies en route to Iran. In Tehran, a solitary woman tends to a stray cat amidst the remains of a recent Israeli airstrike.
For novice viewers, this could easily be mistaken for a cinematic representation of the geopolitical turmoil that has unfolded recently.
Yet, despite its high-quality production, the scene was not filmed in any real location, and the woman feeding the cat is not an actress—she is a fictional character.
Midnight Drop, an AI film about the US bombing of nuclear sites in Iran
The captivating visuals come from a rough cut of a 12-minute short film depicting last month’s US attack on Iranian nuclear sites, crafted entirely with artificial intelligence by directors Samir Mallal and Bukha Kazumi.
The film is rooted in details gathered from news reports of the US bombings; the woman seen traversing the empty streets of Tehran is the same one who feeds the stray cat. Armed with that material, the creators produced sequences resembling those directed by Hollywood’s finest.
The remarkable speed at which the film was produced, and the unease it provokes in some, has not gone unnoticed by broadcasting experts.
Television producer and bestselling author Richard Osman recently remarked that a new era is dawning in the entertainment industry: the close of one chapter and the beginning of another.
Still from Midnight Drop showing a woman feeding a stray cat in Tehran at night. Photo: Oneday Studios
“I saw this and thought, ‘This marks the end of the beginning of something new,’” he said on The Rest Is Entertainment podcast.
For Mallal, a London-based documentary filmmaker known for creating advertisements for Samsung and Coca-Cola, AI has ushered in a novel genre of “Cinematic News.”
The Tehran film, titled Midnight Drop, serves as a follow-up to Spiders in the Sky, a recreation of June’s Ukrainian drone strikes on Russian bombers.
In a matter of weeks, Mallal, who also directed Spiders in the Sky, created a film depicting the Ukrainian attack, a project that would typically cost millions and take at least two years to develop.
“It should be feasible to utilize AI to create something unprecedented,” he remarked. “I’ve never encountered a news-reel film produced in a fortnight, nor a thriller based on current events completed in two weeks.”
Spiders in the Sky primarily utilized VEO3, a video generation model developed by Google alongside various other AI tools. ChatGPT assisted Mallal in streamlining the lengthy interview with the drone operator, which became the backbone of the film’s narrative; however, the voiceover, script, and music were not AI-generated.
Filmmakers recreate Ukrainian drone attacks against Russia using AI in Spiders in the Sky
Google’s filmmaking tool, Flow, is built on VEO3, enabling users to generate audio, sound effects, and background noise. Since its debut in May, the impact of these tools on YouTube and social media has been widely remarked upon. As Osman’s podcast partner Marina Hyde put it last week, “The expansion is astonishing.”
Mallal and Kazumi aim to finalize a film depicting the stealth bomber mission over Iran by August, with a runtime six times that of Spiders in the Sky, leveraging models such as VEO3, OpenAI’s Sora, and Midjourney.
“I seek to demonstrate a key point,” states Mallal. “You can produce high-quality content rapidly and keep pace with cultural developments, especially since Hollywood operates at a notably slower rate.”
Spiders in the Sky, an AI film directed by Samir Mallal, tells the story of a Ukrainian drone attack on a Russian airfield. Photo: Oneday Studios
He adds: “The creative journey often involves generating poor ideas to eventually unearth the good ones. With AI, we can now expedite this process, allowing for a greater volume of ‘bad ideas’.”
Recently, Mallal and Kazumi produced Atlas, Interrupted, a short film centered on the 3I/ATLAS comet, a recent news event covered by the BBC.
David Jones, CEO of BrandTech Group, an advertising startup utilizing generative AI (a term encompassing tools like chatbots and video generators) for marketing campaigns, remarks:
“Currently, less than 1% of branded content is generated with generative AI; soon, 100% will be created either fully or partially using it,” he explains.
Last week, Netflix disclosed its initial use of AI on one of its television productions.
A Ukrainian drone closes in on its target in Spiders in the Sky. Photo: Oneday Studios
However, this surge in AI-driven creativity raises copyright concerns. In the UK, the creative sector is outraged by a government proposal that would allow AI models to be trained on copyrighted material without the owners’ consent unless they explicitly opt out.
Mallal advocates for “an easily accessible and user-friendly program that ensures artists are compensated for their creations.”
Beeban Kidron, a crossbench peer and prominent campaigner against the government’s proposal, acknowledges that AI filmmaking tools are “remarkable,” but questions the extent to which they rely on creators’ works. She emphasizes: “Creators require fairness in this new system, or invaluable assets will be lost.”
YouTube’s terms allow Google to use creators’ works to train its AI models, though the company denies using the entire YouTube catalog for this purpose.
Mallal advocates treating AI filmmaking as “promptcraft,” using carefully written prompts to direct AI systems. He reveals that during production of the Ukrainian film, he was astonished by how swiftly he could adjust camera angles and lighting with a few keystrokes.
“I’ve deeply engaged with AI, learning how to collaborate with engineers, and how to translate my directorial skills into prompts. Yet, I had never produced any creative outcome until VEO3 emerged.”
Artificial intelligence is trained on human-created content, that is, on actual intelligence. To train AI to write fiction, novels are used; to train it to write job specifications, job descriptions are used. But a problem arises from this approach: despite efforts to eliminate them, humans inherently hold biases, and AI trained on human-created content may adopt those biases. Overcoming bias is a significant challenge for AI.
“Bias is prevalent in hiring and stems from the existing biases in most human-run recruitment processes,” explains Kevin Fitzgerald, managing director of UK-based employment management platform Employment Hero. The platform utilizes AI to streamline recruitment processes and minimize bias. “The biases present in the recruitment team are embedded in the process itself.”
One way AI addresses bias is through tools like SmartMatch offered by Employment Hero. By focusing on candidates’ skills and abilities while omitting demographic information such as gender and age, biases can be reduced. This contrasts with traditional methods like LinkedIn and CVs, which may unintentionally reveal personal details.
AI helps businesses tackle bias when screening for CVs. Photo: Fiordaliso/Getty Images
Another concern is how AI processes information compared to humans. While humans can understand nuances and subtleties, AI may lack this capability and rely on keyword matching. To address this, tools like SmartMatch evaluate a candidate’s entire profile to provide a holistic view and avoid missed opportunities due to lack of nuance.
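The two ideas described here, omitting demographic fields before screening and scoring candidates on skills rather than raw keyword hits, can be sketched as follows. This is a minimal illustration of the approach, not Employment Hero's actual SmartMatch code; the field names and scoring rule are assumptions.

```python
# Illustrative "blind screening" sketch: strip fields that could reveal
# demographics, then score candidates purely on skill coverage. Field names
# and the scoring rule are hypothetical.

DEMOGRAPHIC_FIELDS = {"name", "gender", "age", "photo", "nationality"}

def blind_profile(candidate: dict) -> dict:
    """Drop fields that could reveal demographic information."""
    return {k: v for k, v in candidate.items() if k not in DEMOGRAPHIC_FIELDS}

def skill_score(candidate: dict, required: set[str]) -> float:
    """Fraction of required skills the blinded candidate covers."""
    skills = set(blind_profile(candidate).get("skills", []))
    return len(skills & required) / len(required) if required else 0.0

cand = {"name": "A. Example", "age": 52, "skills": ["sql", "python", "etl"]}
print(skill_score(cand, {"sql", "python"}))  # 1.0
```

Note that blinding reduces one source of bias but not all of them: if the skill taxonomy itself reflects biased past hiring, the score inherits that bias, which is why the article frames these tools as reducing, not eliminating, the problem.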
SmartMatch not only assists in matching candidates with suitable roles but also helps small businesses understand their specific hiring needs. By analyzing previous hires and predicting future staffing requirements, SmartMatch offers a comprehensive approach to recruitment.
Understanding SME needs and employment history allows SmartMatch to introduce you to suitable candidates. Photo: Westend61/Getty Images
By offering candidates the ability to maintain an employment passport, Employment Hero empowers both job seekers and employers. This comprehensive approach to recruitment ensures that both parties benefit from accurate and efficient matches.
For small and medium-sized businesses, the impact of poor hiring decisions can be significant. By utilizing advanced tools like SmartMatch, these businesses can access sophisticated recruitment solutions previously available only to larger companies.
Discover how Employment Hero can revolutionize your recruitment process.
Here’s a fact I’m not entirely proud of: I’ve played every Call of Duty game since the series launched in 2003. I’ve experienced the very good (Call of Duty 4) and the very not so good (Call of Duty: Roads to Victory). There have been times when I was put off by narrative decisions, the mindless bigotry pervasive in online multiplayer servers, and the series-wide “America is the best!” mentality, but I’ve always come back to the games.
In that time, I’ve seen a lot of attempts to tweak the core feel of the game, from perks to jetpacks (thanks, Advanced Warfare!), but after spending a weekend testing the multiplayer beta for Call of Duty: Black Ops 6, I think developer Treyarch may have stumbled upon their best thing yet: something called Omni-Movement.
In essence, this seemingly minor addition allows players to sprint and dive in any direction, not just forward, and allows a degree of aftertouch, so you can glide around corners and change direction in the air. Being able to run sideways and jump backwards over couches may not sound all that important, but it really changes the game. The beta features only three of the full version’s 16 online multiplayer maps and a small selection of online game modes, but it’s already ridiculously fun.
People are flying around everywhere. In the Skyline map, players dive through windows, sprint across hallways, and leap off the balconies of a ridiculously luxurious modern penthouse. In Rewind, they slide on their backs across the polished floors of a video rental store, pounce on each other from various heights, and dodge gunfire and remote-controlled bomb cars at the last moment. At its best, it feels like a giant John Woo shootout, equal parts balletic choreography and bloodshed.
But rather than feeling chaotic and unbalanced like the jetpack-era titles Advanced Warfare and Infinite Warfare, it actually brings more depth and variety to the moment-to-moment experience. The ability to slip under gunfire gives you a way out of encounters that were previously deadly, and it lets you move very quickly between cover positions, which is extremely useful in modes like Domination and Hardpoint, where you have to capture and defend specific areas. I also like the longer durations between spawns, which allow you to think in more spatially interesting ways.
Why did it take so long? In a recent interview with gaming site VGC, Treyarch associate design director Matt Scronce and production director Yale Miller said the game’s unusual four-year development cycle (CoD games typically get two years at most) allowed the team to experiment with fundamental elements and refine new features. Omni-Movement was born out of that process; the team even read a white paper from the Air Force Academy about how fast a human can run backwards.
Otherwise, the game feels more solid than innovative. Skyline is the most fun map, with sleek multi-storey interiors and hidden ventilation ducts, while Squad is a standard Middle Eastern CoD map with sandy trenches, caves and a destroyed radar station. Rewind is a deserted shopping mall with store interiors, fast food joints, parking lots and extremely long sightlines along storefronts that could be called Sniper’s Avenue. The new game mode, Kill Order, is a familiar old-school FPS staple. One player on each team is designated as a high-value target, and the opponent must eliminate that target to score. This leads to very dense skirmishes and a ton of chases around the map, with HVTs trying to hide in little nooks and crannies. It’s like a Benny Hill sketch, but with high-end military weaponry.
It’s like a Benny Hill sketch, but with high-end military weaponry… Call of Duty: Black Ops 6. Photo: Activision
There are also some new weapons, such as the Ames 85, a fully automatic assault rifle similar to the M16, and the Crazy Jackal PDW, a small Scorpion-esque machine pistol like the ones Ernie used in 1980s action movies. The latter has an incredible rate of fire, but is also highly accurate at long range, making it a devastating force in beta matches. It will most likely be significantly nerfed before the game is released. Perhaps the most controversial addition is the body shield. This is a new ability that allows you to sneak up behind an enemy player and take them hostage by double tapping the melee attack button. The victim can then be used as a human shield for a few seconds, and Treyarch says you’ll be able to actually talk to the hostage via the headset’s microphone. This will inevitably lead to the most offensive homophobic trolling imaginable. It’s exactly what Call of Duty needs.
Black Ops 6 looks set to be a strong addition to the series, at least in terms of multiplayer. I’m not proud of the fact that I spent an entire weekend happily recreating my favorite scenes from Hard Boiled, darting sideways through modern interiors and firing shiny fetish rifles at strangers. But I’ve been doing this for 20 years, and for some reason, I have no plans to stop just yet.
Imagine asking your virtual assistant: “Hey Google/Alexa, tell me the lyrics to ‘Beautiful People’ by Ed Sheeran.” Through a voice user interface, you receive the information you need within seconds. Cancer doctors and researchers, meanwhile, face the challenge of exploring and interpreting cancer genomic data, which resembles a huge library with billions of pieces in different categories. What if they had an Alexa-like tool that could answer questions about that data within seconds?
Traditionally, researchers have used computer programming and interactive websites with point-and-click capabilities to analyze cancer genomic data. Researchers agree that these methods are not only time-consuming, but also often require advanced technical knowledge that not all clinicians and researchers possess. Scientists from Singapore and the United States have collaborated to develop a conversational virtual assistant to navigate the vast library of cancer genomes. They named this assistant Melvin. Their goal was to make relevant information quickly available to all users, regardless of technical expertise.
The scientists described Melvin as a software tool that lets users interact with cancer genomic data through simple conversations with Amazon Alexa. It incorporates familiar Alexa features, such as the ability to understand and speak everyday English and the ability for researchers to initiate a conversation by saying the name “Alexa.” The scientists also built in a knowledge base containing genomic data for 33 types of cancer from a global cancer database, The Cancer Genome Atlas. This includes a variety of data, such as gene expression data and mutations known to increase the risk of developing cancer. Melvin also incorporates secondary information from other databases, including the definitions and locations of human genes, protein information, and anti-cancer drug efficacy records, to help users interpret results effectively.
The scientists collected nearly 24,000 pronunciation samples covering cancer genes, cancer types, mutations, types of genomic data, and synonyms of all terms in these categories from nine cancer experts at the Cancer Science Institute of Singapore. These experts came from Singapore, Indonesia, Sri Lanka, the United States, and India, a mix chosen to increase the diversity of accents Melvin was trained on. The scientists said that, owing to the lengthy data collection process, the pronunciations did not cover all known cancer genes and traits.
The scientists explained that a voice user interface works well only if it correctly hears and understands the user, including the context of the conversation. Because cancer terms differ from regular English vocabulary, the researchers trained Melvin to learn cancer vocabulary using a machine-learning service, an out-of-vocabulary mapper, that assigns meaning to previously unknown words.
Additionally, the researchers developed a web portal where users can submit pronunciations of cancer terms that Melvin may not initially recognize, allowing Melvin to learn what the user means when it hears those words. To address potential security concerns about the recordings, the researchers noted that users can avoid data storage by deleting the recordings, following the instructions in their Amazon Alexa account. The researchers also discussed expanding Melvin’s capabilities by crowdsourcing pronunciation improvements, hoping the additional data will cover more regional and national accents so that Melvin can understand, and speak to, a wider range of users.
The scientists say Melvin will work with any Alexa-enabled device and can handle questions such as “Tell me about [gene name]” and “What percentage of lung cancer patients have a mutation in that gene?” They report that Melvin processes these questions within seconds and returns responses in audio and visual form.
They also reported being able to ask follow-up questions based on previous conversations. They described the difficulty of getting valuable information from a single question and highlighted the value of Melvin’s ability to maintain context through incremental questioning. The scientists asserted that this design makes it easy for users to explore multiple relevant questions in a single conversation. They also demonstrated that Melvin performs advanced analytical tasks, such as comparing mutations of specific genes across different cancer types and analyzing how gene expression changes.
The scientists concluded that Melvin can accelerate scientific discoveries in cancer research and help translate research results into solutions that clinicians can apply to patients. They acknowledged that while Melvin’s framework is currently centered on cancer genes, it can be expanded to support more characteristics of cancer. The team plans to enhance Melvin by adding more valuable datasets and features based on user feedback.
At a Dubai press conference, Aethir Edge debuted as a pioneering edge computing device and the first licensed mining machine from Aethir, one of the industry’s leading distributed cloud computing infrastructure providers, in partnership with Qualcomm. The device lets users mine from the 23% of Aethir’s native token $ATH supply reserved for it. Integrated with a decentralized cloud network to overcome the barriers of centralization, Aethir Edge combines unparalleled edge computing capabilities, decentralized access, and exclusive benefits.
The future of distributed edge computing is here. Aethir debuted Aethir Edge, supported by Qualcomm technology, at an official TOKEN2049 press conference in Dubai. Aethir Edge spearheads the evolution toward decentralized edge computing as the first sanctioned mining device integrated with decentralized cloud infrastructure, delivering elite GPU performance, access to the 23% of Aethir’s native token $ATH supply reserved for mining, and equitable access, all in one device.
Entering the multi-trillion-dollar computing market
The edge computing sector is rapidly evolving into a multi-trillion-dollar industry, but for too long edge capacity has been siloed in centralized data centers. Aethir Edge breaks through these barriers with a breakthrough architecture that interconnects high-performance edge AI devices into a distributed cloud network. By pooling localized resources, Aethir Edge brings elite computing power home and makes it accessible to everyone.
Computing power holds immense potential as the energy source of the digital realm. Aethir Edge, with support from Aethir and Qualcomm, harnesses this power and takes it to the next level. The vision behind Aethir Edge is to fundamentally transform how users access, contribute to, and own computing, a future that transcends the constraints of centralized networks and unleashes the full potential of edge AI technologies. Aethir Edge represents the beginning of this user-driven decentralized evolution.
Aethir’s first and only certified mining device
Aethir Edge, Aethir's only whitelisted mining product, allows users around the world to enjoy exclusive benefits and earn income by sharing their spare bandwidth, IP addresses, and computing power. With its authorized status, Aethir Edge reserves up to 23% of the total supply of its native token $ATH for mining.
“We are excited to support this innovative convergence of decentralized cloud, edge infrastructure, and fair incentives,” said Mark Rydon, co-founder of Aethir. “Aethir Edge is pioneering community-powered edge computing technology through rugged hardware, proprietary mining, and Aethir’s decentralized cloud network.”
When unparalleled edge computing power meets open accessibility
Powered by the Qualcomm® Snapdragon™ 865 chip, Aethir Edge delivers superior performance for data-intensive workloads. Its 12GB of LPDDR5 memory and 256GB of UFS 3.1 storage ensure ample resources for smooth parallel processing, while its distributed architecture ensures reliability and uptime by spreading capacity across peer nodes, overcoming the vulnerabilities of centralized networks.
“I am very pleased to congratulate the Aethir team on the launch of their next-generation products targeted at distributed edge computing use cases and, more importantly, powered by Qualcomm Technologies processors,” said Qualcomm's vice president and head of enterprise development and industrial automation. “We are very proud to work with partners like Aethir to advance edge capabilities.”
Aethir Edge seamlessly interoperates with a variety of applications and delivers ultra-low latency through localized processing. Users around the world can access optimized experiences regardless of their location.
The backbone of innovation in the decentralized cloud ecosystem
As a core component of Aethir's decentralized cloud, Aethir Edge powers innovative new products such as the APhone, the first decentralized cloud smartphone. Localized edge capabilities enable implementation and operation across gaming, AI, VR/AR, real-time streaming, and many other applications.
“Aethir Edge perfectly complements APhone's mission to make Web3 available to everyone. APhone brings high-performance gaming, AI, graphics rendering, and more to every smartphone user around the world through a virtual OS,” said William Peckham, APhone Chief Business Officer.
Democratizing access to the future of edge computing
Aethir Edge spearheads a decentralized infrastructure that is owned and managed by users rather than a centralized organization. It makes high-performance computing available as an elegant, easy-to-use product with built-in earning potential. Featuring enterprise-grade hardware and distributed cloud infrastructure, Aethir Edge leads the transition from centralized data monopolies to the open edge environment of the future.
Aethir Edge is actively building partnerships around the world with crypto mining companies, hardware vendors, and distributors. Interested teams can fill out the Aethir Edge sales agent application form to explore win-win opportunities to distribute products together and shape tomorrow's landscape through community power.
Users can visit www.myedge.io to be among the first to unlock distributed edge computing power.
About Aethir Edge
Aethir Edge is an enterprise-grade edge computing device integrated with Aethir's distributed GPU cloud infrastructure, ushering in a new era of edge computing. As Aethir’s first and only licensed mining device, it combines powerful computing, exclusive revenue, and decentralized access in one device, unlocking the true potential of DePIN.
Aethir is a cloud computing infrastructure platform that revolutionizes the ownership, distribution, and usage paradigm of enterprise-grade graphics processing units (GPUs). By moving away from traditional centralized models, Aethir has deployed a scalable and competitive framework for sharing distributed computing resources to serve enterprise applications and customers across various industries and geographies.
Aethir is revolutionizing DePIN with its highly distributed, enterprise-grade, GPU-based computing infrastructure customized for AI and gaming. Having raised over $130 million in ecosystem funding, backed by major Web3 investors including Framework Ventures, Merit Circle, Hashkey, Animoca Brands, Sanctor Capital, and Infinity Ventures Crypto (IVC), Aethir is paving the way for a Web3 future built on distributed computing.
Quantum batteries, with their innovative charging methods, are a revolutionary development in battery technology and offer potential for greater efficiency and a broader range of uses in sustainable energy solutions. These batteries use quantum phenomena to capture, distribute, and store power, surpassing the capabilities of traditional chemical batteries in certain low-power applications. A counterintuitive quantum process known as “indefinite causal order” is being used to improve the performance of these quantum batteries, bringing this futuristic technology closer to reality.
Although quantum batteries are still mostly limited to laboratory experiments, researchers are working on various aspects of them with the hope of integrating them into practical applications in the future. Researchers, including Chen Yuanbo and associate professor Yoshihiko Hasegawa from the University of Tokyo, are focusing on the most efficient way to charge quantum batteries.
Using the quantum effect known as “indefinite causal order,” the research team found that the way a quantum battery is charged can significantly affect its performance. The effect also produced a surprising reversal of the usual relationship between charger power and battery charge, enabling higher-energy batteries to be charged with significantly less power. Furthermore, the fundamental principles uncovered through this research could improve performance in various thermodynamic and heat-transfer processes, such as solar panels.
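The “indefinite causal order” mentioned above can be illustrated with the textbook quantum switch, in which a control qubit in superposition determines the order in which two operations act on a target. The sketch below is an illustration of that general construction under simplifying assumptions (two anticommuting Pauli operations as stand-ins), not the charging protocol from the paper:

```python
import numpy as np

# Minimal numpy sketch of the "quantum switch", the standard construction
# behind indefinite causal order. Illustrative only, not the paper's
# charging protocol: two operations U and V act on a target qubit in an
# order controlled by a second qubit held in superposition.

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

U = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X
V = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z

# Switch unitary: control |0> applies U after V, control |1> applies V after U.
W = np.kron(np.outer(ket0, ket0), U @ V) + np.kron(np.outer(ket1, ket1), V @ U)

state_in = np.kron(plus, ket0)   # control in |+>, target in |0>
state_out = W @ state_in

# Project the control onto |->; the leftover vector is the (unnormalized)
# target state, and its squared norm is the probability of that outcome.
proj_minus = np.kron(minus.conj(), np.eye(2))
target_if_minus = proj_minus @ state_out
p_minus = np.vdot(target_if_minus, target_if_minus).real

# Because X and Z anticommute, interference between the two causal orders
# forces the control into |-> with probability 1.
print(round(p_minus, 6))
```

The point of the construction is that the output depends on the interference between the two orderings, an effect no fixed-order sequence of the same operations can reproduce; this is the resource the charging scheme exploits.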
The research paper, titled “Charging Quantum Batteries via Indefinite Causal Order: Theory and Experiment,” provides further details on this groundbreaking work and its potential applications in sustainable energy solutions.
This diagram shows a VR setup with an “overhead threat” projected into the top of the field of view. Credit: Dom Pinke/Northwestern University

For the first time, goggles allow researchers to study responses to overhead threats. Northwestern University researchers have developed new virtual reality (VR) goggles for mice. These tiny goggles aren’t just cute; they offer a more immersive experience for lab mice. By more faithfully simulating natural environments, researchers can study the neural circuits underlying behavior more accurately and precisely.

A leap forward in VR goggles

The new goggles represent a breakthrough over current state-of-the-art systems, which simply surround a mouse with computer or projection screens. Those systems let the mouse see the laboratory environment peeking out from behind the screens, and the flat screens cannot convey three-dimensional (3D) depth. Another drawback was that researchers couldn’t easily mount a screen above the mice’s heads to simulate overhead threats, such as looming birds of prey.

The new VR goggles avoid all of these problems. And as VR grows in popularity, the goggles could also help researchers gain new insights into how the human brain adapts and responds to repeated VR exposure, an area that is currently poorly understood. The study was published December 8 in the journal Neuron. This is the first time researchers have used a VR system to simulate overhead threats.

A view through the new miniature VR goggles. Credit: Dom Pinke/Northwestern University

“For the past 15 years, we’ve been using VR systems with mice,” said Daniel Dombeck of Northwestern University, lead author of the study. “Traditionally, labs have used large computers and projection screens to surround the animals. For humans, this is like watching TV in your living room. You can still see the couch and the walls. There are cues around you that let you know you’re not in the scene.
Now imagine wearing VR goggles like the Oculus Rift, which occupy your entire field of vision. You can’t see anything except the projected scene, and a different scene is projected to each eye to create depth information. That’s what mice have been lacking.”

Dombeck is a professor of neurobiology in Northwestern University’s Weinberg College of Arts and Sciences. His laboratory is a leader in developing VR-based systems and high-resolution laser-based imaging systems for animal research.

The value of VR

Although researchers can observe animals in nature, it is extremely difficult to image patterns of brain activity in real time while animals interact with the real world. To overcome this challenge, researchers integrated VR into laboratory settings. In these experiments, animals use a treadmill to move through a scene, such as a virtual maze, projected onto screens around them. By keeping the mouse in place on a treadmill, rather than letting it run through a natural environment or a physical maze, neurobiologists can use imaging tools to observe and map the brain. Ultimately, this helps researchers understand the general principles of how neural circuits activated during different behaviors encode information.

“VR essentially recreates a real-life environment,” Dombeck said. “While we’ve had a lot of success with this VR system, the animals may not be as immersed as they would be in a real environment. It takes a lot of training just to get the mouse to pay attention to the screens and ignore the surrounding lab.”

Introducing iMRSIV

Recent advances in hardware miniaturization led Dombeck and his team to wonder whether they could develop VR goggles that more closely replicate real-world environments. They created compact goggles using custom-designed lenses and small organic light-emitting diode (OLED) displays.
The system, called Miniature Rodent Stereo Illumination VR (iMRSIV), consists of two lenses and two screens, one on each side of the head, that illuminate each eye individually for 3D vision. This provides each eye with a 180-degree field of view that fully immerses the mouse and excludes the surrounding environment.

An artist’s cartoon interpretation of a mouse wearing VR goggles. Credit: @rita

Unlike VR goggles for humans, the iMRSIV (pronounced “immersive”) system does not wrap around the mouse’s head. Instead, the goggles are attached to the experimental equipment and sit snugly right in front of the mouse’s face. Since the mouse runs in place on the treadmill, the goggles still cover its entire field of view.

“We designed and built a custom holder for the goggles,” said John Issa, a postdoctoral fellow in Dombeck’s lab and co-first author of the study. “The entire optical display, the screens and the lenses, goes all the way around the mouse.”

Enhanced learning and engagement

By mapping the brains of mice, Dombeck and his team found that the brains of goggle-wearing mice activated in a manner very similar to those of freely moving animals. And in a side-by-side comparison, the researchers found that mice wearing the goggles engaged with the scene much faster than mice using traditional VR systems.

“We went through the same kind of training paradigm that we’ve used in the past, but the mice with the goggles learned faster,” Dombeck said. “After the first session, they could already complete the task. They knew where to run and looked in the right place for the reward. We think they may not need as much training because they can interact with the environment in a more natural way.”

Simulating overhead threats for the first time

Next, the researchers used the goggles to simulate overhead threats, something that was not possible with previous systems.
Because the hardware for the imaging technology already sits on top of the mouse, there is no place to mount a computer screen overhead. But the sky above is often where animals look for important, sometimes life-or-death, information.

“The upper part of the visual field in mice is very sensitive to detecting predators from above, like birds of prey,” said co-first author Dom Pinke, a research specialist in Dombeck’s lab. “It’s not a learned behavior; it’s hardwired into the mouse’s brain.”

To create the looming threat, the researchers projected a dark, expanding disk into the top of the goggles, at the top of the mouse’s field of view. In experiments, mice either ran faster or froze when they noticed the disk, both common responses to overhead threats. The researchers recorded neural activity to study these responses in detail.

“In the future, we would like to investigate situations in which the mouse is the predator rather than the prey,” Issa said. “For example, we could observe brain activity while it chases a fly. That involves a lot of depth perception and distance estimation, things we can start to capture.”

Accessibility in neurobiological research

Dombeck hopes the goggles will open the door not only to further research but also to new researchers. He believes the goggles could make neurobiology research more accessible because they are relatively inexpensive and require less intensive laboratory preparation.

“Traditional VR systems are very complex,” Dombeck said. “They’re expensive and big; you need a large lab with plenty of space. Additionally, the long time it takes to train a mouse to perform a task limits the number of experiments you can run. Although we are still working on improvements, our goggles are small, relatively inexpensive, and very easy to use. This could make VR technology available to other labs.”

Reference: “Full-field virtual reality goggles for mice” by Domonkos Pinke, John B. Issa, Gabriel A. Dara, Gergely Dobos and Daniel A. Dombeck, 8 December 2023, Neuron. DOI: 10.1016/j.neuron.2023.11.019

The research was funded by the National Institutes of Health (award number R01-MH101297), the National Science Foundation (award number ECCS-1835389), the Hartwell Foundation, and the Brain and Behavioral Research Foundation.