Universal Detectors Identify AI Deepfake Videos with Unprecedented Accuracy

A deepfake video of Australian Prime Minister Anthony Albanese displayed on a smartphone

Australian Associated Press/Alamy

A universal deepfake detector has demonstrated high accuracy in identifying many kinds of videos that have been altered or entirely generated by AI. The technology could help flag deepfake pornography, scams or misleading political videos created with largely unregulated AI tools.

The rise of cheap, accessible AI-powered deepfake creation tools has led to rampant online distribution of synthetic videos. Many involve non-consensual depictions of women, including celebrities and students. Deepfakes are also being used to sway political elections and to scale up financial scams targeting everyday consumers and corporate leaders.

Most AI models designed to spot synthetic videos, however, focus primarily on faces. That means they excel at identifying one specific type of deepfake, in which a person’s face is swapped into existing footage. “It’s not enough to detect a manipulated face in a single video – we need a model capable of detecting background alterations or entirely synthetic videos,” says Rohit Kundu at the University of California, Riverside. “Our approach tackles that broader problem, where the entire video may be synthetically generated.”

Kundu and his team have developed a universal detector that uses AI to analyze both faces and background elements throughout a video, picking up subtle spatial and temporal inconsistencies in deepfake content. It can spot irregular lighting on people inserted into face-swapped videos, as well as discrepancies in the background details of fully AI-generated clips. The detector can even recognize manipulation in synthetic videos with no human faces at all, and it flags realistic scenes from video games such as Grand Theft Auto V, even though these are not AI-generated.
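
To make the idea concrete, here is a minimal sketch, in Python with PyTorch, of a detector that scores whole video clips rather than cropped faces: a spatial branch looks at entire frames (faces and background alike) and a temporal branch checks consistency from frame to frame. This is only an illustration of the general approach; the class name, layer sizes and design choices are hypothetical and are not the architecture Kundu's team actually used.

# Hypothetical sketch, not the published model: score whole frames plus
# frame-to-frame consistency instead of looking only at cropped faces.
import torch
import torch.nn as nn

class WholeFrameDeepfakeDetector(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        # Spatial branch: the entire frame contributes, so lighting or
        # texture artifacts outside the face can still be picked up.
        self.spatial = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Temporal branch: a GRU over per-frame features flags
        # inconsistencies between consecutive frames.
        self.temporal = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.classifier = nn.Linear(feat_dim, 1)  # real-vs-fake logit

    def forward(self, clip):               # clip: (batch, frames, 3, H, W)
        b, t = clip.shape[:2]
        frames = clip.flatten(0, 1)        # treat every frame as an image
        feats = self.spatial(frames).view(b, t, -1)
        _, last = self.temporal(feats)     # summary of the whole clip
        return self.classifier(last[-1])   # one logit per video

# Dummy usage: two 8-frame, 224x224 clips.
model = WholeFrameDeepfakeDetector()
prob_fake = torch.sigmoid(model(torch.randn(2, 8, 3, 224, 224)))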

“Most traditional methods focus on AI-generated facial videos, such as face swaps and lip-synced content,” says Siwei Lyu at the University at Buffalo in New York. “This new method is broader in its applications.”

The universal detector achieved an accuracy of 95% to 99% in recognizing four sets of test videos featuring manipulated faces, outperforming all previously published methods for detecting this type of deepfake. In evaluations of fully synthetic videos, it also produced more accurate results than any other detector assessed to date. The researchers presented their findings at the 2025 IEEE Conference on Computer Vision and Pattern Recognition in Nashville, Tennessee, on June 15.

Several researchers from Google also contributed to the development of the new detector. Google has not responded to inquiries about whether the method could help identify deepfakes on platforms such as YouTube, though the company is among those advocating for watermarking tools that label AI-generated content.

The universal detector still has room for improvement. For instance, it would be useful to extend it to detect deepfakes used during live video conference calls, a tactic some scammers are now employing.

“How can you tell if the individual on the other end is genuine or a deepfake-generated video, even with network factors like bandwidth affecting the transmission?” asks Amit Roy-Chowdhury from the University of California Riverside. “This is a different area we’re exploring in our lab.”


Source: www.newscientist.com

Scientists Achieve Breakthrough in Qubit Manipulation Accuracy

A group of physicists from Oxford University has achieved the lowest error rate ever recorded for a quantum logic operation: just 0.000015%, or one error in 6.7 million operations.



Ion trap chip rendering. Image credit: Jocchen Wolf and Tom Harty of Oxford University.

“As far as we know, this is the most accurate qubit manipulation ever reported globally,” stated Professor David Lucas from Oxford University.

“This represents a crucial milestone in constructing a practical quantum computer capable of solving real-world problems.”

To conduct meaningful calculations, quantum computers must perform millions of operations across many qubits.

Consequently, if the error rate is excessively high, the end result of the computation becomes useless.

Error correction techniques can address mistakes, but they require additional qubits, which come at a cost.

By minimizing errors, the new method reduces the number of qubits needed for error correction, shrinking both the cost and the size of the quantum computer itself.
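
To get a feel for why those error rates matter, here is a rough back-of-the-envelope calculation in Python. It assumes errors hit each gate independently, which is a simplification, and it uses the two figures quoted in this article: about one error in 6.7 million single-qubit operations, and roughly 1 in 2,000 for the best two-qubit gates mentioned further down.

# Probability that a circuit of n gates runs without a single error,
# assuming independent errors at rate p per gate (a simplification).
def success_probability(p, n_gates):
    return (1 - p) ** n_gates

single_qubit_rate = 1 / 6.7e6   # ~1.5e-7, the rate reported here (0.000015%)
two_qubit_rate = 1 / 2000       # best two-qubit gates cited in the article

for n in (1_000, 1_000_000):
    p_single = success_probability(single_qubit_rate, n)
    p_two = success_probability(two_qubit_rate, n)
    print(f"{n:>9} gates: {p_single:.2%} error-free at 1.5e-7, "
          f"{p_two:.2%} at 5e-4")
# A million gates at the new single-qubit rate still finish error-free
# about 86% of the time; at 1-in-2,000 the chance is essentially zero,
# which is why so much error-correction overhead is otherwise needed.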

“By significantly decreasing the chances of errors, this advancement will greatly lessen the infrastructure necessary for error correction, paving the way for future quantum computers to be smaller, faster, and more efficient,” said Molly Smith, a graduate student at Oxford University.

“This precise control of qubits is also beneficial for other quantum technologies, including atomic clocks and quantum sensors.”

This groundbreaking accuracy was attained using trapped calcium ions as qubits.

These ions are ideal candidates for storing quantum information due to their longevity and resilience.

The researchers adopted an alternative method, using electronic (microwave) signals to control the quantum states of the calcium ions instead of the usual lasers.

This technique is more stable than laser control and offers several advantages for constructing practical quantum computers.

For instance, electronic control is less expensive and more robust than lasers, facilitating easier integration into ion trap chips.

Moreover, the experiment was conducted at room temperature and without magnetic shielding, simplifying the technical necessities of operating quantum computers.

“This record-setting achievement marks an important milestone, but it is part of a larger challenge,” the researchers noted.

“In quantum computing, both single and two-qubit gates must function together.”

“Currently, two-qubit gates still suffer a much higher error rate, approximately 1 in 2,000 even in the best demonstrations to date.”

Their paper has been published online in the journal Physical Review Letters.

____

M.C. Smith et al. 2025. Single-qubit gates with errors at the 10⁻⁷ level. Phys. Rev. Lett., in press; doi: 10.1103/42w2-6ccy

Source: www.sci.news

Advanced AI Suffers “Complete Accuracy Collapse” When Confronted with Complex Problems, Study Finds

Researchers at Apple have identified “fundamental limitations” in state-of-the-art artificial intelligence models, raising doubts about the tech industry’s race to develop ever more powerful systems.

In a paper, Apple said that an advanced form of AI known as large reasoning models (LRMs) experienced a “complete collapse in accuracy” when faced with highly complex problems.

Standard AI models outperformed LRMs on tasks of lower complexity, yet both types suffered a “complete collapse” on highly complex tasks. LRMs attempt to handle intricate queries by generating detailed thinking processes that break the problem down into smaller, manageable steps.


The research, which tested the models’ puzzle-solving abilities, found that LRMs began to “reduce their reasoning effort” as they approached the point of collapse, something the researchers described as “particularly concerning”.

Gary Marcus, a prominent academic voice on the capabilities of AI models, characterized the Apple paper as “quite devastating” and said the findings raise important questions about the race towards artificial general intelligence (AGI), the theoretical stage at which AI systems could match humans at cognitive tasks.

Referring to large language models (LLMs), Marcus wrote that anybody who thinks LLMs are a direct route to the sort of AGI that could fundamentally transform society is “kidding themselves”.

The paper also found that, early in the “thinking” process, the reasoning models wasted computational resources by continuing to search after already finding correct solutions to simpler problems. As problems became moderately more complex, however, the models first explored incorrect answers before ultimately arriving at correct ones.

When confronted with highly complex problems, however, the models suffered “collapse” and failed to generate any correct solutions. In one case, they could not succeed even when provided with an algorithm to assist them.

The findings showed that, as problem difficulty rises, models counterintuitively begin to reduce their reasoning effort as they approach a critical threshold that closely corresponds to the point at which their accuracy collapses.

According to the Apple researchers, these findings point to a “fundamental scaling limitation” in the reasoning capabilities of current reasoning models.

The study tested the LRMs on puzzles such as the Tower of Hanoi and the River Crossing problem. The researchers acknowledged that the focus on puzzles is a limitation of the work.
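
The Tower of Hanoi, one of those puzzles, has a simple textbook recursive solution. The Python sketch below is that standard algorithm; the article does not reproduce the exact algorithm the researchers supplied to the models, so treat this only as a reference implementation of the puzzle itself.

# Classic recursive Tower of Hanoi: move n disks from `source` to `target`.
def hanoi(n, source, target, spare, moves=None):
    if moves is None:
        moves = []
    if n == 1:
        moves.append((source, target))
    else:
        hanoi(n - 1, source, spare, target, moves)  # park n-1 disks on the spare peg
        moves.append((source, target))              # move the largest disk
        hanoi(n - 1, spare, target, source, moves)  # stack the n-1 disks back on top
    return moves

moves = hanoi(10, "A", "C", "B")
print(len(moves))  # 1023: the optimal solution always takes 2**n - 1 moves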


The study concluded that current approaches to AI may have hit fundamental limitations. The models tested included OpenAI’s o3, Google’s Gemini Thinking, Anthropic’s Claude 3.7 Sonnet-Thinking, and DeepSeek-R1. Google and DeepSeek have been approached for comment, while OpenAI, the company behind ChatGPT, declined to comment.

Discussing the models’ capacity for “generalizable reasoning”, that is, applying a narrow conclusion more broadly, the paper suggested that current approaches may be running into fundamental barriers.

Andrew Rogoyski of the Institute for People-Centred AI at the University of Surrey said Apple’s findings illustrate that the industry is still grappling with AGI, and suggested that current methods may have reached a dead end.

He added: “The revelation that large reasoning models underperform on complex tasks, while faring well on simpler or medium-complexity problems, indicates that we may be approaching a profound impasse with current approaches.”

Source: www.theguardian.com

Fewer weather balloon launches may decrease forecast accuracy

For decades, the National Weather Service has released weather balloons like clockwork from more than 100 sites across the country, as well as in the Pacific and Caribbean.

Twice a day, at 8am and 8pm ET, meteorologists launch balloons equipped with instruments called radiosondes. The balloons ascend through the atmosphere for about two hours, collecting data on temperature, humidity, and wind speed as they rise, and transmitting it back to the ground by radio.

When the balloons reach a certain altitude, they pop and descend back to Earth with parachutes, completing their mission. The data gathered from these balloons is crucial for feeding into weather models that form the basis of forecasts in the United States.

However, many of the launch sites have been affected by staffing cuts under the Trump administration, leading to reduced or suspended launches. Meteorologists and other experts are concerned that these changes will compromise forecast quality and increase risks during severe weather events.

The cuts in balloon launches are part of a broader downsizing effort across federal agencies. The National Oceanic and Atmospheric Administration, which includes the National Weather Service, has seen significant staff reductions and budget cuts.

Recent announcements about balloon launch suspensions in various locations across the country have raised concerns among meteorologists. These cuts could have implications for weather forecasting accuracy, particularly in regions prone to severe weather events.

Weather balloons play a critical role in providing high-resolution data on atmospheric conditions, which is essential for accurate weather modeling. Without this data, forecasters may struggle to predict events like storms and precipitation types.

Private companies are attempting to fill the gaps left by the National Weather Service cuts, but it is unlikely they will fully replace the services provided by NOAA. These companies are looking to expand coverage and enhance existing data collection efforts.

The impact of these cuts on weather forecasting remains to be seen, but there is concern among experts that forecast accuracy could suffer without the crucial data collected by weather balloons.

Source: www.nbcnews.com

Robotic exoskeleton helps professional pianists improve speed and accuracy

Robotic exoskeleton can train people to move their fingers faster

Shinichi Furuya

A robotic hand exoskeleton helps professional pianists learn to play faster by moving their fingers for them.

Robotic exoskeletons have long been used to rehabilitate people who have lost the use of their hands due to injury or medical conditions, but their use to improve performance in able-bodied people has been less studied.

Now, Shinichi Furuya and his colleagues at Sony Computer Science Laboratories in Tokyo have found that a robotic exoskeleton can improve the finger speed of trained pianists after a single 30-minute training session.

“I’m a pianist, but I [injured] my hands from practicing too much,” Furuya says. “I was struggling with the dilemma between over-practicing and preventing injury, so I decided I had to find a way to improve my skills without practicing.”

Furuya recalled how his teacher would often teach him a particular piece by guiding his hand. “I could understand it intuitively, tactilely, without using words,” he says. This led him to wonder whether the effect could be replicated with a robot.

This robotic exoskeleton can raise and lower each finger independently up to four times per second using separate motors attached to the base of each finger.

To test the device, the researchers recruited 118 experienced pianists who had played for at least 10,000 hours since before they were eight years old and asked them to practice one piece for two weeks until they stopped improving.

The pianists then underwent a 30-minute training session with the exoskeleton, during which it moved their right-hand fingers slowly or quickly in various combinations of simple and complex patterns. This allowed Furuya and his colleagues to pinpoint what type of movement was driving the improvement.

Pianists who experienced high-speed, complex training were able to better coordinate their right-hand movements and move the fingers of either hand faster, both immediately after training and one day later. This, along with evidence from brain scans, suggests that the training changed the pianists’ sensory cortex, allowing them to better control overall finger movements, Furuya says.

“This is the first time I’ve seen someone use [robotic exoskeletons] for learning beyond normal dexterity, beyond what is naturally possible,” says Nathan Lepora at the University of Bristol, UK. “Why it worked is a little counterintuitive, because you would think actively performing the movements yourself would be the way to learn. But passive movements seem to work better.”


Source: www.newscientist.com

AI from DeepMind outperforms current weather predictions in accuracy

Weather forecasting today relies on simulations that require large amounts of computing power.

Petrovich9/Getty Images/iStockphoto

Google DeepMind claims its latest weather forecasting AI can predict faster and more accurately than existing physics-based simulations.

GenCast is the latest in DeepMind's ongoing research effort to improve weather forecasting with artificial intelligence. The model was trained on 40 years of historical data from the European Centre for Medium-Range Weather Forecasts' (ECMWF) ERA5 archive, which includes regular measurements of temperature, wind speed, and barometric pressure at various altitudes around the world.

Data up to 2018 was used to train the model, and then 2019 data was used to test predictions against known weather conditions. The company found that it outperformed ECMWF's industry standard ENS forecasts 97.4% of the time, and 99.8% of the time when forecasting more than 36 hours ahead.

Last year, DeepMind collaborated with ECMWF to create an AI that outperformed the "gold standard" high-resolution HRES 10-day forecast more than 90% of the time. Before that, the company developed a "nowcasting" model that used five minutes of radar data to predict the probability of rain over a given one-square-kilometer area from five to 90 minutes in advance. Google is also working on ways to use AI to replace small parts of deterministic models, speeding up calculations while maintaining accuracy.

Existing weather forecasts are based on physical simulations run on powerful supercomputers to deterministically model and estimate weather patterns as accurately as possible. Forecasters typically run dozens of simulations with slightly different inputs in groups called ensembles to better capture the variety of possible outcomes. These increasingly complex and large numbers of simulations are computationally intensive and require ever more powerful and energy-consuming machines to operate.

AI has the potential to provide lower-cost solutions. For example, GenCast uses an ensemble of 50 possible futures to create predictions. Using custom-built, AI-focused Google Cloud TPU v5 chips, each prediction takes just 8 minutes.
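
As a rough illustration of how an ensemble of possible futures becomes a probabilistic forecast, here is a minimal Python sketch that computes an ensemble mean, the spread between members and the probability of exceeding a threshold. The 50 "members" are random stand-in numbers, not actual GenCast or ENS output.

# Turn an ensemble of forecasts into probabilities (synthetic data only).
import numpy as np

rng = np.random.default_rng(0)
n_members, n_cells = 50, 1000                # 50 possible futures, 1000 grid cells
temps = rng.normal(30.0, 3.0, size=(n_members, n_cells))  # degrees C, made up

ensemble_mean = temps.mean(axis=0)           # best single guess per cell
ensemble_spread = temps.std(axis=0)          # how much the futures disagree
p_extreme = (temps > 38.0).mean(axis=0)      # fraction of members above 38 C

print(f"cell 0: mean {ensemble_mean[0]:.1f} C, "
      f"spread {ensemble_spread[0]:.1f} C, P(>38 C) = {p_extreme[0]:.0%}")
# Even if only one or two members show an extreme, the probability is
# non-zero and can flag an unusual event early, as with the UK heatwave
# example discussed below.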

GenCast operates on grid cells roughly 28 kilometers wide near the equator. Since the data used in this study were collected, ECMWF's ENS has been upgraded to a resolution of just 9 kilometers.

But Ilan Price at DeepMind says AI doesn't necessarily have to follow suit, and could provide a way forward without collecting more detailed data or performing more intensive calculations. "For a traditional physics-based model, that's a necessary requirement to solve the physical equations more accurately, and therefore to get more accurate predictions," Price says. "[With] machine learning, it is not always necessary to go to higher resolution to get more accurate simulations and predictions from your model."

David Schultz, a researcher at the University of Manchester in the UK, says AI models offer an opportunity to make weather forecasting more efficient, but they are often over-hyped, and it is important to remember that they rely heavily on training data produced by traditional physics-based models.

"Will [GenCast] revolutionize numerical weather forecasting? No, because in order to train the model, you first have to run a numerical weather prediction model," says Schultz. "These AI tools wouldn't exist if ECMWF didn't exist in the first place, without the ERA5 reanalysis and all the investment that went into it. It's like saying, 'I can beat Garry Kasparov at chess, but only after studying every move he's ever played.'"

Sergey Frolov, a researcher at the National Oceanic and Atmospheric Administration (NOAA), believes further advances in AI forecasting will require higher-resolution training data. "What we're basically seeing is that all of these approaches are being thwarted [from advancing] by the fidelity of the training data," he says. "And the training data comes from operational centers like ECMWF and NOAA. To move this field forward, we need to generate more training data using higher-fidelity physics-based models."

But for now, GenCast offers a faster way to produce forecasts at lower computational cost. Kieran Hunt, a professor at the University of Reading in the UK, says ensembles can improve the accuracy of AI predictions, just as a collection of physics-based forecasts produces better results than a single prediction.

Hunt points to the UK's record temperature of 40°C (104°F) in 2022 as an example. A week or two beforehand, only one member of the ensemble was predicting it, and it was considered an anomaly. Then, as the heatwave approached, more of the forecasts converged on it, providing early warning that something unusual was about to happen.

"You can get away with it a little bit if you have one member showing something really extreme. That might happen, but it probably won't," Hunt says. "I don't think it's necessarily a step change; it's a combination of new AI approaches with tools we've been using in weather forecasting for a while to ensure the quality of AI weather forecasts. There is no doubt that this will yield better results than the first wave of AI weather forecasting."


Source: www.newscientist.com

New JWST images confirm accuracy of theories on young star formation

Serpens Nebula: A row of jets appears as red streaks in the upper left corner

NASA, ESA, CSA, STScI, Klaus Pontoppidan (NASA-JPL), Joel Green (STScI)

Astronomers have captured a stellar alignment: new images from the James Webb Space Telescope (JWST) show jets emanating from young stars all lining up in the same direction, finally confirming a phenomenon that has long been suspected but never before directly observed.

As a giant gas cloud collapses and begins to form stars, its rotation accelerates, similar to how an ice skater spins faster by pulling their arms in closer to their body. This rotation causes a disk of dust and gas to form around each young star at the cloud's center, feeding material onto the star itself.

Strong magnetic fields in the disk launch jets of material along the star's rotation axis, which can be used to infer the direction in which the young star spins. The JWST image of the Serpens Nebula, about 1,400 light-years away, shows jets from 12 newly forming stars, all pointing in roughly the same direction.

"Astronomers have long assumed that when clouds collapse and stars form, the stars will tend to spin in the same direction," said Klaus Pontoppidan at NASA's Jet Propulsion Laboratory in California in a statement. "But we've never seen it so directly before."

The new observations suggest that these stars all inherited their rotation from the same long filament of gas. Over time, that rotation may change as the stars interact with each other and with other objects, which would explain why the jets of another, probably older, group of stars in the same image are not aligned.


Source: www.newscientist.com

Can MRI scans improve the accuracy of prostate cancer screening?

MRI scans may improve prostate cancer screening accuracy

Skynesher/Getty Images

There is both good news and bad news when it comes to prostate cancer testing. First, the bad news. Blood tests that measure a compound called prostate-specific antigen (PSA) are too inaccurate. As a result, some men end up undergoing cancer treatments they didn’t actually need, causing incontinence and erectile dysfunction.

Now, the good news: combining a PSA test with an MRI scan of the prostate could make screening more accurate, especially if the double test is offered only to people at high risk of tumors. That is the recommendation of an expert group, the Lancet Commission on prostate cancer, in a new report.

We certainly need to rethink prostate screening, but will these new proposals succeed in reducing harm?

Prostate testing has long been controversial. PSA is released at high levels by cancerous prostate cells, but is also produced at low levels by healthy prostate cells.

The blood test was originally introduced as a way to track the success of cancer treatment. It began being used as a screening test in the 1990s, in part as a result of campaigning by men’s health groups for something comparable to breast cancer screening.

The problem is that PSA alone is not reliable as a screening tool. Levels may rise temporarily, such as after sex, during a urinary tract infection, or while riding a bicycle. Even if the increase continues, most prostate cancers grow so slowly that if left untreated, they will never be noticed or cause any problems.

These problems wouldn’t matter so much if it weren’t for the fact that the treatments used to remove the cancer (usually surgery or injecting radioactive material into the tumor) can cause permanent incontinence and erectile dysfunction. Biopsies to determine whether cancer is present can also cause these problems.

Randomized trials have shown that for every 1,000 men who undergo regular PSA testing, one fewer man will die from prostate cancer over a 10-year period, but three will be left incontinent and 25 impotent.

These disturbing figures have forced health services in most high-income countries, including the UK and Australia, into an uneasy compromise. Unlike breast and colorectal cancer screening, no invitations for prostate testing are sent out, but those who want the test can have it once the risks have been explained to them.

As a result, higher-income men are more likely to take the PSA test, and lower-income and black men are less likely to be tested, the new report says. This is unfortunate because men of African descent are about twice as likely to develop prostate cancer as men of European descent.

The report’s authors say health systems need to use more sophisticated forms of screening, including both PSA tests and MRI scans. This scan allows your doctor to assess the size of your prostate and identify suspicious areas that may be cancerous.

Something close to this dual approach is already in place in some countries, including the UK, where the next step for people found to have high PSA levels is an MRI scan. This means that people who are reassured by their scan results can avoid a more invasive biopsy. “This greatly reduces the problem of overdiagnosis,” says Nicholas James, a researcher at the Institute of Cancer Research in London and one of the authors of the report.

But James says it may be even better to combine the PSA test with an MRI scan before the results are fed back to avoid men being mistakenly told they may have cancer.

The commission says healthcare organizations should use this combined approach to launch formal screening campaigns targeting three groups known to be at high risk: Black men, people with a family history of prostate cancer, and men who have a mutation in one of the BRCA genes, which are also associated with breast cancer.

This would avoid the current situation where men at low risk are probably getting too many PSA tests, while men at high risk are getting too few or no PSA tests.

The proposals certainly have merit, but it remains to be seen whether they will succeed in reducing the harms that currently come with prostate testing.

The arrival of the PSA test may be like opening a Pandora’s box, James says, but the proposed new approach will likely alleviate at least some of the harm.


Source: www.newscientist.com

Covid Testing: The Important Facts about Accuracy

The United States is currently in the midst of a wave of Covid-19 driven by the JN.1 variant, which is pushing up hospitalizations and deaths across the country. But for most people, the new variant does not seem to cause more severe symptoms.

That's why many people are wondering whether they should keep swabbing their nasal passages for coronavirus tests at the first sign of nasal congestion or pain. How well do rapid at-home tests work against new variants?

Here's what you need to know:

Do I still need to take a Covid test?

Influenza and some cold viruses are also circulating along with the new coronavirus. So there is good reason to know which virus you have, especially if you are at high risk of becoming seriously ill.

“It's important to know whether you have COVID-19, influenza, or a completely non-viral infection such as strep throat, because they have different treatments,” said Dr. Abrar Karan, an infectious disease physician at Stanford University. “There are different treatments for each, and the sooner you receive treatment, the better the results.”

Even if you're a healthy 25-year-old, there’s still some value in getting tested. For example, if someone in your household has a weakened immune system or is battling cancer, it is important to know whether you have COVID-19 so you can isolate from them.

“Remember that all of these viral and bacterial infections are transmitted differently and make you sick differently,” Karan said.

Joseph Petrosino, a professor of molecular biology and microbiology at Baylor College of Medicine, said there may not be much need for young, healthy people to test at home, but acknowledged it can still be helpful to know whether you have the coronavirus in case the symptoms persist.

“Some people, even healthy people, runners and people who train, can get a prolonged COVID-19 infection,” he said. “We really don't know. It's difficult to predict based solely on comorbidity factors.”

Otherwise, for people at low risk, a positive coronavirus test does not change treatment much. Whether you have coronavirus, a cold or the flu, get plenty of rest, stay hydrated, and stay away from others.

How will new variants impact testing?

Experts say there is no data showing the JN.1 variant affects the results of rapid home tests.

“We have not seen anything to suggest that the new variant evades detection by the tests,” Karan said. “Certainly, similar things have happened with other diagnostics early in the pandemic, but right now the tests should be able to detect these mutations.”

Susan Butler-Wu, a clinical pathologist at the Keck School of Medicine of the University of Southern California, said she hasn't seen data on this particular variant, but if it is similar to other variants, it shouldn't be a problem: rapid tests look for parts of the virus that are less likely to mutate and evade detection.

“There's always the fear that a mutation will occur and the test won't work, but so far that hasn't really happened,” Butler-Wu said.

When is the best time to test?

Earlier in the pandemic, a person's viral load was typically highest when symptoms first appeared, before most people had developed any immunity from vaccines or prior infection.

Now, virus levels may actually peak several days into the illness, according to a study published last fall in the journal Clinical Infectious Diseases by researchers at Harvard Medical School. They found that in people with pre-existing immunity, virus levels peaked around the fourth day after symptoms appeared.

This means that if you are tested in the early stages of the disease, it may turn out to be negative.

“Their symptoms may be caused by an immune response,” Karan said. “That means there's inflammation going on, which is causing the symptoms, and that's preventing the virus from multiplying as quickly. That's why an initial test could be negative.”

The U.S. Centers for Disease Control and Prevention still recommends getting tested immediately if you think you have been infected with the coronavirus and have symptoms such as a stuffy nose, cough, or body aches.

The CDC says to wait five days if you are infected but have no symptoms.

Butler-Wu says there is a misconception that rapid tests are “one-and-done”.

“If you have symptoms and your first test was negative, you should test again,” she says.

Official guidance from the CDC is to get a rapid test if you have symptoms and then test again 48 hours later if you test negative.

I tested positive. Does that mean I'm contagious?

A rapid at-home test is a good way to find out if someone is contagious.

Simply put, rapid tests require higher levels of virus to be positive, and higher virus levels usually mean you're more contagious.

However, the test has some limitations.

Karan said rapid tests can be a good surrogate for contagiousness in the early stages of the disease, but are not as reliable toward the end of the illness.

Data show that late in an illness, some people still test positive on rapid tests even though the virus can no longer be cultured from their samples. That means those people are less likely to be contagious, Karan said.

A 2022 study by researchers at Harvard Medical School suggested that only about half of people who test positive after five days are actually infectious.

“Even after that period, testing positive on a rapid test is no guarantee that you are still contagious.”

Source: www.nbcnews.com