Study Shows Humans Struggle to Accurately Interpret Dog Emotions

We often believe we can accurately gauge our dogs’ emotions, yet recent studies indicate that many of us may be misunderstanding their feelings.

Researchers at Arizona State University (ASU) discovered that when individuals are in a good mood, they are more prone to perceive their dog as looking sad. Conversely, when experiencing mild depression, they are likely to view the same dog as happy.

This contrasts with how we interpret human emotions. In social interactions, we generally perceive others’ feelings as mirroring our own.

“I am continually fascinated by how people interpret emotions in dogs,” stated the study’s co-author, Clive Wynne. “We have only begun to uncover what is shaping up to be a significant mystery.”

The researchers believe these findings could greatly influence how we care for our pets.

“By enhancing our understanding of how we recognize emotions in animals, we can improve their care,” explained the first author, Dr. Holly Molinaro, who was a doctoral student at ASU focused on animal behavior at the time.

Dogs involved in the study, from left to right: Canyon, a 1-year-old Catahoula; Henry, a 3-year-old French Bulldog; and Oliver, a 14-year-old mongrel. The video background was black, ensuring only the dogs were visible. – Credit: Arizona State University

The research stemmed from two experiments with about 300 undergraduate students.

Participants first viewed images designed to evoke positive, negative, or neutral moods. They then watched a brief video of a dog and rated its emotional state.

Those who saw uplifting images rated the dog in the video as sadder, while participants who viewed more somber images rated it as happier.

The video included three dogs—Oliver, Canyon, and Henry—depicted in scenarios reflecting cheerful, anxious, or neutral moods. Factors like snacks, toys, and the promise of visiting “Grandma” elevated their spirits, while a vacuum cleaner and a photo of a cat were used to bring them down.

Scientists are still puzzled about why humans misinterpret dogs’ emotions. “Humans and dogs have coexisted closely for at least 14,000 years,” Wynne noted.

“Over this time, dogs have learned much about cohabitation with humans. However, our research indicates significant gaps in our understanding of how dogs truly feel.”


Source: www.sciencefocus.com

Smart Devices Can Accurately Measure Breastfed Babies’ Intake

A device for measuring the amount of breast milk consumed during breastfeeding

Lebedinskaia Natalia/Getty Images

Parents may soon be able to monitor how much breast milk their baby consumes through devices that provide real-time notifications to their smartphones.

“The anxiety surrounding breastfeeding often stems from the uncertainty about how much milk a baby is receiving,” explains Daniel Robinson from Northwestern University, Illinois. “This can heighten stress for nursing mothers, parents, and healthcare professionals.” Insufficient nutrition can lead to slower weight gain in infants and, in severe cases, dehydration.

Clinicians typically evaluate breastfeeding effectiveness by comparing weights before and after feeds and monitoring diaper changes. However, these methods are somewhat cumbersome and imprecise, according to Robinson.

To create a more precise measurement system, he and his team engineered a device featuring four electrodes, each 4 cm wide, that attach to the breast away from the nipple. Two electrodes transmit a very low electrical current across the breast, while the other pair receives it.

This device relays recordings to a smartphone app, leveraging the weaker electrical signals produced as milk is released, enabling real-time calculations of milk volume, Robinson shares.

Researchers tested the system with breastfeeding mothers who expressed milk into a bottle for approximately 15 minutes. Each participant expressed about 50 ml, and the device’s estimate was, on average, within 2 ml of the actual amount.
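The article does not describe the device’s algorithm, so as a minimal sketch, one can imagine mapping the cumulative change in trans-breast impedance to milk volume via a linear calibration. Both the linear relationship and every number below are invented for illustration.

```python
import numpy as np

def fit_calibration(impedance_drop_ohms, measured_volume_ml):
    """Fit a linear map from cumulative impedance change to milk volume.

    Hypothetical assumption: expressed volume scales roughly linearly
    with the drop in trans-breast impedance.
    """
    slope, intercept = np.polyfit(impedance_drop_ohms, measured_volume_ml, 1)
    return slope, intercept

def estimate_volume(impedance_drop_ohms, slope, intercept):
    """Convert an impedance drop into an estimated volume in ml."""
    return slope * np.asarray(impedance_drop_ohms) + intercept

# Invented calibration data: impedance drop (ohms) vs bottle-measured volume (ml)
drops = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
volumes = np.array([0.0, 12.5, 25.0, 37.5, 50.0])
m, b = fit_calibration(drops, volumes)
print(round(float(estimate_volume(2.4, m, b)), 1))
```

In practice the mapping would be calibrated per user against weighed feeds, which is roughly what the bottle-expression trials described above provide.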

This innovation could allow parents to monitor their baby’s nutrition more effectively, potentially leading to timely adjustments such as supplementing with formula under medical guidance, Robinson notes.

The device consists of sticky electrodes that adhere to the breast

Northwestern University

In another trial, a woman used the device while nursing, and the app reported that her baby consumed 24 ml of milk. This closely matched the 20 ml estimation derived from traditional weight measurements taken before and after feeding, Robinson notes.

“A prevalent reason many mothers discontinue breastfeeding is the belief that their milk supply is inadequate, making this technology crucial for determining its accuracy,” states Mary Fewtrell from University College London.

However, to ensure the credibility of this device, further research is necessary to understand any potential impacts on milk production, long-term side effects, and whether parents find it desirable, observes Amy Brown from Swansea University, UK.


Source: www.newscientist.com

LatamGPT’s goal is to develop AI that accurately reflects the diverse culture of Latin America

Latin America has given the world popular literary and musical genres and staple foods like the potato. Now, researchers hope the region can also become a cradle for AI.

A coalition of research institutes is collaborating on a project called LatamGPT, which aims to create a tool that accounts for regional language differences, cultural experiences, and local specificity. The tool is intended to represent users in Latin America and the Caribbean more accurately than existing large language models (LLMs), which are trained primarily in English by US or Chinese companies.

The project lead, Rodrigo Duran Rojas, expressed the importance of developing local AI solutions to better serve Latin America. The goal is to offer a representative outlook tailored for the region, with initial tests showing promising results in areas like South American history.

Over 30 institutions from countries across the hemisphere are involved in developing LatamGPT, including collaborations with Latinos in the US such as Freddy Vilches Meneses, an associate professor of Hispanic Studies at Lewis & Clark College in Oregon.

LatamGPT’s launch is planned for around June, following significant regional commitments to improved AI governance. Projects such as monitoring deforestation in the Amazon rainforest and preserving historical documents from past dictatorships are contributing to the dataset used to train it.

With a training dataset of over 8 terabytes, LatamGPT aims to provide a nuanced, localized model for a range of applications. The project faces challenges in incorporating diverse dialects and complex grammatical structures, but its leaders emphasize collaboration as the path to continued development.

Diverse dialects and complex grammar pose challenges

Efforts like LatamGPT, ChatGPT, and Google’s Gemini are working to incorporate a wider range of data and improve localization for non-English languages, but challenges persist in training models on languages with complex grammar and many dialects.

Despite these challenges, LatamGPT aims to address them through collaboration with institutions, libraries, and archives across the region. The project continues to receive data and feedback to enhance its capabilities and explore applications in public policy and regulation.

The long-term goal of LatamGPT is to create an interconnected network for developing AI solutions with a Latin American character, emphasizing the impact of collaboration in shaping the future of technology in Latin America and beyond.

An earlier version of this story was first published by Noticias Telemundo.

Source: www.nbcnews.com

Advancements in Dementia Research: Science can now accurately assess the “biological age” of your brain

If you’re like Khloe Kardashian, who recently turned 40, you may have considered testing your biological age to see whether you are younger than your years. But while these tests can tell you a lot about how your body is aging, they overlook the aging of your brain. Researchers have now developed a method to determine how quickly your brain is aging, which could help in predicting and preventing dementia.

Unlike your chronological age, which is based on the number of years since you were born, your biological age is determined by how well your body functions and how your cells age. This new method uses MRI scans and artificial intelligence to estimate the biological age of your brain, providing valuable insights for brain health tracking in research labs and clinics.

Traditional methods of measuring biological age, such as DNA methylation, do not work well for the brain due to the blood-brain barrier, which prevents blood cells from crossing into the brain. The new non-invasive method developed at the University of Southern California combines MRI scans and AI to accurately assess brain aging.

Using AI to analyze MRI brain scans, researchers can now predict how quickly the brain is aging and identify areas of the brain that are aging faster. This new model, known as a 3D Convolutional Neural Network, has shown promising results in predicting cognitive decline and Alzheimer’s disease risk based on brain aging rates.
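The output of such a model is typically summarized as a “brain-age gap”: the model’s predicted age minus the person’s chronological age, with positive values suggesting faster-than-expected aging. The study’s exact metric is not given in this article, so this is an assumption; the numbers below are invented for illustration.

```python
import numpy as np

def brain_age_gap(predicted_age, chronological_age):
    """Brain-age gap: positive values suggest a brain 'older' than its years.

    `predicted_age` would come from a model such as the 3D CNN described
    above; here the values are invented.
    """
    return np.asarray(predicted_age, dtype=float) - np.asarray(chronological_age, dtype=float)

# Hypothetical cohort of three participants
predicted = [72.1, 58.3, 40.2]
chronological = [65.0, 60.0, 40.0]
gaps = brain_age_gap(predicted, chronological)
print(gaps.round(1).tolist())
```

A large positive gap (the first participant here) is the kind of signal that could flag elevated risk of cognitive decline for follow-up.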

Researchers believe that this innovative approach can revolutionize the field of brain health and provide valuable insights into the impact of genetics, environment, and lifestyle on brain aging. By accurately estimating the risk of Alzheimer’s disease, this method could potentially lead to the development of new prevention strategies and treatments.

Overall, this new method offers a powerful tool for tracking brain aging and predicting cognitive decline, bringing us closer to a future where personalized brain health assessments can help prevent and treat neurodegenerative diseases.

For more information, see Professor Andrei Irimia’s profile.


Source: www.sciencefocus.com

Electronic tongue accurately identifies chemical makeup of alcoholic beverages

Molecular tests can be used to assess the quality of drinks

Evgeny Parilov/Alamy

Beverage manufacturers and consumers may soon have access to small, portable kits not unlike coronavirus tests to check the quality and safety of alcoholic beverages.

The device is called an “artificial tongue” because it can detect additives, toxins, and sweetness in drinks with just a few drops.

Researchers led by Shuo Huang at China’s Nanjing University say this first-generation technology cannot yet test for date-rape drugs or detect methanol contamination in spiked drinks, though future versions may. Methanol-tainted drinks killed six backpackers in a recent incident in Laos.

Current methods for analyzing alcoholic beverages, such as liquid chromatography, require expensive, cumbersome laboratory equipment and specialized technicians to prepare and analyze the samples.

The artificial tongue relies on biological nanopore technology: pore-forming proteins, originally from organisms such as bacteria, that create holes just a few nanometers in diameter in a membrane. By applying a voltage across the membrane, small molecules of the substance being tested are drawn into the pores and pass through them.

When these molecules pass through the nanopore, they generate unique electrical signatures that can be analyzed to identify the chemicals present in the sample. Nanopores have already revolutionized DNA sequencing, allowing genetic material to be tested almost instantly using easily portable equipment.

Huang and colleagues used a nanopore called MspA, derived from the bacterium Mycobacterium smegmatis, which has already been adopted for DNA sequencing.

The device uses artificial intelligence to identify the molecules that pass through the nanopores, such as fragrance compounds and additives, Huang says. “The sensor tells you right away what type of alcoholic beverage it is. It can provide a quantitative standard for the product and also easily identify counterfeit alcoholic beverages.”

Nanopore detectors require only a power source to operate, he says. “This nanopore sensing assay can be performed at home, in the office, or on the street as easily as a COVID-19 test,” Huang said. “You just add a drop of alcoholic beverage to the sensor and wait for the results. The machine learning algorithm does the rest of the work.”
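The identification step Huang describes, matching an event’s electrical signature to a chemical, can be caricatured with a nearest-centroid classifier. The features (current blockade and dwell time), units, chemical names, and training values below are all invented; the real system applies machine learning to raw current traces.

```python
import numpy as np

# Each nanopore event summarized by two invented features:
# fractional current blockade and dwell time in milliseconds.
TRAINING = {
    "ethanol":  np.array([[0.20, 0.5], [0.22, 0.6], [0.19, 0.4]]),
    "glucose":  np.array([[0.55, 2.0], [0.53, 1.8], [0.57, 2.2]]),
    "vanillin": np.array([[0.80, 4.0], [0.78, 3.6], [0.82, 4.2]]),
}

# Mean feature vector per chemical
CENTROIDS = {name: pts.mean(axis=0) for name, pts in TRAINING.items()}

def classify_event(blockade, dwell_ms):
    """Assign an event to the chemical with the nearest feature centroid.

    A stand-in for the machine-learning step described above; the actual
    model and features are not published in this article.
    """
    event = np.array([blockade, dwell_ms])
    return min(CENTROIDS, key=lambda name: np.linalg.norm(CENTROIDS[name] - event))

print(classify_event(0.54, 1.9))
```

A real classifier would also use event shape and frequency, but the principle is the same: each analyte leaves a reproducible signature in the current trace.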


Source: www.newscientist.com

Study suggests that victims of Pompeii disaster may not be accurately identified

New DNA analysis has shed light on the victims of the Pompeii disaster, challenging previous assumptions.

Researchers from the United States and Italy conducted a recent study of remains believed to belong to family members, suggesting that the gender of some individuals may have been misidentified.

The study’s scientists argue that gender roles may have influenced the misconceptions about the victims of Pompeii.

“This study highlights the unreliability of narratives based on limited evidence, often reflecting the biases of researchers at the time,” explained Dr. David Caramelli, co-author of the study and researcher at the University of Florence.

When Mount Vesuvius erupted in 79 AD, over 2,000 people perished, and Pompeii was buried under 3 meters of volcanic material. The city was preserved until its rediscovery in 1599.

Using plaster casts created by archaeologist Giuseppe Fiorelli in the 19th century, researchers could analyze bone fragments mixed with plaster to extract DNA information about the victims’ gender, genetic relationships, and ancestry.

It is believed that, in the absence of DNA evidence, past researchers made assumptions based purely on the physical appearance of the casts.

For instance, a family discovered in the House of the Golden Bracelet in Pompeii was re-examined. Initial assumptions about their relationships were proven wrong through DNA evidence.

Notably, experts previously misidentified a pair as sisters or mother and daughter, while genetic testing revealed one of them to be male.

The study, which examined 14 victims and was published in the journal Current Biology, aims to improve the interpretation of archaeological data and ancient societies in Pompeii and beyond.


Source: www.sciencefocus.com

Blood test accurately detects ALS in 97% of cases

Biomarkers in blood may indicate certain medical conditions

Evgeny Sarov/Alamy

Researchers have linked eight genetic markers to amyotrophic lateral sclerosis (ALS), a finding that may one day enable the disease to be diagnosed with a blood test.

Patients with ALS, the most common motor neuron disease, suffer from problems walking, speaking, swallowing and breathing that worsen over time and ultimately lead to death. There is no cure, but treatments such as physical therapy can help reduce the impact of these symptoms.

Doctors typically diagnose ALS using an assessment of symptoms, tests that measure the electrical activity of the nerves, and brain scans. A lack of awareness about ALS means doctors have to track how a patient’s symptoms progress over time before making a diagnosis, which delays treatment, says Sandra Banack at Brain Chemistry Labs, a research institute in Wyoming.

To enable earlier diagnosis, Banack and her colleagues analyzed blood samples from small groups of ALS patients and non-patients, and found eight genetic markers that appear to be present at different levels in the two groups.

To test this, the team looked at blood samples from 119 people with ALS and 150 people without it from a biobank called the National ALS Biorepository, and found that the same eight markers differed between the two groups. The markers relate to neuronal survival, brain inflammation, memory and learning, Banack says.

The researchers then trained a machine learning model to distinguish between people with and without ALS based on the marker levels of 214 participants, and when they subsequently tested it on the remaining 55 participants, found that it correctly identified 96 percent of ALS cases and 97 percent of non-ALS cases.
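The train/test procedure described above (fit on 214 participants’ marker levels, evaluate on the held-out 55) can be sketched with synthetic data. The marker values below are randomly generated stand-ins, and the simple logistic-regression model is an assumption, not the team’s actual classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the eight blood markers: patients and controls
# drawn from shifted Gaussians (invented numbers, not the study's data).
n_markers = 8
X_als = rng.normal(1.0, 1.0, size=(119, n_markers))
X_ctl = rng.normal(0.0, 1.0, size=(150, n_markers))
X = np.vstack([X_als, X_ctl])
y = np.array([1] * 119 + [0] * 150)

# Shuffle, then mimic the paper's split: train on 214, test on the other 55.
order = rng.permutation(len(y))
X, y = X[order], y[order]
X_train, y_train = X[:214], y[:214]
X_test, y_test = X[214:], y[214:]

# Logistic regression fitted by plain gradient descent on cross-entropy loss.
w = np.zeros(n_markers)
b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X_train @ w + b)))
    w -= 0.5 * (X_train.T @ (p - y_train) / len(y_train))
    b -= 0.5 * float(np.mean(p - y_train))

pred = (1.0 / (1.0 + np.exp(-(X_test @ w + b))) > 0.5).astype(int)
accuracy = float(np.mean(pred == y_test))
print(f"test accuracy: {accuracy:.2f}")
```

With well-separated synthetic markers the held-out accuracy lands in the same high range the study reports, which is the point of the exercise: the marker levels carry most of the information, and the model is simple.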

“This is a wonderful thing. The test is excellent at distinguishing between people with ALS and those without,” says Ahmad Al Khleifat at King’s College London.

The researchers estimate that the test will cost less than $150 and hope to have it available within two years, though it first needs to be validated in different groups of people. If the team partners with the right diagnostic labs, Banack says, the test could be available within a year.


Source: www.newscientist.com

AI Technology can accurately recreate visual perceptions using mind-reading capabilities

Top row: original images. Second row: AI reconstructions based on macaque brain recordings. Bottom row: reconstructions by the AI system without the attention mechanism.

Thirza Dado et al.

Artificial intelligence systems can now create highly accurate reconstructions of what a person sees from recordings of brain activity, and the reconstructions improve significantly when the AI learns which parts of the brain to pay attention to.

“As far as I know, these are the most accurate and closest reconstructions,” says Umut Güçlü at Radboud University in the Netherlands.

Güçlü's team is one of several around the world using AI systems to work out what animals and humans see from brain recordings and scans. In a previous study, his team used a functional MRI (fMRI) scanner to record the brain activity of three people while they were shown a series of pictures.

In a separate study, the team used an implanted electrode array to directly record the brain activity of a single macaque monkey as it viewed AI-generated images. The implant had been placed by a different team for a different purpose, says Güçlü's colleague Thirza Dado. “We didn't put implants in the macaque to reconstruct its perception,” she says. “That wouldn't be a good reason to carry out surgery on an animal.”

The research team has now reanalyzed the data from these earlier studies using an improved AI system that can learn which parts of the brain to pay most attention to.

“Essentially, the AI is learning where to pay attention when interpreting brain signals,” Güçlü says, “which in some way reflects what the brain signals pick up on in the environment.”
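The idea of “learning where to pay attention” can be illustrated in miniature: fit a linear decoder to noisy “voxels,” only a few of which actually carry the stimulus signal, and normalize the resulting weight magnitudes into an attention profile. Everything here (voxel counts, noise levels, the least-squares decoder) is an invented simplification; the real system learns attention inside a deep generative pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setup: 20 'voxels', but only the first 5 carry the stimulus signal.
n_vox, n_sig, n_trials = 20, 5, 500
stimulus = rng.normal(size=n_trials)
voxels = 0.5 * rng.normal(size=(n_trials, n_vox))  # background noise
voxels[:, :n_sig] += stimulus[:, None]             # signal in informative voxels

# Least-squares decoder; its normalized weight magnitudes act as a
# crude 'attention' map over voxels.
weights, *_ = np.linalg.lstsq(voxels, stimulus, rcond=None)
attention = np.abs(weights) / np.abs(weights).sum()
print(f"attention on informative voxels: {attention[:n_sig].sum():.2f}")
```

Most of the weight ends up on the informative voxels, which is the principle behind the improvement: signals from uninformative parts of the recording are down-weighted rather than averaged in as noise.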

With directly recorded brain activity, some of the reconstructed images were very close to the images the macaque had seen, which were generated by the StyleGAN-XL image-generation AI. Reconstructing AI-generated images is easier than reconstructing real photographs, because aspects of the process used to generate them can be incorporated into the training of the reconstruction AI, Dado says.

The fMRI scans also showed a noticeable improvement when using the attention guidance system, but the reconstructed images were less accurate than those for the macaques. This is partly because real photographs were used, but Dado also says that it is much harder to reconstruct images from fMRI scans. “It's non-invasive, but it's very noisy.”

The team's ultimate goal is to develop better brain implants to restore vision by stimulating the higher-level parts of the visual system that represent objects, rather than simply presenting patterns of light.

“For example, we can directly stimulate the brain area that represents a dog,” Güçlü says, “and in that way create a richer visual experience that is closer to that of a sighted person.”


Source: www.newscientist.com

Scientists in neuroscience claim that certain dreams can accurately forecast events to come

Kamran Diba, an anesthesiology researcher at the University of Michigan, and his colleagues have found that during sleep, some neurons not only replay the recent past but also anticipate future experiences.

To dynamically track the spatial tuning of neurons offline, Maboudi et al. used a novel Bayesian learning approach based on spike-triggered averages of decoded positions in population recordings from freely moving rats.

“Certain neurons fire in response to certain stimuli,” Dr. Diba said.

“Neurons in the visual cortex fire when presented with an appropriate visual stimulus, and the neurons we study show location preference.”

In their study, Dr. Diba and his co-authors aimed to understand how these specialized neurons generate representations of the world after new experiences.

Specifically, the researchers tracked sharp-wave ripples, patterns of neural activity known to play a role in consolidating new memories and, more recently, shown to tag which parts of a new experience will be stored as a memory.

“In this paper, for the first time, we observe individual neurons stabilizing spatial representations during rest periods,” said Rice University neuroscientist Dr. Caleb Kemere.

“We imagined that some neurons might change their representation, mirroring the experience we've all had of waking up with a new understanding of a problem.”

“But to prove this, we needed to trace how individual neurons achieve spatial tuning – the process by which the brain learns to navigate new routes and environments.”

The researchers trained rats to run back and forth on a raised track with liquid rewards at each end, and observed how individual neurons in the animals' hippocampus “spiked” in the process.

By calculating the average spike rate over multiple round trips, the researchers were able to estimate a neuron's place field: the area of the environment that a particular neuron is most “interested” in.

“The key point here is that place fields are inferred using the animal's behavior,” Dr. Kemere said.
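The averaging step described above can be sketched as a spikes-over-occupancy histogram along the track. This is a deliberately simplified illustration with invented data; bin size, smoothing, and the paper's actual pipeline are assumptions.

```python
import numpy as np

def place_field(spike_positions, occupancy_positions, n_bins=20, track_len=1.0):
    """Estimate a place field as spikes-per-visit along a linear track.

    `spike_positions`: animal positions at each spike time.
    `occupancy_positions`: positions sampled uniformly in time.
    Firing rate per bin = spike count / occupancy count.
    """
    edges = np.linspace(0.0, track_len, n_bins + 1)
    spikes, _ = np.histogram(spike_positions, bins=edges)
    occ, _ = np.histogram(occupancy_positions, bins=edges)
    with np.errstate(divide="ignore", invalid="ignore"):
        rate = np.where(occ > 0, spikes / occ, 0.0)
    return rate

rng = np.random.default_rng(1)
occupancy = rng.uniform(0, 1, 5000)                 # animal covers the track evenly
spikes = np.clip(rng.normal(0.7, 0.05, 300), 0, 1)  # neuron prefers ~0.7 m
rate = place_field(spikes, occupancy)
print(f"preferred bin centre: {(np.argmax(rate) + 0.5) / 20:.3f} m")
```

Dividing by occupancy matters: a bin the animal visits often will collect many spikes even from an indifferent neuron, so raw spike counts alone would misplace the field.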

“I’ve been thinking for a long time about how we can assess neuronal preferences outside the maze, such as during sleep,” Dr. Diba added.

“We addressed this challenge by relating the activity of individual neurons to the activity of all the other neurons.”

The scientists also developed a statistical machine learning approach that uses other neurons they examined to infer where the animals were in their dreams.

The researchers then used the dreamed locations to estimate the spatial tuning process of each neuron in the dataset.
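Inferring position from population activity in this way is conventionally done with a memoryless Bayesian decoder under an independent-Poisson firing assumption. The authors' actual model differs (it is a novel Bayesian learning approach), so treat this as a generic sketch with invented tuning curves.

```python
import numpy as np

def decode_position(spike_counts, place_fields, dt=0.02):
    """Posterior P(position | spikes) assuming independent Poisson neurons.

    `spike_counts`: (n_neurons,) spikes observed in one time window.
    `place_fields`: (n_neurons, n_bins) expected firing rate per position bin.
    Returns the posterior over position bins (uniform prior).
    """
    rates = place_fields * dt + 1e-9              # expected counts per bin
    log_post = spike_counts @ np.log(rates) - rates.sum(axis=0)
    log_post -= log_post.max()                    # numerical stability
    post = np.exp(log_post)
    return post / post.sum()

# Toy population: three neurons tuned to bins 0, 2 and 4 of a 5-bin track.
fields = np.array([
    [20., 5., 1., 1., 1.],
    [1., 5., 20., 5., 1.],
    [1., 1., 1., 5., 20.],
])
posterior = decode_position(np.array([0., 1., 3.]), fields, dt=0.1)
print(int(np.argmax(posterior)))
```

Here the third neuron (tuned to the last bin) fired most, so the posterior peaks at that bin: the same logic lets the “dreamed” trajectory be read out from sleep activity.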

“The ability to track neuronal preferences in the absence of stimulation was a significant advance for us,” Dr. Diba said.

This method confirmed that the spatial representation formed during the experience of a novel environment remained stable in most neurons throughout several hours of sleep following the experience.

But as the authors predicted, there was more to the story.

“What I liked most about this study, and why I found it so exciting, was that it showed that stabilizing memories of experiences isn’t the only thing these neurons do during sleep. It turns out some of them are doing other things after all,” Dr. Kemere said.

“We can see these other changes that occur during sleep, and then when we put the animals back into the environment, we can see that these changes actually reflect something that the animals learned while they were asleep.”

“It’s as if the animal is exposed to that space a second time while they’re sleeping.”

This is important because it provides a direct look at the neuroplasticity that occurs during sleep.

“It appears that brain plasticity and rewiring can occur on very fast timescales,” Dr. Diba said.

The study was published in the journal Nature.

_____

K. Maboudi et al. 2024. Retuning of hippocampal representations during sleep. Nature 629, 630-638; doi: 10.1038/s41586-024-07397-x

Source: www.sci.news

AI can accurately determine a person’s gender from a brain scan 90% of the time

Comparisons are difficult because men’s brains tend to be larger than women’s.

Sergiy Tryapitsyn / Alamy

Are male and female brains really that different? A new way of investigating this long-debated question suggests that differences do exist, but that it takes artificial intelligence (AI) to tell them apart.

The question of whether we can measure differences between male and female brains has long been debated, and previous studies have yielded conflicting results.

One problem is that men’s brains tend to be slightly larger than women’s, likely because men are generally larger, and some previous studies that compared the sizes of various brain areas failed to adjust for whole-brain volume. So far, no clear findings have emerged. “When you correct for brain size, the results change quite a bit,” says Vinod Menon at Stanford University in California.

To tackle the problem in a different way, Menon’s team used a relatively new method called dynamic functional connectivity fMRI. This involves recording the brain activity of people lying in a functional MRI scanner and tracking how closely activity in different areas rises and falls in sync over time.

The researchers designed an AI to analyze these brain scans and trained it on those of about 1,000 young adults from an existing US database called the Human Connectome Project, telling the AI which individuals were male and which were female. The analysis divided the brain into 246 regions.

After this training process, the AI was able to distinguish a second set of brain scans from the same 1,000 men and women with approximately 90% accuracy.
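The feature-extraction step behind this kind of classifier can be sketched as follows: compute the region-by-region correlation matrix of the scan's time series and vectorize its upper triangle into a feature vector for a classifier. For simplicity this sketch uses static connectivity rather than the study's time-resolved “dynamic” variant, and the signal here is random noise, not real BOLD data.

```python
import numpy as np

def connectivity_features(timeseries):
    """Vectorize a functional-connectivity matrix for a classifier.

    `timeseries`: (n_regions, n_timepoints) signal per brain region.
    Returns the upper triangle of the region-by-region correlation
    matrix, the kind of feature an AI could be trained on to separate
    groups.
    """
    corr = np.corrcoef(timeseries)
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]

rng = np.random.default_rng(2)
n_regions = 246                      # the parcellation size used in the study
scan = rng.normal(size=(n_regions, 300))
features = connectivity_features(scan)
print(len(features))                 # n_regions * (n_regions - 1) / 2 pairs
```

With 246 regions this yields 30,135 pairwise connectivity values per scan, which is why a data-hungry AI, rather than a univariate test, is needed to find a reliable separating pattern.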

More importantly, the AI was equally effective at distinguishing male and female brain scans in two further, never-before-seen datasets, each of about 200 people of similar age, ranging from 20 to 35, from the United States and Germany.

“What we bring to the table is a more rigorous study with replication and generalization to other samples,” Menon says. None of the people in the training or testing data were transgender.

“Replication with a completely independent sample from the Human Connectome Project gives us even more confidence in the results,” says Camille Williams at the University of Texas at Austin.

The next question is whether the AI will be just as accurate when tested on additional, larger sets of brain scans. “Time will tell what results we get with other datasets,” says Menon.

If confirmed, the findings could help us understand why some medical conditions and forms of neurodiversity, such as depression, anxiety, and attention-deficit hyperactivity disorder, differ between the sexes, says Menon.

“If we don’t develop these sex-specific models, we will miss important factors that differentiate, for example, an autistic man from a control man, and an autistic woman from a control woman,” Menon said.


Source: www.newscientist.com

New Science of Lie Detection: How to Accurately Identify a Liar

We naturally detect lies all the time. It might be a drop in our partner's voice that alerts us to the fact that they are hiding their feelings, a child's eyes returning to the drawer containing the present they are not allowed to open, or the incredible story told by a colleague trying to explain why the company's petty cash went missing.

However, we often fail to see through lies. Why? Researchers have been trying to answer this question for more than a century, yet liars still slip through our fingers. The latest research may shed light on where we have been going wrong.

One notable recent study comes from Associate Professor Timothy Luke and colleagues at the University of Gothenburg. They surveyed 50 international experts in lie detection, analyzing research published over the past five years on how to tell when someone is lying.

But first they needed to determine exactly what a lie is. We might use the word “lie” for someone who tells us we look good in clothes that don't fit, a partner trying to hide an affair, or a murderer claiming innocence. But are these comparable? Surely some lies carry more weight than others? Luke likes to distinguish between “white” lies and what he calls deception.

“The structure of deception is more complex than many people think,” he says. “There may be a variety of psychological processes underlying it. We're not talking about the same thing. Even superficial things like the length and type of communication are important.”

Whether you're texting a lie or telling it to someone's face, Luke says, the core of deception is a deliberate attempt to mislead another person. But defining a lie is one thing; detecting it is another thing entirely. Is there really a surefire clue to someone else's deception?



Can you spot a liar just by looking at their eyes?

A common belief is that people who lie are reluctant to meet the gaze of others. Yet in the Gothenburg study, 82 percent of experts agreed that people who lie are no more likely to avoid eye contact or look away than people who tell the truth.

“The empirical research on deception detection is vast,” says Pär Anders Granhag, professor of psychology at the University of Gothenburg and one of the co-authors of the study. “But the one issue most experts agree on is that gaze aversion is not a diagnostic clue for deception.”

Similarly, 70% of experts agreed that liars appear no more nervous than truth tellers. This may be surprising, since nervousness and gaze aversion are two of the four main behaviors popularly believed to give a liar away.

Photo courtesy of Getty Images, Alamy. Image manipulation: Andy Potts.

Other traditional indicators hold that liars continually change their posture, frequently touch their bodies, and offer explanations that are less plausible, logical, or consistent than those of people telling the truth.

These beliefs also rest on shaky empirical evidence. The researchers found no clear-cut relationship between deception and fidgeting (body movements), response latency (how long subjects took to answer questions), or fluency (whether subjects' explanations were consistent, meaningful, and easily expressed). Some experts said liars do these things more, some said less, and others said there was no difference.


words are important

Professor Aldert Vrij, an expert on the psychology of deception at the University of Portsmouth who was not involved in the Gothenburg study, says the most widespread misconception about deception is “the idea that nonverbal lie detection works”.

He suggests caution towards any nonverbal lie-detection method, whether it involves polygraphs, video analysis, taking brain “fingerprints” with neuroimaging equipment, or analyzing changes in vocal pitch: all of these remain controversial areas in deception-detection research.

So is there any effective way to spot a liar? According to Luke, there is one promising lead: a lack of detail. About 72% of experts agreed that people who lie provide less detailed information than people who tell the truth.

Vrij agrees: instead of looking at how people behave, examine what they say. There are several linguistic indicators, he says, such as the number of details, or the “complexity”, that appear in a subject's statements.

Despite the problems with purported behavioral cues such as gaze aversion, many practitioners are reluctant to replace them with more useful cues based on what a suspect says, says Vrij. Old myths and methods are slow to die out.

“The most annoying thing is the assumption, driven by TV programmes, that leads the general public [and] experts to believe they can spot individual liars,” says Professor Amina Memon, a professor at the University of London, a leading expert on lie detection and interrogation, and one of the co-authors of the Gothenburg study.

Police officers who form a hunch about a suspect based on a stereotypical profile of a liar may resort to coercive tactics, which can lead innocent people to confess to crimes they did not commit. For this reason, Memon advocates interviewing with a neutral, fact-finding approach rather than trying to guess whether someone is lying.

Photo courtesy of Getty Images, Alamy. Image manipulation: Andy Potts.

But behind all this lies a bigger problem: perhaps the reason we haven't found universal clues to deception is that they simply don't exist.

Over the past century, researchers have almost exclusively taken what is known as the nomothetic approach: searching for the “laws” of deception, the cues that everyone shows. But perhaps the reason this one-size-fits-all approach doesn't work is simply that everyone lies differently.

Poker players apply this logic when looking for other players' “tells” – actions that indicate whether a person is bluffing. Tells vary from person to person: one player might scratch their nose when they have a weak hand, another might cough more, and another might appear calmer than usual.

Put those three players into a research setting and an approach that looks for universal laws will fail: their individual differences appear as mere noise.

Signs of lying

If we want to find the cues, Luke argues, researchers need to take an “idiographic” approach and focus on what makes each individual unique. This involves building a personal profile of how each person lies about similar things in similar situations.

“Testing the same people under different conditions (a so-called 'repeated measures' experimental design) is the best approach,” Memon says.

An example of this approach was published in a 2022 paper by Dr Sophie van der Zee and her co-authors, who developed the first deception model tailored to a specific individual.

It remains to be seen how researchers will overcome the logistical hurdles, but it seems clear that the science of lie detection is changing. It's time to move away from what Luke calls “crude averages”. “People are a little too fascinated by cool tricks for spotting someone's lies,” he says.

Importantly, researchers studying deception have repeatedly found, in controlled experiments, that most people are bad at detecting lies. Liars escape detection in part because they know, and exploit, the stereotypes.

Our confirmation bias can also make us overconfident. We remember a disproportionate number of the times we caught a liar, and fail to notice the times a lie got past us entirely.

Even when we do succeed, Luke is not convinced that the cues we think we relied on are really what unlocked the truth.

“Remember the last time you caught someone in a lie? How did you know?” he asks. “It probably wasn't because they were looking up and to the left. They probably had some kind of evidence – receipts, text messages, witnesses. That's how we actually tend to judge whether someone is telling the truth.”

Even in the absence of concrete external evidence, it may be possible to assess situational factors. “In the real world, we can often understand to some extent why people would want to lie,” Luke says.

When it's someone we know who is lying, we can make better guesses from subtle cues, such as their gaze, because we know them well. In these situations, Luke says, it's best to read the situation and try to understand their motives.

The key message is that behavioral cues to deception may exist, but they are likely to be highly personal. “It's better to trust your own detective work and check what people say against the evidence,” says Luke.

Relying on fixed cues won't work; in fact, it can make spotting a liar even harder. And what if no evidence can be found? Luke's advice is simple: “Proceed with caution.”

Source: www.sciencefocus.com

The ultrasound patch developed by MIT accurately detects bladder fullness

MIT researchers have developed a wearable ultrasound patch that can non-invasively image internal organs, primarily focusing on bladder health. The device eliminates the need for an ultrasound operator or gel and could transform the monitoring of various organ functions and disease detection.

The wearable device is specifically designed to monitor the health of the bladder and kidneys and could be instrumental for early diagnosis of cancers deep within the body.

Designed in the form of a patch, the ultrasound monitor can capture images of organs inside the body without requiring an ultrasound operator or gel application. The patch can accurately image the bladder and determine its fullness, allowing patients with bladder or kidney problems to efficiently monitor the functionality of these organs.

Additionally, the wearable patch has the potential for use in monitoring other organs in the body by adjusting the ultrasound array’s position and signal frequency. This capability could enable the early detection of deep-seated cancers like ovarian cancer.

The researchers behind this groundbreaking technology are based at the Massachusetts Institute of Technology (MIT), and the study has been published in Nature Electronics. Their aim is to develop a series of devices that improve information sharing between clinicians and patients and ultimately shape the future of medical device design.

In an initial study, the wearable ultrasound patch was able to obtain bladder images comparable to traditional ultrasound probes. To advance the clinical application of this technology, the research team is working on a portable device that can be used to view the images.

The MIT team also has aspirations to develop an ultrasound device capable of imaging other deep-seated organs in the body, such as the pancreas, liver, and ovaries. This will involve designing new piezoelectric materials and conducting further research and clinical trials.

Funding for this research was provided by various organizations, including the National Science Foundation, 3M Non-Tenured Faculty Award, Texas Instruments Corporation, and the MIT Media Lab Consortium, among others.

Source: scitechdaily.com