Doctors Create AI Stethoscope Capable of Identifying Major Heart Conditions in Just 15 Seconds

Doctors have successfully created an AI-powered stethoscope that can identify three major cardiac conditions in just 15 seconds.

The classic stethoscope, which was invented in 1816, has been crucial for listening to internal body sounds and has remained a vital tool in medical practice for over two hundred years.

The research team has now developed a sophisticated AI-enhanced version that can help diagnose heart failure, heart valve disease, and irregular heart rhythms.

Developed by researchers at Imperial College London and Imperial College Healthcare NHS Trust, this innovative stethoscope can detect minute variations in heartbeat and blood flow that are beyond the capacity of human ears, while simultaneously performing quick ECG readings.


The details of this advance, which could improve early diagnosis of these conditions, were shared with thousands of doctors at the European Society of Cardiology congress in Madrid, the world's largest cardiac conference.

Timely diagnosis is crucial for heart failure, heart valve disease, and irregular heart rhythms, enabling patients to access life-saving medications before their condition worsens.

A study involving around 12,000 patients from a UK GP practice tested individuals exhibiting symptoms like shortness of breath and fatigue.

Those evaluated using the new technology were twice as likely to receive a diagnosis of heart failure as similar patients who were assessed without it.

Patients were three times more likely to be diagnosed with atrial fibrillation—an irregular heart rhythm that heightens the stroke risk—and nearly twice as likely to be identified with heart valve disease, characterized by malfunctioning heart valves.


The AI-led stethoscope identifies subtle differences in heartbeat and blood flow that are imperceptible to the human ear while recording ECG. Photo: Eko Health

Dr. Patrik Bächtiger from Imperial College London remarked:

“It’s amazing to utilize a smart stethoscope for a quick 15-second assessment, allowing AI to promptly provide results indicating whether a patient has heart failure, atrial fibrillation, or heart valve disease.”

Manufactured by Eko Health in California, the device resembles a credit card in size. It is placed on a patient’s chest to record electrical signals from the heart while a microphone picks up the sound of blood circulation.

This data is transmitted to the cloud—an encrypted online storage space—where AI algorithms analyze the information to uncover subtle heart issues that may be overlooked by humans.

Results indicating whether a patient should be flagged for any of the three conditions will be sent back to a smartphone.
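For readers who think in code, the reported workflow (device records ECG and heart sounds, a cloud model scores the three conditions, flags return to a smartphone) can be sketched as below. Every name, score, and threshold here is invented for illustration; this is not Eko Health's actual API or model.

```python
# Illustrative sketch only: stand-in names and scores, not Eko Health's
# real device interface or AI model.

CONDITIONS = ("heart failure", "atrial fibrillation", "heart valve disease")

def cloud_analysis(ecg_samples, audio_samples):
    """Stand-in for the cloud AI: return a confidence score per condition.

    A real model would analyze subtle waveform and blood-flow features;
    this stub only checks that both signals are present so the overall
    device -> cloud -> smartphone flow can be demonstrated.
    """
    has_data = bool(ecg_samples) and bool(audio_samples)
    return {c: (0.9 if has_data and c == "heart failure" else 0.1)
            for c in CONDITIONS}

def examine(ecg_samples, audio_samples, threshold=0.5):
    """Return the list of conditions to flag back to the clinician's phone."""
    scores = cloud_analysis(ecg_samples, audio_samples)
    return [c for c, s in scores.items() if s >= threshold]
```

In this sketch, `examine` plays the role of the round trip: fifteen seconds of paired signals go in, and only conditions scoring above the threshold come back as flags.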

While breakthroughs like these can carry risks of misdiagnosis, researchers stress that AI stethoscopes should only be employed for patients presenting heart-related symptoms, not for routine screening in healthy individuals.

However, accelerating the diagnosis process can ultimately save lives and reduce healthcare costs.

Dr. Mihir Kelshiker, also from Imperial College, stated:

“This test demonstrates that AI-enabled stethoscopes can make a significant difference, providing GPs with a rapid and straightforward method to detect issues early, ensuring patients receive timely treatment.”

“Early diagnosis allows individuals to access the treatment they need to live longer,” emphasized Dr. Sonya Babu-Narayan, clinical director of the British Heart Foundation, which sponsored the research alongside the National Institute for Health and Care Research (NIHR).

Professor Mike Lewis, Director of the Innovation Science Department at NIHR, remarked, “This tool represents a transformative advance for patients, delivering innovation right into the hands of GPs. AI stethoscopes empower local practitioners to identify problems sooner, diagnose patients within their communities, and address leading health threats.”

Source: www.theguardian.com

Paleontologists Discover New Biomarkers for Identifying Megafauna Species in Australia’s Fossil Record

Paleontologists have discovered peptide markers for three extinct Australian megafauna. This breakthrough facilitates research on creatures such as hippo-sized wombats, colossal kangaroos, and marsupials with enormous claws, aiding our understanding of the series of enigmatic extinctions that took place 50,000 years ago and the potential role of humans in these events.



Palorchestes azael. Image credit: Nellie Pease/CABAH/CC BY-SA 4.0.

“The geographical distribution and extinction timeline of Australia’s megafauna, along with their interaction with early modern humans, are subjects of intense debate,” commented Professor Katerina Douka from the University of Vienna.

“The limited fossil finds at various paleontological sites across Australia complicate the testing of hypotheses regarding the extinction of these animals,” added Dr. Carli Peters of the University of Algarve.

“Using ZooMS (Zooarchaeology by Mass Spectrometry) can help increase the number of identified megafauna fossils, provided that collagen peptide markers for these species are available.”

By analyzing the peptides in collagen samples, researchers can differentiate between animal species, occasionally even between closely related ones.

Collagen proves to be more resilient than DNA, making this method effective in tropical conditions where DNA may not endure.
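To make the idea concrete, ZooMS identification boils down to comparing peptide masses measured from a bone against reference marker sets for known taxa. The sketch below is purely illustrative: the masses and marker sets are invented here, not the markers published in this study.

```python
# Hypothetical reference library: taxon -> collagen peptide marker masses (Da).
# These values are invented for illustration; real ZooMS markers come from
# published reference libraries.
REFERENCE = {
    "Protemnodon": [1105.6, 1208.7, 1550.8],
    "Macropus":    [1105.6, 1196.7, 1550.8],
    "Zygomaturus": [1079.5, 1241.6, 1566.8],
}

def match_taxon(sample_masses, tolerance=0.2):
    """Assign a sample to the reference taxon sharing the most markers,
    counting a marker as matched when a measured mass falls within the
    instrument's tolerance."""
    def hits(markers):
        return sum(
            any(abs(marker - mass) <= tolerance for mass in sample_masses)
            for marker in markers
        )
    best = max(REFERENCE, key=lambda taxon: hits(REFERENCE[taxon]))
    return best, hits(REFERENCE[best])

taxon, shared = match_taxon([1105.7, 1208.6, 1550.9])
```

Note how the first two hypothetical taxa share two of three markers: closely related genera can have near-identical collagen, which is why such markers often resolve bones to genus rather than species.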

However, most reference markers originate from Eurasian species that are not found elsewhere.

This study aims to develop new reference markers tailored for Australian contexts, enhancing the understanding gleaned from the fragmented fossil records of Australia.

“Proteins tend to endure better over extensive time periods and in harsh environments compared to DNA,” noted Dr. Peters.

“Thus, in studying megafauna extinction, proteins might still be preserved even in the absence of DNA.”

The research focused on three species crucial for comprehending megafauna extinction: Zygomaturus trilobus, Palorchestes azael, and Protemnodon mamkurra.

Zygomaturus trilobus and Palorchestes azael belong to a lineage of animals that vanished entirely during the late Quaternary period, while Protemnodon mamkurra survived long enough to likely coexist with humans arriving in Tasmania.

Scientists have previously dated fossilized bones from one of these species to more than 43,000 years ago.

“Zygomaturus trilobus was among the largest marsupials that ever lived, appearing much like a hippo-sized wombat,” said Professor Douka.

“Protemnodon mamkurra was a massive, sluggish kangaroo that might have occasionally walked on all fours.”

“Palorchestes azael was a uniquely shaped marsupial with a distinctive nose, a long tongue, and powerful forelimbs equipped with large claws.”

“When early modern humans crossed into the then-connected landmass comprising what we now know as Australia, New Guinea, and Tasmania some 55,000 years ago, they would have encountered these astonishing creatures.”

The researchers eliminated contaminants and compared peptide markers using reference markers.

The collagen in all three samples was well-preserved, enabling the identification of appropriate peptide markers for each species.

With these markers, paleontologists successfully differentiated Protemnodon from five living genera and one extinct genus of kangaroo.

They could also differentiate Zygomaturus and Palorchestes from other large extinct marsupials, although these two genera could not be distinguished from each other.

This is common in ZooMS, given that collagen changes accumulate slowly over millions of years of evolution.

Unless further studies enhance specificity, these markers are most effective at identifying bones at the genus level rather than the species level.

Nevertheless, ZooMS’s demonstrated ability to distinguish genera presents opportunities to identify bones from tropical regions, where DNA preservation is rare, even though closely related species there may carry similar or identical peptide markers.

“The introduction of newly developed collagen peptide markers allows us to identify a multitude of megafauna remains in Australia’s paleontological collections,” stated Dr. Peters.

“Yet, many more species still require characterization through collagen peptide markers.”

“For instance, Diprotodon, the largest marsupial genus ever known, and Thylacoleo, the largest marsupial predator.”

The team’s findings were published in the journal Frontiers in Mammal Science.

____

Carli Peters et al. 2025. Collagen peptide markers from three Australian megafauna species. Front. Mammal. Sci. 4; doi: 10.3389/fmamm.2025.1564287

Source: www.sci.news

Tips on Identifying and Avoiding Deception by AI-Generated Misinformation

Many AI-generated images look realistic until you inspect them closely.


Did you notice that the image above was created by artificial intelligence? It can be difficult to spot AI-generated images, videos, audio, and text as technological advances make them increasingly hard to distinguish from human-created content, and easier to exploit for disinformation. But knowing the current state of the AI technologies used to create disinformation, and the telltale signs that what you're seeing may be fake, can help you avoid being fooled.

World leaders are concerned. According to a World Economic Forum report, misinformation and disinformation “have the potential to fundamentally disrupt electoral processes in multiple economies over the next two years,” while easier access to AI tools “has already led to an explosion in counterfeit information and so-called 'synthetic' content, from sophisticated voice clones to fake websites.”

While the terms misinformation and disinformation both refer to false or inaccurate information, disinformation is information that is deliberately intended to deceive or mislead.

“The problem with AI-driven disinformation is the scale, speed and ease with which it can be deployed,” says Hany Farid, a researcher at the University of California, Berkeley. “These attacks no longer require nation-state actors or well-funded organizations — any individual with modest computing power can generate large amounts of fake content.”

A pioneer in the detection of generative AI content (see glossary below), Farid says “AI is polluting our entire information ecosystem, calling into question everything we read, see, and hear,” and his research shows that AI-generated images and sounds are often “almost indistinguishable from reality.”

However, Farid and his colleagues' research reveals that there are strategies people can follow to reduce the risk of falling for social media misinformation and AI-created disinformation.

How to spot fake AI images

Remember the photo of Pope Francis wearing a down jacket? Fake AI images like this are becoming more common as new tools based on diffusion models (see glossary below) let anyone create images from simple text prompts. A study by Google's Nicolas Dufour and his colleagues found that the share of AI-generated images in fact-checked misinformation claims has risen sharply since the beginning of 2023.

“Today, media literacy requires AI literacy,” says Negar Kamali at Northwestern University in Illinois. In a 2024 study, she and her colleagues identified five categories of errors in AI-generated images (outlined below) and offered guidance on how people can spot them on their own. The good news is that their research shows people are currently about 70% accurate at detecting fake AI images; you can take an online image test to evaluate your own detective skills.

5 common types of errors in AI-generated images:

  1. Socio-cultural implausibilities: Does the scene depict behavior that is unusual, unlikely, or surprising for a particular culture or historical figure?
  2. Anatomical irregularities: Look closely. Do the hands or other body parts look unusual in shape or size? Do the eyes or mouth look strange? Are any body parts fused together?
  3. Stylistic artifacts: Do the images look unnatural, too perfect, or too stylized? Does the background look odd or missing something? Is the lighting strange or inconsistent?
  4. Functional impossibilities: Are there objects that look odd, unreal, or non-functional? For example, a button or belt buckle in an odd place?
  5. Violations of physics: Do the shadows point in different directions? Does a mirror's reflection match the world depicted in the image?

Strange objects or behaviors can be clues that an image was created by AI.


How to spot deepfakes in videos

An AI technology called generative adversarial networks (see glossary below) has, since 2014, enabled tech-savvy individuals to create video deepfakes: digitally manipulating existing videos of people to swap in different faces, create new facial expressions, and insert new audio with matching lip syncing. A growing number of fraudsters, state-sponsored hackers, and ordinary internet users now produce video deepfakes, in which celebrities such as Taylor Swift and everyday people alike can unwillingly appear in deepfake porn, scams, and political misinformation and disinformation.

The techniques used to spot fake AI images (see above) can also be applied to suspicious videos. In addition, researchers from the Massachusetts Institute of Technology and Northwestern University in Illinois have compiled a few tips for spotting such deepfakes, while acknowledging that there is no foolproof method that always works.

6 tips to spot AI-generated videos:

  1. Mouth and lip movements: Are there moments when the video and audio are not perfectly in sync?
  2. Anatomical glitches: Does the face or body look strange or move unnaturally?
  3. Face: Look for inconsistencies in facial smoothness, wrinkles around the forehead and cheeks, and facial moles.
  4. Lighting: Is the lighting inconsistent? Do shadows behave the way you expect them to? Pay particular attention to the person's eyes, eyebrows, and glasses.
  5. Hair: Does the facial hair look or move oddly?
  6. Blinking: Blinking too much or too little can be a sign of a deepfake.

A newer category of video deepfake is based on diffusion models (see glossary below), the same AI technology behind many image generators, which can create entirely AI-generated video clips from text prompts. Companies have already tested and released commercial versions of AI video generators, potentially making such clips easy for anyone to create without special technical knowledge. So far, the resulting videos tend to feature distorted faces and odd body movements.

“AI-generated videos are likely easier for humans to detect than images because they contain more motion and are much more likely to have AI-generated artifacts and impossibilities,” Kamali says.

How to spot an AI bot

Social media accounts controlled by computer bots have become commonplace across many social media and messaging platforms. Many of these bots also leverage generative AI technologies such as large language models (see glossary below), which since their public launch in 2022 have made it easier and cheaper to mass-produce grammatically correct, persuasive, customized AI-written content through thousands of bots for a variety of situations.

“It's now much easier to customize these large language models for specific audiences with specific messages,” says Paul Brenner at the University of Notre Dame in Indiana.

Brenner and his colleagues found that volunteers could distinguish AI-powered bots from humans only about 42 percent of the time, even though participants were told they might be interacting with a bot. You can test your own bot-detection skills online.

Brenner said some strategies could help identify less sophisticated AI bots.

5 ways to tell if a social media account is an AI bot:

  1. Emojis and hashtags: Overusing these can be a sign.
  2. Unusual phrases, word choices, and analogies: Odd language can indicate an AI bot.
  3. Repetition and structure: Bots may repeat words in a similar or fixed format, or may overuse certain slang terms.
  4. Ask questions: Questions can reveal a bot's lack of knowledge on a topic, especially when it comes to local places and situations.
  5. Assume the worst: If a social media account is not a personal contact and its identity has not been clearly verified or confirmed, it may be an AI bot.
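Some of the weaker signals above, like emoji and hashtag overuse or heavy repetition, are simple enough to approximate in code. The toy scorer below is only a sketch of that idea; the function and weighting are invented here, not a tool from the researchers mentioned.

```python
import re
from collections import Counter

def bot_signals(text):
    """Crude per-word score of bot-like signals in a post: hashtag density
    plus repeated words. A high score is a hint, never proof."""
    words = re.findall(r"[#\w']+", text.lower())
    if not words:
        return 0.0
    hashtags = sum(w.startswith("#") for w in words)
    counts = Counter(w for w in words if not w.startswith("#"))
    repeats = sum(c - 1 for c in counts.values())  # extra copies of any word
    return (hashtags + repeats) / len(words)
```

A promotional post stuffed with hashtags and repeated phrases scores far higher than ordinary conversational text, mirroring signs 1 and 3 in the list above.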

How to detect audio duplication and audio deepfakes

Voice cloning AI tools (see glossary below) have made it easy to generate new speech that can imitate virtually anyone, which has led to a rise in audio deepfake scams replicating the voices of family members, business executives, and political leaders such as US President Joe Biden. These are much harder to identify than AI-generated videos and images.

“Voice clones are particularly difficult to distinguish between real and fake because there are no visual cues to help the brain make that decision,” says Rachel Tobac, co-founder of SocialProof Security, a white-hat hacking organization.

Detecting these AI voice deepfakes can be difficult, especially when they're used in video or phone calls, but there are some common sense steps you can take to help distinguish between real human voices and AI-generated ones.

4 steps to recognize whether audio has been cloned or faked:

  1. Public figures: If the audio clip is of an elected official or public figure, check whether what they say is consistent with what has already been publicly reported or shared about that person's views and actions.
  2. Look for inconsistencies: Compare the audio clip to previously authenticated video or audio of the same person. Are there inconsistencies in the tone or delivery of the voice?
  3. Awkward silences: If you're listening to a phone call or voicemail and the speaker takes unusually long pauses while speaking, this could be due to AI-powered voice cloning.
  4. Robotic or verbose speech: Robotic or unusually wordy speech may indicate a combination of voice cloning to mimic a person's voice and a large language model to generate the phrasing.

Out-of-character behaviour by public figures like Narendra Modi could be a sign of AI.


Technology will continue to improve

As it stands, there are no consistent rules that can reliably distinguish AI-generated content from authentic human content. AI models that generate text, images, videos, and audio will surely continue to improve, quickly producing content that looks authentic without obvious artifacts or mistakes. “Recognize that, to put it mildly, AI can manipulate and fabricate images, videos, and audio in under 30 seconds,” Tobac says. “This makes it easy for bad actors looking to mislead people to push out AI-generated disinformation, which can appear on social media within minutes of breaking news.”

While it's important to hone our ability to spot AI-generated disinformation and learn to ask more questions about what we read, see and hear, ultimately this alone won't be enough to stop the damage, and the responsibility for spotting it can't be placed solely on individuals. Farid is among a number of researchers who argue that government regulators should hold accountable the big tech companies that have developed many of the tools that are flooding the internet with fake, AI-generated content, as well as startups backed by prominent Silicon Valley investors. “Technology is not neutral,” Farid says. “The tech industry is selling itself as not having to take on the responsibilities that other industries take on, and I totally reject that.”

Glossary

Diffusion model: An AI model that learns by first adding random noise to data (such as blurring an image) and then reversing the process to recover the original data.

Generative adversarial networks: A machine learning technique in which two neural networks compete, one generating new data modeled on the original while the other tries to predict whether the data it sees is genuine or generated.

Generative AI: A broad class of AI models that can generate text, images, audio, and video after being trained on similar forms of content.

Large language models: A subset of generative AI models that can generate different forms of written content in response to text prompts and, in some cases, translate between different languages.

Voice cloning: The use of AI models to create a digital copy of a person's voice and to generate new speech samples in that voice.


Source: www.newscientist.com

Identifying and Overcoming Body Dysmorphic Disorder

If you’ve ever made it a goal to change your appearance, you’re not alone: around 43% of UK adults say they want to lose weight. For many, these goals can extend to more drastic methods of altering their appearance, such as cosmetic or plastic surgery.

But when does this common and widespread desire for self-improvement become something more sinister? For around 3 percent of the general population, it develops into a constant compulsion to modify or change one’s physical appearance, a condition known as body dysmorphic disorder (BDD).

People with BDD believe that they have significant physical flaws or defects that, to other people, may be minor or even non-existent.

Rebecca*, a 36-year-old woman, strongly believes she looks like she has a “moon face” because she can’t help but look in the mirror at the acne scars that cover her skin.

Or Tyson*, the 17-year-old who spends hours every day in the gym trying to build muscle mass because he feels he looks like a “toothpick.”

Tyson and Rebecca have been told time and time again by family, friends, and medical professionals that what they see is different from what others see, but they don’t believe it. Seeing is believing, right? But what if your eyes can deceive you?




A long history

BDD isn’t a new condition: It was first described by Italian psychiatrist Enrico Morselli in 1891, long before we became obsessed with our TikTok feeds.

He described people with “dysmorphophobia” (the original name for BDD) as people who are “suddenly overcome by the fear that their body may be deformed in the midst of everyday life, while talking, while reading, while sitting at a table – in fact anywhere and at any time throughout the day.”

BDD is thought to be caused by a complex interplay of biopsychosocial factors, including genetic factors, differences in brain structure and function, and a history of adverse childhood experiences such as bullying, abuse or neglect, which can lead individuals to feel ashamed about themselves and their bodies.

This tendency is often manifested or exacerbated by societal pressures around appearance — in fact, research supports the idea that attractive people often enjoy social advantages, such as being perceived as more trustworthy, healthy, confident, and intelligent.

These advantages increase one’s chances of finding love, getting a job, or even earning a higher salary.

This often weighs on the minds of people with BDD, making them feel they have less chance of succeeding in life. And while we cannot ignore that these benefits of beauty exist in our society, the pursuit of beauty at all costs can be harmful to both our physical and mental health.


Unrealistic perfection

Although BDD existed before the development of social media, social media has certainly played a role in increasing the prevalence and severity of BDD.

The emphasis on sharing the “perfect” selfie, the use of filters, the various ways to augment or enhance an image, and powerful algorithms that ensure you are served the content that engages you most are the perfect combination for an increased focus on appearance.

Heavy social media use and photo editing have been linked to an increased risk of developing BDD, comparing appearances, and interest in undergoing surgical and non-surgical cosmetic procedures.

This relationship arises in a variety of ways. First, our perception of attractiveness and beauty is often influenced by our “visual diet.” After an extended period of consuming curated content showcasing the best angles, lighting, makeup and outfit choices, and artificial enhancements through the addition of filters, our perception of beauty can start to become biased towards highly idealized and edited images.

As a result, viewers may feel pressured to fit into this newly formed ideal of beauty and may attempt to conform by applying filters to themselves or seeking cosmetic procedures to better meet this standard.

Unfortunately, the positive effect a filter has on your self-image only lasts while it’s applied; once the filter is removed or you see yourself in the mirror in the real world, you may find yourself feeling unattractive or unacceptable.

An estimated 3 percent of people suffer from body dysmorphic disorder. – Photo credit: Getty

There is also an increased pressure to build a “personal brand” online, which can extend beyond just posting the “perfect” photos to achieving the most beautiful feed, the right captions, hashtags and themes. This can lead to a lack of authenticity and cause a widening disconnect between your “online self” and your “real self.”

Some people say their use of social media has increased social anxiety in their daily lives. They fear they will unintentionally “catfish” others who have become accustomed to looking a certain way in the online world. Thus, a personal brand or polished online persona can lead to feelings of embarrassment and shame about their true appearance and personality.

Once BDD has developed, the disorder is often maintained by harmful patterns of thinking and behavior.

For example, people may engage in excessive behaviors to check, camouflage, or change their appearance: prolonged observation of themselves in the mirror, taking photographs from different angles, hiding behind loose clothing, hats, scarves, or glasses, or pursuing beauty treatments, new hairstyles, and cosmetic procedures.

Many of these are common everyday behaviors that people undertake for self-expression and self-improvement. However, when taken to an extreme, these behaviors can lead to an excessive focus on appearance.

One study compared the mirror-gazing patterns of people with and without BDD and found that even healthy people who stared at themselves in the mirror for more than 10 minutes experienced heightened awareness of their “flaws” and increased levels of distress.

People with BDD experienced this intense distress even after only looking at themselves briefly, for around 25 seconds. These findings support the idea that people with BDD have different patterns of visual processing when looking at faces, often focusing on small details and individual features rather than the overall picture.

Similarly, if someone without BDD looks at themselves long enough, they too will begin to see themselves in parts rather than as a whole, which is one reason why the “Zoom effect” and the proliferation of video calls during the COVID-19 pandemic have increased self-image distress for many people.


Extreme Measures

People with BDD seek out beauty and cosmetic treatments at a much higher rate than the general population to help them feel better about themselves.

Approximately 70% of people with BDD have previously undergone cosmetic procedures, and they account for up to 15% of all cosmetic surgery patients.

These high rates make sense: For Rebecca, who is concerned about the scars on her face, dermatological treatments like chemical peels and anti-wrinkle injections seem like the obvious solution.

Unfortunately, while most people who seek cosmetic surgery are satisfied with the results, studies have shown that this is not the case for people with BDD in up to 91 percent of cases. Because the underlying symptoms don’t change, they continue to focus on the treated areas and keep finding ways to hide, check, or cover up their “flaws.”

After cosmetic surgery, the concerns may simply shift. People who worried about having a “hooked” nose before surgery may develop new concerns after rhinoplasty (nose surgery): that their nose now appears larger, or that others will criticize them for having had the procedure.

In other cases, BDD symptoms may actually worsen after treatment, making someone who is already self-conscious and vulnerable even more so.

Nearly three in ten adult men have experienced insecurity about their body image.

It is an ethical and professional obligation for cosmetic surgeons to identify BDD in their patients before performing procedures, as patients with BDD may sue, complain, or demand compensation for procedures that do not meet their expectations.

It may be disheartening to learn that cosmetic surgery may not be the answer to your intense and painful obsession with appearance, but the good news is that effective, evidence-based treatments exist.

The National Institute for Health and Care Excellence (NICE) recommends that first-line treatment for BDD include cognitive behavioural therapy with exposure and response prevention (CBT-ERP), with psychiatric medication added for moderate to severe cases.

CBT for BDD involves identifying unhelpful beliefs and expectations you have about yourself and your appearance (such as “I have to always be well-dressed when I leave the house” or “No one will love me with a nose this size”) and learning new ways to move away from these thoughts or develop more flexible, helpful thought processes (such as “I want to find a partner who is attracted to my values, interests, and passions, not the look of my nose”).

Adding ERP involves gradually exposing the patient to situations, environments, or people that they would normally avoid, while at the same time trying not to engage in the compulsive behavior.

For example, Rebecca might work on gradually eliminating cosmetics from her daily routine so she can go out without wearing heavy makeup to hide her skin, while Tyson might work on reducing his training schedule or going to the beach with his friends without having to cover up with a t-shirt.

These exposure exercises are designed to help individuals learn that what they fear most (being judged or ridiculed for their appearance) may not happen. Through exposure rather than avoidance, they can begin to live more productive, fulfilling, and joyful lives.

Current estimates suggest that with CBT-ERP, up to 70% of people with BDD experience significant relief from their symptoms. When combined with drug therapy, this rises to 80 percent.

If you’re reading this and you feel like you’re worrying a little too much about the way you look, here are some things you can try…

How to Worry Less About Your Appearance

Mirror hygiene

Set a limit on the amount of time you spend looking at yourself. Unless you’re doing it for a specific purpose like putting on makeup or shaving, staring at yourself for more than 10 minutes can cause stress. Don’t avoid mirrors, but only look at them when necessary.

BDD is often associated with an excessive focus on appearance at the expense of other activities. Spending time with friends and family and doing the activities you love can boost your self-esteem and help you realize that your strengths go beyond just your appearance.

Social Media Detox

Look at your social media feeds and notice how much of the content you consume is highly edited imagery or content promoting fitness, beauty, or cosmetic procedures. Unfollow or hide any content that makes you feel self-conscious, or set limits on the time you spend on social media.

Stop looking for reassurance

Try not to talk to others about your appearance. Asking for feedback on your appearance can make you feel bad, whether the answer is positive or negative. Focus the conversation on more interesting topics.

Rather than chasing the perfect body, maybe it’s time to discover a broader sense of self-worth that can withstand the inevitable challenges of aging and growing up that we all experience, whether we like it or not.

*Names and descriptions do not reflect actual clients.


Source: www.sciencefocus.com

Safely Viewing the April Solar Eclipse: Tips on Using Eclipse Glasses and Identifying Key Features

Use special eclipse glasses to prevent eye damage

Gino Santa Maria/Shutterstock

Watching a total solar eclipse is an experience you'll never forget, but if you don't take the right precautions, it could be memorable for the wrong reasons. Looking directly at the sun can be dangerous, so read on to learn how to safely observe a solar eclipse and what you need to prepare in advance.

On April 8, 2024, a total solar eclipse will be visible to more than 42 million people across North America. The path of totality is only about 185 kilometers wide and crosses Mexico, 13 U.S. states, and parts of Canada. Most people in North America will experience the event as a partial solar eclipse rather than a total one.

“For those outside the path of totality, the moon will never completely cover the sun,” says Jeff Todd of Prevent Blindness, a Chicago-based eye care advocacy group. No matter where you watch from, eye protection is essential.

“To avoid damaging your eyes, you should wear eclipse glasses throughout the eclipse,” says Todd. Otherwise, you risk burning your retina. This phenomenon, also known as “eclipse blindness,” can occur painlessly and can be permanent. It may take several days after seeing a solar eclipse before you realize something is wrong. Sunglasses do not provide sufficient protection. However, it is perfectly safe to wear eclipse glasses over your prescription glasses.

How to safely view a solar eclipse

The prize for those traveling to the path of totality is seeing the sun's corona with the naked eye. However, it is visible only for a few short minutes during totality; otherwise, the partial phases must be observed through eclipse glasses. Todd says people in the path of totality should wear eclipse glasses at all times except during totality, the brief period of darkness when the sun is completely hidden by the moon. “Only then can you take off your eclipse glasses,” he says.

Indeed, those in the path of totality should view the sun with their naked eyes during totality itself. “You have to look without a protective filter, otherwise you won't see anything,” says Ralph Chow at the University of Waterloo, Canada.


Just before totality ends, light from the sun's photosphere streams between the mountains and valleys on the moon's limb. Known as Baily's beads, these appear for a few seconds before merging into a flashing “diamond ring” as enough of the sun's photosphere is exposed for direct sunlight to return. “It gives us ample warning that it's time to resume viewing the partial eclipse through protective filters,” Chow says.

Which solar eclipse glasses should I buy?

It is important to wear eclipse glasses that meet the ISO 12312-2 international standard, which applies to products used for direct viewing of the sun. “Look for the ISO standard label and buy your glasses from a trusted source,” says Todd. “Get your glasses early, in time for the eclipse.” Before you buy, make sure the company or brand appears on the American Astronomical Society's vetted list of suppliers and resellers.

Do not use eclipse glasses with binoculars or telescopes. If you want to use these instruments to observe a solar eclipse, you'll need to attach a solar filter over the objective lens (the lens opposite the one you look through). Never place solar filters or eclipse glasses between your eye and the eyepiece of a telescope or binoculars; the magnified sunlight can damage both the filter and your eyes.

Another way to safely view the eclipse is with a pinhole projector: a simple device that projects an image of the sun through a small hole onto paper or cardboard. An even easier method is to use a colander or a slotted spoon, which projects small crescent suns onto any nearby surface.
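The size of the projected image follows directly from the sun's angular diameter, roughly 0.53 degrees as seen from Earth: the farther the pinhole is from the screen, the larger (and fainter) the crescent. A minimal sketch of that geometry, in Python (the function name and example distance are illustrative, not from the article):

```python
import math

SUN_ANGULAR_DIAMETER_DEG = 0.53  # sun's apparent size from Earth, about half a degree

def projected_image_mm(hole_to_screen_mm: float) -> float:
    """Diameter of the sun's pinhole image for a given projection distance."""
    return hole_to_screen_mm * math.tan(math.radians(SUN_ANGULAR_DIAMETER_DEG))

# A pinhole held about 1 m from the paper gives an image roughly 9 mm across.
print(round(projected_image_mm(1000), 1))
```

This is why box-style pinhole projectors leave a meter or so between the hole and the screen: any closer and the image is too small to show the crescent shape clearly.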


Source: www.newscientist.com

Identifying Lost Bullets at a Crime Scene Through Ricochet Residue

Analytical chemistry could help forensic teams get more information from crime scenes

Orange County Register/Media News Group (via Getty Images)

Even if no bullets are found at the scene, the brand of bullet used in the crime can be determined by analyzing the small pieces of metal left behind.

Forensic experts may try to link a suspect to a crime by analyzing bullets or spent shell casings found at a crime scene and proving that they were fired by the suspect's gun. But doing so when the bullet is not present at the scene, such as when it has been removed…

Source: www.newscientist.com