Key Concepts for Improving Plastic Recycling

“To enhance both plastic recycling and reuse, brands should utilize similar packaging for products within the same category.”

Elaine Knox

Since its inception in 1899, the US National Biscuit Company has used packaging as a marketing strategy, wrapping Uneeda soda crackers in wax paper inside cardboard boxes. Over the decades, businesses have increasingly turned to plastics, making unique packaging a key component of brand identity.

However, the fragile economics of plastic recycling are deteriorating, compounded by the complexity introduced by varying pigments, materials, and more. Currently, only 10% of plastic packaging is recycled globally. Reusable packaging, meanwhile, remains a niche market.

There are effective and straightforward methods to enhance both the recycling and reuse of plastics, such as having brands adopt similar packaging for products in the same category.

Consider recycling first. Despite decades of consumer awareness campaigns and infrastructure investment, sorting the many plastic types into specific subcategories remains prohibitively costly. Removing pigments and sorting by color is expensive, so many plastic varieties end up downcycled into gray pipes and construction materials. The supply chain is inconsistent and fragmented, and virgin plastics remain cheaper, leaving most recycled plastics without reliable buyers.

Standardization could significantly improve this situation. If product categories adopt uniform guidelines for plastic types, colors, labels, and adhesives, recyclers could potentially recover much more material at a reduced cost. This would enhance economic viability for recycling and facilitate the vision of producing new bottles from old ones.

The case for standardized reuse systems is equally compelling. Presently, many brands experimenting with reuse employ different containers, necessitating individual return points coupled with specialized cleaning equipment and quality assurance checks, which adds costs and complexity while reducing convenience. Systems based on standardized packaging and shared infrastructure could capture 40% of the market through a more consolidated approach, as noted by the Ellen MacArthur Foundation.

While standardized packaging might seem anti-capitalistic to some, many brands already produce similar packaging, such as milk jugs in the UK and toothpaste tubes in various countries. Standardization does not imply that all products must look identical. Brands can still employ unique labels, washable inks, embossing, and other distinguishing features. They can also maintain their own shapes and sizes.

It’s undoubtedly challenging to envision competitors like Procter & Gamble and Unilever willingly agreeing to package shampoo in identical bottles. However, with billions of dollars’ worth of single-use plastic incinerated or landfilled every year, and with research increasingly highlighting the health risks of unstudied chemicals in plastics, brands may find the status quo hard to defend. It could be argued that the harm stemming from customized packaging outweighs whatever advantages it offers over standardized containers.

More brands might soon have little choice. Regulatory frameworks are evolving in Europe and other regions, focusing on reuse targets and increased recycled content. Standardized packaging offers brands a pathway to meet these objectives while minimizing complexity and cost increases.

Undoubtedly, like-colored shampoo bottles won’t solve all issues, but such changes are becoming increasingly sound from a business perspective. Without them, achieving truly circular packaging remains a distant goal.


Source: www.newscientist.com

Transformative Concepts: The Case for Embracing AI Doctors

Our physicians are exceptional, tireless, and often accurate. Yet they are human. Increasingly, they face exhaustion, working extended hours under tremendous stress, and frequently with insufficient resources. Improved conditions, like more personnel and better systems, can certainly help. However, even the best-funded clinics with the most committed professionals can fall short. Doctors, like all of us, operate with minds built for the Stone Age: despite extensive training, the human brain struggles to cope with the speed, pressure, and intricacy of contemporary healthcare.

Since patient care is the principal aim of medicine, what or who can best facilitate this? While AI can evoke skepticism, research increasingly illustrates how it can resolve some of the most enduring problems, including misdiagnosis, errors, and disparate access to care, and help rectify overlooked failures.

As patients, each of us will likely encounter at least one diagnostic error during our lifetime. In the UK, conservative estimates indicate that 5% of primary care visits result in a missed or incorrect diagnosis, putting millions at risk. In the US, diagnostic errors lead to death or lasting harm for some 800,000 individuals each year. The risk of misdiagnosis is amplified for the one in ten people globally with rare diseases.

Modern medicine prides itself on being evidence-based, yet doctors don’t always adhere to what the evidence suggests. Studies reveal that evidence-based treatments are dispensed only about half the time for adults in the US. Furthermore, your doctor might not concur with the diagnosis either. In one study, reviewers providing second opinions on over 12,000 radiology images disagreed with the original assessment in roughly one-third of cases, leading to changes in treatment in nearly 20% of them. As workloads increase, quality continues to decline, resulting in inappropriate antibiotic prescriptions and falling cancer screening rates.

While this may be surprising, there are comprehensible reasons for these errors; seen from another angle, it’s remarkable that doctors get it right as often as they do. Human factors such as distraction, multitasking, and even our circadian rhythms play a significant role. Burnout, depression, and cognitive aging spare no one, physicians included, and all of them raise the likelihood of clinical mistakes.

Additionally, medical knowledge advances more rapidly than any doctor can keep up with. By graduation, much of what medical students have learned is already outdated. With a new biomedical article published every 39 seconds, even reviewing just the abstracts would demand around 22 hours a day. There are over 7,000 rare diseases, with 250 more identified each year.

In contrast, AI processes medical data at breakneck speeds, operating 24/7 without breaks. While doctors may waver, AI remains consistent. Although these tools can also make mistakes, it’s important not to underestimate the capabilities of current models. They outperform human doctors in clinical reasoning related to complex medical conditions.

AI’s superpower lies in identifying patterns often overlooked by humans, and these tools have proven surprisingly adept at recognizing rare diseases—often surpassing doctors. For instance, in a 2023 study, researchers tasked ChatGPT-4 with diagnosing 50 clinical cases, including 10 involving rare conditions. It accurately resolved all common cases by the second suggestion and achieved a 90% success rate for rare conditions by the eighth guess. Patients and their families are increasingly aware of these advantages. One child, Alex, consulted 17 doctors over three years for chronic pain, unable to find answers until his mother turned to ChatGPT, which suggested a rare condition known as tethered cord syndrome. The doctor confirmed this diagnosis, and Alex is now receiving appropriate treatment.

Next comes the issue of access. Healthcare systems are skewed. The neediest individuals—the sickest, poorest, and most marginalized—are often left behind. Overbooked schedules and inadequate public transport result in missed appointments for millions. Parents and part-time workers, particularly those in the gig economy, struggle to attend physical examinations. According to the American Time Use Survey, patients sacrifice 2 hours for a mere 20-minute doctor visit. For those with disabilities, the situation often worsens. Transportation issues, costs, and extended wait times significantly increase the likelihood of missed care in the UK. Women with disabilities are over seven times more likely to face unmet needs due to care and medication costs compared to men without disabilities.

Yet it is uncommon to question the ritual of waiting to see a physician, because it has always been the norm. AI has the potential to shift that paradigm. Imagine having a doctor in your pocket, providing assistance whenever it’s needed. Labour’s 10-year plan, unveiled by Health Secretary Wes Streeting, proposes that patients will be able to discuss health concerns with an AI via the NHS app. This is a bold initiative, potentially offering practical clinical advice to millions much more quickly.

Of course, this hinges on accessibility. While internet access is improving globally, substantial gaps remain, with 2.5 billion people still offline. In the UK, 8.5 million individuals lack basic digital skills, and 3.7 million families fall below the “minimum digital living standard.” This implies poor connectivity, obsolete devices, and limited support. Confidence is also a significant barrier; 21% of people in the UK feel they are behind in technological understanding.

Currently, AI healthcare research primarily focuses on its flaws. Evaluating biases and errors in technology is crucial. However, this focus overlooks the flaws and sometimes unsafe systems we already depend upon. A balanced assessment of AI must weigh its potential against the reality of current healthcare practices.

Charlotte Blease is a health researcher and the author of Dr Bot: Why Doctors Can Fail Us, and How AI Could Save Lives, published by Yale University Press on September 9th.

Read more

Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, Eric Topol (Basic Books, £28)

Co-Intelligence: Living and Working with AI, Ethan Mollick (WH Allen, £16.99)

Artificial Intelligence: A Guide for Thinking Humans, Melanie Mitchell (Pelican, £10.99)

Source: www.theguardian.com

Contemplating the Most Controversial Concepts in Science

Jeff Goldblum as Ian Malcolm in Jurassic Park (1993), directed by Steven Spielberg. Universal Pictures/Landmark Media

Jeff Goldblum has made many contributions to this world, but perhaps the greatest is his delivery of the iconic line in the 1993 film Jurassic Park. In the scene where his character Ian Malcolm berates park creator John Hammond, Goldblum delivers what has become a long-standing meme: scientists “were so preoccupied with whether or not they could, they didn’t stop to think if they should”.

Meme status aside, the could-versus-should distinction is a great way to think about the risks and rewards of scientific endeavours.

Still, it is rare to see scientists come out strongly against their own field of research. As a mathematician, Malcolm had little stake in the development of genetics. That is what makes the recent warning against creating “mirror life” so striking: organisms built from molecules with the opposite orientation to everything else on Earth could wreak havoc across the biosphere.

The creation of mirror life could wreak havoc across the biosphere

Mirror life fails badly on the “should” side of the scale, since there seems to be little reason to create it. In other cases, the decision is not so easy. Perhaps the most troublesome recent example is gain-of-function research, in which organisms, often pathogens, are modified to enhance their abilities, bringing both risk and reward. For example, altering a flu virus to make it better at infecting humans is obviously a risk. But if doing so helps us understand the virus and potentially prevent a pandemic, is it worth it?

Gain-of-function research has always been controversial, but recently the debate over it has exploded. People who believe that SARS-CoV-2, the virus behind covid-19, was created in a lab, a belief not grounded in evidence, have seized on gain-of-function research as a smoking gun. Does this mean such work must be prohibited? Perhaps not, but in Malcolm’s terms, we need to keep in mind the distinction between “could” and “should”.


Source: www.newscientist.com

An unconventional geoengineering concept with potentially serious radioactive fallout

Going nuclear on the climate

We all know that climate change is dangerous, which makes it tempting to take dramatic measures to tackle it. Few measures come more dramatic than building the biggest nuclear bomb ever constructed and burying it deep beneath the seabed.

News reporter Alex Wilkins drew Feedback’s attention to this modest scheme. It is the brainchild of Andy Haverly, who set out his thinking in a paper released on 11 January on arXiv, the online repository for research that hasn’t been peer reviewed.

The Haverly plan builds on an existing approach called enhanced rock weathering. Rocks such as basalt react with carbon dioxide in the air, slowly removing the greenhouse gas and trapping it in mineral form. Crushing such rocks into powder accelerates this chemical weathering and so speeds up CO2 removal. However, even on optimistic estimates, this would offset only a small fraction of greenhouse gas emissions.

That is where the nuke comes in. A decent-sized nuclear explosion would reduce a large amount of basalt to powder, enabling rock weathering on a massive scale. Haverly suggests burying a nuclear bomb at least 3 kilometres beneath the seabed of the Southern Ocean. The surrounding rock would contain the explosion and the radiation, minimising the risk to life, while the blast would crush enough rock to absorb 30 years’ worth of CO2 emissions.

Haverly’s first hurdle is the size of the bomb required. The biggest nuclear explosion to date was that of Tsar Bomba, detonated by the Soviet Union in 1961 with a yield equivalent to 50 megatons of TNT. Haverly envisages a device with an 81-gigaton yield, an explosion more than 1600 times bigger than Tsar Bomba. Such a bomb, he notes soberly, should not be taken lightly.
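The yield comparison above is easy to verify with a little back-of-envelope arithmetic (our own sketch, not a calculation from the paper):

```python
# Back-of-envelope check of the yield comparison: how many Tsar Bombas
# fit into the proposed device? (Our arithmetic, not Haverly's.)
tsar_bomba_megatons = 50      # Tsar Bomba yield, megatons of TNT
proposed_gigatons = 81        # proposed device yield, gigatons of TNT

# 1 gigaton = 1000 megatons, so convert before comparing.
ratio = proposed_gigatons * 1000 / tsar_bomba_megatons
print(ratio)  # 1620.0, i.e. "more than 1600 times" Tsar Bomba
```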

Quite how we would build such a device, transport it across the infamously stormy Southern Ocean, lower it safely to the seabed and then bury it several kilometres below the sea floor is left as an exercise. Haverly estimates the effort would cost “about $10 billion”, though Feedback has no idea how he arrived at that number.

Anyway, nobody tell Elon Musk.

Sneak peek at the afterlife

Feedback often experiences revelations via social media. Our latest came courtesy of X user @pallnandi, an occupational therapist and self-described “fair realist”, who posted on 12 January that “leaked photos” of heaven had gone viral on social media, adding that it was no wonder Christians were determined to get there.

The accompanying image shows a city carved from white stone. The architecture looks like a cross between Istanbul’s Hagia Sophia, Rome’s Colosseum and Rivendell from The Lord of the Rings. Hundreds of windows all glow the same shade of golden yellow, and above the city hangs a dark, starry sky crossed by what appears to be a galaxy.

Hence Feedback’s revelation: wait long enough, and any silly claim that has been lying dormant will eventually circulate again.

This one dates back to 1994, when Weekly World News ran a story headlined “Heaven photographed by Hubble Telescope”. It featured a blurry black-and-white image of a star field with a large glowing patch in the middle containing a collection of grand buildings. Think of how Asgard, home of the Norse gods, looks in the Thor films and you will have the right idea.

Needless to say, this image wasn’t from Hubble, or even from NASA: it is a fake. That didn’t stop it resurfacing as recently as February 2024, highlighted in videos on Instagram and TikTok.

Less than a year later, a new image with a similar tagline went viral. Some reports pointed out that it looks AI-generated: the Milky Way, in particular, contains glitch-like patterns.

Feedback’s real problem, though, is that this heaven looks like a terrible place. For a start, the visible stars suggest a distinct lack of air. It looks frozen, and the architecture recalls something Adam Driver’s megalomaniac architect might design in the film Megalopolis. Science fiction author Naomi Alderman, writing on Bluesky, noted the absence of animals, plants, trees, rivers, lakes and, indeed, people: nothing but cold marble under a dark, sunless sky. She compared it to the output of a “terrible neighbourhood committee”.

Perhaps by the next repetition of this meme, heaven will look like somewhere actually worth spending eternity. Feedback recommends not holding your breath, though.

A fishy finale

A press release alerts us to a new book, Into the Great Wide Ocean: Life in the Least Known Habitat on Earth by Sönke Johnsen. The author describes what we know about the vast volume of water in the open sea, far from the air, the seabed and the continental shelves. How do you spend a lifetime in a place where only the pull of gravity and slight fluctuations in light levels tell you which way is up and which is down?

We don’t know, but we do know that the book’s illustrator has a fittingly fishy name: Marlin Peterson.

Got a story for Feedback?

You can send stories to Feedback by email at feedback@newscientist.com. Please include your home address. This week’s and past Feedbacks can be seen on our website.

Source: www.newscientist.com

Banning frightening concepts may do more harm than the sense of security it provides

Yuichiro Chino/Getty Images

In 1818, Mary Shelley invented a technology that has been used for both good and bad in the centuries since. It's called science fiction.

Although you might not think that literary genres count as technology, science fiction has long been a tool for predicting and critiquing science. Shelley’s Frankenstein, considered by many to be the first serious science fiction novel, was so powerful that South Africa banned it in 1955. It set the template for a kind of story that still serves today as a warning against unintended consequences.

As far as we know, the exact science that the eponymous Victor Frankenstein used to create his monster is impossible. But today researchers can restore dead human brains to something resembling life. Experiments are underway to restart cellular activity (but, importantly, not consciousness) after death, to test its effectiveness in treating conditions such as Alzheimer’s disease (see “Fundamental treatments that bring people back from the brink of death”).

It reminds me of many science fiction stories that feature similar scenarios and I can’t help but imagine what will happen next. The same is true for the study reported in “1000 people’s AI simulation accurately reproduces their behavior.” In this study, researchers used the technology behind ChatGPT to recreate the thoughts and actions of specific individuals with surprising success.

The team behind this work blurs the lines between fact, fiction, and what it means to be human.

In both cases, the teams behind this research are blurring the lines between fact, fiction and what it means to be human, and their work is being conducted under strong ethical oversight: they are deeply aware of the ethical concerns and flagged them early on. But now that the techniques are proven, there is nothing to stop less scrupulous groups from attempting the same thing without oversight, potentially causing significant damage.

Does that mean the research should be banned for fear of it falling into the wrong hands, as Shelley’s book was? Far from it. Concerns about technology are best addressed through appropriate evidence-based regulation and swift punishment of violators. When regulators go too far, we lose not only the technology but also the opportunity to critique and debate it.


Source: www.newscientist.com

5 Unexpected Concepts About the Mind and Consciousness

The problem of consciousness is one of the greatest mysteries in science

Yuichiro Kayano/Getty Images

Two years after opening our New York office, we’re excited to share some news: New Scientist is launching a new live event series in the US, starting in New York on June 22nd with a one-day masterclass on the science of the brain and human consciousness. To celebrate, we’ve unlocked access to five in-depth features that explore the mysteries of the human mind.

Perhaps there is no greater mystery in human experience than consciousness. In the simplest terms, it's about being aware of our existence. It is our experience of ourselves and the world.

It's less clear how and why this happens, and whether other living things, or indeed machines or forms of artificial intelligence, can experience consciousness in the same way that we do.

For much of human history, the idea that consciousness could somehow be explained or fully understood seemed fanciful and beyond the scope of scientific study. However, in recent decades, we have come closer than ever to identifying the physical structures, mechanisms, and neural networks responsible.

As neuroscientist Christof Koch had to admit last year, we’re not there yet. “When you’re young, you have to believe that things are simple,” said Koch, who bet philosopher David Chalmers 25 years ago that by 2023 we would have pinned down exactly which brain cells give rise to our conscious experience of the world. He admitted that he had lost the bet.

Still, Koch needn’t despair. We keep edging a little closer, and fresh insights keep emerging: from what happens inside our brains when we sleep and dream, to how increasingly sophisticated artificial intelligence is challenging what it means to be aware, and how we might even recognise consciousness in a machine if it arose.

One-day masterclass on consciousness

Join us on June 22nd in New York City for an instant expert event on the latest science of consciousness and the human brain.


Source: www.newscientist.com