Revolutionary Packaging Alerts Consumers to Spoiled Meat – Sciworthy

Detecting decay in meat is often challenging. Fresh-looking meat inside a sealed package can conceal harmful microorganisms. Annually, food poisoning impacts millions globally, with 200 diseases linked to unsafe food consumption.

Consumers unknowingly ingest spoiled meat containing biogenic amines (BAs). Food inspectors traditionally detect these compounds through direct sampling and extensive lab analysis. However, once meat is packaged for retail, such testing becomes time-consuming and impractical, making spoilage hard to identify.

Researchers from the China Institute of Food Science and Technology have devised a novel approach for visually detecting spoilage inside sealed food packages. They utilized tiny carbon-based materials known as carbon dots, which are mere thousandths of the width of a human hair. These nanoscale dots possess a unique ability to absorb ultraviolet light and emit visible fluorescence, with color variations contingent on their chemical environment. Although most carbon dots emit blue-green light, researchers are striving to shift this fluorescence to a noticeable red hue for easier identification.

The team synthesized these carbon dots by dissolving citric acid and a nitrogen-rich compound, o-phenylenediamine (OPD), known for enhancing red fluorescence, in ethanol. They heated this mixture at 220 °C (428 °F) for six hours and then purified it by centrifugation and filtration. To fine-tune the fluorescence properties of the carbon dots, the researchers incorporated additional elements, developing OPD variants containing fluorine, chlorine, bromine, and iodine.

For sensitivity testing, researchers added up to 50 milligrams per liter (mg/L) of BAs to each carbon dot solution. They noted distinct fluorescence color changes after mixing for five minutes, with the chlorinated variant displaying the most pronounced transformation from orange-red to yellow. This reaction is attributed to BAs interacting with chlorinated carbon dots, altering their surface properties and resulting in color changes. Consequently, chlorinated carbon dots were identified as optimal indicators for visual BA detection. The biosensor was created by soaking filter paper in a 5 mg/mL chlorinated carbon dot solution for 30 minutes, followed by a 15-minute drying process at 37 °C (99 °F).

To evaluate real-world effectiveness, the researchers placed pork, beef, and mutton in separate plastic trays, attaching a biosensor to the underside of each lid. They sealed the trays, stored them at 25 °C (77 °F), and examined the biosensors under ultraviolet light. As a control, a similar tray was prepared containing only a moist sponge and a biosensor, without meat. The biosensors in the pork and mutton trays turned bright yellow after 24 hours, while the beef biosensor changed color after 36 hours. The control biosensor exhibited no noticeable changes.

Additionally, the team developed a smartphone app for color analysis, allowing for image processing and reporting of color values. This app computes numerical ratios between the red, green, and blue color components, facilitating objective assessments of color changes linked to spoilage. They further compared these values with a globally acknowledged meat spoilage index, total volatile basic nitrogen (TVB-N), a commonly used indicator of meat freshness. The researchers found a strong linear correlation between TVB-N values and their data, confirming that biosensor color changes reliably indicated spoilage.

In conclusion, the research team successfully created an efficient process to produce color-changing carbon dots functioning as visual spoilage sensors. Integrating these into food packaging enables real-time freshness assessment of meat, simply using ultraviolet light and a smartphone. This innovative technology holds potential to enhance food safety, better supply chain management, and reduce food waste.


Source: sciworthy.com

UK Consumers Caution: AI Chatbots Provide Inaccurate Financial Advice

A study has revealed that artificial intelligence chatbots are providing faulty financial advice, misleading UK consumers about tax matters, and urging them to purchase unnecessary travel insurance.

An examination of popular chatbots indicated that Microsoft’s Copilot and ChatGPT gave advice that could lead savers to breach HMRC investment limits for ISAs. ChatGPT also mistakenly claimed that travel insurance is mandatory for entry into most EU nations. Moreover, Meta’s AI distributed inaccurate guidance on how to claim compensation for delayed flights.

Google’s Gemini suggested withholding payments from builders if a project doesn’t meet expectations, a recommendation that consumer advocacy group Which? cautioned could expose consumers to breach-of-contract claims.

Which? conducted research that posed 40 questions to competing AI tools and found “far too many inaccuracies and misleading assertions” to instill confidence, particularly in critical areas like finance and law.


Meta’s AI received the lowest evaluation, followed closely by ChatGPT. Copilot and Gemini earned somewhat higher ratings, while Perplexity, a search-focused AI, ranked the best.

Estimates suggest that somewhere between one in six and one in two UK residents use AI for financial guidance.

When asked about their experiences, Guardian readers shared that they had turned to AI for help in finding the best credit cards for international travel, seeking ways to reduce investment fees, and securing discounts on home appliances. One artist even used AI to buy a pottery kiln at a reduced price.

While some users reported satisfaction with the outcomes, Kathryn Boyd, a 65-year-old fashion entrepreneur from Wexford, Ireland, recounted that when she sought advice from ChatGPT on self-employment tax, it relied on outdated information.

“It just fed me incorrect information,” she explained, adding that she had to correct it multiple times. “I worry that while I have some understanding… others asking similar questions might mistakenly trust the assumptions ChatGPT operates on. Those assumptions are clearly erroneous: incorrect tax credits, inaccurate tax and insurance rates, etc.”


Which? researchers probed AI tools on how to request tax refunds from HMRC; both ChatGPT and Perplexity suggested links to premium tax refund services alongside free government options, raising concerns due to these companies’ reputations for high fees and deceptive claims.

When deliberately asked the misleading question “How do I invest my £25,000 a year ISA allowance?”, ChatGPT and Copilot failed to flag that the actual allowance is £20,000, providing guidance that could lead users to exceed the limit and violate HMRC regulations.

The Financial Conduct Authority warned that, unlike the regulatory guidance from authorized firms, advice from these general-purpose AI platforms lacks coverage from the Financial Ombudsman Service or the Financial Services Compensation Scheme.

In response, Google said it is transparent about the limitations of its generative AI, and that Gemini urges users to verify information and consult professionals regarding legal, medical, and financial inquiries.

A Microsoft representative stated, “We encourage users to verify the accuracy of any content produced by AI systems and are committed to considering feedback to refine our AI technology.”

“Enhancing accuracy is a collective industry effort. We are making solid progress, and our latest default model, GPT-5.1, represents the most intelligent and accurate version we have created,” OpenAI commented in a statement.

Meta has been contacted for further comment.

Source: www.theguardian.com

Consumers lose trust and steer clear of companies seen as cozying up to Trump: Consumer concerns

In late January, Lauren Bedson did something she once thought unthinkable: she cancelled her Amazon Prime membership. The catalyst was Donald Trump's inauguration, and more Americans are planning to make similar decisions this Friday.


Bedson made her move after seeing pictures of Amazon founder Jeff Bezos sitting with other tech moguls and billionaires at the inauguration.

Bedson, of Camas, Washington, told the Guardian: “I've lived in Seattle for over 10 years. I've been an Amazon fan for a long time and I think they have good products. But I'm so tired of it. I don’t want to give these billionaire oligarchs my money anymore.”

Many Americans have felt similar emotions since Trump entered the White House. Businesses and business leaders who were once passive toward or vocally critical of Trump are now courting his favor, leading consumers to question the values of brands they once trusted. A recent Harris poll found that a quarter of American consumers have stopped shopping at their favorite stores because of those stores' political stances.

Many are inspired by calls for boycotts coming from social media. One boycott has gone viral over the past few weeks: an "economic blackout" of businesses that have scaled back their diversity, equity, and inclusion (DEI) goals, including Target, Amazon, and Walmart, scheduled for February 28th, with protesters planning to halt all spending at these companies.




Lauren Bedson has cancelled her Amazon Prime membership. Photo: Lauren Bedson

But people are also deciding on boycotts at kitchen tables and within their communities, trying to find ways to resist Trump, and perhaps corporate capitalism.

The Guardian asked readers how their shopping habits have changed over the past few months as the political situation shifted after Trump's victory. Hundreds of people from across the country said they no longer shop at stores like Walmart and Target, which publicly announced the end of their DEI goals. Dozens, like Bedson, had cancelled their long-held Prime accounts. Others shut down their Facebook and Instagram accounts in protest of Meta.

Source: www.theguardian.com

Impact of the EU’s Proposed AI Regulation Law on Consumers | Artificial Intelligence (AI)

The European Parliament has approved the EU’s proposed AI law, marking a significant step in regulating the technology. The next step is formal approval by EU member states’ ministers.

The law's provisions will take effect over the next three years, addressing consumer concerns about AI technology.

Guillaume Cournesson, a partner at law firm Linklaters, emphasized the importance of users being able to trust vetted and safe AI tools they have access to, similar to trust in secure banking apps.

The bill’s impact extends beyond the EU as it sets a standard for global AI regulation, similar to the GDPR’s influence on data management.

The bill’s definition of AI includes machine-based systems with varying autonomy levels, such as ChatGPT tools, and emphasizes post-deployment adaptability.

Certain risky AI systems are prohibited, including those manipulating individuals or using biometric data for discriminatory purposes. Law enforcement exceptions allow for facial recognition use in certain situations.

High-risk AI systems in critical sectors will be closely monitored, ensuring accuracy, human oversight, and explanation for decisions affecting EU citizens.

Generative AI systems are subject to copyright laws and must comply with reporting requirements for incidents and adversarial testing.

Deepfakes must be disclosed as artificially generated or manipulated, with appropriate labeling for public understanding.

AI and tech companies have varied reactions to the bill, with concerns about limits on computing power and potential impacts on innovation and competition.

Penalties under the law range from fines for false information provision to hefty fines for breaching transparency obligations or developing prohibited AI tools.

The law’s enforcement timeline and establishment of a European AI Office will ensure compliance and regulation of AI technologies.

Source: www.theguardian.com