Liz Kendall: Ofcom Risks Losing Public Trust Over Online Harm Issues

Technology Secretary Liz Kendall has warned that Britain’s internet regulator, Ofcom, may lose public confidence if it doesn’t take adequate measures to address online harm.

During a conversation with Ofcom’s Chief Executive Melanie Dawes last week, Ms. Kendall expressed her disappointment with the slow enforcement of the Online Safety Act, designed to shield the public from dangers posed by various online platforms, including social media and adult websites.

While Ofcom stated that the delays were beyond its control and that “change is underway,” Ms. Kendall remarked to the Guardian: “If they fail to use their powers, they risk losing public trust.”

The father of Molly Russell, who tragically took her life at 14 after encountering harmful online material, expressed his disillusionment with Ofcom’s leadership.

Kendall offered no endorsement when questioned about her confidence in the regulator’s leadership.

Her comments come amidst worries that key components of the online safety framework may not be implemented until mid-2027—nearly four years after the Online Safety Act was passed—and that the rapid pace of technological advancement could outstrip government regulations.

Kendall also voiced significant concerns about “AI chatbots” and their influence on children and young adults.

This concern is underscored by a U.S. case involving teenagers who sadly died by suicide after forming deep emotional bonds with ChatGPT and Character.AI chatbots, treating them as confidants.

“If chatbots are not addressed in the legislation or aren’t adequately regulated—something we are actively working on—they absolutely need to be,” Kendall asserted. “Parents need assurance that their children are safe.”

With Ofcom Chairman Michael Grade set to resign in April, a search for his successor is underway. Ms. Dawes has been CEO for around six years, having served in various roles in public service. Ofcom declined to provide further comment.




Michael Grade will soon step down as chairman of Ofcom. Photo: Leon Neal/Getty Images

On Thursday, the regulator imposed a £50,000 fine on a “nudify” app for failing to prevent minors from accessing pornography. Such apps use AI to “undress” uploaded photos.

Ms. Kendall stated that Ofcom is “progressing in the right direction.” This marks the second fine issued by the regulator since the law was enacted over two years ago.

She spoke at the launch of a new AI ‘Growth Zone’ in Cardiff, which aims to draw £10 billion in investment and create 5,000 jobs across various locations, including the former Ford Bridgend engine factory and Newport.

The government noted that Microsoft is one of the companies “collaborating with the government,” although Microsoft has not made any new investment commitments.

Ministers also plan to allocate £100 million to support British startups, particularly in designing chips that power AI, where they believe the UK holds a competitive edge. However, competing with U.S. chipmaker Nvidia, which recently reported nearly $22 billion in monthly revenue, may prove challenging.


On Wednesday, Labour MPs accused Microsoft of “defrauding” British taxpayers, as U.S. tech firms raked in at least £1.9 billion from government contracts in the 2024-25 financial year.

When asked for her thoughts, Ms. Kendall praised Microsoft’s AI technology being used to create lesson plans in schools in her constituency, but emphasized that the government needs better negotiating expertise to secure the best deals. She also said she would like to see more domestic companies involved, especially in the AI sector.

A Microsoft spokesperson clarified that the NHS procures its services through a national pricing framework negotiated by the UK government, which “ensures both transparency and value for money,” stating that the partnership is delivering “tangible benefits.”

“The UK government chooses to distribute its technology budget among various suppliers, and Microsoft is proud to be one of them,” they added.

Source: www.theguardian.com

Consumers who have lost trust steer clear of companies cosying up to Trump

In late January, Lauren Bedson did something she once thought she could never do: she cancelled her Amazon Prime membership. The catalyst was Donald Trump’s inauguration, and more Americans are planning to make similar decisions this Friday.


Bedson made her move after seeing pictures of Amazon founder Jeff Bezos sitting with other tech moguls and billionaires at the inauguration.

Bedson, of Camas, Washington, told the Guardian: “I’ve lived in Seattle for over 10 years. I’ve been an Amazon fan for a long time and I think they have good products. But I’m so tired of it. I don’t want to give these billionaire oligarchs my money anymore.”

Many Americans have felt the same way since Trump entered the White House. Businesses and business leaders who were once neutral or even vocally critical of Trump now appear eager to stay in his good graces, leaving consumers questioning the values of brands they once trusted. A recent Harris poll found that a quarter of American consumers have stopped shopping at their favorite stores because of the stores’ political stances.

Many are inspired by calls to boycott circulating on social media. One boycott has gone viral over the past few weeks: an “economic blackout” of businesses that have scaled back their diversity, equity, and inclusion (DEI) goals, including Target, Amazon, and Walmart, is scheduled for February 28th, with participants planning to halt all spending at these companies.




Lauren Bedson has cancelled her Amazon Prime membership. Photo: Lauren Bedson

But people are also deciding to boycott around their kitchen tables and within their communities, trying to find a way to resist Trump and, perhaps, corporate capitalism.

The Guardian asked readers how their shopping habits have changed in the months since the political situation began to shift after Trump’s victory. Hundreds of people from across the country said they no longer shop at stores like Walmart and Target, which publicly announced the end of their DEI goals. Dozens, like Bedson, had cancelled their long-held Prime accounts. Others shut down their Facebook and Instagram accounts in protest of Meta.

Source: www.theguardian.com

Can you trust a robot to care for your cat?

Created by scientists from the University of Nottingham and artists from Blast Theory, Cat Royale is a multispecies world centered around a custom-built enclosure in which three cats and a robotic arm coexisted for six hours a day over a 12-day installation period.

Professor Steve Benford of the University of Nottingham and colleagues said: “Robots are finding a place in everyday life, from cleaning houses and mowing lawns to moving around hospitals and delivering parcels.”

“In doing so, they will inevitably have interactions and encounters with animals.”

“The animals could be companion pets that share a home, or guide dogs that help people navigate public places, but they could also be wild animals.”

“Often these encounters are unplanned and incidental to the robot’s primary mission: cats riding Roombas, guide dogs confused by delivery robots, lawn-mowing robots encountering hedgehogs.”

“But it could also be intentional. We could also design robots to serve animals.”

“Little is known about how to design robots for animals, even though such encounters, planned or not, are inevitable. Can it be done?

“We present Cat Royale, a creative quest to design a domestic robot to enrich cats’ lives through play.”

Caring for cats takes more than a carefully designed robot: the environment in which the robot operates matters, as does human interaction. Image credit: Schneiders et al., doi: 10.1145/3613904.3642115.

Cat Royale was unveiled at the World Science Festival in Brisbane, Australia in 2023, has been touring ever since, and just won a Webby Award for its creative experience.

The installation centers around a robotic arm that offers activities intended to make the cats happier, including dragging a “mouse” toy along the floor, raising a feathered “bird” into the air, and offering the cats treats.

The team then trained the AI to learn which games cats liked best so they could personalize their experience.

“At first glance, this project is about designing a robot that can play with cats and enrich the lives of families,” Professor Benford says.

“But beneath the surface, we are exploring the question of what it takes to entrust robots to care for our loved ones, and in some cases, ourselves.”

By working with Blast Theory to develop and study Cat Royale, researchers gained important insights into robot design and interaction with cats.

They had to design a robot that would pick up toys and deploy them in a way that excited the cats, all while learning which games each cat liked.

They also designed an entire world for the cats and robot to live in, providing safe spaces for the cats to observe and sneak around the robot, and decorating it so that the robot had the best chance of spotting an approaching cat.

This means that robot design involves not only engineering and AI, but also interior design.

If you want to bring a robot into your home to take care of your loved ones, you will likely need to redesign your home.

Dr Eike Schneiders, a researcher at the University of Nottingham, said: “As we learned through Cat Royale, creating a multispecies system in which cats, robots and humans are all taken into account takes more than simply designing the robot.”

“We needed to ensure the animal’s health at all times, while also ensuring that the interactive installation would attract a global (human) audience.”

“Many factors were considered, including the design of the enclosure, the robot and its underlying systems, the different roles of the humans involved, and of course the selection of the cats.”

The authors presented their results at the CHI 2024 conference in Honolulu, Hawaii.

_____

Eike Schneiders et al. Designing Multispecies Worlds for Robots, Cats, and Humans. CHI ’24: Proceedings of the CHI Conference on Human Factors in Computing Systems, article #593; doi: 10.1145/3613904.3642115

Source: www.sci.news

Why we place trust in people’s words despite conflicting evidence

Despite the recent surge in “fake news,” misinformation has actually been around for as long as humans have existed. Outlandish claims and conspiracy theories have always been a part of human culture.

Misinformation often originates with individuals, spreads between them, and holds significant influence over what they believe.

When trying to convey complex information to a general audience, even with strong evidence and expert support, it may still be less convincing than anecdotal evidence like “someone I met in the pub said something different.”


Interestingly, the source of misinformation is often someone close or loosely connected to an individual, rather than a stranger in a pub. This can range from friends to distant acquaintances.

Despite lacking relevant expertise, these individual sources can hold significant influence in shaping beliefs and perceptions.

Humans are not always rational beings, and our brains are heavily influenced by emotions and social connections. Emotional experiences play a significant role in memory retention.

Our brains have evolved to rely on social connections and emotions to gather information. Empathy and emotional connections with others are key factors in how we process information.

Human faces and relationships play a crucial role in how we absorb and understand information. This is evident in the preference for newsreaders over text-only news delivery.

Individuals with personal connections or relatable stories often have a greater impact on us than impersonal sources of information.

Despite the importance of facts, emotions play a significant role in shaping our beliefs and actions. This is why anecdotal evidence from individuals can sometimes carry more weight than concrete research.

Source: www.sciencefocus.com

Utilizing New Technology to Detect Cancer Early: The Impact on Calderdale and Huddersfield NHS Foundation Trust in West Yorkshire

A West Yorkshire NHS Trust is utilizing advancements in technology, such as artificial intelligence and surgical robots, to achieve crucial cancer targets and alleviate widespread pressure on hospitals.

Calderdale and Huddersfield NHS Foundation Trust is meeting three important cancer targets established by the government.

These targets include a 28-day wait for patients who receive an urgent referral to have cancer confirmed or ruled out, a 31-day wait from the decision to treat to the first treatment, and a 62-day wait from urgent GP referral to the first treatment.

Sky News was given a tour of the innovations behind the hospital’s results, starting with a diagnostic test called the Cytosponge. The Cytosponge is a small capsule on a string that the patient swallows. When the capsule dissolves in the stomach, it releases a small sponge, which collects cells from the esophageal lining as it is drawn back out by the string; the cells are then analyzed for abnormalities.

The new Cytosponge diagnostic test could help doctors find cases of esophageal cancer faster

Cytosponges are used as an alternative to longer and more invasive endoscopies. Patients find the cytosponge less invasive and report a quicker procedure time.

Source: news.sky.com

Intrinsic, backed by Y Combinator, is building infrastructure for trust and safety teams

Karine Mellata and Michael Lin met several years ago while working on Apple’s Fraud Engineering and Algorithmic Risk team. Both Mellata and Lin were involved in addressing online fraud issues such as spam, bots, account security, and developer fraud among Apple’s growing customer base.

Despite their efforts to develop new models to respond to evolving patterns of abuse, Mellata and Lin felt they were falling behind, stuck rebuilding core elements of their trust and safety infrastructure.

“With regulation bringing increased scrutiny to teams whose trust and safety responses have been somewhat ad hoc, we saw a real opportunity to help modernize this industry and build a safer internet for everyone,” Mellata told TechCrunch in an email interview. “We dreamed of a system that could magically adapt as quickly as the abuse itself.”

So Mellata and Lin co-founded Intrinsic, a startup that aims to give safety teams the tools they need to prevent fraud on their products. Intrinsic recently raised $3.1 million in a seed round with participation from Urban Innovation Fund, Y Combinator, 645 Ventures, and Okta.

Intrinsic’s platform is designed to moderate both user-generated and AI-generated content, providing the infrastructure that customers (primarily social media companies and e-commerce marketplaces) need to detect and take action on content that violates their policies. Intrinsic focuses on integrating safety products and automatically orchestrates tasks like banning users and flagging content for review.
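The orchestration described here, classifying content against a policy and then automatically banning or escalating, might be sketched roughly as follows. All names, thresholds, and logic are invented for illustration; this is not Intrinsic’s actual API.

```python
# Hypothetical sketch of a content moderation pipeline: classify a piece of
# content, then orchestrate a follow-up action (allow, ban, or route to human
# review). The classifier, labels, and threshold are illustrative inventions.
from dataclasses import dataclass


@dataclass
class Decision:
    label: str         # e.g. "ok" or "prohibited_listing"
    confidence: float  # classifier confidence in [0, 1]


def classify(text: str) -> Decision:
    # Stand-in for a learned classifier; a real system would call a model here.
    banned_terms = {"counterfeit", "weapon"}
    if any(term in text.lower() for term in banned_terms):
        return Decision("prohibited_listing", 0.9)
    return Decision("ok", 0.99)


def moderate(user_id: str, text: str) -> str:
    """Return the action taken: 'allow', 'flag_for_review', or 'ban_user'."""
    decision = classify(text)
    if decision.label == "ok":
        return "allow"
    # High-confidence violations trigger automatic enforcement;
    # uncertain ones are routed to human reviewers instead.
    return "ban_user" if decision.confidence >= 0.8 else "flag_for_review"


print(moderate("u1", "Brand-new counterfeit watches"))  # ban_user
print(moderate("u2", "Hand-knitted scarf"))             # allow
```

The confidence threshold is the key design lever in pipelines like this: raising it sends more borderline content to human reviewers at higher cost, while lowering it automates more decisions at the risk of wrongly penalizing users.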

“Intrinsic is a fully customizable AI content moderation platform,” said Mellata. “For example, Intrinsic can help publishers creating marketing materials avoid giving financial advice that carries legal liability. We can also help marketplaces detect listings that violate their policies.”

Mellata notes that there are no off-the-shelf classifiers for such sensitive categories, and that even for a well-resourced trust and safety team, adding a new automatically detected category in-house can take weeks of engineering, and in some cases months.

Asked about rival platforms such as Spectrum Labs, Azure, and Cinder (a near-direct competitor), Mellata said she sees Intrinsic as superior on (1) explainability and (2) significantly expanded tooling. She explained that Intrinsic lets customers “ask questions” about mistakes made in content moderation decisions and receive an explanation of why they happened. The platform also hosts manual review and labeling tools that allow customers to fine-tune moderation models on their own data.

“Most traditional trust and safety solutions were inflexible and not built to evolve with abuse,” Mellata said. “Now more than ever, resource-constrained trust and safety teams are looking to vendors to help them reduce moderation costs while maintaining high safety standards.”

Without third-party auditing, it is difficult to determine how accurate a particular vendor’s moderation models are, or whether they are susceptible to the kinds of bias that plague content moderation models elsewhere. Either way, Intrinsic appears to be gaining traction thanks to its “large and established” enterprise customers, who sign deals in the “six-figure” range on average.

Intrinsic’s near-term plans include increasing the size of its three-person team and expanding its moderation technology to cover not just text and images, but also video and audio.

“The widespread slowdown in the technology industry has increased interest in automation for trust and safety, and this puts Intrinsic in a unique position,” Mellata said. “COOs are concerned with reducing costs. Chief compliance officers are concerned with mitigating risk. Intrinsic helps with both, by catching more fraud.”

Source: techcrunch.com