European Parliament Advocates Prohibition of Social Media for Those Under 16

The European Parliament has proposed that children under the age of 16 should be prohibited from using social media unless their parents grant permission.

On Wednesday, MEPs overwhelmingly approved a resolution concerning age restrictions. While this resolution isn’t legally binding, the urgency for European legislation is increasing due to rising concerns about the mental health effects on children from unfettered internet access.

The European Commission, which proposes EU laws, is already exploring a social media ban for under-16s along the lines of Australia’s, which is due to take effect next month.

Commission President Ursula von der Leyen indicated in a September speech that she would closely observe the rollout of Australia’s initiative. She condemned “algorithms that exploit children’s vulnerabilities to foster addiction” and said that parents often feel overwhelmed by “the flood of big tech entering our homes.”

Ms. von der Leyen pledged to establish an expert panel by the year’s end to provide guidance on effectively safeguarding children.

There’s increasing interest in limiting children’s access to social media and smartphones. A report commissioned by French President Emmanuel Macron last year recommended that children should not have smartphones until age 13 and should refrain from using social media platforms like TikTok, Instagram, and Snapchat until they turn 18.

Danish Social Democratic lawmaker Christel Schaldemose, who authored the resolution, said it is essential for politicians to act to protect children. “This is not solely a parental issue. Society must take responsibility for ensuring that platforms are safe environments for minors, and that children only use them once they are above a specified age.”

Her report advocates for the automatic disabling of addictive elements like infinite scrolling, auto-playing videos, excessive notifications, and rewards for frequent use when minors access online platforms.

The resolution emphasizes that “addictive design features are typically integral to the business models of platforms, particularly social media.” An early draft of Schaldemose’s report referenced a study indicating that one in four children and young people exhibit “problematic” or “dysfunctional” smartphone use, resembling addictive behavior. The resolution calls for a minimum age of 16 for access to social media, with access from age 13 permitted with parental consent.

The White House has urged the EU to roll back its digital regulations, and supporters of the age restrictions framed their votes in that context. U.S. Commerce Secretary Howard Lutnick suggested at a meeting in Brussels that EU rules on tech companies could be re-evaluated in exchange for reduced U.S. tariffs on steel and aluminum.

Stéphanie Yon-Courtin, a French lawmaker from Macron’s party, responded to Lutnick’s visit by asserting that Europe is not a “regulatory colony.” After the vote, she remarked: “Our digital laws are not negotiable. We will not compromise child protections just because a foreign billionaire or tech giant attempts to influence us.”

The EU is already committed to shielding internet users from online dangers like misinformation, cyberbullying, and unlawful content through the Digital Services Act. However, the resolution highlights existing gaps in the law that need to be addressed to better protect children from online risks, such as addictive design features and financial incentives to become influencers.

Schaldemose said the law, of which she is a co-author, is robust, “but we can strengthen it further, because it remains less specific and less defined, particularly in regard to addictive design features and harmful dark-pattern practices.”


Dark patterns refer to design elements in apps and websites that manipulate user decisions, such as countdown timers pushing purchases or persistent requests to enable location tracking or notifications.

Schaldemose’s resolution was endorsed by 483 members, while 92 voted against it and 86 abstained.

Eurosceptic lawmakers criticized the initiative, arguing that an EU-imposed ban on children’s access to social media would be an overreach. “Decisions about children’s online access should be made as closely as possible to families in member states, not in Brussels,” said Kosma Złotowski, a Polish member of the European Conservatives and Reformists group.

The resolution was adopted just a week after the Commission announced an overhaul that would delay parts of the Artificial Intelligence Act and relax other digital rules for businesses in the name of “simplification.”

Schaldemose acknowledged the importance of not overwhelming the legislative system, but added, “There is a collective will to do more regarding children’s protection in the EU.”

Source: www.theguardian.com

Olivia Williams Advocates for ‘Nude Rider’ Style Regulations for AI Body Scanning in Acting

In light of rising apprehensions regarding the effects of artificial intelligence on performers, actress Olivia Williams emphasized that actors should handle data obtained from body scans similarly to how they approach nude scenes.

The star of Dune: Prophecy and The Crown stated that she and fellow actors often face mandatory body scans by on-set cameras, with scant assurances on the usage and destination of that data.

“It would be reasonable to adhere to the ‘Nude Rider’ standard,” she noted. “This footage should only be used within that specific scene; it must not be repurposed elsewhere. Furthermore, any edited scenes must be removed across all formats.”

Williams drew attention to a vague provision in contracts that seems to grant studios extensive rights to use images of performers “on every platform currently existing or created in the future worldwide, indefinitely.”

A renewed conversation about AI’s impact on actors has been ignited by widespread criticism of the development of an AI performer named Tilly Norwood. Actors fear their likenesses and poses will be utilized to train AI systems, potentially threatening their employment.

Actors, stunt performers, dancers, and supporting actors relayed to the Guardian that they felt “ambushed” and compelled to participate in body scans on set. Many reported there was little time to discuss how the generated data would be handled or whether it could be used for AI training purposes.

Ms. Williams recounted her unsuccessful attempts to eliminate the ambiguous clause from her contract. She explored options for obtaining a limited license to control her body scan data, but her lawyer advised her that the legal framework was too uncertain. The costs of trying to reclaim the data were prohibitively high.

“I’m not necessarily looking for financial compensation for the use of my likeness,” she remarked. “What concerns me is being depicted in places I’ve never been, engaging in activities I’ve never done, or expressing views I haven’t shared.”

“Laws are being enacted, and no one is intervening. They’re establishing a precedent and solidifying it. I sign these contracts because not doing so could cost me my career.”

Williams expressed that she is advocating for younger actors who have scant options but to undergo scans without clear information regarding the fate of their data. “I know a 17-year-old girl who was encouraged to undergo the scan and complied, similar to the scene from Chitty Chitty Bang Bang. Being a minor, a chaperone was required to consent, but her chaperone was a grandmother unfamiliar with the legal implications.”

The matter is currently under discussion between Equity, the UK performing arts union, and Pact, the trade body for UK film and TV producers. “We are pushing for AI safeguards to be integrated into major film and television contracts to prioritize consent and transparency for on-set scanning,” said Equity general secretary Paul W Fleming.

“It is achievable for the industry to implement essential minimum standards that could significantly transform conditions for performers and artists in British TV and film.”

Pact issued a statement saying: “Producers are fully aware of their responsibilities under data protection legislation, and these concerns are being addressed during collective negotiations with Equity. Due to the ongoing talks, we are unable to provide further details.”

Source: www.theguardian.com

Apple Advocates for Repeal of the EU’s Digital Markets Act, Warns It May Halt Some Shipments to the EU

Apple is asking the European Commission to repeal the Digital Markets Act (DMA), cautioning that without changes it may halt the shipment of certain products and services to the 27-member bloc.

In its latest dispute with Brussels, the iPhone manufacturer argued that the digital market regulations have resulted in poorer experiences for Apple users, increased security risks, and disrupted the integration of Apple products.

The Silicon Valley company made its case in a submission to a Commission review of the three-year-old antitrust law, which aims to rein in the dominance of major digital companies, including search engines, app stores, and messaging platforms.

It claimed that the law’s interoperability demands have already delayed the introduction of features for EU users, such as live translation via AirPods and screen mirroring from iPhones to laptops.

“The DMA means the list of features delayed for EU users will likely grow, leaving their experience of Apple products falling further behind,” the company stated. It also argued that Brussels is fostering unfair competition, since the same rules do not apply to Samsung, the leading smartphone vendor in the EU.

Some DMA requirements necessitate that Apple ensures headphones from other brands operate on iPhones. Apple expressed that this is a barrier preventing the rollout of live translation services in the EU, as competing companies could access conversation data, raising privacy concerns.

Apple argued that the DMA should be repealed or at least replaced with more suitable regulations. While it did not specify which products it might withhold from the EU in the future, it noted that the Apple Watch, first introduced a decade ago, would not be able to launch in the EU today.

This marks another confrontation between the California-based firm and the European Commission. Earlier this year, Apple appealed a €500 million fine levied by the EU for allegedly hindering app developers from exploring cheaper alternatives outside the app store.

In August, US President Donald Trump threatened tariffs on unspecified nations in retaliation for regulations affecting US tech companies.

In a post on Truth Social, he remarked: “I stand against a country that attacks our incredible American tech companies. Digital taxes, digital service laws, and digital market regulations are all aimed at harming or discriminating against American technology.”

“They also, outrageously, give a complete pass to China’s largest tech companies. This needs to end, and it needs to end now!”

Referring to the DMA, Apple stated: “Rather than competing through innovation, already successful companies are twisting these laws to further their agendas to collect more data from EU citizens or to gain access to Apple’s technology without cost.”

It emphasized that the law has changed how users access apps, saying that adult apps it has never permitted in the App Store, in part because of the risks they pose to children, are now available on iPhones in the EU through alternative marketplaces.

The European Commission has been approached for comment.

Source: www.theguardian.com

China Advocates for Global AI Collaboration Following Trump’s Announcement of a Low-Regulation Approach

Chinese Premier Li Qiang has called on nations to unite in advancing both the development and the security of rapidly evolving AI, days after the U.S. announced a low-regulation approach to the industry.

Speaking at the annual World Artificial Intelligence Conference (WAIC) in Shanghai, Li referred to AI as a fresh engine for economic growth, highlighting the disjointed governance of the technology and advocating for improved international cooperation to establish a universally recognized AI framework.

On Saturday, Li cautioned that the advancement of artificial intelligence must be balanced against security concerns, emphasizing the urgent need for a global consensus.

His statements followed President Donald Trump’s announcement of an aggressive low-regulation approach aimed at cementing U.S. dominance in the fast-moving sector. One executive order specifically targeted what the White House termed “woke” AI models.

While addressing the World AI Conference, Li stressed the importance of governance and the promotion of open-source development.

“The risks and challenges associated with artificial intelligence have garnered significant attention. Finding a balance between progress and security necessitates a broader consensus from society,” the Prime Minister stated.

Li asserted that China would “actively promote” open-source AI development, expressing willingness to share advancements with other nations, particularly those in the Global South.

The three-day conference positioned AI as a critical battleground, as industry leaders and policymakers from the two largest global economies faced off in a growing technological rivalry between China and the U.S.

Washington has implemented export restrictions on advanced technologies to China, including high-end AI chips from companies like NVIDIA, citing concerns that such technologies could enhance China’s military capabilities.

Although Li did not specifically mention the U.S. in his address, he cautioned that AI could become an “exclusive game” for certain nations and corporations, highlighting issues such as a shortage of AI chips and restrictions on the exchange of talent.

As AI is integrated across numerous industries, its applications have raised significant ethical concerns, ranging from misinformation dissemination to employment impacts and the potential for loss of technical oversight.

Earlier this week, news organizations warned of the “devastating impact” on their online audiences as AI-generated summaries replace traditional search results.

The World AI Conference is an annual government-sponsored gathering in Shanghai that typically draws participants from various sectors, including industry players, government representatives, researchers, and investors.

Speakers at the event included Anne Bouverot, the French president’s special envoy for AI; computer scientist Geoffrey Hinton, known as “the godfather of AI”; and former Google CEO Eric Schmidt.

Tesla CEO Elon Musk did not participate this year, although he has been a regular at the conference’s opening in previous years, both in person and by video.

The exhibition showcased Chinese tech corporations like Huawei and Alibaba, along with startups such as humanoid robot maker Unitree. Western participants included Tesla, Alphabet, and Amazon.

Reuters and Agence France-Presse

Source: www.theguardian.com

Inflatable Helmets: The Inventor Advocates for a Safer Cycling Future

According to the World Health Organization, approximately 41,000 people die each year while cycling. How many of them were not wearing helmets is unclear, but what is evident is that helmets themselves deter many people from cycling.

Cycling UK, along with various charities advocating for bicycle use, suggests that when helmet usage is mandated, the number of people opting to cycle tends to decline.

For evidence, one can look at Australia, where cycling rates dropped by 36% in New South Wales and Victoria after those two states implemented mandatory helmet laws.

Research indicates that the hesitation to wear helmets stems largely from doubts about their protective capabilities and the challenges associated with their storage and cost. However, Ventete, a UK startup, aims to address these issues.

Storage issues

The AH-1 is an inflatable helmet, designed in the UK and manufactured in Switzerland, that took a decade to develop.

While earlier inflatable helmets functioned like airbags—only inflating upon impact—the AH-1 inflates using an electric pump before use, taking about 30 seconds to reach the optimal pressure of 32 psi.

Once deflated, the AH-1 packs down to less than 4 cm (1.5 in) thick, making it easy to store almost anywhere.

“We recognized that many people are not fans of traditional helmets due to issues of portability,” says Colin Harperger, co-founder of Ventete. “This inspired us to transform 3D objects (helmets) into easily stored 2D objects.”

“The AH-1 comprises 11 inflatable chambers,” Harperger elaborates. “Each chamber is encased in protective ribs made from laminated nylon that resists punctures, wear, and stretching. The ribs are molded from glass-reinforced polymers, offering extra structural robustness.”

Each rib is additionally lined with rubber to help absorb impact energy.

A cyclist himself, Harperger believed that a pneumatic structure could absorb an impact better than the expanded polystyrene (EPS) used in conventional helmets, yet initially no technology existed to realize his vision.

“About five years ago, we experienced a breakthrough. After several iterations, we developed the AH-1.”


Safety Standards

While being inflatable enhances convenience in storage, what about safety? Can it effectively protect your head? Currently, the Ventete AH-1 holds EN 1078 certification.

This certification aligns with both European and UK safety standards, covering the helmet’s construction, field of view, and shock absorption capabilities. However, not all helmets provide the same level of protection.

“Once you achieve certification, you are not obligated to publish your findings,” Harperger points out. “So we collaborated with brain injury specialists from the Human Experience, Analysis and Design (HEAD) Lab at Imperial College London to address exactly those concerns.”


“The highlight for us was achieving a 44.1% reduction in linear risk compared to the best-performing EPS helmet,” Harperger stated.

Linear risk relates to straight-on forces, such as the head striking a flat surface; the lower the peak force of the impact, the lower the risk. “It may sound counterintuitive, but I aim to extend the impact duration to prevent the head from bouncing off.”

Imagine falling onto a bed rather than a hardwood floor. The impact on the hardwood floor is brief but increases the likelihood of brain movement within the skull.

“By prolonging the impact duration, we significantly reduce linear risk.”
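
To make this trade-off concrete, here is a minimal, illustrative calculation of the impulse-momentum relationship Harperger is describing; the head mass, speed change, and impact durations below are round assumptions for the sake of the example, not Ventete’s or Imperial’s test figures.

```python
# Illustrative impulse-momentum sketch (assumed numbers, not Ventete test data):
# for the same change in head velocity, a longer impact duration means a
# lower average force on the head, since F_avg = m * dv / dt.

def average_impact_force(head_mass_kg: float, delta_v_ms: float, duration_s: float) -> float:
    """Average force (N) needed to change the head's velocity by delta_v over duration_s."""
    return head_mass_kg * delta_v_ms / duration_s

HEAD_MASS = 4.5  # kg, roughly an adult head (assumption)
DELTA_V = 5.0    # m/s, speed change arrested by the helmet (assumption)

for duration_ms in (5, 10, 20):  # stiffer structure -> shorter impact
    force = average_impact_force(HEAD_MASS, DELTA_V, duration_ms / 1000)
    print(f"{duration_ms:>3} ms impact -> average force ~ {force / 1000:.2f} kN")
```

On these assumed numbers, doubling the impact duration halves the average force the head experiences, which is why a structure that compresses for longer without bottoming out can reduce linear risk.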

This testing also looked at rotational impact, which assesses forces like twists or shears occurring when the helmet hits the ground at an angle.

In this domain, the AH-1 performed second best among the four contenders tested, behind a helmet with a secondary inner layer that can move 10–15mm (about 0.5in) to reduce the rotational forces reaching the brain.

These secondary layers are often found in higher-end helmets; however, the AH-1 aims to make these features available in more affordable options.

Cost remains a concern. The three comparison helmets tested were all priced under £50, while the AH-1 retails for £350. So while it may resolve the protection and storage objections of those hesitant to wear helmets, the price may still present a barrier.

About our experts

Colin Harperger is the co-founder and CEO of Ventete. He holds a PhD in Architecture by Design from UCL (University College London), UK.


Source: www.sciencefocus.com

Australia Has ‘No Alternative’ but to Embrace AI, Industry Minister Says, Aiming for Global Leadership

Australia must “aggressively pursue” the advantages of artificial intelligence or risk becoming “dependent on someone else’s supply chain,” says the new Minister for Industry and Science, Tim Ayres, as the Labor government weighs further regulation of the rapidly advancing technology.

Ayres, a former manufacturing union official, recognized that there is significant skepticism surrounding AI in Australia and emphasized the need for dialogue between employers and employees about the implications of automation in the workplace.

The minister insisted that Australia has “no alternative,” stating that the country is embracing new technologies while striving to become a global frontrunner in regulating and utilizing AI.

Ayres told Guardian Australia: “The government’s responsibility is to ensure that we not only lean towards the opportunities for businesses and workers but also to be assured of our capacity to tackle potential challenges.”

“Australia’s strategy must prioritize regulation and strategy for the advantage of its people,” he added.

Ayres, who was elevated to cabinet last month after serving in a junior manufacturing and trade role, now steers the Labor government’s flagship industry agenda, a comprehensive plan connecting manufacturing, the energy transition, research, and business policy.

Ayres faces immediate challenges on AI policy. His predecessor, Ed Husic, laid critical groundwork focused on developing the local industry and setting guardrails for AI usage, including consideration of standalone AI regulation.

Less than a month into his new role, Ayres said the government is still defining its response, watching rapid developments in like-minded countries. He indicated the response would involve laws and regulations yet to be finalized, emphasizing the importance of swift action for Australia.

“There is no alternative but to adopt an Australian approach,” he asserted. “This approach dictates how we shape Australia’s digital future and how we ensure that we gain agency in technology development alongside global partners in these matters.

“The alternative is to remain passive and find ourselves at the mercy of someone else’s supply chain.”

The minister highlighted that Australia stands to “reap significant benefits” from AI adoption, particularly through increased productivity and economic growth. Ayres, who grew up on a cattle farm near Lismore, noted that both white-collar and blue-collar jobs have much to gain from automation and new technologies.

Drawing on his experience with manufacturing unions, he acknowledged the anxieties many workers hold, but argued that the only more detrimental alternative would be for Australia to become a technological dead end.

“However, I want to encourage companies and employers to consider the impact of AI adoption on enhancing job quality,” Ayres stated.

“Our industrial relations framework allows for adequate consultation and engagement at the corporate level, fostering discussions about these issues on an individual workplace basis.”

Recently, the Business Council of Australia (BCA) released significant reports detailing Australia’s potential to emerge as a global leader in AI, enhancing productivity and boosting living standards through economic expansion.

The Australian Council of Trade Unions reported in December that one-third of Australian workers are at risk of losing their jobs to the introduction of AI.

“A recently published BCA document highlighted a significant level of skepticism among Australians regarding this new wave of technology, which is not unusual for our country,” Ayres remarked.

“Every wave of technological transformation shapes the labor market. This is a fact. The adverse consequences of technological evolution in employment have historically been outweighed by new investments and developments within employment and technology.”

Ayres also affirmed that Labor would sustain its Future Made in Australia agenda, emphasizing an “active” focus on boosting the production of critical minerals, iron, and steel as part of the renewable energy transition.

“I am committed to doing everything in my power to establish new factories and enhance industrial capacities,” he stated.

“Specifically, regions like central Queensland, the Hunter, and the Latrobe Valley have the opportunity to intersect future energy advantages with industrial capabilities, allowing Australia to better support these communities as well.”

Source: www.theguardian.com

Trump Advocates for Increased Birth Rates but Dismisses Fertility Experts

Every year, tens of thousands of young women opt to freeze their eggs. This procedure can be costly and at times painful, with numbers rising as more Americans delay childbirth.

However, many uncertainties surround the process: What is the optimal age at which to freeze eggs? What are the success rates? And importantly, how long can frozen eggs remain viable?

Finding reliable answers to these questions is challenging. During the significant downsizing at the Centers for Disease Control and Prevention, the Trump administration disbanded a federal research team dedicated to collecting and analyzing data from fertility clinics aimed at enhancing outcomes.

According to Aaron Levine, a professor at Georgia Tech’s School of Public Policy who collaborated with the CDC team on research, the dismissal of the six team members was “a real, serious loss.”

“They held the most extensive data on fertility clinics, focused on ensuring truthfulness in advertising to patients,” said Barbara Collura, CEO of Resolve: The National Infertility Association.

Collura emphasized that losing the CDC team is a significant blow to both couples facing infertility and women contemplating egg freezing.

These layoffs come amid rising political interest in declining U.S. fertility rates. President Trump has dubbed himself the “fertilization president” and signed an executive order aimed at expanding access to in vitro fertilization.

“The White House is committed to IVF and remains focused on it,” Collura noted.

With one in seven married or unmarried women experiencing infertility, she remarked, “Looking at these statistics, it’s disheartening—and not surprising—that our public health agencies have chosen to sidestep this issue.”

When asked about the team’s elimination, a Department of Health and Human Services spokesperson said the administration is “in the planning stage” of transitioning maternal health programs to the new Administration for a Healthy America, offering no further details.

The scientists from the National Assisted Reproductive Technology Surveillance System were working to address numerous questions surrounding IVF research.

“We lack comprehensive data on the success rates of egg freezing for personal use because it’s relatively new and tricky to track,” Dr. Levine explained.

This uncertainty weighs heavily on women wishing to have children. Simeonne Bookal, who collaborates with Collura at Resolve, froze her eggs in 2018 while waiting to find the right partner.

She got engaged earlier this year, with her wedding scheduled for next spring. At 38, she expressed that having her eggs banked offers her a “security blanket.”

Though she still has reservations about her chances of conceiving, the frozen eggs provide her some assurance.

The precise success rate of the egg freezing procedure remains ambiguous, as many published studies are based on theoretical models that utilize data from infertile patients or egg donors, which differ significantly from women preserving their eggs for future use.

Some studies provide limited insights, often involving fewer than 1,000 women who thaw their eggs and undergo IVF, according to Dr. Sarah Druckenmiller Cascante, a clinical assistant professor of obstetrics and gynecology at NYU Langone and author of a recent review paper on the topic.

“The available data is scant, and it’s crucial to be transparent with patients about this,” she said.

“I wouldn’t regard it as a guaranteed insurance policy. While it could lead to a baby, it’s more about improving the chances of having a biological child later in life, especially if done at a younger age.”

The CDC team maintained a database known as the National ART Surveillance System, established by Congress in 1992. This tracked success rates for various fertility clinics but now faces an uncertain future without continuous updates.

While the Society for Assisted Reproductive Technology offers similar databases to researchers, they are not as comprehensive as the CDC’s since they contain data from approximately 85% of U.S. fertility clinics.

According to Sean Tipton, chief advocacy and policy officer for the American Society for Reproductive Medicine, no dedicated research team now oversees the database.

The surge in women opting to bank their eggs for future use has intensified the scrutiny regarding the risks and benefits of freezing eggs.

The procedure has not been considered experimental since 2012. In 2014, only 6,090 patients opted to bank their eggs for fertility preservation; by 2022 that number had soared to 28,207, and it reached 39,269 in 2023, the latest year for which data is available.

Source: www.nytimes.com

Commissioner Advocates for Ban on Apps Creating Deepfake Nude Images of Children

“Nudification” apps that use artificial intelligence to generate sexually explicit images of children should be banned, the children’s commissioner for England has warned, amid rising fears for potential victims.

Girls have reported refraining from sharing images of themselves on social media due to fears that generative AI tools could alter or sexualize their clothing. Although creating or disseminating sexually explicit images of children is illegal, the underlying technology remains legal, according to the report.

“Children express fear at the mere existence of this technology. They worry strangers, classmates, or even friends might exploit smartphones to manipulate them, using these specialized apps to create nude images,” said Rachel de Souza, the children’s commissioner for England.

“While the online landscape is innovative and continuously evolving, there’s no justifiable reason for these specific applications to exist. They have no rightful place in our society, and tools that enable the creation of naked images of children using deepfake technology should be illegal.”

De Souza has proposed an AI bill mandating that developers of generative AI tools address the risks their products pose to children, and has urged the government to implement an effective system for removing explicit deepfake images of children. This should be underpinned by policy measures recognizing deepfake sexual abuse as a form of violence against women and girls.

Meanwhile, the report calls on Ofcom to ensure robust age verification for nudification apps, and for social media platforms to restrict children’s access to sexually explicit deepfake tools, in accordance with online safety laws.

The findings revealed that 26% of respondents aged 13 to 18 had encountered sexually explicit deepfake images of celebrities, friends, teachers, or themselves.

Many AI tools reportedly focus solely on female bodies, thereby contributing to an escalating culture of misogyny, the report cautions.


The report highlighted cases like that of Mia Janin, who tragically died by suicide in March 2021, illustrating connections between deepfake abuse, suicidal thoughts, and PTSD.

In her report, De Souza stated that new technologies confront children with concepts they struggle to comprehend, evolving at a pace that overwhelms their ability to recognize the associated hazards.

Lawyers told the Guardian that young people arrested for sexual offenses involving deepfake experimentation often do not understand the repercussions of their actions.

Danielle Reece-Greenhalgh, a partner at the law firm Corker Binning, noted that the existing legal framework makes it significantly harder for law enforcement agencies to identify and protect victims of abuse.

She indicated that banning such apps might ignite debates over internet freedom and could disproportionately impact young men experimenting with AI software without comprehension of the consequences.

Reece-Greenhalgh remarked that while the criminal justice system strives to treat adolescent offenses with understanding, policing offenses that occur in private settings is difficult, and such offenses can carry unintended consequences that ripple through schools and communities.

Matt Hardcastle, a partner at Kingsley Napley, emphasized the online “minefield” young people face in relation to illegal sexual and violent content, noting that many parents are unaware of how easily their children can stumble into harmful situations.

“Parents often view these situations from their children’s perspectives, unaware that their actions can be both illegal and detrimental to themselves or others,” he stated. “Children’s brains are still developing, leading them to approach risk-taking very differently.”

Marcus Johnston, a criminal lawyer focusing on sex crimes, reported working with an increasingly young demographic, often without parents being aware of the issues at play. “Typically, these offenders are young men, seldom young women, shut away in their bedrooms, while parents mistakenly assume their activities are mere games,” he explained. “These offenses have emerged largely because of the internet; most sexual crime now takes place online, driven by forums designed to cultivate criminal behavior in children.”

A government spokesperson stated:

“Creating, possessing, or distributing child sexual abuse material, including AI-generated images, is abhorrent and illegal. Platforms of all sizes must remove this content or face significant fines under online safety laws. The UK is pioneering AI-specific child sexual abuse offenses, making it illegal to possess, create, or distribute tools crafted for generating abhorrent child sexual abuse material.”

  • In the UK, the NSPCC offers support to children at 0800 1111, and adults concerned about a child can call 0808 800 5000. The National Association for People Abused in Childhood (NAPAC) supports adult survivors at 0808 801 0331. In Australia, children, young adults, parents, and educators can contact the Kids Helpline at 1800 55 1800, or Bravehearts at 1800 272 831. Adult survivors can reach the Blue Knot Foundation at 1300 657 380.

Source: www.theguardian.com

Federal police union advocates for creation of portal for reporting AI deepfake victimization

The federal police union is calling for the establishment of a dedicated portal where victims of AI deepfakes can report incidents to police, voicing concern over the pressure placed on investigators in the case of the first person charged, last year, with distributing deepfake images of women.

Attorney General Mark Dreyfus introduced legislation in June to criminalize the sharing of sexually explicit images created using artificial intelligence without consent. The Australian Federal Police Association (Afpa) supports this bill, citing challenges in enforcing current laws.

Afpa highlighted a specific case where a man was arrested for distributing deepfake images to schools and sports associations in Brisbane. They emphasized the complexities of investigating deepfakes, as identifying perpetrators and victims can be challenging.

Afpa raised concerns about the limitations of pursuing civil action against deepfake creators, citing the high costs and challenges in identifying the individuals responsible for distributing the images.

They also noted the difficulty in determining the origins of deepfake images and emphasized the need for law enforcement to have better resources and legislation to address this issue.


The union also urged an overhaul of reporting mechanisms and an educational campaign to raise awareness of the issue.

A parliamentary committee is set to convene its first hearing on the proposed legislation in the coming week.

Source: www.theguardian.com

Bill Gates advocates for AI as a valuable tool in achieving climate goals

Bill Gates argues that artificial intelligence will assist, not hinder, in achieving climate goals, despite concerns about new data centers depleting green energy supplies.

The philanthropist and Microsoft co-founder stated that AI could enhance technology and power grids’ efficiency, enabling countries to reduce energy consumption even with the need for more data centers.

Gates reassured that AI’s impact on the climate is manageable, contrary to fears that AI advancements might lead to increased energy demand and reliance on fossil fuels.

“Let’s not go overboard on this,” Gates said. “Data centers are, in the most extreme case, a 6% addition to energy demand, but probably only 2% to 2.5%. The question is whether AI will accelerate a more than 6% reduction. And the answer is: certainly.”

Goldman Sachs estimates that AI chatbot tool ChatGPT’s electricity consumption for processing queries is nearly ten times more than a Google search, potentially causing carbon dioxide emissions from data centers to double between 2022 and 2030.
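
For a rough sense of what that multiplier means at scale, here is a back-of-envelope sketch; the per-query energy figure and query volume are commonly cited round estimates, assumed here for illustration rather than taken from the Goldman Sachs report.

```python
# Back-of-envelope scale comparison (assumed, commonly cited estimates):
# a conventional Google search is often put at ~0.3 Wh per query; the
# Goldman Sachs comparison implies roughly ten times that for ChatGPT.

GOOGLE_WH_PER_QUERY = 0.3                        # Wh (assumption)
CHATGPT_WH_PER_QUERY = 10 * GOOGLE_WH_PER_QUERY  # ~3 Wh, per the 10x figure

QUERIES_PER_DAY = 1_000_000_000  # one billion queries/day, for scale (assumption)

extra_wh = (CHATGPT_WH_PER_QUERY - GOOGLE_WH_PER_QUERY) * QUERIES_PER_DAY
print(f"Per query: ~{CHATGPT_WH_PER_QUERY:.1f} Wh vs ~{GOOGLE_WH_PER_QUERY:.1f} Wh")
print(f"Extra energy at 1bn queries/day: ~{extra_wh / 1e9:.1f} GWh per day")
```

At these assumed numbers the difference comes to roughly 2.7 GWh a day, which illustrates why analysts expect data center demand, and the emissions tied to it, to climb as chatbot queries displace conventional searches.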

Experts project that developed countries, which have seen energy consumption decline due to efficiency, could experience up to a 10% rise in electricity demand from the growth of AI data centers.

In a conference hosted by his venture fund Breakthrough Energy, Gates told reporters in London that the additional energy demand from AI data centers is likely to be offset by investments in green electricity, as tech companies are willing to pay more for clean energy sources.

Breakthrough Energy has supported over 100 companies involved in the energy transition. Gates is heavily investing in AI through the Gates Foundation Trust, which has allocated about a third of its $77 billion assets into Microsoft.

Gates’ optimism about AI’s potential to reduce carbon emissions is echoed in peer-reviewed papers suggesting that generative AI could significantly lower CO2 emissions by taking over tasks such as writing and illustration.

AI is already influencing emissions directly, as demonstrated by Google using deep learning techniques to reduce data center cooling costs by 40% and decrease overall electricity usage by 15% for non-IT tasks.

Despite these advancements, concerns remain about the carbon impact of AI, with Microsoft acknowledging that its indirect emissions are increasing due to building new data centers around the world.

Gates cautioned that the world could miss its 2050 climate goals by up to 15 years if the transition to green energy is delayed, hindering efforts to decarbonize polluting sectors and achieve net-zero emissions by the target year.

He expressed concerns that the required amount of green electricity may not be delivered in time for the transition, making it challenging to meet the zero emissions goal by 2050.

Gates’ warning follows a global report indicating a rise in renewable energy alongside fossil fuel consumption, suggesting that meeting climate goals requires accelerated green energy adoption.

This article was corrected on Friday, June 28. The Gates Foundation does not invest in Microsoft. The Gates Foundation Trust, which is separate from the foundation, holds Microsoft shares.

Source: www.theguardian.com

Anthropic advocates for Claude 3.5 as the new state of the art in AI

The state of the art in AI just got a little bit further along: on Friday, Anthropic, an AI lab founded by a team of disgruntled OpenAI staffers, released the latest version of its Claude LLM. From Bloomberg:

The company announced on Thursday that a new model of the technology behind its popular chatbot, “Claude,” is twice as fast as its most powerful predecessor. In its evaluation, Anthropic said the model outperformed leading competitors such as OpenAI in several key intelligence capabilities, including coding and text-based reasoning.

Anthropic released the previous version of Claude, 3.0, just in March. The latest model is called 3.5, and it’s currently only available in the company’s mid-range tier, “Sonnet.” The company says a faster, cheaper, less powerful “Haiku” version is coming soon, as well as a slower, more expensive, but most powerful “Opus.”

But even before Opus arrived, Anthropic claimed to have the best AI on the market. In a series of head-to-head comparisons posted on the company’s blog, 3.5 Sonnet outperformed OpenAI’s latest model, GPT-4o, in tasks like math quizzes, text comprehension, and undergraduate-level knowledge. It wasn’t a clean sweep, with GPT maintaining the lead in several benchmarks, but it was enough to justify the company’s claim that it’s on the cutting edge of what’s possible.

From a more qualitative perspective, the new model seems a step forward. Anthropic states:

It shows a significantly improved ability to understand nuance, humor, and complex instructions, and it excels at writing high-quality content in a natural, relatable tone.

They’re grading their own homework, and their explanation matches the changes I’ve noticed: No matter where the technical benchmarks are, I find talking to the latest version of Claude more enjoyable than any AI system I’ve used before.

But the company isn’t just selling raw power. In a manner favored by upstart competitors everywhere, Anthropic is focusing as much on cost as on capability: it claims Claude 3.5 is not only smarter than its predecessor, but also cheaper.

Source: www.theguardian.com

Pope Francis advocates for global oversight of artificial intelligence

Pope Francis has voiced his support for calls to regulate AI.

In his annual World Peace Day message, the pope called for artificial intelligence to be developed safely and used ethically.

He warned that the technology lacks human values such as compassion and morality, and could blur the line between what is real and what is fake.

The Pope should know, considering he was the subject of some of the most infamous AI-generated images of 2023.

In March, fake images of him wearing a stylish puffer jacket left social media in awe.

The surreal image, created using the AI tool Midjourney, was quite literally too good to be true.

Much as ChatGPT generates text content, Midjourney allows users to request images using a simple prompt.

The fake photo originated on Reddit and was shared tens of millions of times on social media, fooling people, including celebrities, and becoming one of the first major examples of AI-powered misinformation at scale.

This week, the British fact-checking charity Full Fact highlighted another false image of Francis: a photo that appeared to show him addressing a large crowd in Lisbon earlier this year.

[Image: AI-generated image of the Pope addressing a crowd in Lisbon, Portugal. Photo: Full Fact]

Pope shares his biggest concerns about AI

Cardinal Michael Czerny, prefect of the Vatican’s Dicastery for Promoting Integral Human Development, shared the pope’s concerns in a written statement.

“The biggest risk is to dialogue,” he said.

“Because without truth there can be no dialogue, and without responsibility there can be no truth.”

The Pope said the regulatory priorities are to prevent disinformation, discrimination and distortion, promote peace and guarantee human rights.


His intervention came a few days after the EU reached an agreement on how to regulate AI, covering generative tools such as Midjourney and ChatGPT, though the rules will not come into effect until 2025 at the earliest.

In the US, President Joe Biden’s White House announced its own proposals in October, which included the possibility of requiring AI-generated content to be watermarked.

In Britain, Prime Minister Rishi Sunak has been more cautious about AI laws, arguing they risk stifling innovation.

Source: news.sky.com

OpenAI investors and employees push back against Sam Altman’s firing, as he advocates for harmony within the company

Sam Altman on Monday extended an olive branch to OpenAI, even as employees and major investors alike threatened to walk away from the struggling AI startup following the board’s shock move to oust him. He insisted that he and OpenAI are “still one team” with “one mission.”

Altman is now set to lead Microsoft’s new AI division, while nearly all of OpenAI’s 770 employees have said in an open letter that they will leave the company unless the entire board resigns and Altman and co-founder Greg Brockman are reinstated.

“We’re all going to collaborate in some way. We’re very excited,” Altman said.

“[Microsoft CEO] Satya [Nadella] and my top priority remains to ensure OpenAI continues to thrive,” he added. “We are committed to fully providing continuity of operations to our partners and customers. The OpenAI/Microsoft partnership makes this very doable.”

Mr. Altman’s remarks were met with a degree of skepticism, given the apparent chaos that followed one of the most unexpected and surprising coup attempts in Silicon Valley history.

The board announced late Friday that it “no longer has confidence in Altman’s ability to continue to lead OpenAI” because he “has not been consistently candid in his communications.”

News of his firing blindsided investment firms such as Thrive Capital and Khosla Ventures, which have pumped more than $13 billion into OpenAI’s operations, as well as key partners including Microsoft, which reportedly found out just minutes before the announcement.

In a scorching column for The Information, investor Vinod Khosla slammed OpenAI’s board of directors, writing that its members had made a “serious miscalculation” and “set back the promise of artificial intelligence.”

[Image: Sam Altman said OpenAI will continue to operate as “one team.” Photo: Reuters]

“Every problem has a solution,” said Josh Kushner, founder of Thrive Capital. His company is set to be the lead buyer in a planned OpenAI stock sale, which values the company at about $86 billion and is expected to close by the end of the year.

The battle over OpenAI’s future is getting stranger by the minute, with speculation mounting in the private market that a planned stock sale may fall through.

Ken Smythe of private capital advisor Next Round Capital told the Post that OpenAI’s funding plans are likely over, given the turmoil behind the scenes.

As of Monday, some major investors were considering reducing the value of their holdings in OpenAI to zero, Bloomberg reported, citing a person familiar with the matter. The outlet said the possible move “appears to be aimed at putting pressure on the board to resign and encouraging Mr. Altman to return.”

[Image: Satya Nadella. Photo: Reuters]

Altman’s departure is a “material change in circumstances” that puts Thrive’s participation in the stock sale in doubt, although sources told the Financial Times a sale could still go ahead if Altman is reinstated as OpenAI’s CEO.

Thrive did not immediately respond to The Post’s request for comment.

Despite his public statements, Altman himself reportedly has not yet closed the door on returning to his previous role as OpenAI CEO: people familiar with the matter told The Verge that he and Brockman are still open to returning, provided all remaining board members agree to resign.

Sources told the outlet that Altman’s comment about working “together in some way” was “intended to indicate that the fight continues.”

Meanwhile, Microsoft has emerged as the big winner, having secured Altman’s services, and likely most of OpenAI’s employees, at a fraction of the valuation it would have been valued at last week.


“Microsoft just pulled off one of the biggest coups in recent history, acquiring not only OpenAI’s technology but its employees within 48 hours,” Smythe said.

Nadella said Altman and Brockman will “join Microsoft to lead a new advanced AI research team.”

“We look forward to moving quickly to provide them with the resources they need to succeed,” Nadella said, adding that Microsoft remains “committed to our partnership with OpenAI” and has “confidence in our product roadmap.”

In a scathing open letter, OpenAI staffers accused the board of lacking “competence, judgment, and consideration for our company’s mission and our people,” and noted that Microsoft has assured them there are positions for all OpenAI employees at the new subsidiary should they decide to join.


The workers are demanding that OpenAI appoint new lead independent directors; former Twitter board chairman Bret Taylor and former U.S. congressman Will Hurd (R-Texas), who resigned from OpenAI’s board earlier this year, have emerged as candidates.

In the meantime, the OpenAI board has named Emmett Shear, co-founder of the popular video game streaming platform Twitch, as interim CEO.

Shear is already scrambling to reassure employees and investors. In a lengthy statement posted to X, he pledged to reform the company’s management and to commission an independent investigation into the circumstances that led to Altman’s unexpected departure.

Source: nypost.com