Porn Company Fined £1 Million by Ofcom for Inadequate Age Verification

A pornography company managing 18 adult websites has incurred a £1 million fine from the regulatory body Ofcom for inadequate age verification measures, marking the largest penalty issued thus far under the UK’s Online Safety Act.

The Belize-based AVS Group was also fined an additional £50,000 for failing to comply with an information request.

This is the third time the communications regulator has fined a company under the UK’s Online Safety Act, which introduced stringent age verification requirements in July.

AVS has implemented what it describes as an age verification system, but Ofcom’s assessment found it to be ineffective.

The company also faces a penalty of £1,000 a day unless it introduces effective age checks within 72 hours, plus a further £300 a day until it complies with the information request, for up to 60 days.

Since the implementation of the new regulations, Ofcom has initiated investigations into 92 online services, giving priority to sites attracting millions of monthly visitors, considering the potential harm these sites pose.

Oliver Griffiths, Ofcom’s group director for online safety, shared with BBC Radio 4’s Today program that the fines are part of a “broader shift” focused on platform accountability, which includes the “large scale” rollout of age verification on adult sites to combat child sexual abuse material.

Mr. Griffiths noted that more than 90 websites, inclusive of 83 adult sites, are still under scrutiny for possible infringements of the law, and further penalties are expected.

Ofcom also indicated that a significant social media platform, unnamed, could face formal repercussions should it fail to enhance its compliance measures. This platform has reportedly provided insufficient risk assessments needed to evaluate the potential appearance of illegal content, such as scams and unlawful pornography, to its users.

“We reached back out to inform them a re-evaluation was necessary,” Griffiths stated. “Should they fail to treat this matter with due seriousness again, we will promptly escalate to enforcement.”

Furthermore, Ofcom disclosed its review of major anonymous platforms’ capabilities to eliminate illegal terrorist and hate-driven content, including anti-Semitic and anti-Muslim material, with possible enforcement actions on the horizon.

The Online Safety Act introduces a range of new rules designed to shield children and adults from harmful content, with violations potentially costing up to £18 million or 10% of global annual revenue, and in the most serious cases services can be blocked.

More than half of the UK’s 100 most frequented adult services have introduced age verification since the rule changes in July, alongside social media sites such as X, TikTok, and Reddit, according to the regulator.

Mr. Griffiths acknowledged a rapid increase in the use of virtual private networks (VPNs), which enable users to bypass regional restrictions on certain sites. He said usage peaked at somewhere between 600,000 and just over 1 million users when the age checks were introduced, but has since fallen “significantly.”

“There has been a slight uptick in VPN usage, but it has not been wholesale. Interesting research … indicates that children do not seem to constitute a large proportion of it,” he noted.

Technology Secretary Liz Kendall stated: “Since the Online Safety Act was enacted, platforms are indeed beginning to accept responsibility for safeguarding children and eradicating illegal and hateful content.”

“Ofcom has the full backing of the Government and is using every power at its disposal to ensure services put user safety first. Ensuring the online safety of children remains a top priority for this Government and for me personally.”

Source: www.theguardian.com

Instagram’s Age Verification: A Bearded Adult Is Clearly Over 16, But What About a 13-Year-Old?

Instagram’s method for confirming that a user is over 16 is fairly straightforward when the individual is evidently an adult. But what happens if a 13-year-old attempts to change their birth date to appear older?

In November, Meta informed Instagram and Facebook users whose birth dates are registered as under 16 that their accounts would be disabled as part of Australia’s prohibition on social media use for children. This rule will take effect on December 10, with Meta announcing that access for users younger than 16 will start being revoked from December 4.


Dummy social media accounts were created on phones as part of Guardian Australia’s investigation into what content different age groups access on the platform.




Instagram notification sent to a test account with an age set to 15. Photo: Instagram/Meta

One account was created on Instagram with the age set at 15 to observe the impact of the social media ban for users under 16. Instagram later stated: “Under Australian law, you will soon be unable to use social media until you turn 16.”

“You cannot use an Instagram account until you’re 16, which means your profile will not be visible to you or anyone else until that time.”

“We’ll inform you when you can access Instagram again.”




Notice informing that test account users will lose access due to the Australian social media ban. Photo: Instagram/Meta

The account was then presented with two choices: either download account data and deactivate until the user is 16, or verify their date of birth.




Instagram notification sent to test account set to age 15 regarding date of birth review options. Photo: Instagram/Meta

The second option enables users to submit a “video selfie” to validate that the account holder is older than 16. The app activated the front-facing camera and prompted the adult test user, distinguished by a thick beard, to shift their head side to side. This resembles the authentication method used for face unlock on smartphones.




Explanation on how the “Video Selfie” feature estimates the user’s age. Photo: Instagram/Meta

The notification indicated that the verification process usually takes 1-2 minutes, but may extend up to 48 hours.




Notification sent to the test account following the date of birth verification request. Photo: Instagram/Meta

The app promptly indicated that the adult test user’s account had been recognized as belonging to someone 16 or older.




A notification confirming the user’s date of birth was updated by Instagram. Photo: Instagram/Meta

In another test, a fresh account was created on a 13-year-old boy’s phone, which had not previously had Instagram installed, using a birth date that clearly indicated he was under 16. There was no immediate alert regarding the upcoming social media ban.

When the child attempted to change their date of birth to reflect an adult age, the same video selfie facial age estimation process was performed.


Within a minute, it replied, “We couldn’t verify your age,” and requested a government-issued ID for date of birth verification.

Facial age testing during the Age Assurance Technology Trial found that people over 21 were generally much less likely to be misidentified as under 16, while those closer to 16, and people from some demographic groups, experienced higher rates of false positives and negatives.


Meta may have already assessed users who haven’t been notified as 18 years or older, utilizing data such as birth date, account lifespan, and other user activity.

A Meta representative mentioned that the experiment demonstrated that the process functions as expected, with “adult users being capable of verifying their age and proceeding, while users under 16 undergo an age check when attempting to alter their birth date.”

“That said, we must also recognize the findings of the Age Assurance Technology Trial, which highlights the specific difficulties of age verification at the 16-year threshold and anticipates that the method may occasionally be imperfect,” the spokesperson added.

Last month, the communications minister, Anika Wells, acknowledged the potential challenges confronting the implementation of the ban.

“We recognize that this law isn’t flawless, but it is essential to ensure that there are no gaps,” she stated.

Meta collaborates with Yoti for age verification services. The company asserts on its website that facial images will be destroyed once the verification process concludes.

The ban impacts Meta’s Facebook, Instagram, and Threads platforms, as well as others such as Kick, Reddit, Snapchat, TikTok, Twitch, X, and YouTube.




Source: www.theguardian.com

Roblox Launches Age Verification Feature in Australia While Arguing It Should Be Exempt from the Child Social Media Ban

Roblox maintains that Australia’s forthcoming social media restrictions for users under 16 should not extend to its platform, as it rolls out a new age verification feature designed to block minors from communicating with unknown adults.

The feature, launching first in Australia, lets users verify their age using Persona’s age estimation technology built into the Roblox app, which uses the device’s camera to analyze facial features and produce a live age estimate.


This feature will become compulsory in Australia, the Netherlands, and New Zealand starting the first week of December, with plans to expand to other markets in early January.

After completing the age verification, users will be categorized into one of six age groups: under 9, 9-12, 13-15, 16-17, 18-20, or 21 and older.

Roblox has stated that users within each age category will only be able to communicate with peers in their respective groups or similarly aged groups.


These changes were initially proposed in September and received positive feedback from Australia’s eSafety Commissioner, who has been in discussions with Roblox for several months regarding safety concerns on the platform, labeling this as a step forward in enhancing safety measures.

A recent Guardian Australia investigation documented a week of virtual harassment and violence experienced on Roblox by a user whose profile was set to eight years old.

Regulatory pressure is mounting for Roblox to be included in Australia’s under-16 social media ban, which takes effect on December 10. Although there are exemptions for gaming platforms, the eSafety commissioner, Julie Inman Grant, said earlier this month that her office is reviewing chat and messaging functions in games.

“If online gameplay is the primary or sole purpose, would kids still utilize the messaging feature for communication if it were removed? Probably not,” she asserted.

During a discussion with Australian reporters regarding these impending changes, Roblox’s chief safety officer, Matt Kaufman, characterized Roblox as an “immersive gaming platform.” He explained, “I view games as a framework for social interaction. The essence lies in bringing people together and spending time with one another.”

When asked if this suggests Roblox should be classified as a social media platform subject to the ban, Kaufman responded that Roblox considers social media as a space where individuals post content to a feed for others to view.

“People return to look at the feed, which fosters a fear of missing out,” he elaborated. “It feels like a popularity contest that encapsulates social media. In contrast, Roblox is akin to two friends playing a game after school together. That’s not social media.”

“Therefore, we don’t believe that Australia’s domestic social media regulations apply to Roblox.”


When questioned if the new features were introduced to avoid being encompassed in the ban, Kaufman stated that the company is engaged in “constructive dialogue” with regulators and that these updates showcase the largest instance of a platform utilizing age verification across its entire user base.

Persona, the age verification company partnering with Roblox, took part in Australia’s Age Assurance Technology Trial, which reported a false positive rate of 61.11% for 15-year-olds identified as 16 or older, and 44.25% for 14-year-olds.

Kaufman said the technology is generally accurate to within one to two years of a user’s age, and that users who disagree with the assessment can correct it using a government ID or have a parent set their age via parental controls. He said there are “strict requirements” for data deletion after age verification; Roblox states that ID images are retained for 30 days for purposes such as fraud detection and then erased.

Users who opt not to participate in the age verification will still have access to Roblox, but they will be unable to use features like chat.

More than 150 million people globally engage with Roblox every day across 180 countries, including Australia. According to Kaufman, two-thirds of users are aged 13 and above.




Source: www.theguardian.com

Hack of Age Verification Firm May Have Exposed ID Photos of Discord Users | Social Media

Photos of government IDs belonging to approximately 70,000 Discord users worldwide may have been exposed following a breach at a firm that handles age verification for the popular messaging and chat app, widely used by gamers.

Along with the ID photos, users’ names, email addresses, other contact details, IP addresses, and exchanges with Discord customer support may also have been obtained by the hackers, who are reportedly demanding a ransom from the company. Discord said full credit card numbers and passwords were not compromised.

The incident was disclosed last week, but news of the potential ID photo leak came to light on Wednesday. A representative from the UK’s Information Commissioner’s Office, which oversees data breaches, stated: “We have received a report from Discord and are assessing the information provided.”

The images in question were submitted to Discord’s customer service contractors by users appealing age-related bans. Discord, which has operated for more than a decade, lets users communicate through text, voice, and video chat.


Some nations, including the UK, mandate age verification for social media and messaging services to protect children; the measure has been in effect in the UK since July under the Online Safety Act. Cybersecurity professionals have warned that age verification providers, which may hold troves of sensitive government-issued IDs, are attractive targets for hackers.

Discord released a statement acknowledging: “We have recently been made aware of an incident wherein an unauthorized individual accessed one of Discord’s third-party customer service providers. This individual obtained information from a limited number of users who reached out to Discord through our customer support and trust and safety teams… We have identified around 70,000 users with affected accounts globally whose government ID photos might have been disclosed. Our vendors utilized those photos for evaluating age-related appeals.”

Discord requires users seeking to validate their age to upload a photo of their ID along with their Discord username to return to the platform.

Nathan Webb, a principal consultant at the British digital security firm Acumen Cyber, remarked that the breach is “very concerning.”


“Even if age verification is outsourced, organizations must still ensure the proper handling of that data,” he emphasized. “It is crucial for companies to understand that delegating certain functions does not relieve them of their obligation to uphold data protection and security standards.”

Source: www.theguardian.com

Age Verification IDs Leaked in Discord Data Breach | Gaming News

Discord, the popular video game chat platform, has informed users about a data breach that has potentially compromised the personal information required for age verification.

Last week, the company reported that unauthorized individuals accessed one of Discord’s third-party customer service providers, impacting “a limited number of users” who interacted with customer service or the trust and safety teams.

Compromised data could encompass usernames, email addresses, billing details, the last four digits of credit card numbers, IP addresses, and messages exchanged with customer support.

According to Discord, the alleged attackers “gained access to a small number of government ID images (e.g., driver’s licenses, passports, etc.)” belonging to users who submitted appeals regarding their age verification.

The affected users were informed as of last week.

“If any ID is accessed, it will be explicitly mentioned in the email you receive,” Discord stated.

The support system was reportedly exploited to retrieve user data in an attempt to extort a financial ransom from Discord, the company clarified.

Discord mentioned that the third-party provider has since revoked access to the ticketing system and has initiated an internal investigation in collaboration with law enforcement.

Users who received the notification indicated that the attack likely occurred on September 20th.

With over 200 million active users each month, Discord continues to grow.

Earlier this year, Discord began verifying user ages in the UK and Australia using facial age verification tools. The company stated that age verification face and ID images are “deleted immediately afterwards,” but according to their website, users can reach out to the trust and safety team for a manual review if verification fails.

Under the upcoming ban on social media for users under 16, which takes effect on December 10, the Australian government has said platforms like Discord will have a range of ways to verify user ages and are expected to resolve disputed decisions quickly.

As part of the age verification scheme, the platform can request an ID document, though it is not the sole method of age verification available under their policy.

Australia’s privacy regulator has confirmed that it has been notified of the breach involving Discord.

Discord has been contacted for further comments.

Source: www.theguardian.com

OpenAI to Build Age Verification System to Identify Under-18 Users After Teenager’s Death

OpenAI will restrict how ChatGPT interacts with users under 18 unless they either pass the company’s age estimation method or submit their ID. This decision follows a legal case involving a 16-year-old who tragically took their own life in April after months of interaction with the chatbot.

Sam Altman, the chief executive, said OpenAI prioritizes “safety ahead of privacy and freedom for teens,” writing in a blog post that minors “need significant protection.”

The company noted that ChatGPT’s responses to a 15-year-old should differ from those intended for adults.


Altman mentioned plans to create an age verification system that will default to a protective under-18 experience in cases of uncertainty. He noted that certain users might need to provide ID in some circumstances or countries.

“I recognize this compromises privacy for adults, but I see it as a necessary trade-off,” Altman stated.

He further indicated that ChatGPT’s responses will be adjusted for accounts identified as under 18, including blocking graphic sexual content and prohibiting flirting or discussions about suicide and self-harm.

“If a user under 18 expresses suicidal thoughts, we will attempt to reach out to their parents, and if that’s not feasible, we will contact authorities for immediate intervention,” he added.

“These are tough decisions, but after consulting with experts, we believe this is the best course of action, and we want to be transparent about our intentions,” Altman remarked.

OpenAI acknowledged that its system was lacking as of August and is now working to establish robust safeguards around sensitive content, following a lawsuit by the family of 16-year-old Adam Raine, who died by suicide.

The family’s attorneys allege that Adam was driven to take his own life after “months of encouragement from ChatGPT,” asserting that GPT-4o was “released to the market despite known safety concerns.”

According to a US court filing, ChatGPT allegedly led Adam to explore the method of his suicide and even offered assistance in composing suicide notes for his parents.

OpenAI previously expressed interest in contesting the lawsuit. The Guardian reached out to OpenAI for further comments.

Adam reportedly exchanged up to 650 messages a day with ChatGPT. In a post-lawsuit blog entry, OpenAI admitted that its protective measures are more effective in shorter interactions and that, in extended conversations, ChatGPT may generate responses that could contradict those safeguards.

On Tuesday, the company announced the development of security features to ensure that data shared with ChatGPT remains confidential from OpenAI employees as well. Altman also stated that adult users who wish to engage in “flirtatious conversation” could do so. While adults cannot request instructions on suicide methods, they can seek help in writing fictional narratives about suicide.

“We treat adults as adults,” Altman emphasized regarding the company’s principles.

Source: www.theguardian.com

Roblox Enhances Age Verification to Limit Adult-Child Interactions | Games

Roblox, the popular gaming platform, is set to restrict communication between adult strangers and children by the end of the year in an effort to eliminate “bad actors.”

With over 110 million users engaged in games like Grow a Garden and Brookhaven RP, the platform has faced scrutiny over child safety as studies have indicated that children can interact with adults during gameplay.

Last month, the platform announced a ban on vigilantes attempting to confront predators. Last year, Roblox submitted more than 24,000 reports of suspected child sexual exploitation to the US National Center for Missing & Exploited Children.

On Wednesday, the company announced that by the end of 2025, all users who want access to text and voice chat across its thousands of games will be required to pass age verification. Users who cannot prove they are over 18 will be restricted to communicating with people they know. Currently, Roblox only requires users to declare they are over 13 to enable communication.

“We’re instituting two major changes,” stated Matt Kaufman, the chief safety officer of the $92 billion gaming company. “By the end of the year, we will require all users with access to Roblox communications to undergo an age verification or estimation process.”

“This process will provide better insight into the user’s age, rather than solely relying on self-reported age at registration. This is especially crucial for younger users, making it imperative to use communication features appropriately based on age.”

The platform continues to expand rapidly, recently allowing companies like Netflix, Lionsgate, and Sega to develop games using their intellectual property on Roblox.

In April, the Guardian published an investigation highlighting how easily children can encounter inappropriate content and unmonitored interactions with adults. One parent reported that adult strangers were able to contact their child on the platform and solicit explicit images. Others have expressed concerns about exposure to violent and sexual content that left their children distressed.

Roblox stated in a blog post, “Our users, creator community, parents, partners, and lawmakers strive for the same goal as us: to ensure young people remain safe while using Roblox. We are committed to continually innovating and enhancing our safety systems, policies, and moderation tools.”

Source: www.theguardian.com

UK Traffic to Popular Adult Sites Declines Following Introduction of Age Verification

Since the implementation of stringent age verification measures last month, visits to popular adult websites in the UK have seen a significant decline, according to recent data.

Daily traffic to PornHub, the most frequented porn site in the UK, dropped by 47%, from 3.6 million on July 24 to 1.9 million on August 8.

Data from the digital market intelligence firm Similarweb indicates that the next most popular platforms, XVideos and xHamster, experienced declines of 47% and 39% over the same period.

As reported initially by the Financial Times, this downturn seems to reflect the enforcement of strict age verification rules commencing on July 25 under the Online Safety Act. However, social media platforms implementing similar age checks for age-restricted materials, like X and Reddit, did not experience similar traffic declines.

A representative from Pornhub remarked, “As we have observed in various regions globally, compliant sites often see a decrease in traffic, while non-compliant ones may see an increase.”

The Online Safety Act aims to shield children from harmful online content, mandating that any site or app providing pornographic material must prevent access by minors.

Ofcom, the overseeing body for this law in the UK, endorses age verification methods such as: verifying age via credit card providers, banks, or mobile network operators; matching photo ID with a live selfie; or using a “digital identity wallet” for age verification.

Additionally, the law requires platforms to block access to content that could be harmful to children, including materials that incite self-harm or promote dangerous behaviors, which has sparked tension over concerns of excessive regulation.

Ofcom contends that the law does not infringe upon freedom of expression, highlighting clauses intended to protect free speech. Non-compliance can lead to penalties ranging from formal warnings to fines amounting to 10% of global revenue, with serious violations potentially resulting in websites being blocked in the UK.

Nigel Farage’s Reform UK party has vowed to repeal the act following the introduction of the age verification requirement, igniting a heated exchange in which Farage accused the then technology secretary, Peter Kyle, of making inappropriate comments.

The implementation of age checks has accordingly led to a surge in virtual private network (VPN) downloads, as users seek to circumvent national restrictions on certain websites. VPN applications frequently dominate the top five spots in Apple’s App Store.

Source: www.theguardian.com

Ofcom Chief: Age Verification Crucial for Kids’ Online Safety

The head of the UK’s media regulator has pledged to deliver a “significant milestone” in the pursuit of online safety for children, though campaigners have cautioned that the age verification measures must impose stricter requirements on major tech firms.

Ofcom’s chief executive, Melanie Dawes, spoke on Sunday ahead of a new framework to be introduced later this month, marking a pivotal change in how the world’s largest online platforms are regulated.

However, she faces mounting pressure from advocates, many of whom are parents who assert that social media contributed to the deaths of their children, claiming that the forthcoming rules could still permit minors to access harmful content.

Dawes stated to the BBC on Sunday: “This is a considerable moment because the law takes effect at the end of the month.”

“At that point, we expect broader safeguards for children to become operational. We aim for platforms that host material inappropriate for under-18s, such as pornography and content related to suicide and self-harm, to either be removed or to implement robust age checks for those materials.”

She continued: “This is a significant moment for the industry and a critical juncture.”


Melanie Dawes (left) remarked that age checks are “a significant milestone for the industry.” Photo: Jeff Overs/BBC/PA

The regulations set to take effect on July 25th are the latest steps under the online safety law enacted in 2023 by the Conservative government.

The legislation was partially influenced by advocates like Ian Russell, whose 14-year-old daughter, Molly, tragically took her own life in 2017 after being exposed to numerous online resources concerning depression, self-harm, and suicide.

Conservative ministers were criticized in 2022 for removing sections of the bill that would have regulated “legal but harmful” content.

Russell, who has previously described the act as “timid,” expressed concerns on Sunday about its enforcement by Ofcom. He noted that the regulator lets tech companies choose their own age-check methods and will then evaluate how effective those measures are.

Russell commented: “Ofcom’s public relations often portray a narrative where everything will improve soon. It’s clear that Ofcom must not only prioritize PR but must act decisively.”

“They are caught between families who have suffered losses like mine and the influence of powerful tech platforms.”


Ian Russell, a father currently advocating for child internet safety, expressed concerns about the enforcement of the law. Photo: Joshua Bratt/PA

Russell pressed Dawes to leverage her influence to urge the government for more stringent actions against tech companies.

Some critics have accused ministers of leaving substantial regulatory loopholes, including a lack of action against misinformation.

A committee of lawmakers recently asserted that social media platforms facilitated the spread of misinformation following a murder in Southport last year, contributing to the unrest that ensued. Labour MP Chi Onwurah, chair of the Science and Technology Committee, remarked that the online safety law “is unraveling.”

Dawes has not sought authority to address misinformation, but stated, “If the government chooses to broaden the scope to include misinformation or child addiction, Ofcom would be prepared to implement it.”

Separately, she criticized the BBC over its handling of Glastonbury coverage, questioning whether it should have continued broadcasting footage of Bob Vylan’s performance amid anti-Israel chants.

“The BBC needs to act more swiftly. We need to investigate these incidents thoroughly. Otherwise, there’s a genuine risk of losing public trust in the BBC,” she stated.

Source: www.theguardian.com

Pornhub Owners Suspend French Sites in Protest of New Age Verification Law

Visitors in France attempting to access adult sites including Pornhub, YouPorn, and RedTube will instead see a message criticizing the country’s age verification laws, the sites’ owner announced on Tuesday.

A spokesperson said the parent company, Aylo, took the step in response to French legislation requiring adult sites to implement stricter measures to verify the ages of their users.

“Aylo has made the difficult decision to restrict access for French users on Pornhub, YouPorn, and RedTube. Tomorrow, we will use these platforms to speak directly to the French public,” a Pornhub representative said on Tuesday.

Instead of serving its usual adult content on Pornhub, Aylo intends to “communicate directly with the French people about the privacy dangers and invasiveness, and the ineffectiveness, of the French law,” said Solomon Friedman, a partner at Ethical Capital Partners, which owns Aylo, during a video call with reporters on Tuesday.

This year, France will gradually implement new requirements for all adult sites, enabling users to verify their ages using personal information such as credit cards and identification documents.

To safeguard privacy, operators are required to offer third-party “double-blind” options that prevent the sites from accessing users’ identities.

However, Aylo contends that this approach is flawed and exposes user data to threats, hacks, and leaks.

The company argues that France should focus on the developers of operating systems like Microsoft’s Windows, Apple’s iOS, and Google’s Android, rather than targeting pornographic platforms.

“Aylo takes age verification seriously,” executive Alex Kekesi said during the media call.

She argued that performing age verification on individual platforms poses a “significant risk” to users’ privacy.

Friedman from ECP stated, “Google, Apple, and Microsoft have integrated features within their operating systems to verify a user’s age at the device level.”

The ability to “provide age signals to any site or application” would make it possible to control access to adult content while keeping sensitive information private, offering a viable solution, he argued.

“We recognize that these three companies are powerful, but that doesn’t excuse France’s actions,” he added.

Aylo’s message to adult content viewers invokes imagery of freedom inspired by Eugène Delacroix’s renowned painting Liberty Leading the People.

Culture Minister Aurore Bergé said that if adult sites chose to block French users rather than comply with the law, it would be “very positive.”

“Minors in France will have less access to violent, degrading, and humiliating content,” she remarked.

“If Aylo prefers to withdraw from France rather than comply with our regulations, they are free to do so,” stated Clara Chappaz, the French Minister of Artificial Intelligence and Digital Technology, on X.

According to Arcom, 2.3 million minors in France visit porn sites each month, despite legal requirements that sites keep them out.

Elsewhere in the European Union, adult content platforms face increased scrutiny. EU regulators announced last month that several sites, including Pornhub, are under investigation for failing to uphold child protection regulations.

Source: www.theguardian.com

UK Government App Now Offers Digital Passport for ID Verification

British citizens will soon have the option to store their passport digitally on their phone, along with their driving license, Universal Credit account, marriage certificate, and birth certificate.

These plans were revealed by Peter Kyle, Secretary of State for Science, Innovation, and Technology, as part of a new smartphone app to streamline interactions with government services. This move aims to eliminate the need for physical government letters and long wait times for basic appointments.

Initially, people will be able to access their driving licenses and veteran cards in the new digital wallet starting in June. The government’s digital service will later expand to include accounts related to student loans, car tax, benefits, childcare, and local councils.

Mr. Kyle mentioned that his department is collaborating with the Home Office to authorize a digital passport version. While physical copies will still be valid, their use for crossing borders will depend on other countries’ border systems.

An example of a digital driving license page stored in a smartphone wallet in the Gov.uk app, due to be released this summer. Photo: Faculty of Science/PA

Kyle stated: “We are closely monitoring international standards, and as those standards become clearer, governments will naturally want to benefit from them as much as possible.”

The digital wallet, similar to Apple and Google wallets, will be linked to a person’s ID to verify their identity. This will enable instant sharing of necessary certificates and benefit claims with ease. However, there are no immediate plans to use it for proving immigration status.

In case of a lost phone, a recovery system is in place to prevent loss of the digital wallet. Kyle reassured users about data breaches, mentioning that the app’s design complies with existing data laws.

“We are revolutionizing the interaction between citizens and the state,” said Kyle during a launch event in east London, drawing inspiration from Silicon Valley product launches.

He added that individuals under 18, accustomed to smartphones, would view current government and paper-based systems as outdated.

“Moving government services online doesn’t mean leaving behind those without internet access,” he emphasized. “Easier online access allows us to enhance public services and focus human resources where necessary, ensuring better service for all.”

The technology has been developed over the six months since the Labour party took office and includes modern smartphone security features such as facial recognition checks.

Source: www.theguardian.com